This book describes research performed in the context of trust/distrust propagation and aggregation, and their use in recommender systems. This is a hot research topic with important implications for various application areas. The main innovative contributions of the work are:
- a new bilattice-based model for trust and distrust, allowing for ignorance and inconsistency;
- proposals for various propagation and aggregation operators, including an analysis of their mathematical properties;
- an evaluation of these operators on real data, including a discussion of the data sets and their characteristics;
- a novel approach for identifying controversial items in a recommender system;
- an analysis of the utility of including distrust in recommender systems;
- various approaches for trust-based recommendation (among others, based on collaborative filtering), an in-depth experimental analysis, and a proposal for a hybrid approach;
- an analysis of various user types in recommender systems to optimize the bootstrapping of cold-start users.
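To give a feel for what propagation and aggregation mean here, the following is a minimal sketch, not the book's bilattice operators: it assumes an opinion is a (trust, distrust) pair in [0,1]^2, propagates opinions along a chain by multiplication, and aggregates parallel paths by averaging. All names and operator choices are illustrative.

```python
# Illustrative sketch only; the book's bilattice-based operators are richer.
def propagate(ab, bc):
    """Combine A's opinion of B with B's opinion of C into A's opinion of C."""
    t_ab, d_ab = ab
    t_bc, d_bc = bc
    # Weight B's opinion by how much A trusts B; a distrusted intermediary
    # contributes nothing under this deliberately simple choice.
    return (t_ab * t_bc, t_ab * d_bc)

def aggregate(opinions):
    """Average (trust, distrust) opinions gathered over several paths."""
    n = len(opinions)
    return (sum(t for t, _ in opinions) / n, sum(d for _, d in opinions) / n)

# Two paths from Alice to Carol: via Bob and via Dave (hypothetical data).
via_bob = propagate((0.9, 0.0), (0.8, 0.1))   # Alice->Bob, Bob->Carol
via_dave = propagate((0.5, 0.3), (0.2, 0.7))  # Alice->Dave, Dave->Carol
print(aggregate([via_bob, via_dave]))          # approximately (0.41, 0.22)
```

Note how plain averaging can mask disagreement between paths; handling such inconsistency explicitly is precisely what the bilattice model above is for.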
This monograph studies the logical aspects of domains as used in denotational semantics of programming languages. Frameworks of domain logics are introduced; these serve as foundations for systematic derivations of proof systems from the denotational semantics of programming languages. Any proof system so derived is guaranteed to agree with the denotational semantics in the sense that the denotation of any program coincides with the set of assertions true of it. The study focuses on two categories for denotational semantics: SFP domains, and the less standard, but important, category of stable domains. The intended readership of this monograph includes researchers and graduate students interested in the relation between semantics of programming languages and formal means of reasoning about programs. A basic knowledge of denotational semantics, mathematical logic, general topology, and category theory is helpful for a full understanding of the material. Part I: SFP Domains. Chapter 1: Introduction. This chapter provides a brief exposition of domain theory, denotational semantics, program logics, and proof systems. It discusses the importance of ideas and results on logic and topology to the understanding of the relation between denotational semantics and program logics. It also describes the motivation for the work presented in this monograph, and how that work fits into a more general program. Finally, it gives a short summary of the results of each chapter. 1.1 Domain Theory. Programming languages are languages with which to perform computation.
This book has been written for practitioners, researchers and students in the fields of parallel and distributed computing. Its objective is to provide detailed coverage of the applications of graph theoretic techniques to the problems of matching resources and requirements in multiple computer systems. There has been considerable research in this area over the last decade and intense work continues even as this is being written. For the practitioner, this book serves as a rich source of solution techniques for problems that are routinely encountered in the real world. Algorithms are presented in sufficient detail to permit easy implementation; background material and fundamental concepts are covered in full. The researcher will find a clear exposition of graph theoretic techniques applied to parallel and distributed computing. Research results are covered, and many hitherto unpublished results by the author, spanning the last decade, are included. There are many unsolved problems in this field; it is hoped that this book will stimulate further research.
Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is placed on the unified approach (the variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book gives a qualitative summary of the various reported modeling techniques and approaches, and will give researchers and graduate students deeper insight into interconnect models in particular and interconnects in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.
In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School held at The Coseners House in Abingdon, which was an attempt to disseminate research methods in the different areas in which supercomputers are used. It is hoped that the publication of the lectures in this form will enable the experiences and achievements of supercomputer users to be shared with a larger audience. We thank all the lecturers and participants for making the Summer School an enjoyable and profitable experience. Finally, we thank the Science and Engineering Research Council and The Computer Board for supporting the Summer School.
A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students. This organization facilitates compressed presentation of material. Emphasis is also placed on relating concepts to practical designs and chips. Topics: material presentation suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details provided on several digital logic simulation packages; free MASM download instructions provided; and end-of-chapter exercises.
This volume contains a selection of papers that focus on the state of the art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume by the title Foundations of Real-Time Computing: Formal Specifications and Methods complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of real-time system precisely, because it is often easy to misunderstand what is meant by that term. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
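For a taste of the deadline-driven theory such papers build on, here is a minimal sketch of the classical Liu and Layland utilization tests for independent periodic tasks whose deadlines equal their periods; the task set is hypothetical, and real systems need the more refined analyses this volume surveys.

```python
# Classical schedulability checks (Liu & Layland, 1973) for periodic tasks.
def utilization(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs."""
    return sum(c / t for c, t in tasks)

def edf_schedulable(tasks):
    """Earliest-deadline-first succeeds iff total utilization <= 1."""
    return utilization(tasks) <= 1.0

def rm_guaranteed(tasks):
    """Rate-monotonic is guaranteed below the bound n*(2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

tasks = [(1, 4), (2, 6), (1, 8)]  # utilization ~ 0.708
print(edf_schedulable(tasks))     # True
print(rm_guaranteed(tasks))       # True (bound for n=3 is ~ 0.780)
```

The rate-monotonic bound is sufficient but not necessary: task sets above it may still be schedulable, which is one reason exact analyses matter.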
Compilers and Operating Systems for Low Power focuses on both application-level compiler-directed energy optimization and low-power operating systems. Chapters have been written exclusively for this volume by several of the leading researchers and application developers active in the field. The first six chapters focus on low-energy operating systems, or, more generally, energy-aware middleware services. The next five chapters are centered on compilation and code optimization. Finally, the last chapter takes a more general viewpoint on mobile computing. The material demonstrates state-of-the-art work and shows that to obtain the best energy/performance characteristics, compilers, system software, and architecture must work together. The relationships between energy-aware middleware and wireless microsensors, mobile computing, and other wireless applications are covered. This work will be of interest to researchers in the areas of low-power computing, embedded systems, compiler optimizations, and operating systems.
This book contains the papers that were accepted for presentation at the 1988 NATO Advanced Study Institute on Underwater Acoustic Data Processing, held at the Royal Military College of Canada from 18 to 29 July, 1988. Approximately 110 participants from various NATO countries were in attendance during this two week period. Their research interests range from underwater acoustics to signal processing and computer science; some are renowned scientists and some are recent Ph.D. graduates. The purpose of the ASI was to provide an authoritative summing up of the various research activities related to sonar technology. The exposition on each subject began with one or two tutorials prepared by invited lecturers, followed by research papers which provided indications of the state of development in that specific area. I have broadly classified the papers into three sections under the titles of I. Propagation and Noise, II. Signal Processing and III. Post Processing. The reader will find in Section I papers on low frequency acoustic sources and effects of the medium on underwater acoustic propagation. Problems such as coherence loss due to boundary interaction, wavefront distortion and multipath transmission were addressed. Besides the medium, corrupting noise sources also have a strong influence on the performance of a sonar system and several researchers described methods of modeling these sources.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques, and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
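For intuition about what a timing analyzer hands back to such a compiler, the toy sketch below treats the WCET of an acyclic region as the longest path through its control-flow graph, assuming per-basic-block cycle costs are already known; real analyzers (and the techniques in this book) additionally model caches, pipelines, branches and loop bounds. The graph and costs are hypothetical.

```python
# Toy WCET estimate: longest path through an acyclic control-flow graph.
from functools import lru_cache

cost = {"entry": 2, "test": 1, "then": 8, "else": 3, "exit": 1}   # cycles
succ = {"entry": ["test"], "test": ["then", "else"],
        "then": ["exit"], "else": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def wcet(block):
    """Worst-case cycles from `block` to the end of the region."""
    return cost[block] + max((wcet(s) for s in succ[block]), default=0)

print(wcet("entry"))  # 12: the path entry -> test -> then -> exit
```

A WCET-aware compiler would use such feedback to optimize the blocks on the worst-case path first, since only those blocks can shorten the bound.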
As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the memory hierarchy, has created increasing demand for high performance memory systems. This trend is likely to accelerate: the improvements in main memory performance will be small compared to the improvements in processor performance. This difference will lead to an increasing gap between processor cycle time and main memory access time. This gap must be closed by improving the memory hierarchy. Computer architects have attacked this gap by designing machines with cache sizes an order of magnitude larger than those appearing five years ago. Microprocessor-based RISC systems now have caches that rival the size of those in mainframes and supercomputers.
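The cost gap described here can be hinted at even from Python. The sketch below assumes 64-byte cache lines and a last-level cache smaller than 64 MiB; it compares a unit-stride walk, in which each fetched line serves 64 consecutive accesses, against a walk that touches a fresh line on every access. Interpreter overhead masks much of the latency, so the gap is far more dramatic in compiled code.

```python
# Minimal memory-hierarchy demonstration; timings vary by machine.
import time

data = bytearray(1 << 26)   # 64 MiB buffer, larger than typical caches
MASK = len(data) - 1        # power-of-two size makes wrap-around a cheap AND

def walk(stride, n=1 << 22):
    """Time n loads from `data` at the given byte stride."""
    total, t0 = 0, time.perf_counter()
    for i in range(n):
        total += data[(i * stride) & MASK]
    return time.perf_counter() - t0

print("unit stride      :", walk(1))   # ~64 accesses per fetched cache line
print("one line per load:", walk(64))  # a fresh cache line on every access
```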
Computer vision is one of the most complex and computationally intensive problems. As with other computationally intensive problems, parallel processing has been suggested as an approach to solving the problems in computer vision. Computer vision employs algorithms from a wide range of areas such as image and signal processing, advanced mathematics, graph theory, databases and artificial intelligence. Hence, not only are the computing requirements for solving vision problems tremendous, but they also demand computers that are efficient at solving problems exhibiting vastly different characteristics. With recent advances in VLSI design technology, Single Instruction Multiple Data (SIMD) massively parallel computers have been proposed and built. However, such architectures have been shown to be useful for solving only a very limited subset of the problems in vision. Specifically, algorithms from low-level vision that involve computations closely mimicking the architecture and that require simple control and computations are suitable for massively parallel SIMD computers. An Integrated Vision System (IVS) involves computations from low- to high-level vision executed in a systematic fashion and repeatedly. The interaction between computations and the information-dependent nature of the computations suggest that the architectural requirements of computer vision systems cannot be satisfied by massively parallel SIMD computers.
This book is on dependence concepts and general methods for dependence testing. Here, dependence means data dependence and the tests are compile-time tests. We felt the time was ripe to create a solid theory of the subject, to provide the research community with a uniform conceptual framework in which things fit together nicely. How successful we have been in meeting these goals, of course, remains to be seen. We do not try to include all the minute details that are known, nor do we deal with clever tricks that all good programmers would want to use. We do try to convince the reader that there is a mathematical basis consisting of theories of bounds of linear functions and linear diophantine equations, that levels and direction vectors are concepts that arise rather naturally, that different dependence tests are really special cases of some general tests, and so on. Some mathematical maturity is needed for a good understanding of the book: mainly calculus and linear algebra. We have covered diophantine equations rather thoroughly and given a description of some matrix theory ideas that are not very widely known. A reader familiar with linear programming would quickly recognize several concepts. We have learned a great deal from the works of M. Wolfe, and K. Kennedy and R. Allen. Wolfe's Ph.D. thesis at the University of Illinois and Kennedy & Allen's paper on vectorization of Fortran programs are still very useful sources on this subject.
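One of the "general tests" the authors allude to fits in a few lines: the classical GCD test, which reduces a dependence question to the solvability of a linear diophantine equation. Subscripts a*i+b and c*j+d can name the same array element only if a*i - c*j = d - b has an integer solution, which holds iff gcd(a, c) divides d - b. The test ignores loop bounds and directions, so a True answer means only "may depend".

```python
# Classical GCD dependence test for subscript pairs A[a*i+b] and A[c*j+d].
from math import gcd

def gcd_test(a, b, c, d):
    """True if the two references MAY touch the same element."""
    g = gcd(a, c)
    if g == 0:                 # both subscripts are constants
        return b == d
    return (d - b) % g == 0    # solvability of a*i - c*j = d - b

print(gcd_test(2, 0, 2, 1))    # False: even vs. odd subscripts, independent
print(gcd_test(4, 1, 2, 3))    # True: gcd(4,2)=2 divides 3-1, may depend
```

Stronger tests add the loop bounds and yield the levels and direction vectors the book develops.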
Tremendous achievements in the area of semiconductor electronics have turned microelectronics into nanoelectronics. Actually, we observe a real technical boom connected with achievements in nanoelectronics. It results in the development of very complex integrated circuits, particularly the field programmable logic devices (FPLD). Present-day FPLD chips are so huge that a single chip is enough to implement a really complex digital system including a datapath and a control unit. Because of the extreme complexity of modern microchips, it is very important to develop effective design methods oriented to the particular properties of logic elements. The development of digital systems using FPLD microchips is not possible without different hardware description languages (HDL), such as VHDL and Verilog. Different computer-aided design (CAD) tools are widely used to develop digital system hardware. As the majority of researchers point out, the design process is now very similar to the process of program development. It allows a researcher to pay more attention to specific problems for which there are no standard formal methods of solution. But application of all these achievements does not per se guarantee the development of a competitive electronic product, especially in an acceptable time-to-market. This problem can be solved only if a researcher possesses fundamental knowledge of the design process and knows exactly the mode of operation of the industrial CAD tools in use. As is known, any digital system can be represented as a composition of a datapath and a control unit.
This volume consists of a collection of 28 papers presented at the NATO Advanced Study Institute held July 14-27, 1985 in the beautiful resort at Les Arcs, France. The director of this ASI was A. K. Sood and A. H. Qureshi was the co-director. Since its introduction in the early 1970s the relational data model has been widely accepted. Several research and industrial efforts are being undertaken to develop special purpose database machines to implement the relational model. In addition, database machines are being explored for applications such as image processing and information retrieval. In this NATO-ASI the lecturers discussed special purpose database machine architectures from the viewpoint of architecture and hardware detail, software, user needs, theoretical framework and applications. The papers presented were of two types - regular papers and short papers. The research in database machines is being conducted in several countries. This fact is underscored when it is noted that papers in this volume are authored by researchers in France, Germany, Italy, Japan, Portugal, Turkey, U.K. and U.S.A. The first paper discusses the experience and applications of users with a commercially available database machine. In the following eight papers the characteristics of six database machines are discussed. The second, third and fourth papers deal with the RDBM project at the Technical University of Braunschweig (Germany). Zeidler discusses the design objectives, architecture and system design of RDBM. Teich presents the hardware utilized for sorting.
[2]. The Cell Processor from Sony, Toshiba and IBM (STI) [3], and the Sun UltraSPARC T1 (formerly codenamed Niagara) [4] signal the growing popularity of such systems. Furthermore, Intel's very recently announced 80-core TeraFLOP chip [5] exemplifies the irreversible march toward many-core systems with tens or even hundreds of processing elements. 1.2 The Dawn of the Communication-Centric Revolution. The multi-core thrust has ushered in the gradual displacement of the computation-centric design model by a more communication-centric approach [6]. The large, sophisticated monolithic modules are giving way to several smaller, simpler processing elements working in tandem. This trend has led to a surge in the popularity of multi-core systems, which typically manifest themselves in two distinct incarnations: heterogeneous Multi-Processor Systems-on-Chip (MPSoC) and homogeneous Chip Multi-Processors (CMP). The SoC philosophy revolves around the technique of Platform-Based Design (PBD) [7], which advocates the reuse of Intellectual Property (IP) cores in flexible design templates that can be customized to satisfy the demands of particular implementations. The appeal of such a modular approach lies in the substantially reduced Time-To-Market (TTM) incubation period, which is a direct outcome of lower circuit complexity and reduced design effort. The whole system can now be viewed as a diverse collection of pre-existing IP components integrated on a single die.
Java is an exciting new object-oriented technology. Hardware for supporting objects and other features of Java such as multithreading, dynamic linking and loading is the focus of this book. The impact of Java's features on micro-architectural resources and issues in the design of Java-specific architectures are interesting topics that require the immediate attention of the research community. While Java has become an important part of desktop applications, it is now being used widely in high-end server markets, and will soon be widespread in low-end embedded computing. Java Microarchitectures contains a collection of papers providing a snapshot of the state of the art in hardware support for Java. The book covers the behavior of Java applications, embedded processors for Java, memory system design, and high-performance single-chip architectures designed to execute Java applications efficiently.
The Dawn of Massively Parallel Processing in Meteorology presents collected papers from the third workshop on this topic, held at the European Centre for Medium-Range Weather Forecasts (ECMWF). It provides insight into the state of the art in using parallel processors operationally, and allows extrapolation to other time-critical applications. It also documents the advent of massively parallel systems to cope with these applications.
Soft computing is a consortium of computing methodologies that provide a foundation for the conception, design, and deployment of intelligent systems and aims to formalize the human ability to make rational decisions in an environment of uncertainty and imprecision. This book is based on a NATO Advanced Study Institute held in 1996 on soft computing and its applications. The distinguished contributors consider the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications. Two areas emphasized in the book are how to achieve a synergistic combination of the main constituents of soft computing and how the combination can be used to achieve a high Machine Intelligence Quotient.
This textbook is based on a lecture course in synergetics given at the University of Moscow. In this second of two volumes, we discuss the emergence and properties of complex chaotic patterns in distributed active systems. Such patterns can be produced autonomously by a system, or can result from selective amplification of fluctuations caused by external weak noise. Although the material in this book is often described by refined mathematical theories, we have tried to avoid a formal mathematical style. Instead of rigorous proofs, the reader will usually be offered only "demonstrations" (the term used by Prof. V. I. Arnold) to encourage intuitive understanding of a problem and to explain why a particular statement seems plausible. We also refrained from detailing concrete applications in physics or in other scientific fields, so that the book can be used by students of different disciplines. While preparing the lecture course and producing this book, we had intensive discussions with and asked the advice of Prof. V. I. Arnold, Prof. S. Grossmann, Prof. H. Haken, Prof. Yu. L. Klimontovich, Prof. R. L. Stratonovich and Prof. Ya.
Embedded systems have an increasing importance in our everyday lives. The growing complexity of embedded systems and the emerging trend to interconnections between them lead to new challenges. Intelligent solutions are necessary to overcome these challenges and to provide reliable and secure systems to the customer under a strict time and financial budget. Solutions on Embedded Systems documents results of several innovative approaches that provide intelligent solutions in embedded systems. The objective is to present mature approaches, to provide detailed information on the implementation and to discuss the results obtained.
There is no doubt that the microprocessor (μP) revolution will continue into the future and many will be required to specify and integrate microprocessors into products or systems in their own disciplines. Therefore, well-designed flexible interfaces will be required to ensure compatibility with other equipment and to extend design options. Although there are several books on microcomputers and microprocessors, only a few of those devote more than a small part to the important aspects of interfaces. It was with this in mind that the present book was written as a self-contained volume to be part of the more general series: Microprocessors-Based Systems Engineering. It fills an existing gap in technology, as interfaces are the last items to be seriously considered in the race of new technology, and it deals with the systematic study of microprocessor interfaces and their applications in many diversified fields. This book is aimed at engineers in industry and engineering students who need to learn how to interface microprocessors, and hence microcomputers and other related equipment, to external digital or analog devices. It is suitable for use as a textbook or for supplementary reading, either in an applied undergraduate course in electrical engineering or in the last year of three-year-curriculum technical colleges.
Design is an art form in which the designer selects from a myriad of alternatives to bring an "optimum" choice to a user. In many complex systems the notion of "optimum" is difficult to define. Indeed, the users themselves will not agree, so the "best" system is simply the one in which the designer and the user have a congruent viewpoint. Compounding the design problem are tradeoffs that span a variety of technologies and user requirements. The electronic business system is a classically complex system whose tradeoff criteria and user views are constantly changing with rapidly developing underlying technology. Professor Milutinovic has chosen this area for his capstone contribution to computer systems design. This book completes his trilogy on design issues in computer systems. His first work, "Surviving the Design of a 200 MHz RISC Microprocessor" (1997), focused on the tradeoffs and design issues within a processor. His second work, "Surviving the Design of Microprocessor and Multiprocessor Systems" (2000), considers the design issues involved with assembling a number of processors into a coherent system. Finally, this book generalizes the system design problem to electronic commerce on the Internet, a global system of immense consequence.
Integrating associative processing concepts with massively parallel SIMD technology, this volume explores a model for accessing data by content rather than abstract address mapping.
Despite the ample number of articles on parallel-vector computational algorithms published over the last 20 years, there is a lack of texts in the field customized for senior undergraduate and graduate engineering research. Parallel-Vector Equation Solvers for Finite Element Engineering Applications aims to fill this gap, detailing both the theoretical development and important implementations of equation-solution algorithms. The mathematical background necessary to understand their inception balances well with descriptions of their practical uses. Illustrated with a number of state-of-the-art FORTRAN codes developed as examples for the book, Dr. Nguyen's text is a perfect choice for instructors and researchers alike.
You may like...
- Novel Approaches to Information Systems…, by Naveen Prakash and Deepika Prakash. Hardcover, R5,924 (Discovery Miles 59 240)
- Clean Architecture - A Craftsman's Guide…, by Robert Martin. Paperback (1)
- Creativity in Computing and DataFlow…, by Suyel Namasudra and Veljko Milutinovic. Hardcover, R4,204 (Discovery Miles 42 040)
- The System Designer's Guide to VHDL-AMS…, by Peter J Ashenden, Gregory D. Peterson, … Paperback, R2,281 (Discovery Miles 22 810)
- Grammatical and Syntactical Approaches…, by Juhyun Lee and Michael J. Ostwald. Hardcover, R5,315 (Discovery Miles 53 150)