[2]. The Cell Processor from Sony, Toshiba and IBM (STI) [3], and the Sun UltraSPARC T1 (formerly codenamed Niagara) [4] signal the growing popularity of such systems. Furthermore, Intel's recently announced 80-core TeraFLOP chip [5] exemplifies the irreversible march toward many-core systems with tens or even hundreds of processing elements.

1.2 The Dawn of the Communication-Centric Revolution

The multi-core thrust has ushered in the gradual displacement of the computation-centric design model by a more communication-centric approach [6]. Large, sophisticated monolithic modules are giving way to several smaller, simpler processing elements working in tandem. This trend has led to a surge in the popularity of multi-core systems, which typically manifest themselves in two distinct incarnations: heterogeneous Multi-Processor Systems-on-Chip (MPSoC) and homogeneous Chip Multi-Processors (CMP). The SoC philosophy revolves around the technique of Platform-Based Design (PBD) [7], which advocates the reuse of Intellectual Property (IP) cores in flexible design templates that can be customized to satisfy the demands of particular implementations. The appeal of such a modular approach lies in the substantially reduced Time-To-Market (TTM) incubation period, which is a direct outcome of lower circuit complexity and reduced design effort. The whole system can now be viewed as a diverse collection of pre-existing IP components integrated on a single die.
Java is an exciting new object-oriented technology. Hardware for supporting objects and other features of Java such as multithreading, dynamic linking and loading is the focus of this book. The impact of Java's features on micro-architectural resources and issues in the design of Java-specific architectures are interesting topics that require the immediate attention of the research community. While Java has become an important part of desktop applications, it is now being used widely in high-end server markets, and will soon be widespread in low-end embedded computing. Java Microarchitectures contains a collection of papers providing a snapshot of the state of the art in hardware support for Java. The book covers the behavior of Java applications, embedded processors for Java, memory system design, and high-performance single-chip architectures designed to execute Java applications efficiently.
The Dawn of Massively Parallel Processing in Meteorology presents collected papers of the third workshop on this topic held at the European Centre for Medium-Range Weather Forecasts (ECMWF). It provides insight into the state of the art in using parallel processors operationally and allows extrapolation to other time-critical applications. It also documents the advent of massively parallel systems to cope with these applications.
Soft computing is a consortium of computing methodologies that provides a foundation for the conception, design, and deployment of intelligent systems and aims to formalize the human ability to make rational decisions in an environment of uncertainty and imprecision. This book is based on a NATO Advanced Study Institute held in 1996 on soft computing and its applications. The distinguished contributors consider the principal constituents of soft computing, namely fuzzy logic, neurocomputing, genetic computing, and probabilistic reasoning, the relations between them, and their fusion in industrial applications. Two areas emphasized in the book are how to achieve a synergistic combination of the main constituents of soft computing and how the combination can be used to achieve a high Machine Intelligence Quotient.
This textbook is based on a lecture course in synergetics given at the University of Moscow. In this second of two volumes, we discuss the emergence and properties of complex chaotic patterns in distributed active systems. Such patterns can be produced autonomously by a system, or can result from selective amplification of fluctuations caused by external weak noise. Although the material in this book is often described by refined mathematical theories, we have tried to avoid a formal mathematical style. Instead of rigorous proofs, the reader will usually be offered only "demonstrations" (the term used by Prof. V. I. Arnold) to encourage intuitive understanding of a problem and to explain why a particular statement seems plausible. We also refrained from detailing concrete applications in physics or in other scientific fields, so that the book can be used by students of different disciplines. While preparing the lecture course and producing this book, we had intensive discussions with and asked the advice of Prof. V. I. Arnold, Prof. S. Grossmann, Prof. H. Haken, Prof. Yu. L. Klimontovich, Prof. R. L. Stratonovich and Prof. Ya.
Embedded systems are of increasing importance in our everyday lives. The growing complexity of embedded systems and the emerging trend toward interconnecting them lead to new challenges. Intelligent solutions are necessary to overcome these challenges and to provide reliable and secure systems to the customer under strict time and financial budgets. Solutions on Embedded Systems documents the results of several innovative approaches that provide intelligent solutions in embedded systems. The objective is to present mature approaches, to provide detailed information on the implementation and to discuss the results obtained.
There is no doubt that the microprocessor (µP) revolution will continue into the future and many will be required to specify and integrate microprocessors into products or systems in their own disciplines. Therefore, well-designed flexible interfaces will be required to ensure compatibility with other equipment and to extend design options. Although there are several books on microcomputers and microprocessors, only a few of those devote even a small part to the important aspects of interfaces. It was with this in mind that the present book was written as a self-contained volume to be part of the more general series: Microprocessor-Based Systems Engineering. It fills an existing gap in technology, as interfaces are the last items to be seriously considered in the race of new technology, and it deals with the systematic study of microprocessor interfaces and their applications in many diversified fields. This book is aimed at engineers in industry and engineering students who need to learn how to interface microprocessors, and hence microcomputers and other related equipment, to external digital or analog devices. It is suitable for use as a textbook or for supplementary reading, either in an applied undergraduate course in electrical engineering or in the last year of three-year-curriculum technical colleges.
Design is an art form in which the designer selects from a myriad of alternatives to bring an "optimum" choice to a user. In many complex systems the notion of "optimum" is difficult to define. Indeed, the users themselves will not agree, so the "best" system is simply the one in which the designer and the user have a congruent viewpoint. Compounding the design problem are tradeoffs that span a variety of technologies and user requirements. The electronic business system is a classically complex system whose tradeoff criteria and user views are constantly changing with rapidly developing underlying technology. Professor Milutinovic has chosen this area for his capstone contribution to computer systems design. This book completes his trilogy on design issues in computer systems. His first work, "Surviving the Design of a 200 MHz RISC Microprocessor" (1997), focused on the tradeoffs and design issues within a processor. His second work, "Surviving the Design of Microprocessor and Multiprocessor Systems" (2000), considers the design issues involved in assembling a number of processors into a coherent system. Finally, this book generalizes the system design problem to electronic commerce on the Internet, a global system of immense consequence.
First published in 1991, this thesis concentrates upon the design of three-dimensional, rather than the traditional two-dimensional, circuits. The theory behind such circuits is presented in detail, together with experimental results. Winner of the Distinguished Dissertation in Computer Science award, this work will prove invaluable to both designers and hardware engineers involved in the development of practical three-dimensional integrated circuits.
Integrating associative processing concepts with massively parallel SIMD technology, this volume explores a model for accessing data by content rather than abstract address mapping.
Despite the ample number of articles on parallel-vector computational algorithms published over the last 20 years, there is a lack of texts in the field customized for senior undergraduate and graduate engineering research. Parallel-Vector Equation Solvers for Finite Element Engineering Applications aims to fill this gap, detailing both the theoretical development and important implementations of equation-solution algorithms. The mathematical background necessary to understand their inception balances well with descriptions of their practical uses. Illustrated with a number of state-of-the-art FORTRAN codes developed as examples for the book, Dr. Nguyen's text is a perfect choice for instructors and researchers alike.
Multiscalar Processors presents a comprehensive treatment of the basic principles of Multiscalar execution, and advanced techniques for implementing the Multiscalar concepts. Special emphasis is placed on highlighting the major challenges involved in Multiscalar processing. This book is organized into nine chapters, and provides an excellent synopsis of a large body of research carried out on multiscalar processors in the last decade. It starts with technology trends that provide an impetus to the development of multiscalar processors and shape the development of future processors. The work ends with a review of the recent developments related to multiscalar processors.
This text has been produced for the benefit of students in computer and information science and for experts involved in the design of microprocessors. It deals with the design of complex VLSI chips, specifically of microprocessor chip sets. The aim is on the one hand to provide an overview of the state of the art, and on the other hand to describe specific design know-how. The depth of detail presented goes considerably beyond the level of information usually found in computer science textbooks. The rapidly developing discipline of designing complex VLSI chips, especially microprocessors, requires a significant extension of the state of the art. We are observing the genesis of a new engineering discipline, the design and realization of very complex logical structures, and we are obviously only at the beginning. This discipline is still young and immature, alternate concepts are still evolving, and "the best way to do it" is still being explored. Therefore it is not yet possible to describe the different methods in use and to evaluate them. However, the economic impact is significant today, and the heavy investment that companies in the USA, the Far East, and in Europe are making in generating VLSI design competence is a testimony to the importance this field is expected to have in the future. Staying competitive requires mastering and extending this competence.
This project had its beginnings in the Fall of 1980. At that time Robert Wagner suggested that I investigate compiler optimization of data organization, suitable for use in a parallel or vector machine environment. We developed a scheme in which the compiler, having knowledge of the machine's access patterns, does a global analysis of a program's operations, and automatically determines optimum organization for the data. For example, for certain architectures and certain operations, large improvements in performance can be attained by storing a matrix in row major order. However, a subsequent operation may require the matrix in column major order. A determination must be made whether or not it is the best solution globally to store the matrix in row order, column order, or even have two copies of it, each organized differently. We have developed two algorithms for making this determination. The technique shows promise in a vector machine environment, particularly if memory interleaving is used. Supercomputers such as the Cray, the CDC Cyber 205, the IBM 3090, as well as superminis such as the Convex, are possible environments for implementation.
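To make the data-organization tradeoff described above concrete, here is a minimal C sketch (an illustration only, not code from the book; the array size and function names are made up). It sums a matrix row by row under two storage layouts: with a row-major layout the inner loop makes stride-1 accesses, while with a column-major layout the same loop order makes stride-N accesses, which is exactly the kind of pattern a layout-aware compiler would try to avoid on a vector or interleaved-memory machine.

#include <stdio.h>

#define N 4

/* Row-major layout: element (i, j) lives at a[i * N + j],
   so scanning j in the inner loop touches consecutive addresses. */
static double sum_row_major(const double *a) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i * N + j];          /* stride-1 accesses */
    return s;
}

/* Column-major layout: element (i, j) lives at a[j * N + i],
   so the same loop order now jumps N elements between accesses. */
static double sum_column_major(const double *a) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[j * N + i];          /* stride-N accesses */
    return s;
}

int main(void) {
    double a[N * N];
    for (int k = 0; k < N * N; k++)
        a[k] = (double)k;
    printf("row-major traversal sum    = %.1f\n", sum_row_major(a));
    printf("column-major traversal sum = %.1f\n", sum_column_major(a));
    return 0;
}

Both traversals compute the same result; the difference lies purely in the memory access pattern, which is why a global choice of layout (or keeping two copies) can pay off.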
This book contains papers presented at the NATO Advanced Research Workshop on "Real-time Object and Environment Measurement and Classification" held in Maratea, Italy, August 31 - September 3, 1987. This workshop was organized within the activities of the NATO Special Programme on Sensory Systems for Robotic Control. Four major themes were discussed at this workshop: Real-time Requirements, Feature Measurement, Object Representation and Recognition, and Architecture for Measurement and Classification. A total of twenty-five technical presentations, contained in this book, cover a wide spectrum of topics including hardware implementation of specific vision algorithms, a complete vision system for object tracking and inspection, the use of three cameras (trinocular stereo) for feature measurement, neural networks for object recognition, integration of CAD (Computer Aided Design) and vision systems, and the use of pyramid architectures for solving various computer vision problems. These papers are written by some of the most well-known researchers in the computer vision and pattern recognition community, and represent both industrial and academic viewpoints. The authors come from thirteen different countries in Europe and North America. Therefore, readers will get first-hand and current information about the status of computer vision research in various western countries. Further, this book will also be useful in understanding the current research issues in computer vision and the difficulties in designing real-time vision systems.
Dependence Analysis may be considered to be the second edition of the author's 1988 book, Dependence Analysis for Supercomputing. It is, however, a completely new work that subsumes the material of the 1988 publication. This book is the third volume in the series Loop Transformations for Restructuring Compilers. This series has been designed to provide a complete mathematical theory of transformations that can be used to automatically change a sequential program containing FORTRAN-like do loops into an equivalent parallel form. In Dependence Analysis, the author extends the model to a program consisting of do loops and assignment statements, where the loops need not be sequentially nested and are allowed to have arbitrary strides. In the context of such a program, the author studies, in detail, dependence between statements of the program caused by program variables that are elements of arrays. Dependence Analysis is directed toward graduate and undergraduate students, and professional writers of restructuring compilers. The prerequisite for the book consists of some knowledge of programming languages, and familiarity with calculus and graph theory. No knowledge of linear programming is required.
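As a rough illustration of the kind of dependence the book studies (this fragment is not taken from the text; the loop and values are made up), the C sketch below contains a loop-carried flow dependence of distance one: each iteration reads the value written by the previous one, so the iterations cannot simply be executed in parallel as written, and a dependence analyser must detect this before any restructuring transformation is applied.

#include <stdio.h>

#define N 8

int main(void) {
    double a[N];
    for (int i = 0; i < N; i++)
        a[i] = (double)i;

    /* Each iteration reads a[i - 1], written by the previous iteration:
       a flow dependence of distance 1 carried by the loop. */
    for (int i = 1; i < N; i++)
        a[i] = a[i - 1] + 2.0;

    for (int i = 0; i < N; i++)
        printf("a[%d] = %.1f\n", i, a[i]);
    return 0;
}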
Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad, starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, and extending to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, to enable the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration, such as scalability and computational power, and more recent advances such as optical models, run-time reconfiguration (on FPGAs and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference for researchers and system developers in the area.
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. The book covers a comprehensive range of architecture design and validation methods, from computer aided high-level design of VLSI circuits and systems to layout and testable design, including the modeling and synthesis of behavior and dataflow, cell-based logic optimization, machine assisted verification, and virtual machine design.
Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast-moving area. The book serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
These are the proceedings of a NATO Advanced Study Institute (ASI) held in Cetraro, Italy during 6-17 June 1983. The title of the ASI was Computer Architectures for Spatially Distributed Data, and it brought together some 60 participants from Europe and America. Presented here are 21 of the lectures that were delivered. The articles cover a wide spectrum of topics related to computer architectures specially oriented toward the fast processing of spatial data, and represent an excellent review of the state of the art of this topic. For more than 20 years now researchers in pattern recognition, image processing, meteorology, remote sensing, and computer engineering have been looking toward new forms of computer architectures to speed the processing of data from two- and three-dimensional processes. The work can be said to have commenced with the landmark article by Steve Unger in 1958, and it received a strong forward push with the development of the ILLIAC III and IV computers at the University of Illinois during the 1960s. One clear obstacle faced by the computer designers in those days was the limitation of the state of the art of hardware, when the only switching devices available to them were discrete transistors. As a result, parallel processing was generally considered to be impractical, and relatively little progress was made.
This book contains the proceedings of the NATO Advanced Research Workshop on Pyramidal Systems for Image Processing and Computer Vision, held in Maratea, Italy, May 5-9, 1986. We had 40 participants from 11 countries playing an active part in the workshop, and all the leaders of groups that have produced a prototype pyramid machine, or a design for such a machine, were present. Within the wide field of parallel architectures for image processing a new area was recently born and is growing healthily: the area of pyramidally structured multiprocessing systems. Essentially, the processors are arranged in planes (from a base to an apex), each one of which is generally a reduced (usually by a power of two) version of the plane underneath: these processors are horizontally interconnected (within a plane) and vertically connected with "fathers" on the plane above and "children" on the plane below. This arrangement has a number of interesting features, all of which were amply discussed at our workshop, including the cellular array and hypercube versions of pyramids. A number of projects (in different parts of the world) are reported, as well as some interesting applications in computer vision, tactile systems and numerical calculations.
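A minimal sketch of the pyramid addressing idea described above (illustrative only, not taken from the proceedings; the base size and coordinates are assumptions): when each plane halves the side length of the one below, a processor's "father" on the next plane up is found simply by halving its coordinates, and the apex is reached after log2(BASE) levels.

#include <stdio.h>

#define BASE 8   /* side length of the base plane; assumed to be a power of two */

int main(void) {
    int x = 5, y = 6;   /* an arbitrary processor on the base plane (level 0) */
    printf("level 0: processor (%d, %d) on an %dx%d plane\n", x, y, BASE, BASE);

    /* Each plane above halves the side length; a processor's "father"
       is found by halving its coordinates. The apex is a single 1x1 plane. */
    for (int level = 1, side = BASE / 2; side >= 1; level++, side /= 2) {
        x /= 2;
        y /= 2;
        printf("level %d: father at (%d, %d) on a %dx%d plane\n",
               level, x, y, side, side);
    }
    return 0;
}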
Artificial Intelligence is entering the mainstream of computer applications and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high-performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high-level WAM instruction set in hardware, resulting in a CISC-style architecture.
Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16-18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratic convergent procedures for matrix geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. It is an authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' (Jules Verne)

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell)

'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside)

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the second quote above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Supercomputing is an important science and technology that enables the scientist or the engineer to simulate numerically very complex physical phenomena related to large-scale scientific, industrial and military applications. It has made considerable progress since the first NATO Workshop on High-Speed Computation in 1983 (Vol. 7 of the same series). This book is a collection of papers presented at the NATO Advanced Research Workshop held in Trondheim, Norway, in June 1989. It presents key research issues related to:
- hardware systems, architecture and performance;
- compilers and programming tools;
- user environments and visualization;
- algorithms and applications.
Contributions include critical evaluations of the state of the art and many original research results.