Multiscalar Processors presents a comprehensive treatment of the basic principles of multiscalar execution and advanced techniques for implementing the multiscalar concept. Special emphasis is placed on the major challenges involved in multiscalar processing. Organized into nine chapters, the book provides an excellent synopsis of the large body of research carried out on multiscalar processors over the last decade. It starts with the technology trends that provide the impetus for multiscalar processors and will shape future processors, and ends with a review of recent developments related to multiscalar processors.
This text has been produced for the benefit of students in computer and information science and for experts involved in the design of microprocessors. It deals with the design of complex VLSI chips, specifically of microprocessor chip sets. The aim is on the one hand to provide an overview of the state of the art, and on the other hand to describe specific design know-how. The depth of detail presented goes considerably beyond the level of information usually found in computer science textbooks. The rapidly developing discipline of designing complex VLSI chips, especially microprocessors, requires a significant extension of the state of the art. We are observing the genesis of a new engineering discipline, the design and realization of very complex logical structures, and we are obviously only at the beginning. This discipline is still young and immature, alternate concepts are still evolving, and "the best way to do it" is still being explored. Therefore it is not yet possible to describe the different methods in use and to evaluate them. However, the economic impact is significant today, and the heavy investment that companies in the USA, the Far East, and Europe are making in generating VLSI design competence is a testimony to the importance this field is expected to have in the future. Staying competitive requires mastering and extending this competence.
This project had its beginnings in the Fall of 1980. At that time Robert Wagner suggested that I investigate compiler optimization of data organization, suitable for use in a parallel or vector machine environment. We developed a scheme in which the compiler, having knowledge of the machine's access patterns, does a global analysis of a program's operations and automatically determines the optimum organization for the data. For example, for certain architectures and certain operations, large improvements in performance can be attained by storing a matrix in row-major order. However, a subsequent operation may require the matrix in column-major order. A determination must be made whether the globally best solution is to store the matrix in row order, in column order, or even to keep two copies of it, each organized differently. We have developed two algorithms for making this determination. The technique shows promise in a vector machine environment, particularly if memory interleaving is used. Supercomputers such as the Cray, the CDC Cyber 205, and the IBM 3090, as well as superminis such as the Convex, are possible environments for implementation.
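As a quick illustration of the trade-off this blurb describes, here is a minimal Python sketch (the array sizes and names are illustrative, not from the book) showing why a row-wise operation favors row-major storage:

```python
# Minimal sketch: the same row-wise reduction, on the same data, is
# usually much faster when the matrix is stored in row-major order,
# because each row is then contiguous in memory.
import time
import numpy as np

n = 2000
row_major = np.ones((n, n), order="C")   # rows contiguous in memory
col_major = np.ones((n, n), order="F")   # columns contiguous in memory

def time_row_sums(a):
    t0 = time.perf_counter()
    a.sum(axis=1)                        # sweep each row in turn
    return time.perf_counter() - t0

print("row-major:", time_row_sums(row_major))
print("col-major:", time_row_sums(col_major))
```

A compiler with the global view described above would weigh such per-operation costs against the cost of transposing, or of keeping a second copy in the other layout.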
This book contains papers presented at the NATO Advanced Research Workshop on "Real-time Object and Environment Measurement and Classification" held in Maratea, Italy, August 31 - September 3, 1987. This workshop was organized within the activities of the NATO Special Programme on Sensory Systems for Robotic Control. Four major themes were discussed at this workshop: Real-time Requirements, Feature Measurement, Object Representation and Recognition, and Architecture for Measurement and Classification. A total of twenty-five technical presentations, contained in this book, cover a wide spectrum of topics, including hardware implementation of specific vision algorithms, a complete vision system for object tracking and inspection, the use of three cameras (trinocular stereo) for feature measurement, neural networks for object recognition, integration of CAD (Computer Aided Design) and vision systems, and the use of pyramid architectures for solving various computer vision problems. These papers are written by some of the best-known researchers in the computer vision and pattern recognition community, and represent both industrial and academic viewpoints. The authors come from thirteen countries in Europe and North America, so readers will get first-hand, current information about the status of computer vision research in various western countries. Further, this book will be useful in understanding the current research issues in computer vision and the difficulties in designing real-time vision systems.
Dependence Analysis may be considered to be the second edition of the author's 1988 book, Dependence Analysis for Supercomputing. It is, however, a completely new work that subsumes the material of the 1988 publication. This book is the third volume in the series Loop Transformations for Restructuring Compilers. This series has been designed to provide a complete mathematical theory of transformations that can be used to automatically change a sequential program containing FORTRAN-like do loops into an equivalent parallel form. In Dependence Analysis, the author extends the model to a program consisting of do loops and assignment statements, where the loops need not be sequentially nested and are allowed to have arbitrary strides. In the context of such a program, the author studies, in detail, dependence between statements of the program caused by program variables that are elements of arrays. Dependence Analysis is directed toward graduate and undergraduate students, and professional writers of restructuring compilers. The prerequisite for the book consists of some knowledge of programming languages, and familiarity with calculus and graph theory. No knowledge of linear programming is required.
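To make the central notion concrete, here is a small, hypothetical example (mine, not the book's) of the loop-carried dependence such an analysis must detect:

```python
# Sketch: two kinds of loop behavior a dependence analysis distinguishes.
import numpy as np

n = 8
a = np.zeros(n)
a[0] = 1.0

# Flow (true) dependence: iteration i reads a[i-1], which iteration
# i-1 writes, so these iterations cannot run in parallel as written.
for i in range(1, n):
    a[i] = a[i - 1] * 2.0

# No loop-carried dependence: each iteration touches only b[i], so a
# restructuring compiler may legally run the iterations in parallel.
b = np.zeros(n)
for i in range(n):
    b[i] = a[i] + 1.0
```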
Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad, starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, and extending to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, to enable the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration, such as scalability and computational power, and more recent advances such as optical models, run-time reconfiguration (on FPGAs and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference for researchers and system developers in the area.
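For a flavor of the model, here is a toy Python simulation (my own sketch, not the book's code) of a classic reconfigurable-bus technique: computing the OR of n bits in a constant number of bus cycles on a 1 x n processor row. Processors holding 0 fuse their bus ports; processors holding 1 split the bus and drive a signal onto the segment to their left, so processor 0 hears a signal exactly when some bit is 1:

```python
def reconfigurable_or(bits):
    """Return the OR of bits, simulating one bus configure/write/read cycle."""
    n = len(bits)
    # Each processor i owns a left port; a 0-processor fuses its left
    # port with its right neighbor's left port, a 1-processor leaves
    # the bus split at itself. Union fused ports into segments.
    seg = list(range(n))
    def find(i):
        while seg[i] != i:
            seg[i] = seg[seg[i]]   # path halving
            i = seg[i]
        return i
    for i in range(n - 1):
        if bits[i] == 0:           # fuse port i with port i+1
            seg[find(i + 1)] = find(i)
    # Every 1-processor drives a signal onto its left segment.
    driven = {find(i) for i in range(n) if bits[i] == 1}
    # Processor 0 reads its segment: a signal is present iff some bit is 1.
    return 1 if find(0) in driven else 0

assert reconfigurable_or([0, 0, 1, 0]) == 1
assert reconfigurable_or([0, 0, 0, 0]) == 0
```

The point of the exercise is that the answer arrives in O(1) bus cycles regardless of n, which is exactly the kind of power-versus-scalability question the book analyzes.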
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. The book covers a comprehensive range of architecture design and validation methods, from computer aided high-level design of VLSI circuits and systems to layout and testable design, including the modeling and synthesis of behavior and dataflow, cell-based logic optimization, machine assisted verification, and virtual machine design.
Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
These are the proceedings of a NATO Advanced Study Institute (ASI) held in Cetraro, Italy during 6-17 June 1983. The title of the ASI was Computer Architectures for Spatially Distributed Data, and it brought together some 60 participants from Europe and America. Presented here are 21 of the lectures that were delivered. The articles cover a wide spectrum of topics related to computer architectures specially oriented toward the fast processing of spatial data, and represent an excellent review of the state of the art of this topic. For more than 20 years now researchers in pattern recognition, image processing, meteorology, remote sensing, and computer engineering have been looking toward new forms of computer architectures to speed the processing of data from two- and three-dimensional processes. The work can be said to have commenced with the landmark article by Steve Unger in 1958, and it received a strong forward push with the development of the ILLIAC III and IV computers at the University of Illinois during the 1960s. One clear obstacle faced by the computer designers in those days was the limitation of the state of the art of hardware, when the only switching devices available to them were discrete transistors. As a result parallel processing was generally considered to be impractical, and relatively little progress was made.
This book contains the proceedings of the NATO Advanced Research Workshop held in Maratea (Italy), May 5-9, 1986 on Pyramidal Systems for Image Processing and Computer Vision. We had 40 participants from 11 countries playing an active part in the workshop and all the leaders of groups that have produced a prototype pyramid machine or a design for such a machine were present. Within the wide field of parallel architectures for image processing a new area was recently born and is growing healthily: the area of pyramidally structured multiprocessing systems. Essentially, the processors are arranged in planes (from a base to an apex) each one of which is generally a reduced (usually by a power of two) version of the plane underneath: these processors are horizontally interconnected (within a plane) and vertically connected with "fathers" (on top planes) and "children" on the plane below. This arrangement has a number of interesting features, all of which were amply discussed in our Workshop including the cellular array and hypercube versions of pyramids. A number of projects (in different parts of the world) are reported as well as some interesting applications in computer vision, tactile systems and numerical calculations.
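For readers unfamiliar with the structure, here is a minimal sketch (illustrative only, not from the proceedings) of the power-of-two parent/child index arithmetic the blurb describes:

```python
# Sketch: index arithmetic of a pyramid machine in which each plane
# halves the resolution of the plane below. Plane 0 is the base image.
def parent(x, y):
    """Coordinates of the 'father' processor one plane up."""
    return x // 2, y // 2

def children(x, y):
    """The four 'children' processors one plane down."""
    return [(2 * x + dx, 2 * y + dy) for dx in (0, 1) for dy in (0, 1)]

# A 4-level pyramid over an 8x8 base: plane sizes 8x8, 4x4, 2x2, 1x1.
sizes = [8 >> level for level in range(4)]
print(sizes)             # [8, 4, 2, 1]
print(parent(5, 6))      # (2, 3)
print(children(2, 3))    # [(4, 6), (4, 7), (5, 6), (5, 7)]
```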
Artificial Intelligence is entering the mainstream of computer applications and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high level WAM instruction set in hardware, resulting in a CISC-style architecture.
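The WAM instructions that the PLM casts into hardware ultimately implement unification. As a purely software-level illustration of that operation (a toy model, not the PLM design itself), consider this small unifier, where variables are strings and compound terms are tuples of a functor name and argument terms:

```python
# Toy structure unification, the core operation behind WAM instructions.
def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return an extended substitution dict, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):                 # a is an unbound variable
        return {**subst, a: b}
    if isinstance(b, str):                 # b is an unbound variable
        return {**subst, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):     # unify arguments pairwise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# unify f(X, b) with f(a, Y): atoms are 0-argument tuples.
print(unify(("f", "X", ("b",)), ("f", ("a",), "Y"), {}))
# -> {'X': ('a',), 'Y': ('b',)}
```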
Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16-18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratically convergent procedures for matrix geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. This is an authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.
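As one concrete example of the material, here is a short sketch of the GTH (Grassmann-Taksar-Heyman) algorithm mentioned above; the subtraction-free elimination is what gives it its favorable numerical stability (the implementation details here are mine, not the proceedings'):

```python
# Sketch of the GTH algorithm: stationary distribution of an
# irreducible Markov chain via subtraction-free Gaussian elimination.
import numpy as np

def gth_stationary(P):
    """Return pi with pi P = pi, pi >= 0, sum(pi) = 1."""
    A = np.array(P, dtype=float)
    n = A.shape[0]
    for k in range(n - 1, 0, -1):        # eliminate states n-1 .. 1
        s = A[k, :k].sum()               # equals 1 - A[k, k], computed
                                         # by summation, not subtraction
        A[:k, k] /= s
        A[:k, :k] += np.outer(A[:k, k], A[k, :k])
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):                # back-substitution
        pi[k] = pi[:k] @ A[:k, k]
    return pi / pi.sum()

# Two-state chain with switch rates a=0.1, b=0.2: pi = (b, a)/(a+b).
print(gth_stationary([[0.9, 0.1], [0.2, 0.8]]))   # [0.6667 0.3333]
```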
'And I, ..., had I known how to come back, I would never have gone.' (Jules Verne) 'One service mathematics has rendered the human race: it has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell) 'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside) Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Eric Bell's quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Supercomputing is an important science and technology that enables the scientist or the engineer to simulate numerically very complex physical phenomena related to large-scale scientific, industrial and military applications. It has made considerable progress since the first NATO Workshop on High-Speed Computation in 1983 (Vol. 7 of the same series). This book is a collection of papers presented at the NATO Advanced Research Workshop held in Trondheim, Norway, in June 1989. It presents key research issues related to:
- hardware systems, architecture and performance;
- compilers and programming tools;
- user environments and visualization;
- algorithms and applications.
Contributions include critical evaluations of the state of the art and many original research results.
This book gives an introduction to the mathematical theory of cooperative behavior in active systems of various origins, both natural and artificial. It is based on a lecture course in synergetics which I held for almost ten years at the University of Moscow. The first volume deals mainly with the problems of pattern formation and the properties of self-organized regular patterns in distributed active systems. It also contains a discussion of distributed analog information processing which is based on the cooperative dynamics of active systems. The second volume is devoted to the stochastic aspects of self-organization and the properties of self-established chaos. I have tried to avoid delving into particular applications. The primary intention is to present general mathematical models that describe the principal kinds of cooperative behavior in distributed active systems. Simple examples, ranging from chemical physics to economics, serve only as illustrations of the typical context in which a particular model can apply. The manner of exposition is more in the tradition of theoretical physics than of mathematics: elaborate formal proofs and rigorous estimates are often replaced in the text by arguments based on an intuitive understanding of the relevant models. Because of the interdisciplinary nature of this book, its readers might well come from very diverse fields of endeavor. It was therefore desirable to minimize the required preliminary knowledge. Generally, a standard university course in differential calculus and linear algebra is sufficient.
Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of the theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion of the importance of performance evaluation, prediction and visualization. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 gives formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses scalability from three aspects (parallel architecture, parallel algorithm, and parallel algorithm-architecture combinations), and analyzes the relation between scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes benchmark-oriented performance testing and analysis, and shows how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, and discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems in three stages (performance data collection, filtering, and visualization), and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assist programmers in optimizing the strategy or algorithm in their parallel programs, and presents visual-programming-based performance debugging to help programmers identify the location and cause of a performance problem; it also provides concrete suggestions on how to modify a parallel program to improve its performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performance. The book serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
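A minimal sketch of the basic metrics Chapters 2 and 3 build on: speedup, efficiency, and the Amdahl-law bound imposed by a serial fraction (the function names are illustrative, not the book's):

```python
# Basic parallel performance metrics.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    return speedup(t_serial, t_parallel) / p

def amdahl_bound(f, p):
    """Best achievable speedup on p processors if a fraction f is serial."""
    return 1.0 / (f + (1.0 - f) / p)

# A program with a 5% serial fraction can never exceed 20x speedup:
for p in (8, 64, 1024):
    print(p, round(amdahl_bound(0.05, p), 2))   # 5.93, 15.42, 19.64
```

Scalability analysis, in the book's sense, asks how such numbers behave as both the machine size and the problem size grow.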
Dependable Network Computing provides insights into various problems facing the millions of global users affected by the 'Internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems, with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and deadlock- and livelock-related issues. All chapters are written by leading specialists in their respective fields. The book provides useful information for scientists, researchers, and application developers building networks from commercial off-the-shelf components.
The most important uses of computing in the future will be those related to the global 'digital convergence', where all computing becomes digital and internetworked. This convergence will be propelled by new and advanced applications in storage, searching, retrieval and exchanging of information in a myriad of forms. All of these will place heavy demands on large parallel and distributed computer systems, because these systems have high intrinsic failure rates. The challenge to the computer scientist is to build a system that is inexpensive, accessible and dependable. The chapters in this book provide insight into many of these issues and others that will challenge researchers and applications developers. Included among these topics are:
* Fault tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication.
* Methods and approaches for achieving fault tolerance in distributed systems such as those used in networks of workstations (NOW), dependable cluster systems, and scalable coherent interface (SCI)-based local area multiprocessors (LAMP).
* General models and features of distributed safety-critical systems built from commercial off-the-shelf components, as well as service dependability in telecomputing systems.
* Dependable parallel systems for real-time processing of video signals.
* Embedding in faulty multiprocessor systems, broadcasting, system-level testing techniques, on-line detection and recovery from intermittent and permanent faults, and more.
Fault-Tolerant Parallel and Distributed Systems is a coherent and uniform collection of chapters with contributions by several of the leading experts working on fault-resilient applications. The numerous techniques and methods included will be of special interest to researchers, developers, and graduate students.
This volume contains refereed papers and extended abstracts of papers presented at the NATO Advanced Research Workshop entitled 'Numerical Integration: Recent Developments, Software and Applications', held at Dalhousie University, Halifax, Canada, August 11-15, 1986. The Workshop was attended by thirty-six scientists from eleven NATO countries. Thirteen invited lectures and twenty-two contributed lectures were presented, of which twenty-five appear in full in this volume, together with extended abstracts of the remaining ten. It is more than ten years since the last workshop of this nature was held, in Los Alamos in 1975. Many developments have occurred in quadrature in the intervening years, and it seemed an opportune time to bring together again researchers in this area. The development of QUADPACK by Piessens, de Doncker, Uberhuber and Kahaner has changed the focus of research in the area of one dimensional quadrature from the construction of new rules to an emphasis on reliable robust software. There has been a dramatic growth in interest in the testing and evaluation of software, stimulated by the work of Lyness and Kaganove, Einarsson, and Piessens. The earlier research of Patterson into Kronrod extensions of Gauss rules, followed by the work of Monegato, and Piessens and Branders, has greatly increased interest in Gauss-based formulas for one-dimensional integration.
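QUADPACK's emphasis on reliable, robust software has had a lasting effect: SciPy's quad routine is built on the QUADPACK library and its adaptive Gauss-Kronrod machinery. A minimal usage example (mine, not from the proceedings):

```python
# QUADPACK today: scipy.integrate.quad wraps the QUADPACK routines and
# returns both the integral estimate and an absolute error estimate.
import numpy as np
from scipy.integrate import quad

value, abserr = quad(np.exp, 0.0, 1.0)   # integrate e^x over [0, 1]
print(value, abserr)                     # ~1.718281828 (= e - 1), tiny error
```

The returned error estimate is exactly the kind of reliability information that the shift "from new rules to robust software" described above was meant to provide.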
As more and more devices become interconnected through the Internet of Things (IoT), there is an even greater need for this book, which explains the technology, the internetworking, and the applications that are making IoT an everyday reality. The book begins with a discussion of IoT "ecosystems" and the technology that enables them, including:
- Wireless Infrastructure and Service Discovery Protocols
- Integration Technologies and Tools
- Application and Analytics Enablement Platforms
A chapter on next-generation cloud infrastructure explains hosting IoT platforms and applications. A chapter on data analytics covers IoT data collection, storage, translation, real-time processing, mining, and analysis, all of which can yield actionable insights from the data collected by IoT applications. There is also a chapter on edge/fog computing. The second half of the book presents various IoT ecosystem use cases. One chapter discusses smart airports and highlights the role of IoT integration: it explains how mobile devices, mobile technology, wearables, RFID sensors, and beacons work together as the core technologies of a smart airport, examines in detail how these components are integrated into the airport ecosystem, and illustrates this IoT ecosystem in operation with use cases and real-life examples. Another in-depth chapter envisions smart healthcare systems in a connected world, focusing on the requirements, promising applications, and the roles of cloud computing and data analytics. The book also examines smart homes, smart cities, and smart governments. It concludes with a chapter on IoT security and privacy, which examines the emerging security and privacy requirements of IoT environments and discusses the security issues, techniques for surmounting them, and best practices.
The advent of the World Wide Web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with the instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on the memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge for research groups, due to the difficulty of accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.
Over the last decade, the role of computational simulation in all aspects of aerospace design has steadily increased. However, despite the many advances, the time required for computations is still far too long. This book examines new ideas and methodologies that may, in the next twenty years, revolutionize scientific computing. It looks specifically at trends in algorithm research, human-computer interfaces, network-based computing, surface modeling and grid generation, and computer hardware and architecture. The book provides a good overview of the current state of the art and offers guidelines for future research directions. It is intended for computational scientists active in the field and for program managers making strategic research decisions.
Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.
As the technology of supercomputing progresses, methodologies for approaching problems have also developed. The main objective of this symposium was interdisciplinary participation by experts in related fields and passionate discussion working toward the solution of problems. An executive committee arranged especially for this symposium selected the speakers and other participants, whose submitted papers are included in this volume. Also included are selected extracts from the two panel-discussion sessions, "Needs and Seeds of Supercomputing" and "The Future of Supercomputing," which featured a wide-ranging exchange of viewpoints.
Although asynchronous circuits date back to the early 1950s, most of the digital circuits in use today are synchronous because, traditionally, asynchronous circuits have been viewed as difficult to understand and design. In recent years, however, there has been a great surge of interest in asynchronous circuits, largely through the development of new asynchronous design methodologies.
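A small illustration of the synchronization primitive such methodologies lean on: the Muller C-element, sketched here as a toy Python model (mine, not from the book). Its output switches only when both inputs agree, so it acts as the rendezvous that clockless designs use in place of a global clock:

```python
# Toy model of a Muller C-element, the canonical asynchronous building block.
def c_element(a, b, prev_out):
    """Output goes high when a = b = 1, low when a = b = 0, else holds."""
    if a == b:
        return a
    return prev_out

out = 0
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    print(a, b, "->", out)   # 0, 1, 1, 0: output holds while inputs disagree
```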