This state-of-the-art monograph presents a coherent survey of a variety of methods and systems for formal hardware verification. It emphasizes the presentation of approaches that have matured into tools and systems usable for the actual verification of nontrivial circuits. All in all, the book is a representative and well-structured survey of the success and future potential of formal methods in proving the correctness of circuits. The various chapters describe the respective approaches, supplying theoretical foundations as well as taking into account the application viewpoint. By applying all methods and systems presented to the same set of IFIP WG10.5 hardware verification examples, a valuable and fair analysis of the strengths and weaknesses of the various approaches is given.
It gives me immense pleasure to introduce this timely handbook to the research/development communities in the field of signal processing systems (SPS). This is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon the major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the IC in the 50's by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components were made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since then grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law had turned out to be more accurate, longer lasting and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer sciences and Internet technologies have also catalyzed large-scale system integration. All these have led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospect of humanity. (It is hard to imagine life today without mobiles or the Internet.) The success of SPS requires a well-concerted integrated approach from multiple disciplines, such as device, design, and application.
The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory overhead in multiprocessors with a logically shared address space and physically distributed memory. The techniques presented in this book can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantages of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
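To make the partitioning ideas above concrete, here is a minimal, hypothetical Python sketch (it is not the book's ADP algorithm or the ADAPT system; the function names and the even block rule are assumptions): it splits the iteration space of a data-parallel loop across P processors and counts the halo elements each processor would exchange under a near-neighbor communication pattern.

    # Illustrative sketch only; names and partitioning rule are not from the book.
    def block_partition(n, p):
        """Split n loop iterations into p contiguous blocks, as evenly as possible."""
        base, extra = divmod(n, p)
        bounds, start = [], 0
        for rank in range(p):
            size = base + (1 if rank < extra else 0)
            bounds.append((start, start + size))
            start += size
        return bounds

    def halo_exchanges(bounds):
        """Near-neighbor pattern: each interior block trades one halo element per neighbor."""
        p = len(bounds)
        return [(1 if r > 0 else 0) + (1 if r < p - 1 else 0) for r in range(p)]

    parts = block_partition(1000, 4)
    print(parts)                  # [(0, 250), (250, 500), (500, 750), (750, 1000)]
    print(halo_exchanges(parts))  # [1, 2, 2, 1]

Counting how such exchange volumes change under alternative assignments (for example cyclic instead of blocked) is the kind of load-imbalance versus communication tradeoff the book quantifies far more rigorously.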
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
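As a rough illustration of the programming-model contrast described above (a single shared address space versus message passing), here is a small hypothetical Python sketch; the threading and queue modules merely stand in for shared memory and an interconnect, and nothing here is taken from the book.

    import threading, queue

    def shared_memory_sum(data, nworkers=4):
        # Shared-address-space style: all workers update one shared accumulator.
        total, lock = [0], threading.Lock()
        def worker(chunk):
            s = sum(chunk)
            with lock:
                total[0] += s
        step = (len(data) + nworkers - 1) // nworkers
        threads = [threading.Thread(target=worker, args=(data[i*step:(i+1)*step],))
                   for i in range(nworkers)]
        for t in threads: t.start()
        for t in threads: t.join()
        return total[0]

    def message_passing_sum(data, nworkers=4):
        # Distributed-memory style: no shared result; partial sums arrive as messages.
        inbox = queue.Queue()
        step = (len(data) + nworkers - 1) // nworkers
        def worker(chunk):
            inbox.put(sum(chunk))
        threads = [threading.Thread(target=worker, args=(data[i*step:(i+1)*step],))
                   for i in range(nworkers)]
        for t in threads: t.start()
        for t in threads: t.join()
        return sum(inbox.get() for _ in range(nworkers))

    data = list(range(1000))
    print(shared_memory_sum(data), message_passing_sum(data))  # both print 499500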
THE CONTEXT OF PARALLEL PROCESSING. The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and the invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, have allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its down side. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; and to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements on the other.
The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large; that is, it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer or a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, added-value system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What System Architects Need to Know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tool efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
This book describes a circuit architecture for converting real analog signals into a digital format, suitable for digital signal processors. This architecture, referred to as multi-stage noise-shaping (MASH) Continuous-Time Sigma-Delta Modulators (CT-ΣΔMs), has the potential to provide better digital data quality and achieve better data rate conversion with lower power consumption. The authors not only cover MASH continuous-time sigma-delta modulator fundamentals, but also provide a literature review that will allow students, professors, and professionals to catch up on the latest developments in related technology.
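For readers new to the area, the following deliberately simplified Python sketch simulates a discrete-time, single-stage, first-order sigma-delta loop; it is background only, assumed for illustration, and is not the multi-stage continuous-time (MASH CT-ΣΔM) architecture the book develops.

    def first_order_sigma_delta(samples):
        """Toy first-order loop: integrator, 1-bit quantizer, unit-delay DAC feedback."""
        integrator, feedback, bits = 0.0, 0.0, []
        for x in samples:              # inputs assumed to lie in [-1, 1]
            integrator += x - feedback
            bit = 1.0 if integrator >= 0 else -1.0
            bits.append(bit)
            feedback = bit
        return bits

    # The average of the 1-bit stream tracks a slowly varying (here DC) input.
    out = first_order_sigma_delta([0.5] * 10000)
    print(sum(out) / len(out))         # close to 0.5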
Recently, a variety of results on the complexity status of the graph isomorphism problem have been obtained. These results belong to the so-called structural part of Complexity Theory. Our idea behind this book is to summarize such results, which might otherwise not be easily accessible in the literature, and also to give the reader an understanding of the aims and topics in Structural Complexity Theory in general. The text is basically self-contained; the only prerequisite for reading it is some elementary knowledge from Complexity Theory and Probability Theory. It can be used to teach a seminar or a monographic graduate course, but also parts of it (especially Chapter 1) provide a source of examples for a standard graduate course on Complexity Theory. Many people have helped us in different ways in the process of writing this book. Especially, we would like to thank V. Arvind, R.V. Book, E. Mayordomo, and the referee who gave very constructive comments. This book project was especially made possible by a DAAD grant in the "Acciones Integradas" program. The third author has been supported by the ESPRIT project ALCOM-II.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low cost, high performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as research and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in Ispra (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures and especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation and also enhanced parallel and distributed numerical methods on the T.Node.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to traditional approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which have in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.
This book provides a comprehensive survey of recent progress in the design and implementation of Networks-on-Chip. It addresses a wide spectrum of on-chip communication problems, ranging from the physical and network layers to the application layer. Specific topics that are explored in detail include packet routing, resource arbitration, error control/correction, application mapping, and communication scheduling. Additionally, a novel bi-directional communication channel NoC (BiNoC) architecture is described in detail. Written for practicing engineers in need of practical knowledge about the design and implementation of networks-on-chip; Includes tutorial-like details to introduce readers to a diverse range of NoC designs, as well as in-depth analysis for designers with NoC experience to explore advanced issues; Describes a variety of on-chip communication architectures, including a novel bi-directional communication channel NoC. From the Foreword: "Overall this book shows important advances over the state of the art that will affect future system design as well as R&D in tools and methods for NoC design. It represents an important reference point for both designers and electronic design automation researchers and developers." -- Giovanni De Micheli
The most important use of computing in the future will be in the context of the global "digital convergence" where everything becomes digital and everything is inter-networked. Applications will be dominated by storage, search, retrieval, analysis, exchange and updating of information in a wide variety of forms. Heavy demands will be placed on systems by many simultaneous requests. And, fundamentally, all this shall be delivered at much higher levels of dependability, integrity and security. Increasingly, large parallel computing systems and networks are providing unique challenges to industry and academia in dependable computing, especially because of the higher failure rates intrinsic to these systems. The challenge in the last part of this decade is to build systems that are both inexpensive and highly available. A machine cluster built of commodity hardware parts, with each node running an OS instance and a set of applications extended to be fault resilient, can satisfy the new stringent high-availability requirements. The focus of this book is to present recent techniques and methods for implementing fault-tolerant parallel and distributed computing systems. Section I, Fault-Tolerant Protocols, considers basic techniques for achieving fault-tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication, static total causal ordering protocols, and a fail-aware datagram service that supports communications by time.
This book presents high-/mixed-voltage analog and radio frequency (RF) circuit techniques for developing low-cost multistandard wireless receivers in nm-length CMOS processes. Key benefits of high-/mixed-voltage RF and analog CMOS circuits are explained, state-of-the-art examples are studied, and circuit solutions before and after voltage-conscious design are compared. Three real design examples are included, which demonstrate the feasibility of high-/mixed-voltage circuit techniques. Provides a valuable summary and real case studies of the state-of-the-art in high-/mixed-voltage circuits and systems; Includes novel high-/mixed-voltage analog and RF circuit techniques - from concept to practice; Describes the first high-voltage-enabled mobile-TV RF front-end in 90nm CMOS and the first mixed-voltage full-band mobile-TV receiver in 65nm CMOS; Demonstrates the feasibility of high-/mixed-voltage circuit techniques with real design examples.
This book provides the most comprehensive and consistent survey of the field of IC design for Biological Sensing and Processing. The authors describe a multitude of applications that require custom CMOS IC design and highlight the techniques in analog and mixed-signal circuit design that potentially can cross boundaries and benefit the very wide community of bio-medical engineers.
This book is intended to serve as a textbook for a second course in the implementation (i.e., microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design, by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
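As a back-of-the-envelope illustration of the low-level parallelism mentioned above, here is the standard pipelining arithmetic in Python (textbook material, not an excerpt from the book): cycle counts for n instructions on an idealized k-stage pipeline, ignoring hazards and stalls.

    def unpipelined_cycles(n, k):
        return n * k                   # each instruction occupies the whole k-stage unit

    def pipelined_cycles(n, k):
        return k + (n - 1)             # fill the pipeline once, then one completion per cycle

    n, k = 1000, 5
    print(unpipelined_cycles(n, k))    # 5000
    print(pipelined_cycles(n, k))      # 1004
    print(unpipelined_cycles(n, k) / pipelined_cycles(n, k))  # about 4.98, approaching k for large n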
This book describes the algorithms and computer architectures used to create and analyze photographs in modern digital cameras. It also puts the capabilities of digital cameras into context for applications in art, entertainment, and video analysis. The author discusses the entire range of topics relevant to digital camera design, including image processing, computer vision, image sensors, system-on-chip, and optics, while clearly describing the interactions between design decisions at these different levels of abstraction. Readers will benefit from this comprehensive view of digital camera design, describing the range of algorithms used to compose, enhance, and analyze images, as well as the characteristics of optics, image sensors, and computing platforms that determine the physical limits of image capture and computing. The content is designed to be used by algorithm designers and does not require an extensive background in optics or electronics.
Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion on the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 describes formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relations of scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes benchmark-oriented performance testing and analysis, and shows how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems in three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assist programmers in optimizing the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of a performance problem. It also provides concrete suggestions on how to modify the parallel program to improve performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performance. Performance Evaluation, Prediction and Visualization in Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
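To give a concrete flavor of the speedup and scalability metrics treated in Chapters 2 through 5, here is a short Python sketch using the standard textbook definitions rather than the book's generalized metrics; the runtime numbers are invented for illustration.

    def speedup(t_serial, t_parallel):
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, p):
        return speedup(t_serial, t_parallel) / p

    def amdahl_speedup(serial_fraction, p):
        """Predicted speedup on p processors when a fraction of the work stays serial."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

    print(speedup(120.0, 20.0))        # 6.0, measured on, say, 8 processors
    print(efficiency(120.0, 20.0, 8))  # 0.75
    print(amdahl_speedup(0.05, 8))     # about 5.93, the predicted ceiling when 5% stays serial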
This book guides readers through the design of hardware architectures using VHDL for digital communication and image processing applications that require high-performance computing. Further, it describes all the relevant VHDL-related notions, such as the language itself, levels of abstraction, combinational vs. sequential logic, structural and behavioral description, digital circuit design, and finite state machines. It also includes numerous examples to make the concepts presented in the text more easily understandable.
Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run-time so as to achieve the optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
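The following hypothetical Python sketch shows the general shape of such an architecture-parameterized model (illustrative only, not the book's formulation): execution time is per-processor computation plus communication, each divided by an architecture-specific rate, so two data distributions can be compared once the rates are known.

    def exec_time(ops_per_proc, words_sent, compute_rate, comm_rate):
        """compute_rate in operations/s, comm_rate in words/s; returns seconds."""
        return ops_per_proc / compute_rate + words_sent / comm_rate

    # Two made-up distributions of the same work; the second communicates ten times more.
    dist_low_comm  = exec_time(1e8, 1e5, compute_rate=1e9, comm_rate=1e7)
    dist_high_comm = exec_time(1e8, 1e6, compute_rate=1e9, comm_rate=1e7)
    print(dist_low_comm, dist_high_comm)   # 0.11 s versus 0.2 s on these assumed rates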
Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
This book provides an overview of current Intellectual Property (IP) based System-on-Chip (SoC) design methodology and highlights how the security of IP can be compromised at various stages in the overall SoC design-fabrication-deployment cycle. Readers will gain a comprehensive understanding of the security vulnerabilities of different types of IPs. The book enables readers to overcome these vulnerabilities through an efficient combination of proactive countermeasures and design-for-security solutions, as well as a wide variety of IP security and trust assessment and validation techniques. It serves as a single source of reference for system designers and practitioners designing secure, reliable and trustworthy SoCs.
Cache and Interconnect Architectures in Multiprocessors, Eilat, Israel, May 25-26, 1989. Michel Dubois, University of Southern California; Shreekant S. Thakkar, Sequent Computer Systems. The aim of the workshop was to bring together researchers working on cache coherence protocols for shared-memory multiprocessors with various interconnect architectures. Shared-memory multiprocessors have become viable systems for many applications. Bus-based shared-memory systems (e.g. Sequent's Symmetry, Encore's Multimax) are currently limited to 32 processors. The first goal of the workshop was to learn about the performance of applications on current cache-based systems. The second goal was to learn about new network architectures and protocols for future scalable systems. These protocols and interconnects would allow shared-memory architectures to scale beyond current limitations. The workshop had 20 speakers who talked about their current research. The discussions were lively and cordial enough to keep the participants away from the wonderful sand and sun for two days. The participants got to know each other well and were able to share their thoughts in an informal manner. The workshop was organized into several sessions. The summary of each session is described below. This book presents revisions of some of the papers presented at the workshop.
Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.