With the development of very deep sub-micron technologies, process variability is becoming increasingly important and is a critical issue in the design of complex circuits. Process variability is the statistical variation of process parameters: these parameters do not always have the same value but become random variables, each with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because of delay variability, not all circuits will have the same performance: some will be faster and some slower. The slowest circuits may be so slow that they are not suitable for sale. On the other hand, the fastest circuits, which could be sold at a higher price, can be very leaky and therefore also unsuitable for sale. A main consequence of power variability is that the power consumption of some circuits will differ from what was expected, reducing the reliability, average life expectancy and warranty of products. Sometimes the circuits will not work at all for reasons associated with process variations. In the end, these effects result in lower yield and lower profitability. To understand these effects, it is necessary to study the consequences of variability in several aspects of circuit design, such as logic gates, storage elements, clock distribution, and anything else that can be affected by process variations. The main focus of this book is storage elements.
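To make the "delay becomes a random variable" idea concrete, here is a minimal Monte Carlo sketch, not taken from the book: each gate delay is modelled as a Gaussian random variable, and the spread of path delay and the resulting parametric yield are estimated. The mean, sigma and timing-constraint values are illustrative assumptions only.

```python
import random

# Minimal Monte Carlo sketch (not from the book): treat each gate delay as a
# Gaussian random variable and observe how path delay and parametric yield
# spread.  All numbers below are illustrative assumptions.
GATE_DELAY_MEAN_NS = 0.10   # nominal gate delay
GATE_DELAY_SIGMA_NS = 0.01  # standard deviation due to process variation
GATES_ON_PATH = 8
TIMING_CONSTRAINT_NS = 0.9

def sample_path_delay() -> float:
    """Sum of independent Gaussian gate delays along one critical path."""
    return sum(random.gauss(GATE_DELAY_MEAN_NS, GATE_DELAY_SIGMA_NS)
               for _ in range(GATES_ON_PATH))

samples = [sample_path_delay() for _ in range(100_000)]
mean = sum(samples) / len(samples)
yield_fraction = sum(d <= TIMING_CONSTRAINT_NS for d in samples) / len(samples)
print(f"mean path delay = {mean:.3f} ns, parametric yield = {yield_fraction:.1%}")
```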
This handbook is about methods, tools and examples of how to architect an enterprise by considering all life-cycle aspects of enterprise entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems, including mixtures of automated and human processes). The book is based on ISO 15704:2000, the GERAM framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of enterprise reference architectures. Various architecture frameworks (PERA, CIMOSA, GRAI-GIM, Zachman, C4ISR/DoDAF) are shown in the light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide audience and covers the methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.
Media processing applications, such as three-dimensional graphics, video compression, and image processing, currently demand 10-100 billion operations per second of sustained computation. Fortunately, hundreds of arithmetic units can easily fit on a modestly sized 1 cm² chip in modern VLSI. The challenge is to provide these arithmetic units with enough data to enable them to meet the computation demands of media processing applications. Conventional storage hierarchies, which frequently include caches, are unable to bridge the data bandwidth gap between modern DRAM and tens to hundreds of arithmetic units. A data bandwidth hierarchy, however, can bridge this gap by scaling the provided bandwidth across the levels of the storage hierarchy. The stream programming model enables media processing applications to exploit a data bandwidth hierarchy effectively. Media processing applications can naturally be expressed as a sequence of computation kernels that operate on data streams. This programming model exposes the locality and concurrency inherent in these applications and enables them to be mapped efficiently to the data bandwidth hierarchy. Stream programs are able to utilize inexpensive local data bandwidth when possible and consume expensive global data bandwidth only when necessary. Stream Processor Architecture presents the architecture of the Imagine streaming media processor, which delivers a peak performance of 20 billion floating-point operations per second. Imagine efficiently supports 48 arithmetic units with a three-tiered data bandwidth hierarchy. At the base of the hierarchy, the streaming memory system employs memory access scheduling to maximize the sustained bandwidth of external DRAM. At the center of the hierarchy, the global stream register file enables streams of data to be recirculated directly from one computation kernel to the next without returning data to memory. Finally, local distributed register files that directly feed the arithmetic units enable temporary data to be stored locally so that it does not need to consume costly global register bandwidth. The bandwidth hierarchy enables Imagine to achieve up to 96% of the performance of a stream processor with infinite bandwidth from memory and the global register file.
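The following toy sketch (not Imagine's actual API, and the kernel names are invented) illustrates the stream programming model described above: an application is a sequence of kernels that consume and produce streams, with intermediate streams recirculated through a fast on-chip store instead of going back to memory.

```python
# A minimal sketch of the stream programming model: kernels map streams to
# streams, and intermediate results stay in an on-chip "stream register file"
# (here just a dict) rather than returning to slow external memory.
from typing import Callable, Dict, List

Stream = List[float]

def kernel(fn: Callable[[float], float]) -> Callable[[Stream], Stream]:
    """Wrap a per-element function as a kernel mapping one stream to another."""
    return lambda stream: [fn(x) for x in stream]

# Hypothetical kernels for a small image-processing pipeline.
brighten = kernel(lambda px: px * 1.2)
clamp = kernel(lambda px: min(px, 255.0))

stream_register_file: Dict[str, Stream] = {}

# Load once from "memory", then recirculate between kernels on chip.
stream_register_file["input"] = [10.0, 200.0, 250.0]
stream_register_file["bright"] = brighten(stream_register_file["input"])
stream_register_file["output"] = clamp(stream_register_file["bright"])
print(stream_register_file["output"])  # [12.0, 240.0, 255.0]
```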
Due to the decreasing production costs of IT systems, applications that formerly had to be realised as expensive PCBs can now be realised as a system-on-chip. Furthermore, low-cost broadband communication media are available both for wide-area communication and for the realisation of local distributed systems. Typically the market requires IT systems that realise a set of specific features for the end user in a given environment, so-called embedded systems. Some examples of such embedded systems are control systems in cars, airplanes, houses or plants, information and communication devices like digital TV and mobile phones, and autonomous systems like service or edutainment robots. For the design of embedded systems the designer has to tackle three major aspects: the application itself, including the man-machine interface; the (target) architecture of the system, including all functional and non-functional constraints; and the design methodology, including modelling, specification, synthesis, test and validation. The last two points are a major focus of this book. This book documents the high-quality approaches and results that were presented at the International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000), which was sponsored by the International Federation for Information Processing (IFIP) and organised by IFIP working groups WG10.3, WG10.4 and WG10.5. The workshop took place on October 18-19, 2000, in Schloss Eringerfeld near Paderborn, Germany. Architecture and Design of Distributed Embedded Systems is organised similarly to the workshop. Chapters 1 and 4 (Methodology I and II) deal with different modelling and specification paradigms and the corresponding design methodologies. Generic system architectures for different classes of embedded systems are presented in Chapter 2. In Chapter 3 several design environments for the support of specific design methodologies are presented. Problems concerning test and validation are discussed in Chapter 5. The last two chapters cover distribution and communication aspects (Chapter 6) and synthesis techniques for embedded systems (Chapter 7). This book is essential reading for computer science researchers and application developers.
Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
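As a small illustration of the idea, and not the book's actual model, the sketch below gives transactions explicit temporal constraints (release time and deadline) and orders legal, serial executions by earliest deadline first, reporting which deadlines are met.

```python
# Minimal sketch (not the book's model): transactions carry temporal
# constraints in addition to their logical work.  Among legal schedules, a
# scheduler can still differ in how many deadlines it meets; earliest-deadline-
# first ordering is used purely as an illustration.
from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    release: float    # earliest invocation time
    exec_time: float  # time needed to run to completion
    deadline: float   # latest acceptable completion time

def edf_schedule(txns: list[Transaction]) -> list[tuple[str, float, bool]]:
    """Run transactions serially in deadline order; report completion time and whether the deadline was met."""
    clock = 0.0
    result = []
    for t in sorted(txns, key=lambda t: t.deadline):
        clock = max(clock, t.release) + t.exec_time
        result.append((t.name, clock, clock <= t.deadline))
    return result

txns = [Transaction("T1", 0.0, 4.0, 10.0),
        Transaction("T2", 1.0, 2.0, 5.0),
        Transaction("T3", 0.0, 3.0, 9.0)]
for name, finish, met in edf_schedule(txns):
    print(f"{name}: finishes at {finish}, deadline {'met' if met else 'missed'}")
```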
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class are presented in the areas of multiple-target tracking for military surveillance systems, experimental high-energy physics, and parallel processing. Audience: researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
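For readers new to the problem class, the sketch below shows why NAPs are "nonlinear": in the quadratic assignment problem (a canonical NAP), the cost couples pairs of assignments through flow and distance matrices rather than summing independent per-assignment costs. The tiny instance and its numbers are invented for illustration.

```python
# A toy quadratic assignment problem (QAP), a canonical nonlinear assignment
# problem: assign facilities to sites so that flow-weighted distances are
# minimized.  Brute force is only feasible because the instance is tiny.
from itertools import permutations

flow = [[0, 3, 1],   # flow between facilities (illustrative values)
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 5, 4],   # distance between sites (illustrative values)
        [5, 0, 6],
        [4, 6, 0]]

def qap_cost(perm: tuple[int, ...]) -> int:
    """Cost of assigning facility i to site perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

best = min(permutations(range(3)), key=qap_cost)
print(best, qap_cost(best))  # (2, 0, 1) with cost 56 for this instance
```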
Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution. This book can be used as a reference for algorithm designers or as a text for an advanced course on parallel programming.
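To make the idea of parameter-controlled partitioning concrete, here is a minimal sketch, not taken from the book: a toy kernel whose work is split across workers by two tuning parameters (block_size and num_workers) that can be retuned for different machine configurations without changing the module itself.

```python
# Sketch of parameterized partitioning: the same module is tailored to a
# machine by changing tuning parameters, not code.
from concurrent.futures import ThreadPoolExecutor

def partition(n: int, block_size: int) -> list[range]:
    """Split the index range [0, n) into contiguous blocks of block_size."""
    return [range(i, min(i + block_size, n)) for i in range(0, n, block_size)]

def process_block(block: range, data: list[float]) -> float:
    # Stand-in for a signal-processing kernel: partial sum of squares.
    return sum(data[i] * data[i] for i in block)

def run(data: list[float], block_size: int, num_workers: int) -> float:
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partials = pool.map(lambda b: process_block(b, data),
                            partition(len(data), block_size))
        return sum(partials)

data = [float(i) for i in range(1000)]
# Retune by changing the parameters for a different processor configuration.
print(run(data, block_size=128, num_workers=4))
```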
Artificial Intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high-level WAM instruction set in hardware, resulting in a CISC-style architecture.
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), while the demand for higher speedups increases. The job of a restructuring compiler is to discover the dependence structure of a given program and transform it in a way consistent with both that dependence structure and the characteristics of the given machine. Much attention has been focused on the Fortran do loop, since this is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series Loop Transformations for Restructuring Compilers provides a rigorous theory of loop transformations and dependence analysis. We develop the transformations in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations, so that the algorithms that implement the transformations can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
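As a small worked example of the kind of mathematical object the series builds on (and not code from the book), consider the dependence distance vector of a doubly nested loop and the standard legality test for loop interchange: the transformation is legal only if every transformed distance vector remains lexicographically positive.

```python
# Dependence-distance example.  For the loop nest
#   for i in 1..N:
#       for j in 1..N:
#           a[i][j] = a[i-1][j+1] + 1
# iteration (i, j) reads the value written by iteration (i-1, j+1), giving the
# distance vector (1, -1).

def lex_positive(v: tuple[int, ...]) -> bool:
    """A vector is lexicographically positive if its first nonzero component is positive."""
    for c in v:
        if c != 0:
            return c > 0
    return False

def interchange_is_legal(distance_vectors: list[tuple[int, int]]) -> bool:
    """Interchanging the two loops swaps the components of every distance
    vector; the transformation is legal only if all swapped vectors stay
    lexicographically positive."""
    return all(lex_positive((d2, d1)) for (d1, d2) in distance_vectors)

print(interchange_is_legal([(1, -1)]))  # False: swapping gives (-1, 1)
print(interchange_is_legal([(1, 1)]))   # True:  swapping gives (1, 1)
```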
This book presents some of the latest applications of new theories based on the concept of paraconsistency and correlated topics in informatics, such as pattern recognition (bioinformatics), robotics, decision-making themes, and sample size. Each chapter is self-contained, and an introductory chapter covering the logical theoretical basis is also included. The aim of the text is twofold: to serve as an introductory text on the theories and applications of these new logics, and as a textbook for undergraduate or graduate-level courses in AI. Today AI frequently has to cope with problems of vagueness and of incomplete and conflicting (inconsistent) information. One of the most notable formal theories for addressing them is paraconsistent (paracomplete and non-alethic) logic.
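The sketch below illustrates one common paraconsistent formalism, annotated evidential logic, in which a proposition carries a favorable-evidence degree and a contrary-evidence degree; it is offered only as an illustration of how conflicting evidence can be measured rather than causing logical explosion, and the book's chapters may use different formulations and thresholds.

```python
# Toy paraconsistent reasoning: a proposition is annotated with a
# favorable-evidence degree mu and a contrary-evidence degree lam, both in
# [0, 1].  Conflicting evidence is measured, not treated as trivializing.
def analyse(mu: float, lam: float) -> str:
    certainty = mu - lam          # how strongly the evidence decides the proposition
    contradiction = mu + lam - 1  # how conflicting the evidence is
    if contradiction > 0.5:
        return f"inconsistent evidence (contradiction={contradiction:.2f})"
    if certainty > 0.5:
        return f"accept (certainty={certainty:.2f})"
    if certainty < -0.5:
        return f"reject (certainty={certainty:.2f})"
    return "undecided: gather more evidence"

print(analyse(0.9, 0.1))  # accept
print(analyse(0.9, 0.9))  # inconsistent evidence, but no logical explosion
print(analyse(0.2, 0.3))  # undecided
```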
This book is structured in a practical, example-driven manner. The use of VHDL for constructing logic synthesisers is one of the aims of the book; the second is the application of the tools to the design process. Worked examples, questions and answers are provided, together with the dos and don'ts of good practice. An appendix on logic design and the source code are available free of charge over the Internet.
This book provides practical solutions for delay and power reduction in on-chip interconnects and buses. It gives an in-depth description of the problem of signal delay and extra power consumption, and of possible solutions for delay and glitch removal, while considering the power reduction of the total system. Coverage focuses on the use of the Schmitt trigger as an alternative to buffer insertion for delay and power reduction in VLSI interconnects. In the last section of the book, various bus coding techniques are discussed to minimize delay and power in address and data buses.
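As an illustration of the kind of bus coding technique surveyed in the last part of the book, the sketch below implements bus-invert coding, a widely known scheme that reduces switching activity (and hence dynamic power) by sending the complement of a word, plus an invert line, whenever more than half of the bus lines would otherwise toggle. It is a generic textbook technique, not necessarily one of the specific schemes the book presents.

```python
# Bus-invert coding sketch: reduce bus transitions by optionally inverting the
# transmitted word and signalling the inversion on one extra line.
def bus_invert(prev: int, value: int, width: int = 8) -> tuple[int, int]:
    """Return (encoded_value, invert_bit).  If sending 'value' would toggle
    more than half of the bus lines relative to 'prev', send its complement
    instead and assert the invert line."""
    mask = (1 << width) - 1
    toggles = bin((prev ^ value) & mask).count("1")
    if toggles > width // 2:
        return (~value) & mask, 1
    return value, 0

prev = 0b0000_0000
encoded, inv = bus_invert(prev, 0b1111_1110)
print(f"sent {encoded:08b}, invert={inv}")  # 2 transitions instead of 7
```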
This book serves a dual purpose: firstly to combine the treatment of circuits and digital electronics, and secondly, to establish a strong connection with the contemporary world of digital systems. The need for this approach arises from the observation that introducing digital electronics through a course in traditional circuit analysis is fast becoming obsolete. Our world has gone digital. Automata theory helps with the design of digital circuits such as parts of computers, telephone systems and control systems. A complete perspective is emphasized, because even the most elegant computer architecture will not function without adequate supporting circuits. The focus is on explaining the real-world implementation of complete digital systems. In doing so, the reader is prepared to immediately begin design and implementation work. This work serves as a bridge to take readers from the theoretical world to the everyday design world where solutions must be complete to be successful.
This book looks at relationships between the organisation of physical objects in space and the organisation of ideas. Historical, philosophical, psychological and architectural knowledge are united to develop an understanding of the relationship between information and its representation. Despite its potential to break the mould, digital information has relied on metaphors from a pre-digital era. In particular, architectural ideas have pervaded discussions of digital information, from the urbanisation of cyberspace in science fiction, through to the adoption of spatial visualisations in the design of graphical user interfaces. This book tackles:
* the historical importance of physical places to the organisation and expression of knowledge
* the limitations of using the physical organisation of objects as the basis for systems of categorisation and taxonomy
* the emergence of digital technologies and of new 20th-century conceptual understandings of knowledge and its organisation
* the concept of disconnecting the storage of information objects from their presentation and retrieval
* ideas surrounding 'semantic space'
* the realities of the types of user interface which now dominate modern computing.
Per Martin-Löf's work on the development of constructive type theory has been of huge significance in the fields of logic and the foundations of mathematics. It is also of broader philosophical significance, and has important applications in areas such as computing science and linguistics. This volume draws together contributions from researchers whose work builds on the theory developed by Martin-Löf over the last twenty-five years. As well as celebrating the anniversary of the birth of the subject, it covers many of the diverse fields which are now influenced by type theory. It is an invaluable record of areas of current activity, but also contains contributions from N. G. de Bruijn and William Tait, both important figures in the early development of the subject. Also published for the first time is one of Per Martin-Löf's earliest papers.
Mining Very Large Databases with Parallel Processing addresses the problem of large-scale data mining. It is an interdisciplinary text, describing advances in the integration of three computer science areas, namely 'intelligent' (machine learning-based) data mining techniques, relational databases and parallel processing. The basic idea is to use concepts and techniques of the latter two areas - particularly parallel processing - to speed up and scale up data mining algorithms. The book is divided into three parts. The first part presents a comprehensive review of intelligent data mining techniques such as rule induction, instance-based learning, neural networks and genetic algorithms. Likewise, the second part presents a comprehensive review of parallel processing and parallel databases. Each of these parts includes an overview of commercially available, state-of-the-art tools. The third part deals with the application of parallel processing to data mining. The emphasis is on finding generic, cost-effective solutions for realistic data volumes. Two parallel computational environments are discussed, the first excluding the use of a commercial-strength DBMS and the second using parallel DBMS servers. It is assumed that the reader has knowledge roughly equivalent to a first degree (BSc) in the exact sciences, so that (s)he is reasonably familiar with basic concepts of statistics and computer science. The primary audience for Mining Very Large Databases with Parallel Processing is industry data miners and practitioners in general who would like to apply intelligent data mining techniques to large amounts of data. The book will also be of interest to academic researchers and postgraduate students, particularly database researchers interested in advanced, intelligent database applications, and artificial intelligence researchers interested in industrial, real-world applications of machine learning.
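The following minimal sketch, not taken from the book, shows the basic idea it develops: use data parallelism to speed up a data-mining primitive. Here the coverage of a hypothetical candidate rule is counted on partitions of the data in parallel and the partial counts are combined.

```python
# Data-parallel evaluation of a candidate rule's coverage: partition the data,
# count matches per partition in parallel, then combine the partial counts.
from concurrent.futures import ProcessPoolExecutor

def rule_covers(record: dict) -> bool:
    # Hypothetical candidate rule: age > 40 AND income == "high".
    return record["age"] > 40 and record["income"] == "high"

def partial_coverage(partition: list[dict]) -> int:
    return sum(rule_covers(r) for r in partition)

def parallel_coverage(data: list[dict], workers: int = 4) -> int:
    chunk = (len(data) + workers - 1) // workers
    partitions = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_coverage, partitions))

if __name__ == "__main__":
    data = [{"age": a, "income": "high" if a % 3 == 0 else "low"}
            for a in range(20, 80)]
    print(parallel_coverage(data))
```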
This book provides comprehensive coverage of System-on-Chip (SoC) post-silicon validation and debug challenges and state-of-the-art solutions, with contributions from SoC designers, academic researchers and SoC verification experts. Readers will get a clear understanding of the existing debug infrastructure and how it can be effectively utilized to verify and debug SoCs.
We planned this book as a Festschrift for Smitty Stevens because we thought he might be retiring around 1974, although we knew very well that only death or deep illness would stop Smitty from doing science. Death came suddenly, unexpectedly - after a full day of skiing at Vail, Colorado on the annual trip with wife Didi to the Winter Conference on Brain Research. Smitty liked winter conferences near ski resorts and often tried to get us other psychophysicists to organize one. Every person is unique. Smitty would have said it's mainly because each of us has so many genes that two combinations just alike would be well-nigh impossible. But most of us strive in many ways to be like others, and to abide by the norms (some smaller number try even harder to be unlike other people); as a result many persons seem to lose their uniqueness, their individuality. Not Smitty. He tried neither to be like others nor to be different. He took himself as he found himself, and ascribed peculiarities, strengths, and weaknesses to his pioneering Utah forebears, in whom he took much pride. His was the true and right nonconformity. He approached each task, each problem, ready to grapple with the facts and set them into meaningful order. And if the answer he came up with was different from everyone else's, well that was too bad.
In brief summary, the following results were presented in this work:
* A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* An efficient method of estimating register requirements as a function of pipeline depth was presented.
* A technique was developed for efficiently finding bounds on register requirements as a function of pipeline depth.
* Experimental data were presented to verify these new techniques.
* Some interesting design points for register file size on a number of different architectures were discussed.
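As a simple illustration of the quantity being studied (and not the work's own algorithm), the register requirement of a given schedule can be approximated by MaxLive, the maximum number of values simultaneously live in any cycle; the lifetimes below are invented for the example.

```python
# MaxLive sketch: each scheduled value is live from its definition cycle to its
# last-use cycle (inclusive); the register requirement of the schedule is the
# maximum number of values live in any single cycle.
def max_live(lifetimes: list[tuple[int, int]]) -> int:
    if not lifetimes:
        return 0
    last_cycle = max(end for _, end in lifetimes)
    return max(sum(start <= c <= end for start, end in lifetimes)
               for c in range(last_cycle + 1))

# Hypothetical kernel schedule: (definition cycle, last-use cycle) per value.
lifetimes = [(0, 3), (1, 2), (2, 5), (4, 6)]
print(max_live(lifetimes))  # 3 values are live in cycle 2
```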
A quality-driven design and verification flow for digital systems is developed and presented in Quality-Driven SystemC Design. Two major enhancements characterize the new flow: first, dedicated verification techniques are integrated which target the different levels of abstraction; second, each verification technique is complemented by an approach to measure the achieved verification quality. The new flow distinguishes three levels of abstraction (namely system level, top level and block level) and can be incorporated into existing approaches. After a review of the preliminary concepts, the following chapters consider the three levels of modeling and verification in detail; at each level the verification quality is measured. In summary, following the new design and verification flow results in a high overall quality.
This book introduces readers to the various threats faced by today's integrated circuits (ICs) and systems during design and fabrication. The authors discuss key issues, including illegal manufacturing of ICs or "IC Overproduction", insertion of malicious circuits, referred to as "Hardware Trojans", which cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse-engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and the on-chip infrastructure needed for secure exchange of obfuscation keys, arguably the most critical element of hardware obfuscation.
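To give a flavour of gate-level obfuscation, here is a toy model of one common approach, logic locking with XOR key gates: extra key inputs are XORed into internal signals so the circuit computes its intended function only under the correct key. It illustrates the general idea, not a specific method from the book, and the circuit and key values are invented.

```python
# Toy logic-locking example: the locked netlist behaves like the original only
# when the correct key is applied, which hinders reverse engineering and
# overproduction of unlocked parts.
def original_circuit(a: int, b: int, c: int) -> int:
    return (a & b) ^ c

def locked_circuit(a: int, b: int, c: int, k0: int, k1: int) -> int:
    # Key gates inserted on two internal nets; correct key is (k0, k1) = (1, 0).
    net1 = (a & b) ^ k0 ^ 1   # behaves like (a & b) only when k0 == 1
    net2 = net1 ^ c ^ k1      # behaves like net1 ^ c only when k1 == 0
    return net2

correct_key, wrong_key = (1, 0), (0, 0)
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
print(all(locked_circuit(*i, *correct_key) == original_circuit(*i) for i in inputs))  # True
print(all(locked_circuit(*i, *wrong_key) == original_circuit(*i) for i in inputs))    # False
```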
This volume contains the papers presented at the Fifth International Workshop on Database Machines. The papers cover a wide spectrum of topics on database machines and knowledge base machines. Reports of major projects (ECRC, MCC, and ICOT) are included. Topics on DBM cover new database machine architectures based on vector processing and hypercube parallel processing, VLSI-oriented architecture, filter processors, sorting machines, concurrency control mechanisms for DBM, main memory databases, interconnection networks for DBM, and performance evaluation. In this workshop much more attention was given to knowledge base management than in the previous four workshops. Many papers discuss deductive database processing. Architectures for semantic networks, Prolog, and production systems were also proposed. We would like to express our deep thanks to all those who contributed to the success of the workshop. We would also like to express our appreciation for the valuable suggestions given to us by Prof. D. K. Hsiao, Prof. D.
This volume contains papers presented at the NATO-sponsored Advanced Research Workshop on "Software for Parallel Computation" held at the University of Calabria, Cosenza, Italy, from June 22 to June 26, 1992. The purpose of the workshop was to evaluate the current state of the art of software for parallel computation, identify the main factors inhibiting practical applications of parallel computers, and suggest possible remedies. In particular it focused on parallel software, programming tools, and practical experience of using parallel computers for solving demanding problems. Critical issues relative to the practical use of parallel computing included: portability, reusability and debugging, parallelization of sequential programs, construction of parallel algorithms, and performance of parallel programs and systems. In addition to NATO, the principal sponsor, the following organizations provided generous support for the workshop: CERFACS, France; C.I.R.A., Italy; C.N.R., Italy; University of Calabria, Italy; ALENIA, Italy; The Boeing Company, U.S.A.; CISE, Italy; ENEL - D.S.R., Italy; Alliant Computer Systems; Bull HN Sud, Italy; Convex Computer; Digital Equipment Corporation; Hewlett Packard; Meiko Scientific, U.K.; PARSYTEC Computer, Germany; TELMAT Informatique, France; Thinking Machines Corporation.
Advanced research on the description of distributed systems and on design calculi for software and hardware is presented in this volume. Distinguished researchers give an overview of the latest state of the art.
Despite the ample number of articles on parallel-vector computational algorithms published over the last 20 years, there is a lack of texts in the field customized for senior undergraduate and graduate engineering research. Parallel-Vector Equation Solvers for Finite Element Engineering Applications aims to fill this gap, detailing both the theoretical development and important implementations of equation-solution algorithms. The mathematical background necessary to understand their inception balances well with descriptions of their practical uses. Illustrated with a number of state-of-the-art FORTRAN codes developed as examples for the book, Dr. Nguyen's text is a perfect choice for instructors and researchers alike.