Fault-Tolerant Parallel Computation presents recent advances in algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. The difficulty in combining fault-tolerance and efficiency is that the two have conflicting means: fault-tolerance is achieved by introducing redundancy, while efficiency is achieved by removing redundancy. This monograph demonstrates how, in certain models of parallel computation, it is possible to combine efficiency and fault-tolerance, and shows how efficient algorithms can be developed without concern for fault-tolerance and then correctly and efficiently executed on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. The efficient algorithmic approaches to multiprocessor fault-tolerance presented here make a contribution towards bridging the gap between abstract models of parallel computation and realizable parallel architectures. The monograph presents the state of the art in algorithmic approaches to fault-tolerance in efficient parallel algorithms, synthesizing work that was presented at recent symposia and published in refereed journals by the authors and other leading researchers. It is the first text to take the reader on a grand tour of this new field, summarizing major results and identifying hard open problems. The monograph will be of interest to academic and industrial researchers and graduate students working in the areas of fault-tolerance, algorithms, and parallel computation, and may also be used as a text in a graduate course on parallel algorithmic techniques and fault-tolerance.
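The blurb does not spell out the book's algorithms; the following is a minimal, hypothetical sketch (the names and the round-based strategy are assumptions made here for illustration, not the monograph's methods) of the flavour of problem it alludes to: a set of independent tasks must all be completed even though processors may fail-stop at arbitrary points, so surviving processors redundantly pick up unfinished work.

```python
import random

# Hypothetical round-based "do-all" sketch: num_tasks unit tasks, num_procs processors,
# any processor may fail-stop mid-round. Surviving processors re-divide the remaining
# work each round, so no task is lost with a crashed processor.
def do_all(num_tasks=16, num_procs=4, fail_prob=0.3, seed=0):
    rng = random.Random(seed)
    done = [False] * num_tasks
    alive = list(range(num_procs))
    rounds = 0
    while not all(done) and alive:
        rounds += 1
        pending = [t for t in range(num_tasks) if not done[t]]
        survivors = []
        for i, p in enumerate(alive):
            share = pending[i::len(alive)]        # this processor's slice of the pending work
            if rng.random() < fail_prob:          # fail-stop: only part of the share completes
                share = share[: len(share) // 2]
            else:
                survivors.append(p)
            for t in share:
                done[t] = True
        alive = survivors
    return all(done), rounds
```

As long as at least one processor survives, every task eventually completes; the question the blurb points at is how to achieve this while keeping the total work overhead low.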
This book provides a comprehensive introduction to processing-in-memory (PIM) technology, from architectures to circuit implementations on multiple memory types, and describes how it can be a viable computer architecture in the era of AI and big data. The authors summarize the challenges of AI hardware systems, PIM constraints, and the approaches used to derive system-level requirements for a practical and feasible PIM solution. The presentation focuses on feasible PIM solutions that can be implemented and used in real systems, including architectures, circuits, and implementation cases for each major memory type (SRAM, DRAM, and ReRAM).
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination, and symbolic constant propagation. Symbolic analysis can be applied to any program transformation or optimization problem that relies on compile-time information about the properties and value ranges of program variables, which covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
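As a concrete illustration of the kind of rewrite such analyses enable (the example and its names are invented here, not taken from the book): once an induction variable is replaced by its symbolic closed form, the loop-carried dependence disappears and the iterations become independent, so the loop can be parallelized.

```python
# Hypothetical before/after pair: k is an induction variable whose value at
# iteration i is the closed form 3 * (i + 1); substituting the closed form
# removes the dependence of iteration i on iteration i - 1.

def before(a):
    k, out = 0, []
    for i in range(len(a)):
        k = k + 3                     # loop-carried update of k
        out.append(a[i] * k)
    return out

def after(a):
    # each iteration is now independent and could be executed in parallel
    return [a[i] * 3 * (i + 1) for i in range(len(a))]
```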
This second edition focuses on the thought process of digital design and implementation in the context of VLSI and system design. It covers the Verilog 2001 and Verilog 2005 RTL design styles and constructs, and optimization at the RTL and synthesis levels. The book also covers logic synthesis, low-power and multiple-clock-domain design concepts, and design performance improvement techniques. The book includes 250 design examples/illustrations and 100 exercise questions. This volume can be used as a core or supplementary text in undergraduate courses on logic design and as a text for professional and vocational coursework. In addition, it will be a hands-on professional reference and a self-study aid for hobbyists.
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
Provides a wide snapshot of building knowledge-based systems, inconsistency measures, methods for handling inconsistency, and methods for integrating knowledge bases. It also provides the mathematical background needed to solve problems of restoring consistency and of integrating probabilistic knowledge bases. The research results presented in the book can be applied in decision support systems, semantic web systems, multimedia information retrieval systems, medical imaging systems, cooperative information systems, and more.
Based on the lectures given during the Eurocourse on 'Computing with Parallel Architectures', held at the Joint Research Centre, Ispra, Italy, September 10-14, 1990.
It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik. Artificial intelligence (AI) computer programs can be very time-consuming.
For courses in logic and computer design. Logic and Computer Design Fundamentals is a thoroughly up-to-date text that makes logic design, digital system design, and computer design accessible to students of all levels. The Fifth Edition brings this widely recognised source to modern standards by ensuring that all information is relevant and contemporary. The material focuses on industry trends and successfully bridges the gap created by the much higher levels of abstraction that students in the field must work with today compared to the past. Broadly covering logic and computer design, Logic and Computer Design Fundamentals is flexibly organised, allowing instructors to tailor its use to a wide range of student audiences.
This book describes automated debugging approaches for the bugs and faults which appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach for systems at the transaction level, asserting the correct relation of transactions. The automated debug approach for design bugs finds potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at the gate level. The various debug approaches described achieve high diagnosis accuracy and reduce the debugging time, shortening the IC development cycle and increasing the productivity of designers. Describes a unified framework for debug automation used at both pre-silicon and post-silicon stages; provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level; includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs.
Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has posed a tremendous need for larger computational power that conventional scalar processors may not be able to deliver efficiently. These processors are oriented towards numeric and data manipulations. Due to the neurocomputing requirements (such as non-programming and learning) and the artificial intelligence requirements (such as symbolic manipulation and knowledge representation), a different set of constraints and demands are imposed on the computer architectures/organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence has increased in order to meet the new performance requirements. This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architectures, neurocomputing and artificial intelligence techniques. The book is organized into four subject areas that cover its two major categories: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics covered in each area are briefly introduced below.
There is no doubt that the microprocessor (µP) revolution will continue into the future and many will be required to specify and integrate microprocessors into products or systems in their own disciplines. Therefore, well-designed flexible interfaces will be required to ensure compatibility with other equipment and to extend design options. Although there are several books on microcomputers and microprocessors, only few of those devote but a small part to the important aspects of interfaces. It was with this in mind that the present book was written as a self-contained volume to be part of the more general series: Microprocessors-Based Systems Engineering. It fills an existing gap in technology, as interfaces are the last items to be seriously considered in the race of new technology, and it deals with the systematic study of microprocessor interfaces and their applications in many diversified fields. This book is aimed at engineers in industry and engineering students who need to learn how to interface microprocessors, and hence microcomputers and other related equipment, to external digital or analog devices. It is suitable for use as a textbook or for supplementary reading, either in an applied undergraduate course in electrical engineering or in the last year of three-year-curriculum technical colleges.
It has become clear in recent years from such major forums as the various international conferences on flexible manufacturing systems (FMSs) that the computer-controlled and -integrated "factory of the future" is now being considered as a commercially viable and technically achievable goal. To date, most attention has been given to the design, development, and evaluation of flexible machining systems. Now, with the essential support of increasing numbers of industrial examples, the general concepts, technical requirements, and cost-effectiveness of responsive, computer-integrated, flexible machining systems are fast becoming established knowledge. There is, of course, much still to be done in the development of modular computer hardware and software, and the scope for cost-effective developments in programming systems, workpiece handling, and quality control will ensure that continuing development will occur over the next decade. However, international attention is now increasingly turning toward the flexible computer control of the assembly process as the next logical step in progressive factory automation. It is here, at this very early stage, that Tony Owen has bravely set out to encompass the future field of flexible assembly systems (FASs) in his own distinctive, wide-ranging style.
This book comprehensively covers the state-of-the-art security applications of machine learning techniques. The first part explains the emerging solutions for anti-tamper design, IC counterfeit detection and hardware Trojan identification. It also explains the latest developments in deep-learning-based modeling attacks on physically unclonable functions and outlines the design principles of more resilient PUF architectures. The second part discusses the use of machine learning to mitigate the risks of security attacks on cyber-physical systems, with a particular focus on power plants. The third part provides an in-depth insight into the principles of malware analysis in embedded systems and describes how the use of supervised learning techniques provides an effective approach to tackling software vulnerabilities.
This book puts in focus various techniques for checking the modeling fidelity of cyber-physical systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from different communities and very different angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models, and statistics-based validation techniques.
Now in a thoroughly revised second edition, this practical guide for practitioners provides a comprehensive overview of the SoC design process. It explains end-to-end system on chip (SoC) design processes and includes updated coverage of design methodology, the design environment, EDA tool flow, design decisions, choice of design intellectual property (IP) cores, sign-off procedures, and design infrastructure requirements. The second edition provides new information on SoC trends and updated design cases. Coverage also includes critical advanced guidance on the latest UPF-based low power design flow, challenges of deep submicron technologies, and 3D design fundamentals, which will prepare readers for the challenges of working at the nanotechnology scale. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, Second Edition provides engineers who aspire to become VLSI designers with all the necessary information and details of EDA tools. It will be a valuable professional reference for those working on VLSI design and verification portfolios in complex SoC designs.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study of new techniques to cope with the aforementioned limitations. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies are presented. Then, a detailed analysis of several benchmarks demonstrates that such architectures need to attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as binary translation and trace reuse. Finally, works that combine both reconfigurable systems and dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
This book provides a comprehensive coverage of hardware security concepts, derived from the unique characteristics of emerging logic and memory devices and related architectures. The primary focus is on mapping device-specific properties, such as multi-functionality, runtime polymorphism, intrinsic entropy, nonlinearity, ease of heterogeneous integration, and tamper-resilience to the corresponding security primitives that they help realize, such as static and dynamic camouflaging, true random number generation, physically unclonable functions, secure heterogeneous and large-scale systems, and tamper-proof memories. The authors discuss several device technologies offering the desired properties (including spintronics switches, memristors, silicon nanowire transistors and ferroelectric devices) for such security primitives and schemes, while also providing a detailed case study for each of the outlined security applications. Overall, the book gives a holistic perspective of how the promising properties found in emerging devices, which are not readily afforded by traditional CMOS devices and systems, can help advance the field of hardware security.
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized by composition. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one simple instruction. This bold new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
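The blurb does not name the single instruction the book builds on; a common choice in the OISC literature, used here purely for illustration, is SUBLEQ ("subtract and branch if less than or equal to zero"), from which other operations such as addition can be synthesized by composition.

```python
# Illustrative SUBLEQ interpreter (an assumption for this sketch; the book's chosen
# instruction may differ). Each instruction occupies three cells a, b, c with semantics:
#   mem[b] -= mem[a]; if mem[b] <= 0, jump to c (a negative c halts).
def run_subleq(mem, pc=0, max_steps=10_000):
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        if mem[b] <= 0:
            if c < 0:
                return mem            # halt
            pc = c
        else:
            pc += 3
    return mem

# ADD synthesized from three SUBLEQs: Z -= A; B -= Z (i.e. B += A); Z -= Z (reset scratch).
# Layout: code in cells 0-8, data A=5 at cell 9, B=7 at cell 10, scratch Z=0 at cell 11.
prog = [9, 11, 3,   11, 10, 6,   11, 11, -1,   5, 7, 0]
assert run_subleq(prog)[10] == 12     # cell B now holds A + B = 12
```

MOV, JMP and the other usual primitives can be composed in the same way, which is the sense in which a single well-chosen instruction is computationally universal.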
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy transaction timing and data temporal constraints. Other design issues important to the performance of a RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
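For a concrete sense of what scheduling transactions against deadlines involves, here is a small sketch (not taken from the book) of one widely used real-time policy, earliest deadline first, applied to a batch of ready transactions; the tuple format and function name are assumptions for this illustration.

```python
import heapq

# Earliest-deadline-first dispatch over a batch of ready transactions.
# Each transaction is (name, deadline, exec_time); returns the execution
# order and any transactions that complete after their deadlines.
def edf_schedule(transactions):
    heap = [(deadline, name, exec_time) for name, deadline, exec_time in transactions]
    heapq.heapify(heap)
    clock, order, missed = 0, [], []
    while heap:
        deadline, name, exec_time = heapq.heappop(heap)
        clock += exec_time
        order.append(name)
        if clock > deadline:
            missed.append(name)       # deadline miss: the transaction finished too late
    return order, missed

# Example: edf_schedule([("T1", 10, 4), ("T2", 6, 3), ("T3", 15, 5)])
# -> (["T2", "T1", "T3"], [])
```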
This text offers complete information on the latest developments in the emerging technology of polymer thick film--from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors present techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes the VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architectures. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respect to several techniques. The book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but high-flexibility circuit structure to support these standards in multiple parallel modes. Moreover, solutions that overcome the limit on the speedup of parallel architectures by modifying the turbo codec are presented. Compared to traditional designs, these methods can yield up to a 33% gain in throughput with similar performance and similar cost.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes the specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.
You may like...
IoT and AI Technologies for Sustainable… - Abid Hussain, Garima Tyagi, … (Hardcover, R3,102)
Artificial Intelligence for Cognitive… - Pijush Dutta, Souvik Pal, … (Hardcover, R4,071)
Computational Intelligence Aided Systems… - Akshansh Gupta, Hanuman Verma, … (Hardcover, R4,925)
Data Science with Semantic Technologies… - Archana Patel, Narayan C Debnath (Hardcover, R5,199)
Introduction to Diagnosis of Active… - Gianfranco Lamperti, Marina Zanella, … (Hardcover, R3,386)
Multi-Criteria Decision Making Theory… - Mohamed Abdel-Basset, Ripon Kumar Chakrabortty, … (Hardcover, R4,208)
Machine Learning for Edge Computing… - Amitoj Singh, Vinay Kukreja, … (Hardcover, R2,791)
Handbook of AI-based Metaheuristics - Anand J. Kulkarni, Patrick Siarry (Hardcover, R6,357)