This book describes automated debugging approaches for the bugs and faults that appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach for systems at the transaction level, asserting the correct relations between transactions. The automated debug approach for design bugs finds potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds potentially failing speedpaths in a circuit at the gate level. The various debug approaches described achieve high diagnosis accuracy and reduce debugging time, shortening the IC development cycle and increasing designer productivity. Describes a unified framework for debug automation used at both pre-silicon and post-silicon stages; Provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level; Includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs.
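The transaction-level idea above can be made concrete with a small sketch. The toy model below is ours, not the authors' framework: it records transaction lifetimes and asserts one plausible relation, namely that a response transaction may only begin after its matching request has completed.

```python
from dataclasses import dataclass

# Toy model of transaction-level debug (illustrative, not the authors'
# framework): log each transaction's lifetime, then assert an expected
# relation between transactions. A violation localizes the bug to a pair.

@dataclass
class Txn:
    name: str
    start: int   # cycle at which the transaction begins
    end: int     # cycle at which the transaction completes

def check_request_response(req: Txn, rsp: Txn):
    """Assert the relation: a response starts only after its request ends."""
    if rsp.start < req.end:
        return f"violation: {rsp.name} started before {req.name} completed"
    return None

req = Txn("read_req", start=10, end=14)
rsp = Txn("read_rsp", start=12, end=20)
print(check_request_response(req, rsp))   # flags the overlap at cycle 12
```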
Now in a thoroughly revised second edition, this practical guide for practitioners provides a comprehensive overview of the SoC design process. It explains end-to-end system-on-chip (SoC) design processes and includes updated coverage of design methodology, the design environment, EDA tool flow, design decisions, choice of design intellectual property (IP) cores, sign-off procedures, and design infrastructure requirements. The second edition provides new information on SoC trends and updated design cases. Coverage also includes critical advanced guidance on the latest UPF-based low-power design flow, the challenges of deep-submicron technologies, and 3D design fundamentals, which will prepare readers for the challenges of working at the nanotechnology scale. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, Second Edition provides engineers who aspire to become VLSI designers with all the necessary information and details of EDA tools. It will be a valuable professional reference for those working on VLSI design and verification in complex SoC designs.
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area, throughput and performance across several techniques. The book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but highly flexible circuit structure to support these standards in multiple parallel modes. Moreover, some solutions that overcome the limitation on the speedup of parallel architectures through modifications to the turbo codec are presented. Compared to traditional designs, these methods can yield up to a 33% gain in throughput with similar performance and similar cost.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study of new techniques to cope with the aforementioned limitations. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies is presented. Then, a detailed analysis of several benchmarks demonstrates that such architectures need to attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as binary translation and trace reuse. Finally, works that combine both reconfigurable systems and dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
This book provides a comprehensive coverage of hardware security concepts, derived from the unique characteristics of emerging logic and memory devices and related architectures. The primary focus is on mapping device-specific properties, such as multi-functionality, runtime polymorphism, intrinsic entropy, nonlinearity, ease of heterogeneous integration, and tamper-resilience to the corresponding security primitives that they help realize, such as static and dynamic camouflaging, true random number generation, physically unclonable functions, secure heterogeneous and large-scale systems, and tamper-proof memories. The authors discuss several device technologies offering the desired properties (including spintronics switches, memristors, silicon nanowire transistors and ferroelectric devices) for such security primitives and schemes, while also providing a detailed case study for each of the outlined security applications. Overall, the book gives a holistic perspective of how the promising properties found in emerging devices, which are not readily afforded by traditional CMOS devices and systems, can help advance the field of hardware security.
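As a rough illustration of one primitive mentioned above, the sketch below simulates a physically unclonable function (PUF) in Python. It is not from the book: a real PUF derives its per-device secret from manufacturing variation in the underlying device physics, which we stand in for with random bytes.

```python
import hashlib
import secrets

# Simulated PUF (illustrative only): challenge -> response mapping keyed
# by per-device entropy that is never stored as a conventional secret.
# Real PUFs derive this entropy from manufacturing variation, not software.

class SimulatedPUF:
    def __init__(self):
        self._variation = secrets.token_bytes(32)  # stand-in for device physics

    def response(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._variation + challenge).digest()

# Two "devices" disagree on the same challenge, so recorded
# challenge/response pairs can later authenticate a specific device.
device_a, device_b = SimulatedPUF(), SimulatedPUF()
challenge = b"challenge-001"
assert device_a.response(challenge) != device_b.response(challenge)
```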
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized by composition. This approach is the complete opposite of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing, a novel approach in which the computer supports only one, simple instruction. This bold new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
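A minimal sketch makes the OISC idea tangible. The classic single instruction is subleq (subtract and branch if less than or equal to zero); the interpreter and the synthesized ADD below are our illustration, not code from the book.

```python
def run_subleq(mem, pc=0, max_steps=10_000):
    """Interpret subleq: at pc, read the triple (a, b, c); compute
    mem[b] -= mem[a]; jump to c if the result is <= 0, else fall through."""
    for _ in range(max_steps):
        if pc < 0 or pc + 2 >= len(mem):
            break                       # pc out of range: halt
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# A synthesized ADD: Y += X via three subleqs through a scratch cell Z.
# Layout: cells 0-11 hold code, 12 = Z (0), 13 = X (5), 14 = Y (7).
prog = [13, 12, 3,     # Z -= X        (Z becomes -X)
        12, 14, 6,     # Y -= Z        (Y becomes Y + X)
        12, 12, 9,     # Z -= Z        (restore Z to 0)
        12, 12, -1,    # jump to -1: halt
        0, 5, 7]
assert run_subleq(prog)[14] == 12      # 7 + 5
```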
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), in which transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy both transaction timing constraints and data temporal constraints. Other design issues important to the performance of an RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
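One common RTDBS policy of the kind the book surveys is earliest-deadline-first scheduling combined with checks on data freshness. The sketch below is our own simplification; the field names and the validity model are assumptions, not the book's notation.

```python
import heapq

def schedule_edf(transactions, data, now):
    """Dispatch order under EDF: earliest deadline first, skipping
    transactions with missed deadlines or stale input data."""
    heap = [(t["deadline"], i, t) for i, t in enumerate(transactions)]
    heapq.heapify(heap)
    order = []
    while heap:
        deadline, _, t = heapq.heappop(heap)
        if deadline < now:
            continue                        # deadline missed: abort
        if all(now - data[k]["written_at"] <= data[k]["validity"]
               for k in t["reads"]):
            order.append(t["name"])         # inputs still fresh: run it
    return order

data = {"price": {"written_at": 95, "validity": 10}}
txns = [{"name": "trade", "deadline": 120, "reads": ["price"]},
        {"name": "audit", "deadline": 110, "reads": ["price"]}]
print(schedule_edf(txns, data, now=100))    # ['audit', 'trade']
```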
This text offers complete information on the latest developments in the emerging technology of polymer thick film (PTF), from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes the specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.
This book unites studies in different fields related to the development of the relations between logic, law and legal reasoning. It combines historical and philosophical studies on legal reasoning in Civil and Common Law, and on the often neglected Arabic and Talmudic traditions of jurisprudence, with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms, stemming from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic and European jurisprudence. The publication discusses new insights into the interaction between logic and law, and more precisely the study of different answers to the question: what role does logic play in legal reasoning? Perspectives range from foundational studies (such as logical principles and frameworks) to applications, and include historical analyses.
This book is intended for senior undergraduate and graduate students as well as practicing engineers who are involved in the design and analysis of radio frequency (RF) circuits. Fully solved, tutorial-like examples are used to put major topics into practice and to explain the underlying principles of the main sub-circuits required to design an RF transceiver and the whole communication system. Starting with a review of principles in electromagnetic (EM) transmission and signal propagation, and moving through detailed practical analysis of RF amplifier, mixer, modulator, demodulator, and oscillator circuit topologies, as well as the basics of communication system theory, this book systematically covers the most relevant aspects in a way that is suitable for a single-semester university-level course. Readers will benefit from the author's sharp focus on radio receiver design, demonstrated through hundreds of fully solved, realistic examples, as opposed to texts that cover many aspects of electronics and electromagnetics without making the required connection to wireless communication circuit design. Offers readers a complete, self-sufficient tutorial-style textbook; Includes all relevant topics required to study and design an RF receiver in a consistent, coherent way with appropriate depth for a one-semester course; Uses hundreds of fully solved, realistic examples of radio design technology to demonstrate concepts; Explains the necessary physical/mathematical concepts and their interrelationship.
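As a taste of the fully solved receiver calculations the book centers on, the sketch below evaluates the cascade noise figure of a receiver front end using the standard Friis formula; the stage gains and noise figures are our own example values, not taken from the text.

```python
import math

# Cascade noise figure of a receiver front end via the Friis formula:
# F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
# Stage values below are illustrative only.

def db_to_lin(db):
    return 10 ** (db / 10)

def cascade_noise_figure_db(stages):
    """stages: list of (gain_dB, noise_figure_dB), antenna side first."""
    f_total = 0.0
    gain_product = 1.0
    for i, (gain_db, nf_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        f_total += f if i == 0 else (f - 1) / gain_product
        gain_product *= db_to_lin(gain_db)
    return 10 * math.log10(f_total)

# LNA (15 dB gain, 1.2 dB NF) -> mixer (-7 dB, 9 dB) -> IF amp (20 dB, 4 dB)
print(round(cascade_noise_figure_db([(15, 1.2), (-7, 9), (20, 4)]), 2))  # ~2.5
```

Note how the LNA's 15 dB of gain suppresses the noise contribution of the lossy mixer, which is exactly why receiver chains place a low-noise amplifier first.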
This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. The book is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, the amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, the integrator, the differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, the phase-locked loop (PLL), etc. The book contains two parts: Section A and Section B. Section A includes theory, methodology, circuit design and derivations. Section B explains the design and study of experiments for laboratory practice. Laboratory experiments enable students to perform practical activities that demonstrate applications of the operational amplifier. A simplified description of the circuits, the working principles and a practical approach towards understanding the concepts are unique features of this book. Simple methods, easy derivation steps and lucid presentation are other traits of this book for readers who have no background in electronics. This book is student-centric towards the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers enrolled in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
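For instance, the inverting amplifier, one of the first circuits treated in such a course, has a closed-loop gain fixed by two resistors. The snippet below is a worked example of that standard result; the component values are illustrative, not taken from the book.

```python
# Ideal inverting op-amp stage: with a virtual ground at the inverting
# input, the closed-loop gain is -Rf/Rin, independent of the op-amp's
# open-loop gain. Component values here are illustrative.

def inverting_gain(rf_ohms, rin_ohms):
    """Closed-loop voltage gain of an ideal inverting amplifier."""
    return -rf_ohms / rin_ohms

# A 100 kOhm feedback resistor over a 10 kOhm input resistor gives -10:
assert inverting_gain(100e3, 10e3) == -10.0
print(inverting_gain(100e3, 10e3) * 0.25)   # 0.25 V in -> -2.5 V out
```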
Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed-memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine-grain, high-bandwidth interprocessor communication capabilities of a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed-memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. The book presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature, because a global balanced state is reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in allowing one to control the balancing quality, effective at preserving communication locality, and easily scalable in parallel computers with a direct communication network. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
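A minimal sketch of one nearest-neighbor scheme of the kind the book analyzes, first-order diffusion, is shown below. Each processor repeatedly exchanges a fixed fraction of its load difference with its direct neighbors; the ring topology and the value of alpha are our assumptions for the example.

```python
# First-order diffusion load balancing (illustrative sketch): each
# processor i moves a fraction alpha of the load difference to/from each
# direct neighbor per step. Stability requires alpha * degree <= 1.

def diffuse(load, neighbors, alpha=0.25, steps=50):
    for _ in range(steps):
        new = load[:]
        for i, ns in enumerate(neighbors):
            for j in ns:
                new[i] += alpha * (load[j] - load[i])
        load = new
    return load

# Six processors in a ring, all work initially on processor 0; the
# iteration converges toward the balanced state [10.0] * 6.
ring = [((i - 1) % 6, (i + 1) % 6) for i in range(6)]
print(diffuse([60, 0, 0, 0, 0, 0], ring))
```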
This textbook teaches students techniques for the design of advanced digital systems using Field Programmable Gate Arrays (FPGAs). The authors focus on communication between FPGAs and peripheral devices (such as EEPROMs, analog-to-digital converters, sensors, digital-to-analog converters, displays, etc.), and in particular on state machines and timed state machines for the implementation of serial communication protocols, such as UART, SPI and I2C, and display protocols, such as VGA and HDMI. VHDL is used as the programming language, and all topics are covered in a structured, step-by-step manner.
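The book's state machines are written in VHDL; as a language-neutral illustration of the same idea, the Python sketch below models a UART receive state machine sampled once per bit period (1 start bit, 8 data bits LSB first, 1 stop bit). The class and state names are ours, not the book's.

```python
from enum import Enum, auto

class RxState(Enum):
    IDLE = auto()
    DATA = auto()
    STOP = auto()

class UartRx:
    """UART receiver sampled once per bit period; returns a byte when a
    frame (start bit, 8 data bits LSB first, stop bit) completes."""
    def __init__(self):
        self.state = RxState.IDLE
        self.nbits = 0
        self.shift = 0

    def tick(self, line):
        if self.state is RxState.IDLE:
            if line == 0:                            # start bit detected
                self.state, self.nbits, self.shift = RxState.DATA, 0, 0
        elif self.state is RxState.DATA:
            self.shift |= (line & 1) << self.nbits   # LSB first
            self.nbits += 1
            if self.nbits == 8:
                self.state = RxState.STOP
        elif self.state is RxState.STOP:
            self.state = RxState.IDLE
            if line == 1:                            # valid stop bit
                return self.shift
        return None

# Feed the frame for 0x55: start bit, eight data bits, stop bit.
rx = UartRx()
frame = [0] + [(0x55 >> i) & 1 for i in range(8)] + [1]
out = [rx.tick(b) for b in frame]
assert out[-1] == 0x55
```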
This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation-induced clock jitter and pulses, and single-event (SE) coupling-induced effects. In addition to discussing various radiation-hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft-error mitigation techniques such as the Dynamic Threshold Technique and soft-error filtering based on a transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low-power, energy-efficient designs and the impact of leakage-power optimizations on soft-error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption and improve soft-error reliability at the same time.
This book covers the essential elements of parallel processing and parallel algorithms. It is unique in that it is a self-contained book covering the fundamentals of parallel processing, from computer architecture to parallel programming and parallel algorithms. It is designed to function as a text for an undergraduate course in parallel processing, but it also works well as a comprehensive reference for professionals interested in all phases of parallel processing and parallel programming.
Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design, rather than making incremental changes to an existing design and then ignoring problem areas. The general-purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming some amalgam of the commonplace control-flow (von Neumann) processor model with the more exotic data-flow approach. This new processor model offers many exciting possibilities, and there is much research to be performed to make this technology widespread. Multithreaded processors utilize the simple and efficient sequential execution technique of control flow, together with data-flow-like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g. from large and distributed memories, thereby tolerating long data transmission latencies; a sketch of this idea follows below. This makes multiprocessing far more efficient, because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity. Consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs that resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
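The reschedule-rather-than-block idea can be illustrated with a toy simulator. This is our sketch, not the book's design: when a thread issues a long-latency load, the scheduler simply switches to another ready thread, so the memory latency is overlapped with useful work.

```python
from collections import deque

MEM_LATENCY = 4   # cycles a remote load takes (illustrative value)

def simulate(threads):
    """threads: name -> list of ops ('compute' or 'load'). Returns the
    cycle on which all threads have finished."""
    ready = deque(threads)              # runnable thread names
    waiting = {}                        # name -> cycle its load completes
    cycle = 0
    while ready or waiting:
        cycle += 1
        for name in [n for n, t in waiting.items() if t <= cycle]:
            del waiting[name]           # load data arrived: thread wakes
            if threads[name]:
                ready.append(name)
        if not ready:
            continue                    # every thread waiting: pipeline idles
        name = ready.popleft()
        op = threads[name].pop(0)
        if op == "load":
            waiting[name] = cycle + MEM_LATENCY   # reschedule, don't block
        elif threads[name]:
            ready.append(name)
    return cycle

# t1's compute ops execute while t0's load is in flight, hiding latency.
print(simulate({"t0": ["load", "compute"], "t1": ["compute", "compute"]}))
```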
Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm, which has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management and finance; and many other sectors. Nowadays, the primary use of blockchain technology is in information systems, to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks and double-spending attacks. The popularity and rapid development of blockchain bring many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader will be provided with the most up-to-date knowledge of blockchain in mainstream areas of security and privacy in the decentralized domain, which is timely and essential (distributed and P2P applications are increasing day by day, and attackers adopt new mechanisms to threaten the security and privacy of users in those environments). This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
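The tamper-evidence property at the heart of these security discussions can be shown in a few lines. The toy hash chain below is our illustration, not material from the book: each block commits to its predecessor's hash, so rewriting history (e.g., to double-spend) invalidates every later block.

```python
import hashlib
import json

# Toy hash chain (illustrative): each block stores the hash of the
# previous block, making any retroactive edit detectable on verification.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["alice->bob:5"])
append_block(chain, ["bob->carol:2"])
assert verify(chain)
chain[0]["txs"] = ["alice->mallory:5"]   # attempted history rewrite
assert not verify(chain)                 # every later block now mismatches
```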
Terraform has become a key player in the DevOps world for defining, launching, and managing infrastructure as code (IaC) across a variety of cloud and virtualization platforms, including AWS, Google Cloud, Azure, and more. This hands-on third edition, expanded and thoroughly updated for version 1.0 and beyond, shows you the fastest way to get up and running with Terraform. Gruntwork cofounder Yevgeniy (Jim) Brikman walks you through code examples that demonstrate Terraform's simple, declarative programming language for deploying and managing infrastructure with a few commands. Veteran sysadmins, DevOps engineers, and novice developers will quickly go from Terraform basics to running a full stack that can support a massive amount of traffic and a large team of developers. Compare Terraform with Chef, Puppet, Ansible, CloudFormation, Docker, and Packer; Deploy servers, load balancers, and databases; Create reusable infrastructure with Terraform modules; Test your Terraform modules with static analysis, unit tests, and integration tests; Configure CI/CD pipelines for both your apps and infrastructure code; Use advanced Terraform syntax for loops, conditionals, and zero-downtime deployment. New to the third edition: Get up to speed on Terraform 0.13 to 1.0 and beyond; Manage secrets (passwords, API keys) with Terraform; Work with multiple clouds and providers (including Kubernetes!).
The study of the connections between mathematical automata and formal logic is as old as theoretical computer science itself. In the founding paper of the subject, published in 1936, Turing showed how to describe the behavior of a universal computing machine with a formula of first-order predicate logic, and thereby concluded that there is no algorithm for deciding the validity of sentences in this logic. Research on the logical aspects of the theory of finite-state automata, which is the subject of this book, began in the early 1960s with the work of J. Richard Büchi on monadic second-order logic. Büchi's investigations were extended in several directions. One of these, explored by McNaughton and Papert in their 1971 monograph Counter-Free Automata, was the characterization of automata that admit first-order behavioral descriptions, in terms of the semigroup-theoretic approach to automata that had recently been developed in the work of Krohn and Rhodes and of Schützenberger. In the more than twenty years that have passed since the appearance of McNaughton and Papert's book, the underlying semigroup theory has grown enormously, permitting a considerable extension of their results. During the same period, however, fundamental investigations in the theory of finite automata by and large fell out of fashion in the theoretical computer science community, which moved to other concerns.
This book presents architectural solutions for wireless networks and their variations. It deals with the modeling, analysis, design and enhancement of different architectural parts of wireless networks. The main aim of the book is to enhance the applications of wireless networks by reducing and controlling their architectural issues. The book discusses the efficiency and robustness of wireless networks as a platform for communication and data transmission, and also discusses challenges and security issues such as limited hardware resources, unreliable communication, the dynamic topology of some wireless networks, vulnerability and unsecure environments. This book is edited for users, academicians and researchers of wireless networks. Broadly, topics include modeling of security enhancements, optimization models for network lifetime, modeling of aggregation systems and analysis of troubleshooting techniques.
With the end of Dennard scaling and Moore's law, IC chips, especially large-scale ones, now face more reliability challenges, and reliability has become one of the mainstay merits of VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis and self-repair, or "3S" for short. We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC) and deep learning accelerators, and present prototypes to demonstrate how 3S responds to in-field silicon degradation and recovery under various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield. This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges for large-scale VLSI designs.
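As a highly simplified illustration of the 3S loop, the sketch below tests redundant functional units against a known vector (self-test), identifies the disagreeing unit (self-diagnosis), and remaps work to the survivors (self-repair). Real 3S designs implement this in hardware; all names and the fault model here are ours.

```python
# Software caricature of the 3S loop: self-test, self-diagnosis,
# self-repair over redundant functional units. Illustrative only.

def self_test(units, vector, expected):
    """Return indices of units whose output disagrees with the golden
    result for a known test vector (test + diagnosis in one pass)."""
    return [i for i, u in enumerate(units) if u(vector) != expected]

def self_repair(units, faulty):
    """Repair by remapping: drop faulty units from the dispatch pool."""
    return [u for i, u in enumerate(units) if i not in faulty]

adder_ok = lambda ab: ab[0] + ab[1]
adder_stuck = lambda ab: (ab[0] + ab[1]) | 1      # stuck-at-1 fault model

units = [adder_ok, adder_stuck, adder_ok]
faulty = self_test(units, (2, 2), expected=4)     # unit 1 returns 5
units = self_repair(units, faulty)
assert len(units) == 2 and all(u((2, 2)) == 4 for u in units)
```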
You may like...
Grammatical and Syntactical Approaches… | Juhyun Lee, Michael J. Ostwald | Hardcover | R5,315 (Discovery Miles 53 150)
Constraint Decision-Making Systems in… | Santosh Kumar Das, Nilanjan Dey | Hardcover | R6,687 (Discovery Miles 66 870)
Clean Architecture - A Craftsman's Guide… | Robert Martin | Paperback
Energy-Efficient Communication… | Robert Fasthuber, Francky Catthoor, … | Hardcover | R4,705 (Discovery Miles 47 050)