This book puts in focus various techniques for checking the modeling fidelity of Cyber Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from different communities and very different angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models, and statistics-based validation techniques.
Analog Integrated Circuits deals with the design and analysis of modern analog circuits using integrated bipolar and field-effect transistor technologies. This book is suitable as a text for a one-semester course for senior-level or first-year graduate students, as well as a reference work for practicing engineers. Advanced students will also find the text useful in that some of the material presented here is not covered in many first courses on analog circuits. Included in this is extensive coverage of feedback amplifiers, current-mode circuits, and translinear circuits. Suitable background would be fundamental courses in electronic circuits and semiconductor devices. This book contains numerous examples, many of which include commercial analog circuits. End-of-chapter problems are given, many illustrating practical circuits. Chapter 1 discusses the models commonly used to represent devices in modern analog integrated circuits. Presented are models for bipolar junction transistors, junction diodes, junction field-effect transistors, and metal-oxide-semiconductor field-effect transistors. Both large-signal and small-signal models are developed, as well as their implementation in the SPICE circuit simulation program. The basic building blocks used in a large variety of analog circuits are analyzed in Chapter 2; these consist of current sources, dc level-shift stages, single-transistor gain stages, two-transistor gain stages, and output stages. Both bipolar and field-effect transistor implementations are presented. Chapter 3 deals with operational amplifier circuits. The four basic op-amp circuits are analyzed: (1) voltage-feedback amplifiers, (2) current-feedback amplifiers, (3) current-differencing amplifiers, and (4) transconductance amplifiers. Selected applications are also presented.
This book describes the design and implementation of energy-efficient smart (digital output) temperature sensors in CMOS technology. To accomplish this, a new readout topology, the zoom-ADC, is presented. It combines a coarse SAR ADC with a fine Sigma-Delta (SD) ADC: the digital result obtained from the coarse ADC is used to set the reference levels of the SD ADC, thereby zooming its full-scale range into a small region around the input signal. This technique considerably reduces the SD ADC's full-scale range, notably reducing the number of clock cycles needed for a given resolution and relaxing the required DC gain and swing of the loop filter. Both conversion time and power efficiency are improved, resulting in a substantial improvement in energy efficiency. Two BJT-based sensor prototypes based on 1st-order and 2nd-order zoom-ADCs are presented. They both achieve inaccuracies of less than ±0.2°C over the military temperature range (-55°C to 125°C). A prototype capable of sensing temperatures up to 200°C is also presented. As an alternative to BJTs, sensors based on dynamic-threshold MOSTs (DTMOSTs) are also presented. It is shown that DTMOSTs are capable of achieving low inaccuracy (±0.4°C over the military temperature range) as well as sub-1V operation, making them well suited for use in modern CMOS processes.
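As a rough illustration of the zoom principle described above (an idealized numerical sketch, not the book's circuits; the segment count, cycle count, and first-order model are all assumptions made for illustration), a coarse conversion can pick a segment and a fine incremental delta-sigma conversion can then resolve only the residue between that segment's edges:

```python
# Illustrative sketch of a two-step "zoom" conversion: a coarse conversion
# picks an integer segment, and an idealized 1st-order incremental delta-sigma
# (no circuit non-idealities) resolves only the residue inside that segment.

def coarse_saradc(x, n_segments):
    """Return segment index k such that k/n <= x < (k+1)/n, for x in [0, 1)."""
    return min(int(x * n_segments), n_segments - 1)

def fine_incremental_dsm(x, lo, hi, n_cycles):
    """Idealized 1st-order incremental delta-sigma over the zoomed range [lo, hi].
    The modulator references are set to the segment edges, so only a small
    residue has to be resolved, which is why few cycles suffice."""
    integ, ones = 0.0, 0
    for _ in range(n_cycles):
        ref = hi if integ >= 0 else lo     # 1-bit feedback DAC decision
        if ref == hi:
            ones += 1
        integ += x - ref                   # loop filter: single integrator
    return lo + (hi - lo) * ones / n_cycles   # decimated output

def zoom_adc(x, n_segments=16, n_cycles=64):
    k = coarse_saradc(x, n_segments)
    lo, hi = k / n_segments, (k + 1) / n_segments
    return fine_incremental_dsm(x, lo, hi, n_cycles)

print(zoom_adc(0.3137))   # ~0.314, close to the input, from a short fine conversion
```

Because the fine converter spans only one coarse segment rather than the full range, far fewer cycles are needed for a given overall resolution, which is the conversion-time and energy saving the blurb attributes to the zoom-ADC.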
This book describes automated debugging approaches for the bugs and faults that appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach for systems at the transaction level, asserting the correct relation of transactions. The automated debug approach for design bugs finds the potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at the gate level. The various debug approaches described achieve high diagnosis accuracy and reduce debugging time, shortening the IC development cycle and increasing the productivity of designers. Describes a unified framework for debug automation used at both pre-silicon and post-silicon stages; provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level; includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs.
Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. Early in his career, he proposed the single-source shortest path algorithm, now commonly referred to as Dijkstra's algorithm. He wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and designed and implemented with his colleagues the influential THE operating system. Dijkstra invented the field of concurrent algorithms, with concepts such as mutual exclusion, deadlock detection, and synchronization. A prolific writer and forceful proponent of the concept of structured programming, he convincingly argued against the use of the Go To statement. In 1972 he was awarded the ACM Turing Award for "fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; for illuminating perception of problems at the foundations of program design." Subsequently he invented the concept of self-stabilization relevant to fault-tolerant computing. He also devised an elegant language for nondeterministic programming and its weakest precondition semantics, featured in his influential 1976 book A Discipline of Programming in which he advocated the development of programs in concert with their correctness proofs. In the later stages of his life, he devoted much attention to the development and presentation of mathematical proofs, providing further support to his long-held view that the programming process should be viewed as a mathematical activity. In this unique new book, 31 computer scientists, including five recipients of the Turing Award, present and discuss Dijkstra's numerous contributions to computing science and assess their impact. Several authors knew Dijkstra as a friend, teacher, lecturer, or colleague. Their biographical essays and tributes provide a fascinating multi-author picture of Dijkstra, from the early days of his career up to the end of his life.
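For readers who have not seen it, the single-source shortest-path algorithm mentioned above can be sketched in a few lines. This is a generic textbook rendering, using a binary-heap priority queue (a later refinement, not part of Dijkstra's original 1959 formulation):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.
    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source (illustrative sketch)."""
    dist = {source: 0}
    pq = [(0, source)]                        # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))   # {'a': 0, 'b': 2, 'c': 3}
```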
Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has posed a tremendous need for a larger computational power that conventional scalar processors may not be able to deliver efficiently. These processors are oriented towards numeric and data manipulations. Due to the neurocomputing requirements (such as non-programming and learning) and the artificial intelligence requirements (such as symbolic manipulation and knowledge representation), a different set of constraints and demands are imposed on the computer architectures/organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence have increased in order to meet the new performance requirements. This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architectures, neurocomputing and artificial intelligence techniques. This book has been organized into four subject areas that cover the two major categories of this book; the areas are: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics that are covered in each area are briefly introduced below.
There is no doubt that the microprocessor (µP) revolution will continue into the future and many will be required to specify and integrate microprocessors into products or systems in their own disciplines. Therefore, well-designed flexible interfaces will be required to ensure compatibility with other equipment and to extend design options. Although there are several books on microcomputers and microprocessors, only a few of those devote more than a small part to the important aspects of interfaces. It was with this in mind that the present book was written, as a self-contained volume to be part of the more general series Microprocessor-Based Systems Engineering. It fills an existing gap in technology, as interfaces are the last items to be seriously considered in the race of new technology, and it deals with the systematic study of microprocessor interfaces and their applications in many diversified fields. This book is aimed at engineers in industry and engineering students who need to learn how to interface microprocessors, and hence microcomputers and other related equipment, to external digital or analog devices. It is suitable for use as a textbook or for supplementary reading, either in an applied undergraduate course in electrical engineering or in the last year of three-year-curriculum technical colleges.
It has become clear in recent years from such major forums as the various international conferences on flexible manufacturing systems (FMSs) that the computer-controlled and -integrated "factory of the future" is now being considered as a commercially viable and technically achievable goal. To date, most attention has been given to the design, development, and evaluation of flexible machining systems. Now, with the essential support of increasing numbers of industrial examples, the general concepts, technical requirements, and cost-effectiveness of responsive, computer-integrated, flexible machining systems are fast becoming established knowledge. There is, of course, much still to be done in the development of modular computer hardware and software, and the scope for cost-effective developments in programming systems, workpiece handling, and quality control will ensure that continuing development will occur over the next decade. However, international attention is now increasingly turning toward the flexible computer control of the assembly process as the next logical step in progressive factory automation. It is here at this very early stage that Tony Owen has bravely set out to encompass the future field of flexible assembly systems (FASs) in his own distinctive, wide-ranging style.
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of the VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architectures. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respect to several techniques. This book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but high-flexibility circuit structure to support these standards in multiple parallel modes. Moreover, some solutions that overcome the limitation on the speedup of parallel architectures through modifications to the turbo codec are presented. Compared to traditional designs, these methods can lead to up to a 33% gain in throughput with similar performance and similar cost.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study of new techniques to cope with the aforementioned limitations. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies are shown. Then, a detailed analysis of several benchmarks demonstrates that such architectures need to attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as Binary Translation and Trace Reuse. Finally, works that combine both reconfigurable systems and dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy transaction timing and data temporal constraints. Other design issues important to the performance of a RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
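To make the deadline-driven flavor of such scheduling concrete, here is a generic, non-preemptive earliest-deadline-first (EDF) sketch. EDF is one commonly used real-time policy and is shown purely as an illustration, not as one of the specific algorithms surveyed in the book:

```python
# Generic illustration of deadline-driven transaction scheduling with a
# non-preemptive earliest-deadline-first (EDF) ready queue.
import heapq

def schedule_edf(transactions, now=0):
    """transactions: list of (name, release_time, exec_time, deadline).
    Runs each transaction to completion in EDF order and reports misses."""
    pending = sorted(transactions, key=lambda t: t[1])   # by release time
    ready, log, i = [], [], 0
    while i < len(pending) or ready:
        # admit every transaction released by 'now' into the ready queue
        while i < len(pending) and pending[i][1] <= now:
            name, _, exec_time, deadline = pending[i]
            heapq.heappush(ready, (deadline, name, exec_time))
            i += 1
        if not ready:                      # idle until the next release
            now = pending[i][1]
            continue
        deadline, name, exec_time = heapq.heappop(ready)
        now += exec_time                   # run to completion
        log.append((name, now, "OK" if now <= deadline else "MISSED"))
    return log

txns = [("T1", 0, 3, 10), ("T2", 1, 2, 6), ("T3", 2, 2, 20)]
print(schedule_edf(txns))   # [('T1', 3, 'OK'), ('T2', 5, 'OK'), ('T3', 7, 'OK')]
```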
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized by composition. This approach is completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
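A common way to make the one-instruction idea concrete is the subleq machine (subtract and branch if the result is less than or equal to zero). Whether this is the particular instruction the book adopts is not stated here, so the interpreter below is an illustrative assumption rather than the book's machine:

```python
# Illustrative OISC sketch: a tiny interpreter for the "subleq" one-instruction
# machine. Every instruction is a triple (a, b, c): mem[b] -= mem[a]; if the
# result is <= 0, jump to c, otherwise fall through. A negative target halts.

def run_subleq(mem, pc=0, max_steps=10_000):
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        if pc < 0:
            return mem
    raise RuntimeError("step limit exceeded")

# Example program: copy mem[13] into mem[12] (MOV is synthesized from two
# subtractions via a scratch cell), then halt.
prog = [
    12, 12, 3,    # mem[12] -= mem[12]  -> dest = 0, branch to next instruction
    13, 14, 6,    # mem[14] -= mem[13]  -> scratch = -src
    14, 12, 9,    # mem[12] -= mem[14]  -> dest = src; continues at 9 either way
    15, 15, -1,   # 0 - 0 <= 0          -> branch to -1: halt
]
mem = prog + [0, 7, 0, 0]   # data: dest@12, src@13, scratch@14, zero@15
print(run_subleq(mem)[12])  # 7
```

The example shows the composition the blurb describes: higher-level operations such as clear, negate, and move all fall out of repeated applications of the single subtract-and-branch instruction.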
This text offers complete information on the latest developments in the emerging technology of polymer thick film--from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
Highlights developments, discoveries, and practical and advanced experiences related to responsive distributed computing and how it can support the deployment of trajectory-based applications in intelligent systems. Presents metamodeling with new trajectory patterns, which are very useful for intelligent transportation systems. Examines the processing of raw trajectories to develop other types of trajectories, such as semantic, activity-type, and space-time path trajectories. Discusses Complex Event Processing (CEP), Internet of Things (IoT), Internet of Vehicles (IoV), V2X communication, Big Data Analytics, distributed processing frameworks, and Cloud Computing. Presents a number of case studies to demonstrate smart trajectories related to spatio-temporal events such as traffic congestion, viral contamination, and pedestrian accidents.
This book presents the cellular wireless network standard NB-IoT (Narrow Band-Internet of Things), which addresses many key requirements of the IoT. NB-IoT is a topic that is inspiring the industry to create new business cases and associated products. The author first introduces the technology and typical IoT use cases. He then explains NB-IoT extended network coverage and outstanding power saving features which are enabling the design of IoT devices (e.g. sensors) to work everywhere and for more than 10 years, in a maintenance-free way. The book explains to industrial users how to utilize NB-IoT features for their own IoT projects. Other system ingredients (e.g. IoT cloud services) and embedded security aspects are covered as well. The author takes an in-depth look at NB-IoT from an application engineering point of view, focusing on IoT device design. The target audience is technical-minded IoT project owners and system design engineers who are planning to develop an IoT application.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes the specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.
This book provides a comprehensive introduction to embedded flash memory, describing the history, current status, and future projections for technology, circuits, and systems applications. The authors describe current mainstream embedded flash technologies, including floating-gate 1Tr, floating-gate with split-gate (1.5Tr), and 1Tr/1.5Tr SONOS flash technologies, and their successful use in various applications. Comparisons of these embedded flash technologies and future projections are also provided. The authors demonstrate a variety of embedded applications for automotive, smart-IC card, and low-power products, representing the leading-edge technology developments for eFlash. The discussion also includes insights into the future prospects of application-driven non-volatile memory technology in the era of smart advanced automotive systems, such as ADAS (Advanced Driver Assistance Systems) and IoE (Internet of Everything). Trials on technology convergence and future prospects of embedded non-volatile memory in the new memory hierarchy are also described. Introduces the history of embedded flash memory technology for microcontroller products and how embedded flash innovations developed; includes comprehensive and detailed descriptions of current mainstream embedded flash memory technologies, sub-system designs and applications; explains why embedded flash memory requirements are different from those of stand-alone flash memory and how to achieve specific goals with technology development and circuit designs; describes a mature and stable floating-gate 1Tr cell technology imported from stand-alone flash memory products, then introduces embedded-specific split-gate memory cell technologies based on the floating-gate storage structure and charge-trapping SONOS technology, along with their eFlash sub-system designs; and describes automotive and smart-IC card application requirements and achievements in advanced eFlash beyond the 40nm node.
Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm, which has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management and finance; and many other sectors. Nowadays, the primary uses of blockchain technology are in information systems, to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks and double-spending attacks. The popularity and rapid development of blockchain bring many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader will be provided with the most up-to-date knowledge of blockchain in mainstream areas of security and privacy in the decentralized domain, which is timely and essential, since distributed and P2P applications are increasing day by day and attackers adopt new mechanisms to threaten the security and privacy of users in those environments. This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed-memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine-grain and high-bandwidth interprocessor communication capabilities in a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods in which every processor at every step is restricted to balancing its workload with its direct neighbors only. Nearest-neighbor methods are iterative in nature because a global balanced state can be reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in terms of allowing one to control the balancing quality, effective for preserving communication locality, and can be easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
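The nearest-neighbor idea can be illustrated with a simple diffusion-style scheme on a ring of processors, where every step exchanges a fraction of the load difference with direct neighbors only. This generic sketch, including the ring topology and the exchange fraction, is an assumption for illustration and not one of the specific methods analyzed in the book:

```python
# Illustrative nearest-neighbor (diffusion) load balancing on a ring of
# processors: each synchronous step, every processor exchanges a fixed
# fraction of its load difference with each direct neighbor only. Repeated
# local steps drive all loads toward the global average.

def diffusion_step(load, alpha=0.25):
    """One synchronous balancing step on a ring; alpha is the exchange fraction."""
    n = len(load)
    new = list(load)
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):   # direct neighbors only
            new[i] += alpha * (load[j] - load[i])
    return new

load = [100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for _ in range(60):
    load = diffusion_step(load)
print([round(x, 2) for x in load])   # every entry approaches the average, 12.5
```

The scheme converges because each step is a local averaging that conserves the total load; no processor ever needs more than its neighbors' load values, which is exactly the locality property the blurb emphasizes.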
Written with graduate and advanced undergraduate students in mind, this textbook introduces computational logic from the foundations of first-order logic to state-of-the-art decision procedures for arithmetic, data structures, and combination theories. The textbook also presents a logical approach to engineering correct software. Verification exercises are given to develop the reader's facility in specifying and verifying software using logic. The treatment of verification concludes with an introduction to the static analysis of software, an important component of modern verification systems. The final chapter outlines courses of further study.
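As a small taste of what a decision procedure for arithmetic does in practice, the sketch below checks a verification condition over linear integer arithmetic using the Z3 SMT solver's Python bindings. Z3 is chosen here purely as a convenient stand-in and is not necessarily the toolchain used in the textbook:

```python
# Small illustration of a decision procedure for linear integer arithmetic,
# using the Z3 SMT solver's Python bindings (pip install z3-solver).
from z3 import Ints, Implies, And, Not, Solver, unsat

x, y = Ints("x y")
# Verification condition: if x >= 0 and y == x + 1, then y > 0.
vc = Implies(And(x >= 0, y == x + 1), y > 0)

s = Solver()
s.add(Not(vc))                  # ask the solver for a counterexample to the VC
result = s.check()
print("valid" if result == unsat else f"invalid, counterexample: {s.model()}")
```

Proving the negation unsatisfiable is the standard way such procedures establish validity: if no assignment violates the condition, the verification condition holds for all integers.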
This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation-induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and soft error filtering based on a transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low-power, energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption and improve soft error reliability at the same time.