This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. It is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, the amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, the integrator, the differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, the phase-locked loop (PLL), and more. The book contains two parts: Sections A and B. Section A covers theory, methodology, circuit design and derivations. Section B explains the design and study of experiments for laboratory practice. Laboratory experiments enable students to perform practical activities that demonstrate applications of the operational amplifier. A simplified description of the circuits, their working principles and a practical approach to understanding the concepts is a unique feature of this book. Simple methods, easy derivation steps and a lucid presentation make the book accessible to readers who have no background in electronics. The book is student-centric, focusing on the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
With the end of Dennard scaling and Moore's law, IC chips, especially large-scale ones, face growing reliability challenges, and reliability has become one of the key merits of VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis and self-repair, or "3S" for short. We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC) and deep learning accelerators, and present prototypes to demonstrate how 3S responds to in-field silicon degradation and recovers from various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield. This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges of large-scale VLSI designs.
This book focuses on various techniques for checking the modeling fidelity of Cyber-Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from different communities and very different angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models and statistics-based validation techniques.
Analog Integrated Circuits deals with the design and analysis of modern analog circuits using integrated bipolar and field-effect transistor technologies. This book is suitable as a text for a one-semester course for senior-level or first-year graduate students, as well as a reference work for practicing engineers. Advanced students will also find the text useful in that some of the material presented here is not covered in many first courses on analog circuits. Included is extensive coverage of feedback amplifiers, current-mode circuits, and translinear circuits. Suitable background would be fundamental courses in electronic circuits and semiconductor devices. This book contains numerous examples, many of which include commercial analog circuits. End-of-chapter problems are given, many illustrating practical circuits. Chapter 1 discusses the models commonly used to represent devices in modern analog integrated circuits. Presented are models for bipolar junction transistors, junction diodes, junction field-effect transistors, and metal-oxide-semiconductor field-effect transistors. Both large-signal and small-signal models are developed, as well as their implementation in the SPICE circuit simulation program. The basic building blocks used in a large variety of analog circuits are analyzed in Chapter 2; these consist of current sources, dc level-shift stages, single-transistor gain stages, two-transistor gain stages, and output stages. Both bipolar and field-effect transistor implementations are presented. Chapter 3 deals with operational amplifier circuits. The four basic op-amp circuits are analyzed: (1) voltage-feedback amplifiers, (2) current-feedback amplifiers, (3) current-differencing amplifiers, and (4) transconductance amplifiers. Selected applications are also presented.
This book describes the design and implementation of energy-efficient smart (digital-output) temperature sensors in CMOS technology. To accomplish this, a new readout topology, the zoom-ADC, is presented. It combines a coarse SAR ADC with a fine Sigma-Delta (SD) ADC. The digital result obtained from the coarse ADC is used to set the reference levels of the SD ADC, thereby zooming its full-scale range into a small region around the input signal. This technique considerably reduces the SD ADC's full-scale range, which cuts the number of clock cycles needed for a given resolution and relaxes the DC-gain and swing requirements of the loop filter. Both conversion time and power efficiency improve, resulting in a substantial improvement in energy efficiency. Two BJT-based sensor prototypes based on 1st-order and 2nd-order zoom-ADCs are presented. They both achieve inaccuracies of less than ±0.2 °C over the military temperature range (-55 °C to 125 °C). A prototype capable of sensing temperatures up to 200 °C is also presented. As an alternative to BJTs, sensors based on dynamic-threshold MOSTs (DTMOSTs) are also presented. It is shown that DTMOSTs are capable of achieving low inaccuracy (±0.4 °C over the military temperature range) as well as sub-1 V operation, making them well suited for use in modern CMOS processes.
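As a rough illustration of the zooming principle described above, the following Python sketch (a software analogy with made-up parameters, not the authors' circuit) performs a two-step conversion: a coarse quantizer selects a sub-range of the full scale, and a fine quantizer then resolves the input only within that sub-range, mimicking how the coarse SAR result sets the SD-ADC's reference levels.

```python
def zoom_adc(vin, vref=1.0, coarse_bits=4, fine_bits=10):
    """Toy two-step 'zoom' conversion: a coarse quantizer picks the
    sub-range, a fine quantizer resolves the residue within it.
    Illustrative only; parameters and scaling are assumptions."""
    # Coarse step: SAR-like quantization selects the sub-range index.
    n_coarse = 2 ** coarse_bits
    step = vref / n_coarse
    k = min(int(vin / step), n_coarse - 1)

    # Fine step: resolve vin within [k*step, (k+1)*step) only,
    # i.e. the fine converter's references are 'zoomed' onto the input.
    lo, hi = k * step, (k + 1) * step
    n_fine = 2 ** fine_bits
    f = min(int((vin - lo) / (hi - lo) * n_fine), n_fine - 1)

    # Combine coarse and fine results into one digital output code.
    code = k * n_fine + f
    return code, code * vref / (n_coarse * n_fine)

code, approx = zoom_adc(0.4567)
print(code, approx)  # the fine stage only ever spans 1/16 of full scale
```

With 4 coarse bits, the fine converter only ever has to cover one sixteenth of the full scale, which is the intuition behind the reduced cycle count and relaxed loop-filter requirements mentioned above.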
This book describes automated debugging approaches for the bugs and faults that appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach for systems at the transaction level, asserting the correct relation of transactions. The automated debug approach for design bugs finds the potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at the gate level. The various debug approaches described achieve high diagnosis accuracy and reduce debugging time, shortening the IC development cycle and increasing the productivity of designers. The book describes a unified framework for debug automation used at both pre-silicon and post-silicon stages; provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level; and includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs.
Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has created a tremendous need for computational power that conventional scalar processors, oriented towards numeric and data manipulations, may not be able to deliver efficiently. Due to the neurocomputing requirements (such as non-programming and learning) and the artificial intelligence requirements (such as symbolic manipulation and knowledge representation), a different set of constraints and demands is imposed on the computer architectures and organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence have increased in order to meet the new performance requirements. This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architectures, neurocomputing and artificial intelligence techniques. The book is organized into four subject areas that cover its two major categories: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics covered in each area are briefly introduced below.
There is no doubt that the microprocessor (μP) revolution will continue into the future, and many will be required to specify and integrate microprocessors into products or systems in their own disciplines. Therefore, well-designed flexible interfaces will be required to ensure compatibility with other equipment and to extend design options. Although there are several books on microcomputers and microprocessors, only a few of those devote even a small part to the important aspects of interfaces. It was with this in mind that the present book was written as a self-contained volume to be part of the more general series: Microprocessor-Based Systems Engineering. It fills an existing gap in technology, as interfaces are the last items to be seriously considered in the race of new technology, and it deals with the systematic study of microprocessor interfaces and their applications in many diversified fields. This book is aimed at engineers in industry and engineering students who need to learn how to interface microprocessors, and hence microcomputers and other related equipment, to external digital or analog devices. It is suitable for use as a textbook or for supplementary reading, either in an applied undergraduate course in electrical engineering or in the last year of three-year-curriculum technical colleges.
It has become clear in recent years from such major forums as the various international conferences on flexible manufacturing systems (FMSs) that the computer-controlled and -integrated "factory of the future" is now being considered as a commercially viable and technically achievable goal. To date, most attention has been given to the design, development, and evaluation of flexible machining systems. Now, with the essential support of increasing numbers of industrial examples, the general concepts, technical requirements, and cost-effectiveness of responsive, computer-integrated, flexible machining systems are fast becoming established knowledge. There is, of course, much still to be done in the development of modular computer hardware and software, and the scope for cost-effective developments in programming systems, workpiece handling, and quality control will ensure that continuing development will occur over the next decade. However, international attention is now increasingly turning toward the flexible computer control of the assembly process as the next logical step in progressive factory automation. It is here at this very early stage that Tony Owen has bravely set out to encompass the future field of flexible assembly systems (FASs) in his own distinctive, wide-ranging style.
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area, throughput and performance with respect to several techniques. The book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but highly flexible circuit structure to support these standards in multiple parallel modes. Moreover, solutions that overcome the limitation on the speedup of parallel architectures by modifying the turbo codec are presented here. Compared to traditional designs, these methods can deliver up to a 33% gain in throughput with similar performance and similar cost.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study of new techniques to cope with the aforementioned limitations. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies are presented. Then, a detailed analysis of several benchmarks demonstrates that such architectures need to attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as binary translation and trace reuse. Finally, works that combine both reconfigurable systems and dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy transaction timing and data temporal constraints. Other design issues important to the performance of an RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
This text offers complete information on the latest developments in the emerging technology of polymer thick film--from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
Highlights developments, discoveries, and practical and advanced experiences related to responsive distributed computing and how it can support the deployment of trajectory-based applications in intelligent systems. Presents metamodeling with new trajectory patterns that are very useful for intelligent transportation systems. Examines the processing of raw trajectories to derive other types of trajectories, such as semantic, activity-based and space-time path trajectories. Discusses Complex Event Processing (CEP), the Internet of Things (IoT), the Internet of Vehicles (IoV), V2X communication, Big Data analytics, distributed processing frameworks, and cloud computing. Presents a number of case studies to demonstrate smart trajectories related to spatio-temporal events such as traffic congestion, viral contamination, and pedestrian accidents.
This book presents the cellular wireless network standard NB-IoT (Narrowband Internet of Things), which addresses many key requirements of the IoT. NB-IoT is a topic that is inspiring the industry to create new business cases and associated products. The author first introduces the technology and typical IoT use cases. He then explains NB-IoT's extended network coverage and outstanding power-saving features, which enable the design of IoT devices (e.g. sensors) that work everywhere and for more than 10 years in a maintenance-free way. The book explains to industrial users how to utilize NB-IoT features for their own IoT projects. Other system ingredients (e.g. IoT cloud services) and embedded security aspects are covered as well. The author takes an in-depth look at NB-IoT from an application engineering point of view, focusing on IoT device design. The target audience is technically minded IoT project owners and system design engineers who are planning to develop an IoT application.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes the specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.
This book provides a comprehensive introduction to embedded flash memory, describing the history, current status, and future projections for technology, circuits, and systems applications. The authors describe current mainstream embedded flash technologies, from floating-gate 1Tr and floating-gate split-gate (1.5Tr) to 1Tr/1.5Tr SONOS flash technologies, and the applications they have enabled. Comparisons of these embedded flash technologies and future projections are also provided. The authors demonstrate a variety of embedded applications for automotive, smart-IC card, and low-power products, representing the leading-edge technology developments for eFlash. The discussion also includes insights into the future prospects of application-driven non-volatile memory technology in the era of smart advanced automotive systems, such as ADAS (Advanced Driver Assistance Systems), and the IoE (Internet of Everything). Trials on technology convergence and future prospects of embedded non-volatile memory in the new memory hierarchy are also described. The book introduces the history of embedded flash memory technology for micro-controller products and how embedded flash innovations developed; includes comprehensive and detailed descriptions of current mainstream embedded flash memory technologies, sub-system designs and applications; explains why embedded flash memory requirements differ from those of stand-alone flash memory and how to achieve specific goals with technology development and circuit design; describes a mature and stable floating-gate 1Tr cell technology imported from stand-alone flash memory products, and then introduces embedded-specific split-gate memory cell technologies based on the floating-gate storage structure and charge-trapping SONOS technology, together with their eFlash sub-system designs; and describes automotive and smart-IC card application requirements and achievements in advanced eFlash beyond the 40nm node.
Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm, which has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management and finance; and many other useful sectors. Nowadays, the primary uses of blockchain technology are in information systems, to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks, double-spending attacks, etc. The popularity and rapid development of blockchain bring many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader will be provided with the most up-to-date knowledge of blockchain in mainstream areas of security and privacy in the decentralized domain, which is timely and essential (distributed and P2P applications are increasing day by day, and attackers adopt new mechanisms to threaten the security and privacy of users in those environments). This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
Almost all the systems in our world, including technical, social, economic, and environmental systems, are becoming interconnected and increasingly complex, and as such they are vulnerable to various risks. Due to this trend, resilience creation is becoming more important to system managers and decision makers as a way to ensure sustained performance. To ensure acceptable sustained performance under such interconnectedness and complexity, resilience creation with a system approach is a requirement, and mathematical modeling is the most common approach to it. Mathematical Modelling of System Resilience covers resilience creation for various system aspects, including a functional system of the supply chain and overall supply chain systems; various methodologies for modeling system resilience; a satellite-based approach for addressing climate-related risks; a repair-based approach for the sustainable performance of an engineering system; and modeling measures of reliability for a vertical take-off and landing system. Each chapter contributes state-of-the-art research on the resilience-related topic it covers. Technical topics covered in the book include: 1. Supply chain risk, vulnerability and disruptions; 2. System resilience for containing failures and disruptions; 3. Resiliency considering the frequency and intensity of disasters; 4. Resilience performance index; 5. Resiliency of electric traction systems; 6. Degree of resilience; 7. Satellite observation and hydrological risk; 8. Latitude of resilience; 9. On-line repair for resilience; 10. Reliability design for a vertical take-off and landing prototype.
Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed-memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine-grain and high-bandwidth interprocessor communication capabilities of a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation-induced clock jitter and pulses, and single-event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft-error mitigation techniques such as the dynamic threshold technique and soft-error filtering based on a transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low-power, energy-efficient designs and the impact of leakage power consumption optimizations on soft-error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption and improve soft-error reliability at the same time.
Written with graduate and advanced undergraduate students in mind, this textbook introduces computational logic from the foundations of first-order logic to state-of-the-art decision procedures for arithmetic, data structures, and combination theories. The textbook also presents a logical approach to engineering correct software. Verification exercises are given to develop the reader's facility in specifying and verifying software using logic. The treatment of verification concludes with an introduction to the static analysis of software, an important component of modern verification systems. The final chapter outlines courses of further study.
Multithreaded Processor Design takes the unique approach of designing a multithreaded processor from the ground up. Every aspect is carefully considered to form a balanced design, rather than making incremental changes to an existing design and then ignoring problem areas. The general-purpose parallel computer is an elusive goal. Multithreaded processors have emerged as a promising solution to this conundrum by forming some amalgam of the commonplace control-flow (von Neumann) processor model with the more exotic data-flow approach. This new processor model offers many exciting possibilities, and there is much research to be performed to make this technology widespread. Multithreaded processors utilize the simple and efficient sequential execution technique of control-flow together with data-flow-like concurrency primitives. This supports the conceptually simple but powerful idea of rescheduling rather than blocking when waiting for data, e.g. from large and distributed memories, thereby tolerating long data transmission latencies. This makes multiprocessing far more efficient, because the cost of moving data between distributed memories and processors can be hidden by other activity. The same hardware mechanisms may also be used to synchronize interprocess communications to awaiting threads, thereby alleviating operating system overheads. Supporting synchronization and scheduling mechanisms in hardware naturally adds complexity. Consequently, existing multithreaded processor designs have tended to make incremental changes to existing control-flow processor designs that resolve some problems but not others. Multithreaded Processor Design serves as an excellent reference source and is suitable as a text for advanced courses in computer architecture dealing with the subject.
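The rescheduling-instead-of-blocking idea described above can be illustrated with a small software analogy. The Python sketch below is purely illustrative (it is not the book's hardware design, and the thread and latency parameters are made-up assumptions): a thread that issues a remote load is parked until its data "arrives", and another ready thread runs in the meantime, so the memory latency is hidden by useful work.

```python
from collections import deque

MEM_LATENCY = 5  # cycles a remote load takes (illustrative value)

def thread(name, n_loads):
    """A toy thread: it issues a remote load, then yields instead of
    blocking; the scheduler resumes it once the data has 'arrived'."""
    for i in range(n_loads):
        yield ('load', f'{name}-data{i}')    # hand control back while waiting
        yield ('compute', f'{name}-use{i}')  # consume the loaded value

def run(threads):
    """Cycle-by-cycle toy scheduler: a thread waiting on memory is parked,
    and another ready thread runs, so the transfer latency is hidden."""
    ready = deque(threads)
    waiting = []  # list of (wakeup_cycle, thread)
    cycle = 0
    while ready or waiting:
        # Wake any threads whose remote loads have completed.
        due = [w for w in waiting if w[0] <= cycle]
        waiting = [w for w in waiting if w[0] > cycle]
        ready.extend(t for _, t in due)
        if ready:
            t = ready.popleft()
            try:
                op, what = next(t)
                print(f'cycle {cycle:2d}: {op:7s} {what}')
                if op == 'load':
                    waiting.append((cycle + MEM_LATENCY, t))  # park, don't block
                else:
                    ready.append(t)
            except StopIteration:
                pass
        cycle += 1

run([thread('T0', 2), thread('T1', 2)])
```

Running the sketch shows the two toy threads interleaving: while one waits out its load latency, the other issues its own load or computes, which is the latency-hiding behaviour the paragraph above attributes to hardware multithreading.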