This book is a comprehensive guide to assertion-based verification of hardware designs using SystemVerilog Assertions (SVA). It enables readers to minimize the cost of verification by using assertion-based techniques in simulation testing, coverage collection and formal analysis. The book provides detailed descriptions of all the language features of SVA, accompanied by step-by-step examples of how to employ them to construct powerful and reusable sets of properties. The book also shows how SVA fits into the broader SystemVerilog language, demonstrating the ways that assertions can interact with other SystemVerilog components. The reader new to hardware verification will benefit from general material describing the nature of design models and behaviors, how they are exercised, and the different roles that assertions play. This second edition covers the features introduced by the recent IEEE 1800-2012 SystemVerilog standard, explaining in detail the new and enhanced assertion constructs. The book makes SVA usable and accessible for hardware designers, verification engineers, formal verification specialists and EDA tool developers. With numerous exercises, ranging in depth and difficulty, the book is also suitable as a text for students.
Logic circuits are becoming increasingly susceptible to probabilistic behavior caused by external radiation and process variation. In addition, inherently probabilistic quantum- and nano-technologies are on the horizon as we approach the limits of CMOS scaling. Ensuring the reliability of such circuits despite the probabilistic behavior is a key challenge in IC design, one that necessitates a fundamental, probabilistic reformulation of synthesis and testing techniques. This monograph presents techniques for analyzing, designing, and testing logic circuits with probabilistic behavior.
This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. A SoC case study is presented to compare traditional verification with the new verification methodologies. Discusses the entire life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection; provides an in-depth introduction to Verilog from both the implementation and verification points of view; demonstrates how to use IP in applications such as memory controllers and SoC buses; describes a new verification methodology called bug localization; presents a novel scan-chain methodology for RTL debugging; enables readers to employ UVM methodology in straightforward, practical terms.
Companies must confront an increasingly competitive environment with lean, flexible and market-oriented structures. Therefore, companies organize themselves according to their business processes. These processes are more and more often designed, implemented and managed based on standard software, mostly ERP or SCM packages. This is the first book delivering a complete description of a business-driven implementation of standard software packages, accelerated by the use of reference models and other information models. The use of those models ensures best-quality results and speeds up the software implementation. The book discusses how companies can optimize business processes and realize strategic goals with the implementation of software like SAP R/3, Oracle, Baan or PeopleSoft. It also includes the post-implementation activities. The book cites numerous case studies and outlines each step of a process-oriented implementation, including the goals, procedures and necessary methods and tools.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful design of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have proven to deliver high performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.
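As a hedged illustration (not taken from the book), the kernel operation such a simulator repeats at every Newton iteration and time step is a sparse LU factorization followed by triangular solves; the sketch below uses SciPy's sequential splu as a stand-in for the parallel solver the authors describe, with purely illustrative matrix values.

    # Minimal sketch: sparse LU factor/solve, the operation a SPICE-like
    # simulator performs repeatedly (values below are illustrative only).
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    # Tiny nodal-analysis-style system G*v = i
    G = csc_matrix(np.array([[ 2.0, -1.0,  0.0],
                             [-1.0,  3.0, -1.0],
                             [ 0.0, -1.0,  2.0]]))
    i = np.array([1.0, 0.0, 0.0])

    lu = splu(G)      # factorize once (the expensive step a parallel solver targets)
    v = lu.solve(i)   # reuse the factors for many right-hand sides
    print(v)

Because a circuit simulator re-solves such systems thousands of times, the factor/solve pair is where parallel sparse direct techniques pay off.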
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximation, providing a better understanding of efficient algorithmic solutions to problems. Divided into three parts, it covers:
- Model Theory and Recursive Functions - introducing the basic model theory of propositional logic, first-order logic, inductive definitions and second-order logic; recursive functions, Turing computability and decidability are also examined.
- Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity.
- Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form.
Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
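As a concrete instance of the correspondence studied in descriptive complexity (standard material, not quoted from the blurb), Fagin's theorem characterizes NP purely by logical form:

    \mathrm{NP} \;=\; \exists\,\mathrm{SO}

that is, a property of finite structures is decidable in nondeterministic polynomial time if and only if it is definable by an existential second-order sentence.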
This book describes recent findings in the domain of Boolean logic and Boolean algebra, covering application domains in circuit and system design, but also basic research in mathematics and theoretical computer science. Content includes invited chapters and a selection of the best papers presented at the 13th annual International Workshop on Boolean Problems. Provides a single-source reference to the state-of-the-art research in the field of logic synthesis and Boolean techniques; Includes a selection of the best papers presented at the 13th annual International Workshop on Boolean Problems; Covers Boolean algebras, Boolean logic, Boolean modeling, Combinatorial Search, Boolean and bitwise arithmetic, Software and tools for the solution of Boolean problems, Applications of Boolean logic and algebras, Applications to real-world problems, Boolean constraint solving, and Extensions of Boolean logic.
Logic Synthesis for Low Power VLSI Designs presents a systematic and comprehensive treatment of power modeling and optimization at the logic level. More precisely, this book provides a detailed presentation of methodologies, algorithms and CAD tools for power modeling, estimation and analysis, synthesis and optimization at the logic level. Logic Synthesis for Low Power VLSI Designs contains detailed descriptions of technology-dependent logic transformations and optimizations, technology decomposition and mapping, and post-mapping structural optimization techniques for low power. It also emphasizes the trade-off techniques for two-level and multi-level logic circuits that involve power dissipation and circuit speed, in the hope that readers can better understand the issues and ways of achieving their power dissipation goal while meeting the timing constraints. Logic Synthesis for Low Power VLSI Designs is written for VLSI design engineers, CAD professionals, and students who have a basic knowledge of CMOS digital design and logic synthesis.
Magnetic recording is expected to become a core technology in a multi-billion dollar industry in the very near future. Some of the most critical discoveries regarding perpendicular write and playback heads and perpendicular media were made only during the last several years as a result of extensive and intensive research in both academia and industry in their fierce race to extend the superparamagnetic limit in the magnetic recording media. These discoveries appear to be critical for implementing perpendicular magnetic recording into an actual disk drive. This book addresses all the open questions and issues which need to be resolved before perpendicular recording can finally be implemented successfully, and is the first monograph in many years to address this subject. This book is intended for graduate students, young engineers and even senior and more experienced researchers in this field who need to acquire adequate knowledge of the physics of perpendicular magnetic recording in order to further develop the field of perpendicular recording.
BiCMOS Technology and Applications, Second Edition provides a synthesis of available knowledge about the combination of bipolar and MOS transistors in a common integrated circuit - BiCMOS. In this new edition all chapters have been updated and completely new chapters on emerging topics have been added. In addition, BiCMOS Technology and Applications, Second Edition provides the reader who has a knowledge of either CMOS or bipolar technology/design with a reference with which they can make educated decisions regarding the viability of BiCMOS in their own application. BiCMOS Technology and Applications, Second Edition is vital reading for practicing integrated circuit engineers as well as technical managers trying to evaluate business issues related to BiCMOS. As a textbook, this book is also appropriate at the graduate level for a special topics course in BiCMOS. A general knowledge of device physics, processing and circuit design is assumed. Given the division of the book, it lends itself well to a two-part course: one on technology and one on design. This will provide advanced students with a good understanding of tradeoffs between bipolar and MOS devices and circuits.
This book encapsulates some work done in the DIRC project concerned with trust and responsibility in socio-technical systems. It brings together a range of disciplinary approaches - computer science, sociology and software engineering - to produce a socio-technical systems perspective on the issues surrounding trust in technology in complex settings. Computer systems can only bring about their purported benefits if functionality, users and usability are central to their design and deployment. Thus, technology can only be trusted in situ and in everyday use if these issues have been brought to bear on the process of technology design, implementation and use. The studies detailed in this book analyse the ways in which trust in technology is achieved and/or worked around in everyday situations in a range of settings - including hospitals, a steelworks, a public enquiry, the financial services sector and air traffic control.
The major thrust of this book is the realisation of an all optical computer. To that end it discusses optoelectronic devices and applications, transmission systems, integrated optoelectronic systems and, of course, all optical computers. The chapters on 'heterostructure light emitting devices' and 'quantum well carrier transport optoelectronic devices' present the most recent advances in device physics, together with modern devices and their applications. The chapter on 'microcavity lasers' is essential to the discussion of present and future developments in solid-state laser physics and technology and puts into perspective the present state of research into and the technology of optoelectronic devices, within the context of their use in advanced systems. A significant part of the book deals with problems of propagation in quantum structures. 'Soliton-based switching, gating and transmission systems' presents the basics of controlling the propagation of photons in solids and the use of this control in devices. The chapters on 'optoelectronic processing using smart pixels' and 'all optical computers' are preceded by introductory material in 'fundamentals of quantum structures for optoelectronic devices and systems' and 'linear and nonlinear absorption and reflection in quantum well structures'. It is clear that new architectures will be necessary if we are to fully utilise the potential of electrooptic devices in computing, but even current architectures and structures demonstrate the feasibility of the all optical computer: one that is possible today.
Integrated Research in Grid Computing presents a selection of the best papers presented at the CoreGRID Integration Workshop (CGIW2005), which took place on November 28-30, 2005 in Pisa, Italy. The aim of CoreGRID is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies in order to overcome the current fragmentation and duplication of effort in this area. To achieve this objective, the workshop brought together a critical mass of well-established researchers (including 145 permanent researchers and 171 PhD students) from a number of institutions which have all constructed an ambitious joint program of activities. Priority in the workshop was given to work conducted in collaboration between partners from different research institutions and to promising research proposals that could foster such collaboration in the future.
This book is a collection of the finalized versions of the papers presented at the third Eurographics Workshop on Graphics Hardware. The diversity of the contributions reflects the widening range of options for graphics hardware that can be exploited due to the constant evolution of VLSI and software technologies. The first part of the book deals with the algorithmic aspects of graphics systems in a hardware-oriented context. Topics are: VLSI design strategies, data distribution for ray-tracing, the advantages of point-driven image generation with respect to VLSI implementation, use of memory and ease of parallelization, ray-tracing, and image reconstruction. The second part is on specific hardware, on content-addressable memories and voxel-based systems. The third part addresses parallel systems: massively parallel object-based architectures, two systems in which images generated by individual rendering systems are composited, and a transputer-based parallel display processor.
Adiabatic logic is a potential successor for static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments like the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts will change the gate-level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of the technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, are investigated, as well as degradation related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects like energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology has to compete with static CMOS also in productivity. On a 130nm test chip, a large-scale test vehicle containing an FIR filter was implemented in adiabatic logic using a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors consistent with the values obtained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to compatibility not only to CMOS technology, but also to electronic design automation (EDA) tools developed for static CMOS system design.
Part I presents the impact of an integro-differential operator on parity logic engines (PLEs) as a tool for scientific modeling from scratch. Part II outlines the fuzzy structural modeling approach for building new linear and nonlinear dynamical causal forecasting systems in terms of fuzzy cognitive maps (FCMs). Part III introduces a new type of algorithm, autogenetic algorithms (AGAs), to the field of evolutionary computing. Altogether, these PLEs, FCMs, and AGAs may serve as conceptual and computational power tools.
This fascinating cultural history of the personal computer explains how user-friendly design allows tech companies to build systems that we cannot understand. Modern personal computers are easy to use, and their welcoming, user-friendly interfaces encourage us to see them as designed for our individual benefit. Rarely, however, do these interfaces invite us to consider how our individual uses support the broader political and economic strategies of their designers. In Transparent Designs, Michael L. Black revisits early debates from hobbyist newsletters, computing magazines, user manuals, and advertisements about how personal computers could be seen as usable and useful by the average person. Black examines how early personal computers from the Tandy TRS-80 and Commodore PET to the IBM PC and Apple Macintosh were marketed to an American public that was high on the bold promises of the computing revolution but also skeptical about their ability to participate in it. Through this careful archival study, he shows how many of the foundational principles of usability theory were shaped through disagreements over the languages and business strategies developed in response to this skepticism. In short, this book asks us to consider the consequences of a computational culture that is based on the assumption that the average person does not need to know anything about the internal operations of the computers we've come to depend on for everything. Expanding our definition of usability, Transparent Designs examines how popular and technical rhetoric shapes user expectations about what counts as usable and useful as much as or even more so than hardware and software interfaces. Offering a fresh look at the first decade of personal computing, Black highlights how the concept of usability has been leveraged historically to smooth over conflicts between the rhetoric of computing and its material experience. Readers interested in vintage computing, the history of technology, digital rhetoric, or American culture will be fascinated by this book.
This state-of-the-art monograph presents a coherent survey of a variety of methods and systems for formal hardware verification. It emphasizes the presentation of approaches that have matured into tools and systems usable for the actual verification of nontrivial circuits. All in all, the book is a representative and well-structured survey on the success and future potential of formal methods in proving the correctness of circuits. The various chapters describe the respective approaches supplying theoretical foundations as well as taking into account the application viewpoint. By applying all methods and systems presented to the same set of IFIP WG10.5 hardware verification examples, a valuable and fair analysis of the strengths and weaknesses of the various approaches is given.
Like almost no other technology, embedded systems are an essential element of many innovations in automotive engineering. New functions and improvements of already existing functions, as well as compliance with traffic regulations and customer requirements, have only become possible by the increasing use of electronic systems, especially in the fields of driving, safety, reliability, and functionality. As the functionalities increase in number and have to cooperate, the complexity of the entire system increases. Synergy effects resulting from application functionalities distributed across several electronic control devices, exchanging information through the network, bring about more complex system architectures with many different sub-networks, operating at different speeds and with different protocol implementations. To manage the increasing complexity of these systems, a deterministic behaviour of the control units and the communication network must be provided for, in particular when dealing with distributed functionality. From Specification to Embedded Systems Application documents recent approaches and results presented at the International Embedded Systems Symposium (IESS 2005), which was held in August 2005 in Manaus (Brazil) and sponsored by the International Federation for Information Processing (IFIP). The topics chosen for this working conference are very timely: design methodology, modeling, specification, software synthesis, power management, formal verification, testing, networks, communication systems, distributed control systems, resource management and special aspects in system design.
Systems Performance, Second Edition, covers concepts, strategy, tools, and tuning for operating systems and applications, using Linux-based operating systems as the primary example. A deep understanding of these tools and techniques is critical for developers today. Implementing the strategies described in this thoroughly revised and updated edition can lead to a better end-user experience and lower costs, especially for cloud computing environments that charge by the OS instance. Systems performance expert and best-selling author Brendan Gregg summarizes relevant operating system, hardware, and application theory to quickly get professionals up to speed even if they have never analyzed performance before. Gregg then provides in-depth explanations of the latest tools and techniques, including extended BPF, and shows how to get the most out of cloud, web, and large-scale enterprise systems. Key topics covered include:
- Hardware, kernel, and application internals, and how they perform
- Methodologies for rapid performance analysis of complex systems
- Optimizing CPU, memory, file system, disk, and networking usage
- Sophisticated profiling and tracing with perf, Ftrace, and BPF (BCC and bpftrace)
- Performance challenges associated with cloud computing hypervisors
- Benchmarking more effectively
Featuring up-to-date coverage of Linux operating systems and environments, Systems Performance, Second Edition, also addresses issues that apply to any computer system. The book will be a go-to reference for many years to come and, like the first edition, required reading at leading tech companies. Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside book for details.
It gives me immense pleasure to introduce this timely handbook to the research/development communities in the field of signal processing systems (SPS). This is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon the major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the IC in the 1950s by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components were made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since then grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law had turned out to be more accurate, longer lasting and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer sciences and Internet technologies have also catalyzed large-scale system integration. All these have led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospect of humanity. (It is hard to imagine life today without mobiles or the Internet.) The success of SPS requires a well-concerted integrated approach from multiple disciplines, such as device, design, and application.
The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory overhead in multiprocessors with a logically shared address space and physically distributed memory. This book presents techniques for reducing that overhead; such techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
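A minimal sketch (not from the book) of the partitioning vocabulary used above: the hypothetical helpers below contrast a block partition with a cyclic partition of a data-parallel loop's iteration space, the kind of choice ADP and CPR make when trading load balance against communication.

    # Block vs. cyclic partitioning of n_iters loop iterations across p processors.
    def block_partition(n_iters, p, rank):
        """Contiguous chunk of iterations owned by processor `rank`."""
        chunk = (n_iters + p - 1) // p
        return list(range(rank * chunk, min((rank + 1) * chunk, n_iters)))

    def cyclic_partition(n_iters, p, rank):
        """Every p-th iteration, starting at offset `rank`."""
        return list(range(rank, n_iters, p))

    # Example: 10 iterations spread over 4 processors.
    for rank in range(4):
        print(rank, block_partition(10, 4, rank), cyclic_partition(10, 4, rank))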
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
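To make the contrast with a shared address space concrete, here is a minimal sketch (not from the book) of the message-passing style, using two operating-system processes and an explicit channel from Python's standard library; in a real distributed-memory machine the same pattern runs over a network rather than a local pipe, and the worker function here is purely illustrative.

    from multiprocessing import Process, Pipe

    def worker(conn):
        data = conn.recv()             # no shared memory: the data arrives as a message
        conn.send(sum(data))           # the partial result is sent back the same way
        conn.close()

    if __name__ == "__main__":
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end,))
        p.start()
        parent_end.send([1, 2, 3, 4])  # distribute work by explicit messaging
        print("partial sum:", parent_end.recv())
        p.join()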
THE CONTEXT OF PARALLEL PROCESSING: The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, have allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its downside. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements on the other.