This self-contained book addresses the need for analysis, characterization, estimation, and optimization of the various forms of power dissipation in nano-CMOS technologies in the presence of process variations. The authors show very large-scale integration (VLSI) researchers and engineers how to minimize the different types of power consumption in digital circuits. The material deals primarily with high-level (architectural or behavioral) energy dissipation.
Intelligent agents are one of the most promising business tools in our information-rich world. An intelligent agent is a software system capable of performing intelligent tasks within a dynamic and unpredictable environment. Agents can be characterised by various attributes, including autonomy, adaptivity, collaboration, communication, mobility, and reactivity. Many problems are not well defined, and the information needed to make decisions is unavailable; such problems are not easy to solve using conventional computing approaches. Here, the intelligent agent paradigm may play a major role in helping to solve them. This book, written for application researchers, covers a broad selection of research results that demonstrate, in an authoritative and clear manner, the applications of agents within our information society.
Cellular Neural Networks (CNNs) constitute a class of nonlinear, recurrent and locally coupled arrays of identical dynamical cells that operate in parallel. Analog CNN chips are being developed for use in applications where sophisticated signal processing at low power consumption is required. Signal processing via CNNs only becomes efficient if the network is implemented in analog hardware. In view of the physical limitations that analog implementations entail, robust operation of a CNN chip with respect to parameter variations has to be ensured. Not all mathematically possible CNN tasks can be carried out reliably on an analog chip; some of them are inherently too sensitive. This book defines a robustness measure to quantify the degree of robustness and proposes an exact and direct analytical design method for the synthesis of optimally robust network parameters. The method is based on a design centering technique which is generally applicable where linear constraints have to be satisfied in an optimum way. Processing speed is always crucial when discussing signal-processing devices. In the case of the CNN, it is shown that the settling time can be specified in closed analytical expressions, which permits, on the one hand, parameter optimization with respect to speed and, on the other hand, efficient numerical integration of CNNs. Interdependencies between robustness and speed are also addressed. Another goal pursued is the unification of the theory of continuous-time and discrete-time systems. By means of a delta-operator approach, it is proven that the same network parameters can be used for both of these classes, even if their nonlinear output functions differ. More complex CNN optimization problems that cannot be solved analytically necessitate resorting to numerical methods. Among these, stochastic optimization techniques such as genetic algorithms prove their usefulness, for example in image classification problems. Since the inception of the CNN, the problem of finding the network parameters for a desired task has been regarded as a learning or training problem, and computationally expensive methods derived from standard neural networks have been applied. Furthermore, numerous useful parameter sets have been derived by intuition. In this book, a direct and exact analytical design method for the network parameters is presented. The approach yields solutions which are optimum with respect to robustness, an aspect which is crucial for the successful implementation of analog CNN hardware and which has often been neglected. 'This beautifully rounded work provides many interesting and useful results, for both CNN theorists and circuit designers.' Leon O. Chua
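For reference, the dynamics analyzed here are those of the standard Chua-Yang CNN cell model (the book's notation may differ); the feedback template A, control template B and bias z are precisely the "network parameters" whose robust synthesis the book addresses:

```latex
% Standard Chua-Yang CNN cell at grid position (i,j), neighborhood N_r
\dot{x}_{ij} = -x_{ij}
    + \sum_{(k,l)\in N_r(i,j)} A_{kl}\, y_{kl}
    + \sum_{(k,l)\in N_r(i,j)} B_{kl}\, u_{kl} + z,
\qquad
y_{ij} = \tfrac{1}{2}\bigl(\lvert x_{ij}+1\rvert - \lvert x_{ij}-1\rvert\bigr)
```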
Scalable Hardware Verification with Symbolic Simulation presents recent advancements in symbolic simulation-based solutions which radically improve scalability. It overviews current verification techniques, both based on logic simulation and formal verification methods, and unveils the inner workings of symbolic simulation. The core of this book focuses on new techniques that narrow the performance gap between the complexity of digital systems and the limited ability to verify them. In particular, it covers a range of solutions that exploit approximation and parametrization methods, including quasi-symbolic simulation, cycle-based symbolic simulation, and parameterizations based on disjoint-support decompositions. In structuring this book, the author's hope was to provide interesting reading for a broad range of design automation readers. The first two chapters provide an overview of digital systems design and, in particular, verification. Chapter 3 reviews mainstream symbolic techniques in formal verification, dedicating most of its focus to symbolic simulation. The fourth chapter covers the necessary principles of parametric forms and disjoint-support decompositions. Chapters 5 and 6 focus on recent symbolic simulation techniques, and the final chapter addresses key topics needing further research. Scalable Hardware Verification with Symbolic Simulation is for verification engineers and researchers in the design automation field.
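As a tiny taste of what symbolic simulation does (an illustrative sketch, not the book's algorithms), the Python/sympy fragment below drives a gate-level multiplexer with symbolic rather than concrete values, so a single run covers every input combination at once:

```python
# Symbolic simulation sketch: propagate symbolic expressions through
# gates instead of 0/1 values, then query properties of the result.
from sympy import symbols
from sympy.logic import simplify_logic
from sympy.logic.boolalg import And, Or, Not

a, b, sel = symbols("a b sel")
out = Or(And(sel, a), And(Not(sel), b))     # gate-level 2:1 mux
print(simplify_logic(out))                   # symbolic output expression
print(simplify_logic(out.subs(sel, True)))   # property check: sel=1 -> a
```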
Verification is too often approached in an ad hoc fashion. Visually inspecting simulation results is no longer feasible and the directed test-case methodology is reaching its limit. Moore's Law demands a productivity revolution in functional verification methodology. Writing Testbenches Using SystemVerilog offers a clear blueprint of a verification process that aims for first-time success using the SystemVerilog language. From simulators to source management tools, from specification to functional coverage, from 1s and 0s to high-level abstractions, from interfaces to bus-functional models, from transactions to self-checking testbenches, from directed testcases to constrained random generators, from behavioral models to regression suites, this book covers it all. Writing Testbenches Using SystemVerilog presents many of the functional verification features that were added to the Verilog language as part of SystemVerilog. Interfaces, virtual modports, classes, program blocks, clocking blocks and other SystemVerilog features are introduced within a coherent verification methodology and usage model. Writing Testbenches Using SystemVerilog introduces the reader to all elements of a modern, scalable verification methodology. It is an introduction and prelude to the verification methodology detailed in the Verification Methodology Manual for SystemVerilog. It is a SystemVerilog version of the author's bestselling book Writing Testbenches: Functional Verification of HDL Models.
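The book's examples are in SystemVerilog; purely to illustrate the self-checking pattern it builds toward (stimulus generator, design under test, and an independent reference model compared automatically), here is a minimal Python sketch with an invented 8-bit adder as the DUT:

```python
# Self-checking testbench pattern (illustrative sketch): random
# stimulus is applied to the DUT and checked against a golden model,
# so no visual inspection of results is needed.
import random

def dut_adder(a, b):           # stand-in for the design under test
    return (a + b) & 0xFF      # 8-bit adder with wraparound

def reference_model(a, b):     # independent golden model
    return (a + b) % 256

random.seed(0)
for _ in range(1000):          # random stimulus generation
    a, b = random.randrange(256), random.randrange(256)
    assert dut_adder(a, b) == reference_model(a, b), f"mismatch: {a}+{b}"
print("all checks passed")
```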
Constraint-Based Verification covers an emerging field in functional verification of electronic designs, referred to as "constraint-based verification." The topics are developed in the context of a wide range of dynamic and static verification approaches including simulation, emulation, and formal methods. The goal is to show how constraints, or assertions, can be used towards automating the generation of testbenches, resulting in a seamless verification flow. Topics such as verification coverage and the connection with assertion-based verification are also covered. The book targets verification engineers as well as researchers. It covers both methodological and technical issues. Particular stress is given to the latest advances in functional verification. The research community has witnessed recent growth of interest in constraint-based functional verification. Various techniques have been developed. They are relatively new, but have reached a level of maturity so that they are appearing in commercial tools such as Vera and SystemVerilog.
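As a rough illustration of the central idea (the packet fields and constraints below are invented for the sketch, not taken from the book), constraints can drive stimulus generation directly, here via simple rejection sampling:

```python
# Constraint-driven stimulus generation by rejection sampling:
# draw random candidates and keep only those satisfying the
# declared constraints.
import random

def gen_packet():
    while True:
        kind = random.choice(["data", "ctrl"])
        length = random.randrange(0, 1024)
        if kind == "ctrl" and length >= 64:   # control packets are short
            continue
        if length % 4 != 0:                   # lengths are word-aligned
            continue
        return {"kind": kind, "length": length}

random.seed(1)
print([gen_packet() for _ in range(3)])
```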
This book details timing analysis and optimization techniques for circuits with level-sensitive memory elements. It contains a linear programming formulation applicable to the timing analysis of large-scale circuits and includes a delay insertion methodology that improves the efficiency of clock skew scheduling. It also provides a framework for, and results from, implementing timing optimization algorithms in a parallel computing environment.
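To make the linear programming connection concrete, here is a toy sketch of clock skew scheduling as an LP (a two-register example with invented delays and a common setup/hold model, not the book's exact formulation): choose clock arrival times to maximize the worst-case slack s over all setup and hold constraints.

```python
# Clock skew scheduling as an LP with scipy. Registers A and B,
# period T; A's clock arrival time is fixed at 0 as reference.
# Path delays (invented): A->B Dmax=6, Dmin=2; B->A Dmax=4, Dmin=1.
from scipy.optimize import linprog

T, setup, hold = 8.0, 0.5, 0.5
# variables x = [t_B, s]; maximize s  ->  minimize -s
c = [0.0, -1.0]
A_ub = [
    [-1.0, 1.0],   # setup A->B: -t_B + s <= T - Dmax - setup = 1.5
    [ 1.0, 1.0],   # hold  A->B:  t_B + s <= Dmin - hold      = 1.5
    [ 1.0, 1.0],   # setup B->A:  t_B + s <= T - Dmax - setup = 3.5
    [-1.0, 1.0],   # hold  B->A: -t_B + s <= Dmin - hold      = 0.5
]
b_ub = [1.5, 1.5, 3.5, 0.5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(res.x)   # t_B = 0.5, worst-case slack s = 1.0
```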
On Optimal Interconnections for VLSI describes, from a geometric perspective, algorithms for high-performance, high-density interconnections during the global and detailed routing phases of circuit layout. First, the book addresses area minimization, with a focus on near-optimal approximation algorithms for minimum-cost Steiner routing. In addition to practical implementations of recent methods, the implications of recent results on spanning tree degree bounds and the method of Zelikovsky are discussed. Second, the book addresses delay minimization, starting with a discussion of accurate, yet algorithmically tractable, delay models. Recent minimum-delay constructions are highlighted, including provably good cost-radius tradeoffs, critical-sink routing algorithms, Elmore delay-optimal routing, graph Steiner arborescences, non-tree routing, and wiresizing. Third, the book addresses skew minimization for clock routing and prescribed-delay routing formulations. The discussion starts with early matching-based constructions and goes on to treat zero-skew routing with provably minimum wirelength, as well as planar clock routing. Finally, the book concludes with a discussion of multiple (competing) objectives, i.e., how to optimize area, delay, skew, and other objectives simultaneously. These techniques are useful when the routing instance has heterogeneous resources or is highly congested, as in FPGA routing, multi-chip packaging, and very dense layouts. Throughout the book, the emphasis is on practical algorithms and a complete self-contained development. On Optimal Interconnections for VLSI will be of use both to circuit designers (CAD tool users) and to researchers and developers in the area of performance-driven physical design.
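For a feel of the delay models discussed, the sketch below computes Elmore delay on a small RC tree (topology and values invented): the delay to a node is the sum, over each resistor on the root-to-node path, of its resistance times the total capacitance downstream of that resistor.

```python
# Elmore delay on a toy RC tree (values invented for illustration).
tree = {                        # node: (parent, R_from_parent, C_at_node)
    "src": (None, 0.0, 0.0),
    "a":   ("src", 100.0, 1e-12),
    "b":   ("a",   150.0, 2e-12),   # sink 1
    "c":   ("a",   200.0, 1e-12),   # sink 2
}

def downstream_cap(node):
    """Total capacitance of the subtree rooted at node."""
    return tree[node][2] + sum(
        downstream_cap(child)
        for child, (parent, _, _) in tree.items() if parent == node
    )

def elmore_delay(node):
    total = 0.0
    while tree[node][0] is not None:
        parent, r, _ = tree[node]
        total += r * downstream_cap(node)
        node = parent
    return total

print(elmore_delay("b"))   # 150*4pF? no: 150*2pF + 100*4pF = 0.7 ns
```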
One of the foundations for change in our society comes from designing. Its genesis is the notion that the world around us either is unsuited to our needs or can be improved. The need for designing is driven by a society's view that it can improve or add value to human existence well beyond simple subsistence. As a consequence of designing, the world we inhabit is increasingly a designed rather than a naturally occurring one. In that sense it is an "artificial" world. Designing is a fundamental precursor to manufacturing, fabrication, construction or implementation. Design research aims to develop an understanding of designing and to produce models of designing that can be used to aid designing. Artificial intelligence has provided an environmental paradigm within which design research, based on computational constructions, can be carried out. Design research can be carried out in a variety of ways. It can be viewed largely as an empirical endeavour in which experiments are designed and executed in order to test some hypothesis about a design phenomenon or design behaviour. This is the approach adopted in cognitive science. It often manifests itself through the use of protocol studies of designers. The results of such research form the basis of a computational model. A second view is that design research can be carried out by positing axioms and then deriving consequences from them.
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: this design flow starts with a given application and uses incremental optimization of the ASIP hardware, of ASIP coprocessors and of the ASIP software by using a top-down approach and by applying application-specific modifications on all levels of the design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and the efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
Our society is faced with an increasing dependence on computing systems, not only in high-tech consumer applications but also in areas (e.g., air and railway traffic control, nuclear plant control, aircraft and car control) where a failure can be critical for the safety of human beings. Unfortunately, it is accepted that large digital systems cannot be fault-free. Some faults may be attributed to inaccuracy during development, while others can come from external causes such as environmental stress. Radiation, electromagnetic interference and power glitches are some of the most common causes of transient faults.
Minimization of power dissipation in very large scale integrated (VLSI) circuits is important for improving reliability and reducing packaging costs. While many techniques have addressed power minimization during the functional (normal) mode of operation, it is equally important to examine power dissipation during test, since circuit activity is substantially higher during test than during functional operation. For example, during the execution of built-in self-test (BIST) in-field sessions, excessive power dissipation can decrease the reliability of the circuit under test due to higher temperature and current density. Power-Constrained Testing of VLSI Circuits focuses on techniques for minimizing power dissipation during test application at the logic and register-transfer levels of abstraction of the VLSI design flow. The first part of this book surveys existing techniques for power-constrained testing of VLSI circuits. In the second part, several test automation techniques for reducing power in scan-based sequential circuits and BIST data paths are presented.
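One common idea in this area, shown here as a generic sketch with invented vectors rather than a technique specific to this book, is to reorder test vectors so that consecutive patterns differ in few bits, which lowers switching activity and hence test power:

```python
# Greedy test-vector reordering to reduce switching activity: at each
# step, pick the remaining vector with the smallest Hamming distance
# to the previously applied one.
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def reorder(vectors):
    order, rest = [vectors[0]], list(vectors[1:])
    while rest:
        nxt = min(rest, key=lambda v: hamming(order[-1], v))
        rest.remove(nxt)
        order.append(nxt)
    return order

vecs = ["0000", "1111", "0001", "1110", "0011"]
ordered = reorder(vecs)
flips = sum(hamming(u, v) for u, v in zip(ordered, ordered[1:]))
print(ordered, flips)   # 5 bit flips, versus 14 in the original order
```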
The Forum on Design Languages (FDL) is the European forum for exchanging experiences and learning about new trends in the application of languages, and the associated design methods and tools, to the design of complex electronic systems. By offering several co-located workshops, this multi-faceted event gives an excellent opportunity to gain up-to-date knowledge across the main aspects of such a wide field. All the workshops address, as their common denominator, the different application domains of system-design languages, with presentations of the latest research results and design experiences. FDL served once more as the European forum for electronic system design languages and consolidated its role as the main place in Europe where designers interested in design languages and their applications can meet and exchange experiences. In this fourth book in the CHDL Series, a selection of the best papers presented at FDL'02 is published. System Specification and Design Languages contains outstanding research contributions in four areas. The Analog and Mixed-Signal system design contributions cover new methodological approaches such as AMS behavioral specification, mixed-signal modeling and simulation, AMS reuse and MEMS design, using new modeling languages such as VHDL-AMS, Verilog-AMS, Modelica and analog/mixed-signal extensions to SystemC. UML is the de facto standard for software development covering the early development stages of requirement analysis and system specification; the UML-based system specification and design contributions address the latest results in hot-topic areas such as system profiling, performance analysis and the application of UML to complex HW/SW embedded systems and SoC design. C/C++-based HW/SW system design is entering standard industrial design flows; selected papers cover system modeling, system verification and software generation. The papers from the Specification Formalisms for Proven Design workshop present formal methods for system modeling and design, semantic integrity and formal languages such as ALPHA, HANDLE and B.
The Integrated Circuit (IC) industry has gone without a standardized verification approach for decades. This book defines a uniform, standardizable methodology for verifying the logical behavior of an integrated circuit, whether an I/O controller, a microprocessor, or a complete digital system. It will help engineers and managers responsible for IC development to bring a single, standards-based methodology to their R&D efforts, cutting costs and improving results.
Combinatorial optimisation is a ubiquitous discipline whose usefulness spans vast application domains. The intrinsic complexity of most combinatorial optimisation problems makes classical methods unaffordable in many cases. Obtaining practical solutions to these problems requires the use of metaheuristic approaches that trade completeness for pragmatic effectiveness. Such approaches are able to provide optimal or quasi-optimal solutions to a plethora of difficult combinatorial optimisation problems. The application of metaheuristics to combinatorial optimisation is an active field in which new theoretical developments, new algorithmic models, and new application areas are continuously emerging. This volume presents recent advances in the area of metaheuristic combinatorial optimisation, with a special focus on evolutionary computation methods. Moreover, it addresses local search methods and hybrid approaches. In this sense, the book includes cutting-edge theoretical, methodological, algorithmic and applied developments in the field, from respected experts and with a sound perspective.
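As a minimal concrete instance of the evolutionary methods discussed (a generic textbook genetic algorithm on the OneMax problem, not an algorithm from this volume):

```python
# Bare-bones genetic algorithm on OneMax: maximize the number of 1s
# in a bit string via tournament selection, one-point crossover and
# per-bit mutation.
import random

random.seed(0)
N, POP, GENS = 30, 40, 60

def fitness(ind):
    return sum(ind)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, N)                   # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [g ^ (random.random() < 1 / N) for g in child]  # mutation
        nxt.append(child)
    pop = nxt
print(max(fitness(ind) for ind in pop))   # approaches N
```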
Over the years, rough set theory has earned a well-deserved reputation as a sound methodology for dealing with imperfect knowledge in a simple yet mathematically rigorous way. This edited volume aims to continue stressing the benefits of applying rough sets in many real-life situations while still keeping an eye on topological aspects of the theory as well as strengthening its linkage with other soft computing paradigms. The volume comprises 11 chapters and is organized into three parts. Part 1 deals with theoretical contributions while Parts 2 and 3 focus on several real-world data mining applications. Chapters authored by pioneers were selected on the basis of fundamental ideas and concepts rather than the thoroughness of the techniques deployed. Academics, scientists and engineers working in the rough set, computational intelligence, soft computing and data mining research areas will find the comprehensive coverage of this book invaluable.
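To make the central notion concrete, here is a toy sketch (data invented) of the lower and upper approximations on which rough set theory rests: a target set is bracketed by the equivalence classes an attribute can distinguish:

```python
# Rough set approximations: objects indiscernible by the attribute
# form equivalence classes; the lower approximation collects classes
# fully inside the target set X, the upper those that touch it.
objects = {1: "a", 2: "a", 3: "b", 4: "b", 5: "c"}   # object: attribute
X = {1, 2, 3}                                         # target concept

classes = {}
for obj, val in objects.items():
    classes.setdefault(val, set()).add(obj)

lower = set().union(*(c for c in classes.values() if c <= X))
upper = set().union(*(c for c in classes.values() if c & X))
print(lower, upper)   # {1, 2} vs {1, 2, 3, 4}: X is "rough"
```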
In recent years, the issue of linkage in genetic and evolutionary algorithms (GEAs) has garnered greater attention and recognition from researchers. Conventional approaches that rely heavily on ad hoc tweaking of parameters to control the search, by balancing the level of exploitation and exploration, are grossly inadequate. As shown in the work reported here, such parameter-tweaking approaches have their limits: they can easily be fooled by trivial or peculiar cases within the class of problems the algorithms are designed to handle. Furthermore, these approaches are usually blind to the interactions between decision variables, thereby disrupting the partial solutions that are being built up along the way.
Condition modelling and control is a technique used to enable decision-making in manufacturing processes, of interest to researchers and practising engineers. Condition Monitoring and Control for Intelligent Manufacturing will appeal to researchers and graduate students in manufacturing, control and engineering, as well as practising engineers in industries such as automotive and packaging manufacturing.
Evolutionary algorithms are sophisticated search methods that have been found to be very efficient and effective in solving complex real-world multi-objective problems where conventional optimization tools fail to work well. Despite the tremendous amount of work done in the development of these algorithms in the past decade, many researchers assume that the optimization problems are deterministic, and uncertainties are rarely examined. The primary motivation of this book is to provide a comprehensive introduction to the design and application of evolutionary algorithms for multi-objective optimization in the presence of uncertainties. In this book, we hope to expose the readers to a range of optimization issues and concepts, and to encourage a greater degree of appreciation of evolutionary computation techniques and the exploration of new ideas that can better handle uncertainties. "Evolutionary Multi-Objective Optimization in Uncertain Environments: Issues and Algorithms" is intended for a wide readership and will be a valuable reference for engineers, researchers, senior undergraduates and graduate students who are interested in the areas of evolutionary multi-objective optimization and uncertainties.
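One standard way to cope with noisy objectives, shown as a generic illustration rather than an algorithm from the book, is to average repeated evaluations of each solution before applying Pareto dominance:

```python
# Pareto dominance under uncertainty: average several noisy
# evaluations per solution, then compare. Objectives are invented.
import random

random.seed(0)

def noisy_eval(x):
    f1 = x ** 2 + random.gauss(0, 0.1)          # objective 1 (minimize)
    f2 = (x - 2) ** 2 + random.gauss(0, 0.1)    # objective 2 (minimize)
    return (f1, f2)

def mean_objectives(x, samples=20):
    evals = [noisy_eval(x) for _ in range(samples)]
    return tuple(sum(v) / samples for v in zip(*evals))

def dominates(fa, fb):
    return all(a <= b for a, b in zip(fa, fb)) and \
           any(a < b for a, b in zip(fa, fb))

fa, fb = mean_objectives(0.0), mean_objectives(1.0)
print(dominates(fa, fb), dominates(fb, fa))   # False False: neither dominates
```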
I am honored and delighted to write the foreword to this very first book about SystemC. It is now an excellent time to summarize what SystemC really is and what it can be used for. The main message in the area of design in the 2001 International Technology Roadmap for Semiconductors (ITRS) is that "cost of design is the greatest threat to the continuation of the semiconductor roadmap." This recent revision of the ITRS describes the major productivity improvements of the last few years as "small block reuse," "large block reuse," and "IC implementation tools." In order to continue to reduce design cost, the required future solutions will be "intelligent test benches" and "embedded system-level methodology." As the new system-level specification and design language, SystemC directly contributes to these two solutions. These will have the biggest impact on future design technology and will reduce system implementation cost. It took SystemC less than two years to emerge as the leader among the many new and well-discussed system-level design languages. In my opinion, this is due to the fact that SystemC adopted object-oriented system-level design, the most promising method already applied by the majority of firms during the last couple of years. Even before the introduction of SystemC, many system designers have attempted to develop executable specifications in C++. These executable functional specifications are then refined to the well-known transaction level, to model the communication of system-level processes.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. This book explains the motivations and the results of a collaborative project whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project already deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology.
This book is the first in a series of three dedicated to advanced topics in Mixed-Signal IC design methodologies. It is one of the results achieved by the Mixed-Signal Design Cluster, an initiative launched in 1998 as part of the TARDIS project, funded by the European Commission within the ESPRIT-IV Framework. This initiative aims to promote the development of new design and test methodologies for Mixed-Signal ICs, and to accelerate their adoption by industrial users. As Microelectronics evolves, Mixed-Signal techniques are gaining significant importance due to the wide range of applications where an analog front-end is needed to drive a complex digital-processing subsystem. In this sense, Analog and Mixed-Signal circuits are recognized as a bottleneck for the market acceptance of Systems-on-Chip, because of the inherent difficulties involved in the design and test of these circuits. In particular, problems arising from the use of a common substrate for analog and digital components are a main limiting factor. The Mixed-Signal Cluster has been formed by a group of 11 Research and Development projects, plus a specific action to promote the dissemination of design methodologies, techniques, and supporting tools developed within the Cluster projects. The whole action, ending in July 2002, was assigned an overall budget of more than 8 million euro.
Introduction to Hardware-Software Co-Design presents a number of issues of fundamental importance for the design of integrated hardware/software products such as embedded, communication, and multimedia systems. This book is a comprehensive introduction to the fundamentals of hardware/software co-design. Co-design is still a new field but one which has substantially matured over the past few years. This book, written by leading international experts, covers all the major topics including: fundamental issues in co-design; hardware/software co-synthesis algorithms; prototyping and emulation; target architectures; compiler techniques; specification and verification; system-level specification. Special chapters describe in detail several leading-edge co-design systems including Cosyma, LYCOS, and Cosmos. Introduction to Hardware-Software Co-Design contains sufficient material for use by teachers and students in an advanced course on hardware/software co-design. It also contains extensive explanation of the fundamental concepts of the subject and the necessary background to bring practitioners up-to-date on this increasingly important topic.
Sigma delta modulation has become a very useful and widely applied technique for high performance Analog-to-Digital (A/D) conversion of narrow band signals. Through the use of oversampling and negative feedback, the quantization errors of a coarse quantizer are suppressed in a narrow signal band in the output of the modulator. Bandpass sigma delta modulation is well suited for A/D conversion of narrow band signals modulated on a carrier, as occurs in communication systems such as AM/FM receivers and mobile phones. Due to the nonlinearity of the quantizer in the feedback loop, a sigma delta modulator may exhibit input signal dependent stability properties. The same combination of the nonlinearity and the feedback loop complicates the stability analysis. In Bandpass Sigma Delta Modulators, the describing function method is used to analyze the stability of the sigma delta modulator. The linear gain model commonly used for the quantizer fails to predict small signal stability properties and idle patterns accurately. In Bandpass Sigma Delta Modulators an improved model for the quantizer is introduced, extending the linear gain model with a phase shift. Analysis shows that the phase shift of a sampled quantizer is in fact a phase uncertainty. Stability analysis of sigma delta modulators using the extended model allows accurate prediction of idle patterns and calculation of small-signal stability boundaries for loop filter parameters. A simplified rule of thumb is derived and applied to bandpass sigma delta modulators. The stability properties have a considerable impact on the design of single-loop, one-bit, high-order continuous-time bandpass sigma delta modulators. The continuous-time bandpass loop filter structure should have sufficient degrees of freedom to implement the desired (small-signal stable) sigma delta modulator behavior. Bandpass Sigma Delta Modulators will be of interest to practicing engineers and researchers in the areas of mixed-signal and analog integrated circuit design.
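As a flavour of the oversampling-plus-feedback mechanism, here is a first-order lowpass modulator in discrete time (a deliberately simple sketch, not one of the book's continuous-time bandpass designs):

```python
# First-order sigma delta modulator: an integrator plus 1-bit
# quantizer in a feedback loop pushes quantization error out of the
# signal band; averaging the bitstream recovers the input.
import numpy as np

def first_order_sdm(x):
    v, y = 0.0, np.empty_like(x)
    for n, xn in enumerate(x):
        v += xn - (y[n - 1] if n else 0.0)   # integrate the error
        y[n] = 1.0 if v >= 0 else -1.0       # coarse 1-bit quantizer
    return y

fs, f0 = 1024, 5                              # heavy oversampling of a slow tone
t = np.arange(fs) / fs
y = first_order_sdm(0.5 * np.sin(2 * np.pi * f0 * t))
print(np.mean(y.reshape(-1, 32), axis=1)[:5]) # decimated output tracks the sine
```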
A Designer's Guide to VHDL Synthesis is intended both for design engineers who want to use VHDL-based logic synthesis to design ASICs and for managers who need to gain a practical understanding of the issues involved in using this technology. The emphasis is placed more on practical applications of VHDL and synthesis based on actual experiences, rather than on a more theoretical approach to the language. VHDL and logic synthesis tools provide very powerful capabilities for ASIC design, but are also very complex and represent a radical departure from traditional design methods. This situation has made it difficult for both designers and management to get started in using this technology, since a major learning effort and culture change is required. A Designer's Guide to VHDL Synthesis has been written to help design engineers and other professionals successfully make the transition to a design methodology based on VHDL and logic synthesis instead of the more traditional schematic-based approach. While there are a number of texts on the VHDL language and its use in simulation, little has been written from a designer's viewpoint on how to use VHDL and logic synthesis to design real ASIC systems. The material in this book is based on experience gained in successfully using these techniques for ASIC design and relies heavily on realistic examples to demonstrate the principles involved.
You may like...
Recent Trends in Computer-aided… by Saptarshi Chatterjee, Debangshu Dey, … (Paperback): R2,729 (Discovery Miles 27 290)
AutoCAD Electrical 2023 Black Book… by Gaurav Verma, Matt Weber (Hardcover): R1,583 (Discovery Miles 15 830)
Creo Parametric 8.0 Black Book (Colored) by Gaurav Verma, Matt Weber (Hardcover): R2,310 (Discovery Miles 23 100)
SolidWorks Simulation 2022 Black Book… by Gaurav Verma, Matt Weber (Hardcover): R1,774 (Discovery Miles 17 740)