This book presents the results of an international workshop on Modelling and Analysis of Arms Control Problems held in Spitzingsee near Munich in October 1985 under the joint sponsorship of NATO's Scientific Affairs Division and the Volkswagen Foundation. The idea for this workshop evolved in 1983, as a consequence of discussions in the annual Systems Science Seminar at the Computer Science Department of the Federal Armed Forces University Munich on the topic of Quantitative Assessment in Arms Control. There was wide agreement among the contributors to that seminar and its participants that efforts to assess the potential contributions of systems and decision sciences, as well as systems analysis and mathematical modelling, to arms control issues should be expanded and that a forum should be provided for this activity. It was further agreed that such a forum should include political scientists and policy analysts working in the area of arms control.
This volume introduces innovative power estimation and optimization methodologies to support the design of low power embedded systems based on high-performance VLIW microprocessors. A VLIW processor is a (generally) pipelined processor that can execute, in each clock cycle, a set of explicitly parallel operations.
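To make the VLIW idea concrete, here is a minimal C sketch (illustrative only: the four-slot layout, opcodes and register numbers are invented for this example and are not taken from the book) of one very long instruction word whose operation slots all issue in the same clock cycle, with the compiler rather than the hardware responsible for guaranteeing that the slots are mutually independent:

```c
#include <stdio.h>

/* Hypothetical 4-issue VLIW word; opcodes and slot layout are made up
 * for illustration and do not model any particular processor. */
typedef enum { OP_NOP, OP_ADD, OP_MUL, OP_LOAD } opcode_t;

typedef struct {
    opcode_t op;            /* operation placed in this issue slot */
    int dst, src1, src2;    /* register operands */
} slot_t;

typedef struct {
    slot_t slot[4];         /* all four slots issue in the same clock cycle */
} vliw_word_t;

int main(void) {
    /* The compiler has already checked that these operations are independent. */
    vliw_word_t w = {{
        { OP_LOAD, 1, 10, 0 },   /* slot 0: r1 = mem[r10]          */
        { OP_ADD,  2,  3, 4 },   /* slot 1: r2 = r3 + r4           */
        { OP_MUL,  5,  6, 7 },   /* slot 2: r5 = r6 * r7           */
        { OP_NOP,  0,  0, 0 }    /* slot 3: no parallel work found */
    }};
    printf("operations issued this cycle: %d\n",
           (int)(sizeof w.slot / sizeof w.slot[0]));
    return 0;
}
```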
A number of fundamental topics in the field of high-performance clock distribution networks are covered in this book. High Performance Clock Distribution Networks is composed of ten contributions from authors at academic and industrial institutions. Topically, these contributions can be grouped into three primary areas. The first topic area deals with exploiting the localized nature of clock skew. The second topic area deals with the implementation of these clock distribution networks, while the third topic area considers more long-range aspects of next-generation clock distribution networks. High Performance Clock Distribution Networks presents a number of interesting strategies for designing and building high performance clock distribution networks. Many aspects of the ideas presented in these contributions are being developed and applied today in next-generation high-performance microprocessors.
Mixed-Mode Simulation and Analog Multilevel Simulation addresses the problems of simulating entire mixed analog/digital systems in the time-domain. A complete hierarchy of modeling and simulation methods for analog and digital circuits is described. Mixed-Mode Simulation and Analog Multilevel Simulation also provides a chronology of the research in the field of mixed-mode simulation and analog multilevel simulation over the last ten to fifteen years. In addition, it provides enough information to the reader so that a prototype mixed-mode simulator could be developed using the algorithms in this book. Mixed-Mode Simulation and Analog Multilevel Simulation can also be used as documentation for the SPLICE family of mixed-mode programs as they are based on the algorithms and techniques described in this book.
In the summer of 1981 I was asked to consider the possibility of manufacturing a 600,000 transistor microprocessor in 1985. It was clear that the technology would only be capable of manufacturing 100,000-200,000 transistor chips with acceptable yields. The control store ROM occupied approximately half of the chip area, so I considered adding spare rows and columns to increase ROM yield. Laser-programmed polysilicon fuses would be used to switch between good and bad circuits. Since only half the chip area would have redundancy, I was concerned that the increase in yield would not outweigh the increased costs of testing and redundancy programming. The fabrication technology did not yet exist, so I was unable to experimentally verify the benefits of redundancy. When the technology did become available, it would be too late in the development schedule to spend time running test chips. The yield analysis had to be done analytically or by simulation. Analytic yield analysis techniques did not offer sufficient accuracy for dealing with complex structures. The simulation techniques then available were very labor-intensive and seemed more suitable for redundant memories and other very regular structures [Stapper 80]. I wanted a simulator that would allow me to evaluate the yield of arbitrary redundant layouts, hence I termed such a simulator a layout or yield simulator. Since I was unable to convince anyone to build such a simulator for me, I embarked on the research myself.
From the Foreword... Modern digital signal processing applications provide a large challenge to the system designer. Algorithms are becoming increasingly complex, and yet they must be realized with tight performance constraints. Nevertheless, these DSP algorithms are often built from many constituent canonical subtasks (e.g., IIR and FIR filters, FFTs) that can be reused in other subtasks. Design is then a problem of composing these core entities into a cohesive whole to provide both the intended functionality and the required performance. In order to organize the design process, there have been two major approaches. The top-down approach starts with an abstract, concise, functional description which can be quickly generated. On the other hand, the bottom-up approach starts from a detailed low-level design where performance can be directly assessed, but where the requisite design and interface detail take a long time to generate. In this book, the authors show a way to effectively resolve this tension by retaining the high-level conciseness of VHDL while parameterizing it to get good fit to specific applications through reuse of core library components. Since they build on a pre-designed set of core elements, accurate area, speed and power estimates can be percolated to high-level design routines which explore the design space. Results are impressive, and the cost model provided will prove to be very useful. Overall, the authors have provided an up-to-date approach, doing a good job at getting performance out of high-level design. The methodology provided makes good use of extant design tools, and is realistic in terms of the industrial design process. The approach is interesting in its own right, but is also of direct utility, and it will give the existing DSP CAD tools a highly competitive alternative. The techniques described have been developed within ARPA's RASSP (Rapid Prototyping of Application Specific Signal Processors) project, and should be of great interest there, as well as to many industrial designers. Professor Jonathan Allen, Massachusetts Institute of Technology
System-on-Chip Methodologies & Design Languages brings together a selection of the best papers from three international electronic design language conferences in 2000. The conferences are the Hardware Description Language Conference and Exhibition (HDLCon), held in the Silicon Valley area of USA; the Forum on Design Languages (FDL), held in Europe; and the Asia Pacific Chip Design Language (APChDL) Conference. The papers cover a range of topics, including design methods, specification and modeling languages, tool issues, formal verification, simulation and synthesis. The results presented in these papers will help researchers and practicing engineers keep abreast of developments in this rapidly evolving field.
Test functions (fault detection, diagnosis, error correction, repair, etc.) that are applied concurrently while the system continues its intended function are defined as on-line testing. In its expanded scope, on-line testing includes the design of concurrent error checking subsystems that can be themselves self-checking, fail-safe systems that continue to function correctly even after an error occurs, reliability monitoring, and self-test and fault-tolerant designs. On-Line Testing for VLSI contains a selected set of articles that discuss many of the modern aspects of on-line testing as faced today. The contributions are largely derived from recent IEEE International On-Line Testing Workshops. Guest editors Michael Nicolaidis, Yervant Zorian and Dhiraj Pradhan organized the articles into six chapters. In the first chapter the editors introduce a large number of approaches with an expanded bibliography in which some references date back to the sixties. On-Line Testing for VLSI is an edited volume of original research comprising invited contributions by leading researchers.
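As a toy illustration of concurrent error checking (a minimal sketch of my own, not an example from the book), the following C fragment stores a data word together with an even-parity bit and re-checks the parity on every read, so a single-bit transient fault is detected while the system keeps operating:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t data;
    uint8_t  parity;                 /* even parity over the data bits */
} checked_word_t;

static uint8_t even_parity(uint32_t v) {
    uint8_t p = 0;
    while (v) { p ^= (uint8_t)(v & 1u); v >>= 1; }
    return p;
}

static checked_word_t encode(uint32_t v) {
    checked_word_t w = { v, even_parity(v) };
    return w;
}

/* Concurrent check: flag a detected error instead of halting the system. */
static uint32_t read_checked(const checked_word_t *w, bool *error) {
    *error = (even_parity(w->data) != w->parity);
    return w->data;
}

int main(void) {
    checked_word_t w = encode(0xC0FFEEu);
    w.data ^= 1u << 7;               /* inject a single-bit transient fault */
    bool err;
    uint32_t v = read_checked(&w, &err);
    printf("value 0x%X, error detected: %d\n", (unsigned)v, (int)err);
    return 0;
}
```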
For someone with a hammer the whole world looks like a nail. Within the last 10-13 years Binary Decision Diagrams (BDDs) have become the state-of-the-art data structure in VLSI CAD for representation and manipulation of Boolean functions. Today, BDDs are widely used and in the meantime have also been integrated in commercial tools, especially in the area of verification and synthesis. The interest in BDDs results from the fact that the data structure is generally accepted as providing a good compromise between conciseness of representation and efficiency of manipulation. With an increasing number of applications, also in non-CAD areas, classical methods to handle BDDs are being improved, and new questions and problems evolve and have to be solved. The book should help the reader who is not familiar with BDDs (or DDs in general) to get a quick start. On the other hand it will discuss several new aspects of BDDs, e.g. with respect to minimization and implementation of a package. This will help people working with BDDs (in industry or academia) to keep informed about recent developments in this area.
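For readers meeting BDDs for the first time, a small C sketch of the underlying data structure may help. It is deliberately simplified (a non-reduced, non-shared diagram under an assumed fixed variable order; real BDD packages add node hashing, reduction rules and complement edges) and merely represents f(x0, x1) = x0 AND x1, evaluating it by following one path from the root to a terminal:

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct node {
    int var;                  /* variable index; -1 marks a terminal node */
    bool value;               /* terminal value, used only when var == -1 */
    struct node *low, *high;  /* cofactors for var = 0 and var = 1        */
} node_t;

static bool eval(const node_t *n, const bool assignment[]) {
    while (n->var != -1)      /* follow one root-to-terminal path */
        n = assignment[n->var] ? n->high : n->low;
    return n->value;
}

int main(void) {
    node_t f0 = { -1, false, NULL, NULL };   /* constant 0 */
    node_t f1 = { -1, true,  NULL, NULL };   /* constant 1 */
    node_t x1 = { 1, false, &f0, &f1 };      /* tests x1 */
    node_t x0 = { 0, false, &f0, &x1 };      /* root: tests x0; f = x0 AND x1 */

    bool a[2] = { true, true };
    printf("f(1,1) = %d\n", (int)eval(&x0, a));  /* prints 1 */
    return 0;
}
```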
This edited book reports recent results of AI research in power plant surveillance and diagnostics, with the quality and applicability of the contributions ensured through a thorough peer-reviewing process. Condition monitoring and early fault detection provide for better efficiency of energy systems at lower costs. Featured topics: analysis of important issues relating to the specification, development and use of systems for computer-assisted plant surveillance and diagnosis; empirical and analytical methods for on-line calibration monitoring and data reconciliation; noise analysis methods for early fault detection, condition monitoring, leak detection and loose part monitoring; predictive maintenance and condition monitoring techniques; and empirical and analytical methods for fault detection and recognition.
The recent boom in the mobile telecommunication market has captured the interest of almost all electronic and communication companies worldwide. New applications arise every day, more and more countries are covered by digital cellular systems, and the competition between the several providers has caused prices to drop rapidly. The creation of this essentially new market would not have been possible without the appearance of small, low-power, high-performance and certainly low-cost mobile terminals. The evolution in microelectronics has played a dominant role in this by creating digital signal processing (DSP) chips with more and more computing power and combining the discrete components of the RF front-end on a few ICs. This work is situated in this last area, i.e. the study of the full integration of the RF transceiver on a single die. Furthermore, in order to be compatible with the digital processing technology, a standard CMOS process without tuning, trimming or post-processing steps must be used. This should flatten the road towards the ultimate goal: the single-chip mobile phone. The local oscillator (LO) frequency synthesizer poses some major problems for integration and is the subject of this work. The first, and also the largest, part of this text discusses the design of the Voltage Controlled Oscillator (VCO). The general phase noise theory of LC-oscillators is presented, and the concept of effective resistance and capacitance is introduced to characterize and compare the performance of different LC-tanks.
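To give a flavour of what the phase noise theory of LC-oscillators quantifies, the classic Leeson approximation is a common starting point (quoted here as general background; it is an assumption, not necessarily the exact formulation used in this work):

$$
\mathcal{L}(\Delta\omega) \;\approx\; 10\log_{10}\!\left[\frac{2FkT}{P_{\mathrm{sig}}}\left(1+\left(\frac{\omega_{0}}{2Q\,\Delta\omega}\right)^{2}\right)\left(1+\frac{\Delta\omega_{1/f^{3}}}{\lvert\Delta\omega\rvert}\right)\right],
$$

where $F$ is the excess noise factor, $k$ Boltzmann's constant, $T$ the absolute temperature, $P_{\mathrm{sig}}$ the carrier power, $\omega_{0}$ the oscillation frequency, $Q$ the loaded quality factor of the LC-tank and $\Delta\omega$ the offset from the carrier.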
Many interesting design trends are shown by the six papers on operational amplifiers (Op Amps). Firstly, there is the line of stand-alone Op Amps using a bipolar IC technology which combines high frequency and high voltage. This line is represented in papers by Bill Gross and Derek Bowers. Bill Gross shows an improved high-frequency compensation technique of a high quality three-stage Op Amp. Derek Bowers improves the gain and frequency behaviour of the stages of a two-stage Op Amp. Both papers also present trends in current-mode feedback Op Amps. Low-voltage bipolar Op Amp design is presented by Jeroen Fonderie. He shows how multipath nested Miller compensation can be applied to turn rail-to-rail input and output stages into high quality low-voltage Op Amps. Two papers on CMOS Op Amps by Michael Steyaert and Klaas Bult show how high-speed and high-gain VLSI building blocks can be realised. Without departing from a single-stage OTA structure with a folded cascode output, a thorough high-frequency design technique and a gain-boosting technique contributed to the high speed and the high gain achieved with these Op Amps. Finally, Rinaldo Castello shows us how to provide output power with CMOS buffer amplifiers. The combination of class A and AB stages in a multipath nested Miller structure provides the required linearity and bandwidth.
After a brief introduction to low-power VLSI design, the design space of ASIP instruction set architectures (ISAs) is introduced with a special focus on important features for digital signal processing. Based on the degrees of freedom offered by this design space, a consistent ASIP design flow is proposed: this design flow starts with a given application and uses incremental optimization of the ASIP hardware, of ASIP coprocessors and of the ASIP software by using a top-down approach and by applying application-specific modifications on all levels of the design hierarchy. A broad range of real-world signal processing applications serves as a vehicle to illustrate each design decision and provides a hands-on approach to ASIP design. Finally, two complete case studies demonstrate the feasibility and the efficiency of the proposed methodology and quantitatively evaluate the benefits of ASIPs in an industrial context.
Reasoning in Boolean Networks provides a detailed treatment of recent research advances in algorithmic techniques for logic synthesis, test generation and formal verification of digital circuits. The book presents the central idea of approaching design automation problems for logic-level circuits by specific Boolean reasoning techniques. While Boolean reasoning techniques have been a central element of two-level circuit theory for many decades, Reasoning in Boolean Networks describes a basic reasoning methodology for multi-level circuits. This leads to a unified view on two-level and multi-level logic synthesis. The presented reasoning techniques are applied to various CAD problems to demonstrate their usefulness for today's industrially relevant problems. Reasoning in Boolean Networks provides lucid descriptions of basic algorithmic concepts in automatic test pattern generation, logic synthesis and verification and elaborates their intimate relationship to provide further intuition and insight into the subject. Numerous examples are provided for ease in understanding the material. Reasoning in Boolean Networks is intended for researchers in logic synthesis, VLSI testing and formal verification as well as for integrated circuit designers who want to enhance their understanding of basic CAD methodologies.
Johan H. Huijsing. This book contains 18 tutorial papers concentrated on 3 topics, each topic being covered by 6 papers. The topics are: Low-Noise, Low-Power, Low-Voltage; Mixed-Mode Design with CAD Tools; and Voltage, Current, and Time References. The papers of this book were written by top experts in the field, currently working at leading European and American universities and companies. These papers are the reviewed versions of the papers presented at the Workshop on Advances in Analog Circuit Design, which was held in Villach, Austria, 26-28 April 1995. The chairman of the Workshop was Dr. Franz Dielacher from Siemens, Austria. The program committee consisted of Johan H. Huijsing from the Delft University of Technology, Prof. Willy Sansen from the Catholic University of Leuven, and Dr. Rudy J. van de Plassche from Philips Eindhoven. This book is the fourth of a series dedicated to the design of analog circuits. The topics which were covered earlier were: Operational Amplifiers; Analog-to-Digital Converters; Analog Computer-Aided Design; Mixed A/D Circuit Design; Sensor Interface Circuits; Communication Circuits; Low-Power, Low-Voltage; Integrated Filters; and Smart Power. As the Workshop will be continued year by year, a valuable series of topics will be built up from all the important areas of analog circuit design. I hope that this book will help designers of analog circuits to improve their work and to speed it up.
Today more than 90% of all programmable processors are employed in embedded systems. This number is actually not surprising, considering that in a typical home you might find one or two PCs equipped with high-performance standard processors, and probably dozens of embedded systems, including electronic entertainment, household, and telecom devices, each of them equipped with one or more embedded processors. The question arises why programmable processors are so popular in embedded system design. The answer lies in the fact that they help to narrow the gap between chip capacity and designer productivity. Embedded processor cores are nothing but one step further towards improved design reuse, just along the lines of standard cells in logic synthesis and macrocells in RTL synthesis in earlier times of IC design. Additionally, programmable processors make it possible to migrate functionality from hardware to software, resulting in a further improved reuse factor as well as greatly increased flexibility. The LISA processor design platform (LPDP) presented in Architecture Exploration for Embedded Processors with LISA addresses recent design challenges and results in highly satisfactory solutions. The LPDP covers all major high-level phases of embedded processor design and is capable of automatically generating almost all required software development tools from processor models in the LISA language. It supports a profiling-based, stepwise refinement of processor models down to cycle-accurate and even RTL synthesis models. Moreover, it elegantly avoids model inconsistencies otherwise omnipresent in traditional design flows. The next step in design reuse is already in sight: SoC platforms, i.e., partially pre-designed multi-processor templates that can be quickly tuned towards given applications, thereby guaranteeing a high degree of hardware/software reuse in system-level design. Consequently, the LPDP approach goes even beyond processor architecture design. The LPDP solution explicitly addresses SoC integration issues by offering comfortable APIs for external simulation environments as well as clever solutions for the problem of both efficient and user-friendly heterogeneous multiprocessor debugging.
This new book on Analog Circuit Design contains the revised contributions of all the tutorial speakers of the eighth AACD workshop (Advances in Analog Circuit Design), which was held at Nice, France on March 23-25, 1999. The workshop was organized by Yves Leduc of TI Nice, France. The program committee consisted of Willy Sansen, K.U. Leuven, Belgium, Han Huijsing, T.U. Delft, The Netherlands and Rudy van de Plassche, T.U. Eindhoven, The Netherlands. The aim of these AACD workshops is to bring together a restricted group of about 100 people who are personally advancing the frontiers of analog circuit design to brainstorm on new possibilities and future developments in a restricted number of fields. They are concentrated around three topics. In each topic six speakers give a tutorial presentation. Eighteen papers are thus included in this book. The topics of 1999 are: (X)DSL and other communication systems; RF MOST models; Integrated filters and oscillators. The other topics, which have been covered before, are: 1992: Operational amplifiers; A-D converters; Analog CAD. 1993: Mixed-mode A/D design; Sensor interfaces; Communication circuits. 1994: Low-power low-voltage design; Integrated filters; Smart power. 1995: Low-noise low-power low-voltage design; Mixed-mode design with CAD tools; Voltage, current and time references. 1996: RF CMOS circuit design; Bandpass sigma-delta and other data converters; Translinear circuits. 1997: RF A-D converters; Sensor and actuator interfaces; Low-noise oscillators, PLLs and synthesizers. 1998: 1-Volt electronics; Design and implementation of mixed-mode systems; Low-noise amplifiers and RF power amplifiers for telecommunications.
Our society is faced with an increasing dependence on computing systems, not only in high-tech consumer applications but also in areas (e.g., air and railway traffic control, nuclear plant control, aircraft and car control) where a failure can be critical for the safety of human beings. Unfortunately, it is accepted that large digital systems cannot be fault-free. Some faults may be attributed to inaccuracies during development, while others can come from external causes such as environmental stress. Radiation, electromagnetic interference and power glitches are some of the most common causes of transient faults.
Co-Design is the set of emerging techniques which allows for the simultaneous design of hardware and software. In many cases where the application is very demanding in terms of various performances (time, area, power consumption), trade-offs between dedicated hardware and dedicated software are becoming increasingly difficult to decide upon in the early stages of a design. Verification techniques - such as simulation or proof techniques - that have proven necessary in hardware design must be dramatically adapted to the simultaneous verification of software and hardware. Describing the latest tools available for both Co-Design and Co-Verification of systems, Hardware/Software Co-Design and Co-Verification offers a complete look at this evolving set of procedures for CAD environments. The book considers all trade-offs that have to be made when co-designing a system. Several models are presented for determining the optimum solution to any co-design problem, including partitioning, architecture synthesis and code generation. When deciding on trade-offs, one of the main factors to be considered is the flow of communication, especially to and from the outside world. This involves the modeling of communication protocols. An approach to the synthesis of interface circuits in the context of co-design is presented. Other chapters present a co-design-oriented flexible component database and retrieval methods; a case study of an Ethernet bridge designed using LOTOS and co-design methodologies; and finally a programmable user interface based on monitors. Hardware/Software Co-Design and Co-Verification will help designers and researchers to understand these latest techniques in system design and as such will be of interest to all involved in embedded system design.
A Designer's Guide to VHDL Synthesis is intended both for design engineers who want to use VHDL-based logic synthesis to design ASICs and for managers who need to gain a practical understanding of the issues involved in using this technology. The emphasis is placed more on practical applications of VHDL and synthesis based on actual experiences, rather than on a more theoretical approach to the language. VHDL and logic synthesis tools provide very powerful capabilities for ASIC design, but are also very complex and represent a radical departure from traditional design methods. This situation has made it difficult for both designers and management to get started in using this technology, since a major learning effort and culture change is required. A Designer's Guide to VHDL Synthesis has been written to help design engineers and other professionals successfully make the transition to a design methodology based on VHDL and logic synthesis instead of the more traditional schematic-based approach. While there are a number of texts on the VHDL language and its use in simulation, little has been written from a designer's viewpoint on how to use VHDL and logic synthesis to design real ASIC systems. The material in this book is based on experience gained in successfully using these techniques for ASIC design and relies heavily on realistic examples to demonstrate the principles involved.
Contributions on UML address the application of UML in the specification of embedded HW/SW systems. C-Based System Design embraces the modeling of operating systems, modeling with different models of computation, generation of test patterns, and experiences from case studies with SystemC. Analog and Mixed-Signal Systems covers rules for solving general modeling problems in VHDL-AMS, modeling of multi-nature systems, synthesis, and modeling of mixed-signal systems with SystemC. Languages for formal methods are addressed by contributions on formal specification and refinement of hybrid, embedded and real-time systems.
It is well known that embedded systems have to be implemented efficiently. This requires that processors optimized for certain application domains are used in embedded systems. Such an optimization requires a careful exploration of the design space, including a detailed study of cost/performance tradeoffs. In order to avoid time-consuming assembly language programming during design space exploration, compilers are needed. In order to analyze the effect of various software or hardware configurations on the performance, retargetable compilers are needed that can generate code for numerous different potential hardware configurations. This book provides a comprehensive and up-to-date overview of the fast developing area of retargetable compilers for embedded systems. It describes a large set of important tools as well as applications of retargetable compilers at different levels in the design flow. Retargetable Compiler Technology for Embedded Systems is mostly self-contained and requires only fundamental knowledge in software and compiler design. It is intended to be a key reference for researchers and designers working on software, compilers, and processor optimization for embedded systems.
The main intention of this book is to give an impression of the state-of-the-art in system-level memory management (data transfer and storage) related issues for complex data-dominated real-time signal and data processing applications. The material is based on research at IMEC in this area in the period 1989-1997. In order to deal with the stringent timing requirements and the data-dominated characteristics of this domain, we have adopted a target architecture style and a systematic methodology to make the exploration and optimization of such systems feasible. Our approach is also very heavily application driven, which is illustrated by several realistic demonstrators, partly used as red-thread examples in the book. Moreover, the book addresses only the steps above the traditional high-level synthesis (scheduling and allocation) or compilation (traditional or ILP oriented) tasks. The latter are mainly focussed on scalar or scalar stream operations and data where the internal structure of the complex data types is not exploited, in contrast to the approaches discussed here. The proposed methodologies are largely independent of the level of programmability in the data-path and controller so they are valuable for the realisation of both hardware and software systems. Our target domain consists of signal and data processing systems which deal with large amounts of data.
One of the grand challenges in the nano-scopic computing era is guaranteeing robustness. Robust computing system design is confronted with quantum physical, probabilistic, and even biological phenomena, and guaranteeing high reliability is much more difficult than ever before. Scaling devices down to the level of single-electron operation will bring forth new challenges due to probabilistic effects and uncertainty in guaranteeing 'zero-one' based computing. Minuscule devices imply billions of devices on a single chip, which may help mitigate the challenge of uncertainty by replication and redundancy. However, such device densities will create a design and validation nightmare due to their sheer scale.
The area of analog integrated circuits is facing some serious challenges due to the ongoing trends towards low supply voltages, low power consumption and high-frequency operation. The situation is becoming even more complicated by the fact that many transfer functions have to be tunable or controllable. A promising approach to facing these challenges is given by the class of dynamic translinear circuits, which are, as a consequence, receiving increasing interest. Several different names are used in the literature: log-domain, exponential state-space, current-mode companding, instantaneous companding, tanh-domain, sinh-domain, polynomial state-space, square-root domain and translinear filters. In fact, all these groups are (overlapping) subclasses of the overall class of dynamic translinear circuits. Research Perspectives on Dynamic Translinear and Log-Domain Circuits is a compilation of research findings in this growing field. It comprises ten contributions, coming from recognized dynamic-translinear researchers in Europe and North America. Research Perspectives on Dynamic Translinear and Log-Domain Circuits is an edited volume of original research.