In the summer of 1981 I was asked to consider the possibility of manufacturing a 600,000-transistor microprocessor in 1985. It was clear that the technology would only be capable of manufacturing 100,000-200,000 transistor chips with acceptable yields. The control store ROM occupied approximately half of the chip area, so I considered adding spare rows and columns to increase ROM yield. Laser-programmed polysilicon fuses would be used to switch between good and bad circuits. Since only half the chip area would have redundancy, I was concerned that the increase in yield would not outweigh the increased costs of testing and redundancy programming. The fabrication technology did not yet exist, so I was unable to experimentally verify the benefits of redundancy. When the technology did become available, it would be too late in the development schedule to spend time running test chips. The yield analysis had to be done analytically or by simulation. Analytic yield-analysis techniques did not offer sufficient accuracy for dealing with complex structures. The simulation techniques then available were very labor-intensive and seemed more suitable for redundant memories and other very regular structures [Stapper 80]. I wanted a simulator that would allow me to evaluate the yield of arbitrary redundant layouts; hence I termed such a simulator a layout or yield simulator. Since I was unable to convince anyone to build such a simulator for me, I embarked on the research myself.
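The spare-row idea lends itself to exactly the kind of Monte Carlo yield simulation described here. The sketch below is purely illustrative and is not the author's simulator: the Poisson defect model, the parameter values and the uniform placement of defects are all assumptions made for the example.

```python
import numpy as np

def simulated_yield(rows=512, spare_rows=4, mean_defects=2.0,
                    trials=20_000, rng=None):
    """Monte Carlo yield of a ROM protected by spare rows.

    Defects per chip are Poisson(mean_defects); each defect lands in a
    uniformly random row.  A chip counts as good when all of its
    distinct defective rows can be swapped for spares.  All parameter
    values are invented for illustration.
    """
    rng = rng or np.random.default_rng(0)
    good = 0
    for _ in range(trials):
        n_defects = rng.poisson(mean_defects)
        bad_rows = np.unique(rng.integers(0, rows, size=n_defects))
        if bad_rows.size <= spare_rows:
            good += 1
    return good / trials

print(f"with 4 spare rows: {simulated_yield():.3f}")              # ~0.95
print(f"without spares:    {simulated_yield(spare_rows=0):.3f}")  # ~0.14
```

With two defects expected per chip, yield without redundancy collapses to about e^-2 = 0.14, while four spare rows recover it to roughly 0.95; whether such a gain outweighs the test and repair costs is precisely the trade-off described above.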
A number of optimization problems in the mechanics of space flight, in the motion of walking robots and manipulators, and in quantum physics, economics and biology have an irregular structure: classical variational procedures do not formally make it possible to find optimal controls that, as we explain, have an impulse character. This and other well-known facts lead to the necessity of constructing dynamical models using the concept of a generalized function (Schwartz distribution). The problem of the systematization of such models is very important. In particular, the problem of constructing the general form of linear and nonlinear operator equations in distributions is timely. Another problem is related to the proper determination of solutions of equations that contain nonlinear operations over generalized functions in their description. It is well known that "the value of a distribution at a point" has no meaning. As a result, the problem arises of constructing a concept of stability for generalized processes. Finally, optimization problems for dynamic systems in distributions require optimality conditions. This book contains results that we have obtained in the above-mentioned directions. The aim of the book is to provide, for electrical and mechanical engineers and for mathematicians working in applications, a general and systematic treatment of dynamic systems based on up-to-date mathematical methods, and to demonstrate the power of these methods in solving the dynamics of systems and applied control problems.
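A minimal example of the impulsive control the authors allude to (a standard illustration, not taken from the book) is a system driven by a train of Dirac impulses:

```latex
\dot{x}(t) = f\bigl(x(t)\bigr) + B\bigl(x(t)\bigr)\,u(t),
\qquad
u(t) = \sum_{k} v_k \,\delta(t - t_k).
```

When B is a constant matrix, each impulse merely produces a jump x(t_k+) = x(t_k-) + B v_k; but when B depends on the state, the product B(x(t)) δ(t - t_k) asks for "the value of a distribution at a point" and has no classical meaning, which is exactly the kind of ambiguity a systematic theory of equations in distributions must resolve.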
System-on-Chip Methodologies & Design Languages brings together a selection of the best papers from three international electronic design language conferences in 2000: the Hardware Description Language Conference and Exhibition (HDLCon), held in the Silicon Valley area of the USA; the Forum on Design Languages (FDL), held in Europe; and the Asia Pacific Chip Design Language (APChDL) Conference. The papers cover a range of topics, including design methods, specification and modeling languages, tool issues, formal verification, simulation and synthesis. The results presented in these papers will help researchers and practicing engineers keep abreast of developments in this rapidly evolving field.
An edited book reporting recent results of AI research in power-plant surveillance and diagnostics, with the quality and applicability of the contributions ensured by a thorough peer-review process. Condition monitoring and early fault detection provide for better efficiency of energy systems at lower cost. Featured topics: analysis of important issues relating to the specification, development and use of systems for computer-assisted plant surveillance and diagnosis; empirical and analytical methods for on-line calibration monitoring and data reconciliation; noise-analysis methods for early fault detection, condition monitoring, leak detection and loose-part monitoring; predictive maintenance and condition-monitoring techniques; and empirical and analytical methods for fault detection and recognition.
Test functions (fault detection, diagnosis, error correction, repair, etc.) that are applied concurrently, while the system continues its intended function, are defined as on-line testing. In its expanded scope, on-line testing includes the design of concurrent error-checking subsystems that can themselves be self-checking, fail-safe systems that continue to function correctly even after an error occurs, reliability monitoring, and self-test and fault-tolerant designs. On-Line Testing for VLSI contains a selected set of articles that discuss many of the modern aspects of on-line testing as faced today. The contributions are largely derived from recent IEEE International On-Line Testing Workshops. Guest editors Michael Nicolaidis, Yervant Zorian and Dhiraj Pradhan organized the articles into six chapters. In the first chapter the editors introduce a large number of approaches with an expanded bibliography in which some references date back to the sixties. On-Line Testing for VLSI is an edited volume of original research comprising invited contributions by leading researchers.
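As a toy illustration of concurrent error checking (a generic parity scheme, not an example from the book): every stored word carries a check bit, and the check runs on every access, concurrently with normal operation.

```python
# Even-parity concurrent error detection (illustrative sketch).

def encode(word: int) -> int:
    parity = bin(word).count("1") & 1   # 1 if the word has an odd number of ones
    return (word << 1) | parity         # appended bit makes the total even

def decode(coded: int) -> tuple[int, bool]:
    word, parity = coded >> 1, coded & 1
    ok = (bin(word).count("1") & 1) == parity   # recompute and compare
    return word, ok

c = encode(0b1011_0010)
assert decode(c) == (0b1011_0010, True)
assert decode(c ^ 0b100)[1] is False    # injected single-bit error is caught
```

A self-checking version, in the sense used above, would additionally guard the checker itself, for instance with two-rail encoding, so that a fault in the checking logic is also detected.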
The recent boom in the mobile telecommunication market has captured the interest of almost all electronics and communication companies worldwide. New applications arise every day, more and more countries are covered by digital cellular systems, and competition between the several providers has caused prices to drop rapidly. The creation of this essentially new market would not have been possible without the appearance of small, low-power, high-performance and certainly low-cost mobile terminals. The evolution in microelectronics has played a dominant role here by creating digital signal processing (DSP) chips with ever more computing power and by combining the discrete components of the RF front-end on a few ICs. This work is situated in this last area, i.e. the study of the full integration of the RF transceiver on a single die. Furthermore, in order to be compatible with the digital processing technology, a standard CMOS process without tuning, trimming or post-processing steps must be used. This should flatten the road towards the ultimate goal: the single-chip mobile phone. The local oscillator (LO) frequency synthesizer poses some major problems for integration and is the subject of this work. The first, and also the largest, part of this text discusses the design of the Voltage-Controlled Oscillator (VCO). The general phase-noise theory of LC oscillators is presented, and the concept of effective resistance and capacitance is introduced to characterize and compare the performance of different LC tanks.
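For orientation, the first-order relations behind such a tank comparison are the standard ones (textbook results, not formulas quoted from the book):

```latex
f_0 = \frac{1}{2\pi\sqrt{LC}},
\qquad
Q = \frac{\omega_0 L}{R_s},
\qquad
R_p \approx Q^2 R_s ,
```

where R_s is the series loss resistance of the integrated inductor and R_p the equivalent parallel resistance of the tank at resonance. Since oscillator phase noise falls roughly as 1/Q^2 (Leeson's heuristic), characterizing the tank's effective resistance is central to a fully integrated CMOS VCO.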
Many interesting design trends are shown by the six papers on operational amplifiers (Op Amps). Firstly, there is the line of stand-alone Op Amps using a bipolar IC technology which combines high frequency and high voltage. This line is represented in papers by Bill Gross and Derek Bowers. Bill Gross shows an improved high-frequency compensation technique for a high-quality three-stage Op Amp. Derek Bowers improves the gain and frequency behaviour of the stages of a two-stage Op Amp. Both papers also present trends in current-mode feedback Op Amps. Low-voltage bipolar Op Amp design is presented by Jeroen Fonderie. He shows how multipath nested Miller compensation can be applied to turn rail-to-rail input and output stages into high-quality low-voltage Op Amps. Two papers on CMOS Op Amps by Michael Steyaert and Klaas Bult show how high-speed and high-gain VLSI building blocks can be realised. Without departing from a single-stage OTA structure with a folded-cascode output, a thorough high-frequency design technique and a gain-boosting technique contributed to the high speed and the high gain achieved with these Op Amps. Finally, Rinaldo Castello shows us how to provide output power with CMOS buffer amplifiers. The combination of class-A and class-AB stages in a multipath nested Miller structure provides the required linearity and bandwidth.
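The gain-boosting technique mentioned for these OTAs can be summarized by the standard first-order result (a textbook relation, not a formula from the papers): an auxiliary amplifier of gain A_add regulates the cascode transistor's gate, multiplying the output resistance,

```latex
R_{out} \approx g_m r_o^2 \, A_{add},
\qquad
A_v \approx g_{m1} R_{out},
```

so the DC gain grows by roughly the gain of the added stage while the fast single-stage signal path is preserved.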
The contents of this book are an expanded treatment of a set of presentations given at the first IEEE Workshop on Signal Propagation on Interconnects, held in Travemünde, Germany, May 14-16, 1997. Traditional VLSI-based cost and complexity measures have principally involved transistor counts and chip area. Yet with the increase in clock frequencies, signal propagation has become an issue of major concern. At present the emergence of systems on silicon faces designers with a new challenge: how to guarantee signal integrity while propagating high-speed signals between embedded cores on a chip. Thus, interconnects are becoming a significant limiter of future system performance. The elements involved are mainly transmission lines, but also other interconnect devices like vias and packages. The electrical phenomena that have to be investigated, as for example delay and crosstalk, are governed by electromagnetic theory. Consequently, even in digital circuits there are large sections in which the signals can no longer be considered logical ones and zeros but must be treated as analog waveforms. To complicate matters, the description of subcircuits by ordinary differential equations is inadequate in many instances. Only the use of partial differential equations can guarantee sufficiently accurate results. Yet this would unfortunately increase the complexity of simulation and design tremendously. Therefore, new approaches need to be developed.
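The partial differential equations in question are, in the simplest case, the classical telegrapher's equations of a lossy transmission line (standard theory, quoted here for orientation):

```latex
\frac{\partial v}{\partial x} = -R\,i - L\,\frac{\partial i}{\partial t},
\qquad
\frac{\partial i}{\partial x} = -G\,v - C\,\frac{\partial v}{\partial t},
```

with R, L, G and C the per-unit-length resistance, inductance, conductance and capacitance of the line. Delay and crosstalk estimates for on-chip interconnect derive from these equations and from their coupled multi-conductor generalizations.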
One of the grand challenges in the nano-scopic computing era is guaranteeing robustness. Robust computing system design is confronted with quantum-physical, probabilistic, and even biological phenomena, and guaranteeing high reliability is much more difficult than ever before. Scaling devices down to the level of single-electron operation will bring forth new challenges due to probabilistic effects and uncertainty in guaranteeing 'zero-one' based computing. Minuscule devices imply billions of devices on a single chip, which may help mitigate the challenge of uncertainty by replication and redundancy. However, such device densities will create a design and validation nightmare with their sheer scale.
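The replication-and-redundancy remedy can be made concrete with classic triple modular redundancy (TMR), sketched below as a plain illustration (TMR is a long-standing technique, not this book's specific proposal):

```python
from collections import Counter

def tmr_vote(replicas):
    """Majority vote over replicated module outputs (n = 3 gives TMR).

    A single faulty replica is masked; if no majority exists, the fault
    is at least detected.  Purely illustrative.
    """
    value, count = Counter(replicas).most_common(1)[0]
    if count <= len(replicas) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

assert tmr_vote([1, 1, 0]) == 1   # the faulty replica is out-voted
```

At the densities discussed above, the open question is validating billions of such redundant structures, not merely affording them.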
The main intention of this book is to give an impression of the state of the art in system-level memory management (data transfer and storage) related issues for complex, data-dominated real-time signal and data processing applications. The material is based on research at IMEC in this area in the period 1989-1997. In order to deal with the stringent timing requirements and the data-dominated characteristics of this domain, we have adopted a target architecture style and a systematic methodology to make the exploration and optimization of such systems feasible. Our approach is also very heavily application-driven, which is illustrated by several realistic demonstrators, partly used as red-thread examples in the book. Moreover, the book addresses only the steps above the traditional high-level synthesis (scheduling and allocation) or compilation (traditional or ILP-oriented) tasks. The latter are mainly focussed on scalar operations, scalar streams and data where the internal structure of the complex data types is not exploited, in contrast to the approaches discussed here. The proposed methodologies are largely independent of the level of programmability in the data-path and controller, so they are valuable for the realisation of both hardware and software systems. Our target domain consists of signal and data processing systems which deal with large amounts of data.
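A flavour of the source-level data-transfer optimizations such a methodology explores (a generic illustration, not one of the book's demonstrators): fusing two loops removes an entire pass over a large intermediate array that would otherwise travel to and from off-chip memory.

```python
# Loop fusion as a data-transfer optimization (illustrative sketch).
N = 100_000
a = [float(i) for i in range(N)]

# Before: the intermediate array `tmp` is fully written, then re-read.
tmp = [2.0 * a[i] for i in range(N)]
out = [tmp[i] + 1.0 for i in range(N)]

# After fusion: each element is produced and consumed immediately,
# so the large intermediate never needs to be stored.
out_fused = [2.0 * a[i] + 1.0 for i in range(N)]

assert out == out_fused
```

In a real design flow the payoff is measured in off-chip accesses, and hence energy, which is why such transformations sit above traditional scheduling and allocation.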
The area of analog integrated circuits is facing some serious challenges due to the ongoing trends towards low supply voltages, low power consumption and high-frequency operation. The situation is becoming even more complicated by the fact that many transfer functions have to be tunable or controllable. A promising approach to facing these challenges is given by the class of dynamic translinear circuits, which are, as a consequence, receiving increasing interest. Several different names are used in the literature: log-domain, exponential state-space, current-mode companding, instantaneous companding, tanh-domain, sinh-domain, polynomial state-space, square-root domain and translinear filters. In fact, all these groups are (overlapping) subclasses of the overall class of dynamic translinear circuits. Research Perspectives on Dynamic Translinear and Log-Domain Circuits is a compilation of research findings in this growing field. It comprises ten contributions from recognized dynamic-translinear researchers in Europe and North America. Research Perspectives on Dynamic Translinear and Log-Domain Circuits is an edited volume of original research.
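The common root of all these subclasses is the exponential device law and the translinear principle (standard results, summarized here for orientation):

```latex
I_C = I_S \, e^{V_{BE}/V_T},
\qquad
\prod_{\text{cw}} I_i = \prod_{\text{ccw}} I_j ,
```

the second relation holding around any closed loop of junctions with equal numbers of clockwise and counter-clockwise elements. Dynamic translinear circuits extend the principle by admitting capacitor currents into the loop, so that linear (and even nonlinear) differential equations are implemented with strictly exponential devices.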
There are three outstanding points of this book. First, for the first time it presents a collective point of view on the role of the artificial-intelligence paradigm in logic design. Second, the book reveals new horizons for logic design tools aimed at the technologies of the near future. Finally, the contributors of the book are twenty recognized leaders in the field, drawn from seven research centres. The chapters of the book have been carefully reviewed by equally qualified experts. All contributors are experienced in practical electronic design and in teaching engineering courses. Thus, the book's style is accessible to graduate students, practising engineers and researchers.
Design reuse is not just a topic of research but a real industrial necessity in the microelectronics domain, and it is thus driving the competitiveness of related areas such as telecommunications and automotive. Most companies have already dedicated a department or a central unit that turns design reuse into reality. All major EDA conferences include a track on the topic, and specific conferences have even been established in this area, both in the USA and in Europe. Virtual Components Design and Reuse presents a selection of articles giving a mature and consolidated perspective on design reuse from different points of view. The authors stem from all relevant areas: research and academia, IP providers, EDA vendors and industry. Some classical topics in design reuse, like specification and generation of components, IP retrieval and cataloguing, or interface customisation, are revisited and discussed in depth. Moreover, new hot topics are presented, among them IP quality, platform-based reuse, software IP, IP security, business models for design reuse, and major initiatives like the MEDEA EDA Roadmap.
Memory Design Techniques for Low Energy Embedded Systems centers on one of the most pressing problems in chip design for embedded applications. It guides the reader through different memory organizations and technologies, and it reviews the most successful strategies for optimizing them in the power-performance plane.
Contributions on UML address the application of UML in the specification of embedded HW/SW systems. C-Based System Design embraces the modeling of operating systems, modeling with different models of computation, generation of test patterns, and experiences from case studies with SystemC. Analog and Mixed-Signal Systems covers rules for solving general modeling problems in VHDL-AMS, modeling of multi-nature systems, synthesis, and modeling of mixed-signal systems with SystemC. Languages for formal methods are addressed by contributions on formal specification and refinement of hybrid, embedded and real-time systems.
Hardware Design and Petri Nets presents a summary of the state of the art in the application of Petri nets to the design of digital systems and circuits. The area of hardware design has traditionally been a fertile field for research in concurrency and Petri nets. Many new ideas about the modelling and analysis of concurrent systems, and Petri nets in particular, originated in the theory of asynchronous digital circuits. Similarly, the theory and practice of digital circuit design have always recognized Petri nets as a powerful and easy-to-understand modelling tool. The ever-growing demand in the electronics industry for design automation to build various types of computer-based systems creates many opportunities for Petri nets to establish their role as a formal backbone of future tools for constructing systems that are increasingly distributed, concurrent and asynchronous. Petri nets have already proved very effective in supporting algorithms for solving key problems in the synthesis of hardware control circuits. However, since the front end to any realistic design flow in the future is likely to rely on more pragmatic Hardware Description Languages (HDLs), such as VHDL and Verilog, it is crucial that Petri nets are well interfaced to such languages. Hardware Design and Petri Nets is divided into five parts, which cover aspects of behavioural modelling, analysis and verification, synthesis from Petri nets and STGs, design environments based on high-level Petri nets and HDLs, and finally performance analysis using Petri nets. Hardware Design and Petri Nets serves as an excellent reference source and may be used as a text for advanced courses on the subject.
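To make the modelling idea concrete, here is a minimal Petri-net interpreter (an illustrative sketch, not a tool from the book): places hold tokens, and a transition may fire when every one of its input places is marked, which already suffices to model a simple asynchronous handshake.

```python
# Minimal Petri-net interpreter (illustrative sketch).
net = {
    # transition: (input places, output places)
    "req": (["idle"], ["waiting"]),
    "ack": (["waiting"], ["idle"]),
}
marking = {"idle": 1, "waiting": 0}

def enabled(t):
    ins, _ = net[t]
    return all(marking[p] > 0 for p in ins)

def fire(t):
    assert enabled(t), f"transition {t} is not enabled"
    ins, outs = net[t]
    for p in ins:
        marking[p] -= 1     # consume one token per input place
    for p in outs:
        marking[p] += 1     # produce one token per output place

fire("req"); fire("ack")    # one complete handshake cycle
print(marking)              # {'idle': 1, 'waiting': 0}
```

The Signal Transition Graphs (STGs) central to the synthesis chapters are Petri nets of exactly this kind, with transitions labelled by rising and falling signal edges.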
Synthesis of Finite State Machines: Functional Optimization is one of two monographs devoted to the synthesis of Finite State Machines (FSMs). This volume addresses functional optimization, whereas the second addresses logic optimization. By functional optimization we mean here the body of techniques that compute all permissible sequential functions for a given topology of interconnected FSMs and select a 'best' sequential function out of the permissible ones. The result is a symbolic description of the FSM representing the chosen sequential function. By logic optimization we mean here the steps that convert a symbolic description of an FSM into a hardware implementation, with the goal of optimizing objectives such as area, testability and performance. Synthesis of Finite State Machines: Functional Optimization is divided into three parts. The first part presents some preliminary definitions, theories and techniques related to the exploration of behaviors of FSMs. The second part presents an implicit algorithm for exact state minimization of incompletely specified finite state machines (ISFSMs), and an exhaustive presentation of explicit and implicit algorithms for the binate covering problem. The third part addresses the computation of permissible behaviors at a node of a network of FSMs and the related minimization problems of non-deterministic finite state machines (NDFSMs). Key themes running through the book are the exploration of behaviors contained in a non-deterministic FSM (NDFSM), and the representation of combinatorial problems arising in FSM synthesis by means of Binary Decision Diagrams (BDDs). Synthesis of Finite State Machines: Functional Optimization will be of interest to researchers and designers in logic synthesis, CAD and design automation.
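For contrast with the implicit, BDD-based algorithms developed in the book, the classical explicit idea behind state minimization fits in a few lines. The sketch below handles only completely specified machines; minimizing the incompletely specified machines (ISFSMs) treated in the book is fundamentally harder and leads to the binate covering problem.

```python
# Partition-refinement minimization of a completely specified Moore
# machine (illustrative sketch): delta[(s, a)] is the next state,
# out[s] the output of state s.

def minimize(states, inputs, delta, out):
    blocks = {}
    for s in states:                       # initial split: by output
        blocks.setdefault(out[s], set()).add(s)
    partition = list(blocks.values())

    def block_of(s):
        return next(i for i, b in enumerate(partition) if s in b)

    changed = True
    while changed:
        changed = False
        for block in partition:
            for a in inputs:
                groups = {}                # split by successor block
                for s in block:
                    groups.setdefault(block_of(delta[(s, a)]), set()).add(s)
                if len(groups) > 1:
                    partition.remove(block)
                    partition.extend(groups.values())
                    changed = True
                    break
            if changed:
                break
    return partition

states = ["A", "B", "C"]
inputs = [0, 1]
delta = {("A", 0): "B", ("A", 1): "C", ("B", 0): "B", ("B", 1): "C",
         ("C", 0): "C", ("C", 1): "C"}
out = {"A": 0, "B": 0, "C": 1}
print(minimize(states, inputs, delta, out))   # A and B merge: [{'A','B'}, {'C'}]
```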
High-Level Synthesis for Real-Time Digital Signal Processing is a comprehensive reference work for researchers and practising ASIC design engineers. It focuses on methods for compiling complex, low-to-medium-throughput DSP systems, and on the implementation of these methods in the CATHEDRAL-II compiler. The emergence of independent silicon foundries, the reduced price of silicon real estate and the shortened processing turn-around time bring silicon technology within reach of system houses. Even for low volumes, digital systems on application-specific integrated circuits (ASICs) are becoming an economically meaningful alternative to traditional boards with analogue and digital commodity chips. ASICs cover the application region where inefficiencies inherent to general-purpose components cannot be tolerated. However, full-custom, handcrafted ASIC design is often not affordable in this competitive market. Long design times, a high development cost for a low production volume, the lack of silicon designers and the lack of suitable design facilities are inherent difficulties of manual full-custom chip design. To overcome these drawbacks, complex systems have to be integrated in ASICs much faster and without losing too much efficiency in silicon area and operation speed compared to handcrafted chips. The gap between system design and silicon design can only be bridged by new computer-aided design (CAD) technology. The idea of a silicon compiler, translating a behavioural system specification directly into silicon, was born from the awareness that the ability to fabricate chips is indeed outrunning the ability to design them. At this moment, CAD is one order of magnitude behind schedule. Conceptual CAD is the keyword to mastering the design complexity in ASIC design and the topic of this book.
A critical step in the design of a DSP system is to identify, for each of its components (DSP kernels), an implementation architecture that provides the desired degree of flexibility/programmability and optimises the area-delay-power parameters. The book covers the entire solution space, comprising both hardware multiplier-based and multiplier-less architectures that offer varying degrees of programmability. For each of the implementation styles, several algorithmic and architectural transformations are proposed so as to optimally implement weighted-sum-based DSP kernels over the area-delay-power space. VLSI Synthesis of DSP Kernels presents the following: six different target implementation styles - programmable DSP-based implementation; programmable processors with no dedicated hardware multiplier; implementation using hardware multiplier(s) and adder(s); Distributed Arithmetic (DA)-based implementation; Residue Number System (RNS)-based implementation; and multiplier-less implementation (using adders and shifters) for fixed-coefficient DSP kernels. For each of the implementation styles it gives a description and analysis of several algorithmic and architectural transformations aimed at one or more of reduced area, higher performance and lower power; automated and semi-automated techniques for applying each of these transformations; and a classification of the transformations based on the properties that they exploit and their encapsulation in a design framework. A methodology is presented that uses the framework to systematically explore the application of these transformations depending on the characteristics of the algorithm and the target implementation style. VLSI Synthesis of DSP Kernels is essential reading for designers of both hardware- and software-based DSP systems, developers of IP modules for DSP applications, EDA tool developers, and researchers and managers interested in getting a comprehensive overview of current trends and future challenges in optimal implementations of DSP kernels. It will also be suitable for graduate students specialising in the area of VLSI digital signal processing.
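The multiplier-less style for fixed-coefficient kernels rests on rewriting each constant multiplication as shifts plus additions or subtractions; a tiny illustration with invented coefficients:

```python
# Fixed-coefficient multiplies as shift-and-add networks (illustrative).

def times10(x: int) -> int:
    return (x << 3) + (x << 1)   # 10 = 8 + 2: two shifts, one adder

def times15(x: int) -> int:
    return (x << 4) - x          # 15 = 16 - 1: signed-digit recoding saves adders

assert times10(7) == 70 and times15(7) == 105
```

Minimizing the number of such add/subtract/shift operations across all the coefficients of a kernel is one of the transformations explored for this implementation style.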
This book contains selected contributions from the 6th CIRP International Seminar on Computer-Aided Tolerancing, which was held on 22-24 March, 1999, at the University of Twente, Enschede, The Netherlands. This volume presents the theory and application of consistent tolerancing. Until recently CADCAM systems did not even address the issue of tolerances and focused purely on nominal geometry. Therefore, CAD data was only of limited use for the downstream processes. The latest generation of CADCAM systems incorporates functionality for tolerance specification. However, the lack of consistency in existing tolerancing standards and everyday tolerancing practice still lead to ill-defined products, excessive manufacturing costs and unexpected failures. Research and improvement of education in tolerancing are hot items today. Global Consistency of Tolerances gives an excellent overview of the recent developments in the field of Computer-Aided Tolerancing, including such topics as tolerance specification; tolerance analysis; tolerance synthesis; tolerance representation; geometric product specification; functional product analysis; statistical tolerancing; education of tolerancing; computational metrology; tolerancing standards; and industrial applications and CAT systems. This book is well suited to users of new generation CADCAM systems who want to use the available tolerancing possibilities properly. It can also be used as a starting point for research activities.
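As a pointer to what statistical tolerancing involves, compare the two textbook stack-up rules for a chain of independent dimensions (standard formulas, not specific to this volume):

```python
import math

# Worst-case vs. root-sum-square (statistical) tolerance stack-up for
# three contributors, tolerances in millimetres (invented values).
tols = [0.10, 0.05, 0.20]
worst_case  = sum(tols)                              # 0.35 mm
statistical = math.sqrt(sum(t * t for t in tols))    # ~0.23 mm
print(worst_case, round(statistical, 3))
```

The statistical bound is tighter because independent deviations rarely all line up, which is exactly the kind of argument a consistent tolerancing theory must make precise.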
This book provides a solid and uniform derivation of the various properties Bezier and B-spline representations have, and shows the beauty of the underlying rich mathematical structure. The book focuses on the core concepts of Computer Aided Geometric Design and provides a clear and illustrative presentation of the basic principles, as well as a treatment of advanced material including multivariate splines, some subdivision techniques and constructions of free form surfaces with arbitrary smoothness. The text is beautifully illustrated with many excellent figures to emphasize the geometric constructive approach of this book.
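One of those constructive principles fits in a few lines: de Casteljau's algorithm evaluates a Bezier curve purely by repeated linear interpolation of its control points (a standard CAGD algorithm, sketched here as an illustration).

```python
def de_casteljau(points, t):
    """Evaluate the Bezier curve with the given control points at t."""
    pts = [tuple(map(float, p)) for p in points]
    while len(pts) > 1:          # one round of linear interpolation
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic example: endpoints are interpolated, the middle point shapes the arc.
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))   # (1.0, 1.0)
```

The same repeated-interpolation view underlies subdivision, which is why the algorithm is both numerically robust and geometrically illuminating.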
Design is a fundamental, creative human activity. This certainly applies to the design of artefacts, the realisation of which has to meet many constraints and ever-rising criteria. The world in which we live today is enormously influenced by the human race. Over the last century, these artefacts have dramatically changed the living conditions of humans. The present wealth in very large parts of the world depends on it. All the ideas for better and new artefacts brought forward by humans have gone through the minds of designers, who have turned them into feasible concepts and subsequently transformed them into realistic product models. The designers have been, still are, and will remain the leading 'change agents' in the physical world. Manufacturability of artefacts has always played a significant role in design. In pre-industrial manufacturing, the blacksmith held the many design and realisation aspects of a product in one hand. The synthesis of the design and manufacturing aspects took place, almost implicitly, in the head of one man. All the knowledge and the skills were stored in one person. Education and training took place along the lines of many years of apprenticeship. When production volumes increased - 'assembling to measure' was no longer tolerated and production efficiency became essential - design, process planning, production planning and fabrication became separate concerns. The designers created their own world, separated from the production world. They argued that restrictions on the freedom of designing would badly influence their creativity in design.
Electronic Chips & Systems Design Languages outlines and describes the latest advances in design languages. The challenge of System-on-a-Chip (SOC) design requires designers to work in a multi-lingual environment which is becoming increasingly difficult to master. It is therefore crucial for them to learn, almost in real time, from the experiences of their colleagues in the use of design languages, and from how these languages have evolved to cope with system design. System designers, as well as students aiming to become system designers, often do not have the time to attend all the scientific events where they could learn the necessary information. This book brings them a selected digest of the best contributions and industrial-strength case studies. All the relevant levels of abstraction, from informal user requirements down to implementation specifications, are addressed by different contributors. The author, together with colleague authors who provide valuable additional experience, presents examples of actual industrial applications. Furthermore, the academic concepts presented in this book provide sound theory for student readers; they are up to date and thus offer suitable grounding for Ph.D. students.
Rapid Prototyping of Application Specific Signal Processors presents leading-edge research that focuses on the design methodology, infrastructure support and scalable architectures developed by the $150 million DARPA/US Department of Defense RASSP program. The contributions to this edited work include an introductory overview chapter that explains the origin, concepts and status of this effort. The RASSP program is a multi-year DARPA/Tri-Service initiative intended to dramatically improve the process by which complex digital systems, particularly embedded signal processors, are designed, manufactured, upgraded and supported. The program was originally driven by military applications for signal processing. The requirements of military applications for real-time signal processing are typically more demanding than those of commercial applications, but the time gap between the technology employed in advanced military prototypes and in commercial products is narrowing rapidly. The research on methodologies, infrastructure and architectures presented in this book is applicable to commercial signal processing systems that are in design now or will be developed before the end of the decade. Rapid Prototyping of Application Specific Signal Processors is a valuable reference for developers of embedded digital systems, particularly systems engineers for signal processing systems (such as digital TV, biomedical image processing and telecommunications) and for military contractors who are developing signal processing systems. This book will also be of interest to managers charged with responsibility for creating and maintaining environments and infrastructures for developing large embedded digital systems. The chief value for managers will be the definition of methods and processes that reduce development time and cost.
With the advent of portable and autonomous computing systems, power consumption has emerged as a focal point in many research projects, commercial systems and DoD platforms. One current research initiative which drew much attention to this area is the Power Aware Computing and Communications (PAC/C) program sponsored by DARPA. Many of the chapters in this book include results from work that has been supported by the PAC/C program. The performance of computer systems has been improving tremendously while the size and weight of such systems has been constantly shrinking. The capacity of batteries relative to their size and weight has also been improving, but at a rate much slower than the rate of improvement in computer performance and the rate of shrinking in computer size. The relation between the power consumption of a computer system and its performance and size is a complex one, very much dependent on the specific system and the technology used to build it. We do not need a complex argument, however, to be convinced that energy, and power, which is the rate of energy consumption, are becoming critical components of computer systems in general, and of portable and autonomous systems in particular. Most of the early research on power consumption in computer systems addressed the issue of minimizing power in a given platform, which usually translates into minimizing energy consumption, and thus longer battery life.
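The distinction the authors draw between energy and power, together with the dominant dynamic term in CMOS, is captured by the standard relations (textbook formulas, given for orientation):

```latex
E = \int_0^T P(t)\,dt,
\qquad
P_{\text{dyn}} \approx \alpha\, C\, V_{dd}^{2}\, f,
```

where α is the switching activity, C the switched capacitance, V_dd the supply voltage and f the clock frequency. Battery life is roughly capacity divided by average power drawn, so lowering V_dd is the single most effective lever, at the cost of speed.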