Large system complexities and operation under tight timing constraints in rapidly shrinking technologies have made it extremely important to ensure correct temporal behavior of modern-day digital circuits, both before and after fabrication. Research in (pre-fabrication) timing verification and (post-fabrication) delay fault testing has evolved along largely disjoint lines in spite of the fact that they share many basic concepts. A Unified Approach for Timing Verification and Delay Fault Testing applies concepts developed in the context of delay fault testing to path sensitization, which allows an accurate timing analysis mechanism to be developed. This path sensitization strategy is further applied for efficient delay fault diagnosis and delay fault coverage estimation. A new path sensitization strategy called Signal Stabilization Time Analysis (SSTA) has been developed, based on the fact that primitive path delay faults (PDFs) determine the stabilization time of the circuit outputs. This analysis has been used to develop a feasible method of identifying the primitive PDFs in a general multi-level logic circuit. An approach to determine the maximum circuit delay using this primitive PDF identification mechanism is also presented. The Primitive PDF Identification-based Timing Analysis (PITA) approach is proved to determine the maximum floating mode circuit delay exactly under any component delay model, and provides several advantages over previous floating mode timing analyzers. A framework for the diagnosis of circuit failures caused by distributed path delay faults is also presented, along with a metric to quantify the diagnosability of a path delay fault for a given test. Finally, the book presents a very realistic metric for delay fault coverage which accounts for delay fault size distributions and is applicable to any delay fault model. A Unified Approach for Timing Verification and Delay Fault Testing will be of interest to university and industry researchers in timing analysis and delay fault testing as well as EDA tool development engineers and design verification engineers dealing with timing issues in ULSI circuits. The book should also be of interest to digital designers and others interested in knowing the state of the art in timing verification and delay fault testing.
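For orientation, the sketch below (not the book's PITA algorithm) computes the purely topological worst-case delay of a tiny, invented combinational netlist by propagating arrival times in topological order; floating mode analyses such as PITA tighten this bound by discarding paths that no input vector can actually exercise. The gate names and delays are made up for illustration.

```python
# Minimal sketch: purely topological worst-case delay of a combinational netlist,
# i.e. the pessimistic bound that path-sensitization approaches then tighten.
# The netlist, delays and arrival times are invented for illustration.
from graphlib import TopologicalSorter

gate_inputs = {            # gate -> list of fan-in signals
    "g1": ["a", "b"],
    "g2": ["b", "c"],
    "g3": ["g1", "g2"],
    "out": ["g3", "c"],
}
gate_delay = {"g1": 2.0, "g2": 1.0, "g3": 3.0, "out": 1.0}
arrival = {"a": 0.0, "b": 0.0, "c": 0.0}   # primary-input arrival times

for gate in TopologicalSorter(gate_inputs).static_order():
    if gate in arrival:                    # primary input, already known
        continue
    arrival[gate] = gate_delay[gate] + max(arrival[s] for s in gate_inputs[gate])

print("topological worst-case delay:", arrival["out"])   # 6.0 for this netlist
```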
The combination of VLSI process technology and real-time digital signal processing (DSP) has brought a breakthrough in information technology. This rapid technical (r)evolution allows the integration of ever more complex systems on a single chip. However, these technology and integration advances have not been matched by an increase in design productivity, causing technology to leapfrog the design of integrated circuits (ICs). The success of these emerging 'systems-on-a-chip' (SOC) can only be guaranteed by a systematic and formal design methodology, possibly automated in computer-aided design (CAD) tools, and effective re-use of existing intellectual property (IP). In this book, a contribution is made to the modeling, timing verification and analysis, and the automatic synthesis of integrated real-time DSP systems. Existing literature in these three domains is extensively reviewed, making this book the first to give a comprehensive overview of existing techniques. The emphasis throughout the book is on supporting and guaranteeing the real-time aspects and constraints of these systems, which avoids time-consuming design iterations and safeguards the ever shrinking time-to-market. The proposed 'Multi-Thread Graph' (MTG) system model features two layers, unifying a (timed) Petri net and a control-data flow graph. Its unique interface between both models offers the best of both worlds and introduces an extra abstraction level hiding the operation-level details which are unnecessary during global system exploration. The formulated timing analysis and verification approach supports the calculation of temporal separation between different MTG entities as well as realistic performance metrics for highly concurrent systems. The synthesis methodology focuses on managing the task-level concurrency (i.e. task scheduling), as part of a proposed overall system design meta flow. It emphasizes performance and timing aspects ('timeliness'), while minimizing processor cost overhead as driven by high-level cost estimators. The approach is new in the abstraction level it employs, and in its optimal hybrid dynamic/static scheduling policy which, driven by cost estimators, selects the scheduling policy for each behavior. At the low level, RTOS synthesis generates an application-specific scheduler for the software component. The proposed synthesis methodology (at the task level) is asserted to yield the best results when employed before the hardware/software partition is made. At this level, the distinction between the two is minimal, such that all steps in the design trajectory can be shared, thereby reducing the system cost significantly and allowing tighter satisfaction of timing/performance constraints. From the Foreword: This book is the first comprehensive treatment of software and, more generally, system generation (synthesis) techniques based on formal models. It can be used as a very valuable reference to understand the development of the field of embedded software design, and of system design and synthesis in general. The book offers invaluable help to researchers and practitioners in the field of embedded system design. Prof. Alberto Sangiovanni-Vincentelli, Edgar L. and Harold H. Buttner Professor of Electrical Engineering and Computer Science, University of California, Berkeley, Chief Technology Advisor, Cadence Design Systems.
What is clearly needed in verification techniques and technology is the equivalent of a synthesis productivity breakthrough. In the second edition of Writing Testbenches, Bergeron raises the verification level of abstraction by introducing coverage-driven, constrained-random, transaction-level, self-checking testbenches, all made possible through the introduction of hardware verification languages (HVLs), such as e from Verisity and OpenVera from Synopsys. The state-of-the-art methodologies described in Writing Testbenches will contribute greatly to the much-needed equivalent of a synthesis breakthrough in verification productivity. I not only highly recommend this book, but I also think it should be required reading by anyone involved in the design and verification of today's ASICs, SoCs and systems. Harry Foster, Chief Architect, Verplex Systems, Inc. From the preface: If you survey hardware design groups, you will learn that between 60% and 80% of their effort is now dedicated to verification.
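As a rough illustration of the methodology (in plain Python, not e or OpenVera), the sketch below generates constrained-random stimulus, checks a stand-in design against an independent reference model, and records a tiny functional-coverage count. The "DUT" (an 8-bit saturating adder) and the coverage bins are invented for illustration.

```python
# Minimal sketch of a coverage-driven, constrained-random, self-checking testbench
# in plain Python. The DUT and reference model are invented stand-ins.
import random

def dut_saturating_add(a, b):            # pretend this is the design under test
    return min(a + b, 255)

def reference_model(a, b):               # independent golden model for self-checking
    return a + b if a + b <= 255 else 255

coverage = {"no_saturation": 0, "saturation": 0}

random.seed(0)
for _ in range(1000):
    # constrained-random stimulus: operand a biased toward the saturation corner
    a = random.randint(128, 255) if random.random() < 0.5 else random.randint(0, 255)
    b = random.randint(0, 255)
    got, want = dut_saturating_add(a, b), reference_model(a, b)
    assert got == want, f"mismatch for {a}+{b}: {got} != {want}"   # self-checking
    coverage["saturation" if a + b > 255 else "no_saturation"] += 1

print("functional coverage bins:", coverage)
```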
The success of VHDL since it was balloted as an IEEE standard in 1987 may look incomprehensible to the large population of hardware designers who had never heard of Hardware Description Languages before (at least 90% of them), as well as to the few hundred specialists who had been working on these languages for a long time (25 years for some of them). Until 1988, only a very small subset of designers, in a few large companies, were accustomed to describing their designs using a proprietary HDL, or sometimes an HDL inherited from a university when some software environment happened to be developed around it, allowing usability by third parties. A number of benefits were definitely recognized in this practice, such as functional verification of a specification through simulation, first performance evaluation of a tentative design, and sometimes automatic microprogram generation or even automatic high-level synthesis. As there was apparently no market for HDLs, the ECAD vendors did not care about them, start-up companies were seldom able to survive in this area, and large users of proprietary tools were spending more and more people and money just to maintain their internal systems.
Testing techniques for VLSI circuits are undergoing many exciting changes. The predominant method for testing digital circuits consists of applying a set of input stimuli to the IC and monitoring the logic levels at primary outputs. If, for one or more inputs, there is a discrepancy between the observed output and the expected output, then the IC is declared to be defective. A new approach to testing digital circuits, which has come to be known as IDDQ testing, has been actively researched for the last fifteen years. In IDDQ testing, the steady-state supply current, rather than the logic levels at the primary outputs, is monitored. Years of research suggest that IDDQ testing can significantly improve the quality and reliability of fabricated circuits. This has prompted many semiconductor manufacturers to adopt this testing technique, among them Philips Semiconductors, Ford Microelectronics, Intel, Texas Instruments, LSI Logic, Hewlett-Packard, Sun Microsystems, Alcatel, and SGS Thomson. This increase in the use of IDDQ testing should be of interest to three groups of individuals associated with the IC business: product managers and test engineers, CAD tool vendors, and circuit designers. Introduction to IDDQ Testing is designed to educate this community. The authors have summarized in one volume the main findings of more than fifteen years of research in this area.
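The pass/fail decision in IDDQ testing is conceptually simple: for each test vector the device is put into a quiescent state, the supply current is measured, and any reading above a preset limit flags the die as defective. The sketch below illustrates only this decision step; the threshold and current values are invented.

```python
# Minimal sketch of an IDDQ screening decision: any quiescent-current measurement
# above the threshold flags the die as defective. All numbers are invented.
IDDQ_LIMIT_UA = 5.0                      # pass/fail threshold in microamps

def iddq_screen(measurements_ua):
    """Return (passed, failing_vector_indices) for one die."""
    failing = [i for i, iddq in enumerate(measurements_ua) if iddq > IDDQ_LIMIT_UA]
    return (len(failing) == 0, failing)

good_die = [0.8, 1.1, 0.9, 1.0]          # normal CMOS quiescent currents
leaky_die = [0.9, 1.0, 430.0, 1.1]       # vector 2 activates a leakage defect

print(iddq_screen(good_die))             # (True, [])
print(iddq_screen(leaky_die))            # (False, [2])
```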
There is much excitement in the design and verification community about assertion-based design. The question is, who should study assertion-based design? The emphatic answer is, both design and verification engineers. What may be unintuitive to many design engineers is that adding assertions to RTL code will actually reduce design time, while better documenting design intent. Every design engineer should read this book! Design engineers that add assertions to their design will not only reduce the time needed to complete a design, they will also reduce the number of interruptions from verification engineers to answer questions about design intent and to address verification suite mistakes. With design assertions in place, the majority of the interruptions from verification engineers will be related to actual design problems and the error feedback provided will be more useful to help identify design flaws. A design engineer who does not add assertions to the RTL code will spend more time with verification engineers explaining the design functionality and intended interface requirements, knowledge that is needed by the verification engineer to complete the job of testing the design.
Advanced ASIC Chip Synthesis: Using Synopsys (R) Design Compiler (R) and PrimeTime (R) describes the advanced concepts and techniques used for ASIC chip synthesis, formal verification and static timing analysis, using the Synopsys suite of tools. In addition, the entire ASIC design flow methodology targeted for VDSM (Very-Deep-Sub-Micron) technologies is covered in detail. The emphasis of this book is on real-time application of Synopsys tools used to combat various problems seen at VDSM geometries. Readers will be exposed to an effective design methodology for handling complex, sub-micron ASIC designs. Significance is placed on HDL coding styles, synthesis and optimization, dynamic simulation, formal verification, DFT scan insertion, links to layout, and static timing analysis. At each step, problems related to each phase of the design flow are identified, with solutions and work-arounds described in detail. In addition, crucial issues related to layout, which includes clock tree synthesis and back-end integration (links to layout), are also discussed at length. Furthermore, the book contains in-depth discussions on the basics of Synopsys technology libraries and HDL coding styles, targeted towards optimal synthesis solutions. Advanced ASIC Chip Synthesis: Using Synopsys (R) Design Compiler (R) and PrimeTime (R) is intended for anyone who is involved in the ASIC design methodology, starting from RTL synthesis to final tape-out. Target audiences for this book are practicing ASIC design engineers and graduate students undertaking advanced courses in ASIC chip design and DFT techniques. From the Foreword: `This book, written by Himanshu Bhatnagar, provides a comprehensive overview of the ASIC design flow targeted for VDSM technologies using the Synopsys suite of tools. It emphasizes the practical issues faced by the semiconductor design engineer in terms of synthesis and the integration of front-end and back-end tools. Traditional design methodologies are challenged and unique solutions are offered to help define the next generation of ASIC design flows. The author provides numerous practical examples derived from real-world situations that will prove valuable to practicing ASIC design engineers as well as to students of advanced VLSI courses in ASIC design'. Dr Dwight W. Decker, Chairman and CEO, Conexant Systems, Inc., (Formerly, Rockwell Semiconductor Systems), Newport Beach, CA, USA.
A spline is a thin flexible strip composed of a material such as bamboo or steel that can be bent to pass through or near given points in the plane, or in 3-space, in a smooth manner. Mechanical engineers and drafting specialists find such (physical) splines useful in designing and in drawing plans for a wide variety of objects, such as for hulls of boats or for the bodies of automobiles where smooth curves need to be specified. These days, physical splines are largely replaced by computer software that can compute the desired curves (with appropriate encouragement). The same mathematical ideas used for computing "spline" curves can be extended to allow us to compute "spline" surfaces. The application of these mathematical ideas is rather widespread. Spline functions are central to computer graphics disciplines. Spline curves and surfaces are used in computer graphics renderings for both real and imaginary objects. Computer-aided-design (CAD) systems depend on algorithms for computing spline functions, and splines are used in numerical analysis and statistics. Thus the construction of movies and computer games travels side-by-side with the art of automobile design, sail construction, and architecture; and statisticians and applied mathematicians use splines as everyday computational tools, often divorced from graphic images.
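As a small illustration of the computation such software performs, the sketch below fits a natural cubic spline through a handful of arbitrary plane points using SciPy; it is a generic example, not tied to any particular CAD system, and the data points are invented.

```python
# Minimal sketch: fit a natural cubic spline through a few plane points, the
# numerical analogue of bending a physical spline through pegs.
# Requires NumPy and SciPy; the sample points are arbitrary.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y = np.array([0.0, 0.8, -0.2, 1.5, 1.0])

spline = CubicSpline(x, y, bc_type="natural")   # zero curvature at the end points

xs = np.linspace(x[0], x[-1], 9)
print(np.round(spline(xs), 3))        # smooth interpolated values
print(np.round(spline(xs, 1), 3))     # the first derivative is continuous too
```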
Since the early 1980s, CAD frameworks have received a great deal of attention, both in the research community and in the commercial arena. It is generally agreed that CAD framework technology promises much: advanced CAD frameworks can turn collections of individual tools into effective and user-friendly design environments. But how can this promise be fulfilled? CAD Frameworks: Principles and Architecture describes the design and construction of CAD frameworks. It presents principles for building integrated design environments and shows how a CAD framework can be based on these principles. It derives the architecture of a CAD framework in a systematic way, using well-defined primitives for representation. This architecture defines how the many different framework sub-topics, ranging from concurrency control to design flow management, relate to each other and come together into an overall system. The origin of this work is the research and development performed in the context of the Nelsis CAD Framework, which has been a working system for well over eight years, gaining functionality while evolving from one release to the next. The principles and concepts presented in this book have been field-tested in the Nelsis CAD Framework. CAD Frameworks: Principles and Architecture is primarily intended for EDA professionals, both in industry and in academia, but is also valuable outside the domain of electronic design. Many of the principles and concepts presented are also applicable to other design-oriented application domains, such as mechanical design or computer-aided software engineering (CASE). It is thus a valuable reference for all those involved in computer-aided design.
Modeling in Analog Design highlights some of the most pressing issues in the use of modeling techniques for the design of analog circuits. Using models for circuit design gives designers the power to express directly the behaviour of parts of a circuit in addition to using other pre-defined components. There are numerous advantages to this new category of analog behavioral language. In the short term, by favouring top-down design and raising the level of description abstraction, this approach provides greater freedom of implementation and a higher degree of technology independence. In the longer term, analog synthesis and formal optimisation are targeted. Modeling in Analog Design introduces the reader to two main language standards: VHDL-A and MHDL. It goes on to provide in-depth examples of the use of these languages to model analog devices. The final part is devoted to the very important topic of modeling the thermal and electrothermal aspects of devices. This book is essential reading for analog designers using behavioral languages, and for developers of analog CAD tool environments who have to provide the tools used by designers.
Principles of Verilog PLI is a 'how-to' text on the Verilog Programming Language Interface (PLI). The primary focus of the book is on how to use the PLI for problem solving. Both PLI 1.0 and PLI 2.0 are covered. Particular emphasis has been put on adopting a generic step-by-step approach to creating fully functional PLI code. Numerous examples were carefully selected so that a variety of problems can be solved through their use. A separate chapter on the Bus Functional Model (BFM), one of the most widely used commercial applications of the PLI, is included. Principles of Verilog PLI is written for the professional engineer who uses Verilog for ASIC design and verification. It will also be of interest to students who are learning Verilog.
Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world meet every two years to present and document the state of the art of research in Computer Aided Architectural Design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. The Proceedings this year is the eighth in the series. The conference held at Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, Web-based collaboration and information exchange. An overall reading shows that computers in architecture is still a young field, with many exciting results emerging out of both greater understanding of the human processes and information processing needed to support design and also the continuously expanding capabilities of digital technology.
Power supply current monitoring to detect CMOS IC defects during production testing quietly laid down its roots in the mid-1970s. Both Sandia Labs and RCA in the United States and Philips Labs in the Netherlands practiced this procedure on their CMOS ICs. At that time, this practice stemmed simply from an intuitive sense that CMOS ICs showing abnormal quiescent power supply current (IDDQ) contained defects. Later, this intuition was supported by data and analysis in the 1980s by Levi (RADC), Malaiya and Su (SUNY-Binghamton), Soden and Hawkins (Sandia Labs and the University of New Mexico), Jacomino and co-workers (Laboratoire d'Automatique de Grenoble), and Maly and co-workers (Carnegie Mellon University). Interest in IDDQ testing has advanced beyond the data reported in the 1980s and is now focused on applications and evaluations involving larger volumes of ICs that improve quality beyond what can be achieved by previous conventional means. In the conventional style of testing, one attempts to propagate the logic states of the suspected nodes to primary outputs. This is done for all or most nodes of the circuit. For sequential circuits, in particular, the complexity of finding suitable tests is very high. In comparison, the IDDQ test does not observe the logic states, but measures the integrated current that leaks through all gates. In other words, it is like measuring a patient's temperature to determine the state of health. Despite perceived advantages, during the years that followed its initial announcements, skepticism about the practicality of IDDQ testing prevailed. The idea, however, provided a great opportunity to researchers. New results on test generation, fault simulation, design for testability, built-in self-test, and diagnosis for this style of testing have since been reported. After a decade of research, we are definitely closer to practice.
Oversampling techniques based on sigma-delta modulation are widely used to implement the analog/digital interfaces in CMOS VLSI technologies. This approach is relatively insensitive to imperfections in the manufacturing process and offers numerous advantages for the realization of high-resolution analog-to-digital (A/D) converters in the low-voltage environment that is increasingly demanded by advanced VLSI technologies and by portable electronic systems. In The Design of Low-Voltage, Low-Power Sigma-Delta Modulators, an analysis of power dissipation in sigma-delta modulators is presented, and a low-voltage implementation of a digital-audio performance A/D converter based on the results of this analysis is described. Although significant power savings can typically be achieved in digital circuits by reducing the power supply voltage, the power dissipation in analog circuits actually tends to increase with decreasing supply voltages. Oversampling architectures are a potentially power-efficient means of implementing high-resolution A/D converters because they reduce the number and complexity of the analog circuits in comparison with Nyquist-rate converters. In fact, it is shown that the power dissipation of a sigma-delta modulator can approach that of a single integrator with the resolution and bandwidth required for a given application. In this research the influence of various parameters on the power dissipation of the modulator has been evaluated and strategies for the design of a power-efficient implementation have been identified. The Design of Low-Voltage, Low-Power Sigma-Delta Modulators begins with an overview of A/D conversion, emphasizing sigma-delta modulators. It includes a detailed analysis of noise in sigma-delta modulators, analyzes power dissipation in integrator circuits, and addresses practical issues in the circuit design and testing of a high-resolution modulator. The Design of Low-Voltage, Low-Power Sigma-Delta Modulators will be of interest to practicing engineers and researchers in the areas of mixed-signal and analog integrated circuit design.
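A first-order discrete-time sigma-delta modulator can be simulated in a few lines: an integrator and a 1-bit quantizer sit in a feedback loop, and averaging blocks of output bits recovers the slowly varying input. The sketch below is a generic illustration, not the converter designed in the book; the oversampling ratio and test tone are arbitrary.

```python
# Minimal sketch of a first-order discrete-time sigma-delta modulator:
# integrator + 1-bit quantizer in a feedback loop, followed by a crude
# decimation by block averaging. Parameters (OSR, tone) are arbitrary.
import numpy as np

osr, n = 64, 64 * 256                      # oversampling ratio, total samples
x = 0.5 * np.sin(2 * np.pi * np.arange(n) / (osr * 32))  # slow input tone

v = 0.0                                    # integrator state
bits = np.empty(n)
for i in range(n):
    v += x[i] - (bits[i - 1] if i else 0.0)   # integrate input minus fed-back output
    bits[i] = 1.0 if v >= 0 else -1.0         # 1-bit quantizer

# decimate: average each block of OSR one-bit samples to recover the input
decimated = bits.reshape(-1, osr).mean(axis=1)
print("reconstruction error (rms):",
      np.sqrt(np.mean((decimated - x.reshape(-1, osr).mean(axis=1)) ** 2)))
```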
Over the past decade there has been a dramatic change in the role played by design automation for electronic systems. Ten years ago, integrated circuit (IC) designers were content to use the computer for circuit, logic, and limited amounts of high-level simulation, as well as for capturing the digitized mask layouts used for IC manufacture. The tools were only aids to design: the designer could always find a way to implement the chip or board manually if the tools failed or if they did not give acceptable results. Today, however, design technology plays an indispensable role in the design of electronic systems and is critical to achieving time-to-market, cost, and performance targets. In less than ten years, designers have come to rely on automatic or semi-automatic CAD systems for the physical design of complex ICs containing over a million transistors. In the past three years, practical logic synthesis systems that take into account both cost and performance have become a commercial reality and many designers have already relinquished control of the logic netlist level of design to automatic computer aids. To date, only in certain well-defined areas, especially digital signal processing and telecommunications, have higher-level design methods and tools found significant success. However, the forces of time-to-market and growing system complexity will demand the broad-based adoption of high-level, automated methods and tools over the next few years.
Physical Design for Multichip Modules collects together a large body of important research work that has been conducted in recent years in the area of Multichip Module (MCM) design. The material consists of a survey of published results as well as original work by the authors. All major aspects of MCM physical design are discussed, including interconnect analysis and modeling, system partitioning and placement, and multilayer routing. For readers unfamiliar with MCMs, this book presents an overview of the different MCM technologies available today. An in-depth discussion of various recent approaches to interconnect analysis is also presented. Remaining chapters discuss the problems of partitioning, placement, and multilayer routing, with an emphasis on timing performance. For the first time, data from a wide range of sources is integrated to present a clear picture of a new, challenging and very important research area. For students and researchers looking for interesting research topics, open problems and suggestions for further research are clearly stated. Points of interest include:
* Clear overview of MCM technology and its relationship to physical design;
* Emphasis on performance-driven design, with a chapter devoted to recent techniques for rapid performance analysis and modeling of MCM interconnects;
* Different approaches to multilayer MCM routing collected together and compared for the first time;
* Explanation of algorithms that is not overly mathematical, yet is detailed enough to give readers a clear understanding of each approach;
* Quantitative data provided wherever possible for comparison of different approaches;
* A comprehensive list of references to recent literature on MCMs.
What are the design or selection criteria for robots that will be capable of carrying out particular functions? How can robots and machines be installed in work locations to obtain maximum effectiveness? How can their programming be made easier? How can a work location be arranged so as to successfully accommodate automatic machines? Traditionally, these questions have only been answered as a result of long and exhaustive study, involving complex calculations and the use of many sketches and plans. Computers and interactive computer graphics provide the possibility of automating this type of analysis, thus making the task of robot designers and users easier. This volume is concerned with mathematical modelling and graphics representation of robot performance (e.g. their fields of action, their performance index) as a function of their structure, mechanical parts and memory systems. Used in conjunction with operating specifications, such as movement programs and computer-aided design (CAD) databases that describe parts or tools, these performance models can allow the potential of different robots or different models of the same type of robot to be compared, workstations to be organized efficiently, responses to be optimized, and errors to be minimized, and can make off-line programming by computer a real possibility. In the future, it is certain that the appearance of robots designed to monitor their own performance will allow applications and safety conditions to be considerably improved.
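One elementary form of such a performance model is the reachable workspace (field of action) of a manipulator, obtained by sweeping the joint variables through their limits and applying the forward kinematics. The sketch below does this for a hypothetical two-link planar arm; the link lengths and joint limits are invented.

```python
# Minimal sketch of modelling a robot's "field of action": sweep the joint
# angles of a two-link planar arm and collect the reachable end-effector
# positions. Link lengths and joint limits are invented for illustration.
import numpy as np

l1, l2 = 0.5, 0.3                               # link lengths in metres
t1 = np.linspace(-np.pi / 2, np.pi / 2, 91)     # shoulder joint range
t2 = np.linspace(0.0, np.pi, 91)                # elbow joint range
T1, T2 = np.meshgrid(t1, t2)

x = l1 * np.cos(T1) + l2 * np.cos(T1 + T2)      # forward kinematics
y = l1 * np.sin(T1) + l2 * np.sin(T1 + T2)

reach = np.hypot(x, y)
print(f"reach spans {reach.min():.2f} m to {reach.max():.2f} m")
print(f"workspace bounding box: x in [{x.min():.2f}, {x.max():.2f}], "
      f"y in [{y.min():.2f}, {y.max():.2f}]")
```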
3.2 Input Encoding Targeting Two-Level Logic
3.2.1 One-Hot Coding and Multiple-Valued Minimization
3.2.2 Input Constraints and Face Embedding
3.3 Satisfying Encoding Constraints
3.3.1 Definitions
3.3.2 Column-Based Constraint Satisfaction
3.3.3 Row-Based Constraint Satisfaction
3.3.4 Constraint Satisfaction Using Dichotomies
3.3.5 Simulated Annealing for Constraint Satisfaction
3.4 Input Encoding Targeting Multilevel Logic
3.4.1 Kernels and Kernel Intersections
3.4.2 Kernels and Multiple-Valued Variables
3.4.3 Multiple-Valued Factorization
3.4.4 Size Estimation in Algebraic Decomposition
3.4.5 The Encoding Step
3.5 Conclusion
4 Encoding of Symbolic Outputs
4.1 Heuristic Output Encoding Targeting Two-Level Logic
4.1.1 Dominance Relations
4.1.2 Output Encoding by the Derivation of Dominance Relations
4.1.3 Heuristics to Minimize the Number of Encoding Bits
4.1.4 Disjunctive Relationships
4.1.5 Summary
4.2 Exact Output Encoding Targeting Two-Level Logic
4.2.1 Generation of Generalized Prime Implicants
4.2.2 Selecting a Minimum Encodeable Cover
4.2.3 Dominance and Disjunctive Relationships to Satisfy Constraints
4.2.4 Constructing the Optimized Cover
4.2.5 Correctness of the Procedure
4.2.6 Multiple Symbolic Outputs
Leaf Cell and Hierarchical Compaction Techniques presents novel algorithms developed for the compaction of large layouts. These algorithms have been implemented as part of a system that has been used on many industrial designs. The focus of Leaf Cell and Hierarchical Compaction Techniques is three-fold. First, new ideas for compaction of leaf cells are presented. These cells can range from small transistor-level layouts to very large layouts generated by automatic Place and Route tools. Second, new approaches for hierarchical pitchmatching compaction are described and the concept of a Minimum Design is introduced. The system for hierarchical compaction is built on top of the leaf cell compaction engine and uses the algorithms implemented for leaf cell compaction in a modular fashion. Third, a new representation for designs called Virtual Interface, which allows for efficient topological specification and representation of hierarchical layouts, is outlined. The Virtual Interface representation binds all of the algorithms and their implementations for leaf and hierarchical compaction into an intuitive and easy-to-use system. From the Foreword: `...In this book, the authors provide a comprehensive approach to compaction based on carefully conceived abstractions. They describe the design of algorithms that provide true hierarchical compaction based on linear programming, but cut down the complexity of the computations through introduction of innovative representations that capture the provably minimum amount of required information needed for correct compaction. In most compaction algorithms, the complexity goes up with the number of design objects, but in this approach, complexity is due to the irregularity of the design, and hence is often tractable for most designs which incorporate substantial regularity. Here the reader will find an elegant treatment of the many challenges of compaction, and a clear conceptual focus that provides a unified approach to all aspects of the compaction task...' Jonathan Allen, Massachusetts Institute of Technology
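The linear-programming core of constraint-graph compaction can be shown in miniature: choose cell coordinates that minimize the layout width subject to minimum-spacing constraints. The sketch below is only that core, not the book's hierarchical or Virtual Interface machinery; the cell widths and spacing are invented, and SciPy's linprog is used as the solver.

```python
# Minimal sketch of constraint-graph compaction cast as a linear program:
# choose x-coordinates for three cells A, B, C so the row width is minimal
# while minimum-spacing constraints are respected. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

w = {"A": 4.0, "B": 6.0, "C": 3.0}        # cell widths
s = 1.0                                   # required spacing between cells

# variables: [xA, xB, xC, W]; minimise the row width W
c = np.array([0.0, 0.0, 0.0, 1.0])

# constraints written as A_ub @ x <= b_ub:
#   xA + wA + s <= xB,   xB + wB + s <= xC,   xC + wC <= W
A_ub = np.array([
    [1.0, -1.0,  0.0,  0.0],
    [0.0,  1.0, -1.0,  0.0],
    [0.0,  0.0,  1.0, -1.0],
])
b_ub = np.array([-(w["A"] + s), -(w["B"] + s), -w["C"]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
xA, xB, xC, W = res.x
print(f"placement: A@{xA:.1f} B@{xB:.1f} C@{xC:.1f}, width {W:.1f}")  # width 15.0
```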
Codesign for Real-Time Video Applications describes a modern design approach for embedded systems. It combines the design of hardware, software, and algorithms. Traditionally, these design domains are treated separately to reduce the design complexity. Advanced design tools support a codesign of the different domains, which opens an opportunity for exploiting synergetic effects. The design approach is illustrated by the design of a video compression system. It is integrated into the video card of a PC. A VLIW processor architecture is used as the basis of the compression system and popular video compression algorithms (MPEG, JPEG, H.261) are analyzed. A complete top-down design flow is presented and the design tools for each of the design steps are explained. The tools are integrated into an HTML-based design framework. The resulting design data can be directly integrated into the WWW. This is a crucial aspect for supporting distributed design groups. The design data can be directly documented, and cross-referencing in an almost arbitrary way is supported. This provides a platform for information sharing among the different design domains. Codesign for Real-Time Video Applications focuses on the multi-disciplinary aspects of embedded system design. It combines design automation and advanced processor design with an important application domain. A quantitative design approach is emphasized which focuses the design time on the most crucial components, thus enabling a fast and cost-efficient design methodology. This book will be of interest to researchers, designers and managers working in embedded system design.
The Background to the Institute: The NATO Advanced Study Institute (ASI) 'People and Computers - Applying an Anthropocentric Approach to Integrated Production Systems and Organisations' came about after the distribution of a NATO fact sheet to Brunel University, which described the funding of ASIs. The 'embryonic' director of the ASI brought this opportunity to the attention of the group of people (some at Brunel and some from outside) who were together responsible for the teaching and management of the course in Computer Integrated Manufacturing (CIM) in Brunel's Department of Manufacturing and Engineering Systems. This course had been conceived in 1986 and was envisaged as a vehicle for teaching manufacturing engineering students the technology of information integration through project work. While the original idea of the course had also included the organisational aspects of CIM, the human factors questions were not considered. This shortcoming was recognised and the trial run of the course in 1988 contained some lectures on 'people' issues. The course team were therefore well prepared and keen to explore the People, Organisation and Technology (POT) aspects of computer integration, as applied to industrial production. A context was proposed which would allow the inclusion of people from many different backgrounds and which would open up time and space for reflection. The proposal to organise a NATO ASI was therefore welcomed by all concerned.
Switching Theory for Logic Synthesis covers the basic topics of switching theory and logic synthesis in fourteen chapters. Chapters 1 through 5 provide the mathematical foundation. Chapters 6 through 8 include an introduction to sequential circuits, optimization of sequential machines and asynchronous sequential circuits. Chapters 9 through 14 are the main feature of the book. These chapters introduce and explain various topics that make up the subject of logic synthesis: multi-valued input two-valued output function, logic design for PLDs/FPGAs, EXOR-based design, and complexity theories of logic networks. An appendix providing a history of switching theory is included. The reference list consists of over four hundred entries. Switching Theory for Logic Synthesis is based on the author's lectures at Kyushu Institute of Technology as well as seminars for CAD engineers from various Japanese technology companies. Switching Theory for Logic Synthesis will be of interest to CAD professionals and students at the advanced level. It is also useful as a textbook, as each chapter contains examples, illustrations, and exercises.
Although research in architectural synthesis has been conducted for over ten years it has had very little impact on industry. This, in our view, is due to the inability of current architectural synthesizers to provide area-delay competitive (or "optimal") architectures that will support interfaces to analog, asynchronous, and other complex processes. They also fail to incorporate testability. The OASIC (optimal architectural synthesis with interface constraints) architectural synthesizer and the CATREE (computer aided trees) synthesizer demonstrate how these problems can be solved. Traditionally, architectural synthesis is viewed as NP-hard and therefore most research has involved heuristics. OASIC demonstrates, by using an IP approach (using polyhedral analysis), that most input algorithms can be synthesized very quickly into globally optimal architectures. Since a mathematical model is used, complex interface constraints can easily be incorporated and solved. Research in test incorporation has in general been separate from synthesis research. This is due to the fact that traditional test research has been at the gate or lower level of design representation. Nevertheless, as technologies scale down and design complexity scales up, the push to reduce testing times increases. One way to deal with this is to incorporate test strategies early in the design process. The second half of this text examines an approach for integrating architectural synthesis with test incorporation. Research showed that test must be considered during synthesis to provide good architectural solutions which minimize area-delay cost functions.
This book is an extension of one author's doctoral thesis on the false path problem. The work was begun with the idea of systematizing the various solutions to the false path problem that had been proposed in the literature, with a view to determining the computational expense of each versus the gain in accuracy. However, it became clear that some of the proposed approaches in the literature were wrong in that they underestimated the critical delay of some circuits under reasonable conditions. Further, some other approaches were vague and so of questionable accuracy. The focus of the research therefore shifted to establishing a theory (the viability theory) and algorithms which could be guaranteed correct, and then using this theory to justify (or not) existing approaches. Our quest was successful enough to justify presenting the full details in a book. After it was discovered that some existing approaches were wrong, it became apparent that the root of the difficulties lay in the attempts to balance computational efficiency and accuracy by separating the temporal and logical (or functional) behaviour of combinational circuits. This separation is the fruit of several unstated assumptions: first, that one can ignore the logical relationships of wires in a network when considering timing behaviour, and, second, that one can ignore timing considerations when attempting to discover the values of wires in a circuit.
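A concrete miniature helps show why separating timing from logic misleads: in the invented netlist below, the topologically longest-looking path cannot be activated by any single input vector under static sensitization, because it needs the select signal at two opposite values at once. Static sensitization is itself one of the optimistic criteria the book's viability theory replaces; the circuit and the check are purely illustrative.

```python
# Minimal sketch of a "false" path: in this invented netlist the path
# a -> g2 -> g4 -> g5 requires s = 1 at g2 but g1 = NOT s = 1 (i.e. s = 0)
# at g5, so no single input vector statically sensitizes it.
from itertools import product

def evaluate(a, b, s):
    g1 = 1 - s          # NOT s
    g2 = a & s          # AND
    g3 = b & g1         # AND
    g4 = g2 | g3        # OR: g2, g3, g4 together act as a mux (s ? a : b)
    g5 = g4 & g1        # AND: output passes the mux value only when s = 0
    return {"g1": g1, "g2": g2, "g3": g3, "g4": g4, "g5": g5}

def sensitizes_path(a, b, s):
    """Side inputs along a -> g2 -> g4 -> g5 must be non-controlling."""
    v = evaluate(a, b, s)
    return s == 1 and v["g3"] == 0 and v["g1"] == 1   # 1 for AND, 0 for OR

vectors = [vec for vec in product((0, 1), repeat=3) if sensitizes_path(*vec)]
print("vectors statically sensitizing the path:", vectors)   # prints []
```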
Synthesis of Finite State Machines: Logic Optimization is the second in a set of two monographs devoted to the synthesis of Finite State Machines (FSMs). The first volume, Synthesis of Finite State Machines: Functional Optimization, addresses functional optimization, whereas this one addresses logic optimization. The result of functional optimization is a symbolic description of an FSM which represents a sequential function chosen from a collection of permissible candidates. Logic optimization is the body of techniques for converting a symbolic description of an FSM into a hardware implementation. The mapping of a given symbolic representation into a two-valued logic implementation is called state encoding (or state assignment) and it heavily impacts the area, speed, testability and power consumption of the realized circuit. The first part of the book introduces the relevant background, presents results previously scattered in the literature on the computational complexity of encoding problems, and surveys in depth old and new approaches to encoding in logic synthesis. The second part of the book presents two main results about symbolic minimization: a new procedure to find minimal two-level symbolic covers, under face, dominance and disjunctive constraints, and a unified framework to check encodability of encoding constraints and find codes of minimum length that satisfy them. The third part of the book introduces generalized prime implicants (GPIs), which are the counterpart, in symbolic minimization of two-level logic, of prime implicants in two-valued two-level minimization. GPIs enable the design of an exact procedure for two-level symbolic minimization, based on a covering step which is complicated by the need to guarantee encodability of the final cover. A new efficient algorithm to verify encodability of a selected cover is presented. If a cover is not encodable, it is shown how to augment it minimally until an encodable superset of GPIs is determined. To handle encodability the authors have extended the framework for satisfying encoding constraints presented in the second part. The covering problems generated in the minimization of GPIs tend to be very large. Recently, large covering problems have been attacked successfully by representing the covering table with binary decision diagrams (BDDs). In the fourth part of the book the authors introduce such techniques and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly. Synthesis of Finite State Machines: Logic Optimization will be of interest to researchers and professional engineers who work in the area of computer-aided design of integrated circuits.
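A face (input) constraint asks that the codes assigned to a group of symbols span a cube containing no code of any symbol outside the group. The brute-force sketch below searches 2-bit encodings of four invented symbols for one that satisfies two such constraints; it only illustrates the encodability check, not the book's symbolic minimization procedures.

```python
# Minimal sketch of face-embedding (input encoding) constraints: every symbol
# group below must receive codes spanning a cube that contains no code of a
# symbol outside the group. Symbols, constraints and code length are invented.
from itertools import permutations, product

symbols = ["A", "B", "C", "D"]
face_constraints = [{"A", "B"}, {"B", "C"}]    # groups that must share a face

def spanning_cube(codes):
    """Smallest cube covering the given codes: per bit, '0', '1' or '-'."""
    return tuple(bits[0] if len(set(bits)) == 1 else "-" for bits in zip(*codes))

def in_cube(code, cube):
    return all(c == "-" or c == b for b, c in zip(code, cube))

def satisfies(enc):
    for group in face_constraints:
        cube = spanning_cube([enc[s] for s in group])
        if any(in_cube(enc[s], cube) for s in symbols if s not in group):
            return False                        # an outside symbol intrudes on the face
    return True

codes = list(product("01", repeat=2))           # the four 2-bit codes
for assignment in permutations(codes, len(symbols)):
    enc = dict(zip(symbols, assignment))
    if satisfies(enc):
        print({s: "".join(enc[s]) for s in symbols})   # e.g. A=00 B=01 C=11 D=10
        break
```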