Computer aided design (CAD)
A reactive system is one that is in continual interaction with its environment and executes at a pace determined by that environment. Examples of reactive systems include network protocols, air-traffic control systems, and industrial process control systems. Reactive systems are ubiquitous and represent an important class of systems. Due to their complex nature, such systems are extremely difficult to specify and implement. Many reactive systems are employed in highly critical applications, making it crucial to consider issues such as reliability and safety while designing them. The design of reactive systems is considered problematic, and poses one of the greatest challenges in the field of system design and development. In this paper, we discuss specification-modeling methodologies for reactive systems. Specification modeling is an important stage in reactive system design in which the designer specifies the desired properties of the reactive system in the form of a specification model. This specification model acts as the guidance and source for the implementation. To develop specification models of complex systems in an organized manner, designers resort to specification-modeling methodologies. In the context of reactive systems, we call such methodologies reactive-system specification-modeling methodologies.
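To make the notion of a specification model concrete, here is a minimal Python sketch (the states, events, and stop-and-wait example are invented for illustration, not drawn from the book): the desired behavior of a reactive system is captured as a labeled transition system against which event traces from the environment can be checked.

    # Minimal sketch (hypothetical, not from the book): a specification model
    # for a reactive system expressed as a labeled transition system.
    # States, events, and transitions for a toy stop-and-wait protocol sender.

    SPEC = {
        ("ready",   "send"):    "waiting",   # environment hands us a frame
        ("waiting", "ack"):     "ready",     # receiver acknowledged
        ("waiting", "timeout"): "waiting",   # retransmit, stay waiting
    }

    def run(events, state="ready"):
        """Check an event trace from the environment against the spec."""
        for ev in events:
            key = (state, ev)
            if key not in SPEC:
                raise ValueError(f"spec violated: event {ev!r} in state {state!r}")
            state = SPEC[key]
        return state

    print(run(["send", "timeout", "ack", "send", "ack"]))  # -> ready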
Since their introduction in 1984, Field-Programmable Gate Arrays (FPGAs) have become one of the most popular implementation media for digital circuits and have grown into a $2 billion per year industry. As process geometries have shrunk into the deep-submicron region, the logic capacity of FPGAs has greatly increased, making FPGAs a viable implementation alternative for larger and larger designs. To make the best use of these new deep-submicron processes, one must re-design one's FPGAs and Computer-Aided Design (CAD) tools. Architecture and CAD for Deep-Submicron FPGAs addresses several key issues in the design of high-performance FPGA architectures and CAD tools, with particular emphasis on issues that are important for FPGAs implemented in deep-submicron processes. Three factors combine to determine the performance of an FPGA: the quality of the CAD tools used to map circuits into the FPGA, the quality of the FPGA architecture, and the electrical (i.e., transistor-level) design of the FPGA. Architecture and CAD for Deep-Submicron FPGAs examines all three of these issues in concert. In order to investigate the quality of different FPGA architectures, one needs CAD tools capable of automatically implementing circuits in each FPGA architecture of interest. Once a circuit has been implemented in an FPGA architecture, one next needs accurate area and delay models to evaluate the quality (speed achieved, area required) of the circuit implementation in the FPGA architecture under test. This book therefore has three major foci: the development of a high-quality and highly flexible CAD infrastructure, the creation of accurate area and delay models for FPGAs, and the study of several important FPGA architectural issues. Architecture and CAD for Deep-Submicron FPGAs is an essential reference for researchers, professionals and students interested in FPGAs.
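As a toy illustration of the evaluation loop described above (all parameter values below are invented, not the book's models), the sketch scores two hypothetical FPGA architectures with simple area and delay models once a circuit has been mapped onto each:

    # Toy sketch (illustrative assumptions, not the book's models): evaluating
    # an FPGA architecture by estimating area and delay of a mapped circuit.
    #   area  ~ number of K-input LUTs used
    #   delay ~ logic depth * (LUT delay + average routing delay)

    def evaluate(num_luts, depth, lut_area_um2, lut_delay_ns, route_delay_ns):
        area = num_luts * lut_area_um2
        delay = depth * (lut_delay_ns + route_delay_ns)
        return area, delay

    # Compare two hypothetical architectures on the same circuit: 4-LUTs need
    # more logic levels; 6-LUTs are bigger and slower per level but shallower.
    for k, (luts, depth, a, d, r) in {4: (1000, 12, 120, 0.45, 0.80),
                                      6: (700, 8, 280, 0.60, 0.85)}.items():
        area, delay = evaluate(luts, depth, a, d, r)
        print(f"K={k}: area={area} um^2, delay={delay:.2f} ns")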
Research and development of logic synthesis and verification have matured considerably over the past two decades. Many commercial products are available, and they have been critical in harnessing advances in fabrication technology to produce today's plethora of electronic components. While this maturity is assuring, the advances in fabrication continue to present seemingly unwieldy challenges. Logic Synthesis and Verification provides a state-of-the-art view of logic synthesis and verification. It consists of fifteen chapters, each focusing on a distinct aspect. Each chapter presents key developments, outlines future challenges, and lists essential references. Two unique features of this book are technical strength and comprehensiveness. The book chapters are written by twenty-eight recognized leaders in the field and reviewed by equally qualified experts. The topics collectively span the field. Logic Synthesis and Verification fills a current gap in the existing CAD literature. Each chapter contains the essential information to study a topic in great depth and to understand further developments in the field. The book is intended for seniors, graduate students, researchers, and developers of related Computer-Aided Design (CAD) tools. From the foreword: "The commercial success of logic synthesis and verification is due in large part to the ideas of many of the authors of this book. Their innovative work contributed to design automation tools that permanently changed the course of electronic design." Aart J. de Geus, Chairman and CEO, Synopsys, Inc.
Concurrent simulation is over twenty years old. During that period it has been widely adopted for the simulation of faults in digital circuits, for which it provides a combination of extreme efficiency and generality. Yet, it is remarkable that no book published so far presents a correct and sufficiently detailed treatment of concurrent simulation. A first reason to welcome into print the effort of the authors is, therefore, that it provides a much needed account of an important topic in design automation. This book is, however, unique for several other reasons. It is safe to state that no individual has contributed more than Ernst Ulrich to the development of digital logic simulation. For concurrent simulation, one may say that Ernst has contributed more than the rest of the world. We would find such a claim difficult to dispute. The unique experience of the authors confers a special character to this book: it is authoritative, inspired, and focused on what is conceptually important. Another unique aspect of this book, perhaps the one that will be the most surprising for many readers, is that it is strongly projected towards the future. Concurrent simulation is presented as a general experimentation methodology, and new intriguing applications are analyzed. The discussion of multi-domain concurrent simulation, recent work of Karen Panetta Lentz and Ernst Ulrich, is fascinating.
This book deals with Web applications in product design and manufacture, thus filling an information gap in digital manufacturing in the Internet era. It helps both developers and users to appreciate the potential, as well as the difficulties, in developing and adopting Web applications. The objective is to equip potential users and practitioners of Web applications with a better appreciation of the technology. In addition, Web application developers and new researchers in this field will gain a clearer understanding of the selection of system architecture and design, development and implementation techniques, and deployment strategies. The book is divided into two main parts: the first gives an overview of the Web and the Internet, and the second explains eight typical Web applications.
This book is a collection of several tutorials from the EUROGRAPHICS '90 conference in Montreux. The conference was held under the motto "IMAGES: Synthesis, Analysis and Interaction", and the tutorials, partly presented in this volume, reflect the conference theme. As such, this volume provides a unique collection of advanced texts on 'traditional' computer graphics as well as of tutorials on image processing and image reconstruction. As with all the volumes of the series "Advances in Computer Graphics", the contributors are leading experts in their respective fields. The chapter Design and Display of Solid Models provides an extended introduction to interactive graphics techniques for design, fast display, and high-quality rendering of solid models. The text focuses on techniques for Constructive Solid Geometry (CSG). The following topics are treated in depth: interactive design techniques (specification of curves, surfaces and solids; graphical user interfaces; procedural languages and direct manipulation) and display techniques (depth-buffer, scan-line and ray-tracing techniques; CSG classification techniques; efficiency-improving methods; software and hardware implementations).
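The CSG classification techniques mentioned above reduce, at their core, to classifying points against a CSG tree. The following Python sketch (a made-up example, not taken from the tutorial) shows the idea for solids built from spheres:

    # Minimal sketch (assumed example, not from the tutorial): point
    # classification against a CSG tree, the core operation behind CSG
    # ray tracing and display.

    from dataclasses import dataclass

    @dataclass
    class Sphere:
        cx: float; cy: float; cz: float; r: float
        def contains(self, p):
            x, y, z = p
            return (x-self.cx)**2 + (y-self.cy)**2 + (z-self.cz)**2 <= self.r**2

    @dataclass
    class CSG:
        op: str          # 'union' | 'intersect' | 'subtract'
        left: object
        right: object
        def contains(self, p):
            a, b = self.left.contains(p), self.right.contains(p)
            return {"union": a or b,
                    "intersect": a and b,
                    "subtract": a and not b}[self.op]

    # A sphere with a smaller concentric sphere subtracted: a hollow shell.
    solid = CSG("subtract", Sphere(0, 0, 0, 2.0), Sphere(0, 0, 0, 1.0))
    print(solid.contains((1.5, 0, 0)))  # True  (inside the shell)
    print(solid.contains((0.5, 0, 0)))  # False (inside the cavity)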
The assembly sector is one of the least automated in the manufacturing industry. Automation is essential if industrial companies are to be competitive in the future. In assembly, an integrated and flexible approach is needed because 75% of the applications are produced in small and medium batches. The methodologies developed in this book deal with the integration of the assembly process from the initial design of the product to its production. In such an integrated system, assembly planning is one of the most important features. A well-chosen assembly plan will reduce both the number of tool changes and the fixtures within the assembly cell. It will prevent the handling of unstable subassemblies, simplify the design of the robot grippers, and reduce production costs. An automatic generator of assembly sequences can be an efficient aid to the designer. Whenever he or she modifies features of the product, the influence of these modifications can immediately be checked on the sequences. For small batch production, the automatic generation of assembly sequences is faster, more reliable, and more cost-effective than manual generation, with which interesting sequences can be missed because of the combinatorial explosion of solutions. The main subjects treated in this book are as follows. 1. Presentation and classification of existing systems for the automatic generation of assembly sequences. Automatic assembly planning is, indeed, a very recent research area and, in my experience, no systematic study has been carried out up to now.
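The combinatorial explosion referred to above is easy to demonstrate. The sketch below (a hypothetical four-part product, not an example from the book) enumerates all orderings and keeps only those that respect precedence constraints; for realistic part counts the number of orderings to sift grows factorially:

    # Illustrative sketch (hypothetical product, not from the book): generating
    # all feasible assembly sequences under precedence constraints.

    from itertools import permutations
    from math import factorial

    PARTS = ["base", "axle", "wheel", "cover"]
    # (a, b) means part a must be assembled before part b
    PRECEDENCE = [("base", "axle"), ("axle", "wheel"), ("base", "cover")]

    def feasible(seq):
        """True if every precedence pair (a before b) is respected."""
        return all(seq.index(a) < seq.index(b) for a, b in PRECEDENCE)

    sequences = [s for s in permutations(PARTS) if feasible(s)]
    for s in sequences:
        print(" -> ".join(s))
    print(f"{len(sequences)} feasible of {factorial(len(PARTS))} total orderings")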
The appropriate interconnect model has changed several times over the past two decades due to the application of aggressive technology scaling. New, more accurate interconnect models are required to manage the changing physical characteristics of integrated circuits. Currently, RC models are used to analyze high resistance nets while capacitive models are used for less resistive interconnect. However, on-chip inductance is becoming more important with integrated circuits operating at higher frequencies, since the inductive impedance is proportional to the frequency. The operating frequencies of integrated circuits have increased dramatically over the past decade and are expected to maintain the same rate of increase over the next decade, approaching 10 GHz by the year 2012. Also, wide wires are frequently encountered in important global nets, such as clock distribution networks, and in upper metal layers, and performance requirements are pushing the introduction of new materials for low resistance interconnect, such as the copper interconnect already used in many commercial CMOS technologies. On-Chip Inductance in High Speed Integrated Circuits deals with the design and analysis of integrated circuits with a specific focus on on-chip inductance effects. It is shown throughout the book that inductance can have a tangible effect on current high speed integrated circuits. For example, neglecting inductance and using an RC interconnect model in a production 0.25 μm CMOS technology can cause large errors (over 35%) in estimates of the propagation delay of on-chip interconnect. It is also shown that including inductance in the repeater insertion design process, as compared to using an RC model, improves the overall repeater solution in terms of area, power, and delay, with average savings of 40.8%, 15.6%, and 6.7%, respectively. On-Chip Inductance in High Speed Integrated Circuits is full of design and analysis techniques for RLC interconnect. These techniques are compared to techniques traditionally used for RC interconnect design to emphasize the effect of inductance. The book will be of interest to researchers in the area of high frequency interconnect, noise, and high performance integrated circuit design.
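The observation that inductive impedance is proportional to frequency can be turned into a quick check of when an RC model stops being adequate. The numbers in this sketch are illustrative, not from the book, and the omega*L/R comparison is a common rule of thumb rather than the book's criterion:

    # Worked sketch using the observation that inductive impedance grows with
    # frequency (values below are illustrative, not from the book).
    # Rule of thumb: inductance matters when omega*L is comparable to R.

    import math

    R = 25.0      # line resistance, ohms (assumed)
    L = 5.0e-9    # line inductance, 5 nH (assumed)
    for f_ghz in (0.5, 1.0, 5.0, 10.0):
        omega = 2 * math.pi * f_ghz * 1e9
        ratio = omega * L / R
        model = "RLC model advisable" if ratio >= 1.0 else "RC model adequate"
        print(f"{f_ghz:5.1f} GHz: omega*L/R = {ratio:5.2f} -> {model}")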
The economic and social developments in the world continually pose new questions to both the social and physical sciences, to the state, and to the economy. Currently, the social, natural, and technological fields are particularly affected by these developments. The facilitation of scientific insight and the utilization of the assembled knowledge of millions of people in their daily work have raised key questions about the very existence and continuation of society. In a time when almost anything is technically possible, the means of advancing and facilitating technological capabilities (didactics) are being pursued with even more fervor than the actual hunt for new technological capabilities, at least by the most far-seeing nations. In comparison to natural resources, it is a nation's human resources and their combined capability that are infinite. The development of these capabilities was the impetus for this workshop. In this challenge and process, new, hitherto unknown tasks and needs have emerged to energize the dynamic even further. Key words and concepts currently characterizing this search include interdisciplinarity, cognitive science, complexity, personal competence, synergistic competence, human resource development, technological literacy, school-industry links (partnerships), private practice partnerships, cognitive apprenticeship, reengineering education, concurrent education, hypermedia, meta-cognition, etc.
Cartesian Genetic Programming (CGP) is a highly effective and increasingly popular form of genetic programming. It represents programs in the form of directed graphs, and a particular characteristic is that it has a highly redundant genotype-phenotype mapping, in that genes can be noncoding. It has spawned a number of new forms, each improving on the efficiency, among them modular, or embedded, CGP, and self-modifying CGP. It has been applied to many problems in both computer science and applied sciences. This book contains chapters written by the leading figures in the development and application of CGP, and it will be essential reading for researchers in genetic programming and for engineers and scientists solving applications using these techniques. It will also be useful for advanced undergraduates and postgraduates seeking to understand and utilize a highly efficient form of genetic programming.
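A small sketch may help make the genotype-phenotype redundancy concrete. The toy genotype below is invented for illustration (real CGP implementations differ in detail): nodes are encoded as (function, input, input) genes over a directed graph, and any node not reachable from the output is noncoding:

    # Minimal sketch of the CGP idea (a toy instance, not code from the book):
    # genes can be noncoding because only the subgraph reachable from the
    # output contributes to the phenotype.

    FUNCS = {0: lambda a, b: a & b,   # AND
             1: lambda a, b: a | b,   # OR
             2: lambda a, b: a ^ b}   # XOR

    # Two primary inputs (indices 0 and 1), then four internal nodes (2..5).
    GENOTYPE = [(0, 0, 1),   # node 2: AND(in0, in1)
                (2, 0, 1),   # node 3: XOR(in0, in1)
                (1, 2, 3),   # node 4: OR(node2, node3)  <- noncoding here
                (2, 2, 3)]   # node 5: XOR(node2, node3)
    OUTPUT = 5               # phenotype = subgraph reachable from node 5

    def evaluate(inputs, genotype, output):
        values = dict(enumerate(inputs))
        def val(i):
            if i not in values:
                f, a, b = genotype[i - len(inputs)]
                values[i] = FUNCS[f](val(a), val(b))
            return values[i]
        result = val(output)
        active = [i for i in range(len(inputs), len(inputs) + len(genotype))
                  if i in values]
        return result, active

    res, active = evaluate([1, 0], GENOTYPE, OUTPUT)
    print(res, "active nodes:", active)   # node 4 never evaluated: noncoding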
Uncertainty in key parameters within a chip and between different chips plays an increasingly important role in the deep-submicron regime. As a result, manufacturing process spreads need to be considered during the design process. During product development, a quantitative methodology is needed to ensure correct functionality despite process variations within given bounds. This book presents the technological, physical, and mathematical fundamentals for a design paradigm shift, from a deterministic process to a probability-oriented design process for microelectronic circuits. Readers will learn to evaluate the different sources of variation in the design flow in order to establish different design variants, while applying appropriate methods and tools to evaluate and optimize their design.
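A minimal sketch of the shift from deterministic to probability-oriented evaluation (all numbers are assumed, not from the book): instead of a single nominal delay, sample the process spread and report a timing yield.

    # Illustrative sketch (assumed numbers): Monte Carlo evaluation of a path
    # delay under process variation instead of a single deterministic value.

    import random, statistics

    NOMINAL_DELAY_PS = 500.0   # assumed nominal path delay
    SIGMA_PCT = 0.08           # assumed 8% sigma from process spread
    SPEC_PS = 600.0            # timing budget

    random.seed(1)
    samples = [random.gauss(NOMINAL_DELAY_PS, NOMINAL_DELAY_PS * SIGMA_PCT)
               for _ in range(100_000)]
    yield_est = sum(d <= SPEC_PS for d in samples) / len(samples)
    print(f"mean={statistics.mean(samples):.1f} ps, "
          f"sigma={statistics.stdev(samples):.1f} ps, "
          f"estimated timing yield={yield_est:.3%}")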
Large system complexities and operation under tight timing constraints in rapidly shrinking technologies have made it extremely important to ensure correct temporal behavior of modern-day digital circuits, both before and after fabrication. Research in (pre-fabrication) timing verification and (post-fabrication) delay fault testing has evolved along largely disjoint lines in spite of the fact that they share many basic concepts. A Unified Approach for Timing Verification and Delay Fault Testing applies concepts developed in the context of delay fault testing to path sensitization, which allows an accurate timing analysis mechanism to be developed. This path sensitization strategy is further applied for efficient delay fault diagnosis and delay fault coverage estimation. A new path sensitization strategy called Signal Stabilization Time Analysis (SSTA) has been developed, based on the fact that primitive path delay faults (PDFs) determine the stabilization time of the circuit outputs. This analysis has been used to develop a feasible method of identifying the primitive PDFs in a general multi-level logic circuit. An approach to determine the maximum circuit delay using this primitive PDF identification mechanism is also presented. The Primitive PDF Identification-based Timing Analysis (PITA) approach is proved to determine the maximum floating-mode circuit delay exactly under any component delay model, and provides several advantages over previous floating-mode timing analyzers. A framework for the diagnosis of circuit failures caused by distributed path delay faults is also presented, and a metric to quantify the diagnosability of a path delay fault for a test is proposed. Finally, the book presents a very realistic metric for delay fault coverage which accounts for delay fault size distributions and is applicable to any delay fault model. A Unified Approach for Timing Verification and Delay Fault Testing will be of interest to university and industry researchers in timing analysis and delay fault testing, as well as EDA tool development engineers and design verification engineers dealing with timing issues in ULSI circuits. The book should also be of interest to digital designers and others interested in knowing the state of the art in timing verification and delay fault testing.
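For background, the sketch below computes the purely topological delay bound on a toy netlist (an invented example, not the book's PITA or SSTA algorithms); path-sensitization analyses of the kind developed in the book tighten this bound by discarding paths that can never propagate a transition:

    # Background sketch (toy netlist, not the book's algorithm): the
    # topological longest path is an upper bound on circuit delay that
    # path sensitization can tighten by eliminating false paths.

    GATE_DELAY = {"g1": 2.0, "g2": 3.0, "g3": 1.0, "g4": 2.0}
    FANIN = {"g1": ["a", "b"], "g2": ["a", "g1"], "g3": ["g1", "c"],
             "g4": ["g2", "g3"]}
    memo = {}

    def arrival(node):
        """Latest possible arrival time at `node` (topological worst case)."""
        if node not in FANIN:        # primary input
            return 0.0
        if node not in memo:
            memo[node] = GATE_DELAY[node] + max(arrival(f) for f in FANIN[node])
        return memo[node]

    print(f"topological delay bound at g4: {arrival('g4'):.1f}")
    # If a sensitization analysis showed the longest path could never be
    # activated, the true (floating-mode) delay would be below this bound.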
The combination of VLSI process technology and real-time digital signal processing (DSP) has brought a breakthrough in information technology. This rapid technical (r)evolution allows the integration of ever more complex systems on a single chip. However, these technology and integration advances have not been matched by an increase in design productivity, causing technology to leapfrog the design of integrated circuits (ICs). The success of these emerging 'systems-on-a-chip' (SOC) can only be guaranteed by a systematic and formal design methodology, possibly automated in computer-aided design (CAD) tools, and effective re-use of existing intellectual property (IP). In this book, a contribution is made to the modeling, timing verification and analysis, and automatic synthesis of integrated real-time DSP systems. Existing literature in these three domains is extensively reviewed, making this book the first to give a comprehensive overview of existing techniques. The emphasis throughout the book is on supporting and guaranteeing the real-time aspects and constraints of these systems, which avoids time-consuming design iterations and safeguards the ever-shrinking time-to-market. The proposed 'Multi-Thread Graph' (MTG) system model features two layers, unifying a (timed) Petri net and a control-data flow graph. Its unique interface between both models offers the best of two worlds and introduces an extra abstraction level hiding the operation-level details which are unnecessary during global system exploration. The formulated timing analysis and verification approach supports the calculation of temporal separation between different MTG entities as well as realistic performance metrics for highly concurrent systems. The synthesis methodology focuses on managing the task-level concurrency (i.e. task scheduling), as part of a proposed overall system design meta-flow. It emphasizes performance and timing aspects ('timeliness'), while minimizing processor cost overhead as driven by high-level cost estimators. The approach is new in the abstraction level it employs, and in its optimal hybrid dynamic/static scheduling policy which, driven by cost estimators, selects the scheduling policy for each behavior. At the low level, RTOS synthesis generates an application-specific scheduler for the software component. The proposed synthesis methodology (at the task level) is asserted to yield the most optimal results when employed before the hardware/software partition is made. At this level, the distinction between the two is minimal, such that all steps in the design trajectory can be shared, thereby reducing the system cost significantly and allowing tighter satisfaction of timing/performance constraints. From the Foreword: This book is the first comprehensive treatment of software, and more generally system, generation (synthesis) techniques based on formal models. It can be used as a very valuable reference to understand the development of the field of embedded software design, and of system design and synthesis in general. The book offers invaluable help to researchers and practitioners in the field of embedded system design. Prof. Alberto Sangiovanni-Vincentelli, Edgar L. and Harold H. Buttner Professor of Electrical Engineering and Computer Science, University of California, Berkeley; Chief Technology Advisor, Cadence Design Systems.
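As a much-simplified illustration of static task-level scheduling (a toy earliest-deadline list scheduler, not the book's cost-driven hybrid policy), the sketch below orders dependent tasks on a single processor and checks their deadlines:

    # Simplified sketch (illustrative, not the MTG algorithms): static list
    # scheduling of dependent real-time tasks by earliest deadline first.

    TASKS = {            # name: (execution time, deadline)
        "sample": (2, 4),
        "filter": (3, 9),
        "encode": (2, 12),
    }
    DEPS = {"filter": ["sample"], "encode": ["filter"]}

    def schedule(tasks, deps):
        done, t, plan = set(), 0, []
        while len(done) < len(tasks):
            ready = [n for n in tasks if n not in done
                     and all(d in done for d in deps.get(n, []))]
            nxt = min(ready, key=lambda n: tasks[n][1])   # earliest deadline
            wcet, deadline = tasks[nxt]
            t += wcet
            plan.append((nxt, t, "ok" if t <= deadline else "MISS"))
            done.add(nxt)
        return plan

    for name, finish, status in schedule(TASKS, DEPS):
        print(f"{name:7s} finishes at t={finish}  deadline {status}")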
Advanced ASIC Chip Synthesis: Using Synopsys® Design Compiler® and PrimeTime® describes the advanced concepts and techniques used for ASIC chip synthesis, formal verification, and static timing analysis, using the Synopsys suite of tools. In addition, the entire ASIC design flow methodology targeted for VDSM (Very-Deep-Sub-Micron) technologies is covered in detail. The emphasis of this book is on the practical application of Synopsys tools to combat various problems seen at VDSM geometries. Readers will be exposed to an effective design methodology for handling complex, sub-micron ASIC designs. Significance is placed on HDL coding styles, synthesis and optimization, dynamic simulation, formal verification, DFT scan insertion, links to layout, and static timing analysis. At each step, problems related to each phase of the design flow are identified, with solutions and work-arounds described in detail. In addition, crucial issues related to layout, which include clock tree synthesis and back-end integration (links to layout), are also discussed at length. Furthermore, the book contains in-depth discussions of the basics of Synopsys technology libraries and HDL coding styles targeted towards optimal synthesis solutions. Advanced ASIC Chip Synthesis is intended for anyone who is involved in the ASIC design methodology, from RTL synthesis to final tape-out. Target audiences for this book are practicing ASIC design engineers and graduate students undertaking advanced courses in ASIC chip design and DFT techniques. From the Foreword: 'This book, written by Himanshu Bhatnagar, provides a comprehensive overview of the ASIC design flow targeted for VDSM technologies using the Synopsys suite of tools. It emphasizes the practical issues faced by the semiconductor design engineer in terms of synthesis and the integration of front-end and back-end tools. Traditional design methodologies are challenged and unique solutions are offered to help define the next generation of ASIC design flows. The author provides numerous practical examples derived from real-world situations that will prove valuable to practicing ASIC design engineers as well as to students of advanced VLSI courses in ASIC design.' Dr Dwight W. Decker, Chairman and CEO, Conexant Systems, Inc. (formerly Rockwell Semiconductor Systems), Newport Beach, CA, USA.
Since the early 1980s, CAD frameworks have received a great deal of attention, both in the research community and in the commercial arena. It is generally agreed that CAD framework technology promises much: advanced CAD frameworks can turn collections of individual tools into effective and user-friendly design environments. But how can this promise be fulfilled? CAD Frameworks: Principles and Architecture describes the design and construction of CAD frameworks. It presents principles for building integrated design environments and shows how a CAD framework can be based on these principles. It derives the architecture of a CAD framework in a systematic way, using well-defined primitives for representation. This architecture defines how the many different framework sub-topics, ranging from concurrency control to design flow management, relate to each other and come together into an overall system. The origin of this work is the research and development performed in the context of the Nelsis CAD Framework, which has been a working system for well over eight years, gaining functionality while evolving from one release to the next. The principles and concepts presented in this book have been field-tested in the Nelsis CAD Framework. CAD Frameworks: Principles and Architecture is primarily intended for EDA professionals, both in industry and in academia, but is also valuable outside the domain of electronic design. Many of the principles and concepts presented are also applicable to other design-oriented application domains, such as mechanical design or computer-aided software engineering (CASE). It is thus a valuable reference for all those involved in computer-aided design.
Modeling in Analog Design highlights some of the most pressing issues in the use of modeling techniques for the design of analogue circuits. Using models for circuit design gives designers the power to express directly the behaviour of parts of a circuit, in addition to using other pre-defined components. There are numerous advantages to this new category of analog behavioral language. In the short term, by favouring top-down design and raising the level of description abstraction, this approach provides greater freedom of implementation and a higher degree of technology independence. In the longer term, analog synthesis and formal optimisation are targeted. Modeling in Analog Design introduces the reader to two main language standards: VHDL-A and MHDL. It goes on to provide in-depth examples of the use of these languages to model analog devices. The final part is devoted to the very important topic of modeling the thermal and electrothermal aspects of devices. This book is essential reading for analog designers using behavioral languages and for developers of analog CAD tool environments who have to provide the tools used by designers.
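The flavor of behavioral modeling can be suggested in a few lines, although the book's languages are VHDL-A and MHDL, not Python. The sketch below is only an analogy: a diode model whose I-V behavior depends on temperature, the kind of electrothermal coupling treated in the book's final part.

    # Analogy sketch in Python (not VHDL-A or MHDL): a behavioral diode model
    # with a temperature-dependent thermal voltage. In practice the saturation
    # current i_s is itself strongly temperature-dependent; it is held constant
    # here for brevity.

    import math

    Q = 1.602176634e-19   # electron charge, C
    K = 1.380649e-23      # Boltzmann constant, J/K

    def diode_current(v, t_kelvin, i_s=1e-12, n=1.0):
        """Shockley diode equation with thermal voltage vt = k*T/q."""
        vt = K * t_kelvin / Q
        return i_s * (math.exp(v / (n * vt)) - 1.0)

    for temp in (300.0, 350.0):
        print(f"T={temp:.0f} K: I(0.6 V) = {diode_current(0.6, temp):.3e} A")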
Over the past decade there has been a dramatic change in the role played by design automation for electronic systems. Ten years ago, integrated circuit (IC) designers were content to use the computer for circuit, logic, and limited amounts of high-level simulation, as well as for capturing the digitized mask layouts used for IC manufacture. The tools were only aids to design; the designer could always find a way to implement the chip or board manually if the tools failed or did not give acceptable results. Today, however, design technology plays an indispensable role in the design of electronic systems and is critical to achieving time-to-market, cost, and performance targets. In less than ten years, designers have come to rely on automatic or semi-automatic CAD systems for the physical design of complex ICs containing over a million transistors. In the past three years, practical logic synthesis systems that take into account both cost and performance have become a commercial reality, and many designers have already relinquished control of the logic netlist level of design to automatic computer aids. To date, only in certain well-defined areas, especially digital signal processing and telecommunications, have higher-level design methods and tools found significant success. However, the forces of time-to-market and growing system complexity will demand the broad-based adoption of high-level, automated methods and tools over the next few years.
From the table of contents:
3.2 Input Encoding Targeting Two-Level Logic
3.2.1 One-Hot Coding and Multiple-Valued Minimization
3.2.2 Input Constraints and Face Embedding
3.3 Satisfying Encoding Constraints
3.3.1 Definitions
3.3.2 Column-Based Constraint Satisfaction
3.3.3 Row-Based Constraint Satisfaction
3.3.4 Constraint Satisfaction Using Dichotomies
3.3.5 Simulated Annealing for Constraint Satisfaction
3.4 Input Encoding Targeting Multilevel Logic
3.4.1 Kernels and Kernel Intersections
3.4.2 Kernels and Multiple-Valued Variables
3.4.3 Multiple-Valued Factorization
3.4.4 Size Estimation in Algebraic Decomposition
3.4.5 The Encoding Step
3.5 Conclusion
4 Encoding of Symbolic Outputs
4.1 Heuristic Output Encoding Targeting Two-Level Logic
4.1.1 Dominance Relations
4.1.2 Output Encoding by the Derivation of Dominance Relations
4.1.3 Heuristics to Minimize the Number of Encoding Bits
4.1.4 Disjunctive Relationships
4.1.5 Summary
4.2 Exact Output Encoding Targeting Two-Level Logic
4.2.1 Generation of Generalized Prime Implicants
4.2.2 Selecting a Minimum Encodeable Cover
4.2.3 Dominance and Disjunctive Relationships to Satisfy Constraints
4.2.4 Constructing the Optimized Cover
4.2.5 Correctness of the Procedure
4.2.6 Multiple Symbolic Outputs
Power supply current monitoring to detect CMOS IC defects during production testing quietly laid down its roots in the mid-1970s. Both Sandia Labs and RCA in the United States and Philips Labs in the Netherlands practiced this procedure on their CMOS ICs. At that time, this practice stemmed simply from an intuitive sense that CMOS ICs showing abnormal quiescent power supply current (IDDQ) contained defects. Later, this intuition was supported by data and analysis in the 1980s by Levi (RADC), Malaiya and Su (SUNY-Binghamton), Soden and Hawkins (Sandia Labs and the University of New Mexico), Jacomino and co-workers (Laboratoire d'Automatique de Grenoble), and Maly and co-workers (Carnegie Mellon University). Interest in IDDQ testing has advanced beyond the data reported in the 1980s and is now focused on applications and evaluations involving larger volumes of ICs that improve quality beyond what can be achieved by previous conventional means. In the conventional style of testing, one attempts to propagate the logic states of the suspected nodes to primary outputs. This is done for all or most nodes of the circuit. For sequential circuits in particular, the complexity of finding suitable tests is very high. In comparison, the IDDQ test does not observe the logic states, but measures the integrated current that leaks through all gates. In other words, it is like measuring a patient's temperature to determine the state of health. Despite perceived advantages, during the years that followed its initial announcement, skepticism about the practicality of IDDQ testing prevailed. The idea, however, provided a great opportunity to researchers. New results on test generation, fault simulation, design for testability, built-in self-test, and diagnosis for this style of testing have since been reported. After a decade of research, we are definitely closer to practice.
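The temperature-measurement analogy translates into a very simple pass/fail flow. The sketch below uses invented currents and an invented threshold, purely to illustrate the principle of screening by quiescent current rather than by logic values:

    # Toy sketch (invented numbers): the IDDQ idea of distinguishing defective
    # parts by quiescent supply current rather than by propagating logic values.

    IDDQ_LIMIT_UA = 5.0   # assumed pass/fail threshold in microamps

    measurements = {      # device id: measured quiescent current per vector
        "die_01": [0.8, 0.9, 1.1],
        "die_02": [0.9, 48.0, 1.0],   # one vector activates a bridging defect
        "die_03": [1.2, 1.0, 1.3],
    }

    for die, currents in measurements.items():
        verdict = "FAIL" if max(currents) > IDDQ_LIMIT_UA else "pass"
        print(f"{die}: max IDDQ = {max(currents):5.1f} uA -> {verdict}")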
Principles of Verilog PLI is a 'how to do' text on the Verilog Programming Language Interface. The primary focus of the book is on how to use PLI for problem solving. Both PLI 1.0 and PLI 2.0 are covered. Particular emphasis has been put on adopting a generic step-by-step approach to create a fully functional PLI code. Numerous examples were carefully selected so that a variety of problems can be solved through their use. A separate chapter on the Bus Functional Model (BFM), one of the most widely used commercial applications of PLI, is included. Principles of Verilog PLI is written for the professional engineer who uses Verilog for ASIC design and verification. It will also be of interest to students who are learning Verilog.
The Background to the Institute: The NATO Advanced Study Institute (ASI) 'People and Computers - Applying an Anthropocentric Approach to Integrated Production Systems and Organisations' came about after the distribution of a NATO fact sheet to Brunel University, which described the funding of ASIs. The 'embryonic' director of the ASI brought this opportunity to the attention of the group of people (some at Brunel and some from outside) who were together responsible for the teaching and management of the course in Computer Integrated Manufacturing (CIM) in Brunel's Department of Manufacturing and Engineering Systems. This course had been conceived in 1986 and was envisaged as a vehicle for teaching manufacturing engineering students the technology of information integration through project work. While the original idea of the course had also included the organisational aspects of CIM, the human factors questions were not considered. This shortcoming was recognised, and the trial run of the course in 1988 contained some lectures on 'people' issues. The course team were therefore well prepared and keen to explore the People, Organisation and Technology (POT) aspects of computer integration, as applied to industrial production. A context was proposed which would allow the inclusion of people from many different backgrounds and which would open up time and space for reflection. The proposal to organise a NATO ASI was therefore welcomed by all concerned.
Switching Theory for Logic Synthesis covers the basic topics of switching theory and logic synthesis in fourteen chapters. Chapters 1 through 5 provide the mathematical foundation. Chapters 6 through 8 include an introduction to sequential circuits, optimization of sequential machines, and asynchronous sequential circuits. Chapters 9 through 14 are the main feature of the book. These chapters introduce and explain various topics that make up the subject of logic synthesis: multi-valued input, two-valued output functions; logic design for PLDs/FPGAs; EXOR-based design; and complexity theories of logic networks. An appendix providing a history of switching theory is included. The reference list consists of over four hundred entries. Switching Theory for Logic Synthesis is based on the author's lectures at Kyushu Institute of Technology as well as seminars for CAD engineers from various Japanese technology companies. It will be of interest to CAD professionals and students at the advanced level. It is also useful as a textbook, as each chapter contains examples, illustrations, and exercises.
Research on high-level synthesis started over twenty years ago, but at that time lower-level tools were not available to seriously support the insertion of high-level synthesis into the mainstream design methodology. Since then, substantial progress has been made in formulating and understanding the basic concepts in high-level synthesis. Although many open problems remain, high-level synthesis has matured. High-Level Synthesis: Introduction to Chip and System Design presents a summary of the basic concepts and results and defines the remaining open problems. It is the first textbook on high-level synthesis and includes the basic concepts, the main algorithms used in high-level synthesis, and a discussion of the requirements and essential issues for high-level synthesis systems and environments. A reference text like this will allow the high-level synthesis community to grow and prosper in the future.
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
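Synchronized Transitions describes designs as collections of guarded transitions. The Python sketch below is a rough stand-in (it is not Synchronized Transitions syntax, and the handshake example is invented): each transition is a guard plus an action on shared state, and any enabled transition may fire:

    # Rough sketch (Python stand-in, not Synchronized Transitions syntax):
    # a design as a set of guarded transitions fired asynchronously, the
    # style of model the language supports for verification and synthesis.

    import random

    state = {"req": 1, "ack": 0, "data_valid": 0}

    TRANSITIONS = [   # (guard, action) pairs over the shared state
        (lambda s: s["req"] and not s["ack"],
         lambda s: s.update(ack=1, data_valid=1)),
        (lambda s: s["ack"] and s["req"],
         lambda s: s.update(req=0)),
        (lambda s: s["ack"] and not s["req"],
         lambda s: s.update(ack=0, data_valid=0)),
    ]

    random.seed(0)
    for _ in range(6):   # repeatedly fire any enabled transition
        enabled = [(g, a) for g, a in TRANSITIONS if g(state)]
        if not enabled:
            break
        _, action = random.choice(enabled)
        action(state)
        print(state)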
For many years, the dominant fault model in automatic test pattern generation (ATPG) for digital integrated circuits has been the stuck-at fault model. The static nature of stuck-at fault testing when compared to the extremely dynamic nature of integrated circuit (IC) technology has caused many to question whether or not stuck-at fault based testing is still viable. Attempts at answering this question have not been wholly satisfying due to a lack of true quantification, statistical significance, and/or high computational expense. In this monograph we introduce a methodology to address the question in a manner which circumvents the drawbacks of previous approaches. The method is based on symbolic Boolean functional analyses using Ordered Binary Decision Diagrams (OBDDs). OBDDs have been conjectured to be an attractive representation form for Boolean functions, although cases exist for which their complexity is guaranteed to grow exponentially with input cardinality. Classes of Boolean functions which exploit the efficiencies inherent in OBDDs to a very great extent are examined in Chapter 7. Exact equations giving their OBDD sizes are derived, whereas until very recently only size bounds have been available. These size equations suggest that straightforward applications of OBDDs to design and test related problems may not prove as fruitful as was once thought.
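The OBDD representation discussed above can be sketched in a few lines (a toy implementation for illustration, not the monograph's machinery): a unique table ensures each distinct (variable, low, high) node exists once, which is what makes OBDD sizes well-defined and analyzable:

    # Minimal sketch of the OBDD idea (a toy implementation, not the
    # monograph's): reduced nodes stored in a unique table so that each
    # distinct (var, low, high) triple exists exactly once.

    unique = {}                      # (var, low, high) -> node id
    nodes = {0: "FALSE", 1: "TRUE"}  # terminal nodes

    def mk(var, low, high):
        if low == high:              # redundant test: skip the node
            return low
        key = (var, low, high)
        if key not in unique:
            unique[key] = len(nodes)
            nodes[unique[key]] = key
        return unique[key]

    # Build f = x1 XOR x2 with variable order x1 < x2.
    x2     = mk(2, 0, 1)     # node for x2
    not_x2 = mk(2, 1, 0)     # node for NOT x2
    f      = mk(1, x2, not_x2)
    print(f"OBDD for x1 XOR x2 uses {len(unique)} internal nodes")  # 3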