The development of computational models of design founded on the artificial intelligence paradigm has provided an impetus for much of current design research. As artificial intelligence has matured and developed new approaches, the impact of these new approaches on design research has been felt. This can be seen in the way concepts from cognitive science have found their way into artificial intelligence and hence into design research, and also in the way in which agent-based systems are being incorporated into design systems. In design research there is an increasing blurring between notions drawn from artificial intelligence and those drawn from cognitive science. Whereas a number of years ago the focus was largely on applying artificial intelligence to designing as an activity, thus treating designing as a form of problem solving, today we are seeing a much wider variety of conceptions of the role of artificial intelligence in helping to model and comprehend designing as a process. Thus, we see papers in this volume whose focus is the development or implementation of frameworks for artificial intelligence in design, attempting to determine a unique locus for these ideas. We see papers which attempt to find foundations for the development of tools based on the artificial intelligence paradigm; often these foundations come from cognitive studies of human designers.
Electrical noise exists because electrical charge is not continuous but is carried in discrete amounts equal to the electron charge. Electrical noise represents a fundamental limit on the performance of electronic circuits and systems. With the explosive growth in the personal mobile communications market, the need for noise analysis/simulation techniques for nonlinear electronic circuits and systems has been re-emphasized. Even though most of the signal processing is done in the digital domain, every wireless communication device has an analog front-end which is usually the bottleneck in the design of the whole system. The requirements for low-power operation and higher levels of integration create new challenges in the design of the analog signal processing subsystems of these mobile communication devices. The effect of noise on the performance of these inherently nonlinear analog circuits is becoming more and more significant. Analysis and Simulation of Noise in Nonlinear Electronic Circuits and Systems presents analysis, simulation and characterization techniques and behavioral models for noise in nonlinear electronic circuits and systems, along with practical examples. This book treats the problem within the framework of, and using techniques from, the probabilistic theory of stochastic processes and stochastic differential systems. Analysis and Simulation of Noise in Nonlinear Electronic Circuits and Systems will be of interest to RF/analog designers as well as engineers interested in stochastic modeling and simulation.
The Verilog Programming Language Interface, commonly called the Verilog PLI, is one of the more powerful features of Verilog. The PLI provides a means for both hardware designers and software engineers to interface their own programs to commercial Verilog simulators. Through this interface, a Verilog simulator can be customized to perform virtually any engineering task desired. Just a few of the common uses of the PLI include interfacing Verilog simulations to C language models, adding custom graphical tools to a simulator, reading and writing proprietary file formats from within a simulation, performing test coverage analysis during simulation, and so forth. The applications possible with the Verilog PLI are endless. Intended audience: this book is written for digital design engineers with a background in the Verilog Hardware Description Language and a fundamental knowledge of the C programming language. It is expected that the reader: Has a basic knowledge of hardware engineering, specifically digital design of ASIC and FPGA technologies. Is familiar with the Verilog Hardware Description Language (HDL), and can write models of hardware circuits in Verilog, can write simulation test fixtures in Verilog, and can run at least one Verilog logic simulator. Knows basic C-language programming, including the use of functions, pointers, structures and file I/O. Explanations of the concepts and terminology of digital design and logic simulation are provided as needed.
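By way of illustration only (not an example taken from the book), the following is a minimal sketch of the kind of C code the PLI makes possible, written against the VPI routines defined by the IEEE 1364 standard; the $hello task name and all surrounding details are assumptions made for this sketch.

    #include "vpi_user.h"   /* VPI types and routines defined by IEEE 1364 */

    /* calltf routine: run each time the simulator executes $hello */
    static PLI_INT32 hello_calltf(PLI_BYTE8 *user_data)
    {
        (void)user_data;                      /* unused in this sketch */
        vpi_printf("Hello from the PLI\n");   /* print through the simulator */
        return 0;
    }

    /* Register $hello as a user-defined system task */
    static void hello_register(void)
    {
        s_vpi_systf_data tf_data;
        tf_data.type      = vpiSysTask;
        tf_data.tfname    = "$hello";
        tf_data.calltf    = hello_calltf;
        tf_data.compiletf = NULL;
        tf_data.sizetf    = NULL;
        tf_data.user_data = NULL;
        vpi_register_systf(&tf_data);
    }

    /* Simulators scan this null-terminated list at start-up */
    void (*vlog_startup_routines[])(void) = { hello_register, 0 };

After compiling this file and linking it into a simulator in the tool-specific way, a Verilog test fixture could invoke $hello; from an initial or always block like any built-in system task.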
Embedded systems are characterized by the presence of processors running application-specific software. Recent years have seen a large growth of such systems, and this trend is projected to continue with the growth of systems on a chip. Many of these systems have strict performance and cost requirements. To design these systems, sophisticated timing analysis tools are needed to accurately determine the extreme case (best case and worst case) performance of the software components. Existing techniques for this analysis have one or more of the following limitations: * they cannot model complicated programs * they cannot model advanced micro-architectural features of the processor, such as cache memories and pipelines * they cannot be easily retargeted for new hardware platforms. In Performance Analysis of Real-Time Embedded Software, a new timing analysis technique is presented to overcome the above limitations. The technique determines the bounds on the extreme case (best case and worst case) execution time of a program when running on a given hardware system. It partitions the problem into two sub-problems: program path analysis and microarchitecture modeling. Performance Analysis of Real-Time Embedded Software will be of interest to Design Automation professionals as well as designers of circuits and systems.
This unique book deals with the migration of existing hard IP from one technology to another, using repeatable procedures. It will allow CAD practitioners to quickly develop methodologies that capitalize on the large volumes of legacy data available within a company today.
Perspectives On Software Requirements presents perspectives on several current approaches to software requirements. Each chapter addresses a specific problem where the authors summarize their experiences and results to produce well-fit and traceable requirements. Chapters highlight familiar issues with recent results and experiences, which are accompanied by chapters describing well-tuned new methods for specific domains.
Modeling of Induction Motors with One and Two Degrees of Mechanical Freedom presents the mathematical model of induction motors with two degrees of mechanical freedom (IM-2DMF), formulated both in terms of the electromagnetic field and in terms of circuit theory, which allows the performance of these three groups of motors to be analyzed taking into account edge effects and winding and current asymmetry. The model derived is based on the concept of a magnetic field wave moving in the air-gap with a helical motion. In general, the rotor moves helically too, with a rotary-linear slip. The electromagnetic field as well as the motor performance of the particular motors is analyzed. The mathematical model of IM-2DMF is more general than the model of induction motors with one degree of mechanical freedom, i.e. rotary and linear motors. Examples of modeling two types of rotary disc motors and a flat linear motor with a twisted primary part are presented, with finite stator and rotor length and width effects included. The simulation results are backed by measurements carried out on laboratory models, which were tested on a unique measurement stand.
Low-Energy FPGAs: Architecture and Design is a primary resource for both researchers and practicing engineers in the field of digital circuit design. The book addresses the energy consumption of Field-Programmable Gate Arrays (FPGAs). FPGAs are becoming popular as embedded components in computing platforms. The programmability of the FPGA can be used to customize implementations of functions on an application basis. This leads to performance gains, and enables reuse of expensive silicon. Chapter 1 provides an overview of digital circuit design and FPGAs. Chapter 2 looks at the implication of deep-submicron technology on FPGA power dissipation. Chapter 3 describes the exploration environment to guide and evaluate design decisions. Chapter 4 discusses the architectural optimization process to evaluate the trade-offs between the flexibility of the architecture, and the effect on the performance metrics. Chapter 5 reviews different circuit techniques to reduce the performance overhead of some of the dominant components. Chapter 6 shows methods to configure FPGAs to minimize the programming overhead. Chapter 7 addresses the physical realization of some of the critical components and the final implementation of a specific low-energy FPGA. Chapter 8 compares the prototype array to an equivalent commercial architecture.
The Dynamics of Digital Excitation provides a fundamentally new viewpoint on circuit theory. It begins with a very real and practical problem and presents an argument set forth here for the first time: the most commonly used parameter of digital circuits, the gate delay time, does not exist. This problem emerges most clearly in high-speed CMOS, above 1 GHz clock frequencies. This book explains why that is so and how to deal with the situation in a practical manner. Most of the large IC companies, and many of the small IC design companies, are now racing to capture the above-1 GHz clock CMOS IC markets. A few examples of such companies in the United States are Motorola, Intel and DEC. Numerous new small design-only companies are also interested in this technology. Above-1 GHz circuit design is extremely difficult and, for the designers, the material discussed in this book is indispensable. The Dynamics of Digital Excitation shows that the fastest CMOS circuits can be understood and designed only after understanding their quantum-mechanical nature. The Dynamics of Digital Excitation will help the circuit designer learn how to deal with the problems of circuit delay when the gate delay is not a valid concept at high switching speeds, and how to design the fastest critical paths. This book outlines essential and fundamental guidelines for designing the fastest CMOS circuits. It also explains how to design and structure computer-aided designs to deal with above-1 GHz circuits. The Dynamics of Digital Excitation sets forth exciting new ideas and will be of interest to IC designers and CAD professionals alike.
by Phil Moorby The Verilog Hardware Description Language has had an amazing impact on the modern electronics industry, considering that the essential composition of the language was developed in a surprisingly short period of time, early in 1984. Since its introduction, Verilog has changed very little. Over time, users have requested many improvements to meet new methodology needs. But it is a complex and time-consuming process to add features to a language without ambiguity and while maintaining consistency. A group of Verilog enthusiasts, the IEEE 1364 Verilog committee, have broken the Verilog feature doldrums. These individuals should be applauded. They invested the time and energy, often their personal time, to understand and resolve an extensive wish-list of language enhancements. They took on the task of choosing a feature set that would stand up to the scrutiny of the standardization process. I would like to personally thank this group. They have shown that it is possible to evolve Verilog, rather than having to completely start over with some revolutionary new language. The Verilog 1364-2001 standard provides many of the advanced building blocks that users have requested. The enhancements include key components for verification, abstract design, and other new methodology capabilities. As designers tackle advanced issues such as automated verification, system partitioning, etc., the Verilog standard will rise to meet the continuing challenge of electronics design.
Written expressly for hardware designers, this book presents a formal model of VHDL clearly specifying both the static and dynamic semantics of VHDL. It provides a mathematical framework for representing VHDL constructs and shows how those constructs can be formally manipulated to reason about VHDL.
Dynamic power management is a design methodology aiming at controlling performance and power levels of digital circuits and systems, with the goal of extending the autonomous operation time of battery-powered systems, providing graceful performance degradation when supply energy is limited, and adapting power dissipation to satisfy environmental constraints. Dynamic Power Management: Design Techniques and CAD Tools addresses design techniques and computer-aided design solutions for power management. Different approaches are presented and organized in an order related to their applicability to control-units, macro-blocks, digital circuits and electronic systems, respectively. All approaches are based on the principle of exploiting idleness of circuits, systems, or portions thereof. They involve both the detection of idleness conditions and the freezing of power-consuming activities in the idle components. The book also describes some approaches to system-level power management, including Microsoft's OnNow architecture and the Advanced Configuration and Power Interface (ACPI) standard proposed by Intel, Microsoft and Toshiba. These approaches migrate power management to the software layer running on hardware platforms, thus providing a flexible and self-configurable solution to adapting the power/performance tradeoff to the needs of mobile (and fixed) computing and communication. Dynamic Power Management: Design Techniques and CAD Tools is of interest to researchers and developers of computer-aided design tools for integrated circuits and systems, as well as to system designers.
Logic Synthesis for Low Power VLSI Designs presents a systematic and comprehensive treatment of power modeling and optimization at the logic level. More precisely, this book provides a detailed presentation of methodologies, algorithms and CAD tools for power modeling, estimation and analysis, synthesis and optimization at the logic level. Logic Synthesis for Low Power VLSI Designs contains detailed descriptions of technology-dependent logic transformations and optimizations, technology decomposition and mapping, and post-mapping structural optimization techniques for low power. It also emphasizes the trade-off techniques for two-level and multi-level logic circuits that involve power dissipation and circuit speed, in the hope that the readers can better understand the issues and ways of achieving their power dissipation goal while meeting the timing constraints. Logic Synthesis for Low Power VLSI Designs is written for VLSI design engineers, CAD professionals, and students who have had a basic knowledge of CMOS digital design and logic synthesis.
An open process of restandardization, conducted by the IEEE, has led to the definition of the new VHDL standard. The changes make VHDL safer, more portable, and more powerful. VHDL also becomes bigger and more complete. The canonical simulator of VHDL is enriched by new mechanisms, the predefined environment is more complete, and the syntax is more regular and flexible. Discrepancies and known bugs of VHDL'87 have been fixed. However, the new VHDL'92 is compatible with VHDL'87, with some minor exceptions. This book presents the new VHDL'92 for the VHDL designer. New features are explained and classified. Examples are provided, each new feature is given a rationale, and its impact on design methodology and performance is analysed. Where appropriate, pitfalls and traps are explained. The VHDL designer will quickly be able to find the features needed, to evaluate the benefits they bring, and to modify previous VHDL'87 code to make it more efficient, more portable, and more flexible. VHDL'92 is the essential update for all VHDL designers and managers involved in electronic design.
Silicon technology now allows us to build chips consisting of tens of millions of transistors. This technology not only promises new levels of system integration onto a single chip, but also presents significant challenges to the chip designer. As a result, many ASIC developers and silicon vendors are re-examining their design methodologies, searching for ways to make effective use of the huge numbers of gates now available. These designers see current design tools and methodologies as inadequate for developing million-gate ASICs from scratch. There is considerable pressure to keep design team size and design schedules constant even as design complexities grow. Tools are not providing the productivity gains required to keep pace with the increasing gate counts available from deep submicron technology. Design reuse - the use of pre-designed and pre-verified cores - is the most promising opportunity to bridge the gap between available gate-count and designer productivity. Reuse Methodology Manual for System-On-A-Chip Designs, Second Edition outlines an effective methodology for creating reusable designs for use in a System-on-a-Chip (SoC) design methodology. Silicon and tool technologies move so quickly that no single methodology can provide a permanent solution to this highly dynamic problem. Instead, this manual is an attempt to capture and incrementally improve on current best practices in the industry, and to give a coherent, integrated view of the design process. Reuse Methodology Manual for System-On-A-Chip Designs, Second Edition will be updated on a regular basis as a result of changing technology and improved insight into the problems of design reuse and its role in producing high-quality SoC designs.
Robust Modal Control covers most classical multivariable modal control design techniques that have been shown to be effective in practice, and in addition proposes several new tools. The proposed new tools include: minimum energy eigenvector selection, low order observer-based control design, conversion to observer-based controllers, a new multimodel design technique, and modal analysis. The text is accompanied by a CD-ROM containing MATLAB® software for the implementation of the proposed techniques. The software is in use in the aeronautical industry and has proven to be effective and functional.
The appropriate interconnect model has changed several times over the past two decades due to the application of aggressive technology scaling. New, more accurate interconnect models are required to manage the changing physical characteristics of integrated circuits. Currently, RC models are used to analyze high resistance nets while capacitive models are used for less resistive interconnect. However, on-chip inductance is becoming more important with integrated circuits operating at higher frequencies, since the inductive impedance is proportional to the frequency. The operating frequencies of integrated circuits have increased dramatically over the past decade and are expected to maintain the same rate of increase over the next decade, approaching 10 GHz by the year 2012. Also, wide wires are frequently encountered in important global nets, such as clock distribution networks and in upper metal layers, and performance requirements are pushing the introduction of new materials for low resistance interconnect, such as copper interconnect already used in many commercial CMOS technologies. On-Chip Inductance in High Speed Integrated Circuits deals with the design and analysis of integrated circuits with a specific focus on on-chip inductance effects. It has been described throughout this book that inductance can have a tangible effect on current high speed integrated circuits. For example, neglecting inductance and using an RC interconnect model in a production 0.25 μm CMOS technology can cause large errors (over 35%) in estimates of the propagation delay of on-chip interconnect. It has also been shown that including inductance in the repeater insertion design process as compared to using an RC model improves the overall repeater solution in terms of area, power, and delay with average savings of 40.8%, 15.6%, and 6.7%, respectively. On-Chip Inductance in High Speed Integrated Circuits is full of design and analysis techniques for RLC interconnect. These techniques are compared to techniques traditionally used for RC interconnect design to emphasize the effect of inductance. On-Chip Inductance in High Speed Integrated Circuits will be of interest to researchers in the area of high frequency interconnect, noise, and high performance integrated circuit design.
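For reference, the frequency dependence mentioned above is simply the standard impedance relation, not a result specific to this book: $Z_L = j\omega L = j\,2\pi f L$, so the inductive impedance of a wire grows linearly with frequency. As an illustrative calculation with assumed values, a net with $L = 1\,\text{nH}$ driven at $f = 10\,\text{GHz}$ presents $|Z_L| = 2\pi \times 10^{10} \times 10^{-9} \approx 63\,\Omega$.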
Depth recovery is important in machine vision applications when a 3-dimensional structure must be derived from 2-dimensional images. This is an active area of research with applications ranging from industrial robotics to military imaging. This book provides the comprehensive details of the methodology, along with the complete mathematics and algorithms involved. Many new models, both deterministic and statistical, are introduced.
Geometric algebra has established itself as a powerful and valuable mathematical tool for solving problems in computer science, engineering, physics, and mathematics. The articles in this volume, written by experts in various fields, reflect an interdisciplinary approach to the subject, and highlight a range of techniques and applications. Relevant ideas are introduced in a self-contained manner and only a knowledge of linear algebra and calculus is assumed. Features and Topics: * The mathematical foundations of geometric algebra are explored * Applications in computational geometry include models of reflection and ray-tracing and a new and concise characterization of the crystallographic groups * Applications in engineering include robotics, image geometry, control-pose estimation, inverse kinematics and dynamics, control and visual navigation * Applications in physics include rigid-body dynamics, elasticity, and electromagnetism * Chapters dedicated to quantum information theory dealing with multi-particle entanglement, MRI, and relativistic generalizations Practitioners, professionals, and researchers working in computer science, engineering, physics, and mathematics will find a wide range of useful applications in this state-of-the-art survey and reference book. Additionally, advanced graduate students interested in geometric algebra will find the most current applications and methods discussed.
Function Architecture Co-Design is a new paradigm for the design and implementation of embedded systems. Function/Architecture Optimization and Co-Design of Embedded Systems presents the authors' work in developing a function/architecture optimization and co-design formal methodology and framework for control-dominated embedded systems. The approach incorporates both data flow and control optimizations performed on a suitable novel intermediate design task representation. The aim is not only to enhance productivity of the designer and system developer, but also to improve quality of the final synthesis outcome. Function/Architecture Optimization and Co-Design of Embedded Systems discusses the proposed function/architecture co-design methodology, focusing on design representation, optimization, validation, and synthesis. Throughout the text, the difference between behavior specification and implementation is emphasized. The current need in co-design to move from synthesis-based technology to compiler-based technology is pointed out. The authors describe and show how performing data flow and control optimizations at the high abstraction level can lead to significant size and performance improvements in both the synthesized hardware and software. The work builds on bodies of research in the silicon and software compilation domains. The aforementioned techniques are specialized to the embedded systems domain. It is recognized that guided optimization can be applied on the internal design representation, no matter what the abstraction level, and need not be restricted to the final stages of software assembly code generation, or hardware synthesis. Function/Architecture Optimization and Co-Design of Embedded Systems will be of primary interest to researchers, developers, and professionals in the field of embedded systems design.
Computer graphics, computer-aided design, and computer-aided manufacturing are tools that have become indispensable to a wide array of activities in contemporary society. Euclidean processing provides the basis for these computer-aided design systems although it contains elements that inevitably lead to an inaccurate, non-robust, and complex system. The primary cause of the deficiencies of Euclidean processing is the division operation, which becomes necessary if an n-space problem is to be processed in n-space. The difficulties that accompany the division operation may be avoided if processing is conducted entirely in (n+1)-space. The paradigm attained through the logical extension of this approach, totally four-dimensional processing, is the subject of this book. This book offers a new system of geometric processing techniques that attain accurate, robust, and compact computations, and allow the construction of a systematically structured CAD system.
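A familiar illustration of why working in (n+1)-space removes the division, stated here only as background (the book's totally four-dimensional formulation may differ in detail), is the use of homogeneous coordinates: a 3-space point $(x, y, z)$ is carried as $(wx, wy, wz, w)$ with $w \neq 0$, transformations and intersections are then composed using only multiplications and additions, and the single division by $w$ is deferred until a Euclidean result is actually required, or avoided altogether when it is not.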
Complex Automated Negotiations have been widely studied and are becoming an important, emerging area in the field of Autonomous Agents and Multi-Agent Systems. In general, automated negotiations can be complex, since many factors characterize such negotiations. These factors include the number of issues, dependency between issues, representation of utility, negotiation protocol, negotiation form (bilateral or multi-party), time constraints, etc. Software agents can support automation or simulation of such complex negotiations on behalf of their owners, and can provide them with adequate bargaining strategies. In many multi-issue bargaining settings, negotiation becomes more than a zero-sum game, so bargaining agents have an incentive to cooperate in order to achieve efficient win-win agreements. Also, in a complex negotiation, there could be multiple issues that are interdependent. Thus, an agent's utility becomes more complex than a simple utility function. Further, negotiation forms and protocols may differ between bilateral and multi-party situations. To realize such complex automated negotiation, we have to incorporate advanced Artificial Intelligence technologies, including search, CSP, graphical utility models, Bayes nets, auctions, utility graphs, and prediction and learning methods. Applications could include e-commerce tools, decision-making support tools, negotiation support tools, collaboration tools, etc. In this book, we solicit papers on all aspects of such complex automated negotiations in the field of Autonomous Agents and Multi-Agent Systems. In addition, this book includes papers on ANAC 2010 (the Automated Negotiating Agents Competition), in which automated agents that have different negotiation strategies and are implemented by different developers negotiate automatically in several negotiation domains. ANAC is one of the real testbeds in which strategies for automated negotiating agents are evaluated in a tournament style.
Rapid increases in chip complexity, increasingly faster clocks, and the proliferation of portable devices have combined to make power dissipation an important design parameter. The power consumption of a digital system determines its heat dissipation as well as battery life. For some systems, power has become the most critical design constraint. Computer-Aided Design Techniques for Low Power Sequential Logic Circuits presents a methodology for low power design. The authors first present a survey of techniques for estimating the average power dissipation of a logic circuit. At the logic level, power dissipation is directly related to average switching activity. A symbolic simulation method that accurately computes the average switching activity in logic circuits is then described. This method is extended to handle sequential logic circuits by modeling correlation in time and by calculating the probabilities of present state lines. Computer-Aided Design Techniques for Low Power Sequential Logic Circuits then presents a survey of methods to optimize logic circuits for low power dissipation which target reduced switching activity. A method to retime a sequential logic circuit where registers are repositioned such that the overall glitching in the circuit is minimized is also described. The authors then detail a powerful optimization method that is based on selectively precomputing the output logic values of a circuit one clock cycle before they are required, and using the precomputed value to reduce internal switching activity in the succeeding clock cycle. Presented next is a survey of methods that reduce switching activity in circuits described at the register-transfer and behavioral levels. Also described is a scheduling algorithm that reduces power dissipation by maximising the inactivity period of the modules in a given circuit. Computer-Aided Design Techniques for Low Power Sequential Logic Circuits concludes with a summary and directions for future research.
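For context, the link between switching activity and power referred to above is usually captured by the standard dynamic-power relation (up to a factor of one half, depending on how the activity is defined), rather than by any formula specific to this book: $P_{\text{dyn}} \approx \alpha\, C_L\, V_{dd}^{2}\, f_{clk}$, where $\alpha$ is the average switching activity, $C_L$ the switched load capacitance, $V_{dd}$ the supply voltage, and $f_{clk}$ the clock frequency; this is why accurate estimation and reduction of $\alpha$ is central to the techniques surveyed.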
E. KONTIZAS Astronomical Institute National Observatory of Athens P.O. Box 20048 Athens GR-11810 GREECE The international conference on "Wide-Field Spectroscopy" and its subject matter were agreed during the general assembly of the International Astronomical Union (IAU) in August 1994 by the Working Group of Commission 9 "Wide-Field Imaging". This meeting gave an opportunity to world experts on this subject to gather in Athens, in order to discuss the current exploitation and the impending opportunities that exist in the area of multi-object spectroscopy, with particular emphasis on: 1. Astronomical instruments, data acquisition, processing and analysis techniques. 2. Astrophysical problems best tackled through wide-field, multi-object spectroscopy. The new fibre optic technology offers an important tool for the advancement of basic research and the development of industrial applications. Astronomical spectroscopy is a field of astronomy which has contributed much to the advancement of fundamental physics. The spectra of hot stars were used to determine the well-known Balmer formula for the wavelengths of hydrogen lines in the late 19th century. Since then, spectroscopy has made enormous progress in stellar atmosphere studies, in kinematics, and in the detection of high redshifts in the Universe. The traditional techniques of obtaining wide-field spectroscopic data are based on slitless spectroscopy (objective prism). Several observations worldwide make use of these techniques in order to obtain information on the spectral properties of objects in large areas of the sky.
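For reference, the Balmer formula mentioned above is the standard relation $\frac{1}{\lambda} = R_H \left( \frac{1}{2^{2}} - \frac{1}{n^{2}} \right)$ for $n = 3, 4, 5, \ldots$, with $R_H \approx 1.097 \times 10^{7}\,\text{m}^{-1}$ the Rydberg constant; for example, $n = 3$ gives the H$\alpha$ line near $656\,\text{nm}$.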
With the ever-increasing speed of integrated circuits, violations of the performance specifications are becoming a major factor affecting the product quality level. The need for testing timing defects is further expected to grow with the current design trend of moving towards deep submicron devices. After a long period of prevailing belief that high stuck-at fault coverage is sufficient to guarantee high quality of shipped products, the industry is now forced to rethink other types of testing. Delay testing has been a topic of extensive research both in industry and in academia for more than a decade. As a result, several delay fault models and numerous testing methodologies have been proposed. Delay Fault Testing for VLSI Circuits presents a selection of existing delay testing research results. It combines introductory material with state-of-the-art techniques that address some of the current problems in delay testing. Delay Fault Testing for VLSI Circuits covers some basic topics such as fault modeling and test application schemes for detecting delay defects. It also presents summaries and conclusions of several recent case studies and experiments related to delay testing. A selection of delay testing issues and test techniques such as delay fault simulation, test generation, design for testability and synthesis for testability are also covered. Delay Fault Testing for VLSI Circuits is intended for use by CAD and test engineers, researchers, tool developers and graduate students. It requires a basic background in digital testing. The book can be used as supplementary material for a graduate-level course on VLSI testing.