As future generation information technology (FGIT) becomes specialized and fragmented, it is easy to lose sight of the fact that many topics in FGIT have common threads and, because of this, advances in one discipline may be transmitted to others. Presentation of recent results obtained in different disciplines encourages this interchange for the advancement of FGIT as a whole. Of particular interest are hybrid solutions that combine ideas taken from multiple disciplines in order to achieve something more significant than the sum of the individual parts. Through such a hybrid philosophy, a new principle can be discovered, which has the propensity to propagate throughout multifaceted disciplines. FGIT 2009 was the first mega-conference that attempted to follow the above idea of hybridization in FGIT in the form of multiple events related to particular disciplines of IT, conducted by separate scientific committees, but coordinated in order to expose the most important contributions. It included the following international conferences: Advanced Software Engineering and Its Applications (ASEA), Bio-Science and Bio-Technology (BSBT), Control and Automation (CA), Database Theory and Application (DTA), Disaster Recovery and Business Continuity (DRBC; published independently), Future Generation Communication and Networking (FGCN) that was combined with Advanced Communication and Networking (ACN), Grid and Distributed Computing (GDC), Multimedia, Computer Graphics and Broadcasting (MulGraB), Security Technology (SecTech), Signal Processing, Image Processing and Pattern Recognition (SIP), and u- and e-Service, Science and Technology (UNESST).
This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Conference on Numerical Methods and Applications, NMA 2010, held in Borovets, Bulgaria, in August 2010. The 60 revised full papers presented together with 3 invited papers were carefully reviewed and selected from numerous submissions for inclusion in this book. The papers are organized in topical sections on Monte Carlo and quasi-Monte Carlo methods, environmental modeling, grid computing and applications, metaheuristics for optimization problems, and modeling and simulation of electrochemical processes.
The concept of CAST as Computer Aided Systems Theory was introduced by F. Pichler in the late 1980s to refer to computer theoretical and practical developments as tools for solving problems in system science. It was thought of as the third component (the other two being CAD and CAM) required to complete the path from computer and systems sciences to practical developments in science and engineering. Franz Pichler, of the University of Linz, organized the first CAST workshop in April 1988, which demonstrated the acceptance of the concepts by the scientific and technical community. Next, the University of Las Palmas de Gran Canaria joined the University of Linz to organize the first international meeting on CAST (Las Palmas, February 1989) under the name EUROCAST'89. This proved to be a very successful gathering of systems theorists, computer scientists and engineers from most European countries, North America and Japan. It was agreed that EUROCAST international conferences would be organized every two years, alternating between Las Palmas de Gran Canaria and a continental European location. Since 2001 the conference has been held exclusively in Las Palmas. Thus, successive EUROCAST meetings took place in Krems (1991), Las Palmas (1993), Innsbruck (1995), Las Palmas (1997), Vienna (1999), Las Palmas (2001), Las Palmas (2003), Las Palmas (2005) and Las Palmas (2007), in addition to an extra-European CAST conference in Ottawa in 1994.
When I attended college we studied vacuum tubes in our junior year. At that time an average radio had five vacuum tubes and better ones even seven. Then transistors appeared in the 1960s. A good radio was judged to be one with more than ten transistors. Later good radios had 15-20 transistors, and after that everyone stopped counting transistors. Today modern processors running personal computers have over 10 million transistors, and more millions will be added every year. The difference between 20 and 20M is in complexity, methodology and business models. Designs with 20 transistors are easily generated by design engineers without any tools, whilst designs with 20M transistors cannot be done by humans in reasonable time without the help of automation. This difference in complexity introduced a paradigm shift which required sophisticated methods and tools, and introduced design automation into design practice. Through the decomposition of the design process into many tasks and abstraction levels, the methodology of designing chips or systems has also evolved. Similarly, the business model has changed from vertical integration, in which one company did all the tasks from product specification to manufacturing, to globally distributed, client-server production in which most of the design and manufacturing tasks are outsourced.
The finite difference method (FDM) has been used to solve differential equation systems for centuries. The FDM works well for problems of simple geometry and was widely used before the invention of the much more efficient, robust finite element method (FEM). FEM is now widely used in handling problems with complex geometry. Currently, we are using and developing even more powerful numerical techniques aiming to obtain more accurate approximate solutions in a more convenient manner for even more complex systems. The meshfree or meshless method is one such phenomenal development of the past decade, and is the subject of this book. There are many MFree methods proposed so far for different applications. Currently, three monographs on MFree methods have been published. Mesh Free Methods: Moving Beyond the Finite Element Method by G. R. Liu (2002) provides a systematic discussion of basic theories and fundamentals for MFree methods, especially MFree weak-form methods. It provides a comprehensive record of well-known MFree methods and wide coverage of applications of MFree methods to problems of solid mechanics (solids, beams, plates, shells, etc.) as well as fluid mechanics. The Meshless Local Petrov-Galerkin (MLPG) Method by Atluri and Shen (2002) provides detailed discussions of the meshfree local Petrov-Galerkin (MLPG) method and its variations. Formulations and applications of MLPG are well addressed in their book.
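To make the finite difference idea above concrete, here is a minimal Python sketch (not taken from any of the books described here) that solves the 1D Poisson problem u''(x) = f(x) on [0, 1] with zero boundary values using second-order central differences; the particular test problem, grid size and function names are illustrative assumptions.

```python
import numpy as np

def solve_poisson_1d(f, n=50):
    """Solve u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0
    using second-order central differences on n interior points."""
    h = 1.0 / (n + 1)                        # grid spacing
    x = np.linspace(h, 1.0 - h, n)           # interior grid points
    # Tridiagonal matrix for the discrete second derivative
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f(x))             # discrete solution at interior points
    return x, u

# Example: u'' = -pi^2 sin(pi x) has exact solution u = sin(pi x)
x, u = solve_poisson_1d(lambda x: -np.pi**2 * np.sin(np.pi * x))
print(abs(u - np.sin(np.pi * x)).max())      # small discretization error
```

A finite element or meshfree discretization would replace this fixed grid and difference stencil with basis or shape functions attached to nodes, which is what gives those methods their flexibility on complex geometry.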
This book aims at providing a view of the current trends in the development of research on Synthesis and Control of Discrete Event Systems. Papers collected in this volume are based on a selection of talks given in June and July 2001 at two independent meetings: the Workshop on Synthesis of Concurrent Systems, held in Newcastle upon Tyne as a satellite event of ICATPN/ICACSD and organized by Ph. Darondeau and L. Lavagno, and the Symposium on the Supervisory Control of Discrete Event Systems (SCODES), held in Paris as a satellite event of CAV and organized by B. Caillaud and X. Xie. Synthesis is a generic term that covers all procedures aiming to construct, from specifications given as input, objects matching these specifications. Theories and applications of synthesis have long been studied and developed in connection with logics, programming, automata, discrete event systems, and hardware circuits. Logics and programming are outside the scope of this book, whose focus is on Discrete Event Systems and Supervisory Control. The stress today in this field is on a better applicability of theories and algorithms to practical systems design. Coping with decentralization or distribution and caring for an efficient realization of the synthesized systems or controllers are of the utmost importance in areas as diverse as the supervision of embedded or manufacturing systems, or the implementation of protocols in software or in hardware.
Assertion-based IP is much more than a comprehensive set of related assertions. It is a full-fledged reusable and configurable transaction-level verification component, which is used to detect both interesting and incorrect behaviors. Upon detecting interesting or incorrect behavior, the assertion-based IP alerts other verification components within a simulation environment, which are responsible for taking appropriate action. The focus of this book is to bring the assertion discussion up to a higher level and introduce a process for creating effective, reusable, assertion-based IP, which easily integrates with the user's existing verification environment, in other words the testbench infrastructure. The book promotes a set of guiding principles to follow when creating an assertion-based IP monitor.
A unique feature of this book is the fully worked out, detailed examples. The concepts presented in the book are drawn from the authors' experience developing assertion-based IP, as well as general assertion-based techniques. Creating Assertion-Based IP is an important resource for design and verification engineers. From the Foreword: "Creating Assertion-Based IP reduces to process the creation of one of the most valuable kinds of VIP: assertion-based VIP. This book will serve as a valuable reference for years to come."
"As chip size and complexity continues to grow exponentially, the
challenges of functional verification are becoming a critical issue
in the electronics industry. It is now commonly heard that logical
errors missed during functional verification are the most common
cause of chip re-spins, and that the costs associated with
functional verification are now outweighing the costs of chip
design. To cope with these challenges engineers are increasingly
relying on new design and verification methodologies and languages.
Transaction-based design and verification, constrained random
stimulus generation, functional coverage analysis, and
assertion-based verification are all techniques that advanced
design and verification teams routinely use today. Engineers are
also increasingly turning to design and verification models based
on C/C++ and SystemC in order to build more abstract, higher
performance hardware and software models and to escape the
limitations of RTL HDLs. This new book, Advanced Verification
Techniques, provides specific guidance for these advanced
verification techniques. The book includes realistic examples and
shows how SystemC and SCV can be applied to a variety of advanced
design and verification tasks."
A look at important new tools and algorithms for future product modeling systems, based on a seminar at the International Conference and Research Center for Computer Science, Schloss Dagstuhl, Germany, presented by internationally recognised experts in CAD technology.
Statistical timing analysis is an area of growing importance in nanometer technologies, as the uncertainties associated with process and environmental variations increase, and this chapter has captured some of the major efforts in this area. This remains a very active field of research, and there is likely to be a great deal of new research to be found in conferences and journals after this book is published. In addition to the statistical analysis of combinational circuits, a good deal of work has been carried out in analyzing the effect of variations on clock skew. Although we will not treat this subject in this book, the reader is referred to [LNPS00, HN01, JH01, ABZ03a] for details. 7 TIMING ANALYSIS FOR SEQUENTIAL CIRCUITS. 7.1 INTRODUCTION. A general sequential circuit is a network of computational nodes (gates) and memory elements (registers). The computational nodes may be conceptualized as being clustered together in an acyclic network of gates that forms a combinational logic circuit. A cyclic path in the direction of signal propagation is permitted in the sequential circuit only if it contains at least one register. In general, it is possible to represent any sequential circuit in terms of the schematic shown in Figure 7.1, which has I inputs, O outputs and M registers. The registers' outputs feed into the combinational logic which, in turn, feeds the register inputs. Thus, the combinational logic has I + M inputs and O + M outputs.
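As a rough illustration of the sequential-circuit model described in this excerpt, the following Python sketch (hypothetical class and signal names, not taken from the book) wraps a combinational block around M registers and shows why that block sees I + M inputs and produces O + M outputs.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class SequentialCircuit:
    """I primary inputs, O primary outputs, M registers.  The combinational block
    maps (primary inputs, register outputs) to (primary outputs, register inputs)."""
    num_inputs: int       # I
    num_outputs: int      # O
    num_registers: int    # M
    comb_logic: Callable[[Sequence[int], Sequence[int]],
                         Tuple[Sequence[int], Sequence[int]]]

    def combinational_io(self) -> Tuple[int, int]:
        # The combinational logic has I + M inputs and O + M outputs.
        return (self.num_inputs + self.num_registers,
                self.num_outputs + self.num_registers)

    def step(self, primary_in, state):
        """One clock cycle: evaluate the combinational logic, then latch the next state."""
        primary_out, next_state = self.comb_logic(primary_in, state)
        return primary_out, next_state

# Toy example: a 1-bit accumulator (I = 1, O = 1, M = 1).
acc = SequentialCircuit(1, 1, 1,
                        lambda inp, st: ([inp[0] ^ st[0]], [inp[0] ^ st[0]]))
print(acc.combinational_io())   # -> (2, 2)
print(acc.step([1], [0]))       # -> ([1], [1])
```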
"CAAD Futures" is a bi-annual conference that aims to promote the advancement of computer-aided architectural design in the service of those concerned with the quality of the built environment. The conferences are organized under the auspices of the CAAD Futures Foundation, which has its secretariat at the Eindhoven University of Technology in the Netherlands. This book contains papers prepared for the 10th CAAD Futures conference that took place at the National Cheng Kung University, 28 to 30 April, 2003. The chapters provide an overview of the state-of-the-art in research on computer-aided architectural design at that time. Information on the CAAD Futures Foundation and its conferences can be found at http: //www.caadfutures.arch.tue.nl
This book contains the extended and revised editions of all the talks of the ninth AACD Workshop, held in the Hotel Bachmair in Rottach-Egern, Germany, April 11-13, 2000. The local organization was managed by Rudolf Koch of Infineon Technologies AG, Munich, Germany. The program consisted of six tutorials per day over three days. Experts in the field presented these tutorials and communicated state-of-the-art information. At the end of the workshop the audience selects the program topics for the following workshop. The program committee, consisting of Johan Huijsing of Delft University of Technology, Willy Sansen of Katholieke Universiteit Leuven and Rudy van de Plassche of Broadcom Netherlands BV, Bunnik, elaborates the selected topics into a three-day program and selects experts in the field for presentation. Each AACD Workshop has given rise to the publication of a book by Kluwer entitled "Analog Circuit Design." A series of nine books in a row provides valuable information and good overviews of all analog circuit techniques concerning design, CAD, simulation and device modeling. These books can be seen as a reference for those people involved in analog and mixed-signal design. The aim of the workshop is to brainstorm on new and valuable design ideas in the area of analog circuit design. It is the hope of the program committee that this ninth book continues the tradition of providing valuable contributions to the design of analog and mixed-signal systems in Europe and the rest of the world.
Sigma delta modulation has become a very useful and widely applied technique for high performance Analog-to-Digital (A/D) conversion of narrow band signals. Through the use of oversampling and negative feedback, the quantization errors of a coarse quantizer are suppressed in a narrow signal band in the output of the modulator. Bandpass sigma delta modulation is well suited for A/D conversion of narrow band signals modulated on a carrier, as occurs in communication systems such as AM/FM receivers and mobile phones. Due to the nonlinearity of the quantizer in the feedback loop, a sigma delta modulator may exhibit input signal dependent stability properties. The same combination of the nonlinearity and the feedback loop complicates the stability analysis. In Bandpass Sigma Delta Modulators, the describing function method is used to analyze the stability of the sigma delta modulator. The linear gain model commonly used for the quantizer fails to predict small signal stability properties and idle patterns accurately. In Bandpass Sigma Delta Modulators an improved model for the quantizer is introduced, extending the linear gain model with a phase shift. Analysis shows that the phase shift of a sampled quantizer is in fact a phase uncertainty. Stability analysis of sigma delta modulators using the extended model allows accurate prediction of idle patterns and calculation of small-signal stability boundaries for loop filter parameters. A simplified rule of thumb is derived and applied to bandpass sigma delta modulators. The stability properties have a considerable impact on the design of single-loop, one-bit, high-order continuous-time bandpass sigma delta modulators. The continuous-time bandpass loop filter structure should have sufficient degrees of freedom to implement the desired (small-signal stable) sigma delta modulator behavior. Bandpass Sigma Delta Modulators will be of interest to practicing engineers and researchers in the areas of mixed-signal and analog integrated circuit design.
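To make the oversampling-plus-feedback principle described above tangible, here is a generic first-order, one-bit, discrete-time sigma-delta modulator in Python. It is a textbook low-pass loop, not the continuous-time bandpass architecture analyzed in the book, and the sampling rate and signal frequency are arbitrary illustrative choices.

```python
import numpy as np

def sigma_delta_first_order(x):
    """First-order, one-bit sigma-delta modulator: an integrator accumulates the
    difference between the input and the fed-back output bit, and a one-bit
    quantizer produces +1/-1.  Quantization noise is pushed out of the signal band."""
    integrator = 0.0
    feedback = 0.0
    bits = np.empty(len(x))
    for n, sample in enumerate(x):
        integrator += sample - feedback               # loop filter (discrete-time integrator)
        bits[n] = 1.0 if integrator >= 0.0 else -1.0  # coarse (one-bit) quantizer
        feedback = bits[n]                            # negative feedback around the loop
    return bits

# Heavily oversampled 1 kHz tone at 1 MHz sampling: the local average of the
# output bitstream tracks the input, while the quantization error sits at high frequencies.
fs, f0 = 1_000_000, 1_000
t = np.arange(5000) / fs
bits = sigma_delta_first_order(0.5 * np.sin(2 * np.pi * f0 * t))
print(bits[:16])
```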
This book is the first in a series on novel low power design architectures, methods and design practices. It results from a large European project started in 1997, whose goal is to promote the further development and the faster and wider industrial use of advanced design methods for reducing the power consumption of electronic systems. Low power design became crucial with the widespread use of portable information and communication terminals, where a small battery has to last for a long period. High performance electronics, in addition, suffers from a permanent increase of the dissipated power per square millimetre of silicon, due to the increasing clock rates, which causes cooling and reliability problems or otherwise limits the performance. The European Union's Information Technologies Programme 'Esprit' did therefore launch a 'Pilot action for Low Power Design', which eventually grew to 19 R&D projects and one coordination project, with an overall budget of 14 million Euro. It is meanwhile known as the European Low Power Initiative for Electronic System Design (ESD-LPD) and will be completed by the end of 2001. It involves 30 major European companies and 20 well-known institutes. The R&D projects aim to develop or demonstrate new design methods for power reduction, while the coordination project takes care that the methods, experiences and results are properly documented and publicised.
Model Based Fuzzy Control uses a given conventional or fuzzy open loop model of the plant under control to derive the set of fuzzy rules for the fuzzy controller. Of central interest are the stability, performance, and robustness of the resulting closed loop system. The major objective of model based fuzzy control is to use the full range of linear and nonlinear design and analysis methods to design such fuzzy controllers with better stability, performance, and robustness properties than non-fuzzy controllers designed using the same techniques. This objective has already been achieved for fuzzy sliding mode controllers and fuzzy gain schedulers - the main topics of this book. The primary aim of the book is to serve as a guide for the practitioner and to provide introductory material for courses in control theory.
Introduction to Hardware-Software Co-Design presents a number of issues of fundamental importance for the design of integrated hardware software products such as embedded, communication, and multimedia systems. This book is a comprehensive introduction to the fundamentals of hardware/software co-design. Co-design is still a new field but one which has substantially matured over the past few years. This book, written by leading international experts, covers all the major topics including: fundamental issues in co-design; hardware/software co-synthesis algorithms; prototyping and emulation; target architectures; compiler techniques; specification and verification; system-level specification. Special chapters describe in detail several leading-edge co-design systems including Cosyma, LYCOS, and Cosmos. Introduction to Hardware-Software Co-Design contains sufficient material for use by teachers and students in an advanced course of hardware/software co-design. It also contains extensive explanation of the fundamental concepts of the subject and the necessary background to bring practitioners up-to-date on this increasingly important topic.
This book contains selected contributions from the 7th CIRP International Seminar on Computer Aided Tolerancing, which was held on 24-25 April 2001 at the Ecole Normale Superieure de Cachan, France. Tolerancing research is of major importance in the fields of design, manufacturing and inspection. Designers use tolerancing as a tool for expressing functional intents and for managing geometrical variations during a product life cycle. This book focuses in particular on Geometrical Product Specification and Verification, which is an integrated view of tolerancing and metrology proposed for ISO/TC 213. Common geometrical bases for a language that can describe both functional specifications and inspection procedures are provided. An extended view of the uncertainty concept is also given. Geometric Product Specification and Verification: Functionality Integration is an excellent resource for anyone interested in computer aided tolerancing, as well as CAD/CAM/CAQ. It can also be used as a good starting point for advanced research activity and is a good reference for industrial issues. A global view of geometrical product specification, models for tolerance representation, tolerance analysis, tolerance synthesis, tolerance in manufacturing, tolerance management, tolerance inspection, tolerancing standards, industrial applications and CAT systems are also included.
With the advent of portable and autonomous computing systems, power consumption has emerged as a focal point in many research projects, commercial systems and DoD platforms. One current research initiative, which drew much attention to this area, is the Power Aware Computing and Communications (PAC/C) program sponsored by DARPA. Many of the chapters in this book include results from work that has been supported by the PAC/C program. The performance of computer systems has been tremendously improving while the size and weight of such systems has been constantly shrinking. The capacities of batteries relative to their sizes and weights have also been improving, but at a rate which is much slower than the rate of improvement in computer performance and the rate of shrinking in computer sizes. The relation between the power consumption of a computer system and its performance and size is a complex one which is very much dependent on the specific system and the technology used to build that system. We do not need a complex argument, however, to be convinced that energy and power, which is the rate of energy consumption, are becoming critical components in computer systems in general, and portable and autonomous systems in particular. Most of the early research on power consumption in computer systems addressed the issue of minimizing power in a given platform, which usually translates into minimizing energy consumption, and thus, longer battery life.
Design reuse is not just a topic of research but a real industrial necessity in the microelectronic domain, and it is thus driving the competitiveness of related areas such as telecommunications and automotive. Most companies have already dedicated a department or a central unit that turns design reuse into reality. All the main EDA conferences include a track on the topic, and specific conferences have even been established in this area, both in the USA and in Europe. Virtual Components Design and Reuse presents a selection of articles giving a mature and consolidated perspective on design reuse from different points of view. The authors stem from all relevant areas: research and academia, IP providers, EDA vendors and industry. Some classical topics in design reuse, like specification and generation of components, IP retrieval and cataloguing or interface customisation, are revisited and discussed in depth. Moreover, new hot topics are presented, among them IP quality, platform-based reuse, software IP, IP security, business models for design reuse, and major initiatives like the MEDEA EDA Roadmap.
The contents of this book are an expanded treatment of a set of presentations given at the first IEEE Workshop on Signal Propagation on Interconnects, held in Travemünde, Germany, May 14-16, 1997. Traditional VLSI-based cost and complexity measures have principally involved transistor counts and chip area. Yet with the increase in clock frequency, interconnects have become an issue of major concern. At present the emergence of systems on silicon faces designers with a new challenge: how to guarantee signal integrity while propagating high-speed signals between embedded cores on a chip. Thus, interconnects are becoming a significant limiter of future system performance. The elements involved are mainly transmission lines, but also other interconnect devices like vias and packages. The electrical phenomena that have to be investigated, such as delay and crosstalk, are governed by electromagnetic theory. Consequently, even in digital circuits there are large sections in which the signals can no longer be considered logical ones and zeros but must be treated as analog waveforms. To complicate matters, the description of subcircuits by ordinary differential equations is inadequate in many instances. Only the use of partial differential equations should guarantee sufficiently accurate results. Yet this would unfortunately increase the complexity of simulation and design tremendously. Therefore, new approaches need to be developed.
On Optimal Interconnections for VLSI describes, from a geometric perspective, algorithms for high-performance, high-density interconnections during the global and detailed routing phases of circuit layout. First, the book addresses area minimization, with a focus on near-optimal approximation algorithms for minimum-cost Steiner routing. In addition to practical implementations of recent methods, the implications of recent results on spanning tree degree bounds and the method of Zelikovsky are discussed. Second, the book addresses delay minimization, starting with a discussion of accurate, yet algorithmically tractable, delay models. Recent minimum-delay constructions are highlighted, including provably good cost-radius tradeoffs, critical-sink routing algorithms, Elmore delay-optimal routing, graph Steiner arborescences, non-tree routing, and wiresizing. Third, the book addresses skew minimization for clock routing and prescribed-delay routing formulations. The discussion starts with early matching-based constructions and goes on to treat zero-skew routing with provably minimum wirelength, as well as planar clock routing. Finally, the book concludes with a discussion of multiple (competing) objectives, i.e., how to optimize area, delay, skew, and other objectives simultaneously. These techniques are useful when the routing instance has heterogeneous resources or is highly congested, as in FPGA routing, multi-chip packaging, and very dense layouts. Throughout the book, the emphasis is on practical algorithms and a complete, self-contained development. On Optimal Interconnections for VLSI will be of use both to circuit designers (CAD tool users) and to researchers and developers in the area of performance-driven physical design.
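Since the description mentions Elmore delay-optimal routing, the following small Python sketch computes Elmore delays on an RC tree; the node-indexing convention and the toy resistance/capacitance values are assumptions made purely for illustration, not anything prescribed by the book.

```python
def elmore_delays(parent, resistance, capacitance):
    """Elmore delay of every node in an RC tree.

    parent[i]      -- index of node i's parent (the root/source has parent -1)
    resistance[i]  -- resistance of the wire from parent[i] to i (0 for the root)
    capacitance[i] -- capacitance at node i

    The delay at node i is the sum, over every edge on the path from the root
    to i, of that edge's resistance times the total capacitance downstream of it.
    Assumes nodes are indexed so that parent[i] < i (topological order).
    """
    n = len(parent)
    # Downstream (subtree) capacitance: accumulate children into parents.
    subtree_cap = list(capacitance)
    for i in range(n - 1, 0, -1):
        subtree_cap[parent[i]] += subtree_cap[i]
    # Delay: parent's delay plus this edge's resistance times its downstream capacitance.
    delay = [0.0] * n
    for i in range(1, n):
        delay[i] = delay[parent[i]] + resistance[i] * subtree_cap[i]
    return delay

# Toy 3-node path: source -> node 1 -> node 2 (sink)
print(elmore_delays(parent=[-1, 0, 1],
                    resistance=[0.0, 100.0, 200.0],     # ohms
                    capacitance=[0.0, 1e-12, 2e-12]))   # farads
```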
There are three outstanding points of this book. First, for the first time, a collective point of view on the role of the artificial intelligence paradigm in logic design is introduced. Second, the book reveals new horizons for logic design tools built on the technologies of the near future. Finally, the contributors to the book are twenty recognized leaders in the field from seven research centres. The chapters of the book have been carefully reviewed by equally qualified experts. All contributors are experienced in practical electronic design and in teaching engineering courses. Thus, the book's style is accessible to graduate students, practising engineers and researchers.
Memory Design Techniques for Low Energy Embedded Systems centers on one of the most outstanding problems in chip design for embedded applications. It guides the reader through different memory organizations and technologies, and it reviews the most successful strategies for optimizing them in the power and performance plane.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. This book explains the motivations and the results of a collaborative project, whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project already deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology.
Advanced Formal Verification shows the latest developments in the verification domain from the perspectives of the user and the developer. World-leading experts describe the underlying methods of today's verification tools and describe various scenarios from industrial practice. In the first part of the book the core techniques of today's formal verification tools, such as SAT and BDDs, are addressed. In addition, multipliers, which are known to be difficult, are studied. The second part gives insight into professional tools and the underlying methodology, such as property checking and assertion-based verification. Finally, analog components have to be considered to cope with complete system-on-chip designs.