Design for Manufacturability and Yield for Nano-Scale CMOS walks the reader through all aspects of manufacturability and yield in a nano-CMOS process and shows how to address each one at the proper design step: the design and layout of standard cells, yield-grading libraries for critical area and lithography artifacts, place and route, CMP model-based simulation and dummy-fill insertion, mask planning, simulation and manufacturing, and finally statistical design and statistical timing closure. It alerts the designer to the pitfalls to watch for and to the good practices that can enhance a design's manufacturability and yield. This book is a must-read for the serious practicing IC designer and an excellent primer for any graduate student intent on a career in IC design or in EDA tool development.
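Critical area, one of the yield-grading metrics mentioned above, feeds directly into classic defect-limited yield models. As a toy illustration (my own, not the book's flow), the snippet below evaluates the textbook Poisson yield model Y = exp(-A_crit * D0) with made-up numbers:

```python
import math

def poisson_yield(critical_area_cm2, defect_density_per_cm2):
    """Classic Poisson yield model: Y = exp(-A_crit * D0)."""
    return math.exp(-critical_area_cm2 * defect_density_per_cm2)

# Hypothetical values: 0.4 cm^2 of critical area at 0.5 defects/cm^2.
print(f"predicted yield: {poisson_yield(0.4, 0.5):.1%}")  # ~81.9%
```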
Covers in detail promising solutions at the device, circuit, and architecture levels of abstraction, after first explaining from first principles the sensitivity of the various MOS leakage sources to these conditions. The resulting effects are also treated, so the reader understands the effectiveness of leakage power reduction solutions under these different conditions. Case studies supply real-world examples that reap the benefits of leakage power reduction solutions, and the book highlights the device design choices available to mitigate increases in the leakage components as technology scales.
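To make the leakage sensitivity concrete: the standard first-order subthreshold current model (a generic textbook formula, not an excerpt from the book) shows the exponential dependence of off-state leakage on threshold voltage; every parameter value below is an illustrative assumption:

```python
import math

def subthreshold_leakage(vth, vgs=0.0, i0=1e-7, n=1.5, temp_k=300.0):
    """First-order subthreshold model: I = I0 * exp((Vgs - Vth) / (n * VT)),
    with VT = kT/q. All parameter values are illustrative placeholders,
    not numbers from the book."""
    vt_thermal = 1.380649e-23 * temp_k / 1.602176634e-19  # kT/q, ~26 mV at 300 K
    return i0 * math.exp((vgs - vth) / (n * vt_thermal))

# Scaling Vth down from 0.4 V to 0.2 V raises off-state leakage exponentially.
for vth in (0.4, 0.3, 0.2):
    print(f"Vth={vth:.1f} V  ->  I_leak ~ {subthreshold_leakage(vth):.3e} A")
```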
This book deals with the analysis and design of CMOS current-mode circuits for data communications. CMOS current-mode sampled-data networks, i.e. switched-current circuits, are excluded. Major subjects covered in the book include: a critical comparison of voltage-mode and current-mode circuits; the building blocks of current-mode circuits; design techniques; modeling of wire channels; electrical signaling for Gbps data communications; ESD protection for current-mode circuits; and more. This book will appeal to IC design engineers, hardware system engineers and others.
Constraint-Based Verification covers an emerging field in functional verification of electronic designs, referred to as constraint-based verification. The topics are developed in the context of a wide range of dynamic and static verification approaches, including simulation, emulation, and formal methods. The goal is to show how constraints, or assertions, can be used to automate the generation of testbenches, resulting in a seamless verification flow. Topics such as verification coverage and the connection with assertion-based verification are also covered. The book targets verification engineers as well as researchers, covering both methodological and technical issues, with particular stress on the latest advances in functional verification. The research community has witnessed growing interest in constraint-based functional verification, and various techniques have been developed. Though relatively new, they have reached a level of maturity at which they are appearing in commercial tools such as Vera and SystemVerilog.
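The book's flow is built on constraint languages such as Vera and SystemVerilog; purely as a conceptual sketch, the Python fragment below mimics the core idea of constrained-random testbench generation with naive rejection sampling and a response assertion. All names (the constraint, dut_model, the transaction fields) are hypothetical:

```python
import random

# Constraint on the stimulus: a hypothetical bus transaction whose
# address must be word-aligned and whose burst length is bounded.
def constraint(txn):
    return txn["addr"] % 4 == 0 and 1 <= txn["burst"] <= 8

def random_txn():
    return {"addr": random.randrange(0, 1024), "burst": random.randrange(0, 16)}

def constrained_random(n):
    """Rejection sampling: draw candidates, keep those satisfying the
    constraint. Real constraint solvers are far more sophisticated."""
    out = []
    while len(out) < n:
        txn = random_txn()
        if constraint(txn):
            out.append(txn)
    return out

# Assertion on a stand-in device model: the response must echo the address.
def dut_model(txn):
    return {"addr": txn["addr"], "ok": True}

for txn in constrained_random(5):
    resp = dut_model(txn)
    assert resp["addr"] == txn["addr"], f"protocol violation on {txn}"
    print("passed:", txn)
```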
New Algorithms, Architectures and Applications for Reconfigurable Computing consists of a collection of contributions from the authors of some of the best papers from the Field Programmable Logic conference (FPL 03) and the Design and Test Europe conference (DATE 03). In all, seventy-nine authors, from research teams from all over the world, were invited to present their latest research in the extended format permitted by this special volume. The result is a valuable book that is a unique record of the state of the art in research into field programmable logic and reconfigurable computing. The contributions are organized into twenty-four chapters and are grouped into three main categories: architectures, tools and applications. Within these three broad areas the most strongly represented themes are coarse-grained architectures; dynamically reconfigurable and multi-context architectures; tools for coarse-grained and reconfigurable architectures; and networking, security and encryption applications. Field programmable logic and reconfigurable computing are exciting research disciplines that span the traditional boundaries of electronic engineering and computer science. When the skills of both research communities are combined to address the challenges of a single research discipline they serve as a catalyst for innovative research. The work reported in the chapters of this book captures the spirit of that innovation.
Many applications in science and engineering require a digital model of a real physical object. Advanced scanning technology has made it possible to scan such objects and generate point samples on their boundaries. This book, first published in 2007, shows how to compute a digital model from this point sample. After developing the basics of sampling theory and its connections to various geometric and topological properties, the author describes a suite of algorithms that have been designed for the reconstruction problem, including algorithms for surface reconstruction from dense samples, from samples that are not adequately dense and from noisy samples. Voronoi- and Delaunay-based techniques, implicit surface-based methods and Morse theory-based methods are covered. Scientists and engineers working in drug design, medical imaging, CAD, GIS, and many other areas will benefit from this first book on the subject.
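For flavor only, here is a minimal Delaunay-edge-filtering sketch in Python, a crude alpha-shape-style stand-in for the Voronoi/Delaunay reconstruction algorithms the book actually develops; the length threshold is an ad hoc heuristic, not one of the book's provably correct criteria:

```python
import numpy as np
from scipy.spatial import Delaunay

# Dense, slightly noisy samples on a circle stand in for a scanned boundary.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
pts = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.005, size=(200, 2))

# Collect all edges of the Delaunay triangulation.
tri = Delaunay(pts)
edges = set()
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (a, c)):
        edges.add((min(u, v), max(u, v)))

# Keep only short edges: with dense sampling these approximate the curve.
# The cutoff is a crude heuristic; the book's algorithms give guarantees.
lengths = {e: np.linalg.norm(pts[e[0]] - pts[e[1]]) for e in edges}
threshold = 2.0 * np.percentile(list(lengths.values()), 25)
curve = [e for e, length in lengths.items() if length < threshold]
print(f"kept {len(curve)} of {len(edges)} edges as the reconstructed curve")
```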
The 33 papers in this book were selected from among the 97 papers presented in 28 sessions at the sixth edition of the International Conference on Integrated Design and Manufacturing in Mechanical Engineering. This conference represents the state of the art of research in the field. Two keynote papers introduce the subject of the conference and are followed by the different themes highlighted during the conference.
This book provides insight into the behavior and design of power distribution systems for high speed, high complexity integrated circuits. Also presented are criteria for estimating minimum required on-chip decoupling capacitance. Techniques and algorithms for computer-aided design of on-chip power distribution networks are also described; however, the emphasis is on developing circuit intuition and understanding the principles that govern the design and operation of power distribution systems.
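A back-of-the-envelope version of such a decoupling criterion (my illustration, not the book's derivation) treats the decoupling capacitance as the sole charge source during a current transient, which gives C >= I * dt / dV:

```python
def min_decap(i_peak_a, duration_s, max_droop_v):
    """First-order minimum decoupling capacitance: C >= I * dt / dV.
    Assumes the decap alone supplies the switching charge while the
    supply is not allowed to droop past the noise budget."""
    return i_peak_a * duration_s / max_droop_v

# Hypothetical numbers: a 2 A transient lasting 100 ps, 50 mV allowed droop.
c = min_decap(2.0, 100e-12, 0.05)
print(f"minimum on-chip decap ~ {c * 1e9:.1f} nF")  # 4.0 nF
```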
A unified and systematic description of analysis and decision problems within a wide class of uncertain systems, described by traditional mathematical methods and by relational knowledge representations. Prof. Bubnicki takes a unique approach to stability and stabilization of uncertain systems.
The building blocks of today's and future embedded systems are complex intellectual property components, or cores, many of which are programmable processors. Traditionally, these embedded processors mostly have been programmed in assembly languages due to efficiency reasons. This implies time-consuming programming, extensive debugging, and low code portability. The requirements of short time-to-market and dependability of embedded systems are obviously much better met by using high-level language (e.g. C) compilers instead of assembly. However, the use of C compilers frequently incurs a code quality overhead as compared to manually written assembly programs. Due to the need for efficient embedded systems, this overhead must be very low in order to make compilers useful in practice. In turn, this requires new compiler techniques that take the specific constraints in embedded system design into account. An example are the specialized architectures of recent DSP and multimedia processors, which are not yet sufficiently exploited by existing compilers.
For the past decade or so, Computational Intelligence (CI) has been an extremely "hot" topic amongst researchers working in the fields of biomedicine and bioinformatics. There are many successful applications of CI in such areas as computational genomics, prediction of gene expression, protein structure, and protein-protein interactions, modeling of evolution, or neuronal systems modeling and analysis. However, there still are many problems in biomedicine and bioinformatics that are in desperate need of advanced and efficient computational methodologies to deal with the tremendous amounts of data so prevalent in those kinds of research pursuits. Unfortunately, scientists in both these fields are very often unaware of the abundance of computational techniques that could be put to use to help them analyze and understand the data underlying their research inquiries. On the other hand, computational intelligence practitioners are often unfamiliar with the particular problems that their algorithms could be successfully applied to. The separation between the two worlds is partially caused by the use of different languages in these two spheres of science, but also by the relatively small number of publications devoted solely to facilitating the exchange of new computational algorithms and methodologies on one hand, and the needs of the realms of biomedicine and bioinformatics on the other. In order to help fill the gap between the scientists on both sides of this spectrum, we have solicited contributions from researchers actively applying computational intelligence techniques to important problems in biomedicine and bioinformatics. The purpose of this book is to provide an overview of powerful state-of-the-art methodologies currently utilized for biomedicine- and/or bioinformatics-oriented applications, so that researchers working in those fields can learn of new methods to help them tackle their problems. On the other hand, we also hope that the CI community will find this book useful by discovering a new and intriguing area of applications.
System Level Design Model with Reuse of System IP addresses system design by providing a framework for assessing and developing system design practices that observe and utilise reuse of system design know-how. The know-how accumulated in companies represents an intellectual asset, or property ('IP'). The current situation regarding system design in general is that the methods are insufficient, informally practised, and weakly supported by formal techniques and tools. Regarding system design reuse, the methods and tools for exchanging system design data and know-how within companies are ad hoc and insufficient; since even the means available inside companies are insufficient, there is effectively no exchange between companies at all. To establish means for systematic reuse, the required system design concepts are identified through an analysis of existing design flows, and their definitions are catalogued in the form of a glossary and taxonomy. The System Design Conceptual Model (SDCM) formalises the concepts and their relationships by providing meta-models for both the system design process (SDPM) and the system under design (SUDM). The models are generic enough to be applied in various organisations and to various kinds of electronic systems. System design patterns are presented as example means for enhancing reuse. The characteristics of system-level IP, a list of heuristic criteria of system-IP reusability, and guidelines for assessing system-IP reusability within a particular design flow provide a pragmatic view of reuse. An analysis of selected languages and formalisms, and guidelines for the analysis of system-level languages, provide means for assessing how the expression and representation of system design concepts are supported by languages. System Level Design Model with Reuse of System IP describes both a theoretical framework and various practical means for improving reuse in the design of complex systems. The information can be used in various ways to enhance system design: understanding system design; analysing and assessing existing design flows, reuse practices and languages; instantiating design flows for new design paradigms; eliciting requirements for methods and tools; organising teams; and educating employees, partners and customers.
The roots of the project which culminates with the writing of this book can be traced to the work on logic synthesis started in 1979 at the IBM Watson Research Center and at University of California, Berkeley. During the preliminary phases of these projects, the importance of logic minimization for the synthesis of area and performance effective circuits clearly emerged. In 1980, Richard Newton stirred our interest by pointing out new heuristic algorithms for two-level logic minimization and the potential for improving upon existing approaches. In the summer of 1981, the authors organized and participated in a seminar on logic manipulation at IBM Research. One of the goals of the seminar was to study the literature on logic minimization and to look at heuristic algorithms from a fundamental and comparative point of view. The fruits of this investigation were surprisingly abundant: it was apparent from an initial implementation of recursive logic minimization (ESPRESSO-I) that, if we merged our new results into a two-level minimization program, an important step forward in automatic logic synthesis could result. ESPRESSO-II was born and an APL implementation was created in the summer of 1982. The results of preliminary tests on a fairly large set of industrial examples were good enough to justify the publication of our algorithms. It is hoped that the strength and speed of our minimizer warrant its Italian name, which denotes both express delivery and a specially-brewed black coffee.
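ESPRESSO-II's heuristics are far beyond a few lines, but the essence of two-level minimization can be sketched with the classic exact merging step (Quine-McCluskey style, shown here as a simple stand-in for ESPRESSO's heuristic approach): cubes differing in exactly one literal combine into a larger cube with a don't-care:

```python
from itertools import combinations

def combine(a, b):
    """Merge two cubes (strings over '0','1','-') that differ in exactly
    one specified bit, e.g. '010' + '011' -> '01-'; otherwise None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Iteratively merge cubes until no pair combines; cubes that never
    merge are the prime implicants of the function."""
    cubes = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while cubes:
        merged, nxt = set(), set()
        for a, b in combinations(cubes, 2):
            c = combine(a, b)
            if c:
                nxt.add(c)
                merged |= {a, b}
        primes |= cubes - merged
        cubes = nxt
    return primes

# f(a,b,c) = sum of minterms 0,1,2,5,6,7 -> six prime implicants.
print(prime_implicants([0, 1, 2, 5, 6, 7], 3))
```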
Computer Aided Tolerancing (CAT) is an important topic in any field of design and production where parts move relative to one another and/or are assembled together. Geometric variations from specified dimensions and form always occur when parts are manufactured. Improvements in production systems can reduce the variations, but cannot eliminate them. To shorten the time from concept to market of a product, it has become increasingly important to take clearances and the tolerancing of manufacturing variations into consideration right from the beginning, at the stage of design. Hence, geometric models are defined that represent both the complete array of geometric variations possible during manufacture and also the influence of geometry on the function of individual parts and on assemblies of them. The contents of this book originate from a collection of selected papers presented at the 9th CIRP International Seminar on CAT, held April 10-12, 2005 at Arizona State University, USA. The CIRP (College International pour la Recherche en Production, or International Institution for Production Engineering Research) plans this seminar every two years, and the book is one in a series of Proceedings on CAT. The book is organized into seven parts: Models for Tolerance Representation and Specification, Tolerance Analysis, Tolerance Synthesis, Computational Metrology and Verification, Tolerances in Manufacturing, Applications to Machinery, and Incorporating Elasticity in Tolerance Models.
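As a minimal taste of tolerance analysis (a generic textbook calculation, not drawn from the proceedings), the sketch below compares the two classic 1-D stack-up estimates, worst-case and root-sum-square, for a hypothetical chain of part tolerances:

```python
import math

def stackup(tolerances):
    """Two classic 1-D tolerance-stack estimates: worst-case assumes all
    deviations align; RSS (root-sum-square) assumes independent variation."""
    worst_case = sum(tolerances)
    rss = math.sqrt(sum(t ** 2 for t in tolerances))
    return worst_case, rss

# Hypothetical chain of four part tolerances (mm) across an assembly gap.
wc, rss = stackup([0.10, 0.05, 0.08, 0.12])
print(f"worst case: +/-{wc:.2f} mm   statistical (RSS): +/-{rss:.2f} mm")
```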
This book reviews the theoretical fundamentals of grey-box identification and puts the spotlight on MoCaVa, a MATLAB-compatible software tool, for facilitating the procedure of effective grey-box identification. It demonstrates the application of MoCaVa using two case studies drawn from the paper and steel industries. In addition, the book answers common questions which will help in building accurate models for systems with unknown inputs.
MARTENS Bob and BROWN Andre, Co-conference Chairs, CAAD Futures 2005. Computer Aided Architectural Design is a particularly dynamic field that is developing through the actions of architects, software developers, researchers, technologists, users, and society alike. CAAD tools in the architectural office are no longer prominent outsiders, but have become ubiquitous tools for all professionals in the design disciplines. At the same time, techniques and tools from other fields and uses are entering the field of architectural design. This is exemplified by the tendency to speak of Information and Communication Technology as a field in which CAAD is embedded. Exciting new combinations are possible for those who are firmly grounded in an understanding of architectural design and who have a clear vision of the potential use of ICT. CAAD Futures 2005 called for innovative and original papers in the field of Computer Aided Architectural Design that present rigorous, high-quality research and development work. Papers should point towards the future, while being based on a thorough understanding of the past and present.
This book introduces all the relevant information required to understand and put Model Driven Architecture (MDA) into industrial practice. It clearly explains which conceptual primitives should be present in a system specification, how to use UML to properly represent this subset of basic conceptual constructs, how to identify just those diagrams and modeling constructs that are actually required to create a meaningful conceptual schema, and how to accomplish the transformation process between the problem space and the solution space. The approach is fully supported by commercially available tools.
Mass Customization and Footwear: Myth, Salvation or Reality is the only book dedicated to the application of mass customization in a particular industry. By showing examples of how a "mature" manufacturing sector like shoe making can be thoroughly renovated in business and mentality by applying this paradigm, it will appeal to practitioners in the footwear sector and to postgraduates, researchers and lecturers in the area of mass customization.
As future generation information technology (FGIT) becomes specialized and fragmented, it is easy to lose sight of the fact that many topics in FGIT have common threads and, because of this, advances in one discipline may be transmitted to others. Presentation of recent results obtained in different disciplines encourages this interchange for the advancement of FGIT as a whole. Of particular interest are hybrid solutions that combine ideas taken from multiple disciplines in order to achieve something more significant than the sum of the individual parts. Through such a hybrid philosophy, a new principle can be discovered, which has the propensity to propagate throughout multifaceted disciplines. FGIT 2009 was the first mega-conference that attempted to follow the above idea of hybridization in FGIT in the form of multiple events related to particular disciplines of IT, conducted by separate scientific committees, but coordinated in order to expose the most important contributions. It included the following international conferences: Advanced Software Engineering and Its Applications (ASEA), Bio-Science and Bio-Technology (BSBT), Control and Automation (CA), Database Theory and Application (DTA), Disaster Recovery and Business Continuity (DRBC; published independently), Future Generation Communication and Networking (FGCN) that was combined with Advanced Communication and Networking (ACN), Grid and Distributed Computing (GDC), Multimedia, Computer Graphics and Broadcasting (MulGraB), Security Technology (SecTech), Signal Processing, Image Processing and Pattern Recognition (SIP), and u- and e-Service, Science and Technology (UNESST).
This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Conference on Numerical Methods and Applications, NMA 2010, held in Borovets, Bulgaria, in August 2010. The 60 revised full papers presented together with 3 invited papers were carefully reviewed and selected from numerous submissions for inclusion in this book. The papers are organized in topical sections on Monte Carlo and quasi-Monte Carlo methods, environmental modeling, grid computing and applications, metaheuristics for optimization problems, and modeling and simulation of electrochemical processes.
The concept of CAST as Computer Aided Systems Theory was introduced by F. Pichler in the late 1980s to refer to computer theoretical and practical developments as tools for solving problems in system science. It was thought of as the third component (the other two being CAD and CAM) required to complete the path from computer and systems sciences to practical developments in science and engineering. Franz Pichler, of the University of Linz, organized the first CAST workshop in April 1988, which demonstrated the acceptance of the concepts by the scientific and technical community. Next, the University of Las Palmas de Gran Canaria joined the University of Linz to organize the first international meeting on CAST (Las Palmas, February 1989) under the name EUROCAST'89. This proved to be a very successful gathering of systems theorists, computer scientists and engineers from most European countries, North America and Japan. It was agreed that EUROCAST international conferences would be organized every two years, alternating between Las Palmas de Gran Canaria and a continental European location. From 2001 the conference has been held exclusively in Las Palmas. Thus, successive EUROCAST meetings took place in Krems (1991), Las Palmas (1993), Innsbruck (1995), Las Palmas (1997), Vienna (1999), Las Palmas (2001), Las Palmas (2003), Las Palmas (2005) and Las Palmas (2007), in addition to an extra-European CAST conference in Ottawa in 1994.
When I attended college we studied vacuum tubes in our junior year. At that time an average radio had five vacuum tubes and better ones even seven. Then transistors appeared in the 1960s. A good radio was judged to be one with more than ten transistors. Later good radios had 15-20 transistors, and after that everyone stopped counting transistors. Today modern processors running personal computers have over 10 million transistors, and more millions will be added every year. The difference between 20 and 20M is in complexity, methodology and business models. Designs with 20 transistors are easily generated by design engineers without any tools, whilst designs with 20M transistors cannot be done by humans in reasonable time without the help of automation. This difference in complexity introduced a paradigm shift which required sophisticated methods and tools, and introduced design automation into design practice. By the decomposition of the design process into many tasks and abstraction levels, the methodology of designing chips or systems has also evolved. Similarly, the business model has changed from vertical integration, in which one company did all the tasks from product specification to manufacturing, to globally distributed, client-server production in which most of the design and manufacturing tasks are outsourced.
The finite difference method (FDM) has been used to solve differential equation systems for centuries. The FDM works well for problems of simple geometry and was widely used before the invention of the much more efficient, robust finite element method (FEM). FEM is now widely used in handling problems with complex geometry. Currently, we are using and developing even more powerful numerical techniques aiming to obtain more accurate approximate solutions in a more convenient manner for even more complex systems. The meshfree or meshless method is one such phenomenal development of the past decade, and is the subject of this book. There are many MFree methods proposed so far for different applications. Currently, three monographs on MFree methods have been published. Mesh Free Methods: Moving Beyond the Finite Element Method by G.R. Liu (2002) provides a systematic discussion of basic theories and fundamentals for MFree methods, especially MFree weak-form methods. It provides a comprehensive record of well-known MFree methods and wide coverage of applications of MFree methods to problems of solid mechanics (solids, beams, plates, shells, etc.) as well as fluid mechanics. The Meshless Local Petrov-Galerkin (MLPG) Method by Atluri and Shen (2002) provides detailed discussions of the meshfree local Petrov-Galerkin (MLPG) method and its variations. Formulations and applications of MLPG are well addressed in their book.
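Since the blurb contrasts meshfree methods with the FDM, a minimal FDM example may help fix ideas (a generic textbook problem, not an example from the book): a second-order central difference turns the two-point boundary-value problem u'' = f into a tridiagonal linear system:

```python
import numpy as np

# Solve u''(x) = -pi^2 * sin(pi*x) on (0, 1) with u(0) = u(1) = 0.
# Exact solution: u(x) = sin(pi*x).
n = 50                      # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Central difference: (u[i-1] - 2*u[i] + u[i+1]) / h^2 = f(x[i]).
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h ** 2
f = -np.pi ** 2 * np.sin(np.pi * x)

u = np.linalg.solve(A, f)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())  # O(h^2), ~3e-4 here
```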
This book aims at providing a view of the current trends in the development of research on Synthesis and Control of Discrete Event Systems. Papers collected in this volume are based on a selection of talks given in June and July 2001 at two independent meetings: the Workshop on Synthesis of Concurrent Systems, held in Newcastle upon Tyne as a satellite event of ICATPN/ICACSD and organized by Ph. Darondeau and L. Lavagno, and the Symposium on the Supervisory Control of Discrete Event Systems (SCODES), held in Paris as a satellite event of CAV and organized by B. Caillaud and X. Xie. Synthesis is a generic term that covers all procedures aiming to construct, from specifications given as input, objects matching these specifications. Theories and applications of synthesis have long been studied and developed in connection with logics, programming, automata, discrete event systems, and hardware circuits. Logics and programming are outside the scope of this book, whose focus is on Discrete Event Systems and Supervisory Control. The stress today in this field is on a better applicability of theories and algorithms to practical systems design. Coping with decentralization or distribution and caring for an efficient realization of the synthesized systems or controllers are of the utmost importance in areas as diverse as the supervision of embedded or manufacturing systems, or the implementation of protocols in software or in hardware.
Assertion-based IP is much more than a comprehensive set of related assertions. It is a full-fledged reusable and configurable transaction-level verification component, which is used to detect both interesting and incorrect behaviors. Upon detecting interesting or incorrect behavior, the assertion-based IP alerts other verification components within a simulation environment, which are responsible for taking appropriate action. The focus of this book is to bring the assertion discussion up to a higher level and introduce a process for creating effective, reusable, assertion-based IP that easily integrates with the user's existing verification environment, in other words the testbench infrastructure. The book promotes a set of guiding principles for creating an assertion-based IP monitor.
A unique feature of this book is the fully worked out, detailed examples. The concepts presented in the book are drawn from the authors' experience developing assertion-based IP, as well as general assertion-based techniques. Creating Assertion-Based IP is an important resource for design and verification engineers. From the Foreword: "Creating Assertion-Based IP reduces to process the creation of one of the most valuable kinds of VIP: assertion-based VIP. This book will serve as a valuable reference for years to come."
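Such monitors are normally written in SystemVerilog or PSL; the Python sketch below (all class and field names hypothetical) only mirrors the structure described above: a reusable, configurable monitor that classifies observed transactions and notifies subscribed verification components instead of acting on its own:

```python
class AssertionMonitor:
    """Transaction-level monitor: classifies behavior and alerts subscribers.
    It deliberately takes no corrective action itself; the surrounding
    verification components decide what to do."""

    def __init__(self, max_latency):
        self.max_latency = max_latency      # configuration knob
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def observe(self, txn):
        if txn["latency"] > self.max_latency:
            self._notify("error", txn)      # incorrect behavior
        elif txn["latency"] == self.max_latency:
            self._notify("corner", txn)     # interesting (boundary) behavior

    def _notify(self, kind, txn):
        for cb in self.subscribers:
            cb(kind, txn)

# Usage: a testbench component subscribes and reacts to alerts.
mon = AssertionMonitor(max_latency=8)
mon.subscribe(lambda kind, txn: print(kind.upper(), txn))
for lat in (3, 8, 12):
    mon.observe({"latency": lat})
```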