In recent years, the issue of linkage in GEAs has garnered greater attention and recognition from researchers. Conventional approaches that rely heavily on ad hoc parameter tweaking to control the search by balancing exploitation and exploration are grossly inadequate. As the work reported here shows, such parameter-tweaking approaches have their limits; they can easily be fooled by trivial cases or by peculiarities of the class of problems the algorithms are designed to handle. Furthermore, these approaches are usually blind to the interactions between decision variables, thereby disrupting the partial solutions that are being built up along the way.
Knowledge discovery is today a significant study and research area. In finding answers to many research questions in this area, the ultimate hope is that knowledge can be extracted from the various forms of data around us. This book covers recent advances in unsupervised and supervised data analysis methods in computational intelligence for knowledge discovery. Its first part provides a collection of recent research on distributed clustering, self-organizing maps, and their recent extensions. If labeled data or data with known associations are available, supervised data analysis methods may be used, such as classifying neural networks, fuzzy rule-based classifiers, and decision trees; the book therefore also presents a collection of important methods of supervised data analysis. "Classification and Clustering for Knowledge Discovery" also includes a variety of applications of knowledge discovery in health, safety, commerce, mechatronics, sensor networks, and telecommunications.
Functional Design Errors in Digital Circuits Diagnosis covers a wide spectrum of innovative methods to automate the debugging process throughout the design flow: from Register-Transfer Level (RTL) all the way to the silicon die. In particular, this book describes: (1) techniques for bug trace minimization that simplify debugging; (2) an RTL error diagnosis method that identifies the root cause of errors directly; (3) a counterexample-guided error-repair framework to automatically fix errors in gate-level and RTL designs; (4) a symmetry-based rewiring technology for fixing electrical errors; (5) an incremental verification system for physical synthesis; and (6) an integrated framework for post-silicon debugging and layout repair. The solutions provided in this book can greatly reduce debugging effort, enhance design quality, and ultimately enable the design and manufacture of more reliable electronic devices.
The Maintenance Management Framework describes and reviews the concept, process and framework of modern maintenance management of complex systems, concentrating specifically on modern modelling tools (deterministic and empirical) for maintenance planning and scheduling. It will be of interest to engineers and professionals involved in maintenance management, maintenance engineering, operations management and quality, as well as graduate students and researchers in this field.
Many new topologies and circuit design techniques have emerged recently to improve the performance of active inductors, but a comprehensive treatment of the theory, topology, characteristics, and design constraints of CMOS active inductors and transformers, together with a detailed examination of their emerging applications in high-speed analog signal processing and data communications over wired and wireless channels, has not been available. This book attempts to provide an in-depth examination and systematic presentation of the operating principles and implementation details of CMOS active inductors and transformers, along with those emerging applications. The content is drawn from recently published research papers and has not previously been available in a single, cohesive book. Equal emphasis is given to the theory of CMOS active inductors and transformers and to their emerging applications. Major subjects covered include: inductive characteristics in high-speed analog signal processing and data communications; spiral inductors and transformers, their modeling and limitations; a historical perspective of device synthesis; the topology, characterization, and implementation of CMOS active inductors and transformers; and the application of CMOS active inductors and transformers in high-speed analog and digital signal processing and data communications.
IDT (Intelligent Decision Technologies) seeks an interchange of research on intelligent systems and intelligent technologies which enhance or improve decision making in industry, government and academia. The focus is interdisciplinary in nature, and includes research on all aspects of intelligent decision technologies, from fundamental development to the applied system. It constitutes a great honor and pleasure for us to publish the works and new research results of scholars from the First KES International Symposium on Intelligent Decision Technologies (KES IDT'09), hosted and organized by University of Hyogo in conjunction with KES International (Himeji, Japan, April, 2009). The symposium was concerned with theory, design, development, implementation, testing and evaluation of intelligent decision systems. Its topics included intelligent agents, fuzzy logic, multi-agent systems, artificial neural networks, genetic algorithms, expert systems, intelligent decision making support systems, information retrieval systems, geographic information systems, and knowledge management systems. These technologies have the potential to support decision making in many areas of management, international business, finance, accounting, marketing, healthcare, military applications, production, networks, traffic management, crisis response, and human interfaces.
Combinatorial optimisation is a ubiquitous discipline whose usefulness spans vast application domains. The intrinsic complexity of most combinatorial optimisation problems makes classical methods unaffordable in many cases. Acquiring practical solutions to these problems requires the use of metaheuristic approaches that trade completeness for pragmatic effectiveness. Such approaches are able to provide optimal or quasi-optimal solutions to a plethora of difficult combinatorial optimisation problems. The application of metaheuristics to combinatorial optimisation is an active field in which new theoretical developments, new algorithmic models, and new application areas are continuously emerging. This volume presents recent advances in the area of metaheuristic combinatorial optimisation, with a special focus on evolutionary computation methods. Moreover, it addresses local search methods and hybrid approaches. In this sense, the book includes cutting-edge theoretical, methodological, algorithmic and applied developments in the field, from respected experts and with a sound perspective.
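The "completeness for pragmatic effectiveness" trade-off mentioned above can be illustrated with the simplest local search method. This is a minimal Python sketch (not taken from the book; the toy bit-flip problem and function names are purely illustrative): greedy hill-climbing stops at the first local optimum, which is exactly the limitation that metaheuristics such as restarts, tabu search, or evolutionary recombination try to overcome.

```python
def hill_climb(init, neighbors, cost, max_iters=1000):
    """Greedy local search: repeatedly move to any strictly better neighbor.

    Incomplete by design -- it may terminate at a local optimum rather
    than the global one, trading guarantees for speed.
    """
    current = init
    for _ in range(max_iters):
        improved = False
        for cand in neighbors(current):
            if cost(cand) < cost(current):
                current, improved = cand, True
                break  # first-improvement strategy
        if not improved:
            break  # local optimum reached
    return current

# Toy combinatorial problem: minimize the number of 1-bits,
# where a "move" flips exactly one bit.
def flip_neighbors(bits):
    for i in range(len(bits)):
        yield bits[:i] + (1 - bits[i],) + bits[i + 1:]

best = hill_climb((1, 0, 1, 1), flip_neighbors, cost=sum)
```

For this convex toy landscape every local optimum is global, so the search reaches the all-zeros string; on realistic combinatorial problems the neighborhood structure is rugged, which is what motivates the metaheuristic layers discussed in the volume.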
Verification is too often approached in an ad hoc fashion. Visually inspecting simulation results is no longer feasible and the directed test-case methodology is reaching its limit. Moore's Law demands a productivity revolution in functional verification methodology. Writing Testbenches Using SystemVerilog offers a clear blueprint of a verification process that aims for first-time success using the SystemVerilog language. From simulators to source management tools, from specification to functional coverage, from 1s and 0s to high-level abstractions, from interfaces to bus-functional models, from transactions to self-checking testbenches, from directed testcases to constrained random generators, from behavioral models to regression suites, this book covers it all. Writing Testbenches Using SystemVerilog presents many of the functional verification features that were added to the Verilog language as part of SystemVerilog. Interfaces, virtual modports, classes, program blocks, clocking blocks and other SystemVerilog features are introduced within a coherent verification methodology and usage model. Writing Testbenches Using SystemVerilog introduces the reader to all elements of a modern, scalable verification methodology. It is an introduction and prelude to the verification methodology detailed in the Verification Methodology Manual for SystemVerilog. It is a SystemVerilog version of the author's bestselling book Writing Testbenches: Functional Verification of HDL Models.
The authors have consolidated their research work in this volume, Soft Computing for Data Mining Applications. The monograph gives an insight into research in the fields of data mining in combination with soft computing methodologies. Today, data continues to grow exponentially, and much of it is implicitly or explicitly imprecise. Database discovery seeks to uncover noteworthy, unrecognized associations between data items in an existing database; the potential for discovery comes from the realization that alternate contexts may reveal additional valuable information. The rate at which data is stored is growing at a phenomenal rate. As a result, traditional ad hoc mixtures of statistical techniques and data management tools are no longer adequate for analyzing this vast collection of data. Domains where large volumes of data are stored in centralized or distributed databases include applications in electronic commerce, bioinformatics, computer security, Web intelligence, intelligent learning database systems, finance, marketing, healthcare, telecommunications, and other fields. Efficient tools and algorithms for knowledge discovery in large data sets have been devised in recent years. These methods exploit the capability of computers to search huge amounts of data in a fast and effective manner. However, the data to be analyzed is imprecise and afflicted with uncertainty; in the case of heterogeneous data sources such as text and video, the data might moreover be ambiguous and partly conflicting. Besides, patterns and relationships of interest are usually approximate. Thus, making the information mining process more robust requires tolerance toward imprecision and uncertainty.
Many real systems are composed of multi-state components with different performance levels and several failure modes. These affect the whole system's performance. Most books on reliability theory cover binary models that allow a system only to function perfectly or fail completely. "The Universal Generating Function in Reliability Analysis and Optimization" is the first book that gives a comprehensive description of the universal generating function technique and its applications in binary and multi-state system reliability analysis. This monograph will be of value to anyone interested in system reliability, performance analysis and optimization in industrial, electrical and nuclear engineering.
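The universal generating function itself is simple to sketch: each multi-state component is represented by a polynomial-like distribution over performance levels, and a composition operator combines components pairwise. This minimal Python sketch (not from the book; the `ugf_compose` name and the toy two-component system are illustrative) shows the idea under the common assumption that parallel capacities add:

```python
from collections import defaultdict

def ugf_compose(u1, u2, op):
    """Compose two universal generating functions.

    A UGF is represented as {performance_level: probability}. Every pair
    of component states is combined with the structure operator `op`
    (e.g. min for series flow-transmission systems, + for parallel
    capacity systems), multiplying the state probabilities.
    """
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two multi-state components in parallel (capacities add):
c1 = {0: 0.1, 5: 0.9}   # fails with prob 0.1, otherwise delivers 5 units
c2 = {0: 0.2, 3: 0.8}   # fails with prob 0.2, otherwise delivers 3 units
system = ugf_compose(c1, c2, lambda a, b: a + b)
```

The resulting dictionary is the system's performance distribution, from which reliability measures such as the probability of meeting a demand level can be read off directly.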
This book carefully details design tools and techniques for realizing low power and energy efficiency in a highly productive design methodology. Design examples illustrate that these techniques can improve energy efficiency by a factor of two to three.
Over the years, rough set theory has earned a well-deserved reputation as a sound methodology for dealing with imperfect knowledge in a simple yet mathematically sound way. This edited volume aims to continue stressing the benefits of applying rough sets in many real-life situations, while still keeping an eye on topological aspects of the theory and strengthening its linkage with other soft computing paradigms. The volume comprises 11 chapters and is organized into three parts. Part 1 deals with theoretical contributions, while Parts 2 and 3 focus on several real-world data mining applications. Chapters authored by pioneers were selected on the basis of fundamental ideas and concepts rather than the thoroughness of the techniques deployed. Academics, scientists and engineers working in the rough set, computational intelligence, soft computing and data mining research areas will find the comprehensive coverage of this book invaluable.
In its updated second edition, this book has been extensively revised on a chapter-by-chapter basis. The book accurately reflects the syntax and semantic changes to the SystemVerilog language standard, making it an essential reference for systems professionals who need the latest version information. In addition, the second edition features a new chapter explaining SystemVerilog "packages," a new appendix that summarizes the synthesis guidelines presented throughout the book, and all of the code examples have been updated to the final syntax and rerun using the latest versions of the Synopsys, Mentor, and Cadence tools.
Evolutionary algorithms (EAs), as well as other bio-inspired heuristics, are widely used to solve numerical optimization problems. However, in their original versions, they are limited to unconstrained search spaces, i.e., they do not include a mechanism to incorporate feasibility information into the fitness function. On the other hand, real-world problems usually have constraints in their models. Therefore, a considerable amount of research has been dedicated to designing and implementing constraint-handling techniques. The use of (exterior) penalty functions is one of the most popular methods of dealing with constrained search spaces when using EAs, though alternative methods have been proposed, such as special encodings and operators, decoders, and the use of multiobjective concepts, among others. An efficient and adequate constraint-handling technique is a key element in the design of competitive evolutionary algorithms for complex optimization problems, and this subject therefore deserves special research effort. After a successful special session on constraint-handling techniques used in evolutionary algorithms within the Congress on Evolutionary Computation (CEC) in 2007, and motivated by the kind invitation made by Dr. Janusz Kacprzyk, I decided to edit a book with the aim of putting together recent studies on constrained numerical optimization using evolutionary algorithms and other bio-inspired approaches. The intended audience for this book comprises graduate students, practitioners and researchers interested in alternative techniques for solving numerical optimization problems in the presence of constraints.
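The exterior penalty approach mentioned above can be sketched in a few lines: infeasible candidates are not rejected but have their fitness worsened in proportion to their constraint violation. This is a minimal Python sketch (not from the book; the `exterior_penalty` name and the toy problem are illustrative) using the common quadratic-violation form:

```python
def exterior_penalty(f, constraints, r):
    """Build a penalized objective for minimization.

    f           : objective function f(x)
    constraints : list of functions g_i(x), feasible when g_i(x) <= 0
    r           : penalty coefficient (typically increased over generations)
    """
    def penalized(x):
        # Quadratic exterior penalty: only violated constraints contribute.
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + r * violation
    return penalized

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
fitness = exterior_penalty(lambda x: x * x, [lambda x: 1.0 - x], r=1000.0)
```

Inside the feasible region the penalty term vanishes and the EA sees the raw objective; outside, the penalty dominates, steering the population back toward feasibility. The sensitivity of this scheme to the choice of `r` is precisely why the alternative techniques surveyed in the book exist.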
Communication between engineers, their managers, suppliers and customers relies on the existence of a common understanding of the meaning of terms. While this is not normally a problem, it has proved to be a significant roadblock in the EDA industry, where terms are created as needed by any number of people, multiple terms are coined for the same thing, or, even worse, the same term is used for many different things. This taxonomy identifies the significant terms used by the industry and provides a structural framework in which those terms can be defined and their relationships to other terms identified. The origins of this work go back to 1995, with a government-sponsored program called RASSP. At the termination of that program, VSIA picked up the work and developed it further, introducing three new taxonomies for additional facets of the system design and development process. Since the role of VSIA has now changed so that it no longer maintains these taxonomies, the baton is being passed on again through a group of interested people and manifested in this key reference work.
Collaborative Product Design and Manufacturing Methodologies and Applications introduces a wide spectrum of collaborative engineering issues in design and manufacturing. It offers state-of-the-art chapters written by international experts from academia and industry, and reflects the most up-to-date R & D work and applications, especially those from the last three to five years. The book will serve as an essential reference for academics, upper-level undergraduate and graduate students and practicing professionals.
Constraint-Based Verification covers an emerging field in functional verification of electronic designs, referred to as constraint-based verification. The topics are developed in the context of a wide range of dynamic and static verification approaches, including simulation, emulation, and formal methods. The goal is to show how constraints, or assertions, can be used to automate the generation of testbenches, resulting in a seamless verification flow. Topics such as verification coverage and the connection with assertion-based verification are also covered. The book targets verification engineers as well as researchers, covering both methodological and technical issues, with particular stress on the latest advances in functional verification. The research community has witnessed a recent growth of interest in constraint-based functional verification, and various techniques have been developed. They are relatively new, but have reached a level of maturity such that they are appearing in commercial tools such as Vera and SystemVerilog.
Cellular Neural Networks (CNNs) constitute a class of nonlinear, recurrent and locally coupled arrays of identical dynamical cells that operate in parallel. Analog chips are being developed for use in applications where sophisticated signal processing at low power consumption is required. Signal processing via CNNs only becomes efficient if the network is implemented in analog hardware. In view of the physical limitations that analog implementations entail, robust operation of a CNN chip with respect to parameter variations has to be ensured. Far from all mathematically possible CNN tasks can be carried out reliably on an analog chip; some of them are inherently too sensitive. This book defines a robustness measure to quantify the degree of robustness and proposes an exact and direct analytical design method for the synthesis of optimally robust network parameters. The method is based on a design centering technique which is generally applicable where linear constraints have to be satisfied in an optimum way. Processing speed is always crucial when discussing signal-processing devices. In the case of the CNN, it is shown that the settling time can be specified in closed analytical expressions, which permits, on the one hand, parameter optimization with respect to speed and, on the other hand, efficient numerical integration of CNNs. The interdependence between robustness and speed is also addressed. Another goal pursued is the unification of the theory of continuous-time and discrete-time systems. By means of a delta-operator approach, it is proven that the same network parameters can be used for both of these classes, even if their nonlinear output functions differ. More complex CNN optimization problems that cannot be solved analytically necessitate resorting to numerical methods. Among these, stochastic optimization techniques such as genetic algorithms prove their usefulness, for example in image classification problems.
Since the inception of the CNN, the problem of finding the network parameters for a desired task has been regarded as a learning or training problem, and computationally expensive methods derived from standard neural networks have been applied. Furthermore, numerous useful parameter sets have been derived by intuition. In this book, a direct and exact analytical design method for the network parameters is presented. The approach yields solutions which are optimal with respect to robustness, an aspect which is crucial for successful implementation of analog CNN hardware but which has often been neglected. `This beautifully rounded work provides many interesting and useful results, for both CNN theorists and circuit designers.' Leon O. Chua
We describe in this book new methods and applications of hybrid intelligent systems using soft computing techniques. Soft Computing (SC) consists of several intelligent computing paradigms, including fuzzy logic, neural networks, and evolutionary algorithms, which can be used to produce powerful hybrid intelligent systems. The book is organized in five main parts, each containing a group of papers around a similar subject. The first part consists of papers with the main theme of intelligent control, which are basically papers that use hybrid systems to solve particular control problems. The second part contains papers with the main theme of pattern recognition, which use soft computing techniques to achieve pattern recognition in different applications. The third part contains papers with the themes of intelligent agents and social systems, which apply the ideas of agents and social behavior to solve real-world problems. The fourth part contains papers that deal with the hardware implementation of intelligent systems for solving particular problems. The fifth part contains papers that deal with modeling, simulation and optimization for real-world applications.
Evolutionary algorithms are successful biologically inspired meta-heuristics. Their success depends on adequate parameter settings. The question arises: how can evolutionary algorithms learn parameters automatically during the optimization? Evolution strategies gave an answer decades ago: self-adaptation. Their self-adaptive mutation control turned out to be exceptionally successful. Nevertheless, self-adaptation has not received the attention it deserves. This book introduces various types of self-adaptive parameters for evolutionary computation. Biased mutation for evolution strategies is useful for constrained search spaces. Self-adaptive inversion mutation accelerates the search on combinatorial TSP-like problems. After the analysis of self-adaptive crossover operators, the book concentrates on premature convergence of self-adaptive mutation control at the constraint boundary. Besides extensive experiments, statistical tests and some theoretical investigations enrich the analysis of the proposed concepts.
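The self-adaptive mutation control credited to evolution strategies above works by letting each individual carry and evolve its own step size alongside the object variables. This is a minimal Python sketch (not from the book; the function name, the log-normal update, and the learning-rate heuristic are the textbook evolution-strategy formulation, shown here for illustration):

```python
import math
import random

def self_adaptive_mutate(x, sigma, tau=None):
    """Evolution-strategy mutation with log-normal step-size self-adaptation.

    The strategy parameter sigma is mutated first, then used to perturb
    the object variables, so good step sizes hitchhike with good solutions.
    """
    n = len(x)
    if tau is None:
        tau = 1.0 / math.sqrt(n)  # common learning-rate heuristic
    # Log-normal update keeps sigma positive and scale-free.
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    new_x = [xi + new_sigma * random.gauss(0.0, 1.0) for xi in x]
    return new_x, new_sigma

random.seed(0)  # deterministic demo
child, child_sigma = self_adaptive_mutate([0.0, 0.0, 0.0], sigma=1.0)
```

Because selection acts on the offspring, step sizes that produce good children are implicitly preferred, with no external parameter schedule; the premature convergence of exactly this mechanism at constraint boundaries is one of the book's central topics.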
The building blocks of today's and future embedded systems are complex intellectual property components, or cores, many of which are programmable processors. Traditionally, these embedded processors have mostly been programmed in assembly languages for efficiency reasons. This implies time-consuming programming, extensive debugging, and low code portability. The requirements of short time-to-market and dependability of embedded systems are obviously much better met by using high-level language (e.g. C) compilers instead of assembly. However, the use of C compilers frequently incurs a code quality overhead as compared to manually written assembly programs. Due to the need for efficient embedded systems, this overhead must be very low in order to make compilers useful in practice. In turn, this requires new compiler techniques that take the specific constraints of embedded system design into account. An example is the specialized architectures of recent DSP and multimedia processors, which are not yet sufficiently exploited by existing compilers.
New Algorithms, Architectures and Applications for Reconfigurable Computing consists of a collection of contributions from the authors of some of the best papers from the Field Programmable Logic conference (FPL 03) and the Design and Test Europe conference (DATE 03). In all, seventy-nine authors, from research teams from all over the world, were invited to present their latest research in the extended format permitted by this special volume. The result is a valuable book that is a unique record of the state of the art in research into field programmable logic and reconfigurable computing. The contributions are organized into twenty-four chapters and are grouped into three main categories: architectures, tools and applications. Within these three broad areas the most strongly represented themes are coarse-grained architectures; dynamically reconfigurable and multi-context architectures; tools for coarse-grained and reconfigurable architectures; and networking, security and encryption applications. Field programmable logic and reconfigurable computing are exciting research disciplines that span the traditional boundaries of electronic engineering and computer science. When the skills of both research communities are combined to address the challenges of a single research discipline, they serve as a catalyst for innovative research. The work reported in the chapters of this book captures that spirit of innovation.
This edited collection of essays from world-leading academic and industrial authors yields insight into all aspects of reverse engineering. Methods of reverse engineering analysis are covered, along with special emphasis on the investigation of surface and internal structures. Frequently-used hardware and software are assessed and advice given on the most suitable choice of system. Also covered is rapid prototyping and its relationship with successful reverse engineering.
For the past decade or so, Computational Intelligence (CI) has been an extremely "hot" topic amongst researchers working in the fields of biomedicine and bioinformatics. There are many successful applications of CI in such areas as computational genomics, prediction of gene expression, protein structure and protein-protein interactions, modeling of evolution, and neuronal systems modeling and analysis. However, there are still many problems in biomedicine and bioinformatics that are in desperate need of advanced and efficient computational methodologies to deal with the tremendous amounts of data so prevalent in those kinds of research pursuits. Unfortunately, scientists in both these fields are very often unaware of the abundance of computational techniques that could be put to use to help them analyze and understand the data underlying their research inquiries. On the other hand, computational intelligence practitioners are often unfamiliar with the particular problems to which their algorithms could be successfully applied. The separation between the two worlds is partially caused by the use of different languages in these two spheres of science, but also by the relatively small number of publications devoted solely to facilitating the exchange of new computational algorithms and methodologies on one hand, and the needs of the realms of biomedicine and bioinformatics on the other. In order to help fill the gap between the scientists on both sides of this spectrum, we have solicited contributions from researchers actively applying computational intelligence techniques to important problems in biomedicine and bioinformatics. The purpose of this book is to provide an overview of powerful state-of-the-art methodologies that are currently utilized for biomedicine- and/or bioinformatics-oriented applications, so that researchers working in those fields can learn of new methods to help them tackle their problems. On the other hand, we also hope that the CI community will find this book useful by discovering a new and intriguing area of applications.
Interconnect has become the dominating factor in determining system performance in nanometer technologies. This book is dedicated to this important subject. The primary purpose of this monograph is to provide insight and intuition into layout analysis and optimization for interconnect in high-speed, high-complexity integrated circuits. The effects of wire size, spacing between wires, wire length, coupling length, load capacitance, rise time of the inputs, place of overlap (near the driver or receiver side), frequency, shields, direction of the signals, and wire width, for both the aggressor and victim wires, on system performance and reliability are thoroughly investigated. Parameters such as driver strength have also been considered, as several recent studies have examined simultaneous device and interconnect sizing. Crosstalk noise, as well as the impact of coupling on aggressor delay, is analyzed. The pulse width of the crosstalk noise, which is of similar importance for circuit performance as the peak amplitude, is also analyzed. Further parameters that can affect signal integrity are considered, and intensive practical simulation results are presented. This book brings together a wealth of information previously scattered throughout the literature, presenting a range of CAD algorithms and techniques for synthesizing and optimizing interconnect. The practical aspects of the algorithms and models are explained in sufficient detail. The book deeply investigates the two most effective parameters in layout optimization, spacing and shield insertion, which can affect both capacitive and inductive noise. Noise models needed for layouts with multi-layer, multi-crosscoupling segments are investigated. Different post-layout optimization techniques are explained with complexity analysis, and benchmark tests are provided.