Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, which is illustrated by the many realistic demonstrations used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multi-media, radar and sonar. Application-Driven Architecture Synthesis is of interest to academics as well as senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.
The roots of the project which culminates with the writing of this book can be traced to the work on logic synthesis started in 1979 at the IBM Watson Research Center and at the University of California, Berkeley. During the preliminary phases of these projects, the importance of logic minimization for the synthesis of area- and performance-effective circuits clearly emerged. In 1980, Richard Newton stirred our interest by pointing out new heuristic algorithms for two-level logic minimization and the potential for improving upon existing approaches. In the summer of 1981, the authors organized and participated in a seminar on logic manipulation at IBM Research. One of the goals of the seminar was to study the literature on logic minimization and to look at heuristic algorithms from a fundamental and comparative point of view. The fruits of this investigation were surprisingly abundant: it was apparent from an initial implementation of recursive logic minimization (ESPRESSO-I) that, if we merged our new results into a two-level minimization program, an important step forward in automatic logic synthesis could result. ESPRESSO-II was born, and an APL implementation was created in the summer of 1982. The results of preliminary tests on a fairly large set of industrial examples were good enough to justify the publication of our algorithms. It is hoped that the strength and speed of our minimizer warrant its Italian name, which denotes both express delivery and a specially-brewed black coffee.
Test generation is one of the most difficult tasks facing the designer of complex VLSI-based digital systems. Much of this difficulty is attributable to the almost universal use in testing of low, gate-level circuit and fault models that predate integrated circuit technology. It has long been recognized that the testing problem can be alleviated by the use of higher-level methods in which multigate modules or cells are the primitive components in test generation; however, the development of such methods has proceeded very slowly. To be acceptable, high-level approaches should be applicable to most types of digital circuits, and should provide fault coverage comparable to that of traditional, low-level methods. The fault coverage problem has, perhaps, been the most intractable, due to continued reliance in the testing industry on the single stuck-line (SSL) fault model, which is tightly bound to the gate level of abstraction. This monograph presents a novel approach to solving the foregoing problem. It is based on the systematic use of multibit vectors rather than single bits to represent logic signals, including fault signals. A circuit is viewed as a collection of high-level components such as adders, multiplexers, and registers, interconnected by n-bit buses. To match this high-level circuit model, we introduce a high-level bus fault that, in effect, replaces a large number of SSL faults and allows them to be tested in parallel. However, by reducing the bus size from n to one, we can obtain the traditional gate-level circuit and fault models.
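The bit-parallel idea behind the bus fault can be sketched in a few lines. The following Python fragment is only an illustration of the general technique, not the monograph's actual model; the names WIDTH, bus_and and detect_stuck_at_0 are invented here. An n-bit bus is packed into a machine word so that one evaluation of a test pattern checks a stuck-at-0 fault on every output line of the bus at once, and setting the width to one recovers the familiar single-bit, gate-level view.

```python
# Hedged sketch: n-bit buses as Python integers, so one evaluation exercises
# n single-stuck-line (SSL) faults in parallel. All names are illustrative.
WIDTH = 8                        # bus width n; WIDTH = 1 degenerates to the
MASK = (1 << WIDTH) - 1          # classic single-bit, gate-level model

def bus_and(a: int, b: int) -> int:
    """Bitwise AND models n parallel AND gates driven by two n-bit buses."""
    return a & b & MASK

def detect_stuck_at_0(a: int, b: int) -> int:
    """Return the mask of bus lines on which an output stuck-at-0 fault is
    detected by the test pattern (a, b): the lines where the fault-free and
    faulty responses differ."""
    good = bus_and(a, b)         # fault-free response
    faulty = 0                   # bus fault: every output line forced to 0
    return (good ^ faulty) & MASK

# One pattern pair covers all WIDTH output lines at once:
print(bin(detect_stuck_at_0(0b11111111, 0b10101010)))   # -> 0b10101010
```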
As Europe moves toward 1992 and full economic unity, and as Eastern Europe tries to find its way in the new economic order, the United States hesitates. Will the new European economic order be good for the U.S. or not? Such a question is exacerbated by world-wide changes in the technological order, most evident in Japan's new techno-economic power. As might be expected, philosophers have been slow to come to grips with such issues, and lack of interest is compounded by different philosophical styles in different parts of the world. What this volume addresses is more a matter of conflicting styles than a substantive confrontation with the real-world issues. But there is some attempt to be concrete. The symposium on Ivan Illich - with contributions from philosophers and social critics at the Pennsylvania State University, where Illich has taught for several years - may suggest the old cliche of Old World vs. New World. Illich's fulminations against technology are often dismissed by Americans as old-world-style prophecy, while Illich seems largely unknown in his native Europe. But Albert Borgmann, born in Germany though now settled in the U.S., shows that this old dichotomy is difficult to maintain in our technological world. Borgmann's focus is on urgent technological problems that have become almost painfully evident in both Europe and America.
In the last few years CMOS technology has become increasingly dominant for realizing Very Large Scale Integrated (VLSI) circuits. The popularity of this technology is due to its high density and low power requirement. The ability to realize very complex circuits on a single chip has brought about a revolution in the world of electronics and computers. However, the rapid advancements in this area pose many new problems in the area of testing. Testing has become a very time-consuming process. In order to ease the burden of testing, many schemes for designing the circuit for improved testability have been presented. These design-for-testability techniques have begun to catch the attention of chip manufacturers. The trend is towards placing increased emphasis on these techniques. Another byproduct of the increase in the complexity of chips is their higher susceptibility to faults. In order to take care of this problem, we need to build fault-tolerant systems. The area of fault-tolerant computing has steadily gained in importance. Today many universities offer courses in the areas of digital system testing and fault-tolerant computing. Due to the importance of CMOS technology, a significant portion of these courses may be devoted to CMOS testing. This book has been written as a reference text for such courses offered at the senior or graduate level. Familiarity with logic design and switching theory is assumed. The book should also prove to be useful to professionals working in the semiconductor industry.
Embedded systems encompass a variety of hardware and software components which perform specific functions in host systems, for example, satellites, washing machines, hand-held telephones and automobiles. Embedded systems have become increasingly digital with a non-digital periphery (analog power), and therefore hardware/software codesign is relevant. The vast majority of computers manufactured are used in such systems. They are called 'embedded' to distinguish them from standard mainframes, workstations, and PCs. Although the design of embedded systems has been used in industrial practice for decades, the systematic design of such systems has only recently gained increased attention. Advances in microelectronics have made possible applications that would have been impossible without an embedded system design. Embedded System Applications describes the latest techniques for embedded system design in a variety of applications. This also includes some of the latest software tools for embedded system design. Applications of embedded system design in avionics, satellites, radio astronomy, space and control systems are illustrated in separate chapters. Finally, the book contains chapters related to industrial best practice in embedded system design. Embedded System Applications will be of interest to researchers and designers working in the design of embedded systems for industrial applications.
Today more than 90% of all programmable processors are employed in embedded systems. This number is actually not surprising, considering that in a typical home you might find one or two PCs equipped with high-performance standard processors, and probably dozens of embedded systems, including electronic entertainment, household, and telecom devices, each of them equipped with one or more embedded processors. The question arises why programmable processors are so popular in embedded system design. The answer lies in the fact that they help to narrow the gap between chip capacity and designer productivity. Embedded processor cores are nothing but one step further towards improved design reuse, just along the lines of standard cells in logic synthesis and macrocells in RTL synthesis in earlier times of IC design. Additionally, programmable processors make it possible to migrate functionality from hardware to software, resulting in an even better reuse factor as well as greatly increased flexibility. The LISA processor design platform (LPDP) presented in Architecture Exploration for Embedded Processors with LISA addresses recent design challenges and results in highly satisfactory solutions. The LPDP covers all major high-level phases of embedded processor design and is capable of automatically generating almost all required software development tools from processor models in the LISA language. It supports a profiling-based, stepwise refinement of processor models down to cycle-accurate and even RTL synthesis models. Moreover, it elegantly avoids model inconsistencies otherwise omnipresent in traditional design flows. The next step in design reuse is already in sight: SoC platforms, i.e., partially pre-designed multi-processor templates that can be quickly tuned towards given applications, thereby guaranteeing a high degree of hardware/software reuse in system-level design. Consequently, the LPDP approach goes even beyond processor architecture design. The LPDP solution explicitly addresses SoC integration issues by offering comfortable APIs for external simulation environments as well as clever solutions for the problem of both efficient and user-friendly heterogeneous multiprocessor debugging.
This book aims at providing a view of the current trends in the development of research on Synthesis and Control of Discrete Event Systems. Papers collected in this volume are based on a selection of talks given in June and July 2001 at two independent meetings: the Workshop on Synthesis of Concurrent Systems, held in Newcastle upon Tyne as a satellite event of ICATPN/ICACSD and organized by Ph. Darondeau and L. Lavagno, and the Symposium on the Supervisory Control of Discrete Event Systems (SCODES), held in Paris as a satellite event of CAV and organized by B. Caillaud and X. Xie. Synthesis is a generic term that covers all procedures aiming to construct, from specifications given as input, objects matching these specifications. Theories and applications of synthesis have long been studied and developed in connection with logics, programming, automata, discrete event systems, and hardware circuits. Logics and programming are outside the scope of this book, whose focus is on Discrete Event Systems and Supervisory Control. The stress today in this field is on better applicability of theories and algorithms to practical systems design. Coping with decentralization or distribution and caring for an efficient realization of the synthesized systems or controllers are of the utmost importance in areas as diverse as the supervision of embedded or manufacturing systems, or the implementation of protocols in software or in hardware.
Field-Programmable Gate Arrays (FPGAs) have emerged as an attractive means of implementing logic circuits, providing instant manufacturing turnaround and negligible prototype costs. They hold the promise of replacing much of the VLSI market now held by mask-programmed gate arrays. FPGAs offer an affordable solution for customized VLSI, over a wide variety of applications, and have also opened up new possibilities in designing reconfigurable digital systems. Field-Programmable Gate Arrays discusses the most important aspects of FPGAs in a textbook manner. It provides the reader with a focused view of the key issues, using a consistent notation and style of presentation. It provides detailed descriptions of commercially available FPGAs and an in-depth treatment of the FPGA architecture and CAD issues that are the subjects of current research. The material presented is of interest to a variety of readers, including those who are not familiar with FPGA technology, but wish to be introduced to it, as well as those who already have an understanding of FPGAs, but who are interested in learning about the research directions that are of current interest.
This volume is part of a growing body of work that maps the evolution of high technology small firm research over almost a complete decade since 1993. Begun during a period of relative neglect of high technology small firms (HTSFs) during the early 1990s, the book series has witnessed, and perhaps played some part in creating, a resurgence of interest in this type and scale of enterprise in the United Kingdom and mainland Europe by the turn of the century. Throughout this period, specific interest within the high technology small firm study area has ebbed and flowed, with some rather obviously important issues (e.g. policy and finance) often to the fore, while new and resurrected areas of concern have also contributed to the research agenda. Perhaps the best example of resurrection has been the rebirth of interest in the subject of clustering (or agglomeration) as it applies to HTSFs, notably led by Michael Porter. This interest has extended, and put a new slant upon, work consistently well represented in these volumes on networking. This trend is evidenced by the presence of four papers in the concluding Part IV of this volume on "Clusters and Networks". Earlier themes comprise groups of papers on "Science Parks and University Spin offs" (Part II), and "Markets, Strategy and Globalization" (Part III). Both individually and in aggregate, this series of books on HTSF development and growth issues represents a "one stop shop" for all those seeking to gain a broad understanding of the evolution of HTSF research since 1993 by providing a record of the manner in which this research agenda has evolved over these years.
The corps of philosophers who make up the Society for Philosophy & Technology has now been collaborating, in one fashion or another, for almost fifteen years. In addition, the number of philosophers, world-wide, who have begun to focus their analytical skills on technology and related social problems grows every year. (It would certainly swell the ranks if all of them joined the Society.) It seems more than appropriate, in this context, to publish a miscellaneous volume that emphasizes the extraordinary range and diversity of contemporary contributions to the philosophical understanding of the exceedingly complex phenomenon that is modern technology. My thanks, once again, to the anonymous referees who do so much to maintain standards for the series. And thanks also to the secretaries - Mary Imperatore and Dorothy Milsom - in the Philosophy Department at the University of Delaware; their typing and retyping of the MSS, and especially notes and references, also contributes to keeping our standards high. PAUL T. DURBIN
A reactive system is one that is in continual interaction with its environment and executes at a pace determined by that environment. Examples of reactive systems are network protocols, air-traffic control systems, industrial-process control systems, etc. Reactive systems are ubiquitous and represent an important class of systems. Due to their complex nature, such systems are extremely difficult to specify and implement. Many reactive systems are employed in highly critical applications, making it crucial that one considers issues such as reliability and safety while designing such systems. The design of reactive systems is considered to be problematic, and poses one of the greatest challenges in the field of system design and development. In this paper, we discuss specification-modeling methodologies for reactive systems. Specification modeling is an important stage in reactive system design where the designer specifies the desired properties of the reactive system in the form of a specification model. This specification model acts as the guidance and source for the implementation. To develop the specification model of complex systems in an organized manner, designers resort to specification modeling methodologies. In the context of reactive systems, we can call such methodologies reactive-system specification modeling methodologies.
One of the main applications of VHDL is the synthesis of electronic circuits. Circuit Synthesis with VHDL is an introduction to the use of VHDL logic (RTL) synthesis tools in circuit design. The modeling styles proposed are independent of specific market tools and focus on constructs widely recognized as synthesizable by synthesis tools. A statement of the prerequisites for synthesis is followed by a short introduction to the VHDL concepts used in synthesis. Circuit Synthesis with VHDL presents two possible approaches to synthesis: the first starts with VHDL features and derives hardware counterparts; the second starts from a given hardware component and derives several description styles. The book also describes how to introduce the synthesis design cycle into existing design methodologies and the standard synthesis environment. Circuit Synthesis with VHDL concludes with a case study providing a realistic example of the design flow from behavioral description down to the synthesized level. Circuit Synthesis with VHDL is essential reading for all students, researchers, design engineers and managers working with VHDL in a synthesis environment.
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
The highly sophisticated techniques of modern engineering are normally conceived of in practical terms. Corresponding to the instrumental function of technology, they are designed to direct the forces of nature according to human purposes. Yet, as soon as the realm of mere skills is exceeded, the intended useful results can only be achieved through planned and preconceived action processes involving the deliberately considered application of well designed tools and devices. This is to say that in all complex cases theoretical reasoning becomes an indispensable means to accomplish the pragmatic technological aims. Hence abstracting from the actual concrete function of technology opens the way to concentrate attention on the general conceptual framework involved. If this approach is adopted, the relevant knowledge and the procedures applied clearly exhibit a logic of their own. This point of view leads to a methodological and even an epistemological analysis of the theoretical structure and the specific methods of procedure characteristic of modern technology. Investigations of this kind, which can be described as belonging to an analytical philosophy of technology, form the topic of this anthology. The type of research in question here is closely akin to that of the philosophy of science. But it is an astonishing fact that the commonly accepted and carefully investigated philosophy of science has not yet found its counterpart in an established philosophy of technology.
For many years, the dominant fault model in automatic test pattern generation (ATPG) for digital integrated circuits has been the stuck-at fault model. The static nature of stuck-at fault testing when compared to the extremely dynamic nature of integrated circuit (IC) technology has caused many to question whether or not stuck-at fault based testing is still viable. Attempts at answering this question have not been wholly satisfying, due to a lack of true quantification, statistical significance, and/or high computational expense. In this monograph we introduce a methodology to address the question in a manner which circumvents the drawbacks of previous approaches. The method is based on symbolic Boolean functional analyses using Ordered Binary Decision Diagrams (OBDDs). OBDDs have been conjectured to be an attractive representation form for Boolean functions, although cases exist for which their complexity is guaranteed to grow exponentially with input cardinality. Classes of Boolean functions which exploit the efficiencies inherent in OBDDs to a very great extent are examined in Chapter 7. Exact equations giving their OBDD sizes are derived, whereas until very recently only size bounds have been available. These size equations suggest that straightforward applications of OBDDs to design and test related problems may not prove as fruitful as was once thought.
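To make the OBDD machinery concrete, here is a minimal reduced ordered BDD sketch in Python. It is an illustration of standard ROBDD construction under the usual reduction rules, not code from the monograph, and the class and function names are invented. The diagram is built from an explicit truth table by Shannon cofactoring, with isomorphic nodes shared through a unique table; the parity example at the end is a standard case whose OBDD stays small (2n - 1 internal nodes for n inputs), in contrast to the exponential cases the paragraph alludes to.

```python
# Hedged sketch of a reduced ordered BDD (ROBDD); names are illustrative.
from itertools import product

class ROBDD:
    def __init__(self):
        self.unique = {}      # (var, lo, hi) -> node id; terminals are 0 and 1
        self.next_id = 2

    def mk(self, var, lo, hi):
        if lo == hi:                   # redundant test: collapse the node
            return lo
        key = (var, lo, hi)
        if key not in self.unique:     # structural sharing via the unique table
            self.unique[key] = self.next_id
            self.next_id += 1
        return self.unique[key]

    def build(self, truth, var=0):
        """truth: tuple of 2**k booleans over the k remaining variables."""
        if len(truth) == 1:
            return int(truth[0])       # terminal 0 or 1
        half = len(truth) // 2
        lo = self.build(truth[:half], var + 1)   # cofactor with variable = 0
        hi = self.build(truth[half:], var + 1)   # cofactor with variable = 1
        return self.mk(var, lo, hi)

# Odd parity of n inputs: its reduced OBDD has only 2*n - 1 internal nodes,
# unlike functions whose OBDDs grow exponentially with input cardinality.
n = 8
parity = tuple(sum(bits) % 2 == 1 for bits in product([0, 1], repeat=n))
bdd = ROBDD()
root = bdd.build(parity)
print(len(bdd.unique), "internal nodes for", n, "inputs")   # prints 15
```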
The goal of the research out of which this monograph grew was to make annealing as much as possible a general-purpose optimization routine. At first glance this may seem a straightforward task, for the formulation of its concept suggests applicability to any combinatorial optimization problem. All that is needed to run annealing on such a problem is a unique representation for each configuration, a procedure for measuring its quality, and a neighbor relation. Much more is needed, however, for obtaining acceptable results consistently in a reasonably short time. It is even doubtful whether the problem can be formulated such that annealing becomes an adequate approach for all instances of an optimization problem. Questions such as what is the best formulation for a given instance, and how the process should be controlled, have to be answered. Although much progress has been made in the years after the introduction of the concept into the field of combinatorial optimization in 1981, some important questions still do not have a definitive answer. In this book the reader will find the foundations of annealing in a self-contained and consistent presentation. Although the physical analogue from which the concept emanated is mentioned in the first chapter, all theory is developed within the framework of Markov chains. To achieve a high degree of instance independence, adaptive strategies are introduced.
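As a small illustration of the three ingredients just named (a representation of each configuration, a procedure for measuring its quality, and a neighbor relation), here is a generic annealing loop in Python. It is a hedged sketch with a plain geometric cooling schedule and invented names (anneal, neighbor, t0, alpha), not the adaptive, largely instance-independent strategies the book develops.

```python
# Hedged sketch of a basic simulated-annealing loop; all names illustrative.
import math
import random

def anneal(initial, cost, neighbor, t0=10.0, alpha=0.95,
           steps_per_t=100, t_min=1e-3):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            candidate = neighbor(current)
            delta = cost(candidate) - current_cost
            # Accept improvements always, deteriorations with Boltzmann prob.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        t *= alpha                      # geometric cooling schedule
    return best, best_cost

# Toy instance: minimize a bumpy one-dimensional function over the integers.
f = lambda x: (x - 17) ** 2 + 10 * math.sin(x)
step = lambda x: x + random.choice([-1, 1])
print(anneal(0, f, step))
```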
You may like...
Solving PDEs in Python - The FEniCS… (Hans Petter Langtangen, Anders Logg), Hardcover, R1,386
Human and Nature Minding Automation - An… (Spyros G. Tzafestas), Hardcover, R4,413
Path Planning for Vehicles Operating in… (Viacheslav Pshikhopov), Paperback
Advanced Human-Robot Collaboration in… (Lihui Wang, Xi Vincent Wang, …), Hardcover, R5,169
Flexible Manufacturing Systems: Recent… (A. Raouf, M. Ben-Daya), Hardcover, R4,694
Advances in Robot Kinematics 2020 (Jadran Lenarcic, Bruno Siciliano), Hardcover, R5,631
Object-Oriented Analysis and Design… (K Venugopal Reddy, Sampath Korra), Hardcover