
Books > Professional & Technical > General

Memory Issues in Embedded Systems-on-Chip - Optimizations and Exploration (Hardcover, 1999 ed.)
Preeti Ranjan Panda, Nikil D. Dutt, Alexandru Nicolau
R2,761 Discovery Miles 27 610 Ships in 18 - 22 working days

Memory Issues in Embedded Systems-on-Chip: Optimizations and Exploration is designed for different groups in the embedded systems-on-chip arena. First, it is designed for researchers and graduate students who wish to understand the research issues involved in memory system optimization and exploration for embedded systems-on-chip. Second, it is intended for designers of embedded systems who are migrating from a traditional microcontroller-centered, board-based design methodology to newer design methodologies using IP blocks for processor-core-based embedded systems-on-chip. Also, since the book illustrates a methodology for optimizing and exploring the memory configuration of embedded systems-on-chip, it is intended for managers and system designers who may be interested in the emerging capabilities of embedded systems-on-chip design methodologies for memory-intensive applications.

Risk and Society: The Interaction of Science, Technology and Public Policy (Hardcover, 1992 ed.)
M. Waterstone
R4,116 Discovery Miles 41 160 Ships in 18 - 22 working days

Life in the last quarter of the twentieth century presents a baffling array of complex issues. The benefits of technology are arrayed against the risks and hazards of those same technological marvels (frequently, though not always, arising as side effects or by-products). This confrontation poses very difficult choices for individuals as well as for those charged with making public policy. Some of the most challenging of these issues result because of the ability of technological innovation and deployment to outpace the capacity of institutions to assess and evaluate implications. In many areas, the rate of technological advance has now far outstripped the capabilities of institutional monitoring and control. While there are many instances in which technological advance occurs without adverse consequences (and in fact, yields tremendous benefits), frequently the advent of a major innovation brings a wide array of unforeseen and (to some) undesirable effects. This problem is exacerbated as the interval between the initial development of a technology and its deployment is shortened, since the opportunity for cautious appraisal is decreased.

Introduction to Analog VLSI Design Automation (Hardcover, 1990 ed.)
Mohammed Ismail, Jose E. Franca
R2,757 Discovery Miles 27 570 Ships in 18 - 22 working days

Very large scale integration (VLSI) technologies are now maturing, with a current emphasis toward submicron structures and sophisticated applications combining digital as well as analog circuits on a single chip. Abundant examples are found in today's advanced systems for telecommunications, robotics, automotive electronics, image processing, intelligent sensors, etc. Exciting new applications are being unveiled in the field of neural computing, where the massive use of analog/digital VLSI technologies will have a significant impact. To match such a fast technological trend towards single-chip analog/digital VLSI systems, researchers worldwide have long realized the vital need to produce advanced computer-aided tools for designing both digital and analog circuits and systems for silicon integration. Architecture and circuit compilation, device sizing and layout generation are but a few familiar tasks in the world of digital integrated circuit design which can be efficiently accomplished by mature computer-aided tools. In contrast, the art of tools for designing and producing analog or even analog/digital integrated circuits is quite primitive and still lacking the industrial penetration and acceptance already achieved by its digital counterparts. In fact, analog design is commonly perceived to be one of the most knowledge-intensive design tasks, and analog circuits are still designed, largely by hand, by experts intimately familiar with the nuances of the target application and integrated circuit fabrication process. The techniques needed to build good analog circuits seem to exist solely as expertise invested in individual designers.

The Bounding Approach to VLSI Circuit Simulation (Hardcover, 1986 ed.)
C.A. Zukowski
R4,136 Discovery Miles 41 360 Ships in 18 - 22 working days

This book proposes a new approach to circuit simulation that is still in its infancy. The reason for publishing this work as a monograph at this time is to quickly distribute these ideas to the research community for further study. The book is based on a doctoral dissertation undertaken at MIT between 1982 and 1985. In 1982 the author joined a research group that was applying bounding techniques to simple VLSI timing analysis models. The conviction that bounding analysis could also be successfully applied to sophisticated digital MOS circuit models led to the research presented here. Acknowledgments: The author would like to acknowledge many helpful discussions and much support from his research group at MIT, including Lance Glasser, John Wyatt, Jr., and Paul Penfield, Jr. Many others have also contributed to this work in some way, including Albert Ruehli, Mark Horowitz, Rich Zippel, Chris Terman, Jacob White, Mark Matson, Bob Armstrong, Steve McCormick, Cyrus Bamji, John Wroclawski, Omar Wing, Gary Dare, Paul Bassett, and Rick LaMaire. The author would like to give special thanks to his wife, Deborra, for her support and many contributions to the presentation of this research. The author would also like to thank his parents for their encouragement, and IBM for its financial support of this project through a graduate fellowship. From the introduction: The VLSI revolution of the 1970s has created a need for new circuit analysis techniques.

Urbanisation, Land Use, Land Degradation and Environment/Nam S&T Centre (Hardcover)
Munir & Mermut Ahmet Ruhi & Cel Ozturk
R2,680 Discovery Miles 26 800 Ships in 10 - 15 working days

Switching Theory for Logic Synthesis (Hardcover, 1999 ed.)
Tsutomu Sasao
R4,212 Discovery Miles 42 120 Ships in 18 - 22 working days

Switching Theory for Logic Synthesis covers the basic topics of switching theory and logic synthesis in fourteen chapters. Chapters 1 through 5 provide the mathematical foundation. Chapters 6 through 8 include an introduction to sequential circuits, optimization of sequential machines and asynchronous sequential circuits. Chapters 9 through 14 are the main feature of the book. These chapters introduce and explain various topics that make up the subject of logic synthesis: multi-valued input, two-valued output functions, logic design for PLDs/FPGAs, EXOR-based design, and complexity theories of logic networks. An appendix providing a history of switching theory is included. The reference list consists of over four hundred entries. Switching Theory for Logic Synthesis is based on the author's lectures at Kyushu Institute of Technology as well as seminars for CAD engineers from various Japanese technology companies. Switching Theory for Logic Synthesis will be of interest to CAD professionals and students at the advanced level. It is also useful as a textbook, as each chapter contains examples, illustrations, and exercises.

Retargetable Compilers for Embedded Core Processors - Methods and Experiences in Industrial Applications (Hardcover, 1997 ed.)
Clifford Liem
R2,748 Discovery Miles 27 480 Ships in 18 - 22 working days

Embedded core processors are becoming a vital part of today's system-on-a-chip in the growing areas of telecommunications, multimedia and consumer electronics. This is mainly in response to a need to track evolving standards with the flexibility of embedded software. Consequently, maintaining high product performance and low product cost requires a careful design of the processor tuned to the application domain. With the increased presence of instruction-set processors, retargetable software compilation techniques are critical, not only for improving engineering productivity, but also for allowing designers to explore the architectural possibilities for the application domain. Retargetable Compilers for Embedded Core Processors, with a Foreword written by Ahmed Jerraya and Pierre Paulin, overviews the techniques of modern retargetable compilers and shows the application of practical techniques to embedded instruction-set processors. The methods are highlighted with examples from industry processors used in products for multimedia, telecommunications, and consumer electronics. An emphasis is given to the methodology and experience gained in applying two different retargetable compiler approaches in industrial settings. The book also discusses many pragmatic areas such as language support, source code abstraction levels, validation strategies, and source-level debugging. In addition, new compiler techniques are described which support address generation for DSP architecture trends. The contribution is an address calculation transformation based on an architectural model. Retargetable Compilers for Embedded Core Processors will be of interest to embedded system designers and programmers, the developers of electronic design automation (EDA) tools for embedded systems, and researchers in hardware/software co-design.

High-Level System Modeling - Specification and Design Methodologies (Hardcover, 1996 ed.)
Ronald Waxman, Jean-Michel Berge, Oz Levia, Jacques Rouillard
R4,119 Discovery Miles 41 190 Ships in 18 - 22 working days

In system design, generation of high-level abstract models that can be closely associated with evolving lower-level models provides designers with the ability to incrementally test an evolving design against a model of a specification. Such high-level models may deal with areas such as performance, reliability, availability, maintainability, and system safety. Abstract models also allow exploration of the hardware versus software design space in an incremental fashion as a fuller, detailed design unfolds, leaving behind the old practice of hardware-software binding too early in the design process. Such models may also allow the inclusion of non-functional aspects of design (e.g. space, power, heat) in a simulatable information model dealing with the system's operation. This book addresses model generation and application specifically in the following domains: specification modeling (linking object/data modeling, behavior modeling, and activity modeling); operational specification modeling (modeling the way the system is supposed to operate, from a user's viewpoint); linking non-functional parameters with specification models; hybrid modeling (linking performance and functional elements); application of high-level modeling to hardware/software approaches; mathematical analysis techniques related to the modeling approaches; reliability modeling; applications of high-level modeling; and reducing high-level modeling to practice. High-Level System Modeling: Specification and Design Methodologies describes the latest research and practice in the modeling of electronic systems and as such is an important update for all researchers, design engineers and technical managers working in design automation and circuit design.

Neural Models and Algorithms for Digital Testing (Hardcover, 1991 ed.)
S. T. Chakradhar, Vishwani Agrawal, M. Bushnell
R2,757 Discovery Miles 27 570 Ships in 18 - 22 working days

Table of contents (excerpt):
References, 82.
9 Quadratic 0-1 Programming, 85: 9.1 Energy Minimization, 86; 9.2 Notation and Terminology, 87; 9.3 Minimization Technique, 88; 9.4 An Example, 92; 9.5 Accelerated Energy Minimization, 94 (9.5.1 Transitive Closure, 94; 9.5.2 Additional Pairwise Relationships, 96; 9.5.3 Path Sensitization, 97); 9.6 Experimental Results, 98; 9.7 Summary, 100; References, 100.
10 Transitive Closure and Testing, 103: 10.1 Background, 104; 10.2 Transitive Closure Definition, 105; 10.3 Implication Graphs, 106; 10.4 A Test Generation Algorithm, 107; 10.5 Identifying Necessary Assignments, 112 (10.5.1 Implicit Implication and Justification, 113; 10.5.2 Transitive Closure Does More Than Implication and Justification, 115; 10.5.3 Implicit Sensitization of Dominators, 116; 10.5.4 Redundancy Identification, 117); 10.6 Summary, 119; References, 119.
11 Polynomial-Time Testability, 123: 11.1 Background, 124 (11.1.1 Fujiwara's Result, 125; 11.1.2 Contribution of the Present Work, 126); 11.2 Notation and Terminology, 127; 11.3 A Polynomial-Time Algorithm, 128 (11.3.1 Primary Output Fault, 129; 11.3.2 Arbitrary Single Fault, 135; 11.3.3 Multiple Faults, 137); 11.4 Summary, 139; References, 139.
12 Special Cases of Hard Problems, 141: 12.1 Problem Statement, 142; 12.2 Logic Simulation, 143; 12.3 Logic Circuit Modeling, 146 (12.3.1 Model for a Boolean Gate, 147; 12.3.2 Circuit Modeling, 148) ...

Integrating Functional and Temporal Domains in Logic Design - The False Path Problem and Its Implications (Hardcover, 1991 ed.)
Patrick C. McGeer, Robert K. Brayton
R2,779 Discovery Miles 27 790 Ships in 18 - 22 working days

This book is an extension of one author's doctoral thesis on the false path problem. The work was begun with the idea of systematizing the various solutions to the false path problem that had been proposed in the literature, with a view to determining the computational expense of each versus the gain in accuracy. However, it became clear that some of the proposed approaches in the literature were wrong in that they underestimated the critical delay of some circuits under reasonable conditions. Further, some other approaches were vague and so of questionable accuracy. The focus of the research therefore shifted to establishing a theory (the viability theory) and algorithms which could be guaranteed correct, and then using this theory to justify (or not) existing approaches. Our quest was successful enough to justify presenting the full details in a book. After it was discovered that some existing approaches were wrong, it became apparent that the root of the difficulties lay in the attempts to balance computational efficiency and accuracy by separating the temporal and logical (or functional) behaviour of combinational circuits. This separation is the fruit of several unstated assumptions: first, that one can ignore the logical relationships of wires in a network when considering timing behaviour, and, second, that one can ignore timing considerations when attempting to discover the values of wires in a circuit.

Effluents from Alternative Demilitarization Technologies (Hardcover, 1998 ed.)
F. W. Holm
R4,132 Discovery Miles 41 320 Ships in 18 - 22 working days

FRANCIS W. HOLM, 30 Agua Sarca Road, Placitas, New Mexico. 1. Overview: The North Atlantic Treaty Organization (NATO) sponsored an Advanced Research Workshop (ARW) in Prague, Czech Republic, on October 13-15, 1997, to collect and study information on effluents from alternative demilitarization technologies and to report on these findings. The effluents, or process residues, identified for assessment at the workshop are generated by systems that have been proposed as alternatives to incineration technology for destruction of munitions, chemical warfare agent, and associated materials and debris. The alternative technologies analyzed are grouped into three categories based on process bulk operating temperature: low (0-200 °C), medium (200-600 °C), and high (600-3,500 °C). Reaction types considered include hydrolysis, biodegradation, electrochemical oxidation, gas-phase high-temperature reduction, steam reforming, gasification, sulfur reactions, solvated electron chemistry, sodium reactions, supercritical water oxidation, wet air oxidation, and plasma torch technology. These categories represent a broad spectrum of processes, some of which have been studied only in the laboratory and some of which are in commercial use for destruction of hazardous and toxic wastes. Some technologies have been developed and used for specific commercial applications; however, in all cases, research, development, test, and evaluation (RDT&E) is necessary to assure that each technology application is effective for destroying chemical warfare materiel. Table 1 contains a list of more than 40 technologies from a recent report for the U.S. Army [1]. Many of the technologies in Table 1 are based on similar principles.

Merlin Raj And The Santa Algorithm - A Computer Science Dog's Tale for Kids (Hardcover)
D G Priya; Illustrated by Shelley Hampe
R509 Discovery Miles 5 090 Ships in 18 - 22 working days

Towards One-Pass Synthesis (Hardcover, 2002 ed.)
Rolf Drechsler, Wolfgang Gunther
R2,752 Discovery Miles 27 520 Ships in 18 - 22 working days

The design process of digital circuits is often carried out in individual steps, like logic synthesis, mapping, and routing. Since originally the complete process was too complex, it has been split up into several, more or less independent, phases. In the last 40 years powerful algorithms have been developed to find optimal solutions for each of these steps. However, the interaction of these different algorithms has not been considered for a long time. This leads to quality loss, e.g. in cases where highly optimized netlists fit badly onto the target architecture. Since the resulting circuits are often far from optimal and insufficient regarding the optimization criteria, like area and delay, several iterations of the complete design process have to be carried out to get high-quality results. This is a very time-consuming and costly process. For this reason, some years ago the idea of one-pass synthesis came up. There were two main approaches to guaranteeing that a design gets it "first time right": combining levels that were split before, e.g. using layout information already during the logic synthesis phase; and restricting the optimization at one level such that it better fits the next one. So far, several approaches in these two directions have been presented and new techniques are under development. In Towards One-Pass Synthesis we describe the new paradigm that is used in one-pass synthesis and present examples of the two techniques above. Theoretical and practical aspects are discussed and minimization algorithms are given. This will help people working with synthesis tools and circuit design in general (in industry and academia) to keep informed about recent developments and new trends in this area.

Binary Decision Diagrams and Applications for VLSI CAD (Hardcover, 1996 ed.)
Shin-ichi Minato
R4,094 Discovery Miles 40 940 Ships in 18 - 22 working days

'Symbolic Boolean manipulation using binary decision diagrams (BDDs) has been successfully applied to a wide variety of tasks, particularly in very large scale integration (VLSI) computer-aided design (CAD). The concept of decision graphs as an abstract representation of Boolean functions dates back to the early work by Lee and Akers. In the last ten years, BDDs have found widespread use as a concrete data structure for symbolic Boolean manipulation. With BDDs, functions can be constructed, manipulated, and compared by simple and efficient graph algorithms. Since Boolean functions can represent not just digital circuit functions, but also such mathematical domains as sets and relations, a wide variety of CAD problems can be solved using BDDs. Binary Decision Diagrams and Applications for VLSI CAD provides valuable information both for those who are new to BDDs and for long-time aficionados.' - from the Foreword by Randal E. Bryant. 'Over the past ten years ... BDDs have attracted the attention of many researchers because of their suitability for representing Boolean functions. They are now widely used in many practical VLSI CAD systems. ... this book can serve as an introduction to BDD techniques and ... it presents several new ideas on BDDs and their applications. ... many computer scientists and engineers will be interested in this book since Boolean function manipulation is a fundamental technique not only in digital system design but also in exploring various problems in computer science.' - from the Preface by Shin-ichi Minato.

Energy Audit for Professionals/Nam S&T Centre (Hardcover)
Suresh Kumar Dhungel
R1,483 Discovery Miles 14 830 Ships in 18 - 22 working days

Application-Driven Architecture Synthesis (Hardcover, 1993 ed.)
Francky Catthoor, Lars-Gunnar Svensson
R4,028 Discovery Miles 40 280 Ships in 18 - 22 working days

Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, which is illustrated by many realistic demonstrations, used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multi-media, radar, sonar. Application-Driven Architecture Synthesis is of interest to both academics and senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.

Logic Minimization Algorithms for VLSI Synthesis (Hardcover, 1984 ed.)
Robert K. Brayton, Gary D. Hachtel, C. McMullen, Alberto L. Sangiovanni-Vincentelli
R4,798 Discovery Miles 47 980 Ships in 18 - 22 working days

The roots of the project which culminates with the writing of this book can be traced to the work on logic synthesis started in 1979 at the IBM Watson Research Center and at the University of California, Berkeley. During the preliminary phases of these projects, the importance of logic minimization for the synthesis of area- and performance-effective circuits clearly emerged. In 1980, Richard Newton stirred our interest by pointing out new heuristic algorithms for two-level logic minimization and the potential for improving upon existing approaches. In the summer of 1981, the authors organized and participated in a seminar on logic manipulation at IBM Research. One of the goals of the seminar was to study the literature on logic minimization and to look at heuristic algorithms from a fundamental and comparative point of view. The fruits of this investigation were surprisingly abundant: it was apparent from an initial implementation of recursive logic minimization (ESPRESSO-I) that, if we merged our new results into a two-level minimization program, an important step forward in automatic logic synthesis could result. ESPRESSO-II was born and an APL implementation was created in the summer of 1982. The results of preliminary tests on a fairly large set of industrial examples were good enough to justify the publication of our algorithms. It is hoped that the strength and speed of our minimizer warrant its Italian name, which denotes both express delivery and a specially-brewed black coffee.

Hierarchical Modeling for VLSI Circuit Testing (Hardcover, 1990 ed.)
Debashis Bhattacharya, John P. Hayes
R2,741 Discovery Miles 27 410 Ships in 18 - 22 working days

Test generation is one of the most difficult tasks facing the designer of complex VLSI-based digital systems. Much of this difficulty is attributable to the almost universal use in testing of low, gate-level circuit and fault models that predate integrated circuit technology. It has long been recognized that the testing problem can be alleviated by the use of higher-level methods in which multigate modules or cells are the primitive components in test generation; however, the development of such methods has proceeded very slowly. To be acceptable, high-level approaches should be applicable to most types of digital circuits, and should provide fault coverage comparable to that of traditional, low-level methods. The fault coverage problem has, perhaps, been the most intractable, due to continued reliance in the testing industry on the single stuck-line (SSL) fault model, which is tightly bound to the gate level of abstraction. This monograph presents a novel approach to solving the foregoing problem. It is based on the systematic use of multibit vectors rather than single bits to represent logic signals, including fault signals. A circuit is viewed as a collection of high-level components such as adders, multiplexers, and registers, interconnected by n-bit buses. To match this high-level circuit model, we introduce a high-level bus fault that, in effect, replaces a large number of SSL faults and allows them to be tested in parallel. However, by reducing the bus size from n to one, we can obtain the traditional gate-level circuit and fault models.

Europe, America, and Technology: Philosophical Perspectives (Hardcover, 1991 ed.)
P. T. Durbin
R4,156 Discovery Miles 41 560 Ships in 18 - 22 working days

As Europe moves toward 1992 and full economic unity, and as Eastern Europe tries to find its way in the new economic order, the United States hesitates. Will the new European economic order be good for the U.S. or not? Such a question is exacerbated by world-wide changes in the technological order, most evident in Japan's new techno-economic power. As might be expected, philosophers have been slow to come to grips with such issues, and lack of interest is compounded by different philosophical styles in different parts of the world. What this volume addresses is more a matter of conflicting styles than a substantive confrontation with the real-world issues. But there is some attempt to be concrete. The symposium on Ivan Illich - with contributions from philosophers and social critics at the Pennsylvania State University, where Illich has taught for several years - may suggest the old cliche of Old World vs. New World. Illich's fulminations against technology are often dismissed by Americans as old-world-style prophecy, while Illich seems largely unknown in his native Europe. But Albert Borgmann, born in Germany though now settled in the U.S., shows that this old dichotomy is difficult to maintain in our technological world. Borgmann's focus is on urgent technological problems that have become almost painfully evident in both Europe and America.

Testing and Reliable Design of CMOS Circuits (Hardcover, 1990 ed.)
Niraj K. Jha, Sandip Kundu
R4,146 Discovery Miles 41 460 Ships in 18 - 22 working days

In the last few years CMOS technology has become increasingly dominant for realizing Very Large Scale Integrated (VLSI) circuits. The popularity of this technology is due to its high density and low power requirement. The ability to realize very complex circuits on a single chip has brought about a revolution in the world of electronics and computers. However, the rapid advancements in this area pose many new problems in the area of testing. Testing has become a very time-consuming process. In order to ease the burden of testing, many schemes for designing the circuit for improved testability have been presented. These design for testability techniques have begun to catch the attention of chip manufacturers. The trend is towards placing increased emphasis on these techniques. Another byproduct of the increase in the complexity of chips is their higher susceptibility to faults. In order to take care of this problem, we need to build fault-tolerant systems. The area of fault-tolerant computing has steadily gained in importance. Today many universities offer courses in the areas of digital system testing and fault-tolerant computing. Due to the importance of CMOS technology, a significant portion of these courses may be devoted to CMOS testing. This book has been written as a reference text for such courses offered at the senior or graduate level. Familiarity with logic design and switching theory is assumed. The book should also prove to be useful to professionals working in the semiconductor industry.

Embedded System Applications (Hardcover, 1997 ed.)
Jean-Claude Baron, J. C. Geffroy, G. Motet
R5,326 Discovery Miles 53 260 Ships in 18 - 22 working days

Embedded systems encompass a variety of hardware and software components which perform specific functions in host systems, for example, satellites, washing machines, hand-held telephones and automobiles. Embedded systems have become increasingly digital with a non-digital periphery (analog power) and therefore, both hardware and software codesign are relevant. The vast majority of computers manufactured are used in such systems. They are called 'embedded' to distinguish them from standard mainframes, workstations, and PCs. Although the design of embedded systems has been used in industrial practice for decades, the systematic design of such systems has only recently gained increased attention. Advances in microelectronics have made possible applications that would have been impossible without an embedded system design. Embedded System Applications describes the latest techniques for embedded system design in a variety of applications. This also includes some of the latest software tools for embedded system design. Applications of embedded system design in avionics, satellites, radio astronomy, space and control systems are illustrated in separate chapters. Finally, the book contains chapters related to industrial best-practice in embedded system design. Embedded System Applications will be of interest to researchers and designers working in the design of embedded systems for industrial applications.

Architecture Exploration for Embedded Processors with LISA (Hardcover, 2003 ed.)
Andreas Hoffmann, Heinrich Meyr, Rainer Leupers
R4,137 Discovery Miles 41 370 Ships in 18 - 22 working days

Today more than 90% of all programmable processors are employed in embedded systems. This number is actually not surprising, considering that in a typical home you might find one or two PCs equipped with high-performance standard processors, and probably dozens of embedded systems, including electronic entertainment, household, and telecom devices, each of them equipped with one or more embedded processors. The question arises why programmable processors are so popular in embedded system design. The answer lies in the fact that they help to narrow the gap between chip capacity and designer productivity. Embedded processor cores are nothing but one step further towards improved design reuse, just along the lines of standard cells in logic synthesis and macrocells in RTL synthesis in earlier times of IC design. Additionally, programmable processors make it possible to migrate functionality from hardware to software, resulting in an even better reuse factor as well as greatly increased flexibility.

The LISA processor design platform (LPDP) presented in Architecture Exploration for Embedded Processors with LISA addresses recent design challenges and results in highly satisfactory solutions. The LPDP covers all major high-level phases of embedded processor design and is capable of automatically generating almost all required software development tools from processor models in the LISA language. It supports a profiling-based, stepwise refinement of processor models down to cycle-accurate and even RTL synthesis models. Moreover, it elegantly avoids model inconsistencies otherwise omnipresent in traditional design flows.

The next step in design reuse is already in sight: SoC platforms, i.e., partially pre-designed multi-processor templates that can be quickly tuned towards given applications thereby guaranteeing a high degree of hardware/software reuse in system-level design. Consequently, the LPDP approach goes even beyond processor architecture design. The LPDP solution explicitly addresses SoC integration issues by offering comfortable APIs for external simulation environments as well as clever solutions for the problem of both efficient and user-friendly heterogeneous multiprocessor debugging.

Synthesis and Control of Discrete Event Systems (Hardcover, 2002 ed.)
Benoit Caillaud, Philippe Darondeau, Luciano Lavagno, Xiaolan Xie
R2,777 Discovery Miles 27 770 Ships in 18 - 22 working days

This book aims at providing a view of the current trends in the development of research on Synthesis and Control of Discrete Event Systems. Papers collected in this volume are based on a selection of talks given in June and July 2001 at two independent meetings: the Workshop on Synthesis of Concurrent Systems, held in Newcastle upon Tyne as a satellite event of ICATPN/ICACSD and organized by Ph. Darondeau and L. Lavagno, and the Symposium on the Supervisory Control of Discrete Event Systems (SCODES), held in Paris as a satellite event of CAV and organized by B. Caillaud and X. Xie. Synthesis is a generic term that covers all procedures aiming to construct, from specifications given as input, objects matching these specifications. Theories and applications of synthesis have long been studied and developed in connection with logics, programming, automata, discrete event systems, and hardware circuits. Logics and programming are outside the scope of this book, whose focus is on Discrete Event Systems and Supervisory Control. The stress today in this field is on a better applicability of theories and algorithms to practical systems design. Coping with decentralization or distribution and caring for an efficient realization of the synthesized systems or controllers are of the utmost importance in areas as diverse as the supervision of embedded or manufacturing systems, or the implementation of protocols in software or in hardware.

Field-Programmable Gate Arrays (Hardcover, 1992 ed.)
Stephen D. Brown, Robert J. Francis, Jonathan Rose, Zvonko G. Vranesic
R5,260 Discovery Miles 52 600 Ships in 18 - 22 working days

Field-Programmable Gate Arrays (FPGAs) have emerged as an attractive means of implementing logic circuits, providing instant manufacturing turnaround and negligible prototype costs. They hold the promise of replacing much of the VLSI market now held by mask-programmed gate arrays. FPGAs offer an affordable solution for customized VLSI, over a wide variety of applications, and have also opened up new possibilities in designing reconfigurable digital systems. Field-Programmable Gate Arrays discusses the most important aspects of FPGAs in a textbook manner. It provides the reader with a focused view of the key issues, using a consistent notation and style of presentation. It provides detailed descriptions of commercially available FPGAs and an in-depth treatment of the FPGA architecture and CAD issues that are the subjects of current research. The material presented is of interest to a variety of readers, including those who are not familiar with FPGA technology, but wish to be introduced to it, as well as those who already have an understanding of FPGAs, but who are interested in learning about the research directions that are of current interest.

New Technology Based Firms in the New Millennium (Hardcover)
Ray Oakey, W. During, S. Kauser
R3,933 Discovery Miles 39 330 Ships in 10 - 15 working days

This volume is part of a growing body of work that maps the evolution of high technology small firm research over almost a complete decade since 1993. Begun during a period of relative neglect of high technology small firms (HTSFs) during the early 1990s, the book series has witnessed, and perhaps played some part in creating, a resurgence of interest in this type and scale of enterprise in the United Kingdom and mainland Europe by the turn of the century. Throughout this period, specific interest within the high technology small firm study area has ebbed and flowed, with some rather obviously important issues (e.g. policy and finance) often to the fore, while new and resurrected areas of concern have also contributed to the research agenda. Perhaps the best example of resurrection has been the rebirth of interest in the subject of clustering (or agglomeration) as it applies to HTSFs, notably led by Michael Porter. This interest has extended, and put a new slant upon, work consistently well represented in these volumes on networking. This trend is evidenced by the presence of four papers in the concluding Part IV of this volume on "Clusters and Networks". Earlier themes comprise groups of papers on "Science Parks and University Spin offs" (Part II), and "Markets, Strategy and Globalization" (Part III). Both individually and in aggregate, this series of books on HTSF development and growth issues represents a "one stop shop" for all those seeking to gain a broad understanding of the evolution of HTSF research since 1993 by providing a record of the manner in which this research agenda has evolved over these years.
