Wafer-scale integration has long been the dream of system designers. Instead of chopping a wafer into a few hundred or a few thousand chips, one would just connect the circuits on the entire wafer. What an enormous capability wafer-scale integration would offer: all those millions of circuits connected by high-speed on-chip wires. Unfortunately, the best known optical systems can provide suitably fine resolution only over an area much smaller than a whole wafer. There is no known way to pattern a whole wafer with transistors and wires small enough for modern circuits. Statistical defects present a firmer barrier to wafer-scale integration. Flaws appear regularly in integrated circuits; the larger the circuit area, the more probable it is that a flaw occurs. If such flaws were the result only of dust, one might reduce their numbers, but flaws are also the inevitable result of small scale. Each feature on a modern integrated circuit is carved out by only a small number of photons in the lithographic process. Each transistor gets its electrical properties from only a small number of impurity atoms in its tiny area. Inevitably, the quantized nature of light and the atomic nature of matter produce statistical variations in both the number of photons defining each tiny shape and the number of atoms providing the electrical behavior of tiny transistors. No known way exists to eliminate such statistical variation, nor may any be possible.
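The blurb's claim that flaw probability grows with circuit area can be illustrated with the standard Poisson yield model, Y = exp(-D * A); this is a generic textbook model with made-up numbers, not material taken from the book itself.

```python
import math

def poisson_yield(defect_density_per_cm2: float, area_cm2: float) -> float:
    """Probability that a circuit of the given area contains no fatal defect,
    under the standard Poisson yield model Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Illustrative numbers only: 0.5 fatal defects per square centimetre.
D = 0.5
for area in (0.5, 1.0, 50.0):   # a small die, a large die, a whole-wafer circuit
    print(f"area = {area:5.1f} cm^2  ->  expected yield = {poisson_yield(D, area):.3%}")
```

Even a defect density that leaves small dies mostly functional drives the expected yield of a whole-wafer circuit essentially to zero, which is the statistical barrier the paragraph describes.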
It is recognized that formal design and verification methods are an important requirement for the attainment of high-quality system designs. The field has evolved enormously during the last few years, and formal design and verification methods are nowadays supported by several tools, both commercial and academic. If different tools and users are to generate and read the same language, then they must assign the same semantics to all constructs and elements of the language. The current IEEE standard VHDL language reference manual (LRM) tries to define VHDL as well as possible in a descriptive way, explaining the semantics in English. But rigor and clarity are very hard to maintain in a semantics defined in this way, and that has already given rise to many misconceptions and contradictory interpretations. Formal Semantics for VHDL is the first book that puts forward a cohesive set of semantics for the VHDL language. The chapters describe several semantics, each based on a different underlying formalism: two of them use Petri nets as the target language, two use higher-order logic, two use functional concepts, and another uses the concept of evolving algebras. Formal Semantics for VHDL is essential reading for researchers in formal methods and can be used as a text for an advanced course on the subject.
Electronic Engineering and Computing Technology contains sixty-one revised and extended research articles written by prominent researchers participating in the conference. Topics covered include Control Engineering, Network Management, Wireless Networks, Biotechnology, Signal Processing, Computational Intelligence, Computational Statistics, Internet Computing, High Performance Computing, and industrial applications. Electronic Engineering and Computing Technology offers a state-of-the-art account of the tremendous advances in electronic engineering and computing technology, and also serves as an excellent reference work for researchers and graduate students working in these fields.
Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition covers the subject of digital systems design using two important technologies: Field Programmable Logic Devices (FPLDs) and Hardware Description Languages (HDLs). These two technologies are combined to aid in the design, prototyping, and implementation of a whole range of digital systems from very simple ones replacing traditional glue logic to very complex ones customized as the applications require. Three HDLs are presented: VHDL and Verilog, the widely used standard languages, and the proprietary Altera HDL (AHDL). The chapters on these languages serve as tutorials and comparisons are made that show the strengths and weaknesses of each language. A large number of examples are used in the description of each language providing insight for the design and implementation of FPLDs. With the addition of the Altera UP-1 prototyping board, all examples can be tested and verified in a real FPLD. Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition is designed as an advanced level textbook as well as a reference for the professional engineer.
The craft of designing mathematical models of dynamic objects offers a large number of methods to solve subproblems in the design, typically parameter estimation, order determination, validation, model reduction, and analysis of identifiability, sensitivity and accuracy. There is also a substantial amount of process identification software available. A typical 'identification package' consists of program modules that implement selections of solution methods, coordinated by supervising programs handling file administration, operator communication, and presentation of results. It is to be run 'interactively', typically on a designer's 'workstation'. However, it is generally not obvious how to do that. Using interactive identification packages necessarily leaves it to the user to decide on quite a number of specifications, including which model structure to use, which subproblems to solve in each particular case, and in what order. The designer is faced with the task of setting up cases on the workstation, based on a priori knowledge about the actual physical object, the experiment conditions, and the purpose of the identification. In doing so, he/she will have to cope with two basic difficulties: 1) the computer will be unable to solve most of the tentative identification cases, so the latter will first have to be formulated in a way the computer can handle, and, worse, 2) even in cases where the computer can actually produce a model, the latter will not necessarily be valid for the intended purpose.
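To make the parameter-estimation subproblem concrete, here is a minimal sketch of least-squares fitting of a first-order ARX model; the model structure, the synthetic data, and the numerical values are assumptions for illustration only and are not taken from the book.

```python
import numpy as np

# Minimal sketch: estimate a and b in y[k] = a*y[k-1] + b*u[k-1] + e[k]
# from input/output data (synthetic here, for illustration only).
rng = np.random.default_rng(0)
N = 200
u = rng.standard_normal(N)                  # excitation signal
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.standard_normal()

# Stack the regressors and solve the linear least-squares problem.
Phi = np.column_stack([y[:-1], u[:-1]])     # one row per sample: [y[k-1], u[k-1]]
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated (a, b):", theta)           # should be close to (0.8, 0.5)
```

With real measurement records the same regressor-stacking step generalizes to higher orders and multiple inputs, which is exactly where the questions of model structure and order determination mentioned above arise.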
The assembly of electronic circuit boards has emerged as one of the most significant growth areas for robotics and automated assembly. This comprehensive volume, which is an edited collection of material mostly published in "Assembly Engineering" and "Electronic Packaging and Production," will provide an essential reference for engineers working in this field, including material on multilayer boards and chip-on-board technology, as well as numerous case studies. Frank J. Riley is senior vice-president of the Bodine Corporation and a world authority on assembly automation.
System designers, computer scientists and engineers have continuously invented and employed notations for modeling, specifying, simulating, documenting, communicating, teaching, verifying and controlling the designs of digital systems. Initially these systems were represented via electronic and fabrication details. Following C. E. Shannon's revelation of 1948, logic diagrams and Boolean equations were used to represent digital systems in a fashion that de-emphasized electronic and fabrication detail while revealing logical behavior. A small number of circuits were made available to remove the abstraction of these representations when it was desirable to do so. As system complexity grew, block diagrams, timing charts, sequence charts, and other graphic and symbolic notations were found to be useful in summarizing the gross features of a system and describing how it operated. In addition, it always seemed necessary or appropriate to augment these documents with lengthy verbal descriptions in a natural language. While each notation was, and still is, a perfectly valid means of expressing a design, lack of standardization, conciseness, and formal definitions interfered with communication and the understanding between groups of people using different notations. This problem was recognized early, and formal languages began to evolve in the 1950s when I. S. Reed discovered that flip-flop input equations were equivalent to a register transfer equation expressed in a vector-like notation. Expanding these concepts, Reed developed a notation that became known as a Register Transfer Language (RTL).
A central issue in computer vision is the problem of signal to symbol transformation. In the case of texture, which is an important visual cue, this problem has hitherto received very little attention. This book presents a solution to the signal to symbol transformation problem for texture. The symbolic description scheme consists of a novel taxonomy for textures, and is based on appropriate mathematical models for different kinds of texture. The taxonomy classifies textures into the broad classes of disordered, strongly ordered, weakly ordered and compositional. Disordered textures are described by statistical measures, strongly ordered textures by the placement of primitives, and weakly ordered textures by an orientation field. Compositional textures are created from these three classes of texture by using certain rules of composition. The unifying theme of this book is to provide standardized symbolic descriptions that serve as a descriptive vocabulary for textures. The algorithms developed in the book have been applied to a wide variety of textured images arising in semiconductor wafer inspection, flow visualization and lumber processing. The taxonomy for texture can serve as a scheme for the identification and description of surface flaws and defects occurring in a wide range of practical applications.
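The four-way taxonomy could be captured as a small enumeration that ties each texture class to the kind of descriptor the blurb names; the toy first-order statistics below are an illustrative stand-in, not the book's actual algorithms.

```python
from enum import Enum
import numpy as np

class TextureClass(Enum):
    DISORDERED = "statistical measures"
    STRONGLY_ORDERED = "placement of primitives"
    WEAKLY_ORDERED = "orientation field"
    COMPOSITIONAL = "composition of the other three classes"

def statistical_measures(patch: np.ndarray) -> dict:
    """Toy first-order statistics for a DISORDERED texture patch."""
    hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
    p = hist / hist.sum()
    entropy = -float(np.sum(p[p > 0] * np.log2(p[p > 0])))
    return {"mean": float(patch.mean()), "var": float(patch.var()), "entropy": entropy}

patch = np.random.default_rng(1).random((64, 64))   # stand-in for a grey-level image patch
print(TextureClass.DISORDERED.value, "->", statistical_measures(patch))
```

A full implementation along the lines of the taxonomy would add primitive-placement and orientation-field descriptors for the ordered classes, plus rules of composition that combine them for compositional textures.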
VHDL Answers to Frequently Asked Questions is a follow-up to the author's book VHDL Coding Styles and Methodologies (ISBN 0-7923-9598-0). On completion of his first book, the author continued teaching VHDL and actively participated in the comp.lang.vhdl newsgroup. During his experiences, he was enlightened by the many interesting issues and questions relating to VHDL and synthesis. These pertained to: misinterpretations in the use of the language; methods for writing error-free, and simulation-efficient, code for testbench designs and for synthesis; and general principles and guidelines for design verification. As a result of this wealth of public knowledge contributed by a large VHDL community, the author decided to act as a facilitator of this information by collecting different classes of VHDL issues, and by elaborating on these topics through complex simulatable examples. This book is intended for those who are seeking an enhanced proficiency in VHDL. It differs from other VHDL books in many respects. This book:
* emphasizes real VHDL, rather than philosophical or introductory types of information
* emphasizes application of VHDL for synthesis
* uses complete examples to demonstrate problems and solutions
* provides a disk that includes all the book examples and other useful reference VHDL material
* uses easy-to-remember symbology notation to emphasize language rules, good and poor methodology, and coding styles
* identifies obsolete VHDL constructs that must be avoided
* identifies synthesizable/non-synthesizable structures
* uses a question and answer format to clarify and emphasize the concerns of VHDL users.
1 Introduction
1.1 Historical Developments
1.2 Techniques for Improving Performance
1.3 An Architectural Design Example
2 Instructions and Addresses
2.1 Three-address Systems - The CDC 6600 and 7600
2.2 Two-address Systems - The IBM System/360 and /370
2.3 One-address Systems
2.4 Zero-address Systems
2.5 The MU5 Instruction Set
2.6 Comparing Instruction Formats
3 Storage Hierarchies
3.1 Store Interleaving
3.2 The Atlas Paging System
3.3 IBM Cache Systems
3.4 The MU5 Name Store
3.5 Data Transfers in the MU5 Storage Hierarchy
4 Pipelines
4.1 The MU5 Primary Operand Unit Pipeline
4.2 Arithmetic Pipelines - The TI ASC
4.3 The IBM System/360 Model 91 Common Data Bus
5 Instruction Buffering
5.1 The IBM System/360 Model 195 Instruction Processor
5.2 Instruction Buffering in CDC Computers
5.3 The MU5 Instruction Buffer Unit
5.4 The CRAY-1 Instruction Buffers
5.5 Position of the Control Point
6 Parallel Functional Units
6.1 The CDC 6600 Central Processor
6.2 The CDC 7600 Central Processor
6.3 Performance
6.4 The CRAY-1
7 Vector Processors
7.1 Vector Facilities in MU5
7.2 String Operations in MU5
7.3 The CDC Star-100
7.4 The CDC CYBER 205
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 18th International Conference on Parallel Computing, Euro-Par 2012, held on Rhodes Island, Greece, in August 2012. The papers of these 10 workshops (BDMC, CGWS, HeteroPar, HiBB, OMHI, Paraphrase, PROPER, UCHPC, VHPC) focus on the promotion and advancement of all aspects of parallel and distributed computing.
Modeling in Analog Design highlights some of the most pressing issues in the use of modeling techniques for the design of analogue circuits. Using models for circuit design gives designers the power to express directly the behaviour of parts of a circuit in addition to using other pre-defined components. There are numerous advantages to this new category of analog behavioural language. In the short term, by favouring top-down design and raising the level of description abstraction, this approach provides greater freedom of implementation and a higher degree of technology independence. In the longer term, analog synthesis and formal optimisation are targeted. Modeling in Analog Design introduces the reader to two main language standards: VHDL-A and MHDL. It goes on to provide in-depth examples of the use of these languages to model analog devices. The final part is devoted to the very important topic of modeling the thermal and electrothermal aspects of devices. This book is essential reading for analog designers using behavioural languages, and for developers of analog CAD tool environments who have to provide the tools used by those designers.
by Maq Mannan, President and CEO, DSM Technologies, Chairman of the IEEE 1364 Verilog Standards Group, Past Chairman of Open Verilog International. One of the major strengths of the Verilog language is the Programming Language Interface (PLI), which allows users and Verilog application developers to infinitely extend the capabilities of the Verilog language and the Verilog simulator. In fact, the overwhelming success of the Verilog language can be partly attributed to the existence of its PLI. Using the PLI, add-on products, such as graphical waveform displays or pre- and post-simulation analysis tools, can be easily developed. These products can then be used with any Verilog simulator that supports the Verilog PLI. This ability to create third-party add-on products for Verilog simulators has created new markets and provided the Verilog user base with multiple sources of software tools. Hardware design engineers can, and should, use the Verilog PLI to customize their Verilog simulation environment. A company that designs graphics chips, for example, may wish to see the simulation results of a new design in some custom graphical display. The Verilog PLI makes it possible, and even trivial, to integrate custom software, such as a graphical display program, into a Verilog simulator. The simulation results can then be dynamically displayed in the custom format during simulation. And, if the company uses Verilog simulators from multiple simulator vendors, this integrated graphical display will work with all the simulators.
Evolution through natural selection has been going on for a very long time. Evolution through artificial selection has been practiced by humans for a large part of our history, in the breeding of plants and livestock. Artificial evolution, where we evolve an artifact through artificial selection, has been around since electronic computers became common: about 30 years. Right from the beginning, people have suggested using artificial evolution to design electronics automatically. Only recently, though, have suitable reconfigurable silicon chips become available that make it easy for artificial evolution to work with a real, physical, electronic medium: before them, experiments had to be done entirely in software simulations. Early research concentrated on the potential applications opened up by the raw speed advantage of dedicated digital hardware over software simulation on a general-purpose computer. This book is an attempt to show that there is more to it than that. In fact, a radically new viewpoint is possible, with fascinating consequences. This book was written as a doctoral thesis, submitted in September 1996. As such, it was a rather daring exercise in ruthless brevity. Believing that the contribution I had to make was essentially a simple one, I resisted being drawn into peripheral discussions. In the places where I deliberately drop a subject, this implies neither that it's not interesting, nor that it's not relevant: just that it's not a crucial part of the tale I want to tell here.
J.-E. Dubois and N. Gershon. This book was inspired by the Symposium on "Communications and Computer Aided Systems" held at the 14th International CODATA Conference in September 1994 in Chambery, France. It was conceived and influenced by the discussions at the symposium, and most of the contributions were written following the Conference. This is the first comprehensive book, published in one volume, of issues concerning the challenges and the vital impact of the information revolution (including the Internet and the World Wide Web) on science and technology. Topics concerning the impact of the information revolution on science and technology include:
* Dramatic improvement in sharing of data and information among scientists and engineers around the world
* Collaborations (on-line and off-line) of scientists and engineers separated by distance
* Availability of visual tools and methods to view, understand, search, and share information contained in data
* Improvements in data and information browsing, search and access
* New ways of publishing scientific and technological data and information.
These changes have dramatically modified the way research and development in science and technology are being carried out. However, to facilitate this information flow nationally and internationally, the science and technology communities need to develop and put in place new standards and policies and resolve some legal issues.
PowerShell - now for Windows, Linux and macOS. What can the new PowerShell do on the different operating systems, and what can it not do? This book offers a practice-oriented introduction to the PowerShell world with many examples. Get to know the most important cmdlets, working with objects, and the use of functions, scripts and modules. For those switching over, the differences from Windows PowerShell are of particular interest. Basic knowledge of Windows, Linux or macOS is entirely sufficient to understand the book. You will learn: how to install and configure Visual Studio Code; how to work interactively with PowerShell; how to program with PowerShell; and how to access remote computers via PowerShell.
From a review of the Second Edition: 'If you are new to the field and want to know what "all this Verilog stuff is about," you've found the golden goose. The text here is straightforward, complete, and example-rich - mega-multi-kudos to the author James Lee. Though not as detailed as the Verilog reference guides from Cadence, it likewise doesn't suffer from the excessive abstractness those make you wade through. This is a quick and easy read, and will serve as a desktop reference for as long as Verilog lives. Best testimonial: I'm buying my fourth and fifth copies tonight (I've loaned out/lost two of my others).' Zach Coombes, AMD
For the second time, the International Workshop on Responsive Computer Systems has brought together a group of international experts from the fields of real-time computing, distributed computing, and fault-tolerant systems. The two-day workshop met at the splendid facilities of the KDD Research and Development Laboratories at Kamifukuoka, Saitama, in Japan on October 1 and 2, 1992. The program included a keynote address, a panel discussion and, in addition to the opening and closing session, six sessions of submitted presentations. The keynote address, "The Concepts and Technologies of Dependable and Real-time Computer Systems for Shinkansen Train Control", covered the architecture of the computer control system behind a very responsive, i.e., timely and reliable, transport system: the Shinkansen train. It was fascinating to listen to the operational experience with a large fault-tolerant computer application. "What are the Key Paradigms in the Integration of Timeliness and Reliability?" was the topic of the lively panel discussion. Once again the pros and cons of the time-triggered versus the event-triggered paradigm in the design of real-time systems were discussed. The eighteen submitted presentations covered diverse topics about important issues in the design of responsive systems, plus a session of progress reports on leading-edge research projects. Lively discussions characterized both days of the meeting. This volume contains the revised presentations, incorporating some of the discussions that occurred during the meeting.
VHDL Coding Styles and Methodologies, Second Edition is a follow-up to the first edition of the same book and to VHDL Answers to Frequently Asked Questions, first and second editions. This book was originally written as a teaching tool for a VHDL training course. The author began writing the book because he could not find a practical and easy-to-read book that gave in-depth coverage of both the language and coding methodologies. This edition provides practical information on reusable software methodologies for the design of bus functional models for testbenches. It also provides guidelines in the use of VHDL for synthesis. All VHDL code described in the book is on a companion CD. The CD also includes the GNU toolsuite with the EMACS language-sensitive editor (with VHDL, Verilog, and other language templates), and TSHELL tools that emulate a Unix shell. Model Technology graciously included a timed evaluation version of ModelSim, a recognized industry-standard VHDL/Verilog compiler and simulator that supports easy viewing of the models under analysis, along with many debug features. In addition, Synplicity included a timed version of Synplify, a very efficient, user-friendly and easy-to-use FPGA synthesis tool. Synplify provides the user with both RTL and gate-level views of the synthesized model, and a performance report of the design. Optimization mechanisms are provided in the tool.
VHDL and FPLDs in Digital Systems Design, Prototyping and Customization treats three aspects of digital systems - design, prototyping and customization - in an integrated manner using two technologies: the VHSIC Hardware Description Language (VHDL) and Field-Programmable Logic Devices (FPLDs). VHDL is used for modeling and specification; FPLDs are used for implementation. VHDL and FPLDs in Digital Systems Design, Prototyping and Customization is divided into three parts. Part I provides an introduction to the basic features of VHDL with emphasis on modeling and design. All types of VHDL models, including behavioral, structural and dataflow models, are presented. Part II is a bridge to designing and prototyping using FPLDs as the prototyping and implementation technology. Part III contains a number of examples and case studies that demonstrate the effectiveness of using VHDL and FPLDs in the design of real systems. VHDL and FPLDs in Digital Systems Design, Prototyping and Customization is an invaluable comprehensive reference for the digital designer. This work includes examples and software tied to real-world FPLDs. The reader can see how the material presented applies to real-world devices and can experiment with the software. Also included are large-scale designs, like the FLIX microcomputer, that demonstrate the power of VHDL.
For the near future, the recent predictions and roadmaps of silicon semiconductor technology all agree that the number of transistors on a chip will keep growing exponentially according to Moore's Law, pushing technology towards the system-on-a-chip (SOC) era. However, we are increasingly experiencing a productivity gap where the chip complexity that can be handled by current design teams falls short of the possibilities offered by technological advances. Together with growing time-to-market pressures, this drives the need for innovative measures to increase design productivity by orders of magnitude. It is commonly agreed that the solutions for achieving such a leap in design productivity lie in a shift of the focus of the design process to higher levels of abstraction on the one hand and in the massive reuse of predesigned, complex system components (intellectual property, IP) on the other hand. In order to be successful, both concepts eventually require the adoption of new languages and methodologies for system design, backed up by the availability of a corresponding set of system-level design automation tools. This book presents the SpecC system-level design language (SLDL) and the corresponding SpecC design methodology. The SpecC language is intended for specification and design of SOCs or embedded systems including software and hardware, whether using fixed platforms, integrating systems from different IPs, or synthesizing the system blocks from programming or hardware description languages. SpecC Specification Language and Methodology describes the SpecC methodology that leads designers from an executable specification to an RTL implementation through a well-defined sequence of steps. Each model is described and guidelines are given for generating these models from executable specifications. Finally, the SpecC methodology is demonstrated on an industrial-size example. The design community is now entering the system level of abstraction era and SpecC is the enabling element to achieve a paradigm shift in design culture needed for system/product design and manufacturing. SpecC Specification Language and Methodology will be of interest to researchers, designers, and managers dealing with system-level design, design flows and methodologies, as well as students learning system specification, modeling and design.
Identification of Multivariable Industrial Processes presents a unified approach to multivariable industrial process identification. It concentrates on industrial processes with reference to model applications. The areas covered are experiment design, model structure selection, parameter estimation, and error bounds of the transfer function. This publication is intended to fill the gap between modern systems and control theory and industrial application. It is based on the results of 10 years of research and application experience. The theories and models discussed are fully explained and illustrated with case studies. At an early stage the reader is introduced to real applications.
Parallel Processing Applications for Jet Engine Control is a volume in the new Advances in Industrial Control series, edited by Professor M.J. Grimble and Dr. M.A. Johnson of the Industrial Control Unit, University of Strathclyde. The book describes the mapping and load balancing of gas turbine engine and controller simulations onto arrays of transputers. It compares the operating system for transputers and the Uniform System upon the Butterfly Plus computer. The problem of applying formal methods to parallel asynchronous processors is addressed, implementing novel fault-tolerant systems to meet real-time flight control requirements. The book presents real-time closed-loop results highlighting the advantages and disadvantages of Occam and the transputer. Readers will find that this book provides valuable material for researchers in both academia and the aerospace industry.
This volume contains papers representing a comprehensive record of the contributions to the fifth workshop at EG '90 in Lausanne. The Eurographics hardware workshops have now become an established forum for the exchange of information about the latest developments in this field of growing importance. The first workshop took place during EG '86 in Lisbon. All participants considered this to be a very rewarding event to be repeated at future EG conferences. This view was reinforced at the EG '87 Hardware Workshop in Amsterdam and firmly established the need for such a colloquium in this specialist area within the annual EG conference. The third EG Hardware Workshop took place in Nice in 1988 and the fourth in Hamburg at EG '89. The first part of the book is devoted to rendering machines. The papers in this part address techniques for accelerating the rendering of images and efficient ways of improving their quality. The second part, on ray tracing, describes algorithms and architectures for producing photorealistic images, with emphasis on ways of reducing the time for this computationally intensive task. The third part, on visualization systems, covers a number of topics, including voxel-based systems, radiosity, animation and special rendering techniques. The contributions show that there is flourishing activity in the development of new algorithmic and architectural ideas and, in particular, in absorbing the impact of VLSI technology. The increasing diversity of applications encourages new solutions, and graphics hardware has become a research area of high activity and importance.
Intelligent robotics has become the focus of extensive research activity. This effort has been motivated by the wide variety of applications that can benefit from the developments. These applications often involve mobile robots, multiple robots working and interacting in the same work area, and operations in hazardous environments like nuclear power plants. Applications in the consumer and service sectors are also attracting interest. These applications have highlighted the importance of performance, safety, reliability, and fault tolerance. This volume is a selection of papers from a NATO Advanced Study Institute held in July 1989 with a focus on active perception and robot vision. The papers deal with such issues as motion understanding, 3-D data analysis, error minimization, object and environment modeling, object detection and recognition, parallel and real-time vision, and data fusion. The paradigm underlying the papers is that robotic systems require repeated and hierarchical application of the perception-planning-action cycle. The primary focus of the papers is the perception part of the cycle. Issues related to complete implementations are also discussed.
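As a rough illustration of the perception-planning-action cycle that the volume treats as its unifying paradigm, the loop below repeatedly senses, plans and acts in a toy simulated world; the sensor, planner and world model are hypothetical stand-ins, not drawn from any of the papers.

```python
import random

def perceive(world):
    """Sense the environment: a noisy reading of the remaining distance to a goal."""
    return world["distance_to_goal"] + random.uniform(-0.1, 0.1)

def plan(observation):
    """Decide a step size toward the goal, capped to keep motion conservative."""
    return 0.5 * min(1.0, max(0.0, observation))

def act(world, step):
    """Apply the planned action to the (simulated) world."""
    world["distance_to_goal"] = max(0.0, world["distance_to_goal"] - step)

world = {"distance_to_goal": 5.0}
for cycle in range(50):                 # repeated application of the cycle
    observation = perceive(world)
    if observation < 0.2:               # close enough to the goal: stop
        break
    act(world, plan(observation))
print(f"stopped after {cycle} cycles, remaining distance {world['distance_to_goal']:.2f}")
```

Hierarchical versions of the cycle, as discussed in the papers, nest such loops: a high-level planner sets goals that lower-level perception-action loops then pursue.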