
Distributed Sensor Networks - A Multiagent Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
Victor Lesser, Charles L. Ortiz Jr, Milind Tambe
R4,037 Discovery Miles 40 370 Ships in 18 - 22 working days

Distributed Sensor Networks is the first book of its kind to examine solutions to this problem using ideas taken from the field of multiagent systems. The field of multiagent systems has itself seen an exponential growth in the past decade, and has developed a variety of techniques for distributed resource allocation. Distributed Sensor Networks contains contributions from leading, international researchers describing a variety of approaches to this problem based on examples of implemented systems taken from a common distributed sensor network application; each approach is motivated, demonstrated and tested by way of a common challenge problem. The book focuses on both practical systems and their theoretical analysis, and is divided into three parts: the first part describes the common sensor network challenge problem; the second part explains the different technical approaches to the common challenge problem; and the third part provides results on the formal analysis of a number of approaches taken to address the challenge problem.

Foundations of Real-Time Computing: Formal Specifications and Methods (Paperback, Softcover reprint of the original 1st ed. 1991)
Andre M. Van Tilborg, Gary M. Koob
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

This volume contains a selection of papers that focus on the state of the art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.

Handbook of Electronics Manufacturing Engineering (Paperback, Softcover reprint of the original 3rd ed. 1997)
Bernie Matisoff
R5,251 Discovery Miles 52 510 Ships in 18 - 22 working days

This single-source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight and will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.

Robust Model-Based Fault Diagnosis for Dynamic Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Jie Chen, R.J. Patton
R7,657 Discovery Miles 76 570 Ships in 18 - 22 working days

There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students worldwide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.

Distributed Systems for System Architects (Paperback, Softcover reprint of the original 1st ed. 2001)
Paulo Verissimo, Luis Rodrigues
R2,748 Discovery Miles 27 480 Ships in 18 - 22 working days

The primary audience for this book is advanced undergraduate and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large: it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What do system architects need to know? The insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.

Cooperative Computer-Aided Authoring and Learning - A Systems Approach (Paperback, Softcover reprint of the original 1st ed. 1995)
Max Muhlhauser
R5,163 Discovery Miles 51 630 Ships in 18 - 22 working days

Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer-assisted authoring and learning. Drawing from the experiences gained during the Nestor project, jointly run between the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues:
  • Cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute;
  • Authoring/learning as the central topic;
  • Laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment.
Within this framework, the book covers four major topics which denote the most important technical domains, namely:
  • The system kernel, based on object orientation and hypermedia;
  • Distributed multimedia support;
  • Cooperation support; and
  • Reusable instructional design support.
Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.

Neural Circuits and Networks - Proceedings of the NATO Advanced Study Institute on Neuronal Circuits and Networks, held at the Ettore Majorana Center, Erice, Italy, June 15-27, 1997 (Paperback, Softcover reprint of the original 1st ed. 1998)
Vincent Torre, John Nicholls
R2,644 Discovery Miles 26 440 Ships in 18 - 22 working days

The understanding of parallel processing and of the mechanisms underlying neural networks in the brain is certainly one of the most challenging problems of contemporary science. During the last decades significant progress has been made by the combination of different techniques, which have elucidated properties at a cellular and molecular level. However, in order to make significant progress in this field, it is necessary to gather more direct experimental data on the parallel processing occurring in the nervous system. Indeed the nervous system overcomes the limitations of its elementary components by employing a massive degree of parallelism, through the extremely rich set of synaptic interconnections between neurons. This book gathers a selection of the contributions presented during the NATO ASI School "Neuronal Circuits and Networks" held at the Ettore Majorana Center in Erice, Sicily, from June 15 to 27, 1997. The purpose of the School was to present an overview of recent results on single cell properties, the dynamics of neuronal networks and modelling of the nervous system. The School and the present book propose an interdisciplinary approach of experimental and theoretical aspects of brain functions combining different techniques and methodologies.

Disseminating Security Updates at Internet Scale (Paperback, Softcover reprint of the original 1st ed. 2003)
Jun Li, Peter Reiher, Gerald J. Popek
R2,621 Discovery Miles 26 210 Ships in 18 - 22 working days

Disseminating Security Updates at Internet Scale describes a new system, "Revere", that addresses the problem of distributing security updates quickly and reliably to very large numbers of hosts. "Revere" builds large-scale, self-organizing and resilient overlay networks on top of the Internet to push security updates from dissemination centers to individual nodes. "Revere" also sets up repository servers for individual nodes to pull missed security updates. This book further discusses how to protect this push-and-pull dissemination procedure and how to secure "Revere" overlay networks, considering possible attacks and countermeasures. Disseminating Security Updates at Internet Scale presents experimental measurements of a prototype implementation of "Revere" gathered using a large-scale oriented approach. These measurements suggest that "Revere" can deliver security updates at the required scale, speed and resiliency for a reasonable cost. Disseminating Security Updates at Internet Scale will be helpful to those trying to design peer systems at large scale when security is a concern, since many of the issues faced by these designs are also faced by "Revere". The "Revere" solutions may not always be appropriate for other peer systems with very different goals, but the analysis of the problems and possible solutions discussed here will be helpful in designing a customized approach for such systems.

Multiprocessing - Trade-Offs in Computation and Communication (Paperback, Softcover reprint of the original 1st ed. 1993)
Vijay K. Naik
R2,634 Discovery Miles 26 340 Ships in 18 - 22 working days

Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time so as to achieve the optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high-performance systems.

Computers in Building - Proceedings of the CAADfutures'99 Conference, the Eighth International Conference on Computer Aided Architectural Design Futures, held at Georgia Institute of Technology, Atlanta, Georgia, USA, June 7-8, 1999 (Paperback, Softcover reprint of the original 1st ed. 1999)
Godfried Augenbroe, Charles Eastman
R4,044 Discovery Miles 40 440 Ships in 18 - 22 working days

Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world have met every two years to present and document the state of the art of research in Computer Aided Architectural Design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. This year's proceedings are the eighth in the series. The conference, held at Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, and Web-based collaboration and information exchange. An overall reading shows that computers in architecture is still a young field, with many exciting results emerging out of both a greater understanding of the human processes and information processing needed to support design and the continuously expanding capabilities of digital technology.

Compiling Parallel Loops for High Performance Computers - Partitioning, Data Assignment and Remapping (Paperback, Softcover reprint of the original 1st ed. 1993)
David E. Hudak, Santosh G. Abraham
R2,622 Discovery Miles 26 220 Ships in 18 - 22 working days

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, exemplified by cache coherency traffic and global memory access overhead in multiprocessors with a logically shared address space and physically distributed memory. The techniques presented here can be used by scientific application designers seeking to optimize code for a particular high-performance computer, and they can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and used in choosing the data partition.
The techniques in this book evaluate these tradeoffs to generate optimal cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments demonstrating the advantages of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.

Designing TSVs for 3D Integrated Circuits (Paperback, 2013)
Nauman Khan, Soha Hassoun
R1,622 Discovery Miles 16 220 Ships in 18 - 22 working days

This book explores the challenges and presents best strategies for designing Through-Silicon Vias (TSVs) for 3D integrated circuits. It describes a novel technique to mitigate TSV-induced noise, the GND Plug, which is superior to others adapted from 2-D planar technologies, such as a backside ground plane and traditional substrate contacts. The book also investigates, in the form of a comparative study, the impact of TSV size and granularity, spacing of C4 connectors, off-chip power delivery network, shared and dedicated TSVs, and coaxial TSVs on the quality of power delivery in 3-D ICs. The authors provide detailed best design practices for designing 3-D power delivery networks. Since TSVs occupy silicon real-estate and impact device density, this book provides four iterative algorithms to minimize the number of TSVs in a power delivery network. Unlike other existing methods, these algorithms can be applied in early design stages when only functional block- level behaviors and a floorplan are available. Finally, the authors explore the use of Carbon Nanotubes for power grid design as a futuristic alternative to Copper.

TRON Project 1990 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1990)
Ken Sakamura
R1,466 Discovery Miles 14 660 Ships in 18 - 22 working days

I wish to extend my warm greetings to you all on behalf of the TRON Association, on this occasion of the Seventh International TRON Project Symposium. The TRON Project was proposed by Dr. Ken Sakamura of the University of Tokyo, with the aim of designing a new, comprehensive computer architecture that is open to worldwide use. Already more than six years have passed since the project was put in motion. The TRON Association is now made up of over 140 companies and organizations, including 25 overseas firms or their affiliates. A basic goal of TRON Project activities is to offer the world a human-oriented computer culture that will lead to a richer and more fulfilling life for people throughout the world. It is our desire to bring to reality a new order in the world of computers, based on design concepts that consider the needs of human beings first of all, and to enable people to enjoy the full benefits of these computers in their daily life. Thanks to the efforts of Association members, in recent months a number of TRON-specification 32-bit microprocessors have been made available. ITRON-specification products are continuing to appear, and we are now seeing commercial implementations of BTRON specifications as well. The CTRON subproject, meanwhile, is promoting standardization through validation testing and a portability experiment, and products are being marketed by several firms. This is truly a year in which the TRON Project has reached the practical implementation stage.

OpenMP in a Heterogeneous World - 8th International Workshop on OpenMP, IWOMP 2012, Rome, Italy, June 11-13, 2012. Proceedings (Paperback, 2012)
Barbara Chapman, Federico Massaioli, Matthias S. Muller, Marco Rorro
R1,406 Discovery Miles 14 060 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 8th International Workshop on OpenMP, held in Rome, Italy, in June 2012. The 18 technical full papers presented together with 7 posters were carefully reviewed and selected from 30 submissions. The papers are organized in topical sections on proposed extensions to OpenMP, runtime environments, optimization and accelerators, task parallelism, and validations and benchmarks.

VLSI Placement and Routing: The PI Project (Paperback, Softcover reprint of the original 1st ed. 1989)
Alan T Sherman
R1,384 Discovery Miles 13 840 Ships in 18 - 22 working days

This book provides a superb introduction to and overview of the MIT PI System for custom VLSI placement and routing. Alan Sherman has done an excellent job of collecting and clearly presenting material that was previously available only in various theses, conference papers, and memoranda. He has provided here a balanced and comprehensive presentation of the key ideas and techniques used in PI, discussing part of his own Ph.D. work (primarily on the placement problem) in the context of the overall design of PI and the contributions of the many other PI team members. I began the PI Project in 1981 after learning first-hand how difficult it is to manually place modules and route interconnections in a custom VLSI chip. In 1980 Adi Shamir, Leonard Adleman, and I designed a custom VLSI chip for performing RSA encryption/decryption [226]. I became fascinated with the combinatorial and algorithmic questions arising in placement and routing, and began active research in these areas. The PI Project was started in the belief that many of the most interesting research issues would arise during an actual implementation effort, and secondarily in the hope that a practically useful tool might result. The belief was well-founded, but I had underestimated the difficulty of building a large, easily-used software tool for a complex domain; the PI software should be considered a prototype implementation validating the design choices made.

Relations and Graphs - Discrete Mathematics for Computer Scientists (Paperback, Softcover reprint of the original 1st ed. 1993)
Gunther Schmidt, Thomas Stroehlein
R2,659 Discovery Miles 26 590 Ships in 18 - 22 working days

Relational methods can be found at various places in computer science, notably in database theory, relational semantics of concurrency, relational type theory, analysis of rewriting systems, and modern programming language design. In addition, they appear in algorithm analysis and in the bulk of discrete mathematics taught to computer scientists. This book is devoted to the background of these methods. It explains how to use relational and graph-theoretic methods systematically in computer science. A powerful formal framework of relational algebra is developed with respect to applications to a diverse range of problem areas. Results are first motivated by practical examples, often visualized by both Boolean 0-1-matrices and graphs, and then derived algebraically.
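As a minimal sketch of the Boolean 0-1-matrix view mentioned above (not taken from the book; names are illustrative): a finite relation can be stored as a 0-1 matrix, and relational composition then becomes a Boolean matrix product.

```python
def compose(R, S):
    """Boolean matrix product: (R;S)[i][k] holds iff some j links i to k."""
    n, m, p = len(R), len(S), len(S[0])
    return [[any(R[i][j] and S[j][k] for j in range(m)) for k in range(p)]
            for i in range(n)]

# R relates element 0 to 1; S relates element 1 to 2,
# so the composition R;S relates element 0 to 2.
R = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 0]]
S = [[0, 0, 0],
     [0, 0, 1],
     [0, 0, 0]]
T = compose(R, S)
print(T[0][2])  # True
```

Viewed as a graph, this is simply path concatenation: an edge exists in R;S exactly when a two-step path exists through R then S.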

TRON Project 1987 Open-Architecture Computer Systems - Proceedings of the Third TRON Project Symposium (Paperback, Softcover reprint of the original 1st ed. 1987)
Ken Sakamura
R1,428 Discovery Miles 14 280 Ships in 18 - 22 working days

Almost 4 years have elapsed since Dr. Ken Sakamura of the University of Tokyo first proposed the TRON (the realtime operating system nucleus) concept, and 18 months since the foundation of the TRON Association on 16 June 1986. Members of the Association from Japan and overseas currently exceed 80 corporations. The TRON concept, as advocated by Dr. Ken Sakamura, is concerned with the problem of interaction between man and the computer (the man-machine interface), which had not previously been given a great deal of attention. Dr. Sakamura has gone back to basics to create a new and complete cultural environment relative to computers and envisage a role for computers which will truly benefit mankind. This concept has indeed caused a stir in the computer field. The scope of the research work involved was initially regarded as being so extensive and diverse that the completion of activities was scheduled for the 1990s. However, I am happy to note that the enthusiasm expressed by individuals and organizations both within and outside Japan has permitted acceleration of the research and development activities. It is to be hoped that the presentations of the Third TRON Project Symposium will further the progress toward the creation of a computer environment that will be compatible with the aspirations of mankind.

Proof and Computation (Paperback, Softcover reprint of the original 1st ed. 1995)
Helmut Schwichtenberg
R2,703 Discovery Miles 27 030 Ships in 18 - 22 working days

Logical concepts and methods are of growing importance in many areas of computer science. The proofs-as-programs paradigm and the wide acceptance of Prolog show this clearly. The logical notion of a formal proof in various constructive systems can be viewed as a very explicit way to describe a computation procedure. Also conversely, the development of logical systems has been influenced by accumulating knowledge on rewriting and unification techniques. This volume contains a series of lectures by leading researchers giving a presentation of new ideas on the impact of the concept of a formal proof on computation theory. The subjects covered are: specification and abstract data types, proving techniques, constructive methods, linear logic, and concurrency and logic.

Parallel Execution of Logic Programs (Paperback, Softcover reprint of the original 1st ed. 1987)
John S. Conery
R1,373 Discovery Miles 13 730 Ships in 18 - 22 working days

This book is an updated version of my Ph.D. dissertation, The AND/OR Process Model for Parallel Interpretation of Logic Programs. The three years since that paper was finished (or so I thought then) have seen quite a bit of work in the area of parallel execution models and programming languages for logic programs. A quick glance at the bibliography here shows roughly 50 papers on these topics, 40 of which were published after 1983. The main difference between the book and the dissertation is the updated survey of related work. One of the appendices in the dissertation was an overview of a Prolog implementation of an interpreter based on the AND/OR Process Model, a simulator I used to get some preliminary measurements of parallelism in logic programs. In the last three years I have been involved with three other implementations. One was written in C and is now being installed on a small multiprocessor at the University of Oregon. Most of the programming of this interpreter was done by Nitin More under my direction for his M.S. project. The other two, one written in Multilisp and the other in Modula-2, are more limited, intended to test ideas about implementing specific aspects of the model. Instead of an appendix describing one interpreter, this book has more detail about implementation included in Chapters 5 through 7, based on a combination of ideas from the four interpreters.

Robust Computing with Nano-scale Devices - Progresses and Challenges (Paperback, 2010 ed.): Chao Huang Robust Computing with Nano-scale Devices - Progresses and Challenges (Paperback, 2010 ed.)
Chao Huang
R2,624 Discovery Miles 26 240 Ships in 18 - 22 working days

Robust Nano-Computing focuses on various issues of robust nano-computing and defect-tolerant design for nanotechnology at different design abstraction levels. It addresses both redundancy- and configuration-based methods as well as fault-detection techniques through the development of accurate computation models and tools. The contents present an insightful view of ongoing research on nano-electronic devices, circuits, architectures, and design methods, and provide promising directions for future research.

Digital Systems Engineering (Paperback): William J. Dally, John W. Poulton Digital Systems Engineering (Paperback)
William J. Dally, John W. Poulton
R2,247 Discovery Miles 22 470 Ships in 10 - 15 working days

What makes some computers slow? What makes some digital systems operate reliably for years while others fail mysteriously every few hours? Why do some systems dissipate kilowatts while others operate off batteries? These questions of speed, reliability, and power are all determined by the system-level electrical design of a digital system. Digital Systems Engineering presents a comprehensive treatment of these topics. It combines a rigorous development of the fundamental principles in each area with down-to-earth examples of circuits and methods that work in practice. The book not only can serve as an undergraduate textbook, filling the gap between circuit design and logic design, but also can help practicing digital designers keep up with the speed and power of modern integrated circuits. The techniques described in this book, which were once used only in supercomputers, are now essential to the correct and efficient operation of any type of digital system.

Petri Nets - An Introduction (Paperback, Softcover reprint of the original 1st ed. 1985): Wolfgang Reisig Petri Nets - An Introduction (Paperback, Softcover reprint of the original 1st ed. 1985)
Wolfgang Reisig
R1,384 Discovery Miles 13 840 Ships in 18 - 22 working days

Net theory is a theory of systems organization which had its origins, about 20 years ago, in the dissertation of C. A. Petri [1]. Since this seminal paper, nets have been applied in various areas, at the same time being modified and theoretically investigated. In recent times, computer scientists have been taking a broader interest in net theory. The main concern of this book is the presentation of those parts of net theory which can serve as a basis for practical application. It introduces the basic net-theoretical concepts and ways of thinking, motivates them by means of examples, and derives relations between them. Some extended examples illustrate the method of application of nets. A major emphasis is devoted to those aspects which distinguish nets from other system models. These are, for instance, the role of concurrency, an awareness of the finiteness of resources, and the possibility of using the same representation technique at different levels of abstraction. On completing this book the reader should have achieved a systematic grounding in the subject allowing him access to the net literature [25]. These objectives determined the subjects treated here. The presentation of the material here is rather more axiomatic than inductive. We start with the basic notions of 'condition' and 'event' and the concept of the change of states by (concurrently) occurring events. By generalization of these notions a part of the theory of nets is presented.
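The condition/event firing rule described in the blurb can be sketched in a few lines of Python. This is a minimal, illustrative place/transition net (the `fire` helper and the producer/consumer net are hypothetical names, not taken from the book): a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs.

```python
# Minimal place/transition net: a transition is enabled when every
# input place holds at least one token; firing removes one token from
# each input place and adds one to each output place.

def fire(marking, transition):
    """Return the new marking after firing, or None if not enabled."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None  # transition not enabled in this marking
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# A producer/consumer-style net: 'produce' moves a token from
# 'ready' to 'buffer'; 'consume' moves it from 'buffer' to 'done'.
produce = (["ready"], ["buffer"])
consume = (["buffer"], ["done"])

m0 = {"ready": 1, "buffer": 0, "done": 0}
m1 = fire(m0, produce)   # enabled: 'ready' holds a token
m2 = fire(m1, consume)   # enabled only after 'produce' has fired
print(m2)  # {'ready': 0, 'buffer': 0, 'done': 1}
```

Note how `fire(m0, consume)` returns `None`: the ordering constraint between the two events is expressed purely by the token flow, which is the net-theoretic view of state change the book starts from.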

A Practical Introduction to Computer Architecture (Paperback, Softcover reprint of hardcover 1st ed. 2009): Daniel Page A Practical Introduction to Computer Architecture (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Daniel Page
R1,534 Discovery Miles 15 340 Ships in 18 - 22 working days

It is a great pleasure to write a preface to this book. In my view, the content is unique in that it blends traditional teaching approaches with the use of mathematics and a mainstream Hardware Design Language (HDL) as formalisms to describe key concepts. The book keeps the "machine" separate from the "application" by strictly following a bottom-up approach: it starts with transistors and logic gates and only introduces assembly language programs once their execution by a processor is clearly defined. Using an HDL, Verilog in this case, rather than static circuit diagrams is a big deviation from traditional books on computer architecture. Static circuit diagrams cannot be explored in a hands-on way like the corresponding Verilog model can. In order to understand why I consider this shift so important, one must consider how computer architecture, a subject that has been studied for more than 50 years, has evolved. In the pioneering days computers were constructed by hand. An entire computer could (just about) be described by drawing a circuit diagram. Initially, such diagrams consisted mostly of analogue components before later moving toward digital logic gates. The advent of digital electronics led to more complex cells, such as half-adders, flip-flops, and decoders, being recognised as useful building blocks.
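The preface's point about executable models versus static diagrams can be illustrated with one of the building blocks it names, the half-adder. The sketch below is in Python rather than the Verilog the book uses, purely to show the idea: a gate-level description that can be exercised exhaustively, which a static diagram cannot.

```python
# Gate-level half-adder built from primitive gates, as an executable
# stand-in for the HDL models the preface describes (illustrative only).

def xor_gate(a, b):
    return a ^ b

def and_gate(a, b):
    return a & b

def half_adder(a, b):
    """Sum bit = a XOR b, carry bit = a AND b."""
    return xor_gate(a, b), and_gate(a, b)

# Exhaustively check the truth table: the two output bits must
# encode the binary sum of the two input bits.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert s + 2 * c == a + b
print("truth table verified")
```

Running the model over all four input combinations is exactly the kind of hands-on exploration the preface argues for; in Verilog the same check would be a small testbench.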

Reconfigurable Computing: Architectures, Tools and Applications - 8th International Symposium, ARC 2012, Hong Kong, China, March... Reconfigurable Computing: Architectures, Tools and Applications - 8th International Symposium, ARC 2012, Hong Kong, China, March 19-23, 2012, Proceedings (Paperback, 2012 ed.)
Oliver Choy, Ray Cheung, Peter Athanas, Kentaro Sano
R1,435 Discovery Miles 14 350 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 8th International Symposium on Reconfigurable Computing: Architectures, Tools and Applications, ARC 2012, held in Hong Kong, China, in March 2012. The 35 revised papers presented, consisting of 25 full papers and 10 poster papers, were carefully reviewed and selected from 44 submissions. The topics covered are applied RC design methods and tools, applied RC architectures, applied RC applications, and critical issues in applied RC.

Advances in Randomized Parallel Computing (Paperback, Softcover reprint of the original 1st ed. 1999): Panos M. Pardalos,... Advances in Randomized Parallel Computing (Paperback, Softcover reprint of the original 1st ed. 1999)
Panos M. Pardalos, Sanguthevar Rajasekaran
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. As a brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as how valid the assumption made on the input space is. Randomized algorithms achieve superior performances without making any assumptions on the inputs by making coin flips within the algorithm. Any analysis done of randomized algorithms will be valid for all possible inputs.
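The quicksort example above can be made concrete with a short sketch. Choosing the pivot uniformly at random is the "coin flip within the algorithm": the O(n log n) expected running time then holds for every input, with no assumption that all input permutations are equally likely. This is an illustrative implementation, not one taken from the book.

```python
import random

def randomized_quicksort(xs):
    """Sort a list by recursive three-way partitioning around a random pivot."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)  # the coin flip inside the algorithm
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

data = [5, 3, 8, 1, 9, 2, 7]
print(randomized_quicksort(data))  # [1, 2, 3, 5, 7, 8, 9]
```

A deterministic pivot choice (say, always the first element) admits adversarial inputs, such as an already-sorted list, that force O(n^2) behaviour; randomizing the pivot removes any such worst-case input.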

You may like...
Clean Architecture - A Craftsman's Guide…
Robert Martin Paperback  (1)
R860 R619 Discovery Miles 6 190
The Physics of Computing
Marilyn Wolf Paperback R1,645 Discovery Miles 16 450
The System Designer's Guide to VHDL-AMS…
Peter J Ashenden, Gregory D. Peterson, … Paperback R2,281 Discovery Miles 22 810
Novel Approaches to Information Systems…
Naveen Prakash, Deepika Prakash Hardcover R5,924 Discovery Miles 59 240
Kreislauf des Lebens
Jacob Moleschott Hardcover R1,199 Discovery Miles 11 990
Learn Quantum Computing with Python and…
Robert Loredo Paperback R1,022 Discovery Miles 10 220
Edsger Wybe Dijkstra - His Life, Work…
Krzysztof R. Apt, Tony Hoare Hardcover R2,920 Discovery Miles 29 200
Systems Engineering Neural Networks
A Migliaccio Hardcover R2,817 Discovery Miles 28 170
Applying Integration Techniques and…
Gabor Kecskemeti Hardcover R6,050 Discovery Miles 60 500
Advances in Delay-Tolerant Networks…
Joel J. P. C. Rodrigues Paperback R4,669 Discovery Miles 46 690

 
