

Parallel Computation and Computers for Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1988)
J.S. Kowalik
R4,018 Discovery Miles 40 180 Ships in 18 - 22 working days

It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik. From the Introduction: Artificial intelligence (AI) computer programs can be very time-consuming.

Automatic Performance Prediction of Parallel Programs (Paperback, Softcover reprint of the original 1st ed. 1996)
Thomas Fahringer
R2,652 Discovery Miles 26 520 Ships in 18 - 22 working days

Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses.

This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level, and the most important machine-specific parameters, including cache characteristics, communication network indices, and benchmark data for computational operations, at the machine level.

The material has been fully implemented as part of P3T, an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs.

A graphical user interface is described and displayed that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
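The blurb's parameter list (number of transfers, transfer time, communication network indices) invites a small illustration. The following is a toy latency/bandwidth sketch of a transfer-time estimate; the constants are invented for illustration, and this is not P3T's actual model or API:

```python
# Hypothetical latency/bandwidth model of transfer time, in the spirit of the
# machine-level communication parameters the book describes. The constants
# below are assumed for illustration, not taken from P3T or VFCS.
STARTUP_LATENCY_S = 5e-6      # per-message startup cost (assumed)
BANDWIDTH_BYTES_S = 1e9       # sustained network bandwidth (assumed)

def transfer_time(num_transfers: int, bytes_per_transfer: int) -> float:
    """Estimate total transfer time for a set of equal-sized messages."""
    per_message = STARTUP_LATENCY_S + bytes_per_transfer / BANDWIDTH_BYTES_S
    return num_transfers * per_message

# Example: 1000 messages of 8 KiB each
print(transfer_time(1000, 8192))
```

Even this crude model captures why many small messages can cost more than a few large ones: the startup latency is paid once per message.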

Parallel Machines: Parallel Machine Languages - The Emergence of Hybrid Dataflow Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 1990)
Robert A. Iannucci
R3,995 Discovery Miles 39 950 Ships in 18 - 22 working days

It is universally accepted today that parallel processing is here to stay but that software for parallel machines is still difficult to develop. However, there is little recognition of the fact that changes in processor architecture can significantly ease the development of software. In the seventies, the availability of processors that could directly address a large name space eliminated the problem of name management at one level and paved the way for the routine development of large programs. Similarly, today, processor architectures that can facilitate cheap synchronization and provide a global address space can simplify compiler development for parallel machines. If the cost of synchronization remains high, the programming of parallel machines will remain significantly less abstract than programming sequential machines. In this monograph Bob Iannucci presents the design and analysis of an architecture that can be a better building block for parallel machines than any von Neumann processor. There is another very interesting motivation behind this work. It is rooted in the long and venerable history of dataflow graphs as a formalism for expressing parallel computation. The field has bloomed since 1974, when Dennis and Misunas proposed a truly novel architecture using dataflow graphs as the parallel machine language. The novelty and elegance of dataflow architectures has, however, also kept us from asking the real question: "What can dataflow architectures buy us that von Neumann architectures can't?" In the following I explain in a roundabout way how Bob and I arrived at this question.

The Upper Layers of Open Systems Interconnection - Proceedings of the Second International Symposium on Interoperability of ADP Systems, The Hague, The Netherlands, 25-29 March 1985 (Paperback, Softcover reprint of the original 1st ed. 1987)
Rainer W.G. Herbers
R1,399 Discovery Miles 13 990 Ships in 18 - 22 working days

Interoperability has been a requirement in NATO ever since the Alliance came into being - an obvious requirement when 16 independent nations agree to allocate national resources for the achievement of a common goal: to maintain peace. With the appearance of data processing in the command and control process of the armed forces, the requirement for interoperability expanded into the data processing field. Although problems of procedural and operational interoperability had been constantly resolved to some extent as they arose over the years, the introduction of data processing increased the problems of technical interoperability. The increase was partially due to the natural desire of nations to support their own national industries, but it was definitely also due to the lack of time and resources needed to solve the problems. During the mid- and late 1970s the International Standards Organisation (ISO) decided to develop a concept ("model") which would allow "systems" to intercommunicate. The famous ISO 7-layer model for Open Systems Interconnection (OSI) was born. The OSI model was adopted by NATO in 1983 as the basis for standardization of data communications in NATO. The very successful (first) Symposium on Interoperability of ADP Systems, held in November 1982 at the SHAPE Technical Centre (STC), gave an extensive overview of the work carried out on the lower layers of the model and revealed some intriguing ideas about the upper layers. The first Symposium accurately reflected the state of the art at that point in time.

High Performance Architecture and Grid Computing - International Conference, HPAGC 2011, Chandigarh, India, July 19-20, 2011. Proceedings (Paperback)
Archana Mantri, Suman Nandi, Gaurav Kumar, Sandeep Kumar
R2,776 Discovery Miles 27 760 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the International Conference on High Performance Architecture and Grid Computing, HPAGC 2011, held in Chandigarh, India, in July 2011. The 87 revised full papers presented were carefully reviewed and selected from 240 submissions. The papers are organized in topical sections on grid and cloud computing; high performance architecture; and information management and network security.

Distributed and Parallel Systems - Cluster and Grid Computing (Paperback, Softcover reprint of the original 1st ed. 2002)
Peter Kacsuk, Dieter Kranzlmuller, Zsolt Nemeth, Jens Volkert
R2,637 Discovery Miles 26 370 Ships in 18 - 22 working days

Distributed and Parallel Systems: Cluster and Grid Computing is the proceedings of the fourth Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by Johannes Kepler University, Linz, Austria and the MTA SZTAKI Computer and Automation Research Institute.

The papers in this volume cover a broad range of research topics presented in four groups. The first one introduces cluster tools and techniques, especially the issues of load balancing and migration. Another six papers deal with grid and global computing including grid infrastructure, tools, applications and mobile computing. The next nine papers present general questions of distributed development and applications. The last four papers address a crucial issue in distributed computing: fault tolerance and dependable systems.

This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.

Distributed Sensor Networks - A Multiagent Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
Victor Lesser, Charles L. Ortiz Jr, Milind Tambe
R4,037 Discovery Miles 40 370 Ships in 18 - 22 working days

Distributed Sensor Networks is the first book of its kind to examine solutions to the problem of distributed resource allocation in sensor networks using ideas taken from the field of multiagent systems. The field of multiagent systems has itself seen an exponential growth in the past decade, and has developed a variety of techniques for distributed resource allocation. The book contains contributions from leading international researchers describing a variety of approaches to this problem based on examples of implemented systems taken from a common distributed sensor network application; each approach is motivated, demonstrated and tested by way of a common challenge problem. The book focuses on both practical systems and their theoretical analysis, and is divided into three parts: the first part describes the common sensor network challenge problem; the second part explains the different technical approaches to the common challenge problem; and the third part provides results on the formal analysis of a number of approaches taken to address the challenge problem.

Foundations of Real-Time Computing: Formal Specifications and Methods (Paperback, Softcover reprint of the original 1st ed. 1991)
Andre M.Van Tilborg, Gary M. Koob
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

This volume contains a selection of papers that focus on the state of the art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.

Handbook of Electronics Manufacturing Engineering (Paperback, Softcover reprint of the original 3rd ed. 1997)
Bernie Matisoff
R5,251 Discovery Miles 52 510 Ships in 18 - 22 working days

This single-source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight and will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.

Robust Model-Based Fault Diagnosis for Dynamic Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Jie Chen, R.J. Patton
R7,657 Discovery Miles 76 570 Ships in 18 - 22 working days

There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where the system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field.

Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject, and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students worldwide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
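Residual generation, named above as one of the fundamental issues, can be illustrated with a deliberately simple sketch: compare measured outputs against a model's predictions and flag a fault when a residual exceeds a threshold. The model values, data and threshold here are invented for illustration and are not taken from the book:

```python
# Toy illustration of residual generation for fault detection: the residual
# is the difference between measured and model-predicted output, and a fault
# is declared when any residual exceeds a threshold. All numbers are assumed.

def residuals(measured, predicted):
    """Elementwise difference between measurements and model predictions."""
    return [m - p for m, p in zip(measured, predicted)]

def detect_fault(measured, predicted, threshold):
    """True if any residual magnitude exceeds the threshold."""
    return any(abs(r) > threshold for r in residuals(measured, predicted))

model   = [1.0, 2.0, 3.0, 4.0]         # assumed model predictions
healthy = [1.02, 1.97, 3.01, 4.03]     # small noise only
faulty  = [1.02, 1.97, 3.90, 4.03]     # bias appears at the third sample

print(detect_fault(healthy, model, 0.1))  # False
print(detect_fault(faulty, model, 0.1))   # True
```

The robustness theme of the book concerns exactly the hard part this sketch ignores: choosing residual generators and thresholds so that model uncertainty and disturbances do not trigger false alarms.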

Distributed Systems for System Architects (Paperback, Softcover reprint of the original 1st ed. 2001)
Paulo Verissimo, Luis Rodrigues
R2,748 Discovery Miles 27 480 Ships in 18 - 22 working days

The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large: it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns to those hardware components pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, added-value system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What system architects need to know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.

Cooperative Computer-Aided Authoring and Learning - A Systems Approach (Paperback, Softcover reprint of the original 1st ed. 1995)
Max Muhlhauser
R5,163 Discovery Miles 51 630 Ships in 18 - 22 working days

Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer-assisted authoring and learning. Drawing from the experiences gained during the Nestor project, jointly run between the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues:

  • Cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute;
  • Authoring/learning as the central topic;
  • Laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment.

Within this framework, the book covers four major topics which denote the most important technical domains, namely:

  • The system kernel, based on object orientation and hypermedia;
  • Distributed multimedia support;
  • Cooperation support; and
  • Reusable instructional design support.

Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.

Neural Circuits and Networks - Proceedings of the NATO Advanced Study Institute on Neuronal Circuits and Networks, held at the Ettore Majorana Center, Erice, Italy, June 15-27, 1997 (Paperback, Softcover reprint of the original 1st ed. 1998)
Vincent Torre, John Nicholls
R2,644 Discovery Miles 26 440 Ships in 18 - 22 working days

The understanding of parallel processing and of the mechanisms underlying neural networks in the brain is certainly one of the most challenging problems of contemporary science. During the last decades significant progress has been made by the combination of different techniques, which have elucidated properties at a cellular and molecular level. However, in order to make significant progress in this field, it is necessary to gather more direct experimental data on the parallel processing occurring in the nervous system. Indeed the nervous system overcomes the limitations of its elementary components by employing a massive degree of parallelism, through the extremely rich set of synaptic interconnections between neurons. This book gathers a selection of the contributions presented during the NATO ASI School "Neuronal Circuits and Networks" held at the Ettore Majorana Center in Erice, Sicily, from June 15 to 27, 1997. The purpose of the School was to present an overview of recent results on single cell properties, the dynamics of neuronal networks and modelling of the nervous system. The School and the present book propose an interdisciplinary approach of experimental and theoretical aspects of brain functions combining different techniques and methodologies.

Disseminating Security Updates at Internet Scale (Paperback, Softcover reprint of the original 1st ed. 2003)
Jun Li, Peter Reiher, Gerald J. Popek
R2,621 Discovery Miles 26 210 Ships in 18 - 22 working days

Disseminating Security Updates at Internet Scale describes a new system, "Revere", that addresses the problem of delivering security updates quickly and reliably to very large numbers of Internet hosts. "Revere" builds large-scale, self-organizing and resilient overlay networks on top of the Internet to push security updates from dissemination centers to individual nodes. "Revere" also sets up repository servers from which individual nodes can pull missed security updates. The book further discusses how to protect this push-and-pull dissemination procedure and how to secure "Revere" overlay networks, considering possible attacks and countermeasures. It presents experimental measurements of a prototype implementation of "Revere", gathered using a large-scale-oriented approach. These measurements suggest that "Revere" can deliver security updates at the required scale, speed and resiliency for a reasonable cost. The book will be helpful to those trying to design peer systems at large scale when security is a concern, since many of the issues faced by such designs are also faced by "Revere". The "Revere" solutions may not always be appropriate for other peer systems with very different goals, but the analysis of the problems and possible solutions discussed here will be helpful in designing a customized approach for such systems.
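A minimal sketch of the push-plus-pull dissemination idea described above, assuming a toy overlay and an in-memory repository; the class and method names are illustrative and are not Revere's actual protocol or interfaces:

```python
# Toy model of push-with-pull-fallback update dissemination, loosely in the
# spirit of the Revere description above. Everything here (Node, push,
# pull_missed, the overlay shape) is an assumed illustration, not Revere.

class Node:
    def __init__(self, name):
        self.name = name
        self.updates = set()
        self.children = []           # overlay links updates are pushed along

    def push(self, update_id):
        """Receive an update and forward it down the overlay."""
        if update_id in self.updates:
            return                    # already seen; stop forwarding
        self.updates.add(update_id)
        for child in self.children:
            child.push(update_id)

    def pull_missed(self, repository):
        """Fetch any updates this node missed from a repository server."""
        self.updates |= repository

# Build a tiny overlay: center -> a -> b
center, a, b = Node("center"), Node("a"), Node("b")
center.children = [a]
a.children = [b]

center.push("CVE-1")                  # the push reaches a and then b
repository = {"CVE-1", "CVE-2"}       # repository holds all updates
b.pull_missed(repository)             # b recovers the one it never received
print(sorted(b.updates))
```

The duplicate-suppression check in `push` is what lets a redundant, resilient overlay flood updates without forwarding loops; the pull path covers nodes that were offline during the push.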

Multiprocessing - Trade-Offs in Computation and Communication (Paperback, Softcover reprint of the original 1st ed. 1993)
Vijay K. Naik
R2,634 Discovery Miles 26 340 Ships in 18 - 22 working days

Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. The book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher-dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results for the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time so as to achieve the optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high-performance systems.
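The computation/communication trade-off the book quantifies can be caricatured with an assumed toy cost model: per-processor work shrinks as processors are added while communication grows, so total execution time has a minimum at some processor count. The model form and constants are invented here and are not the book's DAG analysis:

```python
# Toy parameterized cost model illustrating the computation/communication
# trade-off: perfectly divided work plus communication that grows linearly
# with processor count. The model and numbers are assumptions for
# illustration, not the diamond-DAG analysis from the book.

def exec_time(total_work: float, comm_per_proc: float, p: int) -> float:
    """Estimated parallel execution time on p processors."""
    return total_work / p + comm_per_proc * p

def best_processor_count(total_work, comm_per_proc, max_p=1024):
    """Processor count minimizing the modeled execution time."""
    return min(range(1, max_p + 1),
               key=lambda p: exec_time(total_work, comm_per_proc, p))

# Communication cost limits how many processors are worth using:
print(best_processor_count(10_000, 10))
```

With zero communication cost the model rewards every added processor; once communication grows with p, adding processors past the minimum slows the computation down, which is the core trade-off the book analyzes rigorously.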

Computers in Building - Proceedings of the CAADfutures'99 Conference, the Eighth International Conference on Computer Aided Architectural Design Futures, held at Georgia Institute of Technology, Atlanta, Georgia, USA, June 7-8, 1999 (Paperback, Softcover reprint of the original 1st ed. 1999)
Godfried Augenbroe, Charles Eastman
R4,044 Discovery Miles 40 440 Ships in 18 - 22 working days

Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world have met every two years to present and document the state of the art of research in computer-aided architectural design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. This proceedings volume is the eighth in the series. The conference, held at the Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, and Web-based collaboration and information exchange. An overall reading shows that computers in architecture is still a young field, with many exciting results emerging out of both a greater understanding of the human processes and information processing needed to support design, and the continuously expanding capabilities of digital technology.

Compiling Parallel Loops for High Performance Computers - Partitioning, Data Assignment and Remapping (Paperback, Softcover reprint of the original 1st ed. 1993)
David E. Hudak, Santosh G. Abraham
R2,622 Discovery Miles 26 220 Ships in 18 - 22 working days

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory access overhead in multiprocessors with a logically shared address space and physically distributed memory. This book presents compiler techniques for reducing that overhead. Such techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs.

In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
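The partitioning shapes discussed above can be illustrated with two minimal iteration-assignment sketches: block (contiguous chunks) versus cyclic (round-robin) assignment of loop iterations to processors. The function names are illustrative and are not the ADAPT or CPR interfaces:

```python
# Two classic ways to assign loop iterations to processors. These sketches
# only show the iteration sets; the communication analysis that chooses
# between them (the book's subject) is not modeled here.

def block_partition(n_iters: int, n_procs: int):
    """Contiguous chunks: good locality for near-neighbor communication,
    but can load-imbalance loops whose work varies linearly with the index."""
    size, rem = divmod(n_iters, n_procs)
    parts, start = [], 0
    for p in range(n_procs):
        end = start + size + (1 if p < rem else 0)  # spread the remainder
        parts.append(list(range(start, end)))
        start = end
    return parts

def cyclic_partition(n_iters: int, n_procs: int):
    """Round-robin: balances linearly varying work across processors at the
    cost of scattering each processor's iterations through the index space."""
    return [list(range(p, n_iters, n_procs)) for p in range(n_procs)]

print(block_partition(10, 3))   # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(cyclic_partition(10, 3))  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

The book's point is that neither shape wins universally: the right choice depends on machine-specific communication costs weighed against load imbalance, which is exactly what ADP and CPR evaluate.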

Designing TSVs for 3D Integrated Circuits (Paperback, 2013)
Nauman Khan, Soha Hassoun
R1,622 Discovery Miles 16 220 Ships in 18 - 22 working days

This book explores the challenges and presents best strategies for designing Through-Silicon Vias (TSVs) for 3D integrated circuits. It describes a novel technique to mitigate TSV-induced noise, the GND Plug, which is superior to others adapted from 2-D planar technologies, such as a backside ground plane and traditional substrate contacts. The book also investigates, in the form of a comparative study, the impact of TSV size and granularity, spacing of C4 connectors, off-chip power delivery network, shared and dedicated TSVs, and coaxial TSVs on the quality of power delivery in 3-D ICs. The authors provide detailed best design practices for designing 3-D power delivery networks. Since TSVs occupy silicon real estate and impact device density, this book provides four iterative algorithms to minimize the number of TSVs in a power delivery network. Unlike other existing methods, these algorithms can be applied in early design stages when only functional block-level behaviors and a floorplan are available. Finally, the authors explore the use of carbon nanotubes for power grid design as a futuristic alternative to copper.

TRON Project 1990 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1990): Ken Sakamura TRON Project 1990 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1990)
Ken Sakamura
R1,466 Discovery Miles 14 660 Ships in 18 - 22 working days

I wish to extend my warm greetings to you all on behalf of the TRON Association, on this occasion of the Seventh International TRON Project Symposium. The TRON Project was proposed by Dr. Ken Sakamura of the University of Tokyo, with the aim of designing a new, comprehensive computer architecture that is open to worldwide use. Already more than six years have passed since the project was put in motion. The TRON Association is now made up of over 140 companies and organizations, including 25 overseas firms or their affiliates. A basic goal of TRON Project activities is to offer the world a human-oriented computer culture that will lead to a richer and more fulfilling life for people throughout the world. It is our desire to bring to reality a new order in the world of computers, based on design concepts that consider the needs of human beings first of all, and to enable people to enjoy the full benefits of these computers in their daily life. Thanks to the efforts of Association members, in recent months a number of TRON-specification 32-bit microprocessors have been made available. ITRON-specification products are continuing to appear, and we are now seeing commercial implementations of BTRON specifications as well. The CTRON subproject, meanwhile, is promoting standardization through validation testing and a portability experiment, and products are being marketed by several firms. This is truly a year in which the TRON Project has reached the practical implementation stage.

OpenMP in a Heterogeneous World - 8th International Workshop on OpenMP, IWOMP 2012, Rome, Italy, June 11-13, 2012. Proceedings... OpenMP in a Heterogeneous World - 8th International Workshop on OpenMP, IWOMP 2012, Rome, Italy, June 11-13, 2012. Proceedings (Paperback, 2012)
Barbara Chapman, Federico Massaioli, Matthias S. Muller, Marco Rorro
R1,406 Discovery Miles 14 060 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 8th International Workshop on OpenMP, held in Rome, Italy, in June 2012. The 18 technical full papers presented together with 7 posters were carefully reviewed and selected from 30 submissions. The papers are organized in topical sections on proposed extensions to OpenMP, runtime environments, optimization and accelerators, task parallelism, and validations and benchmarks.

VLSI Placement and Routing: The PI Project (Paperback, Softcover reprint of the original 1st ed. 1989): Alan T Sherman VLSI Placement and Routing: The PI Project (Paperback, Softcover reprint of the original 1st ed. 1989)
Alan T Sherman
R1,384 Discovery Miles 13 840 Ships in 18 - 22 working days

This book provides a superb introduction to and overview of the MIT PI System for custom VLSI placement and routing. Alan Sherman has done an excellent job of collecting and clearly presenting material that was previously available only in various theses, conference papers, and memoranda. He has provided here a balanced and comprehensive presentation of the key ideas and techniques used in PI, discussing part of his own Ph.D. work (primarily on the placement problem) in the context of the overall design of PI and the contributions of the many other PI team members. I began the PI Project in 1981 after learning first-hand how difficult it is to manually place modules and route interconnections in a custom VLSI chip. In 1980 Adi Shamir, Leonard Adleman, and I designed a custom VLSI chip for performing RSA encryption/decryption [226]. I became fascinated with the combinatorial and algorithmic questions arising in placement and routing, and began active research in these areas. The PI Project was started in the belief that many of the most interesting research issues would arise during an actual implementation effort, and secondarily in the hope that a practically useful tool might result. The belief was well-founded, but I had underestimated the difficulty of building a large, easily-used software tool for a complex domain; the PI software should be considered a prototype implementation validating the design choices made.

Relations and Graphs - Discrete Mathematics for Computer Scientists (Paperback, Softcover reprint of the original 1st ed.... Relations and Graphs - Discrete Mathematics for Computer Scientists (Paperback, Softcover reprint of the original 1st ed. 1993)
Gunther Schmidt, Thomas Stroehlein
R2,659 Discovery Miles 26 590 Ships in 18 - 22 working days

Relational methods can be found at various places in computer science, notably in database theory, relational semantics of concurrency, relational type theory, analysis of rewriting systems, and modern programming language design. In addition, they appear in algorithm analysis and in the bulk of discrete mathematics taught to computer scientists. This book is devoted to the background of these methods. It explains how to use relational and graph-theoretic methods systematically in computer science. A powerful formal framework of relational algebra is developed with a view to applications in a diverse range of problem areas. Results are first motivated by practical examples, often visualized by both Boolean 0-1-matrices and graphs, and then derived algebraically.

TRON Project 1987 Open-Architecture Computer Systems - Proceedings of the Third TRON Project Symposium (Paperback, Softcover... TRON Project 1987 Open-Architecture Computer Systems - Proceedings of the Third TRON Project Symposium (Paperback, Softcover reprint of the original 1st ed. 1987)
Ken Sakamura
R1,428 Discovery Miles 14 280 Ships in 18 - 22 working days

Almost 4 years have elapsed since Dr. Ken Sakamura of The University of Tokyo first proposed the TRON (the realtime operating system nucleus) concept and 18 months since the foundation of the TRON Association on 16 June 1986. Members of the Association from Japan and overseas currently exceed 80 corporations. The TRON concept, as advocated by Dr. Ken Sakamura, is concerned with the problem of interaction between man and the computer (the man-machine interface), which had not previously been given a great deal of attention. Dr. Sakamura has gone back to basics to create a new and complete cultural environment relative to computers and envisage a role for computers which will truly benefit mankind. This concept has indeed caused a stir in the computer field. The scope of the research work involved was initially regarded as being so extensive and diverse that the completion of activities was scheduled for the 1990s. However, I am happy to note that the enthusiasm expressed by individuals and organizations both within and outside Japan has permitted acceleration of the research and development activities. It is to be hoped that the presentations of the Third TRON Project Symposium will further the progress toward the creation of a computer environment that will be compatible with the aspirations of mankind.

Ad Hoc Mobile Wireless Networks - Principles, Protocols, and Applications, Second Edition (Hardcover, 2nd edition): Subir Kumar... Ad Hoc Mobile Wireless Networks - Principles, Protocols, and Applications, Second Edition (Hardcover, 2nd edition)
Subir Kumar Sarkar, T. G. Basavaraju, C. Puttamadappa
R4,232 Discovery Miles 42 320 Ships in 10 - 15 working days

The military, the research community, emergency services, and industrial environments all rely on ad hoc mobile wireless networks because of their simple infrastructure and minimal central administration. Now in its second edition, Ad Hoc Mobile Wireless Networks: Principles, Protocols, and Applications explains the concepts, mechanisms, design, and performance of these highly valued systems.

Following an overview of wireless network fundamentals, the book explores MAC layer, routing, multicast, and transport layer protocols for ad hoc mobile wireless networks. Next, it examines quality of service and energy management systems. Additional chapters cover mobility models for multi-hop ad hoc wireless networks as well as cross-layer design issues.

Exploring Bluetooth, IrDA (Infrared Data Association), HomeRF, WiFi, WiMax, Wireless Internet, and Mobile IP, the book contains appropriate examples and problems at the end of each chapter to illustrate each concept. This second edition has been completely updated with the latest technology and includes a new chapter on recent developments in the field, including sensor networks, personal area networks (PANs), smart dress, and vehicular ad hoc networks.

Self-organized, self-configured, and self-controlled, ad hoc mobile wireless networks will continue to be valued for a range of applications, as they can be set up and deployed anywhere and anytime. This volume captures the current state of the field as well as upcoming challenges awaiting researchers.

Proof and Computation (Paperback, Softcover reprint of the original 1st ed. 1995): Helmut Schwichtenberg Proof and Computation (Paperback, Softcover reprint of the original 1st ed. 1995)
Helmut Schwichtenberg
R2,703 Discovery Miles 27 030 Ships in 18 - 22 working days

Logical concepts and methods are of growing importance in many areas of computer science. The proofs-as-programs paradigm and the wide acceptance of Prolog show this clearly. The logical notion of a formal proof in various constructive systems can be viewed as a very explicit way to describe a computation procedure. Conversely, the development of logical systems has also been influenced by accumulating knowledge of rewriting and unification techniques. This volume contains a series of lectures by leading researchers presenting new ideas on the impact of the concept of a formal proof on computation theory. The subjects covered are: specification and abstract data types, proving techniques, constructive methods, linear logic, and concurrency and logic.
