Books > Computing & IT > Computer hardware & operating systems

Interaction Between Compilers and Computer Architectures (Hardcover, 2001 ed.)
Gyungho Lee, Pen-Chung Yew
R2,735 Discovery Miles 27 350 Ships in 18 - 22 working days

Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.

Parallel Sparse Direct Solver for Integrated Circuit Simulation (Hardcover, 1st ed. 2017)
Xiao-Ming Chen, Yu Wang, Huazhong Yang
R3,182 Discovery Miles 31 820 Ships in 18 - 22 working days

This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have been proven to be high-performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.

Logic and Complexity (Hardcover, 2004 ed.)
Richard Lassaigne, Michel De Rougemont
R4,210 Discovery Miles 42 100 Ships in 18 - 22 working days

Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to Complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximations, providing a better understanding for efficient algorithmic solutions to problems.

Divided into three parts, it covers:

- Model Theory and Recursive Functions - introducing the basic model theory of propositional logic, first-order logic, inductive definitions and second-order logic. Recursive functions, Turing computability and decidability are also examined.

- Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity.

- Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form.

Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.

Advanced Boolean Techniques - Selected Papers from the 13th International Workshop on Boolean Problems (Hardcover, 1st ed. 2020)
Rolf Drechsler, Mathias Soeken
R2,676 Discovery Miles 26 760 Ships in 18 - 22 working days

This book describes recent findings in the domain of Boolean logic and Boolean algebra, covering application domains in circuit and system design, but also basic research in mathematics and theoretical computer science. Content includes invited chapters and a selection of the best papers presented at the 13th annual International Workshop on Boolean Problems. Provides a single-source reference to the state-of-the-art research in the field of logic synthesis and Boolean techniques; Includes a selection of the best papers presented at the 13th annual International Workshop on Boolean Problems; Covers Boolean algebras, Boolean logic, Boolean modeling, Combinatorial Search, Boolean and bitwise arithmetic, Software and tools for the solution of Boolean problems, Applications of Boolean logic and algebras, Applications to real-world problems, Boolean constraint solving, and Extensions of Boolean logic.

Logic Synthesis for Low Power VLSI Designs (Hardcover, 1998 ed.)
Sasan Iman, Massoud Pedram
R4,145 Discovery Miles 41 450 Ships in 18 - 22 working days

Logic Synthesis for Low Power VLSI Designs presents a systematic and comprehensive treatment of power modeling and optimization at the logic level. More precisely, this book provides a detailed presentation of methodologies, algorithms and CAD tools for power modeling, estimation and analysis, synthesis and optimization at the logic level. Logic Synthesis for Low Power VLSI Designs contains detailed descriptions of technology-dependent logic transformations and optimizations, technology decomposition and mapping, and post-mapping structural optimization techniques for low power. It also emphasizes the trade-off techniques for two-level and multi-level logic circuits that involve power dissipation and circuit speed, in the hope that the readers can better understand the issues and ways of achieving their power dissipation goal while meeting the timing constraints. Logic Synthesis for Low Power VLSI Designs is written for VLSI design engineers, CAD professionals, and students who have had a basic knowledge of CMOS digital design and logic synthesis.

Perpendicular Magnetic Recording (Hardcover, 2004 ed.)
Sakhrat Khizroev, Dmitri Litvinov
R2,748 Discovery Miles 27 480 Ships in 18 - 22 working days

Magnetic recording is expected to become a core technology in a multi-billion dollar industry in the very near future. Some of the most critical discoveries regarding perpendicular write and playback heads and perpendicular media were made only during the last several years, as a result of extensive and intensive research in both academia and industry in the fierce race to extend the superparamagnetic limit in magnetic recording media. These discoveries appear to be critical for implementing perpendicular magnetic recording in an actual disk drive.

This book addresses all the open questions and issues which need to be resolved before perpendicular recording can finally be implemented successfully, and is the first monograph in many years to address this subject.

This book is intended for graduate students, young engineers and even senior and more experienced researchers in this field who need to acquire adequate knowledge of the physics of perpendicular magnetic recording in order to further develop the field of perpendicular recording.

BiCMOS Technology and Applications (Hardcover, 2nd ed. 1993)
Antonio R. Alvarez
R4,237 Discovery Miles 42 370 Ships in 18 - 22 working days

BiCMOS Technology and Applications, Second Edition provides a synthesis of available knowledge about the combination of bipolar and MOS transistors in a common integrated circuit - BiCMOS. In this new edition all chapters have been updated and completely new chapters on emerging topics have been added. In addition, BiCMOS Technology and Applications, Second Edition gives readers with a knowledge of either CMOS or Bipolar technology/design a reference with which they can make educated decisions regarding the viability of BiCMOS in their own application. BiCMOS Technology and Applications, Second Edition is vital reading for practicing integrated circuit engineers as well as technical managers trying to evaluate business issues related to BiCMOS. As a textbook, this book is also appropriate at the graduate level for a special topics course in BiCMOS. A general knowledge of device physics, processing and circuit design is assumed. Given the division of the book, it lends itself well to a two-part course: one on technology and one on design. This will provide advanced students with a good understanding of tradeoffs between bipolar and MOS devices and circuits.

Server Architectures - Multiprocessors, Clusters, Parallel Systems, Web Servers, Storage Solutions (Paperback)
Rene J. Chevance
R2,732 Discovery Miles 27 320 Ships in 10 - 15 working days

The goal of this book is to present and compare various options for systems architecture from two separate points of view: that of the information technology decision-maker who must choose a solution matching company business requirements, and that of the systems architect who finds himself between the rock of changes in hardware and software technologies and the hard place of changing business needs.
Different aspects of server architecture are presented, from databases designed for parallel architectures to high-availability systems, touching en route on often-neglected performance aspects.
1. The book provides IT managers, decision makers and project leaders with knowledge sufficient to understand the choices made in, and the capabilities of, systems offered by various vendors.
2. It provides system design information to balance characteristic applications against the capabilities and nature of various architectural choices.
3. In addition, it offers an integrated view of the concepts in server architecture, accompanied by discussion of effects on the evolution of the data processing industry.

Trust in Technology: A Socio-Technical Perspective (Hardcover, 2006 ed.)
Karen Clarke, Gillian Hardstone, Mark Rouncefield, Ian Sommerville
R2,666 Discovery Miles 26 660 Ships in 18 - 22 working days

This book encapsulates some work done in the DIRC project concerned with trust and responsibility in socio-technical systems. It brings together a range of disciplinary approaches - computer science, sociology and software engineering - to produce a socio-technical systems perspective on the issues surrounding trust in technology in complex settings. Computer systems can only bring about their purported benefits if functionality, users and usability are central to their design and deployment. Thus, technology can only be trusted in situ and in everyday use if these issues have been brought to bear on the process of technology design, implementation and use. The studies detailed in this book analyse the ways in which trust in technology is achieved and/or worked around in everyday situations in a range of settings - including hospitals, a steelworks, a public enquiry, the financial services sector and air traffic control.

Advanced Electronic Technologies and Systems Based on Low-Dimensional Quantum Devices (Hardcover, 1998 ed.)
M. Balkanski, Nikolai Andreev
R5,309 Discovery Miles 53 090 Ships in 18 - 22 working days

The major thrust of this book is the realisation of an all-optical computer. To that end it discusses optoelectronic devices and applications, transmission systems, integrated optoelectronic systems and, of course, all-optical computers. The chapters on 'heterostructure light emitting devices' and 'quantum well carrier transport optoelectronic devices' present the most recent advances in device physics, together with modern devices and their applications. The chapter on 'microcavity lasers' is essential to the discussion of present and future developments in solid-state laser physics and technology, and puts into perspective the present state of research into, and the technology of, optoelectronic devices within the context of their use in advanced systems. A significant part of the book deals with problems of propagation in quantum structures. 'Soliton-based switching, gating and transmission systems' presents the basics of controlling the propagation of photons in solids and the use of this control in devices. The chapters on 'optoelectronic processing using smart pixels' and 'all optical computers' are preceded by introductory material in 'fundamentals of quantum structures for optoelectronic devices and systems' and 'linear and nonlinear absorption and reflection in quantum well structures'. It is clear that new architectures will be necessary if we are to fully utilise the potential of electrooptic devices in computing, but even current architectures and structures demonstrate the feasibility of the all-optical computer: one that is possible today.

Integrated Research in GRID Computing - CoreGRID Integration Workshop 2005 (Selected Papers) November 28-30, Pisa, Italy (Hardcover, 2007 ed.)
Sergei Gorlatch, Marco Danelutto
R2,813 Discovery Miles 28 130 Ships in 18 - 22 working days

Integrated Research in Grid Computing presents a selection of the best papers presented at the CoreGRID Integration Workshop (CGIW2005), which took place on November 28-30, 2005 in Pisa, Italy. The aim of CoreGRID is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies in order to overcome the current fragmentation and duplication of effort in this area. To achieve this objective, the workshop brought together a critical mass of well-established researchers (including 145 permanent researchers and 171 PhD students) from a number of institutions which have all constructed an ambitious joint program of activities. Priority in the workshop was given to work conducted in collaboration between partners from different research institutions and to promising research proposals that could foster such collaboration in the future.

Advances in Computer Graphics Hardware III (Hardcover, 1991 ed.)
A. A. M. Kuijk
R2,831 Discovery Miles 28 310 Ships in 18 - 22 working days

This book is a collection of the finalized versions of the papers presented at the third Eurographics Workshop on Graphics Hardware. The diversity of the contributions reflects the widening range of options for graphics hardware that can be exploited due to the constant evolution of VLSI and software technologies. The first part of the book deals with the algorithmic aspects of graphics systems in a hardware-oriented context. Topics are: VLSI design strategies, data distribution for ray-tracing, the advantages of point-driven image generation with respect to VLSI implementation, use of memory and ease of parallelization, ray-tracing, and image reconstruction. The second part is on specific hardware: content-addressable memories and voxel-based systems. The third part addresses parallel systems: massively parallel object-based architectures, two systems in which images generated by individual rendering systems are composited, and a transputer-based parallel display processor.

Adiabatic Logic - Future Trend and System Level Perspective (Hardcover, 2012)
Philip Teichmann
R2,652 Discovery Miles 26 520 Ships in 18 - 22 working days

Adiabatic logic is a potential successor for static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments, like the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts, will change the gate level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of the technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, are investigated, as well as degradation related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects like energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology also has to compete with static CMOS in productivity.
On a 130nm test chip, a large scale test vehicle containing an FIR filter was implemented in adiabatic logic, utilizing a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors compliant to the values gained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to compatibility not only to CMOS technology, but also to electronic design automation (EDA) tools developed for static CMOS system design.

Crisp and Soft Computing with Hypercubical Calculus - New Approaches to Modeling in Cognitive Science and Technology with Parity Logic, Fuzzy Logic, and Evolutionary Computing (Hardcover, 1999 ed.)
Michael Zaus
R4,079 Discovery Miles 40 790 Ships in 18 - 22 working days

In Part I, the impact of an integro-differential operator on parity logic engines (PLEs) as a tool for scientific modeling from scratch is presented. Part II outlines the fuzzy structural modeling approach for building new linear and nonlinear dynamical causal forecasting systems in terms of fuzzy cognitive maps (FCMs). Part III introduces the new type of autogenetic algorithms (AGAs) to the field of evolutionary computing. Altogether, these PLEs, FCMs, and AGAs may serve as conceptual and computational power tools.

Transparent Designs - Personal Computing and the Politics of User-Friendliness (Hardcover)
Michael L. Black
R1,378 Discovery Miles 13 780 Ships in 10 - 15 working days

This fascinating cultural history of the personal computer explains how user-friendly design allows tech companies to build systems that we cannot understand. Modern personal computers are easy to use, and their welcoming, user-friendly interfaces encourage us to see them as designed for our individual benefit. Rarely, however, do these interfaces invite us to consider how our individual uses support the broader political and economic strategies of their designers. In Transparent Designs, Michael L. Black revisits early debates from hobbyist newsletters, computing magazines, user manuals, and advertisements about how personal computers could be seen as usable and useful by the average person. Black examines how early personal computers from the Tandy TRS-80 and Commodore PET to the IBM PC and Apple Macintosh were marketed to an American public that was high on the bold promises of the computing revolution but also skeptical about their ability to participate in it. Through this careful archival study, he shows how many of the foundational principles of usability theory were shaped through disagreements over the languages and business strategies developed in response to this skepticism. In short, this book asks us to consider the consequences of a computational culture that is based on the assumption that the average person does not need to know anything about the internal operations of the computers we've come to depend on for everything. Expanding our definition of usability, Transparent Designs examines how popular and technical rhetoric shapes user expectations about what counts as usable and useful as much as or even more so than hardware and software interfaces. Offering a fresh look at the first decade of personal computing, Black highlights how the concept of usability has been leveraged historically to smooth over conflicts between the rhetoric of computing and its material experience. 
Readers interested in vintage computing, the history of technology, digital rhetoric, or American culture will be fascinated by this book.

Formal Hardware Verification - Methods and Systems in Comparison (Paperback, 1997 ed.)
Thomas Kropf
R1,613 Discovery Miles 16 130 Ships in 18 - 22 working days

This state-of-the-art monograph presents a coherent survey of a variety of methods and systems for formal hardware verification. It emphasizes the presentation of approaches that have matured into tools and systems usable for the actual verification of nontrivial circuits. All in all, the book is a representative and well-structured survey of the success and future potential of formal methods in proving the correctness of circuits. The various chapters describe the respective approaches, supplying theoretical foundations as well as taking into account the application viewpoint. By applying all methods and systems presented to the same set of IFIP WG10.5 hardware verification examples, a valuable and fair analysis of the strengths and weaknesses of the various approaches is given.

From Specification to Embedded Systems Application (Hardcover, 2005 ed.)
Achim Rettberg, Mauro C. Zanella, Franz J. Rammig
R2,830 Discovery Miles 28 300 Ships in 18 - 22 working days

Like almost no other technology, embedded systems are an essential element of many innovations in automotive engineering. New functions and improvements of already existing functions, as well as compliance with traffic regulations and customer requirements, have only become possible through the increasing use of electronic systems, especially in the fields of driving, safety, reliability, and functionality. As the functionalities increase in number and have to cooperate, the complexity of the entire system increases.

Synergy effects resulting from application functionalities distributed across several electronic control devices, exchanging information through the network, bring about more complex system architectures with many different sub-networks, operating at different speeds and with different protocol implementations.

To manage the increasing complexity of these systems, a deterministic behaviour of the control units and the communication network must be provided for, in particular when dealing with a distributed functionality.

From Specification to Embedded Systems Application documents recent approaches and results presented at the International Embedded Systems Symposium (IESS 2005), which was held in August 2005 in Manaus (Brazil) and sponsored by the International Federation for Information Processing (IFIP).

The topics which have been chosen for this working conference are very timely: design methodology, modeling, specification, software synthesis, power management, formal verification, testing, network, communication systems, distributed control systems, resource management and special aspects in system design.

Handbook of Signal Processing Systems (Hardcover)
Shuvra S. Bhattacharyya, Ed F. Deprettere, Rainer Leupers, Jarmo Takala
R5,392 Discovery Miles 53 920 Ships in 18 - 22 working days

It gives me immense pleasure to introduce this timely handbook to the research/development communities in the field of signal processing systems (SPS). This is the first of its kind and represents state-of-the-art coverage of research in this field. The driving force behind information technologies (IT) hinges critically upon the major advances in both component integration and system integration. The major breakthrough for the former is undoubtedly the invention of the IC in the 50's by Jack S. Kilby, the Nobel Prize Laureate in Physics 2000. In an integrated circuit, all components were made of the same semiconductor material. Beginning with the pocket calculator in 1964, many increasingly complex applications have followed. In fact, processing gates and memory storage on a chip have since then grown at an exponential rate, following Moore's Law. (Moore himself admitted that Moore's Law had turned out to be more accurate, longer lasting and deeper in impact than he ever imagined.) With greater device integration, various signal processing systems have been realized for many killer IT applications. Further breakthroughs in computer sciences and Internet technologies have also catalyzed large-scale system integration. All these have led to today's IT revolution, which has profound impacts on our lifestyle and the overall prospect of humanity. (It is hard to imagine life today without mobiles or the Internet.) The success of SPS requires a well-concerted integrated approach from multiple disciplines, such as device, design, and application.

Compiling Parallel Loops for High Performance Computers - Partitioning, Data Assignment and Remapping (Hardcover, 1993 ed.)
David E. Hudak, Santosh G. Abraham
R2,745 Discovery Miles 27 450 Ships in 18 - 22 working days

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, exemplified by cache coherency traffic and global memory overhead in multiprocessors with a logically shared address space and physically distributed memory. Such techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition.
The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.

Hierarchical Scheduling in Parallel and Cluster Systems (Hardcover, 2003 ed.)
Sivarama Dandamudi
R4,157 Discovery Miles 41 570 Ships in 18 - 22 working days

Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access of memory to all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.

Introduction to Parallel Processing - Algorithms and Architectures (Hardcover, 1999 ed.): Behrooz Parhami Introduction to Parallel Processing - Algorithms and Architectures (Hardcover, 1999 ed.)
Behrooz Parhami
R5,570 Discovery Miles 55 700 Ships in 18 - 22 working days

THE CONTEXT OF PARALLEL PROCESSING: The field of digital computer architecture has grown explosively in the past two decades. Through a steady stream of experimental research, tool-building efforts, and theoretical studies, the design of an instruction-set architecture, once considered an art, has been transformed into one of the most quantitative branches of computer technology. At the same time, better understanding of various forms of concurrency, from standard pipelining to massive parallelism, and the invention of architectural structures to support a reasonably efficient and user-friendly programming model for such systems, has allowed hardware performance to continue its exponential growth. This trend is expected to continue in the near future. This explosive growth, linked with the expectation that performance will continue its exponential rise with each new generation of hardware and that (in stark contrast to software) computer hardware will function correctly as soon as it comes off the assembly line, has its downside. It has led to unprecedented hardware complexity and almost intolerable development costs. The challenge facing current and future computer designers is to institute simplicity where we now have complexity; to use the fundamental theories being developed in this area to gain performance and ease-of-use benefits from simpler circuits; and to understand the interplay between technological capabilities and limitations, on the one hand, and design decisions based on user and application requirements, on the other.

Issues of Fault Diagnosis for Dynamic Systems (Hardcover, 2000 ed.): Ron J. Patton, Paul M. Frank, Robert N. Clark Issues of Fault Diagnosis for Dynamic Systems (Hardcover, 2000 ed.)
Ron J. Patton, Paul M. Frank, Robert N. Clark
R5,253 Discovery Miles 52 530 Ships in 18 - 22 working days

There is an increasing demand for dynamic systems to become safer, more reliable and more economical in operation. This requirement extends beyond the normally accepted safety-critical systems, e.g., nuclear reactors, aircraft and many chemical processes, to systems such as autonomous vehicles and some process control systems where system availability is vital. The field of fault diagnosis for dynamic systems (including fault detection and isolation) has become an important topic of research. Many applications of qualitative and quantitative modelling, statistical processing and neural networks are now being planned and developed in complex engineering systems. Issues of Fault Diagnosis for Dynamic Systems has been prepared by experts in fault detection and isolation (FDI) and fault diagnosis with wide-ranging experience. Subjects featured include: - Real plant application studies; - Non-linear observer methods; - Robust approaches to FDI; - The use of parity equations; - Statistical process monitoring; - Qualitative modelling for diagnosis; - Parameter estimation approaches to FDI; - Fault diagnosis for descriptor systems; - FDI in inertial navigation; - Structured approaches to FDI; - Change detection methods; - Bio-medical studies. Researchers and industrial experts will appreciate the combination of practical issues and mathematical theory with many examples. Control engineers will profit from the application studies.

Distributed Systems for System Architects (Hardcover, 2001 ed.): Paulo Verissimo, Luis Rodrigues Distributed Systems for System Architects (Hardcover, 2001 ed.)
Paulo Verissimo, Luis Rodrigues
R3,454 Discovery Miles 34 540 Ships in 18 - 22 working days

The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large; that is, it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns to those hardware components pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What system architects need to know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.

Tools and Environments for Parallel and Distributed Systems (Hardcover, 1996 ed.): Amr Zaky, Ted Lewis Tools and Environments for Parallel and Distributed Systems (Hardcover, 1996 ed.)
Amr Zaky, Ted Lewis
R4,179 Discovery Miles 41 790 Ships in 18 - 22 working days

Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally accepted parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tool efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.

Design Techniques for Mash Continuous-Time Delta-Sigma Modulators (Hardcover, 1st ed. 2018): Qiyuan Liu, Alexander Edward,... Design Techniques for Mash Continuous-Time Delta-Sigma Modulators (Hardcover, 1st ed. 2018)
Qiyuan Liu, Alexander Edward, Carlos Briseno-Vidrios, Jose Silva-Martinez
R2,662 Discovery Miles 26 620 Ships in 18 - 22 working days

This book describes a circuit architecture for converting real analog signals into a digital format, suitable for digital signal processors. This architecture, referred to as multi-stage noise-shaping (MASH) Continuous-Time Sigma-Delta Modulators (CT-ΔΣM), has the potential to provide better digital data quality and achieve better data rate conversion with lower power consumption. The authors not only cover MASH continuous-time sigma-delta modulator fundamentals, but also provide a literature review that will allow students, professors, and professionals to catch up on the latest developments in related technology.
