Computer architecture & logic design

Distributed and Parallel Systems - Cluster and Grid Computing (Paperback, Softcover reprint of the original 1st ed. 2002)
Peter Kacsuk, Dieter Kranzlmuller, Zsolt Nemeth, Jens Volkert
R2,637 Discovery Miles 26 370 Ships in 18 - 22 working days

Distributed and Parallel Systems: Cluster and Grid Computing is the proceedings of the fourth Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by Johannes Kepler University, Linz, Austria and the MTA SZTAKI Computer and Automation Research Institute.

The papers in this volume cover a broad range of research topics presented in four groups. The first one introduces cluster tools and techniques, especially the issues of load balancing and migration. Another six papers deal with grid and global computing including grid infrastructure, tools, applications and mobile computing. The next nine papers present general questions of distributed development and applications. The last four papers address a crucial issue in distributed computing: fault tolerance and dependable systems.

This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.

Robust Model-Based Fault Diagnosis for Dynamic Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Jie Chen, R.J. Patton
R7,657 Discovery Miles 76 570 Ships in 18 - 22 working days

There is an increasing demand for dynamic systems to become safer and more reliable. This requirement extends beyond the normally accepted safety-critical systems such as nuclear reactors and aircraft, where safety is of paramount importance, to systems such as autonomous vehicles and process control systems where the system availability is vital. It is clear that fault diagnosis is becoming an important subject in modern control theory and practice. Robust Model-Based Fault Diagnosis for Dynamic Systems presents the subject of model-based fault diagnosis in a unified framework. It contains many important topics and methods; however, total coverage and completeness is not the primary concern. The book focuses on fundamental issues such as basic definitions, residual generation methods and the importance of robustness in model-based fault diagnosis approaches. In this book, fault diagnosis concepts and methods are illustrated by either simple academic examples or practical applications. The first two chapters are of tutorial value and provide a starting point for newcomers to this field. The rest of the book presents the state of the art in model-based fault diagnosis by discussing many important robust approaches and their applications. This will certainly appeal to experts in this field. Robust Model-Based Fault Diagnosis for Dynamic Systems targets both newcomers who want to get into this subject, and experts who are concerned with fundamental issues and are also looking for inspiration for future research. The book is useful for both researchers in academia and professional engineers in industry because both theory and applications are discussed. Although this is a research monograph, it will be an important text for postgraduate research students world-wide. The largest market, however, will be academics, libraries and practicing engineers and scientists throughout the world.
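To ground the residual-generation idea mentioned above, here is a minimal, hedged sketch in Python (not a method taken from the book): a nominal discrete-time model predicts the output, the residual is the difference between measurement and prediction, and a simple threshold flags a fault. The model parameters, threshold and fault scenario are invented for illustration.

def residuals(y_measured, u, a=0.9, b=1.0, c=1.0):
    # Assumed nominal model: x[k+1] = a*x[k] + b*u[k], y[k] = c*x[k].
    x_hat, r = 0.0, []
    for y_k, u_k in zip(y_measured, u):
        r.append(y_k - c * x_hat)      # residual = measurement minus model prediction
        x_hat = a * x_hat + b * u_k    # propagate the nominal model
    return r

# Hypothetical data: the plant tracks the nominal model until a fault
# appears at step 6, after which the residual exceeds the threshold.
u = [1.0] * 10
y = [0.0, 1.0, 1.9, 2.71, 3.44, 4.1, 9.0, 9.5, 9.9, 10.2]
fault_flags = [abs(r_k) > 0.5 for r_k in residuals(y, u)]
print(fault_flags)   # False for the first six steps, True afterwards

The robustness theme of the book concerns making such residuals insensitive to modelling errors and disturbances while keeping them sensitive to faults.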

Handbook of Electronics Manufacturing Engineering (Paperback, Softcover reprint of the original 3rd ed. 1997)
Bernie Matisoff
R5,251 Discovery Miles 52 510 Ships in 18 - 22 working days

This single source reference offers a pragmatic and accessible approach to the basic methods and procedures used in the manufacturing and design of modern electronic products. Providing a strategic yet simplified layout, this handbook is set up with an eye toward maximizing productivity in each phase of the electronics manufacturing process. Not only does this handbook inform the reader on vital issues concerning electronics manufacturing and design, it also provides practical insight and will be of essential use to manufacturing and process engineers in electronics and aerospace manufacturing. In addition, electronics packaging engineers and electronics manufacturing managers and supervisors will gain a wealth of knowledge.

Computer Architecture: A Minimalist Perspective (Paperback, Softcover reprint of the original 1st ed. 2003)
William F. Gilreath, Phillip A Laplante
R3,997 Discovery Miles 39 970 Ships in 18 - 22 working days

The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and then by composition, all other necessary instructions are synthesized. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold, new paradigm offers significant promise in biological, chemical, optical, and molecular scale computers. Features include: - Provides a comprehensive study of computer architecture using computability theory as a base. - Provides a fresh perspective on computer architecture not found in any other text. - Covers history, theory, and practice of computer architecture from a minimalist perspective. - Includes a complete implementation of a one instruction computer. - Includes exercises and programming assignments. Computer Architecture: A Minimalist Perspective is designed to meet the needs of a professional audience composed of researchers, computer hardware engineers, software engineers, computational theorists, and systems engineers. The book is also intended for upper-division undergraduate and early graduate students studying computer architecture or embedded systems. It is an excellent text for use as a supplement or alternative in traditional computer architecture courses, or in courses entitled Special Topics in Computer Architecture.
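As a concrete illustration of composing everything from a single instruction, here is a minimal interpreter for SUBLEQ ("subtract and branch if less than or equal to zero"), one well-known one-instruction model; the book's own OISC, instruction choice and implementation may differ, and the little program below is an invented example.

def run_subleq(mem, pc=0, max_steps=10_000):
    # The single instruction is a triple (a, b, c):
    # mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
    steps = 0
    while 0 <= pc <= len(mem) - 3 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# Hypothetical program: add the value at address 9 (A) into address 10 (B)
# by composing three SUBLEQ instructions via the scratch cell Z at address 11,
# then halt by jumping to a negative address.
program = [9, 11, 3,     # Z -= A
           11, 10, 6,    # B -= Z, i.e. B += A
           11, 11, -1,   # Z -= Z (clear Z), then jump out of range to halt
           2, 5, 0]      # data: A = 2, B = 5, Z = 0
print(run_subleq(program)[10])   # 7

Addition, copying, branching and the rest are built the same way, purely by composing this one instruction.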

Ontology Learning for the Semantic Web (Paperback, Softcover reprint of the original 1st ed. 2002)
Alexander Maedche
R2,647 Discovery Miles 26 470 Ships in 18 - 22 working days

Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process. Ontology Learning for the Semantic Web is designed for researchers and developers of semantic web applications. It also serves as an excellent supplemental reference to advanced-level courses in ontologies and the semantic web.

Wafer Scale Integration (Paperback, Softcover reprint of the original 1st ed. 1989)
Earl E. Swartzlander Jr
R5,206 Discovery Miles 52 060 Ships in 18 - 22 working days

Wafer Scale Integration (WSI) is the culmination of the quest for larger integrated circuits. In VLSI, chips are developed by fabricating a wafer with hundreds of identical circuits, testing the circuits, dicing the wafer, and packaging the good dice. In contrast, in WSI a wafer is fabricated with several types of circuits (generally referred to as cells), with multiple instances of each cell type; the cells are tested, and good cells are interconnected to realize a system on the wafer. Since most signal lines stay on the wafer, stray capacitance is low, so that high speeds are achieved with low power consumption. For the same technology a WSI implementation may be a factor of five faster, dissipate a factor of ten less power, and require one hundredth to one thousandth the volume. Successful development of WSI involves many overlapping disciplines, ranging from architecture to test design to fabrication (including laser linking and cutting, multiple levels of interconnection, and packaging). This book concentrates on the areas that are unique to WSI and that are as a result not well covered by any of the many books on VLSI design. A unique aspect of WSI is that the finished circuits are so large that there will be defects in some portions of the circuit. Accordingly, much attention must be devoted to designing architectures that facilitate fault detection and reconfiguration to circumvent the faults. Other unique aspects of WSI include fabrication technology and packaging.

Hardware Design and Simulation in VAL/VHDL (Paperback, Softcover reprint of the original 1st ed. 1991)
Larry M. Augustin, David C Luckham, Benoit A. Gennart, Youm Huh, A Stanculescu
R2,666 Discovery Miles 26 660 Ships in 18 - 22 working days

The VHSIC Hardware Description Language (VHDL) provides a standard machine processable notation for describing hardware. VHDL is the result of a collaborative effort between IBM, Intermetrics, and Texas Instruments, sponsored by the Very High Speed Integrated Circuits (VHSIC) program office of the Department of Defense, beginning in 1981. Today it is an IEEE standard (1076-1987), and several simulators and other automated support tools for it are available commercially. By providing a standard notation for describing hardware, especially in the early stages of the hardware design process, VHDL is expected to reduce both the time lag and the cost involved in building new systems and upgrading existing ones. VHDL is the result of an evolutionary approach to language development starting with high level hardware description languages existing in 1981. It has a decidedly programming language flavor, resulting both from the orientation of hardware languages of that time, and from a major requirement that VHDL use Ada constructs wherever appropriate. During the 1980's there has been an increasing current of research into high level specification languages for systems, particularly in the software area, and new methods of utilizing specifications in systems development. This activity is worldwide and includes, for example, object oriented design, various rigorous development methods, mathematical verification, and synthesis from high level specifications. VAL (VHDL Annotation Language) is a simple further step in the evolution of hardware description languages in the direction of applying new methods that have developed since VHDL was designed.

Disseminating Security Updates at Internet Scale (Paperback, Softcover reprint of the original 1st ed. 2003)
Jun Li, Peter Reiher, Gerald J. Popek
R2,621 Discovery Miles 26 210 Ships in 18 - 22 working days

Disseminating Security Updates at Internet Scale describes a new system, "Revere", that addresses these problems. "Revere" builds large-scale, self-organizing and resilient overlay networks on top of the Internet to push security updates from dissemination centers to individual nodes. "Revere" also sets up repository servers for individual nodes to pull missed security updates. This book further discusses how to protect this push-and-pull dissemination procedure and how to secure "Revere" overlay networks, considering possible attacks and countermeasures. Disseminating Security Updates at Internet Scale presents experimental measurements of a prototype implementation of "Revere" gathered using a large-scale oriented approach. These measurements suggest that "Revere" can deliver security updates at the required scale, speed and resiliency for a reasonable cost. Disseminating Security Updates at Internet Scale will be helpful to those trying to design peer systems at large scale when security is a concern, since many of the issues faced by these designs are also faced by "Revere". The "Revere" solutions may not always be appropriate for other peer systems with very different goals, but the analysis of the problems and possible solutions discussed here will be helpful in designing a customized approach for such systems.

Neural Circuits and Networks - Proceedings of the NATO Advanced Study Institute on Neuronal Circuits and Networks, held at the Ettore Majorana Center, Erice, Italy, June 15-27, 1997 (Paperback, Softcover reprint of the original 1st ed. 1998)
Vincent Torre, John Nicholls
R2,644 Discovery Miles 26 440 Ships in 18 - 22 working days

The understanding of parallel processing and of the mechanisms underlying neural networks in the brain is certainly one of the most challenging problems of contemporary science. During the last decades significant progress has been made by the combination of different techniques, which have elucidated properties at a cellular and molecular level. However, in order to make significant progress in this field, it is necessary to gather more direct experimental data on the parallel processing occurring in the nervous system. Indeed the nervous system overcomes the limitations of its elementary components by employing a massive degree of parallelism, through the extremely rich set of synaptic interconnections between neurons. This book gathers a selection of the contributions presented during the NATO ASI School "Neuronal Circuits and Networks" held at the Ettore Majorana Center in Erice, Sicily, from June 15 to 27, 1997. The purpose of the School was to present an overview of recent results on single cell properties, the dynamics of neuronal networks and modelling of the nervous system. The School and the present book propose an interdisciplinary approach of experimental and theoretical aspects of brain functions combining different techniques and methodologies.

Compositional Verification of Concurrent and Real-Time Systems (Paperback, Softcover reprint of the original 1st ed. 2002)
Eric Y.T. Juan, Jeffrey J.P. Tsai
R2,633 Discovery Miles 26 330 Ships in 18 - 22 working days

With the rapid growth of networking and high computing power, the demand for large-scale and complex software systems has increased dramatically. Many of the software systems support or supplant human control of safety-critical systems such as flight control systems, space shuttle control systems, aircraft avionics control systems, robotics, patient monitoring systems, nuclear power plant control systems, and so on. Failure of safety-critical systems could result in great disasters and loss of human life. Therefore, software used for safety-critical systems should preserve high assurance properties. In order to comply with high assurance properties, a safety-critical system often shares resources between multiple concurrently active computing agents and must meet rigid real-time constraints. However, concurrency and timing constraints make the development of a safety-critical system much more error prone and arduous. The correctness of software systems nowadays depends mainly on the work of testing and debugging. Testing and debugging involve the process of detecting, locating, analyzing, isolating, and correcting suspected faults using the runtime information of a system. However, testing and debugging are not sufficient to prove the correctness of a safety-critical system. In contrast, static analysis is supported by formalisms to specify the system precisely. Formal verification methods are then applied to prove the logical correctness of the system with respect to the specification. Formal verification gives us greater confidence that safety-critical systems meet the desired assurance properties in order to avoid disastrous consequences.

Foundations of Real-Time Computing: Formal Specifications and Methods (Paperback, Softcover reprint of the original 1st ed. 1991)
Andre M. Van Tilborg, Gary M. Koob
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

This volume contains a selection of papers that focus on the state-of-the-art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.

Multicasting on the Internet and its Applications (Paperback, Softcover reprint of the original 1st ed. 1998)
Sanjoy Paul
R4,055 Discovery Miles 40 550 Ships in 18 - 22 working days

This book covers the entire spectrum of multicasting on the Internet from link- to application-layer issues, including multicasting in broadcast and non-broadcast links, multicast routing, reliable and real-time multicast transport, group membership and total ordering in multicast groups. In-depth consideration is given to describing IP multicast routing protocols, such as DVMRP, MOSPF, PIM and CBT, quality of service issues in the network layer using RSVP and ST-2, as well as the relationship between ATM and IP multicast. These discussions include coverage of key concepts using illustrative diagrams and various real-world applications. The protocols and the architecture of MBone are described, real-time multicast transport issues are addressed and various reliable multicast transport protocols are compared both conceptually and analytically. Also included is a discussion of video multicast and other cutting-edge research on multicast with an assessment of their potential impact on future internetworks. Multicasting on the Internet and Its Applications is an invaluable reference work for networking professionals and researchers, network software developers, information technology managers and graduate students.

Computers in Building - Proceedings of the CAADfutures'99 Conference. Proceedings of the Eighth International Conference on Computer Aided Architectural Design Futures held at Georgia Institute of Technology, Atlanta, Georgia, USA on June 7-8, 1999 (Paperback, Softcover reprint of the original 1st ed. 1999)
Godfried Augenbroe, Charles Eastman
R4,044 Discovery Miles 40 440 Ships in 18 - 22 working days

Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world have met every two years to present and document the state of the art of research in Computer Aided Architectural Design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. This year's proceedings volume is the eighth in the series. The conference, held at Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, and Web-based collaboration and information exchange. An overall reading shows that computers in architecture is still a young field, with many exciting results emerging both from a greater understanding of the human processes and information processing needed to support design and from the continuously expanding capabilities of digital technology.

Still Image Compression on Parallel Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 1999)
Savitri Bevinakoppa
R3,995 Discovery Miles 39 950 Ships in 18 - 22 working days

Still Image Compression on Parallel Computer Architectures investigates the application of parallel-processing techniques to digital image compression. Digital image compression is used to reduce the number of bits required to store an image in computer memory and/or transmit it over a communication link. Over the past decade advancements in technology have spawned many applications of digital imaging, such as photo videotex, desktop publishing, graphic arts, color facsimile, newspaper wire phototransmission and medical imaging. For many other contemporary applications, such as distributed multimedia systems, rapid transmission of images is necessary. Dollar cost as well as time cost of transmission and storage tend to be directly proportional to the volume of data. Therefore, application of digital image compression techniques becomes necessary to minimize costs. A number of digital image compression algorithms have been developed and standardized. With the success of these algorithms, research effort is now directed towards improving implementation techniques. The Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG) are international organizations which have developed digital image compression standards. Hardware (VLSI chips) which implements the JPEG image compression algorithm is available. Such hardware is specific to image compression only and cannot be used for other image processing applications. A flexible means of implementing digital image compression algorithms is still required. An obvious method of processing different imaging applications on general-purpose hardware platforms is to develop software implementations. JPEG uses an 8 x 8 block of image samples as the basic element for compression. These blocks are processed sequentially. There is always the possibility of having similar blocks in a given image. If similar blocks in an image are located, then repeated compression of these blocks is not necessary. By locating similar blocks in the image, the speed of compression can be increased and the size of the compressed image can be reduced. Based on this concept an enhancement to the JPEG algorithm is proposed, called the Block Comparator Technique (BCT). Still Image Compression on Parallel Computer Architectures is designed for advanced students and practitioners of computer science. This comprehensive reference provides a foundation for understanding digital image compression techniques and parallel computer architectures.
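The block-reuse idea behind the Block Comparator Technique can be sketched as follows; the exact similarity test and its integration into the JPEG pipeline are the book's subject and are not reproduced here, so compress_block stands in for a hypothetical per-block encoder and the exact-match comparison is only a placeholder.

import numpy as np

def compress_with_block_reuse(image, compress_block):
    # Walk the image in 8 x 8 blocks (the JPEG basic element), compress each
    # distinct block once, and reuse the result for identical blocks.
    h, w = image.shape
    cache = {}      # block contents -> compressed representation
    layout = []     # for each block position, the key of its compressed data
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            block = image[y:y + 8, x:x + 8]
            key = block.tobytes()   # exact match; a real comparator might allow near-matches
            if key not in cache:
                cache[key] = compress_block(block)
            layout.append(key)
    return cache, layout

# Usage with a dummy encoder: a 16 x 16 image of identical blocks needs
# only one call to the encoder instead of four.
image = np.zeros((16, 16), dtype=np.uint8)
cache, layout = compress_with_block_reuse(image, compress_block=lambda b: b.tobytes())
print(len(cache), len(layout))   # 1 4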

Performance and Reliability Analysis of Computer Systems - An Example-Based Approach Using the SHARPE Software Package (Paperback, Softcover reprint of the original 1st ed. 1996)
Robin A. Sahner, Kishor Trivedi, Antonio Puliafito
R4,046 Discovery Miles 40 460 Ships in 18 - 22 working days

Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package provides a variety of probabilistic, discrete-state models used to assess the reliability and performance of computer and communication systems. The models included are combinatorial reliability models (reliability block diagrams, fault trees and reliability graphs), directed, acyclic task precedence graphs, Markov and semi-Markov models (including Markov reward models), product-form queueing networks and generalized stochastic Petri nets. A practical approach to system modeling is followed; all of the examples described are solved and analyzed using the SHARPE tool. In structuring the book, the authors have been careful to provide the reader with a methodological approach to analytical modeling techniques. These techniques are not seen as alternatives but rather as an integral part of a single process of assessment which, by hierarchically combining results from different kinds of models, makes it possible to use state-space methods for those parts of a system that require them and non-state-space methods for the more well-behaved parts of the system. The SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) package is the 'toolchest' that allows the authors to specify stochastic models easily and solve them quickly, adopting model hierarchies and very efficient solution techniques. All the models described in the book are specified and solved using the SHARPE language; its syntax is described and the source code of almost all the examples discussed is provided. Audience: Suitable for use in advanced level courses covering reliability and performance of computer and communications systems and by researchers and practicing engineers whose work involves modeling of system performance and reliability.
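For a sense of the combinatorial models listed above, the snippet below works through the arithmetic of a trivial reliability block diagram in plain Python rather than in SHARPE's own specification language (which the book documents); the component reliabilities are made-up values.

def series(*r):
    # A series structure works only if every component works.
    p = 1.0
    for x in r:
        p *= x
    return p

def parallel(*r):
    # A parallel (redundant) structure fails only if every component fails.
    q = 1.0
    for x in r:
        q *= (1.0 - x)
    return 1.0 - q

# Two redundant processors in parallel, in series with a single disk:
r_system = series(parallel(0.95, 0.95), 0.99)
print(round(r_system, 4))   # 0.9875

Hierarchical tools such as SHARPE combine results like this one with state-space and non-state-space models instead of forcing the whole system into a single modeling formalism.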

Cooperative Computer-Aided Authoring and Learning - A Systems Approach (Paperback, Softcover reprint of the original 1st ed. 1995)
Max Muhlhauser
R5,163 Discovery Miles 51 630 Ships in 18 - 22 working days

Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer assisted authoring and learning. Drawing from the experiences gained during the Nestor project, jointly run between the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues: * Cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute; * Authoring/learning as the central topic; * Laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment. Within this framework, the book covers four major topics which denote the most important technical domains, namely: * The system kernel, based on object orientation and hypermedia; * Distributed multimedia support; * Cooperation support, and * Reusable instructional design support. Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.

Information and Collaboration Models of Integration (Paperback, Softcover reprint of the original 1st ed. 1994)
Shimon Y. Nof
R4,069 Discovery Miles 40 690 Ships in 18 - 22 working days

The objective of this book is to bring together contributions by eminent researchers from industry and academia who specialize in the currently separate study and application of the key aspects of integration. The state of knowledge on integration and collaboration models and methods is reviewed, followed by an agenda for needed research that has been generated by the participants. The book is the result of a NATO Advanced Research Workshop on "Integration: Information and Collaboration Models" that took place at Il Ciocco, Italy, during June 1993. Significant developments and research projects have been occurring internationally in a major effort to integrate increasingly complex systems. On one hand, advancements in computer technology and computing theories provide better, more timely information. On the other hand, the geographic and organizational distribution of users and clients, and the proliferation of computers and communication, lead to an explosion of information and to the demand for integration. Two important examples of interest are computer integrated manufacturing and enterprises (CIM/E) and concurrent engineering (CE). CIM/E is the collection of computer technologies such as CNC, CAD, CAM, robotics and computer integrated engineering that integrate all the enterprise activities for competitiveness and timely response to changes. Concurrent engineering is the complete life-cycle approach to engineering of products, systems, and processes, including customer requirements, design, planning, costing, service and recycling. In CIM/E and in CE, computer-based information is the key to integration.

Parallel Algorithms and Architectures for DSP Applications (Paperback, Softcover reprint of the original 1st ed. 1991)
Magdy A. Bayoumi
R2,654 Discovery Miles 26 540 Ships in 18 - 22 working days

Over the past few years, the demand for high speed Digital Signal Processing (DSP) has increased dramatically. New applications in real-time image processing, satellite communications, radar signal processing, pattern recognition, and real-time signal detection and estimation require major improvements at several levels: algorithmic, architectural, and implementation. These performance requirements can be achieved by employing parallel processing at all levels. Very Large Scale Integration (VLSI) technology supports and provides a good avenue for parallelism. Parallelism offers efficient solutions to several problems which can arise in VLSI DSP architectures, such as: 1. Intermediate data communication and routing: several DSP algorithms, such as FFT, involve excessive data routing and reordering. Parallelism is an efficient mechanism to minimize the silicon cost and speed up the processing time of the intermediate middle stages. 2. Complex DSP applications: the required computation is almost doubled. Parallelism will allow two similar channels processing at the same time. The communication between the two channels has to be minimized. 3. Application-specific systems: this emerging approach should achieve real-time performance in a cost-effective way. 4. Testability and fault tolerance: reliability has become a required feature in most DSP systems. To achieve such property, the involved time overhead is significant. Parallelism may be the solution to maintain acceptable speed performance.

A Formal Approach to Hardware Design (Paperback, Softcover reprint of the original 1st ed. 1994)
Jorgen Staunstrup
R4,003 Discovery Miles 40 030 Ships in 18 - 22 working days

A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.

Revit 2018 Architecture (Paperback)
Munir Hamad
R1,176 R994 Discovery Miles 9 940 Save R182 (15%) Ships in 18 - 22 working days

This book is the most comprehensive book you will find on Autodesk Revit 2018 Architecture. Covering all of the 2D concepts, it uses both metric and imperial units to illustrate the myriad drawing and editing tools for this popular application. Use the companion files to set up drawing exercises and projects and see all of the book's figures in colour. Revit Architecture 2018 includes over 50 exercises, or "mini-workshops," that complete small projects from concept through actual plotting. Solving all of the workshops will simulate the creation of three projects (architectural and mechanical) from beginning to end, without overlooking any of the basic commands and functions in Revit Architecture 2018.

TRON Project 1987 Open-Architecture Computer Systems - Proceedings of the Third TRON Project Symposium (Paperback, Softcover reprint of the original 1st ed. 1987)
Ken Sakamura
R1,428 Discovery Miles 14 280 Ships in 18 - 22 working days

Almost 4 years have elapsed since Dr. Ken Sakamura of The University of Tokyo first proposed the TRON (the realtime operating system nucleus) concept and 18 months since the foundation of the TRON Association on 16 June 1986. Members of the Association from Japan and overseas currently exceed 80 corporations. The TRON concept, as advocated by Dr. Ken Sakamura, is concerned with the problem of interaction between man and the computer (the man-machine interface), which had not previously been given a great deal of attention. Dr. Sakamura has gone back to basics to create a new and complete cultural environment relative to computers and envisage a role for computers which will truly benefit mankind. This concept has indeed caused a stir in the computer field. The scope of the research work involved was initially regarded as being so extensive and diverse that the completion of activities was scheduled for the 1990s. However, I am happy to note that the enthusiasm expressed by individuals and organizations both within and outside Japan has permitted acceleration of the research and development activities. It is to be hoped that the presentations of the Third TRON Project Symposium will further the progress toward the creation of a computer environment that will be compatible with the aspirations of mankind.

Computer Systems and Software Engineering - State-of-the-art (Paperback, Softcover reprint of the original 1st ed. 1992)
Patrick de Wilde, Joos P.L. Vandewalle
R4,055 Discovery Miles 40 550 Ships in 18 - 22 working days

Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives a new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for both researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.

Compiling Parallel Loops for High Performance Computers - Partitioning, Data Assignment and Remapping (Paperback, Softcover reprint of the original 1st ed. 1993)
David E. Hudak, Santosh G. Abraham
R2,622 Discovery Miles 26 220 Ships in 18 - 22 working days

The exploitation of parallel processing to improve computing speeds is being examined at virtually all levels of computer science, from the study of parallel algorithms to the development of microarchitectures which employ multiple functional units. The most visible aspect of this interest in parallel processing is the commercially available multiprocessor systems which have appeared in the past decade. Unfortunately, the lack of adequate software support for the development of scientific applications that will run efficiently on multiple processors has stunted the acceptance of such systems. One of the major impediments to achieving high parallel efficiency on many data-parallel scientific applications is communication overhead, which is exemplified by cache coherency traffic and global memory overhead on multiprocessors with a logically shared address space and physically distributed memory. This book presents techniques for reducing such communication overhead; these techniques can be used by scientific application designers seeking to optimize code for a particular high-performance computer. In addition, these techniques can be seen as a necessary step toward developing software to support efficient parallel programs. In multiprocessor systems with physically distributed memory, reducing communication overhead involves both data partitioning and data placement. Adaptive Data Partitioning (ADP) reduces the execution time of parallel programs by minimizing interprocessor communication for iterative data-parallel loops with near-neighbor communication. Data placement schemes are presented that reduce communication overhead. Under the loop partition specified by ADP, global data is partitioned into classes for each processor, allowing each processor to cache certain regions of the global data set. In addition, for many scientific applications, peak parallel efficiency is achieved only when machine-specific tradeoffs between load imbalance and communication are evaluated and utilized in choosing the data partition. The techniques in this book evaluate these tradeoffs to generate optimum cyclic partitions for data-parallel loops with either a linearly varying or uniform computational structure and either neighborhood or dimensional multicast communication patterns. This tradeoff is also treated within the CPR (Collective Partitioning and Remapping) algorithm, which partitions a collection of loops with various computational structures and communication patterns. Experiments that demonstrate the advantage of ADP, data placement, cyclic partitioning and CPR were conducted on the Encore Multimax and BBN TC2000 multiprocessors using the ADAPT system, a program partitioner which automatically restructures iterative data-parallel loops. This book serves as an excellent reference and may be used as the text for an advanced course on the subject.
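As a point of reference for the cyclic partitions discussed above, the sketch below shows a plain cyclic (round-robin) assignment of loop iterations to processors; it is a generic illustration, not the ADP or CPR algorithms, which additionally weigh load imbalance against communication when choosing the partition.

def cyclic_partition(n_iterations, n_procs):
    # Iteration i is assigned to processor i mod n_procs.
    return {p: list(range(p, n_iterations, n_procs)) for p in range(n_procs)}

# 10 iterations over 3 processors:
# {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}
print(cyclic_partition(10, 3))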

Multiprocessing - Trade-Offs in Computation and Communication (Paperback, Softcover reprint of the original 1st ed. 1993)
Vijay K. Naik
R2,634 Discovery Miles 26 340 Ships in 18 - 22 working days

Multiprocessing: Trade-Offs in Computation and Communication presents an in-depth analysis of several commonly observed regular and irregular computations for multiprocessor systems. This book includes techniques which enable researchers and application developers to quantitatively determine the effects of algorithm data dependencies on execution time, on communication requirements, on processor utilization and on the speedups possible. Starting with simple, two-dimensional, diamond-shaped directed acyclic graphs, the analysis is extended to more complex and higher dimensional directed acyclic graphs. The analysis allows for the quantification of the computation and communication costs and their interdependencies. The practical significance of these results on the performance of various data distribution schemes is clearly explained. Using these results, the performance of the parallel computations is formulated in an architecture-independent fashion. These formulations allow for the parameterization of architecture-specific entities such as the computation and communication rates. This type of parameterized performance analysis can be used at compile time or at run time so as to achieve the optimal distribution of the computations. The material in Multiprocessing: Trade-Offs in Computation and Communication connects theory with practice, so that the inherent performance limitations in many computations can be understood, and practical methods can be devised that would assist in the development of software for scalable high performance systems.
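To see why such trade-offs matter, here is a deliberately crude model (not one of the book's parameterized formulations): parallel time is taken as the work divided among processors plus a communication term that grows with the processor count, so the speedup flattens once communication dominates. All rates below are invented.

def estimated_speedup(work, comm_per_proc, procs):
    # Toy model: T_parallel = work / procs + comm_per_proc * procs.
    serial_time = work
    parallel_time = work / procs + comm_per_proc * procs
    return serial_time / parallel_time

# Speedup stops improving once the added communication outweighs the divided work.
for p in (2, 4, 8, 16, 32):
    print(p, round(estimated_speedup(work=1000.0, comm_per_proc=2.0, procs=p), 1))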

Real-Time UNIX (R) Systems - Design and Application Guide (Paperback, Softcover reprint of the original 1st ed. 1991)
Borko Furht, Dan Grostick, David Gluch, Guy Rabbat, John Parker, …
R4,003 Discovery Miles 40 030 Ships in 18 - 22 working days

A growing concern of mine has been the unrealistic expectations for new computer-related technologies introduced into all kinds of organizations. Unrealistic expectations lead to disappointment, and a schizophrenic approach to the introduction of new technologies. The UNIX and real-time UNIX operating system technologies are major examples of emerging technologies with great potential benefits but unrealistic expectations. Users want to use UNIX as a common operating system throughout large segments of their organizations. A common operating system would decrease software costs by helping to provide portability and interoperability between computer systems in today's multivendor environments. Users would be able to more easily purchase new equipment and technologies and cost-effectively reuse their applications. And they could more easily connect heterogeneous equipment in different departments without having to constantly write and rewrite interfaces. On the other hand, many users in various organizations do not understand the ramifications of general-purpose versus real-time UNIX. Users tend to think of "real-time" as a way to handle exotic heart-monitoring or robotics systems. Then these users use UNIX for transaction processing and office applications and complain about its performance, robustness, and reliability. Unfortunately, the users don't realize that real-time capabilities added to UNIX can provide better performance, robustness and reliability for these non-real-time applications. Many other vendors and users do realize this, however. There are indications even now that general-purpose UNIX will go away as a separate entity. It will be replaced by a real-time UNIX. General-purpose UNIX will exist only as a subset of real-time UNIX.

You may like...
Thinking Machines - Machine Learning and…
Shigeyuki Takano Paperback R2,011 Discovery Miles 20 110
Grammatical and Syntactical Approaches…
Juhyun Lee, Michael J. Ostwald Hardcover R5,315 Discovery Miles 53 150
Advances in Delay-Tolerant Networks…
Joel J. P. C. Rodrigues Paperback R4,669 Discovery Miles 46 690
The Practice of Enterprise Architecture…
Svyatoslav Kotusev Hardcover R1,571 Discovery Miles 15 710
Heterogeneous Computing - Hardware and…
Mohamed Zahran Hardcover R1,517 Discovery Miles 15 170
Tools and Technologies for the…
Sergey Balandin, Ekaterina Balandina Hardcover R6,502 Discovery Miles 65 020
Applying Integration Techniques and…
Gabor Kecskemeti Hardcover R6,050 Discovery Miles 60 500
Learn Quantum Computing with Python and…
Robert Loredo Paperback R1,022 Discovery Miles 10 220
Creativity in Computing and DataFlow…
Suyel Namasudra, Veljko Milutinovic Hardcover R4,204 Discovery Miles 42 040
Systems Engineering Neural Networks
A Migliaccio Hardcover R2,817 Discovery Miles 28 170

 
