
Supercomputing - Applications, Algorithms, and Architectures For the Future of Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1991)
Jiro Kondo; Edited by (associates) Toshiko Matsuda
R1,402 Discovery Miles 14 020 Ships in 18 - 22 working days

As the technology of supercomputing progresses, methodologies for approaching problems have also developed. The main object of this symposium was the interdisciplinary participation of experts in related fields and passionate discussion working toward the solution of problems. An executive committee arranged especially for this symposium selected speakers and other participants, whose submitted papers are included in this volume. Also included are selected extracts from the two panel-discussion sessions, "Needs and Seeds of Supercomputing" and "The Future of Supercomputing," which arose during a wide-ranging exchange of viewpoints.

Multi-Microprocessor Systems for Real-Time Applications (Paperback, Softcover reprint of the original 1st ed. 1985)
Gianni Conte, Dante Del Corso
R4,016 Discovery Miles 40 160 Ships in 18 - 22 working days

The continuous development of computer technology, supported by the VLSI revolution, stimulated research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility of obtaining significant processing power together with improved price/performance, reliability and flexibility figures. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition, the book can be a source of material for computer architecture courses at graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related to the development of multiprocessor systems are addressed in this book. The covered range spans from electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are outside the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.

Database Machines and Knowledge Base Machines (Paperback, Softcover reprint of the original 1st ed. 1988)
Masaru Kitsuregawa, Hidehiko Tanaka
R7,743 Discovery Miles 77 430 Ships in 18 - 22 working days

This volume contains the papers presented at the Fifth International Workshop on Database Machines. The papers cover a wide spectrum of topics on Database Machines and Knowledge Base Machines. Reports of major projects, ECRC, MCC, and ICOT are included. Topics on DBM cover new database machine architectures based on vector processing and hypercube parallel processing, VLSI-oriented architecture, filter processors, sorting machines, concurrency control mechanisms for DBM, main memory databases, interconnection networks for DBM, and performance evaluation. In this workshop much more attention was given to knowledge base management as compared to the previous four workshops. Many papers discuss deductive database processing. Architectures for semantic networks, Prolog, and production systems were also proposed. We would like to express our deep thanks to all those who contributed to the success of the workshop. We would also like to express our appreciation for the valuable suggestions given to us by Prof. D. K. Hsiao, Prof. D.

Switching Machines - Volume 1: Combinational Systems Introduction to Sequential Systems (Paperback, Softcover reprint of the original 1st ed. 1972)
J.P. Perrin, M Denouette, E. Daclin
R1,441 Discovery Miles 14 410 Ships in 18 - 22 working days

We shall begin this brief section with what we consider to be its objective. It will be followed by the main outline and then concluded by a few notes on how this work should be used. Although logical systems have been manufactured for some time, the theory behind them is quite recent. Without going into historical digressions, we simply remark that the first comprehensive ideas on the application of Boolean algebra to logical systems appeared in the 1930s. These systems appeared in telephone exchanges and were realized with relays. It is only around 1955 that many articles and books trying to systematize the study of such automata appeared. Since then, the theory has advanced regularly, but not in a way which satisfies those concerned with practical applications. What is serious is that, aside from the books by Caldwell (which already dates from 1958), Marcus, and P. Naslin (in France), few works have been published which try to gather and unify results which can be used by the practising engineer; this is the objective of the present volumes.

Laser Spectroscopy (Paperback, Softcover reprint of the original 1st ed. 1974)
Richard Brewer
R2,782 Discovery Miles 27 820 Ships in 18 - 22 working days

The Laser Spectroscopy Conference held at Vail, Colorado, June 25-29, 1973 was in certain ways the first meeting of its kind. Various quantum electronics conferences in the past have covered nonlinear optics, coherence theory, lasers and masers, breakdown, light scattering and so on. However, at Vail only two major themes were developed - tunable laser sources and the use of lasers in spectroscopic measurements, especially those involving high precision. Even so, Laser Spectroscopy covers a broad range of topics, making possible entirely new investigations and, in older ones, orders-of-magnitude improvement in resolution. The conference was interdisciplinary and international in character, with scientists representing Japan, Italy, West Germany, Canada, Israel, France, England, and the United States. Of the 150 participants, the majority were physicists and electrical engineers in quantum electronics and the remainder physical chemists and astrophysicists. We regret that, because of space limitations, about 100 requests to attend had to be refused.

Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997)
A. Migdalas, Panos M. Pardalos, Sverre Storoy
R7,726 Discovery Miles 77 260 Ships in 18 - 22 working days

During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. This volume contains mainly lecture notes from a Nordic Summer School held at the Linkoping Institute of Technology, Sweden in August 1995. In order to make the book more complete, a few authors were invited to contribute chapters that were not part of the course on this first occasion. The purpose of this Nordic course in advanced studies was three-fold. One goal was to introduce the students to the new achievements in a new and very active field, bring them close to world-leading researchers, and strengthen their competence in an area with an internationally explosive rate of growth. A second goal was to strengthen the bonds between students from different Nordic countries, and to encourage collaboration and joint research ventures over the borders. In this respect, the course built further on the achievements of the "Nordic Network in Mathematical Programming," which has been running during the last three years with the support of the Nordic Council for Advanced Studies (NorFA). The final goal was to produce literature on the particular subject which would be available to both the participating students and to the students of the "next generation".

Fairness (Paperback, Softcover reprint of the original 1st ed. 1986)
Nissim Francez
R1,413 Discovery Miles 14 130 Ships in 18 - 22 working days

The main purpose of this book is to bring together much of the research conducted in recent years in a subject I find both fascinating and important, namely fairness. Much of the reported research is still in the form of technical reports, theses and conference papers, and only a small part has already appeared in the formal scientific journal literature. Fairness is one of those concepts that can intuitively be explained very briefly, but bear a lot of consequences, both in theory and in the practicality of programming languages. Scientists have traditionally been attracted to studying such concepts. However, a rigorous study of the concept needs a lot of detailed development, evoking much machinery of both mathematics and computer science. I am fully aware of the fact that this field of research still lacks maturity, as does the whole subject of theoretical studies of concurrency and nondeterminism. One symptom of this lack of maturity is the proliferation of models used by the research community to discuss these issues, a variety lacking the invariance property present, for example, in universal formalisms for sequential computing.

A Systolic Array Optimizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1989)
Monica S. Lam
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

This book is a revision of my Ph.D. thesis dissertation submitted to Carnegie Mellon University in 1987. It documents the research and results of the compiler technology developed for the Warp machine. Warp is a systolic array built out of custom, high-performance processors, each of which can execute up to 10 million floating-point operations per second (10 MFLOPS). Under the direction of H. T. Kung, the Warp machine matured from an academic, experimental prototype to a commercial product of General Electric. The Warp machine demonstrated that the scalable architecture of high-performance, programmable systolic arrays represents a practical, cost-effective solution to present and future computation-intensive applications. The success of Warp led to the follow-on iWarp project, a joint project with Intel, to develop a single-chip 20 MFLOPS processor. The availability of the highly integrated iWarp processor will have a significant impact on parallel computing. One of the major challenges in the development of Warp was to build an optimizing compiler for the machine. First, the processors in the array cooperate at a fine granularity of parallelism, so interaction between processors must be considered in the generation of code for individual processors. Second, the individual processors themselves derive their performance from a VLIW (Very Long Instruction Word) instruction set and a high degree of internal pipelining and parallelism. The compiler contains optimizations pertaining to the array level of parallelism, as well as optimizations for the individual VLIW processors.

Data Organization in Parallel Computers (Paperback, Softcover Reprint)
Harry A.G. Wijshoff
R2,645 Discovery Miles 26 450 Ships in 18 - 22 working days

The organization of data is clearly of great importance in the design of high-performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
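The idea behind a skewing scheme can be sketched in a few lines. The following is an assumed illustration (not taken from the monograph) of the classic diagonal skew, in which element (i, j) of a matrix is stored in bank (i + j) mod N so that both rows and columns spread across all N banks:

```python
# Hypothetical sketch of a diagonal skewing scheme for N memory banks.
# Under naive row-major placement a whole matrix column lands in one
# bank; the skew shifts each row by one bank, so a column access can
# proceed conflict-free across all banks in parallel.

N = 4  # number of memory banks (chosen for illustration)

def bank_row_major(i, j):
    """Naive placement: the bank depends only on the column index."""
    return j % N

def bank_skewed(i, j):
    """Diagonal skew: row index shifts the column over the banks."""
    return (i + j) % N

# Column 0 under naive placement hits a single bank ...
assert {bank_row_major(i, 0) for i in range(N)} == {0}
# ... while under the skew it touches all N banks.
assert {bank_skewed(i, 0) for i in range(N)} == set(range(N))
```

Rows remain conflict-free under the skew as well, since varying j with i fixed still cycles through all N residues.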

VLSI for Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1989)
Jose G. Delgado-Frias, Will Moore
R2,652 Discovery Miles 26 520 Ships in 18 - 22 working days

This book is an edited selection of the papers presented at the International Workshop on VLSI for Artificial Intelligence which was held at the University of Oxford in July 1988. Our thanks go to all the contributors and especially to the programme committee for all their hard work. Thanks are also due to the ACM-SIGARCH, the Alvey Directorate, the IEE and the IEEE Computer Society for publicising the event and to Oxford University for their active support. We are particularly grateful to David Cawley and Paula Appleby for coping with the administrative problems. Jose Delgado-Frias, Will Moore, October 1988. Programme Committee: Igor Aleksander, Imperial College (UK); Yves Bekkers, IRISA/INRIA (France); Michael Brady, University of Oxford (UK); Jose Delgado-Frias, University of Oxford (UK); Steven Krueger, Texas Instruments Inc. (USA); Simon Lavington, University of Essex (UK); Will Moore, University of Oxford (UK); Philip Treleaven, University College London (UK); Benjamin Wah, University of Illinois (USA). Prologue: Research on architectures dedicated to artificial intelligence (AI) processing has been increasing in recent years, since conventional data- or numerically-oriented architectures are not able to provide the computational power and/or functionality required. For the time being these architectures have to be implemented in VLSI technology with its inherent constraints on speed, connectivity, fabrication yield and power. This in turn impacts on the effectiveness of the computer architecture.

Deductive Program Design (Paperback, Softcover reprint of the original 1st ed. 1996)
Manfred Broy
R5,193 Discovery Miles 51 930 Ships in 18 - 22 working days

Advanced research on the description of distributed systems and on design calculi for software and hardware is presented in this volume. Distinguished researchers give an overview of the latest state of the art.

Document Processing and Retrieval - Texpros (Paperback, Softcover reprint of the original 1st ed. 1996)
Qianhong Liu, Peter A. Ng
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

Document Processing and Retrieval: TEXPROS focuses on the design and implementation of a personal, customizable office information and document processing system called TEXPROS (a TEXt PROcessing System). TEXPROS is a personal, intelligent office information and document processing system for text-oriented documents. This system supports the storage, classification, categorization, retrieval and reproduction of documents, as well as extracting, browsing, retrieving and synthesizing information from a variety of documents. When using TEXPROS in a multi-user or distributed environment, it requires specific protocols for extracting, storing, transmitting and exchanging information. The authors have used a variety of techniques to implement TEXPROS, such as Object-Oriented Programming, Tcl/Tk, X-Windows, etc. The system can be used for many different purposes in many different applications, such as digital libraries, software documentation and information delivery. Audience: Provides in-depth, state-of-the-art coverage of information processing and retrieval, and documentation for such professionals as database specialists, information systems and software developers, and information providers.

Software Performability: From Concepts to Applications (Paperback, Softcover reprint of the original 1st ed. 1996)
Ann T. Tai, John F. Meyer, Algirdas Avizienis
R3,991 Discovery Miles 39 910 Ships in 18 - 22 working days

Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of their software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the area of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.

Principles of Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Vijay K Garg
R4,232 Discovery Miles 42 320 Ships in 18 - 22 working days

Distributed computer systems are now widely available but, despite a number of recent advances, the design of software for these systems remains a challenging task, involving two main difficulties: the absence of a shared clock and the absence of a shared memory. The absence of a shared clock means that the concept of time is not useful in distributed systems. The absence of shared memory implies that the concept of a state of a distributed system also needs to be redefined. These two important concepts occupy a major portion of this book. Principles of Distributed Systems describes tools and techniques that have been successfully applied to tackle the problem of global time and state in distributed systems. The author demonstrates that the concept of time can be replaced by that of causality, and clocks can be constructed to provide causality information. The problem of not having a global state is alleviated by developing efficient algorithms for detecting properties and computing global functions. The author's major emphasis is in developing general mechanisms that can be applied to a variety of problems. For example, instead of discussing algorithms for standard problems, such as termination detection and deadlocks, the book discusses algorithms to detect general properties of a distributed computation. Also included are several worked examples and exercise problems that can be used for individual practice and classroom instruction. Audience: Can be used to teach a one-semester graduate course on distributed systems. Also an invaluable reference book for researchers and practitioners working on the many different aspects of distributed systems.
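The replacement of time by causality that the blurb describes is usually realized with logical clocks. The following is a minimal vector-clock sketch (an assumed illustration, not the book's own algorithms): each process keeps one counter per process, and two events are ordered exactly when one causally precedes the other.

```python
# Minimal vector clocks (illustrative sketch, not from the book).
# Each process pid keeps a vector with one counter per process.

def tick(clock, pid):
    """Local event at process pid: increment its own component."""
    c = list(clock)
    c[pid] += 1
    return c

def merge(local, received, pid):
    """On message receipt: component-wise max of both clocks, then tick."""
    return tick([max(a, b) for a, b in zip(local, received)], pid)

def happened_before(a, b):
    """a causally precedes b iff a <= b component-wise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Process 0 performs an event and sends its clock to process 1.
p0 = tick([0, 0], 0)        # [1, 0]
p1 = merge([0, 0], p0, 1)   # [1, 1]
assert happened_before(p0, p1)              # send precedes receive
assert not happened_before([1, 0], [0, 1])  # concurrent events: no order
```

Events whose clocks are incomparable (neither dominates) are concurrent, which is exactly the causality information the book shows can stand in for global time.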

Time-Constrained Transaction Management - Real-Time Constraints in Database Transaction Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Nandit R. Soparkar, Henry F. Korth, Abraham Silberschatz
R2,617 Discovery Miles 26 170 Ships in 18 - 22 working days

Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.

A VLSI Architecture for Concurrent Data Structures (Paperback, Softcover reprint of the original 1st ed. 1987)
J W Dally
R4,004 Discovery Miles 40 040 Ships in 18 - 22 working days

Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest-path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
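The Gray-code property the balanced cube relies on can be shown in a few lines. The following is an assumed illustration (not code from the thesis): consecutive integers map to Gray codes that differ in exactly one bit, so adjacent keys of an ordered set land on neighbouring nodes of the binary n-cube.

```python
# Illustrative sketch of the reflected binary Gray code and its
# distance property (an assumption for exposition, not thesis code).

def gray(i):
    """Standard reflected binary Gray code of integer i."""
    return i ^ (i >> 1)

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# Laying out keys 0..7 on a binary 3-cube by Gray code: consecutive
# keys occupy hypercube neighbours (Hamming distance exactly 1).
codes = [gray(i) for i in range(8)]
assert codes == [0, 1, 3, 2, 6, 7, 5, 4]
assert all(hamming(codes[k], codes[k + 1]) == 1 for k in range(7))
```

This adjacency is what lets a search walk between consecutive partitions of the ordered set in single hops of the cube's interconnect.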

Distributed Applications and Interoperable Systems II - IFIP TC6 WG6.1 Second International Working Conference on Distributed Applications and Interoperable Systems (DAIS'99), June 28 - July 1, 1999, Helsinki, Finland (Paperback, Softcover reprint of the original 1st ed. 1999)
Lea Kutvonen, Hartmut Koenig, Martti Tienari
R4,060 Discovery Miles 40 600 Ships in 18 - 22 working days

Mastering interoperability in a computing environment consisting of different operating systems and hardware architectures is a key requirement facing system engineers building distributed information systems. Distributed applications are a necessity in most central application sectors of the contemporary computerized society, for instance, in office automation, banking, manufacturing, telecommunication and transportation. This book focuses on the techniques available or under development, with the goal of easing the burden of constructing reliable and maintainable interoperable information systems. The topics covered in this book include: management of distributed systems; frameworks and construction tools; open architectures and interoperability techniques; experience with platforms like CORBA and RMI; language interoperability (e.g. Java); agents and mobility; quality of service and fault tolerance; workflow and object modelling issues; and electronic commerce. The book contains the proceedings of the International Working Conference on Distributed Applications and Interoperable Systems II (DAIS'99), which was held June 28 - July 1, 1999 in Helsinki, Finland. It was sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.

SBus - Information, Applications, and Experience (Paperback, Softcover reprint of the original 1st ed. 1992)
James D. Lyle
R2,699 Discovery Miles 26 990 Ships in 18 - 22 working days

Workstation and computer users have an ever-increasing need for solutions that offer high performance, low cost, small footprints (space requirements), and ease of use. Also, the availability of a wide range of software and hardware options (from a variety of independent vendors) is important because it simplifies the task of expanding existing applications and stretching into new ones. The SBus has been designed and optimized within this framework, and it represents a next-generation approach to a system's I/O interconnect needs. This book is a collection of information intended to ease the task of developing and integrating new SBus-based products. The focus is primarily on hardware, due to the author's particular expertise, but firmware and software concepts are also included where appropriate. This book is based on revision B.0 of the SBus Specification. This revision has been a driving force in the SBus market longer than any other, and is likely to remain a strong influence for some time to come. As of this writing there is currently an effort (designated P1496) within the IEEE to produce a new version of the SBus specification that conforms to that group's policies and requirements. This might result in some changes to the specification, but in most cases these will be minor. Most of the information this book contains will remain timely and applicable. To help ensure this, the author has included key information about proposed or planned changes.

Architecture-Independent Loop Parallelisation (Paperback, Softcover reprint of the original 1st ed. 2000)
Radu C. Calinescu
R2,626 Discovery Miles 26 260 Ships in 18 - 22 working days

Architecture-independent programming and automatic parallelisation have long been regarded as two different means of alleviating the prohibitive costs of parallel software development. Building on recent advances in both areas, Architecture-Independent Loop Parallelisation proposes a unified approach to the parallelisation of scientific computing code. This novel approach is based on the bulk-synchronous parallel model of computation, and succeeds in automatically generating parallel code that is architecture-independent, scalable, and of analytically predictable performance.

Microelectronics and Microsystems - Emergent Design Techniques (Paperback, Softcover reprint of the original 1st ed. 2002)
Luigi Fortuna, Giuseppe Ferla, Antonio Imbruglia
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

The book presents the best contributions, extracted from the theses written by the students who attended the second edition of the Master in Microelectronics and Systems, organized by the Universita degli Studi di Catania and held at the STMicroelectronics Company (Catania site) from May 2000 to January 2001. In particular, the mentioned Master has been organized among the various activities of the "Istituto Superiore di Catania per la Formazione di Eccellenza." The Institute is one of the Italian network of universities selected by MURST (Ministry of University and Scientific and Technological Research). The first aim of the Master in Microelectronics and Systems is to increase the skills of students with the Laurea Degree in Physics or Electrical Engineering in the more advanced areas such as VLSI system design, high-speed low-voltage low-power circuits and RF systems. The second aim has been to involve in the educational program companies like STMicroelectronics, ACCENT and ITEL, interested in emergent microelectronics topics, to cooperate with the University in developing high-level research projects. Besides the tutorial activity during the teaching hours, provided by national and international researchers, a significant part of the School has been dedicated to the presentation of specific CAD tools and experiments in order to prepare the students to solve specific problems during the stage period and in the thesis work.

Switching Machines - Volume 2 Sequential Systems (Paperback, Softcover reprint of the original 1st ed. 1972): J.P. Perrin, M... Switching Machines - Volume 2 Sequential Systems (Paperback, Softcover reprint of the original 1st ed. 1972)
J.P. Perrin, M Denouette, E. Daclin
R4,050 Discovery Miles 40 500 Ships in 18 - 22 working days
Design of Reservation Protocols for Multimedia Communication (Paperback, Softcover reprint of the original 1st ed. 1996): Luca... Design of Reservation Protocols for Multimedia Communication (Paperback, Softcover reprint of the original 1st ed. 1996)
Luca Delgrossi
R4,015 Discovery Miles 40 150 Ships in 18 - 22 working days

The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous media applications: the quality of audio and video streams at the receiver can be guaranteed only if the network guarantees bounds on delay, delay jitter, bandwidth, and reliability. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups have in recent years tried to meet the challenge by proposing new protocols, or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System) and its contributions to integrated services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer; the Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also covers HeiTP as used in error handling, error control, and congestion control, and gives the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group, and their coverage in this volume, apply to many other approaches to multimedia networking.
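The reservation protocols described here rest on a simple idea: a stream is admitted only if the bounds each hop can guarantee add up to something within the application's requirements. A minimal sketch of such an admission check (the function and parameter names are hypothetical, not taken from ST-II or HeiTS):

```python
def admit(hop_delays_ms, hop_jitters_ms, max_delay_ms, max_jitter_ms):
    """Admit a reservation only if the worst-case end-to-end delay and
    jitter, summed over all hops on the path, stay within the bounds
    requested by the continuous-media application."""
    return (sum(hop_delays_ms) <= max_delay_ms and
            sum(hop_jitters_ms) <= max_jitter_ms)

# A 3-hop path for a stream requesting <= 100 ms delay, <= 20 ms jitter:
ok = admit([20, 35, 30], [5, 4, 6], max_delay_ms=100, max_jitter_ms=20)
```

Real protocols such as ST-II carry per-hop resource descriptions in the connect message and refine them hop by hop, but the accept/reject decision has this additive flavor.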

A Systolic Array Parallelizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1990): Ping-Sheng Tseng A Systolic Array Parallelizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1990)
Ping-Sheng Tseng
R2,614 Discovery Miles 26 140 Ships in 18 - 22 working days

Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed-memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine-grain, high-bandwidth interprocessor communication capabilities of a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
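For readers unfamiliar with the target architecture: a systolic array pumps data rhythmically past a line of simple cells, each of which accumulates a partial result. A sequential sketch of the dataflow for a one-dimensional matrix-vector multiply (this simulates the computation each cell performs, not AL's actual output or a real systolic schedule):

```python
def systolic_matvec(A, x):
    """Simulate y = A @ x on a 1-D systolic array: cell i holds the
    running sum for row i, and each element of x streams past every
    cell in lock-step."""
    n = len(A)
    y = [0] * n
    for j in range(len(x)):      # x[j] is pumped through the array
        for i in range(n):       # each cell fires once per beat
            y[i] += A[i][j] * x[j]
    return y

# [[1, 2], [3, 4]] @ [5, 6] -> [17, 39]
```

A parallelizing compiler like AL maps loop nests of this shape onto the physical cells, exploiting the fine-grain neighbor-to-neighbor communication the blurb describes.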

Parallel Computation and Computers for Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1988):... Parallel Computation and Computers for Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1988)
J.S. Kowalik
R4,018 Discovery Miles 40 180 Ships in 18 - 22 working days

It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik

Automatic Performance Prediction of Parallel Programs (Paperback, Softcover reprint of the original 1st ed. 1996): Thomas... Automatic Performance Prediction of Parallel Programs (Paperback, Softcover reprint of the original 1st ed. 1996)
Thomas Fahringer
R2,652 Discovery Miles 26 520 Ships in 18 - 22 working days

Automatic Performance Prediction of Parallel Programs presents a unified approach to the problem of automatically estimating the performance of parallel computer programs. The author focuses primarily on distributed memory multiprocessor systems, although large portions of the analysis can be applied to shared memory architectures as well. The author introduces a novel and very practical approach for predicting some of the most important performance parameters of parallel programs, including work distribution, number of transfers, amount of data transferred, network contention, transfer time, computation time and number of cache misses. This approach is based on advanced compiler analysis that carefully examines loop iteration spaces, procedure calls, array subscript expressions, communication patterns, data distributions and optimizing code transformations at the program level; and the most important machine specific parameters including cache characteristics, communication network indices, and benchmark data for computational operations at the machine level. The material has been fully implemented as part of P3T, which is an integrated automatic performance estimator of the Vienna Fortran Compilation System (VFCS), a state-of-the-art parallelizing compiler for Fortran77, Vienna Fortran and a subset of High Performance Fortran (HPF) programs. A large number of experiments using realistic HPF and Vienna Fortran code examples demonstrate highly accurate performance estimates, and the ability of the described performance prediction approach to successfully guide both programmer and compiler in parallelizing and optimizing parallel programs. A graphical user interface is described and displayed that visualizes each program source line together with the corresponding parameter values. P3T uses color-coded performance visualization to immediately identify hot spots in the parallel program. Performance data can be filtered and displayed at various levels of detail. 
Colors displayed by the graphical user interface are visualized in greyscale. Automatic Performance Prediction of Parallel Programs also includes coverage of fundamental problems of automatic parallelization for distributed memory multicomputers, a description of the basic parallelization strategy and a large variety of optimizing code transformations as included under VFCS.
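Among the parameters listed above, the number of transfers and the amount of data transferred are the easiest to illustrate. A toy estimate for a halo (ghost-cell) update on a block-distributed one-dimensional array (an assumption-laden sketch of the kind of quantity P3T computes, not its actual compiler analysis):

```python
def halo_exchange_estimate(p, halo):
    """For a 1-D array block-distributed over p processors, estimate the
    per-iteration cost of a halo (ghost-cell) update: each of the p - 1
    interior boundaries is crossed by `halo` elements in each direction."""
    boundaries = p - 1
    transfers = 2 * boundaries         # number of messages
    words = 2 * boundaries * halo      # amount of data transferred
    return transfers, words

# 4 processors, halo width 1 -> 6 messages carrying 6 array elements
```

P3T derives such counts symbolically from iteration spaces, subscript expressions, and the chosen data distribution, so the estimates remain valid as problem and machine sizes change.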
