
Synchronization Design for Digital Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Teresa H. Meng
R2,628 Discovery Miles 26 280 Ships in 18 - 22 working days

Synchronization is one of the important issues in digital system design. While other approaches have always been intriguing, up until now synchronous operation using a common clock has been the dominant design philosophy. However, we have reached the point, with advances in technology, where other options should be given serious consideration. This is because the clock periods are getting much smaller in relation to the interconnect propagation delays, even within a single chip and certainly at the board and backplane level. To a large extent, this problem can be overcome with careful clock distribution in synchronous design, and tools for computer-aided design of clock distribution. However, this places global constraints on the design, making it necessary, for example, to redesign the clock distribution each time any part of the system is changed. In this book, some alternative approaches to synchronization in digital system design are described and developed. We owe these techniques to a long history of effort in both digital system design and in digital communications, the latter field being relevant because large propagation delays have always been a dominant consideration in design. While synchronous design is discussed and contrasted to the other techniques in Chapter 6, the dominant theme of this book is alternative approaches.

Computations with Markov Chains - Proceedings of the 2nd International Workshop on the Numerical Solution of Markov Chains (Paperback, Softcover reprint of the original 1st ed. 1995)
William J. Stewart
R4,099 Discovery Miles 40 990 Ships in 18 - 22 working days

Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16-18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratic convergent procedures for matrix geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. An authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.
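To give a flavour of what "numerical solution of Markov chains" involves, here is a minimal sketch (not taken from the proceedings; the transition matrix is an invented example) of computing the stationary distribution of a small discrete-time chain by power iteration, the baseline method that the more advanced Krylov and GTH approaches discussed in these papers refine:

```python
import numpy as np

# Hypothetical 3-state row-stochastic transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

pi = np.full(3, 1.0 / 3.0)   # start from the uniform distribution
for _ in range(1000):
    pi = pi @ P              # one step of power iteration: pi_{k+1} = pi_k P
pi /= pi.sum()               # renormalize against rounding drift

# pi now satisfies pi = pi P to floating-point tolerance
```

Power iteration converges for this chain because it is irreducible and aperiodic; the workshop literature is largely about what to do when matrices are huge, ill-conditioned, or structured.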

TRON Project 1989 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,432 Discovery Miles 14 320 Ships in 18 - 22 working days

It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and it is almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are: - Over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.

Numerical Integration - Recent Developments, Software and Applications (Paperback, Softcover reprint of the original 1st ed. 1987)
Patrick Keast, Graeme Fairweather
R5,180 Discovery Miles 51 800 Ships in 18 - 22 working days

This volume contains refereed papers and extended abstracts of papers presented at the NATO Advanced Research Workshop entitled 'Numerical Integration: Recent Developments, Software and Applications', held at Dalhousie University, Halifax, Canada, August 11-15, 1986. The Workshop was attended by thirty-six scientists from eleven NATO countries. Thirteen invited lectures and twenty-two contributed lectures were presented, of which twenty-five appear in full in this volume, together with extended abstracts of the remaining ten. It is more than ten years since the last workshop of this nature was held, in Los Alamos in 1975. Many developments have occurred in quadrature in the intervening years, and it seemed an opportune time to bring together again researchers in this area. The development of QUADPACK by Piessens, de Doncker, Uberhuber and Kahaner has changed the focus of research in the area of one dimensional quadrature from the construction of new rules to an emphasis on reliable robust software. There has been a dramatic growth in interest in the testing and evaluation of software, stimulated by the work of Lyness and Kaganove, Einarsson, and Piessens. The earlier research of Patterson into Kronrod extensions of Gauss rules, followed by the work of Monegato, and Piessens and Branders, has greatly increased interest in Gauss-based formulas for one-dimensional integration.

Performance Evaluation, Prediction and Visualization of Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Xingfu Wu
R4,025 Discovery Miles 40 250 Ships in 18 - 22 working days

Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion on the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 describes formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relations of scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes the benchmark-oriented performance test and analysis and how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems from three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assists programmers in optimizing the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of the performance problem.
It also provides concrete suggestions on how to modify their parallel program to improve the performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performances. Performance Evaluation, Prediction and Visualization in Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
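The speedup metrics whose weaknesses Chapter 2 examines start from the classical fixed-workload model. As a hedged illustration (Amdahl's law is standard background, not code from the book), the following sketch shows why speedup saturates once a serial fraction f of the work cannot be parallelized:

```python
# Amdahl's law: with serial fraction f and p processors, the parallel
# part shrinks by 1/p but the serial part does not, capping speedup at 1/f.
def amdahl_speedup(f: float, p: int) -> float:
    return 1.0 / (f + (1.0 - f) / p)

# With a 5% serial fraction, 16 processors give roughly 9.14x,
# and no processor count can exceed 1/f = 20x.
```

Scaled-workload metrics such as Gustafson's law, and the generalizations the book develops, were proposed precisely because this fixed-workload cap is often too pessimistic for real parallel systems.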

Dependable Network Computing (Paperback, Softcover reprint of the original 1st ed. 2000)
Dimiter R. Avresky
R5,194 Discovery Miles 51 940 Ships in 18 - 22 working days

Dependable Network Computing provides insights into various problems facing millions of global users resulting from the 'internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and deadlock- and livelock-freedom issues. All chapters are written by leading specialists in their respective fields. Dependable Network Computing provides useful information for scientists, researchers, and application developers building networks from commercial off-the-shelf components.

Fault-Tolerant Parallel and Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1998)
Dimiter R. Avresky, David R. Kaeli
R4,045 Discovery Miles 40 450 Ships in 18 - 22 working days

The most important uses of computing in the future will be those related to the global 'digital convergence' where all computing becomes digital and internetworked. This convergence will be propelled by new and advanced applications in storage, searching, retrieval and exchanging of information in a myriad of forms. All of these will place heavy demands on large parallel and distributed computer systems, because these systems have high intrinsic failure rates. The challenge to the computer scientist is to build a system that is inexpensive, accessible and dependable. The chapters in this book provide insight into many of these issues and others that will challenge researchers and applications developers. Included among these topics are:
* Fault-tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication.
* Methods and approaches for achieving fault-tolerance in distributed systems such as those used in networks of workstations (NOW), dependable cluster systems, and scalable coherent interface (SCI)-based local area multiprocessors (LAMP).
* General models and features of distributed safety-critical systems built from commercial off-the-shelf components, as well as service dependability in telecomputing systems.
* Dependable parallel systems for real-time processing of video signals.
* Embedding in faulty multiprocessor systems, broadcasting, system-level testing techniques, on-line detection and recovery from intermittent and permanent faults, and more.
Fault-Tolerant Parallel and Distributed Systems is a coherent and uniform collection of chapters with contributions by several of the leading experts working on fault-resilient applications. The numerous techniques and methods included will be of special interest to researchers, developers, and graduate students.

Workload Characterization for Computer System Design (Paperback, Softcover reprint of the original 1st ed. 2000)
Lizy Kurian John, Ann Marie Grizzaffi Maynard
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on the memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge to research groups, due to the difficulty in accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.

Computational Aerosciences in the 21st Century - Proceedings of the ICASE/LaRC/NSF/ARO Workshop, conducted by the Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, The National Science Foundation and the Army Research Office, April 22-24, 1998 (Paperback, Softcover reprint of the original 1st ed. 2000)
Manuel D. Salas, W. Kyle Anderson
R2,660 Discovery Miles 26 600 Ships in 18 - 22 working days

Over the last decade, the role of computational simulations in all aspects of aerospace design has steadily increased. However, despite the many advances, the time required for computations is far too long. This book examines new ideas and methodologies that may, in the next twenty years, revolutionize scientific computing. The book specifically looks at trends in algorithm research, human computer interface, network-based computing, surface modeling and grid generation and computer hardware and architecture. The book provides a good overview of the current state-of-the-art and provides guidelines for future research directions. The book is intended for computational scientists active in the field and program managers making strategic research decisions.

Parallel Language and Compiler Research in Japan (Paperback, Softcover reprint of the original 1st ed. 1995)
Lubomir Bic, Alexandru Nicolau, Mitsuhisa Sato
R5,208 Discovery Miles 52 080 Ships in 18 - 22 working days

Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.

TRON Project 1988 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,450 Discovery Miles 14 500 Ships in 18 - 22 working days

It has been almost 5 years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, which was founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding various operations to globalize the organization's activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is truly a historic event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open architecture concept, and as the TRON machine should be able to work with various foreign languages, TRON is targeted to be used internationally. In order for us to create a complete TRON world, although there are several TRON products already on the market, continuous and aggressive participation from all members, together with concentration on further development, are indispensable. We, the TRON promoters, are much encouraged by such a driving force.

Computer Systems and Software Engineering - State-of-the-art (Paperback, Softcover reprint of the original 1st ed. 1992)
Patrick de Wilde, Joos P.L. Vandewalle
R4,055 Discovery Miles 40 550 Ships in 18 - 22 working days

Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives a new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for both researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.

Asynchronous Circuits (Paperback, Softcover reprint of the original 1st ed. 1995)
Janusz A. Brzozowski; Foreword by C. E. Molnar; Carl-Johan H. Seger
R4,045 Discovery Miles 40 450 Ships in 18 - 22 working days

Although asynchronous circuits date back to the early 1950s most of the digital circuits in use today are synchronous because, traditionally, asynchronous circuits have been viewed as difficult to understand and design. In recent years, however, there has been a great surge of interest in asynchronous circuits, largely through the development of new asynchronous design methodologies.
This book provides a comprehensive theory of asynchronous circuits, including modelling, analysis, simulation, specification, verification, and an introduction to their design. It is based on courses given to graduate students and will be suitable for computer scientists and engineers involved in the research and development of asynchronous designs.

The Origins of Digital Computers - Selected Papers (Paperback, 3rd ed. 1982. Softcover reprint of the original 3rd ed. 1982)
B. Randell
R5,224 Discovery Miles 52 240 Ships in 18 - 22 working days

Memory Performance of Prolog Architectures (Paperback, Softcover reprint of the original 1st ed. 1988)
Evan Tick
R4,001 Discovery Miles 40 010 Ships in 18 - 22 working days

One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp, in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however - computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.

Modeling, Analysis and Optimization of Network-on-Chip Communication Architectures (Hardcover, 2013 ed.)
Umit Y. Ogras, Radu Marculescu
R3,310 Discovery Miles 33 100 Ships in 18 - 22 working days

Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures.

In this dissertation, we study outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

Supercomputing - Applications, Algorithms, and Architectures For the Future of Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1991)
Jiro Kondo; Edited by (associates) Toshiko Matsuda
R1,402 Discovery Miles 14 020 Ships in 18 - 22 working days

As the technology of supercomputing progresses, methodologies for approaching problems have also been developed. The main objective of this symposium was the interdisciplinary participation of experts in related fields and passionate discussion working toward the solution of problems. An executive committee especially arranged for this symposium selected speakers and other participants, who submitted the papers included in this volume. Also included are selected extracts from the two sessions of panel discussion, "Needs and Seeds of Supercomputing" and "The Future of Supercomputing," which arose during a wide-ranging exchange of viewpoints.

Multi-Microprocessor Systems for Real-Time Applications (Paperback, Softcover reprint of the original 1st ed. 1985)
Gianni Conte, Dante Del Corso
R4,016 Discovery Miles 40 160 Ships in 18 - 22 working days

The continuous development of computer technology supported by the VLSI revolution stimulated the research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility to obtain a significant processing power together with the improvement of price/performance, reliability and flexibility figures. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition the book can be a source of material for computer architecture courses at graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related with the development of multiprocessor systems are addressed in this book. The covered range spans from the electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are out of the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.

Database Machines and Knowledge Base Machines (Paperback, Softcover reprint of the original 1st ed. 1988)
Masaru Kitsuregawa, Hidehiko Tanaka
R7,743 Discovery Miles 77 430 Ships in 18 - 22 working days

This volume contains the papers presented at the Fifth International Workshop on Database Machines. The papers cover a wide spectrum of topics on database machines and knowledge base machines. Reports of major projects, ECRC, MCC, and ICOT are included. Topics on DBM cover new database machine architectures based on vector processing and hypercube parallel processing, VLSI oriented architecture, filter processor, sorting machine, concurrency control mechanism for DBM, main memory database, interconnection network for DBM, and performance evaluation. In this workshop much more attention was given to knowledge base management as compared to the previous four workshops. Many papers discuss deductive database processing. Architectures for semantic network, Prolog, and production system were also proposed. We would like to express our deep thanks to all those who contributed to the success of the workshop. We would also like to express our appreciation for the valuable suggestions given to us by Prof. D. K. Hsiao, Prof. D.

Switching Machines - Volume 1: Combinational Systems Introduction to Sequential Systems (Paperback, Softcover reprint of the original 1st ed. 1972)
J.P. Perrin, M. Denouette, E. Daclin
R1,441 Discovery Miles 14 410 Ships in 18 - 22 working days

We shall begin this brief section with what we consider to be its objective. It will be followed by the main outline and then concluded by a few notes as to how this work should be used. Although logical systems have been manufactured for some time, the theory behind them is quite recent. Without going into historical digressions, we simply remark that the first comprehensive ideas on the application of Boolean algebra to logical systems appeared in the 1930s. These systems appeared in telephone exchanges and were realized with relays. It is only around 1955 that many articles and books trying to systematize the study of such automata appeared. Since then, the theory has advanced regularly, but not in a way which satisfies those concerned with practical applications. What is serious is that, aside from the books by Caldwell (which dates already from 1958), Marcus, and P. Naslin (in France), few works have been published which try to gather and unify results which can be used by the practising engineer; this is the objective of the present volumes.

Laser Spectroscopy (Paperback, Softcover reprint of the original 1st ed. 1974)
Richard Brewer
R2,782 Discovery Miles 27 820 Ships in 18 - 22 working days

The Laser Spectroscopy Conference held at Vail, Colorado, June 25-29, 1973 was in certain ways the first meeting of its kind. Various quantum electronics conferences in the past have covered nonlinear optics, coherence theory, lasers and masers, breakdown, light scattering and so on. However, at Vail only two major themes were developed - tunable laser sources and the use of lasers in spectroscopic measurements, especially those involving high precision. Even so, Laser Spectroscopy covers a broad range of topics, making possible entirely new investigations and, in older ones, orders of magnitude improvement in resolution. The conference was interdisciplinary and international in character, with scientists representing Japan, Italy, West Germany, Canada, Israel, France, England, and the United States. Of the 150 participants, the majority were physicists and electrical engineers in quantum electronics and the remainder physical chemists and astrophysicists. We regret that, because of space limitations, about 100 requests to attend had to be refused.

Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997)
A. Migdalas, Panos M. Pardalos, Sverre Storoy
R7,726 Discovery Miles 77 260 Ships in 18 - 22 working days

During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. This volume contains mainly lecture notes from a Nordic Summer School held at the Linkoping Institute of Technology, Sweden in August 1995. In order to make the book more complete, a few authors were invited to contribute chapters that were not part of the course on this first occasion. The purpose of this Nordic course in advanced studies was three-fold. One goal was to introduce the students to the new achievements in a new and very active field, bring them close to world leading researchers, and strengthen their competence in an area with an internationally explosive rate of growth. A second goal was to strengthen the bonds between students from different Nordic countries, and to encourage collaboration and joint research ventures over the borders. In this respect, the course built further on the achievements of the "Nordic Network in Mathematical Programming," which has been running during the last three years with the support of the Nordic Council for Advanced Studies (NorFA). The final goal was to produce literature on the particular subject, which would be available to both the participating students and to the students of the "next generation".

Fairness (Paperback, Softcover reprint of the original 1st ed. 1986): Nissim Francez Fairness (Paperback, Softcover reprint of the original 1st ed. 1986)
Nissim Francez
R1,413 Discovery Miles 14 130 Ships in 18 - 22 working days

The main purpose of this book is to bring together much of the research conducted in recent years in a subject I find both fascinating and important, namely fairness. Much of the reported research is still in the form of technical reports, theses and conference papers, and only a small part has already appeared in the formal scientific journal literature. Fairness is one of those concepts that can intuitively be explained very briefly, but bear a lot of consequences, both in theory and the practicality of programming languages. Scientists have traditionally been attracted to studying such concepts. However, a rigorous study of the concept needs a lot of detailed development, evoking much machinery of both mathematics and computer science. I am fully aware of the fact that this field of research still lacks maturity, as does the whole subject of theoretical studies of concurrency and nondeterminism. One symptom of this lack of maturity is the proliferation of models used by the research community to discuss these issues, a variety lacking the invariance property present, for example, in universal formalisms for sequential computing.

A Systolic Array Optimizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1989): Monica S. Lam A Systolic Array Optimizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1989)
Monica S. Lam
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

This book is a revision of my Ph.D. thesis dissertation submitted to Carnegie Mellon University in 1987. It documents the research and results of the compiler technology developed for the Warp machine. Warp is a systolic array built out of custom, high-performance processors, each of which can execute up to 10 million floating-point operations per second (10 MFLOPS). Under the direction of H. T. Kung, the Warp machine matured from an academic, experimental prototype to a commercial product of General Electric. The Warp machine demonstrated that the scalable architecture of high-performance, programmable systolic arrays represents a practical, cost-effective solution to the present and future computation-intensive applications. The success of Warp led to the follow-on iWarp project, a joint project with Intel, to develop a single-chip 20 MFLOPS processor. The availability of the highly integrated iWarp processor will have a significant impact on parallel computing. One of the major challenges in the development of Warp was to build an optimizing compiler for the machine. First, the processors in the array cooperate at a fine granularity of parallelism, so interaction between processors must be considered in the generation of code for individual processors. Second, the individual processors themselves derive their performance from a VLIW (Very Long Instruction Word) instruction set and a high degree of internal pipelining and parallelism. The compiler contains optimizations pertaining to the array level of parallelism, as well as optimizations for the individual VLIW processors.

Data Organization in Parallel Computers (Paperback, Softcover Repri): Harry A.G. Wijshoff Data Organization in Parallel Computers (Paperback, Softcover Repri)
Harry A.G. Wijshoff
R2,645 Discovery Miles 26 450 Ships in 18 - 22 working days

The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector-computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
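To make the notion of a skewing scheme concrete, here is a minimal sketch of the classic linear skewing (i + j) mod M, which maps matrix element (i, j) to one of M memory banks so that a full row or a full column can be fetched without bank conflicts. The function names and the choice of scheme are illustrative assumptions, not taken from the monograph itself.

```python
def bank(i, j, M):
    """Bank assignment for element (i, j) under linear skewing."""
    return (i + j) % M

def conflict_free(indices, M):
    """True if the given elements all land in distinct banks."""
    banks = [bank(i, j, M) for i, j in indices]
    return len(set(banks)) == len(banks)

# 5 banks for a 4x4 matrix: with M prime and M > n, both a row
# and a column hit 5 distinct banks, so each can be read in one
# parallel memory cycle.
M, n = 5, 4
row = [(2, j) for j in range(n)]  # one matrix row
col = [(i, 3) for i in range(n)]  # one matrix column
print(conflict_free(row, M), conflict_free(col, M))  # True True
```

With the naive row-major storage bank(i, j) = j mod M, a column would map every element to the same bank; the skew term i spreads the column across banks, which is the essence of the ILLIAC IV-era schemes the blurb mentions.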
