
Foundations of Synergetics I - Distributed Active Systems (Paperback, 2nd ed. 1990)
Alexander S. Mikhailov
R1,383 Discovery Miles 13 830 Ships in 18 - 22 working days

This book gives an introduction to the mathematical theory of cooperative behavior in active systems of various origins, both natural and artificial. It is based on a lecture course in synergetics which I held for almost ten years at the University of Moscow. The first volume deals mainly with the problems of pattern formation and the properties of self-organized regular patterns in distributed active systems. It also contains a discussion of distributed analog information processing which is based on the cooperative dynamics of active systems. The second volume is devoted to the stochastic aspects of self-organization and the properties of self-established chaos. I have tried to avoid delving into particular applications. The primary intention is to present general mathematical models that describe the principal kinds of cooperative behavior in distributed active systems. Simple examples, ranging from chemical physics to economics, serve only as illustrations of the typical context in which a particular model can apply. The manner of exposition is more in the tradition of theoretical physics than of mathematics: elaborate formal proofs and rigorous estimates are often replaced in the text by arguments based on an intuitive understanding of the relevant models. Because of the interdisciplinary nature of this book, its readers might well come from very diverse fields of endeavor. It was therefore desirable to minimize the required preliminary knowledge. Generally, a standard university course in differential calculus and linear algebra is sufficient.
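
The kind of model the book has in view can be illustrated with a minimal sketch (a generic example of my own, not taken from the text): a one-dimensional reaction-diffusion equation, du/dt = D d2u/dx2 + u(1 - u), in which local "active" kinetics plus diffusive coupling produce a self-propagating front, one of the simplest self-organized patterns in a distributed active medium. Parameter values below are arbitrary.

    import numpy as np

    # Fisher-KPP front in 1D: diffusion plus logistic kinetics.
    # Explicit Euler time stepping; periodic boundaries via np.roll.
    D, dt, dx, steps = 1.0, 0.01, 0.5, 2000
    u = np.zeros(200)
    u[:10] = 1.0  # seed the excited state at one end

    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u += dt * (D * lap + u * (1.0 - u))

    # The region where u > 0.5 grows linearly in time: a travelling front.
    print("invaded length:", (u > 0.5).sum() * dx)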

Performance Evaluation, Prediction and Visualization of Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Xingfu Wu
R4,025 Discovery Miles 40 250 Ships in 18 - 22 working days

Performance Evaluation, Prediction and Visualization in Parallel Systems presents a comprehensive and systematic discussion of the theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion of the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 describes formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses the scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relations between scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes benchmark-oriented performance testing and analysis, and shows how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, and discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems in three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assists programmers in optimizing the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of a performance problem. It also provides concrete suggestions on how to modify the parallel program to improve performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performance. Performance Evaluation, Prediction and Visualization in Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
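
For readers unfamiliar with the metrics the early chapters build on, here is a minimal sketch of the standard definitions of speedup and efficiency, together with Amdahl's bound (illustrative only; the book refines and generalizes these metrics):

    # Speedup S(p) = T(1)/T(p); efficiency E(p) = S(p)/p.
    def speedup(t1, tp):
        return t1 / tp

    def efficiency(t1, tp, p):
        return speedup(t1, tp) / p

    # Amdahl's law: speedup bound when a fraction f of the work is serial.
    def amdahl_bound(f, p):
        return 1.0 / (f + (1.0 - f) / p)

    # Example: 100 s on one processor, 8 s on sixteen.
    print(speedup(100.0, 8.0))          # 12.5
    print(efficiency(100.0, 8.0, 16))   # ~0.78
    print(amdahl_bound(0.05, 16))       # ~9.14: 5% serial work caps speedup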

Dependable Network Computing (Paperback, Softcover reprint of the original 1st ed. 2000)
Dimiter R. Avresky
R5,194 Discovery Miles 51 940 Ships in 18 - 22 working days

Dependable Network Computing provides insights into various problems facing millions of global users resulting from the 'internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and deadlock- and livelock-freedom issues. All chapters are written by leading specialists in their respective fields. Dependable Network Computing provides useful information for scientists, researchers, and application developers building networks from commercial off-the-shelf components.

Fault-Tolerant Parallel and Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1998)
Dimiter R. Avresky, David R. Kaeli
R4,045 Discovery Miles 40 450 Ships in 18 - 22 working days

The most important uses of computing in the future will be those related to the global 'digital convergence', where all computing becomes digital and internetworked. This convergence will be propelled by new and advanced applications in storing, searching, retrieving and exchanging information in a myriad of forms. All of these will place heavy demands on large parallel and distributed computer systems, because these systems have high intrinsic failure rates. The challenge to the computer scientist is to build a system that is inexpensive, accessible and dependable. The chapters in this book provide insight into many of these issues and others that will challenge researchers and applications developers. Included among these topics are:
  • Fault-tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication.
  • Methods and approaches for achieving fault-tolerance in distributed systems such as those used in networks of workstations (NOW), dependable cluster systems, and scalable coherent interface (SCI)-based local area multiprocessors (LAMP).
  • General models and features of distributed safety-critical systems built from commercial off-the-shelf components, as well as service dependability in telecomputing systems.
  • Dependable parallel systems for real-time processing of video signals.
  • Embedding in faulty multiprocessor systems, broadcasting, system-level testing techniques, on-line detection and recovery from intermittent and permanent faults, and more.
Fault-Tolerant Parallel and Distributed Systems is a coherent and uniform collection of chapters with contributions by several of the leading experts working on fault-resilient applications. The numerous techniques and methods included will be of special interest to researchers, developers, and graduate students.

Numerical Integration - Recent Developments, Software and Applications (Paperback, Softcover reprint of the original 1st ed. 1987)
Patrick Keast, Graeme Fairweather
R5,180 Discovery Miles 51 800 Ships in 18 - 22 working days

This volume contains refereed papers and extended abstracts of papers presented at the NATO Advanced Research Workshop entitled 'Numerical Integration: Recent Developments, Software and Applications', held at Dalhousie University, Halifax, Canada, August 11-15, 1986. The Workshop was attended by thirty-six scientists from eleven NATO countries. Thirteen invited lectures and twenty-two contributed lectures were presented, of which twenty-five appear in full in this volume, together with extended abstracts of the remaining ten. It is more than ten years since the last workshop of this nature was held, in Los Alamos in 1975. Many developments have occurred in quadrature in the intervening years, and it seemed an opportune time to bring together again researchers in this area. The development of QUADPACK by Piessens, de Doncker, Uberhuber and Kahaner has changed the focus of research in the area of one dimensional quadrature from the construction of new rules to an emphasis on reliable robust software. There has been a dramatic growth in interest in the testing and evaluation of software, stimulated by the work of Lyness and Kaganove, Einarsson, and Piessens. The earlier research of Patterson into Kronrod extensions of Gauss rules, followed by the work of Monegato, and Piessens and Branders, has greatly increased interest in Gauss-based formulas for one-dimensional integration.
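
QUADPACK's influence is still visible today: SciPy's adaptive quadrature routine is a wrapper around it, so the package whose impact this volume discusses can be exercised directly (a short sketch, assuming SciPy is installed):

    import numpy as np
    from scipy.integrate import quad  # wraps the QUADPACK routines

    # Adaptive quadrature with an error estimate, QUADPACK style.
    result, abserr = quad(lambda x: np.exp(-x**2), 0.0, np.inf)
    print(result, abserr)  # ~0.8862269 = sqrt(pi)/2, plus an error bound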

Workload Characterization for Computer System Design (Paperback, Softcover reprint of the original 1st ed. 2000)
Lizy Kurian John, Ann Marie Grizzaffi Maynard
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on the memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live-ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge for research groups, due to the difficulty of accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.
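
As a flavor of the kind of memory-referencing analysis reported in Chapter 2, the sketch below computes reuse distances (the number of distinct addresses touched between successive uses of the same address), a common temporal-locality metric; it is my own illustration, not code from the book.

    # Reuse distance: distinct addresses referenced between two
    # consecutive uses of the same address. Small distances indicate
    # good temporal locality.
    def reuse_distances(trace):
        last_seen, dists = {}, []
        for t, addr in enumerate(trace):
            if addr in last_seen:
                window = trace[last_seen[addr] + 1 : t]
                dists.append(len(set(window)))
            last_seen[addr] = t
        return dists

    print(reuse_distances(["a", "b", "c", "a", "b"]))  # [2, 2]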

Computational Aerosciences in the 21st Century - Proceedings of the ICASE/LaRC/NSF/ARO Workshop, conducted by the Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, The National Science Foundation and the Army Research Office, April 22-24, 1998 (Paperback, Softcover reprint of the original 1st ed. 2000)
Manuel D. Salas, W. Kyle Anderson
R2,660 Discovery Miles 26 600 Ships in 18 - 22 working days

Over the last decade, the role of computational simulations in all aspects of aerospace design has steadily increased. However, despite the many advances, the time required for computations is far too long. This book examines new ideas and methodologies that may, in the next twenty years, revolutionize scientific computing. The book specifically looks at trends in algorithm research, human-computer interfaces, network-based computing, surface modeling and grid generation, and computer hardware and architecture. It provides a good overview of the current state of the art and provides guidelines for future research directions. The book is intended for computational scientists active in the field and program managers making strategic research decisions.

Parallel Language and Compiler Research in Japan (Paperback, Softcover reprint of the original 1st ed. 1995)
Lubomir Bic, Alexandru Nicolau, Mitsuhisa Sato
R5,208 Discovery Miles 52 080 Ships in 18 - 22 working days

Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.

Supercomputing - Applications, Algorithms, and Architectures For the Future of Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1991)
Jiro Kondo; Edited by (associates) Toshiko Matsuda
R1,402 Discovery Miles 14 020 Ships in 18 - 22 working days

As the technology of supercomputing progresses, methodologies for approaching problems have also been developed. The main object of this symposium was the interdisciplinary participation of experts in related fields and passionate discussion working toward the solution of problems. An executive committee arranged especially for this symposium selected the speakers and other participants, who submitted the papers included in this volume. Also included are selected extracts from the two panel discussion sessions, "Needs and Seeds of Supercomputing" and "The Future of Supercomputing," which arose during a wide-ranging exchange of viewpoints.

Asynchronous Circuits (Paperback, Softcover reprint of the original 1st ed. 1995)
Janusz A. Brzozowski, Carl-Johan H. Seger; Foreword by C. E. Molnar
R4,045 Discovery Miles 40 450 Ships in 18 - 22 working days

Although asynchronous circuits date back to the early 1950s, most of the digital circuits in use today are synchronous because, traditionally, asynchronous circuits have been viewed as difficult to understand and design. In recent years, however, there has been a great surge of interest in asynchronous circuits, largely through the development of new asynchronous design methodologies.
This book provides a comprehensive theory of asynchronous circuits, including modelling, analysis, simulation, specification, verification, and an introduction to their design. It is based on courses given to graduate students and will be suitable for computer scientists and engineers involved in the research and development of asynchronous designs.
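
To give a concrete taste of the subject, a basic asynchronous-circuit primitive is the Muller C-element: its output copies the inputs when they agree and holds its previous value when they disagree. A minimal behavioral sketch (my illustration, not an example from the book):

    # Muller C-element: output follows agreeing inputs, else holds state.
    def c_element(a, b, prev):
        return a if a == b else prev

    out = 0
    for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
        out = c_element(a, b, out)
        print(a, b, "->", out)  # outputs: 0, 0, 1, 1, 0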

TRON Project 1988 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,450 Discovery Miles 14 500 Ships in 18 - 22 working days

It has been almost 5 years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, which was founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding its operations to globalize the organization's activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is a truly historic event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open-architecture concept, and as the TRON machine should be able to work with various foreign languages, TRON is targeted for international use. In order for us to create a complete TRON world, although there are several TRON products already on the market, continuous and aggressive participation from the members, together with concentration on further development, is indispensable. We, the TRON promoters, are much encouraged by such a driving force.

A VLSI Architecture for Concurrent Data Structures (Paperback, Softcover reprint of the original 1st ed. 1987)
J W Dally
R4,004 Discovery Miles 40 040 Ships in 18 - 22 working days

Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(√N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
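
The Gray-code embedding behind the balanced cube is easy to sketch: the standard binary-reflected Gray code maps consecutive integers to cube addresses that differ in exactly one bit, so neighbors in the ordered set are physical neighbors in the n-cube (illustration mine, not the thesis code):

    # Binary-reflected Gray code and its distance property.
    def gray(i):
        return i ^ (i >> 1)

    n = 4
    codes = [gray(i) for i in range(2 ** n)]
    for a, b in zip(codes, codes[1:]):
        assert bin(a ^ b).count("1") == 1  # consecutive codes differ in one bit
    print([format(c, "04b") for c in codes[:6]])
    # ['0000', '0001', '0011', '0010', '0110', '0111']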

Memory Performance of Prolog Architectures (Paperback, Softcover reprint of the original 1st ed. 1988)
Evan Tick
R4,001 Discovery Miles 40 010 Ships in 18 - 22 working days

One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp, in contrast to languages such as FORTRAN and COBOL. This hypothesis is false, however - computer languages are not like natural languages, where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.

TRON Project 1989 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,432 Discovery Miles 14 320 Ships in 18 - 22 working days

It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are:
  - Over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.

Modeling, Analysis and Optimization of Network-on-Chip Communication Architectures (Hardcover, 2013 ed.)
Umit Y. Ogras, Radu Marculescu
R3,310 Discovery Miles 33 100 Ships in 18 - 22 working days

Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures.

In this dissertation, we study outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.
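
As a small, concrete example of the kind of NoC mechanism such methodologies evaluate, here is dimension-ordered (XY) routing on a 2D mesh, a common deterministic, deadlock-free baseline in NoC studies (a generic sketch, not the dissertation's own algorithms):

    # XY routing on a 2D mesh: route fully along X, then along Y.
    def xy_route(src, dst):
        (x, y), (dx, dy) = src, dst
        path = [(x, y)]
        while x != dx:
            x += 1 if dx > x else -1
            path.append((x, y))
        while y != dy:
            y += 1 if dy > y else -1
            path.append((x, y))
        return path

    print(xy_route((0, 0), (2, 3)))
    # [(0,0), (1,0), (2,0), (2,1), (2,2), (2,3)]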

Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997)
A. Migdalas, Panos M. Pardalos, Sverre Storoy
R7,726 Discovery Miles 77 260 Ships in 18 - 22 working days

During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. This volume contains mainly lecture notes from a Nordic Summer School held at the Linkoping Institute of Technology, Sweden, in August 1995. In order to make the book more complete, a few authors were invited to contribute chapters that were not part of the course on this first occasion. The purpose of this Nordic course in advanced studies was three-fold. One goal was to introduce the students to the new achievements in a new and very active field, bring them close to world-leading researchers, and strengthen their competence in an area with an internationally explosive rate of growth. A second goal was to strengthen the bonds between students from different Nordic countries, and to encourage collaboration and joint research ventures across the borders. In this respect, the course built further on the achievements of the "Nordic Network in Mathematical Programming," which had been running during the previous three years with the support of the Nordic Council for Advanced Studies (NorFA). The final goal was to produce literature on the particular subject which would be available to both the participating students and the students of the "next generation."

Fairness (Paperback, Softcover reprint of the original 1st ed. 1986)
Nissim Francez
R1,413 Discovery Miles 14 130 Ships in 18 - 22 working days

The main purpose of this book is to bring together much of the research conducted in recent years in a subject I find both fascinating and important, namely fairness. Much of the reported research is still in the form of technical reports, theses and conference papers, and only a small part has already appeared in the formal scientific journal literature. Fairness is one of those concepts that can intuitively be explained very briefly, but bear a lot of consequences, both in theory and in the practicality of programming languages. Scientists have traditionally been attracted to studying such concepts. However, a rigorous study of the concept needs a lot of detailed development, evoking much machinery of both mathematics and computer science. I am fully aware of the fact that this field of research still lacks maturity, as does the whole subject of theoretical studies of concurrency and nondeterminism. One symptom of this lack of maturity is the proliferation of models used by the research community to discuss these issues, a variety lacking the invariance property present, for example, in universal formalisms for sequential computing.

Parallel Execution of Logic Programs (Paperback, Softcover reprint of the original 1st ed. 1987)
John S. Conery
R1,373 Discovery Miles 13 730 Ships in 18 - 22 working days

This book is an updated version of my Ph.D. dissertation, The AND/OR Process Model for Parallel Interpretation of Logic Programs. The three years since that paper was finished (or so I thought then) have seen quite a bit of work in the area of parallel execution models and programming languages for logic programs. A quick glance at the bibliography here shows roughly 50 papers on these topics, 40 of which were published after 1983. The main difference between the book and the dissertation is the updated survey of related work. One of the appendices in the dissertation was an overview of a Prolog implementation of an interpreter based on the AND/OR Process Model, a simulator I used to get some preliminary measurements of parallelism in logic programs. In the last three years I have been involved with three other implementations. One was written in C and is now being installed on a small multiprocessor at the University of Oregon. Most of the programming of this interpreter was done by Nitin More under my direction for his M.S. project. The other two, one written in Multilisp and the other in Modula-2, are more limited, intended to test ideas about implementing specific aspects of the model. Instead of an appendix describing one interpreter, this book has more detail about implementation included in Chapters 5 through 7, based on a combination of ideas from the four interpreters.

High Performance Computational Methods for Biological Sequence Analysis (Paperback, Softcover reprint of the original 1st ed. 1996)
Tieng K. Yap, Ophir Frieder, Robert L. Martino
R3,996 Discovery Miles 39 960 Ships in 18 - 22 working days

High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.
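
The pairwise-comparison task at the heart of the book can be illustrated with the simplest member of the alignment family, plain edit distance computed by dynamic programming; the alignment algorithms the book parallelizes (e.g., Smith-Waterman) fill a similar DP table with biologically motivated scores. A sketch of my own, not the authors' code:

    # Levenshtein edit distance via dynamic programming: O(mn) table.
    def edit_distance(a, b):
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # match / substitute
        return d[m][n]

    print(edit_distance("GATTACA", "GCATGCU"))  # 4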

A Systolic Array Optimizing Compiler (Paperback, Softcover reprint of the original 1st ed. 1989)
Monica S. Lam
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

This book is a revision of my Ph.D. dissertation submitted to Carnegie Mellon University in 1987. It documents the research and results of the compiler technology developed for the Warp machine. Warp is a systolic array built out of custom, high-performance processors, each of which can execute up to 10 million floating-point operations per second (10 MFLOPS). Under the direction of H. T. Kung, the Warp machine matured from an academic, experimental prototype to a commercial product of General Electric. The Warp machine demonstrated that the scalable architecture of high-performance, programmable systolic arrays represents a practical, cost-effective solution to present and future computation-intensive applications. The success of Warp led to the follow-on iWarp project, a joint project with Intel, to develop a single-chip 20 MFLOPS processor. The availability of the highly integrated iWarp processor will have a significant impact on parallel computing. One of the major challenges in the development of Warp was to build an optimizing compiler for the machine. First, the processors in the array cooperate at a fine granularity of parallelism, so interaction between processors must be considered in the generation of code for individual processors. Second, the individual processors themselves derive their performance from a VLIW (Very Long Instruction Word) instruction set and a high degree of internal pipelining and parallelism. The compiler contains optimizations pertaining to the array level of parallelism, as well as optimizations for the individual VLIW processors.

Data Organization in Parallel Computers (Paperback, Softcover reprint)
Harry A.G. Wijshoff
R2,645 Discovery Miles 26 450 Ships in 18 - 22 working days

The organization of data is clearly of great importance in the design of high performance algorithms and architectures. Although there are several landmark papers on this subject, no comprehensive treatment has appeared. This monograph is intended to fill that gap. We introduce a model of computation for parallel computer architectures, by which we are able to express the intrinsic complexity of data organization for specific architectures. We apply this model of computation to several existing parallel computer architectures, e.g., the CDC 205 and CRAY vector computers, and the MPP binary array processor. The study of data organization in parallel computations was introduced as early as 1970. During the development of the ILLIAC IV system there was a need for a theory of possible data arrangements in interleaved memory systems. The resulting theory dealt primarily with storage schemes, also called skewing schemes, for 2-dimensional matrices, i.e., mappings from a 2-dimensional array to a number of memory banks. By means of the model of computation we are able to apply the theory of skewing schemes to various kinds of parallel computer architectures. This results in a number of consequences for both the design of parallel computer architectures and for applications of parallel processing.
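
A classic skewing scheme of the kind this theory covers stores matrix element (i, j) in memory bank (i + j) mod M; with M banks, every row and every column then falls into M distinct banks and can be accessed conflict-free. A minimal sketch (illustrative; the book treats skewing schemes in far greater generality):

    # Skewed storage: element (i, j) lives in bank (i + j) mod M.
    M = 5

    def bank(i, j):
        return (i + j) % M

    row_banks = {bank(2, j) for j in range(M)}  # banks hit by row 2
    col_banks = {bank(i, 3) for i in range(M)}  # banks hit by column 3
    print(sorted(row_banks), sorted(col_banks))  # both [0, 1, 2, 3, 4]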

Secure Electronic Voting (Paperback, Softcover reprint of the original 1st ed. 2003)
Dimitris A. Gritzalis
R2,638 Discovery Miles 26 380 Ships in 18 - 22 working days

Secure Electronic Voting is an edited volume, which includes chapters authored by leading experts in the field of security and voting systems. The chapters identify and describe the given capabilities and the strong limitations, as well as the current trends and future perspectives, of electronic voting technologies, with emphasis on security and privacy. Secure Electronic Voting includes state-of-the-art material on existing and emerging electronic and Internet voting technologies, which may eventually lead to the development of adequately secure e-voting systems. This book also includes an overview of the legal framework with respect to voting, a description of the user requirements for the development of a secure e-voting system, and a discussion of the relevant technical and social concerns. Secure Electronic Voting also includes three case studies on the use and evaluation of e-voting systems in three different real-world environments.

Document Processing and Retrieval - Texpros (Paperback, Softcover reprint of the original 1st ed. 1996)
Qianhong Liu, Peter A. Ng
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

Document Processing and Retrieval: TEXPROS focuses on the design and implementation of a personal, customizable office information and document processing system called TEXPROS (a TEXt PROcessing System). TEXPROS is a personal, intelligent office information and document processing system for text-oriented documents. This system supports the storage, classification, categorization, retrieval and reproduction of documents, as well as extracting, browsing, retrieving and synthesizing information from a variety of documents. When using TEXPROS in a multi-user or distributed environment, it requires specific protocols for extracting, storing, transmitting and exchanging information. The authors have used a variety of techniques to implement TEXPROS, such as Object-Oriented Programming, Tcl/Tk, X-Windows, etc. The system can be used for many different purposes in many different applications, such as digital libraries, software documentation and information delivery. Audience: Provides in-depth, state-of-the-art coverage of information processing and retrieval, and documentation for such professionals as database specialists, information systems and software developers, and information providers.

Deductive Program Design (Paperback, Softcover reprint of the original 1st ed. 1996)
Manfred Broy
R5,193 Discovery Miles 51 930 Ships in 18 - 22 working days

Advanced research on the description of distributed systems and on design calculi for software and hardware is presented in this volume. Distinguished researchers give an overview of the latest state of the art.

Principles of Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Vijay K Garg
R4,232 Discovery Miles 42 320 Ships in 18 - 22 working days

Distributed computer systems are now widely available but, despite a number of recent advances, the design of software for these systems remains a challenging task, involving two main difficulties: the absence of a shared clock and the absence of a shared memory. The absence of a shared clock means that the concept of time is not useful in distributed systems. The absence of shared memory implies that the concept of a state of a distributed system also needs to be redefined. These two important concepts occupy a major portion of this book. Principles of Distributed Systems describes the tools and techniques that have been successfully applied to tackle the problems of global time and state in distributed systems. The author demonstrates that the concept of time can be replaced by that of causality, and clocks can be constructed to provide causality information. The problem of not having a global state is alleviated by developing efficient algorithms for detecting properties and computing global functions. The author's major emphasis is on developing general mechanisms that can be applied to a variety of problems. For example, instead of discussing algorithms for standard problems, such as termination detection and deadlocks, the book discusses algorithms to detect general properties of a distributed computation. Also included are several worked examples and exercise problems that can be used for individual practice and classroom instruction. Audience: Can be used to teach a one-semester graduate course on distributed systems. Also an invaluable reference book for researchers and practitioners working on the many different aspects of distributed systems.
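
The replacement of time by causality that the book develops can be sketched with the simplest such construction, a Lamport logical clock, in which every event advances a counter and every message carries its sender's counter (an illustrative sketch of the standard construction, not the author's specific algorithms):

    # Lamport logical clock: counters that respect causal order.
    class LamportClock:
        def __init__(self):
            self.time = 0

        def tick(self):  # local event
            self.time += 1
            return self.time

        def send(self):  # returns a timestamp to attach to a message
            return self.tick()

        def receive(self, ts):  # merge the sender's timestamp
            self.time = max(self.time, ts) + 1
            return self.time

    p, q = LamportClock(), LamportClock()
    t = p.send()            # p's clock: 1
    q.receive(t)            # q's clock jumps to 2: send happened-before receive
    print(p.time, q.time)   # 1 2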
