
Scalable High Performance Computing for Knowledge Discovery and Data Mining - A Special Issue of Data Mining and Knowledge Discovery Volume 1, No.4 (1997) (Paperback, Softcover reprint of the original 1st ed. 1998)
Paul Stolorz, Ron Musick
R2,603 Discovery Miles 26 030 Ships in 18 - 22 working days

Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.

Computer Architectures for Spatially Distributed Data (Paperback, Softcover reprint of the original 1st ed. 1985)
Herbert Freeman, G.G. Pieroni
R2,697 Discovery Miles 26 970 Ships in 18 - 22 working days

These are the proceedings of a NATO Advanced Study Institute (ASI) held in Cetraro, Italy during 6-17 June 1983. The title of the ASI was Computer Architectures for Spatially Distributed Data, and it brought together some 60 participants from Europe and America. Presented here are 21 of the lectures that were delivered. The articles cover a wide spectrum of topics related to computer architectures specially oriented toward the fast processing of spatial data, and represent an excellent review of the state-of-the-art of this topic. For more than 20 years now researchers in pattern recognition, image processing, meteorology, remote sensing, and computer engineering have been looking toward new forms of computer architectures to speed the processing of data from two- and three-dimensional processes. The work can be said to have commenced with the landmark article by Steve Unger in 1958, and it received a strong forward push with the development of the ILLIAC III and IV computers at the University of Illinois during the 1960s. One clear obstacle faced by the computer designers in those days was the limitation of the state-of-the-art of hardware, when the only switching devices available to them were discrete transistors. As a result parallel processing was generally considered to be impractical, and relatively little progress was made.

Pyramidal Systems for Computer Vision (Paperback, Softcover reprint of the original 1st ed. 1986)
Virginio Cantoni, Stefano Levialdi
R2,698 Discovery Miles 26 980 Ships in 18 - 22 working days

This book contains the proceedings of the NATO Advanced Research Workshop held in Maratea (Italy), May 5-9, 1986 on Pyramidal Systems for Image Processing and Computer Vision. We had 40 participants from 11 countries playing an active part in the workshop and all the leaders of groups that have produced a prototype pyramid machine or a design for such a machine were present. Within the wide field of parallel architectures for image processing a new area was recently born and is growing healthily: the area of pyramidally structured multiprocessing systems. Essentially, the processors are arranged in planes (from a base to an apex) each one of which is generally a reduced (usually by a power of two) version of the plane underneath: these processors are horizontally interconnected (within a plane) and vertically connected with "fathers" (on top planes) and "children" on the plane below. This arrangement has a number of interesting features, all of which were amply discussed in our Workshop including the cellular array and hypercube versions of pyramids. A number of projects (in different parts of the world) are reported as well as some interesting applications in computer vision, tactile systems and numerical calculations.
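The plane-by-plane reduction described above is easy to make concrete. The sketch below (an illustration of the structure, not code from the workshop proceedings) computes the plane sizes of a pyramid whose base is 2^k x 2^k and whose side length halves at each level up to the apex:

```python
# Hypothetical illustration of a pyramid machine's processor planes: the base
# plane holds (2^k)^2 processors and each plane above is reduced by a power
# of two per side, down to a single apex processor.
def pyramid_plane_sizes(k):
    """Return the processor count of each plane, base first, apex last."""
    return [(2 ** level) ** 2 for level in range(k, -1, -1)]

sizes = pyramid_plane_sizes(3)   # base 8x8 down to a single apex cell
total = sum(sizes)               # 64 + 16 + 4 + 1 = 85 processors in all
```

Each processor in such a machine is connected horizontally to neighbours in its own plane and vertically to a "father" above and "children" below, as the blurb describes.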

A High Performance Architecture for Prolog (Paperback, Softcover reprint of the original 1st ed. 1990)
T.P. Dobry
R2,638 Discovery Miles 26 380 Ships in 18 - 22 working days

Artificial Intelligence is entering the mainstream of computer applications and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high level WAM instruction set in hardware resulting in a CISC-style architecture.

Computations with Markov Chains - Proceedings of the 2nd International Workshop on the Numerical Solution of Markov Chains (Paperback, Softcover reprint of the original 1st ed. 1995)
William J. Stewart
R4,099 Discovery Miles 40 990 Ships in 18 - 22 working days

Computations with Markov Chains presents the edited and reviewed proceedings of the Second International Workshop on the Numerical Solution of Markov Chains, held January 16-18, 1995, in Raleigh, North Carolina. New developments of particular interest include recent work on stability and conditioning, Krylov subspace-based methods for transient solutions, quadratic convergent procedures for matrix geometric problems, further analysis of the GTH algorithm, the arrival of stochastic automata networks at the forefront of modelling stratagems, and more. An authoritative overview of the field for applied probabilists, numerical analysts and systems modelers, including computer scientists and engineers.
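As a minimal illustration of what "numerical solution of Markov chains" means in practice (a textbook sketch, not an algorithm taken from these proceedings), the following computes the stationary distribution of a small discrete-time chain by power iteration, repeatedly applying pi <- pi P:

```python
# Power iteration for the stationary distribution pi of a row-stochastic
# transition matrix P (illustrative only; real solvers use far more refined
# direct and Krylov-subspace methods of the kind surveyed at the workshop).
def stationary(P, iters=2000):
    n = len(P)
    pi = [1.0 / n] * n                      # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Two-state chain: state 0 stays with prob. 0.9, state 1 stays with prob. 0.8.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = stationary(P)                          # converges to [2/3, 1/3]
```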

Systolic Computations (Paperback, Softcover reprint of the original 1st ed. 1992)
M.A. Frumkin
R2,668 Discovery Miles 26 680 Ships in 18 - 22 working days

'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') - Jules Verne
'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell
'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside
Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.

Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1989)
Janusz S. Kowalik
R1,463 Discovery Miles 14 630 Ships in 18 - 22 working days

Supercomputing is an important science and technology that enables the scientist or the engineer to simulate numerically very complex physical phenomena related to large-scale scientific, industrial and military applications. It has made considerable progress since the first NATO Workshop on High-Speed Computation in 1983 (Vol. 7 of the same series). This book is a collection of papers presented at the NATO Advanced Research Workshop held in Trondheim, Norway, in June 1989. It presents key research issues related to: - hardware systems, architecture and performance; - compilers and programming tools; - user environments and visualization; - algorithms and applications. Contributions include critical evaluations of the state-of-the-art and many original research results.

Foundations of Synergetics I - Distributed Active Systems (Paperback, 2nd ed. 1990)
Alexander S. Mikhailov
R1,383 Discovery Miles 13 830 Ships in 18 - 22 working days

This book gives an introduction to the mathematical theory of cooperative behavior in active systems of various origins, both natural and artificial. It is based on a lecture course in synergetics which I held for almost ten years at the University of Moscow. The first volume deals mainly with the problems of pattern formation and the properties of self-organized regular patterns in distributed active systems. It also contains a discussion of distributed analog information processing which is based on the cooperative dynamics of active systems. The second volume is devoted to the stochastic aspects of self-organization and the properties of self-established chaos. I have tried to avoid delving into particular applications. The primary intention is to present general mathematical models that describe the principal kinds of cooperative behavior in distributed active systems. Simple examples, ranging from chemical physics to economics, serve only as illustrations of the typical context in which a particular model can apply. The manner of exposition is more in the tradition of theoretical physics than of mathematics: elaborate formal proofs and rigorous estimates are often replaced in the text by arguments based on an intuitive understanding of the relevant models. Because of the interdisciplinary nature of this book, its readers might well come from very diverse fields of endeavor. It was therefore desirable to minimize the required preliminary knowledge. Generally, a standard university course in differential calculus and linear algebra is sufficient.

Performance Evaluation, Prediction and Visualization of Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1999)
Xingfu Wu
R4,025 Discovery Miles 40 250 Ships in 18 - 22 working days

Performance Evaluation, Prediction and Visualization of Parallel Systems presents a comprehensive and systematic discussion of the theory, methods, techniques and tools for performance evaluation, prediction and visualization of parallel systems. Chapter 1 gives a short overview of performance degradation of parallel systems, and presents a general discussion on the importance of performance evaluation, prediction and visualization of parallel systems. Chapter 2 analyzes and defines several kinds of serial and parallel runtime, points out some of the weaknesses of parallel speedup metrics, and discusses how to improve and generalize them. Chapter 3 describes formal definitions of scalability, addresses the basic metrics affecting the scalability of parallel systems, discusses scalability of parallel systems from three aspects: parallel architecture, parallel algorithm and parallel algorithm-architecture combinations, and analyzes the relations of scalability and speedup. Chapter 4 discusses the methodology of performance measurement, describes the benchmark-oriented performance test and analysis and how to measure speedup and scalability in practice. Chapter 5 analyzes the difficulties in performance prediction, discusses application-oriented and architecture-oriented performance prediction and how to predict speedup and scalability in practice. Chapter 6 discusses performance visualization techniques and tools for parallel systems from three stages: performance data collection, performance data filtering and performance data visualization, and classifies the existing performance visualization tools. Chapter 7 describes parallel compiling-based, search-based and knowledge-based performance debugging, which assists programmers to optimize the strategy or algorithm in their parallel programs, and presents visual programming-based performance debugging to help programmers identify the location and cause of the performance problem.
It also provides concrete suggestions on how to modify their parallel program to improve the performance. Chapter 8 gives an overview of current interconnection networks for parallel systems, analyzes the scalability of interconnection networks, and discusses how to measure and improve network performances. Performance Evaluation, Prediction and Visualization of Parallel Systems serves as an excellent reference for researchers, and may be used as a text for advanced courses on the topic.
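As one classical example of the kind of speedup metric analyzed in Chapter 2 (a standard formula, not code from the book itself), Amdahl's law bounds the speedup achievable when a fraction of the work cannot be parallelized:

```python
# Amdahl's law: with serial fraction s and p processors, the runtime shrinks
# from 1 to s + (1 - s)/p, so speedup = 1 / (s + (1 - s)/p).
def amdahl_speedup(serial_fraction, processors):
    """Predicted speedup for a workload with the given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

s16 = amdahl_speedup(0.05, 16)    # 5% serial work caps 16 processors near 9x
```

Even a small serial fraction dominates at scale, which is one reason the book's scalability chapters treat speedup metrics with care.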

Dependable Network Computing (Paperback, Softcover reprint of the original 1st ed. 2000)
Dimiter R. Avresky
R5,194 Discovery Miles 51 940 Ships in 18 - 22 working days

Dependable Network Computing provides insights into various problems facing millions of global users resulting from the 'internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and issues related to deadlock- and livelock-freedom. All chapters are written by leading specialists in their respective fields. Dependable Network Computing provides useful information for scientists, researchers, and application developers building networks based on commercial off-the-shelf components.

Fault-Tolerant Parallel and Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1998)
Dimiter R. Avresky, David R. Kaeli
R4,045 Discovery Miles 40 450 Ships in 18 - 22 working days

The most important uses of computing in the future will be those related to the global 'digital convergence' where all computing becomes digital and internetworked. This convergence will be propelled by new and advanced applications in storage, searching, retrieval and exchanging of information in a myriad of forms. All of these will place heavy demands on large parallel and distributed computer systems because these systems have high intrinsic failure rates. The challenge to the computer scientist is to build a system that is inexpensive, accessible and dependable. The chapters in this book provide insight into many of these issues and others that will challenge researchers and applications developers. Included among these topics are:
* Fault-tolerance in communication protocols for distributed systems including synchronous and asynchronous group communication.
* Methods and approaches for achieving fault-tolerance in distributed systems such as those used in networks of workstations (NOW), dependable cluster systems, and scalable coherent interface (SCI)-based local area multiprocessors (LAMP).
* General models and features of distributed safety-critical systems built from commercial off-the-shelf components as well as service dependability in telecomputing systems.
* Dependable parallel systems for real-time processing of video signals.
* Embedding in faulty multiprocessor systems, broadcasting, system-level testing techniques, on-line detection and recovery from intermittent and permanent faults, and more.
Fault-Tolerant Parallel and Distributed Systems is a coherent and uniform collection of chapters with contributions by several of the leading experts working on fault-resilient applications. The numerous techniques and methods included will be of special interest to researchers, developers, and graduate students.

Numerical Integration - Recent Developments, Software and Applications (Paperback, Softcover reprint of the original 1st ed. 1987)
Patrick Keast, Graeme Fairweather
R5,180 Discovery Miles 51 800 Ships in 18 - 22 working days

This volume contains refereed papers and extended abstracts of papers presented at the NATO Advanced Research Workshop entitled 'Numerical Integration: Recent Developments, Software and Applications', held at Dalhousie University, Halifax, Canada, August 11-15, 1986. The Workshop was attended by thirty-six scientists from eleven NATO countries. Thirteen invited lectures and twenty-two contributed lectures were presented, of which twenty-five appear in full in this volume, together with extended abstracts of the remaining ten. It is more than ten years since the last workshop of this nature was held, in Los Alamos in 1975. Many developments have occurred in quadrature in the intervening years, and it seemed an opportune time to bring together again researchers in this area. The development of QUADPACK by Piessens, de Doncker, Uberhuber and Kahaner has changed the focus of research in the area of one dimensional quadrature from the construction of new rules to an emphasis on reliable robust software. There has been a dramatic growth in interest in the testing and evaluation of software, stimulated by the work of Lyness and Kaganove, Einarsson, and Piessens. The earlier research of Patterson into Kronrod extensions of Gauss rules, followed by the work of Monegato, and Piessens and Branders, has greatly increased interest in Gauss-based formulas for one-dimensional integration.
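As a tiny, hedged illustration of the Gauss-based one-dimensional formulas discussed at the workshop (not an example drawn from the volume), the two-point Gauss-Legendre rule integrates any cubic on [-1, 1] exactly:

```python
# Two-point Gauss-Legendre quadrature on [-1, 1]: nodes at +/- 1/sqrt(3),
# both with weight 1; exact for polynomials up to degree 3.
import math

def gauss2(f):
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

# Integral of t^3 + t^2 + 1 over [-1, 1] is 8/3; the rule reproduces it exactly.
approx = gauss2(lambda t: t**3 + t**2 + 1)
```

Kronrod extensions of such rules, mentioned above, add nodes so that an error estimate comes almost for free; that idea underlies QUADPACK's adaptive routines.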

Workload Characterization for Computer System Design (Paperback, Softcover reprint of the original 1st ed. 2000)
Lizy Kurian John, Ann Marie Grizzaffi Maynard
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on memory referencing behavior of Java workloads. Several applications including the SPECjvm98 suite are studied using interpreter and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live-range of objects, object lifetime distribution, etc. Studying database workloads has been a challenge to research groups, due to the difficulty in accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.

Computational Aerosciences in the 21st Century - Proceedings of the ICASE/LaRC/NSF/ARO Workshop, conducted by the Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, The National Science Foundation and the Army Research Office, April 22-24, 1998 (Paperback, Softcover reprint of the original 1st ed. 2000)
Manuel D. Salas, W. Kyle Anderson
R2,660 Discovery Miles 26 600 Ships in 18 - 22 working days

Over the last decade, the role of computational simulations in all aspects of aerospace design has steadily increased. However, despite the many advances, the time required for computations is far too long. This book examines new ideas and methodologies that may, in the next twenty years, revolutionize scientific computing. The book specifically looks at trends in algorithm research, human computer interface, network-based computing, surface modeling and grid generation and computer hardware and architecture. The book provides a good overview of the current state-of-the-art and provides guidelines for future research directions. The book is intended for computational scientists active in the field and program managers making strategic research decisions.

Parallel Language and Compiler Research in Japan (Paperback, Softcover reprint of the original 1st ed. 1995)
Lubomir Bic, Alexandru Nicolau, Mitsuhisa Sato
R5,208 Discovery Miles 52 080 Ships in 18 - 22 working days

Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.

Supercomputing - Applications, Algorithms, and Architectures For the Future of Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1991)
Jiro Kondo; Edited by (associates) Toshiko Matsuda
R1,402 Discovery Miles 14 020 Ships in 18 - 22 working days

As the technology of supercomputing progresses, methodologies for approaching problems have also been developed. The main object of this symposium was the interdisciplinary participation of experts in related fields and passionate discussion working toward the solution of problems. An executive committee especially arranged for this symposium selected speakers and other participants who submitted papers, which are included in this volume. Also included are selected extracts from the two sessions of panel discussion, "Needs and Seeds of Supercomputing" and "The Future of Supercomputing," which arose during a wide-ranging exchange of viewpoints.

Asynchronous Circuits (Paperback, Softcover reprint of the original 1st ed. 1995)
Janusz A. Brzozowski; Foreword by C. E. Molnar; Carl-Johan H Seger
R4,045 Discovery Miles 40 450 Ships in 18 - 22 working days

Although asynchronous circuits date back to the early 1950s most of the digital circuits in use today are synchronous because, traditionally, asynchronous circuits have been viewed as difficult to understand and design. In recent years, however, there has been a great surge of interest in asynchronous circuits, largely through the development of new asynchronous design methodologies.
This book provides a comprehensive theory of asynchronous circuits, including modelling, analysis, simulation, specification, verification, and an introduction to their design. It is based on courses given to graduate students and will be suitable for computer scientists and engineers involved in the research and development of asynchronous designs.
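For readers new to the area, a toy model of a canonical asynchronous-circuit primitive (an illustration of the field in general, not an example from the book): the Muller C-element drives its output to the inputs' common value only when both inputs agree, and holds its previous state otherwise, which is how asynchronous designs synchronize without a clock.

```python
# Muller C-element: output follows the inputs when they agree, else holds.
def c_element(a, b, prev_out):
    return a if a == b else prev_out

# Feed a short input sequence through the element, starting with output 0.
out = 0
trace = []
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:
    out = c_element(a, b, out)
    trace.append(out)              # output rises/falls only on agreement
```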

TRON Project 1988 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,450 Discovery Miles 14 500 Ships in 18 - 22 working days

It has been almost 5 years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, which was founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding various operations to globalize the organization's activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is truly a historic event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open-architecture concept and as the TRON machine should be able to work with various foreign languages, TRON is targeted to be used internationally. In order for us to create a complete TRON world, although there are several TRON products already on the market, continuous and aggressive participation from all members, together with concentration on further development, is indispensable. We, the TRON promoters, are much encouraged by such a driving force.

A VLSI Architecture for Concurrent Data Structures (Paperback, Softcover reprint of the original 1st ed. 1987)
J W Dally
R4,004 Discovery Miles 40 040 Ships in 18 - 22 working days

Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N/log N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
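The Gray-code mapping that the balanced cube relies on is a standard construction and can be sketched as follows (an illustration, not code from the thesis): consecutive keys receive codewords that differ in exactly one bit, so they sit on neighbouring nodes of the binary n-cube.

```python
# Reflected binary Gray code: encode an index, and decode a codeword back.
def gray_encode(i):
    return i ^ (i >> 1)

def gray_decode(g):
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

# First eight codewords; each pair of neighbours differs in exactly one bit.
codes = [gray_encode(i) for i in range(8)]   # [0, 1, 3, 2, 6, 7, 5, 4]
```

This single-bit-difference property is what lets an ordered traversal of the set move one hypercube hop at a time.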

Memory Performance of Prolog Architectures (Paperback, Softcover reprint of the original 1st ed. 1988)
Evan Tick
R4,001 Discovery Miles 40 010 Ships in 18 - 22 working days

One suspects that the people who use computers for their livelihood are growing more "sophisticated" as the field of computer science evolves. This view might be defended by the expanding use of languages such as C and Lisp in contrast to languages such as FORTRAN and COBOL. This hypothesis is false however - computer languages are not like natural languages where successive generations stick with the language of their ancestors. Computer programmers do not grow more sophisticated - programmers simply take the time to muddle through the increasingly complex language semantics in an attempt to write useful programs. Of course, these programmers are "sophisticated" in the same sense as are hackers of MockLisp, PostScript, and TeX - highly specialized and tedious languages. It is quite frustrating how this myth of sophistication is propagated by some industries, universities, and government agencies. When I was an undergraduate at MIT, I distinctly remember the convoluted questions on exams concerning dynamic scoping in Lisp - the emphasis was placed solely on a "hacker's" view of computation, i.e., the control and manipulation of storage cells. No consideration was given to the logical structure of programs. Within the past five years, Ada and Common Lisp have become programming language standards, despite their complexity (note that dynamic scoping was dropped even from Common Lisp). Of course, most industries' selection of programming languages is primarily driven by the requirement for compatibility (with previous software) and performance.

Parallel Computing Using Optical Interconnections (Paperback, Softcover reprint of the original 1st ed. 1998)
Keqin Li, Yi Pan, Si Qing Zheng
R4,013 Discovery Miles 40 130 Ships in 18 - 22 working days

Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are non-charged particles, and do not naturally interact. Consequently, there are many desirable characteristics of optical interconnects, e.g. high speed (speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided wave technology, neither of which has the problems of VLSI technology mentioned above. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges; new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area of parallel computing using optical interconnections. This is the first book which provides current and comprehensive coverage of the field, reflects the state of the art from high-level architecture design and algorithmic points of view, and points out directions for further research and development.

TRON Project 1989 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988): Ken Sakamura TRON Project 1989 - Open-Architecture Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1988)
Ken Sakamura
R1,432 Discovery Miles 14 320 Ships in 18 - 22 working days

It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and it is almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are: - Over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.

Modeling, Analysis and Optimization of Network-on-Chip Communication Architectures (Hardcover, 2013 ed.): Umit Y. Ogras, Radu... Modeling, Analysis and Optimization of Network-on-Chip Communication Architectures (Hardcover, 2013 ed.)
Umit Y. Ogras, Radu Marculescu
R3,310 Discovery Miles 33 100 Ships in 18 - 22 working days

Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures.

In this dissertation, we study outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.

Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997): A. Migdalas, Panos M. Pardalos,... Parallel Computing in Optimization (Paperback, Softcover reprint of the original 1st ed. 1997)
A. Migdalas, Panos M. Pardalos, Sverre Storoy
R7,726 Discovery Miles 77 260 Ships in 18 - 22 working days

During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. This volume contains mainly lecture notes from a Nordic Summer School held at the Linkoping Institute of Technology, Sweden, in August 1995. In order to make the book more complete, a few authors were invited to contribute chapters that were not part of the course on this first occasion. The purpose of this Nordic course in advanced studies was three-fold. One goal was to introduce the students to the new achievements in a new and very active field, bring them close to world-leading researchers, and strengthen their competence in an area with an internationally explosive rate of growth. A second goal was to strengthen the bonds between students from different Nordic countries, and to encourage collaboration and joint research ventures across the borders. In this respect, the course built further on the achievements of the "Nordic Network in Mathematical Programming," which has been running during the last three years with the support of the Nordic Council for Advanced Studies (NorFA). The final goal was to produce literature on the particular subject, which would be available to both the participating students and to the students of the "next generation".

Fairness (Paperback, Softcover reprint of the original 1st ed. 1986): Nissim Francez Fairness (Paperback, Softcover reprint of the original 1st ed. 1986)
Nissim Francez
R1,413 Discovery Miles 14 130 Ships in 18 - 22 working days

The main purpose of this book is to bring together much of the research conducted in recent years in a subject I find both fascinating and important, namely fairness. Much of the reported research is still in the form of technical reports, theses, and conference papers, and only a small part has already appeared in the formal scientific journal literature. Fairness is one of those concepts that can intuitively be explained very briefly, but bear a lot of consequences, both in theory and in the practicality of programming languages. Scientists have traditionally been attracted to studying such concepts. However, a rigorous study of the concept needs a lot of detailed development, evoking much machinery of both mathematics and computer science. I am fully aware of the fact that this field of research still lacks maturity, as does the whole subject of theoretical studies of concurrency and nondeterminism. One symptom of this lack of maturity is the proliferation of models used by the research community to discuss these issues, a variety lacking the invariance property present, for example, in universal formalisms for sequential computing.

You may like...
Air Pollution, Climate, and Health - An…
Meng Gao, Zifa Wang, … Paperback R3,619 Discovery Miles 36 190
Balancing Greenhouse Gas Budgets…
Benjamin Poulter, Joseph Canadell, … Paperback R3,679 Discovery Miles 36 790
Managing and Processing Big Data in…
Rajkumar Kannan, Raihan Ur Rasool, … Hardcover R5,052 Discovery Miles 50 520
Research Anthology on Implementing…
Information R Management Association Hardcover R15,742 Discovery Miles 157 420
Mathematical Methods in Data Science
Jingli Ren, Haiyan Wang Paperback R3,925 Discovery Miles 39 250
Handbook of Research on Innovative…
Li Yan Hardcover R8,236 Discovery Miles 82 360
Climate Observations, Volume 3 - Data…
Peter Domonkos, Robert Toth, … Paperback R2,941 Discovery Miles 29 410
UML-B Specification for Proven Embedded…
Jean Mermet Hardcover R4,055 Discovery Miles 40 550
Fundamentals of Data Warehouses
Matthias Jarke, Maurizio Lenzerini, … Hardcover R1,534 Discovery Miles 15 340
Taking the Temperature of the Earth…
Glynn Hulley, Darren Ghent Paperback R2,945 Discovery Miles 29 450

 
