Computer architecture & logic design

Multithreaded Computer Architecture: A Summary of the State of the Art (Paperback, Softcover reprint of the original 1st ed. 1994)
Robert A. Iannucci, Guang R. Gao, Robert H. Halstead Jr, Burton Smith
R5,797 · Discovery Miles 57 970 · Ships in 10-15 working days

Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to 'traditional' approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work that has in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.

Fault-Tolerant Parallel and Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1998)
Dimiter R. Avresky, David R. Kaeli
R4,524 · Discovery Miles 45 240 · Ships in 10-15 working days

The most important uses of computing in the future will be those related to the global 'digital convergence' where all computing becomes digital and internetworked. This convergence will be propelled by new and advanced applications in storage, searching, retrieval and exchanging of information in a myriad of forms. All of these will place heavy demands on large parallel and distributed computer systems, because these systems have high intrinsic failure rates. The challenge to the computer scientist is to build a system that is inexpensive, accessible and dependable. The chapters in this book provide insight into many of these issues and others that will challenge researchers and applications developers. Included among these topics are:
* Fault-tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication.
* Methods and approaches for achieving fault-tolerance in distributed systems such as those used in networks of workstations (NOW), dependable cluster systems, and scalable coherent interface (SCI)-based local area multiprocessors (LAMP).
* General models and features of distributed safety-critical systems built from commercial off-the-shelf components, as well as service dependability in telecomputing systems.
* Dependable parallel systems for real-time processing of video signals.
* Embedding in faulty multiprocessor systems, broadcasting, system-level testing techniques, on-line detection and recovery from intermittent and permanent faults, and more.
Fault-Tolerant Parallel and Distributed Systems is a coherent and uniform collection of chapters with contributions by several of the leading experts working on fault-resilient applications. The numerous techniques and methods included will be of special interest to researchers, developers, and graduate students.

Dependable Network Computing (Paperback, Softcover reprint of the original 1st ed. 2000)
Dimiter R. Avresky
R5,818 · Discovery Miles 58 180 · Ships in 10-15 working days

Dependable Network Computing provides insights into various problems facing millions of global users resulting from the 'internet revolution'. It covers real-time problems involving software, servers, and large-scale storage systems with adaptive fault-tolerant routing and dynamic reconfiguration techniques. Also included is material on routing protocols, QoS, and deadlock- and livelock-freedom issues. All chapters are written by leading specialists in their respective fields. Dependable Network Computing provides useful information for scientists, researchers, and application developers building networks based on commercial off-the-shelf components.

Loop Tiling for Parallelism (Paperback, Softcover reprint of the original 1st ed. 2000)
Jingling Xue
R4,480 · Discovery Miles 44 800 · Ships in 10-15 working days

Loop tiling, one of the most important compiler optimizations, is beneficial for both parallel machines and uniprocessors with a memory hierarchy. This book explores the use of loop tiling for reducing communication cost and improving parallelism on distributed memory machines. The author provides mathematical foundations, investigates loop permutability in the framework of nonsingular loop transformations, discusses the necessary machinery, and presents state-of-the-art results for finding communication- and time-minimal tiling choices. Throughout the book, theorems and algorithms are illustrated with numerous examples and diagrams. The techniques presented in Loop Tiling for Parallelism can be adapted to work for a cluster of workstations, and are also directly applicable to shared-memory machines once the machines are modeled as BSP (Bulk Synchronous Parallel) machines. Features and key topics:
* Detailed review of the mathematical foundations, including convex polyhedra and cones;
* Self-contained treatment of nonsingular loop transformations, code generation, and full loop permutability;
* Tiling loop nests by rectangles and parallelepipeds, including their mathematical definition, dependence analysis, legality test, and code generation;
* A complete suite of techniques for generating SPMD code for a tiled loop nest;
* Up-to-date results on tile size and shape selection for reducing communication and improving parallelism;
* End-of-chapter references for further reading.
Researchers and practitioners involved in optimizing compilers and students in advanced computer architecture studies will find this a lucid and well-presented reference work with numerous citations to original sources.
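
For readers new to the technique, a minimal sketch (not an example from the book) of what rectangular tiling looks like in practice: the three loops of a matrix multiply are split into tile loops and point loops, so each tile works on a cache-sized chunk; on a distributed memory machine each tile likewise becomes a unit of communication. The tile size T = 32 and square matrices with N divisible by T are illustrative assumptions.

```c
#include <stddef.h>

/* Rectangular tiling sketch: C += A * B for N x N matrices.
 * The ii/jj/kk loops walk tiles; the i/j/k loops walk points
 * inside one T x T x T tile, keeping a small working set hot
 * in cache. The caller must zero-initialize C. */
#define N 256
#define T 32

void matmul_tiled(const double A[N][N], const double B[N][N],
                  double C[N][N]) {
    for (size_t ii = 0; ii < N; ii += T)
        for (size_t jj = 0; jj < N; jj += T)
            for (size_t kk = 0; kk < N; kk += T)
                /* compute one T x T tile of C */
                for (size_t i = ii; i < ii + T; i++)
                    for (size_t j = jj; j < jj + T; j++) {
                        double sum = C[i][j];
                        for (size_t k = kk; k < kk + T; k++)
                            sum += A[i][k] * B[k][j];
                        C[i][j] = sum;
                    }
}
```

Choosing the tile size and shape well is exactly the optimization problem the book formalizes.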

Scheduling in Parallel Computing Systems - Fuzzy and Annealing Techniques (Paperback, Softcover reprint of the original 1st ed. 1999)
Shaharuddin Salleh, Albert Y. Zomaya
R4,451 · Discovery Miles 44 510 · Ships in 10-15 working days

Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques advocates the viability of using fuzzy and annealing methods in solving scheduling problems for parallel computing systems. The book proposes new techniques for both static and dynamic scheduling, using emerging paradigms that are inspired by natural phenomena such as fuzzy logic, mean-field annealing, and simulated annealing. Systems that are designed using such techniques are often referred to in the literature as 'intelligent' because of their capability to adapt to sudden changes in their environments. Moreover, most of these changes cannot be anticipated in advance or included in the original design of the system. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques provides results that prove such approaches can become viable alternatives to orthodox solutions to the scheduling problem, which are mostly based on heuristics. Although heuristics are robust and reliable when solving certain instances of the scheduling problem, they do not perform well when one needs to obtain solutions to general forms of the scheduling problem. On the other hand, techniques inspired by natural phenomena have been successfully applied for solving a wide range of combinatorial optimization problems (e.g. traveling salesman, graph partitioning). The success of these methods motivated their use in this book to solve scheduling problems that are known to be formidable combinatorial problems. Scheduling in Parallel Computing Systems: Fuzzy and Annealing Techniques is an excellent reference and may be used for advanced courses on the topic.
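
To make the annealing idea concrete, here is a small illustrative sketch, not the book's algorithm: simulated annealing applied to static scheduling, mapping NT independent tasks with known costs onto NP processors to minimize makespan. The move set, cooling schedule, and all constants are assumptions made for the sketch.

```c
#include <stdlib.h>
#include <math.h>

#define NT 40  /* tasks */
#define NP 4   /* processors */

/* Makespan = load of the most heavily loaded processor. */
static double makespan(const double cost[NT], const int assign[NT]) {
    double load[NP] = {0};
    for (int t = 0; t < NT; t++) load[assign[t]] += cost[t];
    double worst = 0;
    for (int p = 0; p < NP; p++) if (load[p] > worst) worst = load[p];
    return worst;
}

/* Anneal: repeatedly move one random task to a random processor;
 * always accept improvements, accept degradations with probability
 * exp(-delta/temp), and cool the temperature geometrically. */
void anneal_schedule(const double cost[NT], int assign[NT]) {
    for (int t = 0; t < NT; t++) assign[t] = t % NP;  /* initial map */
    double cur = makespan(cost, assign);
    for (double temp = 10.0; temp > 1e-3; temp *= 0.995) {
        int t = rand() % NT;
        int old = assign[t];
        assign[t] = rand() % NP;
        double next = makespan(cost, assign);
        if (next > cur &&
            (double)rand() / RAND_MAX >= exp((cur - next) / temp))
            assign[t] = old;   /* reject the worsening move */
        else
            cur = next;        /* accept */
    }
}
```

Accepting occasional worsening moves is what lets annealing escape the local optima that greedy heuristics get stuck in.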

Still Image Compression on Parallel Computer Architectures (Paperback, Softcover reprint of the original 1st ed. 1999)
Savitri Bevinakoppa
R4,465 · Discovery Miles 44 650 · Ships in 10-15 working days

Still Image Compression on Parallel Computer Architectures investigates the application of parallel-processing techniques to digital image compression. Digital image compression is used to reduce the number of bits required to store an image in computer memory and/or transmit it over a communication link. Over the past decade advancements in technology have spawned many applications of digital imaging, such as photo videotex, desktop publishing, graphic arts, color facsimile, newspaper wire phototransmission and medical imaging. For many other contemporary applications, such as distributed multimedia systems, rapid transmission of images is necessary. Dollar cost as well as time cost of transmission and storage tend to be directly proportional to the volume of data. Therefore, application of digital image compression techniques becomes necessary to minimize costs. A number of digital image compression algorithms have been developed and standardized. With the success of these algorithms, research effort is now directed towards improving implementation techniques. The Joint Photographic Experts Group (JPEG) and Moving Picture Experts Group (MPEG) are international bodies which have developed digital image compression standards. Hardware (VLSI chips) implementing the JPEG image compression algorithm is available. Such hardware is specific to image compression only and cannot be used for other image processing applications. A flexible means of implementing digital image compression algorithms is still required. An obvious method of processing different imaging applications on general purpose hardware platforms is to develop software implementations. JPEG uses an 8 x 8 block of image samples as the basic element for compression. These blocks are processed sequentially. There is always the possibility of having similar blocks in a given image. If similar blocks in an image are located, then repeated compression of these blocks is not necessary. By locating similar blocks in the image, the speed of compression can be increased and the size of the compressed image can be reduced. Based on this concept an enhancement to the JPEG algorithm is proposed, called the Block Comparator Technique (BCT). Still Image Compression on Parallel Computer Architectures is designed for advanced students and practitioners of computer science. This comprehensive reference provides a foundation for understanding digital image compression techniques and parallel computer architectures.
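
A sketch of the block-comparison idea described above, under the simplifying assumption that "similar" means bit-identical: before compressing an 8 x 8 block, look for an earlier block with the same samples and, if found, emit a reference instead of recompressing. The real JPEG pipeline and a tolerance-based similarity test are deliberately omitted; compress_block and emit_reference are hypothetical callbacks standing in for them.

```c
#include <stdint.h>
#include <string.h>

enum { BLK = 8, MAX_BLOCKS = 4096 };

typedef struct { uint8_t px[BLK * BLK]; } Block;

/* Return the index of a previously seen identical block, or -1. */
static int find_match(const Block *seen, int n, const Block *b) {
    for (int i = 0; i < n; i++)
        if (memcmp(seen[i].px, b->px, sizeof b->px) == 0)
            return i;
    return -1;
}

/* For each block: either emit "copy of block i" or compress it. */
void compress_image(const Block *blocks, int nblocks,
                    void (*compress_block)(const Block *),
                    void (*emit_reference)(int)) {
    static Block seen[MAX_BLOCKS];
    int nseen = 0;
    for (int i = 0; i < nblocks; i++) {
        int m = find_match(seen, nseen, &blocks[i]);
        if (m >= 0)
            emit_reference(m);          /* duplicate: skip recompression */
        else {
            compress_block(&blocks[i]); /* ordinary compression path */
            if (nseen < MAX_BLOCKS) seen[nseen++] = blocks[i];
        }
    }
}
```

The linear find_match scan is the obvious parallelization target, which is where the book's parallel architectures come in.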

Compiler Technology - Tools, Translators and Language Implementation (Paperback, Softcover reprint of the original 1st ed. 1997)
Derek Beng Kee Kiong
R4,465 · Discovery Miles 44 650 · Ships in 10-15 working days

Compiler technology is fundamental to computer science since it provides the means to implement many other tools. It is interesting that, in fact, many tools have a compiler framework - they accept input in a particular format, perform some processing and present output in another format. Such tools support the abstraction process and are crucial to productive systems development. The focus of Compiler Technology: Tools, Translators and Language Implementation is to enable quick development of analysis tools. Both lexical scanner and parser generator tools are provided as supplements to this book, since a hands-on approach to experimentation with a toy implementation aids in understanding abstract topics such as parse-trees and parse conflicts. Furthermore, it is through hands-on exercises that one discovers the particular intricacies of language implementation. Compiler Technology: Tools, Translators and Language Implementation is suitable as a textbook for an undergraduate or graduate level course on compiler technology, and as a reference for researchers and practitioners interested in compilers and language implementation.
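
As a flavor of the lexical-analysis phase on which such tools are built (a hand-rolled toy sketch, not the book's supplied scanner generator), the following loop splits input into NUMBER, IDENT, and single-character OP tokens:

```c
#include <ctype.h>
#include <stdio.h>

typedef enum { TOK_NUMBER, TOK_IDENT, TOK_OP, TOK_EOF } TokKind;

/* Read one token from `in` into `text` (capacity `cap`). */
TokKind next_token(FILE *in, char *text, int cap) {
    int c, n = 0;
    while ((c = fgetc(in)) != EOF && isspace(c)) ;   /* skip whitespace */
    if (c == EOF) return TOK_EOF;
    if (isdigit(c)) {                                /* NUMBER: [0-9]+ */
        do { if (n < cap - 1) text[n++] = (char)c; }
        while ((c = fgetc(in)) != EOF && isdigit(c));
        if (c != EOF) ungetc(c, in);
        text[n] = '\0';
        return TOK_NUMBER;
    }
    if (isalpha(c) || c == '_') {                    /* IDENT */
        do { if (n < cap - 1) text[n++] = (char)c; }
        while ((c = fgetc(in)) != EOF && (isalnum(c) || c == '_'));
        if (c != EOF) ungetc(c, in);
        text[n] = '\0';
        return TOK_IDENT;
    }
    text[0] = (char)c; text[1] = '\0';               /* single-char OP */
    return TOK_OP;
}
```

A generated scanner produces exactly this kind of function from declarative token patterns, which is the labor the book's tools remove.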

Foundations of Real-Time Computing: Formal Specifications and Methods (Paperback, Softcover reprint of the original 1st ed. 1991)
Andre M. Van Tilborg, Gary M. Koob
R4,497 · Discovery Miles 44 970 · Ships in 10-15 working days

This volume contains a selection of papers that focus on the state-of-the-art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Scheduling and Resource Management, complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of a real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.

Database Recovery (Paperback, Softcover reprint of the original 1st ed. 1998)
Vijay Kumar, Sang Hyuk Son
R4,428 · Discovery Miles 44 280 · Ships in 10-15 working days

Database Recovery presents an in-depth discussion of all aspects of database recovery. It first introduces the topic informally to build intuitive understanding, and then presents a formal treatment of the recovery mechanism. In the past, recovery has been treated merely as a mechanism implemented on an ad-hoc basis. This book elevates recovery from a mechanism to a concept, and presents its essential properties. A book on recovery is incomplete if it does not show how recovery is practiced in commercial systems. This book therefore presents a detailed description of recovery mechanisms as implemented in the Informix, OpenIngres, Oracle, and Sybase commercial database systems. Database Recovery is suitable as a textbook for a graduate-level course on database recovery, as a secondary text for a graduate-level course on database systems, and as a reference for researchers and practitioners in industry.

Tools and Environments for Parallel and Distributed Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Amr Zaky, Ted Lewis
R4,493 · Discovery Miles 44 930 · Ships in 10-15 working days

Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally accepted parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current state of parallel and distributed software development tool efforts. Tools and Environments for Parallel and Distributed Systems addresses these issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.

Instruction-Level Parallelism - A Special Issue of The Journal of Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1993)
B.R. Rau, J.A. Fisher
R5,756 · Discovery Miles 57 560 · Ships in 10-15 working days

Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.

Information and Collaboration Models of Integration (Paperback, Softcover reprint of the original 1st ed. 1994)
Shimon Y. Nof
R4,552 · Discovery Miles 45 520 · Ships in 10-15 working days

The objective of this book is to bring together contributions by eminent researchers from industry and academia who specialize in the currently separate study and application of the key aspects of integration. The state of knowledge on integration and collaboration models and methods is reviewed, followed by an agenda for needed research that has been generated by the participants. The book is the result of a NATO Advanced Research Workshop on "Integration: Information and Collaboration Models" that took place at Il Ciocco, Italy, during June 1993. Significant developments and research projects have been occurring internationally in a major effort to integrate increasingly complex systems. On the one hand, advancements in computer technology and computing theories provide better, more timely information. On the other hand, the geographic and organizational distribution of users and clients, and the proliferation of computers and communication, lead to an explosion of information and to the demand for integration. Two important examples of interest are computer integrated manufacturing and enterprises (CIM/E) and concurrent engineering (CE). CIM/E is the collection of computer technologies such as CNC, CAD, CAM, robotics and computer integrated engineering that integrate all the enterprise activities for competitiveness and timely response to changes. Concurrent engineering is the complete life-cycle approach to the engineering of products, systems, and processes, including customer requirements, design, planning, costing, service and recycling. In CIM/E and in CE, computer-based information is the key to integration.

Multicasting on the Internet and its Applications (Paperback, Softcover reprint of the original 1st ed. 1998)
Sanjoy Paul
R4,536 · Discovery Miles 45 360 · Ships in 10-15 working days

This book covers the entire spectrum of multicasting on the Internet from link- to application-layer issues, including multicasting in broadcast and non-broadcast links, multicast routing, reliable and real-time multicast transport, and group membership and total ordering in multicast groups. In-depth consideration is given to describing IP multicast routing protocols such as DVMRP, MOSPF, PIM and CBT, quality of service issues in the network layer using RSVP and ST-2, as well as the relationship between ATM and IP multicast. These discussions include coverage of key concepts using illustrative diagrams and various real-world applications. The protocols and the architecture of the MBone are described, real-time multicast transport issues are addressed, and various reliable multicast transport protocols are compared both conceptually and analytically. Also included is a discussion of video multicast and other cutting-edge research on multicast with an assessment of their potential impact on future internetworks. Multicasting on the Internet and Its Applications is an invaluable reference work for networking professionals and researchers, network software developers, information technology managers and graduate students.
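
On end hosts, group membership of the kind the book covers is driven through the standard POSIX sockets API. A minimal receiver sketch (the group address 239.1.2.3 and port 5000 are arbitrary examples, not taken from the book): joining the group causes the kernel to send an IGMP membership report, after which multicast routing protocols such as those above deliver the traffic.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Bind to the UDP port the group's senders use. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    if (bind(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind"); return 1;
    }

    /* Join the multicast group; the kernel handles IGMP signaling. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof mreq) < 0) {
        perror("IP_ADD_MEMBERSHIP"); return 1;
    }

    char buf[1500];
    ssize_t n = recv(sock, buf, sizeof buf, 0);  /* first datagram */
    printf("received %zd bytes from the group\n", n);
    close(sock);
    return 0;
}
```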

Parallel Computing on Distributed Memory Multiprocessors (Paperback, Softcover reprint of the original 1st ed. 1993)
Fusun Oezguner, Fikret Ercal
R2,988 · Discovery Miles 29 880 · Ships in 10-15 working days

Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.

Matrix Computations on Systolic-Type Arrays (Paperback, Softcover reprint of the original 1st ed. 1992)
Jaime Moreno, Tomas Lang
R4,490 · Discovery Miles 44 900 · Ships in 10-15 working days

Matrix Computations on Systolic-Type Arrays provides a framework which permits a good understanding of the features and limitations of processor arrays for matrix algorithms. It describes the tradeoffs among the characteristics of these systems, such as internal storage and communication bandwidth, and the impact on overall performance and cost. A system which allows for the analysis of methods for the design/mapping of matrix algorithms is also presented. This method identifies stages in the design/mapping process and the capabilities required at each stage. Matrix Computations on Systolic-Type Arrays provides a much needed description of the area of processor arrays for matrix algorithms and of the methods used to derive those arrays. The ideas developed here reduce the space of solutions in the design/mapping process by establishing clear criteria to select among possible options as well as by a priori rejection of alternatives which are not adequate (but which are considered in other approaches). The end result is a method which is more specific than other techniques previously available (suitable for a class of matrix algorithms) but which is more systematic, better defined and more effective in reaching the desired objectives. Matrix Computations on Systolic-Type Arrays will interest researchers and professionals who are looking for systematic mechanisms to implement matrix algorithms either as algorithm-specific structures or using specialized architectures. It provides tools that simplify the design/mapping process without introducing degradation, and that permit tradeoffs between performance/cost measures selected by the designer.
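
To make the processor-array model concrete, here is a cycle-by-cycle software simulation of the classical N x N mesh-connected systolic array for matrix multiplication (a textbook design sketched for illustration, not a mapping taken from the book). Each PE holds one accumulator, multiplies the operands passing through it, and forwards its A operand east and its B operand south; inputs enter the western and northern edges skewed by one cycle per row/column.

```c
#define N 3

void systolic_matmul(const int A[N][N], const int B[N][N], int C[N][N]) {
    int a_in[N][N], b_in[N][N];    /* operands entering each PE */
    int a_reg[N][N], b_reg[N][N];  /* operands held, forwarded next cycle */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = a_reg[i][j] = b_reg[i][j] = 0;

    /* 3N-2 cycles drain the skewed wavefronts through the array. */
    for (int t = 0; t < 3 * N - 2; t++) {
        /* Phase 1: each PE latches its inputs (edge feed or neighbor). */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                if (j == 0) {                   /* western edge, skew i */
                    int k = t - i;
                    a_in[i][j] = (k >= 0 && k < N) ? A[i][k] : 0;
                } else
                    a_in[i][j] = a_reg[i][j - 1];
                if (i == 0) {                   /* northern edge, skew j */
                    int k = t - j;
                    b_in[i][j] = (k >= 0 && k < N) ? B[k][j] : 0;
                } else
                    b_in[i][j] = b_reg[i - 1][j];
            }
        /* Phase 2: multiply-accumulate, then forward east and south. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                C[i][j] += a_in[i][j] * b_in[i][j];
                a_reg[i][j] = a_in[i][j];
                b_reg[i][j] = b_in[i][j];
            }
    }
}
```

A[i][k] and B[k][j] both reach PE(i,j) at cycle i+j+k, which is exactly the alignment the input skew buys; the storage/bandwidth tradeoffs of such designs are what the book analyzes.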

Parallel Language and Compiler Research in Japan (Paperback, Softcover reprint of the original 1st ed. 1995)
Lubomir Bic, Alexandru Nicolau, Mitsuhisa Sato
R5,834 · Discovery Miles 58 340 · Ships in 10-15 working days

Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading-edge research efforts in Japan focused on parallel software design, development, and optimization, a depiction that could otherwise be obtained only through direct and personal interaction with the researchers themselves.

Cooperative Computer-Aided Authoring and Learning - A Systems Approach (Paperback, Softcover reprint of the original 1st ed. 1995)
Max Muhlhauser
R5,781 · Discovery Miles 57 810 · Ships in 10-15 working days

Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer assisted authoring and learning. Drawing from the experiences gained during the Nestor project, jointly run between the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues:
* Cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute;
* Authoring/learning as the central topic;
* Laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment.
Within this framework, the book covers four major topics which denote the most important technical domains, namely:
* The system kernel, based on object orientation and hypermedia;
* Distributed multimedia support;
* Cooperation support; and
* Reusable instructional design support.
Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.

Workload Characterization for Computer System Design (Paperback, Softcover reprint of the original 1st ed. 2000)
Lizy Kurian John, Ann Marie Grizzaffi Maynard
R2,936 · Discovery Miles 29 360 · Ships in 10-15 working days

The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on the memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge to research groups, due to the difficulty in accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.

Arrays, Functional Languages, and Parallel Systems (Paperback, Softcover reprint of the original 1st ed. 1991)
Lenore M. Restifo Mullin; Contributions by Michael Jenkins, Gaetan Hains, Robert Bernecky, Guang R. Gao
R4,498 · Discovery Miles 44 980 · Ships in 10-15 working days

During a meeting in Toronto last winter, Mike Jenkins, Bob Bernecky and I were discussing how the two existing theories on arrays influenced or were influenced by programming languages and systems. More's Array Theory was the basis for NIAL and APL2, and Mullin's A Mathematics of Arrays (MOA) is being used as an algebra of arrays in functional and λ-calculus based programming languages. MOA was influenced by Iverson's initial and extended algebra, the foundations for APL and J respectively. We discussed that there is a lot of interest in the Computer Science and Engineering communities concerning formal methods for languages that could support massively parallel operations in scientific computing, a back-to-roots interest for both Mike and myself. Languages for this domain can no longer be informally developed, since it is necessary to map languages easily to many multiprocessor architectures. Software systems intended for parallel computation require a formal basis so that modifications can be done with relative ease while ensuring integrity in design. List-based languages are profiting from theoretical foundations such as the Bird-Meertens formalism. Their theory has been successfully used to describe list-based parallel algorithms across many classes of architectures.

Computational Aerosciences in the 21st Century - Proceedings of the ICASE/LaRC/NSF/ARO Workshop, conducted by the Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, The National Science Foundation and the Army Research Office, April 22-24, 1998 (Paperback, Softcover reprint of the original 1st ed. 2000)
Manuel D. Salas, W. Kyle Anderson
R2,966 · Discovery Miles 29 660 · Ships in 10-15 working days

Over the last decade, the role of computational simulations in all aspects of aerospace design has steadily increased. However, despite the many advances, the time required for computations is far too long. This book examines new ideas and methodologies that may, in the next twenty years, revolutionize scientific computing. It looks specifically at trends in algorithm research, human-computer interfaces, network-based computing, surface modeling and grid generation, and computer hardware and architecture. The book provides a good overview of the current state-of-the-art and offers guidelines for future research directions. It is intended for computational scientists active in the field and program managers making strategic research decisions.

Reversible Logic Synthesis - From Fundamentals to Quantum Computing (Paperback, Softcover reprint of the original 1st ed. 2004)
Anas N. Al-Rabadi
R3,008 · Discovery Miles 30 080 · Ships in 10-15 working days

For the first time in book form, this comprehensive and systematic monograph presents the methods for the reversible synthesis of logic functions and circuits. This methodology offers designers the capability to solve major problems in system design now and in the future, such as the high rate of power consumption, and the emergence of quantum effects for highly dense ICs. The challenge addressed here is to design reliable systems that consume as little power as possible and in which the signals are processed and transmitted at very high speeds with very high signal integrity. Researchers in academia or industry and graduate students, who work in logic synthesis, computer design, computer-aided design tools, and low power VLSI circuit design, will find this book a valuable resource.
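
For background (this is standard material, not drawn from the book): the canonical reversible primitive is the Toffoli gate, a controlled-controlled-NOT that maps (a, b, c) to (a, b, c XOR (a AND b)). Because the mapping is a bijection on 3-bit states, no input information is erased, which is what links reversible circuits to low-power and quantum implementations. A quick self-inverse check:

```c
#include <assert.h>
#include <stdio.h>

typedef struct { unsigned a, b, c; } Bits3;

/* Toffoli (CCNOT): target c flips iff both controls a and b are 1. */
Bits3 toffoli(Bits3 in) {
    Bits3 out = in;
    out.c = in.c ^ (in.a & in.b);
    return out;
}

int main(void) {
    for (unsigned s = 0; s < 8; s++) {
        Bits3 x = { (s >> 2) & 1, (s >> 1) & 1, s & 1 };
        Bits3 y = toffoli(toffoli(x));   /* applying it twice is identity */
        assert(x.a == y.a && x.b == y.b && x.c == y.c);
    }
    puts("Toffoli gate is its own inverse on all 8 inputs");
    return 0;
}
```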

The Interaction of Compilation Technology and Computer Architecture (Paperback, Softcover reprint of the original 1st ed. 1994)
David J. Lilja, Peter L. Bird
R2,958 · Discovery Miles 29 580 · Ships in 10-15 working days

In brief summary, the following results were presented in this work:
* A linear time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel that has an acyclic dependence graph and no data reuse, on machines with depth independent instruction templates.
* An efficient method of estimating register requirements as a function of pipeline depth was presented.
* A technique was developed for efficiently finding bounds on register requirements as a function of pipeline depth.
* Experimental data was presented to verify these new techniques.
* Some interesting design points for register file size on a number of different architectures were discussed.
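
As a hint of what "register requirements for a schedule" means (a generic textbook computation, not the authors' algorithm for cyclic software-pipelined schedules): the requirement is bounded below by MaxLive, the maximum number of values simultaneously live, which a linear sweep over the schedule's live ranges computes. The example ranges below are invented.

```c
#include <stdio.h>

#define MAX_CYCLES 128

typedef struct { int def, last_use; } LiveRange;  /* live on [def, last_use] */

/* MaxLive via a difference array: +1 at each definition, -1 just
 * after each last use, then a prefix-sum sweep tracks the count of
 * live values at every cycle. Runs in O(values + horizon). */
int max_live(const LiveRange *lr, int n, int horizon) {
    int delta[MAX_CYCLES + 1] = {0};
    for (int i = 0; i < n; i++) {
        delta[lr[i].def] += 1;
        delta[lr[i].last_use + 1] -= 1;
    }
    int live = 0, best = 0;
    for (int c = 0; c <= horizon; c++) {
        live += delta[c];
        if (live > best) best = live;
    }
    return best;
}

int main(void) {
    LiveRange lr[] = { {0, 3}, {1, 2}, {2, 5}, {4, 6} };
    printf("registers needed: %d\n", max_live(lr, 4, 7));  /* prints 3 */
    return 0;
}
```

Deeper pipelines stretch live ranges, which is why the register requirement grows with pipeline depth, the relationship the work above bounds.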

Performance and Reliability Analysis of Computer Systems - An Example-Based Approach Using the SHARPE Software Package (Paperback, Softcover reprint of the original 1st ed. 1996)
Robin A. Sahner, Kishor Trivedi, Antonio Puliafito
R4,526 · Discovery Miles 45 260 · Ships in 10-15 working days

Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package provides a variety of probabilistic, discrete-state models used to assess the reliability and performance of computer and communication systems. The models included are combinatorial reliability models (reliability block diagrams, fault trees and reliability graphs), directed, acyclic task precedence graphs, Markov and semi-Markov models (including Markov reward models), product-form queueing networks and generalized stochastic Petri nets. A practical approach to system modeling is followed; all of the examples described are solved and analyzed using the SHARPE tool. In structuring the book, the authors have been careful to provide the reader with a methodological approach to analytical modeling techniques. These techniques are not seen as alternatives but rather as an integral part of a single process of assessment which, by hierarchically combining results from different kinds of models, makes it possible to use state-space methods for those parts of a system that require them and non-state-space methods for the more well-behaved parts of the system. The SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) package is the 'toolchest' that allows the authors to specify stochastic models easily and solve them quickly, adopting model hierarchies and very efficient solution techniques. All the models described in the book are specified and solved using the SHARPE language; its syntax is described and the source code of almost all the examples discussed is provided. Audience: Suitable for use in advanced level courses covering reliability and performance of computer and communications systems and by researchers and practicing engineers whose work involves modeling of system performance and reliability.
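
For readers unfamiliar with the combinatorial models named above, here is the arithmetic behind the simplest one, a series/parallel reliability block diagram, worked in C. The component reliabilities are invented for the example, and SHARPE's own input language is not reproduced here; for independent components, a series structure works only if all blocks work, while a parallel structure works if at least one does.

```c
#include <stdio.h>

/* Series: R = product of R_i (all components must work). */
double series(const double *r, int n) {
    double R = 1.0;
    for (int i = 0; i < n; i++) R *= r[i];
    return R;
}

/* Parallel: R = 1 - product of (1 - R_i) (at least one must work). */
double parallel(const double *r, int n) {
    double Q = 1.0;                 /* probability that all fail */
    for (int i = 0; i < n; i++) Q *= 1.0 - r[i];
    return 1.0 - Q;
}

int main(void) {
    /* Hypothetical system: a CPU in series with a mirrored disk pair. */
    double disks[] = {0.90, 0.90};
    double pair = parallel(disks, 2);        /* 1 - 0.1*0.1 = 0.99 */
    double sys[] = {0.95, pair};
    printf("system reliability: %.4f\n", series(sys, 2));  /* 0.9405 */
    return 0;
}
```

Hierarchical tools like SHARPE compose exactly such sub-model results, feeding, say, a Markov model's output in where a single block's reliability would otherwise go.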

Scalable Shared Memory Multiprocessors (Paperback, Softcover reprint of the original 1st ed. 1992)
Michel Dubois, Shreekant S. Thakkar
R4,501 · Discovery Miles 45 010 · Ships in 10-15 working days

The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and certainly participants did not refrain a bit from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question. We were even unable to agree on a definition of "scalability." Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories:
1. Access Order and Synchronization
2. Performance
3. Cache Protocols and Architectures
4. Distributed Shared Memory
Particular topics on which new ideas and results are presented in these proceedings include: efficient schemes for combining networks, formal specification of shared memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.

The Design and Implementation of a Log-Structured File System (Paperback, Softcover reprint of the original 1st ed. 1995)
Mendel Rosenblum
R2,911 · Discovery Miles 29 110 · Ships in 10-15 working days

Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
