Showing 1 - 6 of 6 matches in All Departments

9th International Conference on Automated Deduction - Argonne, Illinois, USA, May 23-26, 1988. Proceedings (Paperback, 1988 ed.)
Ewing Lusk, Ross Overbeek
R3,171 Discovery Miles 31 710 Ships in 10 - 15 working days

This volume contains the papers presented at the Ninth International Conference on Automated Deduction (CADE-9) held May 23-26 at Argonne National Laboratory, Argonne, Illinois. The conference commemorates the twenty-fifth anniversary of the discovery of the resolution principle, which took place during the summer of 1963. The CADE conferences are a forum for reporting on research on all aspects of automated deduction, including theorem proving, logic programming, unification, deductive databases, term rewriting, ATP for non-standard logics, and program verification. All papers submitted to the conference were refereed by at least two referees, and the program committee accepted the 52 that appear here. Also included in this volume are abstracts of 21 implementations of automated deduction systems.

MPI - Eine Einführung (German, Paperback)
William Gropp, Ewing Lusk, Anthony Skjellum; Translated by Holger Blaar; Contributions by Paul Molitor
R2,263 R1,749 Discovery Miles 17 490 Save R514 (23%) Ships in 10 - 15 working days

Message Passing Interface (MPI) is a protocol that enables parallel computations on distributed, heterogeneous, loosely coupled computer systems.
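As a rough illustration of the message-passing model this introduction covers (a minimal sketch, not an excerpt from the book), the classic first MPI program in C simply initializes the library and reports each process's rank:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down MPI */
        return 0;
    }

With a typical MPI installation this would be compiled with mpicc and launched with, for example, mpiexec -n 4 ./hello, each of the four processes printing its own line.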

Using Advanced MPI - Modern Features of the Message-Passing Interface (Paperback)
William Gropp, Torsten Hoefler, Rajeev Thakur, Ewing Lusk
R2,336 Discovery Miles 23 360 Ships in 10 - 15 working days

A guide to advanced features of MPI, reflecting the latest version of the MPI standard, that takes an example-driven, tutorial approach.

This book offers a practical guide to the advanced features of the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. It covers new features added in MPI-3, the latest version of the MPI standard, and updates from MPI-2. Like its companion volume, Using MPI, the book takes an informal, example-driven, tutorial approach. The material in each chapter is organized according to the complexity of the programs used as examples, starting with the simplest example and moving to more complex ones.

Using Advanced MPI covers major changes in MPI-3, including changes to remote memory access and one-sided communication that simplify semantics and enable better performance on modern hardware; new features such as nonblocking and neighborhood collectives for greater scalability on large systems; and minor updates to parallel I/O and dynamic processes. It also covers support for hybrid shared-memory/message-passing programming; MPI_Message, which aids in certain types of multithreaded programming; features that handle very large data; an interface that allows the programmer and the developer to access performance data; and a new binding of MPI to Fortran.
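To give a flavour of one of the MPI-3 features named above, here is a minimal sketch (not taken from the book) of a nonblocking collective, in which a global reduction is started, overlapped with local work, and completed later:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank;
        double local, sum = 0.0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local = (double)rank;

        /* MPI-3 nonblocking collective: start the global sum ... */
        MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... unrelated local computation could proceed here ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);      /* complete the collective */
        if (rank == 0)
            printf("sum of ranks = %g\n", sum);

        MPI_Finalize();
        return 0;
    }

The point of the nonblocking form is the window between MPI_Iallreduce and MPI_Wait, where useful computation can overlap the communication.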

Parallel Programming Using C++ (Paperback, New)
Gregory V. Wilson, Paul Lu, William Gropp, Ewing Lusk
R2,038 Discovery Miles 20 380 Ships in 10 - 15 working days

Foreword by Bjarne Stroustrup

Software is generally acknowledged to be the single greatest obstacle preventing mainstream adoption of massively-parallel computing. While sequential applications are routinely ported to platforms ranging from PCs to mainframes, most parallel programs only ever run on one type of machine. One reason for this is that most parallel programming systems have failed to insulate their users from the architectures of the machines on which they have run. Those that have been platform-independent have usually also had poor performance.

Many researchers now believe that object-oriented languages may offer a solution. By hiding the architecture-specific constructs required for high performance inside platform-independent abstractions, parallel object-oriented programming systems may be able to combine the speed of massively-parallel computing with the comfort of sequential programming.

Parallel Programming Using C++ describes fifteen parallel programming systems based on C++, the most popular object-oriented language of today. These systems cover the whole spectrum of parallel programming paradigms, from data parallelism through dataflow and distributed shared memory to message-passing control parallelism.

For the parallel programming community, a common parallel application is discussed in each chapter, as part of the description of the system itself. By comparing the implementations of the polygon overlay problem in each system, the reader can get a better sense of their expressiveness and functionality for a common problem. For the systems community, the chapters contain a discussion of the implementation of the various compilers and runtime systems. In addition to discussing the performance of polygon overlay, several of the contributors also discuss the performance of other, more substantial, applications.

For the research community, the contributors discuss the motivations for and philosophy of their systems. As well, many of the chapters include critiques that complete the research arc by pointing out possible future research directions. Finally, for the object-oriented community, there are many examples of how encapsulation, inheritance, and polymorphism can be used to control the complexity of developing, debugging, and tuning parallel software.

Scalable Input/Output - Achieving System Balance (Paperback, New)
Daniel A. Reed, William Gropp, Ewing Lusk
R1,237 Discovery Miles 12 370 Ships in 10 - 15 working days

The major research results from the Scalable Input/Output Initiative, exploring software and algorithmic solutions to the I/O imbalance.

As we enter the "decade of data," the disparity between the vast amount of data storage capacity (measurable in terabytes and petabytes) and the bandwidth available for accessing it has created an input/output bottleneck that is proving to be a major constraint on the effective use of scientific data for research. Scalable Input/Output is a summary of the major research results of the Scalable I/O Initiative, launched by Paul Messina, then Director of the Center for Advanced Computing Research at the California Institute of Technology, to explore software and algorithmic solutions to the I/O imbalance.

The contributors explore techniques for I/O optimization, including: I/O characterization to understand application and system I/O patterns; system checkpointing strategies; collective I/O and parallel database support for scientific applications; parallel I/O libraries and strategies for file striping, prefetching, and write behind; compilation strategies for out-of-core data access; scheduling and shared virtual memory alternatives; network support for low-latency data transfer; and parallel I/O application programming interfaces.
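As a hedged illustration of the collective I/O idea mentioned above (using the standard MPI-IO interface rather than anything specific to this volume), each process below writes its own disjoint block of a shared file in a single collective call:

    #include <mpi.h>

    #define N 100

    int main(int argc, char *argv[])
    {
        int rank, i, buf[N];
        MPI_File fh;
        MPI_Offset offset;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < N; i++)
            buf[i] = rank;                      /* each rank writes its own id */

        offset = (MPI_Offset)rank * N * sizeof(int);   /* disjoint file regions */

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Collective write: every rank participates, so the MPI-IO layer can
           merge the requests into fewer, larger operations on the file system. */
        MPI_File_write_at_all(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

The collective form gives the I/O library a global view of all requests, which is what makes optimizations such as request merging and two-phase I/O possible.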

Using MPI - Portable Parallel Programming with the Message-Passing Interface (Paperback, third edition)
William Gropp, Ewing Lusk, Anthony Skjellum
R1,795 R1,592 Discovery Miles 15 920 Save R203 (11%) Ships in 9 - 15 working days

The thoroughly updated edition of a guide to parallel programming with MPI, reflecting the latest specifications, with many detailed examples.

This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI-3 Forum recently brought the MPI standard up to date with respect to developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code.

The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.
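For a sense of the simple C programs such a tutorial starts from, a minimal point-to-point example in the same spirit (a sketch, not copied from the book) sends a single integer from rank 0 to rank 1:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* send one int to rank 1 with message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive one int from rank 0 with message tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

This needs at least two processes to run (e.g. mpiexec -n 2 ./send_recv); the tag and communicator arguments are the routing information that introductory MPI material explains first.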
