Computational clusters have long provided a mechanism for the acceleration of high-performance computing (HPC) applications. With today's supercomputers now exceeding the petaflop scale, however, they are also exhibiting an increase in heterogeneity. This heterogeneity spans a range of technologies, from multiple operating systems to hardware accelerators and novel architectures. Because of the exceptional acceleration some of these heterogeneous architectures provide, they are being embraced as viable tools for HPC applications. Given the scale of today's supercomputers, it is clear that scientists must consider the use of fault tolerance in their applications. This is particularly true as computational clusters with hundreds and thousands of processors become ubiquitous in large-scale scientific computing, leading to lower mean times to failure. This forces systems to deal effectively with the possibility of arbitrary and unexpected node failure. In this book the authors address the issue of fault tolerance via checkpointing. They discuss the existing strategies for providing rollback recovery to applications -- both via MPI at the user level and through application-level techniques. Checkpointing itself has been studied extensively in the literature, including in the authors' own work. Here they give a general overview of checkpointing and how it is implemented. More importantly, they describe strategies to improve the performance of checkpointing, particularly in the case of distributed systems.
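The application-level technique described here is easy to picture in code: the program periodically serializes its own state to stable storage, and on restart it rolls back to the last saved state. Below is a minimal sketch in plain C, assuming a hypothetical iterative solver whose state fits in a single struct; the file name `solver.ckpt`, the struct layout, and the checkpoint interval are illustrative choices, not the authors' implementation.

```c
/* Minimal sketch of application-level checkpointing for a hypothetical
 * iterative solver. All names and sizes here are illustrative. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long iteration;       /* where to resume after a failure */
    double data[1024];    /* stand-in for the solver's working set */
} State;

/* Persist the full state; durability details (fsync, atomic rename)
 * are omitted for brevity. */
static int save_checkpoint(const State *s, const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(s, sizeof *s, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}

/* Roll back: returns 0 and fills *s if a checkpoint exists. */
static int load_checkpoint(State *s, const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(s, sizeof *s, 1, f);
    fclose(f);
    return n == 1 ? 0 : -1;
}

int main(void) {
    State s = {0};
    if (load_checkpoint(&s, "solver.ckpt") == 0)
        printf("restarting from iteration %ld\n", s.iteration);
    for (; s.iteration < 1000000; s.iteration++) {
        /* ... one step of real work would go here ... */
        if (s.iteration % 10000 == 0)      /* checkpoint interval */
            save_checkpoint(&s, "solver.ckpt");
    }
    return 0;
}
```

In a real MPI application the checkpoints would also have to be coordinated across processes so that all ranks roll back to a mutually consistent state, which is precisely the harder distributed-systems problem the book addresses.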
The field of parallel and distributed computing is undergoing changes at a breathtaking pace. Networked computers are now omnipresent in virtually every application, from games to sophisticated space missions. The increasing complexity, heterogeneity, scale, and dynamism of emerging pervasive environments and their associated applications are challenging the advancement of the parallel and distributed computing paradigm. Many novel infrastructures have been or are being created to provide the necessary computational fabric for realising parallel and distributed applications from diverse domains. New models and tools are also being proposed to evaluate and predict the quality of these complicated parallel and distributed systems. Current and recent efforts to provide infrastructures and models for such applications have addressed many complex underlying problems and have resulted in new tools and paradigms for effectively realising parallel and distributed systems. This book showcases these novel tools and approaches with inputs from relevant experts.
The latest techniques and principles of parallel and grid database processing. The growth in grid databases, coupled with the utility of parallel query processing, presents an important opportunity to understand and utilize high-performance parallel database processing within a major database management system (DBMS). This important new book provides readers with a fundamental understanding of parallelism in data-intensive applications, and demonstrates how to develop faster capabilities to support them. It presents a balanced treatment of the theoretical and practical aspects of high-performance databases to demonstrate how parallel query is executed in a DBMS, including concepts, algorithms, analytical models, and grid transactions. High-Performance Parallel Database Processing and Grid Databases serves as a valuable resource for researchers working in parallel databases and for practitioners interested in building a high-performance database. It is also a much-needed, self-contained textbook for database courses at the advanced undergraduate and graduate levels.
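To make the core idea of parallel query execution concrete: most algorithms in this setting start from hash partitioning, which routes tuples with the same key to the same worker. The sketch below, in plain C with an invented tuple layout and worker count, illustrates the principle only; it is not code from the book.

```c
/* Minimal sketch of hash partitioning, the building block behind
 * parallel joins and aggregations. Layout and counts are illustrative. */
#include <stdio.h>

#define NUM_WORKERS 4

typedef struct { int key; double value; } Tuple;

/* Route each tuple to the worker that owns its hash bucket, so tuples
 * sharing a join key always land on the same worker. */
static int partition_of(int key) {
    return (unsigned)key % NUM_WORKERS;
}

int main(void) {
    Tuple table[] = { {7, 1.5}, {3, 2.0}, {11, 0.5}, {3, 4.25} };
    int counts[NUM_WORKERS] = {0};
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        counts[partition_of(table[i].key)]++;
    for (int w = 0; w < NUM_WORKERS; w++)
        printf("worker %d gets %d tuple(s)\n", w, counts[w]);
    return 0;
}
```

Because equal keys always hash to the same worker, each worker can join or aggregate its partition independently, which is where the parallel speedup comes from.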
The thoroughly updated edition of a guide to parallel programming with MPI, reflecting the latest specifications, with many detailed examples. This book offers a thoroughly updated guide to the MPI (Message-Passing Interface) standard library for writing programs for parallel computers. Since the publication of the previous edition of Using MPI, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common. The MPI Forum recently brought the standard up to date with MPI-3, reflecting developments in hardware capabilities, core language evolution, the needs of applications, and experience gained over the years by vendors, implementers, and users. This third edition of Using MPI reflects these changes in both text and example code. The book takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets. For the third edition, example code has been brought up to date; applications have been updated; and references reflect the recent attention MPI has received in the literature. A companion volume, Using Advanced MPI, covers more advanced topics, including hybrid programming and coping with large data.
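For flavor, here is the kind of minimal C program this tutorial style begins with; it uses only core calls such as MPI_Init, MPI_Comm_rank, and MPI_Reduce, and is a generic illustration rather than one of the book's own examples.

```c
/* Minimal MPI program: each rank reports itself, rank 0 gathers a sum. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);
    /* Reduce every rank's id to a single value on rank 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", sum);
    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with, say, mpiexec -n 4 ./a.out, each rank prints its identity and rank 0 reports the reduced sum.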
Programming Massively Parallel Processors: A Hands-on Approach, Third Edition shows students and professionals alike the basic concepts of parallel programming and GPU architecture, exploring, in detail, various techniques for constructing parallel programs. Case studies demonstrate the development process, detailing computational thinking and ending with effective and efficient parallel programs. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. For this new edition, the authors have updated their coverage of CUDA, including coverage of newer libraries such as cuDNN, moved content that has become less important to appendices, added two new chapters on parallel patterns, and updated case studies to reflect current industry practices.
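The foundational pattern such books build on, one thread per output element, looks roughly like this in CUDA C; the vector-add kernel, sizes, and use of unified memory are illustrative choices, not the authors' case studies.

```cuda
/* Minimal data-parallel pattern: one CUDA thread per output element. */
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread id */
    if (i < n) c[i] = a[i] + b[i];                  /* guard stragglers */
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    /* Unified memory keeps the example short; cudaMalloc + memcpy also works. */
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);   /* expect 3.0 */
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with nvcc, the guard `if (i < n)` handles the common case where the launched grid is slightly larger than the data.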
What does Google's management of billions of Web pages have in common with analysis of a genome with billions of nucleotides? Both apply methods that coordinate many processors to accomplish a single task. From mining genomes to the World Wide Web, from modeling financial markets to global weather patterns, parallel computing enables computations that would otherwise be impractical if not impossible with sequential approaches alone. Its fundamental role as an enabler of simulations and data analysis continues to advance in a wide range of application areas. "Scientific Parallel Computing" is the first textbook to integrate all the fundamentals of parallel computing in a single volume while also providing a basis for a deeper understanding of the subject. Designed for graduate and advanced undergraduate courses in the sciences and in engineering, computer science, and mathematics, it focuses on the three key areas of algorithms, architecture, and languages, and their crucial synthesis in performance. The book's computational examples, whose math prerequisites are not beyond the level of advanced calculus, derive from a breadth of topics in scientific and engineering simulation and data analysis. The programming exercises presented early in the book are designed to bring students up to speed quickly, while the book later develops projects challenging enough to guide students toward research questions in the field. The new paradigm of cluster computing is fully addressed. A supporting web site provides access to all the codes and software mentioned in the book, and offers topical information on popular parallel computing systems.
- Integrates all the fundamentals of parallel computing essential for today's high-performance requirements
- Ideal for graduate and advanced undergraduate students in the sciences and in engineering, computer science, and mathematics
- Extensive programming and theoretical exercises enable students to write parallel codes quickly
- More challenging projects later in the book introduce research questions
- New paradigm of cluster computing fully addressed
- Supporting web site provides access to all the codes and software mentioned in the book
In the not-too-distant future, every researcher and professional in science and engineering fields will have to understand parallel and distributed computing. With hyperthreading in Intel processors, HyperTransport links in AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging cluster and grid computing, parallel and distributed computers have moved into the mainstream of computing. To fully exploit these advances in computer architectures, researchers and professionals must start to design parallel or distributed software, systems and algorithms for their scientific and engineering applications. Parallel and distributed scientific and engineering computing has become a key technology that will play an important part in determining, or at least shaping, future research and development activities in many academic and industrial branches. This book reports on recent important advances in the area of parallel and distributed computing for science and engineering applications. Included in the book are selected papers from prestigious workshops such as PACT-SHPSEC, IPDPS-PDSECA and ICPP-HPSECA, together with some invited papers from prominent researchers around the world. The book is divided into five main sections. These chapters not only provide novel ideas, new experimental results and hands-on experience in this field, but also stimulate future research activities in the area of parallel and distributed computing for science and engineering applications.
The field of parallel computing dates back to the mid-fifties, when research laboratories began developing so-called supercomputers with the aim of significantly increasing performance, mainly the number of (floating point) operations a machine is able to perform per unit of time. Since then, significant advances in hardware and software technology have brought the field to a point where the long-standing challenge of teraflop computing was met in 1998. While increases in performance are still a driving factor in parallel and distributed processing, there are many other challenges to be addressed in the field. Enabled by the growth of the Internet, the majority of desktop computers can nowadays be seen as part of one huge distributed system, the World Wide Web. Advances in wireless networks extend the scope to a variety of mobile devices (including notebooks, PDAs, and mobile phones). Information is therefore distributed by nature, and users require immediate access to information sources, to computing power, and to communication facilities. While performance in the sense defined above is still an important criterion in such systems, other issues, including correctness, reliability, security, ease of use, ubiquitous access, and intelligent services, must be considered within the development process itself. This extended notion of performance covering all of these aspects is called "quality of parallel and distributed programs and systems". In order to examine and guarantee the quality of parallel and distributed programs and systems, special models, metrics and tools are necessary. The six papers selected for this volume tackle various aspects of these problems.
The continuous progress of scientific research is one of the important factors explaining the constantly increasing demand for computational power. At the same time, one of the results of such progress is the availability of more powerful computer platforms. Reflecting both trends, this volume reviews a broad array of subjects based on solutions to daily problems in industrial production, research, and development.
Virtual Shared Memory for Distributed Architecture
These ten papers represent a range of applications for the practical use of parallel computing. They address large scale high performance applications, data transfer and storage cost minimisation, two-stroke engine applications, large air pollution models, parallel global aircraft configuration design, parallel execution time analysis, parallel randomised heuristics, the analysis of complex waveguide circuits, reading database copy, and business process re-engineering.
Toward Teraflop Computing & New Grand Challenge Applications: Proceedings of the Mardi Gras '94 Conference, February 10-12, 1994, Louisiana State University
This volume contains the conference proceedings for the 2001 International Conference on Parallel Processing Workshops.
Given the constant need to solve larger and larger numerical problems, it is impossible to neglect the opportunity that comes from closely adapting computational algorithms and their implementations to the particular features of computing devices, i.e. the characteristics and performance of available workstations and servers. In the last decade, advances in hardware manufacturing, decreasing costs, and the spread of GPUs have attracted the attention of researchers to numerical simulation, given that for some problems GPU-based simulations can significantly outperform CPU-based ones. The first objective of this book is to present how to design numerical methods for GPGPU so as to obtain the highest efficiency. A second objective is to propose new auto-tuning techniques to optimize accesses on the GPU. A third objective is to propose new preconditioning techniques for GPGPU. Finally, an original energy consumption model is proposed, leading to a robust and accurate energy consumption prediction model.
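The auto-tuning idea can be sketched briefly: launch the same kernel at several candidate block sizes, time each with CUDA events, and keep the fastest configuration. The kernel, candidate list, and problem size below are assumptions chosen for illustration and are not the book's actual tuning method.

```cuda
/* Minimal auto-tuning loop: time one kernel at several block sizes
 * and keep the fastest. All names and sizes are illustrative. */
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void scale(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main(void) {
    const int n = 1 << 22;
    float *x;
    cudaMalloc(&x, n * sizeof(float));   /* contents irrelevant; we only time */
    int candidates[] = {64, 128, 256, 512, 1024};
    int best = 0; float best_ms = 1e30f;
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    for (int k = 0; k < 5; k++) {
        int threads = candidates[k];
        int blocks = (n + threads - 1) / threads;
        cudaEventRecord(t0);
        scale<<<blocks, threads>>>(x, 1.0001f, n);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms; cudaEventElapsedTime(&ms, t0, t1);
        printf("%4d threads/block: %.3f ms\n", threads, ms);
        if (ms < best_ms) { best_ms = ms; best = threads; }
    }
    printf("best block size: %d\n", best);
    cudaFree(x);
    return 0;
}
```

A production tuner would repeat and average the timings and persist the winning configuration per device; this sketch shows only the measurement loop.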
Business has joined science and engineering in exploiting the benefits of high-performance computing. Parallel programming has become an important skill for professional developers who must deliver fast and optimized software systems. This guide to parallel programming takes a programmer from design through coding, testing, and deployment, beginning with an introduction to parallel 'thinking' and program design. The book examines the major parallel system architectures and the most prevalent technologies, and concludes by tying all concepts together in a single application. Although the core of the guide is about programming and software engineering, it also provides a solid understanding of how to engineer a reliable and useful parallel system for high-performance computers. This guide targets the professional C and C++ developer who needs to understand all the key technologies for developing parallel programs and software systems. It will be an essential reference for those with interests in software engineering, parallel programming, and concurrent programming.
The aim of these proceedings is to help disseminate knowledge about the potential of parallel computing. The contents give an overview of various European sites pioneering the Connection Machine and convey a flavour of the different applications that run efficiently on this parallel architecture.