Designed for introductory parallel computing courses at the advanced undergraduate or beginning graduate level, Elements of Parallel Computing presents the fundamental concepts of parallel computing not from the point of view of hardware, but from a more abstract view of algorithmic and implementation patterns. The aim is to facilitate the teaching of parallel programming by surveying some key algorithmic structures and programming models, together with an abstract representation of the underlying hardware. The presentation is friendly and informal. The content of the book is language neutral, using pseudocode that represents common programming language models. The first five chapters present core concepts in parallel computing. SIMD, shared memory, and distributed memory machine models are covered, along with a brief discussion of their execution models. The book also discusses decomposition as a fundamental activity in parallel algorithmic design, starting with a naive example and continuing with a discussion of some key algorithmic structures. Important programming models are presented in depth, as are important concepts of performance analysis, including work-depth analysis of task graphs, communication analysis of distributed memory algorithms, key performance metrics, and a discussion of barriers to obtaining good performance. The second part of the book presents three case studies that reinforce the concepts of the earlier chapters. One feature of these chapters is to contrast different solutions to the same problem, using select problems that aren't discussed frequently in parallel computing textbooks: the Single Source Shortest Path Problem, the Eikonal equation, and a classical computational geometry problem, computation of the two-dimensional convex hull. After presenting the problem and sequential algorithms, each chapter first discusses the sources of parallelism and then presents and contrasts parallel solutions.
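The work-depth analysis of task graphs mentioned in this blurb has a compact standard statement; as a hedged aside (this is the textbook-standard bound, not necessarily the book's own notation), for a task graph with total work W, critical-path depth D, and p processors, a greedy schedule satisfies:

```latex
% Work-depth (Brent) bound for greedy scheduling of a task graph:
%   W   = total work (number of operations)
%   D   = depth of the critical path
%   p   = number of processors
%   T_p = parallel running time on p processors
\max\!\left(\frac{W}{p},\, D\right) \;\le\; T_p \;\le\; \frac{W}{p} + D
```

The lower bound says p processors can never beat either the total work divided by p or the critical path; the upper bound says greedy scheduling gets within roughly a factor of two of that limit.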
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent non-uniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with the Message Passing Interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature. The authors' teaching in this area received the Informatics Europe Curriculum Best Practices Award for Parallelism and Concurrency.
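As a hedged illustration of the ccNUMA placement issue this book treats, here is a minimal "first touch" sketch in C++ with OpenMP (the array size, loop body, and schedule are illustrative, not taken from the book):

```cpp
#include <cstddef>

int main() {
    const std::size_t N = 1 << 24;
    // Plain new[] leaves the doubles uninitialized, so no memory page is
    // touched until the parallel loop below (a std::vector would zero-fill
    // serially from the main thread and defeat first touch).
    double* a = new double[N];
    double* b = new double[N];

    // First touch in parallel with a static schedule: each page is mapped
    // to the NUMA node of the thread that first writes it.
    #pragma omp parallel for schedule(static)
    for (std::size_t i = 0; i < N; ++i) {
        a[i] = 0.0;
        b[i] = static_cast<double>(i);
    }

    // The compute loop reuses the same static schedule, so every thread
    // reads the pages it initialized and accesses stay NUMA-local.
    #pragma omp parallel for schedule(static)
    for (std::size_t i = 0; i < N; ++i)
        a[i] = 2.0 * b[i];

    delete[] a;
    delete[] b;
    return 0;
}
```

Because initialization and computation share the same schedule, each page is first written, and later read, by the same thread; with a typical compiler this is built with OpenMP enabled (e.g. g++ -fopenmp).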
Created to help scientists and engineers write computer code, this practical book addresses the important tools and techniques that are necessary for scientific computing, but which are not yet commonplace in science and engineering curricula. This book contains chapters summarizing the most important topics that computational researchers need to know about. It leverages the viewpoints of passionate experts involved with scientific computing courses around the globe and aims to be a starting point for new computational scientists and a reference for the experienced. Each contributed chapter focuses on a specific tool or skill, providing the content needed to gain a working knowledge of the topic in about one day. While many individual books on specific computing topics exist, none is explicitly focused on getting technical professionals and students up and running immediately across a variety of computational areas.
The Future of Numerical Computing. Written by one of the foremost experts in high-performance computing and the inventor of Gustafson's Law, The End of Error: Unum Computing explains a new approach to computer arithmetic: the universal number (unum). The unum encompasses all IEEE floating-point formats as well as fixed-point and exact integer arithmetic. This new number type obtains more accurate answers than floating-point arithmetic yet uses fewer bits in many cases, saving memory, bandwidth, energy, and power. A Complete Revamp of Computer Arithmetic from the Ground Up. Richly illustrated in color, this groundbreaking book represents a fundamental change in how to perform calculations automatically. It illustrates how this novel approach can solve problems that have vexed engineers and scientists for decades, including problems that have been historically limited to serial processing. Suitable for Anyone Using Computers for Calculations. The book is accessible to anyone who uses computers for technical calculations, with much of the book only requiring high school math. The author makes the mathematics interesting through numerous analogies. He clearly defines jargon and uses color-coded boxes for mathematical formulas, computer code, important descriptions, and exercises.
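As a hedged aside (not an example from the book), a few lines of C++ show the kind of floating-point rounding surprise that motivates the unum proposal:

```cpp
#include <cstdio>

int main() {
    // 0.1 has no exact binary IEEE 754 representation, so each addition
    // accumulates a small rounding error.
    double sum = 0.0;
    for (int i = 0; i < 10; ++i)
        sum += 0.1;

    // Typically prints a value just under 1 (e.g. 0.99999999999999989)
    // rather than exactly 1 on IEEE 754 doubles.
    std::printf("%.17g\n", sum);
    return 0;
}
```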
High-Performance Computing (HPC) delivers higher computational performance to solve problems in science, engineering and finance. There are various HPC resources available for different needs, ranging from cloud computing, which can be used without much expertise or expense, to more tailored hardware such as Field-Programmable Gate Arrays (FPGAs) or D-Wave's quantum computer systems. High-Performance Computing in Finance is the first book that provides a state-of-the-art introduction to HPC for finance, capturing both academically and practically relevant problems.
High Performance Computing: Programming and Applications presents techniques that address new performance issues in the programming of high performance computing (HPC) applications. Omitting tedious details, the book discusses hardware architecture concepts and programming techniques that are the most pertinent to application developers for achieving high performance. Even though the text concentrates on C and Fortran, the techniques described can be applied to other languages, such as C++ and Java. Drawing on their experience with AMD chips and with systems, interconnects, and software from Cray Inc., the authors explore the problems that create bottlenecks in attaining good performance. They cover techniques that pertain to each of the three levels of parallelism: (1) message passing between the nodes; (2) shared memory parallelism on the nodes or the multiple instruction, multiple data (MIMD) units on the accelerator; and (3) vectorization on the inner level. After discussing architectural and software challenges, the book outlines a strategy for porting and optimizing an existing application to a large massively parallel processor (MPP) system. With a look toward the future, it also introduces the use of general purpose graphics processing units (GPGPUs) for carrying out HPC computations. A companion website at www.hybridmulticoreoptimization.com contains all the examples from the book, along with updated timing results on the latest released processors.
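As a hedged sketch of how the two on-node levels combine (an invented routine, not one of the book's examples; MPI message passing between nodes would sit a level above this):

```cpp
#include <cstddef>
#include <vector>

// y <- y + alpha * x, threaded across the node's cores with SIMD
// vectorization on the inner level.
void daxpy(double alpha, const double* x, double* y, std::size_t n) {
    // Outer level: split iterations statically among threads.
    // Inner level: ask the compiler to vectorize each thread's chunk.
    #pragma omp parallel for simd schedule(static)
    for (std::size_t i = 0; i < n; ++i)
        y[i] += alpha * x[i];
}

int main() {
    std::vector<double> x(1024, 1.0), y(1024, 2.0);
    daxpy(0.5, x.data(), y.data(), x.size());  // each y[i] becomes 2.5
    return 0;
}
```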
Although the highly anticipated petascale computers of the near future will perform an order of magnitude faster than today's quickest supercomputer, the scaling up of algorithms and applications for this class of computers remains a tough challenge. From scalable algorithm design for massive concurrency to performance analyses and scientific visualization, Petascale Computing: Algorithms and Applications captures the state of the art in high-performance computing algorithms and applications. Featuring contributions from the world's leading experts in computational science, this edited collection explores the use of petascale computers for solving the most difficult scientific and engineering problems of the current century. Covering a wide range of important topics, the book illustrates how petascale computing can be applied to space and Earth science missions, biological systems, weather prediction, climate science, disasters, black holes, and gamma ray bursts. It details the simulation of multiphysics, cosmological evolution, molecular dynamics, and biomolecules. The book also discusses computational aspects that include the Uintah framework, Enzo code, multithreaded algorithms, petaflops, performance analysis tools, multilevel finite element solvers, finite element code development, Charm++, and the Cactus framework. Supplying petascale tools, programming methodologies, and an eight-page color insert, this volume addresses the challenging problems of developing application codes that can take advantage of the architectural features of the new petascale systems in advance of their first deployment.
Few works are as timely and critical to the advancement of high performance computing as is this new, up-to-date treatise on leading-edge directions of operating systems. It is a first-hand product of many of the leaders in this rapidly evolving field and possibly the most comprehensive. This new and important book masterfully presents the major alternative concepts driving the future of operating system design for high performance computing. In particular, it describes the major advances of monolithic operating systems such as Linux and Unix that dominate the TOP500 list. It also presents the state of the art in lightweight kernels that exhibit high efficiency and scalability at the loss of generality. Finally, this work looks forward to possibly the most promising strategy of a hybrid structure combining full service functionality with lightweight kernel operation. With this, it is likely that this new work will find its way onto the shelves of almost everyone who is in any way engaged in the multi-discipline of high performance computing. (From the foreword by Thomas Sterling)
Introduction to Modeling and Simulation with MATLAB and Python is intended for students and professionals in science, social science, and engineering who wish to learn the principles of computer modeling, as well as basic programming skills. The book content focuses on meeting a set of basic modeling and simulation competencies that were developed as part of several National Science Foundation grants. Although computer science students are often much more expert programmers, they are rarely given the opportunity to see how those skills are applied to solve complex science and engineering problems, and may also not be aware of the libraries used by scientists to create those models. The book interleaves chapters on modeling concepts and related exercises with programming concepts and exercises. The authors start with an introduction to modeling and its importance to current practices in the sciences and engineering. They introduce each of the programming environments and the syntax used to represent variables and compute mathematical equations and functions. As students gain more programming expertise, the authors return to modeling concepts, providing starting code for a variety of exercises where students add additional code to solve the problem and provide an analysis of the outcomes. In this way, the book builds both modeling and programming expertise with a "just-in-time" approach, so that by the end of the book students can take on relatively simple modeling examples on their own. Each chapter is supplemented with references to additional reading, tutorials, and exercises that guide students to additional help and allow them to practice both their programming and analytical modeling skills. In addition, each of the programming-related chapters is divided into two parts, one for MATLAB and one for Python. In these chapters, the authors also refer to additional online tutorials that students can use if they are having difficulty with any of the topics. The book culminates with a set of final project exercise suggestions that incorporate both the modeling and programming skills provided in the rest of the volume. Those projects could be undertaken by individuals or small groups of students. The companion website at http://www.intromodeling.com provides updates to instructions when there are substantial changes in software versions, as well as electronic copies of exercises and the related code. The website also offers a space where people can suggest additional projects they are willing to share, as well as comments on the existing projects and exercises throughout the book. Solutions and lecture notes will also be available for qualifying instructors.
This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and from chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.
This book constitutes the thoroughly refereed post-conference proceedings of the Third International Conference on High Performance Computing in Science and Engineering, HPCSE 2017, held in Karolinka, Czech Republic, in May 2017. The 15 papers presented in this volume were carefully reviewed and selected from 20 submissions. The conference provides an international forum for exchanging ideas among researchers involved in scientific and parallel computing, including theory and applications, as well as applied and computational mathematics. The focus of HPCSE 2017 was on models, algorithms, and software tools which facilitate efficient and convenient utilization of modern parallel and distributed computing architectures, as well as on large-scale applications.
This book constitutes the refereed proceedings of the Third Russian Supercomputing Days, RuSCDays 2017, held in Moscow, Russia, in September 2017. The 41 revised full papers and one revised short paper presented were carefully reviewed and selected from 120 submissions. The papers are organized in topical sections on parallel algorithms; supercomputer simulation; and high performance architectures, tools and technologies.
Contemporary High Performance Computing: From Petascale toward Exascale, Volume 3 focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. This third volume is a continuation of the two previous volumes and covers additional HPC ecosystems using the same chapter outline: description of a flagship system, major application workloads, facilities, and sponsors. Features:
- Describes many prominent, international systems in HPC from 2015 through 2017, including each system's hardware and software architecture
- Covers facilities for each system, including power and cooling
- Presents application workloads for each site
- Discusses historic and projected trends in technology and applications
- Includes contributions from leading experts
Designed for researchers and students in high performance computing, computational science, and related areas, this book provides a valuable guide to the state-of-the-art research, trends, and resources in the world of HPC.
This book constitutes the proceedings of the 4th Latin American Conference on High Performance Computing, CARLA 2017, held in Buenos Aires, Argentina, and Colonia del Sacramento, Uruguay, in September 2017. The 29 papers presented in this volume were carefully reviewed and selected from 50 submissions. They are organized in topical sections named: HPC infrastructures and datacenters; HPC industry and education; GPU, multicores, accelerators; HPC applications and tools; big data and data management; parallel and distributed algorithms; Grid, cloud and federations.
Current advances in High Performance Computing (HPC) increasingly impact efficient software development workflows. Programmers for HPC applications need to consider trends such as increased core counts, multiple levels of parallelism, reduced memory per core, and I/O system challenges in order to derive well-performing and highly scalable codes. At the same time, the increasing complexity adds further sources of program defects. While novel programming paradigms and advanced system libraries provide solutions for some of these challenges, appropriate supporting tools are indispensable. Such tools aid application developers in debugging, performance analysis, or code optimization and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 7th International Parallel Tools Workshop, held in Dresden, Germany, September 3-4, 2013.
This gentle introduction to High Performance Computing (HPC) for Data Science using the Message Passing Interface (MPI) standard has been designed as a first course for undergraduates on parallel programming on distributed memory models, and requires only basic programming notions. Divided into two parts, the book first covers high performance computing using C++ with the Message Passing Interface (MPI) standard, followed by a second part providing high-performance data analytics on computer clusters. In the first part, the fundamental notions of blocking versus non-blocking point-to-point communications, global communications (like broadcast or scatter) and collaborative computations (reduce), together with the Amdahl and Gustafson speed-up laws, are described before addressing parallel sorting and parallel linear algebra on computer clusters. The common ring, torus and hypercube topologies of clusters are then explained and global communication procedures on these topologies are studied. This first part closes with the MapReduce (MR) model of computation, well-suited to processing big data using the MPI framework. In the second part, the book focuses on high-performance data analytics. Flat and hierarchical clustering algorithms are introduced for data exploration, along with how to program these algorithms on computer clusters, followed by machine learning classification and an introduction to graph analytics. This part closes with a concise introduction to data core-sets that let big data problems be amenable to tiny data problems. Exercises are included at the end of each chapter in order for students to practice the concepts learned, and a final section contains an overall exam which allows them to evaluate how well they have assimilated the material covered in the book.
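A minimal hedged sketch of the global communications (broadcast) and collaborative computations (reduce) this course introduces, with Amdahl's law noted in a comment; the values are arbitrary and this is not code from the book:

```cpp
// Amdahl's law, also covered in the first part, bounds the speed-up on
// p processors for a program with parallel fraction f:
//   S(p) = 1 / ((1 - f) + f / p)
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Global communication: rank 0 broadcasts a parameter to all ranks.
    double x = (rank == 0) ? 3.14 : 0.0;
    MPI_Bcast(&x, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    // Collaborative computation: every rank contributes to a global sum
    // that is collected on rank 0.
    double local = x * rank, total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum = %f over %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}
```

With a typical MPI toolchain this builds and runs as, e.g., mpic++ example.cpp -o example && mpirun -np 4 ./example.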
A Comprehensive Study of SQL - Practice and Implementation is designed as a textbook and provides a comprehensive approach to SQL (Structured Query Language), the standard programming language for defining, organizing, and exploring data in relational databases. It demonstrates how to leverage the two most vital tools for data query and analysis, SQL and Excel, to perform comprehensive data analysis without the need for a sophisticated and expensive data mining tool or application. Features:
- Provides a complete collection of modeling techniques, beginning with fundamentals and gradually progressing through increasingly complex real-world case studies
- Explains how to build, populate, and administer high-performance databases and develop robust SQL-based applications
- Gives a solid foundation in best practices and relational theory
- Offers self-contained lessons on key SQL concepts or techniques at the end of each chapter, using numerous illustrations and annotated examples
This book is aimed primarily at advanced undergraduates and graduates with a background in computer science and information technology. Researchers and professionals will also find this book useful.
Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware, middleware, and applications. The book then traverses up the I/O software stack. The second part covers the file system layer and the third part discusses middleware (such as MPI-IO and PLFS) and user-facing libraries (such as Parallel-NetCDF, HDF5, ADIOS, and GLEAN). Delving into real-world scientific applications that use the parallel I/O infrastructure, the fourth part presents case studies from particle-in-cell, stochastic, finite volume, and direct numerical simulations. The fifth part gives an overview of various profiling and benchmarking tools used by practitioners. The final part of the book addresses the implications of current trends in HPC on parallel I/O in the exascale world.
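As a hedged sketch of the MPI-IO middleware layer the book surveys, here is a minimal example in which each rank writes its own block of one shared file (the file name and block size are invented):

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Each rank fills a block of doubles with rank-dependent values.
    const int COUNT = 1024;
    double buf[COUNT];
    for (int i = 0; i < COUNT; ++i)
        buf[i] = rank + i * 1e-3;

    // All ranks open the same file collectively.
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    // Each rank writes its block at a rank-dependent byte offset; the
    // collective variant lets the MPI-IO layer aggregate the requests.
    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(double);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```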
The SUPERMEN. "After a rare speech at the National Center for Atmospheric Research in Boulder, Colorado, in 1976, programmers in the audience had suddenly fallen silent when Cray offered to answer questions. He stood there for several minutes, waiting for their queries, but none came. When he left, the head of NCAR's computing division chided the programmers. 'Why didn't someone raise a hand?' After a tense moment, one programmer replied, 'How do you talk to God?'" (from The Supermen: The Story of Seymour Cray and the Technical Wizards behind the Supercomputer) "They were building revolutionary, not evolutionary, machines... They were blazing a trail, molding science into a product... The freedom to create was extraordinary." (from The Supermen) In 1951, a soft-spoken, skinny young man fresh from the University of Minnesota took a job in an old glider factory in St. Paul. Computer technology would never be the same, for the glider factory was the home of Engineering Research Associates and the recent college grad was Seymour R. Cray. During his extraordinary career, Cray would be alternately hailed as "the Albert Einstein," "the Thomas Edison," and "the Evel Knievel" of supercomputing. At various times, he was all three: a master craftsman, inventor, and visionary whose disdain for the rigors of corporate life became legendary, and whose achievements remain unsurpassed. The Supermen is award-winning writer Charles J. Murray's exhilarating account of how the brilliant (some would say eccentric) Cray and his gifted colleagues blazed the trail that led to the Information Age. This is a thrilling, real-life scientific adventure, deftly capturing the daring, seat-of-the-pants spirit of the early days of computer development, as well as an audacious, modern-day David and Goliath battle, in which a group of maverick engineers beat out IBM to become the runaway industry leaders. Murray's briskly paced narrative begins during the final months of the Second World War, when men such as William Norris and Howard Engstrom began researching commercial applications for the code-breaking machines of wartime, and charts the rise of technological research in response to the Cold War. In those days computers were huge, cumbersome machines with names like Demon and Atlas. When Cray came on board, things quickly changed. Drawing on in-depth interviews, including the last interview Cray completed before his untimely and tragic death, Murray provides rare insight into Cray's often controversial approach to his work. Cray could spend exhausting hours in single-minded pursuit of a particular goal, and Murray takes us behind the scenes to witness late-night brainstorming sessions and miraculous eleventh-hour fixes. Cray's casual, often hostile attitude toward management, although alienating to some, was more than a passionate need for independence; he simply thought differently than others. Seymour Cray saw farther and faster, and trusted his vision with an unassailable confidence. Yet he inspired great loyalty as well, making it possible for his own start-up company, Cray Research, to bring the 54,000-employee conglomerate of Control Data to its knees. Ultimately, The Supermen is a story of genius, and how a unique set of circumstances (a small-team approach, corporate detachment, and a government-backed marketplace) enabled that genius to flourish.
In an atmosphere of unparalleled freedom and creativity, Seymour Cray's vision and drive fueled a technological revolution from which America would emerge as the world's leader in supercomputing.
Based on a course developed by the author, Introduction to High Performance Scientific Computing introduces methods for adding parallelism to numerical methods for solving differential equations. It contains exercises and programming projects that facilitate learning, as well as examples and discussions based on the C++ programming language, with additional comments for those already familiar with C++. The text provides an overview of concepts and algorithmic techniques for modern scientific computing and is divided into six self-contained parts that can be assembled in any order to create an introductory course using available computer hardware. Part I introduces the C++ programming language for those not already familiar with programming in a compiled language. Part II describes parallelism on shared memory architectures using OpenMP. Part III details parallelism on computer clusters using MPI for coordinating a computation. Part IV demonstrates the use of graphics processing units (GPUs) to solve problems using the CUDA language for NVIDIA graphics cards. Part V addresses programming on GPUs for non-NVIDIA graphics cards using the OpenCL framework. Finally, Part VI contains a brief discussion of numerical methods and applications, giving the reader an opportunity to test the methods on typical computing problems. Introduction to High Performance Scientific Computing is intended for advanced undergraduate or beginning graduate students who have limited exposure to programming or parallel programming concepts. Extensive knowledge of numerical methods is not assumed. The material can be adapted to the available computational hardware, from OpenMP on simple laptops or desktops to MPI on computer clusters, or CUDA and OpenCL for computers containing NVIDIA or other graphics cards. Experienced programmers unfamiliar with parallel programming will benefit from comparing the various methods to determine the type of parallel programming best suited for their application. The book can be used for courses on parallel scientific computing, high performance computing, and numerical methods for parallel computing.
This book constitutes the refereed proceedings of the 14th International Conference on High-Performance Computing, HiPC 2007, held in Goa, India, in December 2007. The 53 revised full papers presented together with the abstracts of 5 keynote talks were carefully reviewed and selected from 253 submissions. The papers are organized in topical sections on applications on I/O and FPGAs, microarchitecture and multiprocessor architecture, applications of novel architectures, system software, scheduling, energy-aware computing, P2P and internet applications, communication and routing, cluster and grid applications, as well as mobile computing.
This book constitutes the refereed joint post-conference proceedings of the 6th International Symposium on High-Performance Computing, ISHPC 2005, held in Japan in 2005. It also includes the refereed post-proceedings of the First International Workshop on Advanced Low Power Systems, ALPS 2006, and selected papers from the Workshop on Applications for PetaFLOPS Computing, APC 2005. A total of 42 papers were carefully selected from 76 submissions, covering a wide range of topics.
This book constitutes the refereed proceedings of the 13th International Conference on High-Performance Computing, HiPC 2006, held in Bangalore, India in December 2006. The 52 revised full papers presented together with the abstracts of 7 invited talks were carefully reviewed and selected from 335 submissions. The papers are organized in topical sections on scheduling and load balancing, architectures, network and distributed algorithms, application software, network services, applications, ad-hoc networks, systems software, sensor networks and performance evaluation, as well as routing and data management algorithms.
You may like...
Phase Space Picture Of Quantum…
Young Suh Kim, Marilyn E Noz
Paperback
R1,499
Discovery Miles 14 990
On Quanta, Mind and Matter - Hans Primas…
Harald Atmanspacher, A. Amann, …
Hardcover
R2,612
Discovery Miles 26 120