A Thorough Overview of the Next Generation in Computing
Poised to follow in the footsteps of the Internet, grid computing is on the verge of becoming more robust and accessible to the public. Focusing on this novel yet already powerful technology, Introduction to Grid Computing explores state-of-the-art grid projects, core grid technologies, and applications of the grid. After comparing the grid with other distributed systems, the book covers two important aspects of a grid system: scheduling of jobs, and resource discovery and monitoring in grids. It then discusses existing and emerging security technologies, such as WS-Security and OGSA security, as well as the functions of grid middleware at a conceptual level. The authors also describe well-known grid projects, demonstrate the pricing of European options with the Monte Carlo method on grids, and highlight different parallelization possibilities on the grid. Taking a tutorial approach, this concise book provides a complete introduction to the components of the grid architecture and the applications of grid computing. It expertly shows how grid computing can be used in various areas, from computational mechanics to risk management in financial institutions.
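The Monte Carlo example mentioned above is easy to make concrete. The sketch below is a minimal, self-contained C++ pricer for a European call under Black-Scholes dynamics; it is an illustration of the technique, not code from the book, and every parameter value and helper name (payoff_sum) is an assumption. Because samples are independent, each batch can run on a different grid node and only the partial sums need to be combined, which is the parallelization opportunity the authors highlight.

```cpp
// Minimal Monte Carlo pricer for a European call (illustrative sketch,
// not the book's code). Independent samples make this trivially
// distributable: each grid node runs payoff_sum() with its own seed and
// the partial sums are averaged afterwards.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <random>

// Sum of discounted payoffs over `n` samples; hypothetical helper name.
double payoff_sum(double s0, double k, double r, double sigma, double t,
                  std::uint64_t n, std::uint64_t seed) {
    std::mt19937_64 gen(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    const double drift = (r - 0.5 * sigma * sigma) * t;
    const double vol = sigma * std::sqrt(t);
    double sum = 0.0;
    for (std::uint64_t i = 0; i < n; ++i) {
        double st = s0 * std::exp(drift + vol * z(gen));  // terminal price
        sum += std::max(st - k, 0.0);                     // call payoff
    }
    return std::exp(-r * t) * sum;                        // discounted
}

int main() {
    const std::uint64_t n = 1'000'000;
    // On a grid, each batch below would run on a different node with a
    // distinct seed; here the two batches simply run sequentially.
    double total = payoff_sum(100.0, 105.0, 0.05, 0.2, 1.0, n / 2, 1)
                 + payoff_sum(100.0, 105.0, 0.05, 0.2, 1.0, n / 2, 2);
    std::cout << "estimated price: " << total / n << "\n";
}
```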
Describing state-of-the-art solutions in distributed system architectures, Integration of Services into Workflow Applications presents a concise approach to the integration of loosely coupled services into workflow applications. It discusses key challenges related to the integration of distributed systems and proposes solutions, both in terms of theoretical aspects such as models and workflow scheduling algorithms, and in terms of technical solutions such as software tools and APIs. The book provides an in-depth look at workflow scheduling and proposes a way to integrate several different types of services into a single workflow application. It shows how these components can be expressed as services that can subsequently be integrated into workflow applications. Workflow applications are often described as acyclic graphs with dependencies, which allows readers to define complex scenarios in terms of basic tasks (see the sketch after this description). The book:
- Presents state-of-the-art solutions to challenges in multi-domain workflow application definition, optimization, and execution
- Proposes a uniform concept of a service that can represent executable components in all major distributed software architectures used today
- Discusses an extended model with determination of data flows among parallel paths of a workflow application
Since workflow applications often process big data, the book also explores the dynamic management of data with various storage constraints during workflow execution. It addresses several practical problems related to data handling, including data partitioning for parallel processing alongside service selection and scheduling, processing data in batches or streams, and constraints on the data sizes that service instances can process at the same time. Illustrating several workflow applications that were proposed, implemented, and benchmarked in a real BeesyCluster environment, the book includes templates for
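The acyclic-graph model of a workflow described above can be sketched in a few lines. The toy runner below is an illustration, not the book's API; the task names and the run_workflow helper are assumptions. It executes tasks in topological order (Kahn's algorithm); a real engine would dispatch ready tasks to remote services concurrently.

```cpp
// Toy workflow runner: tasks form a DAG, and a task runs once all of
// its dependencies have completed (Kahn's topological order). This is
// an illustrative sketch only -- real engines dispatch ready tasks to
// distributed services in parallel.
#include <cstddef>
#include <functional>
#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Task {
    std::string name;
    std::function<void()> run;
    std::vector<int> deps;   // indices of prerequisite tasks
};

void run_workflow(std::vector<Task>& tasks) {
    std::vector<std::vector<int>> dependents(tasks.size());
    std::vector<int> pending(tasks.size(), 0);
    std::queue<int> ready;
    for (std::size_t i = 0; i < tasks.size(); ++i) {
        pending[i] = static_cast<int>(tasks[i].deps.size());
        for (int d : tasks[i].deps)
            dependents[d].push_back(static_cast<int>(i));
        if (pending[i] == 0) ready.push(static_cast<int>(i));
    }
    while (!ready.empty()) {
        int i = ready.front(); ready.pop();
        tasks[i].run();                       // in reality: invoke a service
        for (int j : dependents[i])
            if (--pending[j] == 0) ready.push(j);
    }
}

int main() {
    std::vector<Task> w = {
        {"fetch",   []{ std::cout << "fetch\n"; },   {}},
        {"analyze", []{ std::cout << "analyze\n"; }, {0}},
        {"report",  []{ std::cout << "report\n"; },  {1}},
    };
    run_workflow(w);   // prints fetch, analyze, report in dependency order
}
```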
Created to help scientists and engineers write computer code, this practical book addresses the important tools and techniques that are necessary for scientific computing but are not yet commonplace in science and engineering curricula. The book contains chapters summarizing the most important topics that computational researchers need to know about. It leverages the viewpoints of passionate experts involved with scientific computing courses around the globe and aims to be a starting point for new computational scientists and a reference for the experienced. Each contributed chapter focuses on a specific tool or skill, providing the content needed to gain a working knowledge of the topic in about one day. While many individual books on specific computing topics exist, none is explicitly focused on getting technical professionals and students up and running immediately across a variety of computational areas.
Written by high performance computing (HPC) experts, Introduction to High Performance Computing for Scientists and Engineers provides a solid introduction to current mainstream computer architecture, dominant parallel programming models, and useful optimization strategies for scientific HPC. From working in a scientific computing center, the authors gained a unique perspective on the requirements and attitudes of users as well as manufacturers of parallel computers. The text first introduces the architecture of modern cache-based microprocessors and discusses their inherent performance limitations, before describing general optimization strategies for serial code on cache-based architectures. It next covers shared- and distributed-memory parallel computer architectures and the most relevant network topologies. After discussing parallel computing on a theoretical level, the authors show how to avoid or ameliorate typical performance problems connected with OpenMP. They then present cache-coherent non-uniform memory access (ccNUMA) optimization techniques, examine distributed-memory parallel programming with the Message Passing Interface (MPI), and explain how to write efficient MPI code. The final chapter focuses on hybrid programming with MPI and OpenMP. Users of high performance computers often have no idea what factors limit time to solution and whether it makes sense to think about optimization at all. This book facilitates an intuitive understanding of performance limitations without relying on heavy computer science knowledge. It also prepares readers for studying more advanced literature. The authors received the Informatics Europe Curriculum Best Practices Award for Parallelism and Concurrency.
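To give a flavor of the shared-memory material described above, here is a minimal OpenMP sketch (not from the book; the array size and compile flags are assumptions): a reduction over a large array whose speed is typically bounded by memory bandwidth rather than core count, exactly the kind of performance limitation the text teaches readers to recognize.

```cpp
// A memory-bandwidth-bound OpenMP reduction (illustrative sketch, not
// the book's code). Adding threads stops helping once the memory bus
// saturates -- the classic limitation discussed in the blurb above.
// Build (assumed flags): g++ -O2 -fopenmp sum.cpp
#include <iostream>
#include <vector>

int main() {
    const long long n = 10'000'000;           // ~80 MB of doubles (assumed size)
    std::vector<double> a(n, 1.0);
    double sum = 0.0;
    // Each thread accumulates a private partial sum; OpenMP combines them.
    #pragma omp parallel for reduction(+ : sum)
    for (long long i = 0; i < n; ++i)
        sum += a[i];                          // one load per add: bandwidth-bound
    std::cout << "sum = " << sum << "\n";
}
```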
"Read this if you want to understand how to shape our technological future and reinvigorate democracy along the way." -- Reed Hastings, co-founder and CEO of Netflix
A forward-thinking manifesto from three Stanford professors that reveals how big tech's obsession with optimization and efficiency has sacrificed fundamental human values, and outlines steps we can take to change course, renew our democracy, and save ourselves.
In no more than the blink of an eye, a naive optimism about technology's liberating potential has given way to a dystopian obsession with biased algorithms, surveillance capitalism, and job-displacing robots. Yet too few of us see any alternative to accepting the onward march of technology. We have simply accepted a technological future designed for us by technologists, the venture capitalists who fund them, and the politicians who give them free rein. It doesn't need to be this way. System Error exposes the root of our current predicament: how big tech's relentless focus on optimization is driving a future that reinforces discrimination, erodes privacy, displaces workers, and pollutes the information we get. Armed with an understanding of how technologists think and exercise their power, three Stanford professors - a philosopher working at the intersection of tech and ethics, a political scientist who served under Obama, and the director of the undergraduate Computer Science program at Stanford (also an early Google engineer) - reveal how we can hold that power to account. As the dominance of big tech becomes an explosive societal conundrum, they share their provocative insights and concrete solutions to help everyone understand what is happening, what is at stake, and what we can do to control technology instead of letting it control us.
The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.
High Performance Computing: Programming and Applications presents techniques that address new performance issues in the programming of high performance computing (HPC) applications. Omitting tedious details, the book discusses hardware architecture concepts and the programming techniques most pertinent to application developers for achieving high performance. Although the text concentrates on C and Fortran, the techniques described can be applied to other languages, such as C++ and Java. Drawing on their experience with chips from AMD and with systems, interconnects, and software from Cray Inc., the authors explore the problems that create bottlenecks in attaining good performance. They cover techniques that pertain to each of the three levels of parallelism:
1. Message passing between the nodes
2. Shared-memory parallelism on the nodes or the multiple instruction, multiple data (MIMD) units on the accelerator
3. Vectorization on the inner level
After discussing architectural and software challenges, the book outlines a strategy for porting and optimizing an existing application to a large massively parallel processor (MPP) system. With a look toward the future, it also introduces the use of general purpose graphics processing units (GPGPUs) for carrying out HPC computations. A companion website at www.hybridmulticoreoptimization.com contains all the examples from the book, along with updated timing results on the latest released processors.
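The three levels listed above can coexist in a single small kernel. The following sketch is an illustration only (not from the book or its companion site): MPI ranks each own a block of data, OpenMP threads split the block, and the unit-stride loop body is left for the compiler to vectorize. The block size and build line are assumptions.

```cpp
// Three levels of parallelism in one toy kernel (illustrative sketch):
//   1. MPI ranks each own a block of the data    (message passing)
//   2. OpenMP threads split the block            (shared memory)
//   3. the unit-stride loop body vectorizes      (SIMD)
// Build (assumed): mpic++ -O3 -fopenmp levels.cpp && mpirun -np 4 ./a.out
#include <mpi.h>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long long n_local = 1'000'000;        // each rank's block (assumed)
    std::vector<double> x(n_local, 1.0), y(n_local, 2.0);
    double local = 0.0;

    // Level 2 (threads) wrapped around level 3 (vectorizable body).
    #pragma omp parallel for reduction(+ : local)
    for (long long i = 0; i < n_local; ++i) {
        y[i] += 3.0 * x[i];                     // axpy: compilers vectorize this
        local += y[i];
    }

    // Level 1: combine the per-rank partial sums across the nodes.
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::cout << "checksum = " << global << "\n";
    MPI_Finalize();
}
```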
Over the past several decades, applications shaped by advances in digital signal processing have undergone unprecedented growth in their capabilities. The editors and authors of High Performance Embedded Computing Handbook: A Systems Perspective have been significant contributors to this field, and the principles and techniques presented in the handbook are reinforced by examples drawn from their work.
A Comprehensive Study of SQL - Practice and Implementation is designed as a textbook and provides a comprehensive approach to SQL (Structured Query Language), the standard programming language for defining, organizing, and exploring data in relational databases. It demonstrates how to leverage the two most vital tools for data query and analysis - SQL and Excel - to perform comprehensive data analysis without the need for a sophisticated and expensive data mining tool or application. Features:
- Provides a complete collection of modeling techniques, beginning with fundamentals and gradually progressing through increasingly complex real-world case studies
- Explains how to build, populate, and administer high-performance databases and develop robust SQL-based applications
- Gives a solid foundation in best practices and relational theory
- Offers self-contained lessons on key SQL concepts or techniques at the end of each chapter, using numerous illustrations and annotated examples
This book is aimed primarily at advanced undergraduates and graduates with a background in computer science and information technology. Researchers and professionals will also find this book useful.
Few works are as timely and critical to the advancement of high performance computing as this new, up-to-date treatise on leading-edge directions of operating systems. It is a first-hand product of many of the leaders in this rapidly evolving field and possibly the most comprehensive. This new and important book masterfully presents the major alternative concepts driving the future of operating system design for high performance computing. In particular, it describes the major advances of monolithic operating systems such as Linux and Unix that dominate the TOP500 list. It also presents the state of the art in lightweight kernels that exhibit high efficiency and scalability at the cost of generality. Finally, this work looks forward to possibly the most promising strategy: a hybrid structure combining full-service functionality with lightweight kernel operation. With this, it is likely that this new work will find its way onto the shelves of almost everyone who is in any way engaged in the multi-discipline of high performance computing. (From the foreword by Thomas Sterling)
The book summarizes the results of the projects of the High Performance Computing Center Stuttgart (HLRS) for the year 2000. The most significant contributions were selected in a scientific review process. Together they provide an overview of recent developments in high performance computing and simulation. Reflecting the close cooperation of the HLRS with industry, special emphasis has been put on the industrial relevance of the presented results and methods. The book is thus a collection of showcases for an innovative combination of state-of-the-art modeling, novel numerical algorithms, and the use of leading-edge high performance computing systems.
This book constitutes the thoroughly refereed post-conference proceedings of the Third International Conference on High Performance Computing in Science and Engineering, HPCSE 2017, held in Karolinka, Czech Republic, in May 2017. The 15 papers presented in this volume were carefully reviewed and selected from 20 submissions. The conference provides an international forum for exchanging ideas among researchers involved in scientific and parallel computing, including theory and applications, as well as applied and computational mathematics. The focus of HPCSE 2017 was on models, algorithms, and software tools which facilitate efficient and convenient utilization of modern parallel and distributed computing architectures, as well as on large-scale applications.
This book constitutes the refereed proceedings of the Third Russian Supercomputing Days, RuSCDays 2017, held in Moscow, Russia, in September 2017. The 41 revised full papers and one revised short paper presented were carefully reviewed and selected from 120 submissions. The papers are organized in topical sections on parallel algorithms; supercomputer simulation; and high performance architectures, tools, and technologies.
This book constitutes the proceedings of the 4th Latin American Conference on High Performance Computing, CARLA 2017, held in Buenos Aires, Argentina, and Colonia del Sacramento, Uruguay, in September 2017. The 29 papers presented in this volume were carefully reviewed and selected from 50 submissions. They are organized in topical sections named: HPC infrastructures and datacenters; HPC industry and education; GPU, multicores, accelerators; HPC applications and tools; big data and data management; parallel and distributed algorithms; Grid, cloud and federations.
Current advances in High Performance Computing (HPC) increasingly impact efficient software development workflows. Programmers of HPC applications need to consider trends such as increased core counts, multiple levels of parallelism, reduced memory per core, and I/O system challenges in order to produce well-performing and highly scalable code. At the same time, this increasing complexity adds further sources of program defects. While novel programming paradigms and advanced system libraries provide solutions for some of these challenges, appropriate supporting tools are indispensable. Such tools aid application developers in debugging, performance analysis, and code optimization, and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 7th International Parallel Tools Workshop, held in Dresden, Germany, September 3-4, 2013.
Contemporary High Performance Computing: From Petascale toward Exascale, Volume 3 focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. This third volume is a continuation of the two previous volumes and covers further HPC ecosystems using the same chapter outline: description of a flagship system, major application workloads, facilities, and sponsors. Features:
- Describes many prominent international HPC systems from 2015 through 2017, including each system's hardware and software architecture
- Covers facilities for each system, including power and cooling
- Presents application workloads for each site
- Discusses historic and projected trends in technology and applications
- Includes contributions from leading experts
Designed for researchers and students in high performance computing, computational science, and related areas, this book provides a valuable guide to the state-of-the-art research, trends, and resources in the world of HPC.
"Ask not what your compiler can do for you, ask what you can do for your compiler." --John Levesque, Director of Cray's Supercomputing Centers of Excellence The next decade of computationally intense computing lies with more powerful multi/manycore nodes where processors share a large memory space. These nodes will be the building block for systems that range from a single node workstation up to systems approaching the exaflop regime. The node itself will consist of 10's to 100's of MIMD (multiple instruction, multiple data) processing units with SIMD (single instruction, multiple data) parallel instructions. Since a standard, affordable memory architecture will not be able to supply the bandwidth required by these cores, new memory organizations will be introduced. These new node architectures will represent a significant challenge to application developers. Programming for Hybrid Multi/Manycore MPP Systems attempts to briefly describe the current state-of-the-art in programming these systems, and proposes an approach for developing a performance-portable application that can effectively utilize all of these systems from a single application. The book starts with a strategy for optimizing an application for multi/manycore architectures. It then looks at the three typical architectures, covering their advantages and disadvantages. The next section of the book explores the other important component of the target-the compiler. The compiler will ultimately convert the input language to executable code on the target, and the book explores how to make the compiler do what we want. The book then talks about gathering runtime statistics from running the application on the important problem sets previously discussed. How best to utilize available memory bandwidth and virtualization is covered next, along with hybridization of a program. The last part of the book includes several major applications, and examines future hardware advancements and how the application developer may prepare for those advancements.
This gentle introduction to High Performance Computing (HPC) for data science using the Message Passing Interface (MPI) standard has been designed as a first undergraduate course on parallel programming for distributed-memory models, and requires only basic programming notions. The book is divided into two parts: the first covers high performance computing using C++ with the MPI standard, and the second provides high-performance data analytics on computer clusters. The first part describes the fundamental notions of blocking versus non-blocking point-to-point communications, global communications (such as broadcast or scatter) and collaborative computations (reduce), together with the Amdahl and Gustafson speed-up laws, before addressing parallel sorting and parallel linear algebra on computer clusters. The common ring, torus, and hypercube topologies of clusters are then explained, and global communication procedures on these topologies are studied. This part closes with the MapReduce (MR) model of computation, well suited to processing big data using the MPI framework. In the second part, the book focuses on high-performance data analytics. Flat and hierarchical clustering algorithms are introduced for data exploration, along with how to program these algorithms on computer clusters, followed by machine learning classification and an introduction to graph analytics. This part closes with a concise introduction to data core-sets that make big data problems amenable to tiny data problems. Exercises are included at the end of each chapter so that students can practice the concepts learned, and a final section contains an overall exam which allows them to evaluate how well they have assimilated the material covered in the book.
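As a taste of the first part's material, the sketch below (not the book's code) contrasts the two point-to-point styles on the ring topology the text studies: posting the receive with non-blocking MPI_Irecv before the matching MPI_Send avoids the deadlock that a naive blocking exchange can produce.

```cpp
// Ring shift with non-blocking communication (illustrative sketch).
// A naive blocking MPI_Send/MPI_Recv issued in the same order on every
// rank can deadlock once messages exceed the eager limit; posting the
// receive first with MPI_Irecv avoids that.
// Build (assumed): mpic++ ring.cpp && mpirun -np 4 ./a.out
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int right = (rank + 1) % size, left = (rank + size - 1) % size;

    int incoming = -1;
    MPI_Request req;
    MPI_Irecv(&incoming, 1, MPI_INT, left, 0, MPI_COMM_WORLD, &req); // post first
    MPI_Send(&rank, 1, MPI_INT, right, 0, MPI_COMM_WORLD);           // then send
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    std::cout << "rank " << rank << " received " << incoming
              << " from " << left << "\n";
    MPI_Finalize();
}
```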
High-Performance Computing (HPC) delivers higher computational performance to solve problems in science, engineering, and finance. There are various HPC resources available for different needs, ranging from cloud computing, which can be used without much expertise or expense, to more tailored hardware such as Field-Programmable Gate Arrays (FPGAs) or D-Wave's quantum computer systems. High-Performance Computing in Finance is the first book to provide a state-of-the-art introduction to HPC for finance, capturing both academically and practically relevant problems.
Gain Critical Insight into the Parallel I/O Ecosystem
Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware, middleware, and applications. The book then traverses up the I/O software stack. The second part covers the file system layer, and the third part discusses middleware (such as MPI-IO and PLFS) and user-facing libraries (such as Parallel-NetCDF, HDF5, ADIOS, and GLEAN). Delving into real-world scientific applications that use the parallel I/O infrastructure, the fourth part presents case studies from particle-in-cell, stochastic, finite volume, and direct numerical simulations. The fifth part gives an overview of various profiling and benchmarking tools used by practitioners. The final part of the book addresses the implications of current trends in HPC on parallel I/O in the exascale world.
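To make the middleware layer concrete, here is a minimal MPI-IO sketch (an illustration, not drawn from the book): every rank writes its block of a shared file at a rank-dependent offset using a collective call, the primitive on which libraries such as HDF5 and Parallel-NetCDF build. The file name and block size are assumptions.

```cpp
// Minimal MPI-IO collective write (illustrative sketch). Each rank
// writes its own block of a shared file at offset rank*block; higher
// layers like HDF5 and Parallel-NetCDF add metadata and datatypes on
// top of exactly this kind of call.
// Build (assumed): mpic++ pio.cpp && mpirun -np 4 ./a.out
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int block = 1024;                   // doubles per rank (assumed)
    std::vector<double> data(block, static_cast<double>(rank));

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",  // hypothetical file name
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset =
        static_cast<MPI_Offset>(rank) * block * sizeof(double);
    // The collective variant lets the MPI library aggregate the writes.
    MPI_File_write_at_all(fh, offset, data.data(), block, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
    MPI_Finalize();
}
```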
We're all familiar with computers and the concept of doing work via these silicon-chip-driven modern wonders. The technological advances have been stunning: a typical handheld computing device today has more computing power than a 1960s computer that took up an entire room. In today's world, computing size is inversely proportional to computer speed: The smaller the computer, the faster it works. With computing speed just about doubling every eighteen months, today's processing power is more than 100 million times that of a computer in 1970. What does the future hold for computers and their ever-growing power? In Scientific American's UNDERSTANDING SUPERCOMPUTING, you'll discover what constitutes a "supercomputer," how supercomputers function, and how you can make your own computer into a super machine (it's a matter of networking). From a chess computer that can beat the world's greatest human player to machines that control satellite communications, find out what tomorrow holds in store for supercomputers in terms of hardware, software, and everyday applications.
This book constitutes the refereed joint post-conference proceedings of the 6th International Symposium on High-Performance Computing, ISHPC 2005, held in Japan in 2005. It also includes the refereed post-proceedings of the First International Workshop on Advanced Low Power Systems 2006 (ALPS 2006) and selected papers from the Workshop on Applications for PetaFLOPS Computing (APC 2005). A total of 42 papers were carefully selected from 76 submissions, covering a wide range of topics.
This book constitutes the refereed proceedings of the 14th International Conference on High-Performance Computing, HiPC 2007, held in Goa, India, in December 2007. The 53 revised full papers presented together with the abstracts of 5 keynote talks were carefully reviewed and selected from 253 submissions. The papers are organized in topical sections on applications on I/O and FPGAs, microarchitecture and multiprocessor architecture, applications of novel architectures, system software, scheduling, energy-aware computing, P2P and internet applications, communication and routing, cluster and grid applications, as well as mobile computing.
You may like...
An Introduction to Modern Analysis - Vicente Montesinos, Peter Zizler, … (Hardcover)
Computational Optimization - Success in… - Vladislav Bukshtynov (Hardcover)
Scheduling Problems - New Applications… - Rodrigo Da Rosa Righi (Hardcover)