This timely text presents a comprehensive overview of fault tolerance techniques for high-performance computing (HPC). The text opens with a detailed introduction to the concepts of checkpoint protocols and scheduling algorithms, prediction, replication, silent error detection and correction, together with some application-specific techniques such as ABFT. Emphasis is placed on analytical performance models. This is then followed by a review of general-purpose techniques, including several checkpoint and rollback recovery protocols. Relevant execution scenarios are also evaluated and compared through quantitative models. Features: provides a survey of resilience methods and performance models; examines the various sources for errors and faults in large-scale systems; reviews the spectrum of techniques that can be applied to design a fault-tolerant MPI; investigates different approaches to replication; discusses the challenge of energy consumption of fault-tolerance methods in extreme-scale systems.
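The analytical performance models mentioned above have a classic concrete instance: the Young/Daly first-order approximation of the optimal checkpointing period, T_opt = sqrt(2 * C * mu), where C is the time to take one checkpoint and mu is the platform MTBF. The sketch below is a minimal illustration with made-up parameter values, not code from the book:

```python
import math

def optimal_checkpoint_period(checkpoint_cost: float, mtbf: float) -> float:
    """Young/Daly first-order approximation of the optimal
    checkpointing period: T_opt = sqrt(2 * C * mu), where C is the
    time to take one checkpoint and mu is the platform MTBF."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def expected_waste(period: float, checkpoint_cost: float, mtbf: float) -> float:
    """First-order fraction of time lost to checkpointing overhead
    plus re-execution after a failure (valid when period << mtbf)."""
    return checkpoint_cost / period + period / (2.0 * mtbf)

# Hypothetical values: 10-minute checkpoints, 24-hour platform MTBF.
C, mu = 600.0, 86_400.0
T = optimal_checkpoint_period(C, mu)
print(f"optimal period: {T/3600:.2f} h, waste: {expected_waste(T, C, mu):.1%}")
```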
Readership: This book is devoted to the study of compiler transformations that are needed to expose the parallelism hidden in a program. This book is not an introductory book to parallel processing, nor is it an introductory book to parallelizing compilers. We assume that readers are familiar with the books High Performance Compilers for Parallel Computing by Wolfe [121] and Supercompilers for Parallel and Vector Computers by Zima and Chapman [125], and that they want to know more about scheduling transformations. In this book we describe both task graph scheduling and loop nest scheduling. Task graph scheduling aims at executing tasks linked by precedence constraints; it is a run-time activity. Loop nest scheduling aims at executing statement instances linked by data dependences; it is a compile-time activity. We are mostly interested in loop nest scheduling, but we also deal with task graph scheduling for two main reasons: (i) beautiful algorithms and heuristics have been reported in the literature recently; and (ii) several techniques used in task graph scheduling, like list scheduling, are the basis of the loop transformations implemented in loop nest scheduling. As for loop nest scheduling, our goal is to capture in a single place the fantastic developments of the last decade or so. Dozens of loop transformations have been introduced (loop interchange, skewing, fusion, distribution, etc.) before a unifying theory emerged. The theory builds upon the pioneering papers of Karp, Miller, and Winograd [65] and of Lamport [75], and it relies on sophisticated mathematical tools (unimodular transformations, parametric integer linear programming, Hermite decomposition, Smith decomposition, etc.).
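As a toy illustration of one of the loop transformations named above, the sketch below shows loop interchange on a simple nest. It is a hypothetical Python example (the book itself works at the compiler level), and the transformation is legal here because the additions are independent, so no data dependence constrains the iteration order:

```python
import numpy as np

def sum_original(a: np.ndarray) -> float:
    """Column-major traversal: strided accesses on a C-ordered array."""
    n, m = a.shape
    s = 0.0
    for j in range(m):        # outer loop over columns
        for i in range(n):    # inner loop over rows
            s += a[i, j]
    return s

def sum_interchanged(a: np.ndarray) -> float:
    """After loop interchange: the inner loop now walks contiguous
    memory, which typically improves locality on a C-ordered array."""
    n, m = a.shape
    s = 0.0
    for i in range(n):        # loops swapped
        for j in range(m):
            s += a[i, j]
    return s
```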
Full of practical examples, Introduction to Scheduling presents the basic concepts and methods, fundamental results, and recent developments of scheduling theory. With contributions from highly respected experts, it provides self-contained, easy-to-follow, yet rigorous presentations of the material. The book first classifies scheduling problems and their complexity and then presents examples that demonstrate successful techniques for the design of efficient approximation algorithms. It also discusses classical problems, such as the famous makespan minimization problem, as well as more recent advances, such as energy-efficient scheduling algorithms. After focusing on job scheduling problems that encompass independent and possibly parallel jobs, the text moves on to a practical application of cyclic scheduling for the synthesis of embedded systems. It also proves that efficient schedules can be derived in the context of steady-state scheduling. Subsequent chapters discuss scheduling large and computer-intensive applications on parallel resources, illustrate different approaches of multi-objective scheduling, and show how to compare the performance of stochastic task-resource systems. The final chapter assesses the impact of platform models on scheduling techniques. From the basics to advanced topics and platform models, this volume provides a thorough introduction to the field. It reviews classical methods, explores more contemporary models, and shows how the techniques and algorithms are used in practice.
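The makespan minimization problem mentioned above has a standard concrete illustration: Graham's list scheduling heuristic for independent tasks on m identical machines, which is a (2 - 1/m)-approximation. The sketch below uses hypothetical task durations and is not drawn from the book:

```python
import heapq

def list_schedule(durations: list[float], m: int) -> float:
    """Graham's list scheduling for independent tasks on m identical
    machines: repeatedly give the next task to the machine that
    becomes free first. Guarantees makespan <= (2 - 1/m) * OPT."""
    machines = [0.0] * m          # current finish time of each machine
    heapq.heapify(machines)
    for d in durations:
        earliest = heapq.heappop(machines)
        heapq.heappush(machines, earliest + d)
    return max(machines)

# Hypothetical instance: 7 tasks on 3 machines.
print(list_schedule([3, 5, 4, 2, 6, 1, 4], 3))   # greedy makespan: 10 (OPT is 9)
```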
Focusing on algorithms for distributed-memory parallel architectures, Parallel Algorithms presents a rigorous yet accessible treatment of theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and essential notions of scheduling. The book extracts fundamental ideas and algorithmic principles from the mass of parallel algorithm expertise and practical implementations developed over the last few decades. In the first section of the text, the authors cover two classical theoretical models of parallel computation (PRAMs and sorting networks), describe network models for topology and performance, and define several classical communication primitives. The next part deals with parallel algorithms on ring and grid logical topologies as well as the issue of load balancing on heterogeneous computing platforms. The final section presents basic results and approaches for common scheduling problems that arise when developing parallel algorithms. It also discusses advanced scheduling topics, such as divisible load scheduling and steady-state scheduling. With numerous examples and exercises in each chapter, this text encompasses both the theoretical foundations of parallel algorithms and practical parallel algorithm design.
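Among the classical communication primitives such texts define, broadcast on a unidirectional ring is perhaps the simplest: the message travels from processor to successor and completes in p - 1 point-to-point steps. The simulation below is an illustrative sketch, not the book's notation:

```python
def ring_broadcast(p: int, source: int) -> list[int]:
    """Simulate a store-and-forward broadcast on a unidirectional
    ring of p processors: the source sends to its successor, which
    forwards to its own successor, and so on. Returns, for each
    processor, the step at which it receives the message; the
    broadcast completes after p - 1 steps."""
    recv_step = [None] * p
    recv_step[source] = 0                 # the source holds the message
    holder = source
    for step in range(1, p):
        nxt = (holder + 1) % p            # successor on the ring
        recv_step[nxt] = step
        holder = nxt
    return recv_step

print(ring_broadcast(p=6, source=2))      # [4, 5, 0, 1, 2, 3]
```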
This two-volume set presents the proceedings of the Second International European Conference on Parallel Processing, Euro-Par '96, held in Lyon, France, in August 1996.
This volume presents the proceedings of the joint meeting CONPAR 92 - VAPP V, held in Lyon, France, September 1992. The International Conferences on Parallel Processing (CONPAR) and the meetings on Vector and Parallel Processors in computational science (VAPP) have been held jointly since CONPAR 90 - VAPP IV, held in Zurich. The aim of the meeting presented in this volume is to review hardware and architecture developments together with languages and software tools for supporting parallel processing, and to highlight advances in models, algorithms, and applications software on vector and parallel architectures. The papers in the volume are organized into sections on networks, software tools, distributed algorithms, dedicated architectures, numerical applications, systolic algorithms, parallel linear algebra, architectures, shared virtual memory, load balancing, data parallelism, parallel algorithms, image processing, compiling and scheduling, simulation and performance analysis, parallel artificial intelligence, dataflow architectures, parallel programming, and poster presentations.
Presenting a complementary perspective to standard books on algorithms, A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis provides a roadmap for readers to determine the difficulty of an algorithmic problem by finding an optimal solution or proving complexity results. It gives a practical treatment of algorithmic complexity and guides readers in solving algorithmic problems. Divided into three parts, the book offers a comprehensive set of problems with solutions as well as in-depth case studies that demonstrate how to assess the complexity of a new problem. Part I helps readers understand the main design principles and design efficient algorithms. Part II covers polynomial reductions from NP-complete problems and approaches that go beyond NP-completeness. Part III supplies readers with tools and techniques to evaluate problem complexity, including how to determine which instances are polynomial and which are NP-hard. Drawing on the authors' classroom-tested material, this text takes readers step by step through the concepts and methods for analyzing algorithmic complexity. Through many problems and detailed examples, readers can investigate polynomial-time algorithms and NP-completeness and beyond.
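A standard concrete instance of the polynomial-versus-NP-hard distinction discussed above (illustrative, not an excerpt from the book): PARTITION is NP-complete, yet a pseudo-polynomial dynamic program solves instances whose weights are small integers:

```python
def can_partition(weights: list[int]) -> bool:
    """Decide PARTITION by dynamic programming over reachable subset
    sums. NP-complete in general, but this runs in O(n * S) time for
    n weights summing to 2S, i.e. pseudo-polynomial: easy when the
    numbers are small, hard only when they are exponentially large."""
    total = sum(weights)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                       # subset sums built so far
    for w in weights:
        reachable |= {s + w for s in reachable if s + w <= target}
    return target in reachable

print(can_partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}
```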