Readership: This book is devoted to the study of compiler transformations that are needed to expose the parallelism hidden in a program. This book is not an introductory book to parallel processing, nor is it an introductory book to parallelizing compilers. We assume that readers are familiar with the books High Performance Compilers for Parallel Computing by Wolfe [121] and Supercompilers for Parallel and Vector Computers by Zima and Chapman [125], and that they want to know more about scheduling transformations. In this book we describe both task graph scheduling and loop nest scheduling. Task graph scheduling aims at executing tasks linked by precedence constraints; it is a run-time activity. Loop nest scheduling aims at executing statement instances linked by data dependences; it is a compile-time activity. We are mostly interested in loop nest scheduling, but we also deal with task graph scheduling for two main reasons: (i) beautiful algorithms and heuristics have been reported in the literature recently; and (ii) several techniques used in task graph scheduling, like list scheduling, are the basis of the loop transformations implemented in loop nest scheduling. As for loop nest scheduling, our goal is to capture in a single place the fantastic developments of the last decade or so. Dozens of loop transformations had been introduced (loop interchange, skewing, fusion, distribution, etc.) before a unifying theory emerged. The theory builds upon the pioneering papers of Karp, Miller, and Winograd [65] and of Lamport [75], and it relies on sophisticated mathematical tools (unimodular transformations, parametric integer linear programming, Hermite decomposition, Smith decomposition, etc.).
Full of practical examples, Introduction to Scheduling presents the basic concepts and methods, fundamental results, and recent developments of scheduling theory. With contributions from highly respected experts, it provides self-contained, easy-to-follow, yet rigorous presentations of the material. The book first classifies scheduling problems and their complexity and then presents examples that demonstrate successful techniques for the design of efficient approximation algorithms. It also discusses classical problems, such as the famous makespan minimization problem, as well as more recent advances, such as energy-efficient scheduling algorithms. After focusing on job scheduling problems that encompass independent and possibly parallel jobs, the text moves on to a practical application of cyclic scheduling for the synthesis of embedded systems. It also proves that efficient schedules can be derived in the context of steady-state scheduling. Subsequent chapters discuss scheduling large and computer-intensive applications on parallel resources, illustrate different approaches of multi-objective scheduling, and show how to compare the performance of stochastic task-resource systems. The final chapter assesses the impact of platform models on scheduling techniques. From the basics to advanced topics and platform models, this volume provides a thorough introduction to the field. It reviews classical methods, explores more contemporary models, and shows how the techniques and algorithms are used in practice.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 16th International Conference on Parallel Computing, Euro-Par 2010, held in Ischia, Italy, in August/September 2010. The papers of these nine workshops (HeteroPar, HPCC, HiBB, CoreGrid, UCHPC, HPCF, PROPER, CCPI, and VHPC) focus on the promotion and advancement of all aspects of parallel and distributed computing.
Presenting a complementary perspective to standard books on algorithms, A Guide to Algorithm Design: Paradigms, Methods, and Complexity Analysis provides a roadmap for readers to determine the difficulty of an algorithmic problem by finding an optimal solution or proving complexity results. It gives a practical treatment of algorithmic complexity and guides readers in solving algorithmic problems. Divided into three parts, the book offers a comprehensive set of problems with solutions as well as in-depth case studies that demonstrate how to assess the complexity of a new problem. Part I helps readers understand the main design principles and design efficient algorithms. Part II covers polynomial reductions from NP-complete problems and approaches that go beyond NP-completeness. Part III supplies readers with tools and techniques to evaluate problem complexity, including how to determine which instances are polynomial and which are NP-hard. Drawing on the authors' classroom-tested material, this text takes readers step by step through the concepts and methods for analyzing algorithmic complexity. Through many problems and detailed examples, readers can investigate polynomial-time algorithms and NP-completeness and beyond.