Showing 1 - 3 of 3 matches in All Departments
|
High Performance Computing - 31st International Conference, ISC High Performance 2016, Frankfurt, Germany, June 19-23, 2016, Proceedings (Paperback, 1st ed. 2016)
Julian M. Kunkel, Pavan Balaji, Jack Dongarra
|
R1,469
Discovery Miles 14 690
|
Ships in 18 - 22 working days
|
This book constitutes the refereed proceedings of the 31st
International Conference, ISC High Performance 2016 (formerly known
as the International Supercomputing Conference), held in Frankfurt,
Germany, in June 2016. The 25 revised full papers presented in this
book were carefully reviewed and selected from 60 submissions. The
papers cover the following topics: autotuning and thread mapping;
data locality and decomposition; scalable applications; machine
learning; datacenters and cloud; communication runtime; Intel Xeon
Phi; manycore architectures; extreme-scale computations; and
resilience.
|
High Performance Computing - 32nd International Conference, ISC High Performance 2017, Frankfurt, Germany, June 18-22, 2017, Proceedings (Paperback, 1st ed. 2017)
Julian M. Kunkel, Rio Yokota, Pavan Balaji, David Keyes
|
R1,449
Discovery Miles 14 490
|
Ships in 18 - 22 working days
|
This book constitutes the refereed proceedings of the 32nd
International Conference, ISC High Performance 2017, held in
Frankfurt, Germany, in June 2017. The 22 revised full papers
presented in this book were carefully reviewed and selected from 66
submissions. The papers cover the following topics: applications
and algorithms; proxy applications; architecture and system
optimization; and energy-aware computing.
|
Programming Models for Parallel Computing (Paperback)
Pavan Balaji
|
An overview of the most prominent contemporary parallel processing
programming models, written in a unique tutorial style. With the
coming of the parallel computing era, computer scientists have
turned their attention to designing programming models that are
suited for high-performance parallel computing and supercomputing
systems. Programming parallel systems is complicated by the fact
that multiple processing units are simultaneously computing and
moving data. This book offers an overview of some of the most
prominent parallel programming models used in high-performance
computing and supercomputing systems today. The chapters describe
the programming models in a unique tutorial style rather than using
the formal approach taken in the research literature. The aim is to
cover a wide range of parallel programming models, enabling the
reader to understand what each has to offer. The book begins with a
description of the Message Passing Interface (MPI), the most common
parallel programming model for distributed memory computing. It
goes on to cover one-sided communication models, ranging from
low-level runtime libraries (GASNet, OpenSHMEM) to high-level
programming models (UPC, GA, Chapel); task-oriented programming
models (Charm++, ADLB, Scioto, Swift, CnC) that allow users to
describe their computation and data units as tasks so that the
runtime system can manage computation and data movement as
necessary; and parallel programming models intended for on-node
parallelism in the context of multicore architecture or attached
accelerators (OpenMP, Cilk Plus, TBB, CUDA, OpenCL). The book will
be a valuable resource for graduate students, researchers, and any
scientist who works with data sets and large computations.
Contributors: Timothy Armstrong, Michael G. Burke, Ralph Butler,
Bradford L. Chamberlain, Sunita Chandrasekaran, Barbara Chapman,
Jeff Daily, James Dinan, Deepak Eachempati, Ian T. Foster, William
D. Gropp, Paul Hargrove, Wen-mei Hwu, Nikhil Jain, Laxmikant Kale,
David Kirk, Kath Knobe, Sriram Krishnamoorthy, Jeffery A. Kuehn,
Alexey Kukanov, Charles E. Leiserson, Jonathan Lifflander, Ewing
Lusk, Tim Mattson, Bruce Palmer, Steven C. Pieper, Stephen W.
Poole, Arch D. Robison, Frank Schlimbach, Rajeev Thakur, Abhinav
Vishnu, Justin M. Wozniak, Michael Wilde, Kathy Yelick, Yili Zheng
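The message-passing pattern the blurb describes (distinct processes that compute independently and exchange data only via explicit send/receive, as in MPI) can be sketched without an MPI runtime. The example below is not from the book; it uses only Python's standard multiprocessing module as a rough stand-in, with a Pipe playing the role of a point-to-point channel between two ranks.

```python
# A minimal sketch of MPI-style message passing between processes,
# using Python's standard library as a stand-in for a real MPI runtime.
from multiprocessing import Process, Pipe


def worker(conn):
    # Each "rank" owns its data and communicates only by messages.
    data = conn.recv()                    # blocking receive (like MPI_Recv)
    conn.send(sum(x * x for x in data))   # send result back (like MPI_Send)
    conn.close()


def main():
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send([1, 2, 3, 4])   # distribute work to the worker process
    result = parent.recv()      # gather the partial result
    p.join()
    return result


if __name__ == "__main__":
    print(main())  # sum of squares: 1 + 4 + 9 + 16 = 30
```

Real MPI programs follow the same shape but scale to many ranks across nodes; the complication the blurb points to is exactly that computation and data movement like this happen on all ranks simultaneously.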