Showing 1 - 6 of 6 matches in All Departments
General-purpose graphics processing units (GPGPUs) have emerged as an important class of shared memory parallel processing architectures, with widespread deployment in every computer class from high-end supercomputers to embedded mobile platforms. Relative to more traditional multicore systems of today, GPGPUs have distinctly higher degrees of hardware multithreading (hundreds of hardware thread contexts vs. tens), a return to wide vector units (several tens vs. 1-10), memory architectures that deliver higher peak memory bandwidth (hundreds of gigabytes per second vs. tens), and smaller caches/scratchpad memories (less than 1 megabyte vs. 1-10 megabytes). In this book, we provide a high-level overview of current GPGPU architectures and programming models. We review the principles used in previous shared memory parallel platforms, focusing on recent results in both the theory and practice of parallel algorithms, and suggest a connection to GPGPU platforms. We aim to give architects insight into how algorithm characteristics map onto GPGPU hardware. We also provide detailed performance analysis and optimization guidance, from high-level algorithmic choices down to low-level instruction-level tuning. As a case study, we use the fast multipole method (FMM), an algorithm for n-body particle simulations. We also briefly survey the state-of-the-art in GPU performance analysis tools and techniques. Table of Contents: GPU Design, Programming, and Trends / Performance Principles / From Principles to Practice: Analysis and Tuning / Using Detailed Performance Analysis to Guide Optimization
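To make the degree of multithreading described above concrete, here is a minimal sketch (not code from the book) of the brute-force O(N^2) all-pairs force summation that the FMM case study approximates, written as a CUDA kernel with one thread per particle; the body_t struct and the compute_forces name are illustrative assumptions.

    // Hedged sketch: the O(N^2) n-body force sum that the fast multipole
    // method accelerates. One CUDA thread per particle, so a million-particle
    // input exposes roughly a million-way parallel launch.
    #include <cuda_runtime.h>

    struct body_t { float x, y, z, m; };   // illustrative particle record

    __global__ void compute_forces(const body_t *bodies, float3 *force, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float3 f = make_float3(0.0f, 0.0f, 0.0f);
        for (int j = 0; j < n; ++j) {                        // all-pairs loop
            float dx = bodies[j].x - bodies[i].x;
            float dy = bodies[j].y - bodies[i].y;
            float dz = bodies[j].z - bodies[i].z;
            float r2 = dx * dx + dy * dy + dz * dz + 1e-9f;  // softening term
            float inv_r = rsqrtf(r2);
            float s = bodies[j].m * inv_r * inv_r * inv_r;   // m_j / r^3
            f.x += s * dx; f.y += s * dy; f.z += s * dz;
        }
        force[i] = f;
    }

    // Example launch, one thread per body:
    //   compute_forces<<<(n + 255) / 256, 256>>>(d_bodies, d_force, n);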
Programming Massively Parallel Processors: A Hands-on Approach shows students and professionals alike the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. For this new edition, the authors have updated their coverage of CUDA, including the concept of unified memory, and expanded content in areas such as threads, while retaining the concise, intuitive, practical approach based on years of road-testing in the authors' own parallel computing courses.
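One of the topics this edition adds, unified memory, can be illustrated with a short CUDA sketch: cudaMallocManaged returns a single pointer that both the host loop and the device kernel dereference, with no explicit cudaMemcpy. The kernel and variable names below are illustrative, not taken from the book.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal unified-memory sketch: one allocation visible to CPU and GPU.
    __global__ void scale(float *x, int n, float a) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr;
        cudaMallocManaged(&x, n * sizeof(float));     // managed (unified) memory
        for (int i = 0; i < n; ++i) x[i] = 1.0f;      // initialize on the host
        scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  // run on the device
        cudaDeviceSynchronize();                      // wait before host reads
        printf("x[0] = %.1f\n", x[0]);                // prints 2.0
        cudaFree(x);
        return 0;
    }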
This is the second volume of Morgan Kaufmann's "GPU Computing Gems," offering an all-new set of insights, ideas, and practical "hands-on" skills from researchers and developers worldwide. Each chapter gives you a window into the work being performed across a variety of application domains, and the opportunity to witness the impact of parallel GPU computing on the efficiency of scientific research. "GPU Computing Gems: Jade Edition" showcases the latest research solutions with GPGPU and CUDA, including:
- Improving memory access patterns for cellular automata using CUDA
- Large-scale gas turbine simulations on GPU clusters
- Identifying and mitigating credit risk using large-scale economic capital simulations
- GPU-powered MATLAB acceleration with Jacket
- Biologically-inspired machine vision
- An efficient CUDA algorithm for the maximum network flow problem
- 30 more chapters of innovative GPU computing ideas, written to be accessible to researchers from any industry
"GPU Computing Gems: Jade Edition" contains 100% new material covering a variety of application domains: algorithms and data structures, engineering, interactive physics for games, computational finance, and programming tools.
..".the perfect companion to "Programming Massively Parallel Processors" by Hwu & Kirk." -Nicolas Pinto, Research Scientist at Harvard & MIT, NVIDIA Fellow 2009-2010 Graphics processing units (GPUs) can do much more than render graphics. Scientists and researchers increasingly look to GPUs to improve the efficiency and performance of computationally-intensive experiments across a range of disciplines. "GPU Computing Gems: Emerald Edition" brings their techniques to you, showcasing GPU-based solutions including: Black hole simulations with CUDAGPU-accelerated computation and interactive display of molecular orbitalsTemporal data mining for neuroscienceGPU -based parallelization for fast circuit optimizationFast graph cuts for computer visionReal-time stereo on GPGPU using progressive multi-resolution adaptive windowsGPU image demosaicingTomographic image reconstruction from unordered lines with CUDAMedical image processing using GPU -accelerated ITK image filters"41 more chapters" of innovative GPU computing ideas, written to be accessible to researchers from any domain "GPU Computing Gems: Emerald Edition" is the first volume in
Morgan Kaufmann's Applications of GPU Computing Series, offering
the latest insights and research in computer vision, electronic
design automation, emerging data-intensive applications, life
sciences, medical imaging, ray tracing and rendering, scientific
simulation, signal and audio processing, statistical modeling, and
video / image processing.
Heterogeneous System Architecture: A New Compute Platform Infrastructure presents a next-generation hardware platform, and associated software, that allows processors of different types to work efficiently and cooperatively in shared memory from a single source program. HSA also defines a virtual ISA for parallel routines or kernels, which is vendor- and ISA-independent, enabling single-source programs to execute across any HSA-compliant heterogeneous processor, from those used in smartphones to supercomputers. The book begins with an overview of the evolution of heterogeneous parallel processing, its associated problems, and how they are overcome with HSA. Later chapters provide a deeper perspective on topics such as the runtime, memory model, queuing, context switching, the architected queuing language, simulators, and tool chains. Finally, three real-world examples are presented, which provide an early demonstration of how HSA can deliver significantly higher performance through C++-based applications. Contributing authors are HSA Foundation members who are experts from both academia and industry. Some of these distinguished authors are listed here in alphabetical order: Yeh-Ching Chung, Benedict R. Gaster, Juan Gomez-Luna, Derek Hower, Lee Howes, Shih-Hao Hung, Thomas B. Jablin, David Kaeli, Phil Rogers, Ben Sander, I-Jui (Ray) Sung.
Programming Massively Parallel Processors: A Hands-on Approach, Third Edition shows students and professionals alike the basic concepts of parallel programming and GPU architecture, exploring, in detail, various techniques for constructing parallel programs. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. For this new edition, the authors have updated their coverage of CUDA, including coverage of newer libraries such as cuDNN, moved content that has become less important to appendices, added two new chapters on parallel patterns, and updated case studies to reflect current industry practices.
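As a hedged example of the kind of parallel pattern the book discusses (the kernel below is an illustrative sketch, not the book's code), a shared-memory tree reduction computes one partial sum per thread block:

    #include <cuda_runtime.h>

    // Block-level tree reduction: each block sums 256 inputs in shared memory
    // and writes one partial sum. Assumes a launch with 256 threads per block.
    __global__ void block_sum(const float *in, float *out, int n) {
        __shared__ float s[256];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;
        s[tid] = (i < n) ? in[i] : 0.0f;              // load or pad with zero
        __syncthreads();
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride) s[tid] += s[tid + stride];
            __syncthreads();                          // keep all threads in step
        }
        if (tid == 0) out[blockIdx.x] = s[0];         // one result per block
    }

    // Example launch:
    //   block_sum<<<(n + 255) / 256, 256>>>(d_in, d_out, n);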