Today's smartphones utilize a rapidly developing range of
sophisticated applications, pushing the limits of mobile processing
power. The increased demand for cell phone applications has
necessitated the rise of mobile cloud computing, a technological
research arena which combines cloud computing, mobile computing,
and wireless networks to maximize the computational and data
storage capabilities of mobile devices. Enabling Real-Time Mobile
Cloud Computing through Emerging Technologies is an authoritative
and accessible resource that incorporates surveys, tutorials, and
the latest scholarly research on cellular technologies to explore
the latest developments in mobile and wireless computing
technologies. With its exhaustive coverage of emerging techniques,
protocols, and computational structures, this reference work is an
ideal tool for students, instructors, and researchers in the field
of telecommunications. This reference work features astute articles
on a wide range of current research topics including, but not
limited to, architectural communication components (cloudlets),
infrastructural components, secure mobile cloud computing, medical
cloud computing, network latency, and emerging open source
structures that optimize and accelerate smartphones.
GPU Parallel Program Development using CUDA teaches GPU programming
by showing the differences among different families of GPUs. This
approach prepares the reader for the next generation and future
generations of GPUs. The book emphasizes concepts that will remain
relevant for a long time, rather than concepts that are
platform-specific. At the same time, the book also provides
platform-dependent explanations that are as valuable as generalized
GPU concepts. The book consists of three separate parts; it starts
by explaining parallelism using CPU multi-threading in Part I. A
few simple programs are used to demonstrate the concept of dividing
a large task into multiple parallel sub-tasks and mapping them to
CPU threads. Multiple ways of parallelizing the same task are
analyzed and their pros/cons are studied in terms of both core and
memory operation. Part II of the book introduces GPU massive
parallelism. The same programs are parallelized on multiple Nvidia
GPU platforms and the same performance analysis is repeated.
Because the core and memory structures of CPUs and GPUs are
different, the results differ in interesting ways. The end goal is
to make programmers aware of all the good ideas, as well as the bad
ideas, so readers can apply the good ideas and avoid the bad ideas
in their own programs. Part III of the book provides pointers for
readers who want to expand their horizons. It provides a brief
introduction to popular CUDA libraries (such as cuBLAS, cuFFT, NPP,
and Thrust), the OpenCL programming language, an overview of GPU
programming using other programming languages and API libraries
(such as Python, OpenCV, OpenGL, and Apple's Swift and Metal), and
the deep learning library cuDNN.