Parallel Programming with OpenACC is a modern, practical guide to
implementing dependable computing systems. The book explains how
anyone can use OpenACC to quickly ramp up application performance
using high-level code directives called pragmas. The OpenACC
directive-based programming model is designed to provide a simple,
yet powerful, approach to accelerators without significant
programming effort. Author Rob Farber, working with a team of
expert contributors, demonstrates how to turn existing applications
into portable GPU-accelerated programs that deliver immediate
speedups. The book also helps users get the most from the latest
NVIDIA and AMD GPU plus multicore CPU architectures (and soon for
Intel® Xeon Phi™ as well). Downloadable example codes
provide hands-on OpenACC experience for common problems in
scientific, commercial, big-data, and real-time systems. Topics
include writing reusable code, asynchronous capabilities, using
libraries, multicore clusters, and much more. Each chapter explains
how a specific aspect of OpenACC technology fits, how it works, and
the pitfalls to avoid. Throughout, the book demonstrates the use of
simple working examples that can be adapted to solve application
needs.
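As a rough illustration of that directive-based style (a
hypothetical SAXPY loop, not one of the book's downloadable
examples), a single pragma is enough to ask an OpenACC compiler to
offload and parallelize a loop; a compiler without OpenACC support
simply ignores the directive and runs the loop sequentially:

    // saxpy.cpp -- minimal OpenACC sketch (illustrative only)
    #include <cstdio>
    #include <vector>

    void saxpy(int n, float a, const float* x, float* y) {
        // One directive offloads and parallelizes the loop; the data
        // clauses describe what must be copied to and from the device.
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        saxpy(n, 3.0f, x.data(), y.data());
        std::printf("y[0] = %f\n", y[0]);   // expect 5.0
        return 0;
    }

Compiled with an OpenACC-aware compiler (for example, nvc++ -acc),
the same source can target a GPU or multicore CPUs; without that
support it remains an ordinary sequential program.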
Parallelism is the key to achieving high performance in computing.
However, writing efficient and scalable parallel programs is
notoriously difficult, and often requires significant expertise. To
address this challenge, it is crucial to provide programmers with
high-level tools to enable them to develop solutions easily, and at
the same time emphasize the theoretical and practical aspects of
algorithm design to allow the solutions developed to run
efficiently under many different settings. This thesis addresses
this challenge using a three-pronged approach consisting of the
design of shared-memory programming techniques, frameworks, and
algorithms for important problems in computing. The thesis provides
evidence that with appropriate programming techniques, frameworks,
and algorithms, shared-memory programs can be simple, fast, and
scalable, both in theory and in practice. The results developed in
this thesis serve to ease the transition into the multicore era.
The first part of this thesis introduces tools and techniques for
deterministic parallel programming, including means for
encapsulating nondeterminism via powerful commutative building
blocks, as well as a novel framework for executing sequential
iterative loops in parallel, which lead to deterministic parallel
algorithms that are efficient both in theory and in practice. The
second part of this thesis introduces Ligra, the first high-level
shared memory framework for parallel graph traversal algorithms.
The framework allows programmers to express graph traversal
algorithms using very short and concise code, delivers performance
competitive with that of highly-optimized code, and is up to orders
of magnitude faster than existing systems designed for distributed
memory. This part of the thesis also introduces Ligra+, which
extends Ligra with graph compression techniques to reduce space
usage and improve parallel performance at the same time, and is
also the first graph processing system to support in-memory graph
compression. The third and fourth parts of this thesis bridge the
gap between theory and practice in parallel algorithm design by
introducing the first algorithms for a variety of important
problems on graphs and strings that are efficient both in theory
and in practice. For example, the thesis develops the first
linear-work and polylogarithmic-depth algorithms for suffix tree
construction and graph connectivity that are also practical, as
well as a work-efficient, polylogarithmic-depth, and
cache-efficient shared-memory algorithm for triangle computations
that achieves a 2-5x speedup over the best existing algorithms on
40 cores. This is a revised version of the thesis that won the 2015
ACM Doctoral Dissertation Award.
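To give a flavor of the frontier-centric style that Ligra
popularizes, the sketch below expresses breadth-first search as an
edgeMap over a vertex subset; the names follow the Ligra paper, but
the toy adjacency-list graph and the sequential loops are
simplifications for illustration, not Ligra's actual parallel
implementation:

    // bfs_frontier.cpp -- toy sketch of the edgeMap/vertexSubset style
    #include <cstdio>
    #include <vector>

    using Vertex = int;
    using VertexSubset = std::vector<Vertex>;        // current frontier
    using Graph = std::vector<std::vector<Vertex>>;  // adjacency lists

    // Apply f to every edge (s, d) with s in the frontier; d joins the
    // next frontier when c(d) holds and f(s, d) reports a new visit.
    template <class F, class C>
    VertexSubset edgeMap(const Graph& g, const VertexSubset& frontier,
                         F f, C c) {
        VertexSubset next;
        for (Vertex s : frontier)
            for (Vertex d : g[s])
                if (c(d) && f(s, d)) next.push_back(d);
        return next;
    }

    int main() {
        Graph g = {{1, 2}, {3}, {3}, {4}, {}};       // small example graph
        std::vector<Vertex> parent(g.size(), -1);
        Vertex root = 0;
        parent[root] = root;
        VertexSubset frontier = {root};
        while (!frontier.empty()) {
            frontier = edgeMap(
                g, frontier,
                [&](Vertex s, Vertex d) { parent[d] = s; return true; },
                [&](Vertex d) { return parent[d] == -1; });
        }
        for (Vertex v = 0; v < (Vertex)g.size(); ++v)
            std::printf("parent[%d] = %d\n", v, parent[v]);
        return 0;
    }

In Ligra itself, edgeMap runs in parallel over the frontier,
switches between sparse and dense traversal depending on the
frontier size, and uses atomic updates in place of the plain
assignment shown here.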
The development of software has expanded substantially in recent
years. As these technologies continue to advance, well-known
organizations have begun incorporating them into the way they
conduct business. These large companies play a vital role in
the economic environment, so understanding the software that they
utilize is pertinent in many aspects. Researching and analyzing the
tools that these corporations use will assist in the practice of
software engineering and give other organizations an outline of how
to successfully implement their own computational methods. Tools
and Techniques for Software Development in Large Organizations:
Emerging Research and Opportunities is an essential reference
source that discusses advanced software methods that prominent
companies have adopted to develop high-quality products. This book
examines the various tools that organizations such as Google,
Cisco, and Facebook have incorporated into their production and
development processes. Featuring research on topics such as
database management, quality assurance, and machine learning, this
book is ideally designed for software engineers, data scientists,
developers, programmers, professors, researchers, and students
seeking coverage of the advancement of software tools in today's
major corporations.
Wireless Public Safety Networks, Volume Two: A Systematic Approach
presents the latest advances in the wireless Public Safety Networks
(PSNs) field, the networks established by authorities to either
prepare the population for an imminent catastrophe, or those used
for support during crisis and normalization phases. Maintaining
communication capabilities in a disaster scenario is crucial for
avoiding loss of life and damage to property. This book examines
past communication failures that have directly contributed to the
loss of lives, giving readers in-depth discussions of the public
networks that impact emergency management, covering social media,
crowdsourcing techniques, wearable wireless sensors, moving-cells
scenarios, mobility management protocols, 5G networks, broadband
networks, data dissemination, and the resources of the frequency
spectrum.
Software development and design is an intricate and complex process
that requires a multitude of steps to ultimately create a quality
product. One crucial aspect of this process is minimizing potential
errors through software fault prediction. Enhancing Software Fault
Prediction With Machine Learning: Emerging Research and
Opportunities is an innovative source of material on the latest
advances and strategies for software quality prediction. Including
a range of pivotal topics such as case-based reasoning, rate of
improvement, and expert systems, this book is an ideal reference
source for engineers, researchers, academics, students,
professionals, and practitioners interested in novel developments
in software design and analysis.
An integral element of software engineering is model engineering.
Both endeavor to minimize cost, time, and risk while delivering
quality software. As such, model engineering is a highly useful
field that
demands in-depth research on the most current approaches and
techniques. Only by understanding the most up-to-date research can
these methods reach their fullest potential. Advancements in
Model-Driven Architecture in Software Engineering is an essential
publication that prepares readers to exercise modeling and model
transformation and covers state-of-the-art research and
developments on various approaches for methodologies and platforms
of model-driven architecture, applications and software development
of model-driven architecture, modeling languages, and modeling
tools. Highlighting a broad range of topics including cloud
computing, service-oriented architectures, and modeling languages,
this book is ideally designed for engineers, programmers, software
designers, entrepreneurs, researchers, academicians, and students.
This book describes recent innovations in 3D media and
technologies, with coverage of 3D media capturing, processing,
encoding, and adaptation, networking aspects for 3D Media, and
quality of user experience (QoE). The main contributions are based
on the results of the FP7 European Project ROMEO, which focuses on
new methods for the compression and delivery of 3D multi-view video
and spatial audio, as well as the optimization of networking and
compression jointly across the Future Internet
(www.ict-romeo.eu).
The delivery of 3D media to individual users remains a highly
challenging problem due to the large amount of data involved,
diverse network characteristics and user terminal requirements, as
well as the user's context, such as their preferences and location.
As the number of visual views increases, current systems will
struggle to meet the demanding requirements in terms of delivery of
constant video quality to both fixed and mobile users.
ROMEO will design and develop hybrid-networking solutions that
combine the DVB-T2 and DVB-NGH broadcast access network
technologies together with a QoE aware Peer-to-Peer (P2P)
distribution system that operates over wired and wireless links.
Live streaming 3D media needs to be received by collaborating users
at the same time or with imperceptible delay to enable them to
watch together while exchanging comments as if they were all in the
same location.
The volume provides state-of-the-art information on 3D
multi-view video, spatial audio, networking protocols for 3D media,
P2P 3D media streaming, and 3D Media delivery across heterogeneous
wireless networks among other topics. Graduate students and
professionals in electrical engineering and computer science with
an interest in 3D Future Internet Media will find this volume to be
essential reading.
Topics in Parallel and Distributed Computing provides resources and
guidance for those learning PDC as well as those teaching students
new to the discipline. The pervasiveness of computing devices
containing multicore CPUs and GPUs, including home and office PCs,
laptops, and mobile devices, is making even common users dependent
on parallel processing. Certainly, it is no longer sufficient for
even basic programmers to acquire only the traditional sequential
programming skills. The preceding trends point to the need for
imparting a broad-based skill set in PDC technology. However, the
rapid changes in computing hardware platforms and devices,
languages, supporting programming environments, and research
advances pose a challenge both for newcomers and seasoned
computer scientists. This edited collection has been developed over
the past several years in conjunction with the IEEE technical
committee on parallel processing (TCPP), which held several
workshops and discussions on learning parallel computing and
integrating parallel concepts into courses throughout computer
science curricula.
Mathematics has been used as a tool in logical reasoning for
centuries. Examining how specific mathematical structures can aid in
data and knowledge management helps determine how to efficiently
and effectively process more information in these fields. N-ary
Relations for Logical Analysis of Data and Knowledge is a critical
scholarly reference source that provides a detailed study of the
mathematical techniques currently involved in the progression of
information technology fields. Featuring relevant topics that
include algebraic sets, deductive analysis, defeasible reasoning,
and probabilistic modeling, this publication is ideal for
academicians, students, and researchers who are interested in
staying apprised of the latest research in the information
technology field.
The highly dynamic world of information technology service
management stresses the benefits of the quick and correct
implementation of IT services. A disciplined approach relies on a
different set of assumptions and principles than an agile approach,
both of which have complicated implementation processes as well as
copious benefits. Combining these two approaches to enhance the
effectiveness of each, while difficult, can yield exceptional
dividends. Balancing Agile and Disciplined Engineering and
Management Approaches for IT Services and Software Products is an
essential publication that focuses on clarifying theoretical
foundations of balanced design methods with conceptual frameworks
and empirical cases. Highlighting a broad range of topics including
business trends, IT service, and software development, this book is
ideally designed for software engineers, software developers,
programmers, information technology professionals, researchers,
academicians, and students.
As human activities moved to the digital domain, so did all the
well-known malicious behaviors including fraud, theft, and other
trickery. There is no silver bullet, and each security threat calls
for a specific answer. One specific threat is that applications
accept malformed inputs, and in many cases it is possible to craft
inputs that let an intruder take full control over the target
computer system. The nature of systems programming languages lies
at the heart of the problem. Rather than rewriting decades of
well-tested functionality, this book examines ways to live with the
(programming) sins of the past while shoring up security in the
most efficient manner possible. We explore a range of different
options, each making significant progress towards securing legacy
programs from malicious inputs. The solutions explored include
enforcement-type defenses, which exclude certain program
executions because they never arise during normal operation.
Another strand explores the idea of presenting adversaries with a
moving target that unpredictably changes its attack surface thanks
to randomization. We also cover tandem execution ideas where the
compromise of one executing clone causes it to diverge from another,
thus revealing adversarial activities. The main purpose of this
book is to provide readers with some of the most influential works
on run-time exploits and defenses. We hope that the material in
this book will inspire readers and generate new ideas and
paradigms.
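For a concrete sense of the class of flaw these defenses target,
consider the textbook unchecked copy below (a generic illustration,
not an example taken from the book): a long enough input overruns
the fixed-size buffer and overwrites adjacent stack memory,
including the saved return address.

    // overflow.cpp -- the classic unchecked-copy flaw (illustrative only)
    #include <cstdio>
    #include <cstring>

    void greet(const char* name) {
        char buf[16];
        // No bounds check: an input longer than 15 characters overruns
        // buf and clobbers adjacent stack memory, the foothold for the
        // run-time exploits discussed above.
        std::strcpy(buf, name);
        std::printf("Hello, %s\n", buf);
    }

    int main(int argc, char** argv) {
        greet(argc > 1 ? argv[1] : "world");
        return 0;
    }

Enforcement defenses (such as stack canaries), moving-target
defenses (such as address-space layout randomization), and tandem or
multi-variant execution each try to detect or derail exactly this
kind of corruption from a different angle.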