Advances in Computers, the latest volume in the series published
since 1960, presents detailed coverage of innovations in computer
hardware, software, theory, design, and applications. In addition,
it provides contributors with a medium in which they can explore
their subjects in greater depth and breadth than journal articles
usually allow. As a result, many articles have become standard
references that continue to be of significant, lasting value in
this rapidly expanding field.
The Physics of Computing gives a foundational view of the physical
principles underlying computers. Performance, power, thermal
behavior, and reliability all become harder and harder to manage as
transistors shrink to nanometer scales. This book describes the
physics of computing at all levels of abstraction, from single
gates to complete computer systems. It can be used as a text for
junior- or senior-level courses in computer engineering and
electrical engineering, and can also be used to teach students in
other scientific disciplines important concepts in computing. For
electrical engineering, the book provides the fundamentals that
link core electrical-engineering concepts to computing. For
computer science, it provides foundations of key challenges such as
power consumption, performance, and thermal management. The book
can also be used as a technical reference by professionals.
Elementary Statistics: A Guide to Data Analysis Using R provides
students with an introduction to both the field of statistics and
R, one of the most widely used languages for statistical computing,
analysis, and graphing in a variety of fields, including the
sciences, finance, banking, health care, e-commerce, and marketing.
Part I provides an overview of both statistics and R. Part II
focuses on descriptive statistics and probability. In Part III,
students learn about discrete and continuous probability
distributions with chapters addressing probability distributions,
binomial probability distributions, and normal probability
distributions. Part IV addresses statistical inference, with content
covering confidence intervals, hypothesis testing, chi-square tests
and F-distributions. The final part explores additional statistical
inference and assumptions, including correlation, regression, and
nonparametric statistics. Helpful appendices provide students with
an index of terminology, an index of applications, a glossary of
symbols, and a guide to the most common R commands. Elementary
Statistics is an ideal resource for introductory courses in
undergraduate statistics, graduate statistics, and data analysis
across the disciplines.
Complex Systems and Clouds: A Self-Organization and Self-Management
Perspective provides insights into the intricate world of
self-organizing systems. Large-scale distributed computer systems
have evolved into very complex systems and have reached the point
where they need to borrow self-adaptive and self-organizing
concepts from nature. The book explores complexity in large
distributed systems and in natural processes in physics and
chemistry, building a platform for understanding how
self-organization in large distributed systems can
be achieved. It goes beyond the theoretical description of
self-organization to present principles for designing
self-organizing systems, and concludes by showing the need for a
paradigm shift in the development of large-scale systems from
strictly deterministic to non-deterministic and adaptive.
Cyber-Physical Systems: Foundations, Principles and Applications
explores the core system science perspective needed to design and
build complex cyber-physical systems. Using Systems Science's
underlying theories, such as probability theory, decision theory,
game theory, organizational sociology, behavioral economics, and
cognitive psychology, the book addresses foundational issues
central across CPS applications: system design (how to design CPS
to be safe, secure, and resilient in rapidly evolving
environments); system verification (how to develop effective
metrics and methods to verify and certify large and complex CPS);
real-time control and adaptation (how to achieve real-time dynamic
control and behavior adaptation in diverse environments, such as
clouds and network-challenged spaces); and manufacturing (how to
harness communication, computation, and control for developing new
products, reducing product concepts to realizable designs, and
producing integrated software-hardware systems at a pace far
exceeding today's timeline). The book is part of the
Intelligent Data-Centric Systems: Sensor-Collected Intelligence
series edited by Fatos Xhafa, Technical University of Catalonia.
Indexing: The books of this series are submitted to EI-Compendex
and SCOPUS.
Usability Testing for Survey Research provides researchers with a
guide to the tools necessary to evaluate, test, and modify surveys
iteratively during the survey pretesting process. It
includes examples that apply usability to any type of survey during
any stage of development, along with tactics on how to tailor
usability testing to meet budget and scheduling constraints. The
book's authors distill their experience to provide tips on how
usability testing can be applied to paper surveys, mixed-mode
surveys, interviewer-administered tools, and additional products.
Readers will gain an understanding of usability and usability
testing and why they are needed for survey research; guidance on
how to design and conduct usability tests and how to analyze and
report findings; ideas for how to tailor usability testing to meet
budget and schedule constraints; and new knowledge on how to apply
usability testing to other survey-related products, such as project
websites and interviewer-administered tools.
Parallelism is the key to achieving high performance in computing.
However, writing efficient and scalable parallel programs is
notoriously difficult, and often requires significant expertise. To
address this challenge, it is crucial to provide programmers with
high-level tools to enable them to develop solutions easily, and at
the same time emphasize the theoretical and practical aspects of
algorithm design to allow the solutions developed to run
efficiently under many different settings. This thesis addresses
this challenge using a three-pronged approach consisting of the
design of shared-memory programming techniques, frameworks, and
algorithms for important problems in computing. The thesis provides
evidence that with appropriate programming techniques, frameworks,
and algorithms, shared-memory programs can be simple, fast, and
scalable, both in theory and in practice. The results developed in
this thesis serve to ease the transition into the multicore era.
The first part of this thesis introduces tools and techniques for
deterministic parallel programming, including means for
encapsulating nondeterminism via powerful commutative building
blocks, as well as a novel framework for executing sequential
iterative loops in parallel, which lead to deterministic parallel
algorithms that are efficient both in theory and in practice. The
second part of this thesis introduces Ligra, the first high-level
shared memory framework for parallel graph traversal algorithms.
The framework allows programmers to express graph traversal
algorithms using very short and concise code, delivers performance
competitive with that of highly-optimized code, and is up to orders
of magnitude faster than existing systems designed for distributed
memory. This part of the thesis also introduces Ligra+, which
extends Ligra with graph compression techniques to reduce space
usage and improve parallel performance at the same time, and is
also the first graph processing system to support in-memory graph
compression. The third and fourth parts of this thesis bridge the
gap between theory and practice in parallel algorithm design by
introducing the first algorithms for a variety of important
problems on graphs and strings that are efficient both in theory
and in practice. For example, the thesis develops the first
linear-work and polylogarithmic-depth algorithms for suffix tree
construction and graph connectivity that are also practical, as
well as a work-efficient, polylogarithmic-depth, and
cache-efficient shared-memory algorithm for triangle computations
that achieves a 2-5x speedup over the best existing algorithms on
40 cores. This is a revised version of the thesis that won the 2015
ACM Doctoral Dissertation Award.
Parallel Programming with OpenACC is a modern, practical guide to
implementing dependable computing systems. The book explains how
anyone can use OpenACC to quickly ramp up application performance
using high-level code directives called pragmas. The OpenACC
directive-based programming model is designed to provide a simple,
yet powerful, approach to accelerators without significant
programming effort. Author Rob Farber, working with a team of
expert contributors, demonstrates how to turn existing applications
into portable GPU accelerated programs that demonstrate immediate
speedups. The book also helps users get the most from the latest
NVIDIA and AMD GPU plus multicore CPU architectures (and soon for
Intel (R) Xeon Phi (TM) as well). Downloadable example codes
provide hands-on OpenACC experience for common problems in
scientific, commercial, big-data, and real-time systems. Topics
include writing reusable code, asynchronous capabilities, using
libraries, multicore clusters, and much more. Each chapter explains
how a specific aspect of OpenACC technology fits, how it works, and
the pitfalls to avoid. Throughout, the book demonstrates the use of
simple working examples that can be adapted to solve application
needs.
The development of software has expanded substantially in recent
years. As these technologies continue to advance, well-known
organizations have begun implementing these programs into the ways
they conduct business. These large companies play a vital role in
the economic environment, so understanding the software they
utilize is important in many respects. Researching and analyzing the
tools that these corporations use will assist in the practice of
software engineering and give other organizations an outline of how
to successfully implement their own computational methods. Tools
and Techniques for Software Development in Large Organizations:
Emerging Research and Opportunities is an essential reference
source that discusses advanced software methods that prominent
companies have adopted to develop high quality products. This book
will examine the various devices that organizations such as Google,
Cisco, and Facebook have implemented into their production and
development processes. Featuring research on topics such as
database management, quality assurance, and machine learning, this
book is ideally designed for software engineers, data scientists,
developers, programmers, professors, researchers, and students
seeking coverage on the advancement of software devices in today's
major corporations.
Certifiable Software Applications 1: Main Processes is dedicated to
the establishment of quality assurance and safety assurance. It
establishes the context for achieving a certifiable software
application. In it, the author covers recent developments such as
the module, component and product line approach. Applicable
standards are presented and security principles are described and
discussed. Finally, the requirements for mastering quality and
configuration are explained. In this book the reader will find the
fundamental practices from the field and an introduction to the
concept of software application.