As the future of software development in a global environment
continues to be influenced by the areas of service oriented
architecture (SOA) and cloud computing, many legacy applications
will need to migrate to these environments to take advantage of the
benefits offered by the service environment. Migrating Legacy
Applications: Challenges in Service Oriented Architecture and Cloud
Computing Environments presents a closer look at the partnership
between service oriented architecture and cloud computing
environments while analyzing potential solutions to challenges
related to the migration of legacy applications. This reference is
essential for students and university scholars alike.
The proliferation of wireless communications has led to mobile
computing, a new era in data communication and processing allowing
people to access information anywhere and anytime using lightweight
computer devices. Aligned with this phenomenon, a vast number of
mobile solutions, systems, and applications have been continuously
developed. However, despite the opportunities, there exist
constraints, challenges, and complexities in realizing the full
potential of mobile computing, requiring research and
experimentation. Algorithms, Methods, and Applications in Mobile
Computing and Communications is a critical scholarly publication
that examines the various aspects of mobile computing and
communications from engineering, business, and organizational
perspectives. The book details current research involving mobility
challenges that hinder service applicability, mobile money transfer
services and anomaly detection, and mobile fog environments. As a
resource rich in information about mobile devices, wireless
broadcast databases, and machine communications, it is an ideal
source for computer scientists, IT specialists, service providers,
information technology professionals, academicians, and researchers
interested in the field of mobile computing.
In recent years, many applications have dealt with constraint
decision-making systems, as their problems are based on imprecise
information and parameters. The nature of the data varies with the
application and is difficult to understand, requiring a specific
model of the system. Further research
on constraint decision-making systems in engineering is required.
Constraint Decision-Making Systems in Engineering derives and
explores several types of constraint decisions in engineering and
focuses on new and innovative conclusions based on problems, robust
and efficient systems, and linear and non-linear applications.
Covering topics such as fault detection, data mining techniques,
and knowledge-based management, this premier reference source is an
essential resource for engineers, managers, computer scientists,
students and educators of higher education, librarians,
researchers, and academicians.
Though traditionally information systems have been centralized,
these systems are now distributed over the web. This requires a
re-investigation into the way information systems are modeled and
designed. Because of this shift, critical problems, including
security, never-fail systems, and quality of service, have begun to
emerge. Novel Approaches to Information Systems Design is
an essential publication that explores the most recent,
cutting-edge research in information systems and exposes the reader
to emerging but relatively mature models and techniques in the
area. Highlighting a wide range of topics such as big data,
business intelligence, and energy efficiency, this publication is
ideally designed for managers, administrators, system developers,
information system engineers, researchers, academicians, and
graduate-level students seeking coverage on critical components of
information systems.
Mobile Sensors and Context-Aware Computing is a useful guide that
explains how hardware, software, sensors, and operating systems
converge to create a new generation of context-aware mobile
applications. This cohesive guide to the mobile computing landscape
demonstrates innovative mobile and sensor solutions for platforms
that deliver enhanced, personalized user experiences, with examples
including the fast-growing domains of mobile health and vehicular
networking. Users will learn how the convergence of mobile and
sensors facilitates cyber-physical systems and the Internet of
Things, and how applications that directly interact with the
physical world are becoming increasingly common. The authors
cover both the platform components and key issues of security,
privacy, power management, and wireless interaction with other
systems.
The Physics of Computing gives a foundational view of the physical
principles underlying computers. Performance, power, thermal
behavior, and reliability are all harder and harder to achieve as
transistors shrink to nanometer scales. This book describes the
physics of computing at all levels of abstraction from single gates
to complete computer systems. It can be used as a course for
juniors or seniors in computer engineering and electrical
engineering, and can also be used to teach students in other
scientific disciplines important concepts in computing. For
electrical engineering, the book provides the fundamentals of
computing that link core concepts to computing. For computer
science, it provides foundations of key challenges such as power
consumption, performance, and thermal behavior. The book can also be used as
a technical reference by professionals.
With the continual development of professional industries in
today's modernized world, certain technologies have become
increasingly applicable. Cyber-physical systems, specifically, are
a mechanism that has seen rapid implementation across numerous
fields. This is a technology that is constantly evolving, so
specialists need a handbook of research that keeps pace with the
advancements and methodologies of these devices. Tools and
Technologies for the Development of Cyber-Physical Systems is an
essential reference source that discusses recent advancements of
cyber-physical systems and its application within the health,
information, and computer science industries. Featuring research on
topics such as autonomous agents, power supply methods, and
software assessment, this book is ideally designed for data
scientists, technology developers, medical practitioners, computer
engineers, researchers, academicians, and students seeking coverage
on the development and various applications of cyber-physical
systems.
Present-day sophisticated, adaptive, and (to a certain degree)
autonomous robotic technology is a radically new stimulus for the
cognitive system of the human learner from the earliest to the
oldest age. It deserves extensive, thorough, and systematic
research based on novel frameworks for the analysis, modelling,
synthesis, and implementation of cyber-physical systems (CPSs) for
social applications.
Cyber-Physical Systems for Social Applications is a critical
scholarly book that examines the latest empirical findings for
designing cyber-physical systems for social applications and aims
at forwarding the symbolic human-robot perspective in areas that
include education, social communication, entertainment, and
artistic performance. Highlighting topics such as evolinguistics,
human-robot interaction, and neuroinformatics, this book is ideally
designed for social network developers, cognitive scientists,
education science experts, evolutionary linguists, researchers, and
academicians.
Four zettabytes (4 billion terabytes) of data were generated in
2013, with 44 zettabytes predicted for 2020 and 185 zettabytes for
2025. These staggering figures illustrate the new era of data
deluge: data has become a major economic and social challenge. The
weakest link in a computer system when processing this data is the
storage system, and it is therefore crucial to
optimize this operation. During the last decade, storage systems
have experienced a major revolution: the advent of flash memory.
Flash Memory Integration: Performance and Energy Issues contributes
to a better understanding of this revolution. The authors offer
insight into the integration of flash memory into computer systems
and its behavior in terms of performance and power consumption
compared to traditional storage systems. The book also presents, in
their entirety, various methods for measuring the performance and
energy consumption of storage systems for embedded as well as
desktop/server computer systems. We are invited on a journey to the
memories of the future.
Advances in Computers, the latest volume in the series published
since 1960, presents detailed coverage of innovations in computer
hardware, software, theory, design, and applications. In addition,
it provides contributors with a medium in which they can explore
their subjects in greater depth and breadth than journal articles
usually allow. As a result, many articles have become standard
references that continue to be of significant, lasting value in
this rapidly expanding field.
Autonomic networking aims to solve the mounting problems created by
increasingly complex networks, by enabling devices and
service-providers to decide, preferably without human intervention,
what to do at any given moment, and ultimately to create
self-managing networks that can interface with each other, adapting
their behavior to provide the best service to the end-user in all
situations. This book gives both an understanding and an assessment
of the principles, methods and architectures in autonomous network
management, as well as lessons learned from the ongoing
initiatives in the field. It includes contributions from industry
groups at Orange Labs, Motorola, Ericsson, the ANA EU Project and
leading universities. These groups all provide chapters examining
the international research projects to which they are contributing,
such as the EU Autonomic Network Architecture Project and Ambient
Networks EU Project, reviewing current developments and
demonstrating how autonomic management principles are used to
define new architectures, models, protocols, and mechanisms for
future network equipment.
Parallelism is the key to achieving high performance in computing.
However, writing efficient and scalable parallel programs is
notoriously difficult, and often requires significant expertise. To
address this challenge, it is crucial to provide programmers with
high-level tools to enable them to develop solutions easily, and at
the same time emphasize the theoretical and practical aspects of
algorithm design to allow the solutions developed to run
efficiently under many different settings. This thesis addresses
this challenge using a three-pronged approach consisting of the
design of shared-memory programming techniques, frameworks, and
algorithms for important problems in computing. The thesis provides
evidence that with appropriate programming techniques, frameworks,
and algorithms, shared-memory programs can be simple, fast, and
scalable, both in theory and in practice. The results developed in
this thesis serve to ease the transition into the multicore era.
The first part of this thesis introduces tools and techniques for
deterministic parallel programming, including means for
encapsulating nondeterminism via powerful commutative building
blocks, as well as a novel framework for executing sequential
iterative loops in parallel, which lead to deterministic parallel
algorithms that are efficient both in theory and in practice. The
second part of this thesis introduces Ligra, the first high-level
shared memory framework for parallel graph traversal algorithms.
The framework allows programmers to express graph traversal
algorithms using very short and concise code, delivers performance
competitive with that of highly-optimized code, and is up to orders
of magnitude faster than existing systems designed for distributed
memory. This part of the thesis also introduces Ligra+, which
extends Ligra with graph compression techniques to reduce space
usage and improve parallel performance at the same time, and is
also the first graph processing system to support in-memory graph
compression. The third and fourth parts of this thesis bridge the
gap between theory and practice in parallel algorithm design by
introducing the first algorithms for a variety of important
problems on graphs and strings that are efficient both in theory
and in practice. For example, the thesis develops the first
linear-work and polylogarithmic-depth algorithms for suffix tree
construction and graph connectivity that are also practical, as
well as a work-efficient, polylogarithmic-depth, and
cache-efficient shared-memory algorithm for triangle computations
that achieves a 2-5x speedup over the best existing algorithms on
40 cores. This is a revised version of the thesis that won the 2015
ACM Doctoral Dissertation Award.
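The frontier-based traversal model the thesis describes can be illustrated with a minimal, sequential sketch; the `edge_map` helper and its signature here are illustrative simplifications of the idea behind Ligra's edgeMap, not the framework's actual (parallel, C++) API.

```python
def edge_map(graph, frontier, update, condition):
    """Apply `update` over edges leaving the frontier; targets that
    pass `condition` and are claimed by `update` form the next frontier
    (a simplified, sequential take on a Ligra-style edgeMap)."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if condition(v) and update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    """Frontier-based BFS: each edge_map call expands one level."""
    parent = {source: source}
    frontier = {source}
    while frontier:
        frontier = edge_map(
            graph, frontier,
            update=lambda u, v: parent.setdefault(v, u) == u,  # claim v once
            condition=lambda v: v not in parent,               # unvisited only
        )
    return parent
```

Expressing BFS as a pair of short functions over a frontier is what lets such frameworks swap in sparse or dense edge-traversal strategies without changing the user's code.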
This book gives a review of the principles, methods and techniques
of important and emerging research topics and technologies in
Channel Coding, including theory, algorithms, and applications.
It is edited by leading figures in the field who, through their
reputation, have been able to commission experts to write on
particular topics. With this reference source you will: quickly
grasp a new area of research; understand the underlying principles
of a topic and its applications; and ascertain how a topic relates
to other areas, learning of the research issues yet to be resolved.
This book is a celebration of Leslie Lamport's work on concurrency,
interwoven in four-and-a-half decades of an evolving industry: from
the introduction of the first personal computer to an era when
parallel and distributed multiprocessors are abundant. His works
lay formal foundations for concurrent computations executed by
interconnected computers. Some of the algorithms have become
standard engineering practice for fault tolerant distributed
computing - distributed systems that continue to function correctly
despite failures of individual components. He also developed a
substantial body of work on the formal specification and
verification of concurrent systems, and has contributed to the
development of automated tools applying these methods. Part I
consists of technical chapters of the book and a biography. The
technical chapters of this book present a retrospective on
Lamport's original ideas from experts in the field. Through this
lens, it portrays their long-lasting impact. The chapters cover
timeless notions Lamport introduced: the Bakery algorithm, atomic
shared registers and sequential consistency; causality and logical
time; Byzantine Agreement; state machine replication and Paxos;
temporal logic of actions (TLA). The professional biography tells
of Lamport's career, providing the context in which his work arose
and broke new ground, and discusses LaTeX - perhaps Lamport's most
influential contribution outside the field of concurrency. This
chapter gives a voice to the people behind the achievements,
notably Lamport himself, and additionally the colleagues around
him, who inspired, collaborated, and helped him drive worldwide
impact. Part II consists of a selection of Leslie Lamport's most
influential papers. This book touches on a lifetime of
contributions by Leslie Lamport to the field of concurrency and on
the extensive influence he had on people working in the field. It
will be of value to historians of science, and to researchers and
students who work in the area of concurrency and who are interested
to read about the work of one of the most influential researchers
in this field.
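One of the timeless notions the chapters revisit, Lamport's logical clocks, fits in a few lines; the `Process` class and message shape below are an illustrative sketch of the rule, not code drawn from the book.

```python
class Process:
    """A process carrying a Lamport logical clock."""

    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1          # every local event ticks the clock
        return self.clock

    def send(self):
        self.clock += 1          # sending is itself an event
        return self.clock        # the timestamp travels with the message

    def receive(self, timestamp):
        # Receiving takes max(local, received) + 1, so if event a
        # happened-before event b, then clock(a) < clock(b).
        self.clock = max(self.clock, timestamp) + 1
        return self.clock
```

For example, if process P sends a message with timestamp 1 to process Q whose clock is already 1, Q's receive event is stamped 2, preserving the causal order of send before receive.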