Edsger Wybe Dijkstra (1930-2002) was one of the most influential
researchers in the history of computer science, making fundamental
contributions to both the theory and practice of computing. Early
in his career, he devised an algorithm for the single-source shortest path problem, now commonly referred to as Dijkstra's algorithm. He
wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and
designed and implemented with his colleagues the influential THE
operating system. Dijkstra invented the field of concurrent
algorithms, with concepts such as mutual exclusion, deadlock
detection, and synchronization. A prolific writer and forceful
proponent of the concept of structured programming, he convincingly
argued against the use of the Go To statement. In 1972 he was
awarded the ACM Turing Award for "fundamental contributions to
programming as a high, intellectual challenge; for eloquent
insistence and practical demonstration that programs should be
composed correctly, not just debugged into correctness; for
illuminating perception of problems at the foundations of program
design." Subsequently he invented the concept of self-stabilization
relevant to fault-tolerant computing. He also devised an elegant
language for nondeterministic programming and its weakest
precondition semantics, featured in his influential 1976 book A
Discipline of Programming in which he advocated the development of
programs in concert with their correctness proofs. In the later
stages of his life, he devoted much attention to the development
and presentation of mathematical proofs, providing further support
to his long-held view that the programming process should be viewed
as a mathematical activity. In this unique new book, 31 computer
scientists, including five recipients of the Turing Award, present
and discuss Dijkstra's numerous contributions to computing science
and assess their impact. Several authors knew Dijkstra as a friend,
teacher, lecturer, or colleague. Their biographical essays and
tributes provide a fascinating multi-author picture of Dijkstra,
from the early days of his career up to the end of his life.
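As a concrete illustration of the single-source shortest-path idea credited to Dijkstra above, the following is a minimal Python sketch; the adjacency-list representation and the function name dijkstra are illustrative choices of this summary, not material from the book.

    import heapq

    def dijkstra(graph, source):
        # graph: dict mapping each vertex to a list of (neighbor, weight) pairs
        # with non-negative weights; returns shortest distances from source.
        dist = {source: 0}
        heap = [(0, source)]                # (distance, vertex) priority queue
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                    # stale entry; a shorter path was already found
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Example: dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a")
    # returns {"a": 0, "b": 2, "c": 3}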
Shape grammar and space syntax have been separately developed but
rarely combined in any significant way. The first of these is
typically used to investigate or generate the formal or geometric
properties of architecture, while the second is used to analyze the
spatial, topological, or social properties of architecture. Despite
the reciprocal relationship between form and space in architecture (it is difficult to conceptualize a completed building without a sense of both properties), the two major computational theories have been largely developed and applied in isolation from one another. Grammatical and Syntactical Approaches
in Architecture: Emerging Research and Opportunities is a critical
scholarly resource that explores the relationship between shape
grammar and space syntax for urban planning and architecture and
enables the creative discovery of both the formal and spatial
features of an architectural style or type. This book, furthermore,
presents a new method to selectively capture aspects of both the
grammar and syntax of architecture. Featuring a range of topics
such as mathematical analysis, spatial configuration, and domestic
architecture, this book is essential for architects, policymakers,
urban planners, researchers, academicians, and students.
In recent years, many applications have had to deal with constrained decision-making because their underlying problems are based on imprecise information and parameters. The nature of the data varies from application to application, and a specific model is needed to capture the behavior of each system. Further research on constraint decision-making systems in engineering is therefore required.
Constraint Decision-Making Systems in Engineering derives and
explores several types of constraint decisions in engineering and
focuses on new and innovative conclusions based on problems, robust
and efficient systems, and linear and non-linear applications.
Covering topics such as fault detection, data mining techniques,
and knowledge-based management, this premier reference source is an
essential resource for engineers, managers, computer scientists,
students and educators of higher education, librarians,
researchers, and academicians.
If you look around you will find that all computer systems, from
your portable devices to the strongest supercomputers, are
heterogeneous in nature. The most obvious heterogeneity is the existence of computing nodes of different capabilities (e.g., multicore CPUs, GPUs, FPGAs, ...). But other forms of heterogeneity also exist in computing systems, such as in the memory system components, the interconnect, and so on. The main reason for these different types of heterogeneity is to achieve good performance together with power efficiency. Heterogeneous computing results in both
challenges and opportunities. This book discusses both. It shows
that we need to deal with these challenges at all levels of the
computing stack: from algorithms all the way to process technology.
We discuss the topic of heterogeneous computing from different
angles: hardware challenges, current hardware state-of-the-art,
software issues, how to make the best use of the current
heterogeneous systems, and what lies ahead. The aim of this book is
to introduce the big picture of heterogeneous computing. Whether
you are a hardware designer or a software developer, you need to
know how the pieces of the puzzle fit together. The main goal is to bring researchers and engineers to the forefront of a field that entered a new era a few years ago and is expected to keep evolving for decades. We believe that academics,
researchers, practitioners, and students will benefit from this
book and will be prepared to tackle the big wave of heterogeneous
computing that is here to stay.
Distributed systems intertwine with our everyday lives. The
benefits and current shortcomings of the underpinning technologies
are experienced by a wide range of people and their smart devices.
With the rise of large-scale IoT and similar distributed systems,
cloud bursting technologies, and partial outsourcing solutions,
private entities are encouraged to increase their efficiency and
offer unparalleled availability and reliability to their users.
Applying Integration Techniques and Methods in Distributed Systems is a critical scholarly publication that defines the current state of distributed systems, determines further goals, and presents architectures and service frameworks for achieving highly integrated distributed systems, along with solutions to the integration and efficient management challenges faced by current and future distributed systems. Highlighting topics such as multimedia,
programming languages, and smart environments, this book is ideal
for system administrators, integrators, designers, developers,
researchers, and academicians.
As technology continues to advance in today's global market,
practitioners are targeting systems with significant levels of
applicability and variance. Instrumentation is a multidisciplinary subject that finds a wide range of uses in several professional fields, particularly engineering. Instrumentation plays a key role
in numerous daily processes and has seen substantial advancement in
recent years. It is of utmost importance for engineering
professionals to understand the modern developments of instruments
and how they affect everyday life. Advancements in Instrumentation
and Control in Applied System Applications is a collection of
innovative research on the methods and implementations of
instrumentation in real-world practices including communication,
transportation, and biomedical systems. While highlighting topics
including smart sensor design, medical image processing, and atrial
fibrillation, this book is ideally designed for researchers,
software engineers, technologists, developers, scientists,
designers, IT professionals, academicians, and post-graduate
students seeking current research on recent developments within
instrumentation systems and their applicability in daily life.
Recent years have witnessed the rise of the analysis of massive, complex real-world phenomena modeled as graphs; to solve these large-scale graph problems efficiently, it is necessary to exploit high-performance computing (HPC), which accelerates the innovation process for the discovery and invention of new products and procedures in network science. Creativity in Load-Balance Schemes for
Multi/Many-Core Heterogeneous Graph Computing: Emerging Research
and Opportunities is a critical scholarly resource that examines
trends, challenges, and collaborative processes in emerging fields
within complex network analysis. Featuring coverage on a broad
range of topics such as high-performance computing, big data,
network science, and accelerated network traversal, this book is
geared towards data analysts, researchers, students in information
communication technology (ICT), program developers, and academics.
For courses in logic and computer design.
Understanding Logic and Computer Design for All Audiences
Logic and Computer Design Fundamentals is a thoroughly up-to-date text that makes logic design, digital system design, and computer design available to students of all levels. The Fifth Edition brings this widely recognised source up to modern standards by ensuring that all information is relevant and contemporary. The material focuses on industry trends and successfully bridges the gap created by the much higher levels of abstraction that students in the field must work with today compared with the past. Broadly covering logic and computer design, Logic and Computer Design Fundamentals is a flexibly organised text that allows instructors to tailor its use to a wide range of student audiences.
Though traditionally information systems have been centralized,
these systems are now distributed over the web. This requires a
re-investigation into the way information systems are modeled and
designed. With this shift, critical problems, including security, never-fail systems, and quality of service, have begun to emerge. Novel Approaches to Information Systems Design is
an essential publication that explores the most recent,
cutting-edge research in information systems and exposes the reader
to emerging but relatively mature models and techniques in the
area. Highlighting a wide range of topics such as big data,
business intelligence, and energy efficiency, this publication is
ideally designed for managers, administrators, system developers,
information system engineers, researchers, academicians, and
graduate-level students seeking coverage on critical components of
information systems.
Present-day robotic technology, which is sophisticated, adaptive, and autonomous to a certain degree, is a radically new stimulus for the cognitive system of the human learner from the earliest to the oldest age. It deserves extensive, thorough, and systematic research based on novel frameworks for the analysis, modelling, synthesis, and implementation of cyber-physical systems (CPSs) for social applications.
Cyber-Physical Systems for Social Applications is a critical
scholarly book that examines the latest empirical findings for
designing cyber-physical systems for social applications and aims
at forwarding the symbolic human-robot perspective in areas that
include education, social communication, entertainment, and
artistic performance. Highlighting topics such as evolinguistics,
human-robot interaction, and neuroinformatics, this book is ideally
designed for social network developers, cognitive scientists,
education science experts, evolutionary linguists, researchers, and
academicians.
Parallelism is the key to achieving high performance in computing.
However, writing efficient and scalable parallel programs is
notoriously difficult, and often requires significant expertise. To
address this challenge, it is crucial to provide programmers with
high-level tools to enable them to develop solutions easily, and at
the same time emphasize the theoretical and practical aspects of
algorithm design to allow the solutions developed to run
efficiently under many different settings. This thesis addresses
this challenge using a three-pronged approach consisting of the
design of shared-memory programming techniques, frameworks, and
algorithms for important problems in computing. The thesis provides
evidence that with appropriate programming techniques, frameworks,
and algorithms, shared-memory programs can be simple, fast, and
scalable, both in theory and in practice. The results developed in
this thesis serve to ease the transition into the multicore era.
The first part of this thesis introduces tools and techniques for
deterministic parallel programming, including means for
encapsulating nondeterminism via powerful commutative building
blocks, as well as a novel framework for executing sequential
iterative loops in parallel, which lead to deterministic parallel
algorithms that are efficient both in theory and in practice. The
second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms.
The framework allows programmers to express graph traversal
algorithms using very short and concise code, delivers performance
competitive with that of highly-optimized code, and is up to orders
of magnitude faster than existing systems designed for distributed
memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and which is also the first graph processing system to support in-memory graph compression. The third and fourth parts of this thesis bridge the
gap between theory and practice in parallel algorithm design by
introducing the first algorithms for a variety of important
problems on graphs and strings that are efficient both in theory
and in practice. For example, the thesis develops the first
linear-work and polylogarithmic-depth algorithms for suffix tree
construction and graph connectivity that are also practical, as
well as a work-efficient, polylogarithmic-depth, and
cache-efficient shared-memory algorithm for triangle computations
that achieves a 2-5x speedup over the best existing algorithms on
40 cores. This is a revised version of the thesis that won the 2015
ACM Doctoral Dissertation Award.
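To give a flavour of the frontier-based style of graph traversal described above, here is a small, purely sequential Python sketch; the names edge_map and bfs and the dict-of-lists graph representation are assumptions of this summary, not Ligra's actual interface, which the thesis presents in a parallel, shared-memory setting.

    def edge_map(graph, frontier, update, condition):
        # Apply `update` along edges leaving the frontier; a target joins the
        # next frontier if it passes `condition` and the update succeeds.
        next_frontier = set()
        for u in frontier:
            for v in graph[u]:
                if condition(v) and update(u, v):
                    next_frontier.add(v)
        return next_frontier

    def bfs(graph, source):
        # Breadth-first search written as repeated frontier expansion.
        parent = {source: source}
        frontier = {source}
        while frontier:
            frontier = edge_map(
                graph, frontier,
                update=lambda u, v: parent.setdefault(v, u) == u,  # claim v once
                condition=lambda v: v not in parent)               # only unvisited targets
        return parent

    # Example: bfs({0: [1, 2], 1: [3], 2: [3], 3: []}, 0)
    # returns a parent pointer for every vertex reachable from 0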
This book is a celebration of Leslie Lamport's work on concurrency,
interwoven in four-and-a-half decades of an evolving industry: from
the introduction of the first personal computer to an era when
parallel and distributed multiprocessors are abundant. His work laid the formal foundations for concurrent computations executed by interconnected computers. Some of his algorithms have become standard engineering practice for fault-tolerant distributed computing: distributed systems that continue to function correctly
despite failures of individual components. He also developed a
substantial body of work on the formal specification and
verification of concurrent systems, and has contributed to the
development of automated tools applying these methods. Part I consists of the book's technical chapters and a biography. The technical chapters present a retrospective, written by experts in the field, on Lamport's original ideas. Through this
lens, it portrays their long-lasting impact. The chapters cover
timeless notions Lamport introduced: the Bakery algorithm, atomic
shared registers and sequential consistency; causality and logical
time; Byzantine Agreement; state machine replication and Paxos;
temporal logic of actions (TLA). The professional biography tells
of Lamport's career, providing the context in which his work arose and broke new ground, and discusses LaTeX, perhaps Lamport's most influential contribution outside the field of concurrency. This
chapter gives a voice to the people behind the achievements,
notably Lamport himself, and additionally the colleagues around
him, who inspired, collaborated, and helped him drive worldwide
impact. Part II consists of a selection of Leslie Lamport's most
influential papers. This book touches on a lifetime of
contributions by Leslie Lamport to the field of concurrency and on
the extensive influence he had on people working in the field. It
will be of value to historians of science, and to researchers and students who work in the area of concurrency and who are interested in reading about the work of one of the most influential researchers in this field.
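As a small illustration of one of the timeless notions listed above, the following is a didactic Python sketch of Lamport's Bakery algorithm for mutual exclusion among N threads; it assumes sequentially consistent memory and busy-waiting, so it is a teaching sketch rather than a practical lock (in particular, CPython's runtime does not provide the guarantees the algorithm relies on).

    N = 4                       # number of participating threads
    choosing = [False] * N      # choosing[i]: thread i is picking a ticket
    number = [0] * N            # ticket numbers; 0 means "not competing"

    def lock(i):
        # Take a ticket larger than any ticket currently held.
        choosing[i] = True
        number[i] = 1 + max(number)
        choosing[i] = False
        # Wait until every thread with a smaller (ticket, id) pair is done.
        for j in range(N):
            while choosing[j]:
                pass
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass

    def unlock(i):
        number[i] = 0           # drop the ticket on leaving the critical section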
Intelligent systems and related designs have become important
instruments leading to profound innovations in automated control
and interaction with computers and machines. Such systems depend
upon established methods and tools for solving complex learning and
decision-making problems under uncertain and continuously varying
conditions. Intelligent Applications for Heterogeneous System
Modeling and Design examines the latest developments in intelligent
system engineering being used across industries with an emphasis on
transportation, aviation, and medicine. Focusing on the latest
trends in artificial intelligence, systems design and testing, and
related topic areas, this publication is designed for use by
engineers, IT specialists, academicians, and graduate-level
students.
This book provides a comprehensive coverage of hardware security
concepts, derived from the unique characteristics of emerging logic
and memory devices and related architectures. The primary focus is
on mapping device-specific properties, such as multi-functionality, runtime polymorphism, intrinsic entropy, nonlinearity, ease of heterogeneous integration, and tamper-resilience, to the corresponding security primitives that they help realize, such as
static and dynamic camouflaging, true random number generation,
physically unclonable functions, secure heterogeneous and
large-scale systems, and tamper-proof memories. The authors discuss
several device technologies offering the desired properties
(including spintronics switches, memristors, silicon nanowire
transistors and ferroelectric devices) for such security primitives
and schemes, while also providing a detailed case study for each of
the outlined security applications. Overall, the book gives a
holistic perspective of how the promising properties found in
emerging devices, which are not readily afforded by traditional
CMOS devices and systems, can help advance the field of hardware
security.