As the volume of global Internet traffic increases, the Internet is
beginning to suffer from a broad spectrum of performance-degrading
infrastructural limitations that threaten to jeopardize the
continued growth of new, innovative services. In answer to this
challenge, computer scientists seek to maintain the original design
principles of the Internet while allowing for a more dynamic
approach to the manner in which networks are designed and operated.
The Handbook of Research on Redesigning the Future of Internet
Architectures covers some of the hottest topics currently being
debated by the Internet community at large, including Internet
governance, privacy issues, service delivery automation, advanced
networking schemes, and new approaches to Internet
traffic-forwarding and path-computation mechanics. Targeting
students, network engineers, and technical strategists, this book
seeks to provide a broad and comprehensive look at the next wave of
revolutionary ideas poised to reshape the very foundation of the
Internet as we know it.
For courses in engineering and technical management. System
architecture is the study of early decision making in complex
systems. This text teaches how to capture experience and analysis
about early system decisions, and how to choose architectures that
meet stakeholder needs, integrate easily, and evolve flexibly. With
case studies written by leading practitioners, from hybrid cars to
communications networks to aircraft, this text showcases the
science and art of system architecture.
This book presents Dual Mode Logic (DML), a new design paradigm for
digital integrated circuits. DML logic gates can operate in two
modes, each optimized for a different metric. DML's on-the-fly
switching between these operational modes at the gate, block, and
system levels provides maximal energy-delay (E-D) optimization
flexibility. Each
highly detailed chapter has multiple illustrations showing how the
DML paradigm seamlessly implements digital circuits that dissipate
less energy while simultaneously improving performance and reducing
area without a significant compromise in reliability. All the
facets of the DML methodology are covered, starting from basic
concepts, through single gate optimization, general module
optimization, design trade-offs and new ways DML can be integrated
into standard design flows using standard EDA tools. DML logic is
compatible with numerous applications but is particularly
advantageous for ultra-low-power, reliable, high-performance
systems and advanced scaled technologies. Written in language
accessible to students and design engineers, each topic is oriented
toward immediate application by all those interested in an
alternative to CMOS logic.
- Describes a novel, promising alternative to conventional CMOS logic, known as Dual Mode Logic (DML), with which a single gate can be operated selectively in two modes, each optimized for a different metric (e.g., energy consumption, performance, size)
- Demonstrates several techniques at the architectural level that can result in high energy savings and improved system performance
- Focuses on the tradeoffs between power, area, and speed, including optimizations at the transistor and gate levels and alternatives to DML basic cells
- Illustrates DML efficiency for a variety of VLSI applications
Enterprise Architecture (EA) is the organizing logic for a firm's
core business processes and IT capabilities captured in a set of
policies and technical choices. "Handbook of Enterprise Systems
Architecture in Practice" provides a comprehensive and unified
reference overview of practical aspects of enterprise architecture.
This premier reference source includes a complete analysis of EA
theory, concepts, strategies, implementation challenges, and case
studies. The impact of effective enterprise architecture on IT
governance, IT portfolio management, IT risks, and IT outsourcing
is described in this authoritative reference tool. Researchers and
IT professionals will gain insights into how firms can maximize the
business value of IT and increase competitiveness.
This book describes how we can design and make efficient processors
for high-performance computing, AI, and data science. Although
there are many textbooks on the design of processors, we do not have
a widely accepted definition of the efficiency of a general-purpose
computer architecture. Without such a definition, it is difficult to
take a scientific approach to processor design. In this book, a
clear definition of efficiency is given, making a scientific
approach to processor design possible. In
chapter 2, the history of the development of high-performance
processors is reviewed to discuss what quantity we can use to
measure the efficiency of these processors. The proposed quantity
is the ratio between the minimum possible energy consumption and
the actual energy consumption for a given application using a given
semiconductor technology. In chapter 3, whether or not this
quantity can be used in practice is discussed, for many real-world
applications. In chapter 4, general-purpose processors in the past
and present are discussed from this viewpoint. In chapter 5, how we
can actually design processors with near-optimal efficiencies is
described; chapter 6 describes how we can program such processors.
This book gives a new way to look at the field of the design of
high-performance processors.
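Stated as a formula, the proposed measure is a simple ratio; a minimal sketch in LaTeX, with the symbols ($\eta$, $E_{\mathrm{min}}$, $E_{\mathrm{actual}}$) chosen here for illustration, since the blurb names the quantities only in words:

\[
  \eta \;=\; \frac{E_{\mathrm{min}}}{E_{\mathrm{actual}}}, \qquad 0 < \eta \le 1,
\]

where $E_{\mathrm{min}}$ is the minimum possible energy consumption for a given application using a given semiconductor technology, $E_{\mathrm{actual}}$ is the energy the processor actually consumes, and a design approaching $\eta = 1$ is near-optimal in the book's sense.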
Portable Biosensors and Point-of-Care Systems describes the
principles, design and applications of a new generation of
analytical and diagnostic biomedical devices, characterized by
their very small size, ease of use, multi-analytical capabilities
and speed to provide handheld and mobile point-of-care (POC)
diagnostics. The book is divided into four parts. Part I is an
in-depth analysis of the various technologies upon which portable
diagnostic devices and biosensors are built. In Part II, advances
in the design and optimization of special components of biosensor
systems and handheld devices are presented. In Part III, a wide
scope of applications of portable biosensors and handheld POC
devices is described, ranging from the support of primary
healthcare to food and environmental safety screening. Diverse
topics are covered, including counterterrorism, travel medicine and
drug development. Finally, Part IV of the book is dedicated to the
presentation of commercially available products including a review
of the products of point-of-care in-vitro-diagnostics companies, a
review of technologies which have achieved a high Technology
Readiness Level, and a special market case study of POC infusion
systems combined with intelligent patient monitoring. This book is
essential reading for researchers and experts in the healthcare
diagnostic and analytical sector, and for electronics and material
engineers working on portable sensors.
Society's growing dependence on information technology for survival
has elevated the importance of controlling and evaluating
information systems. A sound plan for auditing information systems
and the technology that supports them is a necessity if
organizations are to realize the benefits of IS and manage the
risks associated with technology. Auditing
Information Systems gives a global vision of auditing and control,
exposing the major techniques and methods. It provides guidelines
for auditing the crucial areas of IT: databases, security,
maintenance, quality, and communications.
This book discusses the advantages and challenges of Body-Biasing
for integrated circuits and systems, together with the deployment
of the design infrastructure needed to generate this Body-Bias
voltage. These new design solutions enable state-of-the-art energy
efficiency and system flexibility for the latest applications, such
as the Internet of Things and 5G communications.
With the proliferation of GPS devices in daily life, trajectory
data that records where and when people move is now readily
available on a large scale. As one of its most typical
representatives, taxi trajectory data is now widely recognized as
providing rich opportunities to enable promising
smart urban services. Yet, a considerable gap still exists between
the raw data available, and the extraction of actionable
intelligence. This gap poses fundamental challenges to achieving
such intelligence, including inaccuracy
issues, large data volumes to process, and sparse GPS data, to name
but a few. Moreover, the movements of taxis and the trajectory data
they leave behind are the result of a complex interplay between
several parties, including drivers, passengers, travellers, urban
planners, etc. In this book, we present our latest findings on
mining taxi GPS trajectory data to enable a number of smart urban
services, and to bring us one step closer to the vision of smart
mobility. Firstly, we focus on some fundamental issues in
trajectory data mining and analytics, including data map-matching,
data compression, and data protection. Secondly, driven by the real
needs and the most common concerns of each party involved, we
formulate each problem mathematically and propose novel data mining
or machine learning methods to solve it. Extensive evaluations with
real-world datasets are also provided, to demonstrate the
effectiveness and efficiency of using trajectory data. Unlike other
books, which deal with people and goods transportation separately,
this book also extends smart urban services to goods transportation
by introducing the idea of crowdshipping, i.e., recruiting taxis to
make package deliveries on the basis of real-time information.
Since people and goods are two essential components of smart
cities, we feel this extension is both logical and essential.
Lastly, we discuss the most important scientific problems and open
issues in mining GPS trajectory data.
This book is the sixth volume of the successful book series on
Robot Operating System: The Complete Reference. The objective of
the book is to provide the reader with comprehensive coverage of
the Robot Operating System (ROS) and the latest trends and
contributed systems. ROS is currently considered the primary
development framework for robotics applications. There are seven
chapters organized into three parts. Part I presents two chapters
on the emerging ROS 2.0 framework; in particular, ROS 2.0 has
become mature enough to be integrated into industry. The first
chapter from Amazon AWS deals with the challenges that ROS 2
developers will face as they transition their systems to be
commercial-grade. The second chapter deals with reactive
programming for both ROS 1 and ROS 2. In Part II, two chapters deal
with advanced robotics: the first covers the use of robots on
farms, and the second deals with platooning systems. Part III
provides three
chapters on ROS navigation. The first chapter deals with the use of
deep learning for ROS navigation. The second chapter presents a
detailed tuning guide on ROS navigation and the last chapter
discusses SLAM for ROS applications. I believe that this book is a
valuable companion for ROS users and developers to learn more about
ROS capabilities and features.
System administration is about the design, running and maintenance
of human-computer systems. Examples of human-computer systems
include business enterprises, service institutions and any
extensive machinery that is operated by, or interacts with, human
beings. System administration is often thought of as the
technological side of a system: the architecture, construction and
optimization of the collaborating parts, but it also occasionally
touches on softer factors such as user assistance (help desks),
ethical considerations in deploying a system, and the larger
implications of its design for others who come into contact with
it.
This book summarizes the state of research and practice in this
emerging field of network and system administration, in an
anthology of chapters written by the top academics in the field.
The authors include members of the IST-EMANICS Network of
Excellence in Network Management.
This book will be a valuable reference work for researchers and
senior system managers wanting to understand the essentials of
system administration, whether in the practical operation of a data
center or in the design of new systems and data centers.
- Covers data center planning and design
- Discusses configuration management
- Illustrates business modeling and system administration
- Provides the latest theoretical developments
Prepare for the updated version of Microsoft Exam AZ-900 and help
demonstrate your real-world knowledge of cloud services and how
they can be provided with Microsoft Azure, including high-level
concepts that apply throughout Azure, and key concepts specific to
individual services. Designed for professionals in both
non-technical and technical roles, this Exam Ref focuses on the
critical thinking and decision-making acumen needed for success at
the Microsoft Certified Fundamentals level. Focus on the expertise
measured by these objectives:
- Describe cloud concepts
- Describe Azure architecture and services
- Describe Azure management and governance
This Microsoft Exam Ref:
- Organizes its coverage by exam objectives
- Features strategic, what-if scenarios to challenge you
- Assumes you want to show foundational knowledge of cloud services and their delivery with Microsoft Azure
About the Exam: Exam AZ-900
focuses on knowledge needed to describe cloud computing; the
benefits of using cloud services; cloud service types; core Azure
architectural components; Azure compute, networking, and storage
services; Azure identity, access, and security; Azure cost
management; Azure features and tools for governance and compliance,
and for managing and deploying resources; and Azure monitoring
tools.
About Microsoft Certification: Passing this exam fulfills
your requirements for the Microsoft Certified: Azure Fundamentals
credential, validating your basic knowledge of cloud services and
how those services are provided with Azure. Whether you're new to
the fi eld or a seasoned professional, demonstrating this knowledge
can help you jump-start your career and prepare you to dive deeper
into the many technical opportunities Azure offers.
Digital signal processing is a growing area of study that now
reaches into all walks of organizational life, from consumer
products to database management. "Web-Based Supply Chain Management
and Digital Signal Processing: Methods for Effective Information
Administration and Transmission" presents trends and techniques
for successful intelligent decision-making and transfer of products
through digital signal processing. A defining collection of field
advancements, this publication provides the latest and most
complete research in supply chain management with examples and case
studies useful for those involved with various levels of
management.
This book provides a single-source reference to routing algorithms
for Networks-on-Chip (NoCs), as well as in-depth discussions of
advanced solutions applied to current and next-generation,
many-core NoC-based Systems-on-Chip (SoCs). After a basic
introduction
to the NoC design paradigm and architectures, routing algorithms
for NoC architectures are presented and discussed at all
abstraction levels, from the algorithmic level to actual
implementation. Coverage emphasizes the role played by the routing
algorithm and is organized around key problems affecting current
and next-generation, many-core SoCs. A selection of routing
algorithms is included, specifically designed to address key issues
faced by designers in the ultra-deep sub-micron (UDSM) era,
including performance improvement; power, energy, and thermal
issues; and fault tolerance and reliability.
This book describes a wide variety of System-on-Chip (SoC) security
threats and vulnerabilities, as well as their sources, in each
stage of a design life cycle. The authors discuss a wide variety of
state-of-the-art security verification and validation approaches
such as formal methods and side-channel analysis, as well as
simulation-based security and trust validation approaches. This
book provides a comprehensive reference for system-on-chip
designers and for verification and validation engineers interested in
verifying security and trust of heterogeneous SoCs.
Information and communication technologies (ICT) are a vital
component of successful business models. As new technologies
emerge, organizations must adapt quickly and strategically to these
changes or risk falling behind. Evolution and Standardization of
Mobile Communications Technology examines methods of developing and
regulating compatibility standards in the ICT industry, assisting
organizations in their application of the latest communications
technologies in their business practices. Organizations maintain
competitive advantage by implementing cutting-edge technologies as
soon as they appear. This book serves as a compendium of the most
recent research and development in this arena, providing readers
with the insight necessary to take full advantage of a wide range
of ICT solutions. This book is part of the Advances in IT Standards
and Standardization Research series collection.
Due to a rapidly growing number of devices and communications,
cloud computing has begun to fall behind in its ability to
adequately process today's data. Additionally, companies have
begun to look for solutions that would help reduce their
infrastructure costs and improve profitability. Fog computing, a
paradigm that extends cloud computing and services to the edge of
the network, has presented itself as a viable solution and
cost-saving method. However, before businesses can implement this
new method, concerns regarding its security, privacy, availability,
and data protection must be addressed. Advancing Consumer-Centric
Fog Computing Architectures is a collection of innovative research
on the methods and applications of fog computing in technological,
business, and organizational dimensions. Thoroughly examining fog
computing with respect to issues of management, trust and privacy,
governance, and interoperability, this publication highlights a
range of topics including access control mechanisms, data
confidentiality, and service-oriented architecture. This book is
ideally designed for academicians, researchers, software
developers, IT professionals, policymakers, technology designers,
graduate-level students, managers, and business owners.
Since its birth in the late 1970s, the business recovery industry
has continued to broaden, moving from original batch application
processing on mainframes to include recovery for telecommunications
connectivity, distributed processing on mid-range systems, and most
recently, network and work area recovery. Whenever accidents,
disasters and natural events interrupt business activities, one
thing is certain: businesses lose money. How much money often
depends on how prepared companies are for dealing with business
interruptions. A Primer for Disaster Recovery Planning in an IT
Environment is intended to help businesses plan for an occurrence
that could mean a business stoppage. It helps you evaluate your
business in terms of vulnerability to disaster and guides you
through the process of creating a disaster recovery plan.