Explores the full range of issues - moral, ethical, social, legal,
and technological - involved in developing firm controls and best
practices to secure the ever-growing information infrastructure
upon which societies and individuals depend.
Enterprise integration is a broad activity that involves solving a
range of issues relating to business process definition, common
data standards, architectural compatibility, technical
interoperability, and organizational alignment. Enterprise
Architecture and Integration: Methods, Implementation, and
Technologies provides a detailed analysis of the important
strategies for integrating IT systems into fields such as
e-business and customer-relationship management. This Premier
Reference Source supplies readers with a comprehensive survey of
existing enterprise architecture and integration approaches, and
presents case studies that illustrate best practices. It takes a
holistic view of enterprise integration, describing innovative
methods, tools, and architectures with which organizations can
systematically achieve enterprise integration.
XML in Data Management is for IT managers and technical staff
involved in the creation, administration, or maintenance of a data
management infrastructure that includes XML. For most IT staff, XML
is either just a buzzword to be ignored or a silver bullet to be
applied in every nook and cranny of their organization. The truth
lies somewhere in between. This book provides the guidance necessary for
data managers to make measured decisions about XML within their
organizations. Readers will understand the uses of XML, its
component architecture, its strategic implications, and how these
apply to data management.
To view a sample chapter and read the Foreword by Thomas C. Redman,
visit http://books.elsevier.com/mk/?isbn=0120455994
* Takes a data-centric view of XML.
* Explains how, when, and why to apply XML to data management
systems.
* Covers XML component architecture, data engineering, frameworks,
metadata, legacy systems, and more.
* Discusses the various strengths and weaknesses of XML
technologies in the context of organizational data management and
integration.
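As a purely illustrative sketch (not drawn from the book), the following Python snippet shows the data-centric view in miniature: a record exchanged as XML is parsed back into an ordinary data structure that downstream data management tooling can work with. The element names and values are invented for the example.

    # Illustrative only: a hypothetical customer record exchanged as XML and
    # loaded back into a plain Python dictionary for downstream data management.
    import xml.etree.ElementTree as ET

    doc = """<customer id="42">
      <name>Ada Lovelace</name>
      <email>ada@example.com</email>
    </customer>"""

    root = ET.fromstring(doc)
    record = {
        "id": root.get("id"),             # attribute -> key
        "name": root.findtext("name"),    # child element text -> value
        "email": root.findtext("email"),
    }
    print(record)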
The greatly expanded and updated 3rd edition of this textbook
offers the reader a comprehensive introduction to the concepts of
logic functions and equations and their applications across
computer science and engineering. The authors' approach emphasizes
a thorough understanding of the fundamental principles as well as
numerical and computer-based solution methods. The book provides
insight into applications across propositional logic, binary
arithmetic, coding, cryptography, complexity, logic design, and
artificial intelligence. Updated throughout, the major additions in
the 3rd edition include: a new chapter on the concepts that
contribute to the power of XBOOLE; a new chapter introducing the
application of the XBOOLE-Monitor XBM 2; many end-of-chapter tasks
that help readers consolidate what they have learned; solutions to a
large subset of these tasks so readers can confirm their progress;
and challenging tasks that require the power of the XBOOLE software
to solve. The XBOOLE-Monitor XBM 2 software is used to solve the
exercises; in this way the time-consuming and error-prone bit-level
manipulation is offloaded to an ordinary PC, more realistic tasks
can be tackled, and the challenge of thinking in terms of algorithms
leads to a deeper level of understanding.
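As a rough illustration, independent of the XBOOLE software and not taken from the book, the following Python fragment shows the kind of bit-level check such a tool automates: verifying a logic equation, here De Morgan's law, over all variable assignments.

    # Illustrative only: exhaustively check a logic equation over all assignments.
    from itertools import product

    def functions_equal(f, g, num_vars):
        # Compare two Boolean functions on every point of the Boolean space.
        return all(f(*v) == g(*v) for v in product((0, 1), repeat=num_vars))

    lhs = lambda a, b: int(not (a and b))          # NOT (a AND b)
    rhs = lambda a, b: int((not a) or (not b))     # (NOT a) OR (NOT b)
    print(functions_equal(lhs, rhs, 2))            # True: De Morgan's law holds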
The rapid, global growth of technology necessitates a continued
review of issues relating to privacy and security, as well as
studies on the adoption of and access to new products, tools, and
software. ICT Ethics and Security in the 21st Century: New
Developments and Applications highlights ethical dilemmas and
security challenges posed by the rise of more recent technologies
along with ongoing challenges such as the digital divide, threats
to privacy, and organizational security measures. This book is a
valuable resource for ICT researchers, educators, students, and
professionals, as well as for employers and employees of large
organizations searching for resolutions to the everyday ethical and
security dilemmas we must grapple with in our highly globalised and
technologized world.
The past few years have seen a major change in computing systems,
as growing data volumes and stalling processor speeds require more
and more applications to scale out to clusters. Today, myriad data
sources, from the Internet to business operations to scientific
instruments, produce large and valuable data streams.
However, the processing capabilities of single machines have not
kept up with the size of data. As a result, organizations
increasingly need to scale out their computations over clusters. At
the same time, the speed and sophistication required of data
processing have grown. In addition to simple queries, complex
algorithms like machine learning and graph analysis are becoming
common. And in addition to batch processing, streaming analysis of
real-time data is required to let organizations take timely action.
Future computing platforms will need to not only scale out
traditional workloads, but support these new applications too. This
book, a revised version of the 2014 ACM Doctoral Dissertation
Award-winning dissertation, proposes an architecture for cluster computing
systems that can tackle emerging data processing workloads at
scale. Whereas early cluster computing systems, like MapReduce,
handled batch processing, our architecture also enables streaming
and interactive queries, while keeping MapReduce's scalability and
fault tolerance. And whereas most deployed systems only support
simple one-pass computations (e.g., SQL queries), ours also extends
to the multi-pass algorithms required for complex analytics like
machine learning. Finally, unlike the specialized systems proposed
for some of these workloads, our architecture allows these
computations to be combined, enabling rich new applications that
intermix, for example, streaming and batch processing. We achieve
these results through a simple extension to MapReduce that adds
primitives for data sharing, called Resilient Distributed Datasets
(RDDs). We show that this is enough to capture a wide range of
workloads. We implement RDDs in the open source Spark system, which
we evaluate using synthetic and real workloads. Spark matches or
exceeds the performance of specialized systems in many domains,
while offering stronger fault tolerance properties and allowing
these workloads to be combined. Finally, we examine the generality
of RDDs from both a theoretical modeling perspective and a systems
perspective. This version of the dissertation makes corrections
throughout the text and adds a new section on the evolution of
Apache Spark in industry since 2014. In addition, editing,
formatting, and links for the references have been added.
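For readers unfamiliar with the programming model, a minimal PySpark sketch of the data-sharing idea described above might look as follows (the input file name and filter threshold are hypothetical, and this is not code from the dissertation): a dataset is cached in cluster memory once and then reused by several passes, which is exactly what one-pass systems cannot express.

    # Minimal sketch: cache an RDD once, then run a multi-pass computation over it.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "rdd-sketch")

    points = (sc.textFile("points.txt")                          # hypothetical input
                .map(lambda line: [float(x) for x in line.split()])
                .cache())                                        # share across passes

    for _ in range(10):                                          # several passes reuse
        positives = points.filter(lambda p: p[0] > 0.0).count()  # the cached data

    print(positives)
    sc.stop()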
Originally designed for interpersonal communication, mobile devices
today are capable of connecting their users to a wide variety of
Internet-enabled services and applications. Multimodality in Mobile
Computing and Mobile Devices: Methods for Adaptable Usability
explores a variety of perspectives on multimodal user interface
design, describes a variety of novel multimodal applications, and
provides real-life experience reports. Containing research from
leading international experts, this innovative publication presents
core concepts that define multi-modal, multi-channel, and
multi-device interactions and their role in mobile, pervasive, and
ubiquitous computing.
Order affects the results you get: Different orders of presenting
material can lead to qualitatively and quantitatively different
learning outcomes. These differences occur in both natural and
artificial learning systems. In Order to Learn shows how order
effects are crucial in human learning, instructional design,
machine learning, and both symbolic and connectionist cognitive
models. Each chapter explains a different aspect of how the order
in which material is presented can strongly influence what is
learned by humans and theoretical models of learning in a variety
of domains. In addition to data, models are provided that predict
and describe order effects and analyze how and when they will
occur. The introductory and concluding chapters compile suggestions
for improving learning through better sequences of learning
materials, including how to take advantage of order effects that
encourage learning and how to avoid order effects that discourage
learning. Each chapter also highlights questions that may inspire
further research. Taken together, these chapters show how order
effects in different areas can and do inform each other. In Order
to Learn will be of interest to researchers and students in
cognitive science, education, and machine learning.
Multimedia has evolved with the introduction of interaction,
allowing and encouraging users to control and navigate through
content. Experimental multimedia is a new human-computer
communication method that allows for the reinvention and
redevelopment of user content. Experimental Multimedia Systems for
Interactivity and Strategic Innovation presents the next
evolutionary step of multimedia where interactivity meets targeted
creativity and experimentation. In providing the basic framework
for experimental multimedia through case studies that allow the
reader to appreciate the design of multimedia systems, this
publication's audience extends beyond new media artists to
scientists, publishers, and developers who wish to extend their
system designs to offer adaptive capabilities combined with
multimedia content and dynamic interaction. This publication
presents a collection of carefully selected theoretical and applied
research chapters with a focus on matters including, but not
limited to, aesthetics in publishing, e-health systems, artificial
intelligence, augmented reality, human-computer interaction,
interactive multimedia, and new media curation.
A hands-on introduction to FPGA prototyping and SoC design. This
Second Edition of the popular book follows the same
"learning-by-doing" approach to teach the fundamentals and
practices of VHDL synthesis and FPGA prototyping. It uses a
coherent series of examples to demonstrate the process to develop
sophisticated digital circuits and IP (intellectual property)
cores, integrate them into an SoC (system on a chip) framework,
realize the system on an FPGA prototyping board, and verify the
hardware and software operation. The examples start with simple
gate-level circuits, progress gradually through the RT (register
transfer) level modules, and lead to a functional embedded system
with custom I/O peripherals and hardware accelerators. Although it
is an introductory text, the examples are developed in a rigorous
manner, and the derivations follow strict design guidelines and
coding practices used for large, complex digital systems. The new
edition is completely updated. It presents the hardware design in
the SoC context and introduces the hardware-software co-design
concept. Instead of treating examples as isolated entities, the
book integrates them into a single coherent SoC platform that
allows readers to explore both hardware and software
"programmability" and develop complex and interesting embedded
system projects. The revised edition: Adds four general-purpose IP
cores, which are multi-channel PWM (pulse width modulation)
controller, I2C controller, SPI controller, and XADC (Xilinx
analog-to-digital converter) controller. Introduces a music
synthesizer constructed with a DDFS (direct digital frequency
synthesis) module and an ADSR (attack-decay-sustain-release)
envelope generator. Expands the original video controller into a
complete stream-based video subsystem that incorporates a video
synchronization circuit, a test pattern generator, an OSD
(on-screen display) controller, a sprite generator, and a frame
buffer. Introduces basic concepts of software-hardware co-design
with the Xilinx MicroBlaze MCS soft-core processor. Provides an
overview of the bus interconnect and interface circuits. Introduces
basic embedded system software development. Suggests additional
modules and peripherals for interesting and challenging projects.
FPGA Prototyping by VHDL Examples, Second Edition, makes a natural
companion text for introductory and advanced digital design courses
and embedded system courses. It also serves as an ideal
self-teaching guide for practicing engineers who wish to learn more
about this emerging area of interest.
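To give a flavour of the DDFS module mentioned above, here is a conceptual Python sketch of the phase-accumulator principle (the book's actual designs are written in VHDL, and the word sizes and table length here are arbitrary): a fixed-point accumulator is advanced by a frequency control word, and its top bits index a sine lookup table.

    # Conceptual sketch of direct digital frequency synthesis (DDFS), not VHDL.
    import math

    PHASE_BITS = 16                      # width of the phase accumulator
    LUT_BITS = 8                         # top bits used to address the sine table
    LUT = [math.sin(2 * math.pi * i / (1 << LUT_BITS)) for i in range(1 << LUT_BITS)]

    def ddfs(freq_word, num_samples):
        acc, samples = 0, []
        for _ in range(num_samples):
            acc = (acc + freq_word) & ((1 << PHASE_BITS) - 1)     # wrap-around add
            samples.append(LUT[acc >> (PHASE_BITS - LUT_BITS)])   # phase -> amplitude
        return samples

    wave = ddfs(freq_word=1024, num_samples=64)   # larger word -> higher frequency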
Architecture of Reliable Web Applications Software presents new
concepts regarding the reliability, availability, manageability,
performance, scalability, and security of applications, particularly
the ones that run over the Web. It examines the causes of failure in
a Web-based information system development project, and indicates
that to exploit the unprecedented opportunities offered by e-service
applications, businesses and users alike need a highly available,
reliable, and efficient telecommunication infrastructure. The book
proposes a scalable QoS-aware architecture for the management of
QoS-aware Web services, providing QoS management support for both
providers and consumers of Web services. It also introduces
Hyper-services as a unified application model for semantic Web
frameworks and proposes Conceptual Model Driven Software Development
as a means of easing their adoption.
In a digital context, trust is a multifaceted concept, including
trust in application usability, trust in information security, and
trust in fellow users. Mobile technologies have compounded the
impact of such considerations. Trust Management in Mobile
Environments: Autonomic and Usable Models explores current advances
in digital and mobile computing technologies from the user
perspective, evaluating trust models and autonomic trust
management. From the recent history of trust in digital
environments to prospective future developments, this book serves
as a potent reference source for professionals, graduate and
post-graduate students, researchers, and practitioners in the field
of trust management.
This book provides comprehensive coverage of the latest research
into integrated circuits' ageing, explaining the causes of this
phenomenon, describing its effects on electronic systems, and
providing mitigation techniques to build ageing-resilient circuits.
Almost all the systems in our world, including technical, social,
economic, and environmental systems, are becoming interconnected and
increasingly complex, and as such they are vulnerable to various
risks. Because of this trend, resilience creation is becoming more
important to system managers and decision makers as a way to ensure
sustained performance. To achieve an acceptable level of sustained
performance under such interconnectedness and complexity, resilience
must be created with a systems approach, and mathematical modelling
is the most common basis for doing so. Mathematical Modelling of
System Resilience covers resilience creation for various system
aspects, including a functional system of the supply chain and
overall supply chain systems; various methodologies for modelling
system resilience; a satellite-based approach for addressing
climate-related risks; a repair-based approach for the sustainable
performance of an engineering system; and the modelling of
reliability measures for a vertical take-off and landing system.
Each chapter contributes state-of-the-art research on the
resilience-related topic it covers. Technical topics covered in the
book include:
1. Supply chain risk, vulnerability, and disruptions
2. System resilience for containing failures and disruptions
3. Resiliency considering the frequency and intensity of disasters
4. Resilience performance index
5. Resiliency of electric traction systems
6. Degree of resilience
7. Satellite observation and hydrological risk
8. Latitude of resilience
9. On-line repair for resilience
10. Reliability design for a vertical take-off and landing prototype
This book covers two main topics: First, novel fast and flexible
simulation techniques for modern heterogeneous NoC-based multi-core
architectures. These are implemented in the full-system simulator
called InvadeSIM and designed to study the dynamic behavior of
hundreds of parallel application programs running on such
architectures while competing for resources. Second, a novel
actor-oriented programming library called ActorX10, which allows
parallel streaming applications to be formally modeled as actor
graphs and their predictable execution behavior to be analyzed as
part of so-called hybrid mapping approaches. These approaches
combine static analysis and dynamic embedding to guarantee the
real-time requirements of such applications at design time,
independently of dynamic workloads.
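As a loose analogy only (ActorX10 itself is an X10 library, and its actual API is not shown here), an actor graph for a streaming application can be pictured as independent workers connected by message channels, as in this small Python sketch.

    # Loose analogy: two "actors" connected by a channel, processing a token stream.
    from queue import Queue
    from threading import Thread

    channel = Queue()

    def producer():
        for value in range(5):           # source actor: emits a stream of tokens
            channel.put(value)
        channel.put(None)                # end-of-stream marker

    def consumer():
        while (token := channel.get()) is not None:
            print("processed", token * token)   # sink actor: consumes tokens

    threads = [Thread(target=producer), Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()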
As advances in technology continue to generate the collective
knowledge of an organization and its operations, strategic models
for information systems are developed in order to arrange business
processes and business data. Frameworks for Developing Efficient
Information Systems: Models, Theory, and Practice presents research
and practices on the advancements in systems analysis and design.
These theoretical frameworks and practical solutions are useful for
researchers, practitioners, and academicians as this book aims to
bridge the communication gap between business managers and system
designers.