The best source for cutting-edge insights into AI in healthcare operations.
AI in Healthcare: How Artificial Intelligence Is Changing IT Operations and Infrastructure Services collects, organizes, and presents the most up-to-date research on the emerging technology of artificial intelligence as it is applied to healthcare operations. Written by a world-leading technology executive specializing in healthcare IT, this book provides concrete examples and practical advice on how to deploy artificial intelligence solutions in your healthcare environment. AI in Healthcare shows readers how to connect real-time event correlation with response automation to minimize disruptions to critical healthcare IT functions. The book provides in-depth coverage of the most important and central topics in the healthcare applications of artificial intelligence, including: Healthcare IT AI; Clinical Operations AI; Operational Infrastructure Project Planning; Metrics, Reporting, and Service Performance; AIOps in Automation; AIOps Cloud Operations; and the Future of AI. Written in an accessible and straightforward style, this book will be invaluable to IT managers, administrators, and engineers in healthcare settings, as well as anyone with an interest or stake in healthcare technology.
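The event-correlation-plus-automation pattern the blurb highlights can be illustrated in miniature. The following Python sketch is hypothetical and not taken from the book: it groups repeated monitoring events per host inside a time window and hands a correlated incident to an automated remediation step; all event names, thresholds, and the remediation action are invented for illustration.

from collections import defaultdict

WINDOW_SECONDS = 60   # correlation window (assumed value)
THRESHOLD = 3         # repeats that constitute an incident (assumed)

def correlate(events):
    # events: iterable of (timestamp_seconds, host, message) tuples.
    # Yields hosts whose events repeat THRESHOLD times within the window.
    recent_by_host = defaultdict(list)
    for ts, host, _msg in sorted(events):
        recent = [t for t in recent_by_host[host] if ts - t <= WINDOW_SECONDS]
        recent.append(ts)
        recent_by_host[host] = recent
        if len(recent) >= THRESHOLD:
            yield host  # correlated incident: hand off to automation

def remediate(host):
    # Placeholder response action; a real system would run a runbook.
    print(f"auto-remediation triggered for {host}")

events = [(0, "emr-db01", "timeout"), (20, "emr-db01", "timeout"),
          (45, "emr-db01", "timeout"), (50, "web01", "timeout")]
for host in correlate(events):
    remediate(host)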
Computational Intelligence in Data Science
- 4th IFIP TC 12 International Conference, ICCIDS 2021, Chennai, India, March 18-20, 2021, Revised Selected Papers
(Hardcover, 1st ed. 2021)
Vallidevi Krishnamurthy, Suresh Jaganathan, Kanchana Rajaram, Saraswathi Shunmuganathan
This book constitutes the refereed post-conference proceedings of
the Fourth IFIP TC 12 International Conference on Computational
Intelligence in Data Science, ICCIDS 2021, held in Chennai, India,
in March 2021. The 20 revised full papers presented were carefully
reviewed and selected from 75 submissions. The papers cover topics
such as computational intelligence for text analysis; computational
intelligence for image and video analysis; blockchain and data
science.
The communication field is evolving rapidly in order to keep up with society's demands. It is therefore imperative to research and report recent advancements in computational intelligence as it applies to communication networks. The Handbook of Research on
Recent Developments in Intelligent Communication Application is a
pivotal reference source for the latest developments on emerging
data communication applications. Featuring extensive coverage
across a range of relevant perspectives and topics, such as
satellite communication, cognitive radio networks, and wireless
sensor networks, this book is ideally designed for engineers,
professionals, practitioners, upper-level students, and academics
seeking current information on emerging communication networking
trends.
Data mapping in a data warehouse is the process of linking the tables and attributes of two distinct data models: the source and the target. Data mapping is required at many stages of the data warehouse (DW) life cycle to help save processor overhead, and every stage has its own unique requirements and challenges. Many data warehouse professionals therefore want to learn data mapping in order to move from an ETL (extract, transform, load) developer role to a data modeler role. Data Mapping for Data Warehouse Design provides basic and advanced knowledge about business intelligence and data warehouse concepts, including real-life scenarios that apply the standard techniques to projects across various domains. After reading this book, readers will understand the importance of data mapping across the data warehouse life cycle.
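As a concrete illustration of what a mapping deliverable looks like, here is a minimal, hypothetical source-to-target mapping for one table, written in Python; the table names, column names, and transformation rules are invented for illustration, not drawn from the book.

# Each source (staging) column maps to a target (warehouse) column
# together with the transformation applied during the ETL load.
customer_mapping = {
    "cust_id":    ("customer_key",  "surrogate-key lookup"),
    "cust_name":  ("customer_name", "trim and title-case"),
    "signup_dt":  ("signup_date",   "parse string to DATE"),
    "country_cd": ("country",       "decode via country dimension"),
}

for src, (tgt, rule) in customer_mapping.items():
    print(f"{src:11} -> {tgt:14} [{rule}]")

A mapping document like this travels with the project: the same entries that drive the ETL code also document the design for the data modeler.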
Across numerous industries in modern society, there is a constant
need to gather precise and relevant data efficiently and quickly.
It is therefore imperative to research new methods and approaches to
increase productivity in these areas. Next-Generation Information
Retrieval and Knowledge Resources Management is a key source on the
latest advancements in multidisciplinary research methods and
applications and examines effective techniques for managing and
utilizing information resources. Featuring extensive coverage
across a range of relevant perspectives and topics, such as
knowledge discovery, spatial indexing, and data mining, this book
is ideally designed for researchers, graduate students, academics,
and industry professionals seeking ways to optimize knowledge
management processes.
This book shows how machine learning (ML) methods can be used to enhance cyber security operations, including detection, modeling, and monitoring, as well as defense against threats to sensitive data and security systems. Filling an important gap between the ML and cyber security communities, it discusses topics covering a wide range of modern and practical ML techniques, frameworks, and tools.
Digital image processing is a field that is constantly improving.
Gaining high-level understanding from digital images is a key
requirement for computing. One area of study that is assisting with this advancement is fractal theory. This field has
gained momentum and popularity as it has become a key topic of
research in the area of image analysis. Examining Fractal Image
Processing and Analysis is an essential reference source that
discusses fractal theory applications and analysis, including
box-counting analysis, multi-fractal analysis, 3D fractal analysis,
and chaos theory, as well as recent trends in other soft computing
techniques. Featuring research on topics such as image compression,
pattern matching, and artificial neural networks, this book is
ideally designed for system engineers, computer engineers,
professionals, academicians, researchers, and students seeking
coverage on problem-oriented processing techniques and imaging
technologies.
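Of the techniques listed, box-counting analysis is easy to sketch. The following Python/NumPy snippet is a minimal illustration, not code from the book: it estimates the box-counting dimension of a binary image as the negative slope of log N(s) against log s, where N(s) is the number of s-by-s boxes containing at least one set pixel.

import numpy as np

def box_count(img, box_size):
    # Count box_size x box_size boxes containing at least one set pixel.
    n = img.shape[0] // box_size
    trimmed = img[:n * box_size, :n * box_size]
    blocks = trimmed.reshape(n, box_size, n, box_size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(img, s) for s in sizes]
    # Fit log N(s) = -D * log s + c; the dimension estimate is -slope.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square should give a dimension close to 2.
img = np.ones((256, 256), dtype=bool)
print(fractal_dimension(img))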
Modern society exists in a digital era in which high volumes of multimedia information exist. To optimize the management of this
data, new methods are emerging for more efficient information
retrieval. Web Semantics for Textual and Visual Information
Retrieval is a pivotal reference source for the latest academic
research on embedding and associating semantics with multimedia
information to improve data retrieval techniques. Highlighting a
range of pertinent topics such as automation, knowledge discovery,
and social networking, this book is ideally designed for
researchers, practitioners, students, and professionals interested
in emerging trends in information retrieval.
The past few years have seen a major change in computing systems,
as growing data volumes and stalling processor speeds require more
and more applications to scale out to clusters. Today, myriad
data sources, from the Internet to business operations to
scientific instruments, produce large and valuable data streams.
However, the processing capabilities of single machines have not
kept up with the size of data. As a result, organizations
increasingly need to scale out their computations over clusters. At
the same time, the speed and sophistication required of data
processing have grown. In addition to simple queries, complex
algorithms like machine learning and graph analysis are becoming
common. And in addition to batch processing, streaming analysis of
real-time data is required to let organizations take timely action.
Future computing platforms will need not only to scale out traditional workloads but also to support these new applications. This book, a revised version of the 2014 ACM Doctoral Dissertation Award-winning
dissertation, proposes an architecture for cluster computing
systems that can tackle emerging data processing workloads at
scale. Whereas early cluster computing systems, like MapReduce,
handled batch processing, our architecture also enables streaming
and interactive queries, while keeping MapReduce's scalability and
fault tolerance. And whereas most deployed systems only support
simple one-pass computations (e.g., SQL queries), ours also extends
to the multi-pass algorithms required for complex analytics like
machine learning. Finally, unlike the specialized systems proposed
for some of these workloads, our architecture allows these
computations to be combined, enabling rich new applications that
intermix, for example, streaming and batch processing. We achieve
these results through a simple extension to MapReduce that adds
primitives for data sharing, called Resilient Distributed Datasets
(RDDs). We show that this is enough to capture a wide range of
workloads. We implement RDDs in the open source Spark system, which
we evaluate using synthetic and real workloads. Spark matches or
exceeds the performance of specialized systems in many domains,
while offering stronger fault tolerance properties and allowing
these workloads to be combined. Finally, we examine the generality
of RDDs from both a theoretical modeling perspective and a systems
perspective. This version of the dissertation makes corrections
throughout the text and adds a new section on the evolution of
Apache Spark in industry since 2014. In addition, editing,
formatting, and links for the references have been added.
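The data-sharing primitive described above can be seen in a few lines of PySpark. This sketch assumes a local Spark installation and an input file named events.log (both assumptions, not part of the dissertation); it shows the pattern the abstract describes: one cached RDD reused across several passes.

from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

# Load the dataset once and keep it in memory across computations.
lines = sc.textFile("events.log").persist()

# Pass 1: a simple one-pass query (count the error lines).
errors = lines.filter(lambda l: "ERROR" in l).count()

# Pass 2: reuse the same cached RDD, as an iterative algorithm would,
# without re-reading the file from storage.
longest = lines.map(len).reduce(max)

print(errors, longest)
sc.stop()

Because lines is persisted, the second pass reads from cluster memory rather than recomputing from the input, which is exactly the reuse that makes multi-pass algorithms such as machine learning efficient on RDDs.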
High-performance computing (HPC) describes the use of connected
computing units to perform complex tasks. It relies on
parallelization techniques and algorithms to synchronize these disparate units so that they perform faster than a single processor could alone. Used in industries from medicine and research to the military and higher education, this method of computing allows users to complete complex, data-intensive tasks. The field has undergone many changes over the past decade and will continue to
grow in popularity in the coming years. Innovative Research
Applications in Next-Generation High Performance Computing aims to
address the future challenges, advances, and applications of HPC
and related technologies. As the need for such processors
increases, so does the importance of developing new ways to
optimize the performance of these supercomputers. This timely
publication provides comprehensive information for researchers,
students in ICT, program developers, military and government
organizations, and business professionals.
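The scale-out idea can be illustrated on a single machine with Python's standard multiprocessing module; the workload below (summing squares) is a stand-in chosen for illustration, not an example from the book.

from multiprocessing import Pool

def work(chunk):
    # Each worker handles one partition of the data independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    N, workers = 10_000_000, 4
    # Partition the input so the units can proceed in parallel.
    chunks = [range(i, N, workers) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(work, chunks))  # synchronize: combine results
    print(total)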
"What information do these data reveal?" "Is the information
correct?" "How can I make the best use of the information?" The
widespread use of computers and our reliance on the data generated
by them have made these questions increasingly common and
important. Computerized data may be in either digital or analog
form and may be relevant to a wide range of applications that
include medical monitoring and diagnosis, scientific research,
engineering, quality control, seismology, meteorology, political
and economic analysis and business and personal financial
applications. The sources of the data may be databases that have
been developed for specific purposes or may be of more general
interest and include those that are accessible on the Internet. In
addition, the data may represent either single or multiple
parameters. Examining data in its initial form is often laborious and can also cause one to "miss the forest for the trees" by failing to notice patterns in the data that are not readily apparent. To address these problems, this monograph
describes several accurate and efficient methods for displaying,
reviewing and analyzing digital and analog data. The methods may be
used either singly or in various combinations to maximize the value
of the data to those for whom it is relevant. None of the methods
requires special devices and each can be used on common platforms
such as personal computers, tablets, and smartphones. Also, each of
the methods can be easily employed utilizing widely available
off-the-shelf software. Using the methods does not require special
expertise in computer science or technology, graphical design or
statistical analysis. The usefulness and accuracy of all the
described methods of data display, review and interpretation have
been confirmed in multiple carefully performed studies using
independent, objective endpoints. These studies and their results
are described in the monograph. Because of their ease of use,
accuracy and efficiency, the methods for displaying, reviewing and
analyzing data described in this monograph can be highly useful to
all who must work with computerized information and make decisions
based upon it.
Modelling of information is necessary in developing information
systems. Information is acquired from many sources, by using
various methods and tools. It must be recognized, conceptualized,
and conceptually organized efficiently so that users can easily
understand and use it. Modelling is needed to understand, explain,
organize, predict, and reason on information. It also helps to
master the role and functions of components of information systems.
Modelling can be performed with many different purposes in mind, at
different levels, and by using different notions and different
background theories. It can be done by emphasizing users' conceptual understanding of information on a domain level, on an algorithmic level, or on representation levels. On each level, the objects and structures used are different, and different rules govern their behavior. Therefore, the notions, rules, theories, languages, and methods for modelling on different levels are also different. It will be useful to develop theories and methodologies for modelling that can be applied in different situations, because databases, knowledge bases, and repositories in knowledge management systems, all developed on the basis of models and used to store information, are growing day by day. In this publication, the interest is focused on the modelling of information, and one of the central topics is the modelling of time. This book brings together high-quality scientific and technical papers.
Interfaces within computers, computing, and programming are
consistently evolving and continue to be relevant to computer
science as it progresses. Advancements in human-computer interaction, including aesthetic appeal, ease of use, and learnability, are made possible by the creation of user interfaces and result in further growth in science, aesthetics, and practical applications. Interface Support for Creativity, Productivity, and
Expression in Computer Graphics is a collection of innovative
research on usability, the apps humans use, and their sensory
environment. While highlighting topics such as image datasets,
augmented reality, and visual storytelling, this book is ideally
designed for researchers, academicians, graphic designers,
programmers, software developers, educators, multimedia
specialists, and students seeking current research on uniting
digital content with the physicality of the device through
applications, thus addressing sensory perception.