Communication based on the internet of things (IoT) generates huge
amounts of data from sensors over time, which opens a wide range of
applications and areas for researchers. The application of
analytics, machine learning, and deep learning techniques over such
a large volume of data is a very challenging task. Therefore, it is
essential to find patterns, retrieve novel insights, and predict
future behavior using this large amount of sensory data. Artificial
intelligence (AI) has an important role in facilitating analytics
and learning in IoT devices. Applying AI-Based IoT Systems to
Simulation-Based Information Retrieval provides relevant frameworks
and the latest empirical research findings in the area. It is ideal
for professionals who wish to improve their understanding of the
strategic role of AI-driven analytics and learning at different
levels of the information and knowledge society, from global
networks and organizations down to information systems and
individual devices acting in networked environments.
Covering topics such as blockchain visualization, computer-aided
drug discovery, and health monitoring, this premier reference
source is an excellent resource for business leaders and
executives, IT managers, security professionals, data scientists,
students and faculty of higher education, librarians, hospital
administrators, researchers, and academicians.
It is crucial that forensic science meets challenges such as
identifying hidden patterns in data, validating results for
accuracy, and understanding varying criminal activities in order to
remain authoritative and uphold justice and public safety.
Artificial intelligence, with its subsets of machine learning and
deep learning, has the potential to transform the domain of
forensic science by handling diverse data, recognizing patterns,
and analyzing, interpreting, and presenting results. Machine
learning and deep learning frameworks, built on mature mathematical
and computational tools, enable investigators to produce reliable
results. Further study of the potential uses of
these technologies is required to better understand their benefits.
Aiding Forensic Investigation Through Deep Learning and Machine
Learning Frameworks provides an outline of deep learning and
machine learning frameworks and methods for use in forensic science
to produce accurate and reliable results to aid investigation
processes. The book also considers the challenges, developments,
advancements, and emerging approaches of deep learning and machine
learning. Covering key topics such as biometrics, augmented
reality, and fraud investigation, this reference work is crucial
for forensic scientists, law enforcement, computer scientists,
researchers, scholars, academicians, practitioners, instructors,
and students.
DHM and Posturography explores the body of knowledge and
state-of-the-art in digital human modeling, along with its
application in ergonomics and posturography. The book provides an
industry-first, practitioner-focused introductory overview of
human simulation tools, with detailed chapters describing elements
of posture, postural interactions, and fields of application. Thus,
DHM tools and a specific scientific and practical problem, the
study of posture, are linked in a coherent framework. In addition,
sections show how DHM interfaces with the most common physical
devices for posture analysis. Case studies provide the applied
knowledge necessary for practitioners to make informed decisions.
Digital Human Modelling is the science of representing humans with
their physical properties, characteristics and behaviors in
computerized, virtual models. These models can be used standalone,
or integrated with other computerized object-design systems, to
create or evaluate designs, workplaces, or products in their
relationship with humans.
Uncertainty in Data Envelopment Analysis: Fuzzy and Belief
Degree-Based Uncertainties introduces methods to investigate
uncertain data in DEA models, providing a deeper look into two
types of uncertain DEA methods: Fuzzy DEA and Belief Degree Based
Uncertainty DEA, which are based on uncertain measures. These
models aim to solve problems encountered by classical data analysis
in cases where inputs and outputs of systems and processes are
volatile and complex, making measurement difficult. Classical data
envelopment analysis (DEA) models use crisp data in order to
measure inputs and outputs of a given system. Crisp input and
output data are fundamentally indispensable in the conventional DEA
models. If these models can instead accommodate complex, uncertain
data, they become far more useful and practical for decision-makers.
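The crisp baseline that these uncertain models generalize can be made concrete with the classical CCR formulation, which scores each decision-making unit (DMU) by maximizing the ratio of weighted outputs to weighted inputs. The sketch below is a minimal illustration, not code from the book; the three DMUs are invented, and `scipy.optimize.linprog` is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Crisp CCR efficiency of DMU k.

    X: (n, m) inputs, Y: (n, s) outputs. Maximize u.y_k subject to
    v.x_k = 1 and u.y_j - v.x_j <= 0 for every DMU j, with u, v >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: output weights u (s of them), then input weights v (m).
    c = np.concatenate([-Y[k], np.zeros(m)])             # linprog minimizes
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None, :]  # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

# Three hypothetical DMUs, one input and one output each.
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [4.0], [4.0]])
scores = [ccr_efficiency(X, Y, k) for k in range(3)]
```

With crisp data each DMU gets a single exact score (here 1.0, 1.0, and 0.5); the fuzzy and belief-degree models discussed in the book replace these exact inputs and outputs with uncertain quantities.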
Developments and Applications for ECG Signal Processing: Modeling,
Segmentation, and Pattern Recognition covers reliable techniques
for ECG signal processing and their potential to significantly
increase the applicability of ECG use in diagnosis. This book
details a wide range of challenges in the processes of acquisition,
preprocessing, segmentation, mathematical modelling and pattern
recognition in ECG signals, presenting practical and robust
solutions based on digital signal processing techniques. Users will
find this to be a comprehensive resource that contributes to
research on the automatic analysis of ECG signals and extends
resources relating to rapid and accurate diagnoses, particularly
for long-term signals. Chapters cover classical and modern features
of ECG signals, ECG signal acquisition systems,
techniques for noise suppression for ECG signal processing, a
delineation of the QRS complex, mathematical modelling of T- and
P-waves, and the automatic classification of heartbeats.
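To give a flavor of the QRS-delineation material, here is a minimal Pan-Tompkins-style sketch (derivative, squaring, moving-window integration, thresholding) run on a synthetic trace. It is an illustration under simplified assumptions, not the book's algorithm, and the spike positions are invented.

```python
import numpy as np

def detect_qrs(ecg, win=10):
    """Locate QRS complexes via derivative -> squaring -> integration."""
    energy = np.diff(ecg) ** 2                            # emphasize steep slopes
    integ = np.convolve(energy, np.ones(win) / win, mode="same")
    above = integ > 0.5 * integ.max()                     # simple fixed threshold
    beats, start = [], None
    for i, a in enumerate(above):                         # take the center of each
        if a and start is None:                           # supra-threshold run
            start = i
        elif not a and start is not None:
            beats.append((start + i - 1) // 2)
            start = None
    return beats

# Synthetic trace: three triangular "QRS" spikes at known positions.
ecg = np.zeros(1000)
for c in (100, 400, 700):
    ecg[c - 2:c + 3] = [0.2, 0.6, 1.0, 0.6, 0.2]
beats = detect_qrs(ecg)
```

Real recordings add baseline wander, powerline interference, and muscle noise, which is why the noise-suppression chapters precede delineation in the book's pipeline.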
This book provides a snapshot of the state of current research at
the interface between machine learning and healthcare with special
emphasis on machine learning projects that are (or are close to)
achieving improvement in patient outcomes. The book provides
overviews on a range of technologies including detecting
artefactual events in vital signs monitoring data; patient
physiological monitoring; tracking infectious disease; predicting
antibiotic resistance from genomic data; and managing chronic
disease. With contributions from an international panel of leading
researchers, this book will find a place on the bookshelves of
academic and industrial researchers and advanced students working
in healthcare technologies, biomedical engineering, and machine
learning.
There is a significant deficiency in contemporary medical practice:
experts make medical decisions for a large proportion of the
population for which little or no data exists.
Fortunately, our capacity to procure and apply such information is
rapidly rising. As medicine becomes more individualized, the
implementation of health IT and data interoperability becomes an
essential component of delivering quality healthcare. Quality
Assurance in the Era of Individualized Medicine is a collection of
innovative research on the methods and utilization of digital
readouts to fashion an individualized therapy instead of a
mass-population-directed strategy. While highlighting topics
including assistive technologies, patient management, and clinical
practices, this book is ideally designed for health professionals,
doctors, nurses, hospital management, medical administrators, IT
specialists, data scientists, researchers, academicians, and
students.
Deep Learning through Sparse Representation and Low-Rank Modeling
bridges classical sparse and low-rank models, those that emphasize
problem-specific interpretability, with recent deep network models
that have enabled a larger learning capacity and better utilization
of Big Data. It shows how the toolkit of deep learning is closely
tied to sparse/low-rank methods and algorithms, providing a
rich variety of theoretical and analytic tools to guide the design
and interpretation of deep learning models. The development of the
theory and models is supported by a wide variety of applications in
computer vision, machine learning, signal processing, and data
mining. This book will be highly useful for researchers, graduate
students and practitioners working in the fields of computer
vision, machine learning, signal processing, optimization and
statistics.
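The bridge the book draws can be seen in the iterative shrinkage-thresholding algorithm (ISTA) for sparse coding: each iteration is a linear map followed by a pointwise nonlinearity, i.e. structurally one layer of a feed-forward network (the observation behind LISTA-style learned solvers). The following is a minimal numpy sketch on invented data, not code from the book.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: ISTA's 'activation function'."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=100):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step (linear map), then shrinkage (nonlinearity).
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# With A = I the minimizer is soft_threshold(b, lam) in closed form,
# which makes the result easy to check by hand.
A = np.eye(3)
b = np.array([3.0, 0.5, -2.0])
x = ista(A, b, lam=1.0)
```

Unrolling a fixed number of these iterations and learning the linear operators end-to-end yields exactly the kind of interpretable deep architecture the book studies.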
Artificial Intelligence in the Age of Neural Networks and Brain
Computing demonstrates that the existing disruptive implications
and applications of AI are a development of the unique attributes of
neural networks, mainly machine learning, distributed
architectures, massive parallel processing, black-box inference,
intrinsic nonlinearity and smart autonomous search engines. The
book covers the major basic ideas of brain-like computing behind
AI, provides a framework to deep learning, and launches novel and
intriguing paradigms as future alternatives. The success of
AI-based commercial products proposed by top industry leaders, such
as Google, IBM, Microsoft, Intel and Amazon can be interpreted
using this book.
Blockchain technology allows value exchange without the need for a
central authority and ensures trust powered by its decentralized
architecture. As such, the growing use of the internet of things
(IoT) and the rise of artificial intelligence (AI) stand to benefit
immensely from this technology, which can offer devices and
applications data security, decentralization, accountability, and
reliable authentication. Bringing together blockchain technology,
AI, and IoT allows these tools to complement one another's
strengths and compensate for their weaknesses, making systems more
efficient.
Multidisciplinary Functions of Blockchain Technology in AI and IoT
Applications deliberates upon prospects of blockchain technology
using AI and IoT devices in various application domains. This book
contains a comprehensive collection of chapters on machine
learning, IoT, and AI in areas that include security issues of IoT,
farming, supply chain management, predictive analytics, and natural
language processing. While highlighting these areas, the book is
ideally intended for IT industry professionals, students of
computer science and software engineering, computer scientists,
practitioners, stakeholders, researchers, and academicians
interested in updated and advanced research surrounding the
functions of blockchain technology in AI and IoT applications
across diverse fields of research.
During these uncertain and turbulent times, intelligent
technologies including artificial neural networks (ANN) and machine
learning (ML) have played an incredible role in being able to
predict, analyze, and navigate unprecedented circumstances across a
number of industries, ranging from healthcare to hospitality.
Multi-factor prediction in particular has been especially helpful
in dealing with the most current pressing issues such as COVID-19
prediction, pneumonia detection, cardiovascular diagnosis and
disease management, automobile accident prediction, and vacation
rental listing analysis. To date, there has not been much research
content readily available in these areas, especially content
written extensively from a user perspective. Biomedical and
Business Applications Using Artificial Neural Networks and Machine
Learning is designed to cover a brief and focused range of
essential topics in the field with perspectives, models, and
first-hand experiences shared by prominent researchers, discussing
applications of artificial neural networks (ANN) and machine
learning (ML) in biomedical and business applications, along with
summaries of currently available open-source software for neural
networks, machine learning, and artificial
intelligence. The book is ideal for
professionals, researchers, students, and practitioners who want to
more fully understand in a brief and concise format the realm and
technologies of artificial neural networks (ANN) and machine
learning (ML) and how they have been used for prediction of
multi-disciplinary research problems in a multitude of disciplines.
Source Separation and Machine Learning presents the fundamentals in
adaptive learning algorithms for Blind Source Separation (BSS) and
emphasizes the importance of machine learning perspectives. It
illustrates how BSS problems are tackled through adaptive learning
algorithms and model-based approaches using the latest information
on mixture signals to build a BSS model that is seen as a
statistical model for a whole system. Looking at different models,
including independent component analysis (ICA), nonnegative matrix
factorization (NMF), nonnegative tensor factorization (NTF), and
deep neural network (DNN), the book addresses how they have evolved
to deal with multichannel and single-channel source separation.
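As a small taste of the ICA material, the sketch below separates two artificially mixed signals with `sklearn.decomposition.FastICA`; the sources and mixing matrix are invented for illustration, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hypothetical sources: a sinusoid and a square wave.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]

# Linear instantaneous mixture, the standard BSS observation model.
A = np.array([[1.0, 0.5], [0.5, 1.0]])
X = S @ A.T

# ICA recovers the sources up to permutation, sign, and scale.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Match each true source to its best-correlated estimate.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
```

ICA of this kind needs at least as many sensors as sources; the NMF, NTF, and DNN models covered later in the book are what make the harder single-channel case tractable.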
Due to the increasing availability of affordable internet services,
the number of users, and the need for a wider range of
multimedia-based applications, internet usage is on the rise. With
so many users and such a large amount of data, the requirement to
analyze large data sets leads to the need for further
advancements in information processing. Big Data Processing with
Hadoop is an essential reference source that discusses possible
solutions for millions of users working with a variety of data
applications, who expect fast turnaround responses, but encounter
issues with processing data at the rate it comes in. Featuring
research on topics such as market basket analytics, scheduler load
simulator, and writing YARN applications, this book is ideally
designed for IoT professionals, students, and engineers seeking
coverage on many of the real-world challenges regarding big data.
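Market basket analytics, one of the featured topics, boils down to counting item co-occurrences across transactions. The toy sketch below shows that core computation on invented data; Hadoop-style MapReduce performs the same map (emit pairs per basket) and reduce (sum counts) steps, distributed across nodes, when transactions number in the millions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transactions; at scale these would be streamed from HDFS.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

# "Map" each basket to its item pairs, then "reduce" by summing counts.
pair_counts = Counter(
    pair
    for basket in transactions
    for pair in combinations(sorted(basket), 2)
)

# Support: the fraction of transactions containing each pair.
support = {pair: n / len(transactions) for pair, n in pair_counts.items()}
```

The sorting inside each basket ensures a pair like (bread, milk) is always keyed the same way, which is exactly the canonicalization a distributed reducer relies on.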