Communication based on the internet of things (IoT) generates huge
amounts of data from sensors over time, which opens a wide range of
applications and areas for researchers. The application of
analytics, machine learning, and deep learning techniques over such
a large volume of data is a challenging task, yet it is essential in order to find patterns, retrieve novel insights, and predict future behavior from this sensory data. Artificial intelligence (AI) plays an important role in facilitating analytics and learning on IoT devices. Applying AI-Based IoT Systems to Simulation-Based Information Retrieval provides relevant frameworks and the latest empirical research findings in the area.
Covering topics such as blockchain visualization, computer-aided
drug discovery, and health monitoring, this premier reference
source is an excellent resource for business leaders and
executives, IT managers, security professionals, data scientists,
students and faculty of higher education, librarians, hospital
administrators, researchers, and academicians.
It is crucial that forensic science meets challenges such as identifying hidden patterns in data, validating results for accuracy, and understanding varying criminal activities in order to remain authoritative and uphold justice and public safety. Artificial intelligence, with its subsets of machine learning and deep learning, has the potential to transform the domain of forensic science by handling diverse data, recognizing patterns, and analyzing, interpreting, and presenting results. Machine learning and deep learning frameworks, built on well-developed mathematical and computational tools, enable investigators to produce reliable results. Further study on the potential uses of
these technologies is required to better understand their benefits.
Aiding Forensic Investigation Through Deep Learning and Machine
Learning Frameworks provides an outline of deep learning and
machine learning frameworks and methods for use in forensic science
to produce accurate and reliable results to aid investigation
processes. The book also considers the challenges, developments,
advancements, and emerging approaches of deep learning and machine
learning. Covering key topics such as biometrics, augmented
reality, and fraud investigation, this reference work is crucial
for forensic scientists, law enforcement, computer scientists,
researchers, scholars, academicians, practitioners, instructors,
and students.
Intelligent technologies have emerged as imperative tools in
computer science and information security. However, advanced computing practices have given rise to new methods of attack on the storage and transmission of data. Developing approaches such as
image processing and pattern recognition are susceptible to
breaches in security. Modern protection methods for these
innovative techniques require additional research. The Handbook of
Research on Intelligent Data Processing and Information Security
Systems provides emerging research exploring the theoretical and
practical aspects of cyber protection and applications within
computer science and telecommunications. Special attention is paid to data encryption, steganography, image processing, and recognition. The book targets professionals who want to improve their
knowledge in order to increase strategic capabilities and
organizational effectiveness. As such, this book is ideal for
analysts, programmers, computer engineers, software engineers,
mathematicians, data scientists, developers, IT specialists,
academicians, researchers, and students within fields of
information technology, information security, robotics, artificial
intelligence, image processing, computer science, and
telecommunications.
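To make one of the named topics concrete, the following is a minimal, generic sketch of least-significant-bit (LSB) steganography on a grayscale image array. It is an illustrative textbook example rather than a method from the handbook, and the image data, message, and helper names are hypothetical.

```python
# A minimal, generic sketch of least-significant-bit (LSB) steganography on a
# grayscale image array.  It offers no security by itself (in practice it would
# be combined with encryption); the image and message below are hypothetical.
import numpy as np

def embed(cover, message):
    """Hide `message` (bytes) in the least significant bits of `cover` (uint8)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()                      # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("cover image too small for message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return flat.reshape(cover.shape)

def extract(stego, n_bytes):
    """Read back `n_bytes` hidden by `embed`."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed(cover, b"hidden")
print(extract(stego, 6))                        # b'hidden'
```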
Brain-machine interfacing or brain-computer interfacing (BMI/BCI)
is an emerging and challenging technology used in engineering and
neuroscience. The ultimate goal is to provide a pathway from the
brain to the external world via mapping, assisting, augmenting or
repairing human cognitive or sensory-motor functions. In this book
an international panel of experts introduce signal processing and
machine learning techniques for BMI/BCI and outline their practical
and future applications in neuroscience, medicine, and
rehabilitation, with a focus on EEG-based BMI/BCI methods and
technologies. Topics covered include discriminative learning of
connectivity patterns of EEG; feature extraction from EEG
recordings; EEG signal processing; transfer learning algorithms in
BCI; convolutional neural networks for event-related potential
detection; spatial filtering techniques for improving individual
template-based SSVEP detection; feature extraction and
classification algorithms for image RSVP based BCI; decoding music
perception and imagination using deep learning techniques;
neurofeedback games using EEG-based brain-computer interface technology; affective computing systems; and more.
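As a small illustration of the kind of feature extraction and classification pipeline listed above, the following sketch computes band-power features from EEG epochs and trains a linear classifier. It is a generic example, not a method from the book; the sampling rate, frequency bands, and synthetic data are assumptions.

```python
# Minimal sketch: band-power features from EEG epochs + a linear classifier.
# Synthetic data; band choices (mu/beta) and sampling rate are illustrative.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                                  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

def band_power(epoch, lo, hi):
    """Average PSD of each channel within [lo, hi] Hz."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)

def features(epoch):
    # Concatenate mu-band (8-12 Hz) and beta-band (18-26 Hz) power per channel.
    return np.r_[band_power(epoch, 8, 12), band_power(epoch, 18, 26)]

# Synthetic "epochs": 100 trials x 8 channels x 2 s of noise; class 1 gets
# extra 10 Hz power on the first channels so the toy task is learnable.
epochs = rng.standard_normal((100, 8, 2 * fs))
labels = np.repeat([0, 1], 50)
t = np.arange(2 * fs) / fs
epochs[labels == 1, :4, :] += 0.8 * np.sin(2 * np.pi * 10 * t)

X = np.array([features(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X[::2], labels[::2])   # train on half
print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))
```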
DHM and Posturography explores the body of knowledge and
state-of-the-art in digital human modeling, along with its
application in ergonomics and posturography. The book provides an
industry-first, practitioner-focused introductory overview of
human simulation tools, with detailed chapters describing elements
of posture, postural interactions, and fields of application. Thus,
DHM tools and a specific scientific/practical problem - the study
of posture - are linked in a coherent framework. In addition,
sections show how DHM interfaces with the most common physical
devices for posture analysis. Case studies provide the applied
knowledge necessary for practitioners to make informed decisions.
Digital Human Modeling is the science of representing humans with
their physical properties, characteristics and behaviors in
computerized, virtual models. These models can be used standalone,
or integrated with other computerized object design systems, to
design or study designs, workplaces or products in their
relationship with humans.
Geoinformatics for Geosciences: Advanced Geospatial Analysis using
RS, GIS and Soft Computing is a comprehensive guide to the
methodologies and techniques that can be used in Earth observation
data assessments, geospatial analysis, and soft computing in the
geosciences. The book covers a variety of spatiotemporal problems
and topics in the areas of the environment, geohazards, urban
analysis, health, pollution, climate change, resources and
geomorphology, among others. Sections cover environmental and climate issues, the analysis of geomorphological data, hazard and disaster impacts, natural and human resources, the influence of environmental conditions, and socioeconomic challenges. Detailing up-to-date techniques in geoinformatics, this book offers in-depth methodologies for researchers and academics to
understand how contemporary data can be combined with innovative
techniques and tools in order to address challenges in the
geosciences.
Uncertainty in Data Envelopment Analysis: Fuzzy and Belief
Degree-Based Uncertainties introduces methods to investigate
uncertain data in DEA models, providing a deeper look into two
types of uncertain DEA methods: Fuzzy DEA and Belief Degree Based
Uncertainty DEA, which are based on uncertain measures. These
models aim to solve problems encountered by classical data analysis
in cases where inputs and outputs of systems and processes are
volatile and complex, making measurement difficult. Classical data
envelopment analysis (DEA) models use crisp data in order to
measure inputs and outputs of a given system. Crisp input and
output data are fundamentally indispensable to conventional DEA models; models that can instead accommodate complex, uncertain data become far more useful and practical for decision-makers.
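For readers unfamiliar with the classical model these methods extend, the following is a minimal sketch of a crisp, input-oriented CCR DEA model solved as a linear program. The data are hypothetical and the formulation is the standard textbook one, not the fuzzy or belief-degree extensions developed in the book.

```python
# A minimal sketch of the classical (crisp) input-oriented CCR DEA model,
# solved as a linear program with SciPy.  All data values are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows are decision-making units (DMUs).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs
n, m = X.shape        # number of DMUs, number of inputs
s = Y.shape[1]        # number of outputs

def ccr_efficiency(o):
    """Efficiency score of DMU o under the envelopment-form CCR model."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun    # theta = 1.0 means the DMU lies on the efficient frontier

print([round(ccr_efficiency(o), 3) for o in range(n)])
```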
Recently, artificial intelligence (AI), the internet of things
(IoT), and cognitive technologies have successfully been applied to
various research domains, including computer vision, natural
language processing, voice recognition, and more. In addition, AI
with IoT has made a significant breakthrough and a shift in
technical direction to achieve high efficiency and adaptability in
a variety of new applications. On the other hand, network design
and optimization for AI applications addresses a complementary
topic, namely the support of AI-based systems through novel
networking techniques, including new architectures, as well as
performance models for IoT systems. IoT has paved the way to a
plethora of new application domains, at the same time posing
several challenges as a multitude of devices, protocols,
communication channels, architectures, and middleware exist. Big data generated by these devices calls for advanced learning and data mining techniques, such as cognitive technologies, to effectively understand, learn, and reason with this volume of information.
Cognitive technologies play a major role in developing successful
cognitive systems which mimic "cognitive" functions associated with human intelligence, such as "learning" and "problem solving." Thus, there is a continuing demand for recent research
in these two linked fields. Innovations and Applications of AI,
IoT, and Cognitive Technologies discusses the latest innovations
and applications of AI, IoT, and cognitive-based smart systems. The
chapters cover the intersection of these three fields in emerging
and developed economies in terms of their respective development
situation, public policies, technologies and intellectual capital,
innovation systems, competition and strategies, marketing and
growth capability, and governance and regulation models. These
applications span areas such as healthcare, security and privacy,
industrial systems, multidisciplinary sciences, and more. This book
is ideal for technologists, IT specialists, policymakers,
government officials, academics, students, and practitioners
interested in the experiences of innovations and applications of
AI, IoT, and cognitive technologies.
Updates the premier textbook for students and librarians needing to
know the landscape of current databases and how to search them.
Librarians need to know of existing databases, and they must be
able to teach search capabilities and strategies to library users.
This practical guide introduces librarians to a broad spectrum of
fee-based and freely available databases and explains how to teach
them. The updated 6th edition of this well-regarded text covers new
databases on the market as well as updates to older databases. It
also explains underlying information structures and demonstrates
how to search most effectively. It introduces readers to several
recent changes, such as the move away from metadata-based indexing
to full-text indexing by vendors covering newspaper content.
Business databases receive greater emphasis. As in the previous
edition, this book takes a real-world approach, covering topics
from basic and advanced search tools to online subject databases.
Each chapter includes a thorough discussion, a recap, concrete
examples, exercises, and points to consider, making it an ideal
text for courses in database searching as well as a trustworthy
professional resource. Helps librarians and students understand the latest developments in library databases. Looks not only at textual databases but also at numerical, image, video, and social media resources. Includes changes and trends in database functionality since the 5th edition.
Developments and Applications for ECG Signal Processing: Modeling,
Segmentation, and Pattern Recognition covers reliable techniques
for ECG signal processing and their potential to significantly
increase the applicability of ECG use in diagnosis. This book
details a wide range of challenges in the processes of acquisition,
preprocessing, segmentation, mathematical modelling and pattern
recognition in ECG signals, presenting practical and robust
solutions based on digital signal processing techniques. Users will
find this to be a comprehensive resource that contributes to
research on the automatic analysis of ECG signals and extends
resources relating to rapid and accurate diagnoses, particularly
for long-term signals. Chapters cover classical and modern features of ECG signals, ECG signal acquisition systems, techniques for noise suppression in ECG signal processing, delineation of the QRS complex, mathematical modelling of T- and P-waves, and the automatic classification of heartbeats.
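As a small illustration of two of the processing stages mentioned above (noise suppression and QRS detection), the following sketch band-pass filters a synthetic single-lead ECG and locates R-peaks. The signal, filter band, and thresholds are hypothetical, and the approach is a simplified generic one rather than the book's algorithms.

```python
# Minimal sketch: suppress drift/noise in a single-lead ECG and locate R-peaks.
# Simplified, generic illustration; synthetic signal and parameters are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 360.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Crude synthetic "ECG": periodic spikes plus baseline drift and noise.
ecg = (np.sin(2 * np.pi * 1.2 * t) > 0.99).astype(float)
ecg += 0.3 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)

# Band-pass filter (roughly 5-15 Hz) to emphasise the QRS complex.
b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, ecg)

# R-peak detection on the squared, filtered signal.
energy = filtered ** 2
peaks, _ = find_peaks(energy,
                      height=0.5 * energy.max(),   # hypothetical threshold
                      distance=int(0.25 * fs))     # refractory period ~250 ms
heart_rate = 60.0 * fs / np.diff(peaks).mean()
print(f"{len(peaks)} beats detected, ~{heart_rate:.0f} bpm")
```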
Contemporary medical practice suffers from a significant deficiency: experts must make medical decisions for a large proportion of the population for which little or no data exists.
Fortunately, our capacity to procure and apply such information is
rapidly rising. As medicine becomes more individualized, the
implementation of health IT and data interoperability become
essential components to delivering quality healthcare. Quality
Assurance in the Era of Individualized Medicine is a collection of
innovative research on the methods and utilization of digital
readouts to fashion an individualized therapy instead of a
mass-population-directed strategy. While highlighting topics
including assistive technologies, patient management, and clinical
practices, this book is ideally designed for health professionals,
doctors, nurses, hospital management, medical administrators, IT
specialists, data scientists, researchers, academicians, and
students.
Deep Learning through Sparse Representation and Low-Rank Modeling
bridges classical sparse and low-rank models - those that emphasize problem-specific interpretability - with recent deep network models that have enabled a larger learning capacity and better utilization of big data. It shows how the toolkit of deep learning is closely tied to sparse/low-rank methods and algorithms, providing a
rich variety of theoretical and analytic tools to guide the design
and interpretation of deep learning models. The development of the
theory and models is supported by a wide variety of applications in
computer vision, machine learning, signal processing, and data
mining. This book will be highly useful for researchers, graduate
students and practitioners working in the fields of computer
vision, machine learning, signal processing, optimization and
statistics.
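To show the sparse-representation side of this connection, the following is a minimal sketch of ISTA (iterative soft-thresholding) for sparse coding; unrolling such iterations and learning their matrices is the basic idea behind networks like LISTA. The dictionary, signal, and parameters here are hypothetical, and the example is generic rather than a formulation taken from the book.

```python
# Minimal sketch of ISTA for the Lasso:  min_z 0.5*||x - D z||^2 + lam*||z||_1.
# Generic NumPy example with a hypothetical dictionary and synthetic data.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(x, D, lam=0.1, n_iter=200):
    """Return a sparse code z such that D @ z approximates x."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the smooth term
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))         # hypothetical overcomplete dictionary
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
z_true = np.zeros(256)
z_true[rng.choice(256, size=8, replace=False)] = rng.standard_normal(8)
x = D @ z_true + 0.01 * rng.standard_normal(64)

z_hat = ista(x, D, lam=0.05)
print("non-zeros recovered:", np.count_nonzero(np.abs(z_hat) > 1e-3))
```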