Quality assurance is essential to the success of corporations
worldwide. Consistent quality requirements across organizations of
similar types ensure that these requirements can be evaluated
accurately and easily. Shaping the Future Through
Standardization is an essential scholarly book that examines
quality and standardization within diverse organizations globally
with a special focus on future perspectives, including how
standards and standardization may shape the future. Featuring a
wide range of topics such as economics, pedagogy, and management,
this book is ideal for academicians, researchers, decision makers,
policymakers, managers, corporate professionals, and students.
With great didactic skill, the authors succeed in awakening
enthusiasm for the world of secret messages, making the entry into
cryptology very easy. Numerous examples and exercises encourage
readers to explore this fascinating field on their own and help
them deepen the material they have learned.
RDF-based knowledge graphs require additional formalisms to be
fully context-aware; this book presents such formalisms. It
also provides a collection of provenance techniques and
state-of-the-art metadata-enhanced, provenance-aware, knowledge
graph-based representations across multiple application domains, in
order to demonstrate how to combine graph-based data models and
provenance representations. This is important to make statements
authoritative, verifiable, and reproducible, such as in biomedical,
pharmaceutical, and cybersecurity applications, where the data
source and generator can be just as important as the data itself.
Capturing provenance is critical to ensure sound experimental
results and rigorously designed research studies for patient and
drug safety, pathology reports, and medical evidence generation.
Similarly, provenance is needed for cyberthreat intelligence
dashboards and attack maps that aggregate and/or fuse heterogeneous
data from disparate data sources to differentiate between
unimportant online events and dangerous cyberattacks, which is
demonstrated in this book. Without provenance, data reliability and
trustworthiness might be limited, causing data reuse, trust,
reproducibility and accountability issues. This book primarily
targets researchers who utilize knowledge graphs in their methods
and approaches (this includes researchers from a variety of
domains, such as cybersecurity, eHealth, data science, Semantic
Web, etc.). This book collects core facts for the state of the art
in provenance approaches and techniques, complemented by a critical
review of existing approaches. New research directions that combine
data science and knowledge graphs are also provided, addressing an
increasingly important research topic.
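The named-graph idea behind provenance-aware knowledge graphs can be sketched in plain Python: each RDF-style statement is stored as a quad whose fourth element keys a provenance record naming the data source and generator. All identifiers and record fields below are illustrative, not a specific library's API.

```python
# Minimal sketch of a provenance-aware triple store: each statement is
# a quad (subject, predicate, object, graph_id), and the graph_id keys
# a separate provenance record. All names here are illustrative.

quads = [
    ("ex:aspirin", "ex:treats", "ex:headache", "g1"),
    ("ex:attackA", "ex:targets", "ex:serverX", "g2"),
]

provenance = {
    "g1": {"source": "ex:drug-db", "generatedBy": "ex:curation-tool",
           "date": "2020-01-01"},
    "g2": {"source": "ex:ids-feed", "generatedBy": "ex:alert-fuser",
           "date": "2020-02-15"},
}

def statements_from(source):
    """Return all statements whose provenance names the given source."""
    return [(s, p, o) for (s, p, o, g) in quads
            if provenance[g]["source"] == source]

print(statements_from("ex:drug-db"))
# -> [('ex:aspirin', 'ex:treats', 'ex:headache')]
```

Keeping provenance in a separate, queryable structure is what lets an application filter or rank statements by their origin, e.g. trusting a curated biomedical database over an unverified feed.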
Interactivity, Game Creation, Design, Learning, and Innovation
- 8th EAI International Conference, ArtsIT 2019, and 4th EAI International Conference, DLI 2019, Aalborg, Denmark, November 6-8, 2019, Proceedings
(Paperback, 1st ed. 2020)
Anthony Brooks, Eva Irene Brooks
This book constitutes the refereed post-conference proceedings of
two conferences: The 8th EAI International Conference on ArtsIT,
Interactivity and Game Creation (ArtsIT 2019), and the 4th EAI
International Conference on Design, Learning, and Innovation (DLI
2019). Both conferences were hosted in Aalborg, Denmark, and took
place November 6-8, 2019. The 61 revised full papers presented were
carefully selected from 98 submissions. The papers represent a
forum for the dissemination of cutting-edge research results in the
areas of arts, design, and technology, including related open
topics such as interactivity and game creation.
Enhance your data science programming and analysis with the Wolfram
programming language and Mathematica, a suite of applied
mathematical tools. The book will introduce you to the Wolfram programming
language and its syntax, as well as the structure of Mathematica
and its advantages and disadvantages. You'll see how to use the
Wolfram language for data science from a theoretical and practical
perspective. Learning this language improves your data science
code: it is intuitive and comes with built-in functions that ease
the transition for users of other programming languages. You'll
cover how to use Mathematica
where data management and mathematical computations are needed.
Along the way you'll appreciate how Mathematica provides a complete
integrated platform: it has a mixed syntax as a result of its
symbolic and numerical calculations allowing it to carry out
various processes without superfluous lines of code. You'll learn
to use its notebooks as a standard format, which also serves to
create detailed reports of the processes carried out.
What You Will Learn:
- Use Mathematica to explore data and describe concepts using Wolfram language commands
- Create datasets, work with data frames, and create tables
- Import, export, analyze, and visualize data
- Work with the Wolfram data repository
- Build reports on the analysis
- Use Mathematica for machine learning, with different algorithms, including linear, multiple, and logistic regression; decision trees; and data clustering
Who This Book Is For: Data scientists new to using Wolfram and
Mathematica as a language/tool to program in. Programmers should
have some prior programming experience, but can be new to the
Wolfram language.
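For comparison, the simple linear regression listed among the machine-learning topics can be sketched with closed-form least squares; the version below is plain Python rather than Wolfram code, and the data points are illustrative.

```python
# A minimal sketch of simple linear regression via closed-form least
# squares, the kind of model covered in the book; shown in plain
# Python for comparison. The data points are illustrative.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the squared error."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # exactly y = 2x
print(fit_line(xs, ys))      # -> (2.0, 0.0)
```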
Information Processing and Management of Uncertainty in Knowledge-Based Systems
- 18th International Conference, IPMU 2020, Lisbon, Portugal, June 15-19, 2020, Proceedings, Part I
(Paperback, 1st ed. 2020)
Marie-Jeanne Lesot, Susana Vieira, Marek Z Reformat, Joao Paulo Carvalho, Anna Wilbik, …
This three volume set (CCIS 1237-1239) constitutes the proceedings
of the 18th International Conference on Information Processing and
Management of Uncertainty in Knowledge-Based Systems, IPMU 2020,
held in June 2020. The conference was scheduled to take place in
Lisbon, Portugal, at the University of Lisbon, but due to the
COVID-19 pandemic it was held virtually. The 173 papers were
carefully reviewed and
selected from 213 submissions. The papers are organized in topical
sections: homage to Enrique Ruspini; invited talks; foundations and
mathematics; decision making, preferences and votes; optimization
and uncertainty; games; real world applications; knowledge
processing and creation; machine learning I; machine learning II;
XAI; image processing; temporal data processing; text analysis and
processing; fuzzy interval analysis; theoretical and applied
aspects of imprecise probabilities; similarities in artificial
intelligence; belief function theory and its applications;
aggregation: theory and practice; aggregation: pre-aggregation
functions and other generalizations of monotonicity; aggregation:
aggregation of different data structures; fuzzy methods in data
mining and knowledge discovery; computational intelligence for
logistics and transportation problems; fuzzy implication functions;
soft methods in statistics and data analysis; image understanding
and explainable AI; fuzzy and generalized quantifier theory;
mathematical methods towards dealing with uncertainty in applied
sciences; statistical image processing and analysis, with
applications in neuroimaging; interval uncertainty; discrete models
and computational intelligence; current techniques to model,
process and describe time series; mathematical fuzzy logic and
graded reasoning models; formal concept analysis, rough sets,
general operators and related topics; computational intelligence
methods in information modelling, representation and processing.
The advancement of technology in today's world has led to the
progression of several professional fields. This includes the
classroom, as teachers have begun using new technological
strategies to increase student involvement and motivation. ICT
innovation including virtual reality and blended learning methods
has changed the scope of classroom environments across the globe;
however, significant research is lacking in this area. ICTs and
Innovation for Didactics of Social Sciences is a fundamental
reference focused on didactics of social sciences and ICTs
including issues related to innovation, resources, and strategies
for teachers that can link to the transformation of social sciences
teaching and learning as well as societal transformation. While
highlighting topics such as blended learning, augmented reality,
and virtual classrooms, this book is ideally designed for
researchers, administrators, educators, practitioners, and students
interested in understanding current, relevant ICT resources and
innovative strategies for the didactics of social sciences, as well
as didactic possibilities relating to concrete conceptual contents,
problem solving, planning, decision making, development of social
skills, attention, and motivation, all promoting a necessary
technological literacy.
Binary detection is a ubiquitous problem in virtually all branches
of science and technology. In many cases, binary detection must be
carried out in a setting wherein the presence of an adversary
aiming at inducing a wrong decision cannot be ruled out.
Applications include network monitoring, spam filtering, multimedia
forensics, video surveillance and biometric authentication to name
but a few. In these cases, the attack is carried out at the time of
testing. With the advent of widespread machine learning tools, the
attacker can also act during the learning phase, making the attack
harder to detect. The main idea behind adversarial detection theory is to
cast the detection problem into a game-theoretic framework. This
allows the goals and the actions available to the two contenders to
be rigorously defined. In this monograph, the authors address
several variants of a general adversarial binary detection problem,
depending on the knowledge available to the Defender and the
Attacker of the statistical characterization of a system. They lead
the reader through the considerations and solutions under two
hypotheses, using a framework that can be adopted in many
applications. This monograph is aimed at students, researchers and
practitioners working in the application areas who want an
accessible introduction to the theory behind Adversarial Binary
Detection and possible solutions to their particular problems.
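The non-adversarial baseline that the game-theoretic variants build on is the classical likelihood-ratio test, which can be sketched as follows. The Gaussian models under the two hypotheses and the threshold are illustrative assumptions, not the monograph's specific setup.

```python
# A minimal sketch of the non-adversarial baseline for binary
# detection: the likelihood-ratio test between two hypotheses.
# The Gaussian models and the threshold are illustrative assumptions.

import math

def gaussian_pdf(x, mean, std):
    return (math.exp(-0.5 * ((x - mean) / std) ** 2)
            / (std * math.sqrt(2 * math.pi)))

def detect(x, threshold=1.0):
    """Decide H1 (signal/attack present) if the likelihood ratio exceeds the threshold."""
    ratio = gaussian_pdf(x, 1.0, 1.0) / gaussian_pdf(x, 0.0, 1.0)  # H1 vs H0
    return 1 if ratio > threshold else 0

print(detect(2.0))   # far from H0's mean -> 1
print(detect(-1.0))  # close to H0's mean -> 0
```

An adversary aware of this rule can shape its samples to keep the likelihood ratio below the threshold, which is precisely why the detection problem is recast as a game between Defender and Attacker.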
In recent years, many components of modern infrastructure, such as
transportation systems, power systems, climate and environment
monitoring systems, education systems, and even government, have
become increasingly interconnected through information networks. Central
to the functioning of current-day information networks are
strategies that facilitate distributed network information
processing objectives. In this monograph, the authors address the
overarching challenge of designing efficient information processing
strategies from a fundamental network information theory viewpoint.
The authors address several network communication problems which
can be considered as building blocks of networks. They consider
these problems from both the data transmission and the data storage
perspectives. They devise structured coding schemes for the finite
alphabet cases of these problems and for each problem provide at
least one example where they prove that the structured coding
scheme is optimal, whereas the unstructured coding scheme is
strictly suboptimal. Toward studying the information-theoretic
performance limits in each of these communication scenarios, they
consider two key concepts: common information and code structure.
They uncover a new fundamental connection between them, and develop
the key elements of a unified coding framework. This monograph is
aimed at students, researchers and practitioners in information
theory and communications. It provides an in-depth discussion of
the theory and techniques resulting in a framework that the reader
can apply to further their own work.
In recent years, many developing regions across the globe have made
rigorous efforts to become integrated into the global information
society. The development and implementation of information
communication technology (ICT) devices and policies within various
fields of service have significantly aided in the infrastructural
progression of these countries. Despite these considerable
advancements, there remains a lack of research and awareness on
this imperative subject. Developing Countries and Technology
Inclusion in the 21st Century Information Society is an essential
reference source that discusses the adoption and impact of ICT
tools in developing areas of the world as well as specific
challenges and sustainable uses within various professional fields.
Featuring research on topics such as policy development, gender
differences, and international business, this book is ideally
designed for educators, policymakers, researchers, librarians,
practitioners, scientists, government officials, and students
seeking coverage on modern applications of ICT services in
developing countries.
The authors of this monograph survey a suite of techniques based on
the theory of polynomials, collectively referred to as polynomial
methods. These techniques provide useful tools not only for the
design of highly practical algorithms with provable optimality, but
also for establishing the fundamental limits of inference problems
through moment matching. The authors demonstrate the effectiveness
of the polynomial method using concrete problems such as entropy
and support size estimation, distinct elements problem, and
learning Gaussian mixture models. This monograph provides a
comprehensive, yet concise, overview of the theory covering topics
such as polynomial approximation, polynomial interpolation and
majorization, moment space and positive polynomials, orthogonal
polynomials and Gaussian quadrature. The authors proceed to show
the applications of the theory in statistical inference. Polynomial
Methods in Statistical Inference provides students and researchers
with an accessible and complete treatment of a subject that has
recently been used to solve many challenging problems in
statistical inference.
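One of the building blocks named above, polynomial interpolation, can be sketched concretely: the unique degree-(n-1) polynomial through n points is evaluated via Lagrange basis polynomials. The sample points below are illustrative.

```python
# Minimal sketch of polynomial interpolation, a building block of the
# polynomial method: evaluate at x the unique degree-(n-1) polynomial
# through n given points, using Lagrange basis polynomials.

def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial through the (xi, yi) points."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three points on y = x**2; the interpolant reproduces the quadratic.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]
print(lagrange_interpolate(pts, 3.0))  # -> 9.0
```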
Recent years have witnessed a rapid growth of large-scale machine
learning and big data analytics, facilitating the developments of
data intensive applications like voice/image recognition, real-time
mapping services, autonomous driving, social networks, and
augmented/virtual reality. These applications are supported by
cloud infrastructures composed of large datacenters. The
large-scale distributed machine learning/data analytics systems
provide the necessary processing power to handle these
applications, but suffer from three major performance bottlenecks:
communication, stragglers, and security. In this ground-breaking
monograph, the
authors introduce the novel concept of Coded Computing. Coded
Computing exploits coding theory to optimally inject and leverage
data/task redundancy in distributed computing systems, creating
coding opportunities to overcome the bottlenecks. After introducing
the reader to the core of the problem, the authors describe in
detail each of the bottlenecks that can be overcome using Coded
Computing. The monograph provides an accessible introduction into
how this new technique can be used in developing large-scale
computing systems.
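The straggler bottleneck and the coded-redundancy idea can be sketched with a classic toy example: split a matrix across two workers and give a third worker the coded task formed by their sum, so any two of the three results recover the full product. The matrices below are illustrative.

```python
# Illustrative sketch of coded computing for straggler mitigation:
# split A into A1, A2 and give a third worker the coded task (A1 + A2).
# Any TWO of the three worker results suffice to recover A @ x,
# so one straggler can be ignored. All data here is illustrative.

def mat_vec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

def mat_add(M, N):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(M, N)]

A1 = [[1, 2], [3, 4]]   # top half of A
A2 = [[5, 6], [7, 8]]   # bottom half of A
x = [1, 1]

r1 = mat_vec(A1, x)               # worker 1: [3, 7]
r2 = mat_vec(A2, x)               # worker 2: [11, 15]
r3 = mat_vec(mat_add(A1, A2), x)  # worker 3 (coded): [14, 22]

# Suppose worker 2 straggles: recover its result from workers 1 and 3.
recovered_r2 = [c - a for c, a in zip(r3, r1)]
assert recovered_r2 == r2
print(r1 + recovered_r2)  # full result A @ x -> [3, 7, 11, 15]
```

The redundancy costs one extra worker but removes the wait for the slowest machine, which is the trade-off Coded Computing formalizes.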
The first half of the book provides an introduction to general
topology, with ample space given to exercises and carefully
selected applications. The second half of the text includes topics
in asymmetric topology, a field motivated by applications in
computer science. Recurring themes include the interactions of
topology with order theory and mathematics designed to model
loss-of-resolution situations.
While cryptology uses concepts and methods from complexity theory,
research in complexity theory is in turn often motivated by
questions from cryptology. This volume highlights the close
interconnection of these two fields and provides an accessible
introduction to the fascinating area of "cryptocomplexity". The
book contains numerous figures and exercises as well as a detailed
index and bibliography. It is suitable for students of computer
science, mathematics, or engineering.