Data processing has become essential to modern civilization. The
original data for this processing comes from measurements or from
experts, and both sources are subject to uncertainty.
Traditionally, probabilistic methods have been used to process
uncertainty. However, in many practical situations, we do not know
the corresponding probabilities: in measurements, we often only
know the upper bound on the measurement errors; this is known as
interval uncertainty. In turn, expert estimates often include
imprecise (fuzzy) words from natural language such as "small"; this
is known as fuzzy uncertainty. In this book, leading specialists in
interval, fuzzy, and probabilistic uncertainty, and in combinations
of these, describe state-of-the-art developments in their research
areas.
Accordingly, the book offers a valuable guide for researchers and
practitioners interested in data processing under uncertainty, and
an introduction to the latest trends and techniques in this area,
suitable for graduate students.
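The two kinds of uncertainty described above can be made concrete in a few lines of code. The following is a minimal illustration, not taken from the book; the membership-function cutoffs for "small" are arbitrary assumptions chosen only to show the idea.

```python
def interval_from_measurement(measured, bound):
    """Interval uncertainty: only an upper bound on the measurement
    error is known, so the actual value lies in [measured - bound,
    measured + bound]."""
    return (measured - bound, measured + bound)


def small_membership(x, fully_small=0.0, not_small=10.0):
    """Fuzzy uncertainty: a degree (between 0 and 1) to which a value x
    matches the natural-language word "small".  This uses a simple
    linear (triangular-style) membership function; the cutoff values
    are illustrative assumptions, not values from the book."""
    if x <= fully_small:
        return 1.0
    if x >= not_small:
        return 0.0
    return (not_small - x) / (not_small - fully_small)
```

For example, a reading of 1.0 with error bound 0.1 yields the interval (0.9, 1.1), while `small_membership(5.0)` returns 0.5: the value is "small" only to degree one half under these assumed cutoffs.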
In many practical situations, we are interested in statistics
characterizing a population of objects: e.g. in the mean height of
people from a certain area. Most algorithms for estimating such
statistics assume that the sample values are exact. In practice,
sample values come from measurements, and measurements are never
absolutely accurate. Sometimes, we know the exact probability
distribution of the measurement inaccuracy, but often, we only know
the upper bound on this inaccuracy. In this case, we have interval
uncertainty: e.g. if the measured value is 1.0 and the inaccuracy is
bounded by 0.1, then the actual (unknown) value of the quantity can
be anywhere between 1.0 - 0.1 = 0.9 and 1.0 + 0.1 = 1.1. In other
cases, the values are expert estimates, and we only have fuzzy
information about the estimation inaccuracy. This book shows how to
compute statistics under such interval and fuzzy uncertainty. The
resulting methods are applied to computer science (optimal
scheduling of different processors), to information technology
(maintaining privacy), to computer engineering (design of computer
chips), and to data processing in geosciences, radar imaging, and
structural mechanics.
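The simplest statistic treated under interval uncertainty is the sample mean, and it can be sketched directly from the worked example above. This is a minimal illustration of the general idea, not code from the book: because the mean increases when any sample value increases, its smallest and largest possible values are obtained by taking every sample at its lower or upper endpoint, respectively.

```python
def interval_mean(intervals):
    """Given samples known only as intervals (lo, hi), return the
    interval of possible values of the sample mean.  The mean is
    monotone in each sample value, so its range runs from the mean
    of the lower endpoints to the mean of the upper endpoints."""
    n = len(intervals)
    lo = sum(l for l, _ in intervals) / n
    hi = sum(h for _, h in intervals) / n
    return (lo, hi)


# Three measurements, each 1.0 with error bound 0.1 as in the text:
# the mean can be anywhere from about 0.9 to about 1.1.
bounds = interval_mean([(0.9, 1.1), (0.9, 1.1), (0.9, 1.1)])
```

The mean is the easy case precisely because of this monotonicity; for statistics such as the variance, which is not monotone in the sample values, computing the exact range under interval uncertainty is substantially harder, which is part of what motivates the algorithms the book develops.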