A new and refreshingly different approach to presenting the
foundations of statistical algorithms, Foundations of Statistical
Algorithms: With References to R Packages reviews the historical
development of basic algorithms to illuminate the evolution of
today's more powerful statistical algorithms. It emphasizes
recurring themes in all statistical algorithms, including
computation, assessment and verification, iteration, intuition,
randomness, repetition and parallelization, and scalability. Unique
in scope, the book reviews the upcoming challenge of scaling many
of the established techniques to very large data sets and delves
into systematic verification by demonstrating how to derive general
classes of worst-case inputs and emphasizing the importance of
testing over a large number of different inputs. Broadly
accessible, the book offers examples, exercises, and selected
solutions in each chapter as well as access to a supplementary
website. After working through the material covered in the book,
readers should not only understand current algorithms but also gain
a deeper understanding of how algorithms are constructed, how to
evaluate new algorithms, which recurring principles are used to
tackle some of the tough problems statistical programmers face, and
how to take an idea for a new method and turn it into something
practically useful.

This open access book provides a wealth of hands-on examples that
illustrate how hyperparameter tuning can be applied in practice and
gives deep insights into the working mechanisms of machine learning
(ML) and deep learning (DL) methods. The aim of the book is to
equip readers with the ability to achieve better results with
significantly less time, cost, effort, and resources using the
methods described here. The case studies presented in this book can
be run on a regular desktop or notebook computer. No
high-performance computing facilities are required. The idea for
the book originated in a study conducted by Bartz & Bartz GmbH
for the Federal Statistical Office of Germany (Destatis). Building
on that study, the book is addressed to practitioners in industry
as well as researchers, teachers and students in academia. The
content focuses on the hyperparameter tuning of ML and DL
algorithms, and is divided into two main parts: theory (Part I) and
application (Part II). Essential topics covered include: a survey
of important model parameters; four parameter tuning studies and
one extensive global parameter tuning study; statistical analysis
of the performance of ML and DL methods based on severity; and a
new, consensus-ranking-based way to aggregate and analyze results
from multiple algorithms. The book presents analyses of more than
30 hyperparameters from six relevant ML and DL methods, and
provides source code so that users can reproduce the results.
Accordingly, it serves as a handbook and textbook alike.