Validity, Reliability, and Significance - Empirical Methods for NLP and Data Science (Paperback)

Stefan Riezler, Michael Hagmann

Series: Synthesis Lectures on Human Language Technologies

Loot Price R1,640 Discovery Miles 16 400 | Repayment Terms: R154 pm x 12*

Expected to ship within 10 - 15 working days

Empirical methods are means of answering methodological questions of the empirical sciences with statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of machine learning, these correspond, respectively, to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance. The goal of this book is to answer these questions with concrete statistical tests that can be applied to assess the validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science.

Our focus is on model-based empirical methods, where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests, such as a validity test that detects circular features which circumvent learning. Furthermore, the book discusses a reliability coefficient based on variance decomposition over the random effect parameters of LMEMs. Last, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further to facilitate a refined system comparison conditional on properties of the input data.

This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science. It is self-contained, with an appendix on the mathematical background of GAMs and LMEMs, and with an accompanying webpage including R code to replicate the experiments presented in the book.
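To make the significance-testing idea above concrete, here is a minimal R sketch (not the book's accompanying code) of a likelihood-ratio test between nested LMEMs fitted to per-item performance scores, using the lme4 package; the data frame scores and its columns score, system, item, and seed are hypothetical placeholders standing in for two systems evaluated on the same test items under varying meta-parameter settings.

library(lme4)

# Null model: performance varies across test items and meta-parameter
# settings (random seeds), but not between the two systems.
m0 <- lmer(score ~ 1 + (1 | item) + (1 | seed), data = scores, REML = FALSE)

# Alternative model: adds a fixed effect for which system produced the score.
m1 <- lmer(score ~ system + (1 | item) + (1 | seed), data = scores, REML = FALSE)

# Likelihood-ratio test of the nested models: is the performance difference
# between the systems larger than the chance variation captured by the
# random effects for items and meta-parameter settings?
anova(m0, m1)

# Variance decomposition over the random effects, the ingredient of the
# LMEM-based reliability coefficient discussed in the book.
VarCorr(m1)

The models are fitted with maximum likelihood (REML = FALSE) because the likelihood-ratio comparison concerns a difference in fixed effects; the random effects are what lets variation across items and meta-parameter settings enter the hypothesis test.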

General

Imprint: Springer International Publishing AG
Country of origin: Switzerland
Series: Synthesis Lectures on Human Language Technologies
Release date: December 2021
First published: 2022
Authors: Stefan Riezler • Michael Hagmann
Dimensions: 235 x 191mm (L x W)
Format: Paperback
Pages: 147
ISBN-13: 978-3-031-01055-2
Languages: English
Subtitles: English
Categories: Books > Language & Literature > Language & linguistics > Computational linguistics
Books > Computing & IT > Applications of computing > Artificial intelligence > Natural language & machine translation
LSN: 3-031-01055-8
Barcode: 9783031010552
