Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book takes a problem-solution approach to explaining machine learning models and their algorithms.

The book starts with model interpretation for linear models in supervised learning, covering feature importance, partial dependency analysis, and influential data point analysis for both classification and regression models. Next, it explains supervised learning with non-linear models, using state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time series models is covered with LIME and SHAP, as are natural language processing tasks such as text classification and sentiment analysis with ELI5 and ALIBI. The book concludes with complex classification and regression models, such as neural networks and deep learning models, using the CAPTUM framework to show feature attribution, neuron attribution, and activation attribution.

After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses.

What You Will Learn
- Create code snippets and explain machine learning models using Python
- Leverage deep learning models using the latest code with agile implementations
- Build, train, and explain neural network models designed to scale
- Understand the different variants of neural network models

Who This Book Is For
AI engineers, data scientists, and software developers interested in XAI
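To make the local-interpretation workflow mentioned in this description concrete, here is a minimal sketch using SHAP with a tree-based classifier. The dataset and XGBoost model below are illustrative assumptions, not examples taken from the book.

```python
# A minimal sketch of local explanation with SHAP, assuming a tree-based
# classifier trained on a tabular dataset (illustrative data and model only).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)        # explainer specialized for tree models
shap_values = explainer.shap_values(X)       # per-feature attribution for every row
print(dict(zip(X.columns, shap_values[0])))  # local explanation for the first sample
```

Each value indicates how much that feature pushed the first prediction away from the model's baseline output, which is the kind of local attribution the blurb refers to.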
Learn the ins and outs of decisions, biases, and reliability of AI algorithms, and how to make sense of their predictions. This book explores so-called black-box models to boost the adaptability, interpretability, and explainability of the decisions made by AI algorithms, using frameworks such as Python XAI libraries, TensorFlow 2.0+, Keras, and custom frameworks built with Python wrappers.

You'll begin with an introduction to the basics of model explainability and interpretability, ethical considerations, and biases in predictions generated by AI models. Next, you'll look at methods and systems to interpret the linear, non-linear, and time-series models used in AI. The book also covers topics ranging from interpreting to understanding how an AI algorithm makes a decision. Further, you will learn about the most complex ensemble models and their explainability and interpretability using frameworks such as LIME, SHAP, Skater, and ELI5. Moving forward, you will be introduced to model explainability for unstructured data, classification problems, and natural language processing tasks. Additionally, the book looks at counterfactual explanations for AI models. Practical Explainable AI Using Python shines a light on deep learning models, rule-based expert systems, and computer vision tasks using various XAI frameworks.

What You'll Learn
- Review the different ways of making an AI model interpretable and explainable
- Examine the bias and ethical practices of AI models
- Quantify, visualize, and estimate the reliability of AI models
- Design frameworks to unbox black-box models
- Assess the fairness of AI models
- Understand the building blocks of trust in AI models
- Increase the level of AI adoption

Who This Book Is For
AI engineers, data scientists, and software developers involved in driving AI projects and products.
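As a small illustration of the kind of per-prediction explanation produced by the frameworks listed above, here is a sketch using LIME on a tabular classifier. The random forest and the iris dataset are illustrative assumptions, not material from the book.

```python
# A minimal sketch of a local LIME explanation for a tabular classifier;
# the model and dataset here are illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain a single prediction: which feature ranges pushed it toward its class?
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())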
Learn how to use PyTorch to build neural network models, with code snippets updated for this second edition. This book includes new chapters covering topics such as distributed PyTorch modeling, deploying PyTorch models in production, and developments around PyTorch, with updated code.

You'll start by learning how to use tensors to develop and fine-tune neural network models and implement deep learning models such as LSTMs and RNNs. Next, you'll explore probability distribution concepts using PyTorch, as well as supervised and unsupervised algorithms with PyTorch. This is followed by a deep dive into building models with convolutional neural networks, deep neural networks, and recurrent neural networks using PyTorch. This new edition also covers topics such as Scorch, a module compatible with the scikit-learn machine learning library; model quantization to reduce parameter size; and preparing a model for deployment within a production system. Distributed parallel processing for balancing PyTorch workloads, using PyTorch for image processing, audio analysis, and model interpretation are also covered in detail. Each chapter includes recipe code snippets to perform specific activities. By the end of this book, you will be able to confidently build neural network models using PyTorch.

What You Will Learn
- Utilize new code snippets and models to train machine learning models using PyTorch
- Train deep learning models with fewer and smarter implementations
- Explore the PyTorch framework for model explainability and to bring transparency to model interpretation
- Build, train, and deploy neural network models designed to scale with PyTorch
- Understand best practices for evaluating and fine-tuning models using PyTorch
- Use advanced torch features in training deep neural networks
- Explore various neural network models using PyTorch
- Discover functions compatible with scikit-learn models
- Perform distributed PyTorch training and execution

Who This Book Is For
Machine learning engineers, data scientists, Python programmers, and software developers interested in learning the PyTorch framework.
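For readers unfamiliar with the tensor and training workflow this description refers to, here is a minimal PyTorch sketch. The toy data, architecture, and hyperparameters are arbitrary assumptions for illustration, not recipes from the book.

```python
# A minimal sketch of defining and training a small PyTorch network
# on toy data (all values here are illustrative assumptions).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(64, 10)           # toy batch of 64 samples, 10 features each
y = torch.randint(0, 2, (64,))    # toy binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass and loss
    loss.backward()               # backpropagate gradients
    optimizer.step()              # update parameters
```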