This book examines the capabilities of new machine learning models for predicting ore grade in mining engineering, drawing on a variety of case studies. A key motivation for preparing the book was the absence of robust models for estimating ore grade. Because the models presented have strong general estimation capabilities, they can also be applied in other sciences. Mining engineers can use the book to determine ore grade accurately, identify mineral-rich regions for exploration and exploitation, and reduce exploration costs. The author discusses new concepts in mining engineering, such as uncertainty in ore grade modeling, and presents ensemble models for estimating ore grade. Readers learn how to construct advanced machine learning models for this task, with advanced and hybrid models offered in place of classic methods such as kriging. The book can serve as a comprehensive handbook for estimating ore grades, useful to industrial managers and modelers alike. Each stage of ore grade modeling is explained, and advanced optimizers for training machine learning models are presented, so the book is also relevant to modelers in other fields. Its main aim is to address previous shortcomings in the ore grade modeling process; its scope includes mining engineering, soft computing models, and artificial intelligence.
This book describes the technical problems and solutions for automatically recognizing and parsing a medical image into multiple objects, structures, or anatomies. It covers all the key methods, including state-of-the-art approaches based on machine learning, for recognizing or detecting, and parsing or segmenting, a cohort of anatomical structures in a medical image. Written by top experts in medical imaging, this book is ideal for university researchers and industry practitioners in medical imaging who want a complete reference on key methods, algorithms and applications in medical image recognition, segmentation and parsing of multiple objects. Learn:
* Research challenges and problems in medical image recognition, segmentation and parsing of multiple objects
* Methods and theories for medical image recognition, segmentation and parsing of multiple objects
* Efficient and effective machine learning solutions based on big datasets
* Selected applications of medical image parsing using proven algorithms
* Explores different dimensions of computational intelligence applications and illustrates their use in solving assorted real-world biomedical and healthcare problems
* Provides guidance in developing intelligence-based diagnostic systems, efficient models and cost-effective machines
* Provides the latest research findings, solutions to issues of concern and relevant theoretical frameworks in the area of machine learning and deep learning for healthcare systems
* Describes experiences and findings relating to protocol design, prototyping, experimental evaluation, real test-beds, and empirical characterization of security and privacy interoperability issues in healthcare applications
* Explores and illustrates the current and future impacts of pandemics and mitigates risk in healthcare with advanced analytics
With the evolutionary advancement of machine learning (ML) algorithms, a rapid increase in data volumes and a significant improvement in computational power, machine learning has become popular across many applications. However, because of the "black-box" nature of ML methods, their outputs still need to be interpreted in order to link human and machine learning for transparency and user acceptance of the delivered solutions. This edited book addresses such links from the perspectives of visualisation, explanation, trustworthiness and transparency. It establishes the link between human and machine learning by exploring transparency in machine learning, visual explanation of ML processes, algorithmic explanation of ML models, human cognitive responses in ML-based decision making, human evaluation of machine learning and domain knowledge in transparent ML applications. This is the first book of its kind to systematically survey the current research activities and outcomes related to human and machine learning. It will not only inspire researchers to develop new human-centred ML algorithms, advancing ML as a whole, but also help ML practitioners use ML outputs proactively for informed and trustworthy decision making. The book is intended for researchers and practitioners involved with machine learning and its applications, and will especially benefit researchers in areas such as artificial intelligence, decision support systems and human-computer interaction.
The QL&SC 2012 is a major symposium for scientists and practitioners from around the world to present their latest research, results, ideas, developments and applications in areas such as quantitative logic, many-valued logic, fuzzy logic, quantification of software, artificial intelligence, fuzzy sets and systems and soft computing. This invaluable book provides a broad introduction to fuzzy reasoning and soft computing. One should not go too far in approximation and optimization; a certain degree must be kept in mind. This is the essential idea of quantitative logic and soft computing. The explanations in the book are complete, providing the background material needed to go further into the subject and explore the research literature. It is suitable reading for graduate students, and it provides a platform for exchanges among top experts and scholars around the world in this field.
A fundamental assumption of work in artificial intelligence and machine learning is that knowledge is expressed in a computer with the help of knowledge representations. Since the proper choice of such representations is a difficult task that fundamentally affects the capabilities of a system, the problem of automatic representation change is an important topic in current research. Concept Formation and Knowledge Revision focuses on representation change as a concept formation task, regarding concepts as the elementary representational vocabulary from which further statements are constructed. Taking an interdisciplinary approach from psychological foundations to computer implementations, the book draws on existing psychological results about the nature of human concepts and concept formation to determine the scope of concept formation phenomena, and to identify potential components of computational concept formation models. The central idea of this work is that computational concept formation can usefully be understood as a process that is triggered in a demand-driven fashion by the representational needs of the learning system, and it identifies the knowledge revision activities of a system as a particular context for such a process. The book presents a detailed analysis of the revision problem for first-order clausal theories, and develops a set of postulates that any such operation should satisfy. It shows how a minimum theory revision operator can be realized by using exception sets, and that this operator is indeed maximally general. The book then shows that concept formation can be triggered from within the knowledge revision process whenever the existing representation does not permit the plausible reformulation of an exception set, demonstrating the usefulness of the approach both theoretically and empirically within the learning and knowledge acquisition system MOBAL.
In using a first-order representation, this book is part of the rapidly developing field of Inductive Logic Programming (ILP). By integrating the computational issues with psychological and fundamental discussions of concept formation phenomena, the book will be of interest to readers both theoretically and psychologically inclined. From the foreword by Katharina Morik: The ideal of combining the three sources of artificial intelligence research has almost never been reached. Such combined and integrated research requires the researcher to master different ways of thinking, different work styles, different sets of literature, and different research procedures. It requires capabilities in software engineering for the application part, in theoretical computer science for the theory part, and in psychology for the cognitive part. The most important capability for artificial intelligence is to keep the integrative view and to create a true original work that goes beyond the collection of pieces from different fields. This book achieves such an integrative view of concept formation and knowledge revision by presenting the way from psychological investigations, which indicate that concepts are theories and point at the important role of a demand for learning, to an implemented system which supports users in their tasks when working with a knowledge base, and its theoretical foundation.
Regularization, Optimization, Kernels, and Support Vector Machines offers a snapshot of the current state of the art of large-scale machine learning, providing a single multidisciplinary source for the latest research and advances in regularization, sparsity, compressed sensing, convex and large-scale optimization, kernel methods, and support vector machines. Consisting of 21 chapters authored by leading researchers in machine learning, this comprehensive reference:
* Covers the relationship between support vector machines (SVMs) and the Lasso
* Discusses multi-layer SVMs
* Explores nonparametric feature selection, basis pursuit methods, and robust compressive sensing
* Describes graph-based regularization methods for single- and multi-task learning
* Considers regularized methods for dictionary learning and portfolio selection
* Addresses non-negative matrix factorization
* Examines low-rank matrix and tensor-based models
* Presents advanced kernel methods for batch and online machine learning, system identification, domain adaptation, and image processing
* Tackles large-scale algorithms including conditional gradient methods, (non-convex) proximal techniques, and stochastic gradient descent
Regularization, Optimization, Kernels, and Support Vector Machines is ideal for researchers in machine learning, pattern recognition, data mining, signal processing, statistical learning, and related areas.
This book explains the complete loop for effectively using self-tracking data for machine learning. While it focuses on self-tracking data, the techniques explained are also applicable to sensory data in general, making it useful for a wider audience. Discussing concepts drawn from state-of-the-art scientific literature, it illustrates the approaches using a case study of a rich self-tracking data set. Self-tracking has become part of the modern lifestyle, and the amount of data generated by these devices is so overwhelming that it is difficult to obtain useful insights from it. Luckily, the domain of artificial intelligence offers techniques that can help: machine-learning approaches allow this type of data to be analyzed. While there are ample books that explain machine-learning techniques, self-tracking data comes with its own difficulties that require dedicated techniques, such as learning over time and across users.
This thesis represents one of the most comprehensive and in-depth studies of the use of Lorentz-boosted hadronic final state systems in the search for signals of Supersymmetry conducted to date at the Large Hadron Collider. A thorough assessment is performed of the observables that provide enhanced sensitivity to new physics signals otherwise hidden under an enormous background of top quark pairs produced by Standard Model processes. This is complemented by an ingenious analysis optimization procedure that allowed for extending the reach of this analysis by hundreds of GeV in mass of these hypothetical new particles. Lastly, the combination of both deep, thoughtful physics analysis with the development of high-speed electronics for identifying and selecting these same objects is not only unique, but also revolutionary. The Global Feature Extraction system that the author played a critical role in bringing to fruition represents the first dedicated hardware device for selecting these Lorentz-boosted hadronic systems in real-time using state-of-the-art processing chips and embedded systems.
Turning text into valuable information is essential for businesses looking to gain a competitive advantage. With recent improvements in natural language processing (NLP), users now have many options for solving complex challenges. But it's not always clear which NLP tools or libraries would work for a business's needs, or which techniques you should use and in what order. This practical book provides data scientists and developers with blueprints for best practice solutions to common tasks in text analytics and natural language processing. Authors Jens Albrecht, Sidharth Ramachandran, and Christian Winkler provide real-world case studies and detailed code examples in Python to help you get started quickly.
* Extract data from APIs and web pages
* Prepare textual data for statistical analysis and machine learning
* Use machine learning for classification, topic modeling, and summarization
* Explain AI models and classification results
* Explore and visualize semantic similarities with word embeddings
* Identify customer sentiment in product reviews
* Create a knowledge graph based on named entities and their relations
An intelligent agent interacting with the real world will encounter individual people, courses, test results, drug prescriptions, chairs, boxes, etc., and needs to reason about properties of these individuals and relations among them, as well as cope with uncertainty. Uncertainty has been studied in probability theory and graphical models, and relations have been studied in logic, in particular in the predicate calculus and its extensions. This book examines the foundations of combining logic and probability into what are called relational probabilistic models. It introduces representations, inference, and learning techniques for probability, logic, and their combinations. The book focuses on two representations in detail: Markov logic networks, a relational extension of undirected graphical models and weighted first-order predicate calculus formulas, and ProbLog, a probabilistic extension of logic programs that can also be viewed as a Turing-complete relational extension of Bayesian networks.
This comprehensive and timely book, New Age Analytics: Transforming the Internet through Machine Learning, IoT, and Trust Modeling, explores the importance of tools and techniques used in machine learning, big data mining, and more. The book explains how advancements in the world of the web have been achieved and how the experiences of users can be analyzed. It looks at data gathering by various electronic means and explores techniques for analysis and management: how to manage voluminous data, user responses, and more. This volume provides an abundance of valuable information for professionals and researchers working in the fields of business analytics, big data, social network data, computer science, analytical engineering, and forensic analysis. Moreover, the book provides insights and support from both practitioners and academia in order to highlight the most debated aspects of the field.
Deep learning on graphs has become one of the hottest topics in machine learning. The book consists of four parts to best accommodate our readers with diverse backgrounds and purposes of reading. Part 1 introduces basic concepts of graphs and deep learning; Part 2 discusses the most established methods from the basic to advanced settings; Part 3 presents the most typical applications including natural language processing, computer vision, data mining, biochemistry and healthcare; and Part 4 describes advances of methods and applications that tend to be important and promising for future research. The book is self-contained, making it accessible to a broader range of readers including (1) senior undergraduate and graduate students; (2) practitioners and project managers who want to adopt graph neural networks into their products and platforms; and (3) researchers without a computer science background who want to use graph neural networks to advance their disciplines.
This book presents the tools used in machine learning (ML) and the benefits of using such tools in facilities. It focuses on real-life business applications, explaining the most popular algorithms simply and clearly without the use of calculus or matrix/vector algebra. Replete with case studies, this book provides a working knowledge of ML's current and future capabilities and the impact it will have on every business. It demonstrates that it is possible to carry out successful ML and AI projects in any manufacturing plant, even without fully satisfying the five Vs (Volume, Velocity, Variety, Veracity and Value) usually associated with big data. The book takes a closer look at how AI and ML can also work in industrial settings, and at how you can adapt some of the standard big-data tips and techniques to your own needs in your SME. Organizations which first understand these tools and know how to use them will benefit at the expense of their rivals.
This unique compendium gives an updated presentation of clustering, one of the most challenging tasks in machine learning. The book provides a unitary presentation of classical and contemporary algorithms ranging from partitional and hierarchical clustering up to density-based clustering, clustering of categorical data, and spectral clustering. Most of the mathematical background is provided in appendices, highlighting algebraic and complexity theory, in order to make this volume as self-contained as possible. A substantial number of exercises and supplements make this a useful reference textbook for researchers and students.
This important textbook introduces the concept of intrusion detection, discusses various approaches for intrusion detection systems (IDS), and presents the architecture and implementation of IDS. It emphasizes the prediction and learning algorithms for intrusion detection, and highlights techniques for intrusion detection in wired computer networks and wireless sensor networks. A performance comparison of various IDS via simulation is also included.
Numerical simulation models are used in all engineering disciplines for modeling physical phenomena in order to learn how the phenomena work, identify problems, and optimize behavior. Smart Proxy Models provide an opportunity to replicate numerical simulations with very high accuracy and can be run on a laptop within a few minutes, thereby simplifying the use of complex numerical simulations, which can otherwise take tens of hours. This book focuses on Smart Proxy Modeling and provides readers with all the essential details on how to develop Smart Proxy Models using Artificial Intelligence and Machine Learning, as well as how they may be used in real-world cases.
* Covers replication of highly accurate numerical simulations using Artificial Intelligence and Machine Learning
* Details application in reservoir simulation and modeling and computational fluid dynamics
* Includes real case studies based on commercially available simulators
Smart Proxy Modeling is ideal for petroleum, chemical, environmental, and mechanical engineers, as well as statisticians and others working with applications of data-driven analytics.
Learn practical and modern experimental methods used by engineers in technology and trading. Experimentation for Engineers: From A/B testing to Bayesian optimization is a toolbox of methods for optimizing machine learning systems, quantitative trading strategies, and more. You'll start with a deep dive into A/B testing, and then graduate to advanced methods used to improve performance in highly competitive industries like finance and social media. The experimentation skills you'll master in this unique, practical guide will quickly reveal which approaches and features deliver real results for your business. In Experimentation for Engineers, you'll learn how to evaluate the changes you make to your system and ensure that your experiments don't undermine revenue or other business metrics. By the time you're done, you'll be able to seamlessly deploy changes to production while avoiding common pitfalls. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
In this work, the interaction between the Higgs boson and the top quark is studied in proton-proton collisions at 13 TeV provided by the LHC, recorded with the CMS detector at CERN (Geneva). At the LHC, these particles are produced simultaneously via the associated production of the Higgs boson with one top quark (tH process) or two top quarks (ttH process). Compared to many other possible outcomes of proton-proton interactions, these processes are very rare, as the top quark and the Higgs boson are the heaviest elementary particles known. Hence, identifying them constitutes a significant experimental challenge, and a high particle selection efficiency in the CMS detector is crucial. At the core of this selection stands the Level-1 (L1) trigger system, which filters collision events to retain only those of potential interest for physics analysis. The selection of hadronically decaying tau leptons, expected from the Higgs boson decays, is especially demanding due to the large background arising from QCD interactions. The first part of this thesis presents the optimization of the L1 algorithm in Run 2 (2016-2018) and Run 3 (2022-2024) of the LHC. It includes the development of a novel trigger concept for the High-Luminosity LHC, foreseen to start in 2027 and to deliver 5 times the current instantaneous luminosity. To this end, sophisticated algorithms based on machine learning approaches are used, facilitated by the increasingly modern technology and powerful computation of the trigger system. The second part of the work presents the search for the tH and ttH processes with the subsequent decays of the Higgs boson to pairs of tau leptons, W bosons or Z bosons, using the data recorded during Run 2.
The presence of multiple particles in the final state, along with the low cross section of the processes, makes the search an ideal use case for multivariate discriminants that enhance the selectivity of the signals and reject the overwhelming background contributions. The discriminants presented are built using state-of-the-art machine learning techniques, able to capture the correlations amongst the processes involved, as well as the so-called Matrix Element Method (MEM), which combines the theoretical description of the processes with the detector resolution effects. The level of sophistication of the methods used, along with the unprecedented amount of collision data analyzed, results in the most stringent measurements of the tH and ttH cross sections to date.
Increasingly, crimes and fraud are digital in nature, occurring at breakneck speed and encompassing large volumes of data. To combat this unlawful activity, knowledge about the use of machine learning technology and software is critical. Machine Learning Forensics for Law Enforcement, Security, and Intelligence integrates an assortment of deductive and instructive tools, techniques, and technologies to arm professionals with what they need to be prepared and stay ahead of the game. Step-by-step instructions: The book is a practical guide on how to conduct forensic investigations using self-organizing clustering map (SOM) neural networks, text extraction, and rule-generating software to "interrogate the evidence." This powerful approach is indispensable for fraud detection, cybersecurity, competitive counterintelligence, and corporate and litigation investigations. The book also provides step-by-step instructions on how to construct adaptive criminal and fraud detection systems for organizations. Prediction is the key: Internet activity, email, and wireless communications can be captured, modeled, and deployed in order to anticipate potential cyber attacks and other types of crimes. The successful prediction of human reactions and server actions by quantifying their behaviors is invaluable for pre-empting criminal activity. This volume assists chief information officers, law enforcement personnel, legal and IT professionals, investigators, and competitive intelligence analysts in the strategic planning needed to recognize the patterns of criminal activities in order to predict when and where crimes and intrusions are likely to take place.
Humans learn best from feedback: we are encouraged to take actions that lead to positive results and deterred by decisions with negative consequences. This reinforcement process can be applied to computer programs, allowing them to solve complex problems that classical programming cannot. Deep Reinforcement Learning in Action teaches you the fundamental concepts and terminology of deep reinforcement learning, along with the practical skills and techniques you'll need to implement it in your own projects. Key features:
* Structuring problems as Markov Decision Processes
* Popular algorithms such as Deep Q-Networks, Policy Gradient methods and Evolutionary Algorithms, and the intuitions that drive them
* Applying reinforcement learning algorithms to real-world problems
Audience: You'll need intermediate Python skills and a basic understanding of deep learning. About the technology: Deep reinforcement learning is a form of machine learning in which AI agents learn optimal behavior from their own raw sensory input. The system perceives the environment, interprets the results of its past decisions, and uses this information to optimize its behavior for maximum long-term return. Deep reinforcement learning famously contributed to the success of AlphaGo, but that's not all it can do! Alexander Zai is a Machine Learning Engineer at Amazon AI, working on MXNet, which powers a suite of AWS machine learning products. Brandon Brown is a Machine Learning and Data Analysis blogger at outlace.com, committed to providing clear teaching on difficult topics for newcomers.
What do financial data prediction, day-trading rule development, and bio-marker selection have in common? They are just a few of the tasks that could potentially be resolved with genetic programming and machine learning techniques. Written by leaders in this field, Applied Genetic Programming and Machine Learning delineates the extension of Genetic Programming (GP) for practical applications. Reflecting rapidly developing concepts and emerging paradigms, this book outlines how to use machine learning techniques, make learning operators that efficiently sample a search space, navigate the search process through the design of objective fitness functions, and examine the search performance of the evolutionary system. It provides a methodology for integrating GP and machine learning techniques, establishing a robust evolutionary framework for addressing tasks from areas such as chaotic time-series prediction, system identification, financial forecasting, classification, and data mining. The book provides a starting point for research on extended GP frameworks with the integration of several machine learning schemes. Drawing on empirical studies taken from fields such as system identification, financial engineering, and bio-informatics, it demonstrates how the proposed methodology can be useful in practical inductive problem solving.
Build predictive models from time-based patterns in your data. Master statistical models, including new deep learning approaches, for time series forecasting. In Time Series Forecasting in Python you will learn how to:
* Recognize a time series forecasting problem and build a performant predictive model
* Create univariate forecasting models that account for seasonal effects and external variables
* Build multivariate forecasting models to predict many time series at once
* Leverage large datasets by using deep learning for forecasting time series
* Automate the forecasting process
DESCRIPTION: Time Series Forecasting in Python teaches you to build powerful predictive models from time-based data. Every model you create is relevant, useful, and easy to implement with Python. You'll explore interesting real-world datasets like Google's daily stock price and economic data for the USA, quickly progressing from the basics to developing large-scale models that use deep learning tools like TensorFlow. You'll learn both traditional statistical and new deep learning models for time series forecasting, all fully illustrated with Python source code. about the technology: Time series forecasting reveals hidden trends and makes predictions about the future from your data. This powerful technique has proven incredibly valuable across multiple fields, from tracking business metrics to healthcare and the sciences. Modern Python libraries and powerful deep learning tools have opened up new methods and utilities for making practical time series forecasts. about the book: Time Series Forecasting in Python teaches you to apply time series forecasting and get immediate, meaningful predictions. Test your skills with hands-on projects for forecasting air travel, volume of drug prescriptions, and the earnings of Johnson & Johnson. By the time you're done, you'll be ready to build accurate and insightful forecasting models with tools from the Python ecosystem.
Human and Machine Hearing is the first book to comprehensively describe how human hearing works and how to build machines to analyze sounds in the same way that people do. Drawing on over thirty-five years of experience in analyzing hearing and building systems, Richard F. Lyon explains how we can now build machines with close-to-human abilities in speech, music, and other sound-understanding domains. He explains human hearing in terms of engineering concepts, and describes how to incorporate those concepts into machines for a wide range of modern applications. The details of this approach are presented at an accessible level, to bring a diverse range of readers, from neuroscience to engineering, to a common technical understanding. The description of hearing as signal-processing algorithms is supported by corresponding open-source code, for which the book serves as motivating documentation.
You may like...
* Dodge Manufacturing Company - Power… by Dodge Manufacturing Company (Hardcover), R1,040, Discovery Miles 10 400
* Hidden Link Prediction in Stochastic… by Babita Pandey, Aditya Khamparia (Hardcover), R5,251, Discovery Miles 52 510
* Advances in Statistical Control… by Chang-Hee Won, Cheryl B. Schrader, … (Hardcover), R3,093, Discovery Miles 30 930