Welcome to Loot.co.za!
Self-driving cars, natural language recognition, and online recommendation engines are all possible thanks to machine learning. Now you can create your own genetic algorithms, nature-inspired swarms, Monte Carlo simulations, cellular automata, and clusters. Learn how to test your ML code and dive into even more advanced topics. If you are a beginner-to-intermediate programmer keen to understand machine learning, this book is for you.

Discover machine learning algorithms using a handful of self-contained recipes. Build a repertoire of algorithms, discovering terms and approaches that apply generally. Bake intelligence into your algorithms, guiding them to discover good solutions to problems. In this book, you will: use heuristics and design fitness functions; build genetic algorithms; make nature-inspired swarms with ants, bees and particles; create Monte Carlo simulations; investigate cellular automata; find minima and maxima using hill climbing and simulated annealing; try selection methods, including tournament and roulette wheels; learn about heuristics, fitness functions, metrics, and clusters; and test your code and get inspired to try new problems. Work through scenarios to code your way out of a paper bag, an important skill for any competent programmer. See how the algorithms explore and learn by creating visualizations of each problem. Get inspired to design your own machine learning projects and become familiar with the jargon.

What you need: code in C++ (>= C++11), Python (2.x or 3.x) and JavaScript (using the HTML5 canvas). The book also uses matplotlib and some open source libraries, including SFML, Catch and Cosmic-Ray. These plotting and testing libraries are not required, but their use will give you a fuller experience. Armed with just a text editor and a compiler/interpreter for your language of choice, you can still code along from the general algorithm descriptions.
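The hill climbing and simulated annealing topics mentioned above can be illustrated with a minimal Python sketch (illustrative only, not code from the book; the objective function, step size and cooling schedule are arbitrary choices):

```python
import math
import random

def simulated_annealing(f, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Minimise f starting from x0, occasionally accepting worse moves
    with a temperature-dependent probability to escape local minima."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Always accept improvements; accept worse moves with prob e^(-delta/t)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

random.seed(0)
x, fx = simulated_annealing(lambda v: (v - 2.0) ** 2 + 1.0, x0=10.0)
print(round(x, 2), round(fx, 2))
```

At high temperature the walk explores freely; as the temperature decays it behaves more and more like plain hill climbing.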
Machine Intelligence 13 ushers in an exciting new phase of artificial intelligence research, one in which machine learning has emerged as a hot-bed of new theory, as a practical tool in engineering disciplines, and as a source of material for cognitive models of the human brain. Based on the Machine Intelligence Workshop of 1992, held at Strathclyde University in Scotland, the book brings together numerous papers from some of the field's leading researchers to discuss current theoretical and practical issues. Highlights include a chapter by J.A. Robinson--the founder of modern computational logic--on the field's great forefathers John von Neumann and Alan Turing, and a chapter by Stephen Muggleton that analyzes Turing's legacy in logic and machine learning. This thirteenth volume in the renowned Machine Intelligence series remains the best source of information for the latest developments in the field. All students and researchers in artificial intelligence and machine learning will want to own a copy.
A comprehensive guide to learning technologies that unlock the value in big data Cognitive Computing provides detailed guidance toward building a new class of systems that learn from experience and derive insights to unlock the value of big data. This book helps technologists understand cognitive computing's underlying technologies, from knowledge representation techniques and natural language processing algorithms to dynamic learning approaches based on accumulated evidence, rather than reprogramming. Detailed case examples from the financial, healthcare, and manufacturing industries walk readers step-by-step through the design and testing of cognitive systems, and expert perspectives are included from organizations such as Cleveland Clinic and Memorial Sloan-Kettering, as well as from commercial vendors that are creating solutions. These organizations provide insight into the real-world implementation of cognitive computing systems. The IBM Watson cognitive computing platform is described in a detailed chapter because of its significance in helping to define this emerging market. In addition, the book includes implementations of emerging projects from Qualcomm, Hitachi, Google and Amazon. Today's cognitive computing solutions build on established concepts from artificial intelligence, natural language processing, and ontologies, and leverage advances in big data management and analytics. They foreshadow an intelligent infrastructure that enables a new generation of customer and context-aware smart applications in all industries. Cognitive Computing is a comprehensive guide to the subject, providing both the theoretical and practical guidance technologists need.
* Discover how cognitive computing evolved from promise to reality
* Learn the elements that make up a cognitive computing system
* Understand the groundbreaking hardware and software technologies behind cognitive computing
* Learn to evaluate your own application portfolio to find the best candidates for pilot projects
* Leverage cognitive computing capabilities to transform the organization

Cognitive systems are rightly being hailed as the new era of computing. Learn how these technologies enable emerging firms to compete with entrenched giants, and forward-thinking established firms to disrupt their industries. Professionals who currently work with big data and analytics will see how cognitive computing builds on their foundation and creates new opportunities. Cognitive Computing provides complete guidance to this new level of human-machine interaction.
This book emphasizes various image shape feature extraction methods which are necessary for image shape recognition and classification. Focussing on a shape feature extraction technique used in content-based image retrieval (CBIR), it explains different applications of image shape features in the field of content-based image retrieval. Showcasing useful applications and illustrating examples in many interdisciplinary fields, the present book is aimed at researchers and graduate students in electrical engineering, data science, computer science, medicine, and machine learning including medical physics and information technology.
Leading technology firms and research institutions are continuously exploring new techniques in artificial intelligence and machine learning. As such, deep learning has now been recognized in various real-world applications such as computer vision, image processing, biometrics, pattern recognition, and medical imaging. The deep learning approach has opened new opportunities that can make such real-life applications and tasks easier and more efficient. The Handbook of Research on Deep Learning Innovations and Trends is an essential scholarly resource that presents current trends and the latest research on deep learning and explores the concepts, algorithms, and techniques of data mining and analysis. Highlighting topics such as computer vision, encryption systems, and biometrics, this book is ideal for researchers, practitioners, industry professionals, students, and academicians.
This textbook describes the hands-on application of data science techniques to solve problems in manufacturing and the Industrial Internet of Things (IIoT). Monitoring and managing operational performance is a crucial activity for industrial and business organisations. The emergence of low-cost, accessible computing and storage, through Industrial Digital Technologies (IDT) and Industry 4.0, has generated considerable interest in innovative approaches to doing more with data. Data science, predictive analytics, machine learning, artificial intelligence and general approaches to modelling, simulating and visualising industrial systems have often been considered topics only for research labs and academic departments. This textbook debunks the mystique around applied data science and shows readers, using tutorial-style explanations and real-life case studies, how practitioners can develop their own understanding of performance to achieve tangible business improvements. All exercises can be completed with commonly available tools, many of which are free to install and use. Readers will learn how to use tools to investigate, diagnose, propose and implement analytics solutions that will provide explainable results to deliver digital transformation.
Reinforcement learning has developed as a successful learning approach for domains that are not fully understood and that are too complex to be described in closed form. However, reinforcement learning does not scale well to large and continuous problems. Furthermore, knowledge acquired in one environment cannot be transferred to new environments. In this book the author investigates whether deficiencies of reinforcement learning can be overcome by suitable abstraction methods. He discusses various forms of spatial abstraction, in particular qualitative abstraction, a form of representing knowledge that has been thoroughly investigated and successfully applied in spatial cognition research. With his approach, he exploits spatial structures and structural similarity to support the learning process by abstracting from less important features and stressing the essential ones. The author demonstrates his learning approach and the transferability of knowledge by having his system learn in a virtual robot simulation system and subsequently transferring the acquired knowledge to a physical robot. The approach is influenced by findings from cognitive science. The book is suitable for researchers working in artificial intelligence, in particular knowledge representation, learning, spatial cognition and robotics.
This invaluable book has been designed to be useful to most practising scientists and engineers, whatever their field and however rusty their mathematics and programming might be. The approach taken is largely practical, with algorithms being presented in full and working code (in BASIC, FORTRAN, PASCAL and C) included on a floppy disk to help the reader get up and running as quickly as possible. The text could also be used as part of an undergraduate course on search and optimisation. Student exercises are included at the end of several of the chapters, many of which are computer-based and designed to encourage exploration of the method.
Summarizes and illuminates two decades of research
The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and influence. 'Data science' and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? How does it all fit together? Now in paperback and fortified with exercises, this book delivers a concentrated course in modern statistical thinking. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov Chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. Each chapter ends with class-tested exercises, and the book concludes with speculation on the future direction of statistics and data science.
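As a taste of the resampling methods listed above, the bootstrap estimates the variability of a statistic by resampling the data with replacement. A minimal Python sketch (illustrative only; the dataset and replicate count are arbitrary):

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=42):
    """Estimate the standard error of a statistic by resampling
    the observed data with replacement n_boot times."""
    rng = random.Random(seed)
    replicates = [
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7]
print(round(bootstrap_se(data), 3))
```

For the mean, the bootstrap standard error should land close to the classical estimate, the sample standard deviation divided by the square root of n.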
This book covers the research area from multiple viewpoints, including bibliometric analysis, reviews, empirical analysis, platforms, and future applications. The centralized training of deep learning and machine learning models not only incurs a high communication cost of transferring data into cloud systems but also raises the privacy concerns of data providers. Federated learning enables the distribution of learning models across the devices and systems, which perform initial training and report the updated model attributes to centralized cloud servers for secure and privacy-preserving attribute aggregation and global model development. Federated learning brings benefits in terms of privacy, communication efficiency, data security, and contributors' control of their critical data. The book targets researchers and practitioners who want to delve deep into core issues in federated learning research to transform next-generation artificial intelligence applications.
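The aggregation step described above, in which clients train locally and a server combines their updates weighted by data size, can be sketched in miniature (illustrative only; a real federated system exchanges model parameters such as neural network weights, not simple means):

```python
def local_update(data):
    """Client-side 'training': fit the simplest possible model, a local mean.
    Only the model and the sample count leave the client, never the raw data."""
    return sum(data) / len(data), len(data)

def federated_average(client_datasets):
    """Server-side aggregation: average the client models,
    weighted by how much data each client holds."""
    updates = [local_update(d) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(model * n for model, n in updates) / total

clients = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
print(federated_average(clients))  # 3.5, the same as the mean of the pooled data
```

Weighting by sample count is what makes the aggregate match the model that centralized training on the pooled data would have produced, for this simple statistic.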
Industrial Applications of Machine Learning shows how machine learning can be applied to address real-world problems in the fourth industrial revolution, and provides the required knowledge and tools to empower readers to build their own solutions based on theory and practice. The book introduces the fourth industrial revolution and its current impact on organizations and society. It explores machine learning fundamentals, and includes four case studies that address a real-world problem in the manufacturing or logistics domains, and approaches machine learning solutions from an application-oriented point of view. The book should be of special interest to researchers interested in real-world industrial problems. Features Describes the opportunities, challenges, issues, and trends offered by the fourth industrial revolution Provides a user-friendly introduction to machine learning with examples of cutting-edge applications in different industrial sectors Includes four case studies addressing real-world industrial problems solved with machine learning techniques A dedicated website for the book contains the datasets of the case studies for the reader's reproduction, laying the groundwork for future problem-solving Uses three of the most widespread software packages and programming languages within the engineering and data science communities, namely R, Python, and Weka
This is a practical guide to P-splines, a simple, flexible and powerful tool for smoothing. P-splines combine regression on B-splines with simple, discrete, roughness penalties. They were introduced by the authors in 1996 and have been used in many diverse applications. The regression basis makes it straightforward to handle non-normal data, like in generalized linear models. The authors demonstrate optimal smoothing, using mixed model technology and Bayesian estimation, in addition to classical tools like cross-validation and AIC, covering theory and applications with code in R. Going far beyond simple smoothing, they also show how to use P-splines for regression on signals, varying-coefficient models, quantile and expectile smoothing, and composite links for grouped data. Penalties are the crucial elements of P-splines; with proper modifications they can handle periodic and circular data as well as shape constraints. Combining penalties with tensor products of B-splines extends these attractive properties to multiple dimensions. An appendix offers a systematic comparison to other smoothers.
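The penalty idea at the heart of P-splines can be shown in a few lines with a close relative, the Whittaker smoother, which replaces the B-spline basis with the identity and minimises |y - z|^2 + lam * |D_d z|^2, where D_d is the d-th order difference matrix (a generic numpy sketch, not the book's R code; the smoothing parameter and test signal are arbitrary):

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, d=2):
    """Penalised least squares smoothing: solve (I + lam * D'D) z = y,
    where D takes d-th order differences. Larger lam means smoother z."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)  # d-th order difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(0.0, 0.2, size=x.size)  # noisy signal
z = whittaker_smooth(y, lam=50.0)
print(round(float(np.mean((z - np.sin(x)) ** 2)), 4))
```

P-splines apply exactly this kind of difference penalty to B-spline coefficients instead of to the data points themselves, which keeps the linear system small even for large n.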
This book offers a comprehensible overview of Big Data preprocessing, including a formal description of each problem, and focuses on the most relevant proposed solutions. It illustrates actual implementations of algorithms that help the reader deal with these problems, and it stresses the gap that exists between big, raw data and the quality data that businesses demand. This quality data is called Smart Data, and preprocessing is a key step in achieving it: imperfections are repaired, integration tasks and other processes are carried out, and superfluous information is eliminated. The authors present the concept of Smart Data through data preprocessing in Big Data scenarios and connect it with the emerging paradigms of IoT and edge computing, where the end points generate Smart Data without completely relying on the cloud. Finally, the book covers some novel areas of study that are attracting deeper attention in Big Data preprocessing: the relation with Deep Learning (as a technique that also relies on large volumes of data), the difficulty of selecting and chaining the appropriate preprocessing techniques, and other open problems. Practitioners and data scientists who work in this field and want to introduce themselves to preprocessing in large data volume scenarios will want to purchase this book, as will researchers who want to know which algorithms are currently implemented to help their investigations.
This book aims to reach an understanding of how the mind carries out three sorts of thinking -- deduction, induction, and creation -- to consider what goes right and what goes wrong, and to explore computational models of these sorts of thinking. Written for students of the mind -- psychologists, computer scientists, philosophers, linguists, and other cognitive scientists -- it also provides general readers with a self-contained account of human and machine thinking. The author presents his point of view, rather than a review, as simply as possible so that no technical background is required. Like the field of research itself, it calls for hard thinking about thinking.
This is the second volume of a large two-volume editorial project we wish to dedicate to the memory of the late Professor Ryszard S. Michalski, who passed away in 2007. He was one of the fathers of machine learning, an area of modern computer science and information technology that is exciting and relevant from both practical and theoretical points of view. His research career started in the mid-1960s in Poland, in the Institute of Automation, Polish Academy of Sciences in Warsaw. He left for the USA in 1970, and since then worked there at various universities, notably at the University of Illinois at Urbana-Champaign and finally, until his untimely death, at George Mason University. We, the editors, were lucky to be able to meet and collaborate with Ryszard for years; indeed, some of us knew him when he was still in Poland. After he started working in the USA, he was a frequent visitor to Poland, taking part in many conferences until his death. We also witnessed with great personal pleasure the honors and awards he received over the years, notably when, some years ago, he was elected Foreign Member of the Polish Academy of Sciences among top scientists and scholars from all over the world, including Nobel prize winners. Professor Michalski's research results very strongly influenced the development of machine learning, data mining, and related areas. He also inspired many established and younger scholars and scientists all over the world. We feel very happy that so many top scientists from all over the world agreed to pay a last tribute to Professor Michalski by writing papers in their areas of research. These papers constitute the most appropriate tribute to Professor Michalski, a devoted scholar and researcher. Moreover, we believe that they will inspire many newcomers and younger researchers in the area of broadly perceived machine learning, data analysis and data mining.
The papers included in the two volumes, Machine Learning I and Machine Learning II, cover diverse topics and various aspects of the fields involved. For the convenience of potential readers, we now briefly summarize the contents of the individual chapters.
This book aims to present dominant applications and use cases of the fast-evolving digital twin (DT) and determines vital Industry 4.0 technologies for building DTs that can provide solutions for fighting local and global medical emergencies during pandemics. Moreover, it discusses a new framework integrating DT and blockchain technology to provide more efficient and effective preventive conservation in different applications.
Covid-19 has shown us the importance of mathematical and statistical models to interpret reality, provide forecasts, and explore future scenarios. Algorithms, artificial neural networks, and machine learning help us discover the opportunities and pitfalls of a world governed by mathematics and artificial intelligence.
Build predictive models from time-based patterns in your data. Master statistical models, including new deep learning approaches, for time series forecasting. In Time Series Forecasting in Python you will learn how to: recognize a time series forecasting problem and build a performant predictive model; create univariate forecasting models that account for seasonal effects and external variables; build multivariate forecasting models to predict many time series at once; leverage large datasets by using deep learning for forecasting time series; and automate the forecasting process.

Time Series Forecasting in Python teaches you to apply time series forecasting and get immediate, meaningful predictions. Every model you create is relevant, useful, and easy to implement with Python. You'll learn both traditional statistical and new deep learning models for time series forecasting, all fully illustrated with Python source code. You'll explore interesting real-world datasets like Google's daily stock price and economic data for the USA, quickly progressing from the basics to developing large-scale models that use deep learning tools like TensorFlow.

Time series forecasting reveals hidden trends and makes predictions about the future from your data. This powerful technique has proven incredibly valuable across multiple fields, from tracking business metrics to healthcare and the sciences. Modern Python libraries and powerful deep learning tools have opened up new methods and utilities for making practical time series forecasts. Test your skills with hands-on projects for forecasting air travel, volume of drug prescriptions, and the earnings of Johnson & Johnson. By the time you're done, you'll be ready to build accurate and insightful forecasting models with tools from the Python ecosystem.
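As a flavour of the statistical side of such forecasting, an autoregressive model can be fit by least squares in a few lines of numpy (a generic sketch, not the book's code; the series, lag order and horizon are arbitrary):

```python
import numpy as np

def fit_ar(series, p=3):
    """Fit an AR(p) model y_t = c + a1*y_(t-1) + ... + ap*y_(t-p)
    by ordinary least squares."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    # Design matrix: intercept plus the p most recent lags for each row.
    lags = np.column_stack([y[p - i - 1 : n - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(n - p), lags])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef

def forecast(series, coef, steps=5):
    """Roll the fitted recursion forward, feeding predictions back in as history."""
    p = len(coef) - 1
    hist = list(series)
    preds = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[i + 1] * hist[-i - 1] for i in range(p))
        hist.append(nxt)
        preds.append(nxt)
    return preds

t = np.arange(60)
series = np.sin(0.3 * t)  # a noiseless periodic series the model can capture
coef = fit_ar(series, p=3)
preds = forecast(series, coef, steps=5)
print([round(float(v), 3) for v in preds])
```

On a clean sinusoid the fitted recursion reproduces the series almost exactly; on real, noisy data the same machinery gives the baseline that deep learning models are measured against.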
The objective of Document Analysis and Recognition (DAR) is to recognize the text and graphical components of a document and to extract information. This book is a collection of research papers and state-of-the-art reviews by leading researchers all over the world. It includes pointers to challenges and opportunities for future research directions. The main goal of the book is to identify good practices for the use of learning strategies in DAR.
Just over thirty years after Holland first presented the outline for the Learning Classifier System (LCS) paradigm, the ability of LCS to solve complex real-world problems is becoming clear. In particular, their capability for rule induction in data mining has sparked renewed interest in LCS. This book brings together work by a number of individuals who are demonstrating the good performance of LCS in a variety of domains. The first contribution is arranged as follows: first, the main forms of LCS are described in some detail; a number of historical uses of LCS in data mining are then reviewed, before an overview of the rest of the volume is presented. The rest of the book describes recent research on the use of LCS in the main areas of machine learning and data mining: classification, clustering, time-series and numerical prediction, feature selection, ensembles, and knowledge discovery.
Someday computers will be artists. They'll be able to write amusing and original stories, invent and play games of unsurpassed complexity and inventiveness, tell jokes and suffer writer's block. But these things will require computers that can both achieve artistic goals and be creative. Both capabilities are far from accomplished. This book presents a theory of creativity that addresses some of the many hard problems which must be solved to build a creative computer. It also presents an exploration of the kinds of goals and plans needed to write simple short stories. These theories have been implemented in a computer program called MINSTREL which tells stories about King Arthur and his knights. While far from being the silicon author of the future, MINSTREL does illuminate many of the interesting and difficult issues involved in constructing a creative computer. The results presented here should be of interest to at least three different groups of people. Artificial intelligence researchers should find this work an interesting application of symbolic AI to the problems of story-telling and creativity. Psychologists interested in creativity and imagination should benefit from the attempt to build a detailed, explicit model of the creative process. Finally, authors and others interested in how people write should find MINSTREL's model of the author-level writing process thought-provoking.
Agricultural systems are uniquely complex, given that they are part of natural and ecological systems. This brings a substantial degree of uncertainty into system operation. Moreover, impact factors such as weather are critical in agricultural systems, yet uncontrollable in system management. Modern agriculture has been evolving through precision agriculture, beginning in the late 1980s, and biotechnological innovations in the early 2000s. Precision agriculture implements site-specific crop production management by integrating agricultural mechanization and information technology in geographic information systems (GIS), global navigation satellite systems (GNSS), and remote sensing. Now, precision agriculture is set to evolve into smart agriculture with advanced systematization, informatization, intelligence and automation. From precision agriculture to smart agriculture, there is a substantial number of specific control and communication problems that have been investigated and will continue to be studied. In this book, the core ideas and methods from control problems in agricultural production systems are extracted, and a system view of agricultural production is formulated for the analysis and design of management strategies to control and optimize agricultural production systems while exploiting the intrinsic feedback information-exchanging mechanisms. On this basis, the theoretical framework of agricultural cybernetics is established to predict and control the behavior of agricultural production systems through control theory.
Event mining encompasses techniques for automatically and efficiently extracting valuable knowledge from historical event/log data. The field, therefore, plays an important role in data-driven system management. Event Mining: Algorithms and Applications presents state-of-the-art event mining approaches and applications with a focus on computing system management. The book first explains how to transform log data in disparate formats and contents into a canonical form as well as how to optimize system monitoring. It then shows how to extract useful knowledge from data. It describes intelligent and efficient methods and algorithms to perform data-driven pattern discovery and problem determination for managing complex systems. The book also discusses data-driven approaches for the detailed diagnosis of a system issue and addresses the application of event summarization in Twitter messages (tweets). Understanding the interdisciplinary field of event mining can be challenging as it requires familiarity with several research areas and the relevant literature is scattered in diverse publications. This book makes it easier to explore the field by providing both a good starting point for readers not familiar with the topics and a comprehensive reference for those already working in this area.
This comprehensive and timely book, New Age Analytics: Transforming the Internet through Machine Learning, IoT, and Trust Modeling, explores the importance of tools and techniques used in machine learning, big data mining, and more. The book explains how advancements in the world of the web have been achieved and how the experiences of users can be analyzed. It looks at data gathering by the various electronic means and explores techniques for analysis and management, how to manage voluminous data, user responses, and more. This volume provides an abundance of valuable information for professionals and researchers working in the field of business analytics, big data, social network data, computer science, analytical engineering, and forensic analysis. Moreover, the book provides insights and support from both practitioners and academia in order to highlight the most debated aspects in the field.