Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books on genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book that is designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms And Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a wide range of real-world applications. The theoretical material and applications place special stress on the interactive decision-making aspects of fuzzy multiobjective optimization for human-centered systems in the most realistic situations involving fuzziness. The intended readers of this book are senior undergraduate students, graduate students, researchers, and practitioners in the fields of operations research, computer science, industrial engineering, management science, systems engineering, and other engineering disciplines that deal with the subjects of multiobjective programming for discrete or other hard optimization problems under fuzziness. Real-world research applications are used throughout the book to illustrate the presentation. These applications are drawn from complex problems.
Examples include flexible scheduling in a machine center, operation planning of district heating and cooling plants, and coal purchase planning in an actual electric power plant.
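The core loop the book builds on, selection, crossover, and mutation over a population of bit strings, can be sketched in a few lines. The 0-1 knapsack instance below and all parameter values are illustrative assumptions, not taken from the book, and the sketch is single-objective and crisp rather than fuzzy multiobjective.

```python
import random

# Assumed toy 0-1 knapsack instance: item values, weights, and a capacity.
VALUES = [6, 5, 8, 9, 6, 7, 3]
WEIGHTS = [2, 3, 6, 7, 5, 9, 4]
CAPACITY = 15

def fitness(bits):
    """Total value of selected items; infeasible selections score 0."""
    w = sum(wi for wi, b in zip(WEIGHTS, bits) if b)
    return sum(vi for vi, b in zip(VALUES, bits) if b) if w <= CAPACITY else 0

def genetic_algorithm(pop_size=40, generations=80, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    n = len(VALUES)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament: the fitter of two random individuals reproduces.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n)                           # one-point crossover
            child = [b ^ (rng.random() < p_mut)                 # bit-flip mutation
                     for b in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
print(best, fitness(best))
```

Multiobjective and fuzzy extensions replace the scalar `fitness` with dominance-based ranking or membership-function aggregation, but the evolutionary loop itself stays the same.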
Genetic programming (GP), one of the most advanced forms of evolutionary computation, has been highly successful as a technique for getting computers to automatically solve problems without having to tell them explicitly how. Since its inception more than ten years ago, GP has been used to solve practical problems in a variety of application fields. Alongside these ad-hoc engineering approaches, interest increased in how and why GP works. This book provides a coherent consolidation of recent work on the theoretical foundations of GP. A concise introduction to GP and genetic algorithms (GA) is followed by a discussion of fitness landscapes and other theoretical approaches to natural and artificial evolution. Having surveyed early approaches to GP theory, it presents a new exact schema analysis, showing that it applies to GP as well as to the simpler GAs. New results on the potentially infinite number of possible programs are followed by two chapters applying these new techniques.
The expansion of digital data has transformed various sectors of business such as healthcare, industrial manufacturing, and transportation. A new way of solving business problems has emerged through the use of machine learning techniques in conjunction with big data analytics. Deep Learning Innovations and Their Convergence With Big Data is a pivotal reference for the latest scholarly research on upcoming trends in data analytics and potential technologies that will facilitate insight in various domains of science, industry, business, and consumer applications. Featuring extensive coverage on a broad range of topics and perspectives such as deep neural networks, domain adaptation modeling, and threat detection, this book is ideally designed for researchers, professionals, and students seeking current research on the latest trends in the field of deep learning techniques in big data analytics. Contents include: Deep Auto-Encoders; Deep Neural Networks; Domain Adaptation Modeling; Multilayer Perceptron (MLP); Natural Language Processing (NLP); Restricted Boltzmann Machines (RBM); and Threat Detection.
Data Science, Analytics and Machine Learning with R explains the principles of data mining and machine learning techniques and accentuates the importance of applied and multivariate modeling. The book emphasizes the fundamentals of each technique, with step-by-step codes and real-world examples with data from areas such as medicine and health, biology, engineering, technology and related sciences. Examples use the most recent R language syntax, with recognized robust, widespread and current packages. Code scripts are exhaustively commented, making it clear to readers what happens in each command. For data collection, readers are instructed how to build their own robots from the very beginning. In addition, an entire chapter focuses on the concept of spatial analysis, allowing readers to build their own maps through geo-referenced data (such as in epidemiologic research) and some basic statistical techniques. Other chapters cover ensemble and uplift modeling and GLMM (Generalized Linear Mixed Models) estimations, both linear and nonlinear.
The study of artificial intelligence (AI) is indeed a strange pursuit. Unlike most other disciplines, few AI researchers even agree on a mutually acceptable definition of their chosen field of study. Some see AI as a subfield of computer science, others see AI as a computationally oriented branch of psychology or linguistics, while still others see it as a bag of tricks to be applied to an entire spectrum of diverse domains. This lack of unified purpose among the AI community makes this a very exciting time for AI research: new and diverse projects are springing up literally every day. As one might imagine, however, this diversity also leads to genuine difficulties in assessing the significance and validity of AI research. These difficulties are an indication that AI has not yet matured as a science: it is still at the point where people are attempting to lay down (hopefully sound) foundations. Ritchie and Hanna [1] posit the following categorization as an aid in assessing the validity of an AI research endeavor: (1) The project could introduce, in outline, a novel (or partly novel) idea or set of ideas. (2) The project could elaborate the details of some approach. Starting with the kind of idea in (1), the research could criticize it or fill in further details. (3) The project could be an AI experiment, where a theory as in (1) and (2) is applied to some domain. Such experiments are usually computer programs that implement a particular theory.
This volume introduces machine learning techniques that are particularly powerful and effective for modeling multimedia data and common tasks of multimedia content analysis. It systematically covers key machine learning techniques in an intuitive fashion and demonstrates their applications through case studies. Coverage includes examples of unsupervised learning, generative models and discriminative models. In addition, the book examines Maximum Margin Markov (M3) networks, which strive to combine the advantages of both the graphical models and Support Vector Machines (SVM).
The advances of live cell video imaging and high-throughput technologies for functional and chemical genomics provide unprecedented opportunities to understand how biological processes work in subcellular and multicellular systems. The interdisciplinary research field of Video Bioinformatics is defined by Bir Bhanu as the automated processing, analysis, understanding, data mining, visualization, and query-based retrieval/storage of biological spatiotemporal events/data and knowledge extracted from dynamic images and microscopic videos. Video bioinformatics attempts to provide a deeper understanding of continuous and dynamic life processes. Genome sequences alone lack spatial and temporal information, and video imaging of specific molecules and their spatiotemporal interactions, using a range of imaging methods, is essential to understand how genomes create cells, how cells constitute organisms, and how errant cells cause disease. The book examines interdisciplinary research issues and challenges with examples that deal with organismal dynamics, intercellular and tissue dynamics, intracellular dynamics, protein movement, cell signaling, and software and databases for video bioinformatics. Topics and features:
* Covers a set of biological problems, their significance, live-imaging experiments, theory and computational methods, quantifiable experimental results, and discussion of results
* Provides automated methods for analyzing mild traumatic brain injury over time, identifying injury dynamics after neonatal hypoxia-ischemia, and visualizing cortical tissue changes during seizure activity as examples of organismal dynamics
* Describes techniques for quantifying the dynamics of human embryonic stem cells, with examples of cell detection/segmentation, spreading, and other dynamic behaviors that are important for characterizing stem cell health
* Examines and quantifies dynamic processes in plant and fungal systems, such as cell trafficking and the growth of pollen tubes, in model systems such as Neurospora crassa and Arabidopsis
* Discusses the dynamics of intracellular molecules for DNA repair and the regulation of cofilin transport using video analysis
* Discusses software, system, and database aspects of video bioinformatics by providing examples of 5D cell tracking with the FARSIGHT open source toolkit, a survey of available databases and software, biological processes for non-verbal communications, and the identification and retrieval of moth images
This unique text will be of great interest to researchers and graduate students of Electrical Engineering, Computer Science, Bioengineering, Cell Biology, Toxicology, Genetics, Genomics, Bioinformatics, Computer Vision and Pattern Recognition, Medical Image Analysis, and Cell, Molecular and Developmental Biology. The large number of example applications will also appeal to application scientists and engineers. Dr. Bir Bhanu is Distinguished Professor of Electrical & Computer Engineering, Interim Chair of the Department of Bioengineering, Cooperative Professor of Computer Science & Engineering and of Mechanical Engineering, and Director of the Center for Research in Intelligent Systems at the University of California, Riverside, California, USA. Dr. Prue Talbot is Professor of Cell Biology & Neuroscience and Director of the Stem Cell Center and Core at the University of California, Riverside, California, USA.
Recent decades have seen rapid advances in automatization processes, supported by modern machines and computers. The result is significant increases in system complexity and state changes, information sources, the need for faster data handling and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems partially. Conventional learning algorithms in a batch off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dynamic learning methods serve as keystones for achieving models with high accuracy. Rather than rely on a mathematical theorem/proof style, the editors highlight numerous figures, tables, examples and applications, together with their explanations. This approach offers a useful basis for further investigation and fresh ideas and motivates and inspires newcomers to explore this promising and still emerging field of research.
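One of the simplest devices for learning under non-stationarity is an exponential forgetting factor, which down-weights old observations so an incrementally updated model can track drift. The sketch below, a recursively updated mean with an assumed factor of 0.9 on an assumed data stream, is only a toy illustration of that general idea, not a method taken from the book.

```python
class ForgettingMean:
    """Recursive mean with exponential forgetting: recent samples dominate."""
    def __init__(self, lam=0.9):   # lam < 1 forgets the past; lam = 1 is the plain mean
        self.lam = lam
        self.s = 0.0               # exponentially weighted sum of samples
        self.w = 0.0               # exponentially weighted sum of weights
    def update(self, x):
        self.s = self.lam * self.s + x
        self.w = self.lam * self.w + 1.0
        return self.s / self.w     # current estimate

m = ForgettingMean(lam=0.9)
stream = [1.0] * 50 + [5.0] * 50   # abrupt concept drift halfway through the stream
for x in stream:
    est = m.update(x)
print(round(est, 2))
```

After the drift, the estimate is close to 5.0, whereas the ordinary batch mean of the whole stream would sit at 3.0, exactly the failure of batch off-line learning the blurb describes.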
Grammatical Evolution: Evolutionary Automatic Programming in an Arbitrary Language provides the first comprehensive introduction to Grammatical Evolution, a novel approach to Genetic Programming that adopts principles from molecular biology in a simple and useful manner, coupled with the use of grammars to specify legal structures in a search. Grammatical Evolution's rich modularity gives a unique flexibility, making it possible to use alternative search strategies - whether evolutionary, deterministic or some other approach - and to even radically change its behavior by merely changing the grammar supplied. This approach to Genetic Programming represents a powerful new weapon in the Machine Learning toolkit that can be applied to a diverse set of problem domains.
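The genotype-to-program mapping at the heart of Grammatical Evolution can be illustrated compactly: each integer codon, taken modulo the number of productions available for the leftmost non-terminal, selects the rule used to expand it. The toy grammar and codon values below are illustrative assumptions, not examples from the book.

```python
# Assumed toy BNF grammar for arithmetic expressions.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["-"], ["*"]],
    "<var>":  [["x"], ["y"]],
}

def ge_map(codons, start="<expr>", max_wraps=2):
    """Map a list of integer codons to a program string via the grammar."""
    seq = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in seq):
        if i == len(codons):                 # codon supply exhausted: wrap around
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("mapping failed to terminate")
        # Expand the leftmost non-terminal; codon mod rule-count picks the production.
        nt = next(j for j, s in enumerate(seq) if s in GRAMMAR)
        rules = GRAMMAR[seq[nt]]
        seq[nt:nt + 1] = rules[codons[i] % len(rules)]
        i += 1
    return "".join(seq)

print(ge_map([0, 1, 0, 2, 1, 1]))   # → x*y
```

Because the search operates on plain integer strings while the grammar guarantees syntactic legality, the evolutionary engine can be swapped out, which is the modularity the blurb highlights.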
What follows is a sampler of work in knowledge acquisition. It comprises three technical papers and six guest editorials. The technical papers give an in-depth look at some of the important issues and current approaches in knowledge acquisition. The editorials were produced by authors who were basically invited to sound off. I've tried to group and order the contributions somewhat coherently. The following annotations emphasize the connections among the separate pieces. Buchanan's editorial starts on the theme of "Can machine learning offer anything to expert systems?" He emphasizes the practical goals of knowledge acquisition and the challenge of aiming for them. Lenat's editorial briefly describes experience in the development of CYC that straddles both fields. He outlines a two-phase development that relies on an engineering approach early on and aims for a crossover to more automated techniques as the size of the knowledge base increases. Bareiss, Porter, and Murray give the first technical paper. It comes from a laboratory of machine learning researchers who have taken an interest in supporting the development of knowledge bases, with an emphasis on how development changes with the growth of the knowledge base. The paper describes two systems. The first, Protos, adjusts the training it expects and the assistance it provides as its knowledge grows. The second, KI, is a system that helps integrate knowledge into an already very large knowledge base.
This book highlights the financial community's realization regarding the failure of corporate communication required for forensic professionals. This has led to structural weaknesses in areas such as flawed internal controls, poor corporate governance, and fraudulent financial statements. A vital need exists for the development of forensic accounting techniques, a reduction in external auditor deficiencies in fraud detection, and the use of cloud forensic audit to enhance corporate efficiency in fraud detection. This book discusses forensic accounting techniques and explores how forensic accountants add value while investigating claims and fraud. It also highlights the corporate benefits of forensic accounting audits and the acceptance of this evidence in the court of law. The chapters ultimately show the significance of forensic accounting audits and how research has developed in the field. Research into new ways, techniques, and methods for minimizing corporate damage can greatly benefit society.
This monograph is a contribution to the study of the identification problem: the problem of identifying an item from a known class using positive and negative examples. This problem is considered to be an important component of the process of inductive learning, and as such has been studied extensively. In the overview we shall explain the objectives of this work and its place in the overall fabric of learning research. Context. Learning occurs in many forms; the only form we are treating here is inductive learning, roughly characterized as the process of forming general concepts from specific examples. Computer Science has found three basic approaches to this problem:
* Select a specific learning task, possibly part of a larger task, and construct a computer program to solve that task.
* Study cognitive models of learning in humans and extrapolate from them general principles to explain learning behavior. Then construct machine programs to test and illustrate these models.
* Formulate a mathematical theory to capture key features of the induction process.
This work belongs to the third category. The various studies of learning utilize training examples (data) in different ways. The three principal ones are:
* Similarity-based (or empirical) learning, in which a collection of examples is used to select an explanation from a class of possible rules.
Data Mining is the science and technology of exploring large and complex bodies of data in order to discover useful patterns. It is extremely important because it enables modeling and knowledge extraction from abundant data availability. This book introduces soft computing methods extending the envelope of problems that data mining can solve efficiently. It presents practical soft-computing approaches in data mining and includes various real-world case studies with detailed results.
Over the past three decades or so, research on machine learning and data mining has led to a wide variety of algorithms that learn general functions from experience. As machine learning is maturing, it has begun to make the successful transition from academic research to various practical applications. Generic techniques such as decision trees and artificial neural networks, for example, are now being used in various commercial and industrial applications. Learning to Learn is an exciting new research direction within machine learning. Similar to traditional machine-learning algorithms, the methods described in Learning to Learn induce general functions from experience. However, the book investigates algorithms that can change the way they generalize, i.e., practice the task of learning itself, and improve on it. To illustrate the utility of learning to learn, it is worthwhile comparing machine learning with human learning. Humans encounter a continual stream of learning tasks. They do not just learn concepts or motor skills, they also learn bias, i.e., they learn how to generalize. As a result, humans are often able to generalize correctly from extremely few examples - often just a single example suffices to teach us a new thing. A deeper understanding of computer programs that improve their ability to learn can have a large practical impact on the field of machine learning and beyond. In recent years, the field has made significant progress towards a theory of learning to learn along with practical new algorithms, some of which led to impressive results in real-world applications. Learning to Learn provides a survey of some of the most exciting new research approaches, written by leading researchers in the field. Its objective is to investigate the utility and feasibility of computer programs that can learn how to learn, both from a practical and a theoretical point of view.
This book sheds light on processes associated with the construction of cognitive maps, that is to say, with the construction of internal representations of very large spatial entities such as towns, cities, neighborhoods, landscapes, metropolitan areas, environments and the like. Because of their size, such entities can never be seen in their entirety, and consequently one constructs their internal representation by means of visual, as well as non-visual, modes of sensation and information - text, auditory, haptic and olfactory means for example - or by inference. Intersensory coordination and information transfer thus play a crucial role in the construction of cognitive maps. Because it involves a multiplicity of sensational and informational modes, the issue of cognitive maps does not fall into any single traditional cognitive field, but rather into, and often in between, several of them. Thus, although one is dealing here with processes associated with almost every aspect of our daily life, the subject has received relatively marginal scientific attention. The book is directed to researchers and students of cognitive mapping and environmental cognition. In particular it focuses on the cognitive processes by which one form of information, say haptic, is being transformed into another, say a visual image, and by which multiple forms of information participate in constructing cognitive maps.
Machine Conversations is a collection of some of the best research available in the practical arts of machine conversation. The book describes various attempts to create practical and flexible machine conversation - ways of talking to computers in an unrestricted version of English or some other language. While this book employs and advances the theory of dialogue and its linguistic underpinnings, the emphasis is on practice, both in university research laboratories and in company research and development. Since the focus is on the task and on the performance, this book provides some of the first-rate work taking place in industry, quite apart from the academic tradition. It also reveals striking and relevant facts about the tone of machine conversations and closely evaluates what users require. Machine Conversations is an excellent reference for researchers interested in computational linguistics, cognitive science, natural language processing, artificial intelligence, human computer interfaces and machine learning.
Genetic Algorithms: Principles and Perspectives: A Guide to GA Theory is a survey of some important theoretical contributions, many of which have been proposed and developed in the Foundations of Genetic Algorithms series of workshops. However, this theoretical work is still rather fragmented, and the authors believe that it is the right time to provide the field with a systematic presentation of the current state of theory in the form of a set of theoretical perspectives. The authors do this in the interest of providing students and researchers with a balanced foundational survey of some recent research on GAs. The scope of the book includes chapter-length discussions of Basic Principles, Schema Theory, "No Free Lunch," GAs and Markov Processes, Dynamical Systems Model, Statistical Mechanics Approximations, Predicting GA Performance, Landscapes and Test Problems.
This book collects selected papers from CODATA 2006 that are relevant to the acquisition of knowledge and to the assessment of risk and opportunity that comes from combining data from a number of different disciplines.
Machine learning methods are now an important tool for scientists, researchers, engineers and students in a wide range of areas. This book is written for people who want to adopt and use the main tools of machine learning, but aren't necessarily going to want to be machine learning researchers. Intended for students in final year undergraduate or first year graduate computer science programs in machine learning, this textbook is a machine learning toolkit. Applied Machine Learning covers many topics for people who want to use machine learning processes to get things done, with a strong emphasis on using existing tools and packages, rather than writing one's own code. A companion to the author's Probability and Statistics for Computer Science, this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use). Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:
* classification using standard machinery (naive Bayes; nearest neighbor; SVM)
* clustering and vector quantization (largely as in PSCS)
* PCA (largely as in PSCS)
* variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
* linear regression (largely as in PSCS)
* generalized linear models, including logistic regression
* model selection with the Lasso and elastic net
* robustness and M-estimators
* Markov chains and HMMs (largely as in PSCS)
* EM in fairly gory detail; long experience teaching this suggests one detailed example is required, which students hate, but once they've been through that, the next one is easy
* simple graphical models (in the variational inference section)
* classification with neural networks, with a particular emphasis on image classification
* autoencoding with neural networks
* structure learning
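As a flavor of the "standard machinery" such a toolkit covers, nearest-neighbor classification fits in a dozen lines. This hand-rolled sketch with made-up 2-D data stands in for the packaged implementations the book recommends, and is only a self-contained illustration of the idea.

```python
import math

def nearest_neighbor(train, query):
    """1-NN: return the label of the training point closest to the query."""
    point, label = min(train, key=lambda pl: math.dist(pl[0], query))
    return label

# Assumed toy data: two 2-D clusters labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(nearest_neighbor(train, (0.1, 0.2)))   # → a
print(nearest_neighbor(train, (0.8, 0.9)))   # → b
```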
Advances in computing, communications, and control have bridged the physical components of reality and cyberspace leading to the smart internet of things (IoT). The notion of IoT has extraordinary significance for the future of several industrial domains. Hence, it is expected that the complexity in the design of IoT applications will continue to increase due to the integration of several cyber components with physical and industrial systems. As a result, several smart protocols and algorithms are needed to communicate and exchange data between IoT devices. Smart Devices, Applications, and Protocols for the IoT is a collection of innovative research that explores new methods and techniques for achieving reliable and efficient communication in recent applications including machine learning, network optimization, adaptive methods, and smart algorithms and protocols. While highlighting topics including artificial intelligence, sensor networks, and mobile network architectures, this book is ideally designed for IT specialists and consultants, software engineers, technology developers, academicians, researchers, and students seeking current research on up-to-date technologies in smart communications, protocols, and algorithms in IoT.
This book presents innovative work in Climate Informatics, a new field that reflects the application of data mining methods to climate science, and shows where this new and fast growing field is headed. Given its interdisciplinary nature, Climate Informatics offers insights, tools and methods that are increasingly needed in order to understand the climate system, an aspect which in turn has become crucial because of the threat of climate change. There has been a veritable explosion in the amount of data produced by satellites, environmental sensors and climate models that monitor, measure and forecast the earth system. In order to meaningfully pursue knowledge discovery on the basis of such voluminous and diverse datasets, it is necessary to apply machine learning methods, and Climate Informatics lies at the intersection of machine learning and climate science. This book grew out of the fourth workshop on Climate Informatics held in Boulder, Colorado in Sep. 2014.
Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents having common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: it must always keep a record of its moves, and it cannot reuse or improve upon previous experience. Moreover, although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world, because the problem solver can neither adapt to dynamically changing goals nor cooperatively solve problems with other problem solvers. This book deals with all these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and some theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.
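The interleaving of planning and execution that characterizes real-time search can be sketched with an LRTA*-style agent (the specific algorithms and extensions treated in the book differ; this grid world and its parameters are assumptions). At each step the agent raises the heuristic value of its current state toward the best neighbor's estimated cost and then moves there, so it learns while acting.

```python
def lrta_star(start, goal, neighbors, h0, max_steps=1000):
    """LRTA*-style agent with unit edge costs: learn heuristic values while acting."""
    h = {}                                   # learned heuristic table
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return path, h
        # f(s') = step cost (1) + current heuristic estimate of s'.
        best = min(neighbors(state), key=lambda s: 1 + h.get(s, h0(s)))
        # Learning step: raise h(state) to the best neighbor's f-value.
        h[state] = max(h.get(state, h0(state)), 1 + h.get(best, h0(best)))
        state = best
        path.append(state)
    raise RuntimeError("step budget exhausted")

# Assumed toy problem: a 4x4 grid with 4-connected moves and goal (3, 3).
def neighbors(s):
    x, y = s
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < 4 and 0 <= b < 4]

manhattan = lambda s: abs(s[0] - 3) + abs(s[1] - 3)   # admissible heuristic
path, h = lrta_star((0, 0), (3, 3), neighbors, manhattan)
print(len(path) - 1)   # number of moves taken
```

Because the learned table `h` persists across trials, repeated runs on the same problem converge toward optimal behavior, which is the kind of experience reuse the book's extensions address.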
In recent years the development of new classification and regression algorithms based on deep learning has led to a revolution in the fields of artificial intelligence, machine learning, and data analysis. The development of a theoretical foundation to guarantee the success of these algorithms constitutes one of the most active and exciting research topics in applied mathematics. This book presents the current mathematical understanding of deep learning methods from the point of view of the leading experts in the field. It serves both as a starting point for researchers and graduate students in computer science, mathematics, and statistics trying to get into the field and as an invaluable reference for future research.
Recent Advances in Robot Learning contains seven papers on robot learning written by leading researchers in the field. As the selection of papers illustrates, the field of robot learning is both active and diverse. A variety of machine learning methods, ranging from inductive logic programming to reinforcement learning, is being applied to many subproblems in robot perception and control, often with objectives as diverse as parameter calibration and concept formulation. While no unified robot learning framework has yet emerged to cover the variety of problems and approaches described in these papers and other publications, a clear set of shared issues underlies many robot learning problems. Machine learning, when applied to robotics, is situated: it is embedded into a real-world system that tightly integrates perception, decision making and execution. Since robot learning involves decision making, there is an inherent active learning issue. Robotic domains are usually complex, yet the expense of using actual robotic hardware often prohibits the collection of large amounts of training data. Most robotic systems are real-time systems. Decisions must be made within critical or practical time constraints. These characteristics present challenges and constraints to the learning system. Since these characteristics are shared by other important real-world application domains, robotics is a highly attractive area for research on machine learning. On the other hand, machine learning is also highly attractive to robotics. There is a great variety of open problems in robotics that defy a static, hand-coded solution. Recent Advances in Robot Learning is an edited volume of peer-reviewed original research comprising seven invited contributions by leading researchers. This research work has also been published as a special issue of Machine Learning (Volume 23, Numbers 2 and 3).
Intelligent systems of the natural kind are adaptive and robust: they learn over time and degrade gracefully under stress. If artificial systems are to display a similar level of sophistication, an organizing framework and operating principles are required to manage the resulting complexity of design and behavior. This book presents a general framework for adaptive systems. The utility of the comprehensive framework is demonstrated by tailoring it to particular models of computational learning, ranging from neural networks to declarative logic. The key to robustness lies in distributed decision making. An exemplar of this strategy is the neural network in both its biological and synthetic forms. In a neural network, the knowledge is encoded in the collection of cells and their linkages, rather than in any single component. Distributed decision making is even more apparent in the case of independent agents. For a population of autonomous agents, their proper coordination may well be more instrumental for attaining their objectives than are their individual capabilities. This book probes the problems and opportunities arising from autonomous agents acting individually and collectively. Following the general framework for learning systems and its application to neural networks, the coordination of independent agents through game theory is explored. Finally, the utility of game theory for artificial agents is revealed through a case study in robotic coordination. Given the universality of the subjects -- learning behavior and coordinative strategies in uncertain environments -- this book will be of interest to students and researchers in various disciplines, ranging from all areas of engineering to the computing disciplines; from the life sciences to the physical sciences; and from the management arts to social studies.
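The game-theoretic coordination of independent agents can be made concrete with a small example: enumerating the pure-strategy Nash equilibria of a two-agent coordination game, profiles from which neither agent gains by deviating alone. The payoff matrix below is an assumed illustration, not one of the book's case studies.

```python
from itertools import product

# Assumed 2x2 coordination game: both agents prefer matching actions.
# payoffs[(a1, a2)] = (payoff to agent 1, payoff to agent 2)
payoffs = {
    ("left", "left"):   (2, 2),
    ("left", "right"):  (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (1, 1),
}
actions = ["left", "right"]

def pure_nash(payoffs, actions):
    """A profile is a Nash equilibrium if no agent gains by deviating alone."""
    eqs = []
    for a1, a2 in product(actions, repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        ok1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)   # agent 1 deviations
        ok2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)   # agent 2 deviations
        if ok1 and ok2:
            eqs.append((a1, a2))
    return eqs

print(pure_nash(payoffs, actions))   # both matching profiles are equilibria
```

That both matched profiles are stable while only one is efficient is exactly why coordination, rather than individual capability, becomes the instrumental problem for populations of autonomous agents.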
You may like...
Hardware Accelerator Systems for… by Shiho Kim, Ganesh Chandra Deka (Hardcover): R3,950 (Discovery Miles 39 500)
Deep Learning for Chest Radiographs… by Yashvi Chandola, Jitendra Virmani, … (Paperback): R2,060 (Discovery Miles 20 600)
Cognitive Big Data Intelligence with a… by Sushruta Mishra, Hrudaya Kumar Tripathy, … (Paperback): R2,829 (Discovery Miles 28 290)
Cyber-Physical Systems - AI and COVID-19 by Ramesh Poonia, Basant Agarwal, … (Paperback): R2,817 (Discovery Miles 28 170)
Machine Learning for Planetary Science by Joern Helbert, Mario D'Amore, … (Paperback): R3,380 (Discovery Miles 33 800)
Research Anthology on Machine Learning… by Information R Management Association (Hardcover): R16,088 (Discovery Miles 160 880)
Adversarial Robustness for Machine… by Pin-Yu Chen, Cho-Jui Hsieh (Paperback): R2,204 (Discovery Miles 22 040)