The work described in this book was first presented at the Second Workshop on Genetic Programming, Theory and Practice, organized by the Center for the Study of Complex Systems at the University of Michigan, Ann Arbor, 13-15 May 2004. The goal of this workshop series is to promote the exchange of research results and ideas between those who focus on Genetic Programming (GP) theory and those who focus on the application of GP to various real-world problems. In order to facilitate these interactions, the number of talks and participants was kept small and the time for discussion was large. Further, participants were asked to review each other's chapters before the workshop. Those reviewer comments, as well as discussion at the workshop, are reflected in the chapters presented in this book. Additional information about the workshop, addenda to chapters, and a site for continuing discussions by participants and by others can be found at http://cscs.umich.edu:8000/GPTP-2004. We thank all the workshop participants for making the workshop an exciting and productive three days. In particular we thank all the authors, without whose hard work and creative talents neither the workshop nor the book would have been possible. We also thank our keynote speakers Lawrence ("Dave") Davis of NuTech Solutions, Inc., Jordan Pollack of Brandeis University, and Richard Lenski of Michigan State University, who delivered three thought-provoking speeches that inspired a great deal of discussion among the participants.
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces, under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows one to prove the impossibility of prediction in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.
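As a hedged illustration of the style of confidence-aware prediction described above, here is a minimal split-conformal regression sketch in Python. It is not taken from the book; the linear model, synthetic data, and scikit-learn usage are assumptions made only for this example.

```python
# A minimal split conformal prediction sketch (illustrative only, not the
# book's own algorithms). Assumes numpy and scikit-learn are available.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

# Fit on one half of the data, calibrate nonconformity scores on the other.
X_fit, y_fit, X_cal, y_cal = X[:100], y[:100], X[100:], y[100:]
model = LinearRegression().fit(X_fit, y_fit)
scores = np.abs(y_cal - model.predict(X_cal))  # nonconformity scores

# A 90% prediction interval is the point prediction plus/minus a high
# quantile of the calibration scores; its validity rests only on the
# i.i.d. (exchangeability) assumption the blurb calls "randomness".
alpha = 0.1
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))
x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```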
Data Science, Analytics and Machine Learning with R explains the principles of data mining and machine learning techniques and accentuates the importance of applied and multivariate modeling. The book emphasizes the fundamentals of each technique, with step-by-step code and real-world examples using data from areas such as medicine and health, biology, engineering, technology, and related sciences. Examples use the most recent R language syntax, with robust, widespread, and current packages. Code scripts are exhaustively commented, making it clear to readers what happens in each command. For data collection, readers are shown how to build their own robots from scratch. In addition, an entire chapter focuses on spatial analysis, allowing readers to build their own maps from geo-referenced data (as in epidemiologic research) using some basic statistical techniques. Other chapters cover ensemble and uplift modeling and the estimation of GLMMs (generalized linear mixed models), both linear and nonlinear.
Most machine learning research has been concerned with the development of systems that implement one type of inference within a single representational paradigm. Such systems, which can be called monostrategy learning systems, include those for empirical induction of decision trees or rules, explanation-based generalization, neural net learning from examples, genetic algorithm-based learning, and others. Monostrategy learning systems can be very effective and useful if the learning problems to which they are applied are sufficiently narrowly defined. Many real-world applications, however, pose learning problems that go beyond the capability of monostrategy learning methods. In view of this, recent years have witnessed a growing interest in developing multistrategy systems, which integrate two or more inference types and/or paradigms within one learning system. Such multistrategy systems take advantage of the complementarity of different inference types or representational mechanisms. They therefore have the potential to be more versatile and more powerful than monostrategy systems. On the other hand, due to their greater complexity, their development is significantly more difficult and represents a great new challenge to the machine learning community. Multistrategy Learning contains contributions characteristic of current research in this area.
Designing complex programs such as operating systems, compilers, filing systems, and database systems is a long-standing research area. Genetic programming is a relatively new, promising, and growing research area. Among other uses, it provides efficient tools for dealing with hard problems by evolving creative and competitive solutions. Systems programming is generally strewn with such hard problems. This book is devoted to reporting innovative and significant progress on the contribution of genetic programming to systems programming. The contributions in this book clearly demonstrate that genetic programming is very effective at solving hard, still-open problems in systems programming. After an introductory chapter, the contributed chapters show the reader systems where genetic programming can be applied successfully. These include, but are not limited to, information security systems, compilers, data mining systems, stock market prediction systems, robots, and automatic programming.
Philosophy and Cognitive Science: Categories, Consciousness, and Reasoning. "The individual man, since his separate existence is manifested only by ignorance and error, so far as he is anything apart from his fellows, and from what he and they are to be, is only a negation." (Peirce, Some Consequences of Four Incapacities, 1868.) For the second time the International Colloquium on Cognitive Science gathered at San Sebastian, from May 7-11, 1991, to discuss the following main topics: knowledge of categories; consciousness; reasoning and interpretation; evolution, biology, and mind. It is not an easy task to introduce in a few words the content of this volume. We have collected eleven invited papers presented at the Colloquium, which represent the substantial part of it. Unfortunately, it has not been possible to include all the invited lectures of the meeting. Before sketching each paper and showing its relevance, let us explain the reasons for the decision to organize a biennial international colloquium on Cognitive Science at Donostia (San Sebastian). First of all, Cognitive Science is a very active research area worldwide, linking multidisciplinary efforts coming mostly from psychology, artificial intelligence, theoretical linguistics, and neurobiology, and using increasingly formal tools. We think that this new discipline lacks solid foundations, and in this sense philosophy (particularly the theory of knowledge) and logic must be called upon.
Making Robots Smarter is a book about learning robots. It treats this topic based on the idea that the integration of sensing and action is the central issue. In the first part of the book, aspects of learning in execution and control are discussed. Methods for the automatic synthesis of controllers, for active sensing, for learning to enhance assembly, and for learning sensor-based navigation are presented. Since robots are not isolated but should serve us, the second part of the book discusses learning for human-robot interaction. Methods of learning understandable concepts for assembly, monitoring, and navigation are described as well as optimizing the implementation of such understandable concepts for a robot's real-time performance. In terms of the study of embodied intelligence, Making Robots Smarter asks how skills are acquired and where capabilities of execution and control come from. Can they be learned from examples or experience? What is the role of communication in the learning procedure? Whether we name it one way or the other, the methodological challenge is that of integrating learning capabilities into robots.
This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC (Instance-Specific Algorithm Configuration) and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014, and this book includes some expanded sections and notes on recent developments. Additionally, the techniques described in this book have been successfully applied to a number of solvers competing in the SAT and MaxSAT International Competitions, winning a total of 18 gold medals between 2011 and 2014. The book will be of interest to researchers and practitioners in artificial intelligence, in particular in the areas of machine learning and constraint programming.
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery techniques to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary to support a semi-automatic ontology engineering process.
To date, computers have been expected to store and exploit knowledge; at least, that is one of the aims of research fields such as Artificial Intelligence and Information Systems. However, the problem is to understand what knowledge means, to find ways of representing knowledge, and to specify automated machinery that can extract useful information from stored knowledge. Knowledge is something people have in their minds and can express through natural language. Knowledge is acquired not only from books, but also from observations made during experiments; in other words, from data. Changing data into knowledge is not a straightforward task: a set of data is generally disorganized and contains useless details, and it can also be incomplete. Knowledge is just the opposite: organized (e.g., laying bare dependencies or classifications), but expressed by means of a poorer language, i.e., pervaded by imprecision or even vagueness, and assuming a level of granularity. One may say that knowledge is summarized and organized data - at least the kind of knowledge that computers can store.
Independent Component Analysis (ICA) is a signal-processing method to extract independent sources given only observed data that are mixtures of the unknown sources. Recently, blind source separation by ICA has received considerable attention because of its potential signal-processing applications such as speech enhancement systems, telecommunications, medical signal-processing and several data mining issues. This book presents theories and applications of ICA and includes invaluable examples of several real-world applications. Based on theories in probabilistic models, information theory and artificial neural networks, several unsupervised learning algorithms are presented that can perform ICA. The seemingly different theories such as infomax, maximum likelihood estimation, negentropy maximization, nonlinear PCA, Bussgang algorithm and cumulant-based methods are reviewed and put in an information theoretic framework to unify several lines of ICA research. An algorithm is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. The learning algorithms can be extended to filter systems, which allows the separation of voices recorded in a real environment (cocktail party problem). The ICA algorithm has been successfully applied to many biomedical signal-processing problems such as the analysis of electroencephalographic data and functional magnetic resonance imaging data. ICA applied to images results in independent image components that can be used as features in pattern classification problems such as visual lip-reading and face recognition systems. The ICA algorithm can furthermore be embedded in an expectation maximization framework for unsupervised classification. Independent Component Analysis: Theory and Applications is the first book to successfully address this fairly new and generally applicable method of blind source separation. It is essential reading for researchers and practitioners with an interest in ICA.
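For a flavor of the blind source separation task this blurb describes, here is a minimal Python sketch. It uses scikit-learn's FastICA as an assumed stand-in for illustration, not the infomax, maximum-likelihood, or cumulant-based algorithms the book itself presents.

```python
# A toy cocktail-party separation (illustrative only; FastICA is a
# stand-in for the book's algorithms). Assumes numpy and scikit-learn.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent sources: a sinusoid and a super-Gaussian (Laplacian) signal.
S = np.c_[np.sin(3 * t), rng.laplace(size=t.size)]

# Observations are unknown linear mixtures of the sources, as when two
# microphones record two overlapping voices.
A = np.array([[1.0, 0.5],
              [0.7, 1.2]])  # mixing matrix, unknown to the algorithm
X = S @ A.T

# Recover the sources (up to permutation and scaling) from X alone.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
print("estimated mixing matrix:\n", ica.mixing_)
```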
Although good devices exist for presenting visual and auditory sensations, there is as yet no comparable device for presenting olfactory stimuli. Nevertheless, the area of smell presentation continues to evolve, and smell presentation in multimedia is not unlikely in the future. Human Olfactory Displays and Interfaces: Odor Sensing and Presentation provides the opportunity to learn about olfactory displays and odor reproduction. Covering the fundamentals and the latest research on sensors and sensing systems as well as presentation techniques, this book is vital for researchers, students, and practitioners seeking knowledge in the fields of consumer electronics, communications, virtual reality, electronic instruments, and more.
Digital Image Enhancement and Reconstruction: Techniques and Applications explores different concepts and techniques used for the enhancement and reconstruction of low-quality images. Most real-life applications require good-quality images to achieve maximum performance; however, the quality of images captured in real-world scenarios is often very unsatisfactory. Most commonly, images are noisy, blurry, hazy, or low-resolution, and hence need to pass through image enhancement and/or reconstruction algorithms before they can be processed by image analysis applications. This book comprehensively explores application-specific enhancement and reconstruction techniques, including satellite image enhancement, face hallucination, low-resolution face recognition, medical image enhancement and reconstruction, reconstruction of underwater images, text image enhancement, biometrics, and more. Chapters present a detailed discussion of the challenges faced in handling each particular kind of image, analysis of the best available solutions, and an exploration of applications and future directions. The book provides readers with a deep dive into denoising, dehazing, super-resolution, and the use of soft computing across a range of engineering applications.
Since the introduction of genetic algorithms in the 1970s, an enormous number of articles, together with several significant monographs and books, have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books on genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms and Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiobjectiveness and fuzziness. In addition, the book treats a wide range of real-world applications. The theoretical material and applications place special stress on the interactive decision-making aspects of fuzzy multiobjective optimization for human-centered systems in realistic situations involving fuzziness. The intended readers of this book are senior undergraduate students, graduate students, researchers, and practitioners in the fields of operations research, computer science, industrial engineering, management science, systems engineering, and other engineering disciplines that deal with multiobjective programming for discrete or other hard optimization problems under fuzziness. Real-world research applications, drawn from complex problems, are used throughout the book to illustrate the presentation; examples include flexible scheduling in a machine center, operation planning of district heating and cooling plants, and coal purchase planning in an actual electric power plant.
Genetic programming (GP), one of the most advanced forms of evolutionary computation, has been highly successful as a technique for getting computers to automatically solve problems without having to tell them explicitly how. Since its inception more than ten years ago, GP has been used to solve practical problems in a variety of application fields. Alongside these ad hoc engineering approaches, interest has increased in how and why GP works. This book provides a coherent consolidation of recent work on the theoretical foundations of GP. A concise introduction to GP and genetic algorithms (GAs) is followed by a discussion of fitness landscapes and other theoretical approaches to natural and artificial evolution. Having surveyed early approaches to GP theory, it presents a new exact schema analysis, showing that it applies to GP as well as to the simpler GAs. New results on the potentially infinite number of possible programs are followed by two chapters applying these new techniques.
The expansion of digital data has transformed various sectors of business, such as healthcare, industrial manufacturing, and transportation. A new way of solving business problems has emerged through the use of machine learning techniques in conjunction with big data analytics. Deep Learning Innovations and Their Convergence With Big Data is a pivotal reference for the latest scholarly research on upcoming trends in data analytics and potential technologies that will facilitate insight in various domains of science, industry, business, and consumer applications. Featuring extensive coverage of a broad range of topics and perspectives, such as deep neural networks, domain adaptation modeling, and threat detection, this book is ideally designed for researchers, professionals, and students seeking current research on the latest trends in deep learning techniques for big data analytics. Contents include: deep auto-encoders, deep neural networks, domain adaptation modeling, multilayer perceptrons (MLP), natural language processing (NLP), restricted Boltzmann machines (RBM), and threat detection.
This book highlights the financial community's realization that corporate communication has failed to meet the needs of forensic professionals, leading to structural weaknesses in areas such as flawed internal controls, poor corporate governance, and fraudulent financial statements. A vital need exists for the development of forensic accounting techniques, a reduction in external auditors' deficiencies in fraud detection, and the use of cloud forensic audit to enhance corporate efficiency in fraud detection. This book discusses forensic accounting techniques and explores how forensic accountants add value while investigating claims and fraud. It also highlights the corporate benefits of forensic accounting audits and the acceptance of such evidence in courts of law. The chapters ultimately show the significance of forensic accounting audits and how research in the field has developed. By researching new techniques and methods for minimizing corporate damage, society can benefit greatly.
The study of artificial intelligence (AI) is indeed a strange pursuit. Unlike most other disciplines, few AI researchers even agree on a mutually acceptable definition of their chosen field of study. Some see AI as a subfield of computer science, others see AI as a computationally oriented branch of psychology or linguistics, while still others see it as a bag of tricks to be applied to an entire spectrum of diverse domains. This lack of unified purpose among the AI community makes this a very exciting time for AI research: new and diverse projects are springing up literally every day. As one might imagine, however, this diversity also leads to genuine difficulties in assessing the significance and validity of AI research. These difficulties are an indication that AI has not yet matured as a science: it is still at the point where people are attempting to lay down (hopefully sound) foundations. Ritchie and Hanna [1] posit the following categorization as an aid in assessing the validity of an AI research endeavor: (1) the project could introduce, in outline, a novel (or partly novel) idea or set of ideas; (2) the project could elaborate the details of some approach: starting with the kind of idea in (1), the research could criticize it or fill in further details; (3) the project could be an AI experiment, where a theory as in (1) and (2) is applied to some domain; such experiments are usually computer programs that implement a particular theory.
The advances of live cell video imaging and high-throughput technologies for functional and chemical genomics provide unprecedented opportunities to understand how biological processes work in subcellular and multicellular systems. The interdisciplinary research field of Video Bioinformatics is defined by Bir Bhanu as the automated processing, analysis, understanding, data mining, visualization, and query-based retrieval/storage of biological spatiotemporal events/data and knowledge extracted from dynamic images and microscopic videos. Video bioinformatics attempts to provide a deeper understanding of continuous and dynamic life processes. Genome sequences alone lack spatial and temporal information, and video imaging of specific molecules and their spatiotemporal interactions, using a range of imaging methods, is essential to understand how genomes create cells, how cells constitute organisms, and how errant cells cause disease. The book examines interdisciplinary research issues and challenges with examples that deal with organismal dynamics, intercellular and tissue dynamics, intracellular dynamics, protein movement, cell signaling, and software and databases for video bioinformatics. Topics and features:
* Covers a set of biological problems, their significance, live-imaging experiments, theory and computational methods, quantifiable experimental results, and discussion of results
* Provides automated methods for analyzing mild traumatic brain injury over time, identifying injury dynamics after neonatal hypoxia-ischemia, and visualizing cortical tissue changes during seizure activity as examples of organismal dynamics
* Describes techniques for quantifying the dynamics of human embryonic stem cells, with examples of cell detection/segmentation, spreading, and other dynamic behaviors that are important for characterizing stem cell health
* Examines and quantifies dynamic processes in plant and fungal systems, such as cell trafficking and the growth of pollen tubes, in model systems such as Neurospora crassa and Arabidopsis
* Discusses the dynamics of intracellular molecules for DNA repair and the regulation of cofilin transport using video analysis
* Discusses software, system, and database aspects of video bioinformatics, with examples of 5D cell tracking by the FARSIGHT open source toolkit, a survey of available databases and software, biological processes for non-verbal communication, and the identification and retrieval of moth images
This unique text will be of great interest to researchers and graduate students of Electrical Engineering, Computer Science, Bioengineering, Cell Biology, Toxicology, Genetics, Genomics, Bioinformatics, Computer Vision and Pattern Recognition, Medical Image Analysis, and Cell, Molecular and Developmental Biology. The large number of example applications will also appeal to application scientists and engineers. Dr. Bir Bhanu is Distinguished Professor of Electrical & Computer Engineering, Interim Chair of the Department of Bioengineering, Cooperative Professor of Computer Science & Engineering and Mechanical Engineering, and Director of the Center for Research in Intelligent Systems at the University of California, Riverside, California, USA. Dr. Prue Talbot is Professor of Cell Biology & Neuroscience and Director of the Stem Cell Center and Core at the University of California, Riverside, California, USA.
This volume introduces machine learning techniques that are particularly powerful and effective for modeling multimedia data and common tasks of multimedia content analysis. It systematically covers key machine learning techniques in an intuitive fashion and demonstrates their applications through case studies. Coverage includes examples of unsupervised learning, generative models and discriminative models. In addition, the book examines Maximum Margin Markov (M3) networks, which strive to combine the advantages of both the graphical models and Support Vector Machines (SVM).
Machine learning methods are now an important tool for scientists, researchers, engineers, and students in a wide range of areas. This book is written for people who want to adopt and use the main tools of machine learning but aren't necessarily going to want to be machine learning researchers. Intended for students in final-year undergraduate or first-year graduate computer science programs in machine learning, this textbook is a machine learning toolkit. Applied Machine Learning covers many topics for people who want to use machine learning processes to get things done, with a strong emphasis on using existing tools and packages rather than writing one's own code. A companion to the author's Probability and Statistics for Computer Science (PSCS), this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use). Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:
* classification using standard machinery (naive Bayes; nearest neighbor; SVM)
* clustering and vector quantization (largely as in PSCS)
* PCA (largely as in PSCS)
* variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
* linear regression (largely as in PSCS)
* generalized linear models, including logistic regression
* model selection with the Lasso and elastic net
* robustness and M-estimators
* Markov chains and HMMs (largely as in PSCS)
* EM in fairly gory detail; long experience teaching this suggests one detailed example is required, which students hate; but once they've been through that, the next one is easy
* simple graphical models (in the variational inference section)
* classification with neural networks, with a particular emphasis on image classification
* autoencoding with neural networks
* structure learning
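In the book's spirit of leaning on existing packages rather than writing one's own code, a minimal sketch of the first bullet above (classification with standard machinery) might look like the following. scikit-learn and its bundled digits dataset are assumptions made for illustration, not necessarily the tools the book uses.

```python
# Naive Bayes, nearest neighbor, and SVM classifiers via existing tools
# (illustrative sketch; dataset and library choice are assumptions).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each classifier is fit and scored with the same two-line recipe.
for clf in (GaussianNB(), KNeighborsClassifier(), SVC()):
    accuracy = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{type(clf).__name__}: {accuracy:.3f}")
```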
Recent decades have seen rapid advances in automation, supported by modern machines and computers. The result is significant increases in system complexity and state changes, information sources, the need for faster data handling, and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems in part. Conventional learning algorithms in a batch, off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification, and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dynamic learning methods serve as keystones for achieving models with high accuracy. Rather than relying on a mathematical theorem/proof style, the editors highlight numerous figures, tables, examples, and applications, together with their explanations. This approach offers a useful basis for further investigation and fresh ideas, and motivates and inspires newcomers to explore this promising and still emerging field of research.
What follows is a sampler of work in knowledge acquisition. It comprises three technical papers and six guest editorials. The technical papers give an in-depth look at some of the important issues and current approaches in knowledge acquisition. The editorials were produced by authors who were basically invited to sound off. I've tried to group and order the contributions somewhat coherently; the following annotations emphasize the connections among the separate pieces. Buchanan's editorial starts on the theme of "Can machine learning offer anything to expert systems?" He emphasizes the practical goals of knowledge acquisition and the challenge of aiming for them. Lenat's editorial briefly describes experience in the development of CYC that straddles both fields. He outlines a two-phase development that relies on an engineering approach early on and aims for a crossover to more automated techniques as the size of the knowledge base increases. Bareiss, Porter, and Murray give the first technical paper. It comes from a laboratory of machine learning researchers who have taken an interest in supporting the development of knowledge bases, with an emphasis on how development changes with the growth of the knowledge base. The paper describes two systems. The first, Protos, adjusts the training it expects and the assistance it provides as its knowledge grows. The second, KI, is a system that helps integrate knowledge into an already very large knowledge base.
Grammatical Evolution: Evolutionary Automatic Programming in an Arbitrary Language provides the first comprehensive introduction to Grammatical Evolution, a novel approach to Genetic Programming that adopts principles from molecular biology in a simple and useful manner, coupled with the use of grammars to specify legal structures in a search. Grammatical Evolution's rich modularity gives a unique flexibility, making it possible to use alternative search strategies - whether evolutionary, deterministic or some other approach - and to even radically change its behavior by merely changing the grammar supplied. This approach to Genetic Programming represents a powerful new weapon in the Machine Learning toolkit that can be applied to a diverse set of problem domains.
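To make the grammar-driven idea concrete, here is a minimal sketch of the genotype-to-phenotype mapping at the heart of Grammatical Evolution: integer codons select productions from a user-supplied grammar, modulo the number of choices. The toy grammar and genome below are invented for illustration and are not taken from the book.

```python
# A toy Grammatical Evolution mapping (illustrative only; the grammar
# and genome are made up for this sketch).
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def map_genome(genome, start="<expr>", max_wraps=2):
    """Expand the leftmost nonterminal repeatedly; each expansion
    consumes one codon, wrapping around the genome if needed."""
    symbols, out, i = [start], [], 0
    limit = len(genome) * (max_wraps + 1)
    while symbols:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            if i >= limit:
                return None  # mapping failed to terminate
            choices = GRAMMAR[sym]
            rule = choices[genome[i % len(genome)] % len(choices)]
            i += 1
            symbols = rule + symbols
        else:
            out.append(sym)  # terminal symbol
    return " ".join(out)

# Genome [0, 1, 0, 2] derives <expr> -> <expr> <op> <expr> -> "x + 1";
# changing the grammar alone changes the language of evolved programs.
print(map_genome([0, 1, 0, 2]))
```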
This monograph is a contribution to the study of the identification problem: the problem of identifying an item from a known class using positive and negative examples. This problem is considered to be an important component of the process of inductive learning, and as such has been studied extensively. In the overview we shall explain the objectives of this work and its place in the overall fabric of learning research. Context. Learning occurs in many forms; the only form we are treating here is inductive learning, roughly characterized as the process of forming general concepts from specific examples. Computer Science has found three basic approaches to this problem:
* Select a specific learning task, possibly part of a larger task, and construct a computer program to solve that task.
* Study cognitive models of learning in humans and extrapolate from them general principles to explain learning behavior. Then construct machine programs to test and illustrate these models.
* Formulate a mathematical theory to capture key features of the induction process.
This work belongs to the third category. The various studies of learning utilize training examples (data) in different ways. The three principal ones are:
* Similarity-based (or empirical) learning, in which a collection of examples is used to select an explanation from a class of possible rules.
You may like...
God in the Enlightenment - William J. Bulman, Robert G. Ingram (Hardcover): R3,761 (Discovery Miles 37 610)
Takaful and Islamic Cooperative Finance… - S. Nazim Ali, Shariq Nisar (Hardcover): R4,488 (Discovery Miles 44 880)
Lessing's Philosophy of Religion and the… - Toshimasa Yasukata (Hardcover): R2,769 (Discovery Miles 27 690)