This book provides a general and comprehensible overview of imbalanced learning. It contains a formal description of the problem, its main features, and the most relevant proposed solutions. Additionally, it considers the different scenarios in Data Science for which imbalanced classification can pose a real challenge. This book stresses the gap with standard classification tasks by reviewing the case studies and ad hoc performance metrics that are applied in this area. It also covers the different approaches that have traditionally been applied to address the binary skewed class distribution. Specifically, it reviews cost-sensitive learning, data-level preprocessing methods and algorithm-level solutions, also taking into account ensemble-learning solutions that embed any of the former alternatives. Furthermore, it covers the extension of the problem to multi-class settings, where the classical methods can no longer be applied in a straightforward way. This book also examines the intrinsic data characteristics that, added to the uneven class distribution, truly hinder the performance of classification algorithms in this scenario. Then, some notes on data reduction are provided to explain the advantages of this type of approach. Finally, this book introduces some novel areas of study that are attracting deeper attention to the imbalanced data issue. Specifically, it considers the classification of data streams, non-classical classification problems, and scalability for Big Data. Examples of software libraries and modules to address imbalanced classification are provided. This book is highly suitable for technical professionals, senior undergraduate and graduate students in the areas of data science, computer science and engineering. It will also be useful for scientists and researchers seeking insight into current developments in this area of study, as well as future research directions.
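The data-level preprocessing mentioned in the blurb can be illustrated with the simplest such method, random oversampling, which rebalances a skewed binary class distribution by duplicating minority-class samples before a classifier ever sees the data. A minimal sketch in plain Python (function and variable names are illustrative, not taken from the book):

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples until both classes are the same size.

    A data-level preprocessing baseline: the class distribution is balanced
    before any classifier is trained.
    """
    rng = random.Random(seed)
    pos = [(x, label) for x, label in zip(X, y) if label == 1]
    neg = [(x, label) for x, label in zip(X, y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Sample (with replacement) enough minority examples to close the gap.
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = majority + minority + extra
    rng.shuffle(balanced)
    Xb = [x for x, _ in balanced]
    yb = [label for _, label in balanced]
    return Xb, yb

X = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.9]]
y = [0, 0, 0, 0, 0, 1]            # 5:1 imbalance
Xb, yb = random_oversample(X, y)
print(sum(yb), len(yb))            # 5 10 -- both classes now have 5 samples
```

Libraries such as imbalanced-learn provide production-grade versions of this idea and of more elaborate data-level methods such as SMOTE.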
This book provides a general overview of multiple instance learning (MIL), defining the framework and covering the central paradigms. The authors discuss the most important MIL tasks, such as classification, regression and clustering. With a focus on classification, a taxonomy is established and the most relevant proposals are described. Efficient algorithms are developed to discover relevant information when working with uncertainty. Key representative applications are included. This book also studies the key related fields of distance metrics and alternative hypotheses. Chapters examine new and developing aspects of MIL, such as data reduction for multi-instance problems and imbalanced MIL data. Class imbalance for multi-instance problems is defined at the bag level, a type of representation that embraces ambiguity: bag labels are available, but the labels of the individual instances are not. Additionally, multiple instance multiple label learning is explored. This learning framework introduces flexibility and ambiguity into the object representation, providing a natural formulation for complicated objects: an object is represented by a bag of instances and is allowed to have multiple associated class labels simultaneously. This book is suitable for developers and engineers working to apply MIL techniques to solve a variety of real-world problems. It is also useful for researchers or students seeking a thorough overview of MIL literature, methods, and tools.
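The bag-level representation described above can be made concrete with a small sketch under the standard multi-instance assumption (a bag is positive if at least one of its instances is positive). The scorer and all names below are hypothetical, invented only for illustration:

```python
def bag_score(bag, instance_score):
    """Standard MI assumption: a bag is positive when at least one instance
    is positive, so the bag score is the max over its instance scores."""
    return max(instance_score(inst) for inst in bag)

def classify_bag(bag, instance_score, threshold=0.5):
    return 1 if bag_score(bag, instance_score) >= threshold else 0

# Toy scorer: an instance looks "positive" when its single feature is high.
score = lambda inst: inst[0]

positive_bag = [[0.1], [0.9], [0.3]]   # label is known only at the bag level
negative_bag = [[0.2], [0.4], [0.1]]
print(classify_bag(positive_bag, score))  # 1
print(classify_bag(negative_bag, score))  # 0
```

Note how the instance labels never appear: only the aggregate over the bag is thresholded, which is exactly the ambiguity the blurb describes.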
Data Preprocessing for Data Mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data taken directly from the source will likely have inconsistencies and errors, and, most importantly, will not be ready for the data mining process. Furthermore, the increasing amount of data in recent science, industry and business applications calls for more complex tools to analyze it. Thanks to data preprocessing, it is possible to convert the impossible into the possible, adapting the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes data reduction techniques, which aim to reduce the complexity of the data by detecting or removing irrelevant and noisy elements. This book reviews the tasks that fill the gap between data acquisition from the source and the data mining process, giving a comprehensive look from a practical point of view that covers basic concepts and surveys the techniques proposed in the specialized literature. Each chapter is a stand-alone guide to a particular data preprocessing topic, from basic concepts and detailed descriptions of classical algorithms to an exhaustive catalog of recent developments. The in-depth technical descriptions make this book suitable for technical professionals, researchers, senior undergraduate and graduate students in data science, computer science and engineering.
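As a tiny illustration of the gap-filling tasks such a book surveys, here is a hedged sketch of two classic preprocessing steps, missing-value imputation and min-max normalization. The function names and the toy column are invented for illustration:

```python
def impute_mean(column):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def min_max_scale(column):
    """Rescale values to [0, 1] so features with large ranges do not dominate."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

raw = [2.0, None, 4.0, 6.0]        # one missing value
clean = min_max_scale(impute_mean(raw))
print(clean)  # [0.0, 0.5, 0.5, 1.0]
```

Real pipelines chain many such steps (noise filtering, discretization, feature selection, instance selection); the point of the sketch is only that each step adapts the data to the input demands of the next algorithm.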
Fuzzy modeling usually comes with two contradictory requirements: interpretability, which is the capability to express the real system's behavior in a comprehensible way, and accuracy, which is the capability to faithfully represent the real system. In this framework, one of the most important areas is linguistic fuzzy modeling, where the legibility of the obtained model is the main objective. This task is usually carried out by means of linguistic (Mamdani) fuzzy rule-based systems. An active research area is oriented towards the use of new techniques and structures to extend the classical, rigid linguistic fuzzy modeling with the main aim of increasing its degree of precision. Traditionally, this accuracy improvement has been carried out without considering the corresponding interpretability loss. Currently, new trends have been proposed that try to preserve the descriptive power of the linguistic fuzzy model during the optimization process. Written by leading experts in the field, this volume collects representative research that pursues this approach.
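A Mamdani-style linguistic system of the kind described here can be sketched as triangular membership functions plus rules whose firing strength is the membership degree of the antecedent. The linguistic terms and numeric ranges below are invented purely for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rules(temp):
    # Fuzzify the crisp input against linguistic terms for "temperature".
    cold = tri(temp, -10, 0, 15)
    warm = tri(temp, 10, 20, 30)
    hot  = tri(temp, 25, 35, 50)
    # Mamdani rules: IF temp is cold THEN heater is high, etc.
    # Each rule fires with the membership degree of its antecedent,
    # which is what keeps the rule base human-readable.
    return {"heater_high": cold, "heater_low": warm, "heater_off": hot}

print(fire_rules(12.0))  # cold and warm both fire partially at 12 degrees
```

The interpretability/accuracy trade-off in the blurb corresponds to what happens when the triangles are tuned freely for precision: the terms may drift away from any recognizable meaning of "cold" or "warm".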
Fuzzy modeling has become one of the most productive and successful results of fuzzy logic. Among other areas, it has been applied to knowledge discovery, automatic classification, long-term prediction, and medical and engineering analysis. The research developed on the topic during the last two decades has mainly focused on exploiting the flexibility of fuzzy models to obtain the highest accuracy, an approach that usually sets aside the interpretability of the obtained models. However, we should remember the initial philosophy of fuzzy set theory, intended to serve as a bridge between human understanding and machine processing. In this challenge, the ability of fuzzy models to express the behavior of the real system in a comprehensible manner acquires great importance. This book collects the work of a group of experts in the field who advocate interpretability improvements as a mechanism to obtain well-balanced fuzzy models.
This carefully edited book presents an up-to-date state of current research in the use of fuzzy sets and their extensions. It pays particular attention to foundational issues and to their application to four important areas where fuzzy sets are seen to be an important tool for modeling and solving problems. The book's 34 chapters deal with the subject with clarity and effectiveness. They include four review papers introducing some non-standard representations
This book offers a comprehensive review of the multilabel techniques widely used to classify and label texts, pictures, videos and music on the Internet. A deep review of the specialized literature in the field includes the available software needed to work with this kind of data. It provides the user with the software tools needed to deal with multilabel data, as well as step-by-step instructions on how to use them. The main topics covered are:
* The special characteristics of multilabeled data and the metrics available to measure them.
* The importance of taking advantage of label correlations to improve the results.
* The different approaches followed to face multilabel classification.
* The preprocessing techniques applicable to multilabel datasets.
* The available software tools to work with multilabel data.
This book is beneficial for professionals and researchers in a variety of fields because of the wide range of potential applications for multilabel classification. Besides its multiple applications in classifying different types of online information, it is also useful in many other areas, such as genomics and biology. No previous knowledge of the subject is required: the book introduces all the concepts needed to understand multilabel data characterization, treatment and evaluation.
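The simplest of the multilabel approaches alluded to above is the binary relevance transformation, which trains one independent binary classifier per label (deliberately ignoring label correlations). A hedged sketch with a trivial stand-in learner; every name here is illustrative, not from the book:

```python
def binary_relevance_fit(X, Y, fit_binary):
    """Binary relevance: one independent binary classifier per label.

    Y is a list of label-sets; fit_binary(X, y) returns a predict function.
    Label correlations are ignored -- the simplest multilabel transformation.
    """
    labels = sorted({label for ys in Y for label in ys})
    models = {}
    for label in labels:
        y = [1 if label in ys else 0 for ys in Y]   # per-label binary problem
        models[label] = fit_binary(X, y)
    return models

def predict(models, x):
    """Union of the labels whose classifier fires on x."""
    return {label for label, m in models.items() if m(x) == 1}

# Hypothetical stand-in learner: predicts 1 when the first feature is >= 0.5.
def fit_threshold(X, y):
    return lambda x: 1 if x[0] >= 0.5 else 0

X = [[0.9], [0.1]]
Y = [{"music", "video"}, set()]
models = binary_relevance_fit(X, Y, fit_threshold)
print(predict(models, [0.8]))   # both labels fire
print(predict(models, [0.1]))   # empty label set
```

Exploiting label correlations, as the book's second topic stresses, is precisely what this baseline gives up; methods such as classifier chains reintroduce them.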
More than a decade has passed since the First International Conference of the Learning Sciences (ICLS) was held at Northwestern University in 1991. The conference has now become an established place for researchers to gather. The 2004 meeting is the first under the official sponsorship of the International Society of the Learning Sciences (ISLS). The theme of this conference is "Embracing Diversity in the Learning Sciences." As a field, the learning sciences have always drawn from a diverse set of disciplines to study learning in an array of settings. Psychology, cognitive science, anthropology, and artificial intelligence have all contributed to the development of methodologies to study learning in schools, museums, and organizations. As the field grows, however, it increasingly recognizes the challenges to studying and changing learning environments across levels in complex social systems. This demands attention to new kinds of diversity in who, what, and how we study; and to the issues raised to develop coherent accounts of how learning occurs. Ranging from schools to families, and across all levels of formal schooling from pre-school through higher education, this ideology can be supported in a multitude of social contexts. The papers in these conference proceedings respond to the call.
Foundations of Computational Intelligence Volume 2: Approximation Reasoning: Theoretical Foundations and Applications. Human reasoning is usually very approximate and involves various types of uncertainties. Approximate reasoning is the computational modelling of any part of the process used by humans to reason about natural phenomena or to solve real-world problems. The scope of this book includes fuzzy sets, Dempster-Shafer theory, multi-valued logic, probability, random sets, rough sets, near sets and hybrid intelligent systems. Besides research articles and expository papers on the theory and algorithms of approximation reasoning, papers on numerical experiments and real-world applications were also encouraged. This volume comprises 12 chapters, including an overview chapter providing an up-to-date, state-of-the-art account of the applications of Computational Intelligence techniques for approximation reasoning. The volume is divided into two parts: Part I: Approximate Reasoning - Theoretical Foundations, and Part II: Approximate Reasoning - Success Stories and Real World Applications. Part I contains four chapters that describe several approaches to fuzzy and paraconsistent annotated logic approximation reasoning. Chapter 1, "Fuzzy Sets, Near Sets, and Rough Sets for Your Computational Intelligence Toolbox" by Peters, considers how a user might utilize fuzzy sets, near sets, and rough sets, taken separately or together in hybridizations, as part of a computational intelligence toolbox. In multi-criteria decision making, it is necessary to aggregate (combine) utility values corresponding to several criteria (parameters).
This book includes the outcomes of the 11th International Symposium on Ambient Intelligence (ISAmI 2020), hosted by the University of L'Aquila and held in L'Aquila (Italy). Initially planned for the 17th to the 19th of June 2020, it was postponed to the 7th to the 9th of October 2020 due to the COVID-19 outbreak.
This book features the outcomes of the 16th International Conference on Distributed Computing and Artificial Intelligence 2019 (DCAI 2019), which is a forum to present applications of innovative techniques for studying and solving complex problems in artificial intelligence and computing. The exchange of ideas between scientists and technicians from both the academic and industrial sectors is essential to facilitate the development of systems that can meet the ever-increasing demands of today's society. This book brings together lessons learned, current work and promising future trends associated with distributed computing, artificial intelligence and their application to provide efficient solutions to real-world problems. The book includes 29 high-quality and diverse contributions in established and emerging areas of research presented at the symposium organized by the Osaka Institute of Technology, Hiroshima University, the University of Granada and the University of Salamanca, which was held in Avila, Spain, from 26th to 28th June 2019.
This book constitutes the refereed proceedings of the 18th Conference of the Spanish Association for Artificial Intelligence, CAEPIA 2018, held in Granada, Spain, in October 2018. The 36 full papers presented were carefully selected from 240 submissions. The Conference of the Spanish Association for Artificial Intelligence (CAEPIA) is a biennial forum open to researchers from all over the world to present and discuss their latest scientific and technological advances in Artificial Intelligence (AI). Authors were invited to submit unpublished original papers describing relevant research on AI from all points of view: formal, methodological, technical or applied.
The need for intelligent systems technology in solving real-life problems has been consistently growing. In order to address this need, researchers in the field have been developing methodologies and tools to develop intelligent systems for solving complex problems. The International Society of Applied Intelligence (ISAI), through its annual IEA/AIE conferences, provides a forum for the international scientific and industrial community in the field of Applied Artificial Intelligence to interactively participate in developing intelligent systems, which are needed to solve the twenty-first century's ever-growing problems in almost every field. The 23rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE-2010), held in Córdoba, Spain, followed the IEA/AIE tradition of providing an international scientific forum for researchers in the field of applied artificial intelligence. The presentations of the invited speakers and authors mainly focused on developing and studying new methods to cope with the problems posed by real-life applications of artificial intelligence. Papers presented in the twenty-third conference in the series covered theories as well as applications of intelligent systems in solving complex real-life problems. We received 297 papers for the main track, selecting the 119 of them with the highest quality standards. Each paper was reviewed by at least three members of the Program Committee. The papers in the proceedings cover a wide number of topics including: applications to robotics, business and financial markets, bioinformatics and biomedicine, applications of agent-based systems, computer vision, control, simulation and modeling, data mining, decision support systems, evolutionary computation and its applications, fuzzy systems and their applications, heuristic optimization methods and swarm intelligence, intelligent agent-based systems, internet applications, knowledge management and knowledge-based systems, machine learning, neural network applications, optimization and heuristic search, and other real-life applications.
This book offers a comprehensible overview of Big Data preprocessing, including a formal description of each problem. It also focuses on the most relevant proposed solutions and illustrates actual implementations of algorithms that help the reader deal with these problems. This book stresses the gap that exists between big, raw data and the quality data that businesses demand. The result is called Smart Data, and to achieve Smart Data, preprocessing is a key step in which imperfections are repaired, integration tasks are carried out, and superfluous information is eliminated. The authors present the concept of Smart Data through data preprocessing in Big Data scenarios and connect it with the emerging paradigms of IoT and edge computing, where the end points generate Smart Data without completely relying on the cloud. Finally, this book covers some novel areas of study that are attracting deeper attention in Big Data preprocessing. Specifically, it considers the relation with Deep Learning (as a technique that also relies on large volumes of data), the difficulty of finding the appropriate selection and concatenation of preprocessing techniques, and some other open problems. Practitioners and data scientists who work in this field and want to introduce themselves to preprocessing in large data volume scenarios will want to purchase this book. Researchers in this field who want to know which algorithms are currently implemented to help their investigations may also be interested.
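The "raw data to Smart Data" cleaning the authors describe can be hinted at with a minimal pass that drops exact duplicates and implausible readings, the kind of step an edge node might run before forwarding data to the cloud. Thresholds and names are invented for illustration:

```python
def to_smart_data(records, lo, hi):
    """Minimal raw-to-Smart-Data pass: drop exact duplicates and any value
    outside the plausible range [lo, hi] (e.g. sentinel error readings)."""
    seen = set()
    clean = []
    for r in records:
        if r in seen:              # superfluous duplicate reading
            continue
        seen.add(r)
        if lo <= r <= hi:          # out-of-range reading = imperfection
            clean.append(r)
    return clean

raw = [21.5, 21.5, 22.0, -999.0, 23.1]   # duplicate + sentinel error value
clean = to_smart_data(raw, lo=-50.0, hi=60.0)
print(clean)  # [21.5, 22.0, 23.1]
```

At Big Data scale the same logic would run distributed (e.g. as map/filter stages over partitions); the sketch only shows what "eliminating superfluous information" means for a single stream.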
The seventh International Conference on Knowledge Management in Organizations (KMO) brings together researchers and developers from industry and the academic world to report on the latest scientific and technical advances in knowledge management in organisations. KMO 2012 provides an international forum for authors to present and discuss research focused on the role of knowledge management for innovative services in industries, to shed light on recent advances in cloud computing for KM, and to identify future directions for researching the role of knowledge management in service innovation and how cloud computing can be used to address many of the issues currently facing KM in the academic and industrial sectors. The conference took place in Salamanca, Spain, on 11th-13th July 2012.