Welcome to Loot.co.za!
Abstraction is a fundamental mechanism underlying both human and artificial perception, representation of knowledge, reasoning and learning. This mechanism plays a crucial role in many disciplines, notably Computer Programming, Natural and Artificial Vision, Complex Systems, Artificial Intelligence and Machine Learning, Art, and Cognitive Sciences. This book first provides the reader with an overview of the notions of abstraction proposed in various disciplines, comparing their commonalities and differences. After discussing the characterizing properties of abstraction, a formal model, the KRA model, is presented to capture them. This model makes the notion of abstraction easily applicable through the introduction of a set of abstraction operators and abstraction patterns, reusable across different domains and applications. The impact of abstraction in Artificial Intelligence, Complex Systems and Machine Learning forms the core of the book. A general framework, based on the KRA model, is presented, and its pragmatic power is illustrated with three case studies: Model-based diagnosis, Cartographic Generalization, and learning Hierarchical Hidden Markov Models.
This volume gathers selected peer-reviewed papers presented at the XXVI International Joint Conference on Industrial Engineering and Operations Management (IJCIEOM), held on July 8-11, 2020 in Rio de Janeiro, Brazil. The respective chapters address a range of timely topics in industrial engineering, including operations and process management, global operations, managerial economics, data science and stochastic optimization, logistics and supply chain management, quality management, product development, strategy and organizational engineering, knowledge and information management, work and human factors, sustainability, production engineering education, healthcare operations management, disaster management, and more. These topics broadly involve fields like operations, manufacturing, industrial and production engineering, and management. Given its scope, the book offers a valuable resource for researchers in optimization and operations research, and for practitioners alike.
This book reports on the development and validation of a generic defeasible logic programming framework for carrying out argumentative reasoning in Semantic Web applications (GF@SWA). The proposed methodology is unique in providing a solution for representing incomplete and/or contradictory information coming from different sources, and reasoning with it. GF@SWA is able to represent this type of information, perform argumentation-driven hybrid reasoning to resolve conflicts, and generate graphical representations of the integrated information, thus assisting decision makers in decision making processes. GF@SWA represents the first argumentative reasoning engine for carrying out automated reasoning in the Semantic Web context and is expected to have a significant impact on future business applications. The book provides the readers with a detailed and clear exposition of different argumentation-based reasoning techniques, and of their importance and use in Semantic Web applications. It addresses both academics and professionals, and will be of primary interest to researchers, students and practitioners in the area of Web-based intelligent decision support systems and their application in various domains.
Data Preprocessing for Data Mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data taken directly from the source will likely contain inconsistencies and errors, and, most importantly, will not be ready for a data mining process. Furthermore, the increasing amount of data in recent science, industry and business applications calls for more sophisticated tools to analyze it. Thanks to data preprocessing, it is possible to convert the impossible into the possible, adapting the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes data reduction techniques, which aim at reducing the complexity of the data by detecting or removing irrelevant and noisy elements. This book is intended to review the tasks that fill the gap between data acquisition from the source and the data mining process. It gives a comprehensive look from a practical point of view, covering basic concepts and surveying the techniques proposed in the specialized literature. Each chapter is a stand-alone guide to a particular data preprocessing topic, from basic concepts and detailed descriptions of classical algorithms to an exhaustive catalog of recent developments. The in-depth technical descriptions make this book suitable for technical professionals, researchers, and senior undergraduate and graduate students in data science, computer science and engineering.
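As a minimal illustration of the kind of tasks such a book surveys (the functions and data below are assumptions for illustration, not code from the book), the following sketch imputes missing values with the median and rescales a feature to [0, 1] before mining:

```python
# Hypothetical sketch of two common preprocessing steps: median imputation
# of missing values and min-max rescaling of a numeric feature.
from statistics import median

def impute_missing(values):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in values if v is not None]
    med = median(observed)
    return [med if v is None else v for v in values]

def min_max_scale(values):
    """Rescale values to [0, 1] so no single feature dominates distance-based mining."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

raw = [4.0, None, 10.0, 6.0, None, 8.0]
print(min_max_scale(impute_missing(raw)))
```

Chaining the two steps mirrors the point made above: each mining algorithm imposes input demands, and preprocessing adapts the raw data to meet them.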
This book explores all relevant aspects of net scoring, also known as uplift modeling: a data mining approach used to analyze and predict the effects of a given treatment on a desired target variable for an individual observation. After discussing modern net score modeling methods, data preparation, and the assessment of uplift models, the book investigates software implementations and real-world scenarios. Focusing on the application of theoretical results and on practical issues of uplift modeling, it also includes a dedicated chapter on software solutions in SAS, R, Spectrum Miner, and KNIME, which compares the respective tools. This book also presents the applications of net scoring in various contexts, e.g. medical treatment, with a special emphasis on direct marketing and corresponding business cases. The target audience primarily includes data scientists, especially researchers and practitioners in predictive modeling and scoring, mainly, but not exclusively, in the marketing context.
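A common way to make the uplift idea concrete is to compare response rates between treated and control observations; the sketch below (invented segment labels and record layout, not an implementation from the book) scores each segment by that difference:

```python
# Hedged sketch of segment-level uplift: score = P(response | treated)
# minus P(response | control) within each segment.
from collections import defaultdict

def uplift_by_segment(records):
    """records: (segment, treated: bool, responded: bool) tuples."""
    counts = defaultdict(lambda: {"t": [0, 0], "c": [0, 0]})  # [responses, total]
    for segment, treated, responded in records:
        arm = counts[segment]["t" if treated else "c"]
        arm[0] += int(responded)
        arm[1] += 1
    uplift = {}
    for segment, arms in counts.items():
        p_t = arms["t"][0] / arms["t"][1]
        p_c = arms["c"][0] / arms["c"][1]
        uplift[segment] = p_t - p_c  # net effect of the treatment in this segment
    return uplift

data = [("A", True, True), ("A", True, False), ("A", False, False), ("A", False, False),
        ("B", True, False), ("B", True, False), ("B", False, True), ("B", False, False)]
print(uplift_by_segment(data))  # segment A benefits from treatment, segment B is hurt
```

In a direct-marketing business case, such scores identify the customers worth contacting: those with positive uplift, rather than those merely likely to respond.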
This book enriches unsupervised outlier detection research by proposing several new distance-based and density-based outlier scores in a k-nearest neighbors setting. The respective chapters highlight the latest developments in k-nearest neighbor-based outlier detection research and cover such topics as our present understanding of unsupervised outlier detection in general; distance-based and density-based outlier detection in particular; and the applications of the latest findings to boundary point detection and novel object detection. The book also offers a new perspective on bridging the gap between k-nearest neighbor-based outlier detection and clustering-based outlier detection, laying the groundwork for future advances in unsupervised outlier detection research. The authors hope the algorithms and applications proposed here will serve as valuable resources for outlier detection researchers for years to come.
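The classic distance-based score that this line of research builds on rates each point by its distance to its k-th nearest neighbour, so isolated points receive high scores. The following minimal sketch is illustrative only, not one of the authors' new scores:

```python
# Hedged sketch of the classic k-NN distance outlier score: a point far
# from its k-th nearest neighbour is flagged as a likely outlier.
def knn_outlier_scores(points, k=2):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    scores = []
    for i, p in enumerate(points):
        dists = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])  # distance to the k-th nearest neighbour
    return scores

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
scores = knn_outlier_scores(pts, k=2)
print(scores)  # the isolated point (5, 5) gets by far the largest score
```

Density-based scores such as those the book discusses refine this idea by comparing a point's local density to that of its neighbours rather than using raw distance.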
This book presents new approaches that advance research in all aspects of agent-based models, technologies, simulations and implementations for data intensive applications. The nine chapters contain a review of recent cross-disciplinary approaches in cloud environments and multi-agent systems, and important formulations of data intensive problems in distributed computational environments together with the presentation of new agent-based tools to handle those problems and Big Data in general. This volume can serve as a reference for students, researchers and industry practitioners working in or interested in joining interdisciplinary work in the areas of data intensive computing and Big Data systems using emergent large-scale distributed computing paradigms. It will also allow newcomers to grasp key concepts and potential solutions on advanced topics of theory, models, technologies, system architectures and implementation of applications in Multi-Agent systems and data intensive computing.
This work takes a critical look at the current concept of isotopic landscapes ("isoscapes") in bioarchaeology and its application in future research. It specifically addresses the research potential of cremated finds, a somewhat neglected bioarchaeological substrate, resulting primarily from the inherent osteological challenges and complex mineralogy associated with them. In addition, data mining methods are applied to this material for the first time. The chapters are the outcome of an international workshop sponsored by the German Science Foundation and the Centre of Advanced Studies at the Ludwig-Maximilian-University in Munich. Isotopic landscapes are indispensable tracers for the monitoring of the flow of matter through geo/ecological systems since they comprise existing temporally and spatially defined stable isotopic patterns found in geological and ecological samples. Analyses of stable isotopes of the elements nitrogen, carbon, oxygen, strontium, and lead are routinely utilized in bioarchaeology to reconstruct biodiversity, palaeodiet, palaeoecology, palaeoclimate, migration and trade. The interpretive power of stable isotopic ratios depends not only on firm, testable hypotheses, but most importantly on the cooperative networking of scientists from both natural and social sciences. Application of multi-isotopic tracers generates isotopic patterns with multiple dimensions, which accurately characterize a find, but can only be interpreted by use of modern data mining methods.
Mining of Data with Complex Structures:
- Clarifies the type and nature of data with complex structure, including sequences, trees and graphs
- Provides a detailed background on the state of the art of sequence mining, tree mining and graph mining
- Defines the essential aspects of the tree mining problem: subtree types, support definitions, constraints
- Outlines the implementation issues one needs to consider when developing tree mining algorithms (enumeration strategies, data structures, etc.)
- Details the Tree Model Guided (TMG) approach for tree mining and provides the mathematical model for the worst-case estimate of the complexity of mining ordered induced and embedded subtrees
- Explains the mechanism of the TMG framework for mining ordered/unordered induced/embedded and distance-constrained embedded subtrees
- Provides a detailed comparison of the different tree mining approaches, highlighting the characteristics and benefits of each approach
- Overviews the implications and potential applications of tree mining in general knowledge management tasks, using Web, health and bioinformatics applications as case studies
- Details the extension of the TMG framework for sequence mining
- Provides an overview of future research directions with respect to technical extensions and application areas
The primary audience comprises 3rd and 4th year undergraduate students, Masters and PhD students, and academics. The book can be used for both teaching and research. The secondary audience comprises practitioners in industry, business, commerce, government, and consortiums, alliances and partnerships who want to learn how to introduce and efficiently use techniques for mining data with complex structures in their applications. The scope of the book is both theoretical and practical, and as such it will reach a broad market in both academia and industry. In addition, its subject matter is a rapidly emerging field that is critical for the efficient analysis of knowledge stored in various domains.
The book proposes new technologies and discusses future solutions for ICT design infrastructure. It contains high-quality submissions presented at the Second International Conference on Information and Communication Technology for Sustainable Development (ICT4SD 2016), held in Goa, India, during 1-2 July 2016. The conference stimulated cutting-edge research discussions among pioneering academic researchers, scientists, industrial engineers, and students from around the world. The topics covered in this book also focus on innovative issues at the international level, bringing together experts from different countries.
This book is an updated version of a well-received book previously published in Chinese by Science Press of China (the first edition in 2006 and the second in 2013). It offers a systematic and practical overview of spatial data mining, which combines computer science and geo-spatial information science, allowing each field to profit from the knowledge and techniques of the other. To address the spatiotemporal specialties of spatial data, the authors introduce the key concepts and algorithms of the data field, cloud model, mining view, and Deren Li methods. The data field method captures the interactions between spatial objects by diffusing the data contribution from a universe of samples to a universe of population, thereby bridging the gap between the data model and the recognition model. The cloud model is a qualitative method that utilizes quantitative numerical characters to bridge the gap between pure data and linguistic concepts. The mining view method discriminates the different requirements by using scale, hierarchy, and granularity in order to uncover the anisotropy of spatial data mining. The Deren Li method performs data preprocessing to prepare it for further knowledge discovery by selecting a weight for iteration in order to clean the observed spatial data as much as possible. In addition to the essential algorithms and techniques, the book provides application examples of spatial data mining in geographic information science and remote sensing. The practical projects include spatiotemporal video data mining for protecting public security, serial image mining on nighttime lights for assessing the severity of the Syrian Crisis, and the applications in the government project 'the Belt and Road Initiatives'.
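As a hedged sketch of how a cloud model's numerical characters can bridge data and linguistic concepts, the widely described forward normal cloud generator can be outlined as follows; the function name and parameter values are illustrative assumptions, not code from the book:

```python
# Hedged sketch of a forward normal cloud generator: the numerical characters
# Ex (expectation), En (entropy) and He (hyper-entropy) produce "cloud drops"
# (x, membership) that link a linguistic concept to quantitative values.
import math
import random

def normal_cloud(ex, en, he, n_drops, seed=0):
    rng = random.Random(seed)
    drops = []
    for _ in range(n_drops):
        en_prime = rng.gauss(en, he)      # perturbed entropy models fuzziness
        x = rng.gauss(ex, abs(en_prime))  # a concrete value drawn for the concept
        mu = math.exp(-((x - ex) ** 2) / (2 * en_prime ** 2))
        drops.append((x, mu))             # drop = value plus its degree of membership
    return drops

# e.g. a concept like "around 25" with entropy 3 and hyper-entropy 0.3
drops = normal_cloud(ex=25.0, en=3.0, he=0.3, n_drops=200)
```

Drops cluster around Ex with membership close to 1, thinning out as values move away, which is how the qualitative concept acquires a quantitative footprint.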
This book starts with an introduction to process modeling and process paradigms, then explains how to query and analyze process models, and how to analyze the process execution data. In this way, readers receive a comprehensive overview of what is needed to identify, understand and improve business processes. The book chiefly focuses on concepts, techniques and methods. It covers a large body of knowledge on process analytics - including process data querying, analysis, matching and correlating process data and models - to help practitioners and researchers understand the underlying concepts, problems, methods, tools and techniques involved in modern process analytics. Following an introduction to basic business process and process analytics concepts, it describes the state of the art in this area before examining different analytics techniques in detail. In this regard, the book covers analytics over different levels of process abstractions, from process execution data and methods for linking and correlating process execution data, to inferring process models, querying process execution data and process models, and scalable process data analytics methods. In addition, it provides a review of commercial process analytics tools and their practical applications. The book is intended for a broad readership interested in business process management and process analytics. It provides researchers with an introduction to these fields by comprehensively classifying the current state of research, by describing in-depth techniques and methods, and by highlighting future research directions. Lecturers will find a wealth of material to choose from for a variety of courses, ranging from undergraduate courses in business process management to graduate courses in business process analytics. Lastly, it offers professionals a reference guide to the state of the art in commercial tools and techniques, complemented by many real-world use case scenarios.
The prevalence of data science has grown exponentially in recent years. Increases in data exchange have created the need for standards and formats on handling data from different sources. Developing Metadata Applications Profiles is an innovative reference source that discusses the latest trends and techniques for effectively managing and exchanging metadata. Including a range of perspectives on schemas and application profiles, such as interoperability, ontology-based design, and model-driven approaches, this book is ideally designed for researchers, academics, professionals, graduate students, and practitioners actively engaged in data science.
This book focuses on new research challenges in intelligent information filtering and retrieval. It collects invited chapters and extended research contributions from DART 2014 (the 8th International Workshop on Information Filtering and Retrieval), held in Pisa (Italy), on December 10, 2014, and co-hosted with the XIII AI*IA Symposium on Artificial Intelligence. The main focus of DART was to discuss and compare suitable novel solutions based on intelligent techniques and applied to real-world contexts. The chapters of this book present a comprehensive review of related works and the current state of the art. The contributions from both practitioners and researchers have been carefully reviewed by experts in the area, who also gave useful suggestions to improve the quality of the book.
There are many invaluable books available on data mining theory and applications. However, in compiling a volume titled DATA MINING: Foundations and Intelligent Paradigms: Volume 3: Medical, Health, Social, Biological and other Applications we wish to introduce some of the latest developments to a broad audience of both specialists and non-specialists in this field.
Knowledge management (KM) is about managing the lifecycle of knowledge, consisting of creating, storing, sharing and applying knowledge. Two main approaches towards KM are codification and personalization. The first focuses on capturing knowledge using technology, the latter on the process of socializing for sharing and creating knowledge. Social media are becoming very popular as individuals and organizations learn how to use them. The primary applications of social media in a business context are marketing and recruitment. But there is also huge potential for knowledge management in these organizations. For example, wikis can be used to collect organizational knowledge, and social networking tools can lead to the exchange of new ideas and innovation. The interesting part of social media is that, by using them, one immediately starts to generate content that can be useful for the organization. Hence, they naturally combine the codification and personalization approaches to KM. This book aims to provide an overview of new and innovative applications of social media and to report challenges that need to be solved. One example is the watering down of knowledge as a result of the use of organizational social media (Von Krogh, 2012).
The concept of a big data warehouse appeared in order to store moving data objects and temporal data information. Moving objects are geometries that change their position and shape continuously over time. In order to support spatio-temporal data, a data model and an associated query language are needed for supporting moving objects. Emerging Perspectives in Big Data Warehousing is an essential research publication that explores current innovative activities focusing on the integration between data warehousing and data mining, with an emphasis on applicability to real-world problems. Featuring a wide range of topics such as index structures, ontology, and user behavior, this book is ideally designed for IT consultants, researchers, professionals, computer scientists, academicians, and managers.
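To make the "moving object" notion concrete, the sketch below (an illustrative assumption, not a data model from the book) stores a moving point as timestamped positions and answers point-in-time queries by linear interpolation:

```python
# Hedged sketch of a moving point: a track of timestamped positions,
# queried by linearly interpolating between the surrounding samples.
def position_at(track, t):
    """track: list of (timestamp, x, y) tuples sorted by timestamp."""
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)  # fraction of the way through this segment
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    raise ValueError("t outside the recorded interval")

track = [(0, 0.0, 0.0), (10, 10.0, 0.0), (20, 10.0, 5.0)]
print(position_at(track, 5))   # (5.0, 0.0)
print(position_at(track, 15))  # (10.0, 2.5)
```

A spatio-temporal query language generalizes exactly this kind of operation to trajectories, regions, and time intervals stored in the warehouse.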
The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., the DBpedia, a machine-readable edition of the Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby allows information from such a diverse set of resources to be integrated. The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) over applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, it summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
Data mining, an interdisciplinary field combining methods from artificial intelligence, machine learning, statistics and database systems, has grown tremendously over the last 20 years and produced core results for applications like business intelligence, spatio-temporal data analysis, bioinformatics, and stream data processing. The fifteen contributors to this volume are successful and well-known data mining scientists and professionals. Although by no means an exhaustive list, all of them have helped the field to gain the reputation and importance it enjoys today, through the many valuable contributions they have made. Mohamed Medhat Gaber has asked them (and many others) to write down their journeys through the data mining field, trying to answer the following questions:
1. What are your motives for conducting research in the data mining field?
2. Describe the milestones of your research in this field.
3. What are your notable success stories?
4. How did you learn from your failures?
5. Have you encountered unexpected results?
6. What are the current research issues and challenges in your area?
7. Describe your research tools and techniques.
8. How would you advise a young researcher to make an impact?
9. What do you predict for the next two years in your area?
10. What are your expectations in the long term?
In order to maintain the informal character of their contributions, they were given complete freedom as to how to organize their answers. This narrative presentation style provides PhD students and novices who are eager to find their way to successful research in data mining with valuable insights into career planning. In addition, everyone else interested in the history of computer science may be surprised about the stunning successes and possible failures computer science careers (still) have to offer.
In recent years, as part of the increasing "informationization" of industry and the economy, enterprises have been accumulating vast amounts of detailed data, such as high-frequency transaction data in financial markets and point-of-sale information on individual items in the retail sector. Similarly, vast amounts of data are now available on business networks based on inter-firm transactions and shareholdings. In the past, these types of information were studied only by economists and management scholars. More recently, however, researchers from other fields, such as physics, mathematics, and information sciences, have become interested in this kind of data and, based on novel empirical approaches to searching for regularities and "laws" akin to those in the natural sciences, have produced intriguing results. This book is the proceedings of the international conference THICCAPFA7, titled "New Approaches to the Analysis of Large-Scale Business and Economic Data," held in Tokyo, March 1-5, 2009. The letters THIC denote the Tokyo Tech (Tokyo Institute of Technology)-Hitotsubashi Interdisciplinary Conference. The conference series, titled APFA (Applications of Physics in Financial Analysis), focuses on the analysis of large-scale economic data. It has traditionally brought physicists and economists together to exchange viewpoints and experience (APFA1 in Dublin 1999, APFA2 in Liège 2000, APFA3 in London 2001, APFA4 in Warsaw 2003, APFA5 in Torino 2006, and APFA6 in Lisbon 2007). The aim of the conference is to establish fundamental analytical techniques and data collection methods, taking into account the results from a variety of academic disciplines.
This book includes extended versions of selected papers presented at the 11th Industry Symposium 2021, held during January 7-10, 2021. The book covers contributions ranging from theoretical and foundational research to platforms, methods, applications, and tools in all areas. It provides theory and practice in the area of data science, adding a social, geographical, and temporal dimension to data science research. It also includes application-oriented papers that prepare and use data in discovery research. This book contains chapters from academia as well as practitioners on big data technologies, artificial intelligence, machine learning, deep learning, data representation and visualization, business analytics, healthcare analytics, bioinformatics, etc. It is helpful for students, practitioners, and researchers, as well as industry professionals.
In the era of big data, this book explores the new challenges of urban-rural planning and management from a practical perspective, based on a multidisciplinary project. The contributing researchers accomplished their projects by using big data and relevant data mining technologies to investigate the possibilities of big data, such as that obtained through cell phones, social network systems and smart cards, instead of conventional survey data for urban planning support. This book showcases active researchers who share their experiences and ideas on human mobility, accessibility and recognition of places, and connectivity of transportation and urban structure, in order to provide effective analytic and forecasting tools for smart city planning and design solutions in China.
You may like...
- Computational Intelligence in Data… by Vallidevi Krishnamurthy, Suresh Jaganathan, … (Hardcover, R2,501)
- New Opportunities for Sentiment Analysis… by Aakanksha Sharaff, G. R. Sinha, … (Hardcover, R7,022)
- Big Data - Concepts, Methodologies… by Information Resources Management Association (Hardcover, R18,647)
- Implementation of Machine Learning… by Veljko Milutinović, Nenad Mitić, … (Hardcover, R7,022)
- Transforming Businesses With Bitcoin… by Dharmendra Singh Rajput, Ramjeevan Singh Thakur, … (Hardcover, R6,259)
- Modeling and Simulating Complex Business… by Zoumpolia Dikopoulou (Hardcover, R3,506)