This book constitutes the refereed proceedings of the 11th International Workshop on Algorithms and Models for the Web Graph, WAW 2014, held in Beijing, China, in December 2014. The 12 papers presented were carefully reviewed and selected for inclusion in this volume. The aim of the workshop was to further the understanding of graphs that arise from the Web and various user activities on the Web, and to stimulate the development of high-performance algorithms and applications that exploit these graphs. The workshop gathered researchers working on graph-theoretic and algorithmic aspects of related complex networks, including social networks, citation networks, biological networks, molecular networks, and other networks arising from the Internet.
"In theory, there is no difference between theory and practice. But, in practice, there is." (Jan L. A. van de Snepscheut) The flow of academic ideas in the area of computational intelligence has penetrated industry with tremendous speed and persistence. Thousands of applications have proved the practical potential of fuzzy logic, neural networks, evolutionary computation, swarm intelligence, and intelligent agents even before their theoretical foundations were completely understood. And the popularity is rising. Some software vendors have pronounced the new machine learning gold rush to "Transfer Data into Gold". New buzzwords like "data mining", "genetic algorithms", and "swarm optimization" have enriched top executives' vocabulary to make them look more "visionary" for the 21st century. The phrase "fuzzy math" became political jargon after being used by US President George W. Bush in one of the election debates in the 2000 campaign. Even process operators are discussing the performance of neural networks with the same passion as the performance of the Dallas Cowboys. However, for most of the engineers and scientists introducing computational intelligence technologies into practice, keeping track of the growing number of new approaches, understanding their theoretical principles, and assessing their potential for value creation becomes a more and more difficult task.
This book constitutes the proceedings of the 10th International Conference on Advanced Data Mining and Applications, ADMA 2014, held in Guilin, China, in December 2014. The 48 regular papers and 10 workshop papers presented in this volume were carefully reviewed and selected from 90 submissions. They deal with the following topics: data mining, social networks and social media, recommender systems, databases, dimensionality reduction, advanced machine learning techniques, classification, big data and applications, clustering methods, machine learning, and data mining and databases.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This special issue contains extended and revised versions of 4 papers, selected from the 25 papers presented at the satellite events associated with the 17th East-European Conference on Advances in Databases and Information Systems (ADBIS 2013), held on September 1-4, 2013 in Genoa, Italy. The three satellite events were GID 2013, the Second International Workshop on GPUs in Databases; SoBI 2013, the First International Workshop on Social Business Intelligence: Integrating Social Content in Decision Making; and OAIS 2013, the Second International Workshop on Ontologies Meet Advanced Information Systems. The papers cover various topics in large-scale data and knowledge-centered systems, including GPU-accelerated database systems and GPU-based compression for large time series databases, design of parallel data warehouses, and schema matching. 
The special issue content, which combines both theoretical and application-based contributions, gives a useful overview of some of the current trends in large-scale data and knowledge management and will stimulate new ideas for further research and development within both the scientific and industrial communities.
"Foundations of Data Mining and Knowledge Discovery" contains the latest results and new directions in data mining research. Data mining, which integrates various technologies, including computational intelligence, database and knowledge management, machine learning, soft computing, and statistics, is one of the fastest growing fields in computer science. Although many data mining techniques have been developed, further development of the field requires a close examination of its foundations. This volume presents the results of investigations into the foundations of the discipline, and represents the state of the art for much of the current research. This book will prove extremely valuable and fruitful for data mining researchers, no matter whether they would like to uncover the fundamental principles behind data mining, or apply the theories to practical applications.
This book is the first devoted primarily to the mining aspects of data streams, and it covers the subject comprehensively: each contributed chapter contains a survey of its topic, the key ideas in the field for that topic, and future research directions. The book is intended for a professional audience of researchers and practitioners in industry, and is also appropriate for advanced-level students in computer science.
With the growing use of information technology and the recent advances in web systems, the amount of data available to users has increased exponentially. Thus, there is a critical need to understand the content of the data. As a result, data mining has become a popular research topic in recent years for the treatment of the "data rich and information poor" syndrome. This carefully edited volume presents a theoretical foundation as well as important new directions for data mining research. It brings together well-respected data mining theoreticians and researchers with practical data mining experience. The theories presented will give data mining practitioners a scientific perspective on data mining, and thus more insight into their problems, and the new data mining topics introduced can be expected to stimulate further research in these important directions.
Mohamed Medhat Gaber "It is not my aim to surprise or shock you - but the simplest way I can summarise is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied." Herbert A. Simon (1916-2001). 1 Overview. This book suits both graduate students and researchers with a focus on discovering knowledge from scientific data. The use of computational power for data analysis and knowledge discovery in scientific disciplines has its roots in the revolution of high-performance computing systems. Computational science in physics, chemistry, and biology represents the first step towards the automation of data analysis tasks. The rationale behind the development of computational science in different areas was to automate the mathematical operations performed in those areas; no attention was paid to the scientific discovery process itself. Automated Scientific Discovery (ASD) [1-3] represents the second natural step. ASD attempted to automate the process of theory discovery, supported by studies in the philosophy of science and the cognitive sciences. Although early research articles showed great successes, the area has not evolved, for many reasons. The most important reason was the lack of interaction between scientists and the automating systems.
Data mining (DM) consists of extracting interesting knowledge from real-world, large and complex data sets, and is the core step of a broader process called knowledge discovery from databases (KDD). In addition to the DM step, which actually extracts knowledge from data, the KDD process includes several preprocessing (or data preparation) and post-processing (or knowledge refinement) steps. The goal of data preprocessing methods is to transform the data to facilitate the application of one or several given DM algorithms, whereas the goal of knowledge refinement methods is to validate and refine discovered knowledge. Ideally, discovered knowledge should be not only accurate, but also comprehensible and interesting to the user. The whole process is highly computationally intensive. The idea of automatically discovering knowledge from databases is a very attractive and challenging task, both for academia and for industry. Hence, there has been growing interest in data mining in several AI-related areas, including evolutionary algorithms (EAs). The main motivation for applying EAs to KDD tasks is that they are robust, adaptive search methods that perform a global search in the space of candidate solutions (for instance, rules or another form of knowledge representation).
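The preprocess/mine/refine structure of the KDD process described above can be sketched end to end. The following is a minimal, hypothetical Python illustration: the transactions are made-up toy data, and a single Apriori-style counting pass stands in for the DM step; it is a sketch of the pipeline's shape, not a production KDD system.

```python
from itertools import combinations
from collections import Counter

# Toy transactions (hypothetical data): each is a set of purchased items.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def preprocess(raw):
    """Data-preparation step: here, simply drop empty transactions."""
    return [t for t in raw if t]

def mine_frequent_pairs(data, min_support=2):
    """DM step: count co-occurring item pairs (a minimal Apriori-style pass)."""
    counts = Counter()
    for t in data:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def postprocess(patterns, n):
    """Knowledge-refinement step: express raw counts as support fractions."""
    return {p: c / n for p, c in patterns.items()}

data = preprocess(transactions)
patterns = postprocess(mine_frequent_pairs(data), len(data))
print(patterns)  # {('bread', 'milk'): 0.5, ('bread', 'butter'): 0.5}
```

Here the post-processing step merely normalizes counts into support fractions; real knowledge refinement would also filter patterns for comprehensibility and interestingness, as the text notes.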
The abundance of information and the increase in computing power currently enable researchers to tackle highly complicated and challenging computational problems. Solutions to such problems are now feasible using advances and innovations from the area of Artificial Intelligence. The general focus of the AIAI conference is to provide insights on how Artificial Intelligence may be applied in real-world situations and serve the study, analysis and modeling of theoretical and practical issues. This volume contains papers selected for presentation at the 6th IFIP Conference on Artificial Intelligence Applications and Innovations (AIAI 2010), held in Larnaca, Cyprus, during October 6-7, 2010. IFIP AIAI 2010 was co-organized by the University of Cyprus and the Cyprus University of Technology, and was sponsored by the Cyprus University of Technology, Frederick University and the Cyprus Tourism Organization. AIAI 2010 is the official conference of the WG12.5 "Artificial Intelligence Applications" working group of IFIP TC12, the International Federation for Information Processing Technical Committee on Artificial Intelligence (AI). AIAI is a conference that grows in significance every year, attracting researchers from different countries around the globe. It maintains high quality standards and welcomes research papers describing technical advances and engineering and industrial applications of intelligent systems. AIAI 2010 was not confined to introducing how AI may be applied in real-life situations, but also included innovative methods, techniques, tools and ideas of AI expressed at the algorithmic or systemic level.
This text reviews the evolution of the field of visualization, providing innovative examples from various disciplines, highlighting the important role that visualization plays in extracting and organizing the concepts found in complex data. Features: presents a thorough introduction to the discipline of knowledge visualization, its current state of affairs and possible future developments; examines how tables have been used for information visualization in historical textual documents; discusses the application of visualization techniques for knowledge transfer in business relationships, and for the linguistic exploration and analysis of sensory descriptions; investigates the use of visualization to understand orchestral music scores, the optical theory behind Renaissance art, and to assist in the reconstruction of an historic church; describes immersive 360-degree stereographic visualization, knowledge-embedded embodied interaction, and a novel methodology for the analysis of architectural forms.
This book constitutes the refereed proceedings of the 13th Pacific Rim Conference on Artificial Intelligence, PRICAI 2014, held in Gold Coast, Queensland, Australia, in December 2014. The 74 full papers and 20 short papers presented in this volume were carefully reviewed and selected from 203 submissions. The topics include inference; reasoning; robotics; social intelligence; AI foundations; applications of AI; agents; Bayesian networks; neural networks; Markov networks; bioinformatics; cognitive systems; constraint satisfaction; data mining and knowledge discovery; decision theory; evolutionary computation; games and interactive entertainment; heuristics; knowledge acquisition and ontology; knowledge representation; machine learning; multimodal interaction; natural language processing; planning and scheduling; and probabilistic methods.
Enterprise Architecture, Integration, and Interoperability and the Networked Enterprise have become the theme of many conferences in the past few years. These conferences were organised by IFIP TC5 with the support of its two working groups: WG 5.12 (Architectures for Enterprise Integration) and WG 5.8 (Enterprise Interoperability), both concerned with aspects of the topic: how is it possible to architect and implement businesses that are flexible and able to change, to interact, and to use one another's services in a dynamic manner for the purpose of (joint) value creation. The original question of enterprise integration in the 1980s was: how can we achieve and integrate information and material flow in the enterprise? Various methods and reference models were developed or proposed, ranging from tightly integrated monolithic system architectures, through cell-based manufacturing, to on-demand interconnection of businesses to form virtual enterprises in response to market opportunities. Two camps have emerged in the endeavour to achieve the same goal, namely, interoperability between businesses (where interoperability is the ability to exchange information in order to use one another's services or to jointly implement a service). One school of researchers addresses the technical aspects of creating dynamic (and static) interconnections between disparate businesses (or parts thereof).
Increasingly, human beings are sensors engaging directly with the mobile Internet. Individuals can now share real-time experiences at an unprecedented scale. Social Sensing: Building Reliable Systems on Unreliable Data looks at recent advances in the emerging field of social sensing, emphasizing the key problem faced by application designers: how to extract reliable information from data collected from largely unknown and possibly unreliable sources. The book explains how a myriad of societal applications can be derived from this massive amount of data collected and shared by average individuals. The title offers theoretical foundations to support emerging data-driven cyber-physical applications and touches on key issues such as privacy. The authors present solutions based on recent research and novel ideas that leverage techniques from cyber-physical systems, sensor networks, machine learning, data mining, and information fusion.
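The core problem named here, deciding which claims to believe when sources are largely unknown and possibly unreliable, is often attacked with iterative truth-discovery schemes that jointly estimate claim truth and source trustworthiness. The sketch below is a deliberately simplified, hypothetical version (made-up reports, plain weighted voting with agreement-based reweighting), not the specific algorithms presented in the book.

```python
# Hypothetical claims: source -> {variable: reported value}.
reports = {
    "s1": {"temp": "hot", "road": "wet"},
    "s2": {"temp": "hot", "road": "dry"},
    "s3": {"temp": "cold", "road": "wet"},
    "s4": {"temp": "hot", "road": "wet"},
}

def truth_discovery(reports, iters=10):
    weights = {s: 1.0 for s in reports}  # start with equal trust in every source
    truths = {}
    for _ in range(iters):
        # Truth step: per variable, pick the value with the most weighted votes.
        votes = {}
        for s, claims in reports.items():
            for var, val in claims.items():
                votes.setdefault(var, {}).setdefault(val, 0.0)
                votes[var][val] += weights[s]
        truths = {var: max(vals, key=vals.get) for var, vals in votes.items()}
        # Trust step: a source's weight is its agreement rate with current truths.
        for s, claims in reports.items():
            agree = sum(truths[var] == val for var, val in claims.items())
            weights[s] = agree / len(claims)
    return truths, weights

truths, weights = truth_discovery(reports)
print(truths)  # {'temp': 'hot', 'road': 'wet'}
```

With this toy input the estimates stabilize after one round: sources s1 and s4 agree with both inferred truths and end up with weight 1.0, while s2 and s3 each disagree on one variable and drop to 0.5.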
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented for knowledge elicitation, model construction and verification, modeling techniques and tricks, learning models from data, and analyses of models have all been developed and refined on the basis of numerous courses that the authors have held for practitioners worldwide.
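To give a flavor of the reasoning such probabilistic networks support, here is a minimal, hypothetical example: a two-node network (Rain -> WetGrass) with made-up conditional probabilities, where the posterior is computed directly by Bayes' rule rather than by a general-purpose inference engine.

```python
# Hypothetical two-node Bayesian network: Rain -> WetGrass, with made-up CPTs.
p_rain = 0.2                             # prior P(Rain = True)
p_wet_given = {True: 0.9, False: 0.1}    # P(WetGrass = True | Rain)

# Posterior P(Rain = True | WetGrass = True) by Bayes' rule:
# first marginalize to get the evidence probability P(WetGrass = True),
# then divide the joint P(Rain, WetGrass) by it.
evidence = p_wet_given[True] * p_rain + p_wet_given[False] * (1 - p_rain)
posterior = p_wet_given[True] * p_rain / evidence

print(round(posterior, 3))  # 0.692
```

Observing wet grass raises the probability of rain from the 0.2 prior to about 0.69; in a larger network the same computation is carried out by the inference algorithms the book describes, rather than by hand.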
The three volume set LNCS 8834, LNCS 8835, and LNCS 8836 constitutes the proceedings of the 21st International Conference on Neural Information Processing, ICONIP 2014, held in Kuching, Malaysia, in November 2014. The 231 full papers presented were carefully reviewed and selected from 375 submissions. The selected papers cover major topics of theoretical research, empirical study, and applications of neural information processing research. The 3 volumes represent topical sections containing articles on cognitive science, neural networks and learning systems, theory and design, applications, kernel and statistical methods, evolutionary computation and hybrid intelligent systems, signal and image processing, and special sessions on intelligent systems for supporting decision-making processes, theories and applications, cognitive robotics, and learning systems for social network and web mining.
This book constitutes revised selected papers from the two International Workshops on Artificial Intelligence Approaches to the Complexity of Legal Systems, AICOL IV and AICOL V, held in 2013. The first took place as part of the 26th IVR Congress in Belo Horizonte, Brazil, during July 21-27, 2013; the second was held in Bologna as a joint special workshop of JURIX 2013 on December 11, 2013. The 19 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They are organized in topical sections named: social intelligence and legal conceptual models; legal theory, normative systems and software agents; semantic Web technologies, legal ontologies and argumentation; and crowdsourcing and online dispute resolution (ODR).
Beginning Apache Cassandra Development introduces you to one of the most robust and best-performing NoSQL database platforms on the planet. Apache Cassandra is a distributed, wide-column NoSQL database. It is specifically designed to manage large amounts of data across many commodity servers without any single point of failure. This design approach makes Apache Cassandra a robust and easy-to-implement platform when high availability is needed. Apache Cassandra can be used by developers in Java, PHP, Python, and JavaScript, the primary and most commonly used languages. In Beginning Apache Cassandra Development, author and Cassandra expert Vivek Mishra takes you through using Apache Cassandra from each of these primary languages. Mishra also covers the Cassandra Query Language (CQL), the Apache Cassandra analog to SQL. You'll learn to develop applications that source data from Cassandra, query that data, and deliver it at speed to your application's users. Cassandra is one of the leading NoSQL databases, offering high throughput and performance without the processing overhead that comes with traditional proprietary databases. Beginning Apache Cassandra Development will therefore help you create applications that generate search results quickly, stand up to high levels of demand, scale as your user base grows, ensure operational simplicity, and, not least, provide delightful user experiences.
This book constitutes revised selected papers from the second ECML PKDD Workshop on Data Analytics for Renewable Energy Integration, DARE 2014, held in Nancy, France, in September 2014. The 11 papers presented in this volume were carefully reviewed and selected for inclusion in this book.
Data mining has emerged as one of the most active areas in information and communication technologies (ICT). With the booming of the global economy, and ubiquitous computing and networking across every sector and business, data and its deep analysis become a particularly important issue for enhancing the soft power of an organization, its production systems, decision-making and performance. The last ten years have seen ever-increasing applications of data mining in business, government, social networks and the like. However, a crucial problem that prevents data mining from playing a strategic decision-support role in ICT is its usually limited decision-support power in the real world. Typical concerns include its actionability, workability and transferability, and the trustworthy, dependable, repeatable, operable and explainable capabilities of data mining algorithms, tools and outputs. This monograph, Domain Driven Data Mining, is motivated by the real-world challenges to, and complexities of, current KDD methodologies and techniques, which are critical issues faced by data mining, as well as the findings, thoughts and lessons learned in conducting several large-scale real-world data mining business applications. The aim and objective of domain driven data mining is to study effective and efficient methodologies, techniques, tools, and applications that can discover and deliver actionable knowledge that can be passed on to business people for direct decision-making and action-taking.
This, the 14th issue of the LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains four revised selected regular papers. Topics covered include data stream systems, top-k query processing, semantic web service (SWS) discovery, and XML functional dependencies.
This book constitutes the thoroughly revised selected papers of the 4th and 5th workshops on Big Data Benchmarks, Performance Optimization, and Emerging Hardware, BPOE 4 and BPOE 5, held respectively in Salt Lake City, in March 2014, and in Hangzhou, in September 2014. The 16 papers presented were carefully reviewed and selected from 30 submissions. Both workshops focus on architecture and system support for big data systems, such as benchmarking; workload characterization; performance optimization and evaluation; emerging hardware.
This is the first book primarily dedicated to clustering using multiobjective genetic algorithms with extensive real-life applications in data mining and bioinformatics. The authors first offer detailed introductions to the relevant techniques - genetic algorithms, multiobjective optimization, soft computing, data mining and bioinformatics. They then demonstrate systematic applications of these techniques to real-world problems in the areas of data mining, bioinformatics and geoscience. The authors offer detailed theoretical and statistical notes, guides to future research, and chapter summaries. The book can be used as a textbook and as a reference book by graduate students and academic and industrial researchers in the areas of soft computing, data mining, bioinformatics and geoscience.
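To give a concrete flavor of the genetic-algorithm machinery the book builds on, here is a minimal single-objective GA sketch with a made-up one-dimensional fitness function. It illustrates only the basic selection, crossover, and mutation loop; it is not the authors' multiobjective clustering method, which optimizes several cluster-validity objectives simultaneously.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def fitness(x):
    """Toy objective (hypothetical): maximize -(x - 3)^2, optimum at x = 3."""
    return -(x - 3.0) ** 2

def genetic_algorithm(pop_size=20, generations=60):
    # Random initial population of real-valued candidate solutions.
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0             # arithmetic crossover
            child += random.gauss(0.0, 0.1)   # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
print(best)  # converges near the optimum x = 3
```

In the multiobjective setting the book covers, the single `fitness` value is replaced by a vector of objectives and selection is based on Pareto dominance, but the generational loop has the same shape.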
This book constitutes the thoroughly refereed conference proceedings of the 9th International Conference on Rough Sets and Knowledge Technology, RSKT 2014, held in Shanghai, China, in October 2014. The 70 papers presented were carefully reviewed and selected from 162 submissions. The papers in this volume cover topics such as foundations and generalizations of rough sets, attribute reduction and feature selection, applications of rough sets, intelligent systems and applications, knowledge technology, domain-oriented data-driven data mining, uncertainty in granular computing, advances in granular computing, big data to wise decisions, rough set theory, and three-way decisions, uncertainty, and granular computing.