This book constitutes thoroughly revised and selected papers from the 10th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2015, held in Berlin, Germany, in March 2015. VISIGRAPP comprises GRAPP, International Conference on Computer Graphics Theory and Applications; IVAPP, International Conference on Information Visualization Theory and Applications; and VISAPP, International Conference on Computer Vision Theory and Applications. The 23 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 529 submissions. The book also contains one invited talk in full-paper length. The regular papers were organized in topical sections named: computer graphics theory and applications; information visualization theory and applications; and computer vision theory and applications.
This compendium is a completely revised version of an earlier book, Data Mining in Time Series Databases, by the same editors. It provides a unique collection of new articles written by leading experts that account for the latest developments in the field of time series and data stream mining. The emerging topics covered by the book include weightless neural modeling for mining data streams, using ensemble classifiers for imbalanced and evolving data streams, document stream mining with active learning, and many more. In particular, it addresses the domain of streaming data, which has recently become one of the emerging topics in Data Science, Big Data, and related areas. Existing titles do not provide sufficient information on this topic.
This book contains the refereed proceedings of the 10th International Conference on Knowledge Management in Organizations, KMO 2015, held in Maribor, Slovenia, in August 2015. The theme of the conference was "Knowledge Management and Internet of Things." The KMO conference brings together researchers and developers from industry and academia to discuss how knowledge management using big data can improve innovation and competitiveness. The 59 contributions accepted for KMO 2015 were selected from 163 submissions and are organized in topical sections on: knowledge management processes, successful knowledge sharing and knowledge management practices, innovations for competitiveness, knowledge management platforms and tools, social networks and mining techniques, knowledge management and the Internet of Things, knowledge management in health care, and knowledge management in education and research.
Advances in hardware technology have led to the ability to collect data using a variety of sensor technologies. In particular, sensor nodes have become cheaper and more efficient, and have even been integrated into everyday devices such as mobile phones. This has led to a much larger scale of applicability and mining of sensor data sets. The human-centric aspect of sensor data has created tremendous opportunities for integrating social aspects of sensor data collection into the mining process. Managing and Mining Sensor Data is a contributed volume by prominent leaders in this field, targeting advanced-level students in computer science as a secondary textbook or reference. Practitioners and researchers working in this field will also find this book useful.
This book constitutes the proceedings of the Third Asia Pacific Conference on Business Process Management held in Busan, South Korea, in June 2015. Overall, 37 contributions from ten countries were submitted. After each submission was reviewed by at least three Program Committee members, 12 full and two short papers were accepted for publication in this volume. These papers cover various topics and are categorized under four main research focuses in BPM: advancement in workflow technologies, resources allocation strategies, process mining, and emerging topics in BPM.
Introduces an assortment of powerful command line utilities that can be combined to create simple, yet powerful shell scripts for processing datasets. The code samples and scripts use the bash shell, and typically involve small datasets so you can focus on understanding the features of grep, sed, and awk. Companion files with code are available for downloading from the publisher.
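The kind of pipeline this book describes can be sketched in a few lines of bash; the file name, fields, and pattern below are purely illustrative, not examples from the book:

```shell
#!/bin/bash
# Illustrative pipeline combining grep and awk on a small CSV dataset.
printf 'alice,10\nbob,20\nalice,5\n' > /tmp/scores.csv

# grep filters the rows of interest; awk splits on commas and sums field 2.
grep '^alice' /tmp/scores.csv | awk -F',' '{ total += $2 } END { print total }'
```

Chaining small single-purpose utilities like this is the core idea such scripts build on: each tool performs one transformation, and the pipe composes them.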
Hyperspectral Image Fusion is the first text dedicated to fusion techniques for such a huge volume of data, consisting of a very large number of images. This monograph brings out recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented in complete detail so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the fusion process by exploiting spatial correlation within successive bands of the hyperspectral data. While techniques for the fusion of hyperspectral images are being developed, it is also important to establish a framework for the objective assessment of such techniques. This monograph has a dedicated chapter describing various fusion performance measures applicable to hyperspectral image fusion. It also presents a notion of the consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for the fusion of a very large number of images. This book will be a highly useful resource for students, researchers, academics, and practitioners in the specific area of hyperspectral image fusion, as well as in generic image fusion.
Collective view prediction infers an active web user's opinions on unknown items by referring to the collective mind of the whole community. Content-based recommendation and collaborative filtering are the two mainstream collective view prediction techniques. They generate predictions by analyzing the text features of the target object or the similarity of users' past behaviors. However, these techniques are vulnerable to artificially injected noise data, because they cannot judge the reliability and credibility of the information sources. Trust-based Collective View Prediction describes new approaches for tackling this problem by utilizing users' trust relationships, covering fundamental theory, trust-based collective view prediction algorithms, and real case studies. The book consists of two main parts: a theoretical foundation and an algorithmic study. The first part reviews several basic concepts and methods related to collective view prediction, such as state-of-the-art recommender systems, sentiment analysis, collective views, trust management, the relationship between collective views and trustworthiness, and trust in collective view prediction. In the second part, the authors present their models and algorithms based on a quantitative analysis of data from more than 300,000 users of popular product-review websites. They also introduce two new trust-based prediction algorithms: a collaborative algorithm based on a second-order Markov random walk model, and a Bayesian fitting model for combining multiple predictors.
The discussed concepts, developed algorithms, empirical results, evaluation methodologies and the robust analysis framework described in Trust-based Collective View Prediction will not only provide valuable insights and findings to related research communities and peers, but also showcase the great potential to encourage industries and business partners to integrate these techniques into new applications.
This book constitutes the proceedings of the 6th International Conference on Pattern Recognition and Machine Intelligence, PReMI 2015, held in Warsaw, Poland, in June/July 2015. The total of 53 full papers and 1 short paper presented in this volume were carefully reviewed and selected from 90 submissions. They were organized in topical sections named: foundations of machine learning; image processing; image retrieval; image tracking; pattern recognition; data mining techniques for large scale data; fuzzy computing; rough sets; bioinformatics; and applications of artificial intelligence.
This book constitutes the refereed proceedings of the First International Conference on Decision Support Systems Technology, ICDSST 2015, held in Belgrade, Serbia, in May 2015. The theme of the event was "Big Data Analytics for Decision-Making" and it was organized by the EURO (Association of European Operational Research Societies) working group of Decision Support Systems (EWG-DSS). The eight papers presented in this book were selected out of 26 submissions after being carefully reviewed by at least three internationally known experts from the ICDSST 2015 Program Committee and external invited reviewers. The selected papers are representative of current and relevant research activities in the area of decision support systems, such as decision analysis for enterprise systems and non-hierarchical networks, integrated solutions for decision support and knowledge management in distributed environments, decision support system evaluations and analysis through social networks, and decision support system applications in real-world environments. The volume is completed by an additional invited paper on big data decision-making use cases.
The book focuses on exploiting state-of-the-art research in the Semantic Web and Web science. The rapidly evolving World Wide Web has led to revolutionary changes throughout society. Research and development of the Semantic Web covers a number of global Web standards and cutting-edge technologies, such as linked data, the social Semantic Web, Semantic Web search, smart data integration, Semantic Web mining, and Web-scale computing. These proceedings are from the 6th Chinese Semantic Web Symposium.
Abstraction is a fundamental mechanism underlying both human and artificial perception, representation of knowledge, reasoning and learning. This mechanism plays a crucial role in many disciplines, notably Computer Programming, Natural and Artificial Vision, Complex Systems, Artificial Intelligence and Machine Learning, Art, and Cognitive Sciences. This book first provides the reader with an overview of the notions of abstraction proposed in various disciplines by comparing both commonalities and differences. After discussing the characterizing properties of abstraction, a formal model, the KRA model, is presented to capture them. This model makes the notion of abstraction easily applicable by means of the introduction of a set of abstraction operators and abstraction patterns, reusable across different domains and applications. It is the impact of abstraction in Artificial Intelligence, Complex Systems and Machine Learning which creates the core of the book. A general framework, based on the KRA model, is presented, and its pragmatic power is illustrated with three case studies: Model-based diagnosis, Cartographic Generalization, and learning Hierarchical Hidden Markov Models.
Collaboratively Constructed Language Resources (CCLRs) such as Wikipedia, Wiktionary, Linked Open Data, and various resources developed using crowdsourcing techniques such as Games with a Purpose and Mechanical Turk have substantially contributed to research in natural language processing (NLP). Various NLP tasks utilize such resources to substitute for or supplement conventional lexical semantic resources and linguistically annotated corpora. These resources also provide an extensive body of texts from which valuable knowledge is mined. There are an increasing number of community efforts to link and maintain multiple linguistic resources. This book offers comprehensive coverage of CCLR-related topics, including their construction, utilization in NLP tasks, and interlinkage and management. Various Bachelor's, Master's, and Ph.D. programs in natural language processing, computational linguistics, and knowledge discovery can use this book both as the main text and as supplementary reading. The book also provides a valuable reference guide for researchers and professionals on the above topics.
This volume comprises papers dedicated to data science and the extraction of knowledge from many types of data: structural, quantitative, or statistical approaches for the analysis of data; advances in classification, clustering and pattern recognition methods; strategies for modeling complex data and mining large data sets; applications of advanced methods in specific domains of practice. The contributions offer interesting applications to various disciplines such as psychology, biology, medical and health sciences; economics, marketing, banking and finance; engineering; geography and geology; archeology, sociology, educational sciences, linguistics and musicology; library science. The book contains the selected and peer-reviewed papers presented during the European Conference on Data Analysis (ECDA 2013) which was jointly held by the German Classification Society (GfKl) and the French-speaking Classification Society (SFC) in July 2013 at the University of Luxembourg.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This volume, the 26th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, focuses on Data Warehousing and Knowledge Discovery from Big Data, and contains extended and revised versions of four papers selected as the best papers from the 16th International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2014), held in Munich, Germany, during September 1-5, 2014. The papers focus on data cube computation, the construction and analysis of a data warehouse in the context of cancer epidemiology, pattern mining algorithms, and frequent item-set border approximation.
Community structure is a salient structural characteristic of many real-world networks. Communities are generally hierarchical, overlapping, multi-scale and coexist with other types of structural regularities of networks. This poses major challenges for conventional methods of community detection. This book will comprehensively introduce the latest advances in community detection, especially the detection of overlapping and hierarchical community structures, the detection of multi-scale communities in heterogeneous networks, and the exploration of multiple types of structural regularities. These advances have been successfully applied to analyze large-scale online social networks, such as Facebook and Twitter. This book provides readers a convenient way to grasp the cutting edge of community detection in complex networks. The thesis on which this book is based was honored with the "Top 100 Excellent Doctoral Dissertations Award" from the Chinese Academy of Sciences and was nominated as the "Outstanding Doctoral Dissertation" by the Chinese Computer Federation.
This, the 25th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains five fully revised selected papers focusing on data and knowledge management systems. Topics covered include a framework consisting of two heuristics with slightly different characteristics to compute the action rating of data stores, a theoretical and experimental study of filter-based equijoins in a MapReduce environment, a constraint programming approach based on constraint reasoning to study the view selection and data placement problem given a limited amount of resources, a formalization and an approximate algorithm to tackle the problem of source selection and query decomposition in federations of SPARQL endpoints, and a matcher factory enabling the generation of a dedicated schema matcher for a given schema matching scenario.
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015, held on May 11-13, 2015 at Lorraine University, France. Part I of the two-volume set includes articles devoted to Combinatorial optimization and applications, DC programming and DCA: thirty years of developments, Dynamic optimization, Modelling and optimization in financial engineering, Multiobjective programming, Numerical optimization, Spline approximation and optimization, as well as Variational principles and applications.
Michael Nofer examines whether and to what extent Social Media can be used to predict stock returns. Market-relevant information is available on various platforms on the Internet, which largely consist of user generated content. For instance, emotions can be extracted in order to identify the investors' risk appetite and in turn the willingness to invest in stocks. Discussion forums also provide an opportunity to identify opinions on certain companies. Taking Social Media platforms as examples, the author examines the forecasting quality of user generated content on the Internet.
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015, held on May 11-13, 2015 at Lorraine University, France. Part II of the two-volume set includes articles devoted to Data analysis and data mining, Heuristic/metaheuristic methods for operational research applications, Optimization applied to surveillance and threat detection, Maintenance and scheduling, Post-crisis banking and eco-finance modelling, Transportation, as well as Technologies and methods for multi-stakeholder decision analysis in public settings.
This book constitutes the thoroughly refereed proceedings of the Fourth International Symposium on Data-Driven Process Discovery and Analysis, held in Milan, Italy, in November 2014. The five revised full papers were carefully selected from 21 submissions. Following the event, authors were given the opportunity to improve their papers with the insights they gained from the symposium. During this edition, the presentations and discussions frequently focused on the implementation of process mining algorithms in contexts where the analytical process is fed by data streams. The selected papers underline the most relevant challenges identified and propose novel approaches to address them.
This book describes the fundamentals of data acquisition systems, how they enable users to sample signals that measure real physical conditions and convert the resulting samples into digital, numeric values that can be analyzed by a computer. The author takes a problem-solving approach to data acquisition, providing the tools engineers need to use the concepts introduced. Coverage includes sensors that convert physical parameters to electrical signals, signal conditioning circuitry to convert sensor signals into a form that can be converted to digital values and analog-to-digital converters, which convert conditioned sensor signals to digital values. Readers will benefit from the hands-on approach, culminating with data acquisition projects, including hardware and software needed to build data acquisition systems.
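As a minimal sketch of the conversion step such systems perform, an ideal n-bit ADC maps a raw count back to a voltage; the 3.3 V reference and 12-bit resolution below are assumptions for illustration, not values from the book:

```python
def adc_to_voltage(raw_count: int, v_ref: float = 3.3, bits: int = 12) -> float:
    """Convert a raw ADC count to volts for an ideal n-bit converter."""
    full_scale = (1 << bits) - 1  # 4095 counts for a 12-bit ADC
    return raw_count * v_ref / full_scale

# A full-scale count corresponds to the reference voltage.
print(adc_to_voltage(4095))
```

Real signal chains put sensor scaling and conditioning gain in front of this step, but the count-to-voltage mapping itself stays the same.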
This book outlines a new approach to possibilistic clustering in which the sought clustering structure of the set of objects is based directly on the formal definition of a fuzzy cluster, and the possibilistic memberships are determined directly from the values of the pairwise similarity of objects. The proposed approach can be used for solving different classification problems. Several techniques that may be useful for this purpose are outlined, including a methodology for constructing a set of labeled objects for a semi-supervised clustering algorithm, a methodology for reducing the dimensionality of the analyzed attribute space, and a method for asymmetric data processing. Moreover, a technique for constructing a subset of the most appropriate alternatives for a set of weak fuzzy preference relations, which are defined on a universe of alternatives, is described in detail, and a method for rapidly prototyping Mamdani fuzzy inference systems is introduced. This book addresses engineers, scientists, professors, students, and postgraduate students who are interested in and work with fuzzy clustering and its applications.
This, the 24th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains extended and revised versions of seven papers presented at the 25th International Conference on Database and Expert Systems Applications, DEXA 2014, held in Munich, Germany, in September 2014. Following the conference, and two further rounds of reviewing and selection, six extended papers and one invited keynote paper were chosen for inclusion in this special issue. Topics covered include systems modeling, similarity search, bioinformatics, data pricing, k-nearest neighbor querying, database replication, and data anonymization.
This book is mainly about an innovative and fundamental method called "intelligent knowledge" to bridge the gap between data mining and knowledge management, two important fields recognized by the information technology (IT) community and business analytics (BA) community respectively. The book includes definitions of the "first-order" analytic process, "second-order" analytic process and intelligent knowledge, which have not formally been addressed by either data mining or knowledge management. Based on these concepts, which are especially important in connection with the current Big Data movement, the book describes a framework of domain-driven intelligent knowledge discovery. To illustrate its technical advantages for large-scale data, the book employs established approaches, such as Multiple Criteria Programming, Support Vector Machine and Decision Tree to identify intelligent knowledge incorporated with human knowledge. The book further shows its applicability by means of real-life data analyses in the contexts of internet business and traditional Chinese medicines.