This book constitutes the refereed proceedings of the 12th EUROLAN Summer School on Linguistic Linked Open Data and its Satellite Workshop on Social Media and the Web of Linked Data, RUMOUR 2015, held in Sibiu, Romania, in July 2015. The 10 revised full papers presented together with 12 abstracts of tutorials were carefully reviewed and selected from 21 submissions.
This book introduces Meaningful Purposive Interaction Analysis (MPIA) theory, which combines social network analysis (SNA) with latent semantic analysis (LSA) to help create and analyse a meaningful learning landscape from the digital traces left by a learning community in the co-construction of knowledge. The hybrid algorithm is implemented in the statistical programming language and environment R, introducing packages which capture, through matrix algebra, elements of learners' work with more knowledgeable others and resourceful content artefacts. The book provides comprehensive package-by-package application examples, and code samples that guide the reader through the MPIA model to show how the MPIA landscape can be constructed and the learner's journey mapped and analysed. This building-block approach will allow the reader to progress to using and building analytics to guide students and support decision-making in learning.
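The LSA half of the blurb's SNA-plus-LSA combination can be illustrated with a minimal sketch. This is not the book's R packages; the term-document matrix, the rank k, and all values below are invented for illustration. The core matrix-algebra step is a truncated singular value decomposition that projects documents into a low-rank latent space, where similarity can be compared:

```python
import numpy as np

# Hypothetical term-document count matrix: rows are terms, columns are
# learner-produced documents (all values invented for illustration).
X = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 2, 1],
    [0, 0, 1, 2],
], dtype=float)

# LSA core step: truncated SVD keeps the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: a document in latent space

def cosine(a, b):
    # cosine similarity between two latent-space document vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(doc_vecs[0], doc_vecs[1])
print(doc_vecs.shape, round(sim, 3))
```

Comparing documents in the reduced space, rather than on raw counts, is what lets LSA treat documents with different surface vocabulary as semantically close.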
This book offers a timely report on key theories and applications of soft-computing. Written in honour of Professor Gaspar Mayor on his 70th birthday, it primarily focuses on areas related to his research, including fuzzy binary operators, aggregation functions, multi-distances, and fuzzy consensus/decision models. It also discusses a number of interesting applications such as the implementation of fuzzy mathematical morphology based on Mayor-Torrens t-norms. Importantly, the different chapters, authored by leading experts, present novel results and offer new perspectives on different aspects of Mayor's research. The book also includes an overview of evolutionary fuzzy systems, a topic that is not one of Mayor's main areas of interest, and a final chapter written by the Spanish pioneer in fuzzy logic, Professor E. Trillas. Computer and decision scientists, knowledge engineers and mathematicians alike will find here an authoritative overview of key soft-computing concepts and techniques.
This book offers an original and broad exploration of the fundamental methods in Clustering and Combinatorial Data Analysis, presenting new formulations and ideas within this very active field. With extensive introductions, formal and mathematical developments and real case studies, this book provides readers with a deeper understanding of the mutual relationships between these methods, which are clearly expressed with respect to three facets: logical, combinatorial and statistical. Using relational mathematical representation, all types of data structures can be handled in precise and unified ways, which the author highlights in three stages: clustering a set of descriptive attributes; clustering a set of objects or a set of object categories; and establishing correspondence between these two dual clusterings. Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical Data Analysis and Clustering will be a valuable resource for students and researchers who are interested in the areas of Data Analysis, Clustering, Data Mining and Knowledge Discovery.
This book starts with an introduction to process modeling and process paradigms, then explains how to query and analyze process models, and how to analyze the process execution data. In this way, readers receive a comprehensive overview of what is needed to identify, understand and improve business processes. The book chiefly focuses on concepts, techniques and methods. It covers a large body of knowledge on process analytics - including process data querying, analysis, matching and correlating process data and models - to help practitioners and researchers understand the underlying concepts, problems, methods, tools and techniques involved in modern process analytics. Following an introduction to basic business process and process analytics concepts, it describes the state of the art in this area before examining different analytics techniques in detail. In this regard, the book covers analytics over different levels of process abstractions, from process execution data and methods for linking and correlating process execution data, to inferring process models, querying process execution data and process models, and scalable process data analytics methods. In addition, it provides a review of commercial process analytics tools and their practical applications. The book is intended for a broad readership interested in business process management and process analytics. It provides researchers with an introduction to these fields by comprehensively classifying the current state of research, by describing in-depth techniques and methods, and by highlighting future research directions. Lecturers will find a wealth of material to choose from for a variety of courses, ranging from undergraduate courses in business process management to graduate courses in business process analytics. Lastly, it offers professionals a reference guide to the state of the art in commercial tools and techniques, complemented by many real-world use case scenarios.
This book constitutes the refereed proceedings of the Second International Workshop on Process-Aware Systems, PAS 2015, held in Hangzhou, China, in October 2015. The four revised full papers and two short papers, presented together with five demo papers, were carefully reviewed and selected from 16 submissions. The papers are organized in topical sections on process modeling and comparison; process data analysis; and cloud workflow applications.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This volume, the 26th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, focuses on Data Warehousing and Knowledge Discovery from Big Data, and contains extended and revised versions of four papers selected as the best papers from the 16th International Conference on Data Warehousing and Knowledge Discovery (DaWaK 2014), held in Munich, Germany, during September 1-5, 2014. The papers focus on data cube computation, the construction and analysis of a data warehouse in the context of cancer epidemiology, pattern mining algorithms, and frequent item-set border approximation.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments. Data analysis techniques are required for identifying causal information and relationships directly from such observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets. Exploratory causal analysis (ECA) provides a framework for exploring potential causal structures in time series data sets and is characterized by a myopic goal to determine which data series from a given set of series might be seen as the primary driver. In this work, ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
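Of the time series causality tools named above, Granger causality is compact enough to sketch. The toy below is an illustrative least-squares version, not the specific ECA implementation from the work, and the synthetic driver-response pair is invented: it compares the residual variance of predicting y from its own lags against predicting it from both its own lags and those of a candidate driver x.

```python
import numpy as np

def rss(X, Y):
    # residual sum of squares of an ordinary least-squares fit (with intercept)
    X1 = np.column_stack([np.ones(len(Y)), X])
    beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    r = Y - X1 @ beta
    return float(r @ r)

def granger_f(x, y, lag=2):
    # Granger-style F statistic: do lagged values of x reduce the residual
    # variance of a prediction of y beyond y's own lags?
    n = len(y)
    L = lambda v: np.column_stack([v[lag - j: n - j] for j in range(1, lag + 1)])
    Y = y[lag:]
    rss_r = rss(L(y), Y)                     # restricted model: y lags only
    rss_u = rss(np.hstack([L(y), L(x)]), Y)  # unrestricted: y lags + x lags
    dfn, dfd = lag, n - 3 * lag - 1
    return ((rss_r - rss_u) / dfn) / (rss_u / dfd)

# Synthetic pair where x drives y with a one-step delay (invented data).
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.empty(300)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.1 * rng.normal(size=299)
print(granger_f(x, y), granger_f(y, x))  # x->y statistic far exceeds y->x
```

The asymmetry of the two statistics is exactly the kind of driver evidence ECA inspects; the book's point is that on more complicated systems, different tools can disagree about which direction dominates.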
This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers, including municipal politicians, spatial planners, and citizen groups, in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and finding appropriate modeling approaches, and not only in terms of computer modeling. The selection of articles featured in this volume reflects a broad variety of new and established modeling approaches, such as:
- An argument for using Big Data methods in conjunction with Agent-based Modeling;
- The introduction of a participatory approach involving citizens, in order to utilize an Agent-based Modeling approach to simulate urban-growth scenarios;
- A presentation of semantic modeling to enable a flexible application of modeling methods and a flexible exchange of data;
- An article about a nested-systems approach to analyzing a city's interdependent subsystems (according to these subsystems' different velocities of change);
- An article about methods that use Luhmann's system theory to characterize cities as systems that are composed of flows;
- An article that demonstrates how the Sen-Nussbaum Capabilities Approach can be used in urban systems to measure household well-being shifts that occur in response to the resettlement of urban households;
- A final article that illustrates how Adaptive Cycles of Complex Adaptive Systems, as well as innovation, can be applied to gain a better understanding of cities and to promote more resilient and more sustainable urban futures.
This work takes a critical look at the current concept of isotopic landscapes ("isoscapes") in bioarchaeology and its application in future research. It specifically addresses the research potential of cremated finds, a somewhat neglected bioarchaeological substrate, resulting primarily from the inherent osteological challenges and complex mineralogy associated with it. In addition, for the first time data mining methods are applied. The chapters are the outcome of an international workshop sponsored by the German Science Foundation and the Centre of Advanced Studies at the Ludwig-Maximilian-University in Munich. Isotopic landscapes are indispensable tracers for the monitoring of the flow of matter through geo/ecological systems since they comprise existing temporally and spatially defined stable isotopic patterns found in geological and ecological samples. Analyses of stable isotopes of the elements nitrogen, carbon, oxygen, strontium, and lead are routinely utilized in bioarchaeology to reconstruct biodiversity, palaeodiet, palaeoecology, palaeoclimate, migration and trade. The interpretive power of stable isotopic ratios depends not only on firm, testable hypotheses, but most importantly on the cooperative networking of scientists from both natural and social sciences. Application of multi-isotopic tracers generates isotopic patterns with multiple dimensions, which accurately characterize a find, but can only be interpreted by use of modern data mining methods.
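As a hedged illustration of the kind of data mining the workshop calls for, multi-isotopic patterns can be grouped by a plain clustering algorithm. The contributors' actual methods are not specified here, and the sample values below (87Sr/86Sr ratios and d18O values for two made-up source regions) are entirely invented:

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    # Plain Lloyd's k-means: assign each sample to the nearest center,
    # then move each center to the mean of its assigned samples.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical samples: columns are 87Sr/86Sr and d18O. The two blocks
# mimic two invented source regions (values illustrative only).
X = np.vstack([
    np.random.default_rng(3).normal([0.709, -6.0], [0.0005, 0.3], (20, 2)),
    np.random.default_rng(4).normal([0.712, -9.0], [0.0005, 0.3], (20, 2)),
])
labels, centers = kmeans(X, k=2)
print(labels)
```

In practice the isotope ratios would need rescaling before clustering, since Sr ratios vary on a far smaller numeric scale than d18O; here the d18O separation alone is large enough to recover the two groups.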
Pulled by rapidly developing technology and pushed by budget cuts, politicians and public managers are attempting to find ways to increase the public value of their actions. Policymakers are increasingly acknowledging the potential that lies in publicly disclosing more of the data that they hold, as well as incentivizing individuals and organizations to access, use, and combine it in new ways. Due to technological advances, which include smarter phones, better ways to track objects and people as they travel, and more efficient data processing, it is now possible to build systems which use shared, transparent data in creative ways. This book investigates the ways in which such systems can promote public value by encouraging the disclosure and reuse of privately-held data in ways that support collective values such as environmental sustainability. Supported by funding from the National Science Foundation, the authors' research team has been working on one such system, designed to enhance consumers' ability to access information about the sustainability of the products that they buy and the supply chains that produce them. The book adds to the current conversation among academics and practitioners about how to promote public value through data disclosure, focusing particularly on the roles that governments, businesses and non-profit actors can play in this process, making it of interest to both scholars and policy-makers.
This, the 25th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains five fully revised selected papers focusing on data and knowledge management systems. Topics covered include a framework consisting of two heuristics with slightly different characteristics to compute the action rating of data stores, a theoretical and experimental study of filter-based equijoins in a MapReduce environment, a constraint programming approach based on constraint reasoning to study the view selection and data placement problem given a limited amount of resources, a formalization and an approximate algorithm to tackle the problem of source selection and query decomposition in federations of SPARQL endpoints, and a matcher factory enabling the generation of a dedicated schema matcher for a given schema matching scenario.
This book constitutes the thoroughly refereed proceedings of the Fourth International Conference on Data Technologies and Applications, DATA 2015, held in Colmar, France, in July 2015. The 9 revised full papers were carefully reviewed and selected from 70 submissions. The papers deal with the following topics: databases, data warehousing, data mining, data management, data security, knowledge and information systems and technologies; advanced application of data.
This work presents new approaches to, experiences with, and visions for Machine Learning for Cyber Physical Systems. It contains selected papers from the international conference ML4CPS - Machine Learning for Cyber Physical Systems, which was held in Lemgo, October 1-2, 2015. Cyber Physical Systems are characterized by their ability to adapt and to learn: they analyze their environment and, based on observations, they learn patterns, correlations and predictive models. Typical applications are condition monitoring, predictive maintenance, image processing and diagnosis. Machine Learning is the key technology for these developments.
The two volumes of this book collect high-quality peer-reviewed research papers presented at the International Conference on ICT for Sustainable Development (ICT4SD 2015), held in Ahmedabad, India, during 3-4 July 2015. The book discusses all areas of Information and Communication Technologies and their applications in the fields of engineering and management. The main focus of the volumes is on applications of ICT for infrastructure, e-Governance, and contemporary technological advancements in Data Mining, Security, Computer Graphics, etc. The objective of this international conference is to provide an opportunity for researchers, academicians, industry practitioners and students to interact and exchange ideas, experience and expertise in current trends and strategies for Information and Communication Technologies.
This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze such data. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for it. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in Big Data analysis by adopting parallel, grid, and cloud computing environments.
This, the 24th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, contains extended and revised versions of seven papers presented at the 25th International Conference on Database and Expert Systems Applications, DEXA 2014, held in Munich, Germany, in September 2014. Following the conference, and two further rounds of reviewing and selection, six extended papers and one invited keynote paper were chosen for inclusion in this special issue. Topics covered include systems modeling, similarity search, bioinformatics, data pricing, k-nearest neighbor querying, database replication, and data anonymization.
This book explores an approach to social robotics based solely on autonomous unsupervised techniques and positions it within a structured exposition of related research in psychology, neuroscience, HRI, and data mining. The authors present an autonomous and developmental approach that allows the robot to learn interactive behavior by imitating humans using algorithms from time-series analysis and machine learning. The first part provides a comprehensive and structured introduction to time-series analysis, change point discovery, motif discovery and causality analysis, focusing on possible applicability to HRI problems. Detailed explanations of all the algorithms involved are provided with open-source implementations in MATLAB, enabling the reader to experiment with them. Imitation and simulation are the key technologies used to attain social behavior autonomously in the proposed approach. Part two gives the reader a wide overview of research in these areas in psychology and ethology. Based on this background, the authors discuss approaches to endow robots with the ability to autonomously learn how to be social. Data Mining for Social Robots will be essential reading for graduate students and practitioners interested in social and developmental robotics.
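Of the techniques named in the first part, change point discovery is the easiest to sketch. The following is a minimal mean-shift detector written in Python rather than the book's MATLAB implementations, with an invented synthetic series; it scores every candidate split point by the size-weighted squared difference of the segment means and returns the best split:

```python
import numpy as np

def changepoint(x):
    # Score each split k by k*(n-k)/n * (mean(left) - mean(right))^2,
    # a basic single-change-point statistic; return the best split index.
    n = len(x)
    best_k, best_score = None, -np.inf
    for k in range(2, n - 1):
        left, right = x[:k], x[k:]
        score = k * (n - k) / n * (left.mean() - right.mean()) ** 2
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Invented series: mean jumps from 0 to 2 at index 100.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 0.3, 100), rng.normal(2, 0.3, 100)])
print(changepoint(series))  # near 100
```

Real HRI signals have multiple, subtler changes, which is why the book devotes whole chapters to more robust discovery algorithms; this sketch only conveys the shape of the problem.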
This book constitutes thoroughly revised and selected papers from the 10th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2015, held in Berlin, Germany, in March 2015. VISIGRAPP comprises GRAPP, International Conference on Computer Graphics Theory and Applications; IVAPP, International Conference on Information Visualization Theory and Applications; and VISAPP, International Conference on Computer Vision Theory and Applications. The 23 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 529 submissions. The book also contains one invited talk in full-paper length. The regular papers were organized in topical sections named: computer graphics theory and applications; information visualization theory and applications; and computer vision theory and applications.
This book constitutes the refereed proceedings of the International Conference on Geographical Information Systems Theory, Applications and Management, held in Barcelona, Spain, in April 2015. The 10 revised full papers presented were carefully reviewed and selected from 45 submissions. The papers address new challenges in geo-spatial data sensing, observation, representation, processing, visualization, sharing and managing. They concern information and communications technology (ICT) as well as management of information and knowledge-based systems.
This book contains the combined proceedings of the 4th International Conference on Ubiquitous Computing Application and Wireless Sensor Network (UCAWSN-15) and the 16th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT-15). The combined proceedings present peer-reviewed contributions from academic and industrial researchers in fields including ubiquitous and context-aware computing, context-awareness reasoning and representation, location awareness services, and architectures, protocols and algorithms, and energy management and control of wireless sensor networks. The book includes the latest research results, practical developments and applications in parallel/distributed architectures, wireless networks and mobile computing, formal methods and programming languages, network routing and communication algorithms, database applications and data mining, access control and authorization and privacy preserving computation.
This book contains selected papers from the International Conference on Extreme Learning Machine 2015, which was held in Hangzhou, China, December 15-17, 2015. This conference brought together researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the Extreme Learning Machine (ELM) technique and brain learning. The book covers theories, algorithms and applications of ELM, giving readers a glimpse of the most recent advances in the field.
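The ELM technique itself is compact enough to sketch: a single hidden layer whose input weights are random and never trained, with only the output weights solved in closed form by least squares. The example below is an illustrative numpy sketch fitting an invented noisy sine, not code from the proceedings:

```python
import numpy as np

def elm_fit(X, y, hidden=40, seed=0):
    # Extreme Learning Machine: random (untrained) input weights and biases,
    # tanh hidden layer, output weights solved by least squares.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Invented regression task: a noisy sine on [0, 2*pi].
rng = np.random.default_rng(2)
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y, hidden=40)
err = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(err)
```

Because training reduces to one linear solve, ELM trades the iterative optimization of backpropagation for speed, which is much of what the conference series explores.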