This book gathers a collection of high-quality peer-reviewed research papers presented at the 2nd International Conference on Data and Information Sciences (ICDIS 2019), held at Raja Balwant Singh Engineering Technical Campus, Agra, India, on March 29-30, 2019. In chapters written by leading researchers, developers, and practitioners from academia and industry, it covers virtually all aspects of computational sciences and information security, including central topics like artificial intelligence, cloud computing, and big data. Highlighting the latest developments and technical solutions, it shows readers from the computer industry how to capitalize on key advances in next-generation computer and communication technology.
This book sets the stage by tracing the evolution of corporate governance, laws and regulations, other forms of governance, and the interaction between data governance and other corporate governance sub-disciplines. Given the continuously evolving and complex regulatory landscape and the growing number of laws and regulations, compliance is a widely discussed issue in the field of data. The book considers the cost of non-compliance, drawing on examples from different industries in which companies failed to comply with rules, regulations, and other legal obligations, and goes on to explain how data governance helps in avoiding such pitfalls. The first in a three-volume series on data governance, it assumes no prior or specialist knowledge of the subject and will be highly beneficial for IT, management and law students, academics, information management and business professionals, and researchers seeking to enhance their knowledge and to get guidance in managing their own data governance projects from a governance and compliance perspective.
Enterprise architecture, integration, interoperability, and the networked enterprise have become the theme of many conferences in the past few years. These conferences were organised by IFIP TC5 with the support of its two working groups: WG 5.12 (Architectures for Enterprise Integration) and WG 5.8 (Enterprise Interoperability), both concerned with aspects of the topic: how is it possible to architect and implement businesses that are flexible and able to change, to interact, and to use one another's services in a dynamic manner for the purpose of (joint) value creation? The original question of enterprise integration in the 1980s was: how can we achieve and integrate information and material flow in the enterprise? Various methods and reference models were developed or proposed, ranging from tightly integrated monolithic system architectures, through cell-based manufacturing, to on-demand interconnection of businesses to form virtual enterprises in response to market opportunities. Two camps have emerged in the endeavour to achieve the same goal, namely interoperability between businesses (where interoperability is the ability to exchange information in order to use one another's services or to jointly implement a service). One school of researchers addresses the technical aspects of creating dynamic (and static) interconnections between disparate businesses (or parts thereof).
This book features research papers presented at the International Conference on Emerging Technologies in Data Mining and Information Security (IEMIS 2020), held at the University of Engineering & Management, Kolkata, India, during July 2020. The book is organized in three volumes and includes high-quality research work by academics and industry experts in the field of computing and communication, including full-length papers, research-in-progress papers, and case studies related to all areas of data mining, machine learning, the Internet of Things (IoT), and information security.
These proceedings gather outstanding research papers presented at the Second International Conference on Data Engineering 2015 (DaEng-2015) and offer a consolidated overview of the latest developments in databases, information retrieval, data mining and knowledge management. The conference brought together researchers and practitioners from academia and industry to address key challenges in these fields, discuss advanced data engineering concepts and form new collaborations. The topics covered include, but are not limited to:
* Data engineering
* Big data
* Data and knowledge visualization
* Data management
* Data mining and warehousing
* Data privacy & security
* Database theory
* Heterogeneous databases
* Knowledge discovery in databases
* Mobile, grid and cloud computing
* Knowledge management
* Parallel and distributed data
* Temporal data
* Web data, services and information engineering
* Decision support systems
* E-business engineering and management
* E-commerce and e-learning
* Geographical information systems
* Information management
* Information quality and strategy
* Information retrieval, integration and visualization
* Information security
* Information systems and technologies
This two-volume set (IFIP AICT 583 and 584) constitutes the refereed proceedings of the 16th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations, AIAI 2020, held in Neos Marmaras, Greece, in June 2020.* The 70 full papers and 5 short papers presented were carefully reviewed and selected from 149 submissions. They cover a broad range of topics related to technical, legal, and ethical aspects of artificial intelligence systems and their applications, and are organized in the following sections: Part I: classification; clustering - unsupervised learning - analytics; image processing; learning algorithms; neural network modeling; object tracking - object detection systems; ontologies - AI; and sentiment analysis - recommender systems. Part II: AI ethics - law; AI constraints; deep learning - LSTM; fuzzy algebra - fuzzy systems; machine learning; medical - health systems; and natural language. *The conference was held virtually due to the COVID-19 pandemic.
The book discusses machine learning-based decision-making models, and presents intelligent, hybrid and adaptive methods and tools for solving complex learning and decision-making problems under conditions of uncertainty. Featuring contributions from data scientists, practitioners and educators, the book covers a range of topics relating to intelligent systems for decision science, and examines recent innovations, trends, and practical challenges in the field. The book is a valuable resource for academics, students, researchers and professionals wanting to gain insights into decision-making.
This book discusses the application of data systems and data-driven infrastructure in existing industrial systems in order to optimize workflow, utilize hidden potential, and make existing systems free from vulnerabilities. It discusses the application of data in the health sector, public transportation, financial institutions, and battling natural disasters, among others. Topics include real-time applications in the current big data perspective; improving security in IoT devices; data backup techniques for systems; artificial intelligence-based outlier prediction; machine learning in OpenFlow networks; and the application of deep learning in blockchain-enabled applications. This book is intended for a variety of readers from professional industries, organizations, and students.
This book discusses recent research and applications in intelligent service computing in mobile environments. The authors first explain how advances in artificial intelligence and big data have allowed for an array of intelligent services with complex and diverse applications. They then show how this brings new opportunities and challenges for service computing. The book, made up of contributions from academia and industry, aims to present advances in intelligent services, new algorithms and techniques in the field, foundational theory and systems, as well as practical real-life applications. Some of the topics discussed include cognition, modeling, description and verification for intelligent services; discovery, recommendation and selection for intelligent services; formal verification, testing and inspection for intelligent services; and composition and cooperation methods for intelligent services.
The post-genomic revolution is witnessing the generation of petabytes of data annually, with deep implications ranging across evolutionary theory, developmental biology, agriculture, and disease processes. "Data Mining for Systems Biology: Methods and Protocols" surveys and demonstrates the science and technology of converting an unprecedented data deluge into new knowledge and biological insight. The volume is organized around two overlapping themes, network inference and functional inference. Written in the highly successful "Methods in Molecular Biology" series format, chapters include introductions to their respective topics, lists of the necessary materials and reagents, step-by-step, readily reproducible protocols, and key tips on troubleshooting and avoiding known pitfalls. Authoritative and practical, "Data Mining for Systems Biology: Methods and Protocols" also seeks to aid researchers in the further development of the databases, mining and visualization systems that are central to the paradigm-altering discoveries being made with increasing frequency.
This book shows healthcare professionals how to turn data points into meaningful knowledge upon which they can take effective action. Actionable intelligence can take many forms, from informing health policymakers on effective strategies for the population, to providing direct and predictive insights on patients so that healthcare providers can achieve positive outcomes. It can assist those performing clinical research, where relevant statistical methods are applied both to identify the efficacy of treatments and to improve clinical trial design. It also benefits healthcare data standards groups, through which pertinent data governance policies are implemented to ensure quality data are obtained, measured, and evaluated for the benefit of all involved. Although the obvious constant thread among all of these important healthcare use cases of actionable intelligence is the data at hand, such data in and of itself merely represents one element of the full structure of healthcare data analytics. This book examines the structure for turning data into actionable knowledge and discusses:
* The importance of establishing research questions
* Data collection policies and data governance
* Principle-centered data analytics to transform data into information
* Understanding the "why" of classified causes and effects
* Narratives and visualizations to inform all interested parties
Actionable Intelligence in Healthcare is an important examination of how proper healthcare-related questions should be formulated, how relevant data must be transformed into associated information, and how the processing of information relates to knowledge. It shows clinicians and researchers why this relative knowledge is meaningful and how best to apply such newfound understanding for the betterment of all.
This book provides an introduction to the field of periodic pattern mining, reviews state-of-the-art techniques, discusses recent advances, and reviews open-source software. Periodic pattern mining is a popular and emerging research area in the field of data mining. It involves discovering all regularly occurring patterns in temporal databases. One of the major applications of periodic pattern mining is the analysis of customer transaction databases to discover sets of items that customers have regularly purchased. Discovering such patterns has several implications for understanding the behavior of customers. Since the first work on periodic pattern mining, numerous studies have been published and great advances have been made in this field. The book consists of three main parts: introduction, algorithms, and applications. The first chapter is an introduction to pattern mining and periodic pattern mining. The concepts of periodicity, periodic support, search space exploration techniques, and pruning strategies are discussed. The main types of algorithms are also presented, such as periodic-frequent pattern-growth, partial periodic pattern-growth, and periodic high-utility itemset mining algorithms. Challenges and research opportunities are reviewed. The chapters that follow present state-of-the-art techniques for discovering periodic patterns in (1) transactional databases, (2) temporal databases, (3) quantitative temporal databases, and (4) big data. Then, the theory on concise representations of periodic patterns is presented, as well as hiding sensitive information using privacy-preserving data mining techniques. The book concludes with several applications of periodic pattern mining, including applications in air pollution data analytics, accident data analytics, and traffic congestion analytics.
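The core definition sketched above (regular occurrence plus sufficient support) can be illustrated with a short example. This is not code from the book: the transaction database, item names, and thresholds are illustrative assumptions, and for brevity only single items are mined rather than full itemsets.

```python
# A minimal sketch of the periodic-frequent idea: an item qualifies if it
# occurs often enough (support >= min_sup) and regularly enough (maximum
# gap between consecutive occurrences <= max_per).

def periodic_frequent_items(database, min_sup, max_per):
    """database: list of (timestamp, set_of_items), sorted by timestamp."""
    occurrences = {}
    for ts, items in database:
        for item in items:
            occurrences.setdefault(item, []).append(ts)
    first_ts, last_ts = database[0][0], database[-1][0]
    result = {}
    for item, ts_list in occurrences.items():
        support = len(ts_list)
        # Periodicity: the largest gap between consecutive occurrences,
        # counting the gaps from the database start and to its end.
        stamps = [first_ts] + ts_list + [last_ts]
        periodicity = max(b - a for a, b in zip(stamps, stamps[1:]))
        if support >= min_sup and periodicity <= max_per:
            result[item] = (support, periodicity)
    return result

# Toy transaction database: (timestamp, items bought).
db = [(1, {"bread", "milk"}), (2, {"bread"}), (3, {"milk", "eggs"}),
      (4, {"bread", "milk"}), (5, {"bread"}), (6, {"milk"})]
print(periodic_frequent_items(db, min_sup=3, max_per=2))
# -> {'bread': (4, 2), 'milk': (4, 2)}
```

Full periodic-frequent pattern-growth algorithms extend this single-item filter to itemsets, using the pruning strategies the first chapter discusses to avoid enumerating the whole search space.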
Learn how to apply the principles of machine learning to time series modeling with this indispensable resource. Machine Learning for Time Series Forecasting with Python is an incisive and straightforward examination of one of the most crucial elements of decision-making in finance, marketing, education, and healthcare: time series modeling. Despite the centrality of time series forecasting, few business analysts are familiar with the power or utility of applying machine learning to time series modeling. Author Francesca Lazzeri, a distinguished machine learning scientist and economist, corrects that deficiency by providing readers with a comprehensive and approachable explanation and treatment of the application of machine learning to time series forecasting. Written for readers who have little to no experience in time series forecasting or machine learning, the book comprehensively covers all the topics necessary to:
* Understand time series forecasting concepts, such as stationarity, horizon, trend, and seasonality
* Prepare time series data for modeling
* Evaluate time series forecasting models' performance and accuracy
* Understand when to use neural networks instead of traditional time series models in time series forecasting
Machine Learning for Time Series Forecasting with Python is full of real-world examples, resources and concrete strategies to help readers explore and transform data and develop usable, practical time series forecasts. Perfect for entry-level data scientists, business analysts, developers, and researchers, this book is an invaluable guide to the fundamental and advanced concepts of machine learning applied to time series modeling.
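As a point of reference for the concepts listed above (horizon, seasonality, forecast accuracy), here is a minimal sketch of two classic baseline forecasters and a simple accuracy measure. It is not taken from the book; the toy series, season length, and horizon are illustrative assumptions.

```python
# Two baselines that machine-learning forecasters are typically compared
# against: a naive last-value forecast and a seasonal-naive forecast.

def naive_forecast(series, horizon):
    """Repeat the last observed value over the forecast horizon."""
    return [series[-1]] * horizon

def seasonal_naive_forecast(series, horizon, season):
    """Repeat the last full season of observations."""
    last_season = series[-season:]
    return [last_season[i % season] for i in range(horizon)]

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# A toy series with a trend and a clear season length of 4.
history = [10, 20, 30, 40, 12, 22, 32, 42]
actual_future = [14, 24, 34, 44]

print(mean_absolute_error(actual_future, naive_forecast(history, 4)))            # -> 14.0
print(mean_absolute_error(actual_future, seasonal_naive_forecast(history, 4, 4)))  # -> 2.0
```

The seasonal baseline wins here because the toy series is dominated by its seasonal pattern; a learned model is worth deploying only when it beats such baselines on held-out data.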
The book first explores the cybersecurity landscape and the inherent susceptibility of online communication systems, such as e-mail, chat conversations and social media, to cybercrime. Common sources and resources of digital crimes, their causes and effects, together with the emerging threats to society, are illustrated in this book. The book not only explores the growing needs of cybersecurity and digital forensics but also investigates relevant technologies and methods to meet those needs. Knowledge discovery, machine learning and data analytics are explored for collecting cyber-intelligence and forensic evidence on cybercrimes. Online communication documents, which are the main source of cybercrimes, are investigated from two perspectives: the crime and the criminal. AI and machine learning methods are applied to detect illegal and criminal activities such as bot distribution, drug trafficking and child pornography. Authorship analysis is applied to identify potential suspects and their sociolinguistic characteristics. Deep learning, together with frequent pattern mining and link mining techniques, is applied to trace the potential collaborators of the identified criminals. Finally, the aim of the book is not only to investigate the crimes and identify the potential suspects but also to collect solid and precise forensic evidence to prosecute the suspects in a court of law.
This edited book covers recent advances in techniques, methods and tools for the problem of learning from data streams generated by evolving non-stationary processes. The goal is to discuss and provide an overview of the advanced techniques, methods and tools dedicated to managing, exploiting and interpreting data streams in non-stationary environments. The book includes the notions, definitions, and background required to understand the problem of learning from data streams in non-stationary environments, and synthesizes the state of the art in the domain, discussing advanced aspects and concepts and presenting open problems and future challenges in this field. Provides multiple examples to facilitate the understanding of data streams in non-stationary environments; presents several application cases to show how the methods solve different real-world problems; discusses the links between methods to help stimulate new research and application directions.
This book discusses the impact of advanced information technologies, such as data processing, machine learning, and artificial intelligence, on organizational decision-making processes and practices. One of the book's central themes is the interplay between human reasoning and machine logic in the context of organizational functioning, specifically the fairly common situations in which subjective beliefs are pitted against objective evidence, giving rise to conflict rather than enhancing the quality of organizational sensemaking. Aiming not only to raise awareness of the potential challenges but also to offer solutions, the book delineates and discusses the core impediments to effective human-information technology interactions, and outlines strategies for overcoming those obstacles on the way to enhancing the efficacy of organizational decision-making.
This book provides a broad overview of essential features of subsurface environmental modelling at the science-policy interface, offering insights into the potential challenges in the field of subsurface flow and transport, as well as the corresponding computational modelling and its impact on the area of policy- and decision-making. The book is divided into two parts: Part I presents models, methods and software at the science-policy interface. Building on this, Part II illustrates the specifications using detailed case studies of subsurface environmental modelling. It also includes a systematic research overview and discusses the anthropogenic use of the subsurface, with a particular focus on energy-related technologies, such as carbon sequestration, geothermal technologies, fluid and energy storage, nuclear waste disposal, and unconventional oil and gas recovery.
Big data is a well-trafficked subject in recent IT discourse and does not lack for current research. In fact, there is such a surfeit of material related to big data (and so much of it of questionable reliability, thanks to the high-gloss efforts of savvy tech-marketing gurus) that it can, at times, be difficult for a serious academician to navigate. The Handbook of Research on Trends and Future Directions in Big Data and Web Intelligence cuts through the haze of glitz and pomp surrounding big data and offers a simple, straightforward reference source of practical academic utility. Covering such topics as cloud computing, parallel computing, natural language processing, and personalized medicine, this volume presents an overview of current research, insight into recent advances, and gaps in the literature indicative of opportunities for future inquiry, and is targeted toward a broad, interdisciplinary audience of students, academics, researchers, and professionals in the fields of IT, networking, and data analytics.
This volume presents techniques and theories drawn from mathematics, statistics, computer science, and information science to analyze problems in business, economics, finance, insurance, and related fields. The authors present proposals for solutions to common problems in these fields. To this end, they show the use of mathematical, statistical, and actuarial modeling, and of concepts from data science, to construct and apply appropriate models with real-life data, and employ the design and implementation of computer algorithms to evaluate decision-making processes. The book is unique in that it connects data science, with data scientists coming from different backgrounds, with basic and advanced concepts and tools used in econometrics, operational research, and actuarial science. It is therefore a must-read for scholars, students, and practitioners interested in a better understanding of the techniques and theories of these fields.
Encompassing a broad range of forms and sources of data, this textbook introduces data systems through a progressive presentation. Introduction to Data Systems covers data acquisition starting with local files, then progresses to data acquired from relational databases, from REST APIs, and through web scraping. It teaches data forms and formats from tidy data, to relationally defined sets of tables, to hierarchical structures like XML and JSON, using data models to convey the structure, operations, and constraints of each data form. The book's starting point is a foundation in Python programming, as found in introductory computer science classes or short courses on the language, so it does not require prerequisites of data structures, algorithms, or other courses. This makes the material accessible to students early in their educational career and equips them with understanding and skills that can be applied in computer science, data science/data analytics, and information technology programs, as well as in internships and research experiences. This book is accessible to a wide variety of students. By drawing together content normally spread across upper-level computer science courses, it offers a single source providing the essentials for data science practitioners. In our increasingly data-centric world, students from all domains will benefit from the "data-aptitude" built by the material in this book.
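The progression from hierarchical formats to tidy data described above can be sketched briefly in Python, the book's language. This is not the book's own example; the JSON document and field names are illustrative assumptions.

```python
# Flattening a hierarchical (JSON) structure into tidy rows: each row is
# one observation, each column one variable, with the parent attribute
# repeated on every child row.
import json

document = json.loads("""
{"department": "Books",
 "items": [{"title": "Data Mining", "price": 45.0},
           {"title": "Time Series", "price": 39.5}]}
""")

rows = [
    {"department": document["department"],
     "title": item["title"],
     "price": item["price"]}
    for item in document["items"]
]
for row in rows:
    print(row)
```

The same tidy rows could equally come from a relational table or a scraped page; representing each source in one common form is what makes the later analysis steps uniform.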
Nearly everyone in the fields of data mining and business intelligence knows the K-means algorithm. But ever-emerging data with extremely complicated characteristics bring new challenges to this "old" algorithm. This book addresses these challenges and makes novel contributions: establishing theoretical frameworks for K-means distances and K-means-based consensus clustering, identifying the "dangerous" uniform effect and zero-value dilemma of K-means, adopting the right measures for cluster validity, and integrating K-means with SVMs for rare-class analysis. The book not only enriches clustering and optimization theory, but also provides good guidance for the practical use of K-means, especially for important tasks such as network intrusion detection and credit fraud prediction. The thesis on which this book is based won the 2010 National Excellent Doctoral Dissertation Award, the highest honor, awarded to no more than 100 PhD theses per year in China.
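For readers who want the baseline in front of them, a minimal sketch of the standard K-means procedure (Lloyd's algorithm) follows. It is not code from the book; the sample points, the choice of k, and the simple first-k initialization are illustrative assumptions.

```python
# Lloyd's algorithm: alternate between assigning points to their nearest
# center and moving each center to the mean of its assigned points.

def kmeans(points, k, iterations=100):
    centers = list(points[:k])  # simple deterministic initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each center moves to the mean of its cluster
        # (an empty cluster keeps its old center).
        new_centers = [
            tuple(sum(xs) / len(cluster) for xs in zip(*cluster))
            if cluster else centers[i]
            for i, cluster in enumerate(clusters)
        ]
        if new_centers == centers:  # converged: centers stopped moving
            break
        centers = new_centers
    return centers, clusters

# Two well-separated blobs of three points each.
pts = [(1.0, 1.0), (1.0, 0.0), (0.0, 1.0), (8.0, 8.0), (9.0, 7.0), (7.0, 9.0)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

The pitfalls the book analyzes, such as the uniform effect, arise precisely from the mean-update step, which pulls all clusters toward similar sizes regardless of the true class distribution.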
This work provides an innovative look at the use of open data for extracting information to detect and prevent crime, and also explores the link between terrorism and organized crime. In counter-terrorism and other forms of crime prevention, foresight about potential threats is vitally important, and this information is increasingly available via electronic data sources such as social media communications. However, the amount and quality of these sources vary, and researchers and law enforcement need guidance about when and how to extract useful information from them. The emergence of these crime threats, such as communication between organized crime networks and radicalization towards terrorism, is driven by a combination of political, economic, social, technological, legal and environmental factors. The contributions to this volume represent a major step by researchers to systematically collect, filter, interpret, and use the information available. For the purposes of this book, the only data sources used are publicly available sources which can be accessed legally and ethically. This work will be of interest to researchers in criminology and criminal justice, particularly in police science, organized crime, counter-terrorism and crime science. It will also be of interest to those in related fields such as applications of computer science and data mining, public policy, and business intelligence.
This book proposes new control and protection schemes to improve the overall stability and security of future wide-area power systems. It focuses on the high penetration levels of renewable energy sources and distributed generation, particularly with the trend towards smart grids. The control methods discussed can improve the overall stability in normal and abnormal operation conditions, while the protection methods presented can be used to ensure the secure operation of systems under most severe contingencies. Presenting stability, security, and protection methods for power systems in one concise volume, this book takes the reader on a journey from concepts and fundamentals to the latest and future trends in each topic covered, making it an informative and intriguing read for researchers, graduate students, and practitioners alike.
Making use of data is no longer a niche pursuit but central to almost every project. With access to massive compute resources and vast amounts of data, it seems, at least in principle, possible to solve any problem. However, successful data science projects result from the intelligent application of: human intuition in combination with computational power; sound background knowledge with computer-aided modelling; and critical reflection on the obtained insights and results. Substantially updating the previous edition, then entitled Guide to Intelligent Data Analysis, this core textbook continues to provide a hands-on instructional approach to many data science techniques, and explains how these are used to solve real-world problems. The work balances the practical aspects of applying and using data science techniques with the theoretical and algorithmic underpinnings from mathematics and statistics. Major updates on techniques and subject coverage (including deep learning) are included. Topics and features: guides the reader through the process of data science, following the interdependent steps of project understanding, data understanding, data blending and transformation, modeling, as well as deployment and monitoring; includes numerous examples using the open source KNIME Analytics Platform, together with an introductory appendix; provides a review of the basics of classical statistics that support and justify many data analysis methods, and a glossary of statistical terms; integrates illustrations and case-study-style examples to support pedagogical exposition; supplies further tools and information at an associated website. This practical and systematic textbook/reference is a "need-to-have" tool for graduate and advanced undergraduate students and essential reading for all professionals who face data science problems. Moreover, it is a "need to use, need to keep" resource following one's exploration of the subject.
Formal specifications are an important tool for the construction, verification and analysis of systems, since without them it is hardly possible to determine whether a system worked correctly or showed the expected behavior. This book proposes the use of representation theorems as a means to develop an understanding of all models of a specification, in order to exclude possible unintended models, and demonstrates the general methodology with representation theorems for applications in qualitative spatial reasoning, data stream processing, and belief revision. For qualitative spatial reasoning, it develops a model of spatial relatedness that captures the scaling context with hierarchical partitions of a spatial domain, and axiomatically characterizes the resulting relations. It also shows that various important properties of stream processing, such as prefix-determinedness or various factorization properties, can be axiomatized, and that the axioms are fulfilled by natural classes of stream functions. The third example is belief revision, which is concerned with the revision of knowledge bases under new, potentially incompatible information. In this context, the book considers a subclass of revision operators, namely the class of reinterpretation operators, and characterizes them axiomatically. A characteristic property of reinterpretation operators is that they dissolve potential inconsistencies by reinterpreting symbols of the knowledge base. Intended for researchers in theoretical computer science or one of the above application domains, the book presents results that demonstrate the use of representation theorems for the design and evaluation of formal specifications, and provides the basis for future application-development kits that support application designers with automatically built representations.