This book presents a unique approach to stream data mining. Unlike the vast majority of previous approaches, which are largely based on heuristics, it highlights methods and algorithms that are mathematically justified. First, it describes how to adapt static decision trees to accommodate data streams; in this regard, new splitting criteria are developed to guarantee that they are asymptotically equivalent to the classical batch tree. Moreover, new decision trees are designed, leading to the original concept of hybrid trees. In turn, nonparametric techniques based on Parzen kernels and orthogonal series are employed to address concept drift in the problem of non-stationary regressions and classification in a time-varying environment. Lastly, an extremely challenging problem that involves designing ensembles and automatically choosing their sizes is described and solved. Given its scope, the book is intended for a professional audience of researchers and practitioners who deal with stream data, e.g. in telecommunication, banking, and sensor networks.
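The splitting criteria with asymptotic guarantees are the book's own contribution; as a generic illustration of the kind of statistically justified split decision used in streaming decision trees, the sketch below applies the classical Hoeffding bound. The function names and parameter values are illustrative assumptions, not the book's criteria.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that, with probability 1 - delta, the true mean of a
    random variable with the given range lies within epsilon of the
    empirical mean of n observations."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best, gain_second, n, value_range=1.0, delta=1e-7):
    """Split a streaming-tree leaf only when the observed gain advantage
    of the best attribute over the runner-up exceeds the Hoeffding
    epsilon, so the choice is statistically safe on finite stream data."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)
```

With n = 1000 examples and delta = 1e-7 the bound is roughly 0.09, so a gain advantage of 0.2 triggers a split, while 0.02 defers the decision until more examples have arrived from the stream.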
This book summarizes the research findings presented at the 13th International Joint Conference on Knowledge-Based Software Engineering (JCKBSE 2020), which took place on August 24-26, 2020. JCKBSE 2020 was originally planned to take place in Larnaca, Cyprus; unfortunately, the COVID-19 pandemic forced it to be rescheduled as an online conference. JCKBSE is a well-established, international, biennial conference that focuses on the applications of artificial intelligence in software engineering. The 2020 edition of the conference was organized by Hiroyuki Nakagawa, Graduate School of Information Science and Technology, Osaka University, Japan, and George A. Tsihrintzis and Maria Virvou, Department of Informatics, University of Piraeus, Greece. This research book is a valuable resource for experts and researchers in the field of (knowledge-based) software engineering, as well as general readers in the fields of artificial and computational intelligence and, more generally, computer science who want to learn more about the field of (knowledge-based) software engineering and its applications. An extensive list of bibliographic references at the end of each paper helps readers to probe further into the application areas of interest to them.
This book provides a comprehensive overview of the key security concerns surrounding the upcoming Internet of Things (IoT), and introduces readers to the protocols adopted in the IoT. It also analyses the vulnerabilities, attacks and defense mechanisms, highlighting the security issues in the context of big data. Lastly, trust management approaches and ubiquitous learning applications are examined in detail. As such, the book sets the stage for developing and securing IoT applications both today and in the future.
Integrative Document and Content Management: Strategies for Exploiting Enterprise Knowledge blends theory and practice to provide practical knowledge and guidelines for enterprises wishing to understand the importance of managing documents for their operations, along with the presentation of document content to facilitate business planning and operations support. This book gives extensive pointers to those who propose to embark upon the implementation of integrated document management systems and to embrace Web content management within a life-cycle framework covering document creation through to Web publication.
This thesis focuses on the problem of optimizing the quality of network multimedia services. This problem spans multiple domains, from subjective perception of multimedia quality to computer networks management. The work done in this thesis approaches the problem at different levels, developing methods for modeling the subjective perception of quality based on objectively measurable parameters of the multimedia coding process as well as the transport over computer networks. The modeling of subjective perception is motivated by work done in psychophysics, while using machine learning techniques to map network conditions to the human perception of video services. Furthermore, the work develops models for efficient control of multimedia systems operating in dynamic networked environments, with the goal of delivering optimized Quality of Experience. Overall, this thesis delivers a set of methods for monitoring and optimizing the quality of multimedia services that adapt to the dynamic environment of computer networks in which they operate.
This book is a collection of selected papers presented at the First International Conference on Industrial IoT, Big Data and Supply Chain (IIoTBDSC), held as an online conference due to COVID-19 (it was initially to be held in Macao, Special Administrative Region (SAR) of China) during September 15-17, 2020. It includes novel and innovative work from experts, practitioners, scientists and decision-makers from academia and industry. It brings together multiple disciplines, spanning IIoT, data science, cloud computing, and software engineering approaches to the design, development, testing and quality of products and services.
This invaluable text/reference investigates the state of the art in approaches to building, monitoring, managing, and governing smart cities. A particular focus is placed on the distributed computing environments within the infrastructure of such cities, including issues of device connectivity, communication, security, and interoperability. A selection of experts of international repute offer their perspectives on current trends and best practices, and their suggestions for future developments, together with case studies supporting the vision of smart cities based on the Internet of Things (IoT). Topics and features: examines the various methodologies relating to next-level urbanization, including approaches to security and privacy relating to social and legal aspects; describes a recursive and layered approach to modeling large-scale resource management systems for self-sustainable cities; proposes a novel architecture for hybrid vehicular wireless sensor networks, and a pricing mechanism for the management of natural resources; discusses the challenges and potential solutions to building smart city surveillance systems, applying knowledge-based governance, and adopting electric vehicles; covers topics on intelligent distributed systems, IoT, fog computing paradigms, big data management and analytics, and smart grids; reviews issues of sustainability in the design of smart cities and healthcare services, illustrated by case studies taken from cities in Japan, India, and Brazil. This illuminating volume offers a comprehensive reference for researchers investigating smart cities and the IoT, students interested in the distributed computing technologies used by smart living systems, and practitioners wishing to adopt the latest security and connectivity techniques in smart city environments.
This book constitutes the refereed post-conference proceedings of the IFIP TC 3 Open Conference on Computers in Education, OCCE 2020, held in Mumbai, India, in January 2020. The 11 full papers and 4 short papers included in this volume were carefully reviewed and selected from 57 submissions. The papers discuss key emerging topics and evolving practices in the area of educational computing research. They are organized in the following topical sections: computing education; learners' and teachers' perspectives; teacher professional development; the industry perspective; and further aspects.
This thesis covers a diverse set of topics related to space-based gravitational wave detectors such as the Laser Interferometer Space Antenna (LISA). The core of the thesis is devoted to the preprocessing of the interferometric link data for a LISA constellation, specifically developing optimal Kalman filters to reduce arm length noise due to clock noise. The approach is to apply Kalman filters of increasing complexity to make optimal estimates of relevant quantities such as constellation arm length, relative clock drift, and Doppler frequencies based on the available measurement data. Depending on the complexity of the filter and the simulated data, these Kalman filter estimates can provide up to a few orders of magnitude improvement over simpler estimators. While the basic concept of the LISA measurement (Time Delay Interferometry) was worked out some time ago, this work brings a level of rigor to the processing of the constellation-level data products. The thesis concludes with some topics related to eLISA, such as a new class of phenomenological waveforms for extreme mass-ratio inspiral sources (EMRIs, one of the main sources for eLISA), an octahedral space-based GW detector that does not require drag-free test masses, and some efficient template-search algorithms for the case of relatively high SNR signals.
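The constellation-level filters in the thesis are considerably more elaborate, but the underlying idea can be sketched with a scalar Kalman filter estimating a constant quantity (say, a relative clock drift) from noisy measurements; all numbers below are illustrative assumptions, not values from the thesis.

```python
import random

def kalman_constant(measurements, meas_var, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state observed through
    additive noise of variance meas_var; returns the final state
    estimate and its variance."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (z - x)      # correct the estimate with the innovation
        p = (1.0 - k) * p        # shrink the estimate variance
    return x, p

# Simulated noisy observations of a drift of 0.5 (illustrative units).
random.seed(0)
observations = [0.5 + random.gauss(0.0, 0.1) for _ in range(500)]
estimate, variance = kalman_constant(observations, meas_var=0.01)
```

After 500 observations the estimate settles close to the true value and the reported variance shrinks far below the single-measurement variance, which is the sense in which such filters can outperform simpler estimators.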
This book provides a multidisciplinary view into how individuals and groups interact with the information environments that surround them. The book discusses how informational environments shape our daily lives, and how digital technologies can improve the ways in which people make use of informational environments. It presents the research and outcomes of a seven-year multidisciplinary research initiative, the Leibniz-WissenschaftsCampus Tubingen Informational Environments, jointly conducted by the Leibniz-Institut fur Wissensmedien (IWM) and the Eberhard Karls Universitat Tubingen. Book chapters from leading international experts in psychology, education, computer science, sociology, and medicine provide a multi-layered and multidisciplinary view on how the interplay between individuals and their informational environments unfolds. Featured topics include: Managing obesity prevention using digital media. Using digital media to assess and promote school teacher competence. Informational environments and their effect on college student dropout. Web-Platforms for game-based learning of orthography and numeracy. How to design adaptive information environments to support self-regulated learning with multimedia. Informational Environments will be of interest to advanced undergraduate students, postgraduate students, researchers and practitioners in various fields of educational psychology, social psychology, education, computer science, communication science, sociology, and medicine.
'Data Mining Patterns' gives an overall view of recent solutions for pattern mining, covering the mining of new kinds of patterns, mining patterns under constraints, new kinds of complex data, and real-world applications of these concepts.
As the applications of data mining, the non-trivial extraction of implicit information in a data set, have expanded in recent years, so has the need for techniques that are tolerable to imprecision, uncertainty, and approximation. Intelligent Soft Computation and Evolving Data Mining: Integrating Advanced Technologies is a compendium that addresses this need. It integrates contrasting techniques of conventional hard computing and soft computing to exploit the tolerance for imprecision, uncertainty, partial truth, and approximation to achieve tractability, robustness and low-cost solution. This book provides a reference to researchers, practitioners, and students in both soft computing and data mining communities, forming a foundation for the development of the field.
This book gathers the outcomes of the second ECCOMAS CM3 Conference series on transport, which addressed the main challenges and opportunities that computation and big data represent for transport and mobility in the automotive, logistics, aeronautics and marine-maritime fields. Through a series of plenary lectures and mini-forums with lectures followed by question-and-answer sessions, the conference explored potential solutions and innovations to improve transport and mobility in surface and air applications. The book seeks to answer the question of how computational research in transport can provide innovative solutions to Green Transportation challenges identified in the ambitious Horizon 2020 program. In particular, the respective papers present the state of the art in transport modeling, simulation and optimization in the fields of maritime, aeronautics, automotive and logistics research. In addition, the content includes two white papers on transport challenges and prospects. Given its scope, the book will be of interest to students, researchers, engineers and practitioners whose work involves the implementation of Intelligent Transport Systems (ITS) software for the optimal use of roads, including safety and security, traffic and travel data, surface and air traffic management, and freight logistics.
This book highlights the state of the art and recent advances in Big Data clustering methods and their innovative applications in contemporary AI-driven systems. The book chapters discuss Deep Learning for Clustering, Blockchain data clustering, Cybersecurity applications such as insider threat detection, scalable distributed clustering methods for massive volumes of data; clustering Big Data Streams such as streams generated by the confluence of Internet of Things, digital and mobile health, human-robot interaction, and social networks; Spark-based Big Data clustering using Particle Swarm Optimization; and Tensor-based clustering for Web graphs, sensor streams, and social networks. The chapters in the book include a balanced coverage of big data clustering theory, methods, tools, frameworks, applications, representation, visualization, and clustering validation.
This book examines the principles of and advances in personalized task recommendation in crowdsourcing systems, with the aim of improving their overall efficiency. It discusses the challenges faced by personalized task recommendation when crowdsourcing systems channel human workforces, knowledge, skills and perspectives beyond traditional organizational boundaries. The solutions presented help interested individuals find tasks that closely match their personal interests and capabilities in a context of ever-increasing opportunities of participating in crowdsourcing activities. In order to explore the design of mechanisms that generate task recommendations based on individual preferences, the book first lays out a conceptual framework that guides the analysis and design of crowdsourcing systems. Based on a comprehensive review of existing research, it then develops and evaluates a new kind of task recommendation service that integrates with existing systems. The resulting prototype provides a platform for both the field study and the practical implementation of task recommendation in productive environments.
Every day we need to solve large problems for which supercomputers are needed. High performance computing (HPC) is a paradigm that makes it possible to implement large-scale computational tasks efficiently on powerful supercomputers, something unthinkable without optimization: we try to minimize the effort and to maximize the achieved benefit. Many challenging real-world problems arising in engineering, economics, medicine and other areas can be formulated as large-scale computational tasks. The volume is a comprehensive collection of extended contributions from the High Performance Computing conference held in Borovets, Bulgaria, in September 2019, and presents recent advances in high performance computing. The topics of interest included in this volume are: HPC software tools, Parallel Algorithms and Scalability, HPC in Big Data analytics, Modelling, Simulation & Optimization in a Data Rich Environment, Advanced numerical methods for HPC, and Hybrid parallel or distributed algorithms. The volume is focused on important large-scale applications like Environmental and Climate Modeling, Computational Chemistry and Heuristic Algorithms.
This book highlights some of the unique aspects of spatio-temporal graph data from the perspectives of modeling and developing scalable algorithms. In the first part of this book, the authors discuss the semantic aspects of spatio-temporal graph data in two application domains, viz., urban transportation and social networks. They then present representational models and data structures that can effectively capture these semantics while ensuring support for computationally scalable algorithms. In the second part of the book, the authors describe algorithmic development issues in spatio-temporal graph data. These algorithms internally use the semantically rich data structures developed in the earlier part of this book. Finally, the authors introduce some upcoming spatio-temporal graph datasets, such as engine measurement data, and discuss some open research problems in the area. This book will be useful as a secondary text for advanced-level students entering relevant fields of computer science, such as transportation and urban planning. It may also be useful for researchers and practitioners in the field of navigational algorithms.
This book addresses the current status, challenges and future directions of data-driven materials discovery and design. It presents the analysis and learning from data as a key theme in many science and cyber related applications. The challenging open questions as well as future directions in the application of data science to materials problems are sketched. Computational and experimental facilities today generate vast amounts of data at an unprecedented rate. The book gives guidance on discovering new knowledge that enables materials innovation to address grand challenges in energy, environment and security, and on forging the clearer link needed between the data from these facilities and the theory and underlying science. The role of inference and optimization methods in distilling the data and constraining predictions using insights and results from theory is key to achieving the desired goals of real-time analysis and feedback. Thus, the importance of this book lies in emphasizing that the full value of knowledge-driven discovery using data can only be realized by integrating statistical and information sciences with materials science, which is increasingly dependent on high-throughput and large-scale computational and experimental data gathering efforts. This is especially the case as we enter a new era of big data in materials science with the planning of future experimental facilities such as the Linac Coherent Light Source at Stanford (LCLS-II), the European X-ray Free Electron Laser (EXFEL) and MaRIE (Matter Radiation in Extremes), the signature concept facility from Los Alamos National Laboratory. These facilities are expected to generate hundreds of terabytes to several petabytes of in situ spatially and temporally resolved data per sample.
The questions that then arise include how we can learn from the data to accelerate the processing and analysis of reconstructed microstructure, rapidly map spatially resolved properties from high throughput data, devise diagnostics for pattern detection, and guide experiments towards desired targeted properties. The authors are an interdisciplinary group of leading experts who bring the excitement of the nascent and rapidly emerging field of materials informatics to the reader.
This volume collects contributions written by different experts in honor of Prof. Jaime Munoz Masque. It covers a wide variety of research topics, from differential geometry to algebra, but particularly focuses on the geometric formulation of variational calculus; geometric mechanics and field theories; symmetries and conservation laws of differential equations, and pseudo-Riemannian geometry of homogeneous spaces. It also discusses algebraic applications to cryptography and number theory. It offers state-of-the-art contributions in the context of current research trends. The final result is a challenging panoramic view of connecting problems that initially appear distant.
This book demonstrates how quantitative methods for text analysis can successfully combine with qualitative methods in the study of different disciplines of the Humanities and Social Sciences (HSS). The book focuses on learning about the evolution of ideas of HSS disciplines through a distant reading of the contents conveyed by scientific literature, in order to retrieve the most relevant topics being debated over time. Quantitative methods, statistical techniques and software packages are used to identify and study the main subject matters of a discipline from raw textual data, both in the past and today. The book also deals with the concept of quality of life of words and aims to foster a discussion about the life cycle of scientific ideas. Textual data retrieved from large corpora pose interesting challenges for any data analysis method and today represent a growing area of research in many fields. New problems emerge from the growing availability of large databases and new methods are needed to retrieve significant information from those large information sources. This book can be used to explain how quantitative methods can be part of the research instrumentation and the "toolbox" of scholars of Humanities and Social Sciences. The book contains numerous examples and a description of the main methods in use, with references to literature and available software. Most of the chapters of the book have been written in a non-technical language for HSS researchers without mathematical, computer or statistical backgrounds.
This book provides a review of advanced topics relating to the theory, research, analysis and implementation in the context of big data platforms and their applications, with a focus on methods, techniques, and performance evaluation. The explosive growth in the volume, speed, and variety of data being produced every day requires a continuous increase in the processing speeds of servers and of entire network infrastructures, as well as new resource management models. This poses significant challenges (and provides striking development opportunities) for data intensive and high-performance computing, i.e., how to efficiently turn extremely large datasets into valuable information and meaningful knowledge. The task of context data management is further complicated by the variety of sources such data derives from, resulting in different data formats, with varying storage, transformation, delivery, and archiving requirements. At the same time rapid responses are needed for real-time applications. With the emergence of cloud infrastructures, achieving highly scalable data management in such contexts is a critical problem, as the overall application performance is highly dependent on the properties of the data management service.
The Semantic Web represents a vision for how to make the huge amount of information on the Web automatically processable by machines on a large scale. For this purpose, a whole suite of standards, technologies and related tools have been specified and developed over the last couple of years and they have now become the foundation for numerous new applications. A Developer's Guide to the Semantic Web helps the reader to learn the core standards, key components and underlying concepts. It provides in-depth coverage of both the what-is and how-to aspects of the Semantic Web. From Yu's presentation, the reader will obtain not only a solid understanding about the Semantic Web, but also learn how to combine all the pieces to build new applications on the Semantic Web. The second edition of this book not only adds detailed coverage of the latest W3C standards such as SPARQL 1.1 and RDB2RDF, it also updates the readers by following recent developments. More specifically, it includes five new chapters on schema.org and semantic markup, on Semantic Web technologies used in social networks and on new applications and projects such as data.gov and Wikidata and it also provides a complete coding example of building a search engine that supports Rich Snippets. Software developers in industry and students specializing in Web development or Semantic Web technologies will find in this book the most complete guide to this exciting field available today. Based on the step-by-step presentation of real-world projects, where the technologies and standards are applied, they will acquire the knowledge needed to design and implement state-of-the-art applications.
This book introduces novel methods for leak and blockage detection in pipelines. Leaks occur as a result of ageing pipelines or of excess pressure caused by operational error or rapid valve variation. Many factors influence blockage formation in pipes, such as wax deposition, which leads to the formation and eventual growth of solid layers, and the deposition of suspended solid particles in the fluids. In this book, initially, different categories of leak detection are overviewed. Afterwards, the observability and controllability of pipeline systems are analysed. Control variables can usually be represented by pressures and flow rates at the start and end points of the pipe. Different cases are considered based on the selection of control variables to model the system. Several theorems are presented to test the observability and controllability of the system. In this book, the leakage flow in the pipelines is studied numerically to find the relationship between leakage flow and pressure difference. Removing leakage completely is almost impossible; hence, the development of a formal systematic leakage control policy is the most reliable approach to reducing leakage rates.
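The relationship between leakage flow and pressure difference that the book studies numerically can be illustrated with the standard orifice (Torricelli) relation; the discharge coefficient and geometry below are illustrative assumptions, not results from the book.

```python
import math

def leak_flow(delta_p, area, rho=1000.0, cd=0.6):
    """Orifice relation Q = cd * A * sqrt(2 * delta_p / rho): volumetric
    leak flow (m^3/s) through an opening of area A (m^2) driven by a
    pressure difference delta_p (Pa) in a fluid of density rho (kg/m^3),
    with dimensionless discharge coefficient cd."""
    return cd * area * math.sqrt(2.0 * delta_p / rho)
```

Because flow grows only with the square root of the pressure difference, halving the driving pressure reduces the leak rate by only about 30%, which is one reason pressure management alone rarely removes leakage completely.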
Pattern Recognition on Oriented Matroids covers a range of innovative problems in combinatorics, poset and graph theories, optimization, and number theory that constitute a far-reaching extension of the arsenal of committee methods in pattern recognition. The groundwork for the modern committee theory was laid in the mid-1960s, when it was shown that the familiar notion of solution to a feasible system of linear inequalities has ingenious analogues which can serve as collective solutions to infeasible systems. A hierarchy of dialects in the language of mathematics, for instance, open cones in the context of linear inequality systems, regions of hyperplane arrangements, and maximal covectors (or topes) of oriented matroids, provides an excellent opportunity to take a fresh look at the infeasible system of homogeneous strict linear inequalities - the standard working model for the contradictory two-class pattern recognition problem in its geometric setting. The universal language of oriented matroid theory considerably simplifies a structural and enumerative analysis of applied aspects of the infeasibility phenomenon. 
The present book is devoted to several selected topics in the emerging theory of pattern recognition on oriented matroids: the questions of existence and applicability of matroidal generalizations of committee decision rules and related graph-theoretic constructions to oriented matroids with very weak restrictions on their structural properties; a study (in which, in particular, interesting subsequences of the Farey sequence appear naturally) of the hierarchy of the corresponding tope committees; a description of the three-tope committees that are the most attractive approximation to the notion of solution to an infeasible system of linear constraints; an application of convexity in oriented matroids as well as blocker constructions in combinatorial optimization and in poset theory to enumerative problems on tope committees; an attempt to clarify how elementary changes (one-element reorientations) in an oriented matroid affect the family of its tope committees; a discrete Fourier analysis of the important family of critical tope committees through rank and distance relations in the tope poset and the tope graph; the characterization of a key combinatorial role played by the symmetric cycles in hypercube graphs. Contents: Oriented Matroids, the Pattern Recognition Problem, and Tope Committees; Boolean Intervals; Dehn-Sommerville Type Relations; Farey Subsequences; Blocking Sets of Set Families, and Absolute Blocking Constructions in Posets; Committees of Set Families, and Relative Blocking Constructions in Posets; Layers of Tope Committees; Three-Tope Committees; Halfspaces, Convex Sets, and Tope Committees; Tope Committees and Reorientations of Oriented Matroids; Topes and Critical Committees; Critical Committees and Distance Signals; Symmetric Cycles in the Hypercube Graphs.
Clinical Decision Support and Beyond: Progress and Opportunities in Knowledge-Enhanced Health and Healthcare, now in its third edition, discusses the underpinnings of effective, reliable, and easy-to-use clinical decision support systems at the point of care as a productive way of managing the flood of data, knowledge, and misinformation when providing patient care. Incorporating CDS into electronic health record systems has been underway for decades; however, its complexities, costs, and user resistance have caused adoption to lag behind its potential. It is thus of utmost importance to understand the process in detail in order to take full advantage of its capabilities. The book expands and updates the content of the previous edition, and discusses topics such as integration of CDS into workflow, context-driven anticipation of needs for CDS, new forms of CDS derived from data analytics, precision medicine, population health, integration of personal monitoring, and patient-facing CDS. In addition, it discusses population health management, public health CDS, and CDS to help reduce health disparities. It is a valuable resource for clinicians, practitioners, students and members of medical and biomedical fields who are interested in learning more about the potential of clinical decision support to improve health and wellness and the quality of health care.