Welcome to Loot.co.za!
This book provides a systematic review of advanced techniques for analysing large collections of documents, ranging from the elementary to the profound and covering all aspects of the visualization of text documents. In particular, it starts by introducing the fundamental concepts of information visualization and visual analysis, followed by a brief survey of the field of text visualization and of the data models commonly used to convert documents into a structured form suitable for visualization. The rest of the book then introduces the key visualization techniques in detail, with concrete examples, including visualizing document similarity, content, and sentiment, as well as text corpus exploration systems.
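The "structured form" such books typically start from is a vector-space model. As a hedged illustration (not this book's own pipeline), a minimal TF-IDF representation with cosine similarity, a common basis for visualizing document similarity, can be sketched as follows; the documents and weights here are made up:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Convert raw documents into a structured form: one TF-IDF
    weight per (document, term) pair, stored as sparse dicts."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # document frequency: in how many documents each term occurs
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["databases store structured data",
        "visualization of text documents",
        "text documents and text mining"]
vecs = tfidf_vectors(docs)
# Documents 1 and 2 share terms ("text", "documents"), so they are
# more similar to each other than either is to document 0.
print(cosine(vecs[1], vecs[2]) > cosine(vecs[0], vecs[1]))
```

A visualization layer would then project these vectors (e.g. by multidimensional scaling) so that similar documents land near each other.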
This book provides a comprehensive overview of the key security concerns surrounding the upcoming Internet of Things (IoT) and introduces readers to the protocols adopted in the IoT. It also analyses the vulnerabilities, attacks and defense mechanisms, highlighting the security issues in the context of big data. Lastly, trust management approaches and ubiquitous learning applications are examined in detail. As such, the book sets the stage for developing and securing IoT applications both today and in the future.
This thesis focuses on the problem of optimizing the quality of network multimedia services. This problem spans multiple domains, from the subjective perception of multimedia quality to computer network management. The work done in this thesis approaches the problem at different levels, developing methods for modeling the subjective perception of quality based on objectively measurable parameters of the multimedia coding process as well as of the transport over computer networks. The modeling of subjective perception is motivated by work done in psychophysics, while using machine learning techniques to map network conditions to the human perception of video services. Furthermore, the work develops models for efficient control of multimedia systems operating in dynamic networked environments, with the goal of delivering optimized Quality of Experience. Overall, this thesis delivers a set of methods for monitoring and optimizing the quality of multimedia services that adapt to the dynamic environment of the computer networks in which they operate.
This informative book goes beyond the technical aspects of data management to provide detailed analyses of quality problems and their impacts, potential solutions and how they are combined to form an overall data quality program, senior management's role, methods used to make improvements, and the life-cycle of data quality. It concludes with case studies, summaries of main points, roles and responsibilities for each individual, and a helpful listing of "dos and don'ts".
This thesis presents an experimental study of quantum memory based on cold atomic ensembles and discusses photonic entanglement. It mainly focuses on experimental research on storing orbital angular momentum, and introduces readers to methods for storing a single photon carried by an image or an entanglement of spatial modes. The thesis also discusses the storage of photonic entanglement using the Raman scheme as a step toward implementing high-bandwidth quantum memory. The storage of photonic entanglement is central to achieving long-distance quantum communication based on quantum repeaters and scalable linear optical quantum computation. Addressing this key issue, the findings presented in the thesis are very promising with regard to future high-speed and high-capacity quantum communications.
This thesis covers a diverse set of topics related to space-based gravitational wave detectors such as the Laser Interferometer Space Antenna (LISA). The core of the thesis is devoted to the preprocessing of the interferometric link data for a LISA constellation, specifically developing optimal Kalman filters to reduce arm length noise due to clock noise. The approach is to apply Kalman filters of increasing complexity to make optimal estimates of relevant quantities such as constellation arm length, relative clock drift, and Doppler frequencies based on the available measurement data. Depending on the complexity of the filter and the simulated data, these Kalman filter estimates can provide up to a few orders of magnitude improvement over simpler estimators. While the basic concept of the LISA measurement (Time Delay Interferometry) was worked out some time ago, this work brings a level of rigor to the processing of the constellation-level data products. The thesis concludes with some topics related to eLISA, such as a new class of phenomenological waveforms for extreme mass-ratio inspiral sources (EMRIs, one of the main sources for eLISA), an octahedral space-based GW detector that does not require drag-free test masses, and some efficient template-search algorithms for the case of relatively high-SNR signals.
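The Kalman-filtering idea can be illustrated with a deliberately simple scalar filter. This is a sketch of the generic predict/update recursion only, not the thesis's constellation-level filters; the arm-length value and noise levels are invented for the example:

```python
import random

def kalman_1d(measurements, q=1e-6, r=1.0):
    """Minimal scalar Kalman filter: estimate a slowly varying
    quantity (here standing in for an arm length) from noisy
    measurements. q = process noise variance, r = measurement
    noise variance."""
    x, p = measurements[0], 1.0   # initial estimate and its variance
    for z in measurements[1:]:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update: pull estimate toward measurement
        p *= (1 - k)              # updated estimate variance
    return x

random.seed(0)
true_value = 2.5e9                # fictitious arm length in metres
noisy = [true_value + random.gauss(0, 1000.0) for _ in range(500)]
estimate = kalman_1d(noisy)
# Filtering over many samples beats any single raw measurement.
print(abs(estimate - true_value) < 1000.0)
```

With a near-zero process noise the filter behaves like a recursive mean, which is the right behaviour for a nearly constant quantity; richer state models (clock drift, Doppler) extend the same recursion to vectors.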
This book reports on a novel concept of mechanism transitions for the design of highly scalable and adaptive publish/subscribe systems. First, it introduces relevant mechanisms for location-based filtering and locality-aware dissemination of events based on a thorough review of the state-of-the-art. This is followed by a detailed description of the design of a transition-enabled publish/subscribe system that enables seamless switching between mechanisms during runtime. Lastly, the proposed concepts are evaluated within the challenging context of location-based mobile applications. The book assesses in depth the performance and cost of transition execution, highlighting the impact of the proposed state transfer mechanism and the potential of coexisting transition-enabled mechanisms.
This book offers an original and broad exploration of the fundamental methods in Clustering and Combinatorial Data Analysis, presenting new formulations and ideas within this very active field. With extensive introductions, formal and mathematical developments and real case studies, this book provides readers with a deeper understanding of the mutual relationships between these methods, which are clearly expressed with respect to three facets: logical, combinatorial and statistical. Using relational mathematical representation, all types of data structures can be handled in precise and unified ways, which the author develops in three stages: clustering a set of descriptive attributes; clustering a set of objects or a set of object categories; and establishing the correspondence between these two dual clusterings. Tools for interpreting the reasons for a given cluster or clustering are also included. Foundations and Methods in Combinatorial and Statistical Data Analysis and Clustering will be a valuable resource for students and researchers who are interested in the areas of Data Analysis, Clustering, Data Mining and Knowledge Discovery.
This book is an introduction to both offensive and defensive techniques of cyberdeception. Unlike most books on cyberdeception, this book focuses on methods rather than detection. It treats cyberdeception techniques that are current, novel, and practical, and that go well beyond traditional honeypots. It contains classroom-friendly features: (1) minimal use of programming details and mathematics, (2) modular chapters that can be covered in many orders, (3) exercises with each chapter, and (4) an extensive reference list. Cyberattacks have grown serious enough that understanding and using deception is essential to safe operation in cyberspace. The deception techniques covered are impersonation, delays, fakes, camouflage, false excuses, and social engineering. Special attention is devoted to cyberdeception in industrial control systems and within operating systems. This material is supported by a detailed discussion of how to plan deceptions and calculate their detectability and effectiveness. Some of the chapters provide further technical details of specific deception techniques and their application. Cyberdeception can be conducted ethically and efficiently when necessary by following a few basic principles. This book is intended for advanced undergraduate students and graduate students, as well as computer professionals learning on their own. It will be especially useful for anyone who helps run important and essential computer systems such as critical-infrastructure and military systems.
This book discusses the fusion of mobile and WiFi network data with semantic technologies and diverse context sources for offering semantically enriched context-aware services in the telecommunications domain. It presents the OpenMobileNetwork as a platform for providing estimated and semantically enriched mobile and WiFi network topology data using the principles of Linked Data. This platform is based on the OpenMobileNetwork Ontology consisting of a set of network context ontology facets that describe mobile network cells as well as WiFi access points from a topological perspective and geographically relate their coverage areas to other context sources. The book also introduces Linked Crowdsourced Data and its corresponding Context Data Cloud Ontology, which is a crowdsourced dataset combining static location data with dynamic context information. Linked Crowdsourced Data supports the OpenMobileNetwork by providing the necessary context data richness for more sophisticated semantically enriched context-aware services. Various application scenarios and proof of concept services as well as two separate evaluations are part of the book. As the usability of the provided services closely depends on the quality of the approximated network topologies, it compares the estimated positions for mobile network cells within the OpenMobileNetwork to a small set of real-world cell positions. The results prove that context-aware services based on the OpenMobileNetwork rely on a solid and accurate network topology dataset. The book also evaluates the performance of the exemplary Semantic Tracking as well as Semantic Geocoding services, verifying the applicability and added value of semantically enriched mobile and WiFi network data.
The Digital Humanities have arrived at a moment when digital Big Data is becoming more readily available, opening exciting new avenues of inquiry but also new challenges. This pioneering book describes and demonstrates the ways these data can be explored to construct cultural heritage knowledge, for research and in teaching and learning. It helps humanities scholars to grasp Big Data in order to do their work, whether that means understanding the underlying algorithms at work in search engines, or designing and using their own tools to process large amounts of information. Demonstrating what digital tools have to offer and also what 'digital' does to how we understand the past, the authors introduce the many different tools and developing approaches in Big Data for historical and humanistic scholarship, show how to use them and what to be wary of, and discuss the kinds of questions and new perspectives this new macroscopic perspective opens up. Authored 'live' online with ongoing feedback from the wider digital history community, Exploring Big Historical Data breaks new ground and sets the direction for the conversation into the future. It represents the current state-of-the-art thinking in the field and exemplifies the way that digital work can enhance public engagement in the humanities. Exploring Big Historical Data should be the go-to resource for undergraduate and graduate students confronted by a vast corpus of data, and researchers encountering these methods for the first time. It will also offer a helping hand to the interested individual seeking to make sense of genealogical data or digitized newspapers, and even to local historical societies trying to see the value in digitizing their holdings. The companion website to Exploring Big Historical Data can be found at www.themacroscope.org/. On this site you will find code, a discussion forum, essays, and data files that accompany this book.
This volume collects contributions written by different experts in honor of Prof. Jaime Munoz Masque. It covers a wide variety of research topics, from differential geometry to algebra, but particularly focuses on the geometric formulation of variational calculus; geometric mechanics and field theories; symmetries and conservation laws of differential equations, and pseudo-Riemannian geometry of homogeneous spaces. It also discusses algebraic applications to cryptography and number theory. It offers state-of-the-art contributions in the context of current research trends. The final result is a challenging panoramic view of connecting problems that initially appear distant.
The history and future of geographic information (GI) in the context of big data create new concerns over its organization, access and use. In this book the authors explore both the background and the present challenges facing the preservation of GI, focusing on the roles of librarians, archivists, data scientists, and other information professionals in the creation of GI records and in their organization, access, and use.
In this work we review the main techniques for enumeration algorithms and present four examples of enumeration algorithms that can be applied to deal efficiently with biological problems modelled using biological networks: enumerating the central and peripheral nodes of a network, enumerating stories, enumerating paths or cycles, and enumerating bubbles. Notice that the corresponding computational problems we define are of more general interest, and our results hold for arbitrary graphs. Enumerating all the most and least central vertices of a network according to their eccentricity is an example of an enumeration problem whose solutions are polynomially many and can be listed in polynomial time, very often in linear or almost linear time in practice. Enumerating stories, i.e. all maximal directed acyclic subgraphs of a graph G whose sources and targets belong to a predefined subset of the vertices, is on the other hand an example of an enumeration problem with an exponential number of solutions, which can be solved using a non-trivial brute-force approach. Given a metabolic network, each individual story should explain how some interesting metabolites are derived from some others through a chain of reactions, while keeping all alternative pathways between sources and targets. Enumerating cycles or paths in an undirected graph, such as an undirected protein-protein interaction network, is an example of an enumeration problem in which all the solutions can be listed through an optimal algorithm, i.e. the time required to list all the solutions is dominated by the time to read the graph plus the time required to print all of them. By extending this result to directed graphs, it would be possible to deal more efficiently with feedback loops and signed-path analysis in signed or interaction directed graphs, such as gene regulatory networks.
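The first problem, listing the most and least central vertices by eccentricity, can be sketched with a brute-force all-pairs BFS. This is the straightforward polynomial baseline, not the faster practical algorithms the text alludes to, and the toy graph is invented for the example:

```python
from collections import deque

def eccentricities(adj):
    """Eccentricity of each vertex: the greatest BFS distance to any
    other vertex. Assumes a connected, undirected graph given as an
    adjacency dict. One BFS per vertex -> O(n * m) overall."""
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())
    return ecc

# A path graph a-b-c-d-e: the centre is c, the periphery is {a, e}.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
       'd': ['c', 'e'], 'e': ['d']}
ecc = eccentricities(adj)
center = sorted(v for v in ecc if ecc[v] == min(ecc.values()))
periphery = sorted(v for v in ecc if ecc[v] == max(ecc.values()))
print(center, periphery)   # -> ['c'] ['a', 'e']
```

Once all eccentricities are known, enumerating the most central (minimum eccentricity) and peripheral (maximum eccentricity) vertices is a single linear scan, which matches the claim that the solutions can be listed in polynomial time.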
Finally, enumerating mouths or bubbles with a source s in a directed graph, that is, enumerating all pairs of vertex-disjoint directed paths between the source s and all possible targets, is an example of an enumeration problem in which all the solutions can be listed through a linear-delay algorithm, meaning that the delay between any two consecutive solutions is linear, by turning the problem into a constrained cycle enumeration problem. Such patterns, in a de Bruijn graph representation of the reads obtained by sequencing, are related to polymorphisms in DNA- or RNA-seq data.
This book provides fresh insights into the cutting edge of multimedia data mining, reflecting how the research focus has shifted towards networked social communities, mobile devices and sensors. The work describes how the history of multimedia data processing can be viewed as a sequence of disruptive innovations. Across the chapters, the discussion covers the practical frameworks, libraries, and open source software that enable the development of ground-breaking research into practical applications. Features: reviews how innovations in mobile, social, cognitive, cloud and organic based computing impacts upon the development of multimedia data mining; provides practical details on implementing the technology for solving real-world problems; includes chapters devoted to privacy issues in multimedia social environments and large-scale biometric data processing; covers content and concept based multimedia search and advanced algorithms for multimedia data representation, processing and visualization.
This book highlights recent research advances in unsupervised learning using natural computing techniques such as artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, artificial life, quantum computing, DNA computing, and others. The book also includes information on the use of natural computing techniques for unsupervised learning tasks. It features several trending topics, such as big data scalability, wireless network analysis, engineering optimization, social media, and complex network analytics, and shows how these applications have triggered a number of new natural computing techniques that improve the performance of unsupervised learning methods. With this book, readers can readily grasp new advances in this area and gain a systematic, in-depth understanding of its scope, and can rapidly explore new methods and applications at the junction between natural computing and unsupervised learning. The book includes advances in unsupervised learning using natural computing techniques; reports on topics in emerging areas such as evolutionary multi-objective unsupervised learning; and features techniques such as evolutionary multi-objective algorithms and many-objective swarm intelligence algorithms.
Integrative Document and Content Management: Strategies for Exploiting Enterprise Knowledge blends theory and practice to provide practical knowledge and guidelines to enterprises wishing to understand the importance of managing documents to their operations along with presentation of document content to facilitate business planning and operations support. This book gives extensive pointers to those who propose to embark upon the implementation of integrated document management systems and to embrace Web content management within a life cycle framework covering document creation to Web publication.
'Data Mining Patterns' gives an overall view of the recent solutions for mining and covers mining new kinds of patterns, mining patterns under constraints, new kinds of complex data and real-world applications of these concepts.
Graphs are a powerful tool for representing and understanding objects and their relationships in various application domains. The growing popularity of graph databases has generated data management problems that include finding efficient techniques for compressing large graph databases and suitable techniques for visualizing, browsing, and navigating large graph databases. Graph Data Management: Techniques and Applications is a central reference source for different data management techniques for graph data structures and their application. This book discusses graphs for modeling complex structured and schemaless data from the Semantic Web, social networks, protein networks, chemical compounds, and multimedia databases and offers essential research for academics working in the interdisciplinary domains of databases, data mining, and multimedia technology.
Conceptual modeling has always been one of the main issues in information systems engineering as it aims to describe the general knowledge of the system at an abstract level that facilitates user understanding and software development. This collection of selected papers provides a comprehensive and extremely readable overview of what conceptual modeling is and perspectives on making it more and more relevant in our society. It covers topics like modeling the human genome, blockchain technology, model-driven software development, data integration, and wiki-like repositories and demonstrates the general applicability of conceptual modeling to various problems in diverse domains. Overall, this book is a source of inspiration for everybody in academia working on the vision of creating a strong, fruitful and creative community of conceptual modelers. With this book the editors and authors want to honor Prof. Antoni Olive for his enormous and ongoing contributions to the conceptual modeling discipline. It was presented to him on the occasion of his keynote at ER 2017 in Valencia, a conference that he has contributed to and supported for over 20 years. Thank you very much to Antoni for so many years of cooperation and friendship.
With the onset of massive cosmological data collection through media such as the Sloan Digital Sky Survey (SDSS), galaxy classification has been accomplished for the most part with the help of citizen science communities like Galaxy Zoo. Seeking the wisdom of the crowd for such Big Data processing has proved extremely beneficial. However, an analysis of one of the Galaxy Zoo morphological classification data sets has shown that a significant majority of all classified galaxies are labelled as 'Uncertain'. This book reports on how to use data mining, more specifically clustering, to identify galaxies that the public has shown some degree of uncertainty about as to whether they belong to one morphology type or another. The book shows the importance of transitions between different data mining techniques in an insightful workflow. It demonstrates that clustering enables the identification of discriminating features in the analysed data sets, adopting a novel feature selection algorithm called Incremental Feature Selection (IFS). The book shows the use of state-of-the-art classification techniques, Random Forests and Support Vector Machines, to validate the acquired results. It is concluded that a vast majority of these galaxies are, in fact, of spiral morphology, with a small subset potentially consisting of stars, elliptical galaxies or galaxies of other morphological variants.
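As a hedged illustration of the clustering step only (not the book's IFS workflow or the actual Galaxy Zoo data), a two-cluster k-means with deterministic farthest-point seeding separates two made-up feature blobs:

```python
def dist2(p, q):
    """Squared Euclidean distance between two feature tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans2(points, iters=10):
    """Plain two-cluster k-means: assign each point to the nearer
    centre, then move each centre to the mean of its points.
    Deterministic seeding: first point plus the point farthest
    from it. Assumes neither cluster ever becomes empty."""
    c0 = points[0]
    c1 = max(points, key=lambda p: dist2(p, c0))
    centres = [c0, c1]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # bool -> 0/1 selects the nearer of the two centres
            clusters[dist2(p, centres[1]) < dist2(p, centres[0])].append(p)
        centres = [tuple(sum(x) / len(cl) for x in zip(*cl))
                   for cl in clusters]
    return centres, clusters

# Two well-separated blobs standing in for feature vectors of
# "Uncertain" galaxies (all values invented for the sketch).
pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centres, clusters = kmeans2(pts)
print(sorted(len(c) for c in clusters))   # -> [3, 3]
```

In the book's setting the cluster assignments would then be validated against supervised classifiers (Random Forests, SVMs) rather than inspected by eye.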
In this book, contributors provide insights into the latest developments of Edge Computing/Mobile Edge Computing, specifically in terms of communication protocols and related applications and architectures. The book provides help to Edge service providers, Edge service consumers, and Edge service developers interested in getting the latest knowledge in the area. The book includes relevant Edge Computing topics such as applications; architecture; services; inter-operability; data analytics; deployment and service; resource management; simulation and modeling; and security and privacy. Targeted readers include those from varying disciplines who are interested in designing and deploying Edge Computing. Features the latest research related to Edge Computing, from a variety of perspectives; Tackles Edge Computing in academia and industry, featuring a variety of new and innovative operational ideas; Provides a strong foundation for researchers to advance further in the Edge Computing domain.
This book examines the principles of and advances in personalized task recommendation in crowdsourcing systems, with the aim of improving their overall efficiency. It discusses the challenges faced by personalized task recommendation when crowdsourcing systems channel human workforces, knowledge, skills and perspectives beyond traditional organizational boundaries. The solutions presented help interested individuals find tasks that closely match their personal interests and capabilities in a context of ever-increasing opportunities of participating in crowdsourcing activities. In order to explore the design of mechanisms that generate task recommendations based on individual preferences, the book first lays out a conceptual framework that guides the analysis and design of crowdsourcing systems. Based on a comprehensive review of existing research, it then develops and evaluates a new kind of task recommendation service that integrates with existing systems. The resulting prototype provides a platform for both the field study and the practical implementation of task recommendation in productive environments.
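A task-recommendation mechanism of the kind discussed can be sketched, under the simplifying assumption that workers and tasks are described by keyword sets, by ranking tasks by Jaccard overlap with a worker's interests. The task names and keywords below are hypothetical, not from the book:

```python
def jaccard(a, b):
    """Jaccard overlap between two keyword sets (0 when both empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(worker_skills, tasks, k=2):
    """Rank tasks by keyword overlap with a worker's interests and
    return the top k. A deliberately simple stand-in for a
    personalized task recommendation service."""
    ranked = sorted(tasks,
                    key=lambda t: jaccard(worker_skills, tasks[t]),
                    reverse=True)
    return ranked[:k]

tasks = {
    "translate-survey": {"french", "writing"},
    "label-images":     {"vision", "annotation"},
    "tag-tweets":       {"nlp", "annotation", "writing"},
}
print(recommend({"writing", "annotation", "nlp"}, tasks))
```

Real systems replace the keyword sets with learned preference models, but the core design is the same: score every open task against an individual profile and surface the best matches.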
Transactions are a concept related to the logical database as seen from the perspective of database application programmers: a transaction is a sequence of database actions that is to be executed as an atomic unit of work. The processing of transactions on databases is a well-established area with many of its foundations having already been laid in the late 1970s and early 1980s. The unique feature of this textbook is that it bridges the gap between the theory of transactions on the logical database and the implementation of the related actions on the underlying physical database. The authors relate the logical database, which is composed of a dynamically changing set of data items with unique keys, and the underlying physical database with a set of fixed-size data and index pages on disk. Their treatment of transaction processing builds on the "do-redo-undo" recovery paradigm, and all methods and algorithms presented are carefully designed to be compatible with this paradigm as well as with write-ahead logging, steal-and-no-force buffering, and fine-grained concurrency control. Chapters 1 to 6 address the basics needed to fully appreciate transaction processing on a centralized database system within the context of our transaction model, covering topics like ACID properties, database integrity, buffering, rollbacks, isolation, and the interplay of logical locks and physical latches. Chapters 7 and 8 present advanced features including deadlock-free algorithms for reading, inserting and deleting tuples, while the remaining chapters cover additional advanced topics extending on the preceding foundational chapters, including multi-granular locking, bulk actions, versioning, distributed updates, and write-intensive transactions. This book is primarily intended as a text for advanced undergraduate or graduate courses on database management in general or transaction processing in particular.
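The notion of a transaction as an atomic unit of work can be illustrated with a toy key-value store that logs before-images so that an abort undoes every write. This sketches only the do-undo idea; it is not the book's full do-redo-undo recovery paradigm with write-ahead logging and steal-and-no-force buffering:

```python
class MiniTxnDB:
    """Toy single-transaction store: every write is logged together
    with its before-image, so abort() can roll the transaction back
    and commit() makes its writes permanent. Illustrative only."""

    def __init__(self):
        self.data = {}
        self.log = []          # (key, before_image) pairs, newest last

    def write(self, key, value):
        self.log.append((key, self.data.get(key)))   # save before-image
        self.data[key] = value                       # the "do" action

    def commit(self):
        self.log.clear()       # forget undo info; changes stand

    def abort(self):
        # undo in reverse order, restoring each before-image
        for key, before in reversed(self.log):
            if before is None:
                self.data.pop(key, None)             # key did not exist
            else:
                self.data[key] = before
        self.log.clear()

db = MiniTxnDB()
db.write("x", 1)
db.commit()
db.write("x", 2)
db.write("y", 3)
db.abort()                     # the aborted transaction leaves no trace
print(db.data)                 # -> {'x': 1}
```

A real engine also writes redo records to a persistent log before the pages reach disk, so the same atomicity survives crashes, which is exactly the gap between logical and physical database that the book addresses.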
This book presents a unique approach to stream data mining. Unlike the vast majority of previous approaches, which are largely based on heuristics, it highlights methods and algorithms that are mathematically justified. First, it describes how to adapt static decision trees to accommodate data streams; in this regard, new splitting criteria are developed to guarantee that they are asymptotically equivalent to the classical batch tree. Moreover, new decision trees are designed, leading to the original concept of hybrid trees. In turn, nonparametric techniques based on Parzen kernels and orthogonal series are employed to address concept drift in the problem of non-stationary regressions and classification in a time-varying environment. Lastly, an extremely challenging problem that involves designing ensembles and automatically choosing their sizes is described and solved. Given its scope, the book is intended for a professional audience of researchers and practitioners who deal with stream data, e.g. in telecommunication, banking, and sensor networks.
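For contrast with the book's mathematically justified criteria, the widely used heuristic split test in classical stream decision trees relies on the Hoeffding bound: split only when the observed gain gap between the two best attributes exceeds a bound that shrinks as more examples arrive. A minimal sketch, with invented gain values:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the true mean of
    a random variable with the given range lies within this epsilon
    of the mean observed over n samples."""
    return math.sqrt(value_range ** 2 * math.log(1 / delta) / (2 * n))

def should_split(gain_best, gain_second, n, value_range=1.0, delta=1e-7):
    """Split when the best attribute's observed gain advantage over
    the runner-up exceeds the Hoeffding epsilon for n examples."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)

# After few examples a 0.05 gain gap is not yet trustworthy...
print(should_split(0.30, 0.25, n=100))
# ...but the same gap becomes decisive once enough stream data arrives.
print(should_split(0.30, 0.25, n=20000))
```

The book's contribution is to replace this kind of heuristic threshold with criteria whose split decisions are provably asymptotically equivalent to those of the batch-trained tree.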
You may like...
- Fundamentals of Spatial Information… by Robert Laurini, Derek Thompson (Hardcover, R1,451)
- Data Analytics for Social Microblogging… by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,335)
- Management Of Information Security by Michael Whitman, Herbert Mattord (Paperback)
- Cognitive and Soft Computing Techniques… by Akash Kumar Bhoi, Victor Hugo Costa de Albuquerque, … (Paperback, R2,583)
- CompTIA Data+ DA0-001 Exam Cram by Akhil Behl, Sivasubramanian (Digital product license key, R1,024)
- Blockchain Life - Making Sense of the… by Kary Oberbrunner, Lee Richter (Hardcover, R506)