This book provides fresh insights into the cutting edge of multimedia data mining, reflecting how the research focus has shifted towards networked social communities, mobile devices and sensors. The work describes how the history of multimedia data processing can be viewed as a sequence of disruptive innovations. Across the chapters, the discussion covers the practical frameworks, libraries, and open source software that enable the development of ground-breaking research into practical applications. Features: reviews how innovations in mobile, social, cognitive, cloud and organic-based computing impact upon the development of multimedia data mining; provides practical details on implementing the technology for solving real-world problems; includes chapters devoted to privacy issues in multimedia social environments and large-scale biometric data processing; covers content- and concept-based multimedia search and advanced algorithms for multimedia data representation, processing and visualization.
This volume collects contributions written by different experts in honor of Prof. Jaime Munoz Masque. It covers a wide variety of research topics, from differential geometry to algebra, but particularly focuses on the geometric formulation of variational calculus; geometric mechanics and field theories; symmetries and conservation laws of differential equations, and pseudo-Riemannian geometry of homogeneous spaces. It also discusses algebraic applications to cryptography and number theory. It offers state-of-the-art contributions in the context of current research trends. The final result is a challenging panoramic view of connecting problems that initially appear distant.
The Digital Humanities have arrived at a moment when digital Big Data is becoming more readily available, opening exciting new avenues of inquiry but also new challenges. This pioneering book describes and demonstrates the ways these data can be explored to construct cultural heritage knowledge, for research and in teaching and learning. It helps humanities scholars to grasp Big Data in order to do their work, whether that means understanding the underlying algorithms at work in search engines, or designing and using their own tools to process large amounts of information. Demonstrating what digital tools have to offer and also what 'digital' does to how we understand the past, the authors introduce the many different tools and developing approaches in Big Data for historical and humanistic scholarship, show how to use them and what to be wary of, and discuss the kinds of questions and new perspectives this new macroscopic perspective opens up. Authored 'live' online with ongoing feedback from the wider digital history community, Exploring Big Historical Data breaks new ground and sets the direction for the conversation into the future. It represents the current state-of-the-art thinking in the field and exemplifies the way that digital work can enhance public engagement in the humanities. Exploring Big Historical Data should be the go-to resource for undergraduate and graduate students confronted by a vast corpus of data, and researchers encountering these methods for the first time. It will also offer a helping hand to the interested individual seeking to make sense of genealogical data or digitized newspapers, and even to local historical societies trying to see the value in digitizing their holdings. The companion website to Exploring Big Historical Data can be found at www.themacroscope.org/. On this site you will find code, a discussion forum, essays, and data files that accompany this book.
This book introduces Meaningful Purposive Interaction Analysis (MPIA) theory, which combines social network analysis (SNA) with latent semantic analysis (LSA) to help create and analyse a meaningful learning landscape from the digital traces left by a learning community in the co-construction of knowledge. The hybrid algorithm is implemented in the statistical programming language and environment R, introducing packages which capture - through matrix algebra - elements of learners' work with more knowledgeable others and resourceful content artefacts. The book provides comprehensive package-by-package application examples, and code samples that guide the reader through the MPIA model to show how the MPIA landscape can be constructed and the learner's journey mapped and analysed. This building block application will allow the reader to progress to using and building analytics to guide students and support decision-making in learning.
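As a toy illustration of the vector-space representation on which LSA builds, the sketch below computes bag-of-words vectors and cosine similarities for a tiny hypothetical corpus (the document strings are invented for illustration). Actual LSA, as used in MPIA, additionally applies a singular value decomposition to project terms and documents into a latent semantic space, and the book's R packages go well beyond this minimal step:

```python
from collections import Counter
import math

def doc_vector(text):
    """Bag-of-words term-frequency vector for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "learners annotate shared resources",
    "learners discuss shared resources online",
    "orbital mechanics of satellites",
]
vecs = [doc_vector(d) for d in docs]
print(cosine(vecs[0], vecs[1]))  # high: overlapping vocabulary
print(cosine(vecs[0], vecs[2]))  # zero: no shared terms
```

In the raw term space, documents are similar only when they share words; the SVD step of LSA is what lets semantically related documents score as similar even without overlapping vocabulary.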
Integrative Document and Content Management: Strategies for Exploiting Enterprise Knowledge blends theory and practice to give enterprises the knowledge and guidelines they need to understand the importance of document management to their operations, along with the presentation of document content to facilitate business planning and operations support. This book gives extensive pointers to those who propose to embark upon the implementation of integrated document management systems and to embrace Web content management within a life-cycle framework covering document creation to Web publication.
'Data Mining Patterns' gives an overall view of recent solutions for pattern mining, covering the mining of new kinds of patterns, mining patterns under constraints, new kinds of complex data, and real-world applications of these concepts.
Graphs are a powerful tool for representing and understanding objects and their relationships in various application domains. The growing popularity of graph databases has generated data management problems that include finding efficient techniques for compressing large graph databases and suitable techniques for visualizing, browsing, and navigating large graph databases. Graph Data Management: Techniques and Applications is a central reference source for different data management techniques for graph data structures and their application. This book discusses graphs for modeling complex structured and schemaless data from the Semantic Web, social networks, protein networks, chemical compounds, and multimedia databases and offers essential research for academics working in the interdisciplinary domains of databases, data mining, and multimedia technology.
This thesis covers a diverse set of topics related to space-based gravitational wave detectors such as the Laser Interferometer Space Antenna (LISA). The core of the thesis is devoted to the preprocessing of the interferometric link data for a LISA constellation, specifically developing optimal Kalman filters to reduce arm length noise due to clock noise. The approach is to apply Kalman filters of increasing complexity to make optimal estimates of relevant quantities such as constellation arm length, relative clock drift, and Doppler frequencies based on the available measurement data. Depending on the complexity of the filter and the simulated data, these Kalman filter estimates can provide up to a few orders of magnitude improvement over simpler estimators. While the basic concept of the LISA measurement (Time Delay Interferometry) was worked out some time ago, this work brings a level of rigor to the processing of the constellation-level data products. The thesis concludes with some topics related to eLISA, such as a new class of phenomenological waveforms for extreme mass-ratio inspiral sources (EMRIs, one of the main sources for eLISA), an octahedral space-based GW detector that does not require drag-free test masses, and some efficient template-search algorithms for the case of relatively high-SNR signals.
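As a minimal, hypothetical illustration of the estimation principle (not the thesis's far more elaborate constellation filters), the sketch below runs a scalar Kalman filter that recovers a constant quantity from noisy measurements; all parameter values are invented for the example:

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter: estimate a slowly varying quantity from
    noisy measurements. q = process noise variance, r = measurement
    noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the measurement residual
        p *= (1 - k)              # posterior variance shrinks
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 10.0
zs = [true_value + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
print(abs(est[-1] - true_value))  # final error, well below the raw noise level
```

The gain k balances trust in the model against trust in each new measurement; tuning q upward makes the filter track faster-changing quantities at the cost of noisier estimates.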
This book provides a framework for integrating information management in supply chains. Current trends in business practice have made it necessary to explore the potential held by information integration with regard to environmental aspects. Information flow integration provides an opportunity to focus on the creation of a more "green" supply chain. However, it is currently difficult to identify the impact of information integration on greening a supply chain in a wide range of practical applications. Accordingly, this book focuses on the potential value of information integration solutions in terms of greening supply chain management. It covers the following major topics: application of information flow standards in the supply chain; information systems and technological solutions for integrating information flows in supply chains; the Internet of Things and the Industry 4.0 concept with regard to the integration of supply chains; modeling and simulation of logistics processes; and decision-making tools enabling the greening of supply chains.
Conceptual modeling has always been one of the main issues in information systems engineering as it aims to describe the general knowledge of the system at an abstract level that facilitates user understanding and software development. This collection of selected papers provides a comprehensive and extremely readable overview of what conceptual modeling is and perspectives on making it more and more relevant in our society. It covers topics like modeling the human genome, blockchain technology, model-driven software development, data integration, and wiki-like repositories and demonstrates the general applicability of conceptual modeling to various problems in diverse domains. Overall, this book is a source of inspiration for everybody in academia working on the vision of creating a strong, fruitful and creative community of conceptual modelers. With this book the editors and authors want to honor Prof. Antoni Olive for his enormous and ongoing contributions to the conceptual modeling discipline. It was presented to him on the occasion of his keynote at ER 2017 in Valencia, a conference that he has contributed to and supported for over 20 years. Thank you very much to Antoni for so many years of cooperation and friendship.
This book presents an investigative approach to globalization-driving technologies that efficiently deliver ubiquitous, last-mile, broadband internet access to emerging markets and rural areas. Research has shown that ubiquitous internet access boosts socio-economic growth through innovations in science and technology, and has a positive effect on the lives of individuals. Last-mile internet access in developing countries is not only intended to provide areas with stable, efficient, and cost-effective broadband capabilities, but also to encourage the use of connectivity for human capacity development. The book offers an overview of the principles of various technologies, such as light fidelity and millimeter-wave backhaul, as last-mile internet solutions and describes these potential solutions from a signal propagation perspective. It also provides readers with the notional context needed to understand their operation, benefits, and limitations, and enables them to investigate feasible and tailored solutions to ensure sustainable infrastructures that are expandable and maintainable.
In this book, contributors provide insights into the latest developments of Edge Computing/Mobile Edge Computing, specifically in terms of communication protocols and related applications and architectures. The book provides help to Edge service providers, Edge service consumers, and Edge service developers interested in getting the latest knowledge in the area. The book includes relevant Edge Computing topics such as applications; architecture; services; inter-operability; data analytics; deployment and service; resource management; simulation and modeling; and security and privacy. Targeted readers include those from varying disciplines who are interested in designing and deploying Edge Computing. Features the latest research related to Edge Computing, from a variety of perspectives; Tackles Edge Computing in academia and industry, featuring a variety of new and innovative operational ideas; Provides a strong foundation for researchers to advance further in the Edge Computing domain.
Transactions are a concept related to the logical database as seen from the perspective of database application programmers: a transaction is a sequence of database actions that is to be executed as an atomic unit of work. The processing of transactions on databases is a well-established area with many of its foundations having already been laid in the late 1970s and early 1980s. The unique feature of this textbook is that it bridges the gap between the theory of transactions on the logical database and the implementation of the related actions on the underlying physical database. The authors relate the logical database, which is composed of a dynamically changing set of data items with unique keys, and the underlying physical database with a set of fixed-size data and index pages on disk. Their treatment of transaction processing builds on the "do-redo-undo" recovery paradigm, and all methods and algorithms presented are carefully designed to be compatible with this paradigm as well as with write-ahead logging, steal-and-no-force buffering, and fine-grained concurrency control. Chapters 1 to 6 address the basics needed to fully appreciate transaction processing on a centralized database system within the context of our transaction model, covering topics like ACID properties, database integrity, buffering, rollbacks, isolation, and the interplay of logical locks and physical latches. Chapters 7 and 8 present advanced features including deadlock-free algorithms for reading, inserting and deleting tuples, while the remaining chapters cover additional advanced topics extending on the preceding foundational chapters, including multi-granular locking, bulk actions, versioning, distributed updates, and write-intensive transactions. This book is primarily intended as a text for advanced undergraduate or graduate courses on database management in general or transaction processing in particular.
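The atomic-unit-of-work idea, together with the undo half of the "do-redo-undo" paradigm, can be sketched with a toy key-value store that logs a before-image ahead of each update. This is a deliberately simplified, hypothetical model; real systems also log redo information for durability and coordinate with buffering and latching as the book describes:

```python
class TinyTxnStore:
    """Toy key-value store illustrating undo via before-image logging:
    each update is logged before the data is changed, so an aborted
    transaction can roll back to a consistent state."""

    def __init__(self):
        self.data = {}
        self.log = []          # (key, before_image) undo records

    def begin(self):
        self.log.clear()

    def put(self, key, value):
        self.log.append((key, self.data.get(key)))  # log before-image first
        self.data[key] = value                      # then apply the change

    def commit(self):
        self.log.clear()       # changes stand; undo info no longer needed

    def abort(self):
        for key, before in reversed(self.log):      # undo in reverse order
            if before is None:
                self.data.pop(key, None)            # key did not exist before
            else:
                self.data[key] = before
        self.log.clear()

store = TinyTxnStore()
store.begin(); store.put("x", 1); store.commit()
store.begin(); store.put("x", 2); store.put("y", 3); store.abort()
print(store.data)  # {'x': 1} -- the aborted updates are rolled back
```

Logging the before-image strictly before modifying the data is the essence of write-ahead logging: the undo information is guaranteed to exist whenever a rollback is needed.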
This book presents a unique approach to stream data mining. Unlike the vast majority of previous approaches, which are largely based on heuristics, it highlights methods and algorithms that are mathematically justified. First, it describes how to adapt static decision trees to accommodate data streams; in this regard, new splitting criteria are developed to guarantee that they are asymptotically equivalent to the classical batch tree. Moreover, new decision trees are designed, leading to the original concept of hybrid trees. In turn, nonparametric techniques based on Parzen kernels and orthogonal series are employed to address concept drift in the problem of non-stationary regressions and classification in a time-varying environment. Lastly, an extremely challenging problem that involves designing ensembles and automatically choosing their sizes is described and solved. Given its scope, the book is intended for a professional audience of researchers and practitioners who deal with stream data, e.g. in telecommunication, banking, and sensor networks.
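As a small, self-contained illustration of the Parzen-kernel idea (a static sketch; the book's nonparametric techniques extend this to streaming, time-varying settings), the Nadaraya-Watson estimator below forms a kernel-weighted average of observed responses over an invented toy data set:

```python
import math

def nw_regress(x_query, xs, ys, bandwidth=0.5):
    """Nadaraya-Watson estimator with a Gaussian (Parzen) kernel:
    a locally weighted average of the observed responses, with weights
    decaying smoothly with distance from the query point."""
    weights = [math.exp(-((x_query - x) ** 2) / (2 * bandwidth ** 2))
               for x in xs]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, ys)) / total

# Noisy samples of y = x^2 on [0, 2]
xs = [i * 0.1 for i in range(21)]
ys = [x * x + 0.01 * ((-1) ** i) for i, x in enumerate(xs)]
print(nw_regress(1.0, xs, ys, bandwidth=0.2))  # close to 1.0 = 1.0**2
```

The bandwidth controls the bias-variance trade-off: a small bandwidth tracks local structure (and noise) closely, while a large one smooths aggressively.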
As the applications of data mining, the non-trivial extraction of implicit information in a data set, have expanded in recent years, so has the need for techniques that are tolerant of imprecision, uncertainty, and approximation. Intelligent Soft Computation and Evolving Data Mining: Integrating Advanced Technologies is a compendium that addresses this need. It integrates contrasting techniques of conventional hard computing and soft computing to exploit the tolerance for imprecision, uncertainty, partial truth, and approximation, achieving tractability, robustness and low-cost solutions. This book provides a reference for researchers, practitioners, and students in both the soft computing and data mining communities, forming a foundation for the development of the field.
This book presents two practical physical attacks. It shows how attackers can reveal the secret key of symmetric as well as asymmetric cryptographic algorithms based on these attacks, and presents countermeasures on the software and the hardware level that can help to prevent them in the future. Though their theory has been known for several years, neither attack had yet been successfully implemented in practice, and so they were generally not considered a serious threat. In short, their physical attack complexity has been overestimated and the implied security threat has been underestimated. First, the book introduces the photonic side channel, which offers not only temporal resolution but also the highest possible spatial resolution; due to the high cost of its initial implementation, it had not been taken seriously. The work shows both simple and differential photonic side channel analyses. Then, it presents a fault attack against pairing-based cryptography; due to the need for at least two independent precise faults in a single pairing computation, it had not been taken seriously either. Based on these two attacks, the book demonstrates that the assessment of physical attack complexity is error-prone, and as such cryptography should not rely on it. Cryptographic technologies have to be protected against all physical attacks, whether they have already been successfully implemented or not. The development of countermeasures does not require the successful execution of an attack, but can be carried out as soon as the principle of a side channel or a fault attack is sufficiently understood.
This book provides a comprehensive overview of the key security concerns surrounding the upcoming Internet of Things (IoT), and introduces readers to the protocols adopted in the IoT. It also analyses the vulnerabilities, attacks and defense mechanisms, highlighting the security issues in the context of big data. Lastly, trust management approaches and ubiquitous learning applications are examined in detail. As such, the book sets the stage for developing and securing IoT applications both today and in the future.
As information technology continues to advance in massive increments, the bank of information available from personal, financial, and business electronic transactions and all other electronic documentation and data storage is growing at an exponential rate. With this wealth of information comes the opportunity and the necessity to use it to maintain competitive advantage and process information effectively in real-world situations. Data Mining and Knowledge Discovery Technologies presents researchers and practitioners in fields such as knowledge management, information science, Web engineering, and medical informatics with comprehensive, innovative research on data mining methods, structures, and tools, the knowledge discovery process, and data marts, among many other cutting-edge topics.
This book presents an exhaustive and timely review of key research work on fuzzy XML data management, and provides readers with a comprehensive resource on the state-of-the art tools and theories in this fast growing area. Topics covered in the book include: representation of fuzzy XML, query of fuzzy XML, fuzzy database models, extraction of fuzzy XML from fuzzy database models, reengineering of fuzzy XML into fuzzy database models, and reasoning of fuzzy XML. The book is intended as a reference guide for researchers, practitioners and graduate students working and/or studying in the field of Web Intelligence, as well as for data and knowledge engineering professionals seeking new approaches to replace traditional methods, which may be unnecessarily complex or even unproductive.
With the onset of massive cosmological data collection through media such as the Sloan Digital Sky Survey (SDSS), galaxy classification has been accomplished for the most part with the help of citizen science communities like Galaxy Zoo. Seeking the wisdom of the crowd for such Big Data processing has proved extremely beneficial. However, an analysis of one of the Galaxy Zoo morphological classification data sets has shown that a significant majority of all classified galaxies are labelled as 'Uncertain'. This book reports on how to use data mining, more specifically clustering, to identify galaxies for which the public has shown some degree of uncertainty as to whether they belong to one morphology type or another. The book shows the importance of transitions between different data mining techniques in an insightful workflow. It demonstrates that clustering enables the identification of discriminating features in the analysed data sets, adopting a novel feature selection algorithm called Incremental Feature Selection (IFS). The book shows the use of state-of-the-art classification techniques, Random Forests and Support Vector Machines, to validate the acquired results. It is concluded that a vast majority of these galaxies are, in fact, of spiral morphology, with a small subset potentially consisting of stars, elliptical galaxies or galaxies of other morphological variants.
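As a hedged sketch of the clustering step (using plain k-means on invented 2-D data rather than the book's actual galaxy feature sets, and omitting IFS, Random Forests and SVM validation), the code below groups points drawn from two well-separated blobs standing in for two morphology types:

```python
import math
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain k-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated synthetic blobs standing in for two morphology types
rng = random.Random(0)
blob_a = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(50)]
blob_b = [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(50)]
centroids, clusters = kmeans(blob_a + blob_b, k=2)
print(sorted(len(c) for c in clusters))  # cluster sizes
```

Real workflows of this kind would run feature selection first, cluster in the selected feature space, and then validate the recovered groups with supervised classifiers, as the book's pipeline does.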
The technologies in data mining have been applied to bioinformatics research in the past few years with success, but more research in this field is necessary. While tremendous progress has been made over the years, many of the fundamental challenges in bioinformatics are still open. Data mining plays an essential role in understanding the emerging problems in genomics, proteomics, and systems biology. "Advanced Data Mining Technologies in Bioinformatics" covers important research topics of data mining on bioinformatics. Readers of this book will gain an understanding of the basics and problems of bioinformatics, as well as the applications of data mining technologies in tackling these problems and the essential research topics in the field. "Advanced Data Mining Technologies in Bioinformatics" is extremely useful for data mining researchers, molecular biologists, graduate students, and others interested in this topic.
Data Mining techniques are gradually becoming essential components of corporate intelligence systems and progressively evolving into a pervasive technology within activities that range from the utilization of historical data to predicting the success of an awareness campaign. In reality, data mining is becoming an interdisciplinary field driven by various multi-dimensional applications. Data Mining Applications for Empowering Knowledge Societies presents an overview of the main issues of data mining, including its classification, regression, clustering, and ethical issues. This comprehensive book also provides readers with knowledge enhancing processes as well as a wide spectrum of data mining applications.