The book collects contributions from experts worldwide addressing recent scholarship in social network analysis, such as influence spread, link prediction, dynamic network biclustering, and delurking. It covers both new topics and new solutions to known problems. The contributions rely on established methods and techniques from graph theory, machine learning, stochastic modelling, user behavior analysis and natural language processing, to name a few. This text shows how such methods and techniques can be applied to practical problems and situations. Trends in Social Network Analysis: Information Propagation, User Behavior Modelling, Forecasting, and Vulnerability Assessment appeals to students, researchers, and professionals working in the field.
Advanced visual analysis and problem solving have been conducted successfully for millennia. The Pythagorean Theorem was proven using visual means more than 2000 years ago. In the 19th century, John Snow stopped a cholera epidemic in London by proposing that a specific water pump be shut down. He discovered that pump by visually correlating data on a city map. The goal of this book is to present the current trends in visual and spatial analysis for data mining, reasoning, problem solving and decision-making. This is the first book to focus on visual decision making and problem solving in general, with specific applications in the geospatial domain, combining theory with real-world practice. The book is unique in its integration of modern symbolic and visual approaches to decision making and problem solving. As such, it ties together much of the monograph and textbook literature in these emerging areas. This book contains 21 chapters that have been grouped into five parts: (1) visual problem solving and decision making, (2) visual and heterogeneous reasoning, (3) visual correlation, (4) visual and spatial data mining, and (5) visual and spatial problem solving in geospatial domains. Each chapter ends with a summary and exercises. The book is intended for professionals and graduate students in computer science, applied mathematics, imaging science and Geospatial Information Systems (GIS). In addition to being a state-of-the-art research compilation, this book can be used as a text for advanced courses on subjects such as modeling, computer graphics, visualization, image processing, data mining, GIS, and algorithm analysis.
Perceiving complex multidimensional problems has proven to be a difficult task. Introducing composite indicators into such problems, however, offers a way to reduce their complexity. Emerging Trends in the Development and Application of Composite Indicators is an authoritative reference source for the latest scholarly research on the benefits and challenges presented by building composite indicators, and on how these techniques promote optimized critical thinking. Highlighting various indicator types and quantitative methods, this book is ideally designed for developers, researchers, public officials, and upper-level students.
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability is therefore closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship manifests itself in three strands of work. (1) Application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling, and view synchrony are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. (2) Replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. (3) Application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets, and CCS are useful tools for specifying and verifying fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
The rapid advancement of semantic web technologies, along with the fact that they are at various levels of maturity, has left many practitioners confused about the current state of these technologies. Focusing on the most mature technologies, Applied Semantic Web Technologies integrates theory with case studies to illustrate the history, current state, and future direction of the semantic web. It maintains an emphasis on real-world applications and examines the technical and practical issues related to the use of semantic technologies in intelligent information management. The book starts with an introduction to the fundamentals, reviewing ontology basics, ontology languages, and research related to ontology alignment, mediation, and mapping. Next, it covers ontology engineering issues and presents a collaborative ontology engineering tool that is an extension of the Semantic MediaWiki. Unveiling a novel approach to data and knowledge engineering, the text introduces cutting-edge taxonomy-aware algorithms; examines semantics-based service composition in transport logistics; offers ontology alignment tools that use information visualization techniques; explains how to enrich the representation of entity semantics in an ontology; and addresses challenges in tackling the content creation bottleneck. Using case studies, the book provides authoritative insights and highlights valuable lessons learned by the authors, information systems veterans with decades of experience. They explain how to create social ontologies and present examples of the application of semantic technologies in building automation, logistics, ontology-driven business process intelligence, decision making, and energy efficiency in smart homes.
In 2013, the International Conference on Advanced Information Systems Engineering (CAiSE) turns 25. Initially launched in 1989, the conference has for all these years provided a broad forum for researchers working in the area of Information Systems Engineering. To reflect on the work done so far and to examine prospects for future work, the CAiSE Steering Committee decided to present a selection of seminal papers published at the conference during these years and to ask their authors, all prominent researchers in the field, to comment on their work and how it has developed over the years. The papers selected cover a broad range of topics related to modeling and designing information systems and to collecting and managing requirements, with special attention to how information systems are engineered towards their final development and deployment as software components. With this approach, the book provides not only a historical analysis of how information systems engineering evolved over the years, but also a fascinating social network analysis of the research community. Additionally, many inspiring ideas for future research and new perspectives in this area are sparked by the intriguing comments of the renowned authors.
Updated new edition of Ralph Kimball's groundbreaking book on dimensional modeling for data warehousing and business intelligence. The first edition of Ralph Kimball's "The Data Warehouse Toolkit" introduced the industry to dimensional modeling, and now his books are considered the most authoritative guides in this space. This new third edition is a complete library of updated dimensional modeling techniques, the most comprehensive collection ever. It covers new and enhanced star schema dimensional modeling patterns, adds two new chapters on ETL techniques, includes new and expanded business matrices for 12 case studies, and more. Authored by Ralph Kimball and Margy Ross, known worldwide as educators, consultants, and influential thought leaders in data warehousing and business intelligence, the book begins with fundamental design recommendations and progresses through increasingly complex scenarios; presents unique modeling techniques for business applications such as inventory management, procurement, invoicing, accounting, customer relationship management, big data analytics, and more; and draws real-world case studies from a variety of industries, including retail sales, financial services, telecommunications, education, health care, insurance, and e-commerce. Design dimensional databases that are easy to understand and provide fast query response with "The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling, 3rd Edition."
Video on Demand Systems brings together in one place important contributions and up-to-date research results in this fast moving area. Video on Demand Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Mining Very Large Databases with Parallel Processing addresses the problem of large-scale data mining. It is an interdisciplinary text, describing advances in the integration of three computer science areas, namely 'intelligent' (machine learning-based) data mining techniques, relational databases and parallel processing. The basic idea is to use concepts and techniques of the latter two areas - particularly parallel processing - to speed up and scale up data mining algorithms. The book is divided into three parts. The first part presents a comprehensive review of intelligent data mining techniques such as rule induction, instance-based learning, neural networks and genetic algorithms. Likewise, the second part presents a comprehensive review of parallel processing and parallel databases. Each of these parts includes an overview of commercially available, state-of-the-art tools. The third part deals with the application of parallel processing to data mining. The emphasis is on finding generic, cost-effective solutions for realistic data volumes. Two parallel computational environments are discussed, the first excluding the use of a commercial-strength DBMS, and the second using parallel DBMS servers. It is assumed that the reader has knowledge roughly equivalent to a first degree (BSc) in the exact sciences, so that (s)he is reasonably familiar with basic concepts of statistics and computer science. The primary audience for Mining Very Large Databases with Parallel Processing is industry data miners and practitioners in general, who would like to apply intelligent data mining techniques to large amounts of data. The book will also be of interest to academic researchers and postgraduate students, particularly database researchers interested in advanced, intelligent database applications, and artificial intelligence researchers interested in industrial, real-world applications of machine learning.
Data mining involves the non-trivial extraction of implicit, previously unknown, and potentially useful information from databases. Genetic Programming (GP) and Inductive Logic Programming (ILP) are two of the approaches to data mining. This book first sets out the necessary background for the reader, including an overview of data mining, evolutionary algorithms and inductive logic programming. It then describes a framework, called GGP (Generic Genetic Programming), that integrates GP and ILP based on a formalism of logic grammars. The formalism is powerful enough to represent context-sensitive information and domain-dependent knowledge. This knowledge can be used to accelerate the learning speed and/or improve the quality of the knowledge induced. A grammar-based genetic programming system called LOGENPRO (The LOGic grammar based GENetic PROgramming system) is detailed and tested on many problems in data mining. It is found that LOGENPRO outperforms some ILP systems. The book also illustrates how to apply LOGENPRO to emulate Automatically Defined Functions (ADFs) in order to discover problem representation primitives automatically. By employing various kinds of knowledge about the problem being solved, LOGENPRO can find a solution much faster than ADFs, and the computation required by LOGENPRO is much smaller than that of ADFs. Moreover, LOGENPRO can emulate the effects of Strongly Typed Genetic Programming and ADFs simultaneously and effortlessly. Data Mining Using Grammar Based Genetic Programming and Applications is appropriate for researchers, practitioners and clinicians interested in genetic programming, data mining, and the extraction of data from databases.
Biometric Solutions for Authentication in an E-World provides a collection of sixteen chapters containing tutorial articles and new material in a unified manner. This includes the basic concepts, theories, and characteristic features of integrating and formulating different facets of biometric solutions for authentication, with recent developments and significant applications in an E-world. The book provides the reader with the basic concepts of biometrics and an in-depth discussion exploring biometric technologies in various applications in an E-world. It also includes a detailed description of typical biometric-based security systems and up-to-date coverage of how these systems are being developed. Experts from all over the world demonstrate the various ways this integration can be made to efficiently design methodologies, algorithms, architectures, and implementations for biometric-based applications in an E-world.
Method Engineering focuses on the design, construction and evaluation of methods, techniques and support tools for information systems development. It addresses a number of important topics, including: method representation formalisms; meta-modelling; situational methods; contingency approaches; system development practices of method engineering; terminology and reference models; ontologies; usability and experience reports; and organisational support and impact.
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The respective chapters share highly novel methodological insights in order to guide the reader through the basics of intertextuality. In Part 1, "Theory", the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, "Practice", specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a "gentle" introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book's companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality that are presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience. From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
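To give a flavour of the simple lexical-matching algorithms the blurb mentions, the following is a minimal, hypothetical C++ sketch (it is not taken from the book's R/C++ companion code) that scores two short texts by the Jaccard overlap of their word bigrams:

    // lexical_match.cpp -- a minimal, hypothetical sketch of lexical matching
    // via word-bigram overlap; illustrative only, not code from the book's
    // companion R/C++ library.
    #include <cctype>
    #include <iostream>
    #include <set>
    #include <sstream>
    #include <string>
    #include <vector>

    // Split a text into lowercase, letters-only word tokens.
    static std::vector<std::string> tokenize(const std::string& text) {
        std::vector<std::string> tokens;
        std::istringstream stream(text);
        std::string word;
        while (stream >> word) {
            std::string clean;
            for (char c : word)
                if (std::isalpha(static_cast<unsigned char>(c)))
                    clean += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
            if (!clean.empty()) tokens.push_back(clean);
        }
        return tokens;
    }

    // Collect word bigrams (pairs of consecutive words) as a set of strings.
    static std::set<std::string> bigrams(const std::vector<std::string>& tokens) {
        std::set<std::string> result;
        for (std::size_t i = 0; i + 1 < tokens.size(); ++i)
            result.insert(tokens[i] + " " + tokens[i + 1]);
        return result;
    }

    int main() {
        const std::string source = "to be or not to be that is the question";
        const std::string target = "To be, or not to be asked again?";

        const std::set<std::string> a = bigrams(tokenize(source));
        const std::set<std::string> b = bigrams(tokenize(target));

        // Jaccard similarity: shared bigrams divided by all distinct bigrams.
        std::size_t shared = 0;
        for (const std::string& g : a)
            if (b.count(g) > 0) ++shared;
        const std::size_t total = a.size() + b.size() - shared;

        std::cout << "shared bigrams: " << shared << ", Jaccard similarity: "
                  << (total > 0 ? static_cast<double>(shared) / total : 0.0) << "\n";
        return 0;
    }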
Multimedia data comprising images, audio and video is becoming increasingly common. The decreasing costs of consumer electronic devices such as digital cameras and digital camcorders, along with the ease of transmission facilitated by the Internet, have led to a phenomenal rise in the amount of multimedia data generated and distributed. Given that this trend of increased use of multimedia data is likely to accelerate, there is an urgent need for providing a clear means of capturing, storing, indexing, retrieving, analyzing and summarizing such data. Content-based access to multimedia data is of primary importance since it is the natural way by which human beings interact with such information. To facilitate the content-based access of multimedia information, the first step is to derive feature measures from these data so that a feature space representation of the data content can be formed. This can subsequently allow for mapping the feature space to the symbol space (semantics) either automatically or through human intervention. Thus, signal-to-symbol mapping, useful for any practical system, can be successfully achieved. Perspectives on Content-Based Multimedia Systems provides a comprehensive set of techniques to tackle these important issues. This book offers detailed solutions to a wide range of practical problems in building real systems by providing specifics of three systems built by the authors. While providing a systems focus, it also equips the reader with a keen understanding of the fundamental issues, including a formalism for content-based multimedia database systems, multimedia feature extraction, object-based techniques, signature-based techniques and fuzzy retrieval techniques. The performance evaluation issues of practical systems are also explained. This book brings together essential elements of building a content-based multimedia database system in a way that makes them accessible to practitioners in computer science and electrical engineering. It can also serve as a textbook for graduate-level courses.
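As an illustration of the "signal to feature space" step described above, here is a minimal, hypothetical C++ sketch (not drawn from the systems described in the book) that derives a coarse intensity-histogram feature from raw pixel data and compares two synthetic images by Euclidean distance in that feature space:

    // feature_space.cpp -- a hypothetical sketch of the "signal to feature space"
    // step: derive a coarse intensity histogram from raw pixel data and compare
    // two images by Euclidean distance in that feature space. The pixel data is
    // synthetic; a real system would decode actual image files.
    #include <array>
    #include <cmath>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    constexpr int kBins = 16;  // coarse histogram over intensities 0..255

    // Map raw pixels to a normalized histogram: the feature-space representation.
    std::array<double, kBins> histogram(const std::vector<std::uint8_t>& pixels) {
        std::array<double, kBins> h{};
        for (std::uint8_t p : pixels) ++h[p / (256 / kBins)];
        if (!pixels.empty())
            for (double& v : h) v /= static_cast<double>(pixels.size());
        return h;
    }

    // Euclidean distance between two feature vectors.
    double distance(const std::array<double, kBins>& a,
                    const std::array<double, kBins>& b) {
        double sum = 0.0;
        for (int i = 0; i < kBins; ++i) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return std::sqrt(sum);
    }

    int main() {
        // Two synthetic "images": one mostly dark, one mostly bright.
        const std::vector<std::uint8_t> dark(1000, 30);
        const std::vector<std::uint8_t> bright(1000, 220);
        std::cout << "feature-space distance: "
                  << distance(histogram(dark), histogram(bright)) << "\n";
        return 0;
    }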
"JDBC Metadata, MySQL, and Oracle Recipes" is the only book that focuses on metadata or annotation-based code recipes for JDBC API for use with Oracle and MySQL. It continues where the authors other book, "JDBC Recipes: A Problem-Solution Approach," leaves off. This edition is also a Java EE 5-compliant book, perfect for lightweight Java database development. And it provides cut-and-paste code templates that can be immediately customized and applied in each developer's application development.
Research Directions in Data and Applications Security describes original research results and innovative practical developments, all focused on maintaining security and privacy in database systems and applications that pervade cyberspace. The areas of coverage include: Role-Based Access Control; ...
Data Mining is the science and technology of exploring large and complex bodies of data in order to discover useful patterns. It is extremely important because it enables modeling and knowledge extraction from the abundance of available data. This book introduces soft computing methods that extend the envelope of problems data mining can solve efficiently. It presents practical soft-computing approaches in data mining and includes various real-world case studies with detailed results.
This book introduces advanced semantic web technologies, illustrating their utility and highlighting their implementation in biological, medical, and clinical scenarios. It covers topics ranging from databases, ontologies, and visualization to semantic web services and workflows. The volume also details the factors affecting the establishment of the semantic web in the life sciences and the legal challenges that will affect its proliferation.
Data clustering is a highly interdisciplinary field, the goal of which is to divide a set of objects into homogeneous groups such that objects in the same group are similar and objects in different groups are quite distinct. Thousands of theoretical papers and a number of books on data clustering have been published over the past 50 years. However, few books exist to teach people how to implement data clustering algorithms. This book was written for anyone who wants to implement or improve their data clustering algorithms. Using object-oriented design and programming techniques, Data Clustering in C++ exploits the commonalities of all data clustering algorithms to create a flexible set of reusable classes that simplifies the implementation of any data clustering algorithm. Readers can follow the development of the base data clustering classes and several popular data clustering algorithms. Additional topics such as data pre-processing, data visualization, cluster visualization, and cluster interpretation are briefly covered. The book is divided into three parts: (1) Data Clustering and C++ Preliminaries, a review of basic concepts of data clustering, the Unified Modeling Language, object-oriented programming in C++, and design patterns; (2) A C++ Data Clustering Framework, the development of data clustering base classes; and (3) Data Clustering Algorithms, the implementation of several popular data clustering algorithms. A key to learning a clustering algorithm is to implement and experiment with it. Complete listings of classes, examples, unit test cases, and GNU configuration files are included in the appendices of this book as well as on the accompanying CD-ROM. The only requirements to compile the code are a modern C++ compiler and the Boost C++ libraries.
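As a rough illustration of the reusable-class approach described above, the following hypothetical C++ sketch (the class and member names are illustrative and not taken from the book's actual framework) shows a common base class that any clustering algorithm could implement, together with a trivial concrete algorithm:

    // clustering_framework.cpp -- a hypothetical sketch of a reusable base class
    // for clustering algorithms; the class and member names are illustrative and
    // are not taken from the book's actual framework.
    #include <cmath>
    #include <iostream>
    #include <utility>
    #include <vector>

    using Point = std::vector<double>;

    // Abstract interface: every algorithm maps a dataset to cluster labels.
    class ClusteringAlgorithm {
    public:
        virtual ~ClusteringAlgorithm() = default;
        virtual std::vector<int> cluster(const std::vector<Point>& data) = 0;
    };

    // A trivial concrete algorithm: assign each point to the nearer of two
    // fixed centers (a stand-in for a real k-means or hierarchical method).
    class NearestCenter : public ClusteringAlgorithm {
    public:
        NearestCenter(Point c0, Point c1) : c0_(std::move(c0)), c1_(std::move(c1)) {}

        std::vector<int> cluster(const std::vector<Point>& data) override {
            std::vector<int> labels;
            for (const Point& p : data)
                labels.push_back(dist(p, c0_) <= dist(p, c1_) ? 0 : 1);
            return labels;
        }

    private:
        static double dist(const Point& a, const Point& b) {
            double s = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
            return std::sqrt(s);
        }
        Point c0_, c1_;
    };

    int main() {
        const std::vector<Point> data = {{0.1, 0.2}, {0.2, 0.1}, {5.0, 5.1}, {4.9, 5.2}};
        NearestCenter algorithm({0.0, 0.0}, {5.0, 5.0});
        for (int label : algorithm.cluster(data)) std::cout << label << ' ';
        std::cout << '\n';  // expected output: 0 0 1 1
        return 0;
    }

New algorithms would plug in by deriving from the same interface, which is the kind of commonality a framework like the one described above is built around.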
Hybrid Intelligent Systems for Information Retrieval covers three areas, along with an introduction to intelligent IR: optimal information retrieval using evolutionary approaches, semantic search for web information retrieval, and natural language processing for information retrieval. It discusses the design, implementation, and performance issues of hybrid intelligent information retrieval systems in one book; gives a clear insight into the challenges and issues in designing a hybrid information retrieval system; includes case studies on structured and unstructured data for hybrid intelligent information retrieval; and provides research directions for the design and development of intelligent search engines. This book is aimed primarily at graduates and researchers in the information retrieval domain.
Databases and Mobile Computing brings together in one place important contributions and up-to-date research results in this important area. Databases and Mobile Computing serves as an excellent reference, providing insight into some of the most important research issues in the field.
Spatial trajectories have brought unprecedented wealth to a variety of research communities. A spatial trajectory records the path of a moving object, such as a person who logs their travel route with GPS. Research related to moving objects has become extremely active within the last few years, especially at the major database and data mining conferences and journals. "Computing with Spatial Trajectories" introduces the algorithms, technologies, and systems used to process, manage and understand existing spatial trajectories for different applications. The book also presents an overview of both the fundamentals and state-of-the-art research inspired by spatial trajectory data, with a special focus on trajectory pattern mining, spatio-temporal data mining and location-based social networks. Each chapter provides readers with a tutorial-style introduction to one important aspect of location trajectory computing, case studies and many valuable references to other relevant research work. "Computing with Spatial Trajectories" is designed as a reference or secondary textbook for advanced-level students and researchers, mainly in computer science and geography. Professionals working on spatial trajectory computing will also find this book very useful.
Knowledge-based (KB) technology is being applied to complex problem-solving and critical tasks in many application domains. Concerns have naturally arisen as to the dependability of knowledge-based systems (KBS). As with any software, attention to quality and safety must be paid throughout the development of a KBS, and rigorous verification and validation (V&V) techniques must be employed. Research in V&V of KBS has emerged as a distinct field only in the last decade and is intended to address issues associated with the quality and safety aspects of KBS and to credit such applications with the same degree of dependability as conventional applications. In recent years, V&V of KBS has been the topic of annual workshops associated with the main AI conferences, such as AAAI, IJCAI and ECAI. Validation and Verification of Knowledge Based Systems contains a collection of papers, dealing with all aspects of KBS V&V, presented at the Fifth European Symposium on Verification and Validation of Knowledge Based Systems and Components (EUROVAV'99 - http://www.dnv.no/research/safekbs/eurovav99/), which was held in Oslo in the summer of 1999 and was sponsored by Det Norske Veritas and the British Computer Society's Specialist Group on Expert Systems (SGES).
High Performance Data Mining: Scaling Algorithms, Applications and Systems brings together in one place important contributions and up-to-date research results in this fast moving area. High Performance Data Mining: Scaling Algorithms, Applications and Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
This is a compilation of papers presented at the Information System Concepts conference in Marburg, Germany. The special focus is the consolidation and harmonisation of the numerous and widely diverging views in the field of information systems. This issue has become a hot topic, as many leading information system researchers and practitioners have come to realise the importance of better communication among the members of the information systems community, and of a better scientific foundation for this rapidly evolving field.