This book assembles contributions from computer scientists and librarians that together encompass the complete range of tools, tasks and processes needed to successfully preserve the cultural heritage of the Web. It combines the librarian's application knowledge with the computer scientist's implementation knowledge, and serves as a standard introduction for everyone involved in keeping the immense amount of online information alive.
Key to our culture is that we can disseminate information, and then maintain and access it over time. While we are rapidly advancing from vulnerable physical media to superior digital media, preserving and using data over the long term involves complicated research challenges and organizational efforts. Uwe Borghoff and his coauthors address the problem of storing, reading, and using digital data for periods longer than 50 years. They briefly describe several document formats and markup languages, such as TIFF, PDF, HTML, and XML, explain the most important preservation techniques, such as migration and emulation, and present the OAIS (Open Archival Information System) Reference Model. To complement this background information on the technology issues, the authors present the most relevant international preservation projects, such as the Dublin Core Metadata Initiative, and experiences from sample projects run by the Cornell University Library and the National Library of the Netherlands. A rated survey list of available systems and tools completes the book. With this broad overview, the authors address librarians who preserve our digital heritage, computer scientists who develop technologies that access data, and information managers engaged with the social and methodological requirements of long-term information access.
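Migration, one of the two preservation techniques named above, amounts to re-encoding content from an at-risk format into a better-supported one before the old format becomes unreadable. As a minimal illustration (not an example from the book), the following Python sketch migrates a single-page TIFF scan to PDF; the file names are hypothetical and the Pillow imaging library is assumed:

    # A minimal sketch of the "migration" technique: re-encode content from an
    # aging format (TIFF) into a more durable one (PDF). Assumes Pillow is
    # installed; file names are hypothetical.
    from PIL import Image

    def migrate_tiff_to_pdf(src_path: str, dst_path: str) -> None:
        """Re-encode a single-page TIFF scan as a PDF document."""
        with Image.open(src_path) as img:
            # PDF output needs an RGB (or grayscale) image, not CMYK/paletted.
            img.convert("RGB").save(dst_path, "PDF")

    migrate_tiff_to_pdf("scan.tiff", "scan.pdf")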
This comprehensive book offers a full picture of cutting-edge technologies in the area of "Multimedia Retrieval and Management". It addresses graduate students and scientists in electrical engineering and computer science, as well as system designers, engineers, programmers and other technical managers in the IT industries. The book provides a complete set of theories and technologies necessary for a profound introduction to the field. It covers multimedia low-level feature extraction and high-level semantic description, multimedia authentication and watermarking, and the most up-to-date MPEG-7 standard. A broad range of practical applications is covered, e.g., digital libraries, medical images, biometrics such as human palm-print and face recognition for security, living-plant data management and video-on-demand services.
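To make "low-level feature extraction" concrete, here is a minimal Python sketch (not taken from the book) that computes a coarse RGB color histogram, one of the simplest image descriptors used in retrieval; the query file name is hypothetical and the Pillow library is assumed:

    # A coarse, normalized RGB color histogram as a low-level image feature.
    # Images with similar color content yield histograms with small L1 distance.
    from PIL import Image

    def color_histogram(path: str, bins: int = 4) -> list[float]:
        """Return a normalized histogram with bins**3 entries."""
        img = Image.open(path).convert("RGB").resize((64, 64))
        step = 256 // bins
        hist = [0] * (bins ** 3)
        for r, g, b in img.getdata():
            hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
        total = sum(hist)
        return [h / total for h in hist]

    features = color_histogram("query.jpg")   # 64-dimensional descriptor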
Fundamentals of Information Systems contains articles from the 7th International Workshop on Foundations of Models and Languages for Data and Objects (FoMLaDO '98), which was held in Timmel, Germany. These articles capture various aspects of database and information systems theory: identification as a primitive of database models; deontic action programs; marked nulls in queries; topological canonization in spatial databases; complexity of search queries; complexity of Web queries; attribute grammars for structured document queries; hybrid multi-level concurrency control; efficient navigation in persistent object stores; formal semantics of UML; reengineering of object bases; and integrity dependence. Fundamentals of Information Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
The book examines patterns of participation in human rights treaties. International relations theory is divided on what motivates states to participate in treaties, specifically human rights treaties. Instead of examining these specific motivations, the book examines patterns of participation and, in doing so, attempts to match theoretical expectations of state behavior with observed participation. It provides significant evidence that multiple motivations lead states to participate in human rights treaties.
This book contains a selection of the best papers given at an international conference on advanced computer systems. The Advanced Computer Systems Conference was held in October 2006, in Miedzyzdroje, Poland. The book is organized into four topical areas: Artificial Intelligence; Computer Security and Safety; Image Analysis, Graphics and Biometrics; and Computer Simulation and Data Analysis.
Data compression is now indispensable to products and services of many industries including computers, communications, healthcare, publishing and entertainment. This invaluable resource introduces this area to information system managers and others who need to understand how it is changing the world of digital systems. For those who know the technology well, it reveals what happens when data compression is used in real-world applications and provides guidance for future technology development.
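A toy experiment makes the core trade-off visible: redundancy is what compression exploits, so redundant data shrinks dramatically while high-entropy data barely shrinks at all. A minimal Python sketch using the standard zlib module (illustrative only, not from the book):

    # Compare compression ratios for redundant vs. high-entropy input.
    import os
    import zlib

    redundant = b"ABCD" * 10_000      # 40,000 bytes of repeating text
    random_ish = os.urandom(40_000)   # 40,000 essentially incompressible bytes

    for label, payload in [("redundant", redundant), ("random", random_ish)]:
        packed = zlib.compress(payload, level=9)
        print(f"{label}: {len(payload)} -> {len(packed)} bytes "
              f"(ratio {len(payload) / len(packed):.1f}x)")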
Computer technology evolves at a rate that challenges companies to maintain appropriate security for their enterprises. With the rapid growth of the Internet and the Web, database and information systems security remains a key topic in business and in the public sector, with implications for the whole of society. Research Advances in Database and Information Systems Security covers issues related to the security and privacy of information in a wide range of applications, including: Critical Infrastructure Protection; Electronic Commerce; Information Assurance; Intrusion Detection; Workflow; Policy Modeling; Multilevel Security; Role-Based Access Control; Data Mining; Data Warehouses; Temporal Authorization Models; Object-Oriented Databases. This book contains papers and panel discussions from the Thirteenth Annual Working Conference on Database Security, organized by the International Federation for Information Processing (IFIP) and held July 25-28, 1999, in Seattle, Washington, USA. Research Advances in Database and Information Systems Security provides invaluable reading for faculty and advanced students as well as for industrial researchers and practitioners engaged in database security research and development.
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The respective chapters share highly novel methodological insights in order to guide the reader through the basics of intertextuality. In Part 1, "Theory", the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, "Practice", specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a "gentle" introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book's companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality that are presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience. From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
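As a flavor of the lexical-matching concept described above, and emphatically not the book's own R/C++ code, a few lines of Python can already score candidate text pairs for reuse using Jaccard similarity over word bigrams; the example sentences are invented:

    # Lexical matching via Jaccard similarity over word bigrams: a simple
    # way to score one text's reuse of another. Example strings are invented.
    def bigrams(text: str) -> set[tuple[str, str]]:
        words = text.lower().split()
        return {(words[i], words[i + 1]) for i in range(len(words) - 1)}

    def jaccard(a: str, b: str) -> float:
        ga, gb = bigrams(a), bigrams(b)
        return len(ga & gb) / len(ga | gb) if ga or gb else 0.0

    # A tweet echoing a catchphrase scores well above unrelated text.
    print(jaccard("winter is coming for us all",
                  "brace yourselves winter is coming"))   # ~0.29
    print(jaccard("winter is coming for us all",
                  "a totally unrelated sentence"))        # 0.0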
This book investigates the ways in which data-disclosure systems can promote public value by encouraging the disclosure and reuse of privately-held data in ways that support collective values such as environmental sustainability. Supported by funding from the National Science Foundation, the authors' research team has been working on one such system, designed to enhance consumers' ability to access information about the sustainability of the products that they buy and the supply chains that produce them. Pulled by rapidly developing technology and pushed by budget cuts, politicians and public managers are attempting to find ways to increase the public value of their actions. Policymakers are increasingly acknowledging the potential that lies in publicly disclosing more of the data that they hold, as well as incentivizing individuals and organizations to access, use, and combine it in new ways. Due to technological advances which include smarter phones, better ways to track objects and people as they travel, and more efficient data processing, it is now possible to build systems which use shared, transparent data in creative ways. The book adds to the current conversation among academics and practitioners about how to promote public value through data disclosure, focusing particularly on the roles that governments, businesses and non-profit actors can play in this process, making it of interest to both scholars and policy-makers.
Databases and database systems have become an essential part of everyday life, whether in banking, online shopping, or airline and hotel reservations. These trends place more demands on the capabilities of future database systems, which need to evolve into decision-making systems based on data from multiple sources of varying reliability. In this book a model for the next generation of database systems is presented. It demonstrates how to quantize favorable and unfavorable qualitative facts so that they can be stored and processed efficiently, as well as how to use the reliability of the contributing sources in decision making. The concept of a confidence index set (ciset) is introduced in order to model these issues mathematically. A simple introduction to relational database systems allows readers with no background in database theory to appreciate the further contents of this work, especially the extended relational operations and semantics of the ciset relational database model.
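The book develops its own formal ciset semantics; purely as a loose illustration of the idea of quantizing qualitative facts by source reliability, one might store a confidence pair per fact, as in this Python sketch (all names and the combination rule here are invented):

    # Illustration only -- not the book's formal ciset definition. Each fact
    # carries favorable and unfavorable confidence values derived from the
    # reliability of the sources reporting it.
    from dataclasses import dataclass

    @dataclass
    class Fact:
        value: str
        favorable: float     # confidence supporting the fact, in [0, 1]
        unfavorable: float   # confidence against the fact, in [0, 1]

    def from_source(value: str, supports: bool, reliability: float) -> Fact:
        """Quantize one source's qualitative report into a confidence pair."""
        return Fact(value, reliability if supports else 0.0,
                    0.0 if supports else reliability)

    def combine(a: Fact, b: Fact) -> Fact:
        """Merge evidence about the same fact (max is one simple choice)."""
        assert a.value == b.value
        return Fact(a.value, max(a.favorable, b.favorable),
                    max(a.unfavorable, b.unfavorable))

    vacancy = combine(from_source("hotel has vacancy", True, 0.9),
                      from_source("hotel has vacancy", False, 0.4))
    print(vacancy)   # favorable=0.9, unfavorable=0.4: decide with both in view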
The IFIP World Computer Congress (WCC) is one of the most important conferences in the area of computer science at the worldwide level, and it has a federated structure which takes into account the rapidly growing and expanding interests in this area. Informatics is rapidly changing and becoming more and more connected to a number of human and social science disciplines. Human-computer interaction is now a mature and still dynamically evolving part of this area, which is represented in IFIP by the Technical Committee 13 on HCI. In this WCC edition it was interesting and useful to have again a Symposium on Human-Computer Interaction in order to present and discuss a number of contributions in this field. There has been increasing awareness among designers of interactive systems of the importance of designing for usability, but we are still far from having products that are really usable, and usability can mean different things depending on the application domain. We are all aware that too many users of current technology often feel frustrated because computer systems are not compatible with their abilities and needs in existing work practices. As designers of tomorrow's technology, we have the responsibility of creating computer artifacts that would permit a better user experience with the various computing devices, so that users may enjoy more satisfying experiences with information and communications technologies.
Understanding sequence data, and the ability to utilize this hidden knowledge, can have a significant impact on many aspects of our society. Examples of sequence data include DNA, protein sequences, customer purchase histories, web surfing histories, and more. Sequence Data Mining provides balanced coverage of the existing results on sequence data mining, as well as pattern types and associated pattern-mining methods. While there are several books on data mining and sequence data analysis, currently there are none that balance both of these topics. This professional volume fills the gap, allowing readers to access state-of-the-art results in one place. Sequence Data Mining is designed for professionals working in bioinformatics, genomics, web services, and financial data analysis. The book is also suitable for advanced-level students in computer science and bioengineering. Foreword by Professor Jiawei Han, University of Illinois at Urbana-Champaign.
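One staple task in this field is support counting for sequential patterns: what fraction of the sequences in a database contain a given pattern's items in order (not necessarily contiguously)? A minimal Python sketch, with an invented purchase-history database:

    # Support of a sequential pattern: the fraction of sequences containing
    # the pattern's items in order, gaps allowed. Data is invented.
    def contains_in_order(sequence: list[str], pattern: list[str]) -> bool:
        it = iter(sequence)
        return all(item in it for item in pattern)   # each 'in' advances 'it'

    def support(db: list[list[str]], pattern: list[str]) -> float:
        return sum(contains_in_order(s, pattern) for s in db) / len(db)

    purchases = [
        ["laptop", "mouse", "dock", "monitor"],
        ["phone", "laptop", "monitor"],
        ["laptop", "monitor", "mouse"],
    ]
    print(support(purchases, ["laptop", "monitor"]))   # 1.0: all three match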
Searching for Semantics: Data Mining, Reverse Engineering. Stefano Spaccapietra (Swiss Federal Institute of Technology, Lausanne, Switzerland) and Fred Maryanski (University of Connecticut, Storrs, CT, USA). Review and future directions: In the last few years, database semantics research has turned sharply from a highly theoretical domain to one with more focus on practical aspects. The DS-7 Working Conference held in October 1997 in Leysin, Switzerland, demonstrated the more pragmatic orientation of the current generation of leading researchers. The papers presented at the meeting emphasized two major areas: the discovery of semantics and semantic data modeling. The work in the latter category indicates that although object-oriented database management systems have emerged as commercially viable products, many fundamental modeling issues require further investigation. Today's object-oriented systems provide the capability to describe complex objects and include techniques for mapping from a relational database to objects. However, we must further explore the expression of information regarding the dimensions of time and space. Semantic models possess the richness to describe systems containing spatial and temporal data. The challenge of incorporating these features in a manner that promotes efficient manipulation by the subject specialist still requires extensive development.
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters cover major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class to multiple-target tracking in military surveillance systems, experimental high-energy physics, and parallel processing are presented. Audience: researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
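The best-known NAP is the quadratic assignment problem (QAP), in which the cost of assigning facilities to locations depends on products of flows and distances. A minimal Python sketch (exhaustive search, so only viable for tiny instances; the matrices are invented):

    # Quadratic assignment: minimize the sum over i, j of
    # flow[i][j] * dist[perm[i]][perm[j]]. Brute force is fine for n = 3;
    # real NAP solvers use the bounds and heuristics surveyed in the book.
    from itertools import permutations

    flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
    dist = [[0, 5, 2], [5, 0, 4], [2, 4, 0]]

    def qap_cost(perm: tuple[int, ...]) -> int:
        n = len(perm)
        return sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))

    best = min(permutations(range(3)), key=qap_cost)
    print(best, qap_cost(best))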
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability is therefore closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship manifests itself in three strands of work. First, application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling, and view synchrony are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. Second, replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. Third, application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets, and CCS are useful tools to specify and verify fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
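The replica-management point is easy to demonstrate: if deterministic replicas apply the same events in the same total order, their states cannot diverge. A minimal Python sketch with an invented counter state machine:

    # Deterministic replicas + identical event order => identical state.
    # Reordering ("mul", 3) before ("add", 5) would give a different result,
    # which is exactly why replicas must perceive events identically.
    class CounterReplica:
        def __init__(self) -> None:
            self.state = 0

        def apply(self, event: tuple[str, int]) -> None:
            op, arg = event
            if op == "add":
                self.state += arg
            elif op == "mul":
                self.state *= arg

    log = [("add", 5), ("mul", 3), ("add", 2)]   # one agreed-upon total order
    r1, r2 = CounterReplica(), CounterReplica()
    for event in log:
        r1.apply(event)
        r2.apply(event)
    assert r1.state == r2.state == 17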
This volume provides an overview of multimedia data mining and knowledge discovery and discusses the variety of hot topics in multimedia data mining research. It describes the objectives and current tendencies in multimedia data mining research and their applications. Each part contains an overview of its chapters and leads the reader with a structured approach through the diverse subjects in the field.
Fuzzy Databases: Modeling, Design and Implementation focuses on semantic aspects which have not been studied in previous works and extends the EER model with fuzzy capabilities. The proposed model, called the FuzzyEER model, includes extensions such as fuzzy attributes, fuzzy aggregations and different aspects of specializations, such as fuzzy degrees and fuzzy constraints. All these fuzzy extensions offer greater expressiveness in conceptual design. This book, while providing a global and integrated view of fuzzy database constructions, serves as an introduction to fuzzy logic, fuzzy databases and fuzzy modeling in databases.
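The simplest of these extensions, the fuzzy attribute, can be illustrated in a few lines of Python (the membership function and its breakpoints are invented for the sketch, not taken from the book):

    # A fuzzy attribute stores a degree of membership in [0, 1] instead of a
    # crisp yes/no; queries can then rank rows by degree rather than filter.
    def young_degree(age: float) -> float:
        """Trapezoidal membership: fully 'young' below 25, not 'young' at 40+."""
        if age <= 25:
            return 1.0
        if age >= 40:
            return 0.0
        return (40 - age) / (40 - 25)

    for age in (22, 30, 38):
        print(age, round(young_degree(age), 2))   # 1.0, 0.67, 0.13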
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), while the demand for higher speedups increases. The job of a restructuring compiler is to discover the dependence structure of a given program and the characteristics of the given machine. Much attention has been focused on the Fortran do loop, since this is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series Loop Transformations for Restructuring Compilers provides a rigorous theory of loop transformations and dependence analysis. The aim is to develop the transformations in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations, so that the algorithms that implement the transformations can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
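A small taste of the dependence analysis underlying such transformations is the classic GCD test: array accesses a(c1*i + k1) and a(c2*i + k2) in a loop can refer to the same element only if gcd(c1, c2) divides k2 - k1. A Python sketch with invented coefficients:

    # GCD dependence test: c1*x + k1 = c2*y + k2 has integer solutions only
    # if gcd(c1, c2) divides k2 - k1. If it does not, the loop's writes and
    # reads never alias, and the loop can be parallelized safely.
    from math import gcd

    def may_depend(c1: int, k1: int, c2: int, k2: int) -> bool:
        """True if the test cannot rule out a dependence."""
        return (k2 - k1) % gcd(c1, c2) == 0

    # do i: a(2*i) = ...;  ... = a(2*i + 1)
    print(may_depend(2, 0, 2, 1))   # False: gcd(2, 2) = 2 does not divide 1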
This book gathers high-quality research articles and reviews that reflect the latest advances in the smart network-inspired paradigm and address current issues in IoT applications as well as other emerging areas. Featuring work from both academic and industry researchers, the book provides a concise overview of the current state of the art and highlights some of the most promising and exciting new ideas and techniques. Accordingly, it offers a valuable resource for senior undergraduate and graduate students, researchers, policymakers, and IT professionals and providers working in areas that call for state-of-the-art networks and IoT applications.
Software product lines represent perhaps the most exciting paradigm shift in software development since the advent of high-level programming languages. Nowhere else in software engineering have we seen such breathtaking improvements in cost, quality, time to market, and developer productivity, often registering in the order-of-magnitude range. Here, the authors combine academic research results with real-world industrial experiences, thus presenting a broad view on product line engineering so that both managers and technical specialists will benefit from exposure to this work. They capture the wealth of knowledge that eight companies have gathered during the introduction of the software product line engineering approach in their daily practice.
This book springs from a multidisciplinary, multi-organizational, and multi-sector conversation about the privacy and ethical implications of research in human affairs using big data. The need to cultivate and enlist the public's trust in the abilities of particular scientists and scientific institutions constitutes one of this book's major themes. The advent of the Internet, the mass digitization of research information, and social media brought about, among many other things, the ability to harvest - sometimes implicitly - a wealth of human genomic, biological, behavioral, economic, political, and social data for the purposes of scientific research as well as commerce, government affairs, and social interaction. What types of ethical dilemmas did such changes generate? How should scientists collect, manipulate, and disseminate this information? The effects of this revolution and its ethical implications are wide-ranging. This book gathers the views of a broad array of investigators, practitioners, and stakeholders in big data research on human beings, all of whom routinely reflect on its privacy and ethical implications. Dedicated to the practice of ethical reasoning and reflection in action, the book offers a range of observations, lessons learned, reasoning tools, and suggestions for institutional practice to promote responsible big data research on human affairs. It caters to a broad audience of educators, researchers, and practitioners. Educators can use the volume in courses related to big data handling and processing. Researchers can use it for designing new methods of collecting, processing, and disseminating big data, whether in raw form or as analysis results. Lastly, practitioners can use it to steer future tools or procedures for handling big data. As this topic represents an area of great interest that still remains largely undeveloped, this book is sure to attract significant interest by filling an obvious gap in the currently available literature.
The book collects contributions from experts worldwide addressing recent scholarship in social network analysis, such as influence spread, link prediction, dynamic network biclustering, and delurking. It covers both new topics and new solutions to known problems. The contributions rely on established methods and techniques in graph theory, machine learning, stochastic modelling, user behavior analysis and natural language processing, to name a few, and the text shows how to use these methods and techniques to manage practical problems and situations. Trends in Social Network Analysis: Information Propagation, User Behavior Modelling, Forecasting, and Vulnerability Assessment appeals to students, researchers, and professionals working in the field.
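Link prediction, one of the topics collected here, has a classic baseline that fits in a few lines: score a candidate pair of nodes by the number of neighbors they share. A Python sketch over an invented friendship graph:

    # Common-neighbors link prediction: the more neighbors u and v share,
    # the more plausible a future u-v edge. Graph data is invented.
    graph = {
        "ann": {"bob", "cat", "dan"},
        "bob": {"ann", "cat"},
        "cat": {"ann", "bob", "dan"},
        "dan": {"ann", "cat", "eve"},
        "eve": {"dan"},
    }

    def common_neighbors(u: str, v: str) -> int:
        return len(graph[u] & graph[v])

    print(common_neighbors("bob", "dan"))   # 2 (ann, cat): plausible link
    print(common_neighbors("bob", "eve"))   # 0: no evidence for a link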
You may like...
Mathematical Methods in Data Science, by Jingli Ren and Haiyan Wang (Paperback, R3,925)
Data Analytics for Social Microblogging…, by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,335)
Management Of Information Security, by Michael Whitman and Herbert Mattord (Paperback)
Database Principles - Fundamentals of…, by Carlos Coronel, Keeley Crockett, … (Paperback)