This book contains a selection of the best papers given at an international conference on advanced computer systems. The Advanced Computer Systems Conference was held in October 2006, in Miedzyzdroje, Poland. The book is organized into four topical areas: Artificial Intelligence; Computer Security and Safety; Image Analysis, Graphics and Biometrics; and Computer Simulation and Data Analysis.
Data compression is now indispensable to products and services of many industries including computers, communications, healthcare, publishing and entertainment. This invaluable resource introduces this area to information system managers and others who need to understand how it is changing the world of digital systems. For those who know the technology well, it reveals what happens when data compression is used in real-world applications and provides guidance for future technology development.
Computer technology evolves at a rate that challenges companies to maintain appropriate security for their enterprises. With the rapid growth in Internet and www facilities, database and information systems security remains a key topic in businesses and in the public sector, with implications for the whole of society. Research Advances in Database and Information Systems Security covers issues related to security and privacy of information in a wide range of applications, including: Critical Infrastructure Protection; Electronic Commerce; Information Assurance; Intrusion Detection; Workflow; Policy Modeling; Multilevel Security; Role-Based Access Control; Data Mining; Data Warehouses; Temporal Authorization Models; Object-Oriented Databases. This book contains papers and panel discussions from the Thirteenth Annual Working Conference on Database Security, organized by the International Federation for Information Processing (IFIP) and held July 25-28, 1999, in Seattle, Washington, USA. Research Advances in Database and Information Systems Security provides invaluable reading for faculty and advanced students as well as for industrial researchers and practitioners engaged in database security research and development.
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The respective chapters share highly novel methodological insights in order to guide the reader through the basics of intertextuality. In Part 1, "Theory", the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, "Practice", specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a "gentle" introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book's companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality that are presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience. From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
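The lexical matching that the blurb above names as its first major concept can be sketched very simply. The following is an illustrative word-bigram Jaccard similarity in Python, not the book's own algorithm; all names here are invented for the example.

```python
# Illustrative sketch of lexical matching: compare two texts by the
# overlap of their word n-grams (Jaccard similarity). This is one of
# the "simple yet effective" matching strategies the blurb alludes to.

def ngrams(text, n=2):
    """Return the set of word n-grams occurring in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=2):
    """Jaccard similarity of the n-gram sets of two texts."""
    x, y = ngrams(a, n), ngrams(b, n)
    if not x and not y:
        return 0.0
    return len(x & y) / len(x | y)

print(jaccard("to be or not to be", "to be or not to do"))  # -> 0.8
```

A high score flags a candidate reuse pair for closer (human or algorithmic) inspection; real systems would add normalization, stemming, and fuzzier matching.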
Pulled by rapidly developing technology and pushed by budget cuts, politicians and public managers are attempting to find ways to increase the public value of their actions. Policymakers are increasingly acknowledging the potential that lies in publicly disclosing more of the data that they hold, as well as incentivizing individuals and organizations to access, use, and combine it in new ways. Due to technological advances, which include smarter phones, better ways to track objects and people as they travel, and more efficient data processing, it is now possible to build systems which use shared, transparent data in creative ways. This book investigates the ways in which such systems can promote public value by encouraging the disclosure and reuse of privately-held data in ways that support collective values such as environmental sustainability. Supported by funding from the National Science Foundation, the authors' research team has been working on one such system, designed to enhance consumers' ability to access information about the sustainability of the products that they buy and the supply chains that produce them. The book adds to the current conversation among academics and practitioners about how to promote public value through data disclosure, focusing particularly on the roles that governments, businesses and non-profit actors can play in this process, making it of interest to both scholars and policy-makers.
Databases and database systems have become an essential part of everyday life, such as in banking activities, online shopping, or reservations of airline tickets and hotels. These trends place more demands on the capabilities of future database systems, which need to evolve into decision-making systems based on data from multiple sources with varying reliability. In this book a model for the next generation of database systems is presented. It is demonstrated how to quantize favorable and unfavorable qualitative facts so that they can be stored and processed efficiently, as well as how to use the reliability of the contributing sources in our decision making. The concept of a confidence index set (ciset) is introduced in order to mathematically model the above issues. A simple introduction to relational database systems is given, allowing anyone with no background in database theory to appreciate the further contents of this work, especially the extended relational operations and semantics of the ciset relational database model.
The IFIP World Computer Congress (WCC) is one of the most important conferences in the area of computer science at the worldwide level and it has a federated structure, which takes into account the rapidly growing and expanding interests in this area. Informatics is rapidly changing and becoming more and more connected to a number of human and social science disciplines. Human-computer interaction is now a mature and still dynamically evolving part of this area, which is represented in IFIP by the Technical Committee 13 on HCI. In this WCC edition it was interesting and useful to have again a Symposium on Human-Computer Interaction in order to present and discuss a number of contributions in this field. There has been increasing awareness among designers of interactive systems of the importance of designing for usability, but we are still far from having products that are really usable, and usability can mean different things depending on the application domain. We are all aware that too many users of current technology often feel frustrated because computer systems are not compatible with their abilities and needs in existing work practices. As designers of tomorrow's technology, we have the responsibility of creating computer artifacts that would permit better user experience with the various computing devices, so that users may enjoy more satisfying experiences with information and communications technologies.
Understanding sequence data, and the ability to utilize this hidden knowledge, creates a significant impact on many aspects of our society. Examples of sequence data include DNA, protein, customer purchase history, web surfing history, and more. Sequence Data Mining provides balanced coverage of the existing results on sequence data mining, as well as pattern types and associated pattern mining methods. While there are several books on data mining and sequence data analysis, currently there are no books that balance both of these topics. This professional volume fills in the gap, allowing readers to access state-of-the-art results in one place. Sequence Data Mining is designed for professionals working in bioinformatics, genomics, web services, and financial data analysis. This book is also suitable for advanced-level students in computer science and bioengineering. Foreword by Professor Jiawei Han, University of Illinois at Urbana-Champaign.
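The core operation behind the sequential pattern mining this blurb describes can be sketched in a few lines: testing whether a pattern occurs as a (not necessarily contiguous) subsequence, and counting its support across a sequence database. This is an illustrative sketch with invented data, not the book's own method.

```python
# Minimal sketch of sequential pattern support counting, the building
# block of sequence data mining: a pattern "occurs" in a sequence if
# its items appear in order, not necessarily adjacently.

def is_subsequence(pattern, seq):
    """True if pattern's items appear in seq in order."""
    it = iter(seq)
    return all(item in it for item in pattern)  # iterator is consumed left to right

def support(pattern, db):
    """Fraction of sequences in db that contain pattern as a subsequence."""
    return sum(is_subsequence(pattern, s) for s in db) / len(db)

db = ["ACGT", "AGGT", "CCAT"]          # toy DNA-like sequence database
print(support("CG", db))                # only "ACGT" contains C before G
```

Pattern-mining algorithms such as GSP or PrefixSpan enumerate candidate patterns and keep those whose support exceeds a threshold; the check above is the primitive they repeat.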
Searching for Semantics: Data Mining, Reverse Engineering. Stefano Spaccapietra (Swiss Federal Institute of Technology, Lausanne, Switzerland) and Fred Maryanski (University of Connecticut, Storrs, CT, USA). Review and future directions: In the last few years, database semantics research has turned sharply from a highly theoretical domain to one with more focus on practical aspects. The DS-7 Working Conference held in October 1997 in Leysin, Switzerland, demonstrated the more pragmatic orientation of the current generation of leading researchers. The papers presented at the meeting emphasized two major areas: the discovery of semantics and semantic data modeling. The work in the latter category indicates that although object-oriented database management systems have emerged as commercially viable products, many fundamental modeling issues require further investigation. Today's object-oriented systems provide the capability to describe complex objects and include techniques for mapping from a relational database to objects. However, we must further explore the expression of information regarding the dimensions of time and space. Semantic models possess the richness to describe systems containing spatial and temporal data. The challenge of incorporating these features in a manner that promotes efficient manipulation by the subject specialist still requires extensive development.
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide in a single volume, major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class in the areas of multiple-target tracking in the context of military surveillance systems, of experimental high energy physics, and of parallel processing are presented. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
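For readers unfamiliar with the classic Linear Assignment Problem that NAPs extend, here is a tiny illustrative sketch: assign n workers to n tasks, one each, minimizing total cost. Brute force over permutations is shown purely for clarity; practical solvers use polynomial-time methods such as the Hungarian algorithm, and the cost matrix below is invented.

```python
# Brute-force Linear Assignment Problem solver (illustrative only:
# O(n!) enumeration, fine for tiny n). NAPs generalize this by making
# the objective nonlinear in the assignment, e.g. quadratic terms.
from itertools import permutations

def linear_assignment(cost):
    """Return (best_cost, assignment) minimizing sum(cost[i][p[i]])."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return sum(cost[i][best[i]] for i in range(n)), list(best)

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(linear_assignment(cost))  # -> (5, [1, 0, 2])
```

The difficulty the blurb refers to arises when cost depends on pairs (or tuples) of assignments at once, as in the quadratic assignment problem, which destroys the structure that makes the linear case efficiently solvable.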
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability is therefore closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship manifests itself in three strands of work. (1) Application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling, and view synchrony are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. (2) Replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. (3) Application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets, and CCS are useful tools to specify and verify fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
This volume provides an overview of multimedia data mining and knowledge discovery and discusses the variety of hot topics in multimedia data mining research. It describes the objectives and current tendencies in multimedia data mining research and their applications. Each part contains an overview of its chapters and leads the reader with a structured approach through the diverse subjects in the field.
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), while the demand for higher speedups keeps increasing. The job of a restructuring compiler is to discover the dependence structure of a given program and the characteristics of the given machine. Much attention has been focused on the Fortran do loop, since this is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series "Loop Transformations for Restructuring Compilers" provides a rigorous theory of loop transformations and dependence analysis. The transformations are developed in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations, so that the algorithms that implement them can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
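A toy illustration of the iteration-level transformations the series formalizes is loop interchange: swapping the order of two nested loops. The sketch below (in Python for readability, though the book's setting is Fortran) shows the legality condition informally: when no data dependence links iterations, interchange preserves results. Names and the example computation are invented.

```python
# Loop interchange, one of the simplest iteration-level loop
# transformations. Each element a[i][j] is written exactly once and
# reads nothing written by another iteration, so there is no
# dependence and the interchange is legal.

def original(n):
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a[i][j] = i + 2 * j
    return a

def interchanged(n):
    a = [[0] * n for _ in range(n)]
    for j in range(n):        # loops swapped
        for i in range(n):
            a[i][j] = i + 2 * j
    return a

assert original(4) == interchanged(4)  # same result, different traversal
```

Dependence analysis, the subject of the series' first volume, is precisely the machinery for deciding when such reorderings are safe; with a dependence like a[i][j] = a[i-1][j+1] + 1, the interchange would change results.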
This book gathers high-quality research articles and reviews that reflect the latest advances in the smart network-inspired paradigm and address current issues in IoT applications as well as other emerging areas. Featuring work from both academic and industry researchers, the book provides a concise overview of the current state of the art and highlights some of the most promising and exciting new ideas and techniques. Accordingly, it offers a valuable resource for senior undergraduate and graduate students, researchers, policymakers, and IT professionals and providers working in areas that call for state-of-the-art networks and IoT applications.
Software product lines represent perhaps the most exciting paradigm shift in software development since the advent of high-level programming languages. Nowhere else in software engineering have we seen such breathtaking improvements in cost, quality, time to market, and developer productivity, often registering in the order-of-magnitude range. Here, the authors combine academic research results with real-world industrial experiences, thus presenting a broad view on product line engineering so that both managers and technical specialists will benefit from exposure to this work. They capture the wealth of knowledge that eight companies have gathered during the introduction of the software product line engineering approach in their daily practice.
This book springs from a multidisciplinary, multi-organizational, and multi-sector conversation about the privacy and ethical implications of research in human affairs using big data. The need to cultivate and enlist the public's trust in the abilities of particular scientists and scientific institutions constitutes one of this book's major themes. The advent of the Internet, the mass digitization of research information, and social media brought about, among many other things, the ability to harvest - sometimes implicitly - a wealth of human genomic, biological, behavioral, economic, political, and social data for the purposes of scientific research as well as commerce, government affairs, and social interaction. What type of ethical dilemmas did such changes generate? How should scientists collect, manipulate, and disseminate this information? The effects of this revolution and its ethical implications are wide-ranging. This book includes the opinions of myriad investigators, practitioners, and stakeholders in big data on human beings who also routinely reflect on the privacy and ethical issues of this phenomenon. Dedicated to the practice of ethical reasoning and reflection in action, the book offers a range of observations, lessons learned, reasoning tools, and suggestions for institutional practice to promote responsible big data research on human affairs. It caters to a broad audience of educators, researchers, and practitioners. Educators can use the volume in courses related to big data handling and processing. Researchers can use it for designing new methods of collecting, processing, and disseminating big data, whether in raw form or as analysis results. Lastly, practitioners can use it to steer future tools or procedures for handling big data. As this topic represents an area of great interest that still remains largely undeveloped, this book is sure to attract significant interest by filling an obvious gap in currently available literature.
The book collects contributions from experts worldwide addressing recent scholarship in social network analysis such as influence spread, link prediction, dynamic network biclustering, and delurking. It covers both new topics and new solutions to known problems. The contributions rely on established methods and techniques in graph theory, machine learning, stochastic modelling, user behavior analysis and natural language processing, just to name a few. This text provides an understanding of using such methods and techniques in order to manage practical problems and situations. Trends in Social Network Analysis: Information Propagation, User Behavior Modelling, Forecasting, and Vulnerability Assessment appeals to students, researchers, and professionals working in the field.
This book provides a summary of the manifold audio- and web-based approaches to music information retrieval (MIR) research. In contrast to other books dealing solely with music signal processing, it addresses additional cultural and listener-centric aspects and thus provides a more holistic view. Consequently, the text includes methods operating on features extracted directly from the audio signal, as well as methods operating on features extracted from contextual information, either the cultural context of music as represented on the web or the user and usage context of music. Following the prevalent document-centered paradigm of information retrieval, the book addresses models of music similarity that extract computational features to describe an entity that represents music on any level (e.g., song, album, or artist), and methods to calculate the similarity between them. While this perspective and the representations discussed cannot describe all musical dimensions, they enable us to effectively find music of similar qualities by providing abstract summarizations of musical artifacts from different modalities. The text at hand provides a comprehensive and accessible introduction to the topics of music search, retrieval, and recommendation from an academic perspective. It will not only allow those new to the field to quickly access MIR from an information retrieval point of view but also raise awareness for the developments of the music domain within the greater IR community. In this regard, Part I deals with content-based MIR, in particular the extraction of features from the music signal and similarity calculation for content-based retrieval. Part II subsequently addresses MIR methods that make use of the digitally accessible cultural context of music. Part III addresses methods of collaborative filtering and user-aware and multi-modal retrieval, while Part IV explores current and future applications of music retrieval and recommendation.
Advanced visual analysis and problem solving has been conducted successfully for millennia. The Pythagorean Theorem was proven using visual means more than 2000 years ago. In the 19th century, John Snow stopped a cholera epidemic in London by proposing that a specific water pump be shut down. He discovered that pump by visually correlating data on a city map. The goal of this book is to present the current trends in visual and spatial analysis for data mining, reasoning, problem solving and decision-making. This is the first book to focus on visual decision making and problem solving in general with specific applications in the geospatial domain, combining theory with real-world practice. The book is unique in its integration of modern symbolic and visual approaches to decision making and problem solving. As such, it ties together much of the monograph and textbook literature in these emerging areas. This book contains 21 chapters that have been grouped into five parts: (1) visual problem solving and decision making, (2) visual and heterogeneous reasoning, (3) visual correlation, (4) visual and spatial data mining, and (5) visual and spatial problem solving in geospatial domains. Each chapter ends with a summary and exercises. The book is intended for professionals and graduate students in computer science, applied mathematics, imaging science and Geospatial Information Systems (GIS). In addition to being a state-of-the-art research compilation, this book can be used as a text for advanced courses on subjects such as modeling, computer graphics, visualization, image processing, data mining, GIS, and algorithm analysis.
Automation is nothing new to industry. It has a long tradition on the factory floor, where its constant objective has been to increase the productivity of manufacturing processes. Only with the advent of computers could the focus of automation widen to include administrative and information-handling tasks. More recently, automation has been extended to the more intellectual tasks of production planning and control, material and resource planning, engineering design, and quality control. New challenges arise in the form of flexible manufacturing, assembly automation, and automated floor vehicles, to name just a few. The sheer complexity of the problems as well as the state of the art has led scientists and engineers to concentrate on issues that could easily be isolated. For example, it was much simpler to build CAD systems whose sole objective was to ease the task of drawing, rather than to worry at the same time about how the design results could be interfaced with the manufacturing or assembly processes. It was less problematic to gather statistics from quality control and to print reports than to react immediately to first hints of irregularities by interfacing with the designers or manufacturing control, or, even better, by automatically diagnosing the causes from the design and planning data. A heavy, though perhaps unavoidable, price must today be paid whenever one tries to assemble these isolated solutions into a larger, integrated system.
Recommender systems, software programs that learn from human behavior and make predictions of what products we are expected to appreciate and purchase, have become an integral part of our everyday life. They proliferate across electronic commerce around the globe and exist for virtually all sorts of consumable goods, such as books, movies, music, or clothes. At the same time, a new evolution on the Web has started to take shape, commonly known as the "Web 2.0" or the "Social Web". Consumer-generated media has become rife, and social networks have emerged and are pulling significant shares of Web traffic. In line with these developments, novel information and knowledge artifacts have become readily available on the Web, created by the collective effort of millions of people. This textbook presents approaches to exploit the new Social Web fountain of knowledge, zeroing in first and foremost on two of those information artifacts, namely classification taxonomies and trust networks. These two are used to improve the performance of product-focused recommender systems: While classification taxonomies are appropriate means to fight the sparsity problem prevalent in many productive recommender systems, interpersonal trust ties - when used as proxies for interest similarity - are able to mitigate the recommenders' scalability problem.
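The sparsity-fighting role of taxonomies described above can be sketched briefly: instead of comparing users on their (usually non-overlapping) item purchases, lift each purchase to its taxonomy category and compare category profiles. The taxonomy, items, and users below are invented for illustration; this is not the textbook's actual algorithm.

```python
# Hedged sketch: taxonomy-based user similarity for recommenders.
# Two users with zero items in common can still look similar once
# their purchases are mapped onto shared taxonomy categories.
from math import sqrt

TAXONOMY = {"dune": "scifi", "foundation": "scifi", "emma": "classic"}

def profile(items):
    """Count a user's purchases per taxonomy category."""
    p = {}
    for item in items:
        cat = TAXONOMY[item]
        p[cat] = p.get(cat, 0) + 1
    return p

def cosine(p, q):
    """Cosine similarity between two category-count profiles."""
    dot = sum(p.get(c, 0) * q.get(c, 0) for c in set(p) | set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

alice = profile(["dune"])
bob = profile(["foundation"])   # no item overlap with alice,
print(cosine(alice, bob))       # yet identical category profiles
```

Item-level overlap here is zero, so a plain collaborative filter would see the users as unrelated; the taxonomy lift recovers their shared interest, which is the sparsity mitigation the blurb refers to.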
Biometric Solutions for Authentication in an E-World provides a collection of sixteen chapters containing tutorial articles and new material in a unified manner. This includes the basic concepts, theories, and characteristic features of integrating/formulating different facets of biometric solutions for authentication, with recent developments and significant applications in an E-world. This book provides the reader with basic concepts of biometrics and an in-depth discussion exploring biometric technologies in various applications in an E-world. It also includes a detailed description of typical biometric-based security systems and up-to-date coverage of how these issues are developed. Experts from all over the world demonstrate the various ways this integration can be made to efficiently design methodologies, algorithms, architectures, and implementations for biometric-based applications in an E-world.
The rapid advancement of semantic web technologies, along with the fact that they are at various levels of maturity, has left many practitioners confused about the current state of these technologies. Focusing on the most mature technologies, Applied Semantic Web Technologies integrates theory with case studies to illustrate the history, current state, and future direction of the semantic web. It maintains an emphasis on real-world applications and examines the technical and practical issues related to the use of semantic technologies in intelligent information management. The book starts with an introduction to the fundamentals, reviewing ontology basics, ontology languages, and research related to ontology alignment, mediation, and mapping. Next, it covers ontology engineering issues and presents a collaborative ontology engineering tool that is an extension of the Semantic MediaWiki. Unveiling a novel approach to data and knowledge engineering, the text introduces cutting-edge taxonomy-aware algorithms; examines semantics-based service composition in transport logistics; offers ontology alignment tools that use information visualization techniques; explains how to enrich the representation of entity semantics in an ontology; and addresses challenges in tackling the content creation bottleneck. Using case studies, the book provides authoritative insights and highlights valuable lessons learned by the authors, information systems veterans with decades of experience. They explain how to create social ontologies and present examples of the application of semantic technologies in building automation, logistics, ontology-driven business process intelligence, decision making, and energy efficiency in smart homes.
In 2013, the International Conference on Advanced Information Systems Engineering (CAiSE) turns 25. Initially launched in 1989, for all these years the conference has provided a broad forum for researchers working in the area of Information Systems Engineering. To reflect on the work done so far and to examine prospects for future work, the CAiSE Steering Committee decided to present a selection of seminal papers published for the conference during these years and to ask their authors, all prominent researchers in the field, to comment on their work and how it has developed over the years. The scope of the papers selected covers a broad range of topics related to modeling and designing information systems, and collecting and managing requirements, with special attention to how information systems are engineered towards their final development and deployment as software components. With this approach, the book provides not only a historical analysis of how information systems engineering evolved over the years, but also a fascinating social network analysis of the research community. Additionally, many inspiring ideas for future research and new perspectives in this area are sparked by the intriguing comments of the renowned authors.