In this fully updated second edition of the highly acclaimed Managing Gigabytes, authors Witten, Moffat, and Bell continue to provide unparalleled coverage of state-of-the-art techniques for compressing and indexing data. Whatever your field, if you work with large quantities of information, this book is essential reading--an authoritative theoretical resource and a practical guide to meeting the toughest storage and access challenges. It covers the latest developments in compression and indexing and their application on the Web and in digital libraries. It also details dozens of powerful techniques supported by mg, the authors' own system for compressing, storing, and retrieving text, images, and textual images. mg's source code is freely available on the Web.
Enterprises have made amazing advances by taking advantage of data about their business to provide predictions and understanding of their customers, markets, and products. But as the world of business becomes more interconnected and global, enterprise data is no longer a monolith; it is just a part of a vast web of data. Managing data on a world-wide scale is a key capability for any business today. The Semantic Web treats data as a distributed resource on the scale of the World Wide Web, and incorporates features to address the challenges of massive data distribution as part of its basic design. The aim of the first two editions was to motivate the Semantic Web technology stack from end-to-end; to describe not only what the Semantic Web standards are and how they work, but also what their goals are and why they were designed as they are. It tells a coherent story from beginning to end of how the standards work to manage a world-wide distributed web of knowledge in a meaningful way. The third edition builds on this foundation to bring Semantic Web practice to the enterprise. Fabien Gandon joins Dean Allemang and Jim Hendler, bringing with him years of experience in global linked data, to open up the story to a modern view of global linked data. While the overall story is the same, the examples have been brought up to date and applied in a modern setting, where enterprise and global data come together as a living, linked network of data. Also included with the third edition, all of the data sets and queries are available online for study and experimentation at data.world/swwo.
Genetic Programming Theory and Practice explores the emerging interaction between theory and practice in the cutting-edge, machine learning method of Genetic Programming (GP). The material contained in this contributed volume was developed from a workshop at the University of Michigan's Center for the Study of Complex Systems where an international group of genetic programming theorists and practitioners met to examine how GP theory informs practice and how GP practice impacts GP theory. The contributions cover the full spectrum of this relationship and are written by leading GP theorists from major universities, as well as active practitioners from leading industries and businesses. Chapters include such topics as John Koza's development of human-competitive electronic circuit designs; David Goldberg's application of "competent GA" methodology to GP; Jason Daida's discovery of a new set of factors underlying the dynamics of GP starting from applied research; and Stephen Freeland's essay on the lessons of biology for GP and the potential impact of GP on evolutionary theory.
This book focuses on three interdependent challenges related to managing transitions toward sustainable development, namely (a) mapping sustainability for global knowledge e-networking, (b) extending the value chain of knowledge and e-networking, and (c) engaging in explorations of new methods and venues for further developing knowledge and e-networking. While each of these challenges constitutes fundamentally different types of endeavors, they are highly interconnected. Jointly, they contribute to our expansion of knowledge and its applications in support of transitions toward sustainable development. The central theme of this book revolves around ways of transcending barriers that impede the use of knowledge and knowledge networking in transitions toward sustainability. In order to transcend these barriers, we examine the potential contributions of innovations in information technologies as well as computation and representation of attendant complexities. A related theme addresses new ways of managing information and systematic observation for the purpose of enhancing the value of knowledge. Finally, this book shows applications of new methodologies and related findings that contribute to our understanding of sustainability issues that have not yet been explored. In many ways, this is a book of theory and of practice; and it is one of methods as well as policy and performance.
This book elaborates the science and engineering basis for energy-efficient driving in conventional and autonomous cars. After covering the physics of energy-efficient motion in conventional, hybrid, and electric powertrains, the book chiefly focuses on the energy-saving potential of connected and automated vehicles. It reveals how being connected to other vehicles and the infrastructure enables the anticipation of upcoming driving-relevant factors, e.g. hills, curves, slow traffic, state of traffic signals, and movements of nearby vehicles. In turn, automation allows vehicles to adjust their motion more precisely in anticipation of upcoming events, and to save energy. Lastly, the energy-efficient motion of connected and automated vehicles could have a harmonizing effect on mixed traffic, leading to additional energy savings for neighboring vehicles. Building on classical methods of powertrain modeling, optimization, and optimal control, the book further develops the theory of energy-efficient driving. In addition, it presents numerous theoretical and applied case studies that highlight the real-world implications of the theory developed. The book is chiefly intended for undergraduate and graduate engineering students and industry practitioners with a background in mechanical, electrical, or automotive engineering, computer science or robotics.
Every day, more than half of American adult internet users read or write email messages at least once. The prevalence of email has significantly impacted the working world, functioning as a great asset on many levels, yet at times as a costly liability. In an effort to improve various aspects of work-related communication, this work applies sophisticated machine learning techniques to a large body of email data. Several effective models are proposed that can aid with the prioritization of incoming messages, help with coordination of shared tasks, improve tracking of deadlines, and prevent disastrous information leaks. Carvalho presents many data-driven techniques that can positively impact work-related email communication and offers robust models that may be successfully applied to future machine learning tasks.
Despite the growing interest in Real-Time Database Systems, there is no single book that acts as a reference to academics, professionals, and practitioners who wish to understand the issues involved in the design and development of RTDBS. Real-Time Database Systems: Issues and Applications fulfills this need. This book presents the spectrum of issues that may arise in various real-time database applications, the available solutions and technologies that may be used to address these issues, and the open problems that need to be tackled in the future. With rapid advances in this area, several concepts have been proposed without a widely accepted consensus on their definitions and implications. To address this need, the first chapter is an introduction to the key RTDBS concepts and definitions, which is followed by a survey of the state of the art in RTDBS research and practice. The remainder of the book consists of four sections: models and paradigms, applications and benchmarks, scheduling and concurrency control, and experimental systems. The chapters in each section are contributed by experts in the respective areas. Real-Time Database Systems: Issues and Applications is primarily intended for practicing engineers and researchers working in the growing area of real-time database systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, this book will provide a comprehensive reference for well-established results. This book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems or closely related courses.
Computer access is the only way to retrieve up-to-date sequences, and this book shows researchers puzzled by the maze of URLs, sites, and searches how to use internet technology to find and analyze genetic data. The book describes the different types of databases, how to use a specific database to find a sequence that you need, and how to analyze the data to compare it with your own work.
Clustering is an important unsupervised classification technique where data points are grouped such that points that are similar in some sense belong to the same cluster. Cluster analysis is a complex problem as a variety of similarity and dissimilarity measures exist in the literature. This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches. The aim is to find a suitable grouping of the input data set so that some criteria are optimized, and using this the authors frame the clustering problem as an optimization one where the objectives to be optimized may represent different characteristics such as compactness, symmetrical compactness, separation between clusters, or connectivity within a cluster. They explain the techniques in detail and outline many detailed applications in data mining, remote sensing and brain imaging, gene expression data analysis, and face detection. The book will be useful to graduate students and researchers in computer science, electrical engineering, system science, and information technology, both as a text and as a reference book. It will also be useful to researchers and practitioners in industry working on pattern recognition, data mining, soft computing, metaheuristics, bioinformatics, remote sensing, and brain imaging.
This book constitutes the refereed proceedings of the 15th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2014, held in Amsterdam, The Netherlands, in October 2014. The 73 revised papers were carefully selected from 190 submissions. They provide a comprehensive overview of identified challenges and recent advances in various collaborative network (CN) domains and their applications, with a particular focus on the following areas in support of smart networked environments: behavior and coordination; product-service systems; service orientation in collaborative networks; engineering and implementation of collaborative networks; cyber-physical systems; business strategies alignment; innovation networks; sustainability and trust; reference and conceptual models; collaboration platforms; virtual reality and simulation; interoperability and integration; performance management frameworks; performance management systems; risk analysis; optimization in collaborative networks; knowledge management in networks; health and care networks; and mobility and logistics.
This book offers means to handle interference as a central problem of operating wireless networks. It investigates centralized and decentralized methods to avoid and handle interference, as well as approaches that resolve interference constructively. The latter type of approach tries to solve the joint detection and estimation problem of several data streams that share a common medium. In fact, an exciting insight into the operation of networks is that it may be beneficial, in terms of overall throughput, to actively create and manage interference. Thus, when handled properly, "mixing" of data in networks becomes a useful tool of operation rather than the nuisance it has traditionally been treated as. With the development of mobile, robust, ubiquitous, reliable and instantaneous communication being a driving and enabling factor of an information-centric economy, the understanding, mitigation and exploitation of interference in networks must be seen as a centrally important task.
"As organizations have become more sophisticated, pressure to provide information sharing across dissimilar platforms has mounted. In addition, advances in distributed computing and networking, combined with an affordable high level of connectivity, are making information sharing across databases closer to being accomplished... With the advent of the internet, intranets, and affordable network connectivity, business reengineering has become a necessity for modern corporations to stay competitive in the global market... An end-user in a heterogeneous computing environment should be able to not only invoke multiple existing software systems and hardware devices, but also coordinate their interactions." --From the Introduction
Seventeen leaders in the field contributed chapters specifically for this unique book, together providing the most comprehensive resource available today on managing multidatabase systems involving heterogeneous and autonomous databases. The book covers virtually all fundamental issues, concepts, and major research topics.
The present volume provides a collection of seven articles containing new and high quality research results demonstrating the significance of Multi-objective Evolutionary Algorithms (MOEA) for data mining tasks in Knowledge Discovery from Databases (KDD). These articles are written by leading experts around the world. It is shown how the different MOEAs can be utilized, both in individual and integrated manner, in various ways to efficiently mine data from large databases.
This comprehensive book focuses on better big-data security for healthcare organizations. Following an extensive introduction to the Internet of Things (IoT) in healthcare including challenging topics and scenarios, it offers an in-depth analysis of medical body area networks with the 5th generation of IoT communication technology along with its nanotechnology. It also describes a novel strategic framework and computationally intelligent model to measure possible security vulnerabilities in the context of e-health. Moreover, the book addresses healthcare systems that handle large volumes of data driven by patients' records and health/personal information, including big-data-based knowledge management systems to support clinical decisions. Several of the issues faced in storing/processing big data are presented along with the available tools, technologies and algorithms to deal with those problems as well as a case study in healthcare analytics. Addressing trust, privacy, and security issues as well as the IoT and big-data challenges, the book highlights the advances in the field to guide engineers developing different IoT devices and evaluating the performance of different IoT techniques. Additionally, it explores the impact of such technologies on public, private, community, and hybrid scenarios in healthcare. This book offers professionals, scientists and engineers the latest technologies, techniques, and strategies for IoT and big data.
Semantic Models for Multimedia Database Searching and Browsing begins with the introduction of multimedia information applications, the need for the development of the multimedia database management systems (MDBMSs), and the important issues and challenges of multimedia systems. The temporal relations, the spatial relations, the spatio-temporal relations, and several semantic models for multimedia information systems are also introduced. In addition, this book discusses recent advances in multimedia database searching and multimedia database browsing. More specifically, issues such as image/video segmentation, motion detection, object tracking, object recognition, knowledge-based event modeling, content-based retrieval, and key frame selections are presented for the first time in a single book. Two case studies consisting of two semantic models are included in the book to illustrate how to use semantic models to design multimedia information systems. Semantic Models for Multimedia Database Searching and Browsing is an excellent reference and can be used in advanced level courses for researchers, scientists, industry professionals, software engineers, students, and general readers who are interested in the issues, challenges, and ideas underlying the current practice of multimedia presentation, multimedia database searching, and multimedia browsing in multimedia information systems.
The present economic and social environment has given rise to new situations within which companies must operate. As a first example, the globalization of the economy and the need for performance has led companies to outsource and then to operate inside networks of enterprises such as supply chains or virtual enterprises. A second instance is related to environmental issues. The statement about the impact of industrial activities on the environment has led companies to revise processes, to save energy, to optimize transportation.... A last example relates to knowledge. Knowledge is considered today to be one of the main assets of a company. How to capitalize, to manage, to reuse it for the benefit of the company is an important current issue. The three examples above have no direct links. However, each of them constitutes a challenge that companies have to face today. This book brings together the opinions of several leading researchers from all around the world. Together they try to develop new approaches and find answers to those challenges. Through the individual chapters of this book, the authors present their understanding of the different challenges, the concepts on which they are working, the approaches they are developing and the tools they propose. The book is composed of six parts; each one focuses on a specific theme and is subdivided into subtopics.
The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this important area. The Engineering of Complex Real-Time Computer Control Systems serves as an excellent reference, providing insight into some of the most important research issues in the field.
Privacy requirements have an increasing impact on the realization of modern applications. Commercial and legal regulations demand that privacy guarantees be provided whenever sensitive information is stored, processed, or communicated to external parties. Current approaches encrypt sensitive data, thus reducing query execution efficiency and preventing selective information release. Preserving Privacy in Data Outsourcing presents a comprehensive approach for protecting highly sensitive information when it is stored on systems that are not under the data owner's control. The approach illustrated combines access control and encryption, enforcing access control via structured encryption. This solution, coupled with efficient algorithms for key derivation and distribution, provides efficient and secure authorization management on outsourced data, allowing the data owner to outsource not only the data but the security policy itself. To reduce the amount of data to be encrypted, the book also investigates data fragmentation as a complementary means of protecting the privacy of data associations: associations broken by fragmentation will be visible only to users authorized (by knowing the proper key) to join fragments. The book finally investigates the problem of executing queries over data possibly distributed at different servers, where execution must be controlled to ensure that sensitive information and sensitive associations are visible only to authorized parties. Case studies are provided throughout the book. Privacy, data mining, data protection, data outsourcing, electronic commerce, and machine learning professionals, and others working in these related fields, will find this book a valuable asset, as will members of associations such as ACM, IEEE and Management Science. This book is also suitable for advanced level students and researchers concentrating on computer science, as a secondary text or reference book.
In the last ten years, a true explosion of investigations into fuzzy modeling and its applications in control, diagnostics, decision making, optimization, pattern recognition, robotics, etc. has been observed. The attraction of fuzzy modeling results from its intelligibility and the high effectiveness of the models obtained. Owing to this, such modeling can be applied to the solution of problems which could not be solved until now with any known conventional methods. The book provides the reader with an advanced introduction to the problems of fuzzy modeling and to one of its most important applications: fuzzy control. It is based on the latest and most significant knowledge of the subject and can be used not only by control specialists but also by specialists working in any field requiring plant modeling, process modeling, and systems modeling, e.g. economics, business, medicine, agriculture, and meteorology.
A modern information retrieval system must have the capability to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be of relevance to the user. However, the concept of relevance, while seemingly intuitive, is actually hard to define, and it's even harder to model in a formal way. Lavrenko does not attempt to bring forth a new definition of relevance, nor provide arguments as to why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book, he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance, complementing the two dominant models, i.e., the classical probabilistic model and the language modeling approach, and which explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables which does not make any structural assumptions about the data and which can also handle rare events. Thus his book is of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
Advanced Topics in Database Research is a series of books on the fields of database, software engineering, and systems analysis and design. They feature the latest research ideas and topics on how to enhance current database systems, improve information storage, refine existing database models, and develop advanced applications. "Advanced Topics in Database Research, Volume 5" is a part of this series. It presents the latest research ideas and topics on database systems and applications, and provides insights into important developments in the field of database and database management. This book describes the capabilities and features of new technologies and methodologies, and presents state-of-the-art research ideas, with an emphasis on theoretical issues regarding databases and database management.
In the course of fuzzy technological development, fuzzy graph theory was identified quite early on for its importance in making things work. Two very important and useful concepts are those of granularity and of nonlinear approximation. The concept of granularity has evolved as a cornerstone of Lotfi A. Zadeh's theory of perception, while the concept of nonlinear approximation is the driving force behind the success of consumer electronics manufacturing. It is fair to say fuzzy graph theory paved the way for engineers to build many rule-based expert systems. In the open literature, there are many papers written on the subject of fuzzy graph theory. However, there are relatively few books available on the very same topic. Professors Mordeson and Nair have made a real contribution in putting together a very comprehensive book on fuzzy graphs and fuzzy hypergraphs. In particular, the discussion on hypergraphs certainly is an innovative idea. For an experienced engineer who has spent a great deal of time in the laboratory, it is usually a good idea to revisit the theory. Professors Mordeson and Nair have created such a volume, which enables engineers and designers to benefit from having these references in one place. In addition, this volume is a testament to the numerous contributions Professor John N. Mordeson and his associates have made to the mathematical studies in so many different topics of fuzzy mathematics.
A comprehensive, systematic approach to multimedia database management systems. It presents methods for managing the increasing demands of multimedia databases and their inherent design and architecture issues, and covers how to create an effective multimedia database by integrating the various information indexing and retrieval methods available. It also addresses how to measure multimedia database performance that is based on similarity to queries and routinely affected by human judgement. The book concludes with a discussion of networking and operating system support for multimedia databases and a look at research and development in this dynamic field.
It is over 20 years since the functional data model and functional programming languages were first introduced to the computing community. Although developed by separate research communities, recent work, presented in this book, suggests there is powerful synergy in their integration. As database technology emerges as central to yet more complex and demanding applications in areas such as bioinformatics, national security, criminal investigations and advanced engineering, more sophisticated approaches, like those presented here, are needed. A tutorial introduction by the editors prepares the reader for the chapters that follow, written by leading researchers, including some of the early pioneers. They provide a comprehensive treatment showing how the functional approach provides for modeling, analysis and optimization in databases, as well as data integration and interoperation in heterogeneous environments. Several chapters deal with mathematical results on the transformation of expressions, fundamental to the functional approach. The book also aims to show how the approach relates to the Internet and current work on semistructured data, XML and RDF. The book presents a comprehensive view of the functional approach to data management, bringing together important material hitherto widely scattered, some new research, and a comprehensive set of references. It will serve as a valuable resource for researchers, faculty and graduate students, as well as those in industry responsible for new systems development.
Information fusion is becoming a major requirement in data mining and knowledge discovery in databases. This book presents some recent fusion techniques that are currently in use in data mining, as well as data mining applications that use information fusion. Special focus of the book is on information fusion in preprocessing, model building and information extraction, with various applications.