The term Web Intelligence is defined as a new line of scientific research and development that explores the fundamental roles and practical impact of Artificial Intelligence together with advanced Information Technology, and their effect on future generations of Web-empowered products. These include systems and services, among other activities, all of which are carried out by the Web Intelligence Consortium (http://wi-consortium.org/). The term Web Intelligence was first coined in late 1999. Since then, many new algorithms, methods and techniques have been developed and used to extract both knowledge and wisdom from data originating from the Web. A number of initiatives have been adopted by communities worldwide in this area of study, including books, conference series, and journals. This latest book encompasses a variety of up-to-date, state-of-the-art approaches in Web Intelligence. Furthermore, it highlights successful applications in this area of research within a practical context. The present book aims to introduce a selection of research applications in the area of Web Intelligence. We have selected a number of researchers from around the world, all of whom are experts in their respective research areas. Each chapter focuses on a specific topic in the field of Web Intelligence. Furthermore, the book consists of a number of innovative proposals that will contribute to the development of web science and technology for the long-term future, rendering this collective work a valuable piece of knowledge. It was a great honour to have collaborated with this team of very talented experts. We also wish to express our gratitude to those who reviewed this book, offering their constructive feedback.
In this fully updated second edition of the highly acclaimed
Managing Gigabytes, authors Witten, Moffat, and Bell continue to
provide unparalleled coverage of state-of-the-art techniques for
compressing and indexing data. Whatever your field, if you work
with large quantities of information, this book is essential
reading--an authoritative theoretical resource and a practical
guide to meeting the toughest storage and access challenges. It
covers the latest developments in compression and indexing and
their application on the Web and in digital libraries. It also
details dozens of powerful techniques supported by mg, the authors'
own system for compressing, storing, and retrieving text, images,
and textual images. mg's source code is freely available on the
Web.
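The two core ideas the book pairs, indexing and compression, can be illustrated with a minimal sketch (plain Python, not mg's actual code; the document list is invented): an inverted index maps each term to the list of documents containing it, and storing the gaps between successive document IDs rather than the IDs themselves is what makes postings lists amenable to variable-length compression codes.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the sorted list of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def delta_encode(postings):
    """Replace document IDs with gaps between successive IDs; small gaps
    compress well under variable-length codes such as gamma or delta codes."""
    return [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]

docs = ["the quick brown fox", "the lazy dog", "quick brown dogs bark"]
index = build_inverted_index(docs)
print(index["quick"])              # documents containing "quick" → [0, 2]
print(delta_encode([3, 7, 8, 20])) # → [3, 4, 1, 12]
```

The gap list can then be fed to any integer code; the index itself is unchanged, only its storage shrinks.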
This book elaborates the science and engineering basis for energy-efficient driving in conventional and autonomous cars. After covering the physics of energy-efficient motion in conventional, hybrid, and electric powertrains, the book chiefly focuses on the energy-saving potential of connected and automated vehicles. It reveals how being connected to other vehicles and the infrastructure enables the anticipation of upcoming driving-relevant factors, e.g. hills, curves, slow traffic, state of traffic signals, and movements of nearby vehicles. In turn, automation allows vehicles to adjust their motion more precisely in anticipation of upcoming events, and to save energy. Lastly, the energy-efficient motion of connected and automated vehicles could have a harmonizing effect on mixed traffic, leading to additional energy savings for neighboring vehicles. Building on classical methods of powertrain modeling, optimization, and optimal control, the book further develops the theory of energy-efficient driving. In addition, it presents numerous theoretical and applied case studies that highlight the real-world implications of the theory developed. The book is chiefly intended for undergraduate and graduate engineering students and industry practitioners with a background in mechanical, electrical, or automotive engineering, computer science or robotics.
This book focuses on three interdependent challenges related to managing transitions toward sustainable development, namely (a) mapping sustainability for global knowledge e-networking, (b) extending the value chain of knowledge and e-networking, and (c) engaging in explorations of new methods and venues for further developing knowledge and e-networking. While each of these challenges constitutes fundamentally different types of endeavors, they are highly interconnected. Jointly, they contribute to our expansion of knowledge and its applications in support of transitions toward sustainable development. The central theme of this book revolves around ways of transcending barriers that impede the use of knowledge and knowledge networking in transitions toward sustainability. In order to transcend these barriers, we examine the potential contributions of innovations in information technologies as well as computation and representation of attendant complexities. A related theme addresses new ways of managing information and systematic observation for the purpose of enhancing the value of knowledge. Finally, this book shows applications of new methodologies and related findings that would contribute to our understanding of sustainability issues that have not yet been explored. In many ways, this is a book of theory and of practice; and it is one of methods as well as policy and performance.
Every day, more than half of American adult internet users read or write email messages at least once. The prevalence of email has significantly impacted the working world, functioning as a great asset on many levels, yet at times, a costly liability. In an effort to improve various aspects of work-related communication, this work applies sophisticated machine learning techniques to a large body of email data. Several effective models are proposed that can aid with the prioritization of incoming messages, help with coordination of shared tasks, improve tracking of deadlines, and prevent disastrous information leaks. Carvalho presents many data-driven techniques that can positively impact work-related email communication and offers robust models that may be successfully applied to future machine learning tasks.
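The flavour of such message-prioritization models can be suggested with a generic multinomial naive Bayes sketch (this is an illustrative baseline, not Carvalho's actual models; the example messages and labels are invented):

```python
import math
from collections import Counter

def train(messages, labels):
    """Fit per-class word counts and class priors for multinomial naive Bayes."""
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for text, c in zip(messages, labels):
        counts[c].update(text.lower().split())
    return counts, priors

def classify(text, counts, priors):
    """Pick the class maximizing log P(class) + sum_w log P(w | class),
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for c in counts for w in counts[c]}
    best, best_score = None, -math.inf
    for c in counts:
        total = sum(counts[c].values())
        score = math.log(priors[c])
        for w in text.lower().split():
            score += math.log((counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best

msgs = ["deadline today respond asap", "urgent meeting reschedule",
        "newsletter weekly digest", "digest of weekly links"]
labels = ["urgent", "urgent", "routine", "routine"]
counts, priors = train(msgs, labels)
print(classify("asap deadline reminder", counts, priors))  # → urgent
```

A production prioritizer would add richer features (senders, threads, timestamps), but the log-probability scoring step is the same.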
Genetic Programming Theory and Practice explores the emerging
interaction between theory and practice in the cutting-edge,
machine learning method of Genetic Programming (GP). The material
contained in this contributed volume was developed from a workshop
at the University of Michigan's Center for the Study of Complex
Systems where an international group of genetic programming
theorists and practitioners met to examine how GP theory informs
practice and how GP practice impacts GP theory. The contributions
cover the full spectrum of this relationship and are written by
leading GP theorists from major universities, as well as active
practitioners from leading industries and businesses. Chapters
include such topics as John Koza's development of human-competitive
electronic circuit designs; David Goldberg's application of
"competent GA" methodology to GP; Jason Daida's discovery of a new
set of factors underlying the dynamics of GP starting from applied
research; and Stephen Freeland's essay on the lessons of biology
for GP and the potential impact of GP on evolutionary theory.
Despite the growing interest in Real-Time Database Systems, there is no single book that acts as a reference to academics, professionals, and practitioners who wish to understand the issues involved in the design and development of RTDBS. Real-Time Database Systems: Issues and Applications fulfills this need. This book presents the spectrum of issues that may arise in various real-time database applications, the available solutions and technologies that may be used to address these issues, and the open problems that need to be tackled in the future. With rapid advances in this area, several concepts have been proposed without a widely accepted consensus on their definitions and implications. To address this need, the first chapter is an introduction to the key RTDBS concepts and definitions, which is followed by a survey of the state of the art in RTDBS research and practice. The remainder of the book consists of four sections: models and paradigms, applications and benchmarks, scheduling and concurrency control, and experimental systems. The chapters in each section are contributed by experts in the respective areas. Real-Time Database Systems: Issues and Applications is primarily intended for practicing engineers and researchers working in the growing area of real-time database systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, this book will provide a comprehensive reference for well-established results. This book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems or closely related courses.
Computer access is the only way to retrieve up-to-date sequences,
and this book shows researchers puzzled by the maze of URLs, sites,
and searches how to use internet technology to find and analyze
genetic data. The book describes the different types of databases,
how to use a specific database to find a sequence that you need,
and how to analyze the data to compare it with your own work.
Clustering is an important unsupervised classification technique where data points are grouped such that points that are similar in some sense belong to the same cluster. Cluster analysis is a complex problem as a variety of similarity and dissimilarity measures exist in the literature. This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches. The aim is to find a suitable grouping of the input data set so that some criteria are optimized, and using this the authors frame the clustering problem as an optimization one where the objectives to be optimized may represent different characteristics such as compactness, symmetrical compactness, separation between clusters, or connectivity within a cluster. They explain the techniques in detail and outline many detailed applications in data mining, remote sensing and brain imaging, gene expression data analysis, and face detection. The book will be useful to graduate students and researchers in computer science, electrical engineering, system science, and information technology, both as a text and as a reference book. It will also be useful to researchers and practitioners in industry working on pattern recognition, data mining, soft computing, metaheuristics, bioinformatics, remote sensing, and brain imaging.
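The symmetry-based measures the book emphasizes can be sketched in a few lines (a deliberately simplified version of the point-symmetry idea; the book's actual measures also fold in Euclidean distance and nearest-neighbour averaging, and the points below are invented): a point belongs "symmetrically" to a cluster if its reflection through the cluster centre lands near some other data point.

```python
import math

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def symmetry_distance(x, centre, points):
    """Simplified point-symmetry distance: reflect x through the cluster
    centre and measure how close the reflection lands to any other data
    point. A small value means x has a symmetric counterpart about the
    centre, i.e. the cluster is symmetrically compact around it."""
    reflected = (2 * centre[0] - x[0], 2 * centre[1] - x[1])
    return min(euclid(reflected, p) for p in points if p != x)

points = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.0), (1.0, -1.0)]
centre = (1.0, 0.0)
# (0, 0) reflects through the centre to (2, 0), itself a data point:
print(symmetry_distance((0.0, 0.0), centre, points))  # → 0.0
```

An optimization-based clusterer in the book's spirit would then minimize such symmetry distances (alongside compactness or separation terms) over candidate centre placements.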
This book constitutes the refereed proceedings of the 15th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2014, held in Amsterdam, The Netherlands, in October 2014. The 73 revised papers were carefully selected from 190 submissions. They provide a comprehensive overview of identified challenges and recent advances in various collaborative network (CN) domains and their applications, with a particular focus on the following areas in support of smart networked environments: behavior and coordination; product-service systems; service orientation in collaborative networks; engineering and implementation of collaborative networks; cyber-physical systems; business strategies alignment; innovation networks; sustainability and trust; reference and conceptual models; collaboration platforms; virtual reality and simulation; interoperability and integration; performance management frameworks; performance management systems; risk analysis; optimization in collaborative networks; knowledge management in networks; health and care networks; and mobility and logistics.
This book offers means to handle interference as a central problem of operating wireless networks. It investigates centralized and decentralized methods to avoid and handle interference, as well as approaches that resolve interference constructively. The latter type of approach tries to solve the joint detection and estimation problem of several data streams that share a common medium. In fact, an exciting insight into the operation of networks is that it may be beneficial, in terms of overall throughput, to actively create and manage interference. Thus, when handled properly, "mixing" of data in networks becomes a useful tool of operation rather than the nuisance it has traditionally been treated as. With the development of mobile, robust, ubiquitous, reliable and instantaneous communication being a driving and enabling factor of an information-centric economy, the understanding, mitigation and exploitation of interference in networks must be seen as a centrally important task.
"As organizations have become more sophisticated, pressure to
provide information sharing across dissimilar platforms has
mounted. In addition, advances in distributed computing and
networking combined with the affordable high level of connectivity,
are making information sharing across databases closer to being
accomplished...With the advent of the internet, intranets, and
affordable network connectivity, business reengineering has become
a necessity for modern corporations to stay competitive in the
global market...An end-user in a heterogeneous computing
environment should be able to not only invoke multiple existing
software systems and hardware devices, but also coordinate their
interactions."--From the Introduction
Seventeen leaders in the field contributed chapters specifically
for this unique book, together providing the most comprehensive
resource on managing multidatabase systems involving heterogeneous
and autonomous databases available today. The book covers virtually
all fundamental issues, concepts, and major research topics.
This comprehensive book focuses on better big-data security for healthcare organizations. Following an extensive introduction to the Internet of Things (IoT) in healthcare including challenging topics and scenarios, it offers an in-depth analysis of medical body area networks with the 5th generation of IoT communication technology along with its nanotechnology. It also describes a novel strategic framework and computationally intelligent model to measure possible security vulnerabilities in the context of e-health. Moreover, the book addresses healthcare systems that handle large volumes of data driven by patients' records and health/personal information, including big-data-based knowledge management systems to support clinical decisions. Several of the issues faced in storing/processing big data are presented along with the available tools, technologies and algorithms to deal with those problems as well as a case study in healthcare analytics. Addressing trust, privacy, and security issues as well as the IoT and big-data challenges, the book highlights the advances in the field to guide engineers developing different IoT devices and evaluating the performance of different IoT techniques. Additionally, it explores the impact of such technologies on public, private, community, and hybrid scenarios in healthcare. This book offers professionals, scientists and engineers the latest technologies, techniques, and strategies for IoT and big data.
A modern information retrieval system must have the capability to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be of relevance to the user. However, the concept of relevance, while seemingly intuitive, is actually hard to define, and it's even harder to model in a formal way. Lavrenko does not attempt to bring forth a new definition of relevance, nor provide arguments as to why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book, he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance, complementing the two dominant models, i.e., the classical probabilistic model and the language modeling approach, and which explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables which does not make any structural assumptions about the data and which can also handle rare events. Thus his book is of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
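For context, the classical language-modeling approach that Lavrenko's relevance model complements can be sketched in a few lines (a generic query-likelihood scorer with Jelinek-Mercer smoothing; this is the baseline technique, not Lavrenko's model, and the documents below are invented):

```python
import math
from collections import Counter

def score(query, doc, collection, lam=0.5):
    """Query-likelihood score with Jelinek-Mercer smoothing:
    log P(q | d) = sum over query words w of
    log( lam * P(w | doc) + (1 - lam) * P(w | collection) ).
    Smoothing with collection statistics keeps unseen-in-doc words
    from zeroing out the score (query words absent from the whole
    collection would still need extra handling)."""
    d = Counter(doc.split())
    c = Counter(collection.split())
    dlen, clen = sum(d.values()), sum(c.values())
    return sum(
        math.log(lam * d[w] / dlen + (1 - lam) * c[w] / clen)
        for w in query.split()
    )

docs = ["information retrieval models", "database transaction logs"]
collection = " ".join(docs)
print(score("retrieval models", docs[0], collection))
print(score("retrieval models", docs[1], collection))
```

Ranking documents by this score gives the language-modeling baseline; Lavrenko's contribution is a probabilistic model in which relevance itself, not just the query, is an explicit variable.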
Advice involves recommendations on what to think; through thought, on what to choose; and via choices, on how to act. Advice is information that moves by communication, from advisors to the recipient of advice. Ivan Jureta offers a general way to analyze advice. The analysis applies regardless of what the advice is about and from whom it comes or to whom it needs to be given, and it concentrates on the production and consumption of advice independent of the field of application. It is made up of two intertwined parts, a conceptual analysis and an analysis of the rationale of advice. He premises that giving advice is a design problem and he treats advice as an artifact designed and used to influence decisions. What is unusual is the theoretical backdrop against which the author's discussions are set: ontology engineering, conceptual analysis, and artificial intelligence. While classical decision theory would be expected to play a key role, this is not the case here for one principal reason: the difficulty of having relevant numerical, quantitative estimates of probability and utility in most practical situations. Instead conceptual models and mathematical logic are the author's tools of choice. The book is primarily intended for graduate students and researchers of management science. They are offered a general method of analysis that applies to giving and receiving advice when the decision problems are not well structured, and when there is imprecise, unclear, incomplete, or conflicting qualitative information.
Semantic Models for Multimedia Database Searching and Browsing begins with the introduction of multimedia information applications, the need for the development of the multimedia database management systems (MDBMSs), and the important issues and challenges of multimedia systems. The temporal relations, the spatial relations, the spatio-temporal relations, and several semantic models for multimedia information systems are also introduced. In addition, this book discusses recent advances in multimedia database searching and multimedia database browsing. More specifically, issues such as image/video segmentation, motion detection, object tracking, object recognition, knowledge-based event modeling, content-based retrieval, and key frame selections are presented for the first time in a single book. Two case studies consisting of two semantic models are included in the book to illustrate how to use semantic models to design multimedia information systems. Semantic Models for Multimedia Database Searching and Browsing is an excellent reference and can be used in advanced level courses for researchers, scientists, industry professionals, software engineers, students, and general readers who are interested in the issues, challenges, and ideas underlying the current practice of multimedia presentation, multimedia database searching, and multimedia browsing in multimedia information systems.
This doctoral thesis reports on an innovative data repository offering adaptive metadata management to maximise information sharing and comprehension in multidisciplinary and geographically distributed collaborations. It approaches metadata as a fluid, loosely structured and dynamic process rather than a fixed product, and describes the development of a novel data management platform based on a schemaless JSON data model, which represents the first fully JSON-based metadata repository designed for the biomedical sciences. Results obtained in various application scenarios (e.g. integrated biobanking, functional genomics and computational neuroscience) and corresponding performance tests are reported on in detail. Last but not least, the book offers a systematic overview of data platforms commonly used in the biomedical sciences, together with a fresh perspective on the role of and tools for data sharing and heterogeneous data integration in contemporary biomedical research.
The present volume provides a collection of seven articles containing new and high quality research results demonstrating the significance of Multi-objective Evolutionary Algorithms (MOEA) for data mining tasks in Knowledge Discovery from Databases (KDD). These articles are written by leading experts around the world. It is shown how the different MOEAs can be utilized, both in individual and integrated manner, in various ways to efficiently mine data from large databases.
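The notion at the heart of any MOEA, Pareto dominance between candidate solutions, can be made concrete in a short sketch (generic code, not taken from the volume; the objective vectors are invented and assumed to be minimized):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(front):
    """Keep only solutions dominated by no other — the Pareto front an
    MOEA approximates, e.g. when trading off cluster compactness against
    separation in a clustering-based data mining task."""
    return [a for a in front if not any(dominates(b, a) for b in front if b != a)]

solutions = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(solutions))  # → [(1.0, 4.0), (2.0, 3.0), (4.0, 1.0)]
```

A full MOEA (e.g. NSGA-II-style algorithms discussed in this literature) adds selection, variation, and diversity preservation around exactly this dominance test.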
The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this important area. The Engineering of Complex Real-Time Computer Control Systems serves as an excellent reference, providing insight into some of the most important research issues in the field.
Privacy requirements have an increasing impact on the realization of modern applications. Commercial and legal regulations demand that privacy guarantees be provided whenever sensitive information is stored, processed, or communicated to external parties. Current approaches encrypt sensitive data, thus reducing query execution efficiency and preventing selective information release. Preserving Privacy in Data Outsourcing presents a comprehensive approach for protecting highly sensitive information when it is stored on systems that are not under the data owner's control. The approach illustrated combines access control and encryption, enforcing access control via structured encryption. This solution, coupled with efficient algorithms for key derivation and distribution, provides efficient and secure authorization management on outsourced data, allowing the data owner to outsource not only the data but the security policy itself. To reduce the amount of data to be encrypted, the book also investigates data fragmentation as a complementary means of protecting the privacy of data associations: associations broken by fragmentation will be visible only to users authorized (by knowing the proper key) to join the fragments. Finally, the book investigates the problem of executing queries over data that may be distributed across different servers, where execution must be controlled to ensure that sensitive information and sensitive associations are visible only to authorized parties. Case studies are provided throughout the book. Professionals working in privacy, data mining, data protection, data outsourcing, electronic commerce, and machine learning will find this book a valuable asset, as will members of associations such as the ACM and IEEE. This book is also suitable as a secondary text or reference for advanced-level students and researchers in computer science.
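The key-derivation idea behind such authorization schemes can be suggested with a small sketch (an illustrative HMAC-based hierarchy, not the book's specific construction; the master key and labels are placeholders): each child key is computed one-way from its parent and a public label, so holding a key grants access to everything below it and nothing above it.

```python
import hashlib
import hmac

def derive_key(parent_key, label):
    """Derive a child key from a parent key and a public label.
    Anyone holding the parent key can recompute every descendant key,
    while recovering the parent from a child is computationally infeasible."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

root = b"\x00" * 32  # data owner's master key (placeholder value)
k_medical = derive_key(root, "medical")
k_medical_2024 = derive_key(k_medical, "2024")
# A user given k_medical can derive k_medical_2024 locally, without
# contacting the owner; a user given only k_medical_2024 cannot
# recover k_medical or any sibling key.
print(k_medical_2024.hex()[:16])
```

This is why the owner can outsource the policy along with the data: distributing a single key in the hierarchy implicitly authorizes an entire subtree of encrypted resources.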
In the last ten years, a true explosion of investigations into fuzzy modeling and its applications in control, diagnostics, decision making, optimization, pattern recognition, robotics, etc. has been observed. The attraction of fuzzy modeling results from its intelligibility and the high effectiveness of the models obtained. Owing to this, such modeling can be applied to solve problems that could not previously be solved with any known conventional methods. The book provides the reader with an advanced introduction to the problems of fuzzy modeling and to one of its most important applications: fuzzy control. It is based on the latest and most significant knowledge of the subject and can be used not only by control specialists but also by specialists working in any field requiring plant modeling, process modeling, and systems modeling, e.g. economics, business, medicine, agriculture, and meteorology.
The present economic and social environment has given rise to new situations within which companies must operate. As a first example, the globalization of the economy and the need for performance has led companies to outsource and then to operate inside networks of enterprises such as supply chains or virtual enterprises. A second instance is related to environmental issues. Awareness of the impact of industrial activities on the environment has led companies to revise processes, to save energy, to optimize transportation, and so on. A last example relates to knowledge. Knowledge is considered today to be one of the main assets of a company. How to capitalize, manage, and reuse it for the benefit of the company is an important current issue. The three examples above have no direct links. However, each of them constitutes a challenge that companies have to face today. This book brings together the opinions of several leading researchers from all around the world. Together they try to develop new approaches and find answers to those challenges. Through the individual chapters of this book, the authors present their understanding of the different challenges, the concepts on which they are working, the approaches they are developing and the tools they propose. The book is composed of six parts; each one focuses on a specific theme and is subdivided into subtopics.
In the course of fuzzy technological development, fuzzy graph theory was identified quite early on for its importance in making things work. Two very important and useful concepts are those of granularity and of nonlinear approximations. The concept of granularity has evolved as a cornerstone of Lotfi A. Zadeh's theory of perception, while the concept of nonlinear approximation is the driving force behind the success of consumer electronics manufacturing. It is fair to say fuzzy graph theory paved the way for engineers to build many rule-based expert systems. In the open literature, there are many papers written on the subject of fuzzy graph theory. However, there are relatively few books available on the very same topic. Professors Mordeson and Nair have made a real contribution in putting together a very comprehensive book on fuzzy graphs and fuzzy hypergraphs. In particular, the discussion on hypergraphs certainly is an innovative idea. For an experienced engineer who has spent a great deal of time in the laboratory, it is usually a good idea to revisit the theory. Professors Mordeson and Nair have created such a volume, which enables engineers and designers to benefit from referencing it in one place. In addition, this volume is a testament to the numerous contributions Professor John N. Mordeson and his associates have made to mathematical studies in so many different topics of fuzzy mathematics.
This comprehensive guide offers a detailed treatment of the analysis, design, simulation and testing of the full range of today's leading delta-sigma data converters. Written by professionals experienced in all practical aspects of delta-sigma modulator design, "Delta-Sigma Data Converters" provides comprehensive coverage of low and high-order single-bit, bandpass, continuous-time, multi-stage modulators as well as advanced topics, including idle-channel tones, stability, decimation and interpolation filter design, and simulation.
A comprehensive, systematic approach to multimedia database management systems. It presents methods for managing the increasing demands of multimedia databases and their inherent design and architecture issues, and covers how to create an effective multimedia database by integrating the various information indexing and retrieval methods available. It also addresses how to measure multimedia database performance that is based on similarity to queries and routinely affected by human judgement. The book concludes with a discussion of networking and operating system support for multimedia databases and a look at research and development in this dynamic field.