Clustering is an important technique for discovering relatively dense sub-regions or sub-spaces of a multi-dimensional data distribution. Clustering has been used in information retrieval for many different purposes, such as query expansion, document grouping, document indexing, and visualization of search results. In this book, we address issues of clustering algorithms, evaluation methodologies, applications, and architectures for information retrieval. The first two chapters discuss clustering algorithms. The chapter from Baeza-Yates et al. describes a clustering method for a general metric space, which is a common model of data relevant to information retrieval. The chapter by Guha, Rastogi, and Shim presents a survey as well as detailed discussion of two clustering algorithms: CURE and ROCK, for numeric data and categorical data respectively. Evaluation methodologies are addressed in the next two chapters. Ertoz et al. demonstrate the use of text retrieval benchmarks, such as TREC, to evaluate clustering algorithms. He et al. provide objective measures of clustering quality in their chapter. Applications of clustering methods to information retrieval are addressed in the next four chapters. Chu et al. and Noel et al. explore feature selection using word stems, phrases, and link associations for document clustering and indexing. Wen et al. and Sung et al. discuss applications of clustering to user queries and data cleansing. Finally, we consider the problem of designing architectures for information retrieval. Crichton, Hughes, and Kelly elaborate on the development of a scientific data system architecture for information retrieval.
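As a rough illustration of the kind of document clustering the book surveys (not the CURE or ROCK algorithms themselves), the sketch below groups a toy corpus by k-means over TF-IDF vectors; the corpus and the number of clusters are invented for illustration.

```python
# Minimal sketch: grouping documents by k-means on TF-IDF vectors.
# The toy corpus and k are invented; this illustrates clustering for
# information retrieval in general, not CURE or ROCK specifically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "database indexing and query optimization",
    "query expansion for document retrieval",
    "neural networks for image recognition",
    "deep learning models for image classification",
]

vectors = TfidfVectorizer().fit_transform(docs)  # documents -> TF-IDF matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in zip(labels, docs):
    print(label, doc)
```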
Since the early eighties IFIP/Sec has been an important rendezvous for Information Technology researchers and specialists involved in all aspects of IT security. The explosive growth of the Web is now faced with the formidable challenge of providing trusted information. IFIP/Sec'01 is the first of this decade (and century) and it will be devoted to "Trusted Information - the New Decade Challenge". These proceedings are divided into eleven parts related to the conference program. Sessions are dedicated to technologies: Security Protocols, Smart Card, Network Security and Intrusion Detection, and Trusted Platforms. Other sessions are devoted to applications such as eSociety, TTP Management and PKI, Secure Workflow Environment, and Secure Group Communications, and to the deployment of applications: Risk Management, Security Policies, and Trusted System Design and Management. The year 2001 is a double anniversary. First, fifteen years ago, the first IFIP/Sec was held in France (IFIP/Sec'86, Monte-Carlo), and 2001 is also the anniversary of smart card technology. Smart cards emerged some twenty years ago as an innovation and have now become pervasive information devices used for highly distributed secure applications. These cards let millions of people carry a highly secure device that can represent them on a variety of networks. To conclude, we hope that the rich "menu" of conference papers for this IFIP/Sec conference will provide valuable insights and encourage specialists to pursue their work in trusted information.
This dictionary was produced in response to the rapidly increasing
amount of quasi-industrial jargon in the field of information
technology, compounded by the fact that these somewhat esoteric
terms are often further reduced to acronyms and abbreviations that
are seldom explained. Even when they are defined, individual
interpretations continue to diverge.
Today, most of the codes have passed into the public domain,
simply because they exist in most of the telecommunications systems
installed throughout the developed (and developing) world and are
largely known to most of those who work in that particular area.
However, foreign variants often defy even the most astute observer.
This dictionary seeks to clarify this bewildering situation as much
as possible. The 26,000 definitions set out here, drawn from some
16,000 individual cybernyms, cover computing, electronics,
telecommunications (including intelligent networks and mobile
telephony), together with satellite technology and Internet/Web
terminology.
This proceedings book presents selected papers from the 4th Conference on Signal and Information Processing, Networking and Computers (ICSINC), held in Qingdao, China, on May 23-25, 2018. It focuses on current research in a wide range of areas related to information theory, communication systems, computer science, signal processing, aerospace technologies, and other related technologies. With contributions from experts from both academia and industry, it is a valuable resource for anyone interested in this field.
Enterprises have made amazing advances by taking advantage of data about their business to provide predictions and understanding of their customers, markets, and products. But as the world of business becomes more interconnected and global, enterprise data is no longer a monolith; it is just a part of a vast web of data. Managing data on a world-wide scale is a key capability for any business today. The Semantic Web treats data as a distributed resource on the scale of the World Wide Web, and incorporates features to address the challenges of massive data distribution as part of its basic design. The aim of the first two editions was to motivate the Semantic Web technology stack from end-to-end; to describe not only what the Semantic Web standards are and how they work, but also what their goals are and why they were designed as they are. It tells a coherent story from beginning to end of how the standards work to manage a world-wide distributed web of knowledge in a meaningful way. The third edition builds on this foundation to bring Semantic Web practice to the enterprise. Fabien Gandon joins Dean Allemang and Jim Hendler, bringing with him years of experience in global linked data, to open up the story to a modern view of global linked data. While the overall story is the same, the examples have been brought up to date and applied in a modern setting, where enterprise and global data come together as a living, linked network of data. Also included with the third edition, all of the data sets and queries are available online for study and experimentation at data.world/swwo.
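As a minimal, hedged sketch of the linked-data model the book teaches, the following uses the rdflib Python library to build a tiny RDF graph and query it with SPARQL; the namespace and triples are invented and are not the book's own examples.

```python
# Minimal sketch of linked data: a tiny RDF graph queried with SPARQL,
# using the rdflib library. Namespace and facts are invented.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.shakespeare, RDF.type, EX.Author))
g.add((EX.shakespeare, EX.wrote, EX.kingLear))
g.add((EX.kingLear, EX.title, Literal("King Lear")))

# SPARQL: which works did each author write?
q = """
SELECT ?author ?work WHERE {
    ?author a <http://example.org/Author> ;
            <http://example.org/wrote> ?work .
}
"""
for row in g.query(q):
    print(row.author, row.work)
```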
Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
This 2nd edition has been completely revised and updated, with additional new chapters. It presents state-of-the-art research in this area and focuses on key topics such as: visualization of semantic and structural information and metadata in the context of the emerging Semantic Web; Ontology-based Information Visualization and the use of graphically represented ontologies; Semantic Visualizations using Topic Maps and graph techniques; Recommender systems for filtering and recommending on the Semantic Web; SVG and X3D as new XML-based languages for 2D and 3D visualisations; methods used to construct and visualize high quality metadata and ontologies; and navigating and exploring XML documents using interactive multimedia interfaces. The design of visual interfaces for e-commerce and information retrieval is currently a challenging area of practical web development.
This book addresses the privacy issue of On-Line Analytic Processing (OLAP) systems. OLAP systems usually need to meet two conflicting goals. First, the sensitive data stored in underlying data warehouses must be kept secret. Second, analytical queries about the data must be allowed for decision support purposes. The main challenge is that sensitive data can be inferred from answers to seemingly innocent aggregations of the data. This volume reviews a series of methods that can precisely answer data cube-style OLAP queries over sensitive data while provably preventing adversaries from inferring the protected values.
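A toy example makes the inference problem concrete: two individually innocent aggregates, once differenced, expose a single sensitive value. The table and figures below are invented for illustration.

```python
# Toy illustration of OLAP inference: differencing two innocuous
# aggregate answers reveals one employee's exact salary. Data invented.
salaries = {"alice": 82000, "bob": 79000, "carol": 101000}

total_all = sum(salaries.values())  # answer to "SUM(salary)"
total_minus_carol = sum(v for k, v in salaries.items() if k != "carol")
                                    # answer to "SUM(salary) WHERE name <> 'carol'"

print(total_all - total_minus_carol)  # 101000: carol's salary, never queried directly
```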
Web Intelligence is a new direction for scientific research and development that explores the fundamental roles as well as practical impacts of artificial intelligence and advanced information technology for the next generation of Web-empowered systems, services, and environments. Web Intelligence is regarded as the key research field for the development of the Wisdom Web (including the Semantic Web). As the first book devoted to Web Intelligence, this coherently written multi-author monograph provides a thorough introduction and a systematic overview of this new field. It presents both the current state of research and development as well as application aspects. The book will be a valuable and lasting source of reference for researchers and developers interested in Web Intelligence. Students and developers will additionally appreciate the numerous illustrations and examples.
There is broad interest in feature extraction, construction, and selection among practitioners in statistics, pattern recognition, data mining, and machine learning. Data pre-processing is an essential step in the knowledge discovery process for real-world applications. This book compiles contributions from many leading and active researchers in this growing field and paints a picture of the state-of-the-art techniques that can boost the capabilities of many existing data mining tools. The objective of this collection is to increase the awareness of the data mining community about research into feature extraction, construction and selection, which are currently conducted mainly in isolation. This book is part of an endeavor to produce a contemporary overview of modern solutions, to create synergy among these seemingly different branches, and to pave the way for developing meta-systems and novel approaches. The book can be used by researchers and graduate students in machine learning, data mining, and knowledge discovery, who wish to understand techniques of feature extraction, construction and selection for data pre-processing and to solve large size, real-world problems. The book can also serve as a reference work for those who are conducting research into feature extraction, construction and selection, and are ready to meet the exciting challenges ahead of us.
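As a minimal sketch of one selection technique of the kind the book covers, the following ranks features by mutual information with the class label using scikit-learn; the synthetic data is invented for illustration.

```python
# Minimal sketch of feature selection: keep the k features sharing the
# most mutual information with the class label. Synthetic data only.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                               # 10 candidate features
y = (X[:, 3] + 0.1 * rng.normal(size=200) > 0).astype(int)   # only feature 3 matters

selector = SelectKBest(score_func=mutual_info_classif, k=2).fit(X, y)
print(selector.get_support(indices=True))  # indices of the selected features
```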
The term Web Intelligence is defined as a new line of scientific research and development that explores the fundamental roles and practical impact of Artificial Intelligence together with advanced Information Technology, and their effect on future generations of Web-empowered products. These include systems, services, and other activities, all of which are carried out by the Web Intelligence Consortium (http://wi-consortium.org/). Web Intelligence was first coined in the late 1990s. Since that time, many new algorithms, methods, and techniques have been developed and used to extract both knowledge and wisdom from data originating from the Web. A number of initiatives have been adopted by world communities in this area of study, including books, conference series, and journals. This latest book encompasses a variety of up-to-date, state-of-the-art approaches in Web Intelligence. Furthermore, it highlights successful applications in this area of research within a practical context. The present book aims to introduce a selection of research applications in the area of Web Intelligence. We have selected a number of researchers from around the world, all of whom are experts in their respective research areas. Each chapter focuses on a specific topic in the field of Web Intelligence. Furthermore, the book consists of a number of innovative proposals which will contribute to the development of web science and technology for the long-term future, rendering this collective work a valuable piece of knowledge. It was a great honour to have collaborated with this team of very talented experts. We also wish to express our gratitude to those who reviewed this book, offering their constructive feedback.
In this fully updated second edition of the highly acclaimed
Managing Gigabytes, authors Witten, Moffat, and Bell continue to
provide unparalleled coverage of state-of-the-art techniques for
compressing and indexing data. Whatever your field, if you work
with large quantities of information, this book is essential
reading--an authoritative theoretical resource and a practical
guide to meeting the toughest storage and access challenges. It
covers the latest developments in compression and indexing and
their application on the Web and in digital libraries. It also
details dozens of powerful techniques supported by mg, the authors'
own system for compressing, storing, and retrieving text, images,
and textual images. mg's source code is freely available on the
Web.
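As a rough sketch of two ideas at the heart of the book, the following builds an inverted index and compresses each posting list with gap encoding plus a variable-byte code; the toy corpus is invented, and this is emphatically not the mg system itself.

```python
# Minimal sketch: an inverted index whose posting lists are compressed
# with gap encoding + variable-byte codes. Not the mg system itself.

def vbyte_encode(nums):
    """Variable-byte code: 7 data bits per byte; high bit marks the last byte."""
    out = bytearray()
    for n in nums:
        chunk = [n & 0x7F]
        n >>= 7
        while n:
            chunk.append(n & 0x7F)
            n >>= 7
        chunk[0] |= 0x80                       # flag the final (low-order) byte
        out.extend(reversed(chunk))
    return bytes(out)

docs = ["compress the index", "index the web", "compress text"]
index = {}                                     # term -> sorted list of doc ids
for doc_id, text in enumerate(docs):
    for term in text.split():
        postings = index.setdefault(term, [])
        if not postings or postings[-1] != doc_id:
            postings.append(doc_id)

for term, postings in index.items():
    gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
    print(term, postings, "->", vbyte_encode(gaps).hex())
```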
The massive quantity of data, information, and knowledge available in digital form on the web or within the organizational knowledge base requires a more effective way to control it. The Semantic Web and its growing complexity demands a resource for the understanding of proper tools for management.""Semantic Knowledge Management: An Ontology-Based Framework"" addresses the Semantic Web from an operative point of view using theoretical approaches, methodologies, and software applications as innovative solutions to true knowledge management. This advanced title provides readers with critical steps and tools for developing a semantic based knowledge management system.
This book elaborates the science and engineering basis for energy-efficient driving in conventional and autonomous cars. After covering the physics of energy-efficient motion in conventional, hybrid, and electric powertrains, the book chiefly focuses on the energy-saving potential of connected and automated vehicles. It reveals how being connected to other vehicles and the infrastructure enables the anticipation of upcoming driving-relevant factors, e.g. hills, curves, slow traffic, state of traffic signals, and movements of nearby vehicles. In turn, automation allows vehicles to adjust their motion more precisely in anticipation of upcoming events, and to save energy. Lastly, the energy-efficient motion of connected and automated vehicles could have a harmonizing effect on mixed traffic, leading to additional energy savings for neighboring vehicles. Building on classical methods of powertrain modeling, optimization, and optimal control, the book further develops the theory of energy-efficient driving. In addition, it presents numerous theoretical and applied case studies that highlight the real-world implications of the theory developed. The book is chiefly intended for undergraduate and graduate engineering students and industry practitioners with a background in mechanical, electrical, or automotive engineering, computer science or robotics.
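As a minimal sketch of the longitudinal energy model underlying such analyses, the following integrates rolling, aerodynamic, and inertial forces over a speed profile; all parameter values and the profile are illustrative assumptions, not figures from the book.

```python
# Minimal sketch of a longitudinal vehicle energy model: tractive energy
# is the integral of (rolling + aerodynamic + inertial) force times speed.
# All parameters and the speed profile are illustrative assumptions.
m, g, c_r = 1500.0, 9.81, 0.01        # mass [kg], gravity [m/s^2], rolling coeff.
rho, c_d, A = 1.2, 0.30, 2.2          # air density [kg/m^3], drag coeff., area [m^2]
dt = 1.0                              # time step [s]

speeds = [0, 5, 10, 15, 20, 20, 20, 15, 10, 5, 0]   # speed profile [m/s]

energy_j = 0.0
for v0, v1 in zip(speeds, speeds[1:]):
    v = 0.5 * (v0 + v1)                   # mean speed over the step
    a = (v1 - v0) / dt                    # acceleration
    force = m * g * c_r + 0.5 * rho * c_d * A * v**2 + m * a
    energy_j += max(force, 0.0) * v * dt  # traction only; braking not recuperated
print(f"tractive energy: {energy_j / 3.6e6:.4f} kWh")
```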
This book focuses on three interdependent challenges related to managing transitions toward sustainable development, namely (a) mapping sustainability for global knowledge e-networking, (b) extending the value chain of knowledge and e-networking, and (c) engaging in explorations of new methods and venues for further developing knowledge and e-networking. While each of these challenges constitutes fundamentally different types of endeavors, they are highly interconnected. Jointly, they contribute to our expansion of knowledge and its applications in support of transitions toward sustainable development. The central theme of this book revolves around ways of transcending barriers that impede the use of knowledge and knowledge networking in transitions toward sustainability. In order to transcend these barriers, we examine the potential contributions of innovations in information technologies as well as computation and representation of attendant complexities. A related theme addresses new ways of managing information and systematic observation for the purpose of enhancing the value of knowledge. Finally, this book shows applications of new methodologies and related findings that would contribute to our understanding of sustainability issues that have not yet been explored. In many ways, this is a book of theory and of practice; and it is one of methods as well as policy and performance.
Every day, more than half of American adult internet users read or write email messages at least once. The prevalence of email has significantly impacted the working world, functioning as a great asset on many levels, yet at times, a costly liability. In an effort to improve various aspects of work-related communication, this work applies sophisticated machine learning techniques to a large body of email data. Several effective models are proposed that can aid with the prioritization of incoming messages, help with coordination of shared tasks, improve tracking of deadlines, and prevent disastrous information leaks. Carvalho presents many data-driven techniques that can positively impact work-related email communication and offers robust models that may be successfully applied to future machine learning tasks.
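As a hedged sketch of one such data-driven technique (not Carvalho's own models or data), the following treats message prioritization as text classification with a bag-of-words naive Bayes model in scikit-learn; the tiny training set is invented.

```python
# Minimal sketch of email prioritization as text classification:
# bag-of-words naive Bayes labeling mail urgent vs. routine.
# The training messages are invented; not Carvalho's models or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

mails = [
    "deadline today please review the report now",
    "urgent server down need immediate action",
    "lunch on friday anyone interested",
    "newsletter weekly roundup of articles",
]
labels = ["urgent", "urgent", "routine", "routine"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(mails, labels)
print(model.predict(["please act now the deadline is today"]))
```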
Genetic Programming Theory and Practice explores the emerging
interaction between theory and practice in the cutting-edge,
machine learning method of Genetic Programming (GP). The material
contained in this contributed volume was developed from a workshop
at the University of Michigan's Center for the Study of Complex
Systems where an international group of genetic programming
theorists and practitioners met to examine how GP theory informs
practice and how GP practice impacts GP theory. The contributions
cover the full spectrum of this relationship and are written by
leading GP theorists from major universities, as well as active
practitioners from leading industries and businesses. Chapters
include such topics as John Koza's development of human-competitive
electronic circuit designs; David Goldberg's application of
"competent GA" methodology to GP; Jason Daida's discovery of a new
set of factors underlying the dynamics of GP starting from applied
research; and Stephen Freeland's essay on the lessons of biology
for GP and the potential impact of GP on evolutionary theory.
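For readers new to the method, a minimal, hedged sketch of the core GP loop follows: random expression trees are evolved toward a target function by selection and mutation. The representation and parameters are simplified toy choices, not any contributor's actual system.

```python
# Minimal sketch of genetic programming: evolve expression trees
# (nested tuples) toward the target x**2 + x. All parameters are toy choices.
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0])           # terminal: variable or constant
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):                                  # squared error vs. x**2 + x
    xs = [i / 2.0 for i in range(-10, 11)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

def mutate(tree):
    if random.random() < 0.1:                       # occasionally graft a new subtree
        return rand_tree(2)
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left), mutate(right))
    return tree

random.seed(1)
pop = [rand_tree() for _ in range(200)]
for _ in range(30):                                 # truncation selection + mutation
    pop.sort(key=fitness)
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
pop.sort(key=fitness)
print(pop[0], "error:", fitness(pop[0]))
```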
Knowledge in its pure state is tacit in nature-difficult to formalize and communicate-but can be converted into codified form and shared through both social interactions and the use of IT-based applications and systems. Even though there seems to be considerable synergies between the resulting huge data and the convertible knowledge, there is still a debate on how the increasing amount of data captured by corporations could improve decision making and foster innovation through effective knowledge-sharing practices. Big Data and Knowledge Sharing in Virtual Organizations provides innovative insights into the influence of big data analytics and artificial intelligence and the tools, methods, and techniques for knowledge-sharing processes in virtual organizations. The content within this publication examines cloud computing, machine learning, and knowledge sharing. It is designed for government officials and organizations, policymakers, academicians, researchers, technology developers, and students.
Despite the growing interest in Real-Time Database Systems, there is no single book that acts as a reference to academics, professionals, and practitioners who wish to understand the issues involved in the design and development of RTDBS. Real-Time Database Systems: Issues and Applications fulfills this need. This book presents the spectrum of issues that may arise in various real-time database applications, the available solutions and technologies that may be used to address these issues, and the open problems that need to be tackled in the future. With rapid advances in this area, several concepts have been proposed without a widely accepted consensus on their definitions and implications. To address this need, the first chapter is an introduction to the key RTDBS concepts and definitions, which is followed by a survey of the state of the art in RTDBS research and practice. The remainder of the book consists of four sections: models and paradigms, applications and benchmarks, scheduling and concurrency control, and experimental systems. The chapters in each section are contributed by experts in the respective areas. Real-Time Database Systems: Issues and Applications is primarily intended for practicing engineers and researchers working in the growing area of real-time database systems. For practitioners, the book will provide a much needed bridge for technology transfer and continued education. For researchers, this book will provide a comprehensive reference for well-established results. This book can also be used in a senior or graduate level course on real-time systems, real-time database systems, and database systems or closely related courses.
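As a small, hedged illustration of one scheduling policy commonly studied in this literature, the sketch below runs invented transactions non-preemptively in earliest-deadline-first order; it is a toy, not a technique taken from the book's chapters.

```python
# Toy sketch of earliest-deadline-first (EDF) scheduling of transactions.
# Each entry is (name, execution_time, deadline); all values invented.
import heapq

transactions = [("T1", 3, 10), ("T2", 2, 5), ("T3", 4, 12)]

queue = [(deadline, name, exec_time) for name, exec_time, deadline in transactions]
heapq.heapify(queue)                 # ready transactions ordered by deadline

clock = 0
while queue:
    deadline, name, exec_time = heapq.heappop(queue)
    clock += exec_time               # run the most urgent transaction to completion
    status = "met" if clock <= deadline else "MISSED"
    print(f"{name}: finished at t={clock}, deadline {deadline} {status}")
```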
Clustering is an important unsupervised classification technique where data points are grouped such that points that are similar in some sense belong to the same cluster. Cluster analysis is a complex problem as a variety of similarity and dissimilarity measures exist in the literature. This is the first book focused on clustering with a particular emphasis on symmetry-based measures of similarity and metaheuristic approaches. The aim is to find a suitable grouping of the input data set so that some criteria are optimized, and using this the authors frame the clustering problem as an optimization one where the objectives to be optimized may represent different characteristics such as compactness, symmetrical compactness, separation between clusters, or connectivity within a cluster. They explain the techniques in detail and outline many detailed applications in data mining, remote sensing and brain imaging, gene expression data analysis, and face detection. The book will be useful to graduate students and researchers in computer science, electrical engineering, system science, and information technology, both as a text and as a reference book. It will also be useful to researchers and practitioners in industry working on pattern recognition, data mining, soft computing, metaheuristics, bioinformatics, remote sensing, and brain imaging.
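As a simplified, hedged sketch of the flavor of a point-symmetry-based measure (not the authors' exact definition), a point fits a candidate cluster center well if its mirror image about that center lies close to some other data point:

```python
# Simplified sketch of a point-symmetry distance: reflect x about a
# candidate center and measure how near the reflection 2c - x lies to
# any other data point. Illustrative only, not the authors' exact measure.
import numpy as np

def point_symmetry_distance(x, center, data):
    mirror = 2.0 * center - x                            # reflection of x about center
    dists = np.linalg.norm(data - mirror, axis=1)        # mirror's distance to all points
    dists[np.all(np.isclose(data, x), axis=1)] = np.inf  # ignore x itself
    return dists.min()

data = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [1.0, -1.0]])
center = np.array([1.0, 0.0])
for x in data:                       # all zero: the cluster is perfectly symmetric
    print(x, point_symmetry_distance(x, center, data))
```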
This book constitutes the refereed proceedings of the 15th IFIP WG 5.5 Working Conference on Virtual Enterprises, PRO-VE 2014, held in Amsterdam, The Netherlands, in October 2014. The 73 revised papers were carefully selected from 190 submissions. They provide a comprehensive overview of identified challenges and recent advances in various collaborative network (CN) domains and their applications, with a particular focus on the following areas in support of smart networked environments: behavior and coordination; product-service systems; service orientation in collaborative networks; engineering and implementation of collaborative networks; cyber-physical systems; business strategies alignment; innovation networks; sustainability and trust; reference and conceptual models; collaboration platforms; virtual reality and simulation; interoperability and integration; performance management frameworks; performance management systems; risk analysis; optimization in collaborative networks; knowledge management in networks; health and care networks; and mobility and logistics.
This book offers means to handle interference as a central problem of operating wireless networks. It investigates centralized and decentralized methods to avoid and handle interference as well as approaches that resolve interference constructively. The latter type of approach tries to solve the joint detection and estimation problem of several data streams that share a common medium. In fact, an exciting insight into the operation of networks is that it may be beneficial, in terms of overall throughput, to actively create and manage interference. Thus, when handled properly, "mixing" of data in networks becomes a useful tool of operation rather than the nuisance it has traditionally been treated as. With the development of mobile, robust, ubiquitous, reliable and instantaneous communication being a driving and enabling factor of an information-centric economy, the understanding, mitigation and exploitation of interference in networks must be seen as a centrally important task.
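A toy sketch of the "mixing" idea in a two-way relay helps make this concrete: if the relay forwards the XOR of two packets, each end node can subtract its own contribution and recover the other's data, so the superposition carries useful information. The payloads are invented, and real systems superpose radio signals rather than bytes.

```python
# Toy sketch of constructive mixing: two-way relaying with network coding.
# The relay broadcasts a XOR b; each end strips its own packet.
# Payloads invented; real systems superpose radio signals, not bytes.
a = b"hello from node A"
b = b"message of node B"             # equal length, for a plain XOR

mixed = bytes(x ^ y for x, y in zip(a, b))     # what the relay broadcasts

print(bytes(x ^ y for x, y in zip(mixed, a)))  # node A recovers b
print(bytes(x ^ y for x, y in zip(mixed, b)))  # node B recovers a
```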
This book covers in great depth the fast-growing topic of tools, techniques, and applications of soft computing (e.g., fuzzy logic, genetic algorithms, neural networks, rough sets, Bayesian networks, and other probabilistic techniques) in ontologies and the Semantic Web. The author shows how components of the Semantic Web (such as RDF, Description Logics, and ontologies) can be covered with a soft computing methodology.
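As a minimal sketch of the kind of soft-computing extension discussed, a fuzzy membership function can replace a crisp class in an ontology; the concept name and thresholds below are invented.

```python
# Minimal sketch of fuzzifying an ontology concept: membership in a
# class like "TallPerson" becomes a degree in [0, 1], not a crisp yes/no.
# Concept name and thresholds are invented for illustration.
def tall_person(height_cm, low=160.0, high=190.0):
    """Piecewise-linear fuzzy membership: 0 below low, 1 above high."""
    if height_cm <= low:
        return 0.0
    if height_cm >= high:
        return 1.0
    return (height_cm - low) / (high - low)

for h in (150, 172, 185, 195):
    print(h, "->", round(tall_person(h), 2))
```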
You may like...
- Cognitive and Soft Computing Techniques… by Akash Kumar Bhoi, Victor Hugo Costa de Albuquerque, … (Paperback, R2,583)
- Data Analytics for Social Microblogging… by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,335)
- Management Of Information Security by Michael Whitman, Herbert Mattord (Paperback)
- Database Principles - Fundamentals of… by Carlos Coronel, Keeley Crockett, … (Paperback)
- Big Data and Smart Service Systems by Xiwei Liu, Rangachari Anand, … (Hardcover)
- Formative Assessment, Learning Data… by Santi Caballe, Robert Clariso (Paperback)