The explosion of computer use and internet communication has placed
new emphasis on the ability to store, retrieve and search for all
types of images, both still photos and video. The success and
future of visual information retrieval depend on the cutting-edge
research and applications explored in this book, which combines
expertise from both computer vision and database research.
This book constitutes the thoroughly refereed post-conference proceedings of the 11th IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2011, held in Kaunas, Lithuania, in October 2011. The 25 revised papers presented were carefully reviewed and selected from numerous submissions. They are organized in the following topical sections: e-government and e-governance, e-services, digital goods and products, e-business process modeling and re-engineering, innovative e-business models and implementation, and e-health and e-education.
Blockchain technologies, as an emerging distributed architecture and computing paradigm, have accelerated the development and application of Cloud/GPU/Edge Computing, Artificial Intelligence, cyber-physical systems, social networking, crowdsourcing and crowdsensing, 5G, trust management, and finance. The popularity and rapid development of Blockchain bring many technical and regulatory challenges for research and academic communities. This book features contributions from experts on topics related to performance, benchmarking, durability, and robustness, as well as data gathering and management, algorithms, analytics techniques for transaction processing, and implementation of applications.
"The Berkeley DB Book" is a practical guide to the intricacies of the Berkeley DB. This book covers in-depth the complex design issues that are mostly only touched on in terse footnotes within the dense Berkeley DB reference manual. It explains the technology at a higher level and also covers the internals, providing generous code and design examples. In this book, you will get to see a developer's perspective on intriguing design issues in Berkeley DB-based applications, and you will be able to choose design options for specific conditions. Also included is a special look at fault tolerance and high-availability frameworks. Berkeley DB is becoming the database of choice for large-scale applications like search engines and high-traffic web sites.
Clustering is an important technique for discovering relatively dense sub-regions or sub-spaces of a multi-dimensional data distribution. Clustering has been used in information retrieval for many different purposes, such as query expansion, document grouping, document indexing, and visualization of search results. In this book, we address issues of clustering algorithms, evaluation methodologies, applications, and architectures for information retrieval. The first two chapters discuss clustering algorithms. The chapter from Baeza-Yates et al. describes a clustering method for a general metric space, which is a common model of data relevant to information retrieval. The chapter by Guha, Rastogi, and Shim presents a survey as well as a detailed discussion of two clustering algorithms: CURE and ROCK, for numeric data and categorical data respectively. Evaluation methodologies are addressed in the next two chapters. Ertoz et al. demonstrate the use of text retrieval benchmarks, such as TREC, to evaluate clustering algorithms. He et al. provide objective measures of clustering quality in their chapter. Applications of clustering methods to information retrieval are addressed in the next four chapters. Chu et al. and Noel et al. explore feature selection using word stems, phrases, and link associations for document clustering and indexing. Wen et al. and Sung et al. discuss applications of clustering to user queries and data cleansing. Finally, we consider the problem of designing architectures for information retrieval. Crichton, Hughes, and Kelly elaborate on the development of a scientific data system architecture for information retrieval.
This book inclusively and systematically presents the fundamental methods, models and techniques of practical application of grey data analysis, bringing together the authors' many years of theoretical exploration, real-life application, and teaching. It also reflects the majority of recent theoretical and applied advances in the theory achieved by scholars from across the world, providing readers a vivid overall picture of this new theory and its pioneering research activities. The book includes 12 chapters, covering the introduction to grey systems, a novel framework of grey system theory, grey numbers and their operations, sequence operators and grey data mining, grey incidence analysis models, grey clustering evaluation models, series of GM models, combined grey models, techniques for grey systems forecasting, grey models for decision-making, techniques for grey control, etc. It also includes a software package that allows practitioners to conveniently and practically employ the theory and methods presented in this book. All methods and models presented here were chosen for their practical applicability and have been widely employed in various research works. I still remember 1983, when I first participated in a course on Grey System Theory. The mimeographed teaching materials had a blue cover and were presented as a book. It was like finding a treasure: This fascinating book really inspired me as a young intellectual going through a period of confusion and lack of academic direction. It shone with pearls of wisdom and offered a beacon in the mist for a man trying to find his way in academic research. This book became the guiding light in my life journey, inspiring me to forge an indissoluble bond with Grey System Theory. --Sifeng Liu
This book explores the nexus of Sustainability and Information Communication Technologies that are rapidly changing the way we live, learn, and do business. The monumental amount of energy required to power the zettabyte of data traveling across the globe's billions of computers and mobile phones daily cannot be overstated. This ground-breaking reference examines the possibility that our evolving technologies may enable us to mitigate our global energy crisis, rather than adding to it. By connecting concepts and trends such as smart homes, big data, and the internet of things with their applications to sustainability, the authors suggest that emerging and ubiquitous technologies embedded in our daily lives may rightfully be considered as enabling solutions for our future sustainable development.
Cyberspace security is a critical subject of our times. On the one hand, the development of the Internet, mobile communications, distributed computing, computer software, and databases storing essential enterprise information has facilitated business and personal communication between individual people. On the other hand, it has created many opportunities for abuse, fraud, and expensive damage. This book is a selection of the best papers presented at the NATO Advanced Research Workshop dealing with the subject of Cyberspace Security and Defense. The level of the individual contributions in the volume is advanced and suitable for senior and graduate students, researchers, and technologists who wish to get a feeling for the state of the art in several sub-disciplines of cyberspace security. Several papers provide a broad-brush description of national security issues and brief summaries of technology states. These papers can be read and appreciated by technically enlightened managers and executives who want to understand security issues and approaches to technical solutions. The important question of our times is not "Should we do something to enhance the security of our digital assets?" but rather "How do we do it?"
Based on the results of the study carried out in 1996 to investigate the state of the art of workflow and process technology, MCC initiated the Collaboration Management Infrastructure (CMI) research project to develop innovative agent-based process technology that can support the process requirements of dynamically changing organizations and the requirements of nomadic computing. With a research focus on the flow of interaction among people and software agents representing people, the project deliverables will include a scalable, heterogeneous, ubiquitous and nomadic infrastructure for business processes. The resulting technology is being tested in applications that stress an intensive mobile collaboration among people as part of large, evolving business processes. Workflow and Process Automation: Concepts and Technology provides an overview of the problems and issues related to process and workflow technology, and in particular to definition and analysis of processes and workflows, and execution of their instances. The need for a transactional workflow model is discussed and a spectrum of related transaction models is covered in detail. A plethora of influential projects in workflow and process automation is summarized. The projects are drawn from both academia and industry. The monograph also provides a short overview of the most popular workflow management products, and the state of the workflow industry in general. Workflow and Process Automation: Concepts and Technology offers a road map through the shortcomings of existing solutions of process improvement by people with daily first-hand experience, and is suitable as a secondary text for graduate-level courses on workflow and process automation, and as a reference for practitioners in industry.
Collaborative Networks for a Sustainable World. Aiming to reach a sustainable world calls for a wider collaboration among multiple stakeholders from different origins, as the changes needed for sustainability exceed the capacity and capability of any individual actor. In recent years there has been a growing awareness, both in the political sphere and in civil society including the business sectors, of the importance of sustainability. Therefore, this is an important and timely research issue, not only in terms of systems design but also as an effort to borrow and integrate contributions from different disciplines when designing and/or governing those systems. The discipline of collaborative networks especially, which has already emerged in many application sectors, shall play a key role in the implementation of effective sustainability strategies. PRO-VE 2010 focused on sharing knowledge and experiences as well as identifying directions for further research and development in this area. The conference addressed models, infrastructures, support tools, and governance principles developed for collaborative networks, as important resources to support multi-stakeholder sustainable developments. Furthermore, the challenges of this theme open new research directions for CNs. PRO-VE 2010 held in St.
As computer power grows and data collection technologies advance, a plethora of data is generated in almost every field where computers are used. The computer-generated data should be analyzed by computers; without the aid of computing technologies, it is certain that the huge amounts of data collected will never be examined, let alone be used to our advantage. Even with today's advanced computer technologies (e.g., machine learning and data mining systems), discovering knowledge from data can still be fiendishly hard due to the characteristics of computer-generated data. In its simplest form, raw data are represented as feature-values. The size of a dataset can be measured in two dimensions: number of features (N) and number of instances (P). Both N and P can be enormously large. This enormity may cause serious problems for many data mining systems. Feature selection is one of the long-existing methods that deal with these problems. Its objective is to select a minimal subset of features according to some reasonable criteria so that the original task can be achieved equally well, if not better. By choosing a minimal subset of features, irrelevant and redundant features are removed according to the criterion. When N is reduced, the data space shrinks and, in a sense, the data set is now a better representative of the whole data population. If necessary, the reduction of N can also give rise to the reduction of P by eliminating duplicates.
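The select-a-minimal-subset idea described above can be sketched with a simple "filter" criterion. This is an illustrative toy, not code from the book: the correlation-with-target scoring rule and the data are invented for the example.

```python
import numpy as np

def select_features(X, y, k):
    """Rank features by absolute correlation with the target and keep
    the top k -- a simple filter criterion for feature selection.
    X: (P, N) array of P instances with N features; y: (P,) target."""
    scores = []
    for j in range(X.shape[1]):
        col = X[:, j]
        if col.std() == 0 or y.std() == 0:
            scores.append(0.0)  # constant features carry no information
        else:
            scores.append(abs(np.corrcoef(col, y)[0, 1]))
    keep = np.argsort(scores)[::-1][:k]  # indices of the k best features
    return np.sort(keep)

# Toy data: feature 0 drives the target, feature 1 is noise,
# feature 2 is constant (redundant) -- so reducing N to 1 keeps feature 0.
rng = np.random.default_rng(0)
X = np.column_stack([np.arange(10.0), rng.standard_normal(10), np.ones(10)])
y = 2 * X[:, 0] + 0.1
print(select_features(X, y, 1))  # -> [0]
```

Real systems use richer criteria (mutual information, wrapper methods), but the shape is the same: score features, keep a minimal subset, and the reduced data space is a better representative of the population.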
Social media is now ubiquitous on the internet, generating both new possibilities and new challenges in information analysis and retrieval. This comprehensive text/reference examines in depth the synergy between multimedia content analysis, personalization, and next-generation networking. The book demonstrates how this integration can result in robust, personalized services that provide users with an improved multimedia-centric quality of experience. Each chapter offers a practical step-by-step walkthrough for a variety of concepts, components and technologies relating to the development of applications and services. Topics and features: provides contributions from an international and interdisciplinary selection of experts in their fields; introduces the fundamentals of social media retrieval, presenting the most important areas of research in this domain; examines the important topic of multimedia tagging in social environments, including geo-tagging; discusses issues of personalization and privacy in social media; reviews advances in encoding, compression and network architectures for the exchange of social media information; describes a range of applications related to social media. Researchers and students interested in social media retrieval will find this book a valuable resource, covering a broad overview of state-of-the-art research and emerging trends in this area. The text will also be of use to practicing engineers involved in envisioning and building innovative social media applications and services.
As the first book devoted to relational data mining, this coherently written multi-author monograph provides a thorough introduction and systematic overview of the area. The first part introduces the reader to the basics and principles of classical knowledge discovery in databases and inductive logic programming; subsequent chapters by leading experts assess the techniques in relational data mining in a principled and comprehensive way; finally, three chapters deal with advanced applications in various fields and refer the reader to resources for relational data mining. This book will become a valuable source of reference for R&D professionals active in relational data mining. Students as well as IT professionals and ambitious practitioners interested in learning about relational data mining will appreciate the book as a useful text and gentle introduction to this exciting new field.
Spatial support in databases poses new challenges in every part of a database management system, and the capability of spatial support in the physical layer is considered very important. This has led to the design of spatial access methods to enable the effective and efficient management of spatial objects. R-trees have a simplicity of structure and, together with their resemblance to the B-tree, allow developers to incorporate them easily into existing database management systems for the support of spatial query processing. This book provides an extensive survey of the R-tree evolution, studying the applicability of the structure and its variations to efficient query processing, accurate proposed cost models, and implementation issues like concurrency control and parallelism. Written for database researchers, designers, and programmers as well as graduate students, this comprehensive monograph will be a welcome addition to the field.
In the past five years, the field of electrostatic discharge (ESD) control has undergone some notable changes. Industry standards have multiplied, though not all of these, in our view, are realistic and meaningful. Increasing importance has been ascribed to the Charged Device Model (CDM) versus the Human Body Model (HBM) as a cause of device damage and, presumably, premature (latent) failure. Packaging materials have significantly evolved. Air ionization techniques have improved, and usage has grown. Finally, and importantly, the government has ceased imposing MIL-STD-1686 on all new contracts, leaving companies on their own to formulate an ESD-control policy and write implementing documents. All these changes are dealt with in five new chapters and ten new reprinted papers added to this revised edition of ESD from A to Z. Also, the original chapters have been augmented with new material such as more troubleshooting examples in Chapter 8 and a 20-question multiple-choice test for certifying operators in Chapter 9. More than ever, the book seeks to provide advice, guidance, and practical examples, not just a jumble of facts and generalizations. For instance, the added tailored versions of the model specifications for ESD-safe handling and packaging are actually in use at medium-sized corporations and could serve as patterns for many readers.
This doctoral thesis reports on an innovative data repository offering adaptive metadata management to maximise information sharing and comprehension in multidisciplinary and geographically distributed collaborations. It approaches metadata as a fluid, loosely structured and dynamical process rather than a fixed product, and describes the development of a novel data management platform based on a schemaless JSON data model, which represents the first fully JSON-based metadata repository designed for the biomedical sciences. Results obtained in various application scenarios (e.g. integrated biobanking, functional genomics and computational neuroscience) and corresponding performance tests are reported on in detail. Last but not least, the book offers a systematic overview of data platforms commonly used in the biomedical sciences, together with a fresh perspective on the role of and tools for data sharing and heterogeneous data integration in contemporary biomedical research.
Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Advice involves recommendations on what to think; through thought, on what to choose; and via choices, on how to act. Advice is information that moves by communication, from advisors to the recipient of advice. Ivan Jureta offers a general way to analyze advice. The analysis applies regardless of what the advice is about and from whom it comes or to whom it needs to be given, and it concentrates on the production and consumption of advice independent of the field of application. It is made up of two intertwined parts, a conceptual analysis and an analysis of the rationale of advice. His premise is that giving advice is a design problem, and he treats advice as an artifact designed and used to influence decisions. What is unusual is the theoretical backdrop against which the author's discussions are set: ontology engineering, conceptual analysis, and artificial intelligence. While classical decision theory would be expected to play a key role, this is not the case here for one principal reason: the difficulty of obtaining relevant numerical, quantitative estimates of probability and utility in most practical situations. Instead, conceptual models and mathematical logic are the author's tools of choice. The book is primarily intended for graduate students and researchers of management science. They are offered a general method of analysis that applies to giving and receiving advice when the decision problems are not well structured, and when there is imprecise, unclear, incomplete, or conflicting qualitative information.
There is broad interest in feature extraction, construction, and selection among practitioners in statistics, pattern recognition, data mining, and machine learning. Data pre-processing is an essential step in the knowledge discovery process for real-world applications. This book compiles contributions from many leading and active researchers in this growing field and paints a picture of the state-of-the-art techniques that can boost the capabilities of many existing data mining tools. The objective of this collection is to increase the awareness of the data mining community about research into feature extraction, construction and selection, which is currently conducted mainly in isolation. This book is part of an endeavor to produce a contemporary overview of modern solutions, to create synergy among these seemingly different branches, and to pave the way for developing meta-systems and novel approaches. The book can be used by researchers and graduate students in machine learning, data mining, and knowledge discovery, who wish to understand techniques of feature extraction, construction and selection for data pre-processing and to solve large-size, real-world problems. The book can also serve as a reference work for those who are conducting research into feature extraction, construction and selection, and are ready to meet the exciting challenges ahead of us.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews the peer-to-peer techniques and tools and the common information visualization techniques, used in BDA.
This book addresses the privacy issue of On-Line Analytic Processing (OLAP) systems. OLAP systems usually need to meet two conflicting goals. First, the sensitive data stored in underlying data warehouses must be kept secret. Second, analytical queries about the data must be allowed for decision support purposes. The main challenge is that sensitive data can be inferred from answers to seemingly innocent aggregations of the data. This volume reviews a series of methods that can precisely answer data cube-style OLAP queries over sensitive data while provably preventing adversaries from inferring the underlying individual values.
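The inference channel the blurb mentions can be shown with a minimal worked example. The table, names, and figures below are invented for illustration; the point is only that differencing two permitted aggregates exposes an individual value:

```python
# Hypothetical salary fact table; names and figures are made up.
salaries = {"Ann": 82000, "Bob": 67000, "Cem": 91000}

# Two seemingly innocent aggregate queries an OLAP system might answer:
dept_total = sum(salaries.values())                         # SUM over the whole department
others = sum(v for k, v in salaries.items() if k != "Cem")  # SUM excluding Cem

# Differencing the two answers exposes Cem's individual salary.
inferred = dept_total - others
print(inferred)  # -> 91000
```

This is why answering each aggregation in isolation is not enough: privacy methods for OLAP must account for what an adversary can derive by combining answers.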
The term Web Intelligence is defined as a new line of scientific research and development which explores the fundamental roles and practical impact of Artificial Intelligence together with advanced Information Technology, and their effect on future generations of Web-empowered products. These include systems, services, and other activities, all of which are carried out by the Web Intelligence Consortium (http://wi-consortium.org/). Web Intelligence was first coined in the late 1990s. Since that time, many new algorithms, methods, and techniques have been developed and used to extract both knowledge and wisdom from data originating from the Web. A number of initiatives have been adopted by communities worldwide in this area of study, including books, conference series, and journals. This latest book encompasses a variety of up-to-date, state-of-the-art approaches in Web Intelligence. Furthermore, it highlights successful applications in this area of research within a practical context. The present book aims to introduce a selection of research applications in the area of Web Intelligence. We have selected a number of researchers from around the world, all of whom are experts in their respective research areas. Each chapter focuses on a specific topic in the field of Web Intelligence. Furthermore, the book consists of a number of innovative proposals which will contribute to the development of web science and technology for the long-term future, rendering this collective work a valuable piece of knowledge. It was a great honour to have collaborated with this team of very talented experts. We also wish to express our gratitude to those who reviewed this book, offering their constructive feedback.