Lattices are geometric objects that can be pictorially described as the set of intersection points of an infinite, regular n-dimensional grid. Despite their apparent simplicity, lattices hide a rich combinatorial structure, which has attracted the attention of great mathematicians over the last two centuries. Not surprisingly, lattices have found numerous applications in mathematics and computer science, ranging from number theory and Diophantine approximation to combinatorial optimization and cryptography. The study of lattices, specifically from a computational point of view, was marked by two major breakthroughs: the development of the LLL lattice reduction algorithm by Lenstra, Lenstra and Lovász in the early 1980s, and Ajtai's discovery of a connection between the worst-case and average-case hardness of certain lattice problems in the late 1990s. The LLL algorithm, despite the relatively poor quality of the solution it gives in the worst case, made it possible to devise polynomial-time solutions to many classical problems in computer science. These include solving integer programs in a fixed number of variables, factoring polynomials over the rationals, breaking knapsack-based cryptosystems, and finding solutions to many other Diophantine and cryptanalysis problems.
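As a purely illustrative aside (not taken from the book), the sketch below shows Lagrange-Gauss reduction, the two-dimensional ancestor of the LLL algorithm mentioned above, finding a shortest basis of a toy lattice. The function name and example basis are my own.

```python
# Toy sketch of 2-D lattice basis reduction (Lagrange-Gauss), the ancestor of LLL.
# Illustrative only; this is not the LLL algorithm discussed in the book.
def gauss_reduce(b1, b2):
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    while True:
        if dot(b2, b2) < dot(b1, b1):
            b1, b2 = b2, b1                    # keep b1 as the shorter vector
        m = round(dot(b1, b2) / dot(b1, b1))   # nearest-integer projection coefficient
        if m == 0:
            return b1, b2                      # basis is reduced
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

# The lattice generated by (1, 1) and (3, 4) is all of Z^2, so the reduced
# basis consists of unit vectors.
print(gauss_reduce((1, 1), (3, 4)))   # ((-1, 0), (0, 1))
```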
Clustering is an important technique for discovering relatively dense sub-regions or sub-spaces of a multi-dimensional data distribution. Clustering has been used in information retrieval for many different purposes, such as query expansion, document grouping, document indexing, and visualization of search results. In this book, we address issues of clustering algorithms, evaluation methodologies, applications, and architectures for information retrieval. The first two chapters discuss clustering algorithms. The chapter from Baeza-Yates et al. describes a clustering method for a general metric space, which is a common model of data relevant to information retrieval. The chapter by Guha, Rastogi, and Shim presents a survey as well as a detailed discussion of two clustering algorithms, CURE and ROCK, for numeric data and categorical data respectively. Evaluation methodologies are addressed in the next two chapters. Ertoz et al. demonstrate the use of text retrieval benchmarks, such as TREC, to evaluate clustering algorithms. He et al. provide objective measures of clustering quality in their chapter. Applications of clustering methods to information retrieval are addressed in the next four chapters. Chu et al. and Noel et al. explore feature selection using word stems, phrases, and link associations for document clustering and indexing. Wen et al. and Sung et al. discuss applications of clustering to user queries and data cleansing. Finally, we consider the problem of designing architectures for information retrieval. Crichton, Hughes, and Kelly elaborate on the development of a scientific data system architecture for information retrieval.
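To make the idea of document clustering for retrieval concrete, here is a generic sketch (unrelated to the CURE and ROCK algorithms surveyed in the book): TF-IDF vectors clustered with k-means via scikit-learn. The example documents are invented.

```python
# Generic document-clustering illustration: TF-IDF features + k-means.
# Not one of the algorithms from the book; toy documents are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "index compression for inverted files",
    "query expansion using relevance feedback",
    "posting lists and inverted index construction",
    "expanding user queries with synonyms",
]
X = TfidfVectorizer().fit_transform(docs)                 # sparse TF-IDF matrix
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # indexing documents vs. query-expansion documents, e.g. [0 1 0 1]
```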
This book inclusively and systematically presents the fundamental methods, models and techniques of the practical application of grey data analysis, bringing together the authors' many years of theoretical exploration, real-life application, and teaching. It also reflects the majority of recent theoretical and applied advances in the theory achieved by scholars from across the world, providing readers with a vivid overall picture of this new theory and its pioneering research activities. The book includes 12 chapters, covering the introduction to grey systems, a novel framework of grey system theory, grey numbers and their operations, sequence operators and grey data mining, grey incidence analysis models, grey clustering evaluation models, the series of GM models, combined grey models, techniques for grey systems forecasting, grey models for decision-making, techniques for grey control, etc. It also includes a software package that allows practitioners to conveniently and practically employ the theory and methods presented in this book. All methods and models presented here were chosen for their practical applicability and have been widely employed in various research works. I still remember 1983, when I first participated in a course on Grey System Theory. The mimeographed teaching materials had a blue cover and were presented as a book. It was like finding a treasure: this fascinating book really inspired me as a young intellectual going through a period of confusion and lack of academic direction. It shone with pearls of wisdom and offered a beacon in the mist for a man trying to find his way in academic research. This book became the guiding light in my life journey, inspiring me to forge an indissoluble bond with Grey System Theory. --Sifeng Liu
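For readers unfamiliar with the GM models mentioned above, the following is a minimal sketch of the classic GM(1,1) grey forecasting model, assuming the standard textbook formulation (accumulated generation, least-squares estimation of the development coefficient, and inverse accumulation); the function name and toy data are mine, not the book's software package.

```python
# Minimal GM(1,1) grey forecasting sketch (standard textbook formulation).
# Not the book's software package; variable names and toy data are invented.
import numpy as np

def gm11_forecast(x0, steps=3):
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # 1-AGO accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values z1(k)

    # Least-squares estimate of development coefficient a and grey input b.
    B = np.column_stack((-z1, np.ones(n - 1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]

    # Time-response function, then inverse AGO to recover forecasts of x0.
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])
    x0_hat[0] = x0[0]
    return x0_hat[n:]                            # forecast values only

print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.85], steps=2))  # toy usage
```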
This book explores the nexus of sustainability and the information and communication technologies that are rapidly changing the way we live, learn, and do business. The monumental amount of energy required to power the zettabytes of data traveling daily across the globe's billions of computers and mobile phones cannot be overstated. This ground-breaking reference examines the possibility that our evolving technologies may enable us to mitigate our global energy crisis, rather than adding to it. By connecting concepts and trends such as smart homes, big data, and the Internet of Things with their applications to sustainability, the authors suggest that emerging and ubiquitous technologies embedded in our daily lives may rightfully be considered enabling solutions for our future sustainable development.
Cyberspace security is a critical subject of our times. On the one hand, the development of the Internet, mobile communications, distributed computing, and computer software and databases storing essential enterprise information has made it easier to conduct business and personal communication between individuals. On the other hand, it has created many opportunities for abuse, fraud and expensive damage. This book is a selection of the best papers presented at the NATO Advanced Research Workshop dealing with the subject of Cyberspace Security and Defense. The level of the individual contributions in the volume is advanced and suitable for senior and graduate students, researchers and technologists who wish to get a feeling for the state of the art in several sub-disciplines of cyberspace security. Several papers provide a broad-brush description of national security issues and brief summaries of technology states. These papers can be read and appreciated by technically enlightened managers and executives who want to understand security issues and approaches to technical solutions. An important question of our times is not "Should we do something to enhance the security of our digital assets?" but rather "How do we do it?"
Based on the results of a study carried out in 1996 to investigate the state of the art of workflow and process technology, MCC initiated the Collaboration Management Infrastructure (CMI) research project to develop innovative agent-based process technology that can support the process requirements of dynamically changing organizations and the requirements of nomadic computing. With a research focus on the flow of interaction among people and software agents representing people, the project deliverables will include a scalable, heterogeneous, ubiquitous and nomadic infrastructure for business processes. The resulting technology is being tested in applications that stress intensive mobile collaboration among people as part of large, evolving business processes. Workflow and Process Automation: Concepts and Technology provides an overview of the problems and issues related to process and workflow technology, and in particular to the definition and analysis of processes and workflows and the execution of their instances. The need for a transactional workflow model is discussed and a spectrum of related transaction models is covered in detail. A plethora of influential projects in workflow and process automation is summarized. The projects are drawn from both academia and industry. The monograph also provides a short overview of the most popular workflow management products, and the state of the workflow industry in general. Workflow and Process Automation: Concepts and Technology offers a road map through the shortcomings of existing process-improvement solutions, written by people with daily first-hand experience, and is suitable as a secondary text for graduate-level courses on workflow and process automation, and as a reference for practitioners in industry.
The objective of this monograph is to improve the performance of sentiment analysis models by incorporating semantic, syntactic and common-sense knowledge. The book proposes a novel semantic concept extraction approach that uses dependency relations between words to extract features from the text. The proposed approach combines semantic and common-sense knowledge for a better understanding of the text. In addition, the book aims to extract prominent features from unstructured text by eliminating noisy, irrelevant and redundant features. Readers will also discover a proposed method for efficient dimensionality reduction to alleviate the data sparseness problem faced by machine learning models. The authors draw attention to the four main findings of the book:
- The performance of sentiment analysis can be improved by reducing redundancy among the features. Experimental results show that the minimum Redundancy Maximum Relevance (mRMR) feature selection technique improves the performance of sentiment analysis by eliminating redundant features.
- The Boolean Multinomial Naive Bayes (BMNB) machine learning algorithm with mRMR feature selection performs better than a Support Vector Machine (SVM) classifier for sentiment analysis.
- The problem of data sparseness is alleviated by semantic clustering of features, which in turn improves the performance of sentiment analysis.
- Semantic relations among the words in the text provide useful cues for sentiment analysis. Common-sense knowledge in the form of the ConceptNet ontology provides a better understanding of the text, which improves the performance of sentiment analysis.
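A rough sketch of the kind of pipeline the first two findings describe is shown below. It is not the authors' implementation: scikit-learn's BernoulliNB stands in for the Boolean (binarized) Naive Bayes classifier, and SelectKBest with mutual information is a simple stand-in for mRMR feature selection; the toy reviews and labels are invented.

```python
# Sketch only: binarized bag-of-words features, mutual-information feature
# selection (a crude proxy for mRMR), and Bernoulli Naive Bayes sentiment
# classification. Toy data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

docs = ["great acting and a moving story", "dull plot and wooden acting",
        "loved every minute of it", "a boring, predictable mess"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(
    CountVectorizer(binary=True),              # Boolean presence/absence features
    SelectKBest(mutual_info_classif, k=10),    # simple stand-in for mRMR
    BernoulliNB(),
)
clf.fit(docs, labels)
print(clf.predict(["wooden story, boring acting"]))   # expected: [0]
```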
Abstraction is a fundamental mechanism underlying both human and artificial perception, representation of knowledge, reasoning and learning. This mechanism plays a crucial role in many disciplines, notably Computer Programming, Natural and Artificial Vision, Complex Systems, Artificial Intelligence and Machine Learning, Art, and Cognitive Sciences. This book first provides the reader with an overview of the notions of abstraction proposed in various disciplines by comparing both commonalities and differences. After discussing the characterizing properties of abstraction, a formal model, the KRA model, is presented to capture them. This model makes the notion of abstraction easily applicable through the introduction of a set of abstraction operators and abstraction patterns, reusable across different domains and applications. The impact of abstraction in Artificial Intelligence, Complex Systems and Machine Learning forms the core of the book. A general framework, based on the KRA model, is presented, and its pragmatic power is illustrated with three case studies: model-based diagnosis, cartographic generalization, and learning Hierarchical Hidden Markov Models.
The book covers tools in the study of online social networks such as machine learning techniques, clustering, and deep learning. A variety of theoretical aspects, application domains, and case studies for analyzing social network data are covered. The aim is to provide new perspectives on utilizing machine learning and related scientific methods and techniques for social network analysis. Machine Learning Techniques for Online Social Networks will appeal to researchers and students in these fields.
Collaborative Networks for a Sustainable World. Aiming to reach a sustainable world calls for a wider collaboration among multiple stakeholders from different origins, as the changes needed for sustainability exceed the capacity and capability of any individual actor. In recent years there has been a growing awareness, both in the political sphere and in civil society including the business sectors, of the importance of sustainability. Therefore, this is an important and timely research issue, not only in terms of systems design but also as an effort to borrow and integrate contributions from different disciplines when designing and/or governing those systems. The discipline of collaborative networks especially, which has already emerged in many application sectors, shall play a key role in the implementation of effective sustainability strategies. PRO-VE 2010 focused on sharing knowledge and experiences as well as identifying directions for further research and development in this area. The conference addressed models, infrastructures, support tools, and governance principles developed for collaborative networks, as important resources to support multi-stakeholder sustainable developments. Furthermore, the challenges of this theme open new research directions for CNs. PRO-VE 2010 was held in St.
As computer power grows and data collection technologies advance, a plethora of data is generated in almost every field where computers are used. The computer-generated data should be analyzed by computers; without the aid of computing technologies, it is certain that huge amounts of collected data will never be examined, let alone used to our advantage. Even with today's advanced computer technologies (e.g., machine learning and data mining systems), discovering knowledge from data can still be fiendishly hard due to the characteristics of the computer-generated data. In its simplest form, raw data are represented as feature values. The size of a dataset can be measured in two dimensions: the number of features (N) and the number of instances (P). Both N and P can be enormously large. This enormity may cause serious problems to many data mining systems. Feature selection is one of the long-existing methods that deal with these problems. Its objective is to select a minimal subset of features according to some reasonable criteria so that the original task can be achieved equally well, if not better. By choosing a minimal subset of features, irrelevant and redundant features are removed according to the criterion. When N is reduced, the data space shrinks and, in a sense, the dataset is now a better representative of the whole data population. If necessary, the reduction of N can also give rise to the reduction of P by eliminating duplicates.
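A minimal sketch of the idea described above (not any particular method from the book) follows: drop near-constant columns as uninformative, then drop columns that are highly correlated with an already-kept column as redundant. The thresholds, function name, and toy data are arbitrary choices of mine.

```python
# Sketch of feature selection by removing irrelevant (near-constant) and
# redundant (highly correlated) features. Thresholds and data are invented.
import numpy as np
import pandas as pd

def select_features(df, var_threshold=1e-3, corr_threshold=0.95):
    # 1. Remove features with (almost) no variance.
    kept = [c for c in df.columns if df[c].var() > var_threshold]
    # 2. Remove features that duplicate information already kept.
    selected = []
    for c in kept:
        if all(abs(df[c].corr(df[s])) < corr_threshold for s in selected):
            selected.append(c)
    return df[selected]

rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = pd.DataFrame({
    "useful": x,
    "redundant_copy": 2.0 * x + 0.01 * rng.normal(size=200),  # near-duplicate
    "constant": np.ones(200),                                  # irrelevant
    "other": rng.normal(size=200),
})
print(select_features(data).columns.tolist())   # ['useful', 'other']
```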
Social media is now ubiquitous on the internet, generating both new possibilities and new challenges in information analysis and retrieval. This comprehensive text/reference examines in depth the synergy between multimedia content analysis, personalization, and next-generation networking. The book demonstrates how this integration can result in robust, personalized services that provide users with an improved multimedia-centric quality of experience. Each chapter offers a practical step-by-step walkthrough for a variety of concepts, components and technologies relating to the development of applications and services. Topics and features: provides contributions from an international and interdisciplinary selection of experts in their fields; introduces the fundamentals of social media retrieval, presenting the most important areas of research in this domain; examines the important topic of multimedia tagging in social environments, including geo-tagging; discusses issues of personalization and privacy in social media; reviews advances in encoding, compression and network architectures for the exchange of social media information; describes a range of applications related to social media. Researchers and students interested in social media retrieval will find this book a valuable resource, covering a broad overview of state-of-the-art research and emerging trends in this area. The text will also be of use to practicing engineers involved in envisioning and building innovative social media applications and services.
As the first book devoted to relational data mining, this coherently written multi-author monograph provides a thorough introduction and systematic overview of the area. The first part introduces the reader to the basics and principles of classical knowledge discovery in databases and inductive logic programming; subsequent chapters by leading experts assess the techniques in relational data mining in a principled and comprehensive way; finally, three chapters deal with advanced applications in various fields and refer the reader to resources for relational data mining. This book will become a valuable source of reference for R&D professionals active in relational data mining. Students as well as IT professionals and ambitious practitioners interested in learning about relational data mining will appreciate the book as a useful text and gentle introduction to this exciting new field.
Spatial support in databases poses new challenges in every part of a database management system, and the capability of spatial support in the physical layer is considered very important. This has led to the design of spatial access methods to enable the effective and efficient management of spatial objects. R-trees have a simple structure and, together with their resemblance to the B-tree, allow developers to incorporate them easily into existing database management systems for the support of spatial query processing. This book provides an extensive survey of the R-tree's evolution, studying the applicability of the structure and its variations to efficient query processing, the accuracy of proposed cost models, and implementation issues such as concurrency control and parallelism. Written for database researchers, designers and programmers as well as graduate students, this comprehensive monograph will be a welcome addition to the field.
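The sketch below is not an R-tree implementation; it only illustrates the minimum-bounding-rectangle (MBR) overlap test that R-tree nodes use to prune subtrees during a spatial range query, applied here to a flat list. Class and variable names are mine.

```python
# Illustration of the MBR overlap test at the heart of R-tree query processing.
# A real R-tree applies the same test to internal-node MBRs to skip subtrees;
# here we simply scan a flat list of (MBR, object) entries.
from dataclasses import dataclass

@dataclass(frozen=True)
class MBR:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def intersects(self, other: "MBR") -> bool:
        # Axis-aligned rectangles overlap iff they overlap on both axes.
        return (self.xmin <= other.xmax and other.xmin <= self.xmax and
                self.ymin <= other.ymax and other.ymin <= self.ymax)

def range_query(entries, window):
    """Return the objects whose MBR overlaps the query window."""
    return [obj for mbr, obj in entries if mbr.intersects(window)]

entries = [(MBR(0, 0, 2, 2), "park"), (MBR(5, 5, 6, 7), "lake"),
           (MBR(1, 3, 4, 4), "road")]
print(range_query(entries, MBR(1, 1, 3, 3.5)))   # ['park', 'road']
```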
Since the early eighties, IFIP/Sec has been an important rendezvous for information technology researchers and specialists involved in all aspects of IT security. The explosive growth of the Web is now faced with the formidable challenge of providing trusted information. IFIP/Sec'01 is the first of this decade (and century) and is devoted to "Trusted Information - the New Decade Challenge". These proceedings are divided into eleven parts corresponding to the conference program. Sessions are dedicated to technologies: Security Protocols, Smart Card, Network Security and Intrusion Detection, and Trusted Platforms. Other sessions are devoted to applications such as eSociety, TTP Management and PKI, Secure Workflow Environment, and Secure Group Communications, and to the deployment of applications: Risk Management, Security Policies, and Trusted System Design and Management. The year 2001 is a double anniversary. First, fifteen years ago, the first IFIP/Sec was held in France (IFIP/Sec'86, Monte-Carlo), and 2001 is also the anniversary of smart card technology. Smart cards emerged some twenty years ago as an innovation and have now become pervasive information devices used for highly distributed secure applications. These cards let millions of people carry a highly secure device that can represent them on a variety of networks. To conclude, we hope that the rich "menu" of conference papers for this IFIP/Sec conference will provide valuable insights and encourage specialists to pursue their work in trusted information.
In the past five years, the field of electrostatic discharge (ESD) control has undergone some notable changes. Industry standards have multiplied, though not all of these, in our view, are realistic and meaningful. Increasing importance has been ascribed to the Charged Device Model (CDM) versus the Human Body Model (HBM) as a cause of device damage and, presumably, premature (latent) failure. Packaging materials have significantly evolved. Air ionization techniques have improved, and usage has grown. Finally, and importantly, the government has ceased imposing MIL-STD-1686 on all new contracts, leaving companies on their own to formulate an ESD-control policy and write implementing documents. All these changes are dealt with in five new chapters and ten new reprinted papers added to this revised edition of ESD from A to Z. Also, the original chapters have been augmented with new material, such as more troubleshooting examples in Chapter 8 and a 20-question multiple-choice test for certifying operators in Chapter 9. More than ever, the book seeks to provide advice, guidance, and practical examples, not just a jumble of facts and generalizations. For instance, the added tailored versions of the model specifications for ESD-safe handling and packaging are actually in use at medium-sized corporations and could serve as patterns for many readers.
This dictionary was produced in response to the rapidly increasing amount of quasi-industrial jargon in the field of information technology, compounded by the fact that these somewhat esoteric terms are often further reduced to acronyms and abbreviations that are seldom explained. Even when they are defined, individual interpretations continue to diverge. Today, most of the codes have passed into the public domain, simply because they exist in most of the telecommunications systems installed throughout the developed (and developing) world and are largely known to most of those who work in that particular area. However, foreign variants often defy even the most astute observer. This dictionary seeks to clarify this bewildering situation as much as possible. The 26,000 definitions set out here, drawn from some 16,000 individual cybernyms, cover computing, electronics, telecommunications (including intelligent networks and mobile telephony), together with satellite technology and Internet/Web terminology.
This doctoral thesis reports on an innovative data repository offering adaptive metadata management to maximise information sharing and comprehension in multidisciplinary and geographically distributed collaborations. It approaches metadata as a fluid, loosely structured and dynamic process rather than a fixed product, and describes the development of a novel data management platform based on a schemaless JSON data model, which represents the first fully JSON-based metadata repository designed for the biomedical sciences. Results obtained in various application scenarios (e.g. integrated biobanking, functional genomics and computational neuroscience) and the corresponding performance tests are reported in detail. Last but not least, the book offers a systematic overview of data platforms commonly used in the biomedical sciences, together with a fresh perspective on the role of, and tools for, data sharing and heterogeneous data integration in contemporary biomedical research.
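To make the idea of schemaless JSON metadata concrete, here is a purely hypothetical sketch of how records from different studies might sit side by side in one collection without a shared schema; all field names and values are invented and are not taken from the thesis.

```python
# Hypothetical illustration of schemaless JSON metadata: two records with
# completely different structures stored in the same collection.
import json

records = [
    {
        "id": "biobank-0001",
        "domain": "integrated biobanking",
        "sample": {"tissue": "liver", "storage_temp_c": -80},
        "consent": {"broad_use": True},
    },
    {
        "id": "ephys-0042",
        "domain": "computational neuroscience",
        "recording": {"channels": 64, "sampling_rate_hz": 30000},
        "stimulus": ["grating", "natural_movie"],
    },
]

# A schemaless store accepts both documents as-is; queries then address
# whichever fields a given record happens to have.
print(json.dumps(records, indent=2))
```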
In System-on-Chip Architectures and Implementations for Private-Key Data Encryption, new generic silicon architectures for the DES and Rijndael symmetric-key encryption algorithms are presented. The generic architectures can be utilised to rapidly and effortlessly generate system-on-chip cores which support numerous application requirements, most importantly different modes of operation and both encryption and decryption capabilities. In addition, efficient silicon SHA-1, SHA-2 and HMAC hash algorithm architectures are described. A single-chip Internet Protocol Security (IPSec) architecture is also presented that comprises a generic Rijndael design and a highly efficient HMAC-SHA-1 implementation. In the authors' opinion, the book provides highly efficient hardware implementations of cryptographic algorithms; however, these are not hard-and-fast solutions. The aim of the book is to provide an excellent guide to the design and development process involved in the translation from encryption algorithm to silicon chip implementation.
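For readers who want a software point of reference for the HMAC-SHA-1 computation that the book implements in silicon, the snippet below uses Python's standard library; the key and message are arbitrary examples.

```python
# Software reference for HMAC-SHA-1 using the Python standard library.
# Key and message are arbitrary examples, not from the book.
import hashlib
import hmac

key = b"example-key"
message = b"packet payload to authenticate"

tag = hmac.new(key, message, hashlib.sha1).hexdigest()
print(tag)   # 160-bit authentication tag as 40 hex characters
```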
Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Advice involves recommendations on what to think; through thought, on what to choose; and via choices, on how to act. Advice is information that moves by communication, from advisors to the recipient of advice. Ivan Jureta offers a general way to analyze advice. The analysis applies regardless of what the advice is about, from whom it comes or to whom it needs to be given, and it concentrates on the production and consumption of advice independent of the field of application. It is made up of two intertwined parts, a conceptual analysis and an analysis of the rationale of advice. He starts from the premise that giving advice is a design problem, and he treats advice as an artifact designed and used to influence decisions. What is unusual is the theoretical backdrop against which the author's discussions are set: ontology engineering, conceptual analysis, and artificial intelligence. While classical decision theory would be expected to play a key role, this is not the case here for one principal reason: the difficulty of obtaining relevant numerical, quantitative estimates of probability and utility in most practical situations. Instead, conceptual models and mathematical logic are the author's tools of choice. The book is primarily intended for graduate students and researchers of management science. They are offered a general method of analysis that applies to giving and receiving advice when the decision problems are not well structured, and when there is imprecise, unclear, incomplete, or conflicting qualitative information.
There is a broad interest in feature extraction, construction, and selection among practitioners from statistics, pattern recognition, and data mining to machine learning. Data pre-processing is an essential step in the knowledge discovery process for real-world applications. This book compiles contributions from many leading and active researchers in this growing field and paints a picture of the state-of-the-art techniques that can boost the capabilities of many existing data mining tools. The objective of this collection is to increase the awareness of the data mining community about research into feature extraction, construction and selection, which are currently conducted mainly in isolation. This book is part of an endeavor to produce a contemporary overview of modern solutions, to create synergy among these seemingly different branches, and to pave the way for developing meta-systems and novel approaches. The book can be used by researchers and graduate students in machine learning, data mining, and knowledge discovery, who wish to understand techniques of feature extraction, construction and selection for data pre-processing and to solve large size, real-world problems. The book can also serve as a reference work for those who are conducting research into feature extraction, construction and selection, and are ready to meet the exciting challenges ahead of us.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews the peer-to-peer techniques and tools and the common information visualization techniques, used in BDA.
This 2nd edition has been completely revised and updated, with additional new chapters. It presents state-of-the-art research in this area and focuses on key topics such as: visualization of semantic and structural information and metadata in the context of the emerging Semantic Web; Ontology-based Information Visualization and the use of graphically represented ontologies; Semantic Visualizations using Topic Maps and graph techniques; Recommender systems for filtering and recommending on the Semantic Web; SVG and X3D as new XML-based languages for 2D and 3D visualisations; methods used to construct and visualize high quality metadata and ontologies; and navigating and exploring XML documents using interactive multimedia interfaces. The design of visual interfaces for e-commerce and information retrieval is currently a challenging area of practical web development.
This book addresses the privacy issue of On-Line Analytic Processing (OLAP) systems. OLAP systems usually need to meet two conflicting goals. First, the sensitive data stored in underlying data warehouses must be kept secret. Second, analytical queries about the data must be allowed for decision support purposes. The main challenge is that sensitive data can be inferred from answers to seemingly innocent aggregations of the data. This volume reviews a series of methods that can precisely answer data cube-style OLAP queries about sensitive data while provably preventing adversaries from inferring the underlying individual values.
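A tiny hypothetical example of the inference problem described above: neither of two SUM aggregations reveals an individual salary, yet their difference does. All names and figures below are invented.

```python
# Hypothetical illustration of inference from innocent-looking aggregates.
salaries = {"Alice": 91000, "Bob": 72000, "Carol": 65000}

sum_all = sum(salaries.values())                                          # 228000
sum_without_alice = sum(v for k, v in salaries.items() if k != "Alice")   # 137000

# An adversary allowed to run both aggregations learns Alice's exact salary:
print(sum_all - sum_without_alice)   # 91000
```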