""As organizations have become more sophisticated, pressure to
provide information sharing across dissimilar platforms has
mounted. In addition, advances in distributed computing and
networking combined with the affordable high level of connectivity,
are making information sharing across databases closer to being
accomplished...With the advent of the internet, intranets, and
affordable network connectivity, business reengineering has become
a necessity for modern corporations to stay competitive in the
global market...An end-user in a heterogeneous computing
environment should be able to not only invoke multiple exiting
software systems and hardware devices, but also coordinate their
interactions.""--From the Introduction Seventeen leaders in the field contributed chapters specifically
for this unique book, together providing the most comprehensive
resource on managing multidatabase systems involving heterogeneous
and autonomous databases available today. The book covers virtually
all fundamental issues, concepts, and major research topics.
This comprehensive book focuses on improving big-data security for healthcare organizations. Following an extensive introduction to the Internet of Things (IoT) in healthcare, including challenging topics and scenarios, it offers an in-depth analysis of medical body area networks based on the 5th generation of IoT communication technology and its nanotechnology. It also describes a novel strategic framework and a computationally intelligent model to measure possible security vulnerabilities in the context of e-health. Moreover, the book addresses healthcare systems that handle large volumes of data driven by patients' records and health/personal information, including big-data-based knowledge management systems to support clinical decisions. Several of the issues faced in storing and processing big data are presented, along with the available tools, technologies, and algorithms to deal with those problems, as well as a case study in healthcare analytics. Addressing trust, privacy, and security issues as well as IoT and big-data challenges, the book highlights advances in the field to guide engineers developing different IoT devices and evaluating the performance of different IoT techniques. Additionally, it explores the impact of such technologies on public, private, community, and hybrid scenarios in healthcare. This book offers professionals, scientists, and engineers the latest technologies, techniques, and strategies for IoT and big data.
A modern information retrieval system must have the capability to find, organize and present very different manifestations of information - such as text, pictures, videos or database records - any of which may be of relevance to the user. However, the concept of relevance, while seemingly intuitive, is actually hard to define, and it's even harder to model in a formal way. Lavrenko does not attempt to bring forth a new definition of relevance, nor provide arguments as to why any particular definition might be theoretically superior or more complete. Instead, he takes a widely accepted, albeit somewhat conservative definition, makes several assumptions, and from them develops a new probabilistic model that explicitly captures that notion of relevance. With this book, he makes two major contributions to the field of information retrieval: first, a new way to look at topical relevance, complementing the two dominant models, i.e., the classical probabilistic model and the language modeling approach, and which explicitly combines documents, queries, and relevance in a single formalism; second, a new method for modeling exchangeable sequences of discrete random variables which does not make any structural assumptions about the data and which can also handle rare events. Thus his book is of major interest to researchers and graduate students in information retrieval who specialize in relevance modeling, ranking algorithms, and language modeling.
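As a rough, illustrative sketch of the general relevance-model idea described above (estimating a model of relevance from pseudo-relevant documents, in the spirit of the language-modeling approach, not the book's exact formalism), the following Python snippet weights each document's language model by its query likelihood; all documents, query terms, and smoothing choices are invented for the example.

```python
from collections import Counter

# Toy pseudo-relevant documents and query (illustrative data only).
docs = [
    "information retrieval relevance ranking models",
    "probabilistic models for information retrieval",
    "language models and relevance in retrieval",
]
query = ["relevance", "retrieval"]

def doc_lm(text):
    """Maximum-likelihood unigram language model of a document."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def query_likelihood(q, lm, eps=1e-6):
    """P(Q|D) under the document language model, with a tiny floor for unseen terms."""
    p = 1.0
    for w in q:
        p *= lm.get(w, eps)
    return p

# Relevance-model estimate in the spirit of RM1:
#   P(w|R) is proportional to the sum over documents D of P(w|D) * P(Q|D).
lms = [doc_lm(d) for d in docs]
weights = [query_likelihood(query, lm) for lm in lms]
vocab = {w for lm in lms for w in lm}
scores = {w: sum(lm.get(w, 0.0) * wt for lm, wt in zip(lms, weights)) for w in vocab}
norm = sum(scores.values())
relevance_model = {w: s / norm for w, s in sorted(scores.items(), key=lambda x: -x[1])}
print(relevance_model)
```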
Advice involves recommendations on what to think; through thought, on what to choose; and via choices, on how to act. Advice is information that moves, through communication, from advisors to the recipients of advice. Ivan Jureta offers a general way to analyze advice. The analysis applies regardless of what the advice is about, from whom it comes, or to whom it needs to be given, and it concentrates on the production and consumption of advice independently of the field of application. It is made up of two intertwined parts, a conceptual analysis and an analysis of the rationale of advice. His premise is that giving advice is a design problem, and he treats advice as an artifact designed and used to influence decisions. What is unusual is the theoretical backdrop against which the author's discussions are set: ontology engineering, conceptual analysis, and artificial intelligence. While classical decision theory would be expected to play a key role, this is not the case here for one principal reason: the difficulty of obtaining relevant numerical, quantitative estimates of probability and utility in most practical situations. Instead, conceptual models and mathematical logic are the author's tools of choice. The book is primarily intended for graduate students and researchers in management science. They are offered a general method of analysis that applies to giving and receiving advice when the decision problems are not well structured, and when there is imprecise, unclear, incomplete, or conflicting qualitative information.
Semantic Models for Multimedia Database Searching and Browsing begins with the introduction of multimedia information applications, the need for the development of the multimedia database management systems (MDBMSs), and the important issues and challenges of multimedia systems. The temporal relations, the spatial relations, the spatio-temporal relations, and several semantic models for multimedia information systems are also introduced. In addition, this book discusses recent advances in multimedia database searching and multimedia database browsing. More specifically, issues such as image/video segmentation, motion detection, object tracking, object recognition, knowledge-based event modeling, content-based retrieval, and key frame selections are presented for the first time in a single book. Two case studies consisting of two semantic models are included in the book to illustrate how to use semantic models to design multimedia information systems. Semantic Models for Multimedia Database Searching and Browsing is an excellent reference and can be used in advanced level courses for researchers, scientists, industry professionals, software engineers, students, and general readers who are interested in the issues, challenges, and ideas underlying the current practice of multimedia presentation, multimedia database searching, and multimedia browsing in multimedia information systems.
This book reports on the development and validation of a generic defeasible logic programming framework for carrying out argumentative reasoning in Semantic Web applications (GF@SWA). The proposed methodology is unique in providing a solution for representing incomplete and/or contradictory information coming from different sources, and reasoning with it. GF@SWA is able to represent this type of information, perform argumentation-driven hybrid reasoning to resolve conflicts, and generate graphical representations of the integrated information, thus assisting decision makers in decision making processes. GF@SWA represents the first argumentative reasoning engine for carrying out automated reasoning in the Semantic Web context and is expected to have a significant impact on future business applications. The book provides the readers with a detailed and clear exposition of different argumentation-based reasoning techniques, and of their importance and use in Semantic Web applications. It addresses both academics and professionals, and will be of primary interest to researchers, students and practitioners in the area of Web-based intelligent decision support systems and their application in various domains.
Since their invention in the late seventies, public key cryptosystems have become an indispensable asset in establishing private and secure electronic communication, and this need, given the tremendous growth of the Internet, is likely to continue growing. Elliptic curve cryptosystems represent the state of the art for such systems. Elliptic Curves and Their Applications to Cryptography: An Introduction provides a comprehensive and self-contained introduction to elliptic curves and how they are employed to secure public key cryptosystems. Even though the elegant mathematical theory underlying cryptosystems is considerably more involved than for other systems, this text requires the reader to have only an elementary knowledge of basic algebra. The text nevertheless leads to problems at the forefront of current research, featuring chapters on point counting algorithms and security issues. The adopted unifying approach treats with equal care elliptic curves over fields of even characteristic, which are especially suited for hardware implementations, and curves over fields of odd characteristic, which have traditionally received more attention. The book has been used successfully for teaching advanced undergraduate courses. It will be of greatest interest to mathematicians, computer scientists, and engineers who are curious about elliptic curve cryptography in practice, without losing the beauty of the underlying mathematics.
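As a minimal illustration of how elliptic curves are used in practice, the following Python sketch performs an elliptic-curve Diffie-Hellman key agreement with the third-party `cryptography` package; the curve and key-derivation parameters are illustrative choices, not recommendations from the book.

```python
# Elliptic-curve Diffie-Hellman (ECDH) key agreement, sketched with the third-party
# `cryptography` package (pip install cryptography). Curve choice is illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair on the same named curve.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Each party combines its private key with the other's public key;
# both arrive at the same shared secret.
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared

# Derive a symmetric key from the shared secret before using it for encryption.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"illustrative-handshake").derive(alice_shared)
print(key.hex())
```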
The book covers tools in the study of online social networks such as machine learning techniques, clustering, and deep learning. A variety of theoretical aspects, application domains, and case studies for analyzing social network data are covered. The aim is to provide new perspectives on utilizing machine learning and related scientific methods and techniques for social network analysis. Machine Learning Techniques for Online Social Networks will appeal to researchers and students in these fields.
This doctoral thesis reports on an innovative data repository offering adaptive metadata management to maximise information sharing and comprehension in multidisciplinary and geographically distributed collaborations. It approaches metadata as a fluid, loosely structured and dynamic process rather than a fixed product, and describes the development of a novel data management platform based on a schemaless JSON data model, which represents the first fully JSON-based metadata repository designed for the biomedical sciences. Results obtained in various application scenarios (e.g. integrated biobanking, functional genomics and computational neuroscience) and corresponding performance tests are reported in detail. Last but not least, the book offers a systematic overview of data platforms commonly used in the biomedical sciences, together with a fresh perspective on the role of and tools for data sharing and heterogeneous data integration in contemporary biomedical research.
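As a purely hypothetical illustration of what a schemaless JSON data model allows (the field names below are invented, not taken from the platform described), records with entirely different shapes can coexist in the same store:

```python
import json

# Two metadata records with different shapes stored side by side; no fixed schema
# is imposed, which is the point of a schemaless JSON data model.
records = [
    {
        "id": "sample-001",
        "domain": "biobanking",
        "tissue": "liver",
        "storage": {"temperature_c": -80, "container": "cryovial"},
    },
    {
        "id": "recording-042",
        "domain": "computational neuroscience",
        "species": "mouse",
        "channels": 64,
        "sampling_rate_hz": 30000,
    },
]

print(json.dumps(records, indent=2))
```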
The present volume provides a collection of seven articles containing new, high-quality research results demonstrating the significance of Multi-objective Evolutionary Algorithms (MOEA) for data mining tasks in Knowledge Discovery from Databases (KDD). These articles are written by leading experts from around the world. It is shown how the different MOEAs can be utilized, both individually and in an integrated manner, in various ways to efficiently mine data from large databases.
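Pareto dominance is the core notion underlying multi-objective evolutionary algorithms; the following minimal Python sketch (with made-up objective vectors and minimization assumed) shows the dominance test and a naive extraction of the non-dominated front.

```python
def dominates(a, b):
    """Return True if objective vector `a` Pareto-dominates `b` (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Toy objective vectors, e.g. (classification error, rule-set complexity).
solutions = [(0.10, 12), (0.12, 8), (0.10, 15), (0.20, 5)]
pareto_front = [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t is not s)]
print(pareto_front)   # [(0.10, 12), (0.12, 8), (0.20, 5)]
```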
Mining of Data with Complex Structures:
- Clarifies the type and nature of data with complex structures, including sequences, trees, and graphs.
- Provides a detailed background on the state of the art of sequence mining, tree mining, and graph mining.
- Defines the essential aspects of the tree mining problem: subtree types, support definitions, and constraints.
- Outlines the implementation issues to consider when developing tree mining algorithms (enumeration strategies, data structures, etc.).
- Details the Tree Model Guided (TMG) approach for tree mining and provides the mathematical model for the worst-case estimate of the complexity of mining ordered induced and embedded subtrees.
- Explains the mechanism of the TMG framework for mining ordered/unordered induced/embedded and distance-constrained embedded subtrees.
- Provides a detailed comparison of the different tree mining approaches, highlighting the characteristics and benefits of each approach.
- Overviews the implications and potential applications of tree mining in general knowledge-management tasks, using Web, health, and bioinformatics applications as case studies.
- Details the extension of the TMG framework for sequence mining.
- Provides an overview of future research directions with respect to technical extensions and application areas.
The primary audience is third- and fourth-year undergraduate students, Masters and PhD students, and academics; the book can be used for both teaching and research. The secondary audience is practitioners in industry, business, commerce, government, and consortiums, alliances, and partnerships who want to learn how to introduce and efficiently use techniques for mining data with complex structures in their applications. The scope of the book is both theoretical and practical, and as such it will reach a broad market within both academia and industry. In addition, its subject matter is a rapidly emerging field that is critical for the efficient analysis of knowledge stored in various domains.
Security Education and Critical Infrastructures presents the most recent developments in research and practice on teaching information security, and covers topics including curriculum design.
The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this area. It serves as an excellent reference, providing insight into some of the most important research issues in the field.
The International Federation for Information Processing (IFIP) series publishes state-of-the-art results in the sciences and technologies of information and communication. The IFIP series encourages education and the dissemination and exchange of information on all aspects of computing. This particular volume presents the most up-to-date research findings from leading experts from around the world on information security education.
Privacy requirements have an increasing impact on the realization of modern applications. Commercial and legal regulations demand that privacy guarantees be provided whenever sensitive information is stored, processed, or communicated to external parties. Current approaches encrypt sensitive data, thus reducing query execution efficiency and preventing selective information release. Preserving Privacy in Data Outsourcing presents a comprehensive approach to protecting highly sensitive information when it is stored on systems that are not under the data owner's control. The approach illustrated combines access control and encryption, enforcing access control via structured encryption. This solution, coupled with efficient algorithms for key derivation and distribution, provides efficient and secure authorization management on outsourced data, allowing the data owner to outsource not only the data but the security policy itself. To reduce the amount of data that must be encrypted, the book also investigates data fragmentation as a complementary means of protecting the privacy of data associations: associations broken by fragmentation are visible only to users authorized (by knowing the proper key) to join the fragments. Finally, the book investigates the problem of executing queries over data that may be distributed across different servers, where execution must be controlled so that sensitive information and sensitive associations are visible only to authorized parties. Case studies are provided throughout the book. Professionals working in privacy, data mining, data protection, data outsourcing, electronic commerce, and machine learning will find this book a valuable asset, as will professional associations such as the ACM, the IEEE, and Management Science. This book is also suitable as a secondary text or reference for advanced-level students and researchers in computer science.
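As a loose illustration of the key-derivation idea (not the authors' specific scheme), the following Python sketch uses the third-party `cryptography` package to derive a separate symmetric key per fragment from a master secret, so that a user holding only one derived key can decrypt only that fragment; all names and data are invented.

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master_secret = os.urandom(32)  # held by the data owner, never given to the server

def fragment_key(fragment_id: str) -> bytes:
    """Derive a symmetric key for one fragment from the owner's master secret."""
    raw = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=fragment_id.encode()).derive(master_secret)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded 32-byte key

# Encrypt each fragment under its own key before outsourcing it.
ciphertext = Fernet(fragment_key("patients-f1")).encrypt(b"name=Alice;dob=1980-01-01")

# A user authorized for fragment "patients-f1" receives only that derived key
# and can decrypt just that fragment.
print(Fernet(fragment_key("patients-f1")).decrypt(ciphertext))
```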
Database Recovery presents an in-depth discussion of all aspects of database recovery. It first introduces the topic informally to build an intuitive understanding, and then presents a formal treatment of the recovery mechanism. In the past, recovery has been treated merely as a mechanism implemented on an ad-hoc basis. This book elevates recovery from a mechanism to a concept and presents its essential properties. A book on recovery is incomplete if it does not show how recovery is practiced in commercial systems. This book therefore presents a detailed description of recovery mechanisms as implemented in the Informix, OpenIngres, Oracle, and Sybase commercial database systems. Database Recovery is suitable as a textbook for a graduate-level course on database recovery, as a secondary text for a graduate-level course on database systems, and as a reference for researchers and practitioners in industry.
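As a toy illustration of log-based recovery in general (not the mechanism of any of the commercial systems named above), the following Python sketch replays only the updates of committed transactions from a redo log after a crash.

```python
# A toy redo-only recovery pass: after a crash, replay updates of committed
# transactions from the log, in order, and ignore transactions that never committed.
log = [
    ("T1", "update", "x", 5),
    ("T2", "update", "y", 7),
    ("T1", "commit", None, None),
    ("T2", "update", "z", 9),
    # crash happens here: T2 never committed
]

committed = {txn for txn, op, _, _ in log if op == "commit"}
database = {}
for txn, op, key, value in log:
    if op == "update" and txn in committed:
        database[key] = value

print(database)   # {'x': 5} -- T2's updates are discarded
```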
In the last ten years, a true explosion of investigations into fuzzy modeling and its applications in control, diagnostics, decision making, optimization, pattern recognition, robotics, etc. has been observed. The attraction of fuzzy modeling stems from its intelligibility and the high effectiveness of the models obtained. Owing to this, fuzzy modeling can be applied to the solution of problems that could not previously be solved with any known conventional methods. The book provides the reader with an advanced introduction to the problems of fuzzy modeling and to one of its most important applications: fuzzy control. It is based on the latest and most significant knowledge of the subject and can be used not only by control specialists but also by specialists working in any field requiring plant modeling, process modeling, and systems modeling, e.g. economics, business, medicine, agriculture, and meteorology.
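As an elementary illustration of fuzzy modeling, the following Python sketch evaluates a triangular membership function and the firing strength of a single rule, assuming the commonly used min operator for conjunction; all values and linguistic terms are invented for the example.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative linguistic terms for a temperature input and its rate of change.
temp_is_high = triangular(78.0, a=70, b=90, c=110)   # degree to which 78 is "high"
change_is_fast = triangular(4.0, a=2, b=6, c=10)     # degree to which 4 is "fast"

# Firing strength of the rule "IF temp is high AND change is fast THEN ...",
# using min as the conjunction operator (a common choice in Mamdani-style control).
firing_strength = min(temp_is_high, change_is_fast)
print(temp_is_high, change_is_fast, firing_strength)
```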
This book presents new approaches that advance research in all aspects of agent-based models, technologies, simulations and implementations for data intensive applications. The nine chapters contain a review of recent cross-disciplinary approaches in cloud environments and multi-agent systems, and important formulations of data intensive problems in distributed computational environments together with the presentation of new agent-based tools to handle those problems and Big Data in general. This volume can serve as a reference for students, researchers and industry practitioners working in or interested in joining interdisciplinary work in the areas of data intensive computing and Big Data systems using emergent large-scale distributed computing paradigms. It will also allow newcomers to grasp key concepts and potential solutions on advanced topics of theory, models, technologies, system architectures and implementation of applications in Multi-Agent systems and data intensive computing.
The present economic and social environment has given rise to new situations within which companies must operate. As a first example, the globalization of the economy and the need for performance have led companies to outsource and then to operate inside networks of enterprises such as supply chains or virtual enterprises. A second instance is related to environmental issues: awareness of the impact of industrial activities on the environment has led companies to revise processes, to save energy, and to optimize transportation. A last example relates to knowledge. Knowledge is considered today to be one of the main assets of a company, and how to capitalize on it, manage it, and reuse it for the benefit of the company is an important current issue. The three examples above have no direct links; however, each of them constitutes a challenge that companies have to face today. This book brings together the opinions of several leading researchers from around the world. Together they try to develop new approaches and find answers to those challenges. Through the individual chapters of this book, the authors present their understanding of the different challenges, the concepts on which they are working, the approaches they are developing, and the tools they propose. The book is composed of six parts; each one focuses on a specific theme and is subdivided into subtopics.
The Semantic Web, which is intended to establish a machine-understandable Web, is currently changing from an emerging trend to a technology used in complex real-world applications. A number of standards and techniques have been developed by the World Wide Web Consortium (W3C), e.g., the Resource Description Framework (RDF), which provides a general method for conceptual descriptions of Web resources, and SPARQL, an RDF query language. Recent examples of large RDF data sets with billions of facts include the UniProt comprehensive catalog of protein sequence, function, and annotation data, the RDF data extracted from Wikipedia, and Princeton University's WordNet. Clearly, querying performance has become a key issue for Semantic Web applications. In his book, Groppe details various aspects of high-performance Semantic Web data management and query processing. His presentation fills the gap between Semantic Web and database books, which either fail to take into account the performance issues of large-scale data management or fail to exploit the special properties of Semantic Web data models and queries. After a general introduction to the relevant Semantic Web standards, he presents specialized indexing and sorting algorithms, adapted approaches for logical and physical query optimization, optimization possibilities when using the parallel database technologies of today's multicore processors, and visual and embedded query languages. Groppe primarily targets researchers, students, and developers of large-scale Semantic Web applications. On the complementary book webpage, readers will find additional material, such as an online demonstration of a query engine, as well as exercises and their solutions, which challenge their comprehension of the topics presented.
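As a minimal illustration of querying RDF data with SPARQL (using the third-party `rdflib` package; the tiny graph and vocabulary are invented for the example), consider:

```python
# A SPARQL query over a small in-memory RDF graph, using rdflib (pip install rdflib).
from rdflib import Graph

turtle = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?person ?friend
WHERE { ?person ex:knows ?friend . }
"""

for person, friend in g.query(query):
    print(person, "knows", friend)
```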
The current IT environment deals with novel, complex approaches such as information privacy, trust, digital forensics, management, and human aspects. This volume includes papers offering research contributions that focus both on access control in complex environments as well as other aspects of computer security and privacy.
Botnets have become the platform of choice for launching attacks and committing fraud on the Internet. A better understanding of Botnets will help to coordinate and develop new technologies to counter this serious security threat. Botnet Detection: Countering the Largest Security Threat consists of chapters contributed by world-class leaders in this field, from the June 2006 ARO workshop on Botnets. This edited volume represents the state-of-the-art in research on Botnets.
Corpus Annotation gives an up-to-date picture of this fascinating new area of research, and will provide essential reading for newcomers to the field as well as those already involved in corpus annotation. Early chapters introduce the different levels and techniques of corpus annotation. Later chapters deal with software developments, applications, and the development of standards for the evaluation of corpus annotation. While the book takes detailed account of research world-wide, its focus is particularly on the work of the UCREL (University Centre for Computer Corpus Research on Language) team at Lancaster University, which has been at the forefront of developments in the field of corpus annotation since its beginnings in the 1970s.
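Part-of-speech tagging is one basic level of corpus annotation; as a minimal illustration (using the third-party NLTK package and its default tagger, not the UCREL/CLAWS tools discussed in the book), consider:

```python
# Part-of-speech annotation sketched with NLTK (pip install nltk).
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Corpus annotation adds interpretive linguistic information to a corpus."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# e.g. [('Corpus', 'NNP'), ('annotation', 'NN'), ('adds', 'VBZ'), ...]
```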
A hands-on guide to web scraping and text mining for both beginners and experienced users of R.
* Introduces fundamental concepts of the main architecture of the web and of databases, covering HTTP, HTML, XML, JSON, and SQL.
* Provides basic techniques for querying web documents and data sets (XPath and regular expressions).
* An extensive set of exercises is presented to guide the reader through each technique.
* Explores both supervised and unsupervised techniques as well as advanced techniques such as data scraping and text management.
* Case studies are featured throughout, along with examples for each technique presented.
* R code and solutions to exercises featured in the book are provided on a supporting website.
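The book itself works in R; as a language-neutral illustration of the same XPath and regular-expression ideas (shown here in Python, this document's single example language, with the third-party lxml package and an invented HTML snippet), consider:

```python
# Querying an HTML document with XPath and cleaning values with a regular expression.
import re
from lxml import html

page = """
<html><body>
  <div class="listing"><span class="title">Database Recovery</span>
       <span class="price">R 1,250.00</span></div>
  <div class="listing"><span class="title">Corpus Annotation</span>
       <span class="price">R 980.50</span></div>
</body></html>
"""

tree = html.fromstring(page)
titles = tree.xpath('//div[@class="listing"]/span[@class="title"]/text()')
prices = [float(re.sub(r"[^\d.]", "", p))   # strip currency symbols and commas
          for p in tree.xpath('//span[@class="price"]/text()')]
print(list(zip(titles, prices)))
```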
In the course of fuzzy technological development, fuzzy graph theory was identified quite early on for its importance in making things work. Two very important and useful concepts are those of granularity and of nonlinear approximations. The concept of granularity has evolved as a cornerstone of Lotfi A. Zadeh's theory of perception, while the concept of nonlinear approximation is the driving force behind the success of consumer electronics manufacturing. It is fair to say that fuzzy graph theory paved the way for engineers to build many rule-based expert systems. In the open literature, there are many papers written on the subject of fuzzy graph theory; however, there are relatively few books available on the very same topic. Professors Mordeson and Nair have made a real contribution in putting together a very comprehensive book on fuzzy graphs and fuzzy hypergraphs. In particular, the discussion on hypergraphs certainly is an innovative idea. For an experienced engineer who has spent a great deal of time in the laboratory, it is usually a good idea to revisit the theory, and Professors Mordeson and Nair have created a volume that enables engineers and designers to benefit from having the reference material in one place. In addition, this volume is a testament to the numerous contributions Professor John N. Mordeson and his associates have made to mathematical studies in so many different topics of fuzzy mathematics.