In the last ten years, a true explosion of investigations into fuzzy modeling and its applications in control, diagnostics, decision making, optimization, pattern recognition, robotics, etc. has been observed. The attraction of fuzzy modeling results from its intelligibility and the high effectiveness of the models obtained. Owing to this, fuzzy modeling can be applied to problems that could not previously be solved with known conventional methods. The book provides the reader with an advanced introduction to the problems of fuzzy modeling and to one of its most important applications: fuzzy control. It is based on the latest and most significant knowledge of the subject and can be used not only by control specialists but also by specialists working in any field requiring plant modeling, process modeling, and systems modeling, e.g. economics, business, medicine, agriculture, and meteorology.
This book presents new approaches that advance research in all aspects of agent-based models, technologies, simulations and implementations for data intensive applications. The nine chapters contain a review of recent cross-disciplinary approaches in cloud environments and multi-agent systems, and important formulations of data intensive problems in distributed computational environments, together with the presentation of new agent-based tools to handle those problems and Big Data in general. This volume can serve as a reference for students, researchers and industry practitioners working in or interested in joining interdisciplinary work in the areas of data intensive computing and Big Data systems using emergent large-scale distributed computing paradigms. It will also allow newcomers to grasp key concepts and potential solutions to advanced topics of theory, models, technologies, system architectures and implementation of applications in multi-agent systems and data intensive computing.
The present economic and social environment has given rise to new situations within which companies must operate. As a first example, the globalization of the economy and the need for performance has led companies to outsource and then to operate inside networks of enterprises such as supply chains or virtual enterprises. A second instance is related to environmental issues. Recognition of the impact of industrial activities on the environment has led companies to revise processes, to save energy, and to optimize transportation. A last example relates to knowledge. Knowledge is considered today to be one of the main assets of a company. How to capitalize it, manage it, and reuse it for the benefit of the company is an important current issue. The three examples above have no direct links. However, each of them constitutes a challenge that companies have to face today. This book brings together the opinions of several leading researchers from all around the world. Together they try to develop new approaches and find answers to those challenges. Through the individual chapters of this book, the authors present their understanding of the different challenges, the concepts on which they are working, the approaches they are developing and the tools they propose. The book is composed of six parts; each one focuses on a specific theme and is subdivided into subtopics.
The Semantic Web, which is intended to establish a machine-understandable Web, is currently changing from being an emerging trend to a technology used in complex real-world applications. A number of standards and techniques have been developed by the World Wide Web Consortium (W3C), e.g., the Resource Description Framework (RDF), which provides a general method for conceptual descriptions of Web resources, and SPARQL, an RDF querying language. Recent examples of large RDF data with billions of facts include the UniProt comprehensive catalog of protein sequence, function and annotation data, the RDF data extracted from Wikipedia, and Princeton University's WordNet. Clearly, querying performance has become a key issue for Semantic Web applications. In his book, Groppe details various aspects of high-performance Semantic Web data management and query processing. His presentation fills the gap between Semantic Web and database books, which either fail to take into account the performance issues of large-scale data management or fail to exploit the special properties of Semantic Web data models and queries. After a general introduction to the relevant Semantic Web standards, he presents specialized indexing and sorting algorithms, adapted approaches for logical and physical query optimization, optimization possibilities when using the parallel database technologies of today's multicore processors, and visual and embedded query languages. Groppe primarily targets researchers, students, and developers of large-scale Semantic Web applications. On the complementary book webpage, readers will find additional material, such as an online demonstration of a query engine, as well as exercises and their solutions that test their comprehension of the topics presented.
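As a purely illustrative aside, not taken from Groppe's book: the RDF and SPARQL standards mentioned above can be exercised in a few lines. The tiny graph, the example vocabulary, and the use of the Python rdflib library below are all my own demo choices.

```python
# Illustrative demo only: a three-triple RDF graph queried with SPARQL.
# rdflib is a common Python RDF library; data and vocabulary are invented.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
ex:alice ex:name  "Alice" .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# SPARQL: who knows whom?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?s ?o WHERE { ?s ex:knows ?o . }
""")
for row in results:
    print(row.s, "knows", row.o)
```

At billions of triples, the naive triple-by-triple matching sketched here is exactly what breaks down, which is the performance gap the book addresses.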
The current IT environment deals with novel, complex approaches such as information privacy, trust, digital forensics, management, and human aspects. This volume includes papers offering research contributions that focus both on access control in complex environments as well as other aspects of computer security and privacy.
Botnets have become the platform of choice for launching attacks and committing fraud on the Internet. A better understanding of Botnets will help to coordinate and develop new technologies to counter this serious security threat. Botnet Detection: Countering the Largest Security Threat consists of chapters contributed by world-class leaders in this field, from the June 2006 ARO workshop on Botnets. This edited volume represents the state-of-the-art in research on Botnets.
In the course of fuzzy technological development, fuzzy graph theory was identified quite early on for its importance in making things work. Two very important and useful concepts are those of granularity and of nonlinear approximations. The concept of granularity has evolved as a cornerstone of Lotfi A. Zadeh's theory of perception, while the concept of nonlinear approximation is the driving force behind the success of consumer electronics manufacturing. It is fair to say fuzzy graph theory paved the way for engineers to build many rule-based expert systems. In the open literature, there are many papers written on the subject of fuzzy graph theory. However, there are relatively few books available on the very same topic. Professors Mordeson and Nair have made a real contribution in putting together a very comprehensive book on fuzzy graphs and fuzzy hypergraphs. In particular, the discussion on hypergraphs certainly is an innovative idea. For an experienced engineer who has spent a great deal of time in the laboratory, it is usually a good idea to revisit the theory. Professors Mordeson and Nair have created such a volume, which enables engineers and designers to benefit from referencing it in one place. In addition, this volume is a testament to the numerous contributions Professor John N. Mordeson and his associates have made to mathematical studies in so many different topics of fuzzy mathematics.
This comprehensive guide offers a detailed treatment of the analysis, design, simulation and testing of the full range of today's leading delta-sigma data converters. Written by professionals experienced in all practical aspects of delta-sigma modulator design, "Delta-Sigma Data Converters" provides comprehensive coverage of low and high-order single-bit, bandpass, continuous-time, multi-stage modulators as well as advanced topics, including idle-channel tones, stability, decimation and interpolation filter design, and simulation.
This book offers a unique review of how astronomical information handling (in the broad sense) evolved in the course of the 20th century, and especially during its second half. This volume is a natural complement to the book Information Handling in Astronomy, published in the same series. The scope of these two volumes includes not only dealing with professional astronomical data from the collecting instruments (ground-based and space-borne) to the users/researchers, but also publishing, education and public outreach. In short, the information flow in astronomy is thus illustrated from sources (cosmic objects) to end (mankind's knowledge). The experts contributing to this book have done their best to write in a way understandable to readers not necessarily hyperspecialized in astronomy while providing specific detailed information, as well as plenty of pointers and bibliographic elements. Especially enlightening are some 'lessons learned' sections.
This work takes a critical look at the current concept of isotopic landscapes ("isoscapes") in bioarchaeology and its application in future research. It specifically addresses the research potential of cremated finds, a somewhat neglected bioarchaeological substrate, resulting primarily from the inherent osteological challenges and complex mineralogy associated with it. In addition, for the first time, data mining methods are applied. The chapters are the outcome of an international workshop sponsored by the German Science Foundation and the Centre of Advanced Studies at the Ludwig Maximilian University of Munich. Isotopic landscapes are indispensable tracers for the monitoring of the flow of matter through geo/ecological systems since they comprise existing temporally and spatially defined stable isotopic patterns found in geological and ecological samples. Analyses of stable isotopes of the elements nitrogen, carbon, oxygen, strontium, and lead are routinely utilized in bioarchaeology to reconstruct biodiversity, palaeodiet, palaeoecology, palaeoclimate, migration and trade. The interpretive power of stable isotopic ratios depends not only on firm, testable hypotheses, but most importantly on the cooperative networking of scientists from both natural and social sciences. Application of multi-isotopic tracers generates isotopic patterns with multiple dimensions, which accurately characterize a find, but can only be interpreted by use of modern data mining methods.
A comprehensive, systematic approach to multimedia database management systems. It presents methods for managing the increasing demands of multimedia databases and their inherent design and architecture issues, and covers how to create an effective multimedia database by integrating the various information indexing and retrieval methods available. It also addresses how to measure multimedia database performance, which is based on similarity queries and routinely affected by human judgement. The book concludes with a discussion of networking and operating system support for multimedia databases and a look at research and development in this dynamic field.
Advanced Topics in Database Research is a series of books on the fields of database, software engineering, and systems analysis and design. The series features the latest research ideas and topics on how to enhance current database systems, improve information storage, refine existing database models, and develop advanced applications. Advanced Topics in Database Research, Volume 5, part of this series, presents the latest research ideas and topics on database systems and applications, and provides insights into important developments in the field of database and database management. This book describes the capabilities and features of new technologies and methodologies, and presents state-of-the-art research ideas, with an emphasis on theoretical issues regarding databases and database management.
The book introduces new techniques which imply rigorous lower bounds on the complexity of some number theoretic and cryptographic problems. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. It also contains a number of open problems and proposals for further research. We obtain several lower bounds, exponential in terms of log p, on the degrees and orders of polynomials, algebraic functions, Boolean functions, and linear recurring sequences coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case of d = 2 is of special interest since it corresponds to the representation of the rightmost bit of the discrete logarithm and defines whether the argument is a quadratic residue. We also obtain non-trivial upper bounds on the degree, sensitivity and Fourier coefficients of Boolean functions on bits of x deciding whether x is a quadratic residue. These results are used to obtain lower bounds on the parallel arithmetic and Boolean complexity of computing the discrete logarithm. For example, we prove that any unbounded fan-in Boolean circuit of sublogarithmic depth computing the discrete logarithm modulo p must be of superpolynomial size.
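As an illustrative aside, not from the book: the connection mentioned above between the rightmost bit of the discrete logarithm and quadratic residuosity follows from Euler's criterion, sketched below in Python. The prime and generator are arbitrary demo choices.

```python
# Illustrative sketch: x is a quadratic residue mod an odd prime p
# exactly when its discrete log (to a primitive root g) is even, i.e.
# the rightmost bit of the discrete log is 0. Euler's criterion tests
# this parity without computing the discrete log itself.
p, g = 23, 5   # small prime and a primitive root mod 23 (demo values)

def dlog_lsb(x: int, p: int) -> int:
    """Rightmost bit of the discrete log of x mod p, via Euler's criterion."""
    # x^((p-1)/2) mod p is 1 for quadratic residues (even discrete log)
    # and p-1 (i.e. -1) for non-residues (odd discrete log).
    return 0 if pow(x, (p - 1) // 2, p) == 1 else 1

# Cross-check against a brute-force discrete log for every x in (Z/pZ)*.
for x in range(1, p):
    k = next(k for k in range(p - 1) if pow(g, k, p) == x)
    assert dlog_lsb(x, p) == k % 2
print("Euler's criterion matches discrete-log parity for all x mod", p)
```

The book's point is the converse direction: any polynomial or shallow circuit that computes this bit must be complicated, which is evidence for the hardness of the discrete logarithm.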
New technology is always evolving and companies must have appropriate security for their business to be able to keep up-to-date with the changes. With the rapid growth in Internet and WWW facilities, database security will always be a key topic in business and in the public sector and has implications for the whole of society. Database Security Volume XII covers issues related to security and privacy of information in a wide range of applications, including: electronic commerce; informational assurances; workflow; privacy; policy modeling; mediation; information warfare defense; multilevel security; role-based access controls; mobile databases; inference; and data warehouses and data mining. This book contains papers and panel discussions from the Twelfth Annual Working Conference on Database Security, organized by the International Federation for Information Processing (IFIP) and held July 15-17, 1998 in Chalkidiki, Greece. Database Security Volume XII will prove invaluable reading for faculty and advanced students as well as for industrial researchers and practitioners working in the area of database security research and development.
Safety-critical systems are found in almost every sector of industry. Faults in these systems will result in a breach of safe operating conditions and exposure to the possible risk of major loss of life or catastrophic damage to plant, equipment or the environment. An understanding of the basis for the functioning of these systems is therefore vital to all involved in their operation. In particular, the interaction of the disciplines of software engineering, safety engineering, human factors and safety management is a total process whose entirety is not widely understood by those working in any of the individual fields. This book will redress that problem by providing an introduction to each constituent part with a cohesive structure and overview of the whole subject. It will be of interest to engineers, managers, students and anyone with responsibilities in these areas.
Information fusion is becoming a major requirement in data mining and knowledge discovery in databases. This book presents some recent fusion techniques that are currently in use in data mining, as well as data mining applications that use information fusion. The book focuses in particular on information fusion in preprocessing, model building and information extraction, with various applications.
The prevalence of data science has grown exponentially in recent years. Increases in data exchange have created the need for standards and formats on handling data from different sources. Developing Metadata Applications Profiles is an innovative reference source that discusses the latest trends and techniques for effectively managing and exchanging metadata. Including a range of perspectives on schemas and application profiles, such as interoperability, ontology-based design, and model-driven approaches, this book is ideally designed for researchers, academics, professionals, graduate students, and practitioners actively engaged in data science.
Information intermediation is the foundation stone of some of the most successful Internet companies, and is perhaps second only to the Internet infrastructure companies. On the heels of information integration and interoperability, this book on information brokering discusses the next step in information interoperability and integration. The emerging Internet economy based on burgeoning B2B and B2C trading will soon demand semantics-based information intermediation for its feasibility and success. B2B ventures are involved in the 'rationalization' of new vertical markets and construction of domain specific product catalogs. This book provides approaches for re-use of existing vocabularies and domain ontologies as a basis for this rationalization and provides a framework based on inter-ontology interoperation. Infrastructural trade-offs that identify optimizations in performance and scalability of web sites will soon give way to information based trade-offs as alternate rationalization schemes come into play and the necessity of interoperating across these schemes is realized. The intended readers of Information Brokering Across Heterogeneous Digital Data are researchers, software architects and CTOs, advanced product developers dealing with information intermediation issues in the context of e-commerce (B2B and B2C), information technology professionals in various vertical markets (e.g., geo-spatial information, medicine, auto), and all librarians interested in information brokering.
A hands-on guide to web scraping and text mining for both beginners and experienced users of R.
* Introduces fundamental concepts of the main architecture of the web and databases, covering HTTP, HTML, XML, JSON, and SQL.
* Provides basic techniques to query web documents and data sets (XPath and regular expressions); an illustrative sketch follows this list.
* An extensive set of exercises is presented to guide the reader through each technique.
* Explores both supervised and unsupervised techniques as well as advanced techniques such as data scraping and text management.
* Case studies are featured throughout along with examples for each technique presented.
* R code and solutions to exercises featured in the book are provided on a supporting website.
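The book's own examples are in R; as a rough analogue (my own sketch, not the book's code), the same XPath-plus-regular-expression workflow looks like this in Python with the lxml library. The HTML snippet and class names are invented for the demo.

```python
# Illustrative sketch only (the book itself uses R): extract data from
# an HTML document with XPath, then refine it with a regular expression.
import re
from lxml import html   # third-party: pip install lxml

PAGE = """
<html><body>
  <div class="book"><span class="price">R4,417</span></div>
  <div class="book"><span class="price">R5,859</span></div>
</body></html>
"""

tree = html.fromstring(PAGE)
# XPath: select the text of every price span.
prices = tree.xpath('//span[@class="price"]/text()')
# Regex: strip the currency symbol and thousands separator.
amounts = [int(re.sub(r"[^\d]", "", p)) for p in prices]
print(amounts)   # [4417, 5859]
```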
Security is the science and technology of secure communications and resource protection from security violations such as unauthorized access and modification. Putting proper security in place gives us many advantages. It lets us exchange confidential information and keep it confidential. We can be sure that a piece of information received has not been changed. Nobody can deny sending or receiving a piece of information. We can control which pieces of information can be accessed, and by whom. We can know when a piece of information was accessed, and by whom. Networks and databases are guarded against unauthorized access. We have seen the rapid development of the Internet and also increasing security requirements in information networks, databases, systems, and other information resources. This comprehensive book responds to increasing security needs in the marketplace, and covers networking security and standards. There are three types of readers who are interested in security: non-technical readers, general technical readers who do not implement security, and technical readers who actually implement security. This book serves all three by providing a comprehensive explanation of fundamental issues of networking security, the concepts and principles of security standards, and a description of some emerging security technologies. The approach is to answer the following questions: 1. What are common security problems and how can we address them? 2. What are the algorithms, standards, and technologies that can solve common security problems? 3.
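As an illustrative aside, not drawn from the book: one of the guarantees listed above, that a received message has not been changed, is commonly provided by a message authentication code. A minimal sketch using Python's standard-library hmac module, with made-up key and message:

```python
# Illustrative sketch: message integrity via HMAC (a keyed hash).
# Key and message are invented demo values.
import hmac
import hashlib

key = b"shared-secret-key"
message = b"transfer R4,417 to account 12345"

# Sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time.
def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                   # True
print(verify(key, message + b" (tampered)", tag))  # False
```

Note that an HMAC gives integrity and authenticity between parties sharing the key, but not the non-repudiation property also mentioned above; that requires digital signatures.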
It is over 20 years since the functional data model and functional programming languages were first introduced to the computing community. Although developed by separate research communities, recent work, presented in this book, suggests there is powerful synergy in their integration. As database technology emerges as central to yet more complex and demanding applications in areas such as bioinformatics, national security, criminal investigations and advanced engineering, more sophisticated approaches, like those presented here, are needed. A tutorial introduction by the editors prepares the reader for the chapters that follow, written by leading researchers, including some of the early pioneers. They provide a comprehensive treatment showing how the functional approach provides for modeling, analysis and optimization in databases, and also data integration and interoperation in heterogeneous environments. Several chapters deal with mathematical results on the transformation of expressions, fundamental to the functional approach. The book also aims to show how the approach relates to the Internet and current work on semistructured data, XML and RDF. The book presents a comprehensive view of the functional approach to data management, bringing together important material hitherto widely scattered, some new research, and a comprehensive set of references. It will serve as a valuable resource for researchers, faculty and graduate students, as well as those in industry responsible for new systems development.
This book covers the basic statistical and analytical techniques of computer intrusion detection. It is aimed at both statisticians looking to become involved in the data analysis aspects of computer security and computer scientists looking to expand their toolbox of techniques for detecting intruders. The book is self-contained, assuming no expertise in either computer security or statistics. It begins with a description of the basics of TCP/IP, followed by chapters dealing with network traffic analysis, network monitoring for intrusion detection, host based intrusion detection, and computer viruses and other malicious code. Each section develops the necessary tools as needed. There is an extensive discussion of visualization as it relates to network data and intrusion detection. The book also contains a large bibliography covering the statistical, machine learning, and pattern recognition literature related to network monitoring and intrusion detection. David Marchette is a scientist at the Naval Surface Warfare Center in Dahlgren, Virginia. He has worked at Navy labs for 15 years, doing research in pattern recognition, computational statistics, and image analysis. He has been a fellow by courtesy in the mathematical sciences department of the Johns Hopkins University since 2000. He has been working in computer intrusion detection for several years, focusing on statistical methods for anomaly detection and visualization. Dr. Marchette received a Master's in Mathematics from the University of California, San Diego in 1982 and a Ph.D. in Computational Sciences and Informatics from George Mason University in 1996.
There are many invaluable books available on data mining theory and applications. However, in compiling a volume titled DATA MINING: Foundations and Intelligent Paradigms: Volume 3: Medical, Health, Social, Biological and other Applications, we wish to introduce some of the latest developments to a broad audience of both specialists and non-specialists in this field.
This book proposes representations of multicast rate regions in wireless networks based on the mathematical concept of submodular functions, e.g., the submodular cut model and the polymatroid broadcast model. These models subsume and generalize the graph and hypergraph models. The submodular structure facilitates a dual decomposition approach to network utility maximization problems, which exploits the greedy algorithm for linear programming on submodular polyhedra. This approach yields computationally efficient characterizations of inner and outer bounds on the multicast capacity regions for various classes of wireless networks.
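For readers unfamiliar with the term, the submodularity property underlying these models is the standard one; the definition below is general background, not quoted from the book.

```latex
% Submodularity of a set function f : 2^V \to \mathbb{R} (standard definition):
f(A) + f(B) \;\geq\; f(A \cup B) + f(A \cap B)
\qquad \text{for all } A, B \subseteq V.
% Equivalent "diminishing returns" form:
f(A \cup \{v\}) - f(A) \;\geq\; f(B \cup \{v\}) - f(B)
\qquad \text{for } A \subseteq B \subseteq V,\; v \in V \setminus B.
```

It is this diminishing-returns structure that makes the greedy algorithm exact for linear programming over submodular polyhedra, which is what the dual decomposition approach described above exploits.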
This book presents important applications of soft computing and fuzziness to the growing field of web planning. A new method of using fuzzy numbers to model uncertain probabilities, and how these can be used to model a fuzzy queuing system, is demonstrated, as well as a method of modeling fuzzy queuing systems employing fuzzy arrival rates and fuzzy service rates. All the computations needed to obtain the fuzzy numbers for system performance are described, starting with the one-server case and progressing to more than three servers. A variety of optimization models are discussed, with applications to average response times, server utilization, server and queue costs, as well as to phenomena identified with web sites such as "burstiness" and "long-tailed distributions".
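As an illustrative aside (my own sketch, not the book's method): one common way to realize fuzzy arrival and service rates is via alpha-cuts, propagating interval arithmetic through the crisp queueing formulas. The triangular fuzzy numbers and the M/M/1 mean response time W = 1/(mu - lambda) below are standard; the specific rates are made-up demo values.

```python
# Illustrative sketch: fuzzy M/M/1 mean response time via alpha-cuts.
# lam and mu are triangular fuzzy numbers (invented demo values);
# W = 1/(mu - lam) is the crisp M/M/1 mean response time formula.

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

lam = (3.0, 4.0, 5.0)    # fuzzy arrival rate, jobs/sec (demo values)
mu  = (8.0, 10.0, 12.0)  # fuzzy service rate, jobs/sec (demo values)

for alpha in (0.0, 0.5, 1.0):
    lam_lo, lam_hi = alpha_cut(lam, alpha)
    mu_lo,  mu_hi  = alpha_cut(mu,  alpha)
    # W = 1/(mu - lam) decreases in mu and increases in lam, so the
    # endpoint combinations below give the alpha-cut of the fuzzy W
    # (the system stays stable here because mu_lo > lam_hi).
    w_lo = 1.0 / (mu_hi - lam_lo)   # best case: fast service, few arrivals
    w_hi = 1.0 / (mu_lo - lam_hi)   # worst case: slow service, many arrivals
    print(f"alpha={alpha:.1f}: mean response time in [{w_lo:.3f}, {w_hi:.3f}] sec")
```

At alpha = 1 the interval collapses to the crisp value 1/(10 - 4) ≈ 0.167 sec, while lower alpha levels widen the interval to reflect the uncertainty in the rates.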