This book provides two general granular computing approaches to mining relational data: the first uses abstract descriptions of relational objects to build their granular representation, while the second extends existing granular data mining solutions to the relational case. Both approaches make it possible to perform and improve popular data mining tasks such as classification, clustering, and association discovery. How can different relational data mining tasks best be unified? How can the construction process of relational patterns be simplified? How can richer knowledge be discovered from relational data? All these questions can be answered in the same way: by mining relational data in the paradigm of granular computing! This book will allow readers with previous experience in relational data mining to discover the many benefits of its granular perspective. In turn, readers familiar with the paradigm of granular computing will find valuable insights into its application to mining relational data. Lastly, the book offers all readers interested in computational intelligence in the broader sense the opportunity to deepen their understanding of the newly emerging field of granular-relational data mining.
Disaster management is a process or strategy that is implemented when any type of catastrophic event takes place. The process may be initiated when anything threatens to disrupt normal operations or puts human lives at risk. Governments at all levels, as well as many businesses, create some sort of disaster plan that makes it possible to overcome the catastrophe and return to normal function as quickly as possible. Responding to natural disasters (e.g., floods, earthquakes) or technological disasters (e.g., nuclear, chemical) is an extremely complex process that involves severe time pressure, various uncertainties, high non-linearity, and many stakeholders. Disaster management often requires several autonomous agencies to collaboratively mitigate, prepare for, respond to, and recover from heterogeneous and dynamic sets of hazards to society. Almost all disasters involve a high degree of novelty, forcing responders to deal with unexpected uncertainties under dynamic time pressure. Existing studies and approaches within disaster management have mainly focused on specific types of disasters from the perspective of particular agencies; a general framework is lacking that addresses the similarities and synergies among different disasters while taking their specific features into account. This book provides various decision analysis theories and support tools for complex systems in general and disaster management in particular. The book also grew out of the long-term preparation of a European project proposal among leading experts in the areas related to its title. Chapters were evaluated on the quality and originality of their theory and methodology, their application orientation, and their relevance to the theme of the book.
Uncertain data is inherent in many important applications, such as environmental surveillance, market analysis, and quantitative economics research. Given the importance of these applications and the rapidly increasing amounts of uncertain data being collected and accumulated, analyzing large collections of uncertain data has become an important task. Ranking queries (also known as top-k queries) are often natural and useful in analyzing uncertain data. "Ranking Queries on Uncertain Data" discusses the motivations and applications, challenging problems, fundamental principles, and evaluation algorithms of ranking queries on uncertain data. Theoretical and algorithmic results are presented in the last section of the book. "Ranking Queries on Uncertain Data" is the first book to systematically discuss the problem of ranking queries on uncertain data.
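As a rough illustration of the kind of query the book studies (a toy sketch, not one of the book's evaluation algorithms, which avoid this exponential enumeration), the following Python snippet computes each tuple's top-k probability under a simple independent tuple-existence model; all identifiers and data values here are made up.

```python
# Top-k probability by brute-force enumeration of possible worlds.
# Each uncertain tuple is (id, score, membership probability); a tuple's
# top-k probability is the chance it ranks among the k highest scores
# in a randomly materialized world. Toy data, exponential in len(uncertain).
from itertools import product

uncertain = [("a", 90, 0.4), ("b", 80, 0.9), ("c", 70, 0.7)]
k = 2

topk_prob = {tid: 0.0 for tid, _, _ in uncertain}
for world in product([0, 1], repeat=len(uncertain)):  # 2^n possible worlds
    p = 1.0
    present = []
    for (tid, score, prob), bit in zip(uncertain, world):
        p *= prob if bit else 1.0 - prob
        if bit:
            present.append((score, tid))
    for _, tid in sorted(present, reverse=True)[:k]:  # this world's top-k
        topk_prob[tid] += p

print(topk_prob)  # 'a' is always top-2 when present, so its value is 0.4
```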
These are exciting times in the fields of Fuzzy Logic and the Semantic Web, and this book will add to the excitement, as it is the first volume to focus on the growing connections between these two fields. This book is expected to be a valuable aid to anyone considering the application of Fuzzy Logic to the Semantic Web, because it contains a number of detailed accounts of these combined fields, written by leading authors in several countries. The Fuzzy Logic field has been maturing for forty years. These years have witnessed tremendous growth in the number and variety of applications, with real-world impact across a wide variety of domains involving humanlike behavior and reasoning. And we believe that in the coming years, the Semantic Web will be a major field of application of Fuzzy Logic.
This book represents the combined peer-reviewed conference proceedings. The 41 contributions published in this book address a wide range of topics.
The rate at which geospatial data is being generated exceeds our computational capabilities to extract patterns for the understanding of a dynamically changing world. Geoinformatics and data mining focus on the development and implementation of computational algorithms to solve these problems. This unique volume contains a collection of chapters on state-of-the-art data mining techniques applied to geoinformatic problems of high complexity and important societal value. Data Mining for Geoinformatics addresses current concerns and developments relating to spatio-temporal data mining issues in remotely-sensed data, problems in meteorological data such as tornado formation, estimation of radiation from the Fukushima nuclear power plant, simulations of traffic data using OpenStreetMap, real-time traffic applications of data stream mining, visual analytics of traffic and weather data, and the exploratory visualization of collective, mobile objects such as the flocking behavior of wild chickens. This book is designed for researchers and advanced-level students in computer science, earth science, and geography as a reference or secondary textbook. Practitioners working in the areas of data mining and geoscience will also find it a valuable reference.
For a long time, there has been a need for a practical, down-to-earth developer's book for the Java Cryptography Extension. I am very happy to see there is now a book that can answer many of the technical questions that developers, managers, and researchers have about such a critical topic. I am sure that this book will contribute greatly to the success of securing Java applications and deployments for e-business. --Anthony Nadalin, Java Security Lead Architect, IBM
This book brings all of the elements of data mining together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of data mining and machine learning tactics, from data integration and pre-processing, to fundamental algorithms, to optimization techniques and web mining methodology.
This book brings all of the elements of database design together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of database design methodology, from ER and UML techniques, to conceptual data modeling and table transformation, to storing XML and querying moving objects databases.
This book examines recent developments in semantic systems that can respond to situations, environments, and events. The contributors cover how to design, implement, and utilize disruptive technologies. The editor discusses two fundamental sets of disruptive technologies: semantic technologies, including description logics, ontologies, and agent frameworks; and semantic information rendering, including graphical displays of high-density, time-sensitive data to improve situational awareness. Beyond practical illustrations of emerging technologies, the editor proposes an incremental development method called knowledge scaffolding, a proven educational psychology technique for learning a subject matter thoroughly. The goal of this book is to help readers learn about managing information resources from the ground up, reinforcing the learning as they read on.
Integrating Security and Software Engineering: Advances and Future Vision provides the first step towards narrowing the gap between security and software engineering. This book introduces the field of secure software engineering, which is a branch of research investigating the integration of security concerns into software engineering practices. "Integrating Security and Software Engineering: Advances and Future Vision" discusses problems and challenges of considering security during the development of software systems, and also presents the predominant theoretical and practical approaches that integrate security and software engineering.
Forecasting is a crucial function for companies in the fashion industry, but for many real-life forecasting applications in the industry, the data patterns are notoriously volatile and it is very difficult, if not impossible, to analytically learn the underlying patterns. As a result, many traditional methods (such as pure statistical models) fail to make sound predictions. Over the past decade, advances in artificial intelligence and computing technologies have provided an alternative way of generating precise and accurate forecasting results for fashion businesses. Despite being an important and timely topic, there is currently no comprehensive reference source that provides up-to-date theoretical and applied research findings on intelligent fashion forecasting systems. This three-part handbook fulfills this need, covering material ranging from introductory studies and technical reviews, to theoretical modeling research, to intelligent fashion forecasting applications and analysis. The book is suitable for academic researchers, graduate students, senior undergraduate students, and practitioners interested in the latest research on fashion forecasting.
The three-volume set IFIP AICT 368-370 constitutes the refereed post-conference proceedings of the 5th IFIP TC 5, SIG 5.1 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2011, held in Beijing, China, in October 2011. The 189 revised papers presented were carefully selected from numerous submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including simulation models and decision-support systems for agricultural production, agricultural product quality testing, traceability and e-commerce technology, the application of information and communication technology in agriculture, and universal information service technology and service systems development in rural areas. The 59 papers included in the third volume focus on simulation, optimization, monitoring, and control technology.
Mohamed Medhat Gaber

"It is not my aim to surprise or shock you - but the simplest way I can summarise is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied." - Herbert A. Simon (1916-2001)

1 Overview

This book suits both graduate students and researchers with a focus on discovering knowledge from scientific data. The use of computational power for data analysis and knowledge discovery in scientific disciplines has found its roots with the revolution of high-performance computing systems. Computational science in physics, chemistry, and biology represents the first step towards the automation of data analysis tasks. The rationale behind the development of computational science in different areas was to automate the mathematical operations performed in those areas; no attention was paid to the scientific discovery process itself. Automated Scientific Discovery (ASD) [1-3] represents the second natural step. ASD attempted to automate the process of theory discovery, supported by studies in the philosophy of science and the cognitive sciences. Although early research articles showed great successes, the area has not evolved, for many reasons. The most important reason was the lack of interaction between scientists and the automating systems.
This book provides comprehensive coverage of the fundamentals of database management systems, with a detailed description of relational database management system concepts. It contains a variety of solved examples and review questions with solutions. The book is for those who require a better understanding of relational data modeling: its purpose, its nature, and the standards used in creating relational data models.
Web services are increasingly important in information technology with the expansive growth of the Internet. As services proliferate in domains ranging from e-commerce to digital government, the need for tools and methods to measure and guide the achievement of quality outcomes is critical for organizations. Managing Web Service Quality: Measuring Outcomes and Effectiveness focuses on the advances in Web service quality, covering topics such as quality requirements, security issues, and development and integration methods of providing quality services. Covering both technical and managerial issues related to Web service quality, this authoritative collection provides academicians, researchers, and practitioners with the most advanced research in the field.
I3E 2010 marked the 10th anniversary of the IFIP Conference on e-Business, e-Services, and e-Society, continuing a tradition that began in 1998 with the International Conference on Trends in Electronic Commerce, TrEC 1998, in Hamburg (Germany). Three years later the inaugural I3E 2001 conference was held in Zurich (Switzerland). Since then I3E has made its journey through the world: 2002 Lisbon (Portugal), 2003 Sao Paulo (Brazil), 2004 Toulouse (France), 2005 Poznan (Poland), 2006 Turku (Finland), 2007 Wuhan (China), 2008 Tokyo (Japan), and 2009 Nancy (France). I3E 2010 took place in Buenos Aires (Argentina), November 3-5, 2010. Known as "The Pearl" of South America, Buenos Aires is a cosmopolitan, colorful, and vibrant city, surprising its visitors with a vast variety of cultural and artistic performances, European architecture, and a passion for tango, coffee places, and football discussions. A cultural reference in Latin America, the city hosts 140 museums, 300 theaters, and 27 public libraries, including the National Library. It is also the main educational center in Argentina and home to renowned universities, including the University of Buenos Aires, created in 1821. Besides location, the timing of I3E 2010 is also significant: it coincided with the 200th anniversary celebration of the first local government in Argentina.
In practice, the design and architecture of a cloud varies among cloud providers. We present a generic evaluation framework for the performance, availability, and reliability characteristics of various cloud platforms, and describe a generic benchmark architecture for cloud databases, specifically NoSQL database-as-a-service, which measures replication delay and monetary cost. Service level agreements (SLAs) represent the contract that captures the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. We present a novel approach for SLA-based management of cloud-hosted databases from the consumer perspective, and an end-to-end framework for consumer-centric SLA management of cloud-hosted databases. The framework facilitates adaptive and dynamic provisioning of the database tier of software applications based on application-defined policies, so that applications can satisfy their own SLA performance requirements, avoid the cost of any SLA violation, and control the monetary cost of the allocated computing resources. In this framework, the SLAs of the consumer applications are declaratively defined in terms of goals that are subject to a number of constraints specific to the application requirements. The framework continuously monitors the application-defined SLAs and automatically triggers the execution of the necessary corrective actions (scaling the database tier out or in) when required. The framework is database-platform-agnostic, uses virtualization-based database replication mechanisms, and requires zero source code changes in the cloud-hosted software applications.
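A minimal sketch of the monitoring-and-scaling loop described above, assuming a single latency goal with a replica-count cap; every name and threshold here is hypothetical, and the actual framework's declarative SLA definitions and monitoring stack are of course far richer.

```python
# Consumer-centric SLA control loop (hypothetical sketch): compare an
# observed metric against an application-defined goal and scale the
# database tier out or in, subject to a cost cap on replicas.
from dataclasses import dataclass

@dataclass
class SlaGoal:
    metric: str        # e.g. "read_latency_ms" (hypothetical name)
    threshold: float   # violation boundary defined by the application
    max_replicas: int  # constraint: cap on allocated (billed) resources

def control_step(goal: SlaGoal, observed: float, replicas: int) -> int:
    """Return the new replica count for the database tier."""
    if observed > goal.threshold and replicas < goal.max_replicas:
        return replicas + 1   # scale out to avoid an SLA violation
    if observed < 0.5 * goal.threshold and replicas > 1:
        return replicas - 1   # scale in to control monetary cost
    return replicas           # within bounds: no corrective action

goal = SlaGoal("read_latency_ms", threshold=50.0, max_replicas=5)
replicas = 2
for sample in [32.0, 61.0, 58.0, 20.0]:   # monitored latency samples
    replicas = control_step(goal, sample, replicas)
    print(sample, "->", replicas, "replicas")
```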
Trustworthy Ubiquitous Computing covers aspects of trust in ubiquitous computing environments: context, privacy, reliability, usability, and user experience in this emerging and exciting computing paradigm, which encompasses pervasive, grid, and peer-to-peer computing, including sensor networks, to provide secure computing and communication services anytime and anywhere. Mark Weiser presented his vision of disappearing and ubiquitous computing more than 15 years ago. The big picture of the computer woven into our environment was a major innovation and the starting point for various areas of research. To fully explore the idea of ubiquitous computing, several houses were built, equipped with technology, and used as laboratories in order to find and test appliances that are useful and could be made available in our everyday life. In recent years industry has picked up the idea, and products such as remote controls for the house have been developed and brought to market. In spite of the many applications and projects in the area of ubiquitous and pervasive computing, success is still far away. One of the main reasons is the lack of acceptance of, and confidence in, this technology. Although researchers and industry are working in all of these areas, a forum is needed to elaborate the security, reliability, and privacy issues whose resolution yields trustworthy interfaces and computing environments for the people interacting within these ubiquitous environments. The user-experience factor of trust thus becomes a crucial issue for the success of a UbiComp application. The goal of this book is to provide a state-of-the-art account of trustworthy ubiquitous computing, addressing recent research results and presenting and discussing ideas, theories, technologies, systems, tools, applications, and experiences on all related theoretical and practical issues.
The World Wide Web can be considered a huge library that consequently needs a capable librarian, responsible for the classification and retrieval of documents as well as the mediation between library resources and users. Based on this idea, the concept of the "Librarian of the Web" is introduced, comprising novel, librarian-inspired methods and technical solutions for searching the web for text documents in a decentralised manner using peer-to-peer technology. The concept's implementation in the form of an interactive peer-to-peer client, called "WebEngine", is elaborated on in detail. This software extends and interconnects common web servers, creating a fully integrated, decentralised, and self-organising web search system on top of the existing web structure. Thus, the web is turned into its own powerful search engine without the need for any central authority. This book is intended for researchers and practitioners with a solid background in the fields of Information Retrieval and Web Mining.
This book presents the proceedings of Workshops and Posters at the 13th International Conference on Spatial Information Theory (COSIT 2017), which is concerned with all aspects of space and spatial environments as experienced, represented and elaborated by humans, other animals and artificial agents. Complementing the main conference proceedings, workshop papers and posters investigate specialized research questions or challenges in spatial information theory and closely related topics, including advances in the conceptualization of specific spatio-temporal domains and diverse applications of spatial and temporal information.
Hyperspectral Image Fusion is the first text dedicated to fusion techniques for hyperspectral data, a huge volume of data consisting of a very large number of images. This monograph brings out recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented in complete detail so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the fusion process by exploiting the spatial correlation within successive bands of the hyperspectral data. While techniques for the fusion of hyperspectral images are being developed, it is also important to establish a framework for the objective assessment of such techniques, and the monograph has a dedicated chapter describing various fusion performance measures applicable to hyperspectral image fusion. It also presents a notion of the consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for the fusion of a very large number of images. This book will be a highly useful resource for students, researchers, academicians, and practitioners in the specific area of hyperspectral image fusion, as well as in generic image fusion.
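As a toy illustration of the pixel-based flavor of fusion the monograph covers (not one of its actual techniques), the sketch below collapses a synthetic hyperspectral cube into a single displayable image via a normalized weighted sum over bands, using per-band variance as a crude stand-in for the quality measures real fusion techniques employ.

```python
# Pixel-based fusion sketch: fuse many spectral bands into one image.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((200, 8, 8))      # synthetic cube: (bands, height, width)

weights = cube.var(axis=(1, 2))     # per-band weight: global variance here
weights /= weights.sum()            # normalize so intensities stay in range

fused = np.tensordot(weights, cube, axes=1)  # weighted sum over the band axis
print(fused.shape)                  # (8, 8): one image from 200 bands
```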
You may like...
Management Of Information Security by Michael Whitman, Herbert Mattord (Paperback)
Database Principles - Fundamentals of… by Carlos Coronel, Keeley Crockett, … (Paperback)
Big Data and Smart Service Systems by Xiwei Liu, Rangachari Anand, … (Hardcover)
Blockchain Life - Making Sense of the… by Kary Oberbrunner, Lee Richter (Hardcover) R506