Privacy and security risks arising from the application of different data mining techniques to large institutional data repositories are investigated by a dedicated research domain, so-called privacy-preserving data mining. Association rule hiding is a new technique in data mining that studies the problem of hiding sensitive association rules from within the data. Association Rule Hiding for Data Mining addresses the problem of "hiding" sensitive association rules and introduces a number of heuristic solutions. Exact solutions of increased time complexity that have been proposed recently are presented, as well as a number of computationally efficient (parallel) approaches that alleviate time complexity problems, along with a thorough discussion of closely related problems (inverse frequent itemset mining, data reconstruction approaches, etc.). Unsolved problems, future directions and specific examples are provided throughout the book to help the reader study, assimilate and appreciate the important aspects of this challenging problem. Association Rule Hiding for Data Mining is designed for researchers, professors and advanced-level students in computer science studying privacy-preserving data mining, association rule mining, and data mining. The book is also suitable for practitioners working in this industry.
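The hiding techniques themselves are the subject of the book; as a minimal, hypothetical sketch of the problem setting only (not an algorithm from the book), the following computes the support and confidence of a toy sensitive rule and then suppresses an item from supporting transactions until the rule drops below the mining threshold.

```python
# Minimal illustration of association rule hiding (hypothetical toy example,
# not an algorithm from the book): lower a sensitive rule's confidence by
# removing its consequent item from a few supporting transactions.

def support(transactions, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent."""
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)

transactions = [
    {"bread", "milk"},
    {"bread", "milk", "beer"},
    {"bread", "beer"},
    {"milk", "beer"},
    {"bread", "milk", "beer"},
]

sensitive = (frozenset({"bread"}), frozenset({"beer"}))   # rule to hide: bread -> beer
min_conf = 0.5

print("before hiding:", confidence(transactions, *sensitive))

# Naive hiding step: drop "beer" from transactions supporting the rule
# until the rule's confidence falls below the mining threshold.
for t in transactions:
    if confidence(transactions, *sensitive) < min_conf:
        break
    if sensitive[0] | sensitive[1] <= t:
        t.discard("beer")

print("after hiding:", confidence(transactions, *sensitive))
```

Real heuristics must also limit the side effects of such suppression on the non-sensitive rules, which is where the book's exact and parallel approaches come in.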
Examples abound in database applications of well-formulated queries running slowly, even when all levels of the database are properly tuned. It is essential to address each level separately, focusing first on underlying principles and root causes and only then proposing both theoretical and practical solutions. "Database Performance Tuning and Optimization" does exactly that, using Oracle 8i as the example RDBMS. The book combines theory with practical tools (in the form of Oracle and UNIX shell scripts) to address the tuning and optimization issues of DBAs and developers, irrespective of whether they use Oracle. Topics and features:
* An integrated approach to tuning that improves all three levels of a database (conceptual, internal, and external) for optimal performance
* Balances theory with practice, developing underlying principles and then applying them to other RDBMSs, not just Oracle
* Includes a CD-ROM containing all scripts and methods used in the book
* Coverage of data warehouses, giving readers much-needed principles and tools for tuning large reporting databases
* Coverage of web-based databases
* Appendix B, which shows how to create an instance, its associated database, and all its objects
* Useful exercises, references, and Oracle 8i and select 9i examples
Based on nearly two decades of experience as an Oracle developer and DBA, the author delivers comprehensive coverage of the fundamental principles and methodologies of tuning and optimizing database performance. Database professionals and practitioners with some experience developing, implementing, and maintaining relational databases will find the work an essential resource. It is also suitable for professional short courses and self-study.
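The book's scripts target Oracle 8i; as a generic, hypothetical illustration of the same internal-level idea using SQLite from the Python standard library (the table and data are made up), the sketch below compares a query plan before and after adding an index.

```python
# Generic illustration of internal-level tuning (hypothetical schema, SQLite
# rather than the Oracle 8i used in the book): compare the query plan for a
# selective predicate before and after creating an index on it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(10000)],
)

query = "SELECT count(*) FROM orders WHERE customer_id = ?"

def show_plan(label):
    plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
    print(label, [row[-1] for row in plan])

show_plan("before index:")   # expect a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
show_plan("after index:")    # expect a search using idx_orders_customer
```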
The efficient management of a consistent and integrated database is a central task in modern IT and highly relevant for science and industry. Hardly any critical enterprise solution comes without functionality for managing data in its different forms. Web-Scale Data Management for the Cloud addresses fundamental challenges posed by the need and desire to provide database functionality in the context of the Database as a Service (DBaaS) paradigm for database outsourcing. The book also discusses the motivation for the new paradigm of cloud computing and its impact on data outsourcing and service-oriented computing in data-intensive applications. Techniques with respect to the support available in current cloud environments, major challenges, and future trends are covered in the last section of the book. A survey addressing the techniques and special requirements for building database services is provided as well.
Information Systems (IS) are a nearly omnipresent aspect of the modern world, playing crucial roles in the fields of science and engineering, business and law, art and culture, politics and government, and many others. As such, identity theft and unauthorized access to these systems are serious concerns. Theory and Practice of Cryptography Solutions for Secure Information Systems explores current trends in IS security technologies, techniques, and concerns, primarily through the use of cryptographic tools to safeguard valuable information resources. This reference book serves the needs of professionals, academics, and students requiring dedicated information systems free from outside interference, as well as developers of secure IS applications. This book is part of the Advances in Information Security, Privacy, and Ethics series collection.
This book introduces an efficient resource management approach for future spectrum sharing systems. It focuses on providing an optimal resource allocation framework based on carrier aggregation to allocate multiple carriers' resources efficiently among mobile users. Furthermore, it provides an optimal traffic-dependent pricing mechanism that network providers could use to charge mobile users for the allocated resources. The book presents and compares different resource allocation with carrier aggregation solutions for different spectrum sharing scenarios. The provided solutions consider the diverse quality-of-experience requirements of the multiple applications running on a user's equipment, since different applications require different levels of performance. In addition, the book addresses the resource allocation problem for spectrum sharing systems that require user discrimination when allocating the network resources.
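As a toy illustration only (the book derives optimal allocations from per-application utility functions; the carriers, weights and tariff below are made up), the sketch splits aggregated carrier capacity among users in proportion to application weights and applies a simple usage-based price.

```python
# Toy sketch of resource allocation with carrier aggregation (illustrative
# only, not the book's optimization framework). Each carrier's capacity is
# split among users in proportion to a weight standing in for the QoE
# requirement of the application they are running.

carriers = {"macro_cell": 100.0, "small_cell": 40.0}            # capacity units (assumed)
users = {"video_user": 3.0, "voip_user": 1.0, "web_user": 2.0}   # QoE weights (assumed)
price_per_unit = 0.02                                             # hypothetical tariff

allocation = {user: 0.0 for user in users}
total_weight = sum(users.values())
for carrier, capacity in carriers.items():
    for user, weight in users.items():
        # Proportional split of this carrier's capacity.
        allocation[user] += capacity * weight / total_weight

for user, rate in allocation.items():
    print(f"{user}: {rate:.1f} units allocated, charged {rate * price_per_unit:.2f}")
```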
Communications and Multimedia Security is an essential reference for both academic and professional researchers in the field of communications and multimedia security. This state-of-the-art volume presents the proceedings of the Eighth Annual IFIP TC-6 TC-11 Conference on Communications and Multimedia Security, held in September 2004 in Windermere, UK. The papers presented here represent the very latest developments in security research from leading people in the field. They explore a wide variety of subjects including privacy protection and trust negotiation, mobile security, applied cryptography, and the security of communication protocols. Of special interest are several papers that address security in the Microsoft .NET architecture and the threats that builders of web service applications need to be aware of. The papers were a result of research sponsored by Microsoft at five European university research centers. This collection will be important not only for multimedia security experts and researchers, but also for all teachers and administrators interested in communications security.
ISGC 2009, the International Symposium on Grid Computing, was held at Academia Sinica, Taipei, Taiwan in April 2009, bringing together prestigious scientists and engineers worldwide to exchange ideas, present challenges and solutions, and introduce future developments in the field of grid computing. Managed Grids and Cloud Systems in the Asia-Pacific Research Community presents the latest achievements in grid technology, including cloud computing. The volume also covers international projects in grid operation, grid middleware, e-Science applications, technical developments in grid operations and management, security and networking, digital libraries and more. The resources used to support these advances, such as volunteer grids, production managed grids, and cloud systems, are discussed in detail. This book is designed for a professional audience of grid users, developers and researchers working in grid computing. Advanced-level students focusing on computer science and engineering will find this book valuable as a reference or secondary textbook.
This book presents different use cases in big data applications and the related practical experiences. Many businesses today are increasingly interested in utilizing big data technologies to support their business intelligence, so it is becoming more and more important to understand the practical issues that arise in different use cases. The book provides clear proof that big data technologies are playing an ever more important and critical role in a new cross-disciplinary research area between computer science and business.
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
This book embarks on a mission to dissect, unravel and demystify the concepts of Web services, including their implementation and composition techniques. It provides a comprehensive perspective on the fundamentals of implementation standards and strategies for Web services (in the first half of the book), while also presenting composition techniques for leveraging existing services to create larger ones (in the second half). Pursuing a unique approach, it begins with a sound overview of concepts, followed by a targeted technical discussion that is in turn linked to practical exercises for hands-on learning. For each chapter, practical exercises are available on GitHub. Mainly intended as a comprehensive textbook on the implementation and composition of Web services, it also offers a useful reference guide for academics and practitioners. Lecturers will find this book useful for a variety of courses, from undergraduate courses on the foundational technology of Web services through graduate courses on complex Web service composition. Students and researchers entering the field will benefit from the combination of a broad technical overview with practical self-guided exercises. Lastly, professionals will gain a well-informed grasp of how to synthesize the concepts of conventional and "newer" breeds of Web services, which they can use to revise foundational concepts or for practical implementation tasks.
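As a purely illustrative sketch of the basic pattern (the book's own exercises live on GitHub; the service, port and data below are made up), the following exposes a tiny JSON Web service and composes two invocations of it from a client-side workflow.

```python
# Minimal sketch of a Web service and a simple client-side composition of two
# calls (illustrative only; not taken from the book's exercises).
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class RateService(BaseHTTPRequestHandler):
    """Toy service: GET /rate?ccy=EUR returns a hard-coded exchange rate."""
    RATES = {"EUR": 0.92, "GBP": 0.79}   # assumed demo data

    def do_GET(self):
        ccy = self.path.rsplit("=", 1)[-1]
        body = json.dumps({"ccy": ccy, "rate": self.RATES.get(ccy, 1.0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 8080), RateService)   # hypothetical local port
threading.Thread(target=server.serve_forever, daemon=True).start()

def get_rate(ccy):
    with urllib.request.urlopen(f"http://127.0.0.1:8080/rate?ccy={ccy}") as resp:
        return json.load(resp)["rate"]

# "Composition": a client-side workflow that combines two service invocations.
print("100 USD ->", 100 * get_rate("EUR"), "EUR")
print("100 USD ->", 100 * get_rate("GBP"), "GBP")
server.shutdown()
```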
This is the first book treating the fields of supervised, semi-supervised and unsupervised machine learning collectively. The book presents both the theory and the algorithms for mining huge data sets using support vector machines (SVMs) in an iterative way. It demonstrates how kernel based SVMs can be used for dimensionality reduction and shows the similarities and differences between the two most popular unsupervised techniques.
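As a minimal sketch of the two ideas side by side (assuming scikit-learn as a stand-in library; the book develops the algorithms themselves rather than this API), the code below trains an RBF-kernel SVM for classification and applies kernel PCA for dimensionality reduction.

```python
# Minimal sketch of kernel-based SVM classification and kernel PCA for
# dimensionality reduction (assumes scikit-learn is available).
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: RBF-kernel support vector classifier.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("SVM test accuracy:", clf.score(X_test, y_test))

# Unsupervised: kernel PCA projects the data using the same kind of kernel.
embedding = KernelPCA(n_components=2, kernel="rbf", gamma=5.0).fit_transform(X)
print("kernel PCA embedding shape:", embedding.shape)
```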
User passwords are the keys to the network kingdom, yet most users choose overly simplistic passwords (like "password") that anyone could guess, while system administrators demand impossible-to-remember passwords littered with obscure characters and random numerals.
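As a rough, illustrative comparison (not drawn from the book; the wordlist sizes are assumptions), the sketch below estimates the entropy of the two extremes and of a randomly generated passphrase, which can be both strong and memorable.

```python
# Rough illustration of why both extremes are problematic: estimate entropy in
# bits for a common word, a random character jumble, and a random passphrase.
import math
import secrets

def entropy_bits(choices_per_symbol, symbols):
    """Entropy in bits if each symbol is drawn uniformly and independently."""
    return symbols * math.log2(choices_per_symbol)

# A dictionary word like "password" contributes essentially 0 bits: it is in
# every attacker's wordlist.
print("common word          :  ~0 bits")

# 8 random printable ASCII characters: strong but hard for people to remember.
print("8 random characters  :", round(entropy_bits(94, 8), 1), "bits")

# 5 words drawn from a large wordlist (e.g. the 7,776-word Diceware list)
# are comparably strong and far easier to remember.
print("5-word passphrase    :", round(entropy_bits(7776, 5), 1), "bits")

# Demo only: sample a passphrase from a tiny stand-in wordlist.
toy_words = ["correct", "horse", "battery", "staple", "orbit", "velvet"]
print("example passphrase   :", "-".join(secrets.choice(toy_words) for _ in range(5)))
```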
The use of geospatial technologies has become ubiquitous since the leading Internet vendors delivered a number of popular map websites. This book covers a wide spectrum of techniques, modeling methodologies and theories on the development and applications of GIS relative to the Internet. It includes coverage of business process services and the integration of GIS into global enterprise information systems and service architectures. The world's experts in this emerging field present examples and case studies for location-based services, coastal restoration, urban planning, battlefield planning and rehearsal, and environmental analysis and assessment.
New approaches are needed that could move us towards developing effective systems for problem solving and decision making: systems that can deal with complex and ill-structured situations, function in information-rich environments, cope with imprecise information, and rely on their knowledge and learn from experience - i.e. intelligent systems. One of the main efforts in intelligent systems development is focused on knowledge and information management, which is regarded as the crucial issue in smart decision-making support. The 13 chapters of this book represent a sample of that effort. The overall aim of the book is to provide guidelines for developing tools for the smart processing of knowledge and information. Still, the guide does not presume to give ultimate answers. Rather, it poses ideas and case studies to explore the complexities and challenges of modern knowledge management issues. It also encourages its reader to become aware of the multifaceted interdisciplinary character of such issues. The premise of this book is that its reader will leave it with a heightened ability to think - in different ways - about developing, evaluating, and supporting intelligent knowledge and information management systems in real-life environments.
This book presents a framework for process transformation and explains how business goals can be translated into realistic plans that are tangible and yield real results in terms of the top line and the bottom line. Process transformation is like a tangram puzzle, which has multiple solutions yet is essentially composed of seven 'tans' that hold it together. Based on practical experience and intensive research into existing material, 'Process Tangram' is a simple yet powerful framework that proposes process transformation as a program. The seven 'tans' are: the transformation program itself, triggers, goals, tools and techniques, culture, communication and success factors. With its segregation into tans and division into core elements, this framework makes it possible to 'pick and choose' in order to quickly and easily map an organization's specific requirements. Change management and process modeling are covered in detail. In addition, the book approaches managed services as a model of service delivery, which it explores as a case of process transformation. This book will appeal to anyone engaged in business process transformation, be it business process management professionals, change managers, sponsors, program managers or line managers. The book starts with the basics, making it suitable even for students who want to make a career in business process management.
Information retrieval (IR) aims at defining systems able to provide fast and effective content-based access to large amounts of stored information. The aim of an IR system is to estimate the relevance of documents to a user's information need, expressed by means of a query. This is a very difficult and complex task, since it is pervaded with imprecision and uncertainty. Most existing IR systems offer a very simple model of IR, which privileges efficiency at the expense of effectiveness. A promising direction for increasing the effectiveness of IR is to model the partiality intrinsic in the IR process and to make the systems adaptive, i.e. able to "learn" the user's concept of relevance. To this aim, the application of soft computing techniques can help to obtain greater flexibility in IR systems.
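As a toy illustration of the kind of flexibility such graded modelling can add (this is not a technique from the book; the documents and scoring rule are made up), the sketch below ranks documents by a soft degree of query-term membership rather than an exact Boolean match.

```python
# Toy sketch of "soft" relevance scoring: each query term contributes a graded
# degree of membership based on its relative frequency in the document,
# instead of a hard yes/no match.
from collections import Counter

documents = {
    "d1": "fuzzy retrieval models rank documents by graded relevance",
    "d2": "boolean retrieval either matches a document or it does not",
    "d3": "soft computing adds flexibility to information retrieval systems",
}
query = ["fuzzy", "retrieval", "relevance"]

def degree(term, text):
    """Graded membership of `term` in `text`: relative frequency, capped at 1."""
    counts = Counter(text.split())
    return min(1.0, counts[term] / max(counts.values()))

def relevance(text, query):
    """Soft aggregation: mean of the per-term membership degrees."""
    return sum(degree(t, text) for t in query) / len(query)

for doc_id, text in sorted(documents.items(), key=lambda kv: -relevance(kv[1], query)):
    print(doc_id, round(relevance(text, query), 2))
```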
Mining the Web: Discovering Knowledge from Hypertext Data is the first book devoted entirely to techniques for producing knowledge from the vast body of unstructured Web data. Building on an initial survey of infrastructural issues, including Web crawling and indexing, Chakrabarti examines low-level machine learning techniques as they relate specifically to the challenges of Web mining. He then devotes the final part of the book to applications that unite infrastructure and analysis to bring machine learning to bear on systematically acquired and stored data. Here the focus is on results: the strengths and weaknesses of these applications, along with their potential as foundations for further progress. From Chakrabarti's painstaking, critical, and forward-looking work, readers will gain the theoretical and practical understanding they need to contribute to the Web mining effort.
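As a minimal, standard-library-only sketch of the crawl-and-index infrastructure stage (illustrative only; the seed URL is a placeholder, and a real crawler would respect robots.txt, politeness delays and deduplication), the code below fetches a few pages and builds an inverted term index.

```python
# Minimal sketch of the crawl-then-index pipeline: breadth-first fetching of a
# handful of pages and construction of an inverted term index.
import re
import urllib.request
from collections import defaultdict
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawl(seed, max_pages=5):
    """Breadth-first crawl from `seed`, building an inverted term index."""
    frontier, seen = [seed], set()
    index = defaultdict(set)                      # term -> set of URLs
    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue
        for term in re.findall(r"[a-z]{3,}", html.lower()):
            index[term].add(url)
        parser = LinkExtractor()
        parser.feed(html)
        frontier += [link for link in parser.links if link.startswith("http")]
    return index

if __name__ == "__main__":
    index = crawl("https://example.com/")          # placeholder seed URL
    print("pages mentioning 'domain':", index.get("domain", set()))
```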
Physical processes involving atomic phenomena allow ever more precise time and frequency measurements. This progress is not possible without suitable processing of the respective raw data. This book describes that data processing at various levels: the design of time and frequency references, the characterization of time and frequency references, and applications involving precise time and/or frequency references.
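Characterization of frequency references commonly relies on statistics such as the Allan deviation; the sketch below (not code from the book, and using simulated noise) computes a non-overlapping Allan deviation for a fractional-frequency series at several averaging times.

```python
# Illustrative sketch: non-overlapping Allan deviation of a fractional-
# frequency series, a standard way to characterize the stability of a
# frequency reference at different averaging times.
import math
import random

def allan_deviation(y, m):
    """Allan deviation for averaging factor m (tau = m * tau0)."""
    # Average the frequency samples in non-overlapping blocks of length m,
    # then apply sigma_y^2(tau) = mean of (y_{i+1} - y_i)^2 / 2.
    blocks = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(blocks, blocks[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

random.seed(1)
y = [random.gauss(0.0, 1e-11) for _ in range(10000)]   # simulated white FM noise

for m in (1, 10, 100):
    print(f"tau = {m:4d} * tau0  sigma_y = {allan_deviation(y, m):.2e}")
```

For white frequency noise the printed deviation should fall roughly as the square root of the averaging factor, which is the kind of signature used to identify noise types in real references.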
Fuzzy sets were first proposed by Lotfi Zadeh in his seminal paper [366] in 1965, and ever since have been a center of many discussions, fervently admired and condemned. Both proponents and opponents consider the arguments pointless because none of them would step back from their territory. And still, discussions burst out from a single sparkle like a conference paper or a message on some fuzzy-mail newsgroup. Here is an excerpt from an e-mail message posted in 1993 to fuzzy-mail@vexpert.dbai.tuwien.ac.at by somebody who signed "Dave": "... Why then the 'logic' in 'fuzzy logic'? I don't think anyone has successfully used fuzzy sets for logical inference, nor do I think anyone will. In my admittedly neophyte opinion, 'fuzzy logic' is a misnomer, an oxymoron. (I would be delighted to be proven wrong on that.) ... I came to the fuzzy literature with an open mind (and open wallet), high hopes and keen interest. I am very much disillusioned with 'fuzzy' per se, but I did happen across some extremely interesting things along the way." Dave, thanks for the nice quote! Enthusiastic on the surface, are not many of us suspicious deep down? In some books and journals the word fuzzy is religiously avoided: fuzzy set theory is viewed as a second-hand cheap trick whose aim is nothing else but to devalue good classical theories and open up the way to lazy ignorants and newcomers.
This book develops a crowdsourced sensor-cloud service composition framework that takes spatio-temporal aspects into account. It also opens new horizons for service-oriented computing in the direction of applications based on crowdsourced sensor data, in the broader context of the Internet of Things (IoT). How to effectively and efficiently capture, manage and deliver sensed data as user-desired services is a massive challenge for the IoT research field. The outcome of this research contributes to solving this very important question by designing a novel service framework and a set of unique service selection and composition frameworks. The novel service framework for managing crowdsourced sensor data provides a high-level abstraction (i.e., the sensor-cloud service) to model crowdsourced sensor data from functional and non-functional perspectives, seamlessly turning the raw data into "ready to go" services. A creative indexing model is developed to capture and manage the spatio-temporal dynamism of crowdsourced service providers. Novel frameworks to compose crowdsourced sensor-cloud services are equally vital; these frameworks focus on the spatio-temporal composition of crowdsourced sensor-cloud services, which is new territory for existing service-oriented computing research. A creative failure-proof model is also designed to prevent composition failure caused by fluctuating QoS. An incentive model to drive the coverage of crowdsourced service providers is vital as well: a new spatio-temporal incentive model targets changing the coverage of the crowdsourced providers to achieve the demanded coverage of crowdsourced sensor-cloud services within a region. The outcome of this research is expected to create a sensor-services crowdsourcing market and new commercial opportunities focusing on applications based on crowdsourced data. The crowdsourced, community-based approach adds significant value to journey planning and map services, creating a competitive edge for technologically-minded companies and incentivizing new start-ups, thus enabling greater market innovation. This book primarily targets researchers and practitioners who conduct research work in service-oriented computing, the Internet of Things (IoT), smart cities and spatio-temporal travel planning, as well as advanced-level students studying this field. Small and medium entrepreneurs who invest in crowdsourced IoT services and journey planning infrastructures will also want to purchase this book.
This book discusses the development of a theory of info-statics as a sub-theory of the general theory of information. It describes the factors required to establish a definition of the concept of information that fixes the applicable boundaries of the phenomenon of information, its linguistic structure and scientific applications. The book establishes the definitional foundations of information and how the concepts of uncertainty, data, fact, evidence and evidential things are sequential derivatives of information as the primary category, which is a property of matter and energy. The sub-definitions are extended to include the concepts of possibility, probability, expectation, anticipation, surprise, discounting, forecasting, prediction and the nature of past-present-future information structures. It shows that the factors required to define the concept of information are those that allow differences and similarities to be established among universal objects over the ontological and epistemological spaces in terms of varieties and identities. These factors are characteristic and signal dispositions on the basis of which general definitional foundations are developed to construct the general information definition (GID). The book then demonstrates that this definition is applicable to all types of information over the ontological and epistemological spaces. It also defines the concepts of uncertainty, data, fact, evidence and knowledge based on the GID. Lastly, it uses set-theoretic analytics to enhance the definitional foundations, and shows the value of the theory of info-statics to establish varieties and categorial varieties at every point of time and thus initializes the construct of the theory of info-dynamics.
Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. A Generic Fault-Tolerant Architecture for Real-Time Dependable Systems explains the motivations and the results of a collaborative project (*), whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project currently deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology. The project thus designed a generic fault-tolerant architecture with two dimensions of redundancy and a third multi-level integrity dimension for accommodating software components of different levels of criticality. The architecture is largely based on commercial off-the-shelf (COTS) components and follows a software-implemented approach so as to minimise the need for special hardware. Using an associated development and validation environment, system developers may configure and validate instances of the architecture that can be shown to meet the very diverse requirements of railway, space, nuclear-propulsion and other critical real-time applications. This book describes the rationale of the generic architecture, the design and validation of its communication, scheduling and fault-tolerance components, and the tools that make up its design and validation environment. The book concludes with a description of three prototype systems that have been developed following the proposed approach. (*) Esprit project No. 20716: GUARDS: a Generic Upgradable Architecture for Real-time Dependable Systems.
Calendar units, such as months and days, clock units, such as hours and seconds, and specialized units, such as business days and academic years, play a major role in a wide range of information system applications. System support for reasoning about these units, called granularities in this book, is important for the efficient design, use, and implementation of such applications. The book deals with several aspects of temporal information and provides a unifying model for granularities. It is intended for computer scientists and engineers who are interested in the formal models and technical development of specific issues. Practitioners can learn about critical aspects that must be taken into account when designing and implementing databases supporting temporal information. Lecturers may find this book useful for an advanced course on databases. Moreover, any graduate student working on time representation and reasoning, either in data or knowledge bases, should definitely read it.
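As an informal illustration of the kind of granularity reasoning the book formalizes (the helper functions below are hypothetical and use only the standard library), the sketch maps dates to day, month and business-day granules and counts granules between two dates.

```python
# Toy sketch of reasoning over time granularities (days, months, business
# days); the book develops a formal model of which this is only an informal
# illustration.
from datetime import date, timedelta

def granule(d, granularity):
    """Map a date to the index of the granule containing it."""
    if granularity == "day":
        return d.toordinal()
    if granularity == "month":
        return d.year * 12 + (d.month - 1)
    if granularity == "business_day":                  # weekends belong to no granule
        return None if d.weekday() >= 5 else d.toordinal()
    raise ValueError(granularity)

def business_days_between(start, end):
    """Count business-day granules in [start, end)."""
    days = (end - start).days
    return sum(
        granule(start + timedelta(i), "business_day") is not None for i in range(days)
    )

print(granule(date(2024, 3, 15), "month") - granule(date(2023, 11, 2), "month"))  # 4 months apart
print(business_days_between(date(2024, 3, 1), date(2024, 3, 15)))                 # 10 business days
```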