Market Basket Analysis (MBA) provides the ability to continually monitor the affinities of a business and can help an organization achieve a key competitive advantage. Time Variant data enables data warehouses to directly associate events in the past with the participants in each individual event. In the past, however, the use of these powerful tools in tandem led to performance degradation and resulted in unactionable and even damaging information. Data Warehouse Designs: Achieving ROI with Market Basket Analysis and Time Variance presents an innovative, soup-to-nuts approach that successfully combines what was previously incompatible, without degradation, and uses the relational architecture already in place. Built around two main chapters, Market Basket Solution Definition and Time Variant Solution Definition, it provides a tangible how-to design that can be used to facilitate MBA within the context of a data warehouse. The book:
- Presents a solution for creating home-grown MBA data marts
- Includes database design solutions in the context of Oracle, DB2, SQL Server, and Teradata relational database management systems (RDBMS)
- Explains how to extract, transform, and load data used in MBA and Time Variant solutions
The book uses standard RDBMS platforms, proven database structures, standard SQL and hardware, and software and practices already accepted and used in the data warehousing community to fill the gaps left by most conceptual discussions of MBA. It employs a form and language intended for a data warehousing audience to explain the practicality of how data is delivered, stored, and viewed. Offering a comprehensive explanation of the applications that provide, store, and use MBA data, Data Warehouse Designs provides you with the language and concepts needed to require and receive information that is relevant and actionable.
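Although the book's design work is done in standard SQL on the RDBMS platforms above, the affinity measures at the heart of MBA fit in a few lines. The following is a minimal sketch, in Python rather than SQL, of pairwise support, confidence, and lift over a set of transactions; the baskets and function names are illustrative, not taken from the book.

```python
from itertools import combinations
from collections import Counter

def pairwise_affinities(transactions):
    """Compute support, confidence, and lift for every item pair
    seen in a list of transactions (each transaction is a set)."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in transactions:
        item_counts.update(basket)
        pair_counts.update(combinations(sorted(basket), 2))

    results = {}
    for (a, b), both in pair_counts.items():
        support = both / n                      # P(A and B)
        confidence = both / item_counts[a]      # P(B | A)
        lift = confidence / (item_counts[b] / n)  # P(B|A) / P(B)
        results[(a, b)] = (support, confidence, lift)
    return results

# Illustrative baskets: a bread-butter affinity should emerge.
baskets = [{"bread", "butter"}, {"bread", "butter", "milk"},
           {"milk", "eggs"}, {"bread", "eggs"}]
for pair, (sup, conf, lift) in pairwise_affinities(baskets).items():
    print(pair, f"support={sup:.2f} confidence={conf:.2f} lift={lift:.2f}")
```

A lift above 1 indicates the two items co-occur more often than independence would predict, which is the affinity signal the book's data marts are built to surface.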
In practice, the design and architecture of a cloud varies among cloud providers. We present a generic evaluation framework for the performance, availability and reliability characteristics of various cloud platforms, and describe a generic benchmark architecture for cloud databases, specifically NoSQL databases offered as a service, which measures replication delay and monetary cost. Service Level Agreements (SLAs) represent the contract that captures the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. We present a novel approach for SLA-based management of cloud-hosted databases from the consumer perspective, and an end-to-end framework for consumer-centric SLA management of cloud-hosted databases. The framework facilitates adaptive and dynamic provisioning of the database tier of software applications based on application-defined policies for satisfying their own SLA performance requirements, avoiding the cost of any SLA violation, and controlling the monetary cost of the allocated computing resources. In this framework, the SLAs of the consumer applications are declaratively defined in terms of goals which are subject to a number of constraints that are specific to the application requirements. The framework continuously monitors the application-defined SLAs and automatically triggers the execution of necessary corrective actions (scaling the database tier out or in) when required. The framework is database-platform-agnostic, uses virtualization-based database replication mechanisms, and requires zero source code changes to the cloud-hosted software applications.
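As a rough illustration of the monitor-and-react loop this framework implies (not its actual API, which the abstract does not give), the sketch below scales a hypothetical database tier out or in when an application-defined latency goal is violated; the metric callable, thresholds, and replica bounds are all invented for the example.

```python
import time

# Application-defined SLA policy (illustrative values).
SLA_LATENCY_MS = 50        # goal: 95th-percentile read latency
SCALE_IN_MARGIN = 0.5      # scale in when comfortably under the goal
MIN_REPLICAS, MAX_REPLICAS = 1, 8

def control_loop(read_p95_latency, get_replica_count, set_replica_count,
                 interval_s=60):
    """Continuously monitor the SLA metric and trigger corrective
    actions (scaling the database tier out or in) when required."""
    while True:
        latency = read_p95_latency()          # observed metric
        replicas = get_replica_count()
        if latency > SLA_LATENCY_MS and replicas < MAX_REPLICAS:
            set_replica_count(replicas + 1)   # scale out: SLA violated
        elif (latency < SLA_LATENCY_MS * SCALE_IN_MARGIN
              and replicas > MIN_REPLICAS):
            set_replica_count(replicas - 1)   # scale in: control cost
        time.sleep(interval_s)
```

The three callables stand in for whatever monitoring and provisioning hooks the deployment provides, which is what lets a loop like this remain database-platform-agnostic.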
The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of knowledge representation approaches past, present and future, an introduction to ontologies and Web indexing, and, above all, novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and Web 2.0 (folksonomies); currently no other monograph provides a combined overview of these topics. The book focuses on using knowledge representation methods for document indexing purposes. To this end, considerations from classical librarian interests in knowledge representation (thesauri, classification schemes, etc.) are included; such material is not covered in most other books, which have a stronger background in computer science.
Trustworthy Ubiquitous Computing covers aspects of trust in ubiquitous computing environments: context, privacy, reliability, usability and user experience in this exciting new computing paradigm, which spans pervasive, grid, peer-to-peer and sensor-network computing and aims to provide secure computing and communication services anytime and anywhere. Mark Weiser presented his vision of disappearing and ubiquitous computing more than 15 years ago. The big picture of the computer introduced into our environment was a major innovation and the starting point for various areas of research. To fully explore the idea of ubiquitous computing, several houses were built, equipped with technology and used as laboratories in order to find and test appliances that are useful and could be made available in our everyday life. In recent years industry has picked up the idea, and products such as remote controls for the house have been developed and brought to market. In spite of the many applications and projects in the area of ubiquitous and pervasive computing, widespread success is still far away. One of the main reasons is the lack of acceptance of, and confidence in, this technology. Although researchers and industry are working in all of these areas, a forum is needed to elaborate the security, reliability and privacy issues whose resolution yields trustworthy interfaces and computing environments for people interacting within these ubiquitous environments. The user experience factor of trust thus becomes a crucial issue for the success of a UbiComp application. The goal of this book is to present the state of the art of trustworthy ubiquitous computing, to address recent research results, and to present and discuss ideas, theories, technologies, systems, tools, applications and experiences on all related theoretical and practical issues.
Information infrastructures are integrated solutions based on the fusion of information and communication technologies. They are characterized by the large amount of data that must be managed accordingly. An information infrastructure requires an efficient and effective information retrieval system to provide access to the items stored in the infrastructure. Terminological Ontologies: Design, Management and Practical Applications presents the main problems that affect the discovery systems of information infrastructures to manage terminological models, and introduces a combination of research tools and applications in Semantic Web technologies. This book specifically analyzes the need to create, relate, and integrate the models required for an infrastructure by elaborating on the problem of accessing these models in an efficient manner via interoperable services and components. Terminological Ontologies: Design, Management and Practical Applications is geared toward information management systems and semantic web professionals working as project managers, application developers, government workers and more. Advanced undergraduate and graduate level students, professors and researchers focusing on computer science will also find this book valuable as a secondary text or reference book.
The World Wide Web can be considered a huge library that in consequence needs a capable librarian responsible for the classification and retrieval of documents as well as the mediation between library resources and users. Based on this idea, the concept of the "Librarian of the Web" is introduced which comprises novel, librarian-inspired methods and technical solutions to decentrally search for text documents in the web using peer-to-peer technology. The concept's implementation in the form of an interactive peer-to-peer client, called "WebEngine", is elaborated on in detail. This software extends and interconnects common web servers creating a fully integrated, decentralised and self-organising web search system on top of the existing web structure. Thus, the web is turned into its own powerful search engine without the need for any central authority. This book is intended for researchers and practitioners having a solid background in the fields of Information Retrieval and Web Mining.
With the proliferation of social media and online communities in the networked world, a vast amount of data has been collected and stored in databases. Such data is accumulating at a phenomenal rate, pushing the classical methods of data analysis to their limits. This book presents an integrated framework of recent empirical and theoretical research on social network analysis based on a wide range of techniques from various disciplines, including data mining, the social sciences, mathematics, statistics, physics, network science and machine learning, together with visualization techniques and security considerations. The book illustrates the potential of multi-disciplinary techniques in various real-life problems and aims to motivate researchers in social network analysis to design more effective tools by integrating swarm intelligence and data mining.
This volume presents a collection of carefully selected contributions in the area of social media analysis. Each chapter opens up a number of research directions that have the potential to be taken on further in this rapidly growing area of research. The chapters are diverse enough to serve a number of directions of research with Sentiment Analysis as the dominant topic in the book. The authors have provided a broad range of research achievements from multimodal sentiment identification to emotion detection in a Chinese microblogging website. The book will be useful to research students, academics and practitioners in the area of social media analysis.
This book is devoted to the fundamentals of text mining using Perl, an open-source programming tool that is freely available via the Internet (www.perl.org), and provides readers with the methods, algorithms, and means to successfully complete text mining tasks on their own. It covers mining ideas from several perspectives: statistics, data mining, linguistics, and information retrieval. The book begins with an introduction to regular expressions, a text pattern methodology, and quantitative text summaries, all of which are fundamental tools for analyzing text. Then it builds upon this foundation to explore:
- Probability and texts, including the bag-of-words model
- Information retrieval techniques such as the TF-IDF similarity measure
- Concordance lines and corpus linguistics
- Multivariate techniques such as correlation, principal components analysis, and clustering
- Perl modules, German, and permutation tests
Each chapter is devoted to a single key topic, and the author carefully and thoughtfully introduces mathematical concepts as they arise, allowing readers to learn as they go without having to refer to additional books. The inclusion of numerous exercises and worked-out examples further complements the book's student-friendly format. Practical Text Mining with Perl is ideal as a textbook for undergraduate and graduate courses in text mining and as a reference for a variety of professionals who are interested in extracting information from text documents.
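As a language-neutral taste of the TF-IDF similarity measure listed above (the book itself works in Perl), here is a small Python sketch; the toy corpus is invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf-idf weight} vector per tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

docs = ["the cat sat on the mat".split(),
        "the dog sat on the log".split(),
        "cats and dogs".split()]
vecs = tfidf_vectors(docs)
print(cosine(vecs[0], vecs[1]))  # the two "sat on" documents score highest
```

Terms appearing in every document get zero weight, so the similarity is driven by distinctive vocabulary, which is exactly the property that makes TF-IDF useful for retrieval.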
This book presents the proceedings of Workshops and Posters at the 13th International Conference on Spatial Information Theory (COSIT 2017), which is concerned with all aspects of space and spatial environments as experienced, represented and elaborated by humans, other animals and artificial agents. Complementing the main conference proceedings, workshop papers and posters investigate specialized research questions or challenges in spatial information theory and closely related topics, including advances in the conceptualization of specific spatio-temporal domains and diverse applications of spatial and temporal information.
This book addresses many-criteria decision-making (MCDM), a process used to find a solution in an environment with several criteria. In many real-world problems, there are several different objectives that need to be taken into account. Solving these problems is a challenging task and requires careful consideration. In real applications, often simple and easy-to-understand methods are used; as a result, the solutions accepted by decision makers are not always optimal, while the algorithms that would provide better outcomes are very time-consuming. The greatest challenge facing researchers is how to create effective algorithms that yield optimal solutions with low time complexity. Accordingly, many current research efforts are focused on the implementation of biologically inspired algorithms (BIAs), which are well suited to solving uni-objective problems. This book introduces readers to state-of-the-art developments in biologically inspired techniques and their applications, with a major emphasis on the MCDM process. To do so, it presents a wide range of contributions on e.g. BIAs, MCDM, nature-inspired algorithms, multi-criteria optimization, machine learning and soft computing.
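To make the multi-criteria setting concrete, the sketch below computes a Pareto front under two minimized criteria with a brute-force filter; the candidate solutions are invented, and the biologically inspired algorithms the book covers are precisely what replaces such exhaustive filtering on hard instances.

```python
def dominates(a, b):
    """True if solution a is at least as good as b on every criterion
    (all criteria minimized here) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Illustrative (cost, delivery_time) pairs: no front member is beaten on both.
candidates = [(100, 5), (80, 7), (120, 3), (90, 7), (80, 8)]
print(pareto_front(candidates))  # -> [(100, 5), (80, 7), (120, 3)]
```

No single "optimal" point exists here; the front itself is the answer, which is why MCDM needs search methods rather than simple ranking.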
The book explores technological advances in the fourth industrial revolution (4IR), which is based on a variety of technologies such as artificial intelligence, Internet of Things, machine learning, big data, additive printing, cloud computing, and virtual and augmented reality. Critically analyzing the impacts and effects of these disruptive technologies on various areas, including economics, society, business, government, labor, law, and environment, the book also provides a broad overview of 4IR, with a focus on technologies, to allow readers to gain a deeper understanding of the recent advances and future trajectories. It is intended for researchers, practitioners, policy-makers and industry leaders.
This book provides a comprehensive analysis of the Brooks-Iyengar Distributed Sensing Algorithm, which brings together the power of Byzantine agreement and sensor fusion in building a fault-tolerant distributed sensor network. The authors analyze its long-term impacts, advances, and future prospects. The book starts by discussing the Brooks-Iyengar algorithm, which has had significant impact since its initial publication in 1996. The authors show how the technique has been applied in many domains, such as software reliability, distributed systems and OS development. The book exemplifies how the algorithm has enhanced real-time applications by adding fault-tolerant capabilities. The authors posit that the Brooks-Iyengar algorithm will continue to be used wherever fault-tolerant solutions are needed in redundant system scenarios. This book celebrates S.S. Iyengar's accomplishments that led to his 2019 Institute of Electrical and Electronics Engineers (IEEE) Cybermatics Congress "Test of Time Award" for his work on creating the Brooks-Iyengar algorithm and its impact in advancing modern computing.
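A minimal sketch of the interval-fusion step usually used to summarize the algorithm: with N sensor intervals and at most f faulty sensors, spans covered by at least N - f intervals are combined into a weighted point estimate. The readings are invented, and this single-node sketch omits the distributed message-exchange rounds of the full algorithm.

```python
def brooks_iyengar(intervals, f):
    """Fuse N sensor intervals, tolerating up to f faulty sensors:
    sweep the interval endpoints, keep spans covered by at least
    N - f intervals, and average span midpoints weighted by how
    many intervals cover each span."""
    n = len(intervals)
    events = sorted({x for lo, hi in intervals for x in (lo, hi)})
    spans = []  # (midpoint, weight, left, right) of covered spans
    for left, right in zip(events, events[1:]):
        mid = (left + right) / 2
        weight = sum(1 for lo, hi in intervals if lo <= mid <= hi)
        if weight >= n - f:
            spans.append((mid, weight, left, right))
    if not spans:
        return None, None  # no agreement among N - f sensors
    total = sum(w for _, w, _, _ in spans)
    estimate = sum(m * w for m, w, _, _ in spans) / total
    fused = (min(l for _, _, l, _ in spans),
             max(r for _, _, _, r in spans))
    return estimate, fused

# Four sensors, the last one off-center; tolerate f = 1 faulty sensor.
readings = [(2.7, 6.7), (0.0, 3.2), (1.5, 4.5), (0.8, 2.8)]
print(brooks_iyengar(readings, f=1))  # point estimate near 2.6
```

The weighting is what gives the algorithm its robustness: a single wildly wrong interval cannot pull the estimate far, because spans it alone covers never reach the N - f threshold.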
"Cryptographic Protocol: Security Analysis Based on Trusted
Freshness" mainly discusses how to analyze and design cryptographic
protocols based on the idea of system engineering and that of the
trusted freshness component. A novel freshness principle based on
the trusted freshness component is presented; this principle is the
basis for an efficient and easy method for analyzing the security
of cryptographic protocols. The reasoning results of the new
approach, when compared with the security conditions, can either
establish the correctness of a cryptographic protocol when the
protocol is in fact correct, or identify the absence of the
security properties, which leads the structure to construct attacks
directly. Furthermore, based on the freshness principle, a belief
multiset formalism is presented. This formalism s efficiency,
rigorousness, and the possibility of its automation are also
presented.
The field of database security has expanded greatly, with the rapid development of global inter-networked infrastructure. Databases are no longer stand-alone systems accessible only to internal users of organizations. Today, businesses must allow selective access from different security domains. New data services emerge every day, bringing complex challenges to those whose job is to protect data security. The Internet and the web offer means for collecting and sharing data with unprecedented flexibility and convenience, presenting threats and challenges of their own. This book identifies and addresses these new challenges and more, offering solid advice for practitioners and researchers in industry.
This book provides a general and comprehensible overview of supervised descriptive pattern mining, considering both classic algorithms and those based on heuristics. It provides formal definitions and a general idea about patterns, pattern mining, and the usefulness of patterns in the knowledge discovery process, as well as a brief summary of the tasks related to supervised descriptive pattern mining. It also includes a detailed description of the tasks usually grouped under the term supervised descriptive pattern mining: subgroup discovery, contrast sets and emerging patterns. Additionally, this book covers two tasks, class association rules and exceptional models, that are also considered part of this field. A major feature of this book is that it provides a general overview (formal definitions and algorithms) of all the tasks included under the term supervised descriptive pattern mining, and considers the analysis of different algorithms, based either on heuristics or on exhaustive search methodologies, for each of these tasks. To illustrate how important these techniques are in different fields, a set of real-world applications is described. Last but not least, some related tasks are also considered and analyzed. The final aim of this book is to provide a general review of the supervised descriptive pattern mining field, describing its tasks, its algorithms, its applications, and related tasks (those that share some common features). This book targets developers, engineers and computer scientists aiming to apply classic and heuristic-based algorithms to solve different kinds of pattern mining problems and apply them to real issues. Students and researchers working in this field can use this comprehensive book (which includes its methods and tools) as a secondary textbook.
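As a concrete taste of subgroup discovery, the first task listed above, the sketch below scores a candidate subgroup with the weighted relative accuracy (WRAcc) quality measure that is standard in this field; the toy dataset and candidate condition are invented.

```python
def wracc(dataset, condition, target):
    """Weighted relative accuracy of the subgroup selected by `condition`:
    coverage * (target rate inside the subgroup - overall target rate)."""
    n = len(dataset)
    subgroup = [row for row in dataset if condition(row)]
    if not subgroup:
        return 0.0
    coverage = len(subgroup) / n
    p_subgroup = sum(target(row) for row in subgroup) / len(subgroup)
    p_overall = sum(target(row) for row in dataset) / n
    return coverage * (p_subgroup - p_overall)

# Toy data: does "age > 40" describe a subgroup unusually likely to buy?
rows = [{"age": 25, "buys": 0}, {"age": 45, "buys": 1},
        {"age": 52, "buys": 1}, {"age": 33, "buys": 0},
        {"age": 61, "buys": 1}, {"age": 29, "buys": 1}]
print(wracc(rows, lambda r: r["age"] > 40, lambda r: r["buys"]))  # > 0
```

A subgroup discovery algorithm, whether exhaustive or heuristic, is essentially a search over such conditions for the ones with the highest quality score.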
This book gathers selected papers presented at the KES International Symposium on Smart Transportation Systems (KES-STS 2021). Modern transportation systems have undergone a rapid transformation in recent years, producing a range of technological innovations such as connected vehicles, self-driving cars, electric vehicles, Hyperloop, and even flying cars, and with them, fundamental changes in transport systems around the world. The book discusses current challenges, innovations, and breakthroughs in smart transportation systems, as well as transport infrastructure modelling, safety analysis, freeway operations, intersection analysis, and other related cutting-edge topics.
Vulnerability analysis, also known as vulnerability assessment, is a process that defines, identifies, and classifies the security holes, or vulnerabilities, in a computer, network, or application. In addition, vulnerability analysis can forecast the effectiveness of proposed countermeasures and evaluate their actual effectiveness after they are put into use. Vulnerability Analysis and Defense for the Internet provides packet captures, flow charts and pseudo code, which enable a user to identify if an application/protocol is vulnerable. This edited volume also includes case studies that discuss the latest exploits.
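As a toy illustration of one such check (classifying a service by its version banner against an advisory list), consider the sketch below; the product names, versions, and advisory table are all invented, and real assessments draw on curated vulnerability databases rather than a hard-coded dictionary.

```python
# Hypothetical advisory table: product -> known-vulnerable versions.
KNOWN_VULNERABLE = {
    "ExampleFTPd": {"1.3.0", "1.3.1"},   # invented product and versions
    "DemoHTTPd": {"2.0"},
}

def assess_banner(banner):
    """Classify a 'Product/Version' service banner against the table."""
    product, _, version = banner.partition("/")
    if product not in KNOWN_VULNERABLE:
        return f"{banner}: no advisory data"
    if version in KNOWN_VULNERABLE[product]:
        return f"{banner}: VULNERABLE - upgrade recommended"
    return f"{banner}: no known vulnerability"

for b in ["ExampleFTPd/1.3.1", "ExampleFTPd/1.4.2", "OtherServer/9.9"]:
    print(assess_banner(b))
```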
In this volume, Rudi Studer and his team deliver a self-contained compendium about the exciting field of Semantic Web services, starting with the basic standards and technologies and also including advanced applications in eGovernment and eHealth. The contributions provide both the theoretical background and the practical knowledge necessary to understand the essential ideas and to design new cutting-edge applications.
This book is designed for professional system administrators who need to securely deploy Microsoft Vista in their networks. Readers will not only learn about the new security features of Vista, but also how to safely integrate Vista with their existing wired and wireless network infrastructure and safely deploy it alongside their existing applications and databases. The book begins with a discussion of Microsoft's Trustworthy Computing Initiative and Vista's development cycle, which was like none other in Microsoft's history. Expert authors separate the hype from the reality of Vista's preparedness to withstand the 24 x 7 attacks it will face from malicious attackers as the world's #1 desktop operating system. The book has a companion CD which contains hundreds of working scripts and utilities to help administrators secure their environments.
The IEEE ICDM 2004 workshop on the Foundation of Data Mining and the IEEE ICDM 2005 workshop on the Foundation of Semantic Oriented Data and Web Mining focused on topics ranging from the foundations of data mining to new data mining paradigms. The workshops brought together both data mining researchers and practitioners to discuss these two topics while seeking solutions to long-standing data mining problems and stimulating new data mining research directions. We feel that the papers presented at these workshops may encourage the study of data mining as a scientific field and spark new communications and collaborations between researchers and practitioners. To express the visions forged in the workshops to a wide range of data mining researchers and practitioners and foster active participation in the study of foundations of data mining, we edited this volume by including extended and updated versions of selected papers presented at those workshops as well as some other relevant contributions. The content of this book includes studies of the foundations of data mining from theoretical, practical, algorithmic, and managerial perspectives. The following is a brief summary of the papers contained in this book.
This book will help organizations who have implemented or are considering implementing Microsoft Dynamics achieve a better result. It presents Regatta Dynamics, a methodology developed by the authors for the structured implementation of Microsoft Dynamics. From A to Z, it details the full implementation process, emphasizing the organizational component of the implementation process and the cohesion with functional and technical processes.
This book is a tribute to Professor Jacek Zurada, who is best known for his contributions to computational intelligence and knowledge-based neurocomputing. It is dedicated to Professor Jacek Zurada, Full Professor at the Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, J.B. Speed School of Engineering, University of Louisville, Kentucky, USA, as a token of appreciation for his scientific and scholarly achievements, and for his longstanding service to many communities, notably the computational intelligence community, in particular neural networks, machine learning, data analysis and data mining, but also the fuzzy logic and evolutionary computation communities, to name but a few. At the same time, the book recognizes and honors Professor Zurada's dedication and service to many scientific, scholarly and professional societies, especially the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional organization dedicated to advancing science and technology in a broad spectrum of areas and fields. The volume is divided into five major parts, the first of which addresses theoretic, algorithmic and implementation problems related to the intelligent use of data in the sense of how to derive practically useful information and knowledge from data. In turn, Part 2 is devoted to various aspects of neural networks and connectionist systems. Part 3 deals with essential tools and techniques for intelligent technologies in systems modeling and Part 4 focuses on intelligent technologies in decision-making, optimization and control, while Part 5 explores the applications of intelligent technologies.
The energy cost associated with modern information technologies has been increasing exponentially over time, stimulating the search for alternative information storage and processing devices. Magnetic skyrmions are solitonic nanometer-scale quasiparticles whose unique topological properties can be thought of as those of a Möbius strip. Skyrmions are envisioned as information carriers in novel information processing and storage devices with low power consumption and high information density. As such, they could contribute to solving the energy challenge. In order to be used in applications, isolated skyrmions must be thermally stable on the scale of years. In this work, their stability is studied through two main approaches: Kramers' method in the form of Langer's theory, and the forward flux sampling method. Good agreement is found between the two methods. We find that small skyrmions possess low internal energy barriers, but are stabilized by a large activation entropy. This is a direct consequence of the existence of stable modes of deformation of the skyrmion. Additionally, frustrated exchange that arises at some transition metal interfaces leads to new collapse paths in the form of the partial nucleation of the corresponding antiparticle, via merons and antimerons.
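To spell out the stability argument in one formula: in the harmonic rate-theory language the abstract draws on (Langer's theory), the lifetime takes an Arrhenius form in which the entropic contribution sits in the prefactor. The rendering below is a schematic summary, not an equation quoted from the thesis; its point is that a small barrier \Delta E can still yield year-scale lifetimes when the skyrmion's many stable deformation modes make S_min large and thereby suppress the attempt frequency f_0.

```latex
% Schematic lifetime of a metastable skyrmion (harmonic rate theory):
%   \Delta E      -- internal energy barrier to collapse
%   S_min, S_sp   -- entropies of the skyrmion state and the saddle point
%   f_0           -- attempt frequency (the entropic prefactor)
\frac{1}{\tau} = f_0 \, e^{-\Delta E / k_B T},
\qquad
f_0 \propto e^{(S_\mathrm{sp} - S_\mathrm{min}) / k_B}
```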
Based on research and industry experience, this book structures the issues pertaining to grid computing security into three main categories: architecture-related, infrastructure-related, and management-related issues. It discusses all three categories in detail, presents existing solutions, standards, and products, and pinpoints their shortcomings and open questions. Together with a brief introduction into grid computing in general and underlying security technologies, this book offers the first concise and detailed introduction to this important area, targeting professionals in the grid industry as well as students.
You may like...
- Information Hiding: Steganography and… by Neil F. Johnson, Zoran Duric, … (Hardcover, R2,963 / Discovery Miles 29 630)
- Systems Analysis And Design In A… by John Satzinger, Robert Jackson, … (Hardcover)
- Data Abstraction and Problem Solving… by Janet Prichard, Frank Carrano (Paperback, R2,421 / Discovery Miles 24 210)