Health information about patients is critical; currently, health records are stored in databases controlled by individual users, organizations, or large groups of organizations. Because of security concerns and the risk of data being modified or tampered with by malicious users, this information is not shared between organizations. Blockchain can be used to securely exchange healthcare data among organizations on the same network, allowing doctors and practitioners to provide better care for patients. Key properties of decentralization, such as immutability and transparency, improve healthcare interoperability. This book presents the prospects and research trends of blockchain in healthcare so that researchers, database professionals, academics, and healthcare professionals across the world can understand and apply the concept. It provides the fundamental and technical details of blockchain; the applications of blockchain in healthcare; hands-on chapters enabling graduate, postgraduate, and doctoral students as well as healthcare professionals to secure patients' healthcare data; and research challenges and future work directions for researchers in healthcare.
The IT Security Governance Guidebook with Security Program Metrics provides clear and concise explanations of key issues in information protection, describing the basic structure of information protection and enterprise protection programs. Supported by graphics throughout, the book offers both an overview of the material and detailed explanations of specific issues. The accompanying downloadable resources offer a collection of metrics, built from repeatable and comparable measurements, that are designed to correspond to the enterprise security governance model provided in the text, allowing an enterprise to measure its overall information protection program.
With advances and in-depth applications of computer technologies, and the extensive application of Web technology in various areas, databases have become repositories of large volumes of data. Managing these data resources is critical for effective problem solving and decision making. Collecting and presenting the latest research and development results from leading researchers in the field of intelligent databases, "Intelligent Databases: Technologies and Applications" provides a single record of current research and practical applications in this field. The book integrates data management in databases with intelligent data processing and analysis in artificial intelligence, challenging today's database technology and promoting its evolution.
Data warehousing is an important topic that is of interest to both industry and the knowledge engineering research community. Data mining and data warehousing technologies have similar objectives and can potentially benefit from each other's methods to facilitate knowledge discovery. Improving Knowledge Discovery through the Integration of Data Mining Techniques provides insight into the integration of data mining and data warehousing for enhancing the knowledge discovery process. Decision makers, academicians, researchers, advanced-level students, technology developers, and business intelligence professionals will find this book useful in furthering their research exposure to relevant topics in knowledge discovery.
Web search engines are not just indispensable tools for finding and accessing information online; they have become a defining component of the human condition, and Web search can be conceptualized as a complex behavior embedded within an individual's everyday social, cultural, political, and information-seeking activities. This book investigates Web search from a non-technical perspective, bringing together chapters that represent a range of multidisciplinary theories, models, and ideas.
This book presents real-world decision support systems, i.e., systems that have been running for some time and have therefore been tested in real environments and complex situations. The cases come from various application domains and highlight best practices in each stage of the system's life cycle, from the initial requirements analysis and design phases to the final stages of the project. Each chapter provides decision-makers with recommendations and insights into lessons learned, so that failures can be avoided and successes repeated. For this reason, unsuccessful cases, which at some point in their life cycle were deemed failures for one reason or another, are also included. All decision support systems are presented in a constructive, coherent and deductive manner to enhance the learning effect. The book complements the many works that focus on theoretical aspects or individual module design and development by offering 'good' and 'bad' practices for developing and using decision support systems. Combining high-quality research with real-world implementations, it is of interest to researchers and professionals in industry alike.
Considered the gold-standard reference on information security, the Information Security Management Handbook provides an authoritative compilation of the fundamental knowledge, skills, techniques, and tools required of today's IT security professional. Now in its sixth edition, this 3200-page, four-volume stand-alone reference is organized under the CISSP Common Body of Knowledge domains and is updated yearly. Each annual update (the latest is Volume 6) reflects changes to the CBK in response to new laws and evolving technology.
Implement a vendor-neutral and multi-cloud cybersecurity and risk mitigation framework with advice from seasoned threat hunting pros. In Threat Hunting in the Cloud: Defending AWS, Azure and Other Cloud Platforms Against Cyberattacks, celebrated cybersecurity professionals and authors Chris Peiris, Binil Pillai, and Abbas Kudrati leverage their decades of experience building large-scale cyber fusion centers to deliver the ideal threat hunting resource for both business and technical audiences. You'll find insightful analyses of cloud platform security tools and, using the industry-leading MITRE ATT&CK framework, discussions of the most common threat vectors. You'll discover how to build a side-by-side cybersecurity fusion center on both Microsoft Azure and Amazon Web Services and deliver a multi-cloud strategy for enterprise customers. And you will find out how to create a vendor-neutral environment with rapid disaster recovery capability for maximum risk mitigation. With this book you'll learn:
- Key business and technical drivers of cybersecurity threat hunting frameworks in today's technological environment
- Metrics available to assess threat hunting effectiveness regardless of an organization's size
- How threat hunting works with vendor-specific single-cloud security offerings and on multi-cloud implementations
- A detailed analysis of key threat vectors such as email phishing, ransomware and nation-state attacks
- Comprehensive AWS and Azure "how to" solutions through the lens of MITRE Threat Hunting Framework Tactics, Techniques and Procedures (TTPs)
- Azure and AWS risk mitigation strategies to combat key TTPs such as privilege escalation, credential theft, lateral movement, command-and-control activity, and data exfiltration
- Tools available on both the Azure and AWS cloud platforms that provide automated responses to attacks and orchestrate preventative measures and recovery strategies
- Critical components for successful adoption of a multi-cloud threat hunting framework, such as the Threat Hunting Maturity Model, Zero Trust Computing, the human elements of threat hunting, and integration of threat hunting with Security Operation Centers (SOCs) and cyber fusion centers
- The future of threat hunting with advances in artificial intelligence, machine learning, quantum computing and the proliferation of IoT devices
Perfect for technical executives (e.g., CTOs and CISOs), technical managers, architects, system admins and consultants with hands-on responsibility for cloud platforms, Threat Hunting in the Cloud is also an indispensable guide for business executives (e.g., CFOs, COOs, CEOs, and board members) and managers who need to understand their organization's cybersecurity risk framework and mitigation strategy.
Research in multi-agent systems offers a promising technology for problems involving networks, online trading and negotiation, as well as social structures and communication. This book covers agent and multi-agent technology for internet and enterprise systems. A pioneer in combining these fields, it is built around the idea of a shared platform for ideas and presents research on the technology and its application to real problems. The chapters range over applications, illustrating the possible uses of agents in an enterprise domain, and over the design and analytic methods needed to provide the solid foundation required for practical systems.
Community structure is a salient structural characteristic of many real-world networks. Communities are generally hierarchical, overlapping, multi-scale and coexist with other types of structural regularities of networks. This poses major challenges for conventional methods of community detection. This book comprehensively introduces the latest advances in community detection, especially the detection of overlapping and hierarchical community structures, the detection of multi-scale communities in heterogeneous networks, and the exploration of multiple types of structural regularities. These advances have been successfully applied to analyze large-scale online social networks, such as Facebook and Twitter. The book provides readers with a convenient way to grasp the cutting edge of community detection in complex networks.
Database Concurrency Control: Methods, Performance and Analysis is a review of developments in concurrency control methods for centralized database systems, with a brief digression into distributed databases and multicomputers; the emphasis is on performance. Its main goals are: to succinctly specify various concurrency control methods; to describe models for evaluating the relative performance of concurrency control methods; to point out problem areas in earlier performance analyses; to introduce queuing network models for evaluating the baseline performance of transaction processing systems; to provide insights into the relative performance of transaction processing systems; to illustrate the application of basic analytic methods to the performance analysis of various concurrency control methods; to review transaction models intended to relieve the effect of lock contention; to provide guidelines for improving the performance of transaction processing systems limited by concurrency control; and to point out areas for further investigation. This monograph should be of direct interest to computer scientists doing research on concurrency control methods for high-performance transaction processing systems, designers of such systems, and professionals concerned with improving (tuning) the performance of transaction processing systems.
Patrick Humphreys, Department of Social Psychology, London School of Economics and Political Science, Houghton Street, London WC2A 2AE. Email: P.Humphreys@lse.ac.uk. This book presents a selection of contributions to the conference on Implementing Systems for Supporting Management Decisions: Concepts, Methods, and Experiences held in London in July 1996. The conference was organized by the International Federation of Information Processing's Working Group 8.3 on Decision Support Systems and the London School of Economics and Political Science (LSE). The Programme Committee for the conference comprised Liam Bannon, University of Limerick; Patrick Humphreys, LSE, co-chairperson; Andrew McCosh, University of Edinburgh; Piero Migliarese, Politecnico di Milano, co-chairperson; Jean-Charles Pomerol, LAFORIA, Universite Paris VI. The chairperson of the organizing committee was Dina Berkeley, LSE. The programme committee members also served as the editors of this book. Each contribution was selected by the editors after peer review and was developed by its authors specifically for inclusion in this volume. Working Group 8.3 was formally established in 1981 on the recommendation of IFIP's Technical Committee on Information Systems (TC8). The scope of the working group covers: "Development of approaches for applying information systems technology to increase the effectiveness of decision makers in situations where the computer system can support and enhance human judgment in the performance of tasks that have elements that cannot be specified in advance."
This book presents an improved design for service provisioning and allocation models, validated by running genome sequence assembly tasks in a hybrid cloud environment. It proposes approaches for addressing scheduling and performance issues in big data analytics and showcases new algorithms for hybrid cloud scheduling. Scientific sectors such as bioinformatics, astronomy, high-energy physics, and Earth science are generating a tremendous flow of data, commonly known as big data. In the context of growing demand for big data analytics, cloud computing offers an ideal platform for processing big data tasks due to its flexible scalability and adaptability. However, current service provisioning and allocation models suffer from numerous problems, such as inefficient scheduling algorithms, heavy memory overheads, excessive node delays and improper error handling of tasks, all of which need to be addressed to enhance the performance of big data analytics.
Modern computer-based control systems are able to collect a large amount of information, display it to operators and store it in databases, but the interpretation of the data and the subsequent decision making rely mainly on operators with little computer support. This book introduces developments in the automatic analysis and interpretation of process-operational data, both in real time and over the operational history, and describes new concepts and methodologies for developing intelligent, state-space-based systems for process monitoring, control and diagnosis. The book brings together new methods and algorithms from process monitoring and control, data mining and knowledge discovery, artificial intelligence, pattern recognition, and causal relationship discovery, as well as signal processing. It also provides a framework for integrating plant operators and supervisors into the design of process monitoring and control systems.
The topic of preferences is a new branch of machine learning and data mining, and it has attracted considerable attention in artificial intelligence research in recent years. It involves learning from observations that reveal information about the preferences of an individual or a class of individuals. Representing and processing knowledge in terms of preferences is appealing as it allows one to specify desires in a declarative way, to combine qualitative and quantitative modes of reasoning, and to deal with inconsistencies and exceptions in a flexible manner. And, generalizing beyond training data, models thus learned may be used for preference prediction. This is the first book dedicated to this topic, and the treatment is comprehensive. The editors first offer a thorough introduction, including a systematic categorization according to learning task and learning technique, along with a unified notation. The first half of the book is organized into parts on label ranking, instance ranking, and object ranking; the second half is organized into parts on applications of preference learning in multiattribute domains, information retrieval, and recommender systems. The book will be of interest to researchers and practitioners in artificial intelligence, in particular machine learning and data mining, and in fields such as multicriteria decision-making and operations research.
Actuarial Principles: Lifetables and Mortality Models explores the core of actuarial science: the study of mortality and other risks and applications. Including the CT4 and CT5 UK courses, but applicable to a global audience, this work lightly covers the mathematical and theoretical background of the subject to focus on real life practice. It offers a brief history of the field, why actuarial notation has become universal, and how theory can be applied to many situations. Uniquely covering both life contingency risks and survival models, the text provides numerous exercises (and their solutions), along with complete self-contained real-world assignments.
Geographic information systems have developed rapidly in the past decade, and are now a major class of software, with applications that include infrastructure maintenance, resource management, agriculture, Earth science, and planning. But a lack of standards has led to a general inability for one GIS to interoperate with another. It is difficult for one GIS to share data with another, or for people trained on one system to adapt easily to the commands and user interface of another. Failure to interoperate is a problem at many levels, ranging from the purely technical to the semantic and the institutional. Interoperating Geographic Information Systems is about efforts to improve the ability of GISs to interoperate, and has been assembled through a collaboration between academic researchers and the software vendor community under the auspices of the US National Center for Geographic Information and Analysis and the Open GIS Consortium Inc. It includes chapters on the basic principles and the various conceptual frameworks that the research community has developed to think about the problem. Other chapters review a wide range of applications and the experiences of the authors in trying to achieve interoperability at a practical level. Interoperability opens enormous potential for new ways of using GIS and new mechanisms for exchanging data, and these are covered in chapters on information marketplaces, with special reference to geographic information. Institutional arrangements are also likely to be profoundly affected by the trend towards interoperable systems, and nowhere is the impact of interoperability more likely to cause fundamental change than in education, as educators address the needs of a new generation of GIS users with access to a new generation of tools. The book concludes with a series of chapters on education and institutional change. 
Interoperating Geographic Information Systems is suitable as a secondary text for graduate level courses in computer science, geography, spatial databases, and interoperability and as a reference for researchers and practitioners in industry, commerce and government.
The book examines patterns of participation in human rights treaties. International relations theory is divided on what motivates states to participate in treaties, specifically human rights treaties. Instead of examining these specific motivations, this study examines patterns of participation, attempting to match theoretical expectations of state behavior with observed participation. It concludes that the data suggest multiple motivations lead states to participate in human rights treaties. The book is divided into five substantive chapters. After an introduction, the second chapter examines the literature on why states join treaties in general, and human rights treaties in particular. The third chapter reviews the obligations states commit to under the fifteen treaties under consideration. The fourth chapter uses basic quantitative methods to examine differences in participation rates between democratic and non-democratic states. The fifth chapter examines reservations, declarations, and objections made in conjunction with the fifteen treaties, employing both quantitative and qualitative methods to determine whether there are substantial differences between democratic and non-democratic states. Finally, the sixth chapter examines the states that participate in the most human rights treaties to determine whether there are characteristics that help to identify them, and evaluates theoretical predictions about participation.
This book introduces the concepts, applications and development of data science in the telecommunications industry, focusing on advanced machine learning and data mining methodologies in the wireless networks domain. Mining Over Air describes problems and solutions for wireless network performance and quality, device quality readiness and returns analytics, wireless resource usage profiling, network traffic anomaly detection, intelligence-based self-organizing networks, telecom marketing, social influence, and other important applications in the telecom industry. Written by authors who study big data analytics in wireless networks and telecommunication markets from both industrial and academic perspectives, the book targets the pain points in telecommunication networks and markets through big data. Designed for both practitioners and researchers, it explores the intersection between the development of new engineering technology and the use of industry data to understand consumer behavior, combining engineering savvy with insights about human behavior. Engineers will understand how the data generated by the technology can be used to understand consumer behavior, and social scientists will gain a better understanding of the data generation process.
Data Mining for Design and Manufacturing: Methods and Applications is the first book that brings together research and applications for data mining within design and manufacturing. The aim of the book is 1) to clarify the integration of data mining in engineering design and manufacturing, 2) to present a wide range of domains to which data mining can be applied, 3) to demonstrate the essential need for symbiotic collaboration of expertise in design and manufacturing, data mining, and information technology, and 4) to illustrate how to overcome central problems in design and manufacturing environments. The book also presents formal tools required to extract valuable information from design and manufacturing data, and facilitates interdisciplinary problem solving for enhanced decision making. Audience: The book is aimed at both academic and practising audiences. It can serve as a reference or textbook for senior or graduate level students in Engineering, Computer, and Management Sciences who are interested in data mining technologies. The book will be useful for practitioners interested in utilizing data mining techniques in design and manufacturing as well as for computer software developers engaged in developing data mining tools.
Organizations rely on data mining and warehousing technologies to store, integrate, query, and analyze essential data. Strategic Advancements in Utilizing Data Mining and Warehousing Technologies: New Concepts and Developments discusses developments in data mining and warehousing as well as techniques for successful implementation. Contributions investigate theoretical queries along with real-world applications, providing a useful foundation for academicians and practitioners to research new techniques and methodologies.
Advances in technology are making massive data sets common in many scientific disciplines, such as astronomy, medical imaging, bio-informatics, combinatorial chemistry, remote sensing, and physics. To find useful information in these data sets, scientists and engineers are turning to data mining techniques. This book is a collection of papers based on the first two in a series of workshops on mining scientific datasets. It illustrates the diversity of problems and application areas that can benefit from data mining, as well as the issues and challenges that differentiate scientific data mining from its commercial counterpart. While the focus of the book is on mining scientific data, the work is of broader interest as many of the techniques can be applied equally well to data arising in business and web applications. Audience: This work would be an excellent text for students and researchers who are familiar with the basic principles of data mining and want to learn more about the application of data mining to their problem in science or engineering.
"Machine Learning and Data Mining for Computer Security" provides an overview of the current state of research in machine learning and data mining as it applies to problems in computer security. This book has a strong focus on information processing and combines and extends results from computer security. The first part of the book surveys the data sources, the learning and mining methods, evaluation methodologies, and past work relevant for computer security. The second part of the book consists of articles written by the top researchers working in this area. These articles deals with topics of host-based intrusion detection through the analysis of audit trails, of command sequences and of system calls as well as network intrusion detection through the analysis of TCP packets and the detection of malicious executables. This book fills the great need for a book that collects and frames work on developing and applying methods from machine learning and data mining to problems in computer security.
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving. Generally, such meaningful intervals span several consecutive shots. There hardly exists any efficient and reliable technique, either automatic or manual, to identify all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour-intensive and inadequate.
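The automatic shot-boundary detection mentioned above is classically done by comparing intensity histograms of consecutive frames and flagging a cut where the difference spikes. The sketch below illustrates that idea only in outline; the frame representation (flat lists of grayscale pixel values), the bin count and the threshold are all illustrative assumptions, not details taken from the cited papers, and a real system would decode actual video frames.

```python
# Minimal sketch of histogram-difference shot-boundary detection
# (the general approach surveyed in work like Zhang et al., 1993).
# Frames are stand-in flat lists of grayscale pixel values (0-255);
# bin count and threshold below are illustrative assumptions.

def histogram(frame, bins=8, max_val=256):
    """Count pixel intensities into equal-width bins."""
    counts = [0] * bins
    width = max_val // bins
    for px in frame:
        counts[min(px // width, bins - 1)] += 1
    return counts

def hist_diff(h1, h2):
    """Sum of absolute bin differences, normalized by frame size."""
    total = sum(h1)
    return sum(abs(a - b) for a, b in zip(h1, h2)) / total

def detect_shot_boundaries(frames, threshold=0.5):
    """Return indices where consecutive frames differ sharply."""
    hists = [histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if hist_diff(hists[i - 1], hists[i]) > threshold]

# Two synthetic "shots": dark frames, then bright frames; cut at index 3.
dark = [10] * 100
bright = [240] * 100
frames = [dark, dark, dark, bright, bright]
print(detect_shot_boundaries(frames))  # -> [3]
```

As the passage notes, this only finds physical shot boundaries; grouping shots into semantically meaningful intervals remains the hard, largely manual part of the problem.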