This edited book presents the scientific outcomes of the 4th IEEE/ACIS International Conference on Big Data, Cloud Computing, Data Science & Engineering (BCD 2019) which was held on May 29-31, 2019 in Honolulu, Hawaii. The aim of the conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way. Presenting 15 of the conference's most promising papers, the book discusses all aspects (theory, applications and tools) of computer and information science, the practical challenges encountered along the way, and the solutions adopted to solve them.
Covid-19 hit the world unprepared as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision makers fighting the virus, have drawn heavily on the power of AI and its data analytics models for urgent decision support, on a scale rarely seen in human history. This book showcases a collection of important data analytics models that were used during the epidemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how data analytics models were designed and applied to epidemic data. Taking Covid-19 as a case study, readers, especially those working in similar fields, will be better prepared should a new wave of epidemic arise in the near future.
This book highlights various evolutionary algorithm techniques for various medical conditions and introduces medical applications of evolutionary computation for real-time diagnosis. Evolutionary Intelligence for Healthcare Applications presents how evolutionary intelligence can be used in smart healthcare systems involving big data analytics, mobile health, personalized medicine, and clinical trial data management. It focuses on emerging concepts and approaches and highlights various evolutionary algorithm techniques used for early disease diagnosis, prediction, and prognosis for medical conditions. The book also presents ethical issues and challenges that can occur within the healthcare system. Researchers, healthcare professionals, data scientists, systems engineers, students, programmers, clinicians, and policymakers will find this book of interest.
This book focuses on multi-omics big-data integration, data-mining techniques and cutting-edge omics research, in principles and applications, for a deep understanding of Traditional Chinese Medicine (TCM) and diseases from the following aspects: (1) Basics of multi-omics data and analytical methods for TCM and diseases. (2) The need for omics studies in TCM research, and the basic background of omics research in TCM and disease. (3) A better understanding of multi-omics big-data integration techniques. (4) A better understanding of multi-omics big-data mining techniques, with different applications, to draw the most insight from these omics data for TCM and disease research. (5) TCM preparation quality control, checking both prescribed and unexpected ingredients, including biological and chemical ingredients. (6) TCM preparation source tracking. (7) TCM preparation network pharmacology analysis. (8) TCM analysis data resources, web services, and visualizations. (9) TCM geoherbalism examination and authentic TCM identification. Traditional Chinese Medicine has existed for several thousand years, yet only in recent decades have we realized that research on TCM can be profoundly boosted by omics technologies. Devised as a book on TCM and disease research in the omics age, it focuses on data integration and data-mining methods for multi-omics research, explaining in detail and with supporting examples the "what", "why" and "how" of omics in TCM-related research. It is an attempt to bridge the gap between TCM-related multi-omics big data and data-mining techniques, for best practice in contemporary bioinformatics and in-depth insight into TCM-related questions.
More data has been produced in the 21st century than in all of human history combined. Yet, are we making better decisions today than in the past? How many poor decisions result from the absence of data? The existence of an overwhelming amount of data has affected how we make decisions, but it has not necessarily improved how we make decisions. To make better decisions, people need good judgment based on data literacy: the ability to extract meaning from data. Including data in the decision-making process can bring considerable clarity in answering our questions. Nevertheless, human beings can become distracted, overwhelmed, and even confused in the presence of too much data. The book presents cautionary tales of what can happen when too much attention is spent on acquiring more data instead of understanding how to best use the data we already have. Data is not produced in a vacuum, and individuals who possess data literacy will understand the environment and incentives in the data-generating process. Readers of this book will learn what questions to ask, what data to pay attention to, and what pitfalls to avoid in order to make better decisions. They will also be less vulnerable to those who manipulate data for misleading purposes.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as in digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges offered by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances on routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
Statistics and hypothesis testing are routinely used in areas (such as linguistics) that are traditionally not mathematically intensive. In such fields, when faced with experimental data, many students and researchers tend to rely on commercial packages to carry out statistical data analysis, often without understanding the logic of the statistical tests they rely on. As a consequence, results are often misinterpreted, and users have difficulty in flexibly applying techniques relevant to their own research; they use whatever they happen to have learned. A simple solution is to teach the fundamental ideas of statistical hypothesis testing without using too much mathematics. This book provides a non-mathematical, simulation-based introduction to basic statistical concepts and encourages readers to try out the simulations themselves using the source code and data provided (the freely available programming language R is used throughout). Since the code presented in the text almost always requires the use of previously introduced programming constructs, diligent students also acquire basic programming abilities in R. The book is intended for advanced undergraduate and graduate students in any discipline, although the focus is on linguistics, psychology, and cognitive science. It is designed for self-instruction, but it can also be used as a textbook for a first course on statistics. Earlier versions of the book have been used in undergraduate and graduate courses in Europe and the US. "Vasishth and Broe have written an attractive introduction to the foundations of statistics. It is concise, surprisingly comprehensive, self-contained and yet quite accessible. Highly recommended." Harald Baayen, Professor of Linguistics, University of Alberta, Canada. "By using the text students not only learn to do the specific things outlined in the book, they also gain a skill set that empowers them to explore new areas that lie beyond the book's coverage." Colin Phillips, Professor of Linguistics, University of Maryland, USA.
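The book's own simulations are written in R; purely as an illustration of the simulation-based style of hypothesis testing it describes, the hypothetical Python sketch below estimates a p-value for a difference between two group means with a permutation test instead of a closed-form formula (the data are simulated, not taken from the book).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reading-time data (ms) for two experimental conditions.
condition_a = rng.normal(loc=520, scale=60, size=30)
condition_b = rng.normal(loc=550, scale=60, size=30)

observed_diff = condition_b.mean() - condition_a.mean()

# Simulate the null hypothesis: condition labels are exchangeable,
# so shuffle the pooled data and recompute the difference many times.
pooled = np.concatenate([condition_a, condition_b])
n_a = len(condition_a)
null_diffs = []
for _ in range(10_000):
    rng.shuffle(pooled)
    null_diffs.append(pooled[n_a:].mean() - pooled[:n_a].mean())
null_diffs = np.array(null_diffs)

# Two-sided p-value: how often a shuffled difference is at least as extreme.
p_value = np.mean(np.abs(null_diffs) >= abs(observed_diff))
print(f"observed difference: {observed_diff:.1f} ms, p = {p_value:.4f}")
```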
Whether building a relational, object-relational, or object-oriented database, database developers are increasingly relying on an object-oriented design approach as the best way to meet user needs and performance criteria. This book teaches you how to use the Unified Modeling Language (the official standard of the Object Management Group) to develop and implement the best possible design for your database. Inside, the author leads you step by step through the design process, from requirements analysis to schema generation. You'll learn to express stakeholder needs in UML use cases and actor diagrams, to translate UML entities into database components, and to transform the resulting design into relational, object-relational, and object-oriented schemas for all major DBMS products.
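As a rough, hypothetical illustration of the UML-entity-to-relational-schema translation described above (the entities, attributes and tables here are invented for the example, not taken from the book), a UML class Customer with a one-to-many association to Order might map to relational DDL along these lines:

```python
import sqlite3

# Hypothetical mapping of two UML entities, Customer and Order, and the
# one-to-many association between them, into relational table definitions.
DDL = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,   -- UML attribute: id
    name        TEXT NOT NULL,         -- UML attribute: name
    email       TEXT UNIQUE            -- UML attribute: email
);

CREATE TABLE "order" (
    order_id    INTEGER PRIMARY KEY,   -- UML attribute: id
    placed_on   TEXT NOT NULL,         -- UML attribute: date
    customer_id INTEGER NOT NULL,      -- UML association: Customer 1..* Order
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")])
```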
First of all, I would like to congratulate Gabriella Pasi and Gloria Bordogna for the work they accomplished in preparing this new book in the series "Studies in Fuzziness and Soft Computing." "Recent Issues on the Management of Fuzziness in Databases" is undoubtedly a token of their long-lasting and active involvement in the area of Fuzzy Information Retrieval and Fuzzy Database Systems. This book is particularly welcome in the area of fuzzy databases, where books are not numerous, although the first works at the crossroads of fuzzy sets and databases were initiated about twenty years ago by L. Zadeh. Only five books have been published since 1995, when the first volume dedicated to fuzzy databases, published in the series "Studies in Fuzziness and Soft Computing" edited by J. Kacprzyk and myself, appeared. Going beyond books strictly speaking, let us also mention the existence of review papers that are part of a couple of handbooks related to fuzzy sets published since 1998. The area known as fuzzy databases covers a number of topics, among which: flexible queries addressed to regular databases; the extension of the notion of a functional dependency; data mining and fuzzy summarization; and querying databases containing imperfect attribute values represented by means of possibility distributions.
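As a small, hypothetical illustration of the first topic in that list, flexible queries addressed to a regular database, the Python sketch below ranks records against the vague predicate "salary around 50,000" using a triangular membership function rather than a crisp comparison; the data and thresholds are invented for the example.

```python
# A minimal sketch of a flexible (fuzzy) query over ordinary records:
# instead of a crisp WHERE clause, each row gets a membership degree in [0, 1].

employees = [
    {"name": "Ana",   "salary": 48_000},
    {"name": "Botha", "salary": 63_000},
    {"name": "Chen",  "salary": 52_500},
]

def around(value: float, target: float, tolerance: float) -> float:
    """Triangular membership: 1 at the target, falling to 0 at +/- tolerance."""
    return max(0.0, 1.0 - abs(value - target) / tolerance)

# "Salary around 50,000", keeping only rows with membership above 0.5.
scored = [(e["name"], around(e["salary"], 50_000, 10_000)) for e in employees]
for name, degree in sorted(scored, key=lambda t: -t[1]):
    if degree > 0.5:
        print(f"{name}: membership {degree:.2f}")
```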
Software that covertly monitors user actions, also known as spyware, has become a first-level security threat due to its ubiquity and the difficulty of detecting and removing it. This is especially so for video conferencing, thin-client computing and Internet cafes. CryptoGraphics: Exploiting Graphics Cards for Security explores the potential for implementing ciphers within GPUs, and describes the relevance of GPU-based encryption to the security of applications involving remote displays. As the processing power of GPUs has increased, research on the use of GPUs for general-purpose computing has arisen. This work extends such research by considering the use of a GPU as a parallel processor for encrypting data. The authors evaluate the operations found in symmetric and asymmetric key ciphers to determine whether encryption can be programmed on existing GPUs. A detailed description of a GPU-based implementation of AES is provided. The feasibility of GPU-based encryption allows the authors to explore the use of a GPU as a trusted system component: unencrypted display data can be confined to the GPU to avoid exposing it to any malware running on the operating system.
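The book targets the GPU itself; the hedged Python sketch below runs on the CPU with the third-party cryptography package and only illustrates the underlying property that a parallel implementation can exploit, namely that in a counter-mode block cipher each block depends only on the key and its counter value, so blocks can be encrypted independently (whether the book uses this particular mode is not stated here).

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
initial_counter = os.urandom(16)
plaintext = os.urandom(64)            # four 16-byte AES blocks

# Reference: encrypt the whole message sequentially in CTR mode.
encryptor = Cipher(algorithms.AES(key), modes.CTR(initial_counter)).encryptor()
expected = encryptor.update(plaintext) + encryptor.finalize()

# Each CTR block depends only on (key, counter + block index), so blocks can
# be encrypted independently -- the property a GPU exploits for parallelism.
def encrypt_block(i: int) -> bytes:
    counter = ((int.from_bytes(initial_counter, "big") + i) % 2**128).to_bytes(16, "big")
    enc = Cipher(algorithms.AES(key), modes.CTR(counter)).encryptor()
    return enc.update(plaintext[16 * i:16 * (i + 1)]) + enc.finalize()

parallel = b"".join(encrypt_block(i) for i in range(4))
assert parallel == expected
print("per-block CTR encryption matches the sequential result")
```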
This book describes various methods and recent advances in predictive computing and information security. It highlights various predictive application scenarios to discuss these breakthroughs in real-world settings. Further, it addresses state-of-the-art techniques and the design, development and innovative use of technologies for enhancing predictive computing and information security. Coverage also includes frameworks for eTransportation and eHealth, security techniques, and algorithms for predictive computing and information security based on the Internet of Things (IoT) and cloud computing. As such, the book offers a valuable resource for graduate students and researchers interested in exploring predictive modeling techniques and architectures to solve information security, privacy and protection issues in future communication.
This book includes high-quality papers presented at the Second International Conference on Data Science and Management (ICDSM 2021), organized by the Gandhi Institute for Education and Technology, Bhubaneswar, from 19 to 20 February 2021. It features research in which data science is used to facilitate the decision-making process in various application areas, and also covers a wide range of learning methods and their applications in a number of learning problems. The empirical studies, theoretical analyses and comparisons to psychological phenomena described contribute to the development of products to meet market demands.
Canadian Semantic Web is an edited volume based on the first Canadian Semantic Web Working Symposium, held in June 2006 in Quebec, Canada. It is the first edited volume on this subject. The volume includes, but is not limited to, the following popular topics: "Trust, Privacy, Security on the Semantic Web," "Semantic Grid and Semantic Grid Services" and "Semantic Web Mining."
This open access book provides a comprehensive view of data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, "Foundations and Contexts", provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, "Data Space Technologies", subsequently details various implementation aspects of IDS and GAIA-X, including e.g. data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV finally offers an overview of several "Solutions and Applications", e.g. products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims to spread the vision of a social data market economy based on data spaces that embrace trust and data sovereignty.
This book focuses on how businesses manage organizational innovation processes. It explores the innovative policies and practices that organizations need to develop to be successful in the digital age. These policies are based on key resources such as research and development and human resources, and need to enable companies to respond to the challenges they may face in the digital economy. The book explains how organizational innovation can be used to improve a business's development, performance, conduct and outcomes. Helping to stimulate the growth and development of each individual in a dynamic, competitive and global economy, it can be used by a diverse range of readers, including academics, researchers, managers and engineers interested in matters related to organizational innovation in the digital age.
This book focuses on the design of secure and efficient signature and signcryption schemes for vehicular ad-hoc networks (VANETs). We use methods such as public key infrastructure (PKI), identity-based cryptography (IDC), and certificateless cryptography (CLC) to design bilinear pairing and elliptic curve cryptography-based signature and signcryption schemes and prove their security in the random oracle model. The signature schemes ensure the authenticity of the source and the integrity of a safety message, while the signcryption schemes ensure authentication and confidentiality of the safety message in a single logical step. The main benefit of this book is that it enables readers to study schemes that securely and efficiently process single and multiple messages in vehicle-to-vehicle and vehicle-to-infrastructure communications. In addition, it can benefit researchers, engineers, and graduate students in the fields of security and privacy of VANETs, Internet of Vehicles security, wireless body area networks security, etc.
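As a minimal, hypothetical illustration of the sign-and-verify flow for a safety message (this is plain ECDSA via the third-party Python cryptography package, not one of the book's pairing-based, identity-based or certificateless schemes, and the message fields are invented):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# An on-board unit's key pair (in a real VANET this would be certified by a
# trusted authority; key management and certificates are omitted here).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# A hypothetical basic safety message: position, speed, timestamp.
safety_message = b"lat=21.3069;lon=-157.8583;speed=42;ts=1718000000"

# Sign the message so receivers can check source authenticity and integrity.
signature = private_key.sign(safety_message, ec.ECDSA(hashes.SHA256()))

# A receiving vehicle verifies the signature with the sender's public key.
try:
    public_key.verify(signature, safety_message, ec.ECDSA(hashes.SHA256()))
    print("signature valid: message accepted")
except InvalidSignature:
    print("signature invalid: message rejected")
```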
This book presents the latest cutting-edge research, theoretical methods, and novel applications in the field of computational intelligence techniques and methods for combating fake news. Fake news is everywhere. Despite the efforts of major social network players such as Facebook and Twitter to fight disinformation, miracle cures and conspiracy theories continue to rain down on the net. Artificial intelligence can be a bulwark against the diversity of fake news on the Internet and social networks. This book discusses new models, practical solutions, and technological advances related to detecting and analyzing fake news based on computational intelligence models and techniques, to help decision-makers, managers, professionals, and researchers design new paradigms considering the unique opportunities associated with computational intelligence techniques. Further, the book helps readers understand computational intelligence techniques combating fake news in a systematic and straightforward way.
This book discusses the effective use of modern ICT solutions for business needs, including the efficient use of IT resources, decision support systems, business intelligence, data mining and advanced data processing algorithms, as well as the processing of large datasets (including data from social networks such as Twitter and Facebook). The ability to generate, record and process qualitative and quantitative data, including in the areas of big data, the Internet of Things (IoT) and cloud computing, offers a real prospect of significant improvements for business, as well as for the operation of a company within Industry 4.0. The book presents new ideas, approaches, solutions and algorithms in the areas of knowledge representation, management and processing, quantitative and qualitative data processing (including sentiment analysis), problems of simulation performance, and the use of advanced signal processing to increase the speed of computation. The solutions presented are also aimed at the effective use of business process model and notation (BPMN), business process semantization and investment project portfolio selection. It is a valuable resource for researchers, data analysts, entrepreneurs and IT professionals alike, and the research findings presented make it possible to reduce costs, increase the accuracy of investment, optimize resources and streamline operations and marketing.
Data mining has become a popular research topic in recent years for the treatment of the "data rich and information poor" syndrome. Currently, application-oriented engineers are concerned only with their immediate problems, which results in an ad hoc method of problem solving. Researchers, on the other hand, lack an understanding of the practical issues of data mining for real-world problems and often concentrate on issues that are of no significance to practitioners. In this volume, we hope to remedy these problems by (1) presenting a theoretical foundation of data mining, and (2) providing important new directions for data-mining research. A set of well-respected data-mining theoreticians were invited to present their views on the fundamental science of data mining. We have also called on researchers with practical data-mining experience to present important new data-mining topics.
This book provides a comprehensive methodology to measure systemic risk in many of its facets and dimensions, based on state-of-the-art risk assessment methods. Systemic risk has gained attention in the public eye since the collapse of Lehman Brothers in 2008. The bankruptcy of the fourth-biggest bank in the USA raised the question of whether banks that are allowed to become "too big to fail" and "too systemic to fail" should carry higher capital surcharges based on their size and systemic importance. The Global Financial Crisis of 2008-2009 was followed by the Sovereign Debt Crisis in the euro area, which saw the first Eurozone government de facto default on its debt and prompted actions at the international level to stem further domino and cascade effects on other Eurozone governments and banks. Against this backdrop, a careful measurement of systemic risk is of the utmost importance for the new capital regulation to be successful and for sovereign risk to remain in check. Most importantly, the book introduces a number of systemic fragility indicators for banks and sovereigns that can help to assess systemic risk and the impact of macroprudential and microprudential policies.
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and then, by composition, all other necessary instructions are synthesized. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing: a novel approach in which the computer supports only one, simple instruction. This bold new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers.
- Provides a comprehensive study of computer architecture using computability theory as a base.
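As a rough, hypothetical illustration of the idea (not code from the book), the Python sketch below interprets a tiny subleq machine, a well-known one-instruction computer whose single instruction "subtract and branch if the result is less than or equal to zero" can be composed to synthesize other operations such as addition.

```python
def run_subleq(mem, pc=0, max_steps=1000):
    """Interpret subleq: mem[b] -= mem[a]; jump to c if the result <= 0.
    Each instruction is three consecutive cells (a, b, c); a negative
    address for c halts the machine."""
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        if mem[b] <= 0:
            if c < 0:
                return mem
            pc = c
        else:
            pc += 3
    raise RuntimeError("step limit exceeded")

# Synthesize ADD: mem[Y] += mem[X] using only subleq, via a scratch cell Z = 0:
#   subleq X, Z   (Z = -X)
#   subleq Z, Y   (Y = Y - Z = Y + X)
#   subleq Z, Z   (clear Z), then halt.
X, Y, Z = 9, 10, 11
program = [X, Z, 3,   Z, Y, 6,   Z, Z, -1]
memory = program + [7, 5, 0]      # mem[X] = 7, mem[Y] = 5, mem[Z] = 0
print(run_subleq(memory)[Y])      # prints 12
```

Here the three subleq triples add mem[X] into mem[Y] by way of a scratch cell, which is the sense in which richer instructions are "synthesized" from the single primitive.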
The book discusses how augmented intelligence can increase the efficiency and speed of diagnosis in healthcare organizations. The concept of augmented intelligence reflects the enhanced capabilities of human decision-making in clinical settings when augmented with computational systems and methods. The book includes real-life case studies highlighting the impact of augmented intelligence in health care. It offers a guided tour of computational intelligence algorithms, architecture design, and applications of learning in healthcare challenges. It presents a variety of techniques designed to represent, enhance, and empower multi-disciplinary and multi-institutional machine learning research in healthcare informatics. It also presents specific applications of augmented intelligence in health care, along with architectural models and framework-based augmented solutions.
The purpose of this book is to discuss, in depth, the current state of research and practice in database security, to enable readers to expand their knowledge. The book brings together contributions from experts in the field throughout the world. Database security is still a key topic in most businesses and in the public sector, having implications for the whole of society.
Knowledge management captures the right knowledge for the right user, who in turn uses the knowledge to improve organizational or individual performance and increase effectiveness. "Strategies for Knowledge Management Success: Exploring Organizational Efficacy" collects and presents key research articles focused on identifying, defining, and measuring accomplishment in knowledge management. A significant collection of the latest international findings within the field, this book provides a strong reference for students, researchers, and practitioners involved with organizational knowledge management.
You may like...
- Management Of Information Security, Michael Whitman, Herbert Mattord (Paperback)
- Twin Research for Everyone - From…, Adam D. Tarnoki, David L. Tarnoki, … (Paperback) R3,606 (Discovery Miles 36 060)
- Mechanisms and Therapy of Liver Cancer…, Paul B. Fisher, Devanand Sarkar (Hardcover) R3,734 (Discovery Miles 37 340)