Recent advances in computing, communication, and data storage have led to an increasing number of large digital libraries publicly available on the Internet. In addition to alphanumeric data, other modalities, including video, play an important role in these libraries. Ordinary techniques cannot retrieve the required information from the enormous mass of data stored in digital video libraries. Instead of words, a video retrieval system deals with collections of video records, and is therefore confronted with the problem of video understanding. The system gathers key information from a video in order to allow users to query semantics instead of raw video data or video features. Users expect tools that automatically understand and manipulate video content in the same structured way that a traditional database manages numeric and textual data. Consequently, content-based search and retrieval of video data has become a challenging and important problem.
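As a rough illustration of content-based retrieval (a minimal sketch with invented data, not drawn from this book), the snippet below represents each video by a single keyframe feature vector and answers a query by ranking videos by cosine similarity to the query vector; the library, vectors, and names are hypothetical.

# Minimal content-based video retrieval sketch (illustrative data only).
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "library": video id -> aggregated keyframe feature vector.
library = {
    "news_clip": np.array([0.9, 0.1, 0.0]),
    "soccer_match": np.array([0.1, 0.8, 0.3]),
    "cooking_show": np.array([0.0, 0.2, 0.9]),
}

query = np.array([0.2, 0.7, 0.4])  # features extracted from an example clip
ranked = sorted(library, key=lambda vid: cosine_similarity(library[vid], query),
                reverse=True)
print(ranked)  # most similar videos first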
This book highlights the latest advances on the implementation and adaptation of blockchain technologies in real-world scientific, biomedical, and data applications. It presents rapid advancements in life sciences research and development by applying the unique capabilities inherent in distributed ledger technologies. The book unveils the current uses of blockchain in drug discovery, drug and device tracking, real-world data collection, and increased patient engagement used to unlock opportunities to advance life sciences research. This paradigm shift is explored from the perspectives of pharmaceutical professionals, biotechnology start-ups, regulatory agencies, ethical review boards, and blockchain developers. This book enlightens readers about the opportunities to empower and enable data in life sciences.
Modern AI techniques -- especially deep learning -- provide, in many cases, very good recommendations: where a self-driving car should go, whether to give a company a loan, etc. The problem is that not all these recommendations are good -- and since deep learning provides no explanations, we cannot tell which recommendations are good. It is therefore desirable to provide natural-language explanations of numerical AI recommendations. The need to connect natural-language rules and numerical decisions has been known since the 1960s, when it became necessary to incorporate expert knowledge -- described by imprecise words like "small" -- into control and decision making. For this incorporation, a special "fuzzy" technique was invented, which led to many successful applications. This book describes how this technique can help make AI more explainable. The book can be recommended for students, researchers, and practitioners interested in explainable AI.
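As a rough illustration of the fuzzy idea (not taken from the book), the sketch below maps a crisp number to a degree of membership in the vague term "small" instead of a hard yes/no; the breakpoints are hypothetical.

# Illustrative fuzzy membership function for the word "small".
def small(x, full=1.0, none=5.0):
    # Degree to which x counts as "small": 1 below `full`,
    # 0 above `none`, linear in between.
    if x <= full:
        return 1.0
    if x >= none:
        return 0.0
    return (none - x) / (none - full)

for x in (0.5, 2.0, 4.0, 6.0):
    print(x, round(small(x), 2))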
This book constitutes the refereed post-conference proceedings of the IFIP TC 3 Open Conference on Computers in Education, OCCE 2021, held in Tampere, Finland, in August 2021. The 22 full papers and 2 short papers included in this volume were carefully reviewed and selected from 44 submissions. The papers discuss key emerging topics and evolving practices in the area of educational computing research. They are organized in the following topical sections: Digital education across educational institutions; National policies and plans for digital competence; Learning with digital technologies; and Management issues.
The benefits of distributed computing are evidenced by the increased functionality, retrieval capability, and reliability it provides for a number of networked applications. The growth of the Internet into a critical part of daily life has encouraged further study on how data can better be transferred, managed, and evaluated in an ever-changing online environment. Advancements in Distributed Computing and Internet Technologies: Trends and Issues compiles recent research trends and practical issues in the fields of distributed computing and Internet technologies. The book provides advancements on emerging technologies that aim to support the effective design and implementation of service-oriented networks, future Internet environments, and building management frameworks. Research on Internet-based systems design, wireless sensor networks and their application, and next generation distributed systems will inform graduate students, researchers, academics, and industry practitioners of new trends and vital research in this evolving discipline.
This edited book presents the scientific outcomes of the 4th IEEE/ACIS International Conference on Big Data, Cloud Computing, Data Science & Engineering (BCD 2019) which was held on May 29-31, 2019 in Honolulu, Hawaii. The aim of the conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way. Presenting 15 of the conference's most promising papers, the book discusses all aspects (theory, applications and tools) of computer and information science, the practical challenges encountered along the way, and the solutions adopted to solve them.
This book focuses on multi-omics big-data integration, data-mining techniques, and cutting-edge omics research, in both principles and applications, for a deep understanding of Traditional Chinese Medicine (TCM) and diseases, covering the following aspects: (1) Basics about multi-omics data and analytical methods for TCM and diseases. (2) The needs of omics studies in TCM research, and the basic background of omics research in TCM and disease. (3) Better understanding of multi-omics big-data integration techniques. (4) Better understanding of multi-omics big-data mining techniques, together with their different applications, to extract the most insight from these omics data for TCM and disease research. (5) TCM preparation quality control, checking both prescribed and unexpected ingredients, including biological and chemical ingredients. (6) TCM preparation source tracking. (7) TCM preparation network pharmacology analysis. (8) TCM analysis data resources, web services, and visualizations. (9) TCM geoherbalism examination and authentic TCM identification. Traditional Chinese Medicine has existed for several thousand years, and only in recent decades have we realized that research on TCM can be profoundly boosted by omics technologies. Devised as a book on TCM and disease research in the omics age, it focuses on data integration and data-mining methods for multi-omics research, explaining in detail and with supporting examples the "What", "Why", and "How" of omics in TCM-related research. It is an attempt to bridge the gap between TCM-related multi-omics big data and data-mining techniques, for best practice of contemporary bioinformatics and in-depth insights into TCM-related questions.
More data has been produced in the 21st century than all of human history combined. Yet, are we making better decisions today than in the past? How many poor decisions result from the absence of data? The existence of an overwhelming amount of data has affected how we make decisions, but it has not necessarily improved how we make decisions. To make better decisions, people need good judgment based on data literacy: the ability to extract meaning from data. Including data in the decision-making process can bring considerable clarity in answering our questions. Nevertheless, human beings can become distracted, overwhelmed, and even confused in the presence of too much data. The book presents cautionary tales of what can happen when too much attention is spent on acquiring more data instead of understanding how to best use the data we already have. Data is not produced in a vacuum, and individuals who possess data literacy will understand the environment and incentives in the data-generating process. Readers of this book will learn what questions to ask, what data to pay attention to, and what pitfalls to avoid in order to make better decisions. They will also be less vulnerable to those who manipulate data for misleading purposes.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as in digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges offered by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances on routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
The Ethics of Artificial Intelligence in Education identifies and confronts key ethical issues generated over years of AI research, development, and deployment in learning contexts. Adaptive, automated, and data-driven education systems are increasingly being implemented in universities, schools, and corporate training worldwide, but the ethical consequences of engaging with these technologies remain unexplored. Featuring expert perspectives from inside and outside the AIED scholarly community, this book provides AI researchers, learning scientists, educational technologists, and others with questions, frameworks, guidelines, policies, and regulations to ensure the positive impact of artificial intelligence in learning.
First of all, I would like to congratulate Gabriella Pasi and Gloria Bordogna for the work they accomplished in preparing this new book in the series "Studies in Fuzziness and Soft Computing." "Recent Issues on the Management of Fuzziness in Databases" is undoubtedly a token of their long-lasting and active involvement in the area of Fuzzy Information Retrieval and Fuzzy Database Systems. This book is most welcome in the area of fuzzy databases, where books are not numerous, although the first works at the crossroads of fuzzy sets and databases were initiated about twenty years ago by L. Zadeh. Only five books have been published since 1995, when the first volume dedicated to fuzzy databases, published in the series "Studies in Fuzziness and Soft Computing" edited by J. Kacprzyk and myself, appeared. Going beyond books strictly speaking, let us also mention the existence of review papers that are part of a couple of handbooks related to fuzzy sets published since 1998. The area known as fuzzy databases covers a range of topics, among which: flexible queries addressed to regular databases; the extension of the notion of a functional dependency; data mining and fuzzy summarization; and querying databases containing imperfect attribute values represented by means of possibility distributions.
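As an illustration of a flexible query of the kind mentioned above (an independent sketch, not an example from the book), the following Python snippet ranks tuples of an ordinary table by their degree of satisfaction of the vague condition "young and well-paid", combining memberships with min; the membership breakpoints and data are invented.

# Illustrative flexible query over a regular table: each tuple gets a
# degree of satisfaction in [0, 1] instead of a hard accept/reject.
def young(age):               # membership for "young"
    return max(0.0, min(1.0, (35 - age) / 10))  # 1 at <=25, 0 at >=35

def well_paid(salary):        # membership for "well-paid"
    return max(0.0, min(1.0, (salary - 40000) / 20000))  # 0 at <=40k, 1 at >=60k

employees = [("Ann", 28, 55000), ("Bob", 45, 70000), ("Eve", 24, 42000)]
ranked = sorted(((min(young(a), well_paid(s)), name) for name, a, s in employees),
                reverse=True)
print(ranked)  # tuples ranked by degree of query satisfaction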
This open access book provides a comprehensive view on data ecosystems and platform economics from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I "Foundations and Contexts" provides a general overview about building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II "Data Space Technologies" subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV finally offers an overview of several "Solutions and Applications", including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook to future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces which embrace trust and data sovereignty.
Data analytics underpin our modern data-driven economy. This textbook explains the relevance of data analytics at the firm and industry levels, tracing the evolution and key components of the field, and showing how data analytics insights can be leveraged for business results. The first section of the text covers key topics such as data analytics tools, data mining, business intelligence, customer relationship management, and cybersecurity. The chapters then take an industry focus, exploring how data analytics can be used in particular settings to strengthen business decision-making. A range of sectors are examined, including financial services, accounting, marketing, sport, health care, retail, transport, and education. With industry case studies, clear definitions of terminology, and no background knowledge required, this text supports students in gaining a solid understanding of data analytics and its practical applications. PowerPoint slides, a test bank of questions, and an instructor's manual are also provided as online supplements. This will be a valuable text for undergraduate level courses in data analytics, data mining, business intelligence, and related areas.
This book presents the latest cutting-edge research, theoretical methods, and novel applications in the field of computational intelligence techniques and methods for combating fake news. Fake news is everywhere. Despite the efforts of major social network players such as Facebook and Twitter to fight disinformation, miracle cures and conspiracy theories continue to rain down on the net. Artificial intelligence can be a bulwark against the diversity of fake news on the Internet and social networks. This book discusses new models, practical solutions, and technological advances related to detecting and analyzing fake news based on computational intelligence models and techniques, to help decision-makers, managers, professionals, and researchers design new paradigms considering the unique opportunities associated with computational intelligence techniques. Further, the book helps readers understand computational intelligence techniques combating fake news in a systematic and straightforward way.
Explains the basic concepts of Python and its role in machine learning; provides comprehensive coverage of feature engineering, including real-time case studies; helps readers perceive structural patterns with reference to data science, statistics, and analytics; includes machine-learning-based structured exercises; and covers the different algorithmic concepts of machine learning, including unsupervised, supervised, and reinforcement learning.
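As a minimal illustration of the supervised-learning workflow in Python described above (an independent sketch using scikit-learn's bundled iris dataset, not material from the book), consider:

# Illustrative supervised-learning example: fit a classifier on a
# training split and report accuracy on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)   # a simple supervised classifier
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))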
This book provides a comprehensive methodology to measure systemic risk in many of its facets and dimensions based on state-of-the-art risk assessment methods. Systemic risk has gained attention in the public eye since the collapse of Lehman Brothers in 2008. The bankruptcy of the fourth-biggest bank in the USA raised questions whether banks that are allowed to become "too big to fail" and "too systemic to fail" should carry higher capital surcharges on their size and systemic importance. The Global Financial Crisis of 2008-2009 was followed by the Sovereign Debt Crisis in the euro area that saw the first Eurozone government de facto defaulting on its debt and prompted actions at international level to stem further domino and cascade effects to other Eurozone governments and banks. Against this backdrop, a careful measurement of systemic risk is of utmost importance for the new capital regulation to be successful and for sovereign risk to remain in check. Most importantly, the book introduces a number of systemic fragility indicators for banks and sovereigns that can help to assess systemic risk and the impact of macroprudential and microprudential policies.
The book discusses how augmented intelligence can increase the efficiency and speed of diagnosis in healthcare organizations. The concept of augmented intelligence reflects the enhanced capabilities of human decision-making in clinical settings when augmented with computational systems and methods. It includes real-life case studies highlighting the impact of augmented intelligence in health care. The book offers a guided tour of computational intelligence algorithms, architecture design, and applications of learning in healthcare challenges. It presents a variety of techniques designed to represent, enhance, and empower multi-disciplinary and multi-institutional machine learning research in healthcare informatics. It also presents specific applications of augmented intelligence in health care, along with architectural models and framework-based augmented solutions.
Knowledge management delivers the right knowledge to the right user, who in turn uses that knowledge to improve organizational or individual performance and increase effectiveness. "Strategies for Knowledge Management Success: Exploring Organizational Efficacy" collects and presents key research articles focused on identifying, defining, and measuring accomplishment in knowledge management. A significant collection of the latest international findings within the field, this book provides a strong reference for students, researchers, and practitioners involved with organizational knowledge management.
This edited book covers ongoing research in both theory and practical applications of using deep learning for social media data. Social networking platforms are overwhelmed by different contents, and their huge amounts of data have enormous potential to influence business, politics, security, planning and other social aspects. Recently, deep learning techniques have had many successful applications in the AI field. The research presented in this book emerges from the conviction that there is still much progress to be made toward exploiting deep learning in the context of social media data analytics. It includes fifteen chapters, organized into four sections that report on original research in network structure analysis, social media text analysis, user behaviour analysis and social media security analysis. This work could serve as a good reference for researchers, as well as a compilation of innovative ideas and solutions for practitioners interested in applying deep learning techniques to social media data analytics.
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy transaction timing and data temporal constraints. Other design issues important to the performance of an RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
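As a rough illustration of the kinds of constraints such systems juggle (an independent sketch, not the book's own algorithms), the following Python snippet orders transactions by earliest deadline first and skips transactions whose input data items have become temporally stale; all names and numbers are hypothetical.

# Illustrative earliest-deadline-first (EDF) scheduling with a check
# that the data items a transaction reads are still temporally valid.
from dataclasses import dataclass

@dataclass
class DataItem:
    value: float
    valid_until: float       # absolute time after which the value is stale

@dataclass
class Transaction:
    name: str
    deadline: float
    reads: list               # names of data items the transaction reads

def schedule(transactions, data, now):
    # Keep transactions whose deadlines have not passed and whose inputs
    # are still valid, then order them by earliest deadline first.
    runnable = [t for t in transactions
                if t.deadline > now
                and all(data[d].valid_until > now for d in t.reads)]
    return sorted(runnable, key=lambda t: t.deadline)

data = {"sensor": DataItem(21.5, valid_until=105.0)}
txns = [Transaction("t1", deadline=120.0, reads=["sensor"]),
        Transaction("t2", deadline=110.0, reads=["sensor"])]
print([t.name for t in schedule(txns, data, now=100.0)])  # ['t2', 't1']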
"In what is certain to be a seminal work on metadata, John Horodyski masterfully affirms the value of metadata while providing practical examples of its role in our personal and professional lives. He does more than tell us that metadata matters-he vividly illustrates why it matters." -Patricia C. Franks, PhD, CA, CRM, IGP, CIGO, FAI, President, NAGARA, Professor Emerita, San Jose State University, USA If data is the language upon which our modern society will be built, then metadata will be its grammar, the construction of its meaning, the building for its content, and the ability to understand what data can be for us all. We are just starting to bring change into the management of the data that connects our experiences. Metadata Matters explains how metadata is the foundation of digital strategy. If digital assets are to be discovered, they want to be found. The path to good metadata design begins with the realization that digital assets need to be identified, organized, and made available for discovery. This book explains how metadata will help ensure that an organization is building the right system for the right users at the right time. Metadata matters and is the best chance for a return on investment on digital assets and is also a line of defense against lost opportunities. It matters to the digital experience of users. It helps organizations ensure that users can identify, discover, and experience their brands in the ways organizations intend. It is a necessary defense, which this book shows how to build.
People have a hard time communicating, and also have a hard time finding business knowledge in their environment. With the sophistication of search technologies like Google, business people expect to be able to get their questions about the business answered just as they would with an Internet search. The truth is that knowledge management is still primitive today, largely because business metadata is poorly managed.
This book projects a futuristic scenario that is closer to reality than ever before. To realize the full potential of IoT, it has to be combined with AI technologies. Predictive and advanced analysis can be performed on the data that is collected, discovered and analyzed. Achieving this raises compatibility, complexity, legal and ethical issues arising from the automation of connected components and devices of widespread companies across the globe. While these are only a few examples of the issues, the editors' intention in this book is to present the concepts of integrating AI with IoT in a precise and clear manner to the research community. Their attempt is to provide novel advances and applications that address the challenge of continually discovering patterns for IoT, covering various aspects of implementing AI techniques to make IoT solutions smarter. The only way to keep pace with the data generated by the IoT, and to extract the hidden knowledge it contains, is to employ AI as the eventual catalyst for IoT. IoT together with AI is more than a trend; it will develop into a paradigm. The book helps researchers with an interest in this field gain insight into different concepts and their importance for applications in real life, and topics were chosen to make the edited book flexible and to stimulate further interest. All of this motivated the editors toward integrating AI in achieving smarter IoT. They believe this collection will be of interest to students pursuing pre-research, research and even master's studies in multidisciplinary domains.
This book provides an overview of the topics of data, sovereignty, and governance with respect to data and online activities through a legal lens and from a cybersecurity perspective. The first chapter explores the concepts of data, ownership, and privacy with respect to digital media and content, before defining the intersection of sovereignty in law with application to data and digital media content. The authors delve into the issue of digital governance, as well as theories and systems of governance on a state level, national level, and corporate/organizational level. Chapter three addresses the complex area of jurisdictional conflict of laws and the related issues regarding digital activities in international law, both public and private. Additionally, the book discusses the many technical complexities which underlie the evolution and creation of new law and governance strategies and structures. This includes socio-political, legal, and industrial technical complexities which can apply in these areas. The fifth chapter is a comparative examination of the legal strategies currently being explored by a variety of nations. The book concludes with a discussion about emerging topics which either influence, or are influenced by, data sovereignty and digital governance, such as indigenous data sovereignty, digital human rights and self-determination, artificial intelligence, and global digital social responsibility. Cumulatively, this book provides the full spectrum of information, from foundational principles underlining the described topics, through to the larger, more complex, evolving issues which we can foresee ahead of us.
A statistical language model, or more simply a language model, is a probabilistic mechanism for generating text. Such a definition is general enough to include an endless variety of schemes. However, a distinction should be made between generative models, which can in principle be used to synthesize artificial text, and discriminative techniques to classify text into predefined categories. The first statistical language modeler was Claude Shannon. In exploring the application of his newly founded theory of information to human language, Shannon considered language as a statistical source, and measured how well simple n-gram models predicted or, equivalently, compressed natural text. To do this, he estimated the entropy of English through experiments with human subjects, and also estimated the cross-entropy of the n-gram models on natural text. The ability of language models to be quantitatively evaluated in this way is one of their important virtues. Of course, estimating the true entropy of language is an elusive goal, aiming at many moving targets, since language is so varied and evolves so quickly. Yet fifty years after Shannon's study, language models remain, by all measures, far from the Shannon entropy limit in terms of their predictive power. However, this has not kept them from being useful for a variety of text processing tasks, and moreover can be viewed as encouragement that there is still great room for improvement in statistical language modeling.
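To make Shannon's evaluation idea concrete, the following Python sketch (illustrative, not taken from the book) trains an add-one-smoothed bigram model on a toy corpus and measures its cross-entropy in bits per word on held-out text; the corpora are invented and the smoothing choice is only for simplicity.

# Illustrative bigram language model evaluated by cross-entropy
# (bits per word) on held-out text; lower means better prediction.
import math
from collections import Counter

def bigram_cross_entropy(train_tokens, test_tokens):
    vocab = set(train_tokens) | set(test_tokens)
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    V = len(vocab)
    total_bits = 0.0
    for prev, word in zip(test_tokens, test_tokens[1:]):
        # add-one smoothed conditional probability P(word | prev)
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)
        total_bits += -math.log2(p)
    return total_bits / (len(test_tokens) - 1)

train = "the cat sat on the mat the cat ate".split()
test = "the cat sat on the mat".split()
print(round(bigram_cross_entropy(train, test), 3))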