This book paves the way for researchers from various areas of engineering working on smart cities to discuss how these areas intersect when it comes to infrastructure and its flexibility. The authors lay out models, algorithms, and frameworks related to the 'smartness' of future smart cities. In particular, manufacturing firms; electric generation, transmission, and distribution utilities; hardware and software computer companies; automation and control manufacturers; and other industries will be able to use this book to enhance their energy operations, improve comfort and privacy, and increase the benefit drawn from the electrical system. The book is aimed at researchers, professionals, and R&D teams in an array of industries.
Machine learning (ML) has become a commonplace element in our everyday lives and a standard tool for many fields of science and engineering. To make optimal use of ML, it is essential to understand its underlying principles. This book approaches ML as the computational implementation of the scientific principle. This principle consists of continuously adapting a model of a given data-generating phenomenon by minimizing some form of loss incurred by its predictions. The book trains readers to break down various ML applications and methods in terms of data, model, and loss, thus helping them to choose from the vast range of ready-made ML methods. The book's three-component approach to ML provides uniform coverage of a wide range of concepts and techniques. As a case in point, techniques for regularization, privacy preservation, and explainability amount to specific design choices for the model, data, and loss of an ML method.
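To make the data/model/loss decomposition concrete, here is a minimal sketch of ours (not taken from the book): the data are (x, y) pairs, the model is a line h(x) = w*x + b, and the loss is squared error, minimized by gradient descent. All numbers and names are illustrative assumptions.

```python
# The three components of an ML method, per the blurb's framing:
# data (feature/label pairs), a model (here: linear), and a loss
# (here: squared error), adapted by gradient descent.
# Illustrative sketch; values are not from the book.

data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # (x, y) pairs

w, b = 0.0, 0.0  # model parameters: h(x) = w * x + b
lr = 0.05        # learning rate

for _ in range(2000):
    # gradient of the mean-squared-error loss w.r.t. w and b
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    # adapt the model by moving against the gradient of the loss
    w -= lr * grad_w
    b -= lr * grad_b

print(f"fitted model: h(x) = {w:.2f} * x + {b:.2f}")
```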
This book provides a written record of the synergy that already exists among the research communities and represents a solid framework for the advancement of the big data and cloud computing disciplines, from which new interaction will result in the future. It is a compendium of the International Conference on Big Data and Cloud Computing (ICBDCC 2021) and includes recent advances in big data analytics, cloud computing, the Internet of nano things, cloud security, data analytics in the cloud, smart cities and grids, etc. The book primarily focuses on the application of knowledge that promotes ideas for solving societal problems through cutting-edge technologies. The articles featured here provide novel ideas that contribute to the growth of world-class research and development. The contents will interest researchers and professionals alike.
This book is a collection of novel, high-quality scientific contributions addressing several of these challenges. The articles are extended versions of a selection of the best papers initially presented at the French-speaking conference EGC'2019, held in Metz, France, January 21-25, 2019. These extended versions were accepted after an additional peer-review process among the papers already accepted in long format at the conference. At the conference itself, the long and short papers were selected through a double-blind peer-review process among the hundreds of papers submitted to each edition (the acceptance rate for long papers is about 25%).
In Symbolic Analysis for Parallelizing Compilers, the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead-code elimination, and symbolic constant propagation. Symbolic analysis can be applied to any program transformation or optimization problem that relies on compile-time information about the properties and value ranges of program variables - which covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
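As a hedged illustration of one optimization the blurb names - strength reduction driven by induction-variable analysis - the following sketch (ours, not from the book; conceptually a compiler transformation, written out in Python) shows a loop rewritten so a multiplication by the induction variable becomes a cheaper running addition:

```python
# Strength reduction, sketched as a before/after pair
# (illustrative example, not taken from the book).

stride = 8

# Before: each iteration multiplies the induction variable i by stride.
addresses = []
for i in range(10):
    addresses.append(i * stride)

# After: symbolic analysis proves addr == i * stride on every iteration,
# so the multiplication is reduced to a running addition.
reduced = []
addr = 0
for i in range(10):
    reduced.append(addr)
    addr += stride

assert addresses == reduced
```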
This book is about methodological aspects of uncertainty propagation in data processing. Uncertainty propagation is an important problem: while computer algorithms efficiently process data related to many aspects of our lives, most of these algorithms implicitly assume that the numbers they process are exact. In reality, these numbers come from measurements, and measurements are never 100% exact. Because of this, it makes no sense to translate 61 kg into pounds and get the result - as computers do - with 13-digit accuracy. In many cases - e.g., in celestial mechanics - the state of a system can be described by a few numbers: the values of the corresponding physical quantities. In such cases, for each of these quantities, we know (at least) an upper bound on the measurement error. This bound is either provided by the manufacturer of the measuring instrument or estimated by the user who calibrates the instrument. However, in many other cases, the description of the system is more complex than a few numbers: we need a function to describe a physical field (e.g., an electromagnetic field); we need a vector in Hilbert space to describe a quantum state; we need a pseudo-Riemannian space to describe physical space-time; etc. To describe and process uncertainty in all such cases, this book proposes a general methodology - a methodology that includes intervals as a particular case. The book is recommended to students and researchers interested in challenging aspects of uncertainty analysis and to practitioners who need to handle uncertainty in such unusual situations.
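A minimal sketch of the interval case mentioned above (the error bound is an illustrative assumption of ours): if a scale reports 61 kg with a guaranteed error bound of 0.5 kg, propagating the interval through the kg-to-pounds conversion bounds the result instead of printing spurious digits.

```python
# Interval propagation through a unit conversion
# (illustrative numbers; the 0.5 kg bound is assumed).
KG_TO_LB = 2.20462  # conversion factor, rounded

measured = 61.0     # reported value, in kg
bound = 0.5         # manufacturer-guaranteed error bound, in kg

# The true weight lies in [measured - bound, measured + bound];
# a monotone conversion maps interval endpoints to interval endpoints.
lo, hi = (measured - bound) * KG_TO_LB, (measured + bound) * KG_TO_LB
print(f"weight in pounds: [{lo:.1f}, {hi:.1f}]")  # ~[133.4, 135.6]
```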
This book discusses the advances of artificial intelligence and data science in climate change research and shows how climate data can be used as input to artificial intelligence systems. It is a good resource for researchers and professionals who work in the fields of data science, artificial intelligence, and climate change applications.
"A First Course in Machine Learning by Simon Rogers and Mark Girolami is the best introductory book for ML currently available. It combines rigor and precision with accessibility, starts from a detailed explanation of the basic foundations of Bayesian analysis in the simplest of settings, and goes all the way to the frontiers of the subject such as infinite mixture models, GPs, and MCMC." -Devdatt Dubhashi, Professor, Department of Computer Science and Engineering, Chalmers University, Sweden "This textbook manages to be easier to read than other comparable books in the subject while retaining all the rigorous treatment needed. The new chapters put it at the forefront of the field by covering topics that have become mainstream in machine learning over the last decade." -Daniel Barbara, George Mason University, Fairfax, Virginia, USA "The new edition of A First Course in Machine Learning by Rogers and Girolami is an excellent introduction to the use of statistical methods in machine learning. The book introduces concepts such as mathematical modeling, inference, and prediction, providing 'just in time' the essential background on linear algebra, calculus, and probability theory that the reader needs to understand these concepts." -Daniel Ortiz-Arroyo, Associate Professor, Aalborg University Esbjerg, Denmark "I was impressed by how closely the material aligns with the needs of an introductory course on machine learning, which is its greatest strength...Overall, this is a pragmatic and helpful book, which is well-aligned to the needs of an introductory course and one that I will be looking at for my own students in coming months." -David Clifton, University of Oxford, UK "The first edition of this book was already an excellent introductory text on machine learning for an advanced undergraduate or taught masters level course, or indeed for anybody who wants to learn about an interesting and important field of computer science. The additional chapters of advanced material on Gaussian process, MCMC and mixture modeling provide an ideal basis for practical projects, without disturbing the very clear and readable exposition of the basics contained in the first part of the book." -Gavin Cawley, Senior Lecturer, School of Computing Sciences, University of East Anglia, UK "This book could be used for junior/senior undergraduate students or first-year graduate students, as well as individuals who want to explore the field of machine learning...The book introduces not only the concepts but the underlying ideas on algorithm implementation from a critical thinking perspective." -Guangzhi Qu, Oakland University, Rochester, Michigan, USA
Recent advances in computing, communication, and data storage have led to an increasing number of large digital libraries publicly available on the Internet. In addition to alphanumeric data, other modalities, including video, play an important role in these libraries. Ordinary techniques will not retrieve the required information from the enormous mass of data stored in digital video libraries. Instead of words, a video retrieval system deals with collections of video records. Therefore, the system is confronted with the problem of video understanding. The system gathers key information from a video in order to allow users to query semantics instead of raw video data or video features. Users expect tools that automatically understand and manipulate the video content in the same structured way as a traditional database manages numeric and textual data. Consequently, content-based search and retrieval of video data becomes a challenging and important problem.
Robust statistics is important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers, and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of robust statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, papers on applications and programming tools complete the volume.
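As a hedged illustration of what robustness buys (an example of ours, not from the volume): a single corrupted observation can drag the mean arbitrarily far, while the median and the median absolute deviation barely move.

```python
# Robust vs. non-robust summaries under one corrupted value
# (illustrative data, not from the volume).
from statistics import mean, median

clean = [9.8, 10.1, 9.9, 10.2, 10.0]
dirty = clean[:-1] + [1000.0]  # one gross outlier, e.g. a typo

def mad(xs):
    """Median absolute deviation, a robust scale estimate."""
    m = median(xs)
    return median(abs(x - m) for x in xs)

print(mean(clean), mean(dirty))      # mean jumps from ~10 to ~206
print(median(clean), median(dirty))  # median stays near 10
print(mad(clean), mad(dirty))        # robust scale stays small
```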
This book highlights the latest advances on the implementation and adaptation of blockchain technologies in real-world scientific, biomedical, and data applications. It presents rapid advancements in life sciences research and development by applying the unique capabilities inherent in distributed ledger technologies. The book unveils the current uses of blockchain in drug discovery, drug and device tracking, real-world data collection, and increased patient engagement used to unlock opportunities to advance life sciences research. This paradigm shift is explored from the perspectives of pharmaceutical professionals, biotechnology start-ups, regulatory agencies, ethical review boards, and blockchain developers. This book enlightens readers about the opportunities to empower and enable data in life sciences.
There are wide-ranging implications in information security beyond national defense. Securing our information has implications for virtually all aspects of our lives, including protecting the privacy of our financial transactions and medical records, facilitating all operations of government, maintaining the integrity of national borders, securing important facilities, ensuring the safety of our food and commercial products, protecting the safety of our aviation system - even safeguarding the integrity of our very identity against theft. Information security is a vital element in all of these activities, particularly as information collection and distribution become ever more connected through electronic information delivery systems and commerce. This book encompasses results of research investigations and technologies that can be used to secure, protect, verify, and authenticate objects and information from theft, counterfeiting, and manipulation by unauthorized persons and agencies. The book has drawn on the diverse expertise in optical sciences and engineering, digital image processing, imaging systems, information processing, mathematical algorithms, quantum optics, computer-based information systems, sensors, detectors, and biometrics to report novel technologies that can be applied to information-security issues. The book is unique because it has diverse contributions from the field of optics, which is a new emerging technology for security, and digital techniques that are very accessible and can be interfaced with optics to produce highly effective security systems.
Based on the lectures given during the Eurocourse on 'Computing with Parallel Architectures', held at the Joint Research Centre Ispra, Italy, September 10-14, 1990.
Modern AI techniques - especially deep learning - provide, in many cases, very good recommendations: where a self-driving car should go, whether to give a company a loan, etc. The problem is that not all these recommendations are good - and since deep learning provides no explanations, we cannot tell which recommendations are good. It is therefore desirable to provide natural-language explanations of numerical AI recommendations. The need to connect natural-language rules and numerical decisions has been known since the 1960s, when the need emerged to incorporate expert knowledge - described by imprecise words like "small" - into control and decision making. For this incorporation, a special "fuzzy" technique was invented, which led to many successful applications. This book describes how this technique can help make AI more explainable. The book can be recommended for students, researchers, and practitioners interested in explainable AI.
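A hedged sketch of the core fuzzy idea the blurb alludes to (our illustration, not from the book): an imprecise word like "small" becomes a membership function assigning each value a degree between 0 and 1, and rules then combine such degrees instead of hard thresholds. The breakpoints below are assumed for illustration.

```python
# A word like "small" as a fuzzy membership function
# (illustrative shape and breakpoints; not taken from the book).

def small(x: float) -> float:
    """Degree to which x counts as 'small': 1 below 1.0,
    falling linearly to 0 at 5.0."""
    if x <= 1.0:
        return 1.0
    if x >= 5.0:
        return 0.0
    return (5.0 - x) / 4.0

# A rule "if the error is small, reduce the correction" then scales
# smoothly with the membership degree instead of switching abruptly.
for error in (0.5, 2.0, 4.0, 6.0):
    print(error, round(small(error), 2))
```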
This book constitutes the refereed post-conference proceedings of the IFIP TC 3 Open Conference on Computers in Education, OCCE 2021, held in Tampere, Finland, in August 2021. The 22 full papers and 2 short papers included in this volume were carefully reviewed and selected from 44 submissions. The papers discuss key emerging topics and evolving practices in the area of educational computing research. They are organized in the following topical sections: Digital education across educational institutions; National policies and plans for digital competence; Learning with digital technologies; and Management issues.
In this book, Dr. Soofastaei and his colleagues reveal how mining managers can effectively deploy advanced analytics in their day-to-day operations, one business decision at a time. Most mining companies have a massive amount of data at their disposal, yet they cannot use the stored data in any meaningful way. Advanced analytics, a powerful new business tool, enables many mining companies to aggressively leverage their data in key business decisions and processes with impressive results. From statistical analysis to machine learning and artificial intelligence, the authors show how many analytical tools can improve decisions about everything in the mine value chain, from exploration to marketing. Combining the science of advanced analytics with the business problems of the mining industry, Advanced Analytics in Mining Engineering serves as a practical road map and toolset for unleashing the potential buried in your company's data. The book aims to give mining executives, managers, and research and development teams an understanding of the business value and applicability of different analytic approaches, and to help data-analytics leads by giving them a business framework in which to assess the value, cost, and risk of potential analytical solutions. In addition, the book provides the next generation of miners - undergraduate and graduate IT and mining engineering students - with an understanding of data analytics applied to the mining industry. With chapters structured in line with the mining value chain, it offers a clear, enterprise-level view of where and how advanced data analytics can best be applied. The book highlights the potential to better interconnect activities in the mining enterprise, and explores the opportunities for optimization and increased productivity offered by better interoperability along the mining value chain - in line with the emerging vision of a digital mine with much-enhanced capabilities for modeling, simulation, and the use of digital twins, as in leading "digital" industries.
The benefits of distributed computing are evidenced by the increased functionality, retrieval capability, and reliability it provides for a number of networked applications. The growth of the Internet into a critical part of daily life has encouraged further study on how data can better be transferred, managed, and evaluated in an ever-changing online environment. Advancements in Distributed Computing and Internet Technologies: Trends and Issues compiles recent research trends and practical issues in the fields of distributed computing and Internet technologies. The book provides advancements on emerging technologies that aim to support the effective design and implementation of service-oriented networks, future Internet environments, and building management frameworks. Research on Internet-based systems design, wireless sensor networks and their application, and next generation distributed systems will inform graduate students, researchers, academics, and industry practitioners of new trends and vital research in this evolving discipline.
This edited book presents the scientific outcomes of the 4th IEEE/ACIS International Conference on Big Data, Cloud Computing, Data Science & Engineering (BCD 2019) which was held on May 29-31, 2019 in Honolulu, Hawaii. The aim of the conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way. Presenting 15 of the conference's most promising papers, the book discusses all aspects (theory, applications and tools) of computer and information science, the practical challenges encountered along the way, and the solutions adopted to solve them.
Covid-19 hit the world unprepared, as the deadliest pandemic of the century. Governments and authorities, as the leaders and decision makers fighting the virus, have tapped the power of AI and its data analytics models for urgent decision support on a scale rarely seen in human history. This book showcases a collection of important data analytics models that were used during the epidemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how data analytics models were designed and applied to epidemic data. Taking Covid-19 as a case study, readers - especially those working in similar fields - will be better prepared should a new epidemic arise in the near future.
This book focuses on multi-omics big-data integration, data-mining techniques, and cutting-edge omics research, in principles and applications, for a deep understanding of Traditional Chinese Medicine (TCM) and diseases, covering the following aspects: (1) basics of multi-omics data and analytical methods for TCM and diseases; (2) the need for omics studies in TCM research, and the background of omics research in TCM and disease; (3) multi-omics big-data integration techniques; (4) multi-omics big-data mining techniques and their applications, for drawing the most insight from omics data in TCM and disease research; (5) TCM preparation quality control, checking both prescribed and unexpected ingredients, biological and chemical; (6) TCM preparation source tracking; (7) TCM preparation network pharmacology analysis; (8) TCM analysis data resources, web services, and visualizations; and (9) TCM geoherbalism examination and authentic TCM identification. Traditional Chinese Medicine has existed for several thousand years, and only in recent decades have we realized that research on TCM can be profoundly boosted by omics technologies. Devised as a book on TCM and disease research in the omics age, it puts the focus on data integration and data mining methods for multi-omics research, explaining in detail, with supporting examples, the "what", "why", and "how" of omics in TCM-related research. It is an attempt to bridge the gap between TCM-related multi-omics big data and data-mining techniques, for best practice of contemporary bioinformatics and in-depth insights into TCM-related questions.
More data has been produced in the 21st century than all of human history combined. Yet, are we making better decisions today than in the past? How many poor decisions result from the absence of data? The existence of an overwhelming amount of data has affected how we make decisions, but it has not necessarily improved how we make decisions. To make better decisions, people need good judgment based on data literacy-the ability to extract meaning from data. Including data in the decision-making process can bring considerable clarity in answering our questions. Nevertheless, human beings can become distracted, overwhelmed, and even confused in the presence of too much data. The book presents cautionary tales of what can happen when too much attention is spent on acquiring more data instead of understanding how to best use the data we already have. Data is not produced in a vacuum, and individuals who possess data literacy will understand the environment and incentives in the data-generating process. Readers of this book will learn what questions to ask, what data to pay attention to, and what pitfalls to avoid in order to make better decisions. They will also be less vulnerable to those who manipulate data for misleading purposes.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as in digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges offered by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances on routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
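As a hedged illustration of one routing-and-discovery idea such surveys cover (a consistent-hashing sketch of ours, in the spirit of Chord-style systems; not code from the book): each peer owns a segment of a hash ring, so any peer can determine which node is responsible for a key without a central directory.

```python
# Consistent hashing over a ring, the idea behind many P2P
# routing/discovery schemes (illustrative sketch, not from the book).
import hashlib
from bisect import bisect_right

def h(s: str) -> int:
    """Hash a string onto the ring [0, 2^32)."""
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:4], "big")

peers = ["peer-a", "peer-b", "peer-c", "peer-d"]  # hypothetical node names
ring = sorted((h(p), p) for p in peers)

def lookup(key: str) -> str:
    """The responsible peer is the first one at or after the key's
    position on the ring, wrapping around at the end."""
    points = [pt for pt, _ in ring]
    i = bisect_right(points, h(key)) % len(ring)
    return ring[i][1]

print(lookup("song.mp3"), lookup("paper.pdf"))
```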
Statistics and hypothesis testing are routinely used in areas (such as linguistics) that are traditionally not mathematically intensive. In such fields, when faced with experimental data, many students and researchers tend to rely on commercial packages to carry out statistical data analysis, often without understanding the logic of the statistical tests they rely on. As a consequence, results are often misinterpreted, and users have difficulty in flexibly applying techniques relevant to their own research; instead, they use whatever they happen to have learned. A simple solution is to teach the fundamental ideas of statistical hypothesis testing without using too much mathematics. This book provides a non-mathematical, simulation-based introduction to basic statistical concepts and encourages readers to try out the simulations themselves using the source code and data provided (the freely available programming language R is used throughout). Since the code presented in the text almost always requires the use of previously introduced programming constructs, diligent students also acquire basic programming abilities in R. The book is intended for advanced undergraduate and graduate students in any discipline, although the focus is on linguistics, psychology, and cognitive science. It is designed for self-instruction, but it can also be used as a textbook for a first course on statistics. Earlier versions of the book have been used in undergraduate and graduate courses in Europe and the US.

"Vasishth and Broe have written an attractive introduction to the foundations of statistics. It is concise, surprisingly comprehensive, self-contained and yet quite accessible. Highly recommended." -Harald Baayen, Professor of Linguistics, University of Alberta, Canada

"By using the text students not only learn to do the specific things outlined in the book, they also gain a skill set that empowers them to explore new areas that lie beyond the book's coverage." -Colin Phillips, Professor of Linguistics, University of Maryland, USA
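A hedged sketch of the simulation-based approach the blurb describes (the book itself works in R; this illustrative Python version and its data are ours): a permutation test estimates a p-value by re-randomizing group labels instead of invoking a formula.

```python
# Simulation-based hypothesis testing: a permutation test
# (illustrative data; the book's own examples are in R).
import random

group_a = [4.1, 5.0, 4.8, 5.3, 4.7]
group_b = [5.4, 5.9, 5.6, 6.1, 5.2]
observed = sum(group_b) / len(group_b) - sum(group_a) / len(group_a)

pooled = group_a + group_b
count = 0
trials = 10_000
for _ in range(trials):
    # shuffle the pooled data, pretending group labels are arbitrary
    random.shuffle(pooled)
    diff = (sum(pooled[5:]) / 5) - (sum(pooled[:5]) / 5)
    if abs(diff) >= abs(observed):
        count += 1

# p-value: how often chance alone produces a difference this large
print(f"observed diff = {observed:.2f}, p ~ {count / trials:.4f}")
```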
First of all, I would like to congratulate Gabriella Pasi and Gloria Bordogna for the work they accomplished in preparing this new book in the series "Studies in Fuzziness and Soft Computing." "Recent Issues on the Management of Fuzziness in Databases" is undoubtedly a token of their long-lasting and active involvement in the area of Fuzzy Information Retrieval and Fuzzy Database Systems. This book is most welcome in the area of fuzzy databases, where books are not numerous, although the first works at the crossroads of fuzzy sets and databases were initiated about twenty years ago by L. Zadeh. Only five books have been published since 1995, when the first volume dedicated to fuzzy databases, published in the series "Studies in Fuzziness and Soft Computing" edited by J. Kacprzyk and myself, appeared. Going beyond books strictly speaking, let us also mention the existence of review papers that are part of a couple of handbooks related to fuzzy sets published since 1998. The area known as fuzzy databases covers a range of topics, among which: flexible queries addressed to regular databases; the extension of the notion of a functional dependency; data mining and fuzzy summarization; and querying databases containing imperfect attribute values represented by means of possibility distributions.
Software that covertly monitors user actions, also known as spyware, has become a first-level security threat due to its ubiquity and the difficulty of detecting and removing it. This is especially so for video conferencing, thin-client computing, and Internet cafes. CryptoGraphics: Exploiting Graphics Cards for Security explores the potential for implementing ciphers within GPUs, and describes the relevance of GPU-based encryption to the security of applications involving remote displays. As the processing power of GPUs has increased, research involving the use of GPUs for general-purpose computing has arisen. This work extends such research by considering the use of a GPU as a parallel processor for encrypting data. The authors evaluate the operations found in symmetric and asymmetric key ciphers to determine whether encryption can be programmed in existing GPUs. A detailed description of a GPU-based implementation of AES is provided. The feasibility of GPU-based encryption allows the authors to explore the use of a GPU as a trusted system component: unencrypted display data can be confined to the GPU to avoid exposing it to any malware running on the operating system.
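As a hedged, CPU-side illustration of the cipher the book maps onto graphics hardware (our sketch, using the widely available Python "cryptography" package; the book's actual contribution is running AES in GPU shader code, which this does not reproduce): AES in CTR mode encrypting a display-buffer-like byte string.

```python
# AES encryption of display-like data, sketched on the CPU with the
# "cryptography" package; illustrative stand-in, not the book's
# GPU implementation.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)    # 128-bit AES key
nonce = os.urandom(16)  # CTR-mode initial counter block

framebuffer = b"\x00\x7f\xff" * 1024  # stand-in for raw pixel data

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(framebuffer) + encryptor.finalize()

# The holder of key and nonce (in the book's setting, the GPU)
# can recover the plaintext; the OS sees only ciphertext.
decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == framebuffer
```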