In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that relies on compile-time information about the properties and value ranges of program variables - which covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
This book is about methodological aspects of uncertainty propagation in data processing. Uncertainty propagation is an important problem: while computer algorithms efficiently process data related to many aspects of our lives, most of these algorithms implicitly assume that the numbers they process are exact. In reality, these numbers come from measurements, and measurements are never 100% exact. Because of this, it makes no sense to translate 61 kg into pounds and get the result-as computers do-with 13 digit accuracy. In many cases-e.g., in celestial mechanics-the state of a system can be described by a few numbers: the values of the corresponding physical quantities. In such cases, for each of these quantities, we know (at least) an upper bound on the measurement error. This bound is either provided by the manufacturer of the measuring instrument-or is estimated by the user who calibrates this instrument. However, in many other cases, the description of the system is more complex than a few numbers: we need a function to describe a physical field (e.g., an electromagnetic field); we need a vector in Hilbert space to describe a quantum state; we need a pseudo-Riemannian space to describe physical space-time, etc. To describe and process uncertainty in all such cases, this book proposes a general methodology-a methodology that includes intervals as a particular case. The book is recommended to students and researchers interested in challenging aspects of uncertainty analysis and to practitioners who need to handle uncertainty in such unusual situations.
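The interval case mentioned in the blurb can be illustrated with a few lines of code. This is a minimal sketch of interval uncertainty propagation, not an example from the book: a measured value x with a known error bound d is represented as the interval [x - d, x + d], and arithmetic on intervals yields guaranteed bounds on the result instead of a spuriously precise single number.

```python
def interval(x, d):
    """Measurement x with absolute error bound d -> interval (lo, hi)."""
    return (x - d, x + d)

def add(a, b):
    """Sum of two intervals: endpoints add directly."""
    return (a[0] + b[0], a[1] + b[1])

def scale(a, k):
    """Multiply an interval by a constant k (order of endpoints may flip for k < 0)."""
    lo, hi = a[0] * k, a[1] * k
    return (min(lo, hi), max(lo, hi))

# 61 kg measured with a 0.5 kg error bound, converted to pounds:
kg = interval(61.0, 0.5)
lb = scale(kg, 2.20462)
print(lb)  # roughly (133.4, 135.6) -- nowhere near 13-digit accuracy
```

The point of the example is that the width of the output interval, not the number of printed digits, tells us how much the result can be trusted.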
Recent advances in computing, communication, and data storage have led to an increasing number of large digital libraries publicly available on the Internet. In addition to alphanumeric data, other modalities, including video, play an important role in these libraries. Ordinary techniques will not retrieve the required information from the enormous mass of data stored in digital video libraries. Instead of words, a video retrieval system deals with collections of video records. Therefore, the system is confronted with the problem of video understanding. The system gathers key information from a video in order to allow users to query semantics instead of raw video data or video features. Users expect tools that automatically understand and manipulate the video content in the same structured way as a traditional database manages numeric and textual data. Consequently, content-based search and retrieval of video data becomes a challenging and important problem.
This open access book presents how cutting-edge digital technologies like Big Data, Machine Learning, Artificial Intelligence (AI), and Blockchain are set to disrupt the financial sector. The book illustrates how recent advances in these technologies enable banks, FinTech companies, and financial institutions to collect, process, analyze, and fully leverage the very large amounts of data that are nowadays produced and exchanged in the sector. To this end, the book also describes some of the most popular Big Data, AI and Blockchain applications in the sector, including novel applications in the areas of Know Your Customer (KYC), Personalized Wealth Management and Asset Management, Portfolio Risk Assessment, as well as a variety of novel Usage-based Insurance applications based on Internet-of-Things data. Most of the presented applications have been developed, deployed and validated in real-life digital finance settings in the context of the European Commission funded INFINITECH project, which is a flagship innovation initiative for Big Data and AI in digital finance. This book is ideal for researchers and practitioners in Big Data, AI, banking and digital finance.
Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.
Implementing cryptography requires integers of significant magnitude to resist cryptanalytic attacks. Modern programming languages only provide support for integers which are relatively small and single precision. The purpose of this text is to instruct the reader regarding how to implement efficient multiple precision algorithms.
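The core idea behind multiple precision arithmetic can be sketched briefly. The following is a hypothetical illustration, not code from the book: a large integer is stored as a little-endian array of base-2**32 "digits" (limbs), and schoolbook addition propagates a carry limb by limb.

```python
BASE = 2 ** 32  # each limb holds one 32-bit digit

def mp_add(a, b):
    """Add two little-endian limb arrays, returning the limb array of the sum."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        result.append(s % BASE)   # low 32 bits stay in this limb
        carry = s // BASE         # overflow carries into the next limb
    if carry:
        result.append(carry)
    return result

def to_int(limbs):
    """Convert a little-endian limb array back to a Python integer (for checking)."""
    return sum(d * BASE ** i for i, d in enumerate(limbs))

# (2**32 - 1) + 1 overflows the first limb and carries into a new one:
print(mp_add([2**32 - 1], [1]))  # [0, 1]
```

Real implementations work in a fixed-width machine language such as C, where the modulo and division above reduce to masking and shifting; Python is used here only to keep the sketch self-contained.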
This book highlights the latest advances on the implementation and adaptation of blockchain technologies in real-world scientific, biomedical, and data applications. It presents rapid advancements in life sciences research and development by applying the unique capabilities inherent in distributed ledger technologies. The book unveils the current uses of blockchain in drug discovery, drug and device tracking, real-world data collection, and increased patient engagement used to unlock opportunities to advance life sciences research. This paradigm shift is explored from the perspectives of pharmaceutical professionals, biotechnology start-ups, regulatory agencies, ethical review boards, and blockchain developers. This book enlightens readers about the opportunities to empower and enable data in life sciences.
There are wide-ranging implications in information security beyond national defense. Securing our information has implications for virtually all aspects of our lives, including protecting the privacy of our financial transactions and medical records, facilitating all operations of government, maintaining the integrity of national borders, securing important facilities, ensuring the safety of our food and commercial products, protecting the safety of our aviation system-even safeguarding the integrity of our very identity against theft. Information security is a vital element in all of these activities, particularly as information collection and distribution become ever more connected through electronic information delivery systems and commerce. This book encompasses results of research investigation and technologies that can be used to secure, protect, verify, and authenticate objects and information from theft, counterfeiting, and manipulation by unauthorized persons and agencies. The book has drawn on the diverse expertise in optical sciences and engineering, digital image processing, imaging systems, information processing, mathematical algorithms, quantum optics, computer-based information systems, sensors, detectors, and biometrics to report novel technologies that can be applied to information-security issues. The book is unique because it has diverse contributions from the field of optics, which is a new emerging technology for security, and digital techniques that are very accessible and can be interfaced with optics to produce highly effective security systems.
Based on the Lectures given during the Eurocourse on 'Computing with Parallel Architectures' held at the Joint Research Centre Ispra, Italy, September 10-14, 1990
Modern AI techniques -- especially deep learning -- provide, in many cases, very good recommendations: where a self-driving car should go, whether to give a company a loan, etc. The problem is that not all these recommendations are good -- and since deep learning provides no explanations, we cannot tell which recommendations are good. It is therefore desirable to provide natural-language explanations of numerical AI recommendations. The need to connect natural-language rules and numerical decisions has been known since the 1960s, when the need emerged to incorporate expert knowledge -- described by imprecise words like "small" -- into control and decision making. For this incorporation, a special "fuzzy" technique was invented, which led to many successful applications. This book describes how this technique can help to make AI more explainable. The book can be recommended for students, researchers, and practitioners interested in explainable AI.
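The fuzzy idea described above can be shown in a few lines. This is a generic sketch of the technique, not an example from the book: an imprecise word like "small" is modeled by a membership function that maps a crisp value to a degree of truth in [0, 1], rather than a hard yes/no threshold (the breakpoints 1.0 and 5.0 below are arbitrary illustrative choices).

```python
def small(x, full=1.0, zero=5.0):
    """Degree to which x is 'small': 1 below `full`, 0 above `zero`,
    decreasing linearly in between (a shoulder-shaped membership function)."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

print(small(0.5))  # 1.0 -- definitely small
print(small(3.0))  # 0.5 -- somewhat small
print(small(10))   # 0.0 -- not small at all
```

Rules phrased with such words ("if the error is small, make a small correction") can then be combined numerically, which is what connects expert knowledge in natural language to a numerical control or decision procedure.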
The massive volume of data generated in modern applications can overwhelm our ability to conveniently transmit, store, and index it. For many scenarios, building a compact summary of a dataset that is vastly smaller enables flexibility and efficiency in a range of queries over the data, in exchange for some approximation. This comprehensive introduction to data summarization, aimed at practitioners and students, showcases the algorithms, their behavior, and the mathematical underpinnings of their operation. The coverage starts with simple sums and approximate counts, building to more advanced probabilistic structures such as the Bloom Filter, distinct value summaries, sketches, and quantile summaries. Summaries are described for specific types of data, such as geometric data, graphs, and vectors and matrices. The authors offer detailed descriptions of and pseudocode for key algorithms that have been incorporated in systems from companies such as Google, Apple, Microsoft, Netflix and Twitter.
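One of the summaries named above, the Bloom filter, is compact enough to sketch here. This is an illustrative toy version, not the book's pseudocode: k hash functions set k bits per inserted item, so membership queries may return false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [False] * m  # the entire summary: m bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # All k bits set -> "probably present"; any bit clear -> definitely absent.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("netflix")
print("netflix" in bf)  # True -- inserted items are always found
```

The trade-off is exactly the one the blurb describes: the filter occupies m bits regardless of how large the items are, in exchange for a tunable false-positive rate governed by m, k, and the number of insertions.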
This book constitutes the refereed post-conference proceedings of the IFIP TC 3 Open Conference on Computers in Education, OCCE 2021, held in Tampere, Finland, in August 2021. The 22 full papers and 2 short papers included in this volume were carefully reviewed and selected from 44 submissions. The papers discuss key emerging topics and evolving practices in the area of educational computing research. They are organized in the following topical sections: Digital education across educational institutions; National policies and plans for digital competence; Learning with digital technologies; and Management issues.
Artificial intelligence is changing the world of work. How can HR professionals understand the variety of opportunities AI has created for the HR function and how best to implement these in their organization? This book provides the answers. From using natural language processing to ensure job adverts are free from bias and gendered language to implementing chatbots to enhance the employee experience, artificial intelligence can add value throughout the work of HR professionals. Artificial Intelligence for HR demonstrates how to leverage this potential and use AI to improve efficiency and develop a talented and productive workforce. Outlining the current technology landscape as well as the latest AI developments, this book ensures that HR professionals fully understand what AI is and what it means for HR in practice. Alongside coverage of employee engagement and recruitment, this second edition features new material on applications of AI for virtual work, reskilling and data integrity. Packed with practical advice, research and new and updated case studies from global organizations including Uber, IBM and Unilever, the second edition of Artificial Intelligence for HR will equip HR professionals with the knowledge they need to improve the efficiency of people operations and allow AI solutions to become enhancements for driving business success.
Digital Image Processing with C++ presents the theory of digital image processing, and implementations of algorithms using a dedicated library. Processing a digital image means transforming its content (denoising, stylizing, etc.), or extracting information to solve a given problem (object recognition, measurement, motion estimation, etc.). This book presents the mathematical theories underlying digital image processing, as well as their practical implementation through examples of algorithms implemented in the C++ language, using the free and easy-to-use CImg library. The chapters broadly cover the field of digital image processing and propose practical and functional implementations of each method theoretically described. The main topics covered include filtering in spatial and frequency domains, mathematical morphology, feature extraction and applications to segmentation, motion estimation, multispectral image processing and 3D visualization. Students or developers wishing to discover or specialize in this discipline, teachers and researchers wishing to quickly prototype new algorithms, or develop courses, will all find in this book material to discover image processing or deepen their knowledge in this field.
Many aspects of modern life have become personalized, yet healthcare practices have been lagging behind in this trend. It is now becoming more common to use big data analysis to improve current healthcare and medicinal systems, and offer better health services to all citizens. Applying Big Data Analytics in Bioinformatics and Medicine is a comprehensive reference source that overviews the current state of medical treatments and systems and offers emerging solutions for a more personalized approach to the healthcare field. Featuring coverage on relevant topics that include smart data, proteomics, medical data storage, and drug design, this publication is an ideal resource for medical professionals, healthcare practitioners, academicians, and researchers interested in the latest trends and techniques in personalized medicine.
In this book, Dr. Soofastaei and his colleagues reveal how all mining managers can effectively deploy advanced analytics in their day-to-day operations - one business decision at a time. Most mining companies have a massive amount of data at their disposal. However, they cannot use the stored data in any meaningful way. A powerful new business tool, advanced analytics, enables many mining companies to aggressively leverage their data in key business decisions and processes with impressive results. From statistical analysis to machine learning and artificial intelligence, the authors show how many analytical tools can improve decisions about everything in the mine value chain, from exploration to marketing. Combining the science of advanced analytics with mining industry business solutions, the authors present Advanced Analytics in Mining Engineering as a practical road map and set of tools for unleashing the potential buried in your company's data. The book aims to provide mining executives, managers, and research and development teams with an understanding of the business value and applicability of different analytic approaches, and to help data analytics leads by giving them a business framework in which to assess the value, cost, and risk of potential analytical solutions. In addition, the book will provide the next generation of miners - undergraduate and graduate IT and mining engineering students - with an understanding of data analytics applied to the mining industry. By structuring the chapters in line with the mining value chain, the book provides a clear, enterprise-level view of where and how advanced data analytics can best be applied. This book highlights the potential to better interconnect activities in the mining enterprise.
Furthermore, the book explores the opportunities for optimization and increased productivity offered by better interoperability along the mining value chain - in line with the emerging vision of creating a digital mine with much-enhanced capabilities for modeling, simulation, and the use of digital twins - in line with leading "digital" industries.
The benefits of distributed computing are evidenced by the increased functionality, retrieval capability, and reliability it provides for a number of networked applications. The growth of the Internet into a critical part of daily life has encouraged further study on how data can better be transferred, managed, and evaluated in an ever-changing online environment. Advancements in Distributed Computing and Internet Technologies: Trends and Issues compiles recent research trends and practical issues in the fields of distributed computing and Internet technologies. The book provides advancements on emerging technologies that aim to support the effective design and implementation of service-oriented networks, future Internet environments, and building management frameworks. Research on Internet-based systems design, wireless sensor networks and their application, and next generation distributed systems will inform graduate students, researchers, academics, and industry practitioners of new trends and vital research in this evolving discipline.
This edited book presents the scientific outcomes of the 4th IEEE/ACIS International Conference on Big Data, Cloud Computing, Data Science & Engineering (BCD 2019) which was held on May 29-31, 2019 in Honolulu, Hawaii. The aim of the conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way. Presenting 15 of the conference's most promising papers, the book discusses all aspects (theory, applications and tools) of computer and information science, the practical challenges encountered along the way, and the solutions adopted to solve them.
Covid-19 hit the world unprepared, as the deadliest pandemic of the century. Governments and authorities, as leaders and decision makers fighting the virus, tapped the power of AI and its data analytics models for urgent decision support on a scale never before seen in human history. This book showcases a collection of important data analytics models that were used during the epidemic, and discusses and compares their efficacy and limitations. Readers from both the healthcare industry and academia can gain unique insights into how data analytics models were designed and applied to epidemic data. Taking Covid-19 as a case study, readers - especially those working in similar fields - will be better prepared should a new epidemic arise in the near future.
This book highlights various evolutionary algorithm techniques for various medical conditions and introduces medical applications of evolutionary computation for real-time diagnosis. Evolutionary Intelligence for Healthcare Applications presents how evolutionary intelligence can be used in smart healthcare systems involving big data analytics, mobile health, personalized medicine, and clinical trial data management. It focuses on emerging concepts and approaches and highlights various evolutionary algorithm techniques used for early disease diagnosis, prediction, and prognosis for medical conditions. The book also presents ethical issues and challenges that can occur within the healthcare system. Researchers, healthcare professionals, data scientists, systems engineers, students, programmers, clinicians, and policymakers will find this book of interest.
Through three editions, Cryptography: Theory and Practice has been embraced by instructors and students alike. It offers a comprehensive primer for the subject's fundamentals while presenting the most current advances in cryptography. The authors offer comprehensive, in-depth treatment of the methods and protocols that are vital to safeguarding the seemingly infinite and increasing amount of information circulating around the world. Key Features of the Fourth Edition: New chapter on the exciting, emerging new area of post-quantum cryptography (Chapter 9). New high-level, nontechnical overview of the goals and tools of cryptography (Chapter 1). New mathematical appendix that summarizes definitions and main results on number theory and algebra (Appendix A). An expanded treatment of stream ciphers, including common design techniques along with coverage of Trivium. Interesting attacks on cryptosystems, including: the padding oracle attack; correlation attacks and algebraic attacks on stream ciphers; and an attack on the DUAL-EC random bit generator that makes use of a trapdoor. A treatment of the sponge construction for hash functions and its use in the new SHA-3 hash standard. Methods of key distribution in sensor networks. The basics of visual cryptography, allowing a secure method to split a secret visual message into pieces (shares) that can later be combined to reconstruct the secret. The fundamental techniques of cryptocurrencies, as used in Bitcoin and blockchain. The basics of the new methods employed in messaging protocols such as Signal, including deniability and Diffie-Hellman key ratcheting.
This book focuses on multi-omics big-data integration, data-mining techniques and cutting-edge omics research, in principles and applications, for a deep understanding of Traditional Chinese Medicine (TCM) and diseases from the following aspects: (1) Basics about multi-omics data and analytical methods for TCM and diseases. (2) The needs of omics studies in TCM research, and the basic background of omics research in TCM and disease. (3) Better understanding of multi-omics big-data integration techniques. (4) Better understanding of multi-omics big-data mining techniques, with different applications, to gain the most insight from these omics data for TCM and disease research. (5) TCM preparation quality control, checking both prescribed and unexpected ingredients, including biological and chemical ingredients. (6) TCM preparation source tracking. (7) TCM preparation network pharmacology analysis. (8) TCM analysis data resources, web services, and visualizations. (9) TCM geoherbalism examination and authentic TCM identification. Traditional Chinese Medicine has been in existence for several thousand years, and only in recent decades have we realized that research on TCM could be profoundly boosted by omics technologies. Devised as a book on TCM and disease research in the omics age, this book focuses on data integration and data mining methods for multi-omics research, explaining in detail and with supporting examples the "what", "why" and "how" of omics in TCM-related research. It is an attempt to bridge the gap between TCM-related multi-omics big data and data-mining techniques, for best practice of contemporary bioinformatics and in-depth insights into TCM-related questions.
More data has been produced in the 21st century than all of human history combined. Yet, are we making better decisions today than in the past? How many poor decisions result from the absence of data? The existence of an overwhelming amount of data has affected how we make decisions, but it has not necessarily improved how we make decisions. To make better decisions, people need good judgment based on data literacy-the ability to extract meaning from data. Including data in the decision-making process can bring considerable clarity in answering our questions. Nevertheless, human beings can become distracted, overwhelmed, and even confused in the presence of too much data. The book presents cautionary tales of what can happen when too much attention is spent on acquiring more data instead of understanding how to best use the data we already have. Data is not produced in a vacuum, and individuals who possess data literacy will understand the environment and incentives in the data-generating process. Readers of this book will learn what questions to ask, what data to pay attention to, and what pitfalls to avoid in order to make better decisions. They will also be less vulnerable to those who manipulate data for misleading purposes.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as in digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges offered by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances on routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
"A First Course in Machine Learning by Simon Rogers and Mark Girolami is the best introductory book for ML currently available. It combines rigor and precision with accessibility, starts from a detailed explanation of the basic foundations of Bayesian analysis in the simplest of settings, and goes all the way to the frontiers of the subject such as infinite mixture models, GPs, and MCMC." -Devdatt Dubhashi, Professor, Department of Computer Science and Engineering, Chalmers University, Sweden "This textbook manages to be easier to read than other comparable books in the subject while retaining all the rigorous treatment needed. The new chapters put it at the forefront of the field by covering topics that have become mainstream in machine learning over the last decade." -Daniel Barbara, George Mason University, Fairfax, Virginia, USA "The new edition of A First Course in Machine Learning by Rogers and Girolami is an excellent introduction to the use of statistical methods in machine learning. The book introduces concepts such as mathematical modeling, inference, and prediction, providing 'just in time' the essential background on linear algebra, calculus, and probability theory that the reader needs to understand these concepts." -Daniel Ortiz-Arroyo, Associate Professor, Aalborg University Esbjerg, Denmark "I was impressed by how closely the material aligns with the needs of an introductory course on machine learning, which is its greatest strength...Overall, this is a pragmatic and helpful book, which is well-aligned to the needs of an introductory course and one that I will be looking at for my own students in coming months." -David Clifton, University of Oxford, UK "The first edition of this book was already an excellent introductory text on machine learning for an advanced undergraduate or taught masters level course, or indeed for anybody who wants to learn about an interesting and important field of computer science. 
The additional chapters of advanced material on Gaussian process, MCMC and mixture modeling provide an ideal basis for practical projects, without disturbing the very clear and readable exposition of the basics contained in the first part of the book." -Gavin Cawley, Senior Lecturer, School of Computing Sciences, University of East Anglia, UK "This book could be used for junior/senior undergraduate students or first-year graduate students, as well as individuals who want to explore the field of machine learning...The book introduces not only the concepts but the underlying ideas on algorithm implementation from a critical thinking perspective." -Guangzhi Qu, Oakland University, Rochester, Michigan, USA