Welcome to Loot.co.za!
Image Modeling and Retrieval; E. Vicario. Efficient and Effective Nearest Neighbor Search in a Medical Image Database of Tumor Shapes; F. Korn, et al. Shape-Similarity-Based Retrieval in Image Databases; R. Mehrotra, J.E. Gary. Color Angular Indexing and Image Retrieval; G.D. Finlayson, et al. Indexing Color-Texture Image Patterns; A.D. Ventura, et al. Iconic Indexing for Visual Databases; Q-L. Zhang, S-K. Chang. Using Weighted Spatial Relationships in Retrieval by Visual Contents; A. Del Bimbo, et al. Index.
This book develops two key machine learning principles: the semi-supervised paradigm and learning with interdependent data. It reveals new applications, primarily web related, that transgress the classical machine learning framework through learning with interdependent data. The book traces how the semi-supervised paradigm and the learning to rank paradigm emerged from new web applications, leading to a massive production of heterogeneous textual data. It explains how semi-supervised learning techniques are widely used, but only allow a limited analysis of the information content and thus do not meet the demands of many web-related tasks. Later chapters deal with the development of learning methods for ranking entities in a large collection with respect to a precise information need. In some cases, learning a ranking function can be reduced to learning a classification function over the pairs of examples. The book proves that this task can be efficiently tackled in a new framework: learning with interdependent data. Researchers and professionals in machine learning will find these new perspectives and solutions valuable. Learning with Partially Labeled and Interdependent Data is also useful for advanced-level students of computer science, particularly those focused on statistics and learning.
This book will teach you the basics of Streamlit, a Python-based application framework used to build interactive dashboards and machine learning web apps. Streamlit reduces development time for web-based application prototypes of data and machine learning models. As you'll see, Streamlit helps develop data-enhanced analytics, build dynamic user experiences, and showcase data for data science and machine learning models. Beginner's Guide to Streamlit with Python begins with the basics of Streamlit by demonstrating how to build a basic application and advances to visualization techniques and their features. Next, it covers the various aspects of a typical Streamlit web application, and explains how to manage flow control and status elements. You'll also explore performance optimization techniques necessary for data modules in a Streamlit application. Following this, you'll see how to deploy Streamlit applications on various platforms. The book concludes with a few prototype natural language processing apps with computer vision implemented using Streamlit. After reading this book, you will understand the concepts, functionalities, and performance of Streamlit, and be able to develop dynamic Streamlit web-based data and machine learning applications of your own.
What You Will Learn:
- How to start developing web applications using Streamlit
- What Streamlit's components are
- Media elements in Streamlit
- How to visualize data using various interactive and dynamic Python libraries
- How to implement models in Streamlit web applications
Who This Book Is For: Professionals working in data science and machine learning domains who want to showcase and deploy their work in a web application, with no prior knowledge of web development.
The fundamental mathematical tools needed to understand machine learning include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, probability and statistics. These topics are traditionally taught in disparate courses, making it hard for data science or computer science students, or professionals, to efficiently learn the mathematics. This self-contained textbook bridges the gap between mathematical and machine learning texts, introducing the mathematical concepts with a minimum of prerequisites. It uses these concepts to derive four central machine learning methods: linear regression, principal component analysis, Gaussian mixture models and support vector machines. For students and others with a mathematical background, these derivations provide a starting point to machine learning texts. For those learning the mathematics for the first time, the methods help build intuition and practical experience with applying mathematical concepts. Every chapter includes worked examples and exercises to test understanding. Programming tutorials are offered on the book's web site.
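To make the first of those four methods concrete, here is a minimal, self-contained sketch (not taken from the book) of one-variable linear regression via its least-squares closed form; the data and the function name are invented for illustration:

```python
# Least-squares fit of y = a*x + b for a single predictor.
# Closed form: a = cov(x, y) / var(x), b = mean(y) - a * mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Noise-free data on the line y = 2x + 1 recovers the coefficients exactly.
slope, intercept = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The same closed form is the one-dimensional special case of the normal equations derived in texts like this one.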
Introduction to Algorithms for Data Mining and Machine Learning introduces the essential ideas behind all key algorithms and techniques for data mining and machine learning, along with optimization techniques. Its strong formal mathematical approach, well selected examples, and practical software recommendations help readers develop confidence in their data modeling skills so they can process and interpret data for classification, clustering, curve-fitting and predictions. Masterfully balancing theory and practice, it is especially useful for those who need relevant, well explained, but not rigorous (proofs based) background theory and clear guidelines for working with big data.
THIS ISN'T AN EPISODE OF BLACK MIRROR. THIS. IS. THE. FUTURE. The problem of online disinformation is only getting worse. Social media may well play a role in the 2020 US presidential election and other major political events. But that doesn't even begin to describe what future propaganda will look like. As Samuel Woolley shows, we will soon be navigating new technologies such as human-like automated voice systems, machine learning, 'deep-fake' AI-edited videos and images, interactive memes, virtual reality and augmented reality. In stories both deeply researched and compellingly written, Woolley describes this future, and explains how the technology can be manipulated, who might control it and its impact on political strategy. Finally, Woolley proposes strategic responses to this threat with the ultimate goal of empowering activists and pushing technology builders to design for democracy. We may not be able to alter how the internet was used to challenge democracy in years past but we can follow the signals to prevent manipulation in the future - and to use these powerful new tools not to control people but to empower them.
Embedded Analytics is one of the hottest trends in business intelligence right now. It's being used in multiple ways to improve decision making, provide faster insights, gain competitive advantages and grow revenue. Over the last 10 years, data analytics and data visualization have become essential components of an enterprise information strategy. Nevertheless, despite this recognition, the adoption of data analytics has remained remarkably static - perhaps reaching no more than thirty percent of potential users. This book explores the most important techniques for taking that adoption further: embedding analytics into the workflow of our everyday operations.
Effective decision-making that trades off constraints and conflicting multiple objectives, under rapid technological development, massive data generation, and extreme volatility, is of paramount importance to organizations competing on time today. While agility is a crucial issue, firms have been increasingly relying on evidence-based decision-making through intelligent decision support systems driven by computational intelligence and automation to achieve a competitive advantage. The decisions are no longer confined to a specific functional area. Instead, business organizations today find actionable insight for formulating future courses of action by integrating multiple objectives and perspectives. Therefore, multi-objective decision-making plays a critical role in businesses and industries. In this regard, Operations Research (OR) models and their applications enable firms to derive optimum solutions subject to various constraints and/or objectives while considering multiple functional areas of the organization together. Hence, researchers and practitioners have extensively applied OR models to solve various organizational issues related to manufacturing, service, supply chain and logistics management, human resource management, finance, and market analysis, among others. Further, OR models driven by AI provide intelligent decision-support frameworks for achieving sustainable development goals. The present issue provides a unique platform to showcase the contributions of leading international experts on production systems and business from academia, industry, and government to discuss issues in intelligent manufacturing, operations management, financial management, supply chain management, and Industry 4.0 in the Artificial Intelligence era.
Some of the general (but not specific) scopes of this proceeding entail OR models such as Optimization and Control, Combinatorial Optimization, Queuing Theory, Resource Allocation Models, Linear and Nonlinear Programming Models, Multi-objective and Multi-attribute Decision Models, and Statistical Quality Control, along with AI, Bayesian Data Analysis, Machine Learning, and Econometrics, and their applications vis-à-vis AI and data-driven production management, marketing and retail management, financial management, human resource management, operations management, smart manufacturing and Industry 4.0, supply chain and logistics management, digital supply networks, healthcare administration, inventory management, consumer behavior, security analysis, portfolio management, and sustainability. The present issue shall be of interest to the faculty members, students, and scholars of various engineering and social science institutions and universities, along with the practitioners and policymakers of different industries and organizations.
Transfer learning deals with how systems can quickly adapt themselves to new situations, tasks and environments. It gives machine learning systems the ability to leverage auxiliary data and models to help solve target problems when there is only a small amount of data available. This makes such systems more reliable and robust, keeping the machine learning model faced with unforeseeable changes from deviating too much from expected performance. At an enterprise level, transfer learning allows knowledge to be reused so experience gained once can be repeatedly applied to the real world. For example, a pre-trained model that takes account of user privacy can be downloaded and adapted at the edge of a computer network. This self-contained, comprehensive reference text describes the standard algorithms and demonstrates how these are used in different transfer learning paradigms. It offers a solid grounding for newcomers as well as new insights for seasoned researchers and developers.
During the past half-century, exponential families have attained a position at the center of parametric statistical inference. Theoretical advances have been matched, and more than matched, in the world of applications, where logistic regression by itself has become the go-to methodology in medical statistics, computer-based prediction algorithms, and the social sciences. This book is based on a one-semester graduate course for first year Ph.D. and advanced master's students. After presenting the basic structure of univariate and multivariate exponential families, their application to generalized linear models including logistic and Poisson regression is described in detail, emphasizing geometrical ideas, computational practice, and the analogy with ordinary linear regression. Connections are made with a variety of current statistical methodologies: missing data, survival analysis and proportional hazards, false discovery rates, bootstrapping, and empirical Bayes analysis. The book connects exponential family theory with its applications in a way that doesn't require advanced mathematical preparation.
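As a hedged illustration of the logistic-regression GLM the blurb calls the go-to methodology (not code from the book), the following sketch fits a one-predictor model by gradient ascent on the Bernoulli log-likelihood; all names and data are invented:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    # Gradient ascent on the Bernoulli log-likelihood with a single
    # predictor plus intercept: p(y=1 | x) = sigmoid(w*x + b).
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(w * x + b)  # the GLM score contribution
            gw += err * x
            gb += err
        w += lr * gw
        b += lr * gb
    return w, b

# Separable toy data: negatives below zero, positives above.
w, b = fit_logistic([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
```

The `y - sigmoid(...)` residual form of the gradient is exactly the exponential-family structure the book emphasizes: for canonical links, the score is "observed minus expected".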
Contemporary research in science and engineering is seeking to harness the versatility and sustainability of living organisms. By exploiting natural principles, researchers hope to create new kinds of technology that are self-repairing, adaptable, and robust, and to invent a new class of machines that are perceptive, social, emotional, perhaps even conscious. This is the realm of the 'living machine'. Living machines can be divided into two types: biomimetic systems, that harness the principles discovered in nature and embody them in new artifacts, and biohybrid systems in which biological entities are coupled with synthetic ones. Living Machines: A handbook of research in biomimetic and biohybrid systems surveys this flourishing area of research, capturing the current state of play and pointing to the opportunities ahead. Promising areas in biomimetics include self-organization, biologically inspired active materials, self-assembly and self-repair, learning, memory, control architectures and self-regulation, locomotion in air, on land or in water, perception, cognition, control, and communication. Drawing on these advances the potential of biomimetics is revealed in devices that can harvest energy, grow or reproduce, and in animal-like robots that range from synthetic slime molds, to artificial fish, to humanoids. Biohybrid systems is a relatively new field, with exciting and largely unknown potential, but one that is likely to shape the future of humanity. This book surveys progress towards new kinds of biohybrid such as robots that merge electronic neurons with biological tissue, micro-scale machines made from living cells, prosthetic limbs with a sense of touch, and brain-machine interfaces that allow robotic devices to be controlled by human thought. 
The handbook concludes by exploring some of the impacts that living machine technologies could have on both society and the individual, exploring questions about how we will see and understand ourselves in a world in which the line between the natural and the artificial is increasingly blurred. With contributions from leading researchers from science, engineering, and the humanities, this handbook will be of broad interest to undergraduate and postgraduate students. Researchers in the areas of computational modeling and engineering, including artificial intelligence, machine learning, artificial life, biorobotics, neurorobotics, and human-machine interfaces will find Living Machines an invaluable resource.
'Absorbing, mind-enlarging, studded with insights ... This could have significant real-world results' Sunday Times Humanity's greatest feat is our incredible ability to learn. Even in their first year, infants acquire language, visual and social knowledge at a rate that surpasses the best supercomputers. But how, exactly, do our brains learn? In How We Learn, leading neuroscientist Stanislas Dehaene delves into the psychological, neuronal, synaptic and molecular mechanisms of learning. Drawing on case studies of children who learned despite huge difficulty and trauma, he explains why youth is such a sensitive period, during which brain plasticity is maximal, but also assures us that our ability to learn continues into adulthood. We can all enhance our learning and memory at any age and 'learn to learn' by taking maximal advantage of the four pillars of the brain's learning algorithm: attention, active engagement, error feedback and consolidation. The human brain is an extraordinary machine. Its ability to process information and adapt to circumstances by reprogramming itself is unparalleled, and it remains the best source of inspiration for recent developments in artificial intelligence. How We Learn sits at the boundary of computer science, neurobiology, cognitive psychology and education to explain how learning really works and how to make the best use of the brain's learning algorithms - and even improve them - in our schools and universities as well as in everyday life.
This book provides a comprehensive explanation of precision (i.e., personalized) healthcare and explores how it can be advanced through artificial intelligence (AI) and other data-driven technologies. From improving the diagnosis, treatment, and monitoring of many medical conditions to the effective implementation of precise patient care, this book will help you understand datasets produced from digital health technologies and IoT and teach you how to employ analytical methods such as convolutional neural networks and deep learning to analyze that data. You'll also see how this data-driven approach can enhance and democratize value-based healthcare delivery. Additionally, you'll learn how the convergence of AI and precision health is revolutionizing healthcare, including some of the most difficult challenges facing precision medicine, such as ethics, bias, privacy, and health equity. Precision Health and Artificial Intelligence provides the groundwork for clinicians, engineers, bioinformaticians, and healthcare enthusiasts to apply AI to healthcare.
What You Will Learn:
- Understand the components required to facilitate precision health and personalized care
- Apply and implement precision health systems
- Overcome the challenges of delivering precision healthcare at scale
- Reconcile ethical and moral implications of delivering precision healthcare
- Gain insight into the hurdles providers face while implementing precision healthcare
Who This Book Is For: Healthcare professionals, clinicians, engineers, bioinformaticians, chief information officers (CIOs), and students
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance - simulation and sampling, as well as experimental design and data collection - that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
How can machine learning help the design of future communication networks - and how can future networks meet the demands of emerging machine learning applications? Discover the interactions between two of the most transformative and impactful technologies of our age in this comprehensive book. First, learn how modern machine learning techniques, such as deep neural networks, can transform how we design and optimize future communication networks. Accessible introductions to concepts and tools are accompanied by numerous real-world examples, showing you how these techniques can be used to tackle longstanding problems. Next, explore the design of wireless networks as platforms for machine learning applications - an overview of modern machine learning techniques and communication protocols will help you to understand the challenges, while new methods and design approaches will be presented to handle wireless channel impairments such as noise and interference, to meet the demands of emerging machine learning applications at the wireless edge.
This book teaches the practical implementation of various concepts for time series analysis and modeling with Python through problem-solution-style recipes, starting with data reading and preprocessing. It begins with the fundamentals of time series forecasting using statistical modeling methods like AR (autoregressive), MA (moving-average), ARMA (autoregressive moving-average), and ARIMA (autoregressive integrated moving-average). Next, you'll learn univariate and multivariate modeling using different open-source packages like fbprophet, statsmodels, and sklearn. You'll also gain insight into classic machine learning-based regression models like random forest, XGBoost, and LightGBM for forecasting problems. The book concludes by demonstrating the implementation of deep learning models (LSTMs and ANN) for time series forecasting. Each chapter includes several code examples and illustrations. After finishing this book, you will have a foundational understanding of various concepts relating to time series and their implementation in Python.
What You Will Learn:
- Implement various techniques in time series analysis using Python
- Utilize statistical modeling methods such as AR (autoregressive), MA (moving-average), ARMA (autoregressive moving-average), and ARIMA (autoregressive integrated moving-average) for time series forecasting
- Understand univariate and multivariate modeling for time series forecasting
- Forecast using machine learning and deep learning techniques such as GBM and LSTM (long short-term memory)
Who This Book Is For: Data scientists, machine learning engineers, and software developers interested in time series analysis.
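To give a flavor of the AR family of models mentioned above, here is a minimal pure-Python sketch (not from the book, which uses packages like statsmodels) of fitting an AR(1) coefficient by least squares and forecasting with it; the series and function names are invented:

```python
def fit_ar1(series):
    # Least-squares estimate of phi in x[t] = phi * x[t-1] + noise
    # (a zero-mean series is assumed for simplicity).
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def forecast_ar1(last_value, phi, horizon):
    # Iterated one-step-ahead forecasts from the last observed value.
    preds = []
    for _ in range(horizon):
        last_value = phi * last_value
        preds.append(last_value)
    return preds

# A noiseless AR(1) path with phi = 0.5 recovers the coefficient exactly.
series = [8.0, 4.0, 2.0, 1.0, 0.5]
phi = fit_ar1(series)
```

Library implementations add intercepts, higher orders, and proper inference; this sketch only shows the core regression-on-the-lag idea.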
The book discusses major technical advances and research findings in the field of machine intelligence in medical image analysis. It examines the latest technologies that have been implemented in clinical practice, such as computational intelligence in computer-aided diagnosis, biological image analysis, and computer-aided surgery and therapy. This book provides insights into the basic science involved in processing, analysing, and utilising all aspects of advanced computational intelligence in medical decision-making based on medical imaging.
This book is a guide to productionizing AI solutions using best-of-breed cloud services with workarounds to lower costs. Supplemented with step-by-step instructions covering data import through wrangling to partitioning and modeling through to inference and deployment, and augmented with plenty of Python code samples, the book has been written to accelerate the process of moving from script or notebook to app. From an initial look at the context and ecosystem of AI solutions today, the book drills down from high-level business needs into best practices, working with stakeholders, and agile team collaboration. From there you'll explore data pipeline orchestration, machine and deep learning, including working with and finding shortcuts using artificial neural networks such as AutoML and AutoAI. You'll also learn about the increasing use of NoLo UIs through AI application development, industry case studies, and finally a practical guide to deploying containerized AI solutions. The book is intended for those whose role demands overcoming budgetary barriers or constraints in accessing cloud credits to undertake the often difficult process of developing and deploying an AI solution.
What You Will Learn:
- Develop and deliver production-grade AI in one month
- Deploy AI solutions at a low cost
- Work around Big Tech dominance and develop MVPs on the cheap
- Create demo-ready solutions without overly complex Python scripts/notebooks
Who This Book Is For: Data scientists and AI consultants with programming skills in Python who are driven to succeed in AI.
An intuitive and accessible text explaining the fundamentals and applications of graph signal processing. Requiring only an elementary understanding of linear algebra, it covers both basic and advanced topics, including node domain processing, graph signal frequency, sampling, and graph signal representations, as well as how to choose a graph. Understand the basic insights behind key concepts and learn how graphs can be associated to a range of specific applications across physical, biological and social networks, distributed sensor networks, image and video processing, and machine learning. With numerous exercises and Matlab examples to help put knowledge into practice, and a solutions manual available online for instructors, this unique text is essential reading for graduate and senior undergraduate students taking courses on graph signal processing, signal processing, information processing, and data analysis, as well as researchers and industry professionals.
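The book's own exercises use Matlab; purely as an illustration here (not code from the book), a short Python sketch of the combinatorial graph Laplacian and the quadratic-form smoothness measure central to graph signal processing:

```python
def laplacian(n, edges):
    # Combinatorial graph Laplacian L = D - A for an undirected graph
    # given as a list of (i, j) edges over nodes 0..n-1.
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    return L

def smoothness(L, x):
    # The quadratic form x^T L x equals the sum over edges of
    # (x_i - x_j)^2; small values mean the signal varies slowly
    # across the graph (low "graph frequency").
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

# Path graph 0-1-2: a constant signal is maximally smooth.
L = laplacian(3, [(0, 1), (1, 2)])
```

The eigenvectors of this matrix provide the graph Fourier basis used throughout the subject.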
This text provides deep and comprehensive coverage of the mathematical background for data science, including machine learning, optimal recovery, compressed sensing, optimization, and neural networks. In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text takes readers on an engaging itinerary through the theory supporting the field. Altogether, twenty-seven lecture-length chapters with exercises provide all the details necessary for a solid understanding of key topics in data science. While the book covers standard material on machine learning and optimization, it also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressed sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical background will appreciate the appendices that provide more background on some of the more abstract concepts.
This third edition expands on the original material. Large portions of the text have been reviewed and clarified. More emphasis is devoted to machine learning, including more modern concepts and examples. This book provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). It starts with an introduction to probability theory and basic statistics, mainly intended as a refresher from readers' advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. Following this, the author discusses Monte Carlo methods with emphasis on techniques like Markov Chain Monte Carlo, and the combination of measurements, introducing the best linear unbiased estimator. More advanced concepts and applications are gradually presented, including unfolding and regularization procedures, culminating in the chapter devoted to discoveries and upper limits. The reader learns through many applications in HEP where hypothesis testing plays a major role; calculations of the look-elsewhere effect are also presented. Many worked-out examples help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical concepts to actual data.
Every day we interact with machine learning systems offering individualized predictions for our entertainment, social connections, purchases, or health. These involve several modalities of data, from sequences of clicks to text, images, and social interactions. This book introduces common principles and methods that underpin the design of personalized predictive models for a variety of settings and modalities. The book begins by revising 'traditional' machine learning models, focusing on adapting them to settings involving user data, then presents techniques based on advanced principles such as matrix factorization, deep learning, and generative modeling, and concludes with a detailed study of the consequences and risks of deploying personalized predictive systems. A series of case studies in domains ranging from e-commerce to health plus hands-on projects and code examples will give readers understanding and experience with large-scale real-world datasets and the ability to design models and systems for a wide range of applications.
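As a hedged sketch of the matrix-factorization principle the blurb names (not code from the book), here is a tiny latent-factor recommender trained by stochastic gradient descent on squared error; the ratings data, names, and hyperparameters are invented:

```python
import random

def factorize(ratings, k=2, lr=0.05, reg=0.02, epochs=500, seed=0):
    # SGD on squared error for R ~ U @ V^T, the classic latent-factor
    # recommender. `ratings` is a list of (user, item, value) triples.
    rng = random.Random(seed)
    users = sorted({u for u, _, _ in ratings})
    items = sorted({i for _, i, _ in ratings})
    U = {u: [rng.uniform(-0.1, 0.1) for _ in range(k)] for u in users}
    V = {i: [rng.uniform(-0.1, 0.1) for _ in range(k)] for i in items}
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(a * b for a, b in zip(U[u], V[i]))
            err = r - pred
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)  # L2-regularized step
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

# Two users with opposite tastes over two items.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 5.0)]
U, V = factorize(ratings)
pred = sum(a * b for a, b in zip(U[0], V[0]))
```

Production systems add biases, implicit feedback, and much larger latent dimensions, but the alternating user/item updates shown here are the core of the technique.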
Understand advanced data analytics concepts such as time series and principal component analysis with ETL, supervised learning, and PySpark using Python. This book covers architectural patterns in data analytics, text and image classification, optimization techniques, natural language processing, and computer vision in the cloud environment. Generic design patterns in Python programming are clearly explained, emphasizing architectural practices such as hot potato anti-patterns. You'll review recent advances in databases such as Neo4j, Elasticsearch, and MongoDB. You'll then study feature engineering in images and texts while implementing business logic, and see how to build machine learning and deep learning models using transfer learning. Advanced Analytics with Python, 2nd edition features a chapter on clustering with a neural network, regularization techniques, and algorithmic design patterns in data analytics with reinforcement learning. Finally, the recommender system in PySpark explains how to optimize models for a specific application.
What You'll Learn:
- Build intelligent systems for enterprise
- Review time series analysis, classifications, regression, and clustering
- Explore supervised learning, unsupervised learning, reinforcement learning, and transfer learning
- Use cloud platforms like GCP and AWS in data analytics
- Understand design patterns in Python
Who This Book Is For: Data scientists and software developers interested in the field of data analytics.
Machine Learning under Resource Constraints is a three-volume work addressing novel machine learning algorithms that are challenged by high-throughput data, by high dimensions, or by complex structures of the data. Resource constraints are given by the relation between the demands for processing the data and the capacity of the computing machinery. The resources are runtime, memory, communication, and energy. Hence, modern computer architectures play a significant role. Novel machine learning algorithms are optimized with regard to minimal resource consumption. Moreover, learned predictions are executed on diverse architectures to save resources. It provides a comprehensive overview of the novel approaches to machine learning research that consider resource constraints, as well as the application of the described methods in various domains of science and engineering. Volume 2 covers machine learning for knowledge discovery in particle and astroparticle physics. Their instruments, e.g., particle detectors or telescopes, gather petabytes of data. Here, machine learning is necessary not only to process the vast amounts of data and to detect the relevant examples efficiently, but also as part of the knowledge discovery process itself. The physical knowledge is encoded in simulations that are used to train the machine learning models. At the same time, the interpretation of the learned models serves to expand the physical knowledge. This results in a cycle of theory enhancement supported by machine learning.
This book presents the refereed proceedings of the 6th International Conference on Advanced Machine Learning Technologies and Applications (AMLTA 2021), held in Cairo, Egypt, during March 22-24, 2021, and organized by the Scientific Research Group of Egypt (SRGE). The papers cover current research on Artificial Intelligence Against COVID-19, Internet of Things Healthcare Systems, Deep Learning Technology, Sentiment Analysis, Cyber-Physical Systems, Health Informatics, Data Mining, Power and Control Systems, Business Intelligence, Social Media, Control Design, and Smart Systems.