This book presents practical as well as conceptual insights into the latest trends, tools, techniques and methodologies of blockchains for the Internet of Things. The decentralised Internet of Things (IoT) not only reduces infrastructure costs, but also provides a standardised peer-to-peer communication model for billions of transactions. However, there are significant security challenges associated with peer-to-peer communication. The decentralised concept of blockchain technology ensures transparent interactions between different parties, which are more secure and reliable thanks to distributed ledgers and proof-of-work consensus algorithms. Blockchains allow trustless peer-to-peer communication and have already proven their worth in the world of financial services. Blockchain can be embedded in IoT systems to deal with the issues of scale, trustworthiness and decentralisation, allowing billions of devices to share the same network without the need for additional resources. This book discusses the latest tools, methodologies and concepts of the decentralised Internet of Things. Each chapter presents an in-depth investigation of the potential of blockchains in the Internet of Things, addressing the state of the art in, and future perspectives of, the decentralised Internet of Things. Further, industry experts, researchers and academics share their ideas and experiences relating to frontier technologies, breakthroughs, and innovative solutions and applications.
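To give a flavor of the proof-of-work consensus idea the blurb mentions, here is a minimal Python sketch of the underlying hash puzzle. It is an illustration only, not any specific IoT blockchain from the book; the transaction string and difficulty are invented.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce so that SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros; hard to find, trivial for peers to verify."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # any peer can re-hash once to verify this nonce
        nonce += 1

# Hypothetical IoT transaction record, mined with a toy difficulty
nonce = proof_of_work("device42->device7:temperature=21.3")
print(nonce)
```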
This book presents the latest advances in photometric 3D reconstruction. It provides the reader with an overview of the state of the art in the field, and of the latest research into both the theoretical foundations of photometric 3D reconstruction and its practical application in several fields (including security, medicine, cultural heritage and archiving, and engineering). These techniques play a crucial role within such emerging technologies as 3D printing, since they permit the direct conversion of an image into a solid object. The book covers both theoretical analysis and real-world applications, highlighting the importance of deepening interdisciplinary skills, and as such will be of interest to both academic researchers and practitioners from the computer vision and mathematical 3D modeling communities, as well as engineers involved in 3D printing. No prior background is required beyond a general knowledge of classical computer vision models, numerical methods for optimization, and partial differential equations.
Recent developments in wireless communications, networking, and embedded systems have driven various innovative Internet of Things (IoT) applications, such as smart cities, mobile healthcare, autonomous driving, and drones. A common feature of these applications is their stringent requirements for low-latency communications. Considering the typically small payload size of IoT applications, it is critically important to reduce the size of overhead messages, e.g., identification information, pilot symbols for channel estimation, and control data. Such low-overhead communications also help to improve the energy efficiency of IoT devices. Recently, structured signal processing techniques have been introduced and developed to reduce the overheads for key design problems in IoT networks, such as channel estimation, device identification, and message decoding. By utilizing underlying system structures, including sparsity and low rank, these methods can achieve significant performance gains. This book provides an overview of four general structured signal processing models: a sparse linear model, a blind demixing model, a sparse blind demixing model, and a shuffled linear model, and discusses their applications in enabling low-overhead communications in IoT networks. Further, it presents practical algorithms based on both convex and nonconvex optimization approaches, as well as theoretical analyses that use various mathematical tools.
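As a flavor of the first of the four models, a minimal sketch of the sparse linear model solved by iterative soft-thresholding (ISTA), a standard convex approach for such problems. The matrix sizes, regularizer, and data below are illustrative assumptions, not taken from the book.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=200):
    """Iterative soft-thresholding for the sparse linear model y = A @ x + noise."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)           # gradient of the least-squares term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))         # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]     # sparse "overhead" signal
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # recovered support, ideally [3 17 42]
```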
This book presents high-quality, original contributions (both theoretical and experimental) on Information Security, Machine Learning, Data Mining and Internet of Things (IoT). It gathers papers presented at ICETIT 2019, the 1st International Conference on Emerging Trends in Information Technology, which was held in Delhi, India, in June 2019. This conference series represents a targeted response to the growing need for research that reports on and assesses the practical implications of IoT and network technologies, AI and machine learning, data analytics and cloud computing, security and privacy, and next generation computing technologies.
Adequate health and health care are no longer possible without proper data supervision from modern machine learning methodologies like cluster models, neural networks, and other data mining methodologies. This book is the first complete overview of machine learning methodologies for the medical and health sector, and it was written as a training companion and as a must-read, not only for physicians and students, but also for anyone involved in the process and progress of health and health care. In this second edition the authors have removed the textual errors from the first edition. Also, the improved tables from the first edition have been replaced with the original tables from the software programs as applied, because, unlike the former, the latter were free of errors and readers were more familiar with them. The main purpose of the first edition was to provide stepwise analyses of the novel methods from data examples, but background information and clinical relevance information may have been somewhat lacking; therefore, each chapter now contains a section entitled "Background Information". Machine learning may be more informative, and may provide better testing sensitivity, than traditional analytic methods. In the second edition, machine learning is applied not only to the analysis of observational clinical data, but also to that of controlled clinical trials. Unlike the first edition, the second edition has drawings in full color, providing a helpful extra dimension to the data analysis. Several machine learning methodologies not yet covered in the first edition, but increasingly important today, have been included in this updated edition, for example negative binomial and Poisson regressions, sparse canonical analysis, Firth's bias-adjusted logistic analysis, omics research, and eigenvalues and eigenvectors.
This book presents Statistical Learning Theory in a detailed and easy-to-understand way, using practical examples, algorithms and source code. It can be used as a textbook in graduate or undergraduate courses, for self-learners, or as a reference on the main theoretical concepts of machine learning. Fundamental concepts of linear algebra and optimization applied to machine learning are provided, as well as source code in R, making the book as self-contained as possible. It starts with an introduction to machine learning concepts and algorithms such as the Perceptron, the Multilayer Perceptron and Distance-Weighted Nearest Neighbors, with examples, in order to provide the foundation the reader needs to understand the bias-variance dilemma, which is the central point of Statistical Learning Theory. Afterwards, we introduce all assumptions and formalize Statistical Learning Theory, allowing the practical study of different classification algorithms. Then, we proceed with concentration inequalities until arriving at the generalization and large-margin bounds, providing the main motivations for Support Vector Machines. From there, we introduce all the optimization concepts needed to implement Support Vector Machines. As a next stage of development, the book finishes with a discussion of SVM kernels as a way, and a motivation, to study data spaces and improve classification results.
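The book's own examples are in R; purely as an illustration of the Perceptron it introduces, here is a minimal Python sketch of the classic update rule on an invented, linearly separable toy data set.

```python
import numpy as np

def perceptron(X, y, epochs=100, lr=1.0):
    """Classic perceptron: labels in {-1, +1}, update on misclassified points only."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:     # misclassified (or on the boundary)
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy separable data: positive class roughly where x1 + x2 > 1.5
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [2., 2.]])
y = np.array([-1, -1, -1, 1, 1])
w, b = perceptron(X, y)
print(np.sign(X @ w + b))  # should reproduce y on separable data
```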
This book provides insights into smart ways of computer log data analysis, with the goal of spotting adversarial actions. It is organized into three major parts with a total of eight chapters, which include a detailed view of existing solutions as well as novel techniques that go far beyond the state of the art. The first part motivates the entire topic, highlights major challenges, trends and design criteria for log data analysis approaches, and surveys and compares the state of the art. The second part introduces concepts that apply character-based, rather than token-based, approaches and thus work on a more fine-grained level. Furthermore, these solutions were designed for online use: they support not only forensic analysis, but also process new log lines as they arrive, in an efficient single-pass manner. An advanced method for time series analysis aims at detecting changes in the overall behavior profile of an observed system, spotting trends and periodicities through log analysis. The third part introduces the design of the AMiner, an advanced open-source component for log data anomaly mining. The AMiner comes with several detectors to spot new events, new parameters, new correlations, new values and unknown value combinations, and can run as a stand-alone solution or as a sensor connected to a SIEM solution. More advanced detectors help to determine the characteristics of variable parts of log lines, specifically the properties of numerical and categorical fields. Detailed examples throughout the book allow the reader to understand and apply the introduced techniques with open-source software. Step-by-step instructions help readers get familiar with the concepts and better comprehend their inner mechanisms. A log test data set is available as a free download, enabling the reader to get the system up and running in no time. This book is designed for researchers working in the field of cyber security, specifically system monitoring, anomaly detection and intrusion detection. Its content will be particularly useful for advanced-level students studying computer science, computer technology, and information systems. Forward-thinking practitioners who would benefit from becoming familiar with advanced anomaly detection methods will also be interested in this book.
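To illustrate the single-pass "new value" idea in this family of detectors, here is a minimal Python sketch. It mimics the concept only; it does not reproduce the AMiner's actual API or configuration, and the log lines are invented.

```python
from collections import defaultdict

class NewValueDetector:
    """Single-pass sketch: flag the first occurrence of a value in a log field."""
    def __init__(self):
        self.seen = defaultdict(set)       # field name -> set of known values

    def process(self, field: str, value: str) -> bool:
        if value in self.seen[field]:
            return False                   # known value, nothing to report
        self.seen[field].add(value)
        return True                        # anomaly candidate: first sighting

det = NewValueDetector()
for line in ["user=alice", "user=bob", "user=alice", "user=mallory"]:
    field, value = line.split("=")
    if det.process(field, value):
        print(f"new value for {field!r}: {value!r}")
```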
This book describes the recent innovation of deep in-memory architectures for realizing AI systems that operate at the edge of energy-latency-accuracy trade-offs. From first principles to lab prototypes, this book provides a comprehensive view of this emerging topic for both the practicing engineer in industry and the researcher in academia. The book is a journey into the exciting world of AI systems in hardware.
This unique volume reviews the latest advances in domain adaptation in the training of machine learning algorithms for visual understanding, offering valuable insights from an international selection of experts in the field. The text presents a diverse selection of novel techniques, covering applications of object recognition, face recognition, and action and event recognition. Topics and features: reviews the domain adaptation-based machine learning algorithms available for visual understanding, and provides a deep metric learning approach; introduces a novel unsupervised method for image-to-image translation, and a video segment retrieval model that utilizes ensemble learning; proposes a unique way to determine which dataset is most useful in the base training, in order to improve the transferability of deep neural networks; describes a quantitative method for estimating the discrepancy between the source and target data to enhance image classification performance; presents a technique for multi-modal fusion that enhances facial action recognition, and a framework for intuition learning in domain adaptation; examines an original interpolation-based approach to address the issue of tracking model degradation in correlation filter-based methods. This authoritative work will serve as an invaluable reference for researchers and practitioners interested in machine learning-based visual recognition and understanding.
This book highlights reliable, valid and practical testing and assessment of interpreting, presenting important developments in China, where testing and assessment have long been a major concern for interpreting educators and researchers, but have remained largely under-reported. The book not only offers theoretical insights into potential issues and problems undermining interpreting assessment, but also describes useful measurement models to address such concerns. Showcasing the latest Chinese research to create rubrics-referenced rating scales, enhance formative assessment practice, and explore (semi-)automated assessment, the book is a valuable resource for educators, trainers and researchers, enabling them to gain a better understanding of interpreting testing and assessment as both a worthwhile endeavor and a promising research area.
Intelligent Computing for Interactive System Design provides a comprehensive resource on what has become the dominant paradigm in designing novel interaction methods, involving gestures, speech, text, touch and brain-controlled interaction, embedded in innovative and emerging human-computer interfaces. These interfaces support ubiquitous interaction with applications and services running on smartphones, wearables, in-vehicle systems, virtual and augmented reality, robotic systems, the Internet of Things (IoT), and many other domains that are now highly competitive, both in commercial and in research contexts. The book presents the crucial theoretical foundations needed by any student, researcher, or practitioner working on novel interface design, with chapters on statistical methods, digital signal processing (DSP), and machine learning (ML). These foundations are followed by chapters that discuss case studies on smart cities, brain-computer interfaces, probabilistic mobile text entry, secure gestures, personal context from mobile phones, adaptive touch interfaces, and automotive user interfaces. The case study chapters also offer an in-depth look at the practical application of DSP and ML methods for processing touch, gesture, biometric, or embedded sensor inputs. A common theme throughout the case studies is ubiquitous support for humans in their daily professional or personal activities. In addition, the book provides walk-through examples of different DSP and ML techniques and their use in interactive systems. Common terms are defined, and information on practical resources (e.g., software tools, data resources) is provided for hands-on project work developing and evaluating multimodal and multi-sensor systems. In a series of in-chapter commentary boxes, an expert on legal and ethical issues explores the professional community's deep emerging concerns about how DSP and ML should be adopted and used in socially appropriate ways, to most effectively advance human performance during ubiquitous interaction with omnipresent computers. This carefully edited collection is written by international experts and pioneers in the fields of DSP and ML. It provides a textbook for students and a reference and technology roadmap for developers and professionals working on interaction design for emerging platforms.
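As a tiny flavor of the DSP-plus-thresholding pipelines such case studies build on, a minimal Python sketch: smooth an accelerometer magnitude signal, then flag a "shake" gesture. The window size, threshold, and synthetic signal are invented for illustration, not taken from the book.

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple FIR smoothing, a standard DSP step before gesture detection."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def detect_shake(accel_magnitude, threshold=1.8):
    """Flag samples where the smoothed acceleration exceeds a tuned threshold."""
    smoothed = moving_average(accel_magnitude)
    return smoothed > threshold

# Synthetic accelerometer magnitudes: quiet, a 'shake' burst, quiet again
rng = np.random.default_rng(1)
signal = np.concatenate([np.full(20, 1.0), np.full(10, 2.5), np.full(20, 1.0)])
signal += 0.1 * rng.standard_normal(signal.size)
print(np.flatnonzero(detect_shake(signal)))  # indices inside the burst
```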
Algorithms are now widely employed to make decisions that have increasingly far-reaching impacts on individuals and society as a whole ("algorithmic governance"), which could potentially lead to manipulation, biases, censorship, social discrimination, violations of privacy, property rights, and more. This has sparked a global debate on how to regulate AI and robotics ("governance of algorithms"). This book discusses both of these key aspects: the impact of algorithms, and the possibilities for future regulation.
This book presents methods and approaches used to identify the true author of a doubtful document or text excerpt. It provides a broad introduction to all text categorization problems (like authorship attribution, psychological traits of the author, detecting fake news, etc.) grounded in stylistic features. Specifically, machine learning models as valuable tools for verifying hypotheses or revealing significant patterns hidden in datasets are presented in detail. Stylometry is a multi-disciplinary field combining linguistics with both statistics and computer science. The content is divided into three parts. The first, which consists of the first three chapters, offers a general introduction to stylometry, its potential applications and limitations. Further, it introduces the ongoing example used to illustrate the concepts discussed throughout the remainder of the book. The four chapters of the second part are devoted to computer science, with a focus on machine learning models for solving stylometric problems. Several general strategies used to identify, extract, select, and represent stylistic markers are explained. As deep learning represents an active field of research, information on neural network models and word embeddings applied to stylometry is provided, as well as a general introduction to the deep learning approach to solving stylometric questions. In turn, the third part illustrates the application of the previously discussed approaches in real cases: an authorship attribution problem, seeking to discover the secret hand behind the nom de plume Elena Ferrante, an Italian writer known worldwide for her My Brilliant Friend saga; author profiling, to identify whether a set of tweets was generated by a bot or a human being and, in the latter case, whether the author is a man or a woman; and an exploration of stylistic variations over time using US political speeches covering a period of ca. 230 years. A solutions-based approach is adopted throughout the book, and explanations are supported by examples written in R. To complement the main content and discussions on stylometric models and techniques, examples and datasets are freely available at the author's GitHub website.
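The book's examples are written in R; as a language-agnostic illustration of one standard stylometric representation, character n-gram frequency profiles, here is a minimal Python sketch with invented sample texts. Attribution methods then compare such profiles with a distance measure or a classifier.

```python
from collections import Counter

def char_ngram_profile(text: str, n: int = 3, top: int = 10):
    """Relative frequencies of the most common character n-grams,
    a common stylistic-marker representation in stylometry."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: count / total for g, count in grams.most_common(top)}

profile_a = char_ngram_profile("the cat sat on the mat near the hat")
profile_b = char_ngram_profile("colorless green ideas sleep furiously")
print(profile_a)
print(profile_b)
```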
Statistical learning and analysis techniques have become extremely important today, given the tremendous growth in the size of heterogeneous data collections and the ability to process them even from physically distant locations. Recent advances in the field of machine learning provide a strong framework for robust learning from diverse corpora and continue to impact a variety of research problems across multiple scientific disciplines. The aim of this handbook is to familiarize beginners as well as experts with some of the recent techniques in this field. The handbook is divided into two sections, Theory and Applications, covering machine learning, data analytics, biometrics, and document recognition and security, with an emphasis on applications-oriented techniques.
This book presents an overview of the latest smart transportation systems, IoV connectivity frameworks, issues of security and safety in VANETs, future developments in the IoV, technical solutions to address key challenges, and other related topics. A connected vehicle is a vehicle equipped with Internet access and wireless LAN, which allows the sharing of data through various devices, inside as well as outside the vehicle. The ad-hoc network of such vehicles, often referred to as a VANET or the Internet of Vehicles (IoV), is an application of IoT technology, and may be regarded as an integration of three types of networks: inter-vehicle, intra-vehicle, and vehicular mobile networks. VANETs involve several varieties of vehicle connectivity mechanisms, including vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), vehicle-to-cloud (V2C), and vehicle-to-everything (V2X). According to one survey, approximately 380 million connected cars were expected on the roads by 2020. The IoV is an important aspect of the new vision for smart transportation. The book is divided into three parts: the evolution of the IoV (basic concepts, principles, technologies, and architectures); the connectivity of vehicles in the IoT (protocols, frameworks, and methodologies); and connected vehicle environments and advanced topics in VANETs (security and safety issues, autonomous operations, machine learning, sensor technology, and AI). By providing scientific contributions and workable suggestions from researchers and practitioners in the areas of IoT, IoV, and security, this valuable reference aims to extend the body of existing knowledge.
This book focuses on how modern business firms use social data, specifically Online Social Networks (OSNs), as part of the infrastructure for a number of emerging applications such as personalized recommendation systems, opinion analysis, expertise retrieval, and computational advertising. It identifies how, in such applications, social data offers a plethora of benefits that enhance the decision-making process. The book highlights that business intelligence applications tend to focus on structured data; to understand and analyse social big data, however, data must be aggregated from various sources and presented in a plausible format. Big Social Data (BSD) exhibits all the typical properties of big data: wide physical distribution, diversity of formats, non-standard data models, and independently managed and heterogeneous semantics; it is, moreover, especially valuable for its marketing opportunities. The book reviews the current state-of-the-art approaches to big social data analytics and presents different methods for inferring value from social data. It further examines several areas of research that benefit from the propagation of social data. In particular, the book presents various technical approaches that produce data analytics capable of handling big data features and effective at filtering out unsolicited data and inferring value. These approaches comprise advanced technical solutions able to capture huge amounts of generated data, scrutinise the collected data to eliminate unwanted data, measure the quality of the inferred data, and transform the amended data for further analysis. Furthermore, the book presents solutions for deriving knowledge and sentiments from BSD and for providing social data classification and prediction. The approaches in this book also incorporate several technologies such as semantic discovery, sentiment analysis, affective computing and machine learning. The book is enriched with numerous illustrations, such as tables, graphs and charts, incorporating advanced visualisation tools in an accessible and attractive display.
Advances in Domain Adaptation Theory gives current, state-of-the-art results on transfer learning, with a particular focus on domain adaptation from a theoretical point of view. The book begins with a brief overview of the most popular concepts used to provide generalization guarantees, including sections on Vapnik-Chervonenkis (VC), Rademacher, PAC-Bayesian, and robustness- and stability-based bounds. The book then explains the domain adaptation problem and describes the four major families of theoretical results in the literature, including divergence-based bounds. Next, PAC-Bayesian bounds are discussed, including the original PAC-Bayesian bounds for domain adaptation and their updated version. Additional sections present generalization guarantees based on the robustness and stability properties of the learning algorithm.
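As a flavor of the divergence-based family surveyed here, a representative bound from the domain adaptation literature (Ben-David et al.): the target error of a hypothesis is controlled by its source error, a divergence between the two domains, and the best achievable joint error. This is a well-known result of that family, not necessarily the book's exact statement.

```latex
% For any h in the hypothesis class H:
\epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right)
  \;+\; \lambda,
\qquad
\lambda = \min_{h' \in \mathcal{H}} \bigl[\, \epsilon_S(h') + \epsilon_T(h') \,\bigr]
```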
Ecologists and natural resource managers are charged with making complex management decisions in the face of a rapidly changing environment resulting from climate change, energy development, urban sprawl, invasive species and globalization. Advances in Geographic Information System (GIS) technology, digitization, online data availability, historic legacy datasets, remote sensors and the ability to collect data on animal movements via satellite and GPS have given rise to large, highly complex datasets. These datasets could be utilized for making critical management decisions, but are often "messy" and difficult to interpret. Basic artificial intelligence algorithms (i.e., machine learning) are powerful tools that are shaping the world and must be taken advantage of in the life sciences. In ecology, machine learning algorithms are critical to helping resource managers synthesize information to better understand complex ecological systems. Machine learning has a wide variety of powerful applications, with three general uses of particular interest to ecologists: (1) exploring data to gain system knowledge and generate new hypotheses, (2) predicting ecological patterns in space and time, and (3) recognizing patterns for ecological sampling. Machine learning can be used to make predictive assessments even when relationships between variables are poorly understood. When traditional techniques fail to capture the relationship between variables, effective use of machine learning can unearth and capture previously unattainable insights into an ecosystem's complexity. Currently, many ecologists do not utilize machine learning as part of the scientific process. This volume highlights how machine learning techniques can complement the traditional methodologies currently applied in this field.
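As an illustration of use (2), predicting ecological patterns: a minimal Python sketch assuming scikit-learn is available. The species-presence rule, predictors, and data are invented stand-ins for field observations, not examples from the book.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
# Hypothetical per-site predictors: elevation (m), canopy cover (%), distance to water (km)
X = rng.uniform([0, 0, 0], [3000, 100, 10], size=(200, 3))
# Invented presence rule standing in for field data: present under canopy, near water
y = ((X[:, 1] > 40) & (X[:, 2] < 3)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.feature_importances_)  # which predictors drive the predicted pattern
```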
Medical imaging is an indispensable tool for modern healthcare. Machine learning plays an essential role in the medical imaging field, with applications including medical image analysis, computer-aided diagnosis, organ/lesion segmentation, image fusion, image-guided therapy, and image annotation and retrieval. Machine Learning in Computer-Aided Diagnosis: Medical Imaging Intelligence and Analysis provides a comprehensive overview of machine learning research and technology in medical decision-making based on medical images. This book covers major technical advancements and research findings in the field of computer-aided diagnosis (CAD). As it demonstrates the practical applications of CAD, this book is a useful reference for professors in engineering and medical schools, students in engineering and applied science, medical students, medical engineers, researchers in industry, academia and health science, radiologists, cardiologists, surgeons, and healthcare professionals.
Modern life increasingly relies on digital technology, which in turn runs on mathematics. However, this underlying math is hidden from us. That is mostly a good thing, since we do not want to be solving equations and calculating fractions just to get things done in our everyday business. But the mathematical details do matter for anyone who wants to understand how stuff works, or who wishes to create something new in the jungle of apps and algorithms. This book takes a look at the mathematical models behind weather forecasting, climate change prediction, artificial intelligence, medical imaging and computer graphics. The reader is expected to have only a curious mind; technical math skills are not needed to enjoy this text.
This compendium discusses adaptive enterprise architecture (AEA) as information to support decisions and actions for desired efficiency and innovation (outcomes and impacts). This comprehensive, information-driven approach uses data, analytics, and intelligence (AI/ML) for architecting intelligent enterprises. The unique reference text includes practical artefacts and vivid examples based on both practice and research. It benefits chief information officers, chief data officers, chief enterprise architects, enterprise architects, business architects, information architects, data architects, and anyone who has an interest in adaptive and digital enterprise architecture.
"Exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it." -John Horgan "If you want to know about AI, read this book...It shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence." -Peter Thiel Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. A computer scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to reveal why this is a profound mistake. AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don't correlate data sets. We make conjectures, informed by context and experience. And we haven't a clue how to program that kind of intuitive reasoning, which lies at the heart of common sense. Futurists insist AI will soon eclipse the capacities of the most gifted mind, but Larson shows how far we are from superintelligence-and what it would take to get there. "Larson worries that we're making two mistakes at once, defining human intelligence down while overestimating what AI is likely to achieve...Another concern is learned passivity: our tendency to assume that AI will solve problems and our failure, as a result, to cultivate human ingenuity." -David A. Shaywitz, Wall Street Journal "A convincing case that artificial general intelligence-machine-based intelligence that matches our own-is beyond the capacity of algorithmic machine learning because there is a mismatch between how humans and machines know what they know." -Sue Halpern, New York Review of Books |
You may like...

- Data Analytics on Graphs - Ljubisa Stankovic, Danilo P. Mandic, … (Hardcover, R3,602 / Discovery Miles 36,020)
- Deep Learning Applications - Pier Luigi Mazzeo, Paolo Spagnolo (Hardcover, R3,519 / Discovery Miles 35,190)
- Cyber-Physical System Solutions for… - Vanamoorthy Muthumanikandan, Anbalagan Bhuvaneswari, … (Hardcover, R7,578 / Discovery Miles 75,780)
- Deep Learning Applications: In Computer… - Qi Xuan, Yun Xiang, … (Hardcover, R2,985 / Discovery Miles 29,850)
- Machine Learning Techniques for Pattern… - Mohit Dua, Ankit Kumar Jain (Hardcover, R9,088 / Discovery Miles 90,880)