The Science of Deep Learning emerged from courses taught by the author that have provided thousands of students with training and experience for their academic studies, and prepared them for careers in deep learning, machine learning, and artificial intelligence at top institutions in industry and academia. The book begins by covering the foundations of deep learning, followed by key deep learning architectures. Subsequent parts on generative models and reinforcement learning may be used as part of a deep learning course or as part of a course on each topic. The book includes state-of-the-art topics such as Transformers, graph neural networks, variational autoencoders, and deep reinforcement learning, with a broad range of applications. The appendices provide equations for computing gradients in backpropagation and optimization, and best practices in scientific writing and reviewing. The text presents an up-to-date guide to the field built upon clear visualizations using a unified notation and equations, lowering the barrier to entry for the reader. The accompanying website provides complementary code and hundreds of exercises with solutions.
A short and accessible introduction to AI and cars, written by leading experts.
Welcome to the proceedings of the 2010 International Conference on u- and e-Service, Science and Technology (UNESST 2010) - one of the partnering events of the Second International Mega-Conference on Future Generation Information Technology (FGIT 2010). UNESST brings together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of u- and e-services and their applications, with links to computational sciences, mathematics and information technology. In total, 1,630 papers were submitted to FGIT 2010 from 30 countries, which includes 223 papers submitted to UNESST 2010. The submitted papers went through a rigorous reviewing process: 395 of the 1,630 papers were accepted for FGIT 2010, while 50 papers were accepted for UNESST 2010. Of the 50 papers, 8 were selected for the special FGIT 2010 volume published by Springer in the LNCS series. 27 papers are published in this volume and 15 papers were withdrawn due to technical reasons. We would like to acknowledge the great effort of the UNESST 2010 International Advisory Board and members of the International Program Committee, as well as all the organizations and individuals who supported the idea of publishing this volume of proceedings, including SERSC and Springer. Also, the success of the conference would not have been possible without the huge support from our sponsors and the work of the Chairs and Organizing Committee.
Are Amazon Alexa and Google Home limited to our bedrooms, or can they be used in hospitals? Do you envision a future where physicians work hand-in-hand with voice AI to revolutionize healthcare delivery? In the near future, clinical smart assistants will be able to automate many manual hospital tasks, and this will be only the beginning of the changes to come. Voice AI is the future of physician-machine interaction, and this Focus book provides invaluable insight on its next frontier. It begins with a brief history and current implementations of voice-activated assistants and illustrates why clinical voice AI is at its inflection point. Next, it describes how the authors built the world's first smart surgical assistant using an off-the-shelf smart home device, outlining the implementation process in the operating room. From quantitative metrics to surgeons' feedback, the authors discuss the feasibility of this technology in the surgical setting. The book then provides an in-depth development guideline for engineers and clinicians desiring to develop their own smart surgical assistants. Lastly, the authors delve into their experiences in translating voice AI into the clinical setting and reflect on the challenges and merits of this pursuit. The world's first smart surgical assistant has not only reduced surgical time but eliminated major touch points in the operating room, resulting in positive, significant implications for patient outcomes and surgery costs. From clinicians eager for insight on the next digital health revolution to developers interested in building the next clinical voice AI, this book offers a guide for both audiences.
Modern Computational Techniques for Engineering Applications presents recent computational techniques used in the advancement of modern grids with the integration of non-conventional energy sources like wind and solar energy. It covers data analytics tools for smart cities, smart towns, and smart computing for sustainable development. This book:
- Discusses the importance of renewable energy source applications such as wind turbines and solar panels for electrical grids.
- Presents optimization-based computing techniques like fuzzy logic, neural networks, and genetic algorithms that enhance computational speed.
- Showcases cloud computing tools and methodologies such as cybersecurity testbeds and data security for better accuracy of data.
- Covers novel concepts in artificial neural networks, fuzzy systems, machine learning, and artificial intelligence techniques.
- Highlights application-based case studies including cloud computing, optimization methods, and the Industrial Internet of Things.
The book comprehensively introduces modern computational techniques, starting from basic tools to highly advanced procedures, and their applications. It further highlights artificial neural networks, fuzzy systems, machine learning, and artificial intelligence techniques and how they form the basis for algorithms. It presents application-based case studies on cloud computing, optimization methods, blockchain technology, fog and edge computing, and the Industrial Internet of Things. It will be a valuable resource for senior undergraduates, graduate students, and academic researchers in diverse fields, including electrical engineering, electronics and communications engineering, and computer engineering.
Web engineering is now a well-established and mature field of research with strong relationships with other disciplines such as software engineering, human-computer interaction, and artificial intelligence. Web engineering has also been recognized as a multidisciplinary field, which is growing fast together with the growth of the World Wide Web. This evolution is manifested in the richness of the Web Engineering Conferences, which attract researchers, practitioners, educators, and students from different countries. This volume contains the proceedings of the 10th International Conference on Web Engineering (ICWE 2010), which was held in Vienna, Austria, in July 2010. The ICWE conferences are among the most essential events of the Web engineering community. This fact is manifested both by the number of accomplished researchers that support the conference series with their work and contributions as well as by the continuing patronage of several international organizations dedicated to promoting research and scientific progress in the field of Web engineering. ICWE 2010 followed conferences in San Sebastian, Spain; Yorktown Heights, NY, USA; Como, Italy; Palo Alto, CA, USA; Sydney, Australia; Munich, Germany; Oviedo, Spain; Santa Fe, Argentina; and Caceres, Spain. This year's call for research papers attracted a total of 120 submissions from 39 countries spanning all continents of the world, with good coverage of all the different aspects of Web engineering. Topics addressed by the contributions included areas ranging from more traditional fields such as model-driven Web engineering, Web services, performance, search, Semantic Web, quality, and testing to novel domains such as the Web 2.0, rich Internet applications, and mashups.
- Curating Social Data
- Summarizing Social Data
- Analyzing Social Data
- Social Data Analytics Applications: Trust, Recommender Systems, Cognitive Analytics
Welcome to the proceedings of the 2010 International Conference on Future Generation Communication and Networking (FGCN 2010) - one of the partnering events of the Second International Mega-Conference on Future Generation Information Technology (FGIT 2010). FGCN brings together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of communication and networking, including their links to computational sciences, mathematics and information technology. In total, 1,630 papers were submitted to FGIT 2010 from 30 countries, which includes 150 papers submitted to the FGCN 2010 Special Sessions. The submitted papers went through a rigorous reviewing process: 395 of the 1,630 papers were accepted for FGIT 2010, while 70 papers were accepted for the FGCN 2010 Special Sessions. Of the 70 papers, 6 were selected for the special FGIT 2010 volume published by Springer in the LNCS series. Fifty-one papers are published in this volume, and 13 papers were withdrawn due to technical reasons. We would like to acknowledge the great effort of the FGCN 2010 International Advisory Board and Special Session Co-chairs, as well as all the organizations and individuals who supported the idea of publishing this volume of proceedings, including SERSC and Springer. Also, the success of the conference would not have been possible without the huge support from our sponsors and the work of the Organizing Committee.
The fundamental mathematical tools needed to understand machine learning include linear algebra, analytic geometry, matrix decompositions, vector calculus, optimization, probability and statistics. These topics are traditionally taught in disparate courses, making it hard for data science or computer science students, or professionals, to efficiently learn the mathematics. This self-contained textbook bridges the gap between mathematical and machine learning texts, introducing the mathematical concepts with a minimum of prerequisites. It uses these concepts to derive four central machine learning methods: linear regression, principal component analysis, Gaussian mixture models and support vector machines. For students and others with a mathematical background, these derivations provide a starting point to machine learning texts. For those learning the mathematics for the first time, the methods help build intuition and practical experience with applying mathematical concepts. Every chapter includes worked examples and exercises to test understanding. Programming tutorials are offered on the book's web site.
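One of the four methods the blurb mentions, linear regression, has a closed-form least-squares solution that illustrates how the linear algebra feeds directly into a machine learning method. The sketch below is a minimal generic illustration, not code from the book's own programming tutorials:

```python
# Ordinary least squares: solve for weights w minimizing ||Xb @ w - y||^2,
# equivalent to the normal equations w = (X^T X)^{-1} X^T y.
import numpy as np

def fit_linear_regression(X, y):
    # Append a bias column of ones so the model also learns an intercept.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    # lstsq solves the least-squares problem; numerically safer than
    # explicitly inverting X^T X.
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])   # generated by y = 2x + 1 exactly
w = fit_linear_regression(X, y)
print(w)  # slope ≈ 2.0, intercept ≈ 1.0
```

Principal component analysis, Gaussian mixture models, and support vector machines admit similarly compact derivations once the underlying matrix decompositions and optimization tools are in place.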
This book deals with computational anatomy, an emerging discipline recognized in medical science as a derivative of conventional anatomy. It is also a completely new research area on the boundaries of several sciences and technologies, such as medical imaging, computer vision, and applied mathematics. Computational Anatomy Based on Whole Body Imaging highlights the underlying principles, basic theories, and fundamental techniques in computational anatomy, which are derived from conventional anatomy, medical imaging, computer vision, and applied mathematics, in addition to various examples of applications in clinical data. The book will cover topics on the basics and applications of the new discipline. Drawing from areas in multidisciplinary fields, it provides comprehensive, integrated coverage of innovative approaches to computational anatomy. As well, Computational Anatomy Based on Whole Body Imaging serves as a valuable resource for researchers including graduate students in the field and a connection with the innovative approaches that are discussed. Each chapter has been supplemented with concrete examples of images and illustrations to facilitate understanding even for readers unfamiliar with computational anatomy.
The first of its kind, this anthology in the burgeoning field of technology ethics offers students and other interested readers 32 chapters, each written in an accessible and lively manner specifically for this volume. The chapters are conveniently organized into five parts: I. Perspectives on Technology and its Value II. Technology and the Good Life III. Computer and Information Technology IV. Technology and Business V. Biotechnologies and the Ethics of Enhancement A hallmark of the volume is multidisciplinary contributions both (1) in "analytic" and "continental" philosophies and (2) across several hot-button topics of interest to students, including the ethics of autonomous vehicles, psychotherapeutic phone apps, and bio-enhancement of cognition and in sports. The volume editors, both teachers of technology ethics, have compiled a set of original and timely chapters that will advance scholarly debate and stimulate fascinating and lively classroom discussion. Downloadable eResources (available from www.routledge.com/9781032038704) provide a glossary of all relevant terms, sample classroom activities/discussion questions relevant for chapters, and links to Stanford Encyclopedia of Philosophy entries and other relevant online materials. Key Features: Examines the most pivotal ethical questions around our use of technology, equipping readers to better understand technology's promises and perils. Explores throughout a central tension raised by technological progress: maintaining social stability vs. pursuing dynamic social improvements. Provides ample coverage of the pressing issues of free speech and productive online discourse.
With a strong numerical and computational focus, this book serves as an essential resource on methods for functional neuroimaging analysis, diffusion-weighted image analysis, and longitudinal VBM analysis. It includes analysis methods for four MRI image modalities. The first part covers PWI methods, which are the basis for understanding cerebral flow in the human brain. The second part, the book's core, covers fMRI methods in three specific domains: first-level analysis, second-level analysis, and effective connectivity studies. The third part covers the analysis of diffusion-weighted images, i.e., DTI, QBI and DSI image analysis. Finally, the book covers (longitudinal) VBM methods and their application to Alzheimer's disease studies.
Welcome to the proceedings of the 2010 International Conferences on Database Theory and Application (DTA 2010), and Bio-Science and Bio-Technology (BSBT 2010) - two of the partnering events of the Second International Mega-Conference on Future Generation Information Technology (FGIT 2010). DTA and BSBT bring together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of databases, data mining and biomedicine, including their links to computational sciences, mathematics and information technology. In total, 1,630 papers were submitted to FGIT 2010 from 30 countries, which includes 175 papers submitted to DTA/BSBT 2010. The submitted papers went through a rigorous reviewing process: 395 of the 1,630 papers were accepted for FGIT 2010, while 40 papers were accepted for DTA/BSBT 2010. Of the 40 papers, 6 were selected for the special FGIT 2010 volume published by Springer in the LNCS series. 31 papers are published in this volume, and 3 papers were withdrawn due to technical reasons. We would like to acknowledge the great effort of the DTA/BSBT 2010 International Advisory Boards and members of the International Program Committees, as well as all the organizations and individuals who supported the idea of publishing this volume of proceedings, including SERSC and Springer. Also, the success of these two conferences would not have been possible without the huge support from our sponsors and the work of the Chairs and Organizing Committee.
This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering in algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016.
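The recursive style of processing described above can be illustrated with the simplest possible case: a scalar Kalman-filter update, where each new band measurement refines a running estimate without ever revisiting earlier bands. This is a generic textbook recursion offered for illustration, not the author's algorithms:

```python
# Scalar Kalman-filter measurement update: the estimate and its variance are
# refined recursively, so earlier measurements never need to be reprocessed.
def kalman_update(x_est, p_est, z, r):
    # Kalman gain balances trust in the running estimate vs. the new measurement.
    k = p_est / (p_est + r)
    x_new = x_est + k * (z - x_est)   # corrected estimate
    p_new = (1.0 - k) * p_est         # reduced uncertainty
    return x_new, p_new

x, p = 0.0, 1.0                    # initial estimate and variance
for z in [1.2, 0.9, 1.1, 1.0]:     # successive measurements, e.g. band by band
    x, p = kalman_update(x, p, z, r=0.1)
print(round(x, 3))  # → 1.024
```

Each pass through the loop consumes one measurement and discards it, which is exactly the property that makes recursive processing attractive for hyperspectral data arriving band by band.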
The text focuses on the theory, design, and implementation of the Internet of Things (IoT) in modern communication systems. It will be useful to senior undergraduates, graduate students, and researchers in diverse domains including electrical engineering, electronics and communications engineering, computer engineering, and information technology. Features:
- Presents all the necessary information on the Internet of Things in modern computing
- Examines antenna integration challenges and constraints in Internet of Things devices
- Discusses advanced Internet of Things networks and advanced controllers required for modern architecture
- Explores security and privacy challenges for the Internet of Things-based health care system
- Covers implementation of Internet of Things security protocols such as MQTT, Advanced Message Queuing Protocol, XMPP, and DSS
The text addresses the issues and challenges in implementing communication and security protocols for IoT in modern computing. It further highlights the applications of IoT in diverse areas including remote health monitoring, remote monitoring of vehicle data and environmental characteristics, Industry 4.0, 5G communications, and next-gen IoT networks. The text presents case studies on IoT in modern digital computing. It will serve as an ideal reference text for senior undergraduates, graduate students, and academic researchers in diverse domains including electrical engineering, electronics and communications engineering, computer engineering, and information technology.
This book offers readers an essential introduction to the fundamentals of digital image processing. Pursuing a signal processing and algorithmic approach, it makes the fundamentals of digital image processing accessible and easy to learn. It is written in a clear and concise manner with a large number of 4 x 4 and 8 x 8 examples, figures and detailed explanations. Each concept is developed from the basic principles and described in detail with equal emphasis on theory and practice. The book is accompanied by a companion website that provides several MATLAB programs for the implementation of image processing algorithms. The book also offers comprehensive coverage of the following topics: Enhancement, Transform processing, Restoration, Registration, Reconstruction from projections, Morphological image processing, Edge detection, Object representation and classification, Compression, and Color processing.
With digital automation becoming ubiquitous, the relationship between man and machine is being redefined. This book, through a focus on America, identifies the tension this relationship has produced, and how it has divided America socially, politically, and economically, ultimately breeding two fundamentally incompatible nations within one: the "forgotten America" and "elite America." This book enables the reader to visualize the changes brought by automation on our producer and buyer identities, and suggests policy changes that global leaders could adopt to deal with the increasing discord. The book is heavily dependent on a few fundamental concepts of both economics and sociology, such as globalization, labor economics, and cultural homogenization. The book is ideally suited to students and academics researching political economics and sociology, with focuses on globalization, unemployment, and the social impacts of technological advances.
This book constitutes the thoroughly refereed post-conference proceedings of the First International Joint Conference on Knowledge Discovery, Knowledge Engineering, and Knowledge Management, IC3K 2009, held in Funchal, Madeira, Portugal, in October 2009. This book includes revised and extended versions of a strict selection of the best papers presented at the conference; 27 revised full papers together with 3 invited lectures were carefully reviewed and selected from 369 submissions. In line with the three covered conferences KDIR 2009, KEOD 2009, and KMIS 2009, the papers are organized in topical sections on knowledge discovery and information retrieval, knowledge engineering and ontology development, and knowledge management and information sharing.
Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment, machines large and small; all affect the very essence of our daily lives. However, this interaction has not always been efficient or easy and has at times turned fairly hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE illustrated by examples. Understanding the complexities and functions of the human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is a critical reference for students, designers, and engineers in a wide variety of disciplines.
Deep learning has revealed ways to create algorithms for applications that we never dreamed were possible. For software developers, the challenge lies in taking cutting-edge technologies from R&D labs through to production. Deep Learning Design Patterns is here to help. In it, you'll find deep learning models presented in a unique new way: as extendable design patterns you can easily plug-and-play into your software projects. Written by Google deep learning expert Andrew Ferlitsch, it's filled with the latest deep learning insights and best practices from his work with Google Cloud AI. Each valuable technique is presented in a way that's easy to understand and filled with accessible diagrams and code samples.
About the technology: You don't need to design your deep learning applications from scratch! By viewing cutting-edge deep learning models as design patterns, developers can speed up their creation of AI models and improve model understandability for both themselves and other users.
About the book: Deep Learning Design Patterns distills models from the latest research papers into practical design patterns applicable to enterprise AI projects. Using diagrams, code samples, and easy-to-understand language, Google Cloud AI expert Andrew Ferlitsch shares insights from state-of-the-art neural networks. You'll learn how to integrate design patterns into deep learning systems from some amazing examples, including a real-estate program that can evaluate house prices just from uploaded photos and a speaking AI capable of delivering live sports broadcasting. Building on your existing deep learning knowledge, you'll quickly learn to incorporate the very latest models and techniques into your apps as idiomatic, composable, and reusable design patterns.
What's inside:
- Internal functioning of modern convolutional neural networks
- Procedural reuse design pattern for CNN architectures
- Models for mobile and IoT devices
- Composable design pattern for automatic learning methods
- Assembling large-scale model deployments
- Complete code samples and example notebooks
- Accompanying YouTube videos
About the reader: For machine learning engineers familiar with Python and deep learning.
About the author: Andrew Ferlitsch is an expert on computer vision and deep learning at Google Cloud AI Developer Relations. He was formerly a principal research scientist for 20 years at Sharp Corporation of Japan, where he amassed 115 US patents and worked on emerging technologies in telepresence, augmented reality, digital signage, and autonomous vehicles. In his present role, he reaches out to developer communities, corporations and universities, teaching deep learning and evangelizing Google's AI technologies.
The 9th International Conference on Entertainment Computing (ICEC 2010) was held in September 2010 in Seoul, Korea. After Pittsburgh (2008) and Paris (2009), the event returned to Asia. The conference venue was the COEX Exhibition Hall in one of the most vivid and largest cities of the world. This amazing mega-city was a perfect location for the conference. Seoul is on the one hand a metropolitan area with modern industries, universities and great economic power. On the other hand, it is also a place with a very fascinating historical and cultural background. It bridges the past and the future as well as east and west. Entertainment computing also aims at building bridges from technology to leisure, education, culture and work. Entertainment computing at its core has a strong focus on computer games. However, it is not only about computer games. The last ICEC conferences have shown that entertainment computing is a much wider field. For instance, technology developed for games can be used for a wide range of applications such as therapy or education. Moreover, entertainment does not necessarily have to be understood as games. Entertainment computing finds its way to stage performances and all sorts of new interactive installations.
This volume:
* Uses the Coronavirus pandemic to explore the link between news sentiment and global financial markets
* Shows how the COVID-19 crisis differs from the Global Financial Crisis of 2008
* Focuses on the noise vs. signal in news sentiment
* Will be invaluable for business professionals, bankers, media professionals, and investment consultants
The complementary nature of physically-based and data-driven models in their demand for physical insight and historical data leads to the notion that the predictions of a physically-based model can be improved, and the associated uncertainty systematically reduced, through the conjunctive use of a data-driven model of the residuals. The objective of this thesis is to minimise the inevitable mismatch between physically-based models and the actual processes, as described by the mismatch between predictions and observations. Principles based on information theory are used to detect the presence and nature of residual information in model errors that might help to develop a data-driven model of the residuals, by treating the gap between the process and its (physically-based) model as a separate process. The complementary modelling approach is applied to various hydrodynamic and hydrological models to forecast the expected errors and accuracy, using neural network and fuzzy rule-based models. Complementary modelling offers the opportunity of incorporating processes and data that are not considered by the model, without affecting the routine operation of physically-based models. The possibility that information may be obtained which will help to improve the physically-based model is also demonstrated.
The idea of this book is to establish a new scientific discipline, "noology," under which a set of fundamental principles are proposed for the characterization of both naturally occurring and artificial intelligent systems. The methodology adopted in Principles of Noology for the characterization of intelligent systems, or "noological systems," is a computational one, much like that of AI. Many AI devices such as predicate logic representations, search mechanisms, heuristics, and computational learning mechanisms are employed, but they are recast in a totally new framework for the characterization of noological systems. The computational approach in this book provides a quantitative and high-resolution understanding of noological processes, and at the same time the principles and methodologies formulated are directly implementable in AI systems. In contrast to traditional AI, which ignores motivational and affective processes, under the paradigm of noology, motivational and affective processes are central to the functioning of noological systems, and their roles in noological processes are elucidated in detailed computational terms. In addition, a number of novel representational and learning mechanisms are proposed, and ample examples and computer simulations are provided to show their applications. These include rapid effective causal learning (a novel learning mechanism that allows an AI/noological system to learn causality with a small number of training instances), learning of scripts that enables knowledge chunking and rapid problem solving, and learning of heuristics that further accelerates problem solving. Semantic grounding allows an AI/noological system to "truly understand" the meaning of the knowledge it encodes. This issue is extensively explored. This highly informative book provides novel and deep insights into intelligent systems and is particularly relevant to both researchers and students of AI and the cognitive sciences.
This book focuses on the role of soft-computing-based electromagnetic computational engines in design and optimization of a wide range of electromagnetic applications. In addition to the theoretical background of metamaterials and soft-computing techniques, the book discusses novel electromagnetic applications such as tensor analysis for invisibility cloaking, metamaterial structures for cloaking applications, broadband radar absorbers, and antennas. The book will prove to be a valuable resource for academics and professionals, as well as military researchers working in the area of metamaterials.