This book provides a broad overview of the benefits of a Systems Engineering design philosophy in architecting complex systems composed of artificial intelligence (AI), machine learning (ML) and humans situated in chaotic environments. The major topics include emergence, verification and validation of systems using AI/ML, and human systems integration to develop robust and effective human-machine teams, where the machines may have varying degrees of autonomy due to the sophistication of their embedded AI/ML. The chapters not only describe what has been learned, but also raise questions that must be answered to further advance the general Science of Autonomy. The science of how humans and machines operate as a team requires insights from, among others, disciplines such as the social sciences, national and international jurisprudence, ethics and policy, and sociology and psychology. The social sciences inform how context is constructed, how trust is affected when humans and machines depend upon each other, and how human-machine teams need a shared language of explanation. National and international jurisprudence determines legal responsibilities for non-trivial human-machine failures, ethical standards shape global policy, and sociology provides a basis for understanding team norms across cultures. Insights from psychology may help us to understand the negative impact on humans if AI/ML-based machines begin to outperform their human teammates and consequently diminish their value or importance. This book invites professionals and the curious alike to witness a new frontier open as the Science of Autonomy emerges.
This book is intended to give researchers and practitioners in the cross-cutting fields of artificial intelligence, machine learning (AI/ML) and cyber security up-to-date and in-depth knowledge of recent techniques for addressing the vulnerabilities of AI/ML systems to attacks from malicious adversaries. The ten chapters in this book, written by eminent researchers in AI/ML and cyber security, span diverse yet inter-related topics, including game-playing AI and game theory as defenses against attacks on AI/ML systems; methods for effectively addressing vulnerabilities of AI/ML operating in large, distributed environments, such as the Internet of Things (IoT), with diverse data modalities; and techniques to enable AI/ML systems to interact intelligently with humans who may be malicious adversaries and/or benign teammates. Readers of this book will be equipped with definitive information on recent developments suitable for countering adversarial threats in AI/ML systems, towards making them operate in a safe, reliable and seamless manner.
This volume explores the intersection of robust intelligence (RI) and trust in autonomous systems across multiple contexts among autonomous hybrid systems, where hybrids are arbitrary combinations of humans, machines and robots. To better understand the relationships between artificial intelligence (AI) and RI in a way that promotes trust between autonomous systems and human users, this book explores the underlying theory, mathematics, computational models, and field applications. It uniquely unifies the fields of RI and trust and frames them in a broader context, namely the effective integration of human-autonomous systems. A description of the current state of the art in RI and trust introduces the research work in this area. With this foundation, the chapters further elaborate on key research areas and gaps that are at the heart of effective human-systems integration, including workload management, human-computer interfaces, team integration and performance, advanced analytics, behavior modeling, training, and, lastly, test and evaluation. Written by international leading researchers from across the field of autonomous systems research, Robust Intelligence and Trust in Autonomous Systems dedicates itself to thoroughly examining the challenges and trends of systems that exhibit RI, the fundamental implications of RI in developing trusted relationships with present and future autonomous systems, and the effective human systems integration that must result for trust to be sustained. Contributing authors: David W. Aha, Jenny Burke, Joseph Coyne, M.L. Cummings, Munjal Desai, Michael Drinkwater, Jill L. Drury, Michael W. Floyd, Fei Gao, Vladimir Gontar, Ayanna M. Howard, Mo Jamshidi, W.F. Lawless, Kapil Madathil, Ranjeev Mittu, Arezou Moussavi, Gari Palmer, Paul Robinette, Behzad Sadrfaridpour, Hamed Saeidi, Kristin E. Schaefer, Anne Selwyn, Ciara Sibley, Donald A. Sofge, Erin Solovey, Aaron Steinfeld, Barney Tannahill, Gavin Taylor, Alan R. Wagner, Yue Wang, Holly A. Yanco, and Dan Zwillinger.
Distributed Intelligent Systems: A Coordination Perspective comprehensively answers commonly asked questions about coordination in agent-oriented distributed systems. Characterizing the state-of-the-art research in the field of coordination with regard to the development of distributed agent-oriented systems is a particularly complex endeavour; while existing books deal with specific aspects of coordination, the major contribution of this book lies in the attempt to provide an in-depth review covering a wide range of issues regarding multi-agent coordination in Distributed Artificial Intelligence. Key features:
- Unveils the lack of coherence and order that characterizes the area of research pertaining to coordination of distributed intelligent systems
- Examines coordination models, frameworks, strategies and techniques to enable the development of distributed intelligent agent-oriented systems
- Provides specific recommendations to realize more widespread deployment of agent-based systems
This book explores how Artificial Intelligence (AI), by leading to an increase in the autonomy of machines and robots, is offering opportunities for an expanded but uncertain impact on society by humans, machines, and robots. To help readers better understand the relationships between AI, autonomy, humans and machines that will help society reduce human errors in the use of advanced technologies (e.g., airplanes, trains, cars), this edited volume presents a wide selection of the underlying theories, computational models, experimental methods, and field applications. While other literature deals with these topics individually, this book unifies the fields of autonomy and AI, framing them in the broader context of effective integration for human-autonomous machine and robotic systems. The contributions, written by world-class researchers and scientists, elaborate on key research topics at the heart of effective human-machine-robot-systems integration. These topics include, for example, computational support for intelligence analyses; the challenge of verifying today's and future autonomous systems; comparisons between today's machines and autism; implications of human information interaction on artificial intelligence and errors; systems that reason; the autonomy of machines, robots, buildings; and hybrid teams, where hybrid reflects arbitrary combinations of humans, machines and robots. The contributors span the field of autonomous systems research, ranging from industry and academia to government. 
Given the broad diversity of the research in this book, the editors strove to thoroughly examine the challenges and trends of systems that implement and exhibit AI; the social implications of present and future systems made autonomous with AI; systems with AI seeking to develop trusted relationships among humans, machines, and robots; and the effective human systems integration that must result for trust in these new systems and their applications to increase and to be sustained.
This volume addresses context from three comprehensive perspectives: first, its importance, the issues surrounding context, and its value in the laboratory and the field; second, the theory guiding the AI used to model context; and third, its applications in the field (e.g., decision-making). This breadth poses a challenge. The book analyzes how the environment (context) influences human perception, cognition and action. While current books approach context narrowly, the major contribution of this book is to provide an in-depth review over a broad range of topics for a computational context, no matter its breadth. The volume outlines numerous strategies and techniques from world-class scientists who have adapted their research to solve different problems with AI, in difficult environments and complex domains, to address the many computational challenges posed by context. Context can be clear, uncertain or an illusion. Clear contexts: a father praising his child; a trip to the post office to buy stamps; a policewoman asking for identification. Uncertain contexts: a sneak attack; a surprise witness in a courtroom; a shout of "Fire! Fire!" Contexts as illusion: humans fall prey to illusions that machines do not (Adelson's checkerboard illusion versus a photometer). Determining context is not easy when disagreement exists, interpretations vary, or uncertainty reigns. Physicists like Einstein (relativity), Bekenstein (holography) and Rovelli (universe) have written that reality is not what we commonly believe. Even outside of awareness, individuals act differently whether alone or in teams. Can computational context with AI adapt to clear and uncertain contexts, to change over time, and to individuals, machines or robots as well as to teams? If a program automatically "knows" the context that improves performance or decisions, does it matter whether context is clear, uncertain or illusory?
Written and edited by world-class leaders from across the field of autonomous systems research, this volume carefully considers the computational systems being constructed to determine context for individual agents or teams, the challenges they face, and the advances they expect for the science of context.
Providing a high level of autonomy for a human-machine team requires assumptions that address behavior and mutual trust. The performance of a human-machine team is maximized when the partnership provides mutual benefits that satisfy design rationales, the balance of control, and the nature of autonomy. The distinctively different characteristics and features of humans and machines are likely why they have the potential to work well together, overcoming each other's weaknesses through cooperation, synergy, and interdependence to form a “collective intelligence.” Trust is bidirectional: humans need to trust AI technology, but future AI technology may also need to trust humans. Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams focuses on human-machine trust and “assured” performance and operation in order to realize the potential of autonomy. This book takes on the primary challenges of bidirectional trust and performance of autonomous systems, providing readers with a review of the latest literature, the science of autonomy, and a clear path towards the autonomy of human-machine teams and systems. Throughout the book, the intersecting themes of collective intelligence, bidirectional trust, and continual assurance form the challenging and extraordinarily interesting threads that will help the audience not only bridge knowledge gaps, but also advance this science to develop better solutions.
Many current AI and machine learning algorithms, and data and information fusion processes, attempt in software to estimate situations in our complex world of nested feedback loops. Such algorithms and processes must gracefully and efficiently adapt to technical challenges, such as data-quality issues induced by these loops, and to interdependencies that vary in complexity, space, and time. To realize effective and efficient designs of computational systems, a Systems Engineering perspective may provide a framework for identifying the interrelationships and patterns of change between components, rather than static snapshots. We must study cascading interdependencies through this perspective to understand their behavior and to successfully adopt complex systems-of-systems in society. This book derives in part from the presentations given at the AAAI 2021 Spring Symposium session on Leveraging Systems Engineering to Realize Synergistic AI/Machine Learning Capabilities. Its 16 chapters emphasize pragmatic aspects and address topics in systems engineering; AI, machine learning, and reasoning; data and information fusion; intelligent systems; autonomous systems; interdependence and teamwork; human-computer interaction; trust; and resilience.
Artificial Intelligence for the Internet of Everything considers the foundations, metrics and applications of IoE systems. It covers whether devices and IoE systems should speak only to each other, to humans or to both. Further, the book explores how IoE systems affect targeted audiences (researchers, machines, robots, users) and society, as well as future ecosystems. It examines the meaning, value and effect that IoE has had and may have on ordinary life, in business, on the battlefield, and with the rise of intelligent and autonomous systems. From an artificial intelligence (AI) perspective, this book addresses how IoE affects sensing, perception, cognition and behavior. Each chapter addresses practical, measurement, theoretical and research questions about how these "things" may affect individuals, teams, society or each other. Of particular focus is what may happen when these "things" begin to reason, communicate and act autonomously on their own, whether independently or interdependently with other "things".
Human-Machine Shared Contexts considers the foundations, metrics, and applications of human-machine systems. Editors and authors debate whether machines, humans, and systems should speak only to each other, only to humans, or to both, and how. The book establishes the meaning and operation of "shared contexts" between humans and machines; it also explores how human-machine systems affect targeted audiences (researchers, machines, robots, users) and society, as well as future ecosystems composed of humans and machines. This book explores how user interventions may improve the context for autonomous machines operating in unfamiliar environments or when experiencing unanticipated events; how autonomous machines can be taught to explain contexts and decisions, by reasoning, inference, or causality, to humans relying on intuition; and, for mutual context, how these machines may interdependently affect human awareness, teams and society, and how these "machines" may be affected in turn. In short, can context be mutually constructed and shared between machines and humans? The editors are interested in whether shared context follows when machines begin to think or, like humans, develop subjective states that allow them to monitor and report on their interpretations of reality, forcing scientists to rethink the general model of human social behavior. If dependence on machine learning continues or grows, the public will also be interested in what happens to context shared by users, teams of humans and machines, or society when these machines malfunction. As scientists and engineers "think through this change in human terms," the ultimate goal is for AI to advance the performance of autonomous machines and teams of humans and machines for the betterment of society wherever these machines interact with humans or other machines.
This book will be essential reading for professional, industrial, and military computer scientists and engineers; machine learning (ML) and artificial intelligence (AI) scientists and engineers, especially those engaged in research on autonomy, computational context, and human-machine shared contexts; advanced robotics scientists and engineers; scientists working with or interested in data issues for autonomous systems, such as the use of scarce data for training and operations with and without user interventions; social psychologists, scientists and physical research scientists pursuing models of shared context; modelers of the Internet of Things (IoT); systems-of-systems scientists and engineers and economists; scientists and engineers working with agent-based models (ABMs); policy specialists concerned with the impact of AI and ML on society and civilization; network scientists and engineers; applied mathematicians (e.g., holon theory, information theory); computational linguists; and blockchain scientists and engineers.