This handy reference book detailing the intricacies of R updates the popular first edition by adding R version 3.4 and 3.5 features. Starting with the basic structure of R, the book takes you on a journey through the terminology used in R and the syntax required to make R work. You will find looking up the correct form for an expression quick and easy. Some of the new material includes information on RStudio, S4 syntax, working with character strings, and an example using the Twitter API. With a copy of the R Quick Syntax Reference in hand, you will find that you are able to use the multitude of functions available in R and are even able to write your own functions to explore and analyze data.
What You Will Learn:
- Discover the modes and classes of R objects and how to use them
- Use both packaged and user-created functions in R
- Import/export data and create new data objects in R
- Create descriptive functions and manipulate objects in R
- Take advantage of flow control and conditional statements
- Work with packages such as base, stats, and graphics
Who This Book Is For: Those with programming experience who are new to R, or those with at least some exposure to R who are new to the latest version.
This book presents the outcomes of the 2019 International Conference on Cyber Security Intelligence and Analytics (CSIA2019), an international conference dedicated to promoting novel theoretical and applied research advances in the interdisciplinary field of cyber security, particularly focusing on threat intelligence, analytics, and countering cyber crime. The conference provides a forum for presenting and discussing innovative ideas, cutting-edge research findings, and novel techniques, methods and applications on all aspects of Cyber Security Intelligence and Analytics.
This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive systems which infer their context through pattern recognition, the author provides readers with a gentle yet robust foundation of knowledge to this growing field of research. The author explores a range of topics including data acquisition, signal processing, control theory, machine learning and system engineering explaining, with the use of simple mathematical concepts, the core principles underlying pervasive computing systems. Real-life examples are applied throughout, including self-driving cars, automatic insulin pumps, smart homes, and social robotic companions, with each chapter accompanied by a set of exercises for the reader. Practical tutorials are also available to guide enthusiastic readers through the process of building a smart system using cameras, microphones and robotic kits. Due to the power of MATLAB (TM), this can be achieved with no previous programming or robotics experience. Although Pervasive Computing is primarily for undergraduate students, the book is accessible to a wider audience of researchers and designers who are interested in exploring pervasive computing further.
This book presents TDF (Tactics Development Framework), a practical methodology for eliciting and engineering models of expert decision-making in dynamic domains. The authors apply the BDI (Beliefs, Desires, Intentions) paradigm to the elicitation and modelling of dynamic decision making expertise, including team behaviour, and map it to a diagrammatic representation that is intuitive to domain experts. The book will be of value to researchers and practitioners engaged in dynamic decision making.
Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment, machines large and small; all affect the very essence of our daily lives. However, this interaction has not always been efficient or easy and has at times turned fairly hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE illustrated by examples. Understanding the complexities and functions of the human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is a critical reference for students, designers, and engineers in a wide variety of disciplines.
The complementary nature of physically-based and data-driven models in their demand for physical insight and historical data, leads to the notion that the predictions of a physically-based model can be improved and the associated uncertainty can be systematically reduced through the conjunctive use of a data-driven model of the residuals. The objective of this thesis is to minimise the inevitable mismatch between physically-based models and the actual processes as described by the mismatch between predictions and observations. Principles based on information theory are used to detect the presence and nature of residual information in model errors that might help to develop a data-driven model of the residuals by treating the gap between the process and its (physically-based) model as a separate process. The complementary modelling approach is applied to various hydrodynamic and hydrological models to forecast the expected errors and accuracy, using neural network and fuzzy rule-based models. Complementary modelling offers the opportunity of incorporating processes and data that are not considered by the model, without affecting the routine operation of physically-based models. The possibility that information may be obtained which will help to improve the physically-based model is also demonstrated.
Over the last 20 years, approaches to designing speech and language processing algorithms have moved from methods based on linguistics and speech science to data-driven pattern recognition techniques. These techniques have been the focus of intense, fast-moving research and have contributed to significant advances in this field.
The world's leading expert on Lean Six Sigma provides the missing link for reducing waste and taking operations to the next level: artificial intelligence. Lean Six Sigma (LSS) has been helping companies improve their processes since 2001, but as yet no one has taken this revolutionary management approach to its limits. Now, The Fourth Revolution in Manufacturing shows exactly how to do that: by adding artificial intelligence (AI) to the mix. This game-changing guide takes you through the process of using AI to unlock maximum speed, solve complex manufacturing challenges, reduce waste, increase company profits, and ultimately beat the competition. The Fourth Revolution in Manufacturing explains how to:
- Unlock your company's full potential with the AI + LSS approach
- Utilize the AI + LSS three-step process to dramatically improve profits
- Apply AI + LSS to engineering and other non-manufacturing processes
- Harness the interaction of AI + LSS and the ERP system
- Create a scorecard and measure your results
When it was first published in 1972, Hubert Dreyfus's manifesto on the inherent inability of disembodied machines to mimic higher mental functions caused an uproar in the artificial intelligence community. The world has changed since then. Today it is clear that "good old-fashioned AI," based on the idea of using symbolic representations to produce general intelligence, is in decline (although several believers still pursue its pot of gold), and the focus of the AI community has shifted to more complex models of the mind. It has also become more common for AI researchers to seek out and study philosophy. For this edition of his now classic book, Dreyfus has added a lengthy new introduction outlining these changes and assessing the paradigms of connectionism and neural networks that have transformed the field. At a time when researchers were proposing grand plans for general problem solvers and automatic translation machines, Dreyfus predicted that they would fail because their conception of mental functioning was naive, and he suggested that they would do well to acquaint themselves with modern philosophical approaches to human beings. What Computers Can't Do was widely attacked but quietly studied. Dreyfus's arguments are still provocative and focus our attention once again on what it is that makes human beings unique. Hubert L. Dreyfus, who is Professor of Philosophy at the University of California, Berkeley, is also the author of Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I.
This unique book on intelligence analysis covers several vital but often overlooked topics. It teaches the evidential and inferential issues involved in 'connecting the dots' to draw defensible and persuasive conclusions from masses of evidence: from observations we make, or questions we ask, we generate alternative hypotheses as explanations or answers; we make use of our hypotheses to generate new lines of inquiry and discover new evidence; and we test the hypotheses with the discovered evidence. To facilitate understanding of these issues and enable the performance of complex analyses, the book introduces an intelligent analytical tool, called Disciple-CD. Readers will practice with Disciple-CD and learn how to formulate hypotheses; develop arguments that reduce complex hypotheses to simpler ones; collect evidence to evaluate the simplest hypotheses; and assess the relevance and the believability of evidence, which combine in complex ways to determine its inferential force and the probabilities of the hypotheses.
Prostheses, assistive systems, and rehabilitation systems are essential to increasing the quality of life for people with disabilities. Research and development over the last decade has resulted in enormous advances toward that goal-none more so than the development of intelligent systems and technologies.
Fuzzy set theory - and its underlying fuzzy logic - represents one of the most significant scientific and cultural paradigms to emerge in the last half-century. Its theoretical and technological promise is vast, and we are only beginning to experience its potential. Clustering is the first and most basic application of fuzzy set theory, but forms the basis of many, more sophisticated, intelligent computational models, particularly in pattern recognition, data mining, adaptive and hierarchical clustering, and classifier design.
The idea of artificial intelligence--job-killing robots, self-driving cars, and self-managing organizations--captures the imagination, evoking a combination of wonder and dread for those of us who will have to deal with the consequences. But what if it's not quite so complicated? The real job of artificial intelligence, argue these three eminent economists, is to lower the cost of prediction. And once you start talking about costs, you can use some well‐established economics to cut through the hype.
The constant challenge for all managers is to make decisions under uncertainty. AI contributes by making knowledge of what is coming cheaper and more certain. But decision making has another component: judgment, which remains firmly in the realm of humans, not machines. Making prediction cheaper means that we can make more predictions, more accurately, and assess them with our better (human) judgment. Once managers can separate tasks into components of prediction and judgment, we can begin to understand how to optimize the interface between humans and machines.
More than just an account of AI's powerful capabilities, Prediction Machines shows managers how they can most effectively leverage AI, disrupting business as usual only where required, and provides businesses with a toolkit to navigate the coming wave of challenges and opportunities.
Prepare yourselves for the coming Cyber Revolution! Over time, humankind has transformed from hunter-gatherer to farmer, from farmer to industrial worker, and from industrial worker to service provider. Now, we are on the cusp of a fourth transformative wave, spurred by climate change, exponential population growth, and our ever-increasing reliance on technology. This Copernicus book follows the stream of changes we will likely experience over the next few decades. These will involve the design and planning of smart cities and vital new mega-cities, as well as the use of sophisticated artificial intelligence and knowledge systems in our professional and everyday lives. The book shows how the nature of work, economics, taxation, social intercourse, and a slew of other global human endeavors will almost certainly undergo fundamental shifts during this time. Despite the many crises the world is gearing up to face, this book is not all doom and gloom - it is a call to action, a guide to how we might harness novel technologies in space and cyberspace to address our most urgent needs.
We are crossing a new frontier in the evolution of computing and entering the era of cognitive systems. The victory of IBM's Watson on the television quiz show Jeopardy! revealed how scientists and engineers at IBM and elsewhere are pushing the boundaries of science and technology to create machines that sense, learn, reason, and interact with people in new ways to provide insight and advice. In Smart Machines, John E. Kelly III, director of IBM Research, and Steve Hamm, a writer at IBM and a former business and technology journalist, introduce the fascinating world of "cognitive systems" to general audiences and provide a window into the future of computing. Cognitive systems promise to penetrate complexity and assist people and organizations in better decision making. They can help doctors evaluate and treat patients, augment the ways we see, anticipate major weather events, and contribute to smarter urban planning. Kelly and Hamm's comprehensive perspective describes this technology inside and out and explains how it will help us harness and understand "big data," one of the major computing challenges facing businesses and governments in the coming decades. Absorbing and impassioned, their book will inspire governments, academics, and the global tech industry to work together to power this exciting wave in innovation.
"Virtual Futures" explores the idea that the future lies in our ability to articulate the consequences of an increasingly synthetic and virtual world. New technologies like cyberspace, the internet, and Chaos theory are often discussed in the context of technology and its potential to liberate, or in terms of technophobia. This collection examines both these ideas while also charting a new and controversial route through contemporary discourses on technology; a path that discusses the material evolution and the erotic relation between humans and machines. Including essays by Sadie Plant, Stelarc and Manuel de Landa, the collection heralds the death of humanism and the rise of posthuman pragmatism. This collection provides analyses by both established theorists and the most innovative new voices working in conjunction between the arts and contemporary technology.
There is perhaps no facet of modern society where the influence of computer automation has not been felt. Flight management systems for pilots, diagnostic and surgical aids for physicians, navigational displays for drivers, and decision-aiding systems for air-traffic controllers represent only a few of the numerous domains in which powerful new automation technologies have been introduced. The benefits that have been reaped from this technological revolution have been many. At the same time, automation has not always worked as planned by designers, and many problems have arisen--from minor inefficiencies of operation to large-scale, catastrophic accidents. Understanding how humans interact with automation is vital for the successful design of new automated systems that are both safe and efficient.
Based on a symposium honoring the extensive work of Allen Newell -- one of the founders of artificial intelligence, cognitive science, human-computer interaction, and the systematic study of computational architectures -- this volume demonstrates how unifying themes may be found in the diversity that characterizes current research on computers and cognition.
Living with Robots recounts a foundational shift in the field of robotics, from artificial intelligence to artificial empathy, and foreshadows an inflection point in human evolution. Today's robots engage with human beings in socially meaningful ways, as therapists, trainers, mediators, caregivers, and companions. Social robotics is grounded in artificial intelligence, but the field's most probing questions explore the nature of the very real human emotions that social robots are designed to emulate. Social roboticists conduct their inquiries out of necessity--every robot they design incorporates and tests a number of hypotheses about human relationships. Paul Dumouchel and Luisa Damiano show that as roboticists become adept at programming artificial empathy into their creations, they are abandoning the conventional conception of human emotions as discrete, private, internal experiences. Rather, they are reconceiving emotions as a continuum between two actors who coordinate their affective behavior in real time. Rethinking the role of sociability in emotion has also led the field of social robotics to interrogate a number of human ethical assumptions, and to formulate a crucial political insight: there are simply no universal human characteristics for social robots to emulate. What we have instead is a plurality of actors, human and nonhuman, in noninterchangeable relationships. As Living with Robots shows, for social robots to be effective, they must be attentive to human uniqueness and exercise a degree of social autonomy. More than mere automatons, they must become social actors, capable of modifying the rules that govern their interplay with humans.
The applications of Artificial Intelligence lie all around us; in our homes, schools and offices, in our cinemas, in art galleries and - not least - on the Internet. The results of Artificial Intelligence have been invaluable to biologists, psychologists, and linguists in helping to understand the processes of memory, learning, and language from a fresh angle. As a concept, Artificial Intelligence has fuelled and sharpened the philosophical debates concerning the nature of the mind, intelligence, and the uniqueness of human beings. Margaret A. Boden reviews the philosophical and technological challenges raised by Artificial Intelligence, considering whether programs could ever be really intelligent, creative or even conscious, and shows how the pursuit of Artificial Intelligence has helped us to appreciate how human and animal minds are possible.
A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This "bigger mind"--human and machine capabilities working together--has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies. Geoff Mulgan explores how collective intelligence has to be consciously organized and orchestrated in order to harness its powers. He looks at recent experiments mobilizing millions of people to solve problems, and at groundbreaking technology like Google Maps and Dove satellites. He also considers why organizations full of smart people and machines can make foolish mistakes--from investment banks losing billions to intelligence agencies misjudging geopolitical events--and shows how to avoid them. Highlighting differences between environments that stimulate intelligence and those that blunt it, Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we'll need radically new professions, institutions, and ways of thinking. Informed by the latest work on data, web platforms, and artificial intelligence, Big Mind shows how collective intelligence could help us survive and thrive.
Two leading data scientists offer an up-close and user-friendly look at artificial intelligence: what it is, how it works, where it came from and how to harness its power for a better world. 'There comes a time in the life of a subject when someone steps up and writes the book about it. AIQ explores the fascinating history of the ideas that drive this technology of the future and demystifies the core concepts behind it; the result is a positive and entertaining look at the great potential unlocked by marrying human creativity with powerful machines.' Steven D. Levitt, co-author of Freakonomics. Dozens of times per day, we all interact with intelligent machines that are constantly learning from the wealth of data now available to them. These machines, from smartphones to talking robots to self-driving cars, are remaking the world in the twenty-first century in the same way that the Industrial Revolution remade the world in the nineteenth. AIQ is based on a simple premise: if you want to understand the modern world, then you have to know a little bit of the mathematical language spoken by intelligent machines. AIQ will teach you that language, but in an unconventional way, anchored in stories rather than equations. You will meet a fascinating cast of historical characters who have a lot to teach you about data, probability and better thinking. Along the way, you'll see how these same ideas are playing out in the modern age of big data and intelligent machines, and how these technologies will soon help you to overcome some of your built-in cognitive weaknesses, giving you a chance to lead a happier, healthier, more fulfilled life.