This book describes the authors' investigations of computational sarcasm based on the notion of incongruity. In addition, it provides a holistic view of past work in computational sarcasm and the challenges and opportunities that lie ahead. Sarcastic text is a peculiar form of sentiment expression, and computational sarcasm refers to computational techniques that process sarcastic text. To first understand the phenomenon of sarcasm, three studies are conducted: (a) how is sarcasm annotation impacted when done by non-native annotators? (b) how is sarcasm annotation impacted when the task is to distinguish between sarcasm and irony? and (c) can targets of sarcasm be identified by humans and computers? Following these studies, the book proposes approaches for two research problems: sarcasm detection and sarcasm generation. To detect sarcasm, incongruity is captured in two ways: 'intra-textual incongruity', where the authors look at incongruity within the text to be classified (i.e., the target text), and 'context incongruity', where the authors incorporate information outside the target text. These approaches use machine-learning techniques such as classifiers, topic models, sequence labelling, and word embeddings. They operate at multiple levels: (a) sentiment incongruity (based on sentiment mixtures), (b) semantic incongruity (based on word embedding distance), (c) language model incongruity (based on unexpected language model output), (d) author's historical context (based on past text by the author), and (e) conversational context (based on cues from the conversation). In the second part of the book, the authors present the first known technique for sarcasm generation, which uses a template-based approach to generate a sarcastic response to user input. This book will prove to be a valuable resource for researchers working on sentiment analysis, especially as applied to automation in social media.
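The 'semantic incongruity' signal mentioned above can be illustrated with a minimal sketch. This is not the authors' actual method: it uses tiny hand-made word vectors (a real system would use trained embeddings such as word2vec) and scores a text by the largest cosine distance between any pair of its word vectors.

```python
import math

# Toy 3-dimensional word vectors, hand-made for illustration only.
EMBEDDINGS = {
    "love":    (0.90, 0.10, 0.00),
    "adore":   (0.85, 0.15, 0.05),
    "mondays": (0.10, 0.90, 0.20),
    "delays":  (0.05, 0.95, 0.30),
}

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def incongruity_score(words):
    """Largest pairwise embedding distance among the words in a text."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    return max(
        cosine_distance(a, b)
        for i, a in enumerate(vecs)
        for b in vecs[i + 1:]
    )

# A sarcastic-looking mixture ("love" + "mondays") scores far higher
# than a semantically consistent one ("love" + "adore").
assert incongruity_score(["love", "mondays"]) > incongruity_score(["love", "adore"])
```

The intuition matches the blurb: words that sit far apart in embedding space inside one short text are a cue (though only one of several) that the text may be sarcastic.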
This book brings together philosophical approaches to cooperation and collective agency with research into human-machine interaction and cooperation from engineering, robotics, computer science and AI. Bringing these so far largely unrelated fields of study together leads to a better understanding of collective agency in natural and artificial systems and will help to improve the design and performance of hybrid systems involving human and artificial agents. Modeling collective agency with the help of computer simulations also promises philosophical insights into the emergence of collective agency. The volume consists of four sections. The first section is dedicated to the concept of agency. The second section of the book turns to human-machine cooperation. The focus of the third section is the transition from cooperation to collective agency. The last section concerns the explanatory value of social simulations of collective agency in the broader framework of cultural evolution.
Network revolutions of the past have shaped the present and set the stage for the revolution we are experiencing today. In an era of seemingly instant change, it's easy to think that today's revolutions (in communications, business, and many areas of daily life) are unprecedented. Today's changes may be new and may be happening faster than ever before. But our ancestors at times were just as bewildered by rapid upheavals in what we now call "networks", the physical links that bind any society together. In this fascinating book, former FCC chairman Tom Wheeler brings to life the two great network revolutions of the past and uses them to help put in perspective the confusion, uncertainty, and even excitement most people face today. The first big network revolution was the invention of movable-type printing in the fifteenth century. This book, its millions of predecessors, and even such broad trends as the Reformation, the Renaissance, and the multiple scientific revolutions of the past 500 years would not have been possible without that one invention. The second revolution came with the invention of the telegraph early in the nineteenth century. Never before had people been able to communicate over long distances faster than a horse could travel. Along with the development of the world's first high-speed network, the railroad, the telegraph upended centuries of stability and literally redrew the map of the world. Wheeler puts these past revolutions into the perspective of today, when rapid-fire changes in networking are upending the nature of work, personal privacy, education, the media, and nearly every other aspect of modern life. But he doesn't leave it there. Outlining "What's Next," he describes how artificial intelligence, virtual reality, blockchain, and the need for cybersecurity are laying the foundation for a third network revolution.
We are crossing a new frontier in the evolution of computing and entering the era of cognitive systems. The victory of IBM's Watson on the television quiz show Jeopardy! revealed how scientists and engineers at IBM and elsewhere are pushing the boundaries of science and technology to create machines that sense, learn, reason, and interact with people in new ways to provide insight and advice. In Smart Machines, John E. Kelly III, director of IBM Research, and Steve Hamm, a writer at IBM and a former business and technology journalist, introduce the fascinating world of "cognitive systems" to general audiences and provide a window into the future of computing. Cognitive systems promise to penetrate complexity and assist people and organizations in better decision making. They can help doctors evaluate and treat patients, augment the ways we see, anticipate major weather events, and contribute to smarter urban planning. Kelly and Hamm's comprehensive perspective describes this technology inside and out and explains how it will help us harness and understand "big data," one of the major computing challenges facing businesses and governments in the coming decades. Absorbing and impassioned, their book will inspire governments, academics, and the global tech industry to work together to power this exciting wave in innovation.
The field of artificial intelligence (AI) and the law is on the cusp of a revolution that began with text analytic programs like IBM's Watson and Debater and the open-source information management architectures on which they are based. Today, new legal applications are beginning to appear and this book - designed to explain computational processes to non-programmers - describes how they will change the practice of law, specifically by connecting computational models of legal reasoning directly with legal text, generating arguments for and against particular outcomes, predicting outcomes and explaining these predictions with reasons that legal professionals will be able to evaluate for themselves. These legal applications will support conceptual legal information retrieval and allow cognitive computing, enabling a collaboration between humans and computers in which each does what it can do best. Anyone interested in how AI is changing the practice of law should read this illuminating work.
Commonsense psychology refers to the implicit theories that we all use to make sense of people's behavior in terms of their beliefs, goals, plans, and emotions. These are also the theories we employ when we anthropomorphize complex machines and computers as if they had humanlike mental lives. In order to successfully cooperate and communicate with people, these theories will need to be represented explicitly in future artificial intelligence systems. This book provides a large-scale logical formalization of commonsense psychology in support of humanlike artificial intelligence. It uses formal logic to encode the deep lexical semantics of the full breadth of psychological words and phrases, providing fourteen hundred axioms of first-order logic organized into twenty-nine commonsense psychology theories and sixteen background theories. This in-depth exploration of human commonsense reasoning for artificial intelligence researchers, linguists, and cognitive and social psychologists will serve as a foundation for the development of humanlike artificial intelligence.
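To give a flavor of what a formalized commonsense-psychology theory looks like, here is a toy sketch. The axiom and predicates are hypothetical illustrations, not among the book's fourteen hundred axioms: a single rule, "if an agent has goal G and believes action A causes G, then A is a plausible plan for that agent," forward-chained over ground facts.

```python
# Hypothetical toy axiom (illustration only, not from the book):
#   goal(Agent, G) & believe(Agent, causes(Act, G)) -> plausible_plan(Agent, Act)

facts = {
    ("goal", "alice", "stay_dry"),
    ("believe", "alice", ("causes", "take_umbrella", "stay_dry")),
}

def plausible_plans(facts):
    """Forward-chain the single toy axiom over a set of ground facts."""
    plans = set()
    for fact in facts:
        if fact[0] == "believe":
            agent, belief = fact[1], fact[2]
            rel, act, goal = belief
            if rel == "causes" and ("goal", agent, goal) in facts:
                plans.add((agent, act))
    return plans

assert plausible_plans(facts) == {("alice", "take_umbrella")}
```

The book works in full first-order logic rather than this kind of propositional toy, but the pattern is the same: behavior is explained by chaining beliefs and goals through explicitly encoded axioms.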
Can you tell the difference between talking to a human and talking to a machine? Or is it possible to create a machine which is able to converse like a human? In fact, what is it that even makes us human? Turing's Imitation Game, commonly known as the Turing Test, is fundamental to the science of artificial intelligence. Involving an interrogator conversing with hidden identities, both human and machine, the test strikes at the heart of any questions about the capacity of machines to behave as humans. While this subject area has shifted dramatically in the last few years, this book offers an up-to-date assessment of Turing's Imitation Game, its history, context and implications, all illustrated with practical Turing tests. The contemporary relevance of this topic and the strong emphasis on example transcripts make this book an ideal companion for undergraduate courses in artificial intelligence, engineering or computer science.
Data science, data engineering and knowledge engineering require networking and communication as a backbone and have a wide scope of implementation in engineering sciences. With this in mind, this book includes insights that reflect the advances in these fields from upcoming researchers and leading academicians across the globe. It contains high-quality peer-reviewed papers from the 'International Conference on Recent Advancement in Computer, Communication and Computational Sciences (ICRACCCS 2016)', held at Janardan Rai Nagar Rajasthan Vidyapeeth University, Udaipur, India, during 25-26 November 2016. The volume covers a variety of topics such as Advanced Communication Networks, Artificial Intelligence and Evolutionary Algorithms, Advanced Software Engineering and Cloud Computing, Image Processing and Computer Vision, and Security. The book will help prospective readers from the computer industry and academia to draw on the advances of next-generation communication and computational technology and shape them into real-life applications.
As an important enabler for changing people's lives, advances in artificial intelligence (AI)-based applications and services are on the rise, despite being hindered by efficiency and latency issues. By focusing on deep learning as the most representative technique of AI, this book provides a comprehensive overview of how AI services are being applied to the network edge near the data sources, and demonstrates how AI and edge computing can be mutually beneficial. To do so, it introduces and discusses: 1) edge intelligence and intelligent edge; and 2) their implementation methods and enabling technologies, namely AI training and inference in the customized edge computing framework. Gathering essential information previously scattered across the communication, networking, and AI areas, the book can help readers to understand the connections between key enabling technologies, e.g. a) AI applications in edge; b) AI inference in edge; c) AI training for edge; d) edge computing for AI; and e) using AI to optimize edge. After identifying these five aspects, which are essential for the fusion of edge computing and AI, it discusses current challenges and outlines future trends in achieving more pervasive and fine-grained intelligence with the aid of edge computing.
Are psychometric tests valid for a new reality of artificial intelligence systems, technology-enhanced humans, and hybrids yet to come? Are the Turing Test, the ubiquitous CAPTCHAs, and the various animal cognition tests the best alternatives? In this fascinating and provocative book, Jose Hernandez-Orallo formulates major scientific questions, integrates the most significant research developments, and offers a vision of the universal evaluation of cognition. By replacing the dominant anthropocentric stance with a universal perspective where living organisms are considered as a special case, long-standing questions in the evaluation of behavior can be addressed in a wider landscape. Can we derive task difficulty intrinsically? Is a universal g factor - a common general component for all abilities - theoretically possible? Using algorithmic information theory as a foundation, the book elaborates on the evaluation of perceptual, developmental, social, verbal and collective features and critically analyzes what the future of intelligence might look like.
Many industries have been revolutionized by the widespread adoption of AI and machine learning. The programmatic availability of historical and real-time financial data in combination with techniques from AI and machine learning will also change the financial industry in a fundamental way. This practical book explains how to use AI and machine learning to discover statistical inefficiencies in financial markets and exploit them through algorithmic trading. Author Yves Hilpisch shows practitioners, students, and academics in both finance and data science how machine and deep learning algorithms can be applied to finance. Thanks to lots of self-contained Python examples, you'll be able to replicate all results and figures presented in the book.
- Examine how data is reshaping finance from a theory-driven to a data-driven discipline
- Understand the major possibilities, consequences, and resulting requirements of AI-first finance
- Get up to speed on the tools, skills, and major use cases to apply AI in finance yourself
- Apply neural networks and reinforcement learning to discover statistical inefficiencies in financial markets
- Delve into the concepts of the technological singularity and the financial singularity
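As a taste of the kind of self-contained Python workflow the book is built on, here is a generic sketch (not code from the book): a simple momentum signal derived from synthetic prices and backtested in vectorized form, with the signal shifted so it only trades on information available at the time.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily log returns with a mild positive drift (illustration only;
# the book works with real historical and real-time financial data).
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)
prices = 100 * np.exp(np.cumsum(returns))

# Momentum signal: long (+1) or short (-1) according to the sign of the
# trailing 20-day mean log return.
log_returns = np.diff(np.log(prices))
window = 20
signal = np.sign(np.convolve(log_returns, np.ones(window) / window, mode="valid"))

# Align signal and returns so each position is applied to the *next*
# period's return, avoiding look-ahead bias.
strategy_returns = signal[:-1] * log_returns[window:]
print(f"cumulative strategy log return: {strategy_returns.sum():.4f}")
```

The vectorized form (no explicit loop over days) is characteristic of this style of backtesting; swapping in a trained classifier or neural network for the sign-of-mean rule gives the ML-driven variants the book develops.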
Step into the future with AI
The term "Artificial Intelligence" has been around since the 1950s, but a lot has changed since then. Today, AI is referenced in the news, books, movies, and TV shows, and the exact definition is often misinterpreted. Artificial Intelligence For Dummies provides a clear introduction to AI and how it's being used today. Inside, you'll get a clear overview of the technology, the common misconceptions surrounding it, and a fascinating look at its applications in everything from self-driving cars and drones to its contributions in the medical field.
- Learn about what AI has contributed to society
- Explore uses for AI in computer applications
- Discover the limits of what AI can do
- Find out about the history of AI
The world of AI is fascinating--and this hands-on guide makes it more accessible than ever!
A Harvard researcher investigates the human eye in this insightful account of what vision reveals about intelligence, learning, and the greatest mysteries of neuroscience. Spotting a face in a crowd is so easy, you take it for granted. But how you do it is one of science's great mysteries. And vision is involved with so much of everything your brain does. Explaining how it works reveals more than just how you see. In We Know It When We See It, Harvard neuroscientist Richard Masland tackles vital questions about how the brain processes information -- how it perceives, learns, and remembers -- through a careful study of the inner life of the eye. Covering everything from what happens when light hits your retina, to the increasingly sophisticated nerve nets that turn that light into knowledge, to what a computer algorithm must be able to do before it can be called truly "intelligent," We Know It When We See It is a profound yet approachable investigation into how our bodies make sense of the world.
The fourth edition of this best-selling guide to Prolog and Artificial Intelligence has been updated to include key developments in the field while retaining its lucid approach to these topics. New and extended topics include Constraint Logic Programming, abductive reasoning and partial order planning. Divided into two parts, the first part of the book introduces the programming language Prolog, while the second part teaches Artificial Intelligence using Prolog as a tool for the implementation of AI techniques. This textbook is meant to teach Prolog as a practical programming tool and so it concentrates on the art of using the basic mechanisms of Prolog to solve interesting problems. The fourth edition has been fully revised and extended to provide an even greater range of applications, making it a self-contained guide to Prolog, AI or AI Programming for students and professional programmers.
This book offers the first comprehensive taxonomy for multimodal optimization algorithms, a body of work rooted in topics such as niching, parallel evolutionary algorithms, and global optimization. The author explains niching in evolutionary algorithms (EAs) and its benefits; he examines the suitability of niching methods as diagnostic tools for experimental analysis, especially for detecting problem (type) properties; and he measures and compares the performances of niching and canonical EAs using different benchmark test problem sets. His work consolidates the recent successes in this domain, presenting and explaining use cases, algorithms, and performance measures, with a focus throughout on the goals of the optimization processes and a deep understanding of the algorithms used. The book will be useful for researchers and practitioners in the area of computational intelligence, particularly those engaged with heuristic search, multimodal optimization, evolutionary computing, and experimental analysis.
Artificial Intelligence is here, today. How can society make the best use of it? Until recently, "artificial intelligence" sounded like something out of science fiction. But the technology of artificial intelligence, AI, is becoming increasingly common, from self-driving cars to e-commerce algorithms that seem to know what you want to buy before you do. Throughout the economy and many aspects of daily life, artificial intelligence has become the transformative technology of our time. Despite its current and potential benefits, AI is little understood by the larger public and widely feared. The rapid growth of artificial intelligence has given rise to concerns that hidden technology will create a dystopian world of increased income inequality, a total lack of privacy, and perhaps a broad threat to humanity itself. In their compelling and readable book, two experts at Brookings discuss both the opportunities and risks posed by artificial intelligence and how near-term policy decisions could determine whether the technology leads to utopia or dystopia. Drawing on in-depth studies of major uses of AI, the authors detail how the technology actually works. They outline a policy and governance blueprint for gaining the benefits of artificial intelligence while minimizing its potential downsides. The book offers major recommendations for actions that governments, businesses, and individuals can take to promote trustworthy and responsible artificial intelligence.
Their recommendations include: creating ethical principles, strengthening government oversight, defining corporate culpability, establishing advisory boards at federal agencies, using third-party audits to reduce biases inherent in algorithms, tightening personal privacy requirements, using insurance to mitigate exposure to AI risks, broadening decision-making about AI uses and procedures, penalizing malicious uses of new technologies, and taking proactive steps to address how artificial intelligence affects the workforce. Turning Point is essential reading for anyone concerned about how artificial intelligence works and what can be done to ensure its benefits outweigh its harm.
Innovation in medicine and healthcare is an interdisciplinary research area that combines advanced technologies and problem-solving skills with medical and biological science. A central theme of this proceedings is Smart Medical and Healthcare Systems (modern intelligent systems for medicine and healthcare), which can provide efficient and accurate solutions to problems faced by healthcare and medical practitioners today by using advanced information and communication techniques, computational intelligence, mathematics, robotics and other advanced technologies. The techniques developed in this area will have a significant effect on future medicine and healthcare. The volume includes 53 papers, which present recent trends and innovations in medicine and healthcare, including Medical Informatics; Biomedical Engineering; Management for Healthcare; Advanced ICT for Medical and Healthcare; Simulation and Visualization/VR for Medicine; Statistical Signal Processing and Artificial Intelligence; Smart Medical and Healthcare Systems; and Healthcare Support Systems.
"The Fourth Age not only discusses what the rise of A.I. will mean for us, it also forces readers to challenge their preconceptions. And it manages to do all this in a way that is both entertaining and engaging." -The New York Times As we approach a great turning point in history when technology is poised to redefine what it means to be human, The Fourth Age offers fascinating insight into AI, robotics, and their extraordinary implications for our species. In The Fourth Age, Byron Reese makes the case that technology has reshaped humanity just three times in history: - 100,000 years ago, we harnessed fire, which led to language. - 10,000 years ago, we developed agriculture, which led to cities and warfare. - 5,000 years ago, we invented the wheel and writing, which lead to the nation state. We are now on the doorstep of a fourth change brought about by two technologies: AI and robotics. The Fourth Age provides extraordinary background information on how we got to this point, and how-rather than what-we should think about the topics we'll soon all be facing: machine consciousness, automation, employment, creative computers, radical life extension, artificial life, AI ethics, the future of warfare, superintelligence, and the implications of extreme prosperity. By asking questions like "Are you a machine?" and "Could a computer feel anything?", Reese leads you through a discussion along the cutting edge in robotics and AI, and, provides a framework by which we can all understand, discuss, and act on the issues of the Fourth Age, and how they'll transform humanity.
This book introduces a novel type of expert finder system that can determine the knowledge that specific users within a community hold, using explicit and implicit data sources to do so. Further, it details how this is accomplished by combining granular computing, natural language processing and a set of metrics that it introduces to measure and compare candidates' suitability. The book describes profiling techniques that can be used to assess knowledge requirements on the basis of a given problem statement or question, so as to ensure that only the most suitable candidates are recommended. The book brings together findings from natural language processing, artificial intelligence and big data, which it subsequently applies to the context of expert finder systems. Accordingly, it will appeal to researchers, developers and innovators alike.
This contributed volume aims to explicate and address the difficulties and challenges of seamlessly integrating two core disciplines of computer science, i.e., computational intelligence and data mining. Data mining aims at the automatic discovery of underlying non-trivial knowledge in datasets by applying intelligent analysis techniques. Interest in this research area has grown considerably in recent years due to two key factors: (a) knowledge hidden in organizations' databases can be exploited to improve strategic and managerial decision-making; and (b) the large volume of data managed by organizations makes manual analysis impossible. The book addresses different methods and techniques of integration for enhancing the overall goal of data mining, and helps to disseminate knowledge about innovative, active research directions in the field of data mining and machine and computational intelligence, along with current issues and applications of related topics.
This comprehensive presentation of the core concepts and historical landmarks in robotics and artificial intelligence is a must-read for those who want to understand the important changes happening now in our everyday lives, in the workplace, and in our minds and bodies. What is deep in "deep learning"? Can artificial intelligence really think? What will robots really look like in the near future? Is there a new class divide between those who understand technology and those who fear it? A clear and exhaustive introduction for non-specialists, 30-Second AI & Robotics will help the reader to navigate the world of ubiquitous computers, smart cities, and collaborative robots. At last, an optimistic and friendly book about our human possibilities in the time of automata.
The purpose of this edited volume is to provide a comprehensive overview of the fundamentals of deep learning, introduce the widely used learning architectures and algorithms, present its latest theoretical progress, discuss the most popular deep learning platforms and data sets, and describe how many deep learning methodologies have brought great breakthroughs in various applications of text, image, video, speech and audio processing. Deep learning (DL) has been widely considered the next generation of machine learning methodology. DL attracts much attention and also achieves great success in pattern recognition, computer vision, data mining, and knowledge discovery due to its great capability in learning high-level abstract features from vast amounts of data. This new book will not only attempt to provide a general roadmap or guidance to the current deep learning methodologies, but also present the challenges and envision new perspectives which may lead to further breakthroughs in this field. This book will serve as a useful reference for senior (undergraduate or graduate) students in computer science, statistics, electrical engineering, as well as others interested in studying or exploring the potential of exploiting deep learning algorithms. It will also be of special interest to researchers in the area of AI, pattern recognition, machine learning and related areas, alongside engineers interested in applying deep learning models in existing or new practical applications.
You may like...
Autonomy - The Quest to Build the… - Lawrence Burns (Paperback)
Superintelligence - Paths, Dangers… - Nick Bostrom (Paperback)
AI Superpowers: China, Silicon Valley… - Kai-Fu Lee (Paperback)
The Alignment Problem - How Can Machines… - Brian Christian (Paperback)
Neural Approximations for Optimal… - Riccardo Zoppoli, Marcello Sanguineti, … (Hardcover)
Girl Decoded - My Quest to Make… - Rana El Kaliouby (Paperback)
Talk To Me - Amazon, Google, Apple and… - James Vlahos (Paperback)
Sentiment Analysis - Mining Opinions… - Bing Liu (Hardcover)
The Technology Trap - Capital, Labor… - Carl Benedikt Frey (Paperback)
The Alignment Problem - Machine Learning… - Brian Christian (Hardcover)