Welcome to Loot.co.za!
A comprehensive analysis of current theory and research in the psychological, computational, and neural sciences elucidates the structures and processes of language and thought. Chapters discuss language comprehension and artificial intelligence; the ARCS system for analogical retrieval; the ACME model of analogical mapping; PAULINE, an artificial intelligence system for pragmatic language generation; a theory of understanding spoken and written text; and recent developments concerning the effect of different modes of language representation on the efficiency of information processing. This book will be of interest to professionals and scholars in psychology, artificial intelligence, and cognitive science.
The fields of artificial intelligence, intelligent control, and intelligent systems are constantly changing within the subject area of information science and technology. Semiotics and Intelligent Systems Development assembles semiotics and artificial intelligence techniques in order to design new kinds of intelligent systems. A reference publication, it brings new light to the research field of artificial intelligence by incorporating the study of meaning processes (semiosis) from the perspective of the formal sciences, linguistics, and philosophy.
This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) to solving CV problems, tracking shapes in image object recognition, and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, the classification of chapter problems as easily solved or challenging, and an extensive glossary of key words, examples and connections with the fabric of CV make the book an invaluable resource for advanced undergraduate and first-year graduate students in Engineering, Computer Science or Applied Mathematics. It offers insights into the design of CV experiments, the inclusion of image processing methods in CV projects, and the reconstruction and interpretation of recorded natural scenes.
This publication attempts to address emerging trends in machine learning applications. Recent work in information identification has revealed broad scope for applying machine learning techniques to gain meaningful insights. The rapid growth of unstructured data poses new research challenges in handling this huge source of information, making the efficient design of machine learning techniques the need of the hour. Recent literature in machine learning has emphasized single techniques of information identification, leaving considerable scope for developing hybrid machine learning models with reduced computational complexity and enhanced accuracy of information identification. This book focuses on techniques that reduce feature dimension in order to design lightweight methods for real-time identification and decision fusion. A key theme of the book is the use of machine learning in daily life and its applications to improving livelihoods, though within its limited scope it cannot cover the entire machine learning domain. The book will benefit research scholars, entrepreneurs and interdisciplinary researchers seeking new applications of machine learning, and thus offers novel research contributions. The lightweight techniques can readily be used in real time, which will add value to practice.
This book examines the principles of and advances in personalized task recommendation in crowdsourcing systems, with the aim of improving their overall efficiency. It discusses the challenges faced by personalized task recommendation when crowdsourcing systems channel human workforces, knowledge, skills and perspectives beyond traditional organizational boundaries. The solutions presented help interested individuals find tasks that closely match their personal interests and capabilities in a context of ever-increasing opportunities of participating in crowdsourcing activities. In order to explore the design of mechanisms that generate task recommendations based on individual preferences, the book first lays out a conceptual framework that guides the analysis and design of crowdsourcing systems. Based on a comprehensive review of existing research, it then develops and evaluates a new kind of task recommendation service that integrates with existing systems. The resulting prototype provides a platform for both the field study and the practical implementation of task recommendation in productive environments.
The present book includes a set of selected extended papers from the Sixth International Joint Conference on Computational Intelligence (IJCCI 2014), held in Rome, Italy, from 22 to 24 October 2014. The conference was composed of three co-located conferences: the International Conference on Evolutionary Computation Theory and Applications (ECTA), the International Conference on Fuzzy Computation Theory and Applications (FCTA), and the International Conference on Neural Computation Theory and Applications (NCTA). Recent progress in scientific developments and applications in these three areas is reported in this book. IJCCI received 210 submissions from 51 countries on all continents. After a double-blind review performed by the Program Committee, 15% were accepted as full papers and thus selected for oral presentation; additional papers were accepted as short papers and posters. A further selection was made after the conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of IJCCI 2014. Commitment to high quality standards is a major concern of IJCCI that will be maintained in future editions, considering not only the stringent paper acceptance ratios but also the quality of the program committee, keynote lectures, participation level and logistics.
This book provides fresh insights into the cutting edge of multimedia data mining, reflecting how the research focus has shifted towards networked social communities, mobile devices and sensors. The work describes how the history of multimedia data processing can be viewed as a sequence of disruptive innovations. Across the chapters, the discussion covers the practical frameworks, libraries, and open source software that enable the development of ground-breaking research into practical applications. Features: reviews how innovations in mobile, social, cognitive, cloud and organic-based computing impact the development of multimedia data mining; provides practical details on implementing the technology for solving real-world problems; includes chapters devoted to privacy issues in multimedia social environments and large-scale biometric data processing; covers content- and concept-based multimedia search and advanced algorithms for multimedia data representation, processing and visualization.
This edited volume is devoted to Big Data Analysis from a Machine Learning standpoint as presented by some of the most eminent researchers in this area. It demonstrates that Big Data Analysis opens up new research problems which were either never considered before, or were only considered within a limited range. In addition to providing methodological discussions on the principles of mining Big Data and the difference between traditional statistical data analysis and newer computing frameworks, this book presents recently developed algorithms affecting such areas as business, financial forecasting, human mobility, the Internet of Things, information networks, bioinformatics, medical systems and life science. It explores, through a number of specific examples, how the study of Big Data Analysis has evolved and how it has started and will most likely continue to affect society. While the benefits brought about by Big Data Analysis are underlined, the book also discusses some of the warnings that have been issued concerning the potential dangers of Big Data Analysis along with its pitfalls and challenges.
The book presents an integrative review of paleoneurology, the study of endocranial morphology in fossil species. The main focus is on showing how computational methods can be used to support advances in evolutionary neuroanatomy, paleoanthropology and archaeology, and how they have contributed to creating a completely new perspective in cognitive neuroscience. Moreover, thanks to its multidisciplinary approach, the book addresses students and researchers approaching human paleoneurology from different angles and for different purposes, such as biologists, physicians, anthropologists, archaeologists and computer scientists. The individual chapters, written by international experts, represent authoritative reviews of the most important topics in the field. All the concepts are presented in an easy-to-understand style, making them accessible to university students, newcomers and anyone interested in understanding how methods like biomedical imaging, digital anatomy and computational and multivariate morphometrics can be used for analyzing ontogenetic and phylogenetic changes according to the principles of functional morphology, morphological integration and modularity.
This book presents recent research on the recognition of vulnerabilities of national systems and assets, a topic that has gained special attention with respect to Critical Infrastructures over the last two decades. The book concentrates on R&D activities relating to Critical Infrastructures, focusing on enhancing the performance of services as well as the level of security. The objectives of the book are based on a project entitled "Critical Infrastructure Protection Researches" (TAMOP-4.2.1.B-11/2/KMR-2011-0001), which concentrated on innovative UAV solutions, robotics, cybersecurity, surface engineering, and mechatronics, and on technologies providing safe operation of essential assets. The book summarizes the methodologies and efforts undertaken to fulfill the goals defined. The project was performed by a consortium of Obuda University and the National University of Public Service.
This thesis takes an empirical approach to understanding the behavior of, and interactions between, the two main components of reinforcement learning: the learning algorithm and the functional representation of learned knowledge. The author approaches these entities using design-of-experiments methods not commonly employed to study machine learning. The results outlined in this work provide insight into what enables, and what affects, successful reinforcement learning implementations, so that this learning method can be applied to more challenging problems.
This book delivers concise coverage of classical methods and new developments related to indoor location-based services. It collects results from isolated domains including geometry, artificial intelligence, statistics, cooperative algorithms, and distributed systems and thus provides an accessible overview of fundamental methods and technologies. This makes it an ideal starting point for researchers, students, and professionals in pervasive computing. Location-based services are services using the location of a mobile computing device as their primary input. While such services are fairly easy to implement outside buildings thanks to accessible global positioning systems and high-quality environmental information, the situation inside buildings is fundamentally different. In general, there is no simple way of determining the position of a moving target inside a building without an additional dedicated infrastructure. The book's structure is learning oriented, starting with a short introduction to wireless communication systems and basic positioning techniques and ending with advanced features like event detection, simultaneous localization and mapping, and privacy aspects. Readers who are not familiar with the individual topics will be able to work through the book from start to finish. At the same time all chapters are self-contained to support readers who are already familiar with some of the content and only want to pick selected topics that are of particular interest.
This monograph bridges the gap between the nonlinear predictor as a concept and as a practical tool, presenting a complete theory of the application of predictor feedback to time-invariant, uncertain systems with constant input delays and/or measurement delays. It supplies several methods for generating the necessary real-time solutions to the systems' nonlinear differential equations, which the authors refer to as approximate predictors. Predictor feedback for linear time-invariant (LTI) systems is presented in Part I to provide a solid foundation on the necessary concepts, as LTI systems pose fewer technical difficulties than nonlinear systems. Part II extends all of the concepts to nonlinear time-invariant systems. Finally, Part III explores extensions of predictor feedback to systems described by integral delay equations and to discrete-time systems. The book's core is the design of control and observer algorithms with which global stabilization, guaranteed in the previous literature with idealized (but non-implementable) predictors, is preserved with approximate predictors developed in the book. An applications-driven engineer will find a large number of explicit formulae, which are given throughout the book to assist in the application of the theory to a variety of control problems. A mathematician will find sophisticated new proof techniques, which are developed for the purpose of providing global stability guarantees for the nonlinear infinite-dimensional delay system under feedback laws employing practically implementable approximate predictors. Researchers working on global stabilization problems for time-delay systems will find this monograph to be a helpful summary of the state of the art, while graduate students in the broad field of systems and control will advance their skills in nonlinear control design and the analysis of nonlinear delay systems.
Chapters "Turing and Free Will: A New Take on an Old Debate" and "Turing and the History of Computer Music" are available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This book aims to provide important information about adaptivity in computer-based and/or web-based educational systems. In order to make the student modeling process clear, a literature review of student modeling techniques and approaches from the past decade is presented in a dedicated chapter. A novel student modeling approach incorporating fuzzy logic techniques is then presented. Fuzzy logic is used to automatically model the learning or forgetting process of a student. The presented novel student model is responsible for tracking cognitive state transitions of learners with respect to their progress or non-progress. It maximizes the effectiveness of learning and contributes significantly to the adaptation of the learning process to the learning pace of each individual learner. Therefore, the book provides important information to researchers, educators and software developers of computer-based educational software, ranging from e-learning and mobile learning systems to educational games, including stand-alone educational applications and intelligent tutoring systems.
The field of mechatronics (the synergistic combination of precision mechanical engineering, electronic control and systems thinking in the design of products and manufacturing processes) is gaining much attention in industry and academia. The topics of computer vision, control and robotics have proven imperative for the success of mechatronic systems. This book includes several chapters reporting successful case studies in computer vision, control and robotics. Readers will find the latest information related to mechatronics, including implementation details and descriptions of the test scenarios.
We are now entering an era where the human world assumes recognition of itself as data. Much of humanity's basis for existence is becoming subordinate to software processes that tabulate, index, and sort the relations that comprise what we perceive as reality. The acceleration of data collection threatens to relinquish ephemeral modes of representation to ceaseless processes of computation. This situation compels the human world to form relations with non-human agencies, to establish exchanges with software processes in order to allow a profound upgrade of our own ontological understanding. By mediating with a higher intelligence, we may be able to rediscover the inner logic of the age of intelligent machines. In The End of the Future, Stephanie Polsky conceives an understanding of the digital through its dynamic intersection with the advent and development of the nation-state, race, colonization, navigational warfare, mercantilism, and capitalism, and the mathematical sciences over the past five centuries, the era during which the world became "modern." The book animates the twenty-first century as an era in which the screen has split off from itself and proliferated onto multiple surfaces, allowing an inverted image of totalitarianism to flash up and be altered to support our present condition of binary apperception. It progresses through a recognition of atomized political power, whose authority lies in the control not of the means of production, but of information, and in which digital media now serves to legitimize and promote a customized micropolitics of identity management. On this new apostolate plane, humanity may be able to shape a new world in which each human soul is captured and reproduced as an autonomous individual bearing affects and identities. 
The digital infrastructure of the twenty-first century makes it possible for power to operate through an esoteric mathematical means, and for factual material to be manipulated in the interest of advancing the means of control. This volume travels a course from Elizabethan England, to North American slavery, through cybernetic Social Engineering, Cold War counterinsurgency, and the (neo)libertarianism of Silicon Valley in order to arrive at a place where an organizing intelligence that started from an ambition to resourcefully manipulate physical bodies has ended with their profound neutralization.
The first book of its kind devoted to this topic, this comprehensive text/reference presents state-of-the-art research and reviews current challenges in the application of computer vision to problems in sports. Opening with a detailed introduction to the use of computer vision across the entire life-cycle of a sports event, the text then progresses to examine cutting-edge techniques for tracking the ball, obtaining the whereabouts and pose of the players, and identifying the sport being played from video footage. The work concludes by investigating a selection of systems for the automatic analysis and classification of sports play. The insights provided by this pioneering collection will be of great interest to researchers and practitioners involved in computer vision, sports analysis and media production.
Local Area Networks (LANs) have a high potential for alleviating many of the problems associated with stand-alone microcomputers. Networking microcomputers to share information, software and hardware, as well as to facilitate electronic mail, is not only feasible and desirable, but also logical. Harry Kibirige's issue-oriented study explores microcomputer networking systems with particular emphasis on LANs. Although his analysis emphasizes issues from an information scientist's perspective, readers who want to gain an understanding of LAN technology and its applications should find it useful. Written with a minimum of jargon, the book can be used in academic, corporate, library, federal and state agency, and not-for-profit organizational settings. The author begins with an introduction to the general concepts surrounding LANs. He discusses LANs as structures for processing information and compares and contrasts them with other structures such as time-sharing systems. Also considered are salient factors concerned with LAN design and implementation. In a chapter devoted to choosing a LAN, Kibirige explains in detail the fundamental problems of choice as well as the steps which should be taken in making a final selection. Other issues covered are the relationship of LANs to other existing automation programs, significant management issues, currently implemented alternatives to LANs, technology trends which will impact the future of LANs, and social issues concerned with LANs. Finally, Kibirige summarizes the results of the CUNY study of microcomputer networking systems, a report that emphasized information centers/libraries.
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for compression of Light Field images based on the HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in the current video compression standards with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in the HEVC standard.
This monograph presents the concept of agents and agent systems. It starts with a formal approach and then presents examples of practical applications. In order to establish principles for the construction of autonomous agents, a model of the agent is introduced. Subsequent parts of the monograph include several examples of applications of the agent concept, describing agent systems in fields such as evolutionary systems, mobile robot systems, and artificial intelligence systems. The book constitutes an outline of a methodology for the design and realization of agent systems based on the M-agent architecture, oriented toward different areas of application.