This book constitutes the refereed proceedings of the Second IFIP WG 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2011, held in Costa de Caparica, Portugal, in February 2011. The 67 revised full papers were carefully selected from numerous submissions. They cover a wide spectrum of topics ranging from collaborative enterprise networks to microelectronics. The papers are organized in topical sections on collaborative networks, service-oriented systems, computational intelligence, robotic systems, Petri nets, sensorial and perceptional systems, sensorial systems and decision, signal processing, fault-tolerant systems, control systems, energy systems, electrical machines, and electronics.
In modern distributed systems, such as the Internet of Things or cloud computing, verifying correctness is an essential task. This requires modeling approaches that reflect the natural characteristics of such systems: the locality of their components, the autonomy of their decisions, and their asynchronous communication. However, most available verifiers are unrealistic because they fail to reflect one or more of these features. Accordingly, this book presents an original formalism: the Integrated Model of Distributed Systems (IMDS), which defines a system as two sets (states and messages) and a relation of "actions" between these sets. The server view and the traveling agent's view of the system provide communication duality, while general temporal formulas for the IMDS allow automatic verification. The properties checked include partial deadlock and partial termination, as well as communication deadlock and resource deadlock. Automatic verification can support the rapid development of distributed systems. Further, on the basis of the IMDS, the Dedan tool for automatic verification of distributed systems has been developed.
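To make the "two sets plus an action relation" idea concrete, here is a minimal sketch under stated assumptions: it is not the Dedan input format, and every server, state, and message name is invented for illustration. It shows how a configuration (states plus pending messages) and an action relation suffice to detect a partial deadlock.

```python
# Illustrative IMDS-style system description (not the Dedan format).
# Action relation: (server, state, message) -> (new_state, messages_sent).
ACTIONS = {
    ("disk", "idle", "read"): ("busy", [("disk", "done")]),
    ("disk", "busy", "done"): ("idle", []),
}

def enabled(states, pending):
    """Yield the actions enabled in the current configuration."""
    for (server, state, msg), succ in ACTIONS.items():
        if states.get(server) == state and (server, msg) in pending:
            yield (server, state, msg), succ

def partially_deadlocked(states, pending, server):
    """A server with pending messages but no enabled action is stuck,
    even if the rest of the system can still move (a *partial* deadlock)."""
    has_pending = any(srv == server for srv, _ in pending)
    has_action = any(srv == server
                     for (srv, _, _), _ in enabled(states, pending))
    return has_pending and not has_action

states = {"disk": "idle"}
pending = {("disk", "write")}   # no action consumes "write" in state "idle"
print(partially_deadlocked(states, pending, "disk"))  # True
```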
This book describes the fundamental building block of many new computer vision systems: dense and robust correspondence estimation. Dense correspondence estimation techniques are now successfully being used to solve a wide range of computer vision problems, very different from the traditional applications such techniques were originally developed to solve. This book introduces the techniques used for establishing correspondences between challenging image pairs, the novel features used to make these techniques robust, and the many problems dense correspondences are now being used to solve. It provides information for anyone attempting to apply dense correspondences to new or existing computer vision problems. The editors describe how to solve many computer vision problems by using dense correspondence estimation. Finally, the book surveys the resources, code, and data necessary for expediting the development of effective correspondence-based computer vision systems.
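For a sense of what dense correspondence estimation produces, the sketch below runs one widely used classical baseline, Farneback dense optical flow from OpenCV; this is merely an accessible example of the task (one correspondence vector per pixel), not one of the book's own techniques, and the file names are placeholders.

```python
# Dense correspondence baseline: Farneback optical flow via OpenCV.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Returns an H x W x 2 array: a per-pixel displacement (dx, dy).
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

print(flow.shape, flow[0, 0])  # the correspondence vector at pixel (0, 0)
```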
This book is a collection of representative and novel works in Data Mining, Knowledge Discovery, Clustering and Classification, originally presented in French at the EGC'2013 (Toulouse, France, January 2013) and EGC'2014 (Rennes, France, January 2014) conferences. These were respectively the 13th and 14th editions of this annual event, which is now successful and well known in the French-speaking community. This community was structured in 2003 by the foundation of the French-speaking EGC society (EGC stands for "Extraction et Gestion des Connaissances", meaning "Knowledge Discovery and Management", or KDM). The book is aimed at all researchers interested in these fields, including PhD and MSc students and researchers from public or private laboratories. It concerns both theoretical and practical aspects of KDM. The book is structured in two parts, "Applications of KDM to real datasets" and "Foundations of KDM".
This book comprises a selection of extended abstracts and papers presented at EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set-oriented numerics, and evolutionary computation, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE aims to unify theory-inspired methods and cutting-edge techniques that ensure performance guarantees. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which theoretical advancements may echo across different domains. In summary, the EVOLVE conference focuses on challenging aspects arising in the passage from theory to new paradigms, and aims to provide a unified view while raising questions related to reliability, performance guarantees, and modeling. The extended papers of EVOLVE 2012 contribute to this goal.
[Sample chapter preview; only the outline survives extraction: Chapter 1, Introduction: 1.1 Argument Assistants; 1.2 Defeasible Argumentation in the Field of Law; 1.3 Theory Construction and the Application of Law to Cases.]
Evolutionary scheduling is a vital research domain at the interface of artificial intelligence and operational research. This edited book gives an overview of many of the current developments in the large and growing field of evolutionary scheduling. It demonstrates the applicability of evolutionary computational techniques to scheduling problems, not only to small-scale test problems but also to fully fledged real-world problems.
The technique of data fusion has been used extensively in information retrieval, owing to the complexity and diversity of the retrieval tasks involved, spanning web and social network search, legal search, enterprise search, and many other domains. This book presents both a theoretical and an empirical approach to data fusion. Several typical data fusion algorithms are discussed, analyzed, and evaluated. A reader will find answers to the following questions, among others: What are the key factors that significantly affect the performance of data fusion algorithms? What conditions are favorable to data fusion algorithms? Which of CombSum and CombMNZ is better, and why? What is the rationale for using the linear combination method? How can the best fusion option be found under any given circumstances?
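CombSum and CombMNZ are classic score-based fusion rules, so a short sketch helps fix the comparison the blurb poses. The run data below is invented, and scores are assumed to be already normalized to [0, 1].

```python
# CombSum and CombMNZ fusion over several systems' ranked lists,
# each represented as a dict mapping document id -> normalized score.
from collections import defaultdict

def comb_sum(runs):
    """CombSum: sum each document's scores over all systems returning it."""
    fused = defaultdict(float)
    for run in runs:
        for doc, score in run.items():
            fused[doc] += score
    return dict(fused)

def comb_mnz(runs):
    """CombMNZ: CombSum times the number of systems returning the document."""
    sums = comb_sum(runs)
    counts = defaultdict(int)
    for run in runs:
        for doc in run:
            counts[doc] += 1
    return {doc: s * counts[doc] for doc, s in sums.items()}

runs = [{"d1": 0.9, "d2": 0.4}, {"d1": 0.7, "d3": 0.8}]
print(comb_mnz(runs))  # d1 is rewarded for appearing in both runs
```

The linear combination method the blurb mentions generalizes CombSum by weighting each system's scores, so that stronger systems contribute more to the fused score.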
The areas of biology and bioinformatics are continuously evolving, creating a plethora of data that needs to be analyzed and interpreted. Since it can be difficult to decipher the multitudes of data within these areas, new computational techniques and tools are being employed to assist researchers in their findings. The Handbook of Research on Computational Intelligence Applications in Bioinformatics examines emergent research in handling real-world problems through the application of various computational technologies and techniques. Featuring theoretical concepts and best practices in the areas of computational intelligence, artificial intelligence, big data, and bio-inspired computing, this publication is a critical reference source for graduate students, professionals, academics, and researchers.
There are many invaluable books available on data mining theory and applications. However, in compiling the volume Data Mining: Foundations and Intelligent Paradigms, Volume 1: Clustering, Association and Classification, we wish to introduce some of the latest developments to a broad audience of both specialists and non-specialists in this field.
This book presents advanced software development tools for the construction, deployment, and governance of Service Oriented Architecture (SOA) applications. Novel technical concepts and paradigms, formulated during the research stage and during the development of such tools, are presented and illustrated with practical usage examples. Hence this book will be of interest not only to theoreticians but also to engineers who cope with real-life problems. Additionally, each chapter contains an overview of related work, enabling comparison of the proposed concepts with existing solutions in various areas of the SOA development process. This makes the book interesting also for students and scientists who investigate similar issues.
The Third International Workshop on Multi-Robot Systems was held in March 2005 at the Naval Research Laboratory in Washington, D.C., USA. Bringing together leading researchers and government sponsors for three days of technical interchange on multi-robot systems, the workshop followed two previous highly successful gatherings in 2002 and 2003. Like the previous two workshops, the meeting began with presentations by various government program managers describing application areas and programs with an interest in multi-robot systems. U.S. Government representatives were on hand from the Office of Naval Research and several other governmental offices. Top researchers in the field then presented their current activities in many areas of multi-robot systems. Presentations spanned a wide range of topics, including task allocation, coordination in dynamic environments, information/sensor sharing and fusion, distributed mapping and coverage, motion planning and control, human-robot interaction, and applications of multi-robot systems. All presentations were given in a single-track workshop format. These proceedings document the work presented at the workshop. The research presentations were followed by panel discussions, in which all participants interacted to highlight the challenges of this field and to develop possible solutions. In addition to the invited research talks, researchers and students were given an opportunity to present their work at poster sessions. We would like to thank the Naval Research Laboratory for sponsoring this workshop and providing the facilities for these meetings to take place. We are extremely grateful to Magdalena Bugajska, Paul Wiegand, and Mitchell A. Potter for their vital help (and long hours) in editing these proceedings, and to Michelle Caccivio for providing the administrative support to the workshop.
Past and current research in computer performance analysis has focused primarily on dedicated parallel machines. However, future applications in the area of high-performance computing will not only use individual parallel systems but a large set of networked resources. This scenario of computational and data Grids is attracting a great deal of attention from both computer and computational scientists. In addition to the inherent complexity of parallel machines, the sharing and transparency of the available resources introduce new challenges for performance analysis, techniques, and systems. In order to meet those challenges, a multi-disciplinary approach to the multi-faceted problems of performance is required. New degrees of freedom will come into play with a direct impact on the performance of Grid computing, including wide-area network performance, quality-of-service (QoS), heterogeneity, and middleware systems, to mention only a few.
This book demonstrates how to describe and analyze a system's behavior and extract the desired prediction and control algorithms from this analysis. A typical prediction is based on observing similar situations in the past, knowing the outcomes of these past situations, and expecting that the future outcome of the current situation will be similar to these past observed outcomes. In mathematical terms, similarity corresponds to symmetry, and similarity of outcomes to invariance. This book shows how symmetries can be used in all classes of algorithmic problems of sciences and engineering: from analysis to prediction to control. Applications cover chemistry, geosciences, intelligent control, neural networks, quantum physics, and thermal physics. Specifically, it is shown how the approach based on symmetry and similarity can be used in the analysis of real-life systems, in the algorithms of prediction, and in the algorithms of control.
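The prediction idea in this blurb (expect the current situation's outcome to resemble the outcomes of similar past situations) has a simple concrete instance in nearest-neighbor prediction. The sketch below is only that elementary instance, with invented data; it is not the book's symmetry/invariance formalism.

```python
# Nearest-neighbor prediction: the outcome of the current situation is
# estimated from the outcomes of the k most similar past situations.
import numpy as np

def predict(past_situations, past_outcomes, current, k=3):
    """Average the outcomes of the k past situations closest to `current`."""
    dists = np.linalg.norm(past_situations - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return past_outcomes[nearest].mean()

past = np.array([[1.0, 2.0], [1.1, 1.9], [5.0, 5.0], [4.9, 5.2]])
outcomes = np.array([10.0, 11.0, 50.0, 52.0])
print(predict(past, outcomes, np.array([1.05, 2.0]), k=2))  # 10.5
```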
This book explains aspects of social networks, varying from the development and application of new artificial intelligence and computational intelligence techniques for social networks to understanding the impact of social networks. Chapters 1 and 2 deal with basic strategies for social networks, such as mining text from such networks and applying social network metrics using a hybrid approach; Chapters 3 to 8 focus on the prime research areas in social networks: community detection, influence maximization, and opinion mining. Chapters 9 to 13 concentrate on studying the impact and use of social networks in society, primarily in education, commerce, and crowdsourcing. The contributions provide a multidimensional approach, and the book will serve graduate students and researchers as a reference in computer science, electronics engineering, communications, and information technology.
Probabilistic Conditional Independence Structures provides the mathematical description of probabilistic conditional independence structures; the author uses non-graphical methods for their description and takes an algebraic approach. The monograph presents the methods of structural imsets and supermodular functions, and deals with independence implication and equivalence of structural imsets. Motivation, mathematical foundations, and areas of application are included, and a rough overview of graphical methods is also given. In particular, the author has been careful to use suitable terminology, and presents the work so that it will be understood both by statisticians and by researchers in artificial intelligence. The necessary elementary mathematical notions are recalled in an appendix.
Artificial intelligence provides an environmentally rich paradigm within which design research based on computational constructions can be carried out. This has been one of the foundations for the developing field called "design computing." Recently, there has been a growing interest in what designers do when they design and how they use computational tools. This forms the basis of a newly emergent field called "design cognition" that draws partly on cognitive science. This new conference series aims to provide a bridge between the two fields of "design computing" and "design cognition." The papers in this volume are from the First International Conference on Design Computing and Cognition (DCC'04), held at the Massachusetts Institute of Technology, USA. They represent state-of-the-art research and development in design computing and cognition. They are of particular interest to researchers, developers, and users of advanced computation in design and those who need to gain a better understanding of designing.
Explainable Deep Learning AI: Methods and Challenges presents the latest work of leading researchers in the XAI area, offering an overview of the field along with several novel technical methods and applications that address explainability challenges for deep learning AI systems. The book first overviews XAI and then covers a number of specific technical works and approaches for deep learning, ranging from general XAI methods to specific XAI applications and, finally, user-oriented evaluation approaches. It also explores the main categories of explainable AI for deep learning, which has become a necessary condition in many applications of artificial intelligence. Groups of methods such as back-propagation-based and perturbation-based methods are explained, and their application to various kinds of data classification is presented.
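As a concrete instance of the perturbation-based family the blurb names, the sketch below computes an occlusion saliency map: patches of the input are blanked out, and the drop in the model's score marks important regions. The `model` here is an assumed black-box scoring function and the toy data is invented; this is a generic illustration, not a method from the book.

```python
# Occlusion saliency, a simple perturbation-based explanation method.
import numpy as np

def occlusion_saliency(model, image, patch=8, baseline=0.0):
    """Score drop per occluded patch; a larger drop marks a more
    important region of the input."""
    h, w = image.shape
    base_score = model(image)
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            saliency[i // patch, j // patch] = base_score - model(occluded)
    return saliency

# Toy stand-in model: "confidence" is the mean brightness of the
# top-left corner, so only that patch should register a score drop.
model = lambda img: img[:8, :8].mean()
img = np.random.rand(32, 32)
print(occlusion_saliency(model, img))
```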
Blockchain Technology Solutions for the Security of IoT-Based Healthcare Systems explores the various benefits and challenges associated with the integration of blockchain with IoT healthcare systems, focusing on designing cognitive-embedded data technologies to aid better decision-making, processing, and analysis of large amounts of data collected through IoT. This book series targets the adaptation of decision-making approaches under cognitive computing paradigms to demonstrate how the proposed procedures can handle big data and Internet of Things (IoT) problems in practice. Current IoT-based healthcare systems are incapable of sharing data between platforms in an efficient manner and of holding it securely at the logical and physical level. To this end, blockchain technology guarantees a fully autonomous and secure ecosystem by exploiting the combined advantages of smart contracts and global consensus. However, incorporating blockchain technology in IoT healthcare systems is not easy: centralized networks in their current capacity will be incapable of meeting the data storage demands of the incoming surge of IoT-based healthcare wearables.
This volume is an initiative undertaken by the IEEE Computational Intelligence Society's Task Force on Security, Surveillance and Defense to consolidate and disseminate the role of CI techniques in the design, development and deployment of security and defense solutions. Applications range from the detection of buried explosive hazards in a battlefield to the control of unmanned underwater vehicles, the delivery of superior video analytics for protecting critical infrastructures or the development of stronger intrusion detection systems and the design of military surveillance networks. Defense scientists, industry experts, academicians and practitioners alike will all benefit from the wide spectrum of successful applications compiled in this volume. Senior undergraduate or graduate students may also discover uncharted territory for their own research endeavors.
This volume is based on lectures given at the NATO Advanced Study Institute on "Stochastic Games and Applications," which took place at Stony Brook, NY, USA, in July 1999. It gives the editors great pleasure to present it on the occasion of L.S. Shapley's eightieth birthday, and on the fiftieth "birthday" of his seminal paper "Stochastic Games," with which this volume opens. We wish to thank NATO for the grant that made the Institute and this volume possible, and the Center for Game Theory in Economics of the State University of New York at Stony Brook for hosting this event. We also wish to thank the Hebrew University of Jerusalem, Israel, for providing continuing financial support, without which this project would never have been completed. In particular, we are grateful to our editorial assistant Mike Borns, whose work has been indispensable. We also would like to acknowledge the support of the Ecole Polytechnique, Paris, and the Israel Science Foundation. March 2003, Abraham Neyman and Sylvain Sorin. STOCHASTIC GAMES. L.S. Shapley, University of California at Los Angeles, Los Angeles, USA. 1. Introduction. In a stochastic game the play proceeds by steps from position to position, according to transition probabilities controlled jointly by the two players.
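The closing sentence above is the opening of Shapley's paper. In modern notation, his result says the discounted value of a two-player zero-sum stochastic game satisfies a fixed-point equation; the following is a sketch of one common textbook normalization, with val denoting the value of the induced matrix game.

```latex
% Discounted value recursion (modern restatement of Shapley's 1953 result):
% r is the stage payoff, p the transition law, \lambda \in [0,1) the
% discount factor, and x, y the players' mixed actions at state s.
v(s) = \operatorname{val}_{x,y}\Bigl[\, r(s,x,y)
       + \lambda \sum_{s'} p(s' \mid s, x, y)\, v(s') \Bigr]
```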
This book is an outgrowth of ten years of research at the University of Florida Computational NeuroEngineering Laboratory (CNEL) in the general area of statistical signal processing and machine learning. One of the goals of writing the book is exactly to bridge the two fields that share so many common problems and techniques but are not yet effectively collaborating. Unlike other books that cover the state of the art in a given field, this book cuts across engineering (signal processing) and statistics (machine learning) with a common theme: learning seen from the point of view of information theory with an emphasis on Renyi's definition of information. The basic approach is to utilize the information theory descriptors of entropy and divergence as nonparametric cost functions for the design of adaptive systems in unsupervised or supervised training modes. Hence the title: Information-Theoretic Learning (ITL). In the course of these studies, we discovered that the main idea enabling a synergistic view as well as algorithmic implementations does not involve the conventional central moments of the data (mean and covariance). Rather, the core concept is the alpha-norm of the PDF, in particular its expected value (alpha = 2), which we call the information potential. This operator and related nonparametric estimators link information theory, optimization of adaptive systems, and reproducing kernel Hilbert spaces in a simple and unconventional way.
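The information potential described above (the expected value of the PDF, the alpha = 2 case) has a well-known closed-form Parzen-window estimator with Gaussian kernels: a double sum of pairwise kernel evaluations. The sketch below implements that standard estimator for 1-D samples; the kernel width sigma is a user-chosen assumption.

```python
# Information potential V(X) = E[p(X)] and Renyi's quadratic entropy
# H2(X) = -log V(X), via the Gaussian Parzen estimator: the convolution
# of two width-sigma Gaussians is a Gaussian of variance 2*sigma^2, so
# V(X) ~ (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2).
import numpy as np

def information_potential(x, sigma=1.0):
    """Pairwise-sum Parzen estimate of E[p(X)] for a 1-D sample x."""
    diff = x[:, None] - x[None, :]
    s2 = 2.0 * sigma**2                       # variance of convolved kernel
    g = np.exp(-diff**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return g.mean()                           # (1/N^2) * double sum

def renyi_quadratic_entropy(x, sigma=1.0):
    """Renyi's entropy of order 2: H2(X) = -log V(X)."""
    return -np.log(information_potential(x, sigma))

x = np.random.randn(500)
print(renyi_quadratic_entropy(x, sigma=0.5))
```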
Neural networks are members of a class of software that have the potential to enable intelligent computational systems capable of simulating characteristics of biological thinking and learning. Currently no standards exist to verify and validate neural network-based systems. NASA Independent Verification and Validation Facility has contracted the Institute for Scientific Research, Inc. to perform research on this topic and develop a comprehensive guide to performing V&V on adaptive systems, with emphasis on neural networks used in safety-critical or mission-critical applications. Methods and Procedures for the Verification and Validation of Artificial Neural Networks is the culmination of the first steps in that research. This volume introduces some of the more promising methods and techniques used for the verification and validation (V&V) of neural networks and adaptive systems. A comprehensive guide to performing V&V on neural network systems, aligned with the IEEE Standard for Software Verification and Validation, will follow this book.
Artificial intelligence (AI) is revolutionizing every aspect of human life including human healthcare and wellbeing management. Various types of intelligent healthcare engineering applications have been created that help to address patient healthcare and outcomes such as identifying diseases and gathering patient information. Advancements in AI applications in healthcare continue to be sought to aid rapid disease detection, health monitoring, and prescription drug tracking. Advancement of Artificial Intelligence in Healthcare Engineering is an essential scholarly publication that provides comprehensive research on the possible applications of machine learning, deep learning, soft computing, and evolutionary computing techniques in the design, implementation, and optimization of healthcare engineering solutions. Featuring a wide range of topics such as genetic algorithms, mobile robotics, and neuroinformatics, this book is ideal for engineers, technology developers, IT consultants, hospital administrators, academicians, healthcare professionals, practitioners, researchers, and students.