The ink and stylus tablets discovered at the Roman fort of Vindolanda are a unique resource for scholars of ancient history. However, the stylus tablets in particular are extremely difficult to read. This book details the development of what appears to be the first system constructed to aid experts in the process of reading an ancient document, exploring the extent to which techniques from Artificial Intelligence can be used to develop a system that could aid historians in reading the stylus texts. Image to Interpretation includes a model of how experts read ancient texts, a corpus of letter forms from the Vindolanda texts, and a detailed description of the architecture of the system. It will be of interest to papyrologists, researchers in Roman history and palaeography, computer and engineering scientists working in the fields of Artificial Intelligence and image processing, and those interested in the use of computing in the humanities.
The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search, and dynamic pixel-truncation techniques. Two appendices provide a number of proof-of-concept instances through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.
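For readers who have not met block-matching motion estimation before, the following Python sketch shows a plain three-step search driven by a sum-of-absolute-differences (SAD) cost. It is a minimal software illustration only; the block size, the roughly +/-7-pixel search range and the function names are assumptions for exposition and are unrelated to the authors' VLSI architectures.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(cur, ref, top, left, block=16, step=4):
    """Classic three-step search for one block (illustrative sketch only).

    cur, ref : current and reference frames as 2-D uint8 arrays
    top, left: top-left corner of the block in the current frame
    Returns the motion vector (dy, dx), roughly within a +/-7 pixel range.
    """
    target = cur[top:top + block, left:left + block]
    center = (0, 0)                     # current best motion vector
    best_cost = sad(target, ref[top:top + block, left:left + block])
    while step >= 1:
        best_here = center
        # Examine the 8 neighbours of the current center at the given step size.
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                my, mx = center[0] + dy, center[1] + dx
                y, x = top + my, left + mx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                cost = sad(target, ref[y:y + block, x:x + block])
                if cost < best_cost:
                    best_cost, best_here = cost, (my, mx)
        center = best_here
        step //= 2                      # 4 -> 2 -> 1, hence "three steps"
    return center
```

A full encoder would run such a search for every macroblock of every frame, which is why the SAD arithmetic in the inner loop is the natural target for the kind of dedicated low-power hardware the book describes.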
This book explores various renewal processes in the context of probability theory, uncertainty theory and chance theory. It also covers the applications of these renewal processes in maintenance models and insurance risk models. The methods used to derive the limit of the renewal rate, the reward rate, and the availability rate are of particular interest, as they can easily be extended to the derivation of other models. Its comprehensive and systematic treatment of renewal processes, renewal reward processes and the alternating renewal process is one of the book's major features, making it particularly valuable for readers who are interested in learning about renewal theory. Given its scope, the book will benefit researchers, engineers, and graduate students in the fields of mathematics, information science, operations research, industrial engineering, etc.
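For orientation, the classical probabilistic forms of the limits mentioned above are the elementary renewal theorem and the renewal reward theorem. The statement below uses standard textbook notation rather than the book's own; the book's contribution is to extend such limit results to the uncertainty-theoretic and chance-theoretic settings.

```latex
% Classical (probabilistic) renewal limits, stated for orientation only:
% N(t) = number of renewals up to time t, with i.i.d. interarrival times X_i,
% R(t) = cumulative reward earned up to time t, with reward R_i in cycle i.
\[
  \lim_{t\to\infty}\frac{N(t)}{t}=\frac{1}{\mathbb{E}[X_1]},
  \qquad
  \lim_{t\to\infty}\frac{R(t)}{t}=\frac{\mathbb{E}[R_1]}{\mathbb{E}[X_1]},
\]
% assuming the expectations are finite.
```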
The book reports on the latest advances and applications of nonlinear control systems. It consists of 30 contributed chapters by subject experts who specialize in the various topics addressed in this book. The chapters cover broad areas of nonlinear control systems such as robotics, nonlinear circuits, power systems, memristors, underwater vehicles, chemical processes, observer design, output regulation, backstepping control, sliding mode control, time-delayed control, variable structure control, robust adaptive control, fuzzy logic control, chaos, hyperchaos, jerk systems, hyperjerk systems, chaos control, chaos synchronization, etc. Special importance is given to chapters offering practical solutions, modeling and novel control methods for recent research problems in nonlinear control systems. The book will serve as a reference for graduate students and researchers with a basic knowledge of electrical and control systems engineering. The resulting design procedures for nonlinear control systems are illustrated using MATLAB software.
This book focuses on computational intelligence techniques and their applications, fast-growing and promising research topics that have drawn a great deal of attention from researchers over the years. It brings together many different aspects of current research on intelligence technologies such as neural networks, support vector machines, fuzzy logic and evolutionary computation, and covers a wide range of applications, from pattern recognition and system modeling to intelligent control problems and biomedical applications. Fundamental concepts and essential analyses of various computational techniques are presented to offer a systematic and effective tool for the better treatment of different applications, and simulation and experimental results are included to illustrate the design procedures and the effectiveness of the approaches.
In the mid 1990s, Tim Berners-Lee had the idea of developing the World Wide Web into a "Semantic Web", a web of information that could be interpreted by machines in order to allow the automatic exploitation of data, which until then had to be done by humans manually. One of the first people to research topics related to the Semantic Web was Professor Rudi Studer. From the beginning, Rudi drove projects like ONTOBROKER and On-to-Knowledge, which later resulted in W3C standards such as RDF and OWL. By the late 1990s, Rudi had established a research group at the University of Karlsruhe, which later became the nucleus and breeding ground for Semantic Web research, and many of today's well-known research groups were either founded by his disciples or benefited from close cooperation with this think tank. In this book, published in celebration of Rudi's 60th birthday, many of his colleagues look back on the main research results achieved during the last 20 years. Under the editorship of Dieter Fensel, once one of Rudi's early PhD students, an impressive list of contributors and contributions has been collected, covering areas like Knowledge Management, Ontology Engineering, Service Management, and Semantic Search. Overall, this book provides an excellent overview of the state of the art in Semantic Web research, by combining historical roots with the latest results, which may finally make the dream of a "Web of knowledge, software and services" come true.
The book consists of 19 extended and revised chapters based on original works presented during a poster session organized within the 5th International Conference on Computational Collective Intelligence, held from 11 to 13 September 2013 in Craiova, Romania. The book is divided into three parts. The first part, titled "Agents and Multi-Agent Systems", consists of 8 chapters that concentrate on problems related to agent and multi-agent systems, including formal models, agent autonomy, emergent properties, agent programming, agent-based simulation and planning. The second part, titled "Intelligent Computational Methods", consists of 6 chapters. The authors present applications of various intelligent computational methods, such as neural networks, mathematical optimization and multistage decision processes, in areas like cooperation, character recognition, wireless networks, transport, and metal structures. The third part, titled "Language and Knowledge Processing Systems", consists of 5 chapters devoted to processing methods for knowledge and language information in various applications, including language identification, corpus comparison, opinion classification, group decision making, and rule bases.
Recent advances in the fields of telecommunications, medical imaging and signal processing deal with signals that are inherently time-varying, nonlinear and complex-valued. The time-varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks. Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms so that the signals are represented at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. Meta-cognition equipped with self-regulated learning is known to be among the most effective human learning strategies. In this monograph, the principles of meta-cognition are introduced for complex-valued neural networks in both batch and sequential learning modes. For applications where the computation time of the training process is critical, a fast-learning complex-valued neural network, called the fully complex-valued relaxation network, is presented along with its learning algorithm. The presence of orthogonal decision boundaries helps complex-valued neural networks outperform real-valued networks in classification tasks, an aspect that is highlighted throughout. The performance of the various complex-valued neural networks is evaluated on a set of benchmark and real-world function approximation and real-valued classification problems.
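The orthogonal-decision-boundary property mentioned above can already be seen with a single complex weight: the real and imaginary parts of the weighted input define two perpendicular lines in the input plane. The NumPy sketch below is a toy illustration under that simplification, not any of the networks or learning algorithms developed in the monograph.

```python
import numpy as np

# A single complex "neuron": net = w * x for a complex weight w and complex input x.
# Reading x = x1 + i*x2 as the point (x1, x2), the two rules sign(Re(net)) and
# sign(Im(net)) split the plane along two straight decision boundaries.
w = 0.8 - 0.3j

def decisions(x1, x2):
    net = w * (x1 + 1j * x2)
    return np.sign(net.real), np.sign(net.imag)

print(decisions(1.0, 2.0))   # the two binary decisions for one input point

# Re(w*x) = 0 has normal vector (Re w, -Im w); Im(w*x) = 0 has normal (Im w, Re w).
# Their dot product vanishes, so the two boundaries are always orthogonal.
n_re = np.array([w.real, -w.imag])
n_im = np.array([w.imag,  w.real])
print(np.dot(n_re, n_im))    # -> 0.0
```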
One of the most successful methodologies to arise from the worldwide diffusion of Fuzzy Logic is Fuzzy Control. After the first attempts in the seventies, this methodology has been widely exploited for controlling many industrial components and systems. At the same time, and quite independently of Fuzzy Logic or Fuzzy Control, the birth of the Web has impacted almost all aspects of the computing discipline. The evolution of the Web, Web 2.0 and Web 3.0 has made scenarios of ubiquitous computing much more feasible; consequently, information technology has become thoroughly integrated into everyday objects and activities. What happens when Fuzzy Logic meets Web technology? Interesting results might come out, as you will discover in this book. The Fuzzy Markup Language is a child of this synergistic view, in which some technological issues of the Web are re-interpreted in the light of the transparent notion of Fuzzy Control, as discussed here. The concept of a Fuzzy Control conceived and modeled in terms of native web wisdom represents another step towards the broader picture of Pervasive Web Intelligence.
This book presents mathematical models of mob control with threshold (conformity) collective decision-making by the agents. Based on an analysis of the interconnection between the micro- and macromodels of active network structures, it considers static (deterministic, stochastic and game-theoretic) and dynamic (discrete- and continuous-time) models of mob control, and highlights models of informational confrontation. Many of the results are applicable not only to mob control problems but also to control problems arising in social groups, online social networks, etc. Aimed at researchers and practitioners, it is also a valuable resource for undergraduate and postgraduate students as well as doctoral candidates specializing in the field of collective behavior modeling.
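To give a flavour of what a threshold (conformity) decision rule means, here is a minimal discrete-time sketch in the spirit of classical Granovetter-type threshold models. The fully mixed population, the normal threshold distribution and the 20% initial seed are illustrative assumptions, not the specific micro- or macromodels analyzed in the book.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000                                           # number of agents in the "mob"
theta = np.clip(rng.normal(0.25, 0.10, n), 0, 1)   # individual conformity thresholds
frac = 0.20                                        # initially excited fraction

# Fully mixed threshold dynamics: an agent is active at the next step iff the current
# active fraction meets or exceeds its personal threshold, i.e. x_{t+1} = F(x_t)
# where F is the (empirical) distribution function of the thresholds.
for step in range(20):
    new_frac = float(np.mean(theta <= frac))
    print(f"step {step:2d}: active fraction = {new_frac:.3f}")
    if abs(new_frac - frac) < 1e-3:
        break
    frac = new_frac
```

With these parameters the active fraction cascades from the small seed towards full activation, which illustrates why the threshold distribution, rather than any individual agent, determines the macro-level outcome.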
This book chiefly presents a novel approach referred to as backward fuzzy rule interpolation and extrapolation (BFRI). BFRI allows observations that directly relate to the conclusion to be inferred or interpolated from other antecedents and conclusions. Based on the scale and move transformation interpolation, this approach supports both interpolation and extrapolation involving multiple hierarchical, intertwined fuzzy rules, each with multiple antecedents. As such, it offers a means of broadening the applications of fuzzy rule interpolation and fuzzy inference. The book deals with the general situation, in which there may be more than one antecedent value missing for a given problem. Two techniques, termed the parametric approach and the feedback approach, are proposed in an attempt to perform backward interpolation with multiple missing antecedent values. In addition, to further enhance the versatility and potential of BFRI, the backward fuzzy interpolation method is extended to support α-cut based interpolation by employing a fuzzy interpolation mechanism for multi-dimensional input spaces (IMUL). Finally, from an integrated application analysis perspective, experimental studies based upon a real-world scenario of terrorism risk assessment are provided in order to demonstrate the potential and efficacy of the hierarchical fuzzy rule interpolation methodology.
Posited by Professor Leon Chua at UC Berkeley more than 40 years ago, memristors, nonlinear elements in electrical circuitry, are set to revolutionize computing technology. Finally discovered by scientists at Hewlett-Packard in 2008, memristors generate huge interest because they can facilitate nanoscale, real-time computer learning, as well as for their potential to serve as instant memories. This edited volume bottles some of the excitement about memristors, providing a state-of-the-art overview of neuromorphic memristor theory, as well as its technological and practical aspects. Based on work presented at specialist memristor seminars organized by the editors, the volume takes readers from a general introduction to the fundamental concepts involved to specialized analyses of computational modeling, hardware, and applications. The latter include the ground-breaking potential of memristors in facilitating hybrid wetware-hardware technologies for in-vitro experiments. The book evinces, and devotes space to the discussion of, the socially transformative potential of memristors, which could be as pervasive as the invention of the silicon chip: machines that learn in the style of brains are a computational Holy Grail. With contributions from key players in a fast-moving field, this edited volume is the first to cover memristors in the depth needed to trigger the further advances that surely lie around the corner.
This book presents an overview of a variety of contemporary statistical, mathematical and computer science techniques used to further knowledge in the medical domain. The authors focus on applying data mining to the medical domain, including mining the sets of clinical data typically found in patients' medical records, image mining, medical mining, data mining and machine learning applied to generic genomic data, and more. The work also introduces the modeling of cancer cell behavior, multi-scale computational models, and simulations of blood flow through vessels using patient-specific models. The authors cover the different imaging techniques used to generate patient-specific models, which are then used in computational fluid dynamics software to analyze fluid flow. Case studies are provided at the end of each chapter. Professionals and researchers with quantitative backgrounds will find Computational Medicine in Data Mining and Modeling useful as a reference. Advanced-level students studying computer science, mathematics, statistics and biomedicine will also find this book valuable as a reference or secondary textbook.
This collection of papers, published in honour of Hector J. Levesque on the occasion of his 60th birthday, addresses a number of core areas in the field of knowledge representation and reasoning. In a broad sense, the book is about knowledge and belief, tractable reasoning, and reasoning about action and change. More specifically, the book contains contributions to Description Logics, the expressiveness of knowledge representation languages, limited forms of inference, satisfiability (SAT), the logical foundations of BDI architectures, only-knowing, belief revision, planning, causation, the situation calculus, the action language Golog, and cognitive robotics.
Counting is among the most elementary and frequent mental activities of human beings. Its results form a basis for decision-making in many situations and dimensions of our lives. This book presents a novel approach to the advanced and sophisticated case, called intelligent counting, in which the objects of counting are imprecisely, fuzzily specified. Formally, this reduces to counting in fuzzy sets, interval-valued fuzzy sets or I-fuzzy sets (Atanassov's intuitionistic fuzzy sets). The monograph is the first to show and emphasize that the presented methods of intelligent counting are human-consistent: they reflect and formalize real human counting procedures performed under imprecision and, possibly, incompleteness of information. Other applications of intelligent counting in various areas of intelligent systems and decision support are discussed, too. The whole presentation is self-contained, systematic, and equipped with many examples, figures and tables. Computer and information scientists, researchers, engineers and practitioners, applied mathematicians, and postgraduate students interested in information imprecision are the target readers.
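The simplest classical baseline for counting in a fuzzy set is the sigma-count, i.e. the sum of membership degrees. The snippet below (with hypothetical membership values) shows only this baseline and a thresholded variant; the human-consistent intelligent-counting methods developed in the monograph go well beyond it.

```python
# Sigma-count of a fuzzy set: the cardinality is the sum of membership degrees.
# Example question: "how many of these people are tall?" (hypothetical memberships).
membership = {"Ann": 1.0, "Bob": 0.8, "Cid": 0.4, "Dee": 0.1, "Eve": 0.0}

sigma_count = sum(membership.values())
print(sigma_count)          # -> 2.3, i.e. roughly "about two" people are tall

# A thresholded variant counts only elements whose membership reaches a cut level,
# one simple way of moving the count closer to how humans actually count.
alpha = 0.5
print(sum(1 for mu in membership.values() if mu >= alpha))   # -> 2
```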
Imagine yourself as a military officer in a conflict zone trying to identify the locations of weapons caches supporting road-side bomb attacks on your country's troops. Or imagine yourself as a public health expert trying to identify the location of contaminated water that is causing diarrheal diseases in a local population. Geospatial abduction is a new technique introduced by the authors that allows such problems to be solved. Geospatial Abduction provides the mathematics underlying geospatial abduction and the algorithms to solve such problems in practice; it has wide applicability and can be used by practitioners and researchers in many different fields. Real-world applications of geospatial abduction to military problems are included. Compelling examples drawn from domains as diverse as criminology, epidemiology and archaeology are covered as well. The book also includes access to a dedicated website on geospatial abduction hosted by the University of Maryland. Geospatial Abduction targets practitioners working in general AI, game theory, linear programming, data mining, machine learning, and more. Those working in the fields of computer science, mathematics, geoinformation science, and the geological and biological sciences will also find this book valuable.
Is meaningful communication possible between two intelligent parties who share no common language or background? In this work, a theoretical framework is proposed in which it is possible to address when and to what extent such semantic communication is possible: such problems can be rigorously addressed by explicitly focusing on the goals of the communication. Under this framework, it is possible to show that for many goals, communication without any common language or background is possible using universal protocols. This work should be accessible to anyone with an undergraduate-level knowledge of the theory of computation. The theoretical framework presented here is of interest to anyone wishing to design systems with flexible interfaces, either among computers or between computers and their users.
In Artificial Intelligence in Finance and Investing, authors Robert Trippi and Jae Lee explain this fascinating new technology in terms that portfolio managers, institutional investors, investment analysts, and information systems professionals can understand. Using real-life examples and a practical approach, this rare and readable volume discusses the entire field of artificial intelligence of relevance to investing, so that readers can realize the benefits, evaluate the features of existing or proposed systems, and ultimately construct their own systems. Topics include using Expert Systems for Asset Allocation, Timing Decisions, Pattern Recognition, and Risk Assessment; an overview of Popular Knowledge-Based Systems; construction of Synergistic Rule Bases for Securities Selection; incorporating the Markowitz Portfolio Optimization Model into Knowledge-Based Systems; Bayesian Theory and Fuzzy Logic System Components; Machine Learning in Portfolio Selection and Investment Timing, including Pattern-Based Learning and Genetic Algorithms; and Neural Network-Based Systems. To illustrate the concepts presented in the book, the authors conclude with a valuable practice session and an analysis of a typical knowledge-based system for investment management, K-FOLIO. For those who want to stay on the cutting edge of the "application" revolution, Artificial Intelligence in Finance and Investing offers a pragmatic introduction to the use of knowledge-based systems in securities selection and portfolio management.
The volume, complexity, and irregularity of computational data in modern algorithms and simulations necessitate an unorthodox approach to computing. Understanding the facets and possibilities of soft computing algorithms is necessary for the accurate and timely processing of complex data. Research Advances in the Integration of Big Data and Smart Computing builds on the available literature in the realm of Big Data while providing further research opportunities in this dynamic field. This publication provides the resources necessary for technology developers, scientists, and policymakers to adopt and implement new paradigms in computational methods across the globe. The chapters in this publication advance the body of knowledge on soft computing techniques through topics such as transmission control protocol for mobile ad hoc networks, feature extraction, comparative analysis of filtering techniques, big data in economic policy, and advanced dimensionality reduction methods.
This research volume presents a sample of recent contributions related to the issue of quality assessment for Web-based information in the context of information access, retrieval, and filtering systems. The advent of the Web and the uncontrolled process of document generation have raised the problem of assessing the quality of information on the Web, considering the nature of documents (texts, images, video, sound, and so on), the genre of documents (news, geographic information, ontologies, medical records, product records, and so on), the reputation of information sources and sites, and, last but not least, the actions performed on documents (content indexing, retrieval and ranking, collaborative filtering, and so on). The volume constitutes a compendium of heterogeneous approaches and sample applications focusing on specific aspects of quality assessment for Web-based information, aimed at researchers, PhD students and practitioners carrying out research in the fields of Web information retrieval and filtering, Web information mining, and information quality representation and management.
This volume introduces new approaches in the area of intelligent control from the viewpoints of both theory and application. It consists of eleven contributions by prominent authors from all over the world and an introductory chapter. The volume is closely connected to another volume entitled "New Approaches in Intelligent Image Analysis" (Eds. Roumen Kountchev and Kazumi Nakamatsu). The chapters of this volume are self-contained and include a summary, conclusions and future work. Some of the chapters introduce specific case studies of various intelligent control systems, while others focus on intelligent-theory-based control techniques and their applications. A notable feature of this volume is that three chapters deal with intelligent control based on paraconsistent logics.
Complex Automated Negotiations represent an important, emerging area in the field of Autonomous Agents and Multi-Agent Systems. Automated negotiations can be complex, since many factors characterize such negotiations. These factors include the number of issues, dependencies between these issues, representation of utilities, the negotiation protocol, the number of parties in the negotiation (bilateral or multi-party), time constraints, etc. Software agents can support the automation or simulation of such complex negotiations on behalf of their owners, and can provide them with efficient bargaining strategies. To realize such complex automated negotiation, advanced Artificial Intelligence technologies have to be incorporated, including search, CSP, graphical utility models, Bayes nets, auctions, utility graphs, and prediction and learning methods. Applications could include e-commerce tools, decision-making support tools, negotiation support tools, collaboration tools, etc. This book aims to provide a description of the new trends in agent-based, complex automated negotiation, based on papers from leading researchers. Moreover, it gives an overview of the latest scientific efforts in this field, such as the platforms and strategies of automated negotiating techniques.
This book contains research contributions from leading global scholars in nature-inspired computing. It includes comprehensive coverage of each respective topic, while also highlighting recent and future trends. The contributions provide readers with a snapshot of the state of the art in the field of nature-inspired computing and its applications. The book focuses on current research while highlighting empirical results alongside theoretical concepts, providing a comprehensive reference for students, researchers, scholars, professionals and practitioners in the fields of Advanced Artificial Intelligence, Nature-Inspired Algorithms and Soft Computing.
The success of a BCI system depends as much on the system itself as on the user's ability to produce distinctive EEG activity. BCI systems can be divided into two groups according to the placement of the electrodes used to detect and measure the firing of neurons in the brain: invasive systems, in which electrodes are inserted directly into the cortex for single-cell or multi-unit recording, or placed on the surface of the cortex (or dura) for electrocorticography (ECoG); and noninvasive systems, in which sensors are placed on the scalp and electroencephalography (EEG) or magnetoencephalography (MEG) is used to detect neuron activity. The book is divided into three parts. The first part covers the basic concepts and an overview of Brain-Computer Interfaces. The second part describes new theoretical developments of BCI systems. The third part presents views on real applications of BCI systems.
This volume contains a contemporary, integrated description of the processes of language. These range from fast scales (fractions of a second) to slow ones (over a million years). The contributors, all experts in their fields, address language in the brain, the production of sentences and dialogues, language learning, transmission and evolutionary processes that happen over centuries or millennia, the relation between language and genes, the origins of language, self-organization, and language competition and death. The book as a whole will help to show how processes at different scales affect each other, thus presenting language as a dynamic, complex and profoundly human phenomenon.