The objective of this book is two-fold. Firstly, it is aimed at bringing together key research articles concerned with methodologies for knowledge discovery in databases and their applications. Secondly, it also contains articles discussing fundamentals of rough sets and their relationship to fuzzy sets, machine learning, management of uncertainty, and systems of logic for formal reasoning about knowledge. Applications of rough sets in different areas such as medicine, logic design, image processing, and expert systems are also represented. The articles included in the book are based on selected papers presented at the International Workshop on Rough Sets and Knowledge Discovery held in Banff, Canada, in 1993. The primary methodological approach emphasized in the book is the mathematical theory of rough sets, a relatively new branch of mathematics concerned with the modeling and analysis of classification problems with imprecise, uncertain, or incomplete information. The methods of the theory of rough sets have applications in many sub-areas of artificial intelligence including knowledge discovery, machine learning, formal reasoning in the presence of uncertainty, knowledge acquisition, and others. This spectrum of applications is reflected in this book, where the articles, although centered around knowledge discovery problems, touch on a number of related issues. The book is intended to provide important reference material for students, researchers, and developers working in the areas of knowledge discovery, machine learning, reasoning with uncertainty, adaptive expert systems, and pattern classification.
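To make the central notion of rough set theory concrete: a concept that cannot be defined exactly is bracketed between a lower approximation (objects that certainly belong to it) and an upper approximation (objects that possibly belong to it), computed from the equivalence classes of an indiscernibility relation. A minimal Python sketch, with a toy universe and equivalence classes invented purely for illustration:

```python
def approximations(partition, target):
    """Compute the rough-set lower and upper approximations of `target`.

    `partition` is a list of equivalence classes (sets) induced by an
    indiscernibility relation; `target` is the concept to approximate.
    """
    lower = set()  # objects certainly in the concept
    upper = set()  # objects possibly in the concept
    for block in partition:
        if block <= target:   # whole class contained in the concept
            lower |= block
        if block & target:    # class overlaps the concept
            upper |= block
    return lower, upper

# Toy example: six objects, indiscernibility classes {1,2}, {3,4}, {5,6}
blocks = [{1, 2}, {3, 4}, {5, 6}]
low, up = approximations(blocks, {1, 2, 3})
# low == {1, 2}: certainly in; up == {1, 2, 3, 4}: possibly in
```

The gap between the two approximations (here {3, 4}) is the boundary region, the set of objects about which the available attributes cannot decide.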
Software design patterns are known to play a vital role in enhancing the quality of software systems while reducing development time and cost. However, the use of these design patterns has also been known to introduce problems that can significantly reduce the stability, robustness, and reusability of software. This book introduces a new process for creating software design patterns that leads to highly stable, reusable, and cost-effective software. The basis of this new process is a topology of software patterns called knowledge maps. This book provides readers with a detailed view of the art and practice of creating meaningful knowledge maps. It demonstrates how to classify software patterns within knowledge maps according to their application rationale and nature. It provides readers with a clear methodology in the form of step-by-step guidelines, heuristics, and quality factors that simplify the process of creating knowledge maps. This book is designed to allow readers to master the basics of knowledge maps from their theoretical aspects to practical application. It begins with an overview of knowledge map concepts and moves on to knowledge map goals, capabilities, stable design patterns, development scenarios, and case studies. Each chapter of the book concludes with an open research issue, review questions, exercises, and a series of projects.
Knowledge representation research is not only formal, it is also descriptive and normative. Its aim is to implement a formal system which captures a practically relevant body of cognitive faculties employed by humans and capitalizes on its technical strength to extend human knowledge representation and reasoning capabilities. In this monograph, the author develops formalisms for his own notion of a vivid knowledge representation and reasoning system, characterized by the presence of two kinds of negation (weak and strong) and the requirements of restricted reflexivity, constructivity, and non-explosiveness. The book is based on work carried out within an interdisciplinary research project at the Free University of Berlin.
This volume constitutes the proceedings of the 4th International Conference on Database and Expert Systems Applications (DEXA), held in Prague, Czech Republic, in September 1993. Traditionally, the objective of the DEXA conferences is to serve as an international forum for the discussion and exchange of research results and practical experience among theoreticians and professionals working in the field of database and artificial intelligence technologies. Despite the fact that the conference title mentions the applications aspect explicitly, the theoretical and the practical points of view in the field are well balanced in the program of DEXA'93. The growing importance of the conference series is underlined by the remarkably high number of 269 submissions and by the support given by renowned organizations. DEXA'93 is held for the first time in an East-European country, and is thus contributing substantially to the advancement of East-West scientific cooperation in the field of database and AI systems. These proceedings contain the 78 contributed papers carefully selected by an international program committee with the support of a large number of subreferees. The volume is organized in sections on data models, distributed databases, advanced database aspects, database optimization and performance evaluation, spatial and geographic databases, expert systems and knowledge engineering, legal systems, other database and artificial intelligence applications, software engineering, and hypertext/hypermedia and user interfaces.
The Database and Expert Systems Applications - DEXA - conferences are mainly oriented to establishing a state-of-the-art forum on database and expert system applications. But practice without theory has no sense, as Leonardo said five centuries ago. In this conference we attempt a compromise between these two complementary aspects. A total of 5 sessions are application-oriented, ranging from classical applications to more unusual ones in software engineering. Recent research aspects in databases, such as activity, deductivity and/or object orientation, are also present in DEXA 92, as well as the implications of the new "data models" such as the OO model, the deductive model, etc., included in the modelling sessions. Other areas of interest, such as hypertext and multimedia applications, together with the classical field of information retrieval, are also considered. Finally, implementation aspects are reflected in very concrete fields. A total of nearly 200 papers from all over the world were submitted to DEXA 92; only 90 could be accepted. A poster session has also been established. DEXA 90 was held in Vienna, Austria; DEXA 91 in Berlin, Germany; and DEXA 92 will take place in Valencia, Spain, where we are celebrating the discovery of the New World just five centuries ago, in Leonardo's age. Both the quality of the conference and the compromise between practice and theory are due to the credit of all the DEXA 92 authors.
The working conference dealt with recent developments in the field of modelling and optimization and with knowledge-based decision support systems. This contributed to the realization of the aims of working group 7.6, which are: to promote theoretical research in the field of optimization, including mathematical programming and optimal control; to encourage the development of sophisticated knowledge-based systems in which refined optimization models and algorithms are used; to contribute to the exchange and dissemination of information and collective experience among the interested groups and individuals; and to support the practical application of such systems in control, engineering, industry, economy, etc. A selection of papers is included in this proceedings volume because they reflect the current state of research in areas of interest to the field of knowledge-based decision support systems ((KB)DSS), and/or they are of value for the dissemination and exchange of information related to research topics of interest, and/or they describe relevant practical experience related to designing, building, implementing, and using (KB)DSS.
Implementation of Smart Healthcare Systems using AI, IoT, and Blockchain provides imperative research on the development of data fusion and analytics for healthcare and their application to current issues in a real-time environment. While highlighting IoT, bio-inspired computing, big data, and evolutionary programming, the book explores various concepts and theories of data fusion, IoT, and big data analytics. It also investigates the challenges and methodologies required to integrate data from multiple heterogeneous sources and analytical platforms in the healthcare sector. This book is unique in that it provides useful insights into the implementation of a smart and intelligent healthcare system in a post-COVID-19 world, using enabling technologies like artificial intelligence, the Internet of Things, and blockchain to provide a transparent, faster, secure, and privacy-preserving healthcare ecosystem for the masses.
With the SPARC (Scalable Processor ARChitecture) architecture and system software as the underlying foundation, Sun Microsystems is delivering a new model of computing, easy workgroup computing, to enhance the way people work, automating processes across groups, departments, and teams locally and globally. Sun and a large and growing number of companies in the computer industry have embarked on a new approach to meet the needs of computer users and system developers in the 1990s. Originated by Sun, the approach targets users who need a range of compatible computer systems with a variety of application software and want the option to buy those systems from a choice of vendors. The approach also meets the needs of system developers to be part of a broad, growing market of compatible systems and software: developers who need to design products quickly and cost-effectively. The SPARC approach ensures that computer systems can be easy to use for all classes of users and members of the workgroup: end users, system administrators, and software developers. For the end user, the SPARC technologies facilitate system set-up and the daily use of various applications. For the system administrator supporting the computer installation, setting up and monitoring the network are easier. For the software developer, there are advanced development tools and support. Furthermore, the features of the SPARC hardware and software technologies ensure that SPARC systems and applications play an important role in the years to come.
Complex machines can fail in complex ways. Often the nature of the fault can be determined only through the interpretation of machine behavior over time. This book presents a novel approach to the representation and recognition of temporally distributed symptoms. Existing diagnostic expert systems usually operate under a set of simplifying assumptions that limit their applicability. A common assumption is that the device to be diagnosed has a static behavior, with the relation between inputs and outputs constant over time. In most realistic application domains this assumption is violated, and both the normal, intended function of the device and the potential malfunctions are complex behaviors over time. This book addresses the problem of systematically treating information about fault symptoms that are spread out over periods of time. These symptoms are characterized by a specific order of events, and in the general case a single snapshot of the device state does not suffice to recognize them. Instead one has to plan a measurement sequence that consists of several observations at more than one time point. Starting with a classification of various types of dynamic faulty behavior, the author identifies temporally distributed symptoms (TDSs) and designs a representation language that allows TDSs to be specified in a declarative manner. The definition of a successful match of a measurement sequence against a TDS specification is operationalized as an algorithm which plans such an observation sequence based on the TDS specification. The author demonstrates that his novel solution is a generic, paradigm-independent building block for diagnostic expert systems by embedding it into the frameworks of both an associative and a model-based diagnostic system. The book will be valuable both for researchers working on applications of temporal reasoning and for prospective users of technical expert systems.
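As a simplified illustration of the kind of matching the book formalizes (the book's specification language and measurement-planning algorithm are far richer), a temporally distributed symptom can, in its most basic form, be seen as an ordered sequence of events that must occur in order, though not necessarily adjacently, within a time-ordered trace of observations. The event names below are invented:

```python
def matches_tds(pattern, observations):
    """Return True if the events in `pattern` occur in the given order
    (not necessarily adjacently) within the time-ordered `observations`.

    Membership tests against an iterator consume it, so each event is
    searched for only in the part of the trace after the previous match.
    """
    remaining = iter(observations)
    return all(event in remaining for event in pattern)

# Hypothetical fault signature: pressure spike, then valve close, then
# temperature drop, in that temporal order.
sig = ["pressure_spike", "valve_close", "temp_drop"]
trace = ["startup", "pressure_spike", "noise", "valve_close", "temp_drop"]
ok = matches_tds(sig, trace)   # order preserved -> match
bad = matches_tds(sig, ["valve_close", "pressure_spike", "temp_drop"])
```

A single snapshot of `trace` could not recognize the signature; the match depends on the order of events, which is exactly why a sequence of observations at several time points is needed.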
The Database and Expert Systems Applications - DEXA - conferences are dedicated to providing an international forum for the presentation of applications in the database and expert systems field, for the exchange of ideas and experiences, and for defining requirements for the future systems in these fields. After the very promising DEXA 90 in Vienna, Austria, we hope to have successfully established with this year's DEXA 91 a stage where scientists from diverse fields interested in application-oriented research can present and discuss their work. This year there was a total of more than 250 submitted papers from 28 different countries, on all continents. Only 98 of the papers could be accepted. The collection of papers in these proceedings offers a cross-section of the issues facing the area of databases and expert systems, i.e., topics of basic research interest on the one hand and questions occurring when developing applications on the other. Major credit for the success of the conference goes to all of our colleagues who submitted papers for consideration and to those who organized and chaired the panel sessions. Many persons contributed numerous hours to organize this conference. The names of most of them will appear on the following pages. In particular we wish to thank the Organization Committee Chairmen Johann Gordesch, A Min Tjoa, and Roland Wagner, who also helped establish the program. Special thanks also go to Gabriella Wagner and Anke Ruckert. Dimitris Karagiannis, General Conference Chairman.
The development of database technology has now reached the stage of deductive database systems, which use Horn clauses for defining relations. An important characteristic of these systems is the clear separation of logic and control. However, the programmer cannot affect the control part of a deductive database system. To eliminate this deficiency, this monograph presents a so-called expert deductive database system that allows explicit control of the deduction process. The system consists of an object-level describing the logical aspects of a problem and of a meta-level that contains application-specific control information affecting the object-level deduction process. For example, object-level rules can be disregarded, and some tuples deduced at the object-level can be preferred to others. Besides the architecture of this system, the book also identifies some important possibilities of deduction control, which are explained by characteristic examples.
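To illustrate the separation of object-level deduction from meta-level control described above (this sketch is an invented simplification, not the system from the book): a naive forward-chaining evaluator over propositional Horn rules, where a meta-level set of disabled rule names lets control information disregard object-level rules without changing them.

```python
def forward_chain(facts, rules, disabled=frozenset()):
    """Naive forward chaining over propositional Horn rules.

    `rules` maps a rule name to a (body, head) pair; the meta-level
    `disabled` set suppresses named rules, mimicking explicit control
    of the object-level deduction process.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for name, (body, head) in rules.items():
            if name in disabled or head in known:
                continue
            if set(body) <= known:  # all premises already derived
                known.add(head)
                changed = True
    return known

# Hypothetical rule base: r1 derives b from a, r2 derives c from b.
rules = {"r1": (["a"], "b"), "r2": (["b"], "c")}
full = forward_chain({"a"}, rules)               # object-level only
controlled = forward_chain({"a"}, rules, {"r2"}) # meta-level disables r2
```

With no control information the closure is {a, b, c}; disabling r2 at the meta-level cuts the derivation to {a, b}, the kind of effect the monograph's control layer makes available declaratively.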
Welcome, Sun users. This guide will be your key to understanding your Sun workstation. Within these pages you will find out how to use all of the basic functions and capabilities in a minimal amount of time. From SunView to security, from backups to permissions, you will find out what you need quickly. This book is not intended to replace the current Sun documentation. It is a fast learning tool for you to become a functional Sun user quickly. Each chapter will cover the basic information needed to allow you to use that area efficiently. The chapters on UNIX file systems and permissions are for beginners' reference and will aid in learning the file system. All examples will refer to the machine name tahoe. This is done to make the references to a system prompt consistent and avoid confusion. You should use this book in conjunction with the Sun manual pages included with your system. When referencing system commands or functions, the manual pages will give you the additional capabilities which will prove invaluable in the future. I hope you enjoy this book and your new Sun workstation.
Although no one is, probably, too enthused about the idea, it is a fact that the development of most empirical sciences to a great extent depends on the development of data analysis methods and techniques, which, due to the necessity of applying computers for that purpose, means that it practically depends on the advancement and orientation of computational statistics. Every other year the International Association for Statistical Computing sponsors the organization of meetings of individuals professionally involved in computational statistics. Since these meetings attract professionals from all over the world, they are a good sample for estimating trends in this area, which some believe is statistics proper while others claim it is computer science. It seems, though, that an increasing number of colleagues treat it as an independent scientific, or at least technical, discipline. This volume contains six invited papers, 41 contributed papers and, finally, two papers which are, formally, software descriptions, but it was agreed by the Program Committee that they should be included in a separate section entitled "Notes about new developments in statistical software," due to their special significance for current trends in computational statistics.
Use and development of database and expert systems can be found in all fields of computer science. The aim of this book is to present a large spectrum of database and expert systems that are already implemented or currently under development. Contributions cover new requirements, concepts for implementations (e.g. languages, models, storage structures), management of metadata, system architectures, and experiences gained by using traditional databases in as many areas of application as possible (at least in the fields listed). The aim of the book is to inspire a fruitful dialogue between development in practice, users of database and expert systems, and scientists working in the field.
This book springs from a conference held in Stockholm in May/June 1988 on Culture, Language and Artificial Intelligence. It assembled more than 300 researchers and practitioners in the fields of technology, philosophy, history of ideas, literature, linguistics, social science, etc. It was an initiative from the Swedish Center for Working Life, based on the project AI-Based Systems and the Future of Language, Knowledge and Responsibility in Professions within the COST 13 programme of the European Commission. Participants in the conference, or in some cases researchers related to its aims, were chosen to contribute to this book. It was preceded by Knowledge, Skill and Artificial Intelligence (ed. B. Göranzon and I. Josefson, Springer-Verlag, London, 1988) and will be followed by Dialogue and Technology (ed. M. Florin and B. Göranzon, Springer-Verlag, London, 1990). The contributors' thinking in this field varies greatly; so do their styles of writing. For example, contributors have varied in their choice of 'he' or 'he/she' for the third person. No distinction is intended, but chapters have been left with the original usage to avoid extensive changes. Similarly, individual contributors' preferences as to notes or reference lists have been followed. We want to thank our researcher Satinder P. Gill for excellent work with summaries and indexes, and Sandi Irvine of Springer-Verlag for eminent editorial work.
In this book the consistent use of probability theory is proposed for handling uncertainty in expert systems. It is shown that methods violating this principle may have dangerous consequences (e.g., the Dempster-Shafer rule and the method used in MYCIN). The necessity of certain requirements for correctly combining uncertain information in expert systems is demonstrated, and suitable rules are provided. The possibility is taken into account that interval estimates are given instead of exact information about probabilities. For combining information containing interval estimates, rules are provided which are useful in many cases.
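As a minimal illustration of the probabilistic stance the book advocates (the book's own combination rules, including those for interval estimates, are more general): under conditional independence, pieces of uncertain evidence can be combined correctly by Bayes' rule in odds form, multiplying the prior odds of a hypothesis by the likelihood ratio of each piece of evidence. The numbers below are invented:

```python
def combine_evidence(prior, likelihood_ratios):
    """Update P(H) given independent pieces of evidence.

    Each likelihood ratio is P(E_i | H) / P(E_i | not H); under
    conditional independence the posterior odds equal the prior odds
    multiplied by all the ratios (Bayes' rule in odds form).
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)  # back from odds to probability

# Prior P(H) = 0.1; two independent findings, each 4x more likely
# under H than under not-H.
p = combine_evidence(0.1, [4.0, 4.0])
# p == 16/25 = 0.64
```

Unlike ad hoc certainty-factor updates, this combination is derived from the axioms of probability, which is the kind of correctness requirement the book insists on.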
This Informatik-Fachbericht volume is the proceedings of the 6th ITG/GI conference "Kommunikation in verteilten Systemen" (Communication in Distributed Systems), held in Stuttgart, February 22-24, 1989, part of a conference series taking place every two years. It contains 60 original contributions from the areas of communication services and protocols, standardization, communication and transaction mechanisms in distributed systems, load and function sharing in heterogeneous computer networks, description and development methods and tools, high-speed local area network technology, mobile radio networks, ATM-based broadband switching, interconnection of heterogeneous networks, network diagnosis, traffic measurement, network management, modeling and performance evaluation, network planning, distributed databases, office automation, manufacturing automation, and individual road traffic. The book provides an overview of the state of technology and research in the field of communication in distributed systems.
This is a collection of papers from the Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems held at the University of Warwick on 22-23 September 1988. The papers cover a variety of subjects in these areas and illustrate different approaches to modeling safety-critical systems. Important notions of time, synchrony, redundancy, and replication are examined using assertional reasoning, temporal logic, and the logics of knowledge. The volume will be invaluable to researchers in formal modeling of concurrency, real-time and fault-tolerance, and to software engineers in safety-critical applications.
This book contains the papers presented at the 2nd IPMU Conference, held in Urbino (Italy), on July 4-7, 1988. The theme of the conference, Management of Uncertainty and Approximate Reasoning, is at the heart of many knowledge-based systems and a number of approaches have been developed for representing these types of information. The proceedings of the conference provide, on one hand, the opportunity for researchers to have a comprehensive view of recent results and, on the other, bring to the attention of a broader community the potential impact of developments in this area for future generation knowledge-based systems. The main topics are the following: frameworks for knowledge-based systems: representation scheme, neural networks, parallel reasoning schemes; reasoning techniques under uncertainty: non-monotonic and default reasoning, evidence theory, fuzzy sets, possibility theory, Bayesian inference, approximate reasoning; information theoretical approaches; knowledge acquisition and automated learning.
This volume contains the contributions to the conference "Informationsbedarfsermittlung und -analyse für den Entwurf von Informationssystemen" (Information Requirements Determination and Analysis for the Design of Information Systems), held by the EMISA special interest group of the Gesellschaft für Informatik (GI) at the University of Linz in July 1987. The EMISA group is concerned with methods and tools for the design of information systems, and previous meetings of the group have examined particular phases or problems of the design process in detail. The Linz conference was devoted above all to the questions that arise at the very beginning of information systems design. Although the phase of determining and analyzing information requirements is of central importance in the life cycle of an information system, it is still mastered far less well than the development steps that follow. The conference therefore gave practitioners and researchers the opportunity to identify the open problems and to discuss existing approaches. The program covers the whole range from experience with methods already in practical use to presentations of the current state of development of new methods.
The present volume contains edited versions of the communications presented at an International Workshop on "Expert Systems in Production Engineering," held in Spa, Belgium, in 1986. Introductory papers on artificial intelligence and expert systems are complemented by case studies of expert systems in practice, primarily in the area of mechanical engineering, and by discussions of the possibilities and the limitations of expert systems.
The Seventh International Conference on Automated Deduction was held May 14-16, 1984, in Napa, California. The conference is the primary forum for reporting research in all aspects of automated deduction, including the design, implementation, and applications of theorem-proving systems, knowledge representation and retrieval, program verification, logic programming, formal specification, program synthesis, and related areas. The presented papers include 27 selected by the program committee, an invited keynote address by Jörg Siekmann, and an invited banquet address by Patrick Suppes. Contributions were presented by authors from Canada, France, Spain, the United Kingdom, the United States, and West Germany. The first conference in this series was held a decade earlier in Argonne, Illinois. Following the Argonne conference were meetings in Oberwolfach, West Germany (1976), Cambridge, Massachusetts (1977), Austin, Texas (1979), Les Arcs, France (1980), and New York, New York (1982). Program Committee: P. Andrews (CMU); W.W. Bledsoe (U. Texas), past chairman; L. Henschen (Northwestern); G. Huet (INRIA); D. Loveland (Duke), past chairman; R. Milner (Edinburgh); R. Overbeek (Argonne); T. Pietrzykowski (Acadia); D. Plaisted (U. Illinois); V. Pratt (Stanford); R. Shostak (SRI), chairman; J. Siekmann (U. Kaiserslautern); R. Waldinger (SRI). Local Arrangements: R. Schwartz (SRI).