This volume contains the complete proceedings of a NATO Advanced Study Institute on various aspects of the reliability of electronic and other systems. The aim of the Institute was to bring together specialists in this subject. An important outcome of this Conference, as many of the delegates have pointed out to me, was the complementing of theoretical concepts with practical applications in both software and hardware. The reader will find papers on the mathematical background, on reliability problems in establishments where system failure may be hazardous, on reliability assessment in mechanical systems, and also on life cycle cost models and spares allocation. The proceedings contain the texts of all the lectures delivered and also verbatim accounts of panel discussions on subjects chosen from a wide range of important issues. In this introduction I will give a short account of each contribution, stressing what I feel are the most interesting topics introduced by a lecturer or a panel member. To visualise better the extent and structure of the Institute, I present a tree-like diagram showing the subjects which my co-directors and I would have wished to include in our deliberations (Figures 1 and 2). The names of our lecturers appear underlined under suitable headings. It can be seen that we have managed to cover most of the issues which seemed important to us. [Figures 1 and 2: a tree diagram of system effectiveness, branching into performance, safety, reliability, maintenance and logistic support.]
This book is about time-domain modelling, stability, stabilization, control design and filtering for JTDS. It gives readers a thorough understanding of the basic mathematical analysis and fundamentals, offers a straightforward treatment of the different topics and provides broad coverage of the recent methodologies.
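As a minimal illustration of the kind of time-domain model such books typically analyse (a generic form assumed here, not necessarily the notation or the exact system class treated in this volume), a linear system with a single constant state delay can be written as

```latex
\[
\dot{x}(t) = A\,x(t) + A_d\,x(t-\tau) + B\,u(t), \qquad
x(\theta) = \phi(\theta), \quad \theta \in [-\tau, 0],
\]
```

where x is the state, u the control input, τ > 0 the delay, and φ the initial function; stability, stabilization, control design and filtering questions are then posed directly on this time-domain description.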
Cellular automata are fully discrete dynamical systems with dynamical variables defined at the nodes of a lattice and taking values in a finite set. Application of a local transition rule at each lattice site generates the dynamics. The interpretation of systems with a large number of degrees of freedom in terms of lattice gases has received considerable attention recently due to the many applications of this approach, e.g. for simulating fluid flows under nearly realistic conditions, for modeling complex microscopic natural phenomena such as diffusion-reaction or catalysis, and for analysis of pattern-forming systems. The discussion in this book covers aspects of cellular automata theory related to general problems of information theory and statistical physics, lattice gas theory, direct applications, problems arising in the modeling of microscopic physical processes, complex macroscopic behavior (mostly in connection with turbulence), and the design of special-purpose computers.
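To make the idea of a local transition rule concrete, here is a minimal sketch (mine, not taken from the book) of a one-dimensional cellular automaton; the rule number 110 and the lattice size are arbitrary example choices:

```python
# Minimal one-dimensional cellular automaton sketch (illustrative, not from the book):
# each cell holds 0 or 1, and a local rule maps every (left, centre, right)
# neighbourhood to the cell's next value. Rule 110 is used purely as an example.
RULE = 110

def step(cells):
    """Apply the local transition rule once, with periodic boundary conditions."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value in 0..7
        nxt.append((RULE >> neighbourhood) & 1)               # look up the rule bit
    return nxt

if __name__ == "__main__":
    cells = [0] * 31
    cells[15] = 1                      # single seed cell in the middle
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```

Each cell's next state depends only on its own value and those of its immediate neighbours; this locality is precisely the property exploited when cellular automata are used as lattice gases for simulating physical processes.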
The book provides an in-depth understanding of the fundamentals of superconducting electronics and the practical considerations for the fabrication of superconducting electronic structures. Additionally, it covers in detail the opportunities afforded by superconductivity for uniquely sensitive electronic devices and illustrates how these devices (in some cases employing high-temperature, ceramic superconductors) can be applied in analog and digital signal processing, laboratory instruments, biomagnetism, geophysics, nondestructive evaluation and radioastronomy. Improvements in cryocooler technology for application to cryoelectronics are also covered. This is the first book in several years to treat the fundamentals and applications of superconducting electronics in a comprehensive manner, and it is the very first book to consider the implications of high-temperature, ceramic superconductors for superconducting electronic devices. Not only does this new class of superconductors create new opportunities, but recently impressive milestones have been reached in superconducting analog and digital signal processing which promise to lead to a new generation of sensing, processing and computational systems. The 15 chapters are authored by acknowledged leaders in the fundamental science and in the applications of this increasingly active field, and many of the authors provide a timely assessment of the potential for devices and applications based upon ceramic-oxide superconductors or hybrid structures incorporating these new superconductors with other materials. The book takes the reader from a basic discussion of applicable (BCS and Ginzburg-Landau) theories and tunneling phenomena, through the structure and characteristics of Josephson devices and circuits, to applications that utilize the world's most sensitive magnetometer, most sensitive microwave detector, and fastest arithmetic logic unit.
Visualization in scientific computing is getting more and more attention from many people. Especially in relation with the fast increase of computing power, graphic tools are required in many cases for interpreting and presenting the results of various simulations, or for analyzing physical phenomena. The Eurographics Working Group on Visualization in Scientific Computing has therefore organized a first workshop at Electricite de France (Clamart) in cooperation with ONERA (Chatillon). A wide range of papers were selected in order to cover most of the topics of interest for the members of the group, for this first edition, and 26 of them were presented in two days. Subsequently 18 papers were selected for this volume. The presentations were organized in eight small sessions, in addition to discussions in small subgroups. The first two sessions were dedicated to the specific needs for visualization in computational sciences: the need for graphics support in large computing centres and high performance networks, needs of research and education in universities and academic centres, and the need for effective and efficient ways of integrating numerical computations or experimental data and graphics. Three of those papers are in Part I of this book. The third session discussed the importance and difficulties of using standards in visualization software, and was related to the fourth session where some reference models and distributed graphics systems were discussed. Part II has five papers from these sessions.
The NATO Advanced Research Workshop on Signal Processing and Pattern Recognition in Nondestructive Evaluation (NDE) of Materials was held August 19-22, 1987 at the Manoir St-Castin, Lac Beauport, Quebec, Canada. Modern signal processing, pattern recognition and artificial intelligence have been playing an increasingly important role in improving nondestructive evaluation and testing techniques. The cross-fertilization of the two major areas can lead to major advances in NDE as well as presenting a new research area in signal processing. With this in mind, the Workshop provided a good review of progress and comparison of potential techniques, as well as constructive discussions and suggestions for effective use of modern signal processing to improve flaw detection, classification and prediction, as well as material characterization. This Proceedings volume includes most presentations given at the Workshop. This publication, like the meeting itself, is unique in the sense that it provides extensive interactions among the interrelated areas of NDE. The book starts with research advances on inverse problems and then covers different aspects of digital waveform processing in NDE and eddy current signal analysis. These are followed by four papers on pattern recognition and AI in NDE, and five papers on image processing and reconstruction in NDE. The last two papers deal with parameter estimation problems. Though the list of papers is not extensive, as the field of NDE signal processing is very new, the book has an excellent collection of both tutorial and research papers in this exciting new field.
Software Diversity is one of the fault-tolerance means to achieve dependable systems. In this volume, some experimental systems as well as real-life applications of software diversity are presented. The history, the current state of the art and future perspectives are given. Although this technique is used quite successfully in industrial applications, further research is necessary to solve some open questions. We hope to report on new results and applications in another volume of this series within some years. Acknowledgements: The idea of the workshop was put forward by the chairpersons of IFIP WG 10.4, J.-C. Laprie, J. F. Meyer and Y. Tohma, in January 1986, and the editor of this volume was asked to organize the workshop. This volume was edited with the assistance of the editors of the series, A. Avižienis, H. Kopetz and J.-C. Laprie, who also had the function of reviewers. Karlsruhe, October 1987, U. Voges, Editor. Table of Contents: 1. Introduction (U. Voges); 2. Railway Applications: ERICSSON Safety System for Railway Control (G. Hagelin); 3. Nuclear Applications: Use of Diversity in Experimental Reactor Safety Systems (U. Voges), The PODS Diversity Experiment (P. G. Bishop); 4. Flight Applications: AIRBUS and ATR System Architecture and Specification (P. Traverse); 5. University Research: Tolerating Software Design Faults in a Command and Control System (T. Anderson, P. A. Barrett, D. N. Halliwell, M. R. Moulding); DEDIX 87 - A Supervisory System for Design Diversity Experiments at UCLA ...
For the editors of this book, as well as for many other researchers in the area of fault-tolerant computing, Dr. William Caswell Carter is one of the key figures in the formation and development of this important field. We felt that the meeting of IFIP Working Group 10.4 at Baden, Austria, in June 1986, which coincided with an important step in Bill's career, was an appropriate occasion to honor Bill's contributions and achievements by organizing a one-day "Symposium on the Evolution of Fault-Tolerant Computing" in honor of William C. Carter. The Symposium, held on June 30, 1986, brought together a group of eminent scientists from all over the world to discuss the evolution, the state of the art, and the future perspectives of the field of fault-tolerant computing. Historic developments in academia and industry were presented by individuals who themselves have actively been involved in bringing them about. The Symposium proved to be a unique historic event, and these Proceedings, which contain the final versions of the papers presented at Baden, are an authentic reference document.
In recent years, increases in the amount and changes in the distribution of air traffic have been very dramatic and are continuing. The need for changes in the current air traffic systems is equally clear. While automation is generally accepted as a method of improving system safety and performance, high levels of automation in complex human-machine systems can have a negative effect on total system performance and have been identified as contributing factors in many accidents and failures. Those responsible for designing the advanced air traffic control systems to be implemented throughout the alliance during the next decade need to be aware of recent progress concerning the most effective application of automation and artificial intelligence in human-computer systems. This volume gives the proceedings of the NATO Advanced Study Institute held in Maratea, Italy, June 18-29, 1990, at which these issues were discussed.
It has been almost 5 years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, which was founded as an independent organization in March 1988, has been taking over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA). It has been expanding various operations to globalize the organization's activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is truly a remarkable historical event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open architecture concept, and as the TRON machine should be able to work with various foreign languages, TRON is targeted to be used internationally. In order for us to create a complete TRON world, although there are several TRON products already on the market, continuous and aggressive participation from all members, together with concentration on further development, is indispensable. We, the TRON promoters, are much encouraged by such a driving force.
The origin of the development of integrated circuits up to VLSI is found in the invention of the transistor, which made it possible to achieve the action of a vacuum tube in a semiconducting solid. The structure of the transistor can be constructed by a manufacturing technique such as the introduction of a small amount of an impurity into a semiconductor and, in addition, most transistor characteristics can be improved by a reduction of dimensions. These are all important factors in the development. Actually, the microfabrication of the integrated circuit can be used for two purposes, namely to increase the integration density and to obtain an improved performance, e.g. a high speed. When one of these two aims is pursued, the result generally satisfies both. We use the English translation "very large scale integration (VLSI)" for "Cho LSI" in Japanese. In the United States of America, however, similar technology is being developed under the name "very high speed integrated circuits (VHSI)". This also originated from the nature of the integrated circuit which satisfies both purposes. Fortunately, the Japanese word "Cho LSI" has a wider meaning than VLSI, so it can be used in a broader area. However, VLSI has a larger industrial effect than VHSI.
Welcome to Middleware'98 and to one of England's most beautiful regions. In recent years the distributed systems community has witnessed a growth in the number of conferences, leading to difficulties in tracking the literature and a consequent loss of awareness of work done by others in this important field. The aim of Middleware'98 is to synthesise many of the smaller workshops and conferences in this area, bringing together research communities which were becoming fragmented. The conference has been designed to maximise the experience for attendees. This is reflected in the choice of a resort venue (rather than a big city) to ensure a strong focus on interaction with other distributed systems researchers. The programme format incorporates a question-and-answer panel in each session, enabling significant issues to be discussed in the context of related papers and presentations. The invited speakers and tutorials are intended to not only inform the attendees, but also to stimulate discussion and debate.
W. J. Quirk. 1.1 Real-time software and the real world: Real-time software and the real world are inseparably related. Real time cannot be turned back and the real world will not always forget its history. The consequences of previous influences may last for a long time, and the undesired effects may range from being inconvenient to disastrous in both economic and human terms. As a result, there is much pressure to develop and apply techniques to improve the reliability of real-time software so that the frequency and consequences of failure are reduced to a level that is as low as reasonably achievable. This report is about such techniques. After a detailed description of the software life cycle, a chapter is devoted to each of the four principal categories of technique available at present. These cover all stages of the software development process, and each chapter identifies relevant techniques, the stages to which they are applicable and their effectiveness in improving real-time software reliability. 1.2 The characteristics of real-time software: As well as the enhanced reliability requirement discussed above, real-time software has a number of other distinguishing characteristics. First, the sequencing and timing of inputs are determined by the real world and not by the programmer. Thus the program needs to be prepared for the unexpected, and the demands made on the system may be conflicting. Second, the demands on the system may occur in parallel rather than in sequence.
It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and it is almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are: over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.
Artificial Intelligence and expert systems research, development, and demonstration have rapidly expanded over the past several years; as a result, new terminology is appearing at a phenomenal rate. This sourcebook provides an introduction to artificial intelligence and expert systems; it gives brief definitions, includes brief descriptions of software products and vendors, and notes leaders in the field. Extensive support material is provided by delineating points of contact for receiving additional information, acronyms, a detailed bibliography, and other reference data. The terminology includes artificial intelligence and expert system elements for: Artificial Intelligence, Expert Systems, Natural Language Processing, Smart Robots, Machine Vision, and Speech Synthesis. The Artificial Intelligence and Expert Systems Sourcebook is compiled from information acquired from numerous books, journals, and authorities in the field of artificial intelligence and expert systems. I hope this compilation of information will help clarify the terminology for artificial intelligence and expert systems activities. Your comments, revisions, or questions are welcome. V. Daniel Hunt, Springfield, Virginia, May 1986. Acknowledgments: The information in the Artificial Intelligence and Expert Systems Sourcebook has been compiled from a wide variety of authorities who are specialists in their respective fields. The following publications were used as the basic technical resources for this book. Portions of these publications may have been used in the book. Those definitions or artwork used have been reproduced with permission to reprint from the respective publisher.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between Grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This special issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems highlights some of the major challenges emerging from the biomedical applications that are currently inspiring and promoting database research. These include the management, organization, and integration of massive amounts of heterogeneous data; the semantic gap between high-level research questions and low-level data; and privacy and efficiency. The contributions cover a large variety of biological and medical applications, including genome-wide association studies, epidemic research, and neuroscience.
Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
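As a hedged illustration of what a constraint on invocation and completion times might look like in practice, the sketch below orders transactions earliest-deadline-first and checks which deadlines a serial execution would meet. This is a generic illustration of my own, not the scheduling strategy proposed in the book; the class and field names are hypothetical.

```python
# Hypothetical illustration (not the book's algorithm): transactions carry
# temporal constraints, and a scheduler may prefer legal orderings that also
# meet those constraints, e.g. by dispatching in earliest-deadline-first order.
from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    release: float   # earliest allowed invocation time
    deadline: float  # required completion time
    duration: float  # estimated execution time

def edf_order(transactions):
    """Return transactions sorted by deadline (earliest first)."""
    return sorted(transactions, key=lambda t: t.deadline)

def check_deadlines(transactions):
    """Simulate serial execution in EDF order and report which deadlines are met."""
    clock = 0.0
    report = {}
    for t in edf_order(transactions):
        clock = max(clock, t.release) + t.duration
        report[t.name] = clock <= t.deadline
    return report

if __name__ == "__main__":
    txns = [Transaction("T1", 0.0, 10.0, 4.0),
            Transaction("T2", 0.0, 5.0, 2.0),
            Transaction("T3", 1.0, 12.0, 3.0)]
    print(check_deadlines(txns))   # {'T2': True, 'T1': True, 'T3': True}
```

Several serial orders of these transactions are equally "legal"; the point made in the text is that only some of them also satisfy the temporal constraints, which is why a scheduler must be able to tell legal schedules apart.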
Real World Speech Processing brings together in one place important contributions and up-to-date research results in this fast-moving area. The contributors to this work were selected from the leading researchers and practitioners in this field.
This book constitutes the proceedings of the 16th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2011, held in Trento, Italy, in August 2011. The 16 papers presented together with 2 invited talks were carefully reviewed and selected from 39 submissions. The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. It also strives to promote research and development for the improvement of formal methods and tools for industrial applications.
Architecture-independent programming and automatic parallelisation have long been regarded as two different means of alleviating the prohibitive costs of parallel software development. Building on recent advances in both areas, Architecture-Independent Loop Parallelisation proposes a unified approach to the parallelisation of scientific computing code. This novel approach is based on the bulk-synchronous parallel model of computation, and succeeds in automatically generating parallel code that is architecture-independent, scalable, and of analytically predictable performance.
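The bulk-synchronous parallel (BSP) model on which the approach rests organises a computation into supersteps of local computation, communication, and a global barrier. The following is a minimal, hypothetical sketch of one data-parallel superstep in that spirit; it is my own illustration, not the code the book's parallelisation system generates.

```python
# Minimal BSP-flavoured sketch (illustrative, not the book's code generator):
# one superstep consists of local computation on each process's partition,
# an exchange of results, and a barrier before the next superstep begins.
from concurrent.futures import ProcessPoolExecutor

def local_compute(chunk):
    """Local phase: each 'processor' works only on its own partition."""
    return [x * x for x in chunk]

def superstep(data, procs=4):
    chunks = [data[i::procs] for i in range(procs)]      # cyclic partition of the loop
    with ProcessPoolExecutor(max_workers=procs) as pool:
        partials = list(pool.map(local_compute, chunks)) # barrier: map waits for all workers
    # Communication phase: reassemble the distributed result.
    result = [None] * len(data)
    for p, part in enumerate(partials):
        result[p::procs] = part
    return result

if __name__ == "__main__":
    print(superstep(list(range(10))))  # [0, 1, 4, 9, ..., 81]
```

Because the cost of each superstep can be expressed in terms of local work, communication volume, and barrier latency, performance under this model is analytically predictable across architectures, which is the property the book exploits.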
This book constitutes the refereed proceedings of the 30th International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2011, held in Naples, Italy, in September 2011. The 34 full papers presented were carefully reviewed and selected from 100 submissions. The papers are organized in topical sections on RAM evaluation, complex systems dependability, formal verification, risk and hazard analysis, cybersecurity and optimization methods.
The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems in communication, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective that is made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous media applications, as the quality of audio and video streams at the receiver can be guaranteed only if bounds on delay, delay jitters, bandwidth, and reliability are guaranteed by the network. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups during the last several years have tried to meet the challenge by proposing new protocols or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System), and its contributions to integrated services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer. The Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also includes coverage of HeiTP as used in error handling, error control, congestion control, and the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group and their coverage in this volume apply to many other approaches to multimedia networking.
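To illustrate why such guarantees require explicit resource reservation, consider a deliberately simplified, hypothetical admission test: a link accepts a new stream only if the total reserved bandwidth still fits and a crude worst-case queueing delay stays within every admitted stream's bound. This is a generic sketch of the idea, not the HeiTS/ST-II admission algorithm, and all names and numbers are example assumptions.

```python
# Hypothetical admission-control sketch (not the HeiTS/ST-II algorithm):
# a flow is admitted only if the link still has bandwidth left and the
# worst-case queueing delay stays within every admitted flow's bound.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    rate_mbps: float        # reserved bandwidth
    max_packet_kbit: float  # largest packet size
    delay_bound_ms: float   # per-hop delay the flow can tolerate

def worst_case_delay_ms(flows, link_mbps):
    # Crude bound: one maximum-size packet of every admitted flow may be
    # queued ahead of a newly arriving packet (kbit / Mbit/s = ms).
    backlog_kbit = sum(f.max_packet_kbit for f in flows)
    return backlog_kbit / link_mbps

def admit(new_flow, admitted, link_mbps):
    candidate = admitted + [new_flow]
    if sum(f.rate_mbps for f in candidate) > link_mbps:
        return False                                  # not enough bandwidth left
    delay = worst_case_delay_ms(candidate, link_mbps)
    return all(delay <= f.delay_bound_ms for f in candidate)

if __name__ == "__main__":
    admitted = [Flow("audio", 0.064, 1.0, 10.0)]
    video = Flow("video", 6.0, 12.0, 40.0)
    print(admit(video, admitted, link_mbps=10.0))     # True on a 10 Mbit/s link
```

Traditional best-effort packet switching performs no such test, which is exactly why it cannot bound delay or jitter and why protocols like ST-II add per-stream reservation state along the path.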
The creation of the text really began in 1976 with the author being involved with a group of researchers at Stanford University and the Naval Ocean Systems Center, San Diego. At that time, adaptive techniques were more laboratory (and mental) curiosities than the accepted and pervasive categories of signal processing that they have become. Over the last 10 years, adaptive filters have become standard components in telephony, data communications, and signal detection and tracking systems. Their use and consumer acceptance will undoubtedly only increase in the future. The mathematical principles underlying adaptive signal processing were initially fascinating and were my first experience in seeing applied mathematics work for a paycheck. Since that time, the application of even more advanced mathematical techniques has kept the area of adaptive signal processing as exciting as those initial days. The text seeks to be a bridge between the open literature in the professional journals, which is usually quite concentrated, concise, and advanced, and the graduate classroom and research environment where underlying principles are often more important.
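A canonical example of the adaptive techniques described here is the least-mean-squares (LMS) filter. The sketch below follows the standard textbook form and is not code taken from this book; the signal lengths, step size, and test signals are arbitrary example choices.

```python
# Generic LMS adaptive filter sketch (standard textbook form, not from this book):
# the filter weights are nudged along the negative gradient of the squared
# error so that the filter output tracks the desired signal.
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Adapt an FIR filter so that its output approximates d given input x."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # most recent samples, newest first
        y[n] = w @ u                          # filter output
        e[n] = d[n] - y[n]                    # estimation error
        w += 2 * mu * e[n] * u                # LMS weight update
    return y, e, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = np.arange(4000)
    x = np.sin(0.05 * n) + 0.1 * rng.standard_normal(len(n))  # reference input
    d = 0.6 * np.roll(x, 2)                                    # delayed, scaled target
    _, e, _ = lms_filter(x, d)
    print(f"final mean squared error: {np.mean(e[-500:]**2):.4f}")
```

The same update rule, with different choices of reference and desired signals, underlies the echo cancellers, equalizers, and tracking systems mentioned above.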
This book constitutes the refereed proceedings of the 14th International Conference on Information Security, ISC 2011, held in Xi'an, China, in October 2011. The 25 revised full papers were carefully reviewed and selected from 95 submissions. The papers are organized in topical sections on attacks; protocols; public-key cryptosystems; network security; software security; system security; database security; privacy; digital signatures.
The two-volume set LNCS 6852/6853 constitutes the refereed proceedings of the 17th International Euro-Par Conference held in Bordeaux, France, in August/September 2011. The 81 revised full papers presented were carefully reviewed and selected from 271 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load-balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high performance networks and mobile ubiquitous computing.