This book constitutes the refereed proceedings of the International Workshop on Robotics in Smart Manufacturing, WRSM 2013, held in Porto, Portugal, in June 2013. The 20 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers address issues such as robotic machining, off-line robot programming, robot calibration, new robotic hardware and software architectures, advanced robot teaching methods, intelligent warehouses, robot co-workers and application of robots in the textile industry.
NATO's Division of Scientific and Environmental Affairs sponsored this Advanced Study Institute because it was felt to be timely to cover this important and challenging subject for the first time in the framework of NATO's ASI programme. The significance of real-time systems in everyone's life is rapidly growing. The vast spectrum of these systems can be characterised by just a few examples of increasing complexity: controllers in washing machines, air traffic control systems, control and safety systems of nuclear power plants and, finally, future military systems like the Strategic Defense Initiative (SDI). The importance of such systems for the well-being of people requires considerable efforts in research and development of highly reliable real-time systems. Furthermore, the competitiveness and prosperity of entire nations now depend on the early application and efficient utilisation of computer integrated manufacturing systems (CIM), of which real-time systems are an essential and decisive part. Owing to its key significance in computerised defence systems, real-time computing also has a special importance for the Alliance. The early research and development activities in this field in the 1960s and 1970s aimed at improving the then unsatisfactory software situation. Thus, the first high-level real-time languages were defined and developed: RTL/2, Coral 66, Procol, LTR, and PEARL. In close connection with these language developments and with the utilisation of special purpose process control peripherals, the research on real-time operating systems advanced considerably.
Information granules are fundamental conceptual entities facilitating perception of complex phenomena and contributing to the enhancement of human centricity in intelligent systems. The formal frameworks of information granules and information granulation comprise fuzzy sets, interval analysis, probability, rough sets, and shadowed sets, to name only a few representatives. Among current developments of Granular Computing, interesting options concern information granules of higher order and of higher type. Higher order information granularity is concerned with an effective formation of information granules over a space originally constructed by information granules of lower order. This construct is directly associated with the concept of a hierarchy of systems composed of successive processing layers characterized by increasing levels of abstraction. This idea of layered, hierarchical realization of models of complex systems has gained significant visibility in fuzzy modeling with the well-established concept of hierarchical fuzzy models, where one strives to achieve a sound tradeoff between accuracy, the level of detail captured by the model, and its level of interpretability. Higher type information granules emerge when the information granules themselves cannot be fully characterized in a purely numerical fashion and it instead becomes convenient to realize them in the form of other types of information granules, such as type-2 fuzzy sets, interval-valued fuzzy sets, or probabilistic fuzzy sets. Higher order and higher type information granules constitute the focus of the studies on Granular Computing presented here. The book elaborates on sound methodologies of Granular Computing, algorithmic pursuits, and an array of diverse applications and case studies in environmental studies, option price forecasting, and power engineering.
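As a concrete illustration of a higher-type information granule, here is a minimal Python sketch (not taken from the book) of an interval-valued fuzzy set, in which each membership grade is itself an interval rather than a single number; the triangular shape and the fixed blur width are illustrative assumptions.

```python
import numpy as np

# A type-1 (ordinary) fuzzy set assigns each x a single membership grade.
def triangular(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# An interval-valued fuzzy set (a simple higher-type granule) assigns
# each x an *interval* of grades; the fixed blur width is an assumption.
def interval_valued(x, a, b, c, blur=0.1):
    mu = triangular(x, a, b, c)
    return np.clip(mu - blur, 0.0, 1.0), np.clip(mu + blur, 0.0, 1.0)

x = np.linspace(0.0, 10.0, 11)
lower, upper = interval_valued(x, 2.0, 5.0, 8.0)
for xi, lo, hi in zip(x, lower, upper):
    print(f"x={xi:4.1f}  membership in [{lo:.2f}, {hi:.2f}]")
```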
Both the algorithms and the software and hardware of automatic computers have gone through rapid development in the past 35 years. The dominant factor in this development was the advance in computer technology. Computer parameters were systematically improved through electron tubes, transistors and integrated circuits of ever-increasing integration density, which also influenced the development of new algorithms and programming methods. Some years ago the situation in computer development was that no additional enhancement of performance could be achieved by increasing the speed of the logical elements, due to the physical barrier of the maximum transfer speed of electric signals. A further enhancement of computer performance has been achieved by parallelism, which makes it possible, by a suitable organization of n processors, to obtain a performance increase of up to n times. Research into parallel computation has been carried out for several years in many countries and many results of fundamental importance have been obtained. Many parallel computers have been designed and their algorithmic and programming systems built. Such computers include ILLIAC IV, DAP, STARAN, OMEN, STAR-100, TEXAS INSTRUMENTS ASC, CRAY-1, C.mmp, Cm*, CLIP-3, and PEPE. This trend is supported by the fact that: (a) many algorithms and programs are highly parallel in their structure, (b) the new LSI and VLSI technologies have allowed processors to be combined into large parallel structures, and (c) greater and greater demands for speed and reliability of computers are made.
The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this important area. The Engineering of Complex Real-Time Computer Control Systems serves as an excellent reference, providing insight into some of the most important research issues in the field.
Intelligent control is a rapidly developing, complex and challenging field of great practical importance and potential. Because of the rapidly developing and interdisciplinary nature of the subject, there are only a few edited volumes consisting of research papers on intelligent control systems, and little is known and published about the fundamentals and the general know-how in designing, implementing and operating intelligent control systems. Intelligent control systems emerged from artificial intelligence and computer controlled systems as an interdisciplinary field. The book therefore summarizes the fundamentals of knowledge representation, reasoning, expert systems and real-time control systems, and then discusses the design, implementation, verification and operation of real-time expert systems using G2 as an example. Special tools and techniques applied in intelligent control are also described, including qualitative modelling, Petri nets and fuzzy controllers. The material is illustrated with simple examples taken from the field of intelligent process control.
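To make the flavor of one such component concrete, the following is a minimal Python sketch of a one-input fuzzy controller of the kind mentioned above, with three triangular rules and weighted-average defuzzification; the rule base and scaling are illustrative assumptions, not an example from the book.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_control(error):
    """Three rules: negative error -> push up, zero -> hold,
    positive error -> push down."""
    neg  = tri(error, -2.0, -1.0,  0.0)     # fuzzification: rule activations
    zero = tri(error, -1.0,  0.0,  1.0)
    pos  = tri(error,  0.0,  1.0,  2.0)
    actions = np.array([1.0, 0.0, -1.0])    # crisp singleton consequents
    weights = np.array([neg, zero, pos])
    if weights.sum() == 0.0:
        return 0.0                          # outside the rule base: no action
    return float((weights * actions).sum() / weights.sum())  # defuzzification

for e in (-1.5, -0.5, 0.0, 0.5, 1.5):
    print(f"error={e:5.2f} -> action={fuzzy_control(e):5.2f}")
```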
Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer) subtitled System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
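As one concrete instance of an algorithm-based paradigm, here is a minimal Python sketch (assuming NumPy; the matrices and the injected fault are illustrative) of checksum-based fault detection for matrix multiplication, where checksum rows and columns let a single erroneous result element be detected and located without recomputation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 10, (3, 3)).astype(float)
B = rng.integers(0, 10, (3, 3)).astype(float)

# Augment A with a checksum row and B with a checksum column.
A_aug = np.vstack([A, A.sum(axis=0)])                  # 4x3
B_aug = np.hstack([B, B.sum(axis=1, keepdims=True)])   # 3x4

C = A_aug @ B_aug                                      # checksum-carrying product
C[1, 2] += 5.0                                         # inject a single fault

# Consistency check: data rows/columns must match their checksums.
row_err = C[:3, :3].sum(axis=1) - C[:3, 3]
col_err = C[:3, :3].sum(axis=0) - C[3, :3]
print("faulty row(s):", np.flatnonzero(row_err),
      "faulty column(s):", np.flatnonzero(col_err))
```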
Most everything in our experience requires management in some form or other: our gardens, our automobiles, our minds, our bodies, our love lives, our businesses, our forests, our countries, etc. Sometimes we don't call it "management" per se. We seldom talk about managing our minds or automobiles. But if we think of management in terms of monitoring, maintaining, and cultivating with respect to some goal, then it makes sense. We certainly monitor an automobile, albeit unconsciously, to make sure that it doesn't exhibit signs of trouble. And we certainly try to cultivate our minds. This book is about managing networks. That itself is not a new concept. We've been managing the networks that support our telephones for about 100 years, and we've been managing the networks that support our computers for about 20 years. What is new (and what motivated me to write this book) is the following: (i) the enormous advancements in networking technology as we transition from the 20th century to the 21st century, (ii) the increasing dependence of human activities on networking technology, and (iii) the commercialization of services that depend on networking technology (e.g., email and electronic commerce).
Many real-time systems rely on static scheduling algorithms. This includes cyclic scheduling, rate monotonic scheduling and fixed schedules created by off-line scheduling techniques such as dynamic programming, heuristic search, and simulated annealing. However, for many real-time systems, static scheduling algorithms are quite restrictive and inflexible. For example, highly automated agile manufacturing, command, control and communications, and distributed real-time multimedia applications all operate over long lifetimes and in highly non-deterministic environments. Dynamic real-time scheduling algorithms are more appropriate for these systems and are used in them. Many of these algorithms are based on earliest deadline first (EDF) policies. There exists a wealth of literature on EDF-based scheduling with many extensions to deal with sophisticated issues such as precedence constraints, resource requirements, system overload, multi-processors, and distributed systems. Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms aims at collecting a significant body of knowledge on EDF scheduling for real-time systems, but it does not try to be all-inclusive (the literature is too extensive). The book primarily presents the algorithms and associated analysis, but guidelines, rules, and implementation considerations are also discussed, especially for the more complicated situations where mathematical analysis is difficult. In general, it is very difficult to codify and taxonomize scheduling knowledge because there are many performance metrics, task characteristics, and system configurations. Adding to the complexity is the fact that a variety of algorithms have been designed for different combinations of these considerations. In spite of the recent advances there are still gaps in the solution space, and there is a need to integrate the available solutions. For example, a list of issues to consider includes:

* preemptive versus non-preemptive tasks,
* uni-processors versus multi-processors,
* using EDF at dispatch time versus EDF-based planning,
* precedence constraints among tasks,
* resource constraints,
* periodic versus aperiodic versus sporadic tasks,
* scheduling during overload,
* fault tolerance requirements, and
* providing guarantees and levels of guarantees (meeting quality of service requirements).

Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms should be of interest to researchers, real-time system designers, and instructors and students, either as a focused course on deadline-based scheduling for real-time systems or, more likely, as part of a more general course on real-time computing. The book serves as an invaluable reference in this fast-moving field.
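To fix ideas, here is a minimal Python sketch of the core EDF policy on a uniprocessor with preemptive time ticks; the job set is an illustrative assumption, and this is a toy dispatcher rather than any algorithm from the book.

```python
import heapq

def edf_schedule(jobs, horizon):
    """jobs: list of (release, deadline, work). Returns (tick, job_id)
    pairs showing which job runs at each tick under preemptive EDF."""
    ready = []                                  # min-heap keyed by deadline
    remaining = {i: w for i, (_, _, w) in enumerate(jobs)}
    timeline = []
    for t in range(horizon):
        for i, (release, deadline, _) in enumerate(jobs):
            if release == t:
                heapq.heappush(ready, (deadline, i))
        while ready and remaining[ready[0][1]] == 0:
            heapq.heappop(ready)                # discard finished jobs
        if ready:
            _, i = ready[0]                     # earliest-deadline ready job
            remaining[i] -= 1
            timeline.append((t, i))
    return timeline

# Three jobs as (release, deadline, execution time): job 0 is preempted
# at t=1 by job 1, whose absolute deadline is earlier.
jobs = [(0, 10, 4), (1, 4, 2), (3, 6, 1)]
for t, i in edf_schedule(jobs, 8):
    print(f"t={t}: job {i}")
```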
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, and networking. Dynamic real-time systems must deal safely with resource unavailability while continuing to operate, which can lead to situations where computations cannot be carried through to completion. For builders of such systems, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
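A minimal Python sketch of the imprecise-computation idea follows: a task runs a mandatory part to completion, then refines the result in an optional part until its time budget expires, and returns a usable partial result; the Leibniz series for pi is only an illustrative stand-in for real optional work.

```python
import time

def estimate_pi(budget_s):
    """Mandatory part: produce a coarse first estimate.
    Optional part: refine it until the time budget expires."""
    total = 1.0                              # mandatory: first Leibniz term
    k, sign = 1, -1.0
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:       # optional refinement loop
        total += sign / (2 * k + 1)
        k, sign = k + 1, -sign
    return 4.0 * total, k                    # partial result + terms used

for budget in (0.001, 0.01, 0.1):
    pi_hat, terms = estimate_pi(budget)
    print(f"budget={budget:5.3f}s  terms={terms:8d}  pi~={pi_hat:.6f}")
```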
Responsive Computing brings together in one place important contributions and state-of-the-art research results in this rapidly advancing area. Responsive Computing serves as an excellent reference, providing insight into some of the most important issues in the field.
This book constitutes the refereed post-proceedings of the Joint International Conference on Pervasive Computing and the Networked World, ICPCA-SWS 2012, held in Istanbul, Turkey, in November 2012. This conference is a merger of the 7th International Conference on Pervasive Computing and Applications (ICPCA) and the 4th Symposium on Web Society (SWS). The 53 revised full papers and 26 short papers presented were carefully reviewed and selected from 143 submissions. The papers cover a wide range of topics from different research communities such as computer science, sociology and psychology and explore both theoretical and practical issues in and around the emerging computing paradigms, e.g., pervasive collaboration, collaborative business, and networked societies. They highlight the unique characteristics of the "everywhere" computing paradigm and promote the awareness of its potential social and psychological consequences.
This volume forms the edited proceedings of the Sixth International Symposium on Communications Interworking, held in Perth, Western Australia, from 13-16 October, 2002. In total, 39 research papers were submitted for consideration, and after full refereeing by international referees, 27 papers from authors in 11 countries were accepted for publication. Invited keynote addresses were presented by Dr Hugh Bradlow, Chief Technology Officer for Telstra Corporation, Australia, and Dr Sathya Rao, Director of Telscom A.G., Switzerland. The symposium brought together 60 active international researchers and telecommunications engineers to discuss the important questions of whether there is a convergence of all communications, including real-time communications, over the Internet Protocol (IP), and whether existing IP technology is capable of supporting this convergence or requires further development. The papers selected to appear in this volume make an important and timely contribution to this debate. Specific symposium paper sessions were held to present and discuss emerging research on the topics of converged networking, real-time communications over IP, quality of service, routing and metrics, emerging issues in mobile networks, differentiated services, and wireless networking.
Real-Time Video Compression: Techniques and Algorithms introduces the XYZ video compression technique, which operates in three dimensions, eliminating the overhead of motion estimation. First, the video compression standards MPEG and H.261/H.263 are described. They both use asymmetric compression algorithms based on motion estimation, and their encoders are much more complex than their decoders. The XYZ technique uses a symmetric algorithm based on the Three-Dimensional Discrete Cosine Transform (3D-DCT). The 3D-DCT was originally suggested for compression about twenty years ago; at that time, however, the computational complexity of the algorithm was too high: it required large buffer memory and was not as effective as motion estimation. We have resurrected the 3D-DCT-based video compression algorithm by developing several enhancements to the original algorithm. These enhancements make the algorithm feasible for real-time video compression in applications such as video-on-demand, interactive multimedia, and videoconferencing. The results presented in this book suggest that the XYZ video compression technique is not only a fast algorithm, but also provides superior compression ratios and high video quality compared to existing standard techniques such as MPEG and H.261/H.263. The elegance of the XYZ technique lies in its simplicity, which leads to inexpensive VLSI implementation of any XYZ codec. Real-Time Video Compression: Techniques and Algorithms can be used as a text for graduate students and researchers working in the area of real-time video compression. In addition, the book serves as an essential reference for professionals in the field.
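For intuition about the transform underlying the technique, here is a minimal Python sketch (assuming SciPy; the synthetic cube and the keep-ratio are illustrative, and the real XYZ quantizer is more sophisticated) that applies a 3D DCT to a small spatio-temporal block, zeroes the small coefficients, and reconstructs.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Smooth synthetic 8x8x8 spatio-temporal block (8 frames of an 8x8 patch).
cube = np.fromfunction(lambda t, y, x: np.sin(0.2 * t + 0.3 * y + 0.4 * x),
                       (8, 8, 8))

coeffs = dctn(cube, norm="ortho")            # 3D DCT over time and space
flat = np.abs(coeffs).ravel()
cutoff = np.sort(flat)[-flat.size // 10]     # keep roughly the top 10%
kept = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

recon = idctn(kept, norm="ortho")            # decode from sparse coefficients
rmse = np.sqrt(np.mean((cube - recon) ** 2))
print(f"kept {np.count_nonzero(kept)} of {cube.size} coefficients, RMSE={rmse:.5f}")
```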
Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.
The Orthogonal Frequency Division Multiplexing (OFDM) digital transmission technique has several advantages in broadcast and mobile communications applications. The main objective of this book is to give good insight into the research on OFDM and to provide the reader with a comprehensive overview of the scientific progress achieved in the last decade. Besides topics of the physical layer, such as coding, modulation and non-linearities, special emphasis is put on system aspects and concepts, in particular regarding cellular networks and the use of multiple antenna techniques. The work extensively addresses challenges of link adaptation, adaptive resource allocation and interference mitigation in such systems. Moreover, the domain of cross-layer design, i.e. the combination of physical layer aspects with issues of higher layers, is considered in detail. These results will facilitate and stimulate further innovation and development in the design of modern communication systems based on the powerful OFDM transmission technique.
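A minimal Python sketch of the basic OFDM transmit/receive chain may help fix ideas: QPSK symbols on N subcarriers, an IFFT to the time domain, a cyclic prefix, and the reverse steps at the receiver over an ideal channel; all parameters are illustrative assumptions.

```python
import numpy as np

N, CP = 64, 16                                # subcarriers, cyclic-prefix length
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2 * N)

# Map bit pairs to QPSK symbols, one per subcarrier.
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx = np.fft.ifft(qpsk) * np.sqrt(N)           # one OFDM symbol in time domain
tx = np.concatenate([tx[-CP:], tx])           # prepend cyclic prefix

rx = tx                                       # ideal, noiseless channel
symbols = np.fft.fft(rx[CP:]) / np.sqrt(N)    # strip prefix, back to subcarriers

rec = np.empty(2 * N, dtype=int)              # demap QPSK back to bits
rec[0::2] = symbols.real < 0
rec[1::2] = symbols.imag < 0
print("bit errors:", int(np.sum(rec != bits)))
```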
This book constitutes the thoroughly refereed post-conference proceedings of the Third International Joint Conference on Knowledge Discovery, Knowledge Engineering, and Knowledge Management, IC3K 2011, held in Paris, France, in October 2011. This book includes revised and extended versions of a strict selection of the best papers presented at the conference; 39 revised full papers together with one invited lecture were carefully reviewed and selected from 429 submissions. Reflecting the three covered conferences, KDIR 2011, KEOD 2011, and KMIS 2011, the papers are organized in topical sections on knowledge discovery and information retrieval, knowledge engineering and ontology development, and knowledge management and information sharing.
This book constitutes the thoroughly refereed proceedings of the 5th International Conference on Ad Hoc Networks, ADHOCNETS 2013, held in Barcelona, Spain, in October 2013. The 14 revised full papers presented were carefully reviewed and selected from numerous submissions and cover a wide range of commercial and military applications, such as mobile ad hoc networks, sensor networks, vehicular networks, underwater networks, underground networks, personal area networks, home networks, and large-scale metropolitan networks for smart cities. They are organized in topical sections on wireless sensor networks, routing, applications, and security.
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy both transaction timing constraints and data temporal constraints. Other design issues important to the performance of a RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
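The two kinds of constraints can be illustrated with a minimal Python sketch (names and intervals are assumptions, not from the book): a data object carries an absolute validity interval, and a transaction commits only if it meets its own deadline and reads temporally valid data.

```python
import time
from dataclasses import dataclass

@dataclass
class TemporalData:
    value: float
    written_at: float
    validity: float                     # seconds the value stays fresh

    def is_valid(self, now: float) -> bool:
        return now - self.written_at <= self.validity

def run_transaction(deadline: float, sensor: TemporalData) -> str:
    now = time.monotonic()
    if now > deadline:
        return "abort: transaction missed its deadline"
    if not sensor.is_valid(now):
        return "abort: data no longer temporally valid"
    return f"commit: read {sensor.value}"

start = time.monotonic()
sensor = TemporalData(value=42.0, written_at=start, validity=0.05)
print(run_transaction(deadline=start + 1.0, sensor=sensor))   # commits
time.sleep(0.06)                                              # reading goes stale
print(run_transaction(deadline=start + 1.0, sensor=sensor))   # aborts: stale data
```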
A straightforward introduction to basic concepts and methodologies for digital photoelasticity, providing a foundation on which future researchers and students can develop their own ideas. The book thus promotes research into the formulation of problems in digital photoelasticity and the application of these techniques to industries. In one volume it provides data acquisition by DIP techniques, its analysis by statistical techniques, and its presentation by computer graphics plus the use of rapid prototyping technologies to speed up the entire process. The book not only presents the various techniques but also provides the relevant time-tested software codes. Exercises designed to support and extend the treatment are found at the end of each chapter.
Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents having common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: although it always keeps a record of its moves, it cannot utilize and improve upon previous experiments. Further problems are that, although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world: the problem solver can neither adapt to dynamically changing goals nor cooperatively solve problems with other problem solvers. This book deals with all of these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and some theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.
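For background, here is a minimal Python sketch of Learning Real-Time A* (LRTA*), the classic real-time search algorithm this line of work builds on: the agent interleaves one-step lookahead with execution and updates a heuristic table as it moves, so repeated trials improve; the grid world is an illustrative assumption.

```python
# Learning Real-Time A* (LRTA*) on a small grid; '#' cells are walls.
GRID = ["....#...",
        ".##.#.#.",
        "....#.#.",
        ".##...#.",
        "......#G"]
GOAL = (4, 7)

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def lrta_trial(start, h, max_steps=5000):
    """One trial: act greedily on h, updating h at each visited state."""
    pos, steps = start, 0
    while pos != GOAL and steps < max_steps:
        # One-step lookahead: f(s') = 1 + h(s') for each neighbor.
        best = min(neighbors(pos), key=lambda s: 1 + h.get(s, 0))
        h[pos] = max(h.get(pos, 0), 1 + h.get(best, 0))   # learning update
        pos = best
        steps += 1
    return steps

h = {}                        # heuristic table, shared across trials
for trial in range(3):
    print(f"trial {trial}: reached goal in {lrta_trial((0, 0), h)} steps")
```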
The Testability of Distributed Real-Time Systems starts by collecting and analyzing all principal problems, as well as their interrelations, that one has to keep in mind when testing a distributed real-time system. The book discusses them in some detail from the viewpoints of software engineering, distributed systems principles, and real-time system development. These problems are organization, observability, reproducibility, the host/target approach, environment simulation, and (test) representativity. Based on this framework, the book summarizes and evaluates the current work done in this area before going on to argue that the particular system architecture (hardware plus operating system) has a much greater influence on testing than is the case for 'ordinary', non-real-time software. The notions of event-triggered and time-triggered system architectures are introduced, and it is shown that time-triggered systems 'automatically' (i.e. by the nature of their system architecture) solve or greatly ease the solving of some of the problems introduced earlier, i.e. observability, reproducibility, and (partly) representativity. A test methodology is derived for the time-triggered, distributed real-time system MARS. The book describes in detail how the author has taken advantage of its architecture, and shows how the remaining problems can be solved for this particular system architecture. Some experiments conducted to evaluate this test methodology are reported, including the experience gained from them, leading to a description of a number of prototype support tools. The Testability of Distributed Real-Time Systems can be used by both academic and industrial researchers interested in distributed and/or real-time systems, or in software engineering for such systems. This book can also be used as a text in advanced courses on distributed or real-time systems.
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is thereby to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these interesting properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are a topic of discussion in the remainder of this work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
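One classic source of replica non-determinism can be shown in a few lines of Python: two replicas that receive identical inputs but accumulate them in different, equally legitimate orders disagree, because floating-point addition is not associative; the values below are chosen to make the effect visible.

```python
# The exact sum of these values is 2.0, but neither replica computes it,
# and, crucially for replica determinism, the two replicas also disagree
# with each other despite identical inputs.
values = [1e16, 1.0, -1e16, 1.0]

replica_a = sum(values)            # accumulates left to right   -> 1.0
replica_b = sum(sorted(values))    # accumulates in sorted order -> 0.0

print(f"replica A: {replica_a}, replica B: {replica_b}")
print("replicas agree:", replica_a == replica_b)
```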
This book is written for software engineers, software project leaders, and software managers who would like to introduce a new advanced software technology, expert systems, into their product. Expert system technology brings into programming a new dimension in which "rule of thumb" or heuristic expert knowledge is encoded in the program. In contrast to conventional procedural languages (e.g., Fortran or C), expert systems employ high-level programming languages (i.e., expert system shells) that enable us to capture the judgmental knowledge of experts such as geologists, doctors, lawyers, bankers, or insurance underwriters. Past expert systems have been more successfully applied in the problem areas of analysis and synthesis where the boundary of knowledge is well defined and where experts are available and can be identified. Early successful applications include diagnosis systems such as MYCIN, geological systems such as PROSPECTOR, or design/configuration systems such as XCON. These early expert systems were mainly applicable to scientific and engineering problems, which are not theoretically well understood in terms of the decision-making processes of their experts and which therefore require judgmental assessment. The more recent expert systems are being applied to sophisticated synthesis problems that involve a large number of choices, such as how the elements are to be compared. These problems normally entail a large search space and slower speed for the expert systems designed. Examples of these systems include factory scheduling applications such as ISIS, or legal reasoning applications such as TAXMAN.
The KnowledgeSeeker is a useful system for developing various intelligent applications such as ontology-based search engines, ontology-based text classification systems, ontological agent systems, and semantic web systems. The KnowledgeSeeker contains four different ontological components. First, it defines the knowledge representation model, the Ontology Graph. Second, an ontology learning process based on chi-square statistics is proposed for automatically learning an Ontology Graph from texts for different domains. Third, it defines an ontology generation method that transforms the learning outcome into the Ontology Graph format for machine processing, which can also be visualized for human validation. Fourth, it defines different ontological operations (such as similarity measurement and text classification) that can be carried out with the use of generated Ontology Graphs. The final goal of the KnowledgeSeeker system framework is to improve traditional information systems with higher efficiency. In particular, it can increase the accuracy of a text classification system and enhance the search intelligence of a search engine. This can be done by enhancing the system with machine-processable ontology.
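To illustrate the statistic behind the learning step, here is a minimal Python sketch (the toy corpus is an illustrative assumption) that scores term-domain association with a 2x2 chi-square over document counts, the kind of measure the described ontology learning process is based on.

```python
docs = [("finance", "bank loan interest rate"),
        ("finance", "bank credit loan market"),
        ("sports",  "goal match team player"),
        ("sports",  "team match bank holiday")]

def chi_square(term, domain):
    """2x2 chi-square between 'document mentions term' and
    'document belongs to domain', over the toy corpus above."""
    n = len(docs)
    a = sum(1 for dom, txt in docs if dom == domain and term in txt.split())
    b = sum(1 for dom, txt in docs if dom != domain and term in txt.split())
    c = sum(1 for dom, txt in docs if dom == domain and term not in txt.split())
    d = n - a - b - c
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

for term in ("bank", "loan", "team"):
    print(f"chi2({term!r}, finance) = {chi_square(term, 'finance'):.2f}")
```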
You may like...

- Modeling and Simulation in Medicine and… by Frank C. Hoppensteadt, Charles S. Peskin (Hardcover, R2,843)
- Modelling Soil Erosion by Water by John Boardman, D. Favis-Mortlock (Hardcover, R2,623)
- Research Advancements in Smart… by Pandian Vasant, Gerhard Weber, … (Hardcover, R6,736)
- SpiNNaker - A Spiking Neural Network… by Steve Furber, Petrut Bogdan (Hardcover, R2,180)