The Orthogonal Frequency Division Multiplexing (OFDM) digital transmission technique has several advantages in broadcast and mobile communications applications. The main objective of this book is to give a good insight into the research efforts of the last decade and to provide the reader with a comprehensive overview of the scientific progress achieved in that time. Besides topics of the physical layer, such as coding, modulation and non-linearities, special emphasis is put on system aspects and concepts, in particular regarding cellular networks and the use of multiple antenna techniques. The work extensively addresses the challenges of link adaptation, adaptive resource allocation and interference mitigation in such systems. Moreover, the domain of cross-layer design, i.e. the combination of physical layer aspects with issues of higher layers, is considered in detail. These results will facilitate and stimulate further innovation and development in the design of modern communication systems based on the powerful OFDM transmission technique.
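To make the core transmission idea concrete, here is a minimal NumPy sketch of how an OFDM symbol is formed; the parameters (a 64-point FFT, a 16-sample prefix, QPSK mapping) are illustrative assumptions, not taken from the book. Data symbols are laid on orthogonal subcarriers by an inverse FFT, and a cyclic prefix is prepended to absorb multipath delay spread:

```python
import numpy as np

N_SUBCARRIERS = 64  # assumed FFT size, purely for illustration
CP_LEN = 16         # assumed cyclic-prefix length

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)

# QPSK mapping: two bits per subcarrier, unit average power.
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# The IFFT turns the frequency-domain symbols into one time-domain OFDM symbol.
time_signal = np.fft.ifft(symbols)

# Cyclic prefix: copy the tail of the symbol to its front.
ofdm_symbol = np.concatenate([time_signal[-CP_LEN:], time_signal])

# Receiver side: strip the prefix and FFT back to recover the symbols.
recovered = np.fft.fft(ofdm_symbol[CP_LEN:])
assert np.allclose(recovered, symbols)
```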
Cellular automata are fully discrete dynamical systems with dynamical variables defined at the nodes of a lattice and taking values in a finite set. Application of a local transition rule at each lattice site generates the dynamics. The interpretation of systems with a large number of degrees of freedom in terms of lattice gases has received considerable attention recently due to the many applications of this approach, e.g. for simulating fluid flows under nearly realistic conditions, for modeling complex microscopic natural phenomena such as diffusion-reaction or catalysis, and for analysis of pattern-forming systems. The discussion in this book covers aspects of cellular automata theory related to general problems of information theory and statistical physics, lattice gas theory, direct applications, problems arising in the modeling of microscopic physical processes, complex macroscopic behavior (mostly in connection with turbulence), and the design of special-purpose computers.
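A one-dimensional example makes "application of a local transition rule at each lattice site" concrete. This minimal sketch uses elementary rule 110 on a ring lattice; the particular rule and lattice size are illustrative choices, not taken from the book:

```python
# Binary states on a ring lattice, updated in parallel by a local rule that
# reads each cell's left neighbour, the cell itself, and its right neighbour.
RULE = 110
rule_table = [(RULE >> i) & 1 for i in range(8)]

def step(cells):
    n = len(cells)
    return [
        rule_table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # single seed in the middle of the lattice
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```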
The book provides an in-depth understanding of the fundamentals of superconducting electronics and the practical considerations for the fabrication of superconducting electronic structures. Additionally, it covers in detail the opportunities afforded by superconductivity for uniquely sensitive electronic devices and illustrates how these devices (in some cases employing high-temperature, ceramic superconductors) can be applied in analog and digital signal processing, laboratory instruments, biomagnetism, geophysics, nondestructive evaluation and radioastronomy. Improvements in cryocooler technology for application to cryoelectronics are also covered. This is the first book in several years to treat the fundamentals and applications of superconducting electronics in a comprehensive manner, and it is the very first book to consider the implications of high-temperature, ceramic superconductors for superconducting electronic devices. Not only does this new class of superconductors create new opportunities, but recently impressive milestones have been reached in superconducting analog and digital signal processing which promise to lead to a new generation of sensing, processing and computational systems. The 15 chapters are authored by acknowledged leaders in the fundamental science and in the applications of this increasingly active field, and many of the authors provide a timely assessment of the potential for devices and applications based upon ceramic-oxide superconductors or hybrid structures incorporating these new superconductors with other materials. The book takes the reader from a basic discussion of applicable (BCS and Ginzburg-Landau) theories and tunneling phenomena, through the structure and characteristics of Josephson devices and circuits, to applications that utilize the world's most sensitive magnetometer, most sensitive microwave detector, and fastest arithmetic logic unit.
Reactive systems are computing systems which are interactive, such as real-time systems, operating systems, concurrent systems, control systems, etc. They are among the most difficult computing systems to program. Temporal logic is a formal tool/language which yields excellent results in specifying reactive systems. This volume, the first of two, subtitled Specification, has a self-contained introduction to temporal logic and, more important, an introduction to the computational model for reactive programs, developed by Zohar Manna and Amir Pnueli of Stanford University and the Weizmann Institute of Science, Israel, respectively.
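As a brief illustration of the kind of requirement temporal logic can specify (a standard example, not drawn from the volume itself), the response property "every request is eventually granted" is written:

```latex
% Response property: whenever a request holds, a grant eventually follows.
\Box\,(\mathit{request} \rightarrow \Diamond\,\mathit{grant})
```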
Visualization in scientific computing is getting more and more attention from many people. Especially in relation with the fast increase of computing power, graphic tools are required in many cases for interpreting and presenting the results of various simulations, or for analyzing physical phenomena. The Eurographics Working Group on Visualization in Scientific Computing has therefore organized a first workshop at Électricité de France (Clamart) in cooperation with ONERA (Châtillon). For this first edition, a wide range of papers was selected in order to cover most of the topics of interest to the members of the group, and 26 of them were presented over two days. Subsequently 18 papers were selected for this volume. The presentations were organized in eight small sessions, in addition to discussions in small subgroups. The first two sessions were dedicated to the specific needs for visualization in the computational sciences: the need for graphics support in large computing centres and high-performance networks, the needs of research and education in universities and academic centres, and the need for effective and efficient ways of integrating numerical computations or experimental data with graphics. Three of those papers are in Part I of this book. The third session discussed the importance and difficulties of using standards in visualization software, and was related to the fourth session, where some reference models and distributed graphics systems were discussed. Part II has five papers from these sessions.
It is almost six years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo, and almost two years since the foundation of the TRON Association in March 1988. The number of regular member companies registered in the TRON Association as of November 1988 is 145, which is a new record for the Association. Some of this year's major activities that I would particularly like to mention are: over 50 TRON project-related products have been or are about to be introduced to the marketplace, according to a preliminary report from the Future Study Committee of the TRON Association. In particular, I am happy to say that the ITRON subproject, which is ahead of the other subprojects, has progressed so far that several papers on ITRON applications will be presented at this conference, which means that the ITRON specifications are now ready for application to embedded commercial and industrial products.
Sir Isaac Newton's 'Philosophiae Naturalis Principia Mathematica' (the Principia) contains a prose-style mixture of geometric and limit reasoning that has often been viewed as logically vague. In A Combination of Geometry Theorem Proving and Nonstandard Analysis, Jacques Fleuriot presents a formalization of Lemmas and Propositions from the Principia using a combination of methods from geometry and nonstandard analysis. The mechanization of the procedures, which respects much of Newton's original reasoning, is developed within the theorem prover Isabelle. The application of this framework to the mechanization of elementary real analysis using nonstandard techniques is also discussed.
It has been almost five years since the inauguration of the TRON project, a concept first proposed by Dr. K. Sakamura of the University of Tokyo. The TRON Association, founded as an independent organization in March 1988, has taken over the activities of the earlier TRON Association, which was a division of the Japan Electronic Industry Development Association (JEIDA), and has been expanding various operations to globalize its activities. The number of member companies already exceeds 100, with increasing participation from overseas companies. It is truly a historic event that so many members with the same qualifications and aims, engaged in the research and development of the computer environment, could be gathered together. The TRON concept aims at the creation of a new and complete environment beneficial to both computers and mankind. It has a very wide scope and great diversity. As it includes the open architecture concept, and as the TRON machine should be able to work with various foreign languages, TRON is targeted to be used internationally. In order for us to create a complete TRON world, although there are already several TRON products on the market, continuous and aggressive participation from all members, together with concentration on further development, is indispensable. We, the TRON promoters, are much encouraged by such a driving force.
In recent years, tremendous research effort has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy both transaction timing constraints and data temporal constraints. Other design issues important to the performance of a RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
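As a hedged sketch of one classic policy in this family, earliest-deadline-first (EDF) transaction dispatch can be outlined as below; the transaction names and fields are illustrative assumptions, not taken from the book:

```python
import heapq

def dispatch_edf(transactions, now):
    """Pop ready transactions in deadline order (EDF), dropping any whose
    deadline has already passed rather than running them late."""
    ready = [(t["deadline"], t["name"]) for t in transactions]
    heapq.heapify(ready)
    order = []
    while ready:
        deadline, name = heapq.heappop(ready)
        if deadline >= now:
            order.append(name)  # schedule; a missed deadline means an abort
    return order

# Hypothetical workload: the trade with the tightest deadline runs first.
txns = [{"name": "update_position", "deadline": 12.0},
        {"name": "trade_order", "deadline": 5.0},
        {"name": "log_audit", "deadline": 30.0}]
print(dispatch_edf(txns, now=0.0))  # ['trade_order', 'update_position', 'log_audit']
```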
This book is about time-domain modelling, stability, stabilization, control design and filtering for JTDS. It gives readers a thorough understanding of the basic mathematical analysis and fundamentals, offers a straightforward treatment of the different topics and provides broad coverage of the recent methodologies.
CAD (Computer Aided Design) technology is now crucial for every division of modern industry from the viewpoint of higher productivity and better products. As technologies advance, the amount of information and knowledge that engineers have to deal with is constantly increasing. This results in seeking more advanced computer technology to achieve higher functionality, flexibility, and more efficient performance of CAD systems. Knowledge engineering, or more broadly artificial intelligence, is considered a primary candidate technology for building a new generation of CAD systems. Since design is a very intellectual human activity, this approach seems to make sense. The ideas of intelligent CAD systems (ICAD) are now increasingly discussed everywhere, and we can observe many conferences and workshops reporting a number of research efforts on this particular subject. Researchers come from computer science, artificial intelligence, mechanical engineering, electronic engineering, civil engineering, architectural science, control engineering, etc. But still we cannot see the direction of this concept; at the least, there is no widely accepted concept of ICAD. What can designers expect from these future-generation CAD systems? In which direction must developers proceed? The situation is somewhat confusing.
The origin of the development of integrated circuits up to VLSI is found in the invention of the transistor, which made it possible to achieve the action of a vacuum tube in a semiconducting solid. The structure of the transistor can be constructed by a manufacturing technique such as the introduction of a small amount of an impurity into a semiconductor and, in addition, most transistor characteristics can be improved by a reduction of dimensions. These are all important factors in the development. Actually, the microfabrication of the integrated circuit can be used for two purposes, namely to increase the integration density and to obtain an improved performance, e.g. a high speed. When one of these two aims is pursued, the result generally satisfies both. We use the English translation "very large scale integration (VLSI)" for "Cho LSI" in Japanese. In the United States of America, however, similar technology is being developed under the name "very high speed integrated circuits (VHSI)". This also originated from the nature of the integrated circuit, which satisfies both purposes. Fortunately, the Japanese word "Cho LSI" has a wider meaning than VLSI, so it can be used in a broader area. However, VLSI has a larger industrial effect than VHSI.
The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems and especially those where the dependability requirements centre on issues of safety and/or security.
Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents having common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: it always keeps the record of its moves, yet it cannot utilize and improve upon previous experiments. Furthermore, although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world: the problem solver can neither adapt to dynamically changing goals nor cooperatively solve problems with other problem solvers. This book deals with all of these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and some theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.
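The baseline such extensions build on is the classic LRTA* loop; the following is a hedged sketch of standard real-time search, not the book's own extended algorithms. The agent looks one step ahead, raises the current state's heuristic estimate (the learning step), and commits to the best-looking move, interleaving planning with execution:

```python
def lrta_star(start, goal, neighbors, h, max_steps=10_000):
    """Real-time search sketch. neighbors(s) yields (successor, edge_cost)
    pairs; h maps states to heuristic estimates and is updated (learned)
    in place, so repeated trials improve the agent's behavior."""
    s = start
    for _ in range(max_steps):
        if s == goal:
            return True
        # Evaluate f(s') = c(s, s') + h(s') for each successor.
        scored = [(c + h.get(n, 0.0), n) for n, c in neighbors(s)]
        best_f, best_n = min(scored, key=lambda fn: fn[0])
        h[s] = max(h.get(s, 0.0), best_f)  # learning: raise h toward the truth
        s = best_n                         # execution interleaved with planning
    return False

# Hypothetical 1-D corridor: states 0..9, goal at 9, unit edge costs.
h = {}
reached = lrta_star(0, 9, lambda s: [(max(s - 1, 0), 1), (min(s + 1, 9), 1)], h)
print(reached, h)
```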
The KnowledgeSeeker is a useful system for developing various intelligent applications such as ontology-based search engines, ontology-based text classification systems, ontological agent systems, and semantic web systems. The KnowledgeSeeker contains four different ontological components. First, it defines the knowledge representation model, the Ontology Graph. Second, an ontology learning process based on chi-square statistics is proposed for automatically learning an Ontology Graph from texts in different domains. Third, it defines an ontology generation method that transforms the learning outcome into the Ontology Graph format, which is machine-processable and can also be visualized for human validation. Fourth, it defines different ontological operations (such as similarity measurement and text classification) that can be carried out with the use of generated Ontology Graphs. The final goal of the KnowledgeSeeker system framework is to improve traditional information systems with higher efficiency. In particular, it can increase the accuracy of a text classification system, and also enhance the search intelligence of a search engine. This can be done by enhancing the system with machine-processable ontology.
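A hedged sketch of the kind of chi-square term-domain association test that drives such ontology learning: from a 2x2 contingency table of term occurrence versus domain membership, a large statistic marks the term as a candidate concept for that domain's graph. The formula is the standard 2x2 chi-square; the counts and the example term are invented for illustration:

```python
def chi_square(a, b, c, d):
    """a: domain docs containing the term, b: other docs containing it,
    c: domain docs without the term, d: other docs without it."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Hypothetical counts: 'kernel' appears in 40 of 50 OS documents
# but in only 5 of 200 documents from other domains.
print(chi_square(40, 5, 10, 195))  # large value -> strong domain association
```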
This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL) systems and services, EvAAL 2011, which was organized in two major events: the Competition in Valencia, Spain, in July 2011, and the Final Workshop in Lecce, Italy, in September 2011. The papers included in this book describe the organization and technical aspects of the competition, provide a complete technical description of the competing artefacts, and report on the experience and lessons learned by the teams during the competition.
Artificial intelligence and expert systems research, development, and demonstration have rapidly expanded over the past several years; as a result, new terminology is appearing at a phenomenal rate. This sourcebook provides an introduction to artificial intelligence and expert systems: it provides brief definitions, includes brief descriptions of software products and vendors, and notes leaders in the field. Extensive support material is provided by delineating points of contact for receiving additional information, acronyms, a detailed bibliography, and other reference data. The terminology covers elements of artificial intelligence, expert systems, natural language processing, smart robots, machine vision, and speech synthesis. The Artificial Intelligence and Expert Systems Sourcebook is compiled from information acquired from numerous books, journals, and authorities in the field of artificial intelligence and expert systems. I hope this compilation of information will help clarify the terminology for artificial intelligence and expert systems' activities. Your comments, revisions, or questions are welcome. V. Daniel Hunt, Springfield, Virginia, May 1986. Acknowledgments: The information in Artificial Intelligence and Expert Systems Sourcebook has been compiled from a wide variety of authorities who are specialists in their respective fields. The following publications were used as the basic technical resources for this book, and portions of them may have been used in the book. Those definitions or artwork used have been reproduced with the permission to reprint of the respective publisher.
The LNCS journal Transactions on Large-Scale Data- and Knowledge-Centered Systems focuses on data management, knowledge discovery, and knowledge processing, which are core and hot topics in computer science. Since the 1990s, the Internet has become the main driving force behind application development in all domains. An increase in the demand for resource sharing across different sites connected through networks has led to an evolution of data- and knowledge-management systems from centralized systems to decentralized systems enabling large-scale distributed applications providing high scalability. Current decentralized systems still focus on data and knowledge as their main resource. Feasibility of these systems relies basically on P2P (peer-to-peer) techniques and the support of agent systems with scaling and decentralized control. Synergy between Grids, P2P systems, and agent technologies is the key to data- and knowledge-centered systems in large-scale environments. This special issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems highlights some of the major challenges emerging from the biomedical applications that are currently inspiring and promoting database research. These include the management, organization, and integration of massive amounts of heterogeneous data; the semantic gap between high-level research questions and low-level data; and privacy and efficiency. The contributions cover a large variety of biological and medical applications, including genome-wide association studies, epidemic research, and neuroscience.
Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of their software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the area of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.
This book reviews current state-of-the-art methods for building intelligent systems using type-2 fuzzy logic and bio-inspired optimization techniques. By combining type-2 fuzzy logic with optimization algorithms, powerful hybrid intelligent systems can be built that exploit the advantages each technique offers. The book is intended as a reference for scientists and engineers interested in applying type-2 fuzzy logic to problems in pattern recognition, intelligent control, intelligent manufacturing, robotics and automation. It can also be used as a reference for graduate courses such as soft computing, intelligent pattern recognition, computer vision, applied artificial intelligence, and similar ones. We consider that this book can also be used to get novel ideas for new lines of research, or to continue the lines of research proposed by the authors.
We describe in this book new methods for intelligent manufacturing using soft computing techniques and fractal theory. Soft Computing (SC) consists of several computing paradigms, including fuzzy logic, neural networks, and genetic algorithms, which can be used to produce powerful hybrid intelligent systems. Fractal theory provides us with the mathematical tools to understand the geometrical complexity of natural objects and can be used for identification and modeling purposes. Combining SC techniques with fractal theory, we can take advantage of the "intelligence" provided by the computer methods and also take advantage of the descriptive power of the fractal mathematical tools. Industrial manufacturing systems can be considered as non-linear dynamical systems, and as a consequence can have highly complex dynamic behaviors. For this reason, the need for computational intelligence in these manufacturing systems has now been well recognized. We consider in this book the concept of "intelligent manufacturing" as the application of soft computing techniques and fractal theory for achieving the goals of manufacturing, which are production planning and control, monitoring and diagnosis of faults, and automated quality control. As a prelude, we provide a brief overview of the existing methodologies in Soft Computing. We then describe our own approach in dealing with the problems in achieving intelligent manufacturing. Our particular point of view is that to really achieve intelligent manufacturing in real-world applications we need to use SC techniques and fractal theory.
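For a concrete feel of the fractal side, here is a hedged sketch of box-counting, the standard way to estimate a fractal dimension for identification and modeling purposes (generic background, not the book's own procedure; the data are synthetic): count the boxes occupied by a point set at shrinking scales and fit the log-log slope.

```python
import numpy as np

def box_counting_dimension(points, scales):
    counts = []
    for eps in scales:
        # Quantize each point to a grid of box size eps; count distinct boxes.
        boxes = {tuple(np.floor(p / eps)) for p in points}
        counts.append(len(boxes))
    # Dimension estimate: slope of log N(eps) against log (1/eps).
    slope, _ = np.polyfit(np.log(1 / np.array(scales)), np.log(counts), 1)
    return slope

# Sanity check: a filled unit square should come out close to dimension 2.
rng = np.random.default_rng(1)
pts = rng.random((20_000, 2))
print(box_counting_dimension(pts, scales=[0.2, 0.1, 0.05, 0.025]))
```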
Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
Architecture-independent programming and automatic parallelisation have long been regarded as two different means of alleviating the prohibitive costs of parallel software development. Building on recent advances in both areas, Architecture-Independent Loop Parallelisation proposes a unified approach to the parallelisation of scientific computing code. This novel approach is based on the bulk-synchronous parallel model of computation, and succeeds in automatically generating parallel code that is architecture-independent, scalable, and of analytically predictable performance.
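For background, the standard cost model underlying the bulk-synchronous parallel approach (textbook BSP material, not a claim about the book's own analysis) prices one superstep as local work plus communication plus synchronisation, which is what makes performance analytically predictable across architectures:

```latex
% w_i: local work on processor i; h: largest number of words any processor
% sends or receives; g: per-word communication cost; l: barrier latency.
T_{\text{superstep}} = \max_i w_i + g \cdot h + l
```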
Real World Speech Processing brings together in one place important contributions and up-to-date research results in this fast-moving area. The contributors to this work were selected from the leading researchers and practitioners in this field.
This book constitutes the proceedings of the 16th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2011, held in Trento, Italy, in August 2011. The 16 papers presented together with 2 invited talks were carefully reviewed and selected from 39 submissions. The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. It also strives to promote research and development for the improvement of formal methods and tools for industrial applications. |