This book constitutes the thoroughly refereed post-proceedings of the 8th International Symposium on Computer Music Modeling and Retrieval, CMMR 2011, and the 20th International Symposium on Frontiers of Research in Speech and Music, FRSM 2011. This year the two conferences merged for the first time and were held in Bhubaneswar, India, in March 2011. The 17 revised full papers presented were specially reviewed and revised for inclusion in this proceedings volume. The book is divided into four main chapters which reflect the high quality of the sessions of CMMR 2011, the collaboration with FRSM 2011 and the Indian influence, in the topics of Indian music, music information retrieval, sound analysis, synthesis and perception, and speech processing of Indian languages.
This book constitutes the refereed proceedings of the 31st International Conference on Computer Safety, Reliability, and Security, SAFECOMP 2012, held in Magdeburg, Germany, in September 2012. The 33 revised full papers presented were carefully reviewed and selected from more than 70 submissions. The papers are organized in topical sections on tools, risk analysis, testing, quantitative analysis, security, formal methods, aeronautic, automotive, and process. Also included are 4 case studies.
I wish to extend my warm greetings to you all on behalf of the TRON Association, on this occasion of the Seventh International TRON Project Symposium. The TRON Project was proposed by Dr. Ken Sakamura of the University of Tokyo, with the aim of designing a new, comprehensive computer architecture that is open to worldwide use. Already more than six years have passed since the project was put in motion. The TRON Association is now made up of over 140 companies and organizations, including 25 overseas firms or their affiliates. A basic goal of TRON Project activities is to offer the world a human-oriented computer culture, one that will lead to a richer and more fulfilling life for people throughout the world. It is our desire to bring to reality a new order in the world of computers, based on design concepts that consider the needs of human beings first of all, and to enable people to enjoy the full benefits of these computers in their daily life. Thanks to the efforts of Association members, in recent months a number of TRON-specification 32-bit microprocessors have been made available. ITRON-specification products are continuing to appear, and we are now seeing commercial implementations of BTRON specifications as well. The CTRON subproject, meanwhile, is promoting standardization through validation testing and a portability experiment, and products are being marketed by several firms. This is truly a year in which the TRON Project has reached the practical implementation stage.
This book constitutes the refereed proceedings of the 5th International Conference on Image and Signal Processing, ICISP 2012, held in Agadir, Morocco, in June 2012. The 75 revised full papers presented were carefully reviewed and selected from 158 submissions. The contributions are grouped into the following topical sections: multi/hyperspectral imaging; image filtering and coding; signal processing; biometrics; watermarking and texture; segmentation and retrieval; image processing; pattern recognition.
Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of their software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the areas of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.
The NATO Advanced Research Workshop on Signal Processing and Pattern Recognition in Nondestructive Evaluation (NDE) of Materials was held August 19-22, 1987 at the Manoir St-Castin, Lac Beauport, Quebec, Canada. Modern signal processing, pattern recognition and artificial intelligence have been playing an increasingly important role in improving nondestructive evaluation and testing techniques. The cross-fertilization of the two major areas can lead to major advances in NDE as well as presenting a new research area in signal processing. With this in mind, the Workshop provided a good review of progress and comparison of potential techniques, as well as constructive discussions and suggestions for effective use of modern signal processing to improve flaw detection, classification and prediction, as well as material characterization. This Proceedings volume includes most presentations given at the Workshop. This publication, like the meeting itself, is unique in the sense that it provides extensive interactions among the interrelated areas of NDE. The book starts with research advances on inverse problems and then covers different aspects of digital waveform processing in NDE and eddy current signal analysis. These are followed by four papers on pattern recognition and AI in NDE, and five papers on image processing and reconstruction in NDE. The last two papers deal with parameter estimation problems. Though the list of papers is not extensive, as the field of NDE signal processing is very new, the book has an excellent collection of both tutorial and research papers in this exciting new field.
This book constitutes the proceedings of the 17th International Workshop on Formal Methods for Industrial Critical Systems, FMICS 2012, held in Paris, France, in August 2012. The 14 papers presented were carefully reviewed and selected from 37 submissions. The aim of the FMICS workshop series is to provide a forum for researchers who are interested in the development and application of formal methods in industry. It also strives to promote research and development for the improvement of formal methods and tools for industrial applications.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Graph Structures for Knowledge Representation and Reasoning, GKR 2011, held in Barcelona, Spain, in July 2011 as a satellite event of IJCAI 2011, the 22nd International Joint Conference on Artificial Intelligence. The 7 revised full papers presented together with 1 invited paper were carefully reviewed and selected from 12 submissions. The papers feature current research involved in the development and application of graph-based knowledge representation formalisms and reasoning techniques, and investigate further developments of graph-based knowledge representation and reasoning techniques. Topics addressed include Bayesian networks, semantic networks, conceptual graphs, formal concept analysis, CP-nets, GAI-nets, Euler diagrams, and existential graphs, all of which have been successfully used in a number of applications (Semantic Web, recommender systems, bioinformatics, etc.).
Exploration and Innovation in Design is one of the first books to present both conceptual and computational models of processes which have the potential to produce innovative results at early stages of design. Discussed here is the concept of exploration where the system, using computational processes, moves outside predefined available decisions. Sections of this volume discuss areas such as design representation and search, exploration and the emergence of new criteria, and precedent-based adaptation. In addition, the author presents the overall architecture of a design system and shows how the pieces fit together into one coherent system. Concluding chapters of the book discuss relationships of work in design to other research efforts, applications, and future research directions in design. The ideas and processes presented in this volume further our understanding of computational models of design, particularly those that are capable of assisting in the production of non-routine designs, and affirm that we are indeed moving toward a science of design.
The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems and especially those where the dependability requirements centre on issues of safety and/or security.
Numerical linear algebra, digital signal processing, and parallel algorithms are three disciplines with a great deal of activity in the last few years. The interaction between them has been growing to a level that merits an Advanced Study Institute dedicated to the three areas together. This volume gives an account of the main results in this interdisciplinary field. The following topics emerged as major themes of the meeting: singular value and eigenvalue decompositions, including applications; Toeplitz matrices, including special algorithms and architectures; recursive least squares in linear algebra, digital signal processing and control; updating and downdating techniques in linear algebra and signal processing; stability and sensitivity analysis of special recursive least squares problems; and special architectures for linear algebra and signal processing. This book contains tutorials on these topics given by leading scientists in each of the three areas. A considerable number of new research results are presented in contributed papers. The tutorials and papers will be of value to anyone interested in the three disciplines.
This volume contains the complete proceedings of a NATO Advanced Study Institute on various aspects of the reliability of electronic and other systems. The aim of the Institute was to bring together specialists in this subject. An important outcome of this Conference, as many of the delegates have pointed out to me, was complementing theoretical concepts and practical applications in both software and hardware. The reader will find papers on the mathematical background, on reliability problems in establishments where system failure may be hazardous, on reliability assessment in mechanical systems, and also on life cycle cost models and spares allocation. The proceedings contain the texts of all the lectures delivered and also verbatim accounts of panel discussions on subjects chosen from a wide range of important issues. In this introduction I will give a short account of each contribution, stressing what I feel are the most interesting topics introduced by a lecturer or a panel member. To better visualise the extent and structure of the Institute, I present a tree-like diagram showing the subjects which my co-directors and I would have wished to include in our deliberations (Figures 1 and 2). The names of our lecturers appear underlined under suitable headings. It can be seen that we have managed to cover most of the issues which seemed important to us.
Introduction: The goal of this book is to introduce XML to a bioinformatics audience. It does so by introducing the fundamentals of XML, Document Type Definitions (DTDs), XML Namespaces, XML Schema, and XML parsing, and illustrating these concepts with specific bioinformatics case studies. The book does not assume any previous knowledge of XML and is geared toward those who want a solid introduction to fundamental XML concepts. The book is divided into nine chapters. Chapter 1: Introduction to XML for Bioinformatics. This chapter provides an introduction to XML and describes the use of XML in biological data exchange. A bird's-eye view of our first case study, the Distributed Annotation System (DAS), is provided and we examine a sample DAS XML document. The chapter concludes with a discussion of the pros and cons of using XML in bioinformatic applications. Chapter 2: Fundamentals of XML and BSML. This chapter introduces the fundamental concepts of XML and the Bioinformatic Sequence Markup Language (BSML). We explore the origins of XML, define basic rules for XML document structure, and introduce XML Namespaces. We also explore several sample BSML documents and visualize these documents in the Rescentris™ Genomic Workspace Viewer.
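To give a flavor of the XML parsing the book introduces, here is a minimal sketch using Python's standard library. The document below is an illustrative DAS-style annotation snippet; its element and attribute names are invented for this example and do not reproduce the official DAS schema.

```python
# Minimal XML parsing sketch with Python's standard library.
# The DAS-style document below is hypothetical, for illustration only.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<ANNOTATIONS>
  <SEGMENT id="chr1" start="1" stop="1000">
    <FEATURE id="f1" label="exon"/>
    <FEATURE id="f2" label="intron"/>
  </SEGMENT>
</ANNOTATIONS>"""

root = ET.fromstring(doc)              # parse the document string
segment = root.find("SEGMENT")         # locate the first SEGMENT element
labels = [f.get("label") for f in segment.findall("FEATURE")]
print(segment.get("id"), labels)       # chr1 ['exon', 'intron']
```

The same traversal pattern (parse, navigate, read attributes) carries over to real DAS or BSML documents, only with the element names defined by those formats.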
This volume contains the articles presented at the Fourth International IFIP Working Conference on Dependable Computing for Critical Applications held in San Diego, California, on January 4-6, 1994. In keeping with the previous three conferences held in August 1989 at Santa Barbara (USA), in February 1991 at Tucson (USA), and in September 1992 at Mondello (Italy), the conference was concerned with an important basic question: can we rely on computer systems for critical applications? This conference, like its predecessors, addressed various aspects of dependability, a broad term defined as the degree of trust that may justifiably be placed in a system's reliability, availability, safety, security and performance. Because of its broad scope, a main goal was to contribute to a unified understanding and integration of these concepts. The Program Committee selected 21 papers for presentation from a total of 95 submissions at a September meeting in Menlo Park, California. The resulting program represents a broad spectrum of interests, with papers from universities, corporations and government agencies in eight countries. The selection process was greatly facilitated by the diligent work of the program committee members, for which we are most grateful. As a Working Conference, the program was designed to promote the exchange of ideas by extensive discussions. All paper sessions ended with a 30 minute discussion period on the topics covered by the session. In addition, three panel sessions have been organized.
The papers in this volume comprise the refereed proceedings of the Second IFIP International Conference on Computer and Computing Technologies in Agriculture (CCTA 2008), held in Beijing, China, in 2008. The conference was cooperatively sponsored and organized by the China Agricultural University (CAU), the National Engineering Research Center for Information Technology in Agriculture (NERCITA), the Chinese Society of Agricultural Engineering (CSAE), the International Federation for Information Processing (IFIP), the Beijing Society for Information Technology in Agriculture, China, and the Beijing Research Center for Agro-products Test and Farmland Inspection, China. Related departments of China's central government, such as the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Ministry of Education, as well as the Beijing Municipal Natural Science Foundation and the Beijing Academy of Agricultural and Forestry Sciences, greatly contributed to and supported this event. The conference is a good platform to bring together scientists and researchers, agronomists and information engineers, extension servers and entrepreneurs from a range of disciplines concerned with the impact of information technology on sustainable agriculture and rural development. Attendees included representatives of all the supporting organizations and a group of invited speakers, experts and researchers from more than 15 countries, such as the Netherlands, Spain, Portugal, Mexico, Germany, Greece, Australia, Estonia, Japan, Korea, India, Iran, Nigeria, Brazil, and China.
Duration calculus constitutes a formal approach to the development of real-time systems; as an interval logic with special features for expressing and analyzing time durations of states in real-time systems, it allows for representing and formally reasoning about requirements and designs at an appropriate level of abstraction. This book presents the logical foundations of duration calculus in a coherent and thorough manner. Through selective case studies it explains how duration calculus can be applied to the formal specification and verification of real-time systems. The book also contains an extensive survey of the current research in this field. The material included in this book has been used for graduate and postgraduate courses, while it is also suitable for experienced researchers and professionals.
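To give a flavor of the notation, the classic gas burner requirement is a standard duration calculus example from the literature (not necessarily one of this book's case studies): in any observation interval of at least 60 seconds, the accumulated leak time must be at most one twentieth of the interval length:

```latex
\Box \left( \ell \ge 60 \;\Rightarrow\; 20 \int \mathit{Leak} \le \ell \right)
```

Here \(\ell\) denotes the length of the observation interval, \(\int \mathit{Leak}\) the accumulated duration of the state \(\mathit{Leak}\) within it, and \(\Box\) quantifies over all subintervals, which is exactly the kind of duration-of-state reasoning the calculus was designed for.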
The main intention of this book is to give an impression of the state-of-the-art in system-level memory management (data transfer and storage) related issues for complex data-dominated real-time signal and data processing applications. The material is based on research at IMEC in this area in the period 1989-1997. In order to deal with the stringent timing requirements and the data-dominated characteristics of this domain, we have adopted a target architecture style and a systematic methodology to make the exploration and optimization of such systems feasible. Our approach is also very heavily application driven, which is illustrated by several realistic demonstrators, partly used as red-thread examples in the book. Moreover, the book addresses only the steps above the traditional high-level synthesis (scheduling and allocation) or compilation (traditional or ILP oriented) tasks. The latter are mainly focussed on scalar or scalar stream operations and data where the internal structure of the complex data types is not exploited, in contrast to the approaches discussed here. The proposed methodologies are largely independent of the level of programmability in the data-path and controller, so they are valuable for the realisation of both hardware and software systems. Our target domain consists of signal and data processing systems which deal with large amounts of data.
Behavioral Intervals in Embedded Software introduces a comprehensive approach to timing, power, and communication analysis of embedded software processes. Embedded software timing, power and communication are typically not unique but occur in intervals which result from data dependent behavior, environment timing and target system properties.
The International Working Conference on Dependable Computing for Critical Applications was the first conference organized by IFIP Working Group 10.4 "Dependable Computing and Fault Tolerance," in cooperation with the Technical Committee on Fault-Tolerant Computing of the IEEE Computer Society, and the Technical Committee 7 on Systems Reliability, Safety and Security of EWICS. The rationale for the Working Conference is best expressed by the aims of WG 10.4: "Increasingly, individuals and organizations are developing or procuring sophisticated computing systems on whose services they need to place great reliance. In differing circumstances, the focus will be on differing properties of such services - e.g. continuity, performance, real-time response, ability to avoid catastrophic failures, prevention of deliberate privacy intrusions. The notion of dependability, defined as that property of a computing system which allows reliance to be justifiably placed on the service it delivers, enables these various concerns to be subsumed within a single conceptual framework. Dependability thus includes as special cases such attributes as reliability, availability, safety, security. The Working Group is aimed at identifying and integrating approaches, methods and techniques for specifying, designing, building, assessing, validating, operating and maintaining computer systems which should exhibit some or all of these attributes." The concept of WG 10.4 was formulated during the IFIP Working Conference on Reliable Computing and Fault Tolerance on September 27-29, 1979 in London, England, held in conjunction with the Europ-IFIP 79 Conference. Profs A. Avižienis (UCLA, Los Angeles, USA) and A.
Event-Triggered and Time-Triggered Control Paradigms presents a valuable survey about existing architectures for safety-critical applications and discusses the issues that must be considered when moving from a federated to an integrated architecture. The book focuses on one key topic - the amalgamation of the event-triggered and the time-triggered control paradigm into a coherent integrated architecture. The architecture provides for the integration of independent distributed application subsystems by introducing multi-criticality nodes and virtual networks of known temporal properties. The feasibility and the tangible advantages of this new architecture are demonstrated with practical examples taken from the automotive industry. Event-Triggered and Time-Triggered Control Paradigms offers significant insights into the architecture and design of integrated embedded systems, both at the conceptual and at the practical level.
Integrated System-Level Modeling of Network-on-Chip Enabled Multi-Processor Platforms first gives a comprehensive update on recent developments in the area of SoC platforms and ESL design methodologies. The main contribution is the rigorous definition of a framework for modeling at the timing approximate level of abstraction. Subsequently this book presents a set of tools for the creation and exploration of timing approximate SoC platform models.
What exactly is "safety"? A safety system should be defined as a system that will not endanger human life or the environment. A safety-critical system requires utmost care in its specification and design in order to avoid possible errors in its implementation that could result in unexpected system behavior during its operating "life." An inappropriate method could lead to loss of life, and will almost certainly result in financial penalties in the long run, whether because of loss of business or because of the imposition of fines. Risks of this kind are usually managed with the methods and tools of "safety engineering." A life-critical system is designed to lose less than one life per billion (10⁹). Nowadays, computers are used at least an order of magnitude more in safety-critical applications than two decades ago. Increasingly, electronic devices are being used in applications where their correct operation is vital to ensure the safety of human life and the environment. These applications range from anti-lock braking systems (ABS) in automobiles, to fly-by-wire aircraft, to biomedical support systems for human care. Therefore, it is vital that electronic designers be aware of the safety implications of the systems they develop. State-of-the-art electronic systems are increasingly adopting programmable devices for electronic applications. In particular, Field Programmable Gate Array (FPGA) devices are becoming very interesting due to their characteristics in terms of performance, dimensions and cost.
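A back-of-the-envelope calculation shows what a 10⁻⁹ target means in practice. The fleet-hours figure below is an assumed example value, not data from the book:

```python
# Illustrative arithmetic for a 1e-9 per-hour safety target.
# The fleet_hours figure is a hypothetical example, not from the book.
failure_rate = 1e-9    # catastrophic failures per operating hour
fleet_hours = 1e7      # assume a fleet accumulating ten million operating hours
expected_failures = failure_rate * fleet_hours
print(expected_failures)   # 0.01 expected catastrophic failures
```

In other words, even ten million operating hours across an entire fleet should yield only about a one-in-a-hundred chance of a catastrophic failure, which is why such targets cannot be demonstrated by testing alone and motivate the design-level safety methods the book discusses.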
The papers in this volume comprise the refereed proceedings of the First International Conference on Computer and Computing Technologies in Agriculture (CCTA 2007), held in Wuyishan, China, in 2007. This conference was organized by China Agricultural University, the Chinese Society of Agricultural Engineering and the Beijing Society for Information Technology in Agriculture. The purpose of this conference is to facilitate communication and cooperation between institutions and researchers on theories, methods and implementation of computer science and information technology. By researching information technology development and resources integration in rural areas in China, an innovative and effective approach is expected to be explored to promote the application of technology to the development of modern agriculture and contribute to the construction of the new countryside. The rapid development of information technology has induced substantial changes and impact on the development of China's rural areas. Western thoughts have exerted great impact on studies of Chinese information technology development, and this helps more Chinese and Western scholars to expand their studies in this academic and application area. Thus, this conference, with works by many prominent scholars, has covered computer science and technology and information development in China's rural areas, and probed into all the important issues and the newest research topics, such as Agricultural Decision Support Systems and Expert Systems, GIS, GPS, RS and Precision Farming, ICT applications in Rural Areas, Agricultural System Simulation, Evolutionary Computing, etc.
Ontology Management provides an up-to-date, scientifically correct, concise and easy-to-read reference on this topic. The book includes relevant tasks, practical and theoretical challenges, limitations and methodologies, plus available tooling support. The editors discuss integrating the conceptual and technical dimensions with a business view on using ontologies, stressing the cost dimension of ontology engineering and offering guidance on how to derive ontologies semi-automatically from existing standards and specifications.