This book examines the requirements, risks, and solutions to improve the security and quality of complex cyber-physical systems (C-CPS), such as production systems, power plants, and airplanes, in order to ascertain whether it is possible to protect engineering organizations against cyber threats and to ensure engineering project quality. The book consists of three parts that logically build upon each other. Part I "Product Engineering of Complex Cyber-Physical Systems" discusses the structure and behavior of engineering organizations producing complex cyber-physical systems, providing insights into processes and engineering activities, and highlighting the requirements and border conditions for secure and high-quality engineering. Part II "Engineering Quality Improvement" addresses quality improvements with a focus on engineering data generation, exchange, aggregation, and use within an engineering organization, and the need for proper data modeling and engineering-result validation. Lastly, Part III "Engineering Security Improvement" considers security aspects concerning C-CPS engineering, including engineering organizations' security assessments and engineering data management, security concepts and technologies that may be leveraged to mitigate the manipulation of engineering data, as well as design and run-time aspects of secure complex cyber-physical systems. The book is intended for several target groups: it enables computer scientists to identify research issues related to the development of new methods, architectures, and technologies for improving quality and security in multi-disciplinary engineering, pushing forward the current state of the art. It also allows researchers involved in the engineering of C-CPS to gain a better understanding of the challenges and requirements of multi-disciplinary engineering that will guide them in their future research and development activities. Lastly, it offers practicing engineers and managers with engineering backgrounds insights into the benefits and limitations of applicable methods, architectures, and technologies for selected use cases.
Innovation, agility, and coordination are paramount in supporting value creation in the global knowledge economy. The long-term success of a company therefore depends increasingly on its underlying resilience and agility. "Knowledge Reuse and Agile Processes" addresses the flexibility of both business and information systems through component technology at the nexus of three seemingly unrelated disciplines: service-oriented architecture, knowledge management, and business process management. Providing practitioners and academicians with timely, compelling research on agile, adaptive processes and information systems, this premier reference source will enhance the collection of every reference library.
In the early days of artificial intelligence it was widely believed that powerful computers would, in the future, enable mankind to solve many real-world problems through the use of very general inference procedures and very little domain-specific knowledge. With the benefit of hindsight, this view can now be called quite naive. The field of expert systems, which developed during the early 1970s, embraced the paradigm that "Knowledge is Power" - even very fast computers require very large amounts of very specific knowledge to solve non-trivial problems. Thus, the field of large knowledge bases has emerged. This book presents progress on building and sharing very large-scale knowledge bases. Progress has been made in specific scientific domains, including molecular biology, where large knowledge bases have become important tools for researchers. Another development is the attention being paid to structuring large knowledge bases. The use of a carefully developed set of concepts, called an "ontology", is becoming almost standard practice. This text provides a guide to the current state of the art in building and sharing very large knowledge bases, and is intended to act as a catalyst to future research, development and applications.
Fuzzy logic in the narrow sense is a promising new chapter of formal logic whose basic ideas were formulated by Lotfi Zadeh (see Zadeh 1975a). The aim of this theory is to formalize the "approximate reasoning" we use in everyday life, the object of investigation being the human aptitude to manage vague properties (as, for example, "beautiful," "small," "plausible," "believable," etc.) that by their own nature can be satisfied to a degree different from 0 (false) and 1 (true). It is worth noting that the traditional deductive framework in many-valued logic is different from the one adopted in this book for fuzzy logic: in the former logics one always uses a "crisp" deduction apparatus, producing crisp sets of formulas, the formulas that are considered logically valid. By contrast, fuzzy logical deductive machinery is devised to produce a fuzzy set of formulas (the theorems) from a fuzzy set of formulas (the hypotheses). Approximate reasoning has generated a very interesting literature in recent years. However, in spite of several basic results, in our opinion we are still far from a satisfactory setting of this very hard and mysterious subject. The aim of this book is to furnish some theoretical devices and to sketch a general framework for fuzzy logic. This is also in accordance with the non-Fregean attitude of the book.
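To make the idea of graded truth concrete, here is a minimal sketch (our own illustration, not code from the book) of fuzzy truth values in [0, 1] combined via Łukasiewicz connectives; the function names and the choice of connectives are assumptions made for illustration only:

```python
# Illustrative sketch of fuzzy truth values (degrees in [0, 1]);
# the Lukasiewicz connectives below are one common choice, assumed here.

def luk_and(a: float, b: float) -> float:
    """Lukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def luk_implies(a: float, b: float) -> float:
    """Lukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# "beautiful" and "small" satisfied to intermediate degrees:
beautiful, small = 0.8, 0.6
print(luk_and(beautiful, small))      # 0.4 -> degree of "beautiful and small"
print(luk_implies(beautiful, small))  # 0.8 -> degree of the implication
```

With truth values restricted to {0, 1}, both functions reduce to the classical connectives, which is why they are regarded as generalizations of two-valued logic.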
Analog Circuit Design contains the contributions of 18 tutorials from the 20th workshop on Advances in Analog Circuit Design. Each part discusses a specific, up-to-date topic on new and valuable design ideas in the area of analog circuit design. Each part is presented by six experts in that field, who share and survey state-of-the-art information. This book is number 20 in this successful Analog Circuit Design series, providing valuable information and excellent overviews of:
In the last decade, ontologies have received much attention within computer science and related disciplines, most often in connection with the Semantic Web. Ontology Learning and Population from Text: Algorithms, Evaluation and Applications discusses ontologies for the Semantic Web, knowledge management, information retrieval, text clustering and classification, and natural language processing. The book is aimed at research scientists and practitioners in industry, and is also suitable for graduate-level students in computer science.
On any advanced integrated circuit or "system-on-chip" there is a need for security. In many applications the actual implementation has become the weakest link in security rather than the algorithms or protocols. The purpose of the book is to give the integrated circuits and systems designer an insight into the basics of security and cryptography from the implementation point of view. As a designer of integrated circuits and systems it is important to know both the state-of-the-art attacks as well as the countermeasures. Optimizing for security is different from optimizations for speed, area, or power consumption. It is therefore difficult to attain the delicate balance between the extra cost of security measures and the added benefits.
Autonomous, Model-Based Diagnosis Agents defines and describes the implementation of an architecture for autonomous, model-based diagnosis agents. It does this by developing a logic programming approach for model-based diagnosis and introducing strategies to deal with more complex diagnosis problems, and then embedding the diagnosis framework into the agent architecture of vivid agents. Autonomous, Model-Based Diagnosis Agents surveys extended logic programming and shows how this expressive language is used to model diagnosis problems stemming from applications such as digital circuits, traffic control, integrity checking of a chemical database, alarm correlation in cellular phone networks, diagnosis of an automatic mirror furnace, and diagnosis of communication protocols. The book reviews a bottom-up algorithm to remove contradiction from extended logic programs and substantially improves it by top-down evaluation of extended logic programs. Both algorithms are evaluated in the circuit domain, including some of the ISCAS85 benchmark circuits. This comprehensive in-depth study of concepts, architectures, and implementation of autonomous, model-based diagnosis agents will be of great value for researchers, engineers, and graduate students with a background in artificial intelligence. For practitioners, it provides three main contributions: first, it provides many examples from diverse areas, ranging from alarm correlation in phone networks to inconsistency checking in databases; second, it describes an architecture to develop agents; and third, it describes a sophisticated and declarative implementation of the concepts and architectures introduced.
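To give a flavor of model-based diagnosis, the following toy sketch (our own illustration; the book itself works with extended logic programming rather than this brute-force search) finds minimal sets of abnormal components that would explain an unexpected circuit output:

```python
# Toy consistency-based diagnosis sketch (illustrative, not from the book).
from itertools import combinations

def circuit(inputs, abnormal):
    """Two AND gates g1, g2 in series; an abnormal gate inverts its output."""
    a, b, c = inputs
    g1 = a and b
    if "g1" in abnormal:
        g1 = not g1
    g2 = g1 and c
    if "g2" in abnormal:
        g2 = not g2
    return g2

def diagnoses(inputs, observed, components=("g1", "g2")):
    """Return minimal sets of components whose failure explains the observation."""
    found = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            if circuit(inputs, set(cand)) == observed:
                # keep only candidates not subsumed by a smaller diagnosis
                if not any(set(d) <= set(cand) for d in found):
                    found.append(cand)
        if found:
            return found
    return found

# All-true inputs should yield True; observing False implicates either gate:
print(diagnoses((True, True, True), False))  # [('g1',), ('g2',)]
```

The logic-programming formulation in the book plays the same role: it characterizes which assumptions about abnormal components are consistent with the observed behavior.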
This book constitutes the refereed proceedings of the Third International Conference on Intelligence Science, ICIS 2018, held in Beijing, China, in November 2018. The 44 full papers and 5 short papers presented were carefully reviewed and selected from 85 submissions. They deal with key issues in intelligence science and have been organized in the following topical sections: brain cognition; machine learning; data intelligence; language cognition; perceptual intelligence; intelligent robots; fault diagnosis; and ethics of artificial intelligence.
Intelligent Decision Support Systems have the potential to transform human decision making by combining research in artificial intelligence, information technology, and systems engineering. The field of intelligent decision making is expanding rapidly due, in part, to advances in artificial intelligence and network-centric environments that can deliver the technology. Communication and coordination between dispersed systems can deliver just-in-time information, real-time processing, collaborative environments, and globally up-to-date information to a human decision maker. At the same time, artificial intelligence techniques have demonstrated that they have matured sufficiently to provide computational assistance to humans in practical applications. This book includes contributions from leading researchers in the field beginning with the foundations of human decision making and the complexity of the human cognitive system. Researchers contrast human and artificial intelligence, survey computational intelligence, present pragmatic systems, and discuss future trends. This book will be an invaluable resource to anyone interested in the current state of knowledge and key research gaps in the rapidly developing field of intelligent decision support.
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition away from understanding single images to analyzing image sequences, or video understanding. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis.
Air traffic controllers need advanced information and automated systems to provide a safe environment for everyone traveling by plane. One of the primary challenges in developing training for automated systems is to determine how much a trainee will need to know about the underlying technologies to use automation safely and efficiently. To ensure safety and success, task analysis techniques should be used as the basis of the design for training in automated systems in the aviation and aerospace industries. Automated Systems in the Aviation and Aerospace Industries is a pivotal reference source that provides vital research on the application of underlying technologies used to enforce automation safety and efficiency. While highlighting topics such as expert systems, text mining, and human-machine interfaces, this publication explores the construction of navigation algorithms based on video information, together with methods for estimating the availability and accuracy of satellite navigation. This book is ideal for aviation professionals, researchers, and managers seeking current research on information technology used to reduce the risk involved in aviation.
There is no doubt that Design Technology plays a key role in today's advanced product development environment. To reduce the time to market, achieve zero-defect quality the first time, and use available production and logistics resources effectively, product and design process knowledge covering the whole product life cycle must be used efficiently and effectively during product design. With greater emphasis over the last decade on functionally better, cheaper, and greener man-made objects, understanding how to systematically capture, structure, and proactively reuse product life cycle knowledge is increasingly a necessity if we are to effectively and efficiently support such an industrial requirement. Part One - KIC Development Approaches,
Intelligent control is a rapidly developing, complex and challenging field of great practical importance and potential. Because of the rapidly developing and interdisciplinary nature of the subject, there are only a few edited volumes consisting of research papers on intelligent control systems, and little is known and published about the fundamentals and the general know-how in designing, implementing and operating intelligent control systems. Intelligent control systems emerged from artificial intelligence and computer controlled systems as an interdisciplinary field. Therefore the book summarizes the fundamentals of knowledge representation, reasoning, expert systems and real-time control systems, and then discusses the design, implementation, verification and operation of real-time expert systems using G2 as an example. Special tools and techniques applied in intelligent control are also described, including qualitative modelling, Petri nets and fuzzy controllers. The material is illustrated with simple examples taken from the field of intelligent process control.
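As a taste of the fuzzy-controller material, here is a minimal two-rule temperature controller with weighted-average defuzzification; this simplified sketch is our own and is not an example from the book:

```python
# Minimal fuzzy controller sketch (illustrative; rule base and membership
# functions are assumptions). Error = setpoint - measured temperature.

def mu_cold(err):   # membership of "too cold"
    return min(1.0, max(0.0, err / 5.0))

def mu_hot(err):    # membership of "too hot"
    return min(1.0, max(0.0, -err / 5.0))

def heater_power(err):
    """Weighted average of two rules:
    IF cold THEN power = 100; IF hot THEN power = 0."""
    w_cold, w_hot = mu_cold(err), mu_hot(err)
    if w_cold + w_hot == 0:
        return 50.0  # no rule fires: hold a neutral output
    return (w_cold * 100.0 + w_hot * 0.0) / (w_cold + w_hot)

for err in (-4, 0, 2, 6):
    print(err, "->", heater_power(err))
```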
1. GENERAL. The term "diagnostics" refers to the general theory of diagnosis, not to the study of specific diagnoses but to their general framework. It borrows from different sciences and from different philosophies. Traditionally, the general framework of diagnostics was not distinguished from the framework of medicine. It was not taught in special courses in any systematic way; it was not accorded special attention: students absorbed it intuitively. There is almost no comprehensive study of diagnostics. The instruction in diagnosis provided in medical schools is exclusively specific. Clinical instruction includes (in addition to vital background information, such as anatomy and physiology) specific instruction in nosology, the theory and classification of diseases, and this includes information on diagnoses and prognoses of diverse diseases. What is the cause of the neglect of diagnostics, and of its integrated teaching? The main cause may be the prevalence of the view of diagnostics as part and parcel of nosology. In this book nosology is taken as a given, autonomous field of study, which invites almost no comments; we shall freely borrow from it a few important general theses and a few examples. We attempt to integrate here three studies: (1) of the way nosology is used in the diagnostic process; (2) of the diagnostic process as a branch of applied ethics; and (3) of the diagnostic process as a branch of social science and social technology.
This practically-focused text presents a hands-on guide to making biometric technology work in real-life scenarios. Extensively revised and updated, this new edition takes a fresh look at what it takes to integrate biometrics into wider applications. An emphasis is placed on the importance of a complete understanding of the broader scenario, covering technical, human and implementation factors. This understanding may then be exercised through interactive chapters dealing with educational software utilities and the BANTAM Program Manager. Features: provides a concise introduction to biometrics; examines both technical issues and human factors; highlights the importance of a broad understanding of biometric technology implementation from both a technical and operational perspective; reviews a selection of freely available utilities including the BANTAM Program Manager; considers the logical next steps on the path from aspiration to implementation, and looks towards the future use of biometrics in context.
As the world currently subsists as a platform for exchange among complex, intelligent systems that are constantly adapting and evolving to suit the surrounding physical, sociological, emotional, and sensory environment, understanding the theory and emergence of complex adaptive systems is of paramount importance. "Intelligent Complex Adaptive Systems" explores the foundation, history, and theory of intelligent adaptive systems, providing scholars, researchers, and practitioners with a fundamental resource on topics such as the emergence of intelligent adaptive systems in social sciences, biologically inspired artificial social systems, sensory information processing, as well as the conceptual and methodological issues and approaches to intelligent adaptive systems.
Previous research in the knowledge management and information systems fields simply defines knowledge by a few categories, and then describes knowledge systems, their usage, and the difficulties with them. Knowledge and Knowledge Systems: Learning from the Wonders of the Mind starts from the beginning: where and how knowledge is formed and how it can be measured, describing humans and their knowledge path from conception and birth to maturity.
Distributed and Parallel Database Object Management brings together in one place important contributions and state-of-the-art research results in this rapidly advancing area of computer science. Distributed and Parallel Database Object Management serves as an excellent reference, providing insights into some of the most important issues in the field.
With continual computer advances in the information technology age, information systems have become an integral part of many disciplines. Business, medicine, geography, aviation, forensics, agriculture, even traffic lights all have one thing in common - computers. "Utilizing Information Technology Systems Across Disciplines: Advancements in the Application of Computer Science" provides original material concerned with all aspects of information resources management, managerial and organizational applications, as well as implications of information technology. An advanced reference work in its field, this book is instrumental in the improvement and development of the theory and practice of information resources management, appealing to both practicing managers and academicians.
This book constitutes the refereed post-conference proceedings of the 17th IFIP WG 5.1 International Conference on Product Lifecycle Management, PLM 2020, held in Rapperswil, Switzerland, in July 2020. The conference was held virtually due to the COVID-19 crisis. The 60 revised full papers presented together with 2 technical industrial papers were carefully reviewed and selected from 80 submissions. The papers are organized in the following topical sections: smart factory; digital twins; Internet of Things (IoT, IIoT); analytics in the order fulfillment process; ontologies for interoperability; tools to support early design phases; new product development; business models; circular economy; maturity implementation and adoption; model based systems engineering; artificial intelligence in CAx, MBE, and PLM; building information modelling; and industrial technical contributions.
Handbook of Metaheuristic Algorithms: From Fundamental Theories to Advanced Applications provides a brief introduction to metaheuristic algorithms from the ground up, including basic ideas and advanced solutions. Although readers may be able to find source code for some metaheuristic algorithms on the Internet, the coding styles and explanations differ widely, requiring readers to bridge the gap between theory and implementation themselves. This book can also help students and researchers construct an integrated perspective on metaheuristic and unsupervised algorithms for artificial intelligence research in computer science and applied engineering domains. Metaheuristic algorithms can be considered the epitome of unsupervised learning algorithms for the optimization of engineering and artificial intelligence problems; they include simulated annealing (SA), tabu search (TS), genetic algorithms (GA), ant colony optimization (ACO), particle swarm optimization (PSO), differential evolution (DE), and others. Unlike most supervised learning algorithms, which need labeled data to learn and construct determination models, metaheuristic algorithms inherit the characteristics of unsupervised learning algorithms and solve complex engineering optimization problems without labeled data, in a self-learning manner.
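For readers who want to connect the theory to running code immediately, the following minimal simulated annealing (SA) sketch minimizes a simple function without gradients or labeled data; it is our own illustration, and the neighborhood and cooling schedule are arbitrary choices, not the book's:

```python
# Minimal simulated annealing sketch for 1-D minimization (illustrative).
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.95, steps=1000):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)   # random neighbor
        fc = f(candidate)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp((fx - fc) / temp).
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling                             # geometric cooling
    return best, fbest

# Minimize (x - 3)^2 starting far from the optimum:
print(simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0))
```

The same accept/cool skeleton underlies the other metaheuristics the book names; they differ mainly in how candidate solutions are generated and selected.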
This book provides modern technical answers to the legal requirements of pseudonymisation as recommended by privacy legislation. It covers topics such as modern regulatory frameworks for sharing and linking sensitive information, concepts and algorithms for privacy-preserving record linkage and their computational aspects, practical considerations such as dealing with dirty and missing data, as well as privacy, risk, and performance assessment measures. Existing techniques for privacy-preserving record linkage are evaluated empirically and real-world application examples that scale to population sizes are described. The book also includes pointers to freely available software tools, benchmark data sets, and tools to generate synthetic data that can be used to test and evaluate linkage techniques. The book consists of fourteen chapters grouped into four parts, and two appendices. The first part introduces the reader to the topic of linking sensitive data, the second part covers methods and techniques to link such data, the third part discusses aspects of practical importance, and the fourth part provides an outlook of future challenges and open research problems relevant to linking sensitive databases. The appendices provide pointers and describe freely available, open-source software systems that allow the linkage of sensitive data, and provide further details about the evaluations presented. A companion Web site at https://dmm.anu.edu.au/lsdbook2020 provides additional material and Python programs used in the book. This book is mainly written for applied scientists, researchers, and advanced practitioners in governments, industry, and universities who are concerned with developing, implementing, and deploying systems and tools to share sensitive information in administrative, commercial, or medical databases. "The book describes how linkage methods work and how to evaluate their performance. It covers all the major concepts and methods and also discusses practical matters such as computational efficiency, which are critical if the methods are to be used in practice - and it does all this in a highly accessible way!" (David J. Hand, Imperial College, London)
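To illustrate the kind of technique the book covers, the sketch below shows Bloom-filter encoding of names, a standard building block of privacy-preserving record linkage; this simplified version is our own and is not code from the book or its companion site:

```python
# Bloom-filter name encoding for privacy-preserving record linkage
# (simplified illustration; parameters m and k are arbitrary here).
import hashlib

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(name, m=64, k=4):
    """Hash each character bigram k times into an m-bit filter."""
    bits = set()
    for gram in bigrams(name):
        for seed in range(k):
            h = hashlib.sha256(f"{seed}:{gram}".encode()).hexdigest()
            bits.add(int(h, 16) % m)
    return bits

def dice(b1, b2):
    """Dice similarity of two bit sets; compares encodings, not raw names."""
    return 2 * len(b1 & b2) / (len(b1) + len(b2))

# Similar names yield similar encodings even though names are never shared:
print(dice(bloom_encode("christina"), bloom_encode("kristina")))  # high
print(dice(bloom_encode("christina"), bloom_encode("john")))      # low
```

Because only the bit patterns are exchanged, two parties can estimate name similarity without revealing the underlying identifying values, which is the core idea behind linking sensitive databases.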
Information granules are fundamental conceptual entities facilitating perception of complex phenomena and contributing to the enhancement of human centricity in intelligent systems. The formal frameworks of information granules and information granulation comprise fuzzy sets, interval analysis, probability, rough sets, and shadowed sets, to name only a few representatives. Among current developments of Granular Computing, interesting options concern information granules of higher order and of higher type. Higher order information granularity is concerned with an effective formation of information granules over the space originally constructed by information granules of lower order. This construct is directly associated with the concept of a hierarchy of systems composed of successive processing layers characterized by increasing levels of abstraction. This idea of layered, hierarchical realization of models of complex systems has gained a significant level of visibility in fuzzy modeling with the well-established concept of hierarchical fuzzy models, where one strives to achieve a sound tradeoff between accuracy and the level of detail captured by the model and its level of interpretability. Higher type information granules emerge when the information granules themselves cannot be fully characterized in a purely numerical fashion and it instead becomes convenient to exploit their realization in the form of other types of information granules, such as type-2 fuzzy sets, interval-valued fuzzy sets, or probabilistic fuzzy sets. Higher order and higher type information granules constitute the focus of the studies on Granular Computing presented in this book. The book elaborates on sound methodologies of Granular Computing, algorithmic pursuits, and an array of diverse applications and case studies in environmental studies, option price forecasting, and power engineering.
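As a concrete hint of what a higher-type information granule looks like, the following toy example (our own, not taken from the book) represents membership in "tall" as an interval rather than a single number, the defining trait of an interval-valued fuzzy set:

```python
# Toy interval-valued fuzzy set: each membership grade is itself an interval,
# capturing uncertainty about the grade (thresholds below are assumptions).

def tall_interval(height_cm: float) -> tuple[float, float]:
    """Lower and upper membership of 'tall' for a given height."""
    lower = min(1.0, max(0.0, (height_cm - 170) / 30))  # conservative grade
    upper = min(1.0, max(0.0, (height_cm - 160) / 30))  # permissive grade
    return (lower, upper)

for h in (165, 180, 195):
    lo, hi = tall_interval(h)
    print(f"{h} cm -> membership in [{lo:.2f}, {hi:.2f}]")
```

A type-1 fuzzy set would collapse each interval to a single grade; keeping the interval is what makes the granule "higher type."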
This book presents the reader with a complete and comprehensive picture of what is happening today in banks and other financial institutions in terms of expert systems implementation. In addition it helps in refining the reader's thoughts on how to build an environment for the successful implementation of expert systems in banking - and how to sell this concept to management, including risks and opportunities.
You may like...
Research Anthology on Artificial Neural… - Information Resources Management Association (Hardcover) - R14,050 (Discovery Miles 140 500)
Exploring Future Opportunities of… - Madhulika Bhatia, Tanupriya Choudhury, … (Hardcover) - R7,249 (Discovery Miles 72 490)
Research Anthology on Artificial Neural… - Information Resources Management Association (Hardcover) - R14,040 (Discovery Miles 140 400)
Deep Learning Applications for… - Monica R. Mundada, Seema S., … (Hardcover) - R7,211 (Discovery Miles 72 110)
Information Modelling and Knowledge… - Y. Kiyoki, B. Wangler, … (Hardcover) - R2,433 (Discovery Miles 24 330)
Foundation Models for Natural Language… - Gerhard Paaß, Sven Giesselbach (Hardcover)