This monograph introduces a novel multiset-based conceptual, mathematical and knowledge engineering paradigm, called the multigrammatical framework (MGF), used for planning and scheduling in resource-consuming, resource-producing (industrial) and resource-distributing (economic) sociotechnological systems (STS). The framework is meant to enable smart operation not only in a "business-as-usual" mode, but also in extraordinary, highly volatile or hazardous environments. It results from the convergence and deep integration of several well-known paradigms from classical operations research and modern knowledge engineering, such as mathematical programming, game theory, optimal scheduling, logic programming and constraint programming, into a unified, flexible and efficiently implemented formalism operating on multisets. The mathematical background needed for the MGF, its algorithmics, applications and implementation issues, as well as its nexus with known models from operations research and theoretical computer science, are considered. The resilience and recovery of an STS are studied by applying the MGF toolkit, with special attention paid to the multigrammatical assessment of the resilience of energy infrastructures. MGF-represented resource-based games are introduced, and directions for further development are discussed. The author presents multiple applications to business intelligence, critical infrastructure, ecology, economy and industry. This book is addressed to scholars working in theoretical and applied computer science, artificial intelligence, systems analysis, operations research, mathematical economics and critical infrastructure protection, to engineers developing software-intensive solutions for implementation of the knowledge-based digital economy and Industry 4.0, as well as to students, doctoral candidates and university staff.
Foundational knowledge of set theory, mathematical logic and routine operations on databases is needed to read this book. The content of the monograph is presented gradually, from simple to complex, in a clear, step-by-step manner. Multiple examples and accompanying figures are included to support the explanation of the various notions, expressions and algorithms.
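The multiset operations underlying a framework like the MGF can be illustrated in a few lines. The sketch below (not the book's notation; the resources and quantities are invented for illustration) treats a resource-producing rule as a multiset-rewriting step: a rule consumes a multiset of inputs and produces a multiset of outputs, and applies only when the inputs are available.

```python
# Illustrative sketch of multiset rewriting for resource production.
# Resource names and quantities below are hypothetical examples.
from collections import Counter

def apply_rule(state, consumed, produced):
    """Apply one multiset-rewriting step, or return None if not applicable."""
    if any(state[r] < n for r, n in consumed.items()):
        return None  # insufficient resources: rule does not apply
    return state - Counter(consumed) + Counter(produced)

# Hypothetical rule: assemble one device from 2 boards and 4 screws.
stock = Counter(board=5, screw=9)
rule = (Counter(board=2, screw=4), Counter(device=1))

stock = apply_rule(stock, *rule)
print(stock)  # Counter({'screw': 5, 'board': 3, 'device': 1})
```

Repeatedly applying such rules until no rule fires is the essence of scheduling over multisets of resources.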
5G IoT and Edge Computing for Smart Healthcare addresses the importance of a 5G IoT and Edge-Cognitive-Computing-based system for the successful implementation and realization of a smart-healthcare system. The book provides insights on 5G technologies, along with intelligent processing algorithms/processors that have been adopted for processing medical data and that assist in addressing the challenges of computer-aided diagnosis and clinical risk analysis on a real-time basis. Each chapter is self-sufficient, solving real-time problems through novel approaches that help the audience acquire the right knowledge. With the progressive development of medical, communication, and computer technologies, the healthcare system has seen a tremendous opportunity to meet today's new requirements.
In knowledge-based natural language generation, issues of formal knowledge representation meet with the linguistic problems of choosing the most appropriate verbalization in a particular situation of utterance. Lexical Semantics and Knowledge Representation in Multilingual Text Generation presents a new approach to systematically linking the realms of lexical semantics and knowledge represented in a description logic. For language generation from such abstract representations, lexicalization is taken as the central step: when choosing words that cover the various parts of the content representation, the principal decisions on conveying the intended meaning are made. A preference mechanism is used to construct the utterance that is best tailored to parameters representing the context. Lexical Semantics and Knowledge Representation in Multilingual Text Generation develops the means for systematically deriving a set of paraphrases from the same underlying representation with the emphasis on events and verb meaning. Furthermore, the same mapping mechanism is used to achieve multilingual generation: English and German output are produced in parallel, on the basis of an adequate division between language-neutral and language-specific (lexical and grammatical) knowledge. Lexical Semantics and Knowledge Representation in Multilingual Text Generation provides detailed insights into designing the representations and organizing the generation process. Readers with a background in artificial intelligence, cognitive science, knowledge representation, linguistics, or natural language processing will find a model of language production that can be adapted to a variety of purposes.
The book presents a broad overview of traditional machine learning methods and state-of-the-art deep learning practices for hardware security applications, in particular the techniques of launching potent "modeling attacks" on Physically Unclonable Function (PUF) circuits, which are promising hardware security primitives. The volume is self-contained and includes a comprehensive background on PUF circuits, and the necessary mathematical foundation of traditional and advanced machine learning techniques such as support vector machines, logistic regression, neural networks, and deep learning. This book can be used as a self-learning resource for researchers and practitioners of hardware security, and will also be suitable for graduate-level courses on hardware security and application of machine learning in hardware security. A stand-out feature of the book is the availability of reference software code and datasets to replicate the experiments described in the book.
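The "modeling attack" idea can be sketched briefly. An arbiter PUF is well approximated by a linear threshold function of a parity-transformed challenge, so plain logistic regression on observed challenge-response pairs (CRPs) learns a software clone. The simulation below is an illustrative assumption, not the book's code: the PUF is modeled as a secret linear delay vector, and the stage count, CRP count, and training schedule are arbitrary choices.

```python
# Sketch of a modeling attack on a *simulated* arbiter PUF.
# All sizes, seeds, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_STAGES, N_CRPS = 16, 3000

def parity_features(challenges):
    """Map 0/1 challenges to the standard arbiter-PUF parity feature vector."""
    signs = 1 - 2 * challenges                       # 0/1 -> +1/-1
    # phi_i = product of signs from stage i to the last stage (suffix products),
    # plus a constant-1 feature for the bias term.
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# Simulated PUF: secret delay vector; response = sign of a linear combination.
w_secret = rng.normal(size=N_STAGES + 1)
challenges = rng.integers(0, 2, size=(N_CRPS, N_STAGES))
X = parity_features(challenges)
y = (X @ w_secret > 0).astype(float)                 # observed responses

# Attack: logistic regression trained by batch gradient descent.
X_tr, y_tr, X_te, y_te = X[:2000], y[:2000], X[2000:], y[2000:]
w = np.zeros(N_STAGES + 1)
for _ in range(500):
    z = np.clip(X_tr @ w, -30, 30)                   # avoid overflow in exp
    p = 1 / (1 + np.exp(-z))
    w -= 0.5 * X_tr.T @ (p - y_tr) / len(y_tr)

accuracy = np.mean((X_te @ w > 0) == (y_te > 0.5))
print(f"clone accuracy on held-out CRPs: {accuracy:.2f}")
```

Because the underlying response function is linear in the parity features, a few thousand CRPs suffice for the clone to predict held-out responses with high accuracy, which is precisely why such PUFs need countermeasures.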
Visual Question Answering (VQA) usually combines visual inputs such as images and video with a natural language question concerning the input, and generates a natural language answer as the output. This is by nature a multi-disciplinary research problem, involving computer vision (CV), natural language processing (NLP), knowledge representation and reasoning (KR), etc. Further, VQA is an ambitious undertaking, as it must overcome the challenges of general image understanding and the question-answering task, as well as the difficulties entailed by using large-scale databases with mixed-quality inputs. However, with the advent of deep learning (DL) and driven by the existence of advanced techniques in both CV and NLP and the availability of relevant large-scale datasets, we have recently seen enormous strides in VQA, with more systems and promising results emerging. This book provides a comprehensive overview of VQA, covering fundamental theories, models, datasets, and promising future directions. Given its scope, it can be used as a textbook on computer vision and natural language processing, especially for researchers and students in the area of visual question answering. It also highlights the key models used in VQA.
The papers in this volume comprise the refereed proceedings of the Second IFIP International Conference on Computer and Computing Technologies in Agriculture (CCTA 2008), held in Beijing, China, in 2008. The conference was cooperatively sponsored and organized by the China Agricultural University (CAU), the National Engineering Research Center for Information Technology in Agriculture (NERCITA), the Chinese Society of Agricultural Engineering (CSAE), the International Federation for Information Processing (IFIP), the Beijing Society for Information Technology in Agriculture, and the Beijing Research Center for Agro-products Test and Farmland Inspection. The related departments of China's central government, including the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Ministry of Education, as well as the Beijing Municipal Natural Science Foundation and the Beijing Academy of Agricultural and Forestry Sciences, greatly contributed to and supported this event. The conference provided a good platform to bring together scientists and researchers, agronomists and information engineers, extension workers and entrepreneurs from a range of disciplines concerned with the impact of information technology on sustainable agriculture and rural development. It was attended by representatives of all the supporting organizations and by a group of invited speakers, experts and researchers from more than 15 countries, including the Netherlands, Spain, Portugal, Mexico, Germany, Greece, Australia, Estonia, Japan, Korea, India, Iran, Nigeria, Brazil and China.
This book constitutes the refereed post-conference proceedings of the Fourth IFIP International Cross-Domain Conference on Internet of Things, IFIPIoT 2021, held virtually in November 2021. The 15 full papers presented were carefully reviewed and selected from 33 submissions. Also included is a summary of two panel sessions held at the conference. The papers are organized in the following topical sections: challenges in IoT Applications and Research, Modernizing Agricultural Practice Using IoT, Cyber-physical IoT systems in Wildfire Context, IoT for Smart Health, Security, Methods.
This book provides a coherent and complete overview of various Question Answering (QA) systems. It covers three main categories based on the source of the data, which can be unstructured text (TextQA), structured knowledge graphs (KBQA), or a combination of both. Developing a QA system usually requires using a combination of various important techniques, including natural language processing, information retrieval and extraction, knowledge graph processing, and machine learning. After a general introduction and an overview of the book in Chapter 1, the history of QA systems and the architecture of different QA approaches are explained in Chapter 2. It starts with early closed-domain QA systems and reviews different generations of QA up to state-of-the-art hybrid models. Next, Chapter 3 is devoted to explaining the datasets and the metrics used for evaluating TextQA and KBQA. Chapter 4 introduces the neural and deep learning models used in QA systems. This chapter includes the required knowledge of deep learning and neural text representation models for comprehending the QA models over text and over knowledge bases explained in Chapters 5 and 6, respectively. In some KBQA models, textual data is also used as another source besides the knowledge base; these hybrid models are studied in Chapter 7. In Chapter 8, a detailed explanation of some well-known real applications of QA systems is provided. Eventually, open issues and future work on QA are discussed in Chapter 9. This book delivers a comprehensive overview of QA over text, QA over knowledge bases, and hybrid QA systems, which can be used by researchers starting in this field. It will help its readers to follow the state-of-the-art research in the area by providing essential and basic knowledge.
Knowledge Representation and Relation Nets introduces a fresh approach to knowledge representation that can be used to organize study material in a convenient, teachable and learnable form. The method extends and formalizes concept mapping by developing knowledge representation as a structure of concepts and the relationships among them. Such a formal description of analogy results in a controlled method of modeling 'new' knowledge in terms of 'existing' knowledge in teaching and learning situations, and its applications result in a consistent and well-organized approach to problem solving. Additionally, strategies for the presentation of study material to learners arise naturally in this representation. While the theory of relation nets is dealt with in detail in part of this book, the reader need not master the formal mathematics in order to apply the theory to this method of knowledge representation. To assist the reader, each chapter starts with a brief summary, and the main ideas are illustrated by examples. The reader is also given an intuitive view of the formal notions used in the applications by means of diagrams, informal descriptions, and simple sets of construction rules. Knowledge Representation and Relation Nets is an excellent source for teachers, courseware designers and researchers in knowledge representation, cognitive science, theories of learning, the psychology of education, and structural modeling.
This book focuses on how real-time task schedules for reconfigurable hardware-based embedded platforms may be affected by the vulnerability of the hardware, and proposes self-aware security strategies to counteract the various threats. The emergence of Industry 4.0 has witnessed the deployment of reconfigurable hardware, or field programmable gate arrays (FPGAs), in diverse embedded applications. Such deployments involve the execution of several real-time tasks arranged in schedules, but they also raise several issues. The development of fully and partially reconfigurable task schedules that eliminate these problems is discussed. However, even such real-time task schedules may be jeopardized by hardware threats. These threats are analyzed, and self-aware security techniques are proposed that can detect and mitigate them at runtime.
This book presents established and state-of-the-art methods in Language Technology (including text mining, corpus linguistics, computational linguistics, and natural language processing), and demonstrates how they can be applied by humanities scholars working with textual data. The landscape of humanities research has recently changed thanks to the proliferation of big data and large textual collections such as Google Books, Early English Books Online, and Project Gutenberg. These resources have yet to be fully explored by new generations of scholars, and the authors argue that Language Technology has a key role to play in the exploration of large-scale textual data. The authors use a series of illustrative examples from various humanistic disciplines (mainly but not exclusively from History, Classics, and Literary Studies) to demonstrate basic and more complex use-case scenarios. This book will be useful to graduate students and researchers in humanistic disciplines working with textual data, including History, Modern Languages, Literary studies, Classics, and Linguistics. This is also a very useful book for anyone teaching or learning Digital Humanities and interested in the basic concepts from computational linguistics, corpus linguistics, and natural language processing.
Knowledge representation is a key area of modern AI, underlying the development of semantic networks. Description logics are languages that represent knowledge in a structured and formally well-understood way: they are the cornerstone of the Semantic Web. This is the first textbook describing this important new topic and will be suitable for courses aimed at advanced undergraduate and beginning graduate students, or for self-study. It assumes only a basic knowledge of computer science concepts. After general introductions motivating and overviewing the subject, the authors describe a simple DL, how it works, and how it can be used, utilizing a running example that recurs throughout the book. Methods of reasoning and their implementation and complexity are examined. Finally, the authors provide a non-trivial DL knowledge base and use it to illustrate the features that have been introduced; this base is available for free online access in a form usable by modern ontology editors.
This book constitutes the refereed proceedings of the IFIP Industry Oriented Conferences held at the 20th World Computer Congress in Milano, Italy on September 7-10, 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.
The development of modern knowledge-based systems, for applications ranging from medicine to finance, necessitates going well beyond traditional rule-based programming. Frontiers of Expert Systems: Reasoning with Limited Knowledge attempts to satisfy such a need, introducing exciting and recent advances at the frontiers of the field of expert systems. Beginning with the central topics of logic, uncertainty and rule-based reasoning, each chapter in the book presents a different perspective on how we may solve problems that arise due to limitations in the knowledge of an expert system's reasoner. Successive chapters address (i) the fundamentals of knowledge-based systems, (ii) formal inference, and reasoning about models of a changing and partially known world, (iii) uncertainty and probabilistic methods, (iv) the expression of knowledge in rule-based systems, (v) evolving representations of knowledge as a system interacts with the environment, (vi) applying connectionist learning algorithms to improve on knowledge acquired from experts, (vii) reasoning with cases organized in indexed hierarchies, (viii) the process of acquiring and inductively learning knowledge, (ix) extraction of knowledge nuggets from very large data sets, and (x) interactions between multiple specialized reasoners with specialized knowledge bases. Each chapter takes the reader on a journey from elementary concepts to topics of active research, providing a concise description of several topics within and related to the field of expert systems, with pointers to practical applications and other relevant literature. Frontiers of Expert Systems: Reasoning with Limited Knowledge is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
There is tremendous interest in the design and applications of agents in virtually every area, including avionics, business, the internet, engineering, health sciences and management. There is no single agreed definition of an agent, but we can define an agent as a computer program that autonomously or semi-autonomously acts on behalf of the user. In the last five years, the transition of intelligent systems research in general, and agent-based research in particular, from a laboratory environment into the real world has resulted in the emergence of several phenomena. These trends can be placed in three categories, namely humanization, architectures, and learning and adaptation. These phenomena are distinct from the traditional logic-centered approach associated with the agent paradigm. Humanization of agents can be understood, among other aspects, in terms of the semantic quality of agent design. The need to humanize agents is to allow practitioners and users to make more effective use of this technology; it relates to the semantic quality of the agent design. Further, context-awareness is another aspect which has assumed importance in the light of ubiquitous computing and ambient intelligence. The widespread and varied use of agents, on the other hand, has created a need for agent-based software development frameworks and design patterns, as well as architectures for situated interaction, negotiation, e-commerce, e-business and information retrieval. Finally, traditional agent designs did not incorporate human-like abilities of learning and adaptation.
How to draw plausible conclusions from uncertain and conflicting sources of evidence is one of the major intellectual challenges of Artificial Intelligence. It is a prerequisite of the smart technology needed to help humans cope with the information explosion of the modern world. In addition, computational modelling of uncertain reasoning is a key to understanding human rationality. Previous computational accounts of uncertain reasoning have fallen into two camps: purely symbolic and numeric. This book represents a major advance by presenting a unifying framework which unites these opposing camps. The Incidence Calculus can be viewed as both a symbolic and a numeric mechanism. Numeric values are assigned indirectly to evidence via the possible worlds in which that evidence is true. This facilitates purely symbolic reasoning using the possible worlds and numeric reasoning via the probabilities of those possible worlds. Moreover, the indirect assignment solves some difficult technical problems, like the combination of dependent sources of evidence, which had defeated earlier mechanisms. Weiru Liu generalises the Incidence Calculus and then compares it to a succession of earlier computational mechanisms for uncertain reasoning: Dempster-Shafer Theory, Assumption-Based Truth Maintenance, Probabilistic Logic, Rough Sets, etc. She shows how each of them is represented and interpreted in Incidence Calculus. The consequence is a unified mechanism which includes both symbolic and numeric mechanisms as special cases. It provides a bridge between symbolic and numeric approaches, retaining the advantages of both and overcoming some of their disadvantages.
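The indirect numeric assignment described above can be sketched in a few lines: each possible world carries a probability, each proposition has an incidence set (the worlds in which it is true), and the degree of belief in a proposition is the total probability of its incidence set. The worlds, weights, and propositions below are illustrative assumptions, not an example from the book.

```python
# Sketch of the incidence-calculus idea: probability attaches to possible
# worlds, and a proposition inherits the mass of its incidence set.
# Worlds, weights, and propositions here are illustrative assumptions.

worlds = {"w1": 0.5, "w2": 0.3, "w3": 0.2}   # probability per possible world

# Incidence of each proposition: the set of worlds in which it is true.
incidence = {
    "rain": {"w1", "w2"},
    "wind": {"w2", "w3"},
}

def belief(prop):
    """Belief in a proposition = total weight of its incidence set."""
    return sum(worlds[w] for w in incidence[prop])

def belief_and(p, q):
    """Conjunction is handled symbolically: intersect incidence sets first."""
    return sum(worlds[w] for w in incidence[p] & incidence[q])

print(belief("rain"))               # 0.5 + 0.3 = 0.8
print(belief_and("rain", "wind"))   # only w2 survives the intersection: 0.3
```

Note how the conjunction gives 0.3, not 0.8 * 0.5 = 0.4: because the combination is done on sets of worlds rather than on the numbers, dependence between pieces of evidence is handled automatically, which is the technical advantage the blurb alludes to.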
Machine Learning, Big Data, and IoT for Medical Informatics focuses on the latest techniques adopted in the field of medical informatics. In medical informatics, machine learning, big data, and IoT-based techniques play a significant role in disease diagnosis and prediction. In the medical field, the structure of data is equally important for accurate predictive analytics due to the heterogeneity of data such as ECG data, X-ray data, and image data. Thus, this book focuses on the usability of machine learning, big data, and IoT-based techniques in handling structured and unstructured data. It also emphasizes privacy-preservation techniques for medical data. This volume can be used as a reference book for scientists, researchers, practitioners, and academicians working in the field of intelligent medical informatics. In addition, it can also be used as a reference book for both undergraduate and graduate courses such as medical informatics, machine learning, big data, and IoT.
Intelligent Systems and Learning Data Analytics in Online Education provides novel artificial intelligence (AI) and analytics-based methods to improve online teaching and learning. This book addresses key problems such as attrition and lack of engagement in MOOCs and online learning in general. This book explores the state of the art of artificial intelligence, software tools and innovative learning strategies to provide better understanding and solutions to the various challenges of current e-learning in general and MOOC education. In particular, Intelligent Systems and Learning Data Analytics in Online Education shares stimulating theoretical and practical research from leading international experts. This publication provides useful references for educational institutions, industry, academic researchers, professionals, developers, and practitioners to evaluate and apply.
The book gives a comprehensive discussion of Database Semantics (DBS) as an agent-based data-driven theory of how natural language communication essentially works. In language communication, agents switch between speak mode, driven by cognition-internal content (input) resulting in cognition-external raw data (e.g. sound waves or pixels, which have no meaning or grammatical properties but can be measured by natural science), and hear mode, driven by the raw data produced by the speaker resulting in cognition-internal content. The motivation is to compare two approaches for an ontology of communication: agent-based data-driven vs. sign-based substitution-driven. Agent-based means: design of a cognitive agent with (i) an interface component for converting raw data into cognitive content (recognition) and converting cognitive content into raw data (action), (ii) an on-board, content-addressable memory (database) for storage and content retrieval, (iii) separate treatments of the speak and the hear mode. Data-driven means: (a) mapping a cognitive content as input to the speak mode into a language-dependent surface as output, (b) mapping a surface as input to the hear mode into a cognitive content as output. By contrast, sign-based means: no distinction between speak and hear mode, whereas substitution-driven means: using a single start symbol as input for generating infinitely many outputs, based on substitutions by rewrite rules. Collecting recent research of the author, this beautiful, novel and original exposition begins with an introduction to DBS, makes a linguistic detour on subject/predicate gapping and slot-filler repetition, and moves on to discuss computational pragmatics, inference and cognition, grammatical disambiguation and other related topics.
The book is mostly addressed to experts working in the field of computational linguistics, as well as to enthusiasts interested in the history and early development of this subject, starting with the pre-computational foundations of theoretical computer science and symbolic logic in the 1930s.
This volume is the last (IV) of four under the main themes of Digitizing Agriculture and Information and Communication Technologies (ICT). The four volumes cover rapidly developing processes including Sensors (I), Data (II), Decision (III), and Actions (IV). The volumes relate to the "digital transformation" within agricultural production and provision systems, in the context of Smart Farming Technology and Knowledge-based Agriculture. Content spans broadly from data mining and visualization to big data analytics and decision making, along with the sustainability aspects stemming from the digital transformation of farming. The four volumes comprise the outcome of the 12th EFITA Congress, also incorporating chapters that originated from select presentations of the Congress. The focus in this volume is on the directions of Agriculture 4.0, which incorporates the transition to a new era of action in the agricultural sector, represented by the evolution of digital technologies in four aspects: Big Data, Open Data, Internet of Things (IoT), and Cloud Computing. Under the heading of "Action," 14 chapters investigate the implementation of cutting-edge technologies in real-world applications. It will become apparent to the reader that the penetration of ICT in agriculture can result in several benefits related to the sustainability of the sector, but successful management is required to yield the maximum benefits. The entire discussion highlights the importance of proper education in the adoption of innovative technologies, starting with the adaptation of educational systems to the new era and moving on to the familiarization of farmers with the new technologies. This book covers topics that relate to the digital transformation of farming. It provides examples and case studies of this transformation from around the world, examines the process of diffusion of digital technologies, and assesses the current and future sustainability aspects of digital agriculture.
More specifically, it deals with issues such as: challenges and opportunities from the transition to Agriculture 4.0; safety and health in agricultural work automation; the role of digital farming in regional-spatial planning; the enrollment of social media in IoT-based agriculture; the role of education in digital agriculture; and real-life implementation cases of smart agriculture around the world.
This volume is the third (III) of four under the main themes of Digitizing Agriculture and Information and Communication Technologies (ICT). The four volumes cover rapidly developing processes including Sensors (I), Data (II), Decision (III), and Actions (IV). The volumes relate to the "digital transformation" within agricultural production and provision systems, in the context of Smart Farming Technology and Knowledge-based Agriculture. Content spans broadly from data mining and visualization to big data analytics and decision making, along with the sustainability aspects stemming from the digital transformation of farming. The four volumes comprise the outcome of the 12th EFITA Congress, also incorporating chapters that originated from select presentations of the Congress. The focus of this book (III) is on the transformation of collected information into valuable decisions, and it aims to shed light on how best to use digital technologies to reduce cost, inputs, and time, toward becoming more efficient and transparent. Fourteen chapters are grouped into three sections. The first section is dedicated to decisions in the value chain of agricultural products. The next section, titled Primary Production, elaborates on decision making for the improvement of processes taking place within the farm under the implementation of ICT. The last section is devoted to the development of innovative decision applications that also consider the protection of the environment, recognizing its importance in the preservation and considerate use of resources, as well as the mitigation of adverse impacts that are related to agricultural production. Planning and modeling the assessment of agricultural practices can provide farmers with valuable information prior to the execution of any task. This book provides a valuable reference for them as well as for those directly involved with decision making in the planning and assessment of agricultural production.
Specific advances covered in the volume include: modelling and simulation of ICT-based agricultural systems; Farm Management Information Systems (FMIS); planning for unmanned aerial systems; agri-robotics awareness and planning; smart livestock farming; sustainable strategic planning in agri-production; and food business information systems.
The book offers a comprehensive survey of interval-valued intuitionistic fuzzy sets. It reports on cutting-edge research carried out by the founder of intuitionistic fuzzy sets, Prof. Krassimir Atanassov, giving special emphasis to the practical applications of this extension. A few interesting case studies, in areas such as data mining, decision making and pattern recognition, among others, are discussed in detail. The book offers the first comprehensive guide on interval-valued intuitionistic fuzzy sets. By providing readers with a thorough survey and important practical details, it is expected to support them in carrying out applied research and to encourage them to test the theory behind the sets in new advanced applications. The book is a valuable reference resource for graduate students and researchers alike.
The papers in this volume comprise the refereed proceedings of the Second IFIP International Conference on Computer and Computing Technologies in Agriculture (CCTA 2008), held in Beijing, China, in 2008. The conference was cooperatively sponsored and organized by the China Agricultural University (CAU), the National Engineering Research Center for Information Technology in Agriculture (NERCITA), the Chinese Society of Agricultural Engineering (CSAE), the International Federation for Information Processing (IFIP), the Beijing Society for Information Technology in Agriculture, and the Beijing Research Center for Agro-products Test and Farmland Inspection. Related departments of China's central government, including the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Ministry of Education, as well as the Beijing Municipal Natural Science Foundation and the Beijing Academy of Agricultural and Forestry Sciences, greatly contributed to and supported this event. The conference served as a good platform to bring together scientists and researchers, agronomists and information engineers, extension servers and entrepreneurs from a range of disciplines concerned with the impact of information technology on sustainable agriculture and rural development. Participants included representatives of all the supporting organizations and a group of invited speakers, experts and researchers from more than 15 countries, including the Netherlands, Spain, Portugal, Mexico, Germany, Greece, Australia, Estonia, Japan, Korea, India, Iran, Nigeria, Brazil and China.
'Inquiring Organizations: Moving from Knowledge Management to Wisdom' assembles into one volume a comprehensive collection of the key current thinking regarding the use of C. West Churchman's Design of Inquiring Systems as a basis for computer-based inquiring systems design and implementation. Inquiring systems are systems that go beyond knowledge management to actively inquire about their environment. While self-adaptive is an appropriate adjective for inquiring systems, they are critically different from self-adapting systems as they have evolved in the fields of computer science or artificial intelligence. Inquiring systems draw on epistemology to guide knowledge creation and organizational learning. As such, we can for the first time begin to entertain the notion of support for 'wise' decision-making. Readers of 'Inquiring Organizations: Moving from Knowledge Management to Wisdom' will gain an appreciation for the role that epistemology can play in the design of the next generation of knowledge management systems, systems that focus on supporting wise decision-making processes.