This book presents established and state-of-the-art methods in Language Technology (including text mining, corpus linguistics, computational linguistics, and natural language processing), and demonstrates how they can be applied by humanities scholars working with textual data. The landscape of humanities research has recently changed thanks to the proliferation of big data and large textual collections such as Google Books, Early English Books Online, and Project Gutenberg. These resources have yet to be fully explored by new generations of scholars, and the authors argue that Language Technology has a key role to play in the exploration of large-scale textual data. The authors use a series of illustrative examples from various humanistic disciplines (mainly but not exclusively from History, Classics, and Literary Studies) to demonstrate basic and more complex use-case scenarios. This book will be useful to graduate students and researchers in humanistic disciplines working with textual data, including History, Modern Languages, Literary Studies, Classics, and Linguistics. It is also a very useful book for anyone teaching or learning Digital Humanities and interested in the basic concepts from computational linguistics, corpus linguistics, and natural language processing.
Knowledge Representation and Relation Nets introduces a fresh approach to knowledge representation that can be used to organize study material in a convenient, teachable and learnable form. The method extends and formalizes concept mapping by developing knowledge representation as a structure of concepts and the relationships among them. Such a formal description of analogy results in a controlled method of modeling 'new' knowledge in terms of 'existing' knowledge in teaching and learning situations, and its applications result in a consistent and well-organized approach to problem solving. Additionally, strategies for the presentation of study material to learners arise naturally in this representation. While the theory of relation nets is dealt with in detail in part of this book, the reader need not master the formal mathematics in order to apply the theory to this method of knowledge representation. To assist the reader, each chapter starts with a brief summary, and the main ideas are illustrated by examples. The reader is also given an intuitive view of the formal notions used in the applications by means of diagrams, informal descriptions, and simple sets of construction rules. Knowledge Representation and Relation Nets is an excellent source for teachers, courseware designers and researchers in knowledge representation, cognitive science, theories of learning, the psychology of education, and structural modeling.
This volume is the last (IV) of four under the main themes of Digitizing Agriculture and Information and Communication Technologies (ICT). The four volumes cover rapidly developing processes including Sensors (I), Data (II), Decision (III), and Actions (IV). The volumes relate to 'digital transformation' within agricultural production and provision systems, in the context of Smart Farming Technology and Knowledge-based Agriculture. Content spans broadly from data mining and visualization to big data analytics and decision making, along with the sustainability aspects stemming from the digital transformation of farming. The four volumes comprise the outcome of the 12th EFITA Congress, also incorporating chapters that originated from select presentations of the Congress. The focus in this volume is on the directions of Agriculture 4.0, which incorporates the transition to a new era of action in the agricultural sector, represented by the evolution of digital technologies in four aspects: Big Data, Open Data, Internet of Things (IoT), and Cloud Computing. Under the heading of "Action," 14 chapters investigate the implementation of cutting-edge technologies in real-world applications. It will become apparent to the reader that the penetration of ICT in agriculture can yield several benefits related to the sustainability of the sector; however, to reap the maximum benefits, successful management is required. The entire discussion highlights the importance of proper education in the adoption of innovative technologies, starting with the adaptation of educational systems to the new era and moving on to the familiarization of farmers with the new technologies. This book covers topics that relate to the digital transformation of farming. It provides examples and case studies of this transformation from around the world, examines the process of diffusion of digital technologies, and assesses the current and future sustainability aspects of digital agriculture.
More specifically, it deals with issues such as:
- Challenges and opportunities from the transition to Agriculture 4.0
- Safety and health in agricultural work automation
- The role of digital farming in regional-spatial planning
- The involvement of Social Media in IoT-based agriculture
- The role of education in digital agriculture
- Real-life implementation cases of smart agriculture around the world
This volume is the third (III) of four under the main themes of Digitizing Agriculture and Information and Communication Technologies (ICT). The four volumes cover rapidly developing processes including Sensors (I), Data (II), Decision (III), and Actions (IV). The volumes relate to 'digital transformation' within agricultural production and provision systems, in the context of Smart Farming Technology and Knowledge-based Agriculture. Content spans broadly from data mining and visualization to big data analytics and decision making, along with the sustainability aspects stemming from the digital transformation of farming. The four volumes comprise the outcome of the 12th EFITA Congress, also incorporating chapters that originated from select presentations of the Congress. The focus of this book (III) is on the transformation of collected information into valuable decisions, and it aims to shed light on how best to use digital technologies to reduce cost, inputs, and time, toward becoming more efficient and transparent. Fourteen chapters are grouped into three sections. The first section is dedicated to decisions in the value chain of agricultural products. The next section, titled Primary Production, elaborates on decision making for the improvement of processes taking place within the farm under the implementation of ICT. The last section is devoted to the development of innovative decision applications that also consider the protection of the environment, recognizing its importance in the preservation and considerate use of resources, as well as the mitigation of adverse impacts related to agricultural production. Planning and modeling the assessment of agricultural practices can provide farmers with valuable information prior to the execution of any task. This book provides a valuable reference for them, as well as for those directly involved with decision making in the planning and assessment of agricultural production.
Specific advances covered in the volume:
- Modelling and simulation of ICT-based agricultural systems
- Farm Management Information Systems (FMIS)
- Planning for unmanned aerial systems
- Agri-robotics awareness and planning
- Smart livestock farming
- Sustainable strategic planning in agri-production
- Food business information systems
This book constitutes the refereed proceedings of the IFIP Industry Oriented Conferences held at the 20th World Computer Congress in Milano, Italy on September 7-10, 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.
The papers in this volume comprise the refereed proceedings of the Second IFIP International Conference on Computer and Computing Technologies in Agriculture (CCTA 2008), held in Beijing, China, in 2008. CCTA 2008 was cooperatively sponsored and organized by the China Agricultural University (CAU), the National Engineering Research Center for Information Technology in Agriculture (NERCITA), the Chinese Society of Agricultural Engineering (CSAE), the International Federation for Information Processing (IFIP), the Beijing Society for Information Technology in Agriculture, China, and the Beijing Research Center for Agro-products Test and Farmland Inspection, China. The related departments of China's central government bodies, such as the Ministry of Science and Technology, the Ministry of Industry and Information Technology, and the Ministry of Education, as well as the Beijing Municipal Natural Science Foundation and the Beijing Academy of Agricultural and Forestry Sciences, greatly contributed to and supported this event. The conference served as a good platform to bring together scientists and researchers, agronomists and information engineers, extension workers and entrepreneurs from a range of disciplines concerned with the impact of information technology on sustainable agriculture and rural development. Participants included representatives of all the supporting organizations and a group of invited speakers, experts and researchers from more than 15 countries, including the Netherlands, Spain, Portugal, Mexico, Germany, Greece, Australia, Estonia, Japan, Korea, India, Iran, Nigeria, Brazil, and China.
The development of modern knowledge-based systems, for applications ranging from medicine to finance, necessitates going well beyond traditional rule-based programming. Frontiers of Expert Systems: Reasoning with Limited Knowledge attempts to satisfy such a need, introducing exciting and recent advances at the frontiers of the field of expert systems. Beginning with the central topics of logic, uncertainty and rule-based reasoning, each chapter in the book presents a different perspective on how we may solve problems that arise due to limitations in the knowledge of an expert system's reasoner. Successive chapters address (i) the fundamentals of knowledge-based systems, (ii) formal inference, and reasoning about models of a changing and partially known world, (iii) uncertainty and probabilistic methods, (iv) the expression of knowledge in rule-based systems, (v) evolving representations of knowledge as a system interacts with the environment, (vi) applying connectionist learning algorithms to improve on knowledge acquired from experts, (vii) reasoning with cases organized in indexed hierarchies, (viii) the process of acquiring and inductively learning knowledge, (ix) extraction of knowledge nuggets from very large data sets, and (x) interactions between multiple specialized reasoners with specialized knowledge bases. Each chapter takes the reader on a journey from elementary concepts to topics of active research, providing a concise description of several topics within and related to the field of expert systems, with pointers to practical applications and other relevant literature. Frontiers of Expert Systems: Reasoning with Limited Knowledge is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
There is tremendous interest in the design and applications of agents in virtually every area, including avionics, business, the internet, engineering, health sciences and management. There is no single agreed definition of an agent, but we can define an agent as a computer program that autonomously or semi-autonomously acts on behalf of the user. In the last five years, the transition of intelligent systems research in general, and agent-based research in particular, from a laboratory environment into the real world has resulted in the emergence of several phenomena. These trends can be placed in three categories, namely, humanization, architectures, and learning and adaptation. These phenomena are distinct from the traditional logic-centered approach associated with the agent paradigm. Humanization of agents can be understood, among other aspects, in terms of the semantic quality of the design of agents. The need to humanize agents is to allow practitioners and users to make more effective use of this technology; it relates to the semantic quality of the agent design. Further, context-awareness is another aspect which has assumed importance in the light of ubiquitous computing and ambient intelligence. The widespread and varied use of agents, on the other hand, has created a need for agent-based software development frameworks and design patterns, as well as architectures for situated interaction, negotiation, e-commerce, e-business and information retrieval. Finally, traditional agent designs did not incorporate human-like abilities of learning and adaptation.
How to draw plausible conclusions from uncertain and conflicting sources of evidence is one of the major intellectual challenges of Artificial Intelligence. It is a prerequisite of the smart technology needed to help humans cope with the information explosion of the modern world. In addition, computational modelling of uncertain reasoning is a key to understanding human rationality. Previous computational accounts of uncertain reasoning have fallen into two camps: purely symbolic and numeric. This book represents a major advance by presenting a unifying framework which unites these opposing camps. The Incidence Calculus can be viewed as both a symbolic and a numeric mechanism. Numeric values are assigned indirectly to evidence via the possible worlds in which that evidence is true. This facilitates purely symbolic reasoning using the possible worlds and numeric reasoning via the probabilities of those possible worlds. Moreover, the indirect assignment solves some difficult technical problems, like the combination of dependent sources of evidence, which had defeated earlier mechanisms. Weiru Liu generalises the Incidence Calculus and then compares it to a succession of earlier computational mechanisms for uncertain reasoning: Dempster-Shafer Theory, Assumption-Based Truth Maintenance, Probabilistic Logic, Rough Sets, etc. She shows how each of them is represented and interpreted in Incidence Calculus. The consequence is a unified mechanism which includes both symbolic and numeric mechanisms as special cases. It provides a bridge between symbolic and numeric approaches, retaining the advantages of both and overcoming some of their disadvantages.
Making use of data is no longer a niche project but is central to almost every project. With access to massive compute resources and vast amounts of data, it seems at least in principle possible to solve any problem. However, successful data science projects result from the intelligent application of: human intuition in combination with computational power; sound background knowledge with computer-aided modelling; and critical reflection of the obtained insights and results. Substantially updating the previous edition, then entitled Guide to Intelligent Data Analysis, this core textbook continues to provide a hands-on instructional approach to many data science techniques, and explains how these are used to solve real-world problems. The work balances the practical aspects of applying and using data science techniques with the theoretical and algorithmic underpinnings from mathematics and statistics. Major updates on techniques and subject coverage (including deep learning) are included. Topics and features: guides the reader through the process of data science, following the interdependent steps of project understanding, data understanding, data blending and transformation, modeling, as well as deployment and monitoring; includes numerous examples using the open source KNIME Analytics Platform, together with an introductory appendix; provides a review of the basics of classical statistics that support and justify many data analysis methods, and a glossary of statistical terms; integrates illustrations and case-study-style examples to support pedagogical exposition; supplies further tools and information at an associated website. This practical and systematic textbook/reference is a "need-to-have" tool for graduate and advanced undergraduate students and essential reading for all professionals who face data science problems. Moreover, it is a "need to use, need to keep" resource following one's exploration of the subject.
Knowledge representation is a key area of modern AI, underlying the development of semantic networks. Description logics are languages that represent knowledge in a structured and formally well-understood way: they are the cornerstone of the Semantic Web. This is the first textbook describing this important new topic and will be suitable for courses aimed at advanced undergraduate and beginning graduate students, or for self-study. It assumes only a basic knowledge of computer science concepts. After general introductions motivating and overviewing the subject, the authors describe a simple DL, how it works and how it can be used, utilizing a running example that recurs throughout the book. Methods of reasoning and their implementation and complexity are examined. Finally, the authors provide a non-trivial DL knowledge base and use it to illustrate features that have been introduced; this base is available for free online access in a form usable by modern ontology editors.
Social media data contains our communication and online sharing, mirroring our daily life. This book looks at how we can use, and what we can discover from, such big data:
- Basic knowledge (data and challenges) of social media analytics
- Clustering as a fundamental technique for unsupervised knowledge discovery and data mining
- A class of neural-inspired algorithms, based on adaptive resonance theory (ART), tackling challenges in big social media data clustering
- Step-by-step practices for developing unsupervised machine learning algorithms for real-world applications in the social media domain
Adaptive Resonance Theory in Social Media Data Clustering stands on a fundamental breakthrough in cognitive and neural theory, i.e. adaptive resonance theory, which simulates how a brain processes information to perform memory, learning, recognition, and prediction. It presents initiatives on the mathematical demonstration of ART's learning mechanisms in clustering, and illustrates how to extend the base ART model to handle the complexity and characteristics of social media data and perform associative analytical tasks. Both cutting-edge research and real-world practices on machine learning and social media analytics are included in the book. If you wish to learn the answers to the following questions, this book is for you:
- How to process big streams of multimedia data?
- How to analyze social networks with heterogeneous data?
- How to understand a user's interests by learning from online posts and behaviors?
- How to create a personalized search engine by automatically indexing and searching multimodal information resources?
'Inquiring Organizations: Moving from Knowledge Management to Wisdom' assembles into one volume a comprehensive collection of the key current thinking regarding the use of C. West Churchman's Design of Inquiring Systems as a basis for computer-based inquiring systems design and implementation. Inquiring systems are systems that go beyond knowledge management to actively inquire about their environment. While self-adaptive is an appropriate adjective for inquiring systems, they are critically different from self-adapting systems as they have evolved in the fields of computer science or artificial intelligence. Inquiring systems draw on epistemology to guide knowledge creation and organizational learning. As such, we can for the first time ever begin to entertain the notion of support for "wise" decision-making. Readers of 'Inquiring Organizations: Moving from Knowledge Management to Wisdom' will gain an appreciation for the role that epistemology can play in the design of the next generation of knowledge management systems, systems that focus on supporting wise decision-making processes.
This book presents the complex topic of using computational intelligence for pattern recognition in a straightforward and applicable way, using Matlab to illustrate topics and concepts. The author covers computational intelligence tools like particle swarm optimization, bacterial foraging, simulated annealing, genetic algorithms, and artificial neural networks. Matlab-based illustrations, along with the code, are given for every topic. Readers get a quick basic understanding of various pattern recognition techniques with only the required depth of math. The Matlab programs and algorithms are given along with the running text, demonstrating the clarity and usefulness of the various techniques. Presents pattern recognition and computational intelligence using Matlab; includes a mixture of theory, math, and algorithms, letting readers understand the concepts quickly; outlines an array of classifiers, various regression models, statistical tests, and techniques for pattern recognition using computational intelligence.
This book provides a wide snapshot of building knowledge-based systems, inconsistency measures, methods for handling inconsistency, and methods for integrating knowledge bases. It provides the mathematical background needed to solve problems of restoring consistency and problems of integrating probabilistic knowledge bases. The research results presented in the book can be applied in decision support systems, semantic web systems, multimedia information retrieval systems, medical imaging systems, cooperative information systems, and more.
Personal data is increasingly important in our lives. We use personal data to quantify our behaviour, through health apps or for 'personal branding', and we are also increasingly forced to part with our data to access services. With the proliferation of embedded sensors, the built environment is playing a key role in this developing use of data, even though this remains relatively hidden. Buildings are sites for the capture of personal data. This data is used to adapt buildings to people's behaviour, and increasingly, organisations use this data to understand how buildings are occupied and how communities develop within them. A whole host of technical, practical, social and ethical challenges emerge from this still developing area across interior, architectural and urban design, and many open questions remain. This book makes a contribution to this ongoing discourse by bringing together a community of researchers interested in personal informatics and the design of interactive buildings and environments. The book's aim is to foster critical discussion about the future role of personal data in interactions with the built environment. People, Personal Data and the Built Environment is ideal for researchers and practitioners interested in Architecture, Computer Science and Human Building Interaction.
This book presents the many facets, concepts and theories that have influenced knowledge management and the state of practice concerning strategy, organization, systems and economics. The second edition updates the material to cover the most recent developments in ICT-supported knowledge management. It also provides broader coverage of the theoretical foundation, including a new account of knowledge work, discusses the potentials and challenges of process-oriented knowledge management, adds a new chapter on modelling, which plays an important role in knowledge management initiatives, and contrasts architectures for centralized and distributed or peer-to-peer knowledge management systems.
This open access book explores the dataspace paradigm as a best-effort approach to data management within data ecosystems. It establishes the theoretical foundations and principles of real-time linked dataspaces as a data platform for intelligent systems. The book introduces a set of specialized best-effort techniques and models to enable loose administrative proximity and semantic integration for managing and processing events and streams. The book is divided into five major parts: Part I "Fundamentals and Concepts" details the motivation behind and core concepts of real-time linked dataspaces, and establishes the need to evolve data management techniques in order to meet the challenges of enabling data ecosystems for intelligent systems within smart environments. Further, it explains the fundamental concepts of dataspaces and the need for specialization in the processing of dynamic real-time data. Part II "Data Support Services" explores the design and evaluation of critical services, including catalog, entity management, query and search, data service discovery, and human-in-the-loop. In turn, Part III "Stream and Event Processing Services" addresses the design and evaluation of the specialized techniques created for real-time support services including complex event processing, event service composition, stream dissemination, stream matching, and approximate semantic matching. Part IV "Intelligent Systems and Applications" explores the use of real-time linked dataspaces within real-world smart environments. In closing, Part V "Future Directions" outlines future research challenges for dataspaces, data ecosystems, and intelligent systems. Readers will gain a detailed understanding of how the dataspace paradigm is now being used to enable data ecosystems for intelligent systems within smart environments. 
The book covers the fundamental theory, the creation of new techniques needed for support services, and lessons learned from real-world intelligent systems and applications focused on sustainability. Accordingly, it will benefit not only researchers and graduate students in the fields of data management, big data, and IoT, but also professionals who need to create advanced data management platforms for intelligent systems, smart environments, and data ecosystems.
This book represents the Flight Operations Manual for a reusable microsatellite platform, the "Future Low-cost Platform" (FLP), developed at the University of Stuttgart, Germany. It provides a basic insight into the onboard software functions, the core data handling system, and the power, communications, attitude control and thermal subsystems of the platform. Onboard failure detection, isolation and recovery functions are treated in detail. The platform is suited for satellites in the 50-150 kg class and is the baseline of the microsatellite "Flying Laptop" from the University. The book covers the essential information for ground operators to control an FLP-based satellite applying international command and control standards (CCSDS and ECSS PUS). Furthermore, it provides an overview of the Flight Control Center in Stuttgart and of the link to the German Space Agency DLR Ground Station, which is used for early mission phases. Flight procedure and mission planning chapters complement the book.
This book provides readers a thorough understanding of the applicability of new-generation silicon-germanium (SiGe) electronic subsystems for electronic warfare and defensive countermeasures in military contexts. It explains in detail the theoretical and technical background, and addresses all aspects of the integration of SiGe as an enabling technology for maritime, land, and airborne / spaceborne electronic warfare, including research, design, development, and implementation. The coverage is supported by mathematical derivations, informative illustrations, practical examples, and case studies. While SiGe technology provides speed, performance, and price advantages in many markets, to date only limited information has been available on its use in electronic warfare systems, especially in developing nations. Addressing that need, this book offers essential engineering guidelines that especially focus on the speed and reliability of current-generation SiGe circuits and highlight emerging innovations that help to ensure the sustainable long-term integration of SiGe into electronic warfare systems.
Without correct timing, there is no safe and reliable embedded software. This book shows how to consider timing early in the development process for embedded systems, how to solve acute timing problems, how to perform timing optimization, and how to address the aspect of timing verification. The book is organized in twelve chapters. The first three cover various basics of microprocessor technologies and the operating systems used therein. The next four chapters cover timing problems both in theory and practice, also covering various timing analysis techniques as well as special issues like multi- and many-core timing. Chapter 8 deals with aspects of timing optimization, followed by Chapter 9, which highlights various methodological issues of the actual development process. Chapter 10 presents timing analysis in AUTOSAR in detail, while Chapter 11 focuses on safety aspects and timing verification. Finally, Chapter 12 provides an outlook on upcoming and future developments in software timing. The number of embedded systems that we encounter in everyday life is growing steadily. At the same time, the complexity of the software is constantly increasing. This book is mainly written for software developers and project leaders in industry. It is enriched by many practical examples, mostly from the automotive domain, yet the vast majority of the book is relevant for any embedded software project. This way it is also well-suited as a textbook for academic courses with a strong practical emphasis, e.g. at universities of applied sciences. Features and Benefits:
- Shows how to consider timing in the development process for embedded systems, how to solve timing problems, and how to address timing verification
- Enriched by many practical examples, mostly from the automotive domain
- Mainly written for software developers and project leaders in industry
This book presents the state of the art, challenges and future trends in automotive software engineering. The amount of automotive software has grown from just a few lines of code in the 1970s to millions of lines in today's cars. And this trend seems destined to continue in the years to come, considering all the innovations in electric/hybrid, autonomous, and connected cars. Yet there are also concerns related to onboard software, such as security, robustness, and trust. This book covers all essential aspects of the field. After a general introduction to the topic, it addresses automotive software development, automotive software reuse, E/E architectures and safety, C-ITS and security, and future trends. The specific topics discussed include requirements engineering for embedded software systems, tools and methods used in the automotive industry, software product lines, architectural frameworks, various related ISO standards, functional safety and safety cases, cooperative intelligent transportation systems, autonomous vehicles, and security and privacy issues. The intended audience includes researchers from academia who want to learn what the fundamental challenges are and how they are being tackled in the industry, and practitioners looking for cutting-edge academic findings. Although the book is not written as lecture notes, it can also be used in advanced master's-level courses on software and system engineering. The book also includes a number of case studies that can be used for student projects.