Brain-computer interfaces (BCIs) are devices that enable people to communicate via thought alone. Brain signals can be directly translated into messages or commands. Until recently, these devices were used primarily to help people who could not move. However, BCIs are now becoming practical tools for a wide variety of people, in many different situations. What will BCIs in the future be like? Who will use them, and why? This book, written by many of the top BCI researchers and developers, reviews the latest progress in the different components of BCIs. Chapters also discuss practical issues in an emerging BCI-enabled community. The book is intended both for professionals and for interested laypeople who are not experts in BCI research.
Within the framework of so-called second-generation expert systems [62], knowledge modeling is one of the most important aspects. On the one hand, knowledge acquisition is no longer seen as a knowledge transfer process; rather, it is now considered a model construction process, which is typically cyclic and error-prone. On the other hand, the distinction between knowledge-level and symbol-level descriptions [166] resulted in various proposals for conceptual knowledge models that describe knowledge in an implementation-independent way. One of the most prominent examples of such a conceptual model is the KADS model of expertise, which is characterized by its clear distinction between different knowledge types and by the use of specific modeling primitives to describe these different knowledge types [185]. The semi-formal KADS expertise model entails all the advantages and disadvantages that have been identified for semi-formal system models, e.g. in the software engineering community.
eMaintenance: Essential Electronic Tools for Efficiency enables operations and maintenance staff, infrastructure managers, and system integrators to improve efficiency by accessing a real-time computerized system that runs from data to decision. In recent years, the exciting possibilities of eMaintenance have become increasingly recognized as a source of productivity improvement in industry. The seamless linking of systems and equipment to control centres for real-time reconfiguring is improving efficiency, reliability, and sustainability in a variety of settings. The book provides an introduction to collecting and processing data from machinery, explains the methods of overcoming the challenges of data collection and processing, and presents tools for data-driven condition monitoring and decision making. This is a groundbreaking handbook for those interested in the possibilities of running a plant as a smart asset.
The book provides a comprehensive and timely report on the topic of decision making and decision analysis in economics and the social sciences. The various contributions included in the book, selected using a peer review process, present important studies and research conducted in various countries around the globe. The majority of these studies are concerned with the analysis, modeling and formalization of the behavior of groups or committees that are in charge of making decisions of social and economic importance. Decisions in these contexts have to meet precise coherence standards and achieve a significant degree of sharing, consensus and acceptance, even in uncertain and fuzzy environments. This necessitates the confluence of several research fields, such as foundations of social choice and decision making, mathematics, complexity, psychology, sociology and economics. A large spectrum of problems that may be encountered during decision making and decision analysis in the areas of economics and the social sciences, together with a broad range of tools and techniques that may be used to solve those problems, are presented in detail in this book, making it an ideal reference work for all those interested in analyzing and implementing mathematical tools for application to relevant issues involving the economy and society.
This edited collection discusses emerging topics in statistical modeling for biomedical research. Leading experts at the frontiers of biostatistics and biomedical research discuss the statistical procedures, useful methods, and their novel applications in biostatistics research. Interdisciplinary in scope, the volume as a whole reflects the latest advances in statistical modeling in biomedical research, identifies impactful new directions, and seeks to drive the field forward. It also fosters the interaction of scholars in the arena, offering great opportunities to stimulate further collaborations. This book will appeal to industry data scientists and statisticians, researchers, and graduate students in biostatistics and biomedical science. It covers topics in:
- Next-generation sequence data analysis
- Deep learning, precision medicine, and their applications
- Large-scale data analysis and its applications
- Biomedical research and modeling
- Survival analysis with complex data structures and its applications
This book discusses the challenges facing current research in knowledge discovery and data mining posed by the huge volumes of complex data now gathered in various real-world applications (e.g., business process monitoring, cybersecurity, medicine, language processing, and remote sensing). The book consists of 14 chapters covering the latest research by the authors and the research centers they represent. It illustrates techniques and algorithms that have recently been developed to preserve the richness of the data and allow us to efficiently and effectively identify the complex information it contains. Presenting the latest developments in complex pattern mining, this book is a valuable reference resource for data science researchers and professionals in academia and industry.
A fundamental assumption of work in artificial intelligence and machine learning is that knowledge is expressed in a computer with the help of knowledge representations. Since the proper choice of such representations is a difficult task that fundamentally affects the capabilities of a system, the problem of automatic representation change is an important topic in current research. Concept Formation and Knowledge Revision focuses on representation change as a concept formation task, regarding concepts as the elementary representational vocabulary from which further statements are constructed. Taking an interdisciplinary approach from psychological foundations to computer implementations, the book draws on existing psychological results about the nature of human concepts and concept formation to determine the scope of concept formation phenomena, and to identify potential components of computational concept formation models. The central idea of this work is that computational concept formation can usefully be understood as a process that is triggered in a demand-driven fashion by the representational needs of the learning system, and it identifies the knowledge revision activities of a system as a particular context for such a process. The book presents a detailed analysis of the revision problem for first-order clausal theories, and develops a set of postulates that any such operation should satisfy. It shows how a minimum theory revision operator can be realized by using exception sets, and that this operator is indeed maximally general. The book then shows that concept formation can be triggered from within the knowledge revision process whenever the existing representation does not permit the plausible reformulation of an exception set, demonstrating the usefulness of the approach both theoretically and empirically within the learning and knowledge acquisition system MOBAL.
In using a first-order representation, this book is part of the rapidly developing field of Inductive Logic Programming (ILP). By integrating the computational issues with psychological and fundamental discussions of concept formation phenomena, the book will be of interest to readers both theoretically and psychologically inclined. From the foreword by Katharina Morik: "The ideal to combine the three sources of artificial intelligence research has almost never been reached. Such a combined and integrated research requires the researcher to master different ways of thinking, different work styles, different sets of literature, and different research procedures. It requires capabilities in software engineering for the application part, in theoretical computer science for the theory part, and in psychology for the cognitive part. The most important capability for artificial intelligence is to keep the integrative view and to create a true original work that goes beyond the collection of pieces from different fields. This book achieves such an integrative view of concept formation and knowledge revision by presenting the way from psychological investigations, which indicate that concepts are theories and point to the important role of a demand for learning, to an implemented system which supports users in their tasks when working with a knowledge base, and to its theoretical foundation."
In professional practice, many designers collect and maintain personal notes as guidelines about experiences and insights for handling technical problems and design situations. An intelligent personal assistant (IPA) can act as a database for these notes, making the entire design process more efficient. Based on real industrial procedures, this book contains practical examples for professionals and students interested in real implementations of knowledge based systems in engineering. It integrates two major ideas: a computer system integrating computer design tools and a computer system fulfilling the role of an intelligent personal assistant. This user-friendly approach to the main ideas, concepts and techniques shows how an IPA can serve as a significant and fruitful knowledge based technique in engineering design.
This book presents Explainable Artificial Intelligence (XAI), which aims at producing explainable models that enable human users to understand and appropriately trust the obtained results. The authors discuss the challenges involved in making machine learning-based AI explainable. Firstly, explanations must be adapted to different stakeholders (end-users, policy makers, industries, utilities, etc.) with different levels of technical knowledge (managers, engineers, technicians, etc.) in different application domains. Secondly, it is important to develop an evaluation framework and standards in order to measure the effectiveness of the provided explanations at the human and the technical levels. This book gathers research contributions aiming at the development and/or the use of XAI techniques in order to address the aforementioned challenges in different applications such as healthcare, finance, cybersecurity, and document summarization. It highlights the benefits and requirements of using explainable models in different application domains, providing guidance to readers in selecting the models best adapted to their problem and conditions. Includes recent developments in the use of Explainable Artificial Intelligence (XAI) to address the challenges of digital transition and cyber-physical systems; provides a textual scientific description of the use of XAI to address these challenges; presents examples and case studies in order to increase transparency and understanding of the methodological concepts.
This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications for the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks which are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is then arranged in two parts. The first part "Technologies and Methods" contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part "Processes and Applications" details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the European data community's nucleus to bring together businesses with leading researchers to harness the value of data to benefit society, business, science, and industry. The book is of interest to two primary audiences, first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI. Second, practitioners and industry experts engaged in data-driven systems, software design and deployment projects who are interested in employing these advanced methods to address real-world problems.
In this book, Haridimos Tsoukas, one of the most imaginative organization theorists of our time, examines the nature of knowledge in organizations, and how individuals and scholars approach the concept of knowledge. Tsoukas first looks at organizational knowledge and its embeddedness in social contexts and forms of life. He shows that knowledge is not just a collection of free-floating representations of the world to be used at will, but an activity constitutive of the world. On the one hand, the organization as an institutionalized system does produce regularities that can be captured via propositional forms of knowledge. On the other, the organization as practice, as a lifeworld, or as an open-ended system produces stories, values, and shared traditions which can only be captured by narrative forms of knowledge. Second, Tsoukas looks at how individuals deal with the notion of complexity in organizations: our inability to reduce the behavior of complex organizations to their constituent parts. Drawing on concepts such as discourse, narrativity, and reflexivity, he adopts a hermeneutical approach to the issue. Finally, Tsoukas examines the concept of meta-knowledge, and how we know what we know. Arguing that the underlying representationalist epistemology of much of mainstream management causes many problems, he advocates adopting a more discursive approach. He describes what such an epistemology might be, and illustrates it with examples from organization studies and strategic management. An ideal introduction to the thinking of a leading organizational theorist, this book will be essential reading for academics, researchers, and students of Knowledge Management, Organization Studies, Management Studies, Business Strategy and Applied Epistemology.
This book brings together the research of a number of researchers in the field of knowledge creation and imparts a sense of order to that field.
This book provides a comprehensive overview of business analytics for those with either a technical background (quantitative methods) or a practitioner business background. Business analytics, in the context of the 4th Industrial Revolution, is the "new normal" for businesses that operate in this digital age. The book serves as a comprehensive primer on the field (and related fields such as Business Intelligence and Data Science). It discusses the field as it applies to financial institutions, with some minor departures to other industries. Readers will gain understanding and insight into the field of data science, including traditional as well as emerging techniques. Further, many chapters are dedicated to the establishment of a data-driven team, from executive buy-in and corporate governance to managing and quantifying the return on data-driven projects.
This book constitutes the refereed proceedings of the Third International Conference on Intelligence Science, ICIS 2018, held in Beijing, China, in November 2018. The 44 full papers and 5 short papers presented were carefully reviewed and selected from 85 submissions. They deal with key issues in intelligence science and have been organized in the following topical sections: brain cognition; machine learning; data intelligence; language cognition; perceptual intelligence; intelligent robots; fault diagnosis; and ethics of artificial intelligence.
In the years since the bestselling first edition, fusion research and applications have adapted to service-oriented architectures and pushed the boundaries of situational modeling in human behavior, expanding into fields such as chemical and biological sensing, crisis management, and intelligent buildings. Multisensor Data Fusion, Second Edition represents the most current concepts and theory as information fusion expands into the realm of network-centric architectures. It reflects new developments in distributed and detection fusion, situation and impact awareness in complex applications, and human cognitive concepts. With contributions from the world's leading fusion experts, this second edition expands to 31 chapters covering the fundamental theory and cutting-edge developments that are driving this field. New to the Second Edition:
- Applications in electromagnetic systems and chemical and biological sensors
- Army command and combat identification techniques
- Techniques for automated reasoning
- Advances in Kalman filtering
- Fusion in a network-centric environment
- Service-oriented architecture concepts
- Intelligent agents for improved decision making
- Commercial off-the-shelf (COTS) software tools
From basic information to state-of-the-art theories, this second edition continues to be a unique, comprehensive, and up-to-date resource for data fusion systems designers.
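As a generic illustration of the kind of fusion technique listed above (not code from the book), a one-dimensional Kalman-style update fuses two noisy sensor readings of the same quantity into a single estimate whose variance is smaller than either input; the sensor values below are made up for the example.

```python
def kalman_fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two noisy estimates of the same quantity (1-D Kalman update)."""
    k = var_a / (var_a + var_b)            # Kalman gain: weight given to sensor B
    mean = mean_a + k * (mean_b - mean_a)  # precision-weighted mean
    var = (1 - k) * var_a                  # fused variance is below both inputs
    return mean, var

# Two sensors measure the same range: 10.0 m (variance 4.0) and 12.0 m (variance 1.0).
# The fused estimate is pulled toward the more precise sensor.
mean, var = kalman_fuse(10.0, 4.0, 12.0, 1.0)
print(round(mean, 1), round(var, 1))  # → 11.6 0.8
```

Note how the gain `k = 0.8` weights the lower-variance reading more heavily, which is the essence of sensor fusion.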
The three-volume set IFIP AICT 368-370 constitutes the refereed post-conference proceedings of the 5th IFIP TC 5, SIG 5.1 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2011, held in Beijing, China, in October 2011. The 189 revised papers presented were carefully selected from numerous submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including simulation models and decision-support systems for agricultural production, agricultural product quality testing, traceability and e-commerce technology, the application of information and communication technology in agriculture, and universal information service technology and service systems development in rural areas. The 68 papers included in the second volume focus on GIS, GPS, RS, and precision farming.
This book presents established and state-of-the-art methods in Language Technology (including text mining, corpus linguistics, computational linguistics, and natural language processing), and demonstrates how they can be applied by humanities scholars working with textual data. The landscape of humanities research has recently changed thanks to the proliferation of big data and large textual collections such as Google Books, Early English Books Online, and Project Gutenberg. These resources have yet to be fully explored by new generations of scholars, and the authors argue that Language Technology has a key role to play in the exploration of large-scale textual data. The authors use a series of illustrative examples from various humanistic disciplines (mainly but not exclusively from History, Classics, and Literary Studies) to demonstrate basic and more complex use-case scenarios. This book will be useful to graduate students and researchers in humanistic disciplines working with textual data, including History, Modern Languages, Literary studies, Classics, and Linguistics. This is also a very useful book for anyone teaching or learning Digital Humanities and interested in the basic concepts from computational linguistics, corpus linguistics, and natural language processing.
This book constitutes the refereed post-conference proceedings of the 17th IFIP WG 5.1 International Conference on Product Lifecycle Management, PLM 2020, held in Rapperswil, Switzerland, in July 2020. The conference was held virtually due to the COVID-19 crisis. The 60 revised full papers presented together with 2 technical industrial papers were carefully reviewed and selected from 80 submissions. The papers are organized in the following topical sections: smart factory; digital twins; Internet of Things (IoT, IIoT); analytics in the order fulfillment process; ontologies for interoperability; tools to support early design phases; new product development; business models; circular economy; maturity implementation and adoption; model based systems engineering; artificial intelligence in CAx, MBE, and PLM; building information modelling; and industrial technical contributions.
Knowledge representation is at the very core of a radical idea for
understanding intelligence. Instead of trying to understand or
build brains from the bottom up, its goal is to understand and
build intelligent behavior from the top down, putting the focus on
what an agent needs to know in order to behave intelligently, how
this knowledge can be represented symbolically, and how automated
reasoning procedures can make this knowledge available as needed.
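As a minimal, hypothetical sketch of this top-down approach (not taken from the book), a forward-chaining reasoner applies symbolic if-then rules to a set of known facts until no new fact can be derived; the facts and rules below are invented for illustration.

```python
# Knowledge base: ground facts plus if-then rules over symbolic atoms.
facts = {"bird(tweety)", "small(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Apply every rule whose premises are all known until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print("can_fly(tweety)" in forward_chain(facts, rules))  # → True
```

The agent's competence comes entirely from what is written in the knowledge base, which is exactly the knowledge-representation stance described above.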
With the collapse of high-profile companies such as Enron and Tyco,
worldwide anti-globalization protests, and recent revelations of
questionable behavior by financial groups and auditors, corporate
behavior has become the highest priority topic for businesspeople,
investors, politicians and the public. Yet despite the critical
importance of maintaining public and shareholder trust, most
corporations make very little formal effort to actively manage the
activities that can put their reputation, share price, and customer
base at risk. Most corporations officially embrace the concept of
Corporate Social Responsibility; but giving money away to local
communities or worthy causes will not prevent an ethical disaster.
This book provides modern technical answers to the legal requirements of pseudonymisation as recommended by privacy legislation. It covers topics such as modern regulatory frameworks for sharing and linking sensitive information, concepts and algorithms for privacy-preserving record linkage and their computational aspects, practical considerations such as dealing with dirty and missing data, as well as privacy, risk, and performance assessment measures. Existing techniques for privacy-preserving record linkage are evaluated empirically and real-world application examples that scale to population sizes are described. The book also includes pointers to freely available software tools, benchmark data sets, and tools to generate synthetic data that can be used to test and evaluate linkage techniques. This book consists of fourteen chapters grouped into four parts, and two appendices. The first part introduces the reader to the topic of linking sensitive data, the second part covers methods and techniques to link such data, the third part discusses aspects of practical importance, and the fourth part provides an outlook of future challenges and open research problems relevant to linking sensitive databases. The appendices provide pointers and describe freely available, open-source software systems that allow the linkage of sensitive data, and provide further details about the evaluations presented. A companion Web site at https://dmm.anu.edu.au/lsdbook2020 provides additional material and Python programs used in the book. This book is mainly written for applied scientists, researchers, and advanced practitioners in governments, industry, and universities who are concerned with developing, implementing, and deploying systems and tools to share sensitive information in administrative, commercial, or medical databases.
"The book describes how linkage methods work and how to evaluate their performance. It covers all the major concepts and methods and also discusses practical matters such as computational efficiency, which are critical if the methods are to be used in practice - and it does all this in a highly accessible way!" (David J. Hand, Imperial College, London)
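As an illustrative sketch only (not the book's protocol), keyed pseudonymisation with an HMAC shows the basic idea behind privacy-preserving record linkage: each party replaces quasi-identifiers with keyed hashes so records can be joined on the pseudonyms without exposing raw values; the key, names, and databases below are invented for the example.

```python
import hashlib
import hmac

# Assumption for this sketch: the parties share a secret key out of band.
SECRET_KEY = b"shared-linkage-key"

def pseudonymise(value: str) -> str:
    """Return a keyed, irreversible pseudonym for a quasi-identifier."""
    normalised = value.strip().lower()  # simple cleaning handles "dirty" variants
    return hmac.new(SECRET_KEY, normalised.encode(), hashlib.sha256).hexdigest()

# Two databases hold different attributes and share no raw identifiers:
db_a = {pseudonymise("Alice Smith"): {"diagnosis": "A"}}
db_b = {pseudonymise("alice smith "): {"postcode": "2600"}}

# Linkage is a join on pseudonyms; matching keys identify the same person.
links = set(db_a) & set(db_b)
print(len(links))  # → 1
```

Because the HMAC is keyed, an outsider without the secret cannot mount a dictionary attack on the pseudonyms the way they could against plain hashes.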
The book presents the proceedings of two conferences: the 16th International Conference on Data Science (ICDATA 2020) and the 19th International Conference on Information & Knowledge Engineering (IKE 2020), which took place in Las Vegas, NV, USA, July 27-30, 2020. The conferences are part of the larger 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20), which features 20 major tracks. Papers cover all aspects of Data Science, Data Mining, Machine Learning, Artificial and Computational Intelligence (ICDATA) and Information Retrieval Systems, Information & Knowledge Engineering, Management and Cyber-Learning (IKE). Authors include academics, researchers, professionals, and students. Presents the proceedings of the 16th International Conference on Data Science (ICDATA 2020) and the 19th International Conference on Information & Knowledge Engineering (IKE 2020); includes papers on topics from data mining to machine learning to information retrieval systems; authors include academics, researchers, professionals and students.
Self-driving cars, natural language recognition, and online recommendation engines are all possible thanks to Machine Learning. Now you can create your own genetic algorithms, nature-inspired swarms, Monte Carlo simulations, cellular automata, and clusters. Learn how to test your ML code and dive into even more advanced topics. If you are a beginner-to-intermediate programmer keen to understand machine learning, this book is for you. Discover machine learning algorithms using a handful of self-contained recipes. Build a repertoire of algorithms, discovering terms and approaches that apply generally. Bake intelligence into your algorithms, guiding them to discover good solutions to problems. In this book, you will: Use heuristics and design fitness functions. Build genetic algorithms. Make nature-inspired swarms with ants, bees and particles. Create Monte Carlo simulations. Investigate cellular automata. Find minima and maxima, using hill climbing and simulated annealing. Try selection methods, including tournament and roulette wheels. Learn about heuristics, fitness functions, metrics, and clusters. Test your code and get inspired to try new problems. Work through scenarios to code your way out of a paper bag; an important skill for any competent programmer. See how the algorithms explore and learn by creating visualizations of each problem. Get inspired to design your own machine learning projects and become familiar with the jargon. What You Need: Code in C++ (>= C++11), Python (2.x or 3.x) and JavaScript (using the HTML5 canvas). Also uses matplotlib and some open source libraries, including SFML, Catch and Cosmic-Ray. These plotting and testing libraries are not required but their use will give you a fuller experience. Armed with just a text editor and compiler/interpreter for your language of choice you can still code along from the general algorithm descriptions.
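As a generic illustration of one technique the book covers (not code from the book), here is a minimal simulated-annealing minimizer in Python, one of the book's three languages; the function names, cooling schedule, and test function are this sketch's own choices.

```python
import math
import random

def simulated_annealing(cost, neighbour, start, temp=10.0, cooling=0.95, steps=500):
    """Minimise `cost`, occasionally accepting worse moves to escape local minima."""
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    for _ in range(steps):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; accept worse moves with probability e^(-delta/temp).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temp *= cooling  # geometric cooling: high temp explores, low temp exploits
    return best, best_cost

# Find the minimum of a simple quadratic, f(x) = (x - 3)^2, starting from x = 0.
random.seed(1)
x, fx = simulated_annealing(
    cost=lambda x: (x - 3) ** 2,
    neighbour=lambda x: x + random.uniform(-1, 1),
    start=0.0,
)
print(f"best x = {x:.2f}")
```

At high temperature the search behaves like a random walk; as the temperature cools it degenerates into the hill climbing the book also covers, which is why the two methods are usually taught together.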
Embedded systems have long become essential in application areas in which human control is impossible or infeasible. The development of modern embedded systems is becoming increasingly difficult and challenging because of their overall system complexity, their tighter and cross-functional integration, the increasing requirements concerning safety and real-time behavior, and the need to reduce development and operation costs. This book provides a comprehensive overview of the Software Platform Embedded Systems (SPES) modeling framework and demonstrates its applicability in embedded system development in various industry domains such as automation, automotive, avionics, energy, and healthcare. In SPES 2020, twenty-one partners from academia and industry have joined forces in order to develop and evaluate in different industrial domains a modeling framework that reflects the current state of the art in embedded systems engineering. The content of this book is structured in four parts. Part I "Starting Point" discusses the status quo of embedded systems development and model-based engineering, and summarizes the key requirements faced when developing embedded systems in different application domains. Part II "The SPES Modeling Framework" describes the SPES modeling framework. Part III "Application and Evaluation of the SPES Modeling Framework" reports on the validation steps taken to ensure that the framework met the requirements discussed in Part I. Finally, Part IV "Impact of the SPES Modeling Framework" summarizes the results achieved and provides an outlook on future work. The book is mainly aimed at professionals and practitioners who deal with the development of embedded systems on a daily basis. Researchers in academia and industry may use it as a compendium for the requirements and state-of-the-art solution concepts for embedded systems development.
The Handbook of Applied Expert Systems is a landmark work dedicated
solely to this rapidly advancing area of study. Edited by Jay
Liebowitz, a professor, author, and consultant known around the
world for his work in the field, this authoritative source covers
the latest expert system technologies, applications, methodologies,
and practices. The book features contributions from more than 40 of
the world's foremost expert systems authorities in industry,
government, and academia.