The book highlights new trends and challenges in research on agents and the new digital and knowledge economy. It includes papers on business process management, agent-based modeling and simulation, and anthropic-oriented computing that were originally presented at the 14th International KES Conference on Agents and Multi-Agent Systems: Technologies and Applications (KES-AMSTA 2020), held as a virtual conference on June 17-19, 2020. The respective papers cover topics such as software agents, multi-agent systems, agent modeling, mobile and cloud computing, big data analysis, business intelligence, artificial intelligence, social systems, computer embedded systems and nature-inspired manufacturing, all of which contribute to the modern digital economy.
The book provides a broad overview of traditional machine learning methods and state-of-the-art deep learning practices for hardware security applications, in particular the techniques of launching potent "modeling attacks" on Physically Unclonable Function (PUF) circuits, which are promising hardware security primitives. The volume is self-contained and includes a comprehensive background on PUF circuits, and the necessary mathematical foundation of traditional and advanced machine learning techniques such as support vector machines, logistic regression, neural networks, and deep learning. This book can be used as a self-learning resource for researchers and practitioners of hardware security, and will also be suitable for graduate-level courses on hardware security and application of machine learning in hardware security. A stand-out feature of the book is the availability of reference software code and datasets to replicate the experiments described in the book.
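Such modeling attacks rely on a well-known linear approximation of arbiter PUFs. As a rough, hedged illustration (not drawn from the book's reference code or datasets), the following Python sketch simulates an additive-delay arbiter PUF and trains logistic regression on its challenge-response pairs using the standard parity feature transform; the stage count, number of CRPs, and train/test split are arbitrary assumptions.

```python
# Minimal sketch of a PUF "modeling attack": simulate an n-stage arbiter PUF
# as response = sign(w . phi(c)) and learn w from challenge-response pairs.
# Hypothetical parameters throughout; not the book's reference code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000

def parity_features(challenges):
    # phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant bias feature.
    signs = 1 - 2 * challenges                        # map {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

w = rng.normal(size=n_stages + 1)                     # random stage delays
challenges = rng.integers(0, 2, size=(n_crps, n_stages))
responses = (parity_features(challenges) @ w > 0).astype(int)

# The attack: fit a linear model to 15,000 observed CRPs, test on the rest.
split = 15000
model = LogisticRegression(max_iter=1000)
model.fit(parity_features(challenges[:split]), responses[:split])
print("clone accuracy:", model.score(parity_features(challenges[split:]),
                                     responses[split:]))
```

Because the arbiter PUF's response is linear in the parity features, even this simple learner typically clones the simulated PUF with high accuracy, which is what makes such modeling attacks potent.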
This is the latest book from law and technology guru Richard Susskind, author of the best-selling The Future of Law. It brings together in one volume eleven significant essays on the application of IT to legal practice and the administration of justice, including Susskind's very latest thinking on key topics such as knowledge management and the impact of electronic commerce and electronic government.
This book proposes a general methodology for introducing Global Navigation Satellite System (GNSS) integrity, starting from a rigorous mathematical description of the problem. It highlights the major issues that designers need to resolve during the development of GNSS-based systems requiring a certain level of confidence in the position estimates. Although it follows a general approach, the final chapters focus on the application of GNSS integrity to rail transportation as an example. By describing the main requirements of the train position function, a safety-critical function of any train control system, it shows the critical issues associated with the concept of safe position integrity. In particular, one case study clarifies the key differences between the avionic domain and the railway domain in the application of GNSS technologies, and identifies a number of railway-signaling hazards linked with the use of such technology. Furthermore, it describes various railway-signaling techniques to mitigate such hazards, preparing readers for the future evolution of train control systems based in part on GNSS technology. This unique book offers a valuable reference guide for engineers and researchers in the fields of satellite navigation and rail transportation.
This book provides readers with a thorough understanding of the applicability of new-generation silicon-germanium (SiGe) electronic subsystems for electronic warfare and defensive countermeasures in military contexts. It explains in detail the theoretical and technical background, and addresses all aspects of the integration of SiGe as an enabling technology for maritime, land, and airborne/spaceborne electronic warfare, including research, design, development, and implementation. The coverage is supported by mathematical derivations, informative illustrations, practical examples, and case studies. While SiGe technology provides speed, performance, and price advantages in many markets, to date only limited information has been available on its use in electronic warfare systems, especially in developing nations. Addressing that need, this book offers essential engineering guidelines that especially focus on the speed and reliability of current-generation SiGe circuits and highlight emerging innovations that help to ensure the sustainable long-term integration of SiGe into electronic warfare systems.
This book presents a guide to navigating the complicated issues of quality and process improvement in enterprise software implementation, and the effect these have on the software development life cycle (SDLC). Offering an integrated approach that includes important management and decision practices, the text explains how to create successful automated solutions that fit user and customer needs, by mixing different SDLC methodologies. With an emphasis on the realities of practice, the book offers essential advice on defining business requirements, and managing change. This revised and expanded second edition includes new content on such areas as cybersecurity, big data, and digital transformation. Features: presents examples, case studies, and chapter-ending problems and exercises; concentrates on the skills needed to distinguish successful software implementations; considers the political and cultural realities in organizations; suggests many alternatives for how to manage and model a system.
This open access book explores the dataspace paradigm as a best-effort approach to data management within data ecosystems. It establishes the theoretical foundations and principles of real-time linked dataspaces as a data platform for intelligent systems. The book introduces a set of specialized best-effort techniques and models to enable loose administrative proximity and semantic integration for managing and processing events and streams. The book is divided into five major parts: Part I "Fundamentals and Concepts" details the motivation behind and core concepts of real-time linked dataspaces, and establishes the need to evolve data management techniques in order to meet the challenges of enabling data ecosystems for intelligent systems within smart environments. Further, it explains the fundamental concepts of dataspaces and the need for specialization in the processing of dynamic real-time data. Part II "Data Support Services" explores the design and evaluation of critical services, including catalog, entity management, query and search, data service discovery, and human-in-the-loop. In turn, Part III "Stream and Event Processing Services" addresses the design and evaluation of the specialized techniques created for real-time support services including complex event processing, event service composition, stream dissemination, stream matching, and approximate semantic matching. Part IV "Intelligent Systems and Applications" explores the use of real-time linked dataspaces within real-world smart environments. In closing, Part V "Future Directions" outlines future research challenges for dataspaces, data ecosystems, and intelligent systems. Readers will gain a detailed understanding of how the dataspace paradigm is now being used to enable data ecosystems for intelligent systems within smart environments. The book covers the fundamental theory, the creation of new techniques needed for support services, and lessons learned from real-world intelligent systems and applications focused on sustainability. Accordingly, it will benefit not only researchers and graduate students in the fields of data management, big data, and IoT, but also professionals who need to create advanced data management platforms for intelligent systems, smart environments, and data ecosystems.
This book presents a comprehensive review of Knowledge Engineering (KE) tools and techniques that can be used in Artificial Intelligence Planning and Scheduling. KE tools can be used to aid in the acquisition of knowledge and in the construction of domain models, which this book will illustrate. AI planning engines require a domain model which captures knowledge about how a particular domain works, e.g. the objects it contains and the available actions that can be used. However, encoding a planning domain model is not a straightforward task: a domain expert may be needed for their insight into the domain, but this information must then be encoded in a suitable representation language. The development of such domain models is both time-consuming and error-prone. Due to these challenges, researchers have developed a number of automated tools and techniques to aid in the capture and representation of knowledge. This book targets researchers and professionals working in knowledge engineering, artificial intelligence and software engineering. Advanced-level students studying AI will also be interested in this book.
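As a hedged, minimal illustration of what such a domain model captures (the domain, predicates, and action below are hypothetical and not taken from the book), a STRIPS-style encoding can be sketched in a few lines of Python: states are sets of facts, and actions carry preconditions and effects that a planning engine searches over.

```python
# Minimal STRIPS-style domain-model sketch; all names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # facts that must hold to apply the action
    add_effects: frozenset     # facts made true by the action
    delete_effects: frozenset  # facts made false by the action

    def applicable(self, state: frozenset) -> bool:
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        return (state - self.delete_effects) | self.add_effects

drive = Action(
    name="drive(truck, depot, airport)",
    preconditions=frozenset({"at(truck, depot)"}),
    add_effects=frozenset({"at(truck, airport)"}),
    delete_effects=frozenset({"at(truck, depot)"}),
)

state = frozenset({"at(truck, depot)", "at(pkg, depot)"})
if drive.applicable(state):
    state = drive.apply(state)
print(sorted(state))  # ['at(pkg, depot)', 'at(truck, airport)']
```

Even this toy encoding hints at why domain authoring is error-prone: every object, predicate, and effect must be spelled out explicitly, which is exactly the burden that automated KE tools aim to reduce.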
This book focuses on how modern business firms use social data, specifically Online Social Networks (OSNs), as part of the infrastructure for a number of emerging applications such as personalized recommendation systems, opinion analysis, expertise retrieval, and computational advertising. It identifies how, in such applications, social data offers a plethora of benefits to enhance the decision-making process. The book highlights that business intelligence applications are more focused on structured data; however, in order to understand and analyse social big data, there is a need to aggregate data from various sources and to present it in a plausible format. Big Social Data (BSD) exhibits all the typical properties of big data: wide physical distribution, diversity of formats, non-standard data models, and independently managed and heterogeneous semantics, but is even more valuable for the marketing opportunities it offers. The book provides a review of the current state-of-the-art approaches for big social data analytics and presents different methods for inferring value from social data. It further examines several areas of research that benefit from the propagation of social data. In particular, the book presents various technical approaches that produce data analytics capable of handling big data features and effective in filtering out unsolicited data and inferring value. These approaches comprise advanced technical solutions able to capture huge amounts of generated data, scrutinise the collected data to eliminate unwanted data, measure the quality of the inferred data, and transform the amended data for further analysis. Furthermore, the book presents solutions for deriving knowledge and sentiments from BSD and for providing social data classification and prediction. The approaches in this book also incorporate several technologies such as semantic discovery, sentiment analysis, affective computing and machine learning. A special feature of the book is its numerous illustrations, such as tables, graphs and charts, which incorporate advanced visualisation tools in an accessible and attractive display.
Handbook of Metaheuristic Algorithms: From Fundamental Theories to Advanced Applications provides a brief introduction to metaheuristic algorithms from the ground up, including basic ideas and advanced solutions. Although readers may be able to find source code for some metaheuristic algorithms on the Internet, the coding styles and explanations vary widely, requiring readers to bridge the gap between theory and implementation on their own. This book can also help students and researchers construct an integrated perspective of metaheuristic and unsupervised algorithms for artificial intelligence research in computer science and applied engineering domains. Metaheuristic algorithms can be considered the epitome of unsupervised learning algorithms for the optimization of engineering and artificial intelligence problems; they include simulated annealing (SA), tabu search (TS), genetic algorithms (GA), ant colony optimization (ACO), particle swarm optimization (PSO), differential evolution (DE), and others. Unlike most supervised learning algorithms, which need labeled data to learn and construct decision models, metaheuristic algorithms inherit the characteristics of unsupervised learning, finding solutions to complex engineering optimization problems without labeled data.
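To give a flavor of this family of algorithms, here is a minimal simulated-annealing sketch in Python (illustrative only, not taken from the handbook; the objective function, neighborhood, and cooling parameters are arbitrary assumptions): it searches for a minimum without labels or gradients, occasionally accepting worse moves to escape local minima.

```python
# Minimal simulated-annealing sketch; parameters are illustrative choices.
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # random neighbor of x
        fc = f(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-(fc - fx) / t), which shrinks as t cools.
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                             # geometric cooling schedule
    return best, fbest

# Toy objective with many local minima; no labeled data is involved.
x, fx = simulated_annealing(lambda v: v * v + 10 * math.sin(3 * v), x0=8.0)
print(f"x = {x:.3f}, f(x) = {fx:.3f}")
```

The same perturb/accept/cool skeleton generalizes across the family: tabu search swaps the acceptance rule for a memory of forbidden moves, while population methods such as GA, PSO, and DE perturb many candidates at once.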
This is the first comprehensive research monograph devoted to the use of augmented reality in education. It is written by a team of 58 world-leading researchers, practitioners and artists from 15 countries who are pioneering the use of augmented reality as a new teaching and learning technology and tool. The authors explore the state of the art in educational augmented reality and its usage in a large variety of particular areas, such as medical education and training, English language education, chemistry learning, environmental and special education, dental training, mining engineering teaching, and historical and fine art education. Augmented Reality in Education: A New Technology for Teaching and Learning is essential reading not only for educators of all types and levels, educational researchers and technology developers, but also for students (both graduate and undergraduate) and anyone who is interested in the educational use of emerging augmented reality technology.
Fuzzy set theory provides a framework for representing uncertainty. As increasing importance is given to uncertainty management in intelligent systems, fuzzy inferencing procedures are vital. Using FEST (Fuzzy Expert System Tools), the authors focus on the parameters of fuzzy rule-based systems. The book then goes on to show how FEST can be used for inference over imprecise data and algorithmic descriptions. Divided into three parts, this comprehensive text covers the characteristics of expert systems and fuzzy set theory, knowledge representation, and the inference process.
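For readers new to the topic, a minimal Mamdani-style inference sketch follows (FEST's actual interface is not reproduced here, so the variables, membership functions, and rules are hypothetical): rule firing strengths clip the output fuzzy sets, which are then aggregated and defuzzified with the centroid method.

```python
# Minimal Mamdani-style fuzzy inference sketch; all names are hypothetical.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Rules: IF temp IS low THEN fan IS slow; IF temp IS high THEN fan IS fast.
temp = 28.0
mu_low = tri(temp, 0.0, 10.0, 25.0)    # firing strength of the "low" rule
mu_high = tri(temp, 15.0, 30.0, 45.0)  # firing strength of the "high" rule

fan = np.linspace(0.0, 100.0, 1001)    # output universe of discourse
slow = tri(fan, 0.0, 20.0, 50.0)
fast = tri(fan, 50.0, 80.0, 100.0)

# Clip each consequent by its rule's strength, aggregate by max, defuzzify.
aggregated = np.maximum(np.minimum(slow, mu_low), np.minimum(fast, mu_high))
print(f"fan speed = {(fan * aggregated).sum() / aggregated.sum():.1f}%")
```

The choice of min for implication, max for aggregation, and centroid defuzzification is the classic Mamdani configuration; these are exactly the kinds of parameters of fuzzy rule-based systems that such tools expose.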
This book presents the state of the art, challenges and future trends in automotive software engineering. The amount of automotive software has grown from just a few lines of code in the 1970s to millions of lines in today's cars. And this trend seems destined to continue in the years to come, considering all the innovations in electric/hybrid, autonomous, and connected cars. Yet there are also concerns related to onboard software, such as security, robustness, and trust. This book covers all essential aspects of the field. After a general introduction to the topic, it addresses automotive software development, automotive software reuse, E/E architectures and safety, C-ITS and security, and future trends. The specific topics discussed include requirements engineering for embedded software systems, tools and methods used in the automotive industry, software product lines, architectural frameworks, various related ISO standards, functional safety and safety cases, cooperative intelligent transportation systems, autonomous vehicles, and security and privacy issues. The intended audience includes researchers from academia who want to learn what the fundamental challenges are and how they are being tackled in the industry, and practitioners looking for cutting-edge academic findings. Although the book is not written as lecture notes, it can also be used in advanced master's-level courses on software and system engineering. The book also includes a number of case studies that can be used for student projects.
This book summarizes the research findings presented at the 13th International Joint Conference on Knowledge-Based Software Engineering (JCKBSE 2020), which took place on August 24-26, 2020. JCKBSE 2020 was originally planned to take place in Larnaca, Cyprus. Unfortunately, the COVID-19 pandemic forced it to be rescheduled as an online conference. JCKBSE is a well-established, international, biennial conference that focuses on the applications of artificial intelligence in software engineering. The 2020 edition of the conference was organized by Hiroyuki Nakagawa, Graduate School of Information Science and Technology, Osaka University, Japan, and George A. Tsihrintzis and Maria Virvou, Department of Informatics, University of Piraeus, Greece. This research book is a valuable resource for experts and researchers in the field of (knowledge-based) software engineering, as well as general readers in the fields of artificial and computational intelligence and, more generally, computer science, who want to learn more about the field of (knowledge-based) software engineering and its applications. An extensive list of bibliographic references at the end of each paper helps readers to probe further into the application areas of interest to them.
Without correct timing, there is no safe and reliable embedded software. This book shows how to consider timing early in the development process for embedded systems, how to solve acute timing problems, how to perform timing optimization, and how to address timing verification. The book is organized in twelve chapters. The first three cover various basics of microprocessor technologies and the operating systems used therein. The next four chapters cover timing problems both in theory and practice, including various timing analysis techniques as well as special issues like multi- and many-core timing. Chapter 8 deals with aspects of timing optimization, followed by chapter 9, which highlights various methodological issues of the actual development process. Chapter 10 presents timing analysis in AUTOSAR in detail, while chapter 11 focuses on safety aspects and timing verification. Finally, chapter 12 provides an outlook on upcoming and future developments in software timing. The number of embedded systems that we encounter in everyday life is growing steadily, and at the same time the complexity of their software is constantly increasing. This book is mainly written for software developers and project leaders in industry. It is enriched by many practical examples, mostly from the automotive domain, yet the vast majority of the book is relevant for any embedded software project. This also makes it well suited as a textbook for academic courses with a strong practical emphasis, e.g. at universities of applied sciences.
"Reliable Knowledge Discovery" focuses on theory, methods, and techniques for RKDD, a new sub-field of KDD. It studies the theory and methods to assure the reliability and trustworthiness of discovered knowledge and to maintain the stability and consistency of knowledge discovery processes. RKDD has a broad spectrum of applications, especially in critical domains like medicine, finance, and military. "Reliable Knowledge Discovery" also presents methods and techniques for designing robust knowledge-discovery processes. Approaches to assessing the reliability of the discovered knowledge are introduced. Particular attention is paid to methods for reliable feature selection, reliable graph discovery, reliable classification, and stream mining. Estimating the data trustworthiness is covered in this volume as well. Case studies are provided in many chapters. "Reliable Knowledge Discovery" is designed for researchers and advanced-level students focused on computer science and electrical engineering as a secondary text or reference. Professionals working in this related field and KDD application developers will also find this book useful.
Presenting a reference model architecture for the design of intelligent systems, Engineering of Mind lays the foundations for a computational theory of intelligence. It discusses the main streams of investigation that will eventually converge in a scientific theory of mind and proposes an avenue of research that might best lead to the development of truly intelligent systems. The book presents a model of the brain as a hierarchy of massively parallel computational modules and data structures interconnected by information pathways. Using this as the basic model on which intelligent systems should be based, the authors propose a reference model architecture that accommodates concepts from artificial intelligence, control theory, image understanding, signal processing, and decision theory. Algorithms, procedures, and data embedded within this architecture would enable the analysis of situations, the formulation of plans, the choice of behaviors, and the computation of uncertainties. The computational power to implement the model can be achieved in practical systems in the foreseeable future through hierarchical and parallel distribution of computational tasks. The authors' reference model architecture is expressed in terms of the Real-time Control System (RCS) that has been developed primarily at the National Institute of Standards and Technology. Suitable for engineers, computer scientists, researchers, and students, Engineering of Mind blends current theory and practice to achieve a coherent model for the design of intelligent systems.
This book presents a comprehensive report on the evolution of Fuzzy Logic since its formulation in Lotfi Zadeh's seminal paper on "fuzzy sets," published in 1965. In addition, it features a stimulating sampling from the broad field of research and development inspired by Zadeh's paper. The chapters, written by pioneers and prominent scholars in the field, show how fuzzy sets have been successfully applied to artificial intelligence, control theory, inference, and reasoning. The book also reports on theoretical issues; features recent applications of Fuzzy Logic in the fields of neural networks, clustering, data mining and software testing; and highlights an important paradigm shift caused by Fuzzy Logic in the area of uncertainty management. Conceived by the editors as an academic celebration of the fiftieth anniversary of the 1965 paper, this work is a must-have for students and researchers wishing to get an inspiring picture of the potentialities, limitations, achievements and accomplishments of Fuzzy Logic-based systems.
The British philosopher Stephen Toulmin, in his The Uses of Argument, made the provocative claim that "logic is generalized jurisprudence." For Toulmin, logic is the study of norms for practical argumentation and decision making. In his view, mathematical logicians were preoccupied with formalizing the concepts of logical necessity, consequence and contradiction, at the expense of other equally important issues, such as how to allocate the burden of proof and make rational decisions given limited resources. He also considered it a mistake to look primarily to psychology, linguistics or the cognitive sciences for answers to these fundamentally normative questions. Toulmin's concerns about logic, voiced in the 1950s, are equally applicable to the field of Artificial Intelligence today. The mainstream of Artificial Intelligence has focused on the analytical and empirical aspects of intelligence, without giving adequate attention to the normative, regulative functions of knowledge representation, problem solving and decision-making. Normative issues should now be of even greater interest, with the shift in perspective of AI from individual to collective intelligence, in areas such as multi-agent systems, cooperative design, distributed artificial intelligence, and computer-supported cooperative work. Networked "virtual societies" of humans and software agents would also require "virtual legal systems" to fairly balance interests, resolve conflicts, and promote security.
This book provides a broad snapshot of building knowledge-based systems, inconsistency measures, methods for handling consistency, and methods for integrating knowledge bases. It also provides the mathematical background needed to solve problems of restoring consistency and problems of integrating probabilistic knowledge bases in the integration process. The research results presented in the book can be applied in decision support systems, semantic web systems, multimedia information retrieval systems, medical imaging systems, cooperative information systems, and more.
This book provides a review of advanced topics relating to the theory, research, analysis and implementation in the context of big data platforms and their applications, with a focus on methods, techniques, and performance evaluation. The explosive growth in the volume, speed, and variety of data being produced every day requires a continuous increase in the processing speeds of servers and of entire network infrastructures, as well as new resource management models. This poses significant challenges (and provides striking development opportunities) for data intensive and high-performance computing, i.e., how to efficiently turn extremely large datasets into valuable information and meaningful knowledge. The task of context data management is further complicated by the variety of sources such data derives from, resulting in different data formats, with varying storage, transformation, delivery, and archiving requirements. At the same time rapid responses are needed for real-time applications. With the emergence of cloud infrastructures, achieving highly scalable data management in such contexts is a critical problem, as the overall application performance is highly dependent on the properties of the data management service.
Knowledge-based systems are increasingly found in a wide variety of settings, and this handbook has been written to meet a specific need in their widening use. While there have been many successful applications of knowledge-based systems, some applications have failed because they never received the corrective feedback that evaluation provides for keeping development focused on the users' needs in their actual working environment. This handbook provides a conceptual framework and compendium of methods for performing evaluations of knowledge-based systems during their development. Its focus is on the users' and subject matter experts' evaluation of the usefulness of the system, and not on the developers' testing of the adequacy of the programming code. The handbook permits evaluators to systematically answer the following kinds of questions: Does the knowledge-based system meet the users' task requirements? Is the system easy to use? Is the knowledge base logically consistent? Does it meet the required level of expertise? Does the system improve performance? The authors have produced a handbook that will serve two audiences: practitioners, developers, and evaluators, who can use it as a tool in creating knowledge-based systems, and academic researchers and students, for whom it provides a framework to stimulate more research in the area. To accomplish this, the handbook is built around a conceptual framework that integrates the different types of evaluations into the system development process. The kinds of questions that can be answered, and the methods available for answering them, will change throughout the system development life cycle, and throughout this process one needs to know what can be done and what can't. It is this dichotomy that addresses the needs of both the practitioner and academic research audiences.
This book focuses on one of the major challenges of the newly created scientific domain known as data science: turning data into actionable knowledge in order to exploit increasing data volumes and deal with their inherent complexity. Actionable knowledge has been qualitatively and intensively studied in management, business, and the social sciences, but in computer science and engineering its connection to data mining and its evolution, 'Knowledge Discovery and Data Mining' (KDD), has only recently been established. Data mining seeks to extract interesting patterns from data, but, until now, the patterns discovered from data have not always been 'actionable' for decision-makers in Socio-Technical Organizations (STO). With the evolution of the Internet and connectivity, STOs have evolved into the Cyber-Physical and Social Systems (CPSS) that describe our world today. In such complex and dynamic environments, the conventional KDD process is insufficient, and additional processes are required to transform complex data into actionable knowledge. Readers are presented with advanced knowledge concepts and the analytics and information fusion (AIF) processes aimed at delivering actionable knowledge. The authors provide an understanding of the concept of 'relation' and its exploitation, relational calculus, as well as the formalization of specific dimensions of knowledge that achieve semantic growth along the AIF processes. This book serves as an important technical presentation of relational calculus and its application to processing chains in order to generate actionable knowledge. It is ideal for graduate students, researchers, or industry professionals interested in decision science and knowledge engineering.
This open access book constitutes the refereed post-conference proceedings of the First IFIP International Cross-Domain Conference on Internet of Things, IFIPIoT 2018, held at the 24th IFIP World Computer Congress, WCC 2018, in Poznan, Poland, in September 2018. The 12 full papers presented were carefully reviewed and selected from 24 submissions. Also included in this volume are 4 WCC 2018 plenary contributions, an invited talk and a position paper from the IFIP domain committee on IoT. The papers cover a wide range of topics from a technology to a business perspective and include among others hardware, software and management aspects, process innovation, privacy, power consumption, architecture, applications.
Applied Computing in Medicine and Health is a comprehensive presentation of ongoing investigations into current applied computing challenges and advances, with a focus on a particular class of applications, primarily artificial intelligence methods and techniques in medicine and health. Applied computing is the use of practical computer science knowledge to enable the use of the latest technology and techniques in a variety of different fields ranging from business to scientific research. One of the most important and relevant areas in applied computing is the use of artificial intelligence (AI) in health and medicine. Artificial intelligence in health and medicine (AIHM) is taking on the challenge of creating and distributing tools that can support medical doctors and specialists in new endeavors. The material included covers a wide variety of interdisciplinary perspectives concerning the theory and practice of applied computing in medicine, human biology, and health care. Particular attention is given to AI-based clinical decision-making, medical knowledge engineering, knowledge-based systems in medical education and research, intelligent medical information systems, intelligent databases, intelligent devices and instruments, medical AI tools, reasoning and metareasoning in medicine, and methodological, philosophical, ethical, and intelligent medical data analysis.
You may like...
Knowledge-Based Software Engineering… by Maria Virvou, Fumihiro Kumeno, … (Hardcover): R4,034
Recent Trends in Computational… by Siddhartha Bhattacharyya, Paramartha Dutta, … (Paperback): R3,483
Exploring Future Opportunities of… by Madhulika Bhatia, Tanupriya Choudhury, … (Hardcover): R6,683
Research Anthology on Artificial Neural… by Information Resources Management Association (Hardcover): R12,947