KnowledgeSeeker is a useful system for developing various intelligent applications such as ontology-based search engines, ontology-based text classification systems, ontological agent systems, and semantic web systems. KnowledgeSeeker contains four different ontological components. First, it defines the knowledge representation model, the Ontology Graph. Second, an ontology learning process based on chi-square statistics is proposed for automatically learning an Ontology Graph from texts in different domains. Third, it defines an ontology generation method that transforms the learning outcome into the Ontology Graph format, which can be processed by machines and also visualized for human validation. Fourth, it defines different ontological operations (such as similarity measurement and text classification) that can be carried out with the generated Ontology Graphs. The final goal of the KnowledgeSeeker system framework is to improve traditional information systems with higher efficiency. In particular, it can increase the accuracy of a text classification system and enhance the search intelligence of a search engine. This is done by enhancing the system with machine-processable ontology.
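The chi-square learning step described above can be illustrated with a small sketch: for each candidate term and target domain, a 2x2 contingency table over a labelled document collection yields a chi-square score, and high-scoring terms become candidate concepts for that domain's Ontology Graph. The tiny corpus below is invented for illustration; it is not from the book.

```python
def chi_square(a, b, c, d):
    """Chi-square statistic of the 2x2 contingency table:
    a = in-domain docs containing the term,  b = out-of-domain docs containing it,
    c = in-domain docs lacking the term,     d = out-of-domain docs lacking it."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

docs = [  # (terms in document, domain label) -- a toy labelled corpus
    ({"stock", "market", "trade"}, "finance"),
    ({"stock", "bond", "yield"},   "finance"),
    ({"goal", "match", "team"},    "sport"),
    ({"team", "trade", "coach"},   "sport"),
]

def score_term(term, domain):
    a = sum(1 for t, lbl in docs if term in t and lbl == domain)
    b = sum(1 for t, lbl in docs if term in t and lbl != domain)
    c = sum(1 for t, lbl in docs if term not in t and lbl == domain)
    d = sum(1 for t, lbl in docs if term not in t and lbl != domain)
    return chi_square(a, b, c, d)

print(score_term("stock", "finance"))  # term confined to one domain: high score
print(score_term("trade", "finance"))  # term spread evenly over both domains: score 0
```

Terms whose score exceeds some threshold would then be linked into the domain's graph; the statistic is symmetric, so the sign of `a*d - b*c` tells whether the association is positive or negative.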
The rapid advances in performance and miniaturisation in microtechnology are constantly opening up new markets for the programmable logic controller (PLC). Specially designed controller hardware or PC-based controllers, extended by hardware and software with real-time capability, now control highly complex automation processes. This has been extended by the new subject of "safety-related controllers," aimed at preventing injury by machines during the production process. The different types of PLC cover a wide task spectrum, ranging from small network node computers and distributed compact units right up to modular, fault-tolerant, high-performance PLCs. They differ in performance characteristics such as processing speed, networking ability or the selection of I/O modules they support. Throughout this book, the term PLC is used to refer to the technology as a whole, both hardware and software, and not merely to the hardware architecture. The IEC 61131 programming languages can be used for programming classical PLCs, embedded controllers, industrial PCs and even standard PCs, if suitable hardware (e.g. a fieldbus board) for connecting sensors and actuators is available.
Speech-to-Speech Translation: A Massively Parallel Memory-Based Approach describes one of the world's first successful speech-to-speech machine translation systems. This system accepts speaker-independent continuous speech and produces translations as audio output. Subsequent versions of this machine translation system have been implemented on several massively parallel computers, and these systems have attained translation performance in the milliseconds range. The success of this project triggered several massively parallel projects, as well as other massively parallel artificial intelligence projects throughout the world. Dr. Hiroaki Kitano received the distinguished Computers and Thought Award from the International Joint Conferences on Artificial Intelligence in 1993 for his work in this area, and that work is reported in this book.
Frontiers in Belief Revision is a unique collection of leading edge research in Belief Revision. It contains the latest innovative ideas of highly respected and pioneering experts in the area, including Isaac Levi, Krister Segerberg, Sven Ove Hansson, Didier Dubois, and Henri Prade. The book addresses foundational issues of inductive reasoning and minimal change, generalizations of the standard belief revision theories, strategies for iterated revisions, probabilistic beliefs, multiagent environments and a variety of data structures and mechanisms for implementations. This book is suitable for students and researchers interested in knowledge representation and in the state of the art of the theory and practice of belief revision.
Thinking in terms of facts and rules is perhaps one of the most common ways of approaching problem definition and problem solving, both in everyday life and under more formal circumstances. The best-known set of rules, the Ten Commandments, has accompanied us since the times of Moses; the Decalogue proved to be simple but powerful, concise and universal. It is logically consistent and complete. There are also many other attempts to impose rule-based regulations in almost all areas of life, including professional work, education, medical services, taxes, etc. Some typical examples include various codes (e.g. legal or traffic codes), regulations (especially military ones), and many systems of customary or informal rules. The universal nature of rule-based formulations of behavior or inference principles follows from rules being a simple and intuitive yet powerful concept of very high expressive power. Moreover, rules in fact encode functional aspects of behavior and can be used for modeling numerous phenomena.
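Rule-based inference of the kind described above can be made concrete with a toy forward-chaining engine; the traffic-code facts and rules below are invented for illustration and are not taken from the book.

```python
# A minimal forward-chaining rule engine: each rule is (premises, conclusion),
# and the fact base is extended until no rule can add anything new.
RULES = [
    ({"red_light"}, "must_stop"),
    ({"must_stop", "road_icy"}, "brake_gently"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # all premises hold: fire the rule
                changed = True
    return facts

print(forward_chain({"red_light", "road_icy"}, RULES))
```

Starting from the two observed facts, the engine derives `must_stop` and then, via chaining, `brake_gently`; this fixed-point iteration is the core of most rule-based systems, however elaborate their rule languages become.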
Pervasive healthcare is the conceptual system of providing healthcare to anyone, at anytime, and anywhere by removing restraints of time and location while increasing both the coverage and the quality of healthcare. Pervasive Healthcare Monitoring is at the forefront of this research, and presents the ways in which mobile and wireless technologies can be used to implement the vision of pervasive healthcare. This vision includes prevention, healthcare maintenance and checkups; short-term monitoring (home healthcare monitoring), long-term monitoring (nursing home), and personalized healthcare monitoring; and incidence detection and management, emergency intervention, and transportation and treatment. The pervasive healthcare applications include pervasive health monitoring, intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. Pervasive Healthcare Monitoring fills the need for a research-oriented book on the wide array of emerging healthcare applications and services, including the treatment of several new wireless technologies and the ways in which they will implement the vision of pervasive healthcare. This book is written primarily for university faculty and graduate students in the field of healthcare technologies, and industry professionals involved in healthcare IT research, design, and development.
In knowledge-based natural language generation, issues of formal knowledge representation meet with the linguistic problems of choosing the most appropriate verbalization in a particular situation of utterance. Lexical Semantics and Knowledge Representation in Multilingual Text Generation presents a new approach to systematically linking the realms of lexical semantics and knowledge represented in a description logic. For language generation from such abstract representations, lexicalization is taken as the central step: when choosing words that cover the various parts of the content representation, the principal decisions on conveying the intended meaning are made. A preference mechanism is used to construct the utterance that is best tailored to parameters representing the context. Lexical Semantics and Knowledge Representation in Multilingual Text Generation develops the means for systematically deriving a set of paraphrases from the same underlying representation with the emphasis on events and verb meaning. Furthermore, the same mapping mechanism is used to achieve multilingual generation: English and German output are produced in parallel, on the basis of an adequate division between language-neutral and language-specific (lexical and grammatical) knowledge. Lexical Semantics and Knowledge Representation in Multilingual Text Generation provides detailed insights into designing the representations and organizing the generation process. Readers with a background in artificial intelligence, cognitive science, knowledge representation, linguistics, or natural language processing will find a model of language production that can be adapted to a variety of purposes.
This open access book provides a comprehensive view of data ecosystems and platform economics, from methodological and technological foundations up to reports on practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, "Foundations and Contexts", provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, "Data Space Technologies", subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the use of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV then offers an overview of several "Solutions and Applications", including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims to promote the vision of a social data market economy based on data spaces which embrace trust and data sovereignty.
This book highlights new trends and challenges in research on agents and the new digital and knowledge economy. It includes papers on business process management, agent-based modeling and simulation, and anthropic-oriented computing that were originally presented at the 15th International KES Conference on Agents and Multi-Agent Systems: Technologies and Applications (KES-AMSTA 2021), held as a virtual conference on June 14-16, 2021. The respective papers cover topics such as software agents, multi-agent systems, agent modeling, mobile and cloud computing, big data analysis, business intelligence, artificial intelligence, social systems, computer embedded systems, and nature-inspired manufacturing, all of which contribute to the modern digital economy.
The papers in this volume comprise the refereed proceedings of the Second IFIP International Conference on Computer and Computing Technologies in Agriculture (CCTA 2008), held in Beijing, China, in 2008. CCTA 2008 was cooperatively sponsored and organized by the China Agricultural University (CAU), the National Engineering Research Center for Information Technology in Agriculture (NERCITA), the Chinese Society of Agricultural Engineering (CSAE), the International Federation for Information Processing (IFIP), the Beijing Society for Information Technology in Agriculture, China, and the Beijing Research Center for Agro-products Test and Farmland Inspection, China. Related departments of China's central government bodies, such as the Ministry of Science and Technology, the Ministry of Industry and Information Technology and the Ministry of Education, along with the Beijing Municipal Natural Science Foundation, the Beijing Academy of Agricultural and Forestry Sciences, etc., greatly contributed to and supported this event. The conference was a good platform to bring together scientists and researchers, agronomists and information engineers, extension workers and entrepreneurs from a range of disciplines concerned with the impact of information technology on sustainable agriculture and rural development. It was attended by representatives of all the supporting organizations and a group of invited speakers, experts and researchers from more than 15 countries, including the Netherlands, Spain, Portugal, Mexico, Germany, Greece, Australia, Estonia, Japan, Korea, India, Iran, Nigeria, Brazil and China.
Knowledge Representation and Relation Nets introduces a fresh approach to knowledge representation that can be used to organize study material in a convenient, teachable and learnable form. The method extends and formalizes concept mapping by developing knowledge representation as a structure of concepts and the relationships among them. Such a formal description of analogy results in a controlled method of modeling 'new' knowledge in terms of 'existing' knowledge in teaching and learning situations, and its application results in a consistent and well-organized approach to problem solving. Additionally, strategies for the presentation of study material to learners arise naturally in this representation. While the theory of relation nets is dealt with in detail in part of this book, the reader need not master the formal mathematics in order to apply the theory to this method of knowledge representation. To assist the reader, each chapter starts with a brief summary, and the main ideas are illustrated by examples. The reader is also given an intuitive view of the formal notions used in the applications by means of diagrams, informal descriptions, and simple sets of construction rules. Knowledge Representation and Relation Nets is an excellent source for teachers, courseware designers and researchers in knowledge representation, cognitive science, theories of learning, the psychology of education, and structural modeling.
The book provides a broad overview of traditional machine learning methods and state-of-the-art deep learning practices for hardware security applications, in particular techniques for launching potent "modeling attacks" on Physically Unclonable Function (PUF) circuits, which are promising hardware security primitives. The volume is self-contained and includes a comprehensive background on PUF circuits and the necessary mathematical foundations of traditional and advanced machine learning techniques such as support vector machines, logistic regression, neural networks, and deep learning. This book can be used as a self-learning resource for researchers and practitioners of hardware security, and is also suitable for graduate-level courses on hardware security and the application of machine learning in hardware security. A stand-out feature of the book is the availability of reference software code and datasets to replicate the experiments described in the book.
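To make the notion of a modeling attack concrete, the sketch below simulates the widely used additive-delay model of an arbiter PUF and trains a plain logistic-regression model on observed challenge-response pairs. The model sizes, learning rate and iteration count are arbitrary choices for the demo, not the book's settings, and the hand-rolled gradient descent stands in for whatever library the book actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stages, n_train, n_test = 32, 3000, 2000   # sizes picked arbitrarily for the demo

# Additive-delay model of an arbiter PUF: the response is the sign of a
# linear function of "parity" features derived from the challenge bits.
w_true = rng.normal(size=n_stages + 1)        # hidden stage delays + bias

def parity_features(challenges):
    # phi_i = product of (1 - 2*c_j) for j >= i, plus a constant bias feature
    signs = 1 - 2 * challenges                # map bits {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

X = parity_features(rng.integers(0, 2, size=(n_train, n_stages)))
y = (X @ w_true > 0).astype(float)            # observed challenge-response pairs

# Plain logistic regression trained by batch gradient descent
# (no external ML library, to keep the sketch self-contained).
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n_train

# The attack succeeds if the learned model predicts responses to challenges
# it has never seen, i.e. the PUF is "cloned" in software.
Xt = parity_features(rng.integers(0, 2, size=(n_test, n_stages)))
acc = np.mean((Xt @ w > 0) == (Xt @ w_true > 0))
print(f"prediction accuracy on unseen challenges: {acc:.3f}")
```

The point of the exercise is that the linear parity transformation makes the PUF's response function learnable from a few thousand challenge-response pairs, which is exactly why such circuits need the countermeasures the book goes on to discuss.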
This book provides a broad overview of the benefits from a Systems Engineering design philosophy in architecting complex systems composed of artificial intelligence (AI), machine learning (ML) and humans situated in chaotic environments. The major topics include emergence, verification and validation of systems using AI/ML and human systems integration to develop robust and effective human-machine teams, where the machines may have varying degrees of autonomy due to the sophistication of their embedded AI/ML. The chapters not only describe what has been learned, but also raise questions that must be answered to further advance the general Science of Autonomy. The science of how humans and machines operate as a team requires insights from, among others, disciplines such as the social sciences, national and international jurisprudence, ethics and policy, and sociology and psychology. The social sciences inform how context is constructed, how trust is affected when humans and machines depend upon each other and how human-machine teams need a shared language of explanation. National and international jurisprudence determine legal responsibilities of non-trivial human-machine failures, ethical standards shape global policy, and sociology provides a basis for understanding team norms across cultures. Insights from psychology may help us to understand the negative impact on humans if AI/ML based machines begin to outperform their human teammates and consequently diminish their value or importance. This book invites professionals and the curious alike to witness a new frontier open as the Science of Autonomy emerges.
The book is a complete introduction and applications guide to feature-based design technology. It introduces the reader to features, gives an overview of geometric modeling techniques, discusses the conceptual development of features as modeling entities, illustrates the use of features for a variety of engineering design applications, develops a set of broad functional requirements, and addresses high-level design issues.
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 - to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
This monograph introduces a novel multiset-based conceptual, mathematical and knowledge engineering paradigm, called the multigrammatical framework (MGF), used for planning and scheduling in resource-consuming, resource-producing (industrial) and resource-distributing (economic) sociotechnological systems (STS). This framework is meant to enable smart operation not only in a "business-as-usual" mode, but also in extraordinary, highly volatile or hazardous environments. It is the result of the convergence and deep integration of several well-known paradigms from classical operations research and modern knowledge engineering, such as mathematical programming, game theory, optimal scheduling, logic programming and constraint programming, into a unified, flexible and effectively implemented formalism operating on multisets. The mathematical background needed for MGF, its algorithmics, applications and implementation issues, as well as its nexus with known models from the operations research and theoretical computer science areas, are considered. The resilience and recovery issues of an STS are studied by applying the MGF toolkit, paying special attention to the multigrammatical assessment of the resilience of energy infrastructures. MGF-represented resource-based games are introduced, and directions for further development are discussed. The author presents multiple applications to business intelligence, critical infrastructure, ecology, economy and industry. This book is addressed to scholars working in the areas of theoretical and applied computer science, artificial intelligence, systems analysis, operations research, mathematical economy and critical infrastructure protection, to engineers developing software-intensive solutions for implementation of the knowledge-based digital economy and Industry 4.0, as well as to students, doctoral candidates and university staff.
Foundational knowledge of set theory, mathematical logic and routine operations on data bases is needed to read this book. The content of the monograph is gradually presented, from simple to complex, in a well-understandable step-by-step manner. Multiple examples and accompanying figures are included in order to support the explanation of the various notions, expressions and algorithms.
This book constitutes the refereed proceedings of the IFIP Industry Oriented Conferences held at the 20th World Computer Congress in Milan, Italy, on September 7-10, 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.
The development of modern knowledge-based systems, for applications ranging from medicine to finance, necessitates going well beyond traditional rule-based programming. Frontiers of Expert Systems: Reasoning with Limited Knowledge attempts to satisfy such a need, introducing exciting and recent advances at the frontiers of the field of expert systems. Beginning with the central topics of logic, uncertainty and rule-based reasoning, each chapter in the book presents a different perspective on how we may solve problems that arise due to limitations in the knowledge of an expert system's reasoner. Successive chapters address (i) the fundamentals of knowledge-based systems, (ii) formal inference, and reasoning about models of a changing and partially known world, (iii) uncertainty and probabilistic methods, (iv) the expression of knowledge in rule-based systems, (v) evolving representations of knowledge as a system interacts with the environment, (vi) applying connectionist learning algorithms to improve on knowledge acquired from experts, (vii) reasoning with cases organized in indexed hierarchies, (viii) the process of acquiring and inductively learning knowledge, (ix) extraction of knowledge nuggets from very large data sets, and (x) interactions between multiple specialized reasoners with specialized knowledge bases. Each chapter takes the reader on a journey from elementary concepts to topics of active research, providing a concise description of several topics within and related to the field of expert systems, with pointers to practical applications and other relevant literature. Frontiers of Expert Systems: Reasoning with Limited Knowledge is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
There is tremendous interest in the design and applications of agents in virtually every area, including avionics, business, the internet, engineering, health sciences and management. There is no single agreed definition of an agent, but we can define an agent as a computer program that autonomously or semi-autonomously acts on behalf of the user. In the last five years, the transition of intelligent systems research in general, and agent-based research in particular, from a laboratory environment into the real world has resulted in the emergence of several phenomena. These trends can be placed in three categories, namely, humanization, architectures, and learning and adaptation. These phenomena are distinct from the traditional logic-centered approach associated with the agent paradigm. Humanization of agents can be understood, among other aspects, in terms of the semantic quality of the design of agents. The need to humanize agents is to allow practitioners and users to make more effective use of this technology; it relates to the semantic quality of the agent design. Further, context-awareness is another aspect which has assumed importance in the light of ubiquitous computing and ambient intelligence. The widespread and varied use of agents, on the other hand, has created a need for agent-based software development frameworks and design patterns, as well as architectures for situated interaction, negotiation, e-commerce, e-business and information retrieval. Finally, traditional agent designs did not incorporate human-like abilities of learning and adaptation.
How to draw plausible conclusions from uncertain and conflicting sources of evidence is one of the major intellectual challenges of Artificial Intelligence. It is a prerequisite of the smart technology needed to help humans cope with the information explosion of the modern world. In addition, computational modelling of uncertain reasoning is a key to understanding human rationality. Previous computational accounts of uncertain reasoning have fallen into two camps: purely symbolic and numeric. This book represents a major advance by presenting a unifying framework which unites these opposing camps. The Incidence Calculus can be viewed as both a symbolic and a numeric mechanism. Numeric values are assigned indirectly to evidence via the possible worlds in which that evidence is true. This facilitates purely symbolic reasoning using the possible worlds and numeric reasoning via the probabilities of those possible worlds. Moreover, the indirect assignment solves some difficult technical problems, like the combination of dependent sources of evidence, which had defeated earlier mechanisms. Weiru Liu generalises the Incidence Calculus and then compares it to a succession of earlier computational mechanisms for uncertain reasoning: Dempster-Shafer Theory, Assumption-Based Truth Maintenance, Probabilistic Logic, Rough Sets, etc. She shows how each of them is represented and interpreted in Incidence Calculus. The consequence is a unified mechanism which includes both symbolic and numeric mechanisms as special cases. It provides a bridge between symbolic and numeric approaches, retaining the advantages of both and overcoming some of their disadvantages.
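The indirect assignment described above is easy to illustrate: propositions receive *incidences* (sets of possible worlds), logical connectives act on those sets symbolically, and numbers appear only when world weights are summed at the end. The worlds, weights and propositions below are invented for illustration.

```python
# Incidence calculus in miniature: a proposition's incidence is the set of
# possible worlds where it is true; probability is the total weight of that set.
WORLDS = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

def prob(incidence):
    return sum(WORLDS[w] for w in incidence)

i_rain = {"w1", "w2"}               # worlds where "rain" holds
i_wind = {"w2", "w3"}               # worlds where "wind" holds

# Purely symbolic combination on sets of worlds:
i_and = i_rain & i_wind             # i(A and B) = i(A) intersect i(B)
i_or  = i_rain | i_wind             # i(A or B)  = i(A) union i(B)
i_not = set(WORLDS) - i_rain        # i(not A)   = W minus i(A)

print(prob(i_and), prob(i_or), prob(i_not))
```

Note that `prob(i_and)` comes out as 0.3 rather than the 0.35 an independence assumption (`prob(i_rain) * prob(i_wind)`) would give: the set intersection carries the dependence between the two pieces of evidence directly, which is exactly the combination problem the indirect assignment is said to solve.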
The proliferation of Internet of Things (IoT) has enabled rapid enhancements for applications, not only in home and environment scenarios, but also in factory automation. Now, Industrial Internet of Things (IIoT) offers all the advantages of IoT to industry, with applications ranging from remote sensing and actuating, to decentralization and autonomy. In this book, the editor presents the IIoT and its place during the new industrial revolution (Industry 4.0) as it takes us to a better, sustainable, automated, and safer world. The book covers the cross relations and implications of IIoT with existing wired/wireless communication/networking and safety technologies of the Industrial Networks. Moreover, the book includes practical use-case scenarios from the industry for the application of IIoT on smart factories, smart cities, and smart grids. IoT-driven advances in commercial and industrial building lighting and in street lighting are presented as an example to shed light on the application domain of IIoT. The state of the art in Industrial Automation is also presented to give a better understanding of the enabling technologies, potential advantages, and challenges of the Industry 4.0 and IIoT. Finally, and importantly, the security section of the book covers the cybersecurity-related needs of IIoT users and the services that might address these needs. User privacy, data ownership, and proprietary information handling related to IIoT networks are all investigated. Intrusion prevention, detection, and mitigation are all covered at the conclusion of the book.
Visual Question Answering (VQA) usually combines visual inputs like images and videos with a natural language question concerning the input and generates a natural language answer as the output. This is by nature a multi-disciplinary research problem, involving computer vision (CV), natural language processing (NLP), knowledge representation and reasoning (KR), etc. Further, VQA is an ambitious undertaking, as it must overcome the challenges of general image understanding and the question-answering task, as well as the difficulties entailed by using large-scale databases with mixed-quality inputs. However, with the advent of deep learning (DL) and driven by the existence of advanced techniques in both CV and NLP and the availability of relevant large-scale datasets, we have recently seen enormous strides in VQA, with more systems and promising results emerging. This book provides a comprehensive overview of VQA, covering fundamental theories, models, datasets, and promising future directions. Given its scope, it can be used as a textbook on computer vision and natural language processing, especially for researchers and students in the area of visual question answering. It also highlights the key models used in VQA.
This book serves as a convenient entry point for researchers, practitioners, and students to understand the problems and challenges, learn state-of-the-art solutions for their specific needs, and quickly identify new research problems in their domains. The contributors to this volume describe the recent advancements in three related parts: (1) user engagements in the dissemination of information disorder; (2) techniques for detecting and mitigating disinformation; and (3) trending issues such as ethics, blockchain, clickbaits, etc. This edited volume will appeal to students, researchers, and professionals working on disinformation, misinformation and fake news in social media through a unique lens.
In the statistical domain, certain topics have received considerable attention during the last decade or so, necessitated by the growth and evolution of data and theoretical challenges. This growth has invariably been accompanied by computational advancement, which has presented end users as well as researchers with the necessary opportunities to handle data and implement modelling solutions for statistical purposes. Showcasing the interplay among a variety of disciplines, this book offers pioneering theoretical and applied solutions to practice-oriented problems. As a carefully curated collection of contributions from prominent international thought leaders, it fosters collaboration between statisticians and biostatisticians and provides an array of thought processes and tools to its readers. The book thereby creates an understanding and appreciation of recent developments, as well as an implementation of these contributions, within the broader framework of both academia and industry. Computational and Methodological Statistics and Biostatistics is composed of three main themes: recent developments in theory and applications of statistical distributions; recent developments in supervised and unsupervised modelling; and recent developments in biostatistics. It also features programming code and accompanying algorithms to enable readers to replicate and implement methodologies. Therefore, this monograph provides a concise point of reference for a variety of current trends and topics within the statistical domain. With interdisciplinary appeal, it will be useful to researchers, graduate students, and practitioners in statistics, biostatistics, clinical methodology, geology, data science, and actuarial science, amongst others.