Creating scientific workflow applications is a very challenging task due to the complexity of the distributed computing environments involved, the complex control and data flow requirements of scientific applications, and the lack of high-level language and tool support. In particular, sophisticated expertise in distributed computing is commonly required to determine the software entities that perform the computations of workflow tasks, the computers on which workflow tasks are to be executed, the actual execution order of workflow tasks, and the data transfer between them. Qin and Fahringer present a novel workflow language called Abstract Workflow Description Language (AWDL) and the corresponding standards-based, knowledge-enabled tool support, which simplifies the development of scientific workflow applications. AWDL is an XML-based language for describing scientific workflow applications at a high level of abstraction. It is designed in a way that allows users to concentrate on specifying such workflow applications without dealing with either the complexity of distributed computing environments or any specific implementation technology. This research monograph is organized into five parts: overview, programming, optimization, synthesis, and conclusion, and is complemented by an appendix and an extensive reference list. The topics covered in this book will be of interest to computer science researchers (e.g. in distributed programming, grid computing, or large-scale scientific applications), domain scientists who need to apply workflow technologies in their work, and engineers who want to develop distributed and high-throughput workflow applications, languages and tools.
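To make the execution-ordering problem concrete: given only an abstract task graph with declared data dependencies, a valid execution order can be derived mechanically. A minimal Python sketch (the task names are invented for illustration; this is not AWDL syntax or the authors' implementation):

```python
# Illustrative only: derive an execution order for abstract workflow
# tasks from their declared data dependencies, the kind of decision an
# abstract workflow language is designed to hide from the user.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks whose outputs it consumes
# (hypothetical task names).
workflow = {
    "preprocess": set(),
    "simulate":   {"preprocess"},
    "analyze":    {"simulate"},
    "visualize":  {"simulate", "analyze"},
}

order = list(TopologicalSorter(workflow).static_order())
print(order)  # e.g. ['preprocess', 'simulate', 'analyze', 'visualize']
```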
This edited book is published in honor of Dr. George J. Vachtsevanos, our Dr. V, currently Professor Emeritus, School of Electrical and Computer Engineering, Georgia Institute of Technology, on the occasion of his 70th birthday and for his more than 30 years of contribution to the discipline of Intelligent Control and its application to a wide spectrum of engineering and bioengineering systems. The book is nothing but a very small token of appreciation from Dr. V's former graduate students, his peers and colleagues in the profession, and not only, to the Scientist, the Engineer, the Professor, the mentor, but most important of all, to the friend and human being. All those who have met Dr. V over the years and have interacted with him in some professional and/or social capacity understand this statement: George never made anybody feel inferior to him, he helped and supported everybody, and he was there when anybody needed him. I was not Dr. V's student. I first met him and his wife Athena more than 26 years ago during one of their visits to RPI, in the house of my late advisor, Dr. George N. Saridis. Since then, I have been very fortunate to have had, and continue to have, interactions with him. It is not an exaggeration if I say that we all learned a lot from him.
The Knowledge Seeker is a useful system for developing various intelligent applications such as ontology-based search engines, ontology-based text classification systems, ontological agent systems, and semantic web systems. The Knowledge Seeker contains four ontological components. First, it defines the knowledge representation model, the Ontology Graph. Second, an ontology learning process based on chi-square statistics is proposed for automatically learning an Ontology Graph from texts in different domains. Third, it defines an ontology generation method that transforms the learning outcome into the Ontology Graph format, which can be processed by machines and also visualized for human validation. Fourth, it defines different ontological operations (such as similarity measurement and text classification) that can be carried out with the generated Ontology Graphs. The final goal of the Knowledge Seeker system framework is to improve traditional information systems with higher efficiency. In particular, it can increase the accuracy of a text classification system and enhance the search intelligence of a search engine. This is achieved by equipping the system with machine-processable ontologies.
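The chi-square scoring mentioned above can be sketched in a few lines. The counts and domain below are invented; this only illustrates the statistic, not the Knowledge Seeker's full learning pipeline:

```python
# Chi-square association between a candidate term and a text domain,
# computed from a 2x2 contingency table of document counts.
def chi_square(a, b, c, d):
    """a = domain docs with term, b = other docs with term,
    c = domain docs without term, d = other docs without term."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts for candidate terms in a "finance" domain.
candidates = {"stock": (80, 5, 20, 95),
              "market": (70, 10, 30, 90),
              "the": (95, 90, 5, 10)}
scores = {t: chi_square(*tbl) for t, tbl in candidates.items()}

# Terms scoring high become candidate nodes of the domain's Ontology Graph.
for term, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {s:.1f}")
```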
The rapid advances in performance and miniaturisation in microtechnology are constantly opening up new markets for the programmable logic controller (PLC). Specially designed controller hardware or PC-based controllers, extended by hardware and software with real-time capability, now control highly complex automation processes. This has been extended by the new subject of "safety-related controllers," aimed at preventing injury by machines during the production process. The different types of PLC cover a wide task spectrum, ranging from small network node computers and distributed compact units right up to modular, fault-tolerant, high-performance PLCs. They differ in performance characteristics such as processing speed, networking ability or the selection of I/O modules they support. Throughout this book, the term PLC is used to refer to the technology as a whole, both hardware and software, and not merely to the hardware architecture. The IEC 61131 programming languages can be used for programming classical PLCs, embedded controllers, industrial PCs and even standard PCs, if suitable hardware (e.g. a fieldbus board) for connecting sensors and actuators is available.
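The classical PLC execution model referred to here is the cyclic scan: snapshot the inputs, evaluate the control logic, write the outputs. A toy Python sketch of a start/stop motor latch (signal names invented; a real IEC 61131 program would use Structured Text or ladder logic):

```python
# One PLC scan cycle: read the input image, run the logic, produce the
# output image. The logic is the classic seal-in (latch) circuit.
def scan_cycle(inputs, state):
    start = inputs["start_button"] or state["motor_running"]
    motor = start and not inputs["emergency_stop"]
    state["motor_running"] = motor
    return {"motor_contactor": motor}

state = {"motor_running": False}
print(scan_cycle({"start_button": True,  "emergency_stop": False}, state))  # starts
print(scan_cycle({"start_button": False, "emergency_stop": False}, state))  # stays latched
print(scan_cycle({"start_button": False, "emergency_stop": True},  state))  # stops
```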
Speech-to-Speech Translation: A Massively Parallel Memory-Based Approach describes one of the world's first successful speech-to-speech machine translation systems. This system accepts speaker-independent continuous speech, and produces translations as audio output. Subsequent versions of this machine translation system have been implemented on several massively parallel computers, and these systems have attained translation performance in the milliseconds range. The success of this project triggered several massively parallel projects, as well as other massively parallel artificial intelligence projects throughout the world. Dr. Hiroaki Kitano received the distinguished 'Computers and Thought Award' from the International Joint Conferences on Artificial Intelligence in 1993 for his work in this area, and that work is reported in this book.
Frontiers in Belief Revision is a unique collection of leading-edge research in belief revision. It contains the latest innovative ideas of highly respected and pioneering experts in the area, including Isaac Levi, Krister Segerberg, Sven Ove Hansson, Didier Dubois, and Henri Prade. The book addresses foundational issues of inductive reasoning and minimal change, generalizations of the standard belief revision theories, strategies for iterated revisions, probabilistic beliefs, multi-agent environments, and a variety of data structures and mechanisms for implementations. This book is suitable for students and researchers interested in knowledge representation and in the state of the art of the theory and practice of belief revision.
Thinking in terms of facts and rules is perhaps one of the most common ways of approaching problem definition and problem solving, both in everyday life and under more formal circumstances. The best-known set of rules, the Ten Commandments, has accompanied us since the times of Moses; the Decalogue proved to be simple but powerful, concise and universal. It is logically consistent and complete. There have also been many other attempts to impose rule-based regulations in almost all areas of life, including professional work, education, medical services, taxes, etc. Typical examples include various codes (e.g. legal or traffic codes), regulations (especially military ones), and many systems of customary or informal rules. The universal nature of rule-based formulations of behavior or inference principles follows from rules being a simple and intuitive concept with very high expressive power. Moreover, rules in fact encode functional aspects of behavior and can be used for modeling numerous phenomena.
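A minimal sketch of fact-and-rule inference of the kind the book builds on, here as naive forward chaining with invented facts and rules:

```python
# Naive forward chaining: fire every rule whose conditions are
# satisfied until no new facts can be derived.
facts = {"raining"}
rules = [({"raining"}, "wet_road"),
         ({"wet_road"}, "drive_slowly")]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'raining', 'wet_road', 'drive_slowly'}
```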
Pervasive healthcare is the concept of providing healthcare to anyone, at any time, and anywhere by removing constraints of time and location while increasing both the coverage and the quality of care. Pervasive Healthcare Monitoring is at the forefront of this research, and presents the ways in which mobile and wireless technologies can be used to implement the vision of pervasive healthcare. This vision includes prevention, healthcare maintenance and checkups; short-term monitoring (home healthcare monitoring), long-term monitoring (nursing home), and personalized healthcare monitoring; and incidence detection and management, emergency intervention, and transportation and treatment. Pervasive healthcare applications include pervasive health monitoring, intelligent emergency management systems, pervasive healthcare data access, and ubiquitous mobile telemedicine. The book fills the need for a research-oriented volume on the wide array of emerging healthcare applications and services, including the treatment of several new wireless technologies and the ways in which they will implement the vision of pervasive healthcare. It is written primarily for university faculty and graduate students in the field of healthcare technologies, and for industry professionals involved in healthcare IT research, design, and development.
In knowledge-based natural language generation, issues of formal knowledge representation meet the linguistic problem of choosing the most appropriate verbalization in a particular situation of utterance. Lexical Semantics and Knowledge Representation in Multilingual Text Generation presents a new approach to systematically linking the realms of lexical semantics and knowledge represented in a description logic. For language generation from such abstract representations, lexicalization is taken as the central step: when words are chosen to cover the various parts of the content representation, the principal decisions on conveying the intended meaning are made. A preference mechanism is used to construct the utterance that is best tailored to parameters representing the context. The book develops the means for systematically deriving a set of paraphrases from the same underlying representation, with the emphasis on events and verb meaning. Furthermore, the same mapping mechanism is used to achieve multilingual generation: English and German output are produced in parallel, on the basis of an adequate division between language-neutral and language-specific (lexical and grammatical) knowledge. The book provides detailed insights into designing the representations and organizing the generation process. Readers with a background in artificial intelligence, cognitive science, knowledge representation, linguistics, or natural language processing will find a model of language production that can be adapted to a variety of purposes.
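The preference mechanism can be pictured schematically: candidate words cover parts of the content representation and are ranked against context parameters. A Python sketch with invented entries (the book's actual mechanism operates over description-logic representations, not dictionaries):

```python
# Schematic preference-driven lexical choice: prefer candidates that
# cover more of the content, then those that fit the context best.
content = {"event": "motion", "manner": "fast", "agent": "vehicle"}
candidates = [
    ("race",  {"event", "manner"}, {"register": "informal"}),
    ("speed", {"event", "manner"}, {"register": "neutral"}),
    ("move",  {"event"},           {"register": "neutral"}),
]
context = {"register": "neutral"}

def score(covered, prefs):
    fit = sum(1 for k, v in prefs.items() if context.get(k) == v)
    return (len(covered), fit)

best = max(candidates, key=lambda c: score(c[1], c[2]))
print(best[0])  # 'speed': covers event+manner and matches the register
```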
This open access book provides a comprehensive view on data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, "Foundations and Contexts", provides a general overview of building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, "Data Space Technologies", subsequently details various implementation aspects of IDS and GAIA-X, including data usage control, the usage of blockchain technologies, and semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV offers an overview of several "Solutions and Applications", including products and experiences from companies such as Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook on future developments. In doing so, it aims to advance the vision of a social data market economy based on data spaces that embrace trust and data sovereignty.
The papers in this volume comprise the refereed proceedings of the Second IFIP International Conference on Computer and Computing Technologies in Agriculture (CCTA 2008), held in Beijing, China, in 2008. The conference was cooperatively sponsored and organized by the China Agricultural University (CAU), the National Engineering Research Center for Information Technology in Agriculture (NERCITA), the Chinese Society of Agricultural Engineering (CSAE), the International Federation for Information Processing (IFIP), the Beijing Society for Information Technology in Agriculture, and the Beijing Research Center for Agro-products Test and Farmland Inspection. Related departments of China's central government, including the Ministry of Science and Technology, the Ministry of Industry and Information Technology and the Ministry of Education, as well as the Beijing Municipal Natural Science Foundation and the Beijing Academy of Agricultural and Forestry Sciences, greatly contributed to and supported this event. The conference was a good platform for bringing together scientists and researchers, agronomists and information engineers, extension workers and entrepreneurs from a range of disciplines concerned with the impact of information technology on sustainable agriculture and rural development. Participants included representatives of all the supporting organizations and a group of invited speakers, experts and researchers from more than 15 countries, including the Netherlands, Spain, Portugal, Mexico, Germany, Greece, Australia, Estonia, Japan, Korea, India, Iran, Nigeria, Brazil and China.
This book highlights new trends and challenges in research on agents and the new digital and knowledge economy. It includes papers on business process management, agent-based modeling and simulation, and anthropic-oriented computing that were originally presented at the 15th International KES Conference on Agents and Multi-Agent Systems: Technologies and Applications (KES-AMSTA 2021), held as a virtual conference on June 14-16, 2021. The papers cover topics such as software agents, multi-agent systems, agent modeling, mobile and cloud computing, big data analysis, business intelligence, artificial intelligence, social systems, embedded computer systems, and nature-inspired manufacturing, all of which contribute to the modern digital economy.
Knowledge Representation and Relation Nets introduces a fresh approach to knowledge representation that can be used to organize study material in a convenient, teachable and learnable form. The method extends and formalizes concept mapping by developing knowledge representation as a structure of concepts and the relationships among them. Such a formal description of analogy results in a controlled method of modeling 'new' knowledge in terms of 'existing' knowledge in teaching and learning situations, and its applications result in a consistent and well-organized approach to problem solving. Additionally, strategies for the presentation of study material to learners arise naturally in this representation. While the theory of relation nets is dealt with in detail in part of this book, the reader need not master the formal mathematics in order to apply the theory to this method of knowledge representation. To assist the reader, each chapter starts with a brief summary, and the main ideas are illustrated by examples. The reader is also given an intuitive view of the formal notions used in the applications by means of diagrams, informal descriptions, and simple sets of construction rules. Knowledge Representation and Relation Nets is an excellent source for teachers, courseware designers and researchers in knowledge representation, cognitive science, theories of learning, the psychology of education, and structural modeling.
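In the simplest reading, such a representation stores concepts and the labeled relationships among them. A bare-bones Python sketch with invented triples, far simpler than relation nets proper:

```python
# Concepts and labeled relations as (subject, relation, object) triples.
triples = [
    ("triangle", "is_a", "polygon"),
    ("polygon",  "has",  "perimeter"),
    ("square",   "is_a", "polygon"),
]

def related(concept):
    """All relations a concept takes part in."""
    return [t for t in triples if concept in (t[0], t[2])]

# 'New' knowledge about squares can be anchored to 'existing' knowledge
# about polygons via the shared is_a relation.
print(related("polygon"))
```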
This book is a complete introduction and applications guide to feature-based modeling. It introduces the reader to features, gives an overview of geometric modeling techniques, discusses the conceptual development of features as modeling entities, illustrates the use of features for a variety of engineering design applications, and develops a set of broad functional requirements while addressing high-level design issues.
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 - to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
This monograph introduces a novel multiset-based conceptual, mathematical and knowledge engineering paradigm, called the multigrammatical framework (MGF), used for planning and scheduling in resource-consuming, resource-producing (industrial) and resource-distributing (economic) sociotechnological systems (STS). This framework is meant to enable smart operation not only in a "business-as-usual" mode, but also in extraordinary, highly volatile or hazardous environments. It is the result of the convergence and deep integration, into a unified, flexible and effectively implemented formalism operating on multisets, of several well-known paradigms from classical operations research and modern knowledge engineering: mathematical programming, game theory, optimal scheduling, logic programming and constraint programming. The mathematical background needed for MGF, its algorithmics, applications and implementation issues, as well as its nexus with known models from operations research and theoretical computer science, are considered. The resilience and recovery issues of an STS are studied by applying the MGF toolkit, with special attention paid to the multigrammatical assessment of the resilience of energy infrastructures. MGF-represented resource-based games are introduced, and directions for further development are discussed. The author presents multiple applications to business intelligence, critical infrastructure, ecology, economy and industry. This book is addressed to scholars working in the areas of theoretical and applied computer science, artificial intelligence, systems analysis, operations research, mathematical economy and critical infrastructure protection, to engineers developing software-intensive solutions for implementation of the knowledge-based digital economy and Industry 4.0, as well as to students, doctoral candidates and university staff. Foundational knowledge of set theory, mathematical logic and routine operations on databases is needed to read this book. The content of the monograph is presented gradually, from simple to complex, in a step-by-step manner. Multiple examples and accompanying figures are included in order to support the explanation of the various notions, expressions and algorithms.
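The core multiset intuition can be sketched with Python's Counter: a rule consumes one multiset of resources and produces another, and rules fire while resources last. Resource names and quantities below are invented, and this is only a toy illustration, not the MGF formalism itself:

```python
# Multiset rewriting with Counter as the multiset type.
from collections import Counter

state = Counter({"ore": 4, "energy": 6})
# Each rule consumes one multiset of resources and produces another.
rules = [(Counter({"ore": 1, "energy": 2}), Counter({"steel": 1}))]

def apply_once(state, consume, produce):
    """Fire the rule if enough resources exist; report whether it fired."""
    if all(state[r] >= n for r, n in consume.items()):
        return state - consume + produce, True
    return state, False

progress = True
while progress:
    progress = False
    for consume, produce in rules:
        state, ok = apply_once(state, consume, produce)
        progress = progress or ok

print(state)  # Counter({'steel': 3, 'ore': 1}): energy exhausted
```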
This book constitutes the refereed proceedings of the IFIP Industry Oriented Conferences held at the 20th World Computer Congress in Milano, Italy on September 7-10, 2008. The IFIP series publishes state-of-the-art results in the sciences and technologies of information and communication. The scope of the series includes: foundations of computer science; software theory and practice; education; computer applications in technology; communication systems; systems modeling and optimization; information systems; computers and society; computer systems technology; security and protection in information processing systems; artificial intelligence; and human-computer interaction. Proceedings and post-proceedings of refereed international conferences in computer science and interdisciplinary fields are featured. These results often precede journal publication and represent the most current research. The principal aim of the IFIP series is to encourage education and the dissemination and exchange of information about all aspects of computing.
The development of modern knowledge-based systems, for applications ranging from medicine to finance, necessitates going well beyond traditional rule-based programming. Frontiers of Expert Systems: Reasoning with Limited Knowledge attempts to satisfy such a need, introducing exciting and recent advances at the frontiers of the field of expert systems. Beginning with the central topics of logic, uncertainty and rule-based reasoning, each chapter in the book presents a different perspective on how we may solve problems that arise due to limitations in the knowledge of an expert system's reasoner. Successive chapters address (i) the fundamentals of knowledge-based systems, (ii) formal inference, and reasoning about models of a changing and partially known world, (iii) uncertainty and probabilistic methods, (iv) the expression of knowledge in rule-based systems, (v) evolving representations of knowledge as a system interacts with the environment, (vi) applying connectionist learning algorithms to improve on knowledge acquired from experts, (vii) reasoning with cases organized in indexed hierarchies, (viii) the process of acquiring and inductively learning knowledge, (ix) extraction of knowledge nuggets from very large data sets, and (x) interactions between multiple specialized reasoners with specialized knowledge bases. Each chapter takes the reader on a journey from elementary concepts to topics of active research, providing a concise description of several topics within and related to the field of expert systems, with pointers to practical applications and other relevant literature. Frontiers of Expert Systems: Reasoning with Limited Knowledge is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
There is tremendous interest in the design and application of agents in virtually every area, including avionics, business, the internet, engineering, health sciences and management. There is no single agreed definition of an agent, but we can define one as a computer program that acts autonomously or semi-autonomously on behalf of the user. In the last five years, the transition of intelligent systems research in general, and agent-based research in particular, from the laboratory into the real world has given rise to several phenomena. These trends can be placed in three categories, namely humanization, architectures, and learning and adaptation. These phenomena are distinct from the traditional logic-centered approach associated with the agent paradigm. The humanization of agents can be understood, among other aspects, in terms of the semantic quality of agent design. The need to humanize agents arises from allowing practitioners and users to make more effective use of this technology. Further, context-awareness is another aspect that has assumed importance in the light of ubiquitous computing and ambient intelligence. The widespread and varied use of agents has, on the other hand, created a need for agent-based software development frameworks and design patterns, as well as architectures for situated interaction, negotiation, e-commerce, e-business and information retrieval. Finally, traditional agent designs did not incorporate human-like abilities of learning and adaptation.
This book provides a broad overview of the benefits of a Systems Engineering design philosophy in architecting complex systems composed of artificial intelligence (AI), machine learning (ML) and humans situated in chaotic environments. The major topics include emergence, verification and validation of systems using AI/ML, and human systems integration to develop robust and effective human-machine teams, where the machines may have varying degrees of autonomy due to the sophistication of their embedded AI/ML. The chapters not only describe what has been learned, but also raise questions that must be answered to further advance the general Science of Autonomy. The science of how humans and machines operate as a team requires insights from, among others, disciplines such as the social sciences, national and international jurisprudence, ethics and policy, and sociology and psychology. The social sciences inform how context is constructed, how trust is affected when humans and machines depend upon each other, and how human-machine teams need a shared language of explanation. National and international jurisprudence determine legal responsibilities for non-trivial human-machine failures, ethical standards shape global policy, and sociology provides a basis for understanding team norms across cultures. Insights from psychology may help us to understand the negative impact on humans if AI/ML-based machines begin to outperform their human teammates and consequently diminish their value or importance. This book invites professionals and the curious alike to witness a new frontier open as the Science of Autonomy emerges.
How to draw plausible conclusions from uncertain and conflicting sources of evidence is one of the major intellectual challenges of Artificial Intelligence. It is a prerequisite of the smart technology needed to help humans cope with the information explosion of the modern world. In addition, computational modelling of uncertain reasoning is a key to understanding human rationality. Previous computational accounts of uncertain reasoning have fallen into two camps: purely symbolic and numeric. This book represents a major advance by presenting a unifying framework which unites these opposing camps. The Incidence Calculus can be viewed as both a symbolic and a numeric mechanism. Numeric values are assigned indirectly to evidence via the possible worlds in which that evidence is true. This facilitates purely symbolic reasoning using the possible worlds and numeric reasoning via the probabilities of those possible worlds. Moreover, the indirect assignment solves some difficult technical problems, like the combination of dependent sources of evidence, which had defeated earlier mechanisms. Weiru Liu generalises the Incidence Calculus and then compares it to a succession of earlier computational mechanisms for uncertain reasoning: Dempster-Shafer Theory, Assumption-Based Truth Maintenance, Probabilistic Logic, Rough Sets, etc. She shows how each of them is represented and interpreted in Incidence Calculus. The consequence is a unified mechanism which includes both symbolic and numeric mechanisms as special cases. It provides a bridge between symbolic and numeric approaches, retaining the advantages of both and overcoming some of their disadvantages.
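The indirect assignment works roughly as follows: each proposition receives an incidence set of possible worlds, conjunction becomes set intersection, and a probability is the summed weight of the worlds in the set. A small Python sketch with invented worlds and weights:

```python
# Core incidence calculus idea: propositions map to sets of possible
# worlds; numeric probability falls out of the worlds' weights.
worlds = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
incidence = {"rain": {"w1", "w2"}, "wind": {"w2", "w3"}}

def p(world_set):
    return sum(worlds[w] for w in world_set)

both = incidence["rain"] & incidence["wind"]   # i(rain AND wind)
print(p(incidence["rain"]))  # 0.7
print(p(both))               # 0.3: symbolic intersection handles the
                             # dependence between the two propositions
```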
Visual Question Answering (VQA) usually combines visual inputs such as images and video with a natural language question concerning the input, and generates a natural language answer as the output. This is by nature a multi-disciplinary research problem, involving computer vision (CV), natural language processing (NLP), knowledge representation and reasoning (KR), and more. Further, VQA is an ambitious undertaking, as it must overcome the challenges of general image understanding and the question-answering task, as well as the difficulties entailed by using large-scale databases with mixed-quality inputs. However, with the advent of deep learning (DL), driven by advanced techniques in both CV and NLP and the availability of relevant large-scale datasets, we have recently seen enormous strides in VQA, with more systems and promising results emerging. This book provides a comprehensive overview of VQA, covering fundamental theories, models, datasets, and promising future directions. Given its scope, it can be used as a textbook on computer vision and natural language processing, especially for researchers and students in the area of visual question answering. It also highlights the key models used in VQA.
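The typical encode-fuse-classify pattern behind many VQA systems can be sketched as follows; the dimensions and vocabulary are placeholders, and real systems use pretrained vision and language encoders rather than this skeletal model:

```python
# Minimal "encode, fuse, classify" VQA skeleton in PyTorch.
import torch
import torch.nn as nn

class TinyVQA(nn.Module):
    def __init__(self, vocab_size=1000, answer_count=50, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.img_proj = nn.Linear(2048, dim)   # e.g. pooled CNN features
        self.classifier = nn.Linear(2 * dim, answer_count)

    def forward(self, image_feats, question_ids):
        q = self.word_emb(question_ids).mean(dim=1)   # bag-of-words question
        v = self.img_proj(image_feats)
        return self.classifier(torch.cat([v, q], dim=-1))  # answer logits

model = TinyVQA()
logits = model(torch.randn(1, 2048), torch.randint(0, 1000, (1, 8)))
print(logits.shape)  # torch.Size([1, 50])
```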
This book constitutes the refereed post-conference proceedings of the Fourth IFIP International Cross-Domain Conference on Internet of Things, IFIPIoT 2021, held virtually in November 2021. The 15 full papers presented were carefully reviewed and selected from 33 submissions. Also included is a summary of two panel sessions held at the conference. The papers are organized in the following topical sections: challenges in IoT applications and research, modernizing agricultural practice using IoT, cyber-physical IoT systems in the wildfire context, IoT for smart health, security, and methods.
This book promotes a meaningful and appropriate dialogue and cross-disciplinary partnerships on Artificial Intelligence (AI) in governance and disaster management. The frequency and the cost of losses and damages due to disasters are rising every year. Disasters ranging from wildfires and tsunamis to droughts, hurricanes, floods and landslides, combined with chemical, nuclear and biological disasters of epidemic proportions, have increased human vulnerability and threatened ecosystem sustainability. Life is not as it used to be, and governance to manage disasters cannot be business as usual. The quantum and proportion of responsibilities placed on the emergency services have increased many times over, straining them beyond their human capacities. It is time that the struggling disaster management services were supported and facilitated by the new technology of combining Artificial Intelligence (AI) and Machine Learning (ML) with Data Analytics Technologies (DAT) to serve people and government in disaster management. AI and ML have advanced to a state where they can be utilized for many operations in disaster risk reduction. Even though many disasters cannot be prevented, and a number of them are blind natural disasters, appropriate application of AI and ML can achieve quick predictions, vulnerability identification, and the classification of relief and rescue operations.
You may like...
- Exploring Future Opportunities of… by Madhulika Bhatia, Tanupriya Choudhury, … (Hardcover, R6,683)
- Research Anthology on Artificial Neural… by Information R Management Association (Hardcover, R12,938)
- Blockchain Technology for Emerging… by S. K. Hafizul Islam, Arup Kumar Pal, … (Paperback, R2,941)
- Research Anthology on Artificial Neural… by Information R Management Association (Hardcover, R12,932)
- 5G IoT and Edge Computing for Smart… by Akash Kumar Bhoi, Victor Hugo Costa de Albuquerque, … (Paperback, R2,588)
- Recent Trends in Computational… by Siddhartha Bhattacharyya, Paramartha Dutta, … (Paperback, R3,483)
- Deep Learning Applications for… by Monica R. Mundada, Seema S., … (Hardcover, R6,648)
- Probabilistic and Causal Inference - The… by Hector Geffner, Rina Dechter, … (Hardcover, R3,885)