This volume examines the concept of falsification as a central notion of semantic theories and its effects on logical laws. The point of departure is the general constructivist line of argument that Michael Dummett has offered over the last decades. From there, the author examines the ways in which falsifications can enter into a constructivist semantics, displays the full spectrum of options, and discusses the logical systems most suitable to each one of them. While the idea of introducing falsifications into the semantic account is Dummett's own, the many ways in which falsificationism departs quite radically from verificationism are here spelled out in detail for the first time. The volume is divided into three large parts. The first part provides important background information about Dummett's program, intuitionism and logics with gaps and gluts. The second part is devoted to the introduction of falsifications into the constructive account and shows that there is more than one way in which one can do this. The third part details the logical effects of these various moves. In the end, the book shows that the constructive path may branch in different directions: towards intuitionistic logic, dual intuitionistic logic and several variations of Nelson logics. The author argues that, on balance, the latter are the more promising routes to take. "Kapsner's book is the first detailed investigation of how to incorporate the notion of falsification into formal logic. This is a fascinating logico-philosophical investigation, which will interest non-classical logicians of all stripes." Graham Priest, Graduate Center, City University of New York, and University of Melbourne
The papers collected in this volume are selected as a sample of the progress in Natural Language Processing (NLP) made within the Italian NLP community, and especially attested by the PARLI project. PARLI (Portale per l'Accesso alle Risorse in Lingua Italiana) is a project partially funded by the Ministero Italiano per l'Universita e la Ricerca (PRIN 2008) from 2008 to 2012 for monitoring and fostering the harmonious growth and coordination of the activities of Italian NLP. It was proposed by various teams of researchers working in Italian universities and research institutions. In keeping with the spirit of the PARLI project, most of the resources and tools created within the project and described here are freely distributed; rather than ending their life with the project itself, they are intended to be a key factor in the future development of computational linguistics.
Although there has been much progress in developing theories, models and systems in the areas of Natural Language Processing (NLP) and Vision Processing (VP), there has up to now been little progress on integrating these two subareas of Artificial Intelligence (AI). This book contains a set of edited papers on recent advances in the theories, computational models and systems of the integration of NLP and VP. The volume includes original work of notable researchers: Alex Waibel outlines multimodal interfaces, including studies in speech, gesture and pointing; eye-gaze, lip motion and facial expression; handwriting, face recognition, face tracking and sound localization in a connectionist framework. Anthony Cohn and John Gooday use spatial relations to describe visual languages. Naoyuki Okada considers intentions of agents in visual environments. In addition to these studies, the volume includes many recent advances from North America, Europe and Asia, demonstrating that the integration of Natural Language Processing and Vision is truly an international challenge.
This book presents recent advances in NLP and speech technology, a topic attracting increasing interest in a variety of fields through its myriad applications, such as the demand for speech-guided touchless technology during the Covid-19 pandemic. The authors present results of recent experimental research that provides contributions and solutions to different issues related to speech technology and speech in industry. Technologies include natural language processing, automatic speech recognition (for under-resourced dialects) and speech synthesis that are useful for applications such as intelligent virtual assistants, among others. Applications cover areas such as sentiment analysis and opinion mining, Arabic named entity recognition, and language modelling. This book is relevant for anyone interested in the latest in language and speech technology.
This open access book describes the results of natural language processing and machine learning methods applied to clinical text from electronic patient records. It is divided into twelve chapters. Chapters 1-4 discuss the history and background of the original paper-based patient records, their purpose, and how they are written and structured. These initial chapters do not require any technical or medical background knowledge. The remaining eight chapters are more technical in nature and describe various medical classifications and terminologies such as ICD diagnosis codes, SNOMED CT, MeSH, UMLS, and ATC. Chapters 5-10 cover basic tools for natural language processing and information retrieval, and how to apply them to clinical text. The difference between rule-based and machine learning-based methods, as well as between supervised and unsupervised machine learning methods, are also explained. Next, ethical concerns regarding the use of sensitive patient records for research purposes are discussed, including methods for de-identifying electronic patient records and safely storing patient records. The book's closing chapters present a number of applications in clinical text mining and summarise the lessons learned from the previous chapters. The book provides a comprehensive overview of technical issues arising in clinical text mining, and offers a valuable guide for advanced students in health informatics, computational linguistics, and information retrieval, and for researchers entering these fields.
Semantic agent systems are about the integration of the semantic Web, software agents, and multi-agent systems technologies. As in past disciplinary mergers (e.g. biology and informatics yielding bioinformatics), a whole new perspective is emerging with semantic agent systems. In this context, the semantic Web is a Web of semantically linked data which aims to enable man and machine to execute tasks in tandem. Here, software agents in a multi-agent system, acting as delegates of humans, are empowered to use semantically linked data. This edited book "Semantic Agent Systems: Foundations and Applications" proposes contributions on a wide range of topics on foundations and applications written by a selection of international experts. It first introduces in an accessible style the nature of semantic agent systems. Then it explores with numerous illustrations new frontiers in software agent technology. "Semantic Agent Systems: Foundations and Applications" is recommended for scientists, experts, researchers, and learners in the field of artificial intelligence, the semantic Web, software agents, and multi-agent systems technologies.
The book presents the history of time-domain representation and the extent of its development along with that of spectral-domain representation in the cognitive and technology domains. It discusses all the cognitive experiments related to this development, along with details of technological developments related to both automatic speech recognition (ASR) and text-to-speech synthesis (TTS), and introduces a viable time-domain representation for both objective and subjective analysis, as an alternative to the well-known spectral representation. The book also includes a new cohort study on the use of lexical knowledge in ASR. India has numerous official dialects, and spoken-language technology development is a burgeoning area. In fact, TTS and ASR taken together constitute the most important technology for empowering people. As such, the book describes time-domain representation in such a way that it can be easily and seamlessly incorporated into ASR and TTS research and development. In short, it is a valuable guidebook for the development of ASR and TTS in all the Indian Standard Dialects using signal-domain parameters.
The information revolution is upon us. Whereas the industrial revolution heralded the systematic augmentation of human physical limitations by harnessing external energy sources, the information revolution strives to augment human memory and mental processing limitations by harnessing external computational resources. Computers can accumulate, transmit and output much more information, and in a more timely fashion, than more conventional printed or spoken media. Of greater interest, however, is the computer's ability to process, classify and retrieve information selectively in response to the needs of each human user. One cannot drink from the fire hydrant of information without being immediately flooded with irrelevant text. Recent technological advances such as optical character readers only exacerbate the problem by increasing the volume of electronic text. Just as steam and internal combustion engines brought powerful energy sources under control to yield useful work in the industrial revolution, so must we build computational engines that control and apply the vast information sources that they may yield useful knowledge. Information science is the study of systematic means to control, classify, process and retrieve vast amounts of information in electronic form. In particular, several methodologies have been developed to classify texts manually by armies of human indexers, as illustrated quite clearly at the National Library of Medicine, and many computational techniques have been developed to search textual databases automatically, such as full-text keyword searches.
It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high-quality speech communication. This practical, hands-on book presents speech intelligibility measurement methods so that readers can start measuring or estimating the speech intelligibility of their own systems. The book also introduces subjective and objective speech quality measures, and describes in detail speech intelligibility measurement methods. It introduces a diagnostic rhyme test which uses rhyming word-pairs, and includes: an investigation into the effect of word familiarity on speech intelligibility; speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test; and estimation of speech intelligibility using objective measures, including the ITU standard PESQ measures and automatic speech recognizers.
This volume represents the first attempt in the field of language pedagogy to apply a systems approach to issues in English language education. In the literature of language education, or more specifically, second or foreign language learning and teaching, each topic or issue has often been dealt with independently, and been treated as an isolated item. Taking grammar instruction as an example, grammatical items are often taught in a sequential, step-by-step manner; there has been no "road map" in which the interrelations between the various items are demonstrated. This may be one factor that makes it more difficult for students to learn the language organically. The topics covered in this volume, including language acquisition, pedagogical grammar, and teacher collaboration, are viewed from a holistic perspective. In other words, language pedagogy is approached as a dynamic system of interrelations. In this way, "emergent properties" are expected to manifest. This book is recommended for anyone involved in language pedagogy, including researchers, teachers, and teacher trainers, as well as learners.
Advances in graph-based natural language processing (NLP) and information retrieval tasks have shown the importance of processing using the Graph of Words method. This book covers recent concrete information, from the basics to advanced level, about graph-based learning, such as neural network-based approaches, computational intelligence for learning parameters and feature reduction, and network science for graph-based NLP. It also contains information about language generation based on graphical theories and language models. Features: presents a comprehensive study of the interdisciplinary graphical approach to NLP; covers recent computational intelligence techniques for graph-based neural network models; discusses advances in random walk-based techniques, semantic webs, and lexical networks; explores recent research into NLP for graph-based streaming data; reviews advances in knowledge graph embedding and ontologies for NLP approaches. This book is aimed at researchers and graduate students in computer science, natural language processing, and deep and machine learning.
Computational Analysis and Understanding of Natural Languages: Principles, Methods and Applications, Volume 38, the latest release in this monograph series, provides a cohesive and integrated exposition of recent advances and associated applications. It includes new chapters on linguistics (core concepts and principles), grammars, open-source libraries, application frameworks, workflow systems, mathematical essentials, probability, inference and prediction methods, random processes, Bayesian methods, machine learning, artificial neural networks for natural language processing, information retrieval, language core tasks, language understanding applications, and more. The synergistic confluence of linguistics, statistics, big data, and high-performance computing is the underlying force behind the recent and dramatic advances in analyzing and understanding natural languages, making this series all the more important.
Computational semantics is concerned with computing the meanings of linguistic objects such as sentences, text fragments, and dialogue contributions. As such it is the interdisciplinary child of semantics, the study of meaning and its linguistic encoding, and computational linguistics, the discipline that is concerned with computations on linguistic objects.
This book gathers outstanding research papers presented at the 5th International Joint Conference on Advances in Computational Intelligence (IJCACI 2021), held online during October 23-24, 2021. IJCACI 2021 was jointly organized by Jahangirnagar University (JU), Bangladesh, and South Asian University (SAU), India. The book presents novel contributions in areas of computational intelligence and serves as reference material for advanced research. The topics covered are collective intelligence, soft computing, optimization, cloud computing, machine learning, intelligent software, robotics, data science, data security, big data analytics, and signal and natural language processing.
This book applies formal language and automata theory in the context of Tibetan computational linguistics; further, it constructs a Tibetan-spelling formal grammar system that generates a Tibetan-spelling formal language group, and an automata group that can recognize the language group. In addition, it investigates the application technologies of Tibetan-spelling formal language and automata. Given its creative and original approach, the book offers a valuable reference guide for researchers, teachers and graduate students in the field of computational linguistics.
This book is a description of some of the most recent advances in text classification as part of a concerted effort to achieve computer understanding of human language. In particular, it addresses state-of-the-art developments in the computation of higher-level linguistic features, ranging from etymology to grammar and syntax for the practical task of text classification according to genres, registers and subject domains. Serving as a bridge between computational methods and sophisticated linguistic analysis, this book will be of particular interest to academics and students of computational linguistics as well as professionals in natural language engineering.
This book gathers outstanding research papers presented at the 2nd International Conference on Artificial Intelligence: Advances and Applications (ICAIAA 2021), held at Poornima College of Engineering, Jaipur, India, during 27-28 March 2021. It covers research carried out by bachelor's, master's and doctoral students, faculty, and industry practitioners in the areas of artificial intelligence, machine learning, and deep learning applications in healthcare, agriculture, business, security, etc. It also covers research in core concepts of computer networks, intelligent system design and deployment, real-time systems, WSN, sensors and sensor nodes, SDN, NFV, etc.
This edited volume covers the development and application of metalanguages for concretely describing and communicating translation processes in practice. In a modern setting of project-based translation, it is crucial to bridge the gaps between the various actors involved in the translation process, especially among clients, translation service providers (TSPs), translators, and technology developers. However, we have been confronted with a lack of common understanding among them about the notion and detailed mechanisms of translation. Against this backdrop, we are developing systematic, fine-grained metalanguages that are designed to describe and analyse translation processes in concrete terms. Underpinned by the rich accumulation of theoretical findings in translation studies and established standards of practical translation services, such as ISO 17100, our metalanguages extensively cover the core processes in translation projects, namely project management, source document analysis, translation, and revision. Gathering authors with diverse backgrounds and expertise, this book proffers the fruits of the contributors' collaborative endeavour; it not only provides practicable metalanguages, but also reports on wide-ranging case studies on the application of metalanguages in practical and pedagogical scenarios. This book supplies concrete guidance for those who are involved in translation practice and translation training/education. In addition to being of practical use, the metalanguages make the translation process explicit. As such, this book provides essential insights for researchers and students in the field of translation studies.
Data driven methods have long been used in Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) synthesis and have more recently been introduced for dialogue management, spoken language understanding, and Natural Language Generation. Machine learning is now present "end-to-end" in Spoken Dialogue Systems (SDS). However, these techniques require data collection and annotation campaigns, which can be time-consuming and expensive, as well as dataset expansion by simulation. In this book, we provide an overview of the current state of the field and of recent advances, with a specific focus on adaptivity.
To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state-of-the-art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multilingual Semantic Web as envisioned in this volume will support the following functionalities: (1) responding to information needs in any language with regard to semantically structured data available on the Semantic Web and Linked Open Data (LOD) cloud, (2) verbalizing and accessing semantically structured data, ontologies or other conceptualizations in multiple languages, (3) harmonizing, integrating, aggregating, comparing and repurposing semantically structured data across languages and (4) aligning and reconciling ontologies or other conceptualizations across languages. The volume is divided into three main sections: Principles, Methods and Applications. The section on "Principles" discusses models, architectures and methodologies that enrich the current Semantic Web architecture with features necessary to handle multiple languages. The section on "Methods" describes algorithms and approaches for solving key issues related to the construction of the Multilingual Semantic Web. The section on "Applications" describes the use of Multilingual Semantic Web based approaches in the context of several application domains. 
This volume is essential reading for all academic and industrial researchers who want to embark on this new research field at the intersection of various research topics, including the Semantic Web, Linked Data, natural language processing, computational linguistics, terminology and information retrieval. It will also be of great interest to practitioners who are interested in re-examining their existing infrastructure and methodologies for handling multiple languages in Web applications or information retrieval systems.
This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.
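To give a flavor of the kind of method such a book covers, here is a minimal sketch, not taken from the book's own listings, of building a prefix tree acceptor (PTA) from positive example strings, the usual starting point for state-merging inference algorithms such as RPNI. All names and sample strings are illustrative.

```python
# Build a prefix tree acceptor (PTA): a deterministic automaton that
# accepts exactly the positive sample strings. State-merging algorithms
# such as RPNI start from this automaton and generalize it.

def build_pta(samples):
    """Return (transitions, accepting) built from positive samples."""
    transitions = {0: {}}   # state -> {symbol: next_state}
    accepting = set()
    next_state = 1
    for word in samples:
        state = 0
        for symbol in word:
            if symbol not in transitions[state]:
                transitions[state][symbol] = next_state
                transitions[next_state] = {}
                next_state += 1
            state = transitions[state][symbol]
        accepting.add(state)    # the state reached by a sample accepts
    return transitions, accepting

def accepts(transitions, accepting, word):
    """Run the automaton on word; reject on any missing transition."""
    state = 0
    for symbol in word:
        if symbol not in transitions[state]:
            return False
        state = transitions[state][symbol]
    return state in accepting

transitions, accepting = build_pta(["ab", "abb", "ba"])
print(accepts(transitions, accepting, "abb"))  # True: a sample string
print(accepts(transitions, accepting, "aa"))   # False: never observed
```

By construction the PTA accepts only the training sample; the inference step proper then merges states to accept a larger, inferred language.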
This book presents four approaches to jointly training bidirectional neural machine translation (NMT) models. First, in order to improve the accuracy of the attention mechanism, it proposes an agreement-based joint training approach to help the two complementary models agree on word alignment matrices for the same training data. Second, it presents a semi-supervised approach that uses an autoencoder to reconstruct monolingual corpora, so as to incorporate these corpora into neural machine translation. It then introduces a joint training algorithm for pivot-based neural machine translation, which can be used to mitigate the data scarcity problem. Lastly, it describes an end-to-end bidirectional NMT model to connect the source-to-target and target-to-source translation models, allowing the interaction of parameters between these two directional models.
This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades. CLEF's main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing "what has been achieved", but above all on "what has been learnt". The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
This book describes effective methods for automatically analyzing a sentence, based on the syntactic and semantic characteristics of the elements that form it. To tackle ambiguities, the authors use selectional preferences (SP), which measure how well two words fit together semantically in a sentence. Today, many disciplines require automatic text analysis based on the syntactic and semantic characteristics of language and as such several techniques for parsing sentences have been proposed. Which is better? In this book the authors begin with simple heuristics before moving on to more complex methods that identify nouns and verbs and then aggregate modifiers, and lastly discuss methods that can handle complex subordinate and relative clauses. During this process, several ambiguities arise. SP are commonly determined on the basis of the association between a pair of words. However, in many cases, SP depend on more words. For example, something (such as grass) may be edible, depending on who is eating it (a cow?). Moreover, things such as popcorn are usually eaten at the movies, and not in a restaurant. The authors deal with these phenomena from different points of view.
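The word-pair association underlying selectional preferences can be illustrated with a small sketch (not from the book; the co-occurrence counts are invented) that scores how well a noun fits as the object of a verb using pointwise mutual information (PMI):

```python
# Toy selectional-preference score: PMI between a verb and a candidate
# object noun, estimated from (verb, noun) co-occurrence counts.
import math
from collections import Counter

# Invented observations of (verb, object) pairs for illustration.
pairs = [("eat", "grass"), ("eat", "popcorn"), ("eat", "popcorn"),
         ("watch", "movie"), ("watch", "movie"), ("mow", "grass")]

pair_counts = Counter(pairs)
verb_counts = Counter(v for v, _ in pairs)
noun_counts = Counter(n for _, n in pairs)
total = len(pairs)

def pmi(verb, noun):
    """PMI of (verb, noun); -inf if the pair was never observed."""
    joint = pair_counts[(verb, noun)]
    if joint == 0:
        return float("-inf")
    p_joint = joint / total
    p_verb = verb_counts[verb] / total
    p_noun = noun_counts[noun] / total
    return math.log2(p_joint / (p_verb * p_noun))

# With these counts, "popcorn" fits "eat" better than "grass" does.
print(pmi("eat", "popcorn") > pmi("eat", "grass"))  # True
```

As the blurb notes, pairwise scores like this miss context (a cow makes grass "edible"; popcorn is eaten at the movies), which is exactly the limitation the book's multi-word treatment of SP addresses.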
The computational approach of this book is aimed at simulating the human ability to understand various kinds of phrases with a novel metaphoric component. That is, interpretations of metaphor as literal paraphrases are based on literal meanings of the metaphorically used words. This method distinguishes itself from statistical approaches, which in general do not account for novel usages, and from efforts directed at metaphor constrained to one type of phrase or to a single topic domain. The more interesting and novel metaphors appear to be based on concepts generally represented as nouns, since such concepts can be understood from a variety of perspectives. The core of the process of interpreting nominal concepts is to represent them in such a way that readers or hearers can infer which aspect(s) of the nominal concept is likely to be intended to be applied to its interpretation. These aspects are defined in terms of verbal and adjectival predicates. A section on the representation and processing of part-sentence verbal metaphor will therefore also serve as preparation for the representation of salient aspects of metaphorically used nouns. As the ability to process metaphorically used verbs and nouns facilitates the interpretation of more complex tropes, computational analysis of two other kinds of metaphorically based expressions are outlined: metaphoric compound nouns, such as "idea factory" and, together with the representation of inferences, modified metaphoric idioms, such as "Put the cat back into the bag".