This book constitutes the refereed proceedings of the 16th China Conference on Machine Translation, CCMT 2020, held in Hohhot, China, in October 2020. The 13 papers presented in this volume were carefully reviewed and selected from 78 submissions and focus on all aspects of machine translation, including preprocessing, neural machine translation models, hybrid models, evaluation methods, and post-editing.
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT, and the sequence-to-sequence Transformer) are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
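The self-attention mechanism mentioned in this blurb can be illustrated with a minimal sketch. The following scaled dot-product attention over plain Python lists is purely illustrative (not code from the book): each query attends to every key, and the output is the attention-weighted average of the values.

```python
import math

def softmax(xs):
    """Numerically stable softmax: exponentiate and normalize to sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: for each query, score all keys,
    turn the scores into weights, and average the values accordingly."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

With a single key/value pair the softmax weight is 1.0, so the output is exactly that value; with several equally scored keys the values are averaged uniformly.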
Get a hands-on introduction to Transformer architecture using the Hugging Face library. This book explains how Transformers are changing the AI domain, particularly in the area of natural language processing (NLP), and covers the Transformer architecture and its relevance to NLP. It starts with an introduction to NLP and the progression of language models from n-grams to a Transformer-based architecture. Next, it offers some basic Transformer examples using the Google Colab engine. Then, it introduces the Hugging Face ecosystem and the different libraries and models it provides. Moving forward, it explains language models such as Google BERT with some examples before providing a deep dive into the Hugging Face API, using different language models to address tasks such as sentence classification, sentiment analysis, summarization, and text generation. After completing Introduction to Transformers for NLP, you will understand Transformer concepts and be able to solve problems using the Hugging Face library.
What You Will Learn:
- Understand language models and their importance in NLP and NLU (Natural Language Understanding)
- Master the Transformer architecture through practical examples
- Use the Hugging Face library in Transformer-based language models
- Create a simple code generator in Python based on Transformer architecture
Who This Book Is For: Data scientists and software developers interested in developing their skills in NLP and NLU.
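The progression from n-grams to Transformers described above starts with counting: a bigram language model estimates the probability of each word given the previous one. The sketch below uses toy data and illustrative names (it is an assumption about the simplest member of that progression, not code from the book):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Estimate P(next | prev) by maximum likelihood from bigram counts."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for prev, ctr in counts.items()}

model = train_bigram("the cat sat on the mat".split())
# "the" is followed once by "cat" and once by "mat",
# so each continuation gets probability 0.5
```

A Transformer replaces these sparse counts with learned, context-sensitive representations, but the modeling goal, predicting the next token, is the same.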
This book assesses the place of logic, mathematics, and computer science in present day, interdisciplinary areas of computational linguistics. Computational linguistics studies natural language in its various manifestations from a computational point of view, both on the theoretical level (modeling grammar modules dealing with natural language form and meaning and the relation between these two) and on the practical level (developing applications for language and speech technology). It is a collection of chapters presenting new and future research. The book focuses mainly on logical approaches to computational processing of natural language and on the applicability of methods and techniques from the study of formal languages, programming, and other specification languages. It presents work from other approaches to linguistics, as well, especially because they inspire new work and approaches.
The two-volume set LNCS 13451 and 13452 constitutes revised selected papers from the CICLing 2019 conference, which took place in La Rochelle, France, in April 2019. A total of 95 papers presented in the two volumes were carefully reviewed and selected from 335 submissions. The book also contains 3 invited papers. The papers are organized in the following topical sections: General; Information extraction; Information retrieval; Language modeling; Lexical resources; Machine translation; Morphology, syntax, parsing; Named entity recognition; Semantics and text similarity; Sentiment analysis; Speech processing; Text categorization; Text generation; and Text mining.
This book presents recent advances in NLP and speech technology, a topic attracting increasing interest in a variety of fields through its myriad applications, such as the demand for speech-guided touchless technology during the COVID-19 pandemic. The authors present results of recent experimental research that provides contributions and solutions to different issues related to speech technology and speech in industry. Technologies include natural language processing, automatic speech recognition (for under-resourced dialects) and speech synthesis that are useful for applications such as intelligent virtual assistants, among others. Applications cover areas such as sentiment analysis and opinion mining, Arabic named entity recognition, and language modelling. This book is relevant for anyone interested in the latest in language and speech technology.
This book constitutes the refereed proceedings of the 11th Conference on Artificial Intelligence and Natural Language, AINL 2022, held in St. Petersburg, Russia, in April 2022. The 8 revised full papers and 1 short paper were carefully reviewed and selected from 20 submissions. The volume presents recent research in the areas of text mining, speech technologies, dialogue systems, information retrieval, machine learning, artificial intelligence, and robotics.
The book gives a comprehensive discussion of Database Semantics (DBS) as an agent-based data-driven theory of how natural language communication essentially works. In language communication, agents switch between speak mode, driven by cognition-internal content (input) resulting in cognition-external raw data (e.g. sound waves or pixels, which have no meaning or grammatical properties but can be measured by natural science), and hear mode, driven by the raw data produced by the speaker resulting in cognition-internal content. The motivation is to compare two approaches for an ontology of communication: agent-based data-driven vs. sign-based substitution-driven. Agent-based means: design of a cognitive agent with (i) an interface component for converting raw data into cognitive content (recognition) and converting cognitive content into raw data (action), (ii) an on-board, content-addressable memory (database) for storage and content retrieval, (iii) separate treatments of the speak and the hear mode. Data-driven means: (a) mapping a cognitive content as input to the speak mode into a language-dependent surface as output, (b) mapping a surface as input to the hear mode into a cognitive content as output. By contrast, sign-based means: no distinction between speak and hear mode, whereas substitution-driven means: using a single start symbol as input for generating infinitely many outputs, based on substitutions by rewrite rules. Collecting recent research of the author, this beautiful, novel and original exposition begins with an introduction to DBS, makes a linguistic detour on subject/predicate gapping and slot-filler repetition, and moves on to discuss computational pragmatics, inference and cognition, grammatical disambiguation and other related topics.
The book is mostly addressed to experts working in the field of computational linguistics, as well as to enthusiasts interested in the history and early development of this subject, starting with the pre-computational foundations of theoretical computer science and symbolic logic in the 1930s.
The impact of computer systems that can understand natural language will be tremendous. To develop this capability we need to be able to automatically and efficiently analyze large amounts of text. Manually devised rules are not sufficient to provide coverage to handle the complex structure of natural language, necessitating systems that can automatically learn from examples. To handle the flexibility of natural language, it has become standard practice to use statistical models, which assign probabilities for example to the different meanings of a word or the plausibility of grammatical constructions. This book develops a general coarse-to-fine framework for learning and inference in large statistical models for natural language processing. Coarse-to-fine approaches exploit a sequence of models which introduce complexity gradually. At the top of the sequence is a trivial model in which learning and inference are both cheap. Each subsequent model refines the previous one, until a final, full-complexity model is reached. Applications of this framework to syntactic parsing, speech recognition and machine translation are presented, demonstrating the effectiveness of the approach in terms of accuracy and speed. The book is intended for students and researchers interested in statistical approaches to Natural Language Processing. "Slav's work, Coarse-to-Fine Natural Language Processing, represents a major advance in the area of syntactic parsing, and a great advertisement for the superiority of the machine-learning approach." (Eugene Charniak, Brown University)
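The coarse-to-fine idea described above, a sequence of models of increasing complexity where each one prunes the search space for the next, can be caricatured in a few lines. This sketch is illustrative only (generic scoring functions, not the book's parser): after each increasingly expensive model, only the top-scoring fraction of candidates survives.

```python
def coarse_to_fine(candidates, models, keep=0.5):
    """models is ordered cheap-to-expensive; each pass rescores the
    surviving candidates and discards the lowest-scoring ones, so the
    expensive final model only ever sees a small, promising subset."""
    survivors = list(candidates)
    for score in models:
        survivors.sort(key=score, reverse=True)
        survivors = survivors[:max(1, int(len(survivors) * keep))]
    return survivors
```

With two passes over eight candidates and `keep=0.5`, the second (expensive) model scores only four candidates instead of eight, which is where the speedup comes from.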
This book constitutes revised selected papers from the 23rd International Symposium on Trends in Functional Programming, TFP 2022, which was held virtually in March 2022. The 9 full papers presented in this volume were carefully reviewed and selected from 17 submissions. They deal with all aspects of functional programming, taking a broad view of current and future trends in the area.
This book constitutes selected revised papers of the 16th International Conference on Formalizing Natural Languages: Applications to Natural Language Processing and Digital Humanities, NooJ 2022, held in Rosario, Argentina, in June 2022. Due to the COVID-19 pandemic, the conference was held virtually. NooJ is a linguistic development environment that provides tools for linguists to construct linguistic resources that formalize a large gamut of linguistic phenomena: typography, orthography, lexicons for simple words, multiword units and discontinuous expressions, inflectional, derivational and agglutinative morphology, local, phrase-structure and dependency grammars, as well as transformational and semantic grammars. The 17 full papers presented were carefully reviewed and selected from 50 submissions. The papers are organized in the following topics: Morphological and Lexical Resources; Syntactic and Semantic Resources; Corpus Linguistics and Discourse Analysis; Natural Language Processing Applications.
This book constitutes the proceedings of the International Joint Conference on Rules and Reasoning, RuleML+RR 2022, held in Berlin, Germany, during September 26-28, 2022. This is the 6th conference of a new series, joining the efforts of two existing conference series, namely "RuleML" (International Web Rule Symposium) and "RR" (Web Reasoning and Rule Systems). The 18 full research papers presented in this book were carefully reviewed and selected from 54 submissions. The papers cover the following topics: answer set programming; foundations of nonmonotonic reasoning; datalog; queries over ontologies; proofs, error-tolerance, and rules; as well as agents and argumentation.
The content of this textbook is organized as a theory of language for the construction of talking robots. The main topic is the mechanism of natural language communication in both the speaker and the hearer. In the third edition the author has modernized the text, leaving the overview of traditional, theoretical, and computational linguistics, analytic philosophy of language, and mathematical complexity theory with their historical backgrounds intact. The format of the empirical analyses of English and German syntax and semantics has been adapted to current practice; and Chaps. 22-24 have been rewritten to focus more sharply on the construction of a talking robot.
This book constitutes the refereed proceedings of the 24th International Conference on Asia-Pacific Digital Libraries, ICADL 2022, which was held in November/December 2022. The 14 full, 18 short, and 12 poster papers presented in this volume were carefully reviewed and selected from 78 submissions. Based on significant contributions, the full and short papers have been classified into the following topics: intelligent document analysis; neural-based knowledge extraction; knowledge discovery for enhancing collaboration; smart search and annotation; cultural data collection and analysis; scholarly data processing; data archive and management; research activities and digital library; and trends in digital library.
This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. In spite of the fact that these two schools both have 'linguistics' in their name, so far there has been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures, by treating each dictionary definition as an equation, and the entire lexicon as a set of equations mutually constraining all meanings.
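The idea of treating each dictionary definition as an equation constraining word meanings can be given a toy, purely illustrative reading (this is emphatically not the book's polytope formalism): set each defined word's vector to the mean of its definers' vectors and iterate toward a fixed point, so the whole lexicon mutually constrains itself.

```python
def fixed_point_lexicon(definitions, iters=50):
    """definitions maps a word to the list of words in its definition.
    Each defined word's vector is repeatedly replaced by the mean of
    its definers' vectors; undefined words keep their initial vectors."""
    words = set(definitions) | {d for ds in definitions.values() for d in ds}
    # deterministic toy initialization: (first character code, word length)
    vecs = {w: [float(ord(w[0])), float(len(w))] for w in words}
    for _ in range(iters):
        vecs = {w: ([sum(vecs[d][i] for d in definitions[w]) / len(definitions[w])
                     for i in range(2)] if w in definitions else vecs[w])
                for w in words}
    return vecs

lexicon = fixed_point_lexicon({"puppy": ["young", "dog"]})
```

Here "puppy", defined as "young dog", ends up exactly at the midpoint of the vectors for "young" and "dog"; a real lexicon has thousands of mutually recursive definitions, which is why the book treats the whole system of equations at once.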
This book constitutes the refereed proceedings of the 18th China Conference on Machine Translation, CCMT 2022, held in Lhasa, China, during August 6-10, 2022. The 16 full papers included in this book were carefully reviewed and selected from 73 submissions.
This book constitutes the refereed proceedings of the 29th International Symposium on Static Analysis, SAS 2022, held in Auckland, New Zealand, in December 2022. The 18 full papers included in this book were carefully reviewed and selected from 43 submissions. Static analysis is widely recognized as a fundamental tool for program verification, bug detection, compiler optimization, program understanding, and software maintenance. The papers deal with theoretical, practical and application advances in the area.
This book, with a foreword by Manuel Castells, explores the core strategies of digital political communication. It reviews the field's evolution over the past 25 years and examines the coexistence of old and new actors (lobbyists, citizens, parliaments, political parties, media outlets, digital platforms, among others), as well as hybrid communication tactics. Topics covered include frames, fake news, filter bubbles, echo chambers, artificial intelligence, the significance of emotions, and engagement with citizens.As we find ourselves in the fourth wave of digital communication, and in the wake of a pandemic which has shaken the foundations of political communication, an evaluation of these topics is essential to the reinvention of democracy. The book is geared towards students and researchers who wish to delve into the latest trends in digital communication, political communication actors and journalists. It further aims to prepare citizens to effectively deal with messaging that blurs the line between truth and falsehood with increasingly powerful strategies supported by artificial intelligence.
This book constitutes the refereed proceedings of the 25th Brazilian Symposium on Formal Methods, SBMF 2022, which was held virtually in December 2022. The 8 regular papers presented in this book were carefully reviewed and selected from 15 submissions. The symposium focuses on the development, dissemination, and use of formal methods for the construction of high-quality computational systems, aiming to promote opportunities for researchers and practitioners with an interest in formal methods to discuss the recent advances in this area.
Wineinformatics is a new data science application with a focus on understanding wine through artificial intelligence. Thousands of new wine reviews are produced monthly, which benefits the understanding of wine through wine experts for winemakers and consumers. This book systematically investigates how to process human language format reviews and mine useful knowledge from a large volume of processed data. This book presents a human language processing tool named Computational Wine Wheel to process professional wine reviews and three novel Wineinformatics studies to analyze wine quality, price and reviewers. Through the lens of data science, the author demonstrates how a wine receives a 90+ score out of 100 points from Wine Spectator, how to predict a wine's specific grade and price from wine reviews, and how to rank a group of wine reviewers. The book also shows the advanced application of the Computational Wine Wheel to capture more information hidden in wine reviews and the possibility of extending the wheel to coffee, tea, beer, sake and liquors. This book targets computer scientists, data scientists and wine industrial researchers who are interested in Wineinformatics. Senior data science undergraduate and graduate students may also benefit from this book.
This book constitutes the refereed proceedings of the 18th International Conference on Frontiers in Handwriting Recognition, ICFHR 2022, which took place in Hyderabad, India, during December 4-7, 2022. The 36 full papers and 1 short paper presented in this volume were carefully reviewed and selected from 61 submissions. The contributions were organized in topical sections as follows: Historical Document Processing; Signature Verification and Writer Identification; Symbol and Graphics Recognition; Handwriting Recognition and Understanding; Handwriting Datasets and Synthetic Handwriting Generation; Document Analysis and Processing.
This updated book expands upon prosody for recognition applications of speech processing. It explains the importance of prosody for speech processing applications, builds on why prosody needs to be incorporated in them, and presents methods for extracting and representing prosody for applications such as speaker recognition, language recognition and speech recognition. The updated edition also covers the significance of prosody for emotion recognition and various prosody-based approaches for automatic emotion recognition from speech.
This book constitutes revised selected papers from the thoroughly refereed proceedings of the 10th International Conference on Analysis of Images, Social Networks and Texts, AIST 2021, held in Tbilisi, Georgia, during December 16-18, 2021. The 20 full papers and 5 short papers included in this book were carefully reviewed and selected from 118 submissions. They were organized in topical sections as follows: Invited papers; natural language processing; computer vision; data analysis and machine learning; social network analysis; and theoretical machine learning and optimization.
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 - to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
The 39-volume set, comprising LNCS volumes 13661 through 13699, constitutes the refereed proceedings of the 17th European Conference on Computer Vision, ECCV 2022, held in Tel Aviv, Israel, during October 23-27, 2022. The 1645 papers presented in these proceedings were carefully reviewed and selected from a total of 5804 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.
You may like...
- Graph Learning and Network Science for… by Muskan Garg, Amit Kumar Gupta, … (Hardcover, R3,253)
- Metalanguages for Dissecting Translation… by Rei Miyata, Masaru Yamada, … (Hardcover, R4,149)
- Algebraic Structures in Natural Language by Shalom Lappin, Jean-Philippe Bernardy (Hardcover, R2,212)
- Perspectives on Sentence Processing by Lyn Frazier, Keith Rayner, … (Hardcover, R1,180)
- Machine Translation and Global Research… by Lynne Bowker, Jairo Buitrago Ciro (Paperback, R793)