Companies are spending billions on machine learning projects, but
it's money wasted if the models can't be deployed effectively. In
this practical guide, Hannes Hapke and Catherine Nelson walk you
through the steps of automating a machine learning pipeline using
the TensorFlow ecosystem. You'll learn the techniques and tools
that will cut deployment time from days to minutes, so that you can
focus on developing new models rather than maintaining legacy
systems. Data scientists, machine learning engineers, and DevOps
engineers will discover how to go beyond model development to
successfully productize their data science projects, while managers
will better understand the role they play in helping to accelerate
these projects. You will:
* Understand the steps to build a machine learning pipeline
* Build your pipeline using components from TensorFlow Extended
* Orchestrate your machine learning pipeline with Apache Beam, Apache Airflow, and Kubeflow Pipelines
* Work with data using TensorFlow Data Validation and TensorFlow Transform
* Analyze a model in detail using TensorFlow Model Analysis
* Examine fairness and bias in your model performance
* Deploy models with TensorFlow Serving or TensorFlow Lite for mobile devices
* Learn privacy-preserving machine learning techniques
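As an illustration of the pipeline steps listed above, here is a minimal sketch, assuming a TFX 1.x installation, of a locally run pipeline that ingests CSV data and validates it with the data components; the file paths, pipeline name and metadata store are placeholders, and this is not code from the book.

# Minimal TFX pipeline sketch; paths and names are placeholders.
from tfx import v1 as tfx

# Ingest CSV training data and compute/validate statistics over it.
example_gen = tfx.components.CsvExampleGen(input_base="data/")
statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs["examples"])
schema_gen = tfx.components.SchemaGen(statistics=statistics_gen.outputs["statistics"])
example_validator = tfx.components.ExampleValidator(
    statistics=statistics_gen.outputs["statistics"],
    schema=schema_gen.outputs["schema"],
)

pipeline = tfx.dsl.Pipeline(
    pipeline_name="demo_pipeline",
    pipeline_root="pipeline_root/",
    components=[example_gen, statistics_gen, schema_gen, example_validator],
    metadata_connection_config=tfx.orchestration.metadata.sqlite_metadata_connection_config(
        "metadata.db"
    ),
)

# The same pipeline object can be handed to an Airflow or Kubeflow Pipelines
# runner instead of the local runner used here.
tfx.orchestration.LocalDagRunner().run(pipeline)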
This comprehensive reference work provides an overview of the
concepts, methodologies, and applications in computational
linguistics and natural language processing (NLP). * Features
contributions by the top researchers in the field, reflecting the
work that is driving the discipline forward * Includes an
introduction to the major theoretical issues in these fields, as
well as the central engineering applications that the work has
produced * Presents the major developments in an accessible way,
explaining the close connection between scientific understanding of
the computational properties of natural language and the creation
of effective language technologies * Serves as an invaluable
state-of-the-art reference source for computational linguists and
software engineers developing NLP applications in industrial
research and development labs of software companies
This book focuses on dialog from a varied combination of fields:
Linguistics, Philosophy of Language and Computation. It builds on
the hypothesis that meaning in human communication arises at the
discourse level rather than at the word level. The book offers a
complex analytical framework and integration of the central areas
of research around human communication. The content revolves around
meaning, but it also gives evidence of the connections among
different points of view. Besides discussing issues of general
interest to the field, the book triggers theoretical argumentation
that is currently under scientific discussion. It examines such
topics as immanent reasoning joined with Recanati's lekta and free
enrichment, challenges of internet conversation, inner dialogs,
cognition and language, and the relation between assertion and
denial. It proposes a dialogical framework for intra-negotiation
and gives a geolinguistic perspective on spoken discourse. Finally,
it examines dialog and abduction and sheds light on the generation of
dialog contexts by means of multimodal logic applied to speech
acts.
Computer processing of natural language is a burgeoning field, but
until now there has been no agreement on a standardized
classification of the diverse structural elements that occur in
real-life language material. This book attempts to define a
'Linnaean taxonomy' for the English language: an annotation scheme,
the SUSANNE scheme, which yields a labelled constituency structure
for any string of English, comprehensively identifying all of its
surface and logical structural properties. The structure is
specified with sufficient rigour that analysts working
independently must produce identical annotations for a given
example. The scheme is based on large samples of real-life use of
British and American written and spoken English. The book also
describes the SUSANNE electronic corpus of English which is
annotated in accordance with the scheme. It is freely available as
a research resource to anyone working at a computer connected to the Internet, and since 1992 it has come into widespread use in academic
and commercial research environments on four continents.
A practical guide to the construction of thesauri for use in
information retrieval. In recent years, new applications for
thesauri have been emerging, for example, in front-end systems,
cross-database searching, hypertext systems, expert systems and in
natural-language processing. In-house thesauri are still needed for
internal special collections. The fourth edition of this work has
been fully revised and the bibliography much extended, in
particular, to include web addresses.
This book discusses some of the basic issues relating to corpus
generation and the methods normally used to generate a corpus.
Since corpus-related research goes beyond corpus generation, the
book also addresses other major topics connected with the use and
application of language corpora, namely, corpus readiness in the
context of corpus sanitation and pre-editing of corpus texts; the
application of statistical methods; and various text processing
techniques. Importantly, it explores how corpora can be used as a
primary or secondary resource in English language teaching, in
creating dictionaries, in word sense disambiguation, in various
language technologies, and in other branches of linguistics.
Lastly, the book sheds light on the status quo of corpus generation
in Indian languages and identifies current and future needs.
Discussing various technical issues in the field in a lucid manner,
providing extensive new diagrams and charts for easy comprehension,
and using simplified English, the book is an ideal resource for
non-native English readers. Written by academics with many years of
experience in teaching and researching corpus linguistics, the book's focus on Indian languages and on English corpora makes it well suited to
graduate and postgraduate students of applied linguistics,
computational linguistics and language processing in South Asia and
across countries where English is spoken as a first or second
language.
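To make the routine text-processing steps mentioned above concrete, here is a minimal standard-library sketch, not taken from the book, of two basic corpus operations: a frequency list and a keyword-in-context (KWIC) concordance. The sample sentence and function names are illustrative only.

import re
from collections import Counter

def tokenize(text):
    # Very simple tokenizer: lowercase alphabetic word forms only.
    return re.findall(r"[a-z']+", text.lower())

def frequency_list(corpus):
    # Word frequency list, most frequent first.
    return Counter(tokenize(corpus)).most_common()

def concordance(corpus, keyword, width=3):
    # Keyword-in-context lines with `width` tokens of left/right context.
    tokens = tokenize(corpus)
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            yield f"{left:>30} [{keyword}] {right}"

sample = "The bank raised rates. She sat on the river bank and read."
print(frequency_list(sample)[:5])
for line in concordance(sample, "bank"):
    print(line)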
This book covers theoretical work, applications, approaches, and
techniques for computational models of information and its presentation by language, whether artificial or natural. Computational and technological developments that
incorporate natural language are proliferating. Adequate coverage
encounters difficult problems related to ambiguities and dependency
on context and agents (humans or computational systems). The goal
is to promote computational systems of intelligent natural language
processing and related models of computation, language, thought,
mental states, reasoning, and other cognitive processes.
This is the first monograph on the emerging area of linguistic
linked data. Presenting a combination of background information on
linguistic linked data and concrete implementation advice, it
introduces and discusses the main benefits of applying linked data
(LD) principles to the representation and publication of linguistic
resources, arguing that LD does not look at a single resource in
isolation but seeks to create a large network of resources that can
be used together and uniformly, thereby making more of each individual resource. The book describes how the LD principles can be applied
to modelling language resources. The first part provides the
foundation for understanding the remainder of the book, introducing
the data models, ontology and query languages used as the basis of
the Semantic Web and LD and offering a more detailed overview of
the Linguistic Linked Data Cloud. The second part of the book
focuses on modelling language resources using LD principles,
describing how to model lexical resources using Ontolex-lemon, the
lexicon model for ontologies, and how to annotate and address
elements of text represented in RDF. It also demonstrates how to
model annotations, and how to capture the metadata of language
resources. Further, it includes a chapter on representing
linguistic categories. In the third part of the book, the authors
describe how language resources can be transformed into LD and how
links can be inferred and added to the data to increase
connectivity and linking between different datasets. They also
discuss using LD resources for natural language processing. The
last part describes concrete applications of the technologies:
representing and linking multilingual wordnets, applications in
digital humanities and the discovery of language resources. Given
its scope, the book is relevant for researchers and graduate
students interested in topics at the crossroads of natural language
processing / computational linguistics and the Semantic Web /
linked data. It appeals to Semantic Web experts who are not
proficient in applying the Semantic Web and LD principles to
linguistic data, as well as to computational linguists who are used
to working with lexical and linguistic resources and want to learn
about a new paradigm for modelling, publishing and exploiting
linguistic resources.
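As a small illustration of the Ontolex-lemon modelling discussed in the second part, the sketch below, which is not code from the book, uses the rdflib library to build one lexical entry with a canonical form and a sense linked to an external resource; the example.org namespace and the DBpedia link are placeholders.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ONTOLEX = Namespace("http://www.w3.org/ns/lemon/ontolex#")
EX = Namespace("http://example.org/lexicon/")   # placeholder namespace

g = Graph()
g.bind("ontolex", ONTOLEX)

entry = EX["bank-n"]
form = EX["bank-n-form"]
sense = EX["bank-n-sense1"]

# One lexical entry with its canonical written form and one sense.
g.add((entry, RDF.type, ONTOLEX.LexicalEntry))
g.add((entry, ONTOLEX.canonicalForm, form))
g.add((form, RDF.type, ONTOLEX.Form))
g.add((form, ONTOLEX.writtenRep, Literal("bank", lang="en")))
g.add((entry, ONTOLEX.sense, sense))
g.add((sense, ONTOLEX.reference, URIRef("http://dbpedia.org/resource/Bank")))

print(g.serialize(format="turtle"))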
Explores the direct relation of modern CALL (Computer-Assisted
Language Learning) to aspects of natural language processing for
theoretical and practical applications, and to the worldwide demand for formal language education and training that focuses on restricted or specialized professional domains. Unique in its broad-based, state-of-the-art coverage of current knowledge and research in the
interrelated fields of computer-based learning and teaching and
processing of specialized linguistic domains. The articles in this
book offer insights on or analyses of the current state and future
directions of many recent key concepts regarding the application of
computers to natural languages, such as: authenticity,
personalization, normalization, evaluation. Other articles present
fundamental research on major techniques, strategies and
methodologies that are currently the focus of international
language research projects, both of a theoretical and an applied
nature.
Graph theory and the fields of natural language processing and
information retrieval are well-studied disciplines. Traditionally,
these areas have been perceived as distinct, with different
algorithms, different applications, and different potential
end-users. However, recent research has shown that these
disciplines are intimately connected, with a large variety of
natural language processing and information retrieval applications
finding efficient solutions within graph-theoretical frameworks.
This book extensively covers the use of graph-based algorithms for
natural language processing and information retrieval. It brings
together topics as diverse as lexical semantics, text
summarization, text mining, ontology construction, text
classification, and information retrieval, which are connected by
the common underlying theme of the use of graph-theoretical methods
for text and information processing tasks. Readers will come away
with a firm understanding of the major methods and applications in
natural language processing and information retrieval that rely on
graph-based representations and algorithms.
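As one concrete instance of such graph-theoretical methods, here is a minimal TextRank-style sketch, illustrative rather than taken from the book, that ranks words by PageRank over a co-occurrence graph built with networkx.

import re
import networkx as nx

def extract_keywords(text, window=2, top_k=5):
    """Rank words by PageRank over a word co-occurrence graph (TextRank-style)."""
    words = re.findall(r"[a-z]+", text.lower())
    graph = nx.Graph()
    # Connect each word to its neighbours within a small sliding window.
    for i, w in enumerate(words):
        for other in words[i + 1 : i + window + 1]:
            if w != other:
                graph.add_edge(w, other)
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(extract_keywords(
    "graph based ranking methods score words by their position "
    "in a word co occurrence graph built from the text"
))

The same graph-plus-ranking pattern is one simple instance of the broader family of methods surveyed here for summarization, classification and retrieval.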
This book provides readers with a practical guide to the principles
of hybrid approaches to natural language processing (NLP) involving
a combination of neural methods and knowledge graphs. To this end,
it first introduces the main building blocks and then describes how
they can be integrated to support the effective implementation of
real-world NLP applications. To illustrate the ideas described, the
book also includes a comprehensive set of experiments and exercises
involving different algorithms over a selection of domains and
corpora in various NLP tasks. Throughout, the authors show how to
leverage complementary representations stemming from the analysis
of unstructured text corpora as well as the entities and relations
described explicitly in a knowledge graph, how to integrate such
representations, and how to use the resulting features to
effectively solve NLP tasks in a range of domains. In addition, the
book offers access to executable code with examples, exercises and
real-world applications in key domains, like disinformation
analysis and machine reading comprehension of scientific
literature. All the examples and exercises proposed in the book are
available as executable Jupyter notebooks in a GitHub repository.
They are all ready to be run on Google Colaboratory or, if
preferred, in a local environment. A valuable resource for anyone
interested in the interplay between neural and knowledge-based
approaches to NLP, this book is a useful guide for readers with a
background in structured knowledge representations as well as those
whose main approach to AI is fundamentally based on logic. Further,
it will appeal to those whose main background is in the areas of
machine and deep learning who are looking for ways to leverage
structured knowledge bases to improve results on downstream NLP tasks.
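The following toy sketch, which is not from the book's notebooks, illustrates the general idea of combining text-derived features with features taken from a knowledge graph before feeding them to a classifier; the mini knowledge graph, vocabulary, labels and sentences are all invented for the example, a plain bag-of-words plus logistic regression stands in for a neural text encoder, and scikit-learn is assumed.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy knowledge graph: entity -> set of types (hypothetical data).
KG = {
    "aspirin": {"Drug"},
    "ibuprofen": {"Drug"},
    "paris": {"City"},
    "berlin": {"City"},
}
TYPES = ["Drug", "City"]
VOCAB = ["take", "visit", "daily", "capital", "aspirin", "ibuprofen", "paris", "berlin"]

def featurize(sentence):
    tokens = sentence.lower().split()
    # Text-side features: bag-of-words counts over a fixed vocabulary.
    bow = np.array([tokens.count(w) for w in VOCAB], dtype=float)
    # KG-side features: types of any entities mentioned in the sentence.
    kg_types = set().union(*(KG.get(t, set()) for t in tokens))
    kg_vec = np.array([1.0 if t in kg_types else 0.0 for t in TYPES])
    return np.concatenate([bow, kg_vec])

X = np.stack([featurize(s) for s in [
    "take aspirin daily", "take ibuprofen daily",
    "visit paris", "berlin is a capital",
]])
y = [0, 0, 1, 1]  # 0 = medical, 1 = travel (toy labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict([featurize("visit berlin")]))  # should predict the travel class, [1]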
Advances in Intelligent Data Analysis XIX
- 19th International Symposium on Intelligent Data Analysis, IDA 2021, Porto, Portugal, April 26-28, 2021, Proceedings
(Paperback, 1st ed. 2021)
Pedro Henriques Abreu, Pedro Pereira Rodrigues, Alberto Fernandez, Joao Gama
This book constitutes the proceedings of the 19th International
Symposium on Intelligent Data Analysis, IDA 2021, which was planned
to take place in Porto, Portugal. Due to the COVID-19 pandemic, the conference was held online during April 26-28, 2021. The 35 papers
included in this book were carefully reviewed and selected from 113
submissions. The papers were organized in topical sections named:
modeling with neural networks; modeling with statistical learning;
modeling language and graphs; and modeling special data formats.
This book investigates two major systems: firstly, co-operating
distributed grammar systems, where the grammars work on one common
sentential form and the co-operation is realized by the control of
the sequence of active grammars; secondly, parallel communicating
grammar systems, where each grammar works on its own sentential form and co-operation is realized by communication between
grammars. The investigation concerns hierarchies with respect to
different variants of co-operation, relations with classical formal
language theory, syntactic parameters such as the number of
components and their size, power of synchronization, and general
notions generated from artificial intelligence.
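To illustrate the first of these architectures, the sketch below, which is not from the book, simulates a tiny co-operating distributed grammar system in the t-mode of co-operation: the active component rewrites the shared sentential form until none of its rules applies, then hands control to the next component. The two toy components and the derivation strategy are invented for the example.

import random

# Toy co-operating distributed (CD) grammar system, t-mode co-operation.
COMPONENTS = [
    {"S": ["aSb", "X"]},   # component 1: wraps a's and b's around X
    {"X": ["cX", "c"]},    # component 2: expands X into c's
]

def applicable(rules, form):
    return any(lhs in form for lhs in rules)

def rewrite_once(rules, form):
    # Replace the leftmost nonterminal this component has a rule for.
    pos, lhs = min((form.index(l), l) for l in rules if l in form)
    return form[:pos] + random.choice(rules[lhs]) + form[pos + 1:]

def derive(start="S", max_rounds=20):
    form = start
    for _ in range(max_rounds):
        progressed = False
        for rules in COMPONENTS:
            while applicable(rules, form):   # t-mode: work until stuck
                form = rewrite_once(rules, form)
                progressed = True
        if not progressed:                   # no component can continue
            break
    return form

random.seed(1)
for _ in range(3):
    print(derive())   # e.g. strings of the form a^n c^m b^n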
This book covers deep-learning-based approaches for sentiment
analysis, a relatively new, but fast-growing research area, which
has significantly changed in the past few years. The book presents
a collection of state-of-the-art approaches, focusing on the
best-performing, cutting-edge solutions for the most common and
difficult challenges faced in sentiment analysis research.
Providing detailed explanations of the methodologies, the book is a
valuable resource for researchers as well as newcomers to the
field.
Get hands-on knowledge of how BERT (Bidirectional Encoder
Representations from Transformers) can be used to develop question
answering (QA) systems by using natural language processing (NLP)
and deep learning. The book begins with an overview of the
technology landscape behind BERT. It takes you through the basics
of NLP, including natural language understanding with tokenization, stemming, lemmatization, and bag of words. Next, you'll look at neural networks for NLP, starting with architectures such as recurrent neural networks, encoders and decoders, bi-directional encoders and decoders, and transformer models. Along the way, you'll cover word embeddings and their types, along with the basics
of BERT. After this solid foundation, you'll be ready to take a
deep dive into BERT algorithms such as masked language models and
next sentence prediction. You'll see different BERT variations
followed by a hands-on example of a question answering system.
Hands-on Question Answering Systems with BERT is a good starting
point for developers and data scientists who want to develop and
design NLP systems using BERT. It provides step-by-step guidance
for using BERT.
What You Will Learn:
* Examine the fundamentals of word embeddings
* Apply neural networks and BERT for various NLP tasks
* Develop a question-answering system from scratch
* Train question-answering systems for your own data
Who This Book Is For: AI and machine learning developers and natural language processing developers.
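As a taste of the kind of system described, here is a minimal question-answering sketch, not code from the book, using the Hugging Face transformers pipeline with a publicly available BERT checkpoint fine-tuned on SQuAD; the model name and example texts are illustrative only.

from transformers import pipeline

# Load a BERT model fine-tuned for extractive question answering.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "BERT is a bidirectional transformer encoder pre-trained with a masked "
    "language modelling objective and a next sentence prediction objective."
)
result = qa(question="How is BERT pre-trained?", context=context)
print(result["answer"], round(result["score"], 3))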
This book presents studies involving algorithms in the machine
learning paradigms. It discusses a variety of learning problems
with diverse applications, including prediction, concept learning,
explanation-based learning, case-based (exemplar-based) learning,
statistical rule-based learning, feature extraction-based learning,
optimization-based learning, quantum-inspired learning,
multi-criteria-based learning and hybrid intelligence-based
learning.
Monotonicity in Logic and Language
- Second Tsinghua Interdisciplinary Workshop on Logic, Language and Meaning, TLLM 2020, Beijing, China, December 17-20, 2020, Proceedings
(Paperback, 1st ed. 2020)
Dun Deng, Fenrong Liu, Mingming Liu, Dag Westerstahl
Edited in collaboration with FoLLI, the Association for Logic, Language and Information, this book constitutes the refereed proceedings of the Second Interdisciplinary Workshop on Logic, Language, and Meaning, TLLM 2020, held at Tsinghua University, Beijing, China, in December 2020. The 12 full papers presented were carefully reviewed and selected from 40 submissions. Due to COVID-19, the workshop was held online. The workshop covers a wide range of
topics where monotonicity is discussed in the context of logic,
causality, belief revision, quantification, polarity, syntax,
comparatives, and various semantic phenomena in particular
languages.
Accompanying continued industrial production and sales of
artificial intelligence and expert systems is the risk that
difficult and resistant theoretical problems and issues will be
ignored. The participants at the Third Tinlap Workshop, whose
contributions are contained in Theoretical Issues in Natural
Language Processing, remove that risk. They discuss and promote
theoretical research on natural language processing, examinations
of solutions to current problems, development of new theories, and
representations of published literature on the subject. Discussions
among these theoreticians in artificial intelligence, logic,
psychology, philosophy, and linguistics draw a comprehensive,
up-to-date picture of the natural language processing field.
This book constitutes the refereed proceedings of the 16th
International Conference on Integrated Formal Methods, IFM 2020, held in Lugano, Switzerland, in November 2020. The 24 full papers
and 2 short papers were carefully reviewed and selected from 63
submissions. The papers cover a broad spectrum of topics:
Integrating Machine Learning and Formal Modelling; Modelling and
Verification in B and Event-B; Program Analysis and Testing;
Verification of Interactive Behaviour; Formal Verification; Static
Analysis; Domain-Specific Approaches; and Algebraic Techniques.
This book deals with "Computer Aided Writing", CAW for short. Its subject matter belongs to the field of knowledge-based techniques and Knowledge Management. The role of Knowledge Management in social media, education and Industry 4.0 is beyond question. Even more important is the prospect of combining Knowledge Management with Cognitive Technology, a combination that demands continual innovation to address current problems in social and technological areas. The book is intended to provide an overview of the state of
research in this field, show the extent to which computer
assistance in writing is already being used and present current
research contributions. After a brief introduction into the history
of writing and the tools that were created, the current
developments are examined on the basis of a formal writing model.
Tools such as word processing and content management systems will
be discussed in detail. The special form of writing, "journalism",
is used to examine the effects of Computer Aided Writing. We
dedicate a separate chapter to the topic of research, since it is
of essential importance in the writing process. With Knowledge
Discovery from Text (KDT) and recommendation systems we enter the
field of Knowledge Management in the context of Computer Aided
Writing. Finally, we will look at methods for automated text
generation before giving a final outlook on future developments.
In light of the rapid rise of new trends and applications in
various natural language processing tasks, this book presents
high-quality research in the field. Each chapter addresses a common
challenge in a theoretical or applied aspect of intelligent natural
language processing related to the Arabic language. Many challenges
encountered during the development of the solutions can be resolved
by incorporating language technology and artificial intelligence.
The topics covered include machine translation; speech recognition;
morphological, syntactic, and semantic processing; information
retrieval; text classification; text summarization; sentiment
analysis; ontology construction; Arabizi translation; Arabic
dialects; Arabic lemmatization; and building and evaluating
linguistic resources. This book is a valuable reference for
scientists, researchers, and students from academia and industry
interested in computational linguistics and artificial
intelligence, especially for Arabic linguistics and related areas.
This book focuses mainly on logical approaches to computational
linguistics, but also discusses integrations with other approaches,
presenting both classic and newly emerging theories and
applications. Decades of research on theoretical work and practical
applications have demonstrated that computational linguistics is a
distinctively interdisciplinary area. There is convincing evidence
that computational approaches to linguistics can benefit from
research on the nature of human language, including from the
perspective of its evolution. This book addresses various topics in
computational theories of human language, covering grammar, syntax,
and semantics. The common thread running through the research
presented is the role of computer science, mathematical logic and
other subjects of mathematics in computational linguistics and
natural language processing (NLP). Promoting intelligent approaches
to artificial intelligence (AI) and NLP, the book is intended for
researchers and graduate students in the field.