Modern applications of logic, in mathematics, theoretical computer science, and linguistics, require combined systems involving many different logics working together. In this book the author offers a basic methodology for combining - or fibring - systems. This means that many existing complex systems can be broken down into simpler components, hence making them much easier to manipulate.
This contributed volume explores the achievements gained and the remaining puzzling questions by applying dynamical systems theory to linguistic inquiry. In particular, the book is divided into three parts, each one addressing one of the following topics: 1) Facing complexity in the right way: mathematics and complexity 2) Complexity and theory of language 3) From empirical observation to formal models: investigation of specific linguistic phenomena, like enunciation, deixis, or the meaning of metaphorical phrases. The application of complexity theory to describe cognitive phenomena is a recent and very promising trend in cognitive science. When dynamical approaches triggered a paradigm shift in cognitive science some decades ago, the major topic of research was the challenges faced by classical computational approaches in explaining cognitive phenomena like consciousness, decision making and language. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate and post-graduate students who want to enter the field.
This book offers an introduction to modern natural language processing using machine learning, focusing on how neural networks create a machine interpretable representation of the meaning of natural language. Language is crucially linked to ideas - as Webster's 1923 "English Composition and Literature" puts it: "A sentence is a group of words expressing a complete thought". Thus the representation of sentences and the words that make them up is vital in advancing artificial intelligence and other "smart" systems currently being developed. Providing an overview of the research in the area, from Bengio et al.'s seminal work on a "Neural Probabilistic Language Model" in 2003, to the latest techniques, this book enables readers to gain an understanding of how the techniques are related and what is best for their purposes. As well as an introduction to neural networks in general and recurrent neural networks in particular, this book details the methods used for representing words, senses of words, and larger structures such as sentences or documents. The book highlights practical implementations and discusses many aspects that are often overlooked or misunderstood. The book includes thorough instruction on challenging areas such as hierarchical softmax and negative sampling, to ensure the reader fully and easily understands the details of how the algorithms function. Combining practical aspects with a more traditional review of the literature, it is directly applicable to a broad readership. It is an invaluable introduction for early graduate students working in natural language processing; a trustworthy guide for industry developers wishing to make use of recent innovations; and a sturdy bridge for researchers already familiar with linguistics or machine learning wishing to understand the other.
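The negative-sampling idea described in such texts can be illustrated with a toy skip-gram trainer. This is a minimal sketch only; the corpus, vector dimension, and hyperparameters below are invented for illustration and are not drawn from the book.

```python
import numpy as np

# Toy corpus and vocabulary (illustrative assumptions).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8  # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # target-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k = 0.05, 2, 3  # learning rate, context window, negatives
for epoch in range(50):
    for pos, word in enumerate(corpus):
        t = idx[word]
        for off in range(-window, window + 1):
            if off == 0 or not (0 <= pos + off < len(corpus)):
                continue
            c = idx[corpus[pos + off]]
            # One observed (positive) pair plus k random negatives;
            # a real implementation would also exclude the true context.
            targets = [(c, 1.0)] + [(int(rng.integers(V)), 0.0) for _ in range(k)]
            for o, label in targets:
                score = sigmoid(W_in[t] @ W_out[o])
                grad = score - label          # d(loss)/d(logit)
                g_in = grad * W_out[o]        # gradient w.r.t. W_in[t]
                W_out[o] -= lr * grad * W_in[t]
                W_in[t] -= lr * g_in
```

After training, words that appear in similar contexts end up with similar vectors, which is the core of the distributed representations the book covers.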
Cross-Disciplinary Advances in Applied Natural Language Processing: Issues and Approaches defines the role of ANLP within NLP, and alongside other disciplines such as linguistics, computer science, and cognitive science. The description also includes the categorization of current ANLP research, and examples of current research in ANLP. This book is a useful reference for teachers, students, and materials developers in fields spanning linguistics, computer science, and cognitive science.
This book brings together scientists, researchers, practitioners, and students from academia and industry to present recent and ongoing research activities concerning the latest advances, techniques, and applications of natural language processing systems, and to promote the exchange of new ideas and lessons learned. Taken together, the chapters of this book provide a collection of high-quality research works that address broad challenges in both theoretical and applied aspects of intelligent natural language processing. The book presents the state-of-the-art in research on natural language processing, computational linguistics, applied Arabic linguistics and related areas. New trends in natural language processing systems are rapidly emerging - and finding application in various domains including education, travel and tourism, and healthcare, among others. Many issues encountered during the development of these applications can be resolved by incorporating language technology solutions. The topics covered by the book include: Character and Speech Recognition; Morphological, Syntactic, and Semantic Processing; Information Extraction; Information Retrieval and Question Answering; Text Classification and Text Mining; Text Summarization; Sentiment Analysis; Machine Translation; Building and Evaluating Linguistic Resources; and Intelligent Language Tutoring Systems.
Recent advances in the fields of knowledge representation, reasoning and human-computer interaction have paved the way for a novel approach to treating and handling context. The field of research presented in this book addresses the problem of contextual computing in artificial intelligence based on the state of the art in knowledge representation and human-computer interaction. The author puts forward a knowledge-based approach for employing high-level context in order to solve some persistent and challenging problems in the chosen showcase domain of natural language understanding. Specifically, the problems addressed concern the handling of noise due to speech recognition errors, semantic ambiguities, and the notorious problem of underspecification. Consequently, the book examines the individual contributions of contextual computing for different types of context. To this end, contextual information stemming from the domain at hand, prior discourse, and the specific user and real-world situation is considered and integrated in a formal model that is applied and evaluated employing different multimodal mobile dialog systems. This book is intended to meet the needs of readers from at least three fields - AI and computer science; computational linguistics; and natural language processing - as well as some computationally oriented linguists, making it a valuable resource for scientists, researchers, lecturers, language processing practitioners and professionals as well as postgraduates and some undergraduates in the aforementioned fields. "The book addresses a problem of great and increasing technical and practical importance - the role of context in natural language processing (NLP). It considers the role of context in three important tasks: Automatic Speech Recognition, Semantic Interpretation, and Pragmatic Interpretation. Overall, the book represents a novel and insightful investigation into the potential of contextual information processing in NLP."
Jerome A Feldman, Professor of Electrical Engineering and Computer Science, UC Berkeley, USA http://dm.tzi.de/research/contextual-computing/
Collaboratively Constructed Language Resources (CCLRs) such as Wikipedia, Wiktionary, Linked Open Data, and various resources developed using crowdsourcing techniques such as Games with a Purpose and Mechanical Turk have substantially contributed to the research in natural language processing (NLP). Various NLP tasks utilize such resources to substitute for or supplement conventional lexical semantic resources and linguistically annotated corpora. These resources also provide an extensive body of texts from which valuable knowledge is mined. There are an increasing number of community efforts to link and maintain multiple linguistic resources.
A guide to the use of SVMs in pattern classification, including a rigorous performance comparison of classifiers and regressors. The book presents architectures for multiclass classification and function approximation problems, as well as evaluation criteria for classifiers and regressors. Features: Clarifies the characteristics of two-class SVMs; Discusses kernel methods for improving the generalization ability of neural networks and fuzzy systems; Contains ample illustrations and examples; Includes performance evaluation using publicly available data sets; Examines Mahalanobis kernels, empirical feature space, and the effect of model selection by cross-validation; Covers sparse SVMs, learning using privileged information, semi-supervised learning, multiple classifier systems, and multiple kernel learning; Explores incremental training based on batch training and active-set training methods, and decomposition techniques for linear programming SVMs; Discusses variable selection for support vector regressors.
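As a flavor of the two-class SVMs such a book treats, here is a minimal sketch of a linear SVM trained by stochastic subgradient descent on the hinge loss (Pegasos-style). The synthetic data, seed, and hyperparameters are illustrative assumptions, not the book's algorithms or experiments.

```python
import numpy as np

# Two well-separated Gaussian clusters with labels -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),
               rng.normal(+2.0, 1.0, size=(50, 2))])
y = np.array([-1.0] * 50 + [+1.0] * 50)

w, b, lam = np.zeros(2), 0.0, 0.01  # weights, bias, regularization
for t in range(1, 2001):
    i = int(rng.integers(len(X)))
    eta = 1.0 / (lam * t)           # decaying step size
    margin = y[i] * (X[i] @ w + b)
    w *= (1 - eta * lam)            # shrinkage from the L2 regularizer
    if margin < 1:                  # hinge-loss subgradient step
        w += eta * y[i] * X[i]
        b += eta * y[i]

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```

On separable data like this, the learned hyperplane classifies nearly all training points correctly; kernel SVMs as discussed in the book generalize the same idea to nonlinear decision boundaries.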
The computational approach of this book is aimed at simulating the human ability to understand various kinds of phrases with a novel metaphoric component. That is, interpretations of metaphor as literal paraphrases are based on literal meanings of the metaphorically used words. This method distinguishes itself from statistical approaches, which in general do not account for novel usages, and from efforts directed at metaphor constrained to one type of phrase or to a single topic domain. The more interesting and novel metaphors appear to be based on concepts generally represented as nouns, since such concepts can be understood from a variety of perspectives. The core of the process of interpreting nominal concepts is to represent them in such a way that readers or hearers can infer which aspect(s) of the nominal concept is likely to be intended to be applied to its interpretation. These aspects are defined in terms of verbal and adjectival predicates. A section on the representation and processing of part-sentence verbal metaphor will therefore also serve as preparation for the representation of salient aspects of metaphorically used nouns. As the ability to process metaphorically used verbs and nouns facilitates the interpretation of more complex tropes, computational analysis of two other kinds of metaphorically based expressions are outlined: metaphoric compound nouns, such as "idea factory" and, together with the representation of inferences, modified metaphoric idioms, such as "Put the cat back into the bag".
Proactive Spoken Dialogue Interaction in Multi-Party Environments describes spoken dialogue systems that act as independent dialogue partners in the conversation with and between users. The resulting novel characteristics, such as proactiveness and multi-party capabilities, pose new challenges for the dialogue management component of such a system and require the use and administration of an extensive dialogue history. In order to assist the development of proactive spoken dialogue systems, a comprehensive data collection seems mandatory and may be performed in a Wizard-of-Oz environment. Such an environment also provides an appropriate basis for an extensive usability and acceptance evaluation. Proactive Spoken Dialogue Interaction in Multi-Party Environments is a useful reference for students and researchers in speech processing.
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
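The maximization described above can be made concrete in a few lines: pick the interpretation maximizing prior times likelihood. The interpretations and the probability values below are invented purely for illustration.

```python
# Prior probability of each candidate interpretation of an ambiguous
# word in "we sat by the bank" (hypothetical numbers).
prior = {"bank=riverside": 0.3, "bank=institution": 0.7}

# Likelihood P(utterance | interpretation): how likely a speaker with
# each meaning in mind would produce the observed sentence.
likelihood = {
    "bank=riverside": 0.6,    # the sentence fits the riverside sense well
    "bank=institution": 0.1,
}

def map_interpretation(prior, likelihood):
    # argmax over interpretations i of P(i) * P(utterance | i)
    return max(prior, key=lambda i: prior[i] * likelihood[i])

best = map_interpretation(prior, likelihood)
# riverside: 0.3 * 0.6 = 0.18 beats institution: 0.7 * 0.1 = 0.07
```

Note how the likelihood (the production model) can overturn a prior that favors the other sense, which is exactly the point the blurb makes about Grice-style interpretation.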
There are not many people who can be said to have influenced and impressed researchers in so many disparate areas and language-geographic fields as Lauri Carlson, as is evidenced in the present Festschrift. His insight and acute linguistic sensitivity and linguistic rationality have spawned findings and research work in many areas, from non-standard etymology to hardcore formal linguistics, not forgetting computational areas such as parsing, terminological databases, and, last but not least, machine translation. In addition to his renowned and widely acknowledged insights in tense and aspect and their relationship with nominal quantification, and his ground-breaking work in dialog using game-theoretic machinery, Lauri has in the last fifteen years as Professor of Language Theory and Translation Technology contributed immensely to areas such as translation, terminology and general applications of computational linguistics. The three editors of the present volume have successfully performed doctoral studies under Lauri's supervision, and wish with this volume to pay tribute to his supervision and to his influence in matters associated with research and scientific, linguistic and philosophical inquiry, as well as to his humanity and friendship.
This book draws on the recent remarkable advances in speech and language processing: advances that have moved speech technology beyond basic applications such as medical dictation and telephone self-service to increasingly sophisticated and clinically significant applications aimed at complex speech and language disorders. The book provides an introduction to the basic elements of speech and natural language processing technology, and illustrates their clinical potential by reviewing speech technology software currently in use for disorders such as autism and aphasia. The discussion is informed by the authors' own experiences in developing and investigating speech technology applications for these populations. Topics include detailed examples of speech and language technologies in both remediative and assistive applications, overviews of a number of current applications, and a checklist of criteria for selecting the most appropriate applications for particular user needs. This book will be of benefit to four audiences: application developers who are looking to apply these technologies; clinicians who are looking for software that may be of value to their clients; students of speech-language pathology and application development; and finally, people with speech and language disorders and their friends and family members.
The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested using real texts written in different languages, such as English, French, German, Latin, Hindi and Gujarati, which have been extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing-based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.
There is increasing interaction among communities with multiple languages, and thus we need services that can effectively support multilingual communication. The Language Grid is an initiative to build an infrastructure that allows end users to create composite language services for intercultural collaboration. The aim is to support communities in creating customized multilingual environments by using language services to overcome local language barriers. The stakeholders of the Language Grid are the language resource providers, the language service users, and the language grid operators who coordinate the former. This book includes 18 chapters in six parts that summarize various research results and associated development activities on the Language Grid. The chapters in Part I describe the framework of the Language Grid, i.e., service-oriented collective intelligence, used to bridge providers, users and operators. Two kinds of software are introduced, the service grid server software and the Language Grid Toolbox, and code for both is available via open source licenses. Part II describes technologies for service workflows that compose atomic language services. Part III reports on research work and activities relating to sharing and using language services. Part IV describes various applications of language services as applicable to intercultural collaboration. Part V contains reports on applying the Language Grid for translation activities, including localization of industrial documents and Wikipedia articles. Finally, Part VI illustrates how the Language Grid can be connected to other service grids, such as DFKI's Heart of Gold and smart classroom services at Tsinghua University in Beijing. The book will be valuable for researchers in artificial intelligence, natural language processing, services computing and human-computer interaction, particularly those who are interested in bridging technologies and user communities.
This book is written for both linguists and computer scientists working in the field of artificial intelligence, as well as for anyone interested in intelligent text processing. Lexical function is a concept that formalizes semantic and syntactic relations between lexical units. Collocational relation is a type of institutionalized lexical relation which holds between the base and its partner in a collocation. Knowledge of collocation is important for natural language processing because collocation comprises the restrictions on how words can be used together. The book shows how collocations can be annotated with lexical functions in a computer-readable dictionary - allowing their precise semantic analysis in texts and their effective use in natural language applications including parsers, high-quality machine translation, periphrasis systems and computer-aided learning of lexica. The book also shows how to extract collocations from corpora and annotate them with lexical functions automatically. To train the algorithms, the authors created a dictionary of lexical functions containing more than 900 disambiguated and annotated Spanish examples, which is part of this book. The results obtained show that machine learning is a feasible approach to the task of automatic detection of lexical functions.
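A common first step in extracting collocations from corpora, as the book discusses, is to rank word pairs by an association measure such as pointwise mutual information (PMI). The toy corpus and the PMI criterion below are illustrative assumptions, not the authors' lexical-function annotation pipeline.

```python
import math
from collections import Counter

# Tiny illustrative corpus in which "strong tea" recurs as a pair.
corpus = ("strong tea strong tea weak tea strong coffee "
          "make tea make coffee strong argument").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))  # adjacent word pairs
N = len(corpus)

def pmi(w1, w2):
    # PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) )
    p_joint = bigrams[(w1, w2)] / (N - 1)
    return math.log2(p_joint / ((unigrams[w1] / N) * (unigrams[w2] / N)))

# Rank recurring adjacent pairs by PMI; the frequency filter avoids
# the classic PMI bias toward one-off pairs of rare words.
ranked = sorted((b for b in bigrams if bigrams[b] >= 2),
                key=lambda b: pmi(*b), reverse=True)
```

On real corpora one would use larger windows, significance tests, and the frequency thresholds the literature recommends, but the principle is the same: collocations are pairs that co-occur far more often than chance predicts.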
The theory of formal languages is widely accepted as the backbone of theoretical computer science. It mainly originated from mathematics (combinatorics, algebra, mathematical logic) and generative linguistics. Later, new specializations emerged from areas of either computer science (concurrent and distributed systems, computer graphics, artificial life), biology (plant development, molecular genetics), linguistics (parsing, text searching), or mathematics (cryptography). All human problem solving capabilities can be considered, in a certain sense, as a manipulation of symbols and structures composed of symbols, which is actually the stem of formal language theory. Language - in its two basic forms, natural and artificial - is a particular case of a symbol system. This wide range of motivations and inspirations explains the diverse applicability of formal language theory - and all these together explain the very large number of monographs and collective volumes dealing with formal language theory. In 2004 Springer-Verlag published the volume Formal Languages and Applications, edited by C. Martin-Vide, V. Mitrana and G. Păun in the series Studies in Fuzziness and Soft Computing 148, which was aimed at serving as an overall course-aid and self-study material especially for PhD students in formal language theory and applications. Actually, the volume emerged in such a context: it contains the core information from many of the lectures delivered to the students of the International PhD School in Formal Languages and Applications organized since 2002 by the Research Group on Mathematical Linguistics from Rovira i Virgili University, Tarragona, Spain.
The volume "Genres on the Web" has been designed for a wide audience, from the expert to the novice. It is a required book for scholars, researchers and students who want to become acquainted with the latest theoretical, empirical and computational advances in the expanding field of web genre research. The study of web genre is an overarching and interdisciplinary novel area of research that spans from corpus linguistics, computational linguistics, NLP, and text-technology, to web mining, webometrics, social network analysis and information studies. This book gives readers a thorough grounding in the latest research on web genres and emerging document types, covering a wide range of web-genre focused subjects. One of the driving forces behind genre research is the idea of a genre-sensitive information system, which incorporates genre cues to complement current keyword-based search and retrieval applications.
The design of formal calculi in which fundamental concepts underlying interactive systems can be described and studied has been a central theme of theoretical computer science in recent decades, while membrane computing, a rule-based formalism inspired by biological cells, is a more recent field that belongs to the general area of natural computing. This is the first book to establish a link between these two research directions while treating mobility as the central topic. In the first chapter the authors offer a formal description of mobility in process calculi, noting the entities that move: links (π-calculus), ambients (ambient calculi) and branes (brane calculi). In the second chapter they study mobility in the framework of natural computing. The authors define several systems of mobile membranes in which the movement inside a spatial structure is provided by rules inspired by endocytosis and exocytosis. They study their computational power in comparison with the classical notion of Turing computability and their efficiency in algorithmically solving hard problems in polynomial time. The final chapter deals with encodings, establishing links between process calculi and membrane computing so that researchers can share techniques between these fields. The book is suitable for computer scientists working in concurrency and in biologically inspired formalisms, and also for mathematically inclined scientists interested in formalizing moving agents and biological phenomena. The text is supported with examples and exercises, so it can also be used for courses on these topics.
Research in Natural Language Processing (NLP) has rapidly advanced in recent years, resulting in exciting algorithms for sophisticated processing of text and speech in various languages. Much of this work focuses on English; in this book we address another group of interesting and challenging languages for NLP research: the Semitic languages. The Semitic group of languages includes Arabic (206 million native speakers), Amharic (27 million), Hebrew (7 million), Tigrinya (6.7 million), Syriac (1 million) and Maltese (419 thousand). Semitic languages exhibit unique morphological processes, challenging syntactic constructions and various other phenomena that are less prevalent in other natural languages. These challenges call for unique solutions, many of which are described in this book. The 13 chapters presented in this book bring together leading scientists from several universities and research institutes worldwide. While this book devotes some attention to cutting-edge algorithms and techniques, its primary purpose is a thorough explication of best practices in the field. Furthermore, every chapter describes how the techniques discussed apply to Semitic languages. The book covers both statistical approaches to NLP, which are dominant across various applications nowadays, and the more traditional rule-based approaches, which have proven useful for several other application domains. We hope that this book will provide a "one-stop shop" for all the requisite background and practical advice when building NLP applications for Semitic languages.
This book celebrates the work of Yorick Wilks in the form of a selection of his papers which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The selected papers reflect Yorick's contribution to both practical and theoretical aspects of automatic language processing.
Parsing can be defined as the decomposition of complex structures into their constituent parts, and parsing technology as the methods, the tools and the software to parse automatically. Parsing is a central area of research in the automatic processing of human language. Parsers are being used in many application areas, for example question answering, extraction of information from text, speech recognition and understanding, and machine translation. New developments in parsing technology are thus widely applicable.
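The decomposition into constituent parts described above can be sketched with a toy CYK recognizer over a context-free grammar in Chomsky normal form; the grammar and the example sentence are illustrative assumptions, not material from the book.

```python
# Toy grammar in Chomsky normal form (hypothetical example).
lexical = {              # preterminal rules: A -> word
    "Det": {"the"},
    "N": {"dog", "cat"},
    "V": {"chased"},
}
binary = {               # binary rules: A -> B C
    "S": [("NP", "VP")],
    "NP": [("Det", "N")],
    "VP": [("V", "NP")],
}

def cyk_parses(words):
    """Return the nonterminals that derive the whole word sequence."""
    n = len(words)
    # chart[i][j] = set of nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for nt, terminals in lexical.items():
            if w in terminals:
                chart[i][i + 1].add(nt)
    for span in range(2, n + 1):            # grow spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):       # try every split point
                for nt, rules in binary.items():
                    for (b, c) in rules:
                        if b in chart[i][k] and c in chart[k][j]:
                            chart[i][j].add(nt)
    return chart[0][n]

result = cyk_parses("the dog chased the cat".split())
# "S" in result means the sentence is grammatical under the toy grammar
```

Production parsers use far richer grammars and statistical models, but the chart-based decomposition into constituents is the same principle.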