Two Top Industry Leaders Speak Out Judith Markowitz When Amy asked me to co-author the foreword to her new book on advances in speech recognition, I was honored. Amy's work has always been infused with creative intensity, so I knew the book would be as interesting for established speech professionals as for readers new to the speech-processing industry. The fact that I would be writing the foreword with Bill Scholz made the job even more enjoyable. Bill and I have known each other since he was at UNISYS directing projects that had a profound impact on speech-recognition tools and applications. Bill Scholz The opportunity to prepare this foreword with Judith provides me with a rare opportunity to collaborate with a seasoned speech professional to identify numerous significant contributions to the field offered by the contributors whom Amy has recruited. Judith and I have had our eyes opened by the ideas and analyses offered by this collection of authors. Speech recognition no longer needs to be relegated to the category of an experimental future technology; it is here today with sufficient capability to address the most challenging of tasks. And the point-click-type approach to GUI control is no longer sufficient, especially in the context of the limitations of modern-day handheld devices. Instead, VUI and GUI are being integrated into unified multimodal solutions that are maturing into the fundamental paradigm for computer-human interaction in the future.
The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to become familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics, ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines such as physics, linguistics, computer science and information science. It may also be of interest for the upcoming area of systems biology, with which the chapters collected here share the view on systems from the point of view of network analysis.
This book encompasses a collection of topics covering recent advances that are important to the Arabic language in areas of natural language processing, speech and image analysis. This book presents state-of-the-art reviews and fundamentals as well as applications and recent innovations. The book chapters by top researchers present basic concepts and challenges for the Arabic language in linguistic processing, handwritten recognition, document analysis, text classification and speech processing. In addition, it reports on selected applications in sentiment analysis, annotation, text summarization, speech and font analysis, word recognition and spotting and question answering. Moreover, it highlights and introduces some novel applications in vital areas for the Arabic language. The book is therefore a useful resource for young researchers who are interested in the Arabic language and are still developing their fundamentals and skills in this area. It is also interesting for scientists who wish to keep track of the most recent research directions and advances in this area.
The Generalized LR parsing algorithm (some call it "Tomita's algorithm") was originally developed in 1985 as part of my Ph.D. thesis at Carnegie Mellon University. When I was a graduate student at CMU, I tried to build a couple of natural language systems based on existing parsing methods. Their parsing speed, however, always bothered me. I sometimes wondered whether it was ever possible to build a natural language parser that could parse reasonably long sentences in a reasonable time without help from large mainframe machines. At the same time, I was always amazed by the speed of programming language compilers, because they can parse very long sentences (i.e., programs) very quickly even on workstations. There are two reasons for this. First, programming languages are considerably simpler than natural languages. And secondly, they have very efficient parsing methods, most notably LR. The LR parsing algorithm first precompiles a grammar into an LR parsing table, and at actual parsing time it performs shift-reduce parsing guided deterministically by the parsing table. So the key to LR efficiency is the grammar precompilation; something that had never been tried for natural languages in 1985. Of course, there was a good reason why LR had never been applied to natural languages; it was simply impossible. If your context-free grammar is sufficiently more complex than programming languages, its LR parsing table will have multiple actions, and deterministic parsing will no longer be possible.
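The conflict described above can be illustrated with a toy sketch: where a deterministic LR parser would have to commit to one action, a generalized parser pursues every applicable shift and reduce and so recovers every parse of an ambiguous sentence. The sketch below is only an illustration of that fork-on-conflict idea with a made-up two-rule grammar; a real GLR parser is driven by a precompiled LR table and shares work through a graph-structured stack.

```python
# Toy generalized shift-reduce recognizer: instead of following a single
# deterministic action from an LR table, we fork on every applicable
# shift/reduce choice, which is the core idea behind GLR parsing.
# (Illustrative only; grammar and input are hypothetical examples.)

def parses(tokens, rules, start):
    """Count the distinct parse trees that reduce `tokens` to `start`."""
    def step(stack, rest):
        count = 0
        if stack == [start] and not rest:
            count += 1                      # fully reduced: one parse found
        for lhs, rhs in rules:              # try every reduction at the top
            n = len(rhs)
            if n and tuple(stack[-n:]) == rhs:
                count += step(stack[:-n] + [lhs], rest)
        if rest:                            # and try shifting the next token
            count += step(stack + [rest[0]], rest[1:])
        return count
    return step([], list(tokens))

# Ambiguous grammar: S -> S S | 'a'
rules = [('S', ('S', 'S')), ('S', ('a',))]
print(parses("aaa", rules, 'S'))  # 2 trees: (S S) S  versus  S (S S)
```

A deterministic LR parser would reject this grammar outright because its table has a shift/reduce conflict; the forking search simply explores both actions, at exponential cost here, which Tomita's graph-structured stack reduces to polynomial.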
Parsing efficiency is crucial when building practical natural language systems. This is especially the case for interactive systems such as natural language database access, interfaces to expert systems and interactive machine translation. Despite its importance, parsing efficiency has received little attention in the area of natural language processing. In the areas of compiler design and theoretical computer science, on the other hand, parsing algorithms have been evaluated primarily in terms of theoretical worst-case analysis (e.g., O(n^3)), and very few practical comparisons have been made. This book introduces a context-free parsing algorithm that parses natural language more efficiently than any other existing parsing algorithm in practice. Its feasibility for use in practical systems is being proven in its application to a Japanese language interface at Carnegie Group Inc., and to the continuous speech recognition project at Carnegie-Mellon University. This work was done while I was pursuing a Ph.D. degree at Carnegie-Mellon University. My advisers, Herb Simon and Jaime Carbonell, deserve many thanks for their unfailing support, advice and encouragement during my graduate studies. I would like to thank Phil Hayes and Ralph Grishman for their helpful comments and criticism that in many ways improved the quality of this book. I wish also to thank Steven Brooks for insightful comments on theoretical aspects of the book (chapter 4, appendices A, B and C), and Rich Thomason for improving the linguistic part of the book (the very beginning of section 1.1).
Addresses a central problem in cognitive science, concerning the learning procedures through which humans acquire and represent natural language. Brings together world-leading scholars from a range of disciplines, including computational linguistics, psychology, behavioural science, and mathematical linguistics. Will appeal to researchers in computational and mathematical linguistics, psychology and behavioural science, AI and NLP. Represents a wide spectrum of perspectives.
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors have been drawn from departments of linguistics, cognitive science, psychology, and computer science. They show what light can be thrown on fundamental problems when powerful computational techniques are combined with real data. The book considers the extent to which linguistic structure is readily available in the environment, the degree to which language learning is inductive or deductive, and the power of different modelling formalisms for different problems and approaches. It will appeal to linguists, psychologists, and cognitive scientists working in language acquisition, and to those involved in computational modelling in linguistic and behavioural science.
Social media platforms are ubiquitous in our daily lives and are steadily transforming the ways people communicate, socialize and conduct business. However, the growing popularity of social media has also led to the widespread dissemination of unreliable information. This in turn creates a serious pollution problem in the global social media environment, which is harmful to society. For example, President Donald Trump used social media strategically to win the 2016 US Presidential Election, yet many of the messages he delivered over social media were found to be unproven, if not untrue. This problem must be prevented at all costs and as soon as possible. Thus, analysis of social media content is a pressing issue and a timely and important research subject worldwide. However, the short and informal nature of social media messages renders conventional content analysis, which is based on natural language processing (NLP), ineffective. This volume consists of a collection of highly relevant scientific articles published by the authors in different international conferences and journals, and is divided into three distinct parts: (I) search and filtering; (II) opinion and sentiment analysis; and (III) event detection and summarization. It presents the latest advances in NLP technologies for social media content analysis, especially content on microblogging platforms such as Twitter and Weibo.
This text presents the formal concepts underlying Computer Science. It starts with a wide introduction to Logic with an emphasis on reasoning and proof, with chapters on Program Verification and Prolog. The treatment of computability with Automata and Formal Languages stands out in several ways. The style is appropriate for both undergraduate and graduate classes.
As natural language processing spans many different disciplines, it is sometimes difficult to understand the contributions and the challenges that each of them presents. This book explores the special relationship between natural language processing and cognitive science, and the contribution of computer science to these two fields. It is based on recent research papers submitted to the international workshops on Natural Language Processing and Cognitive Science (NLPCS), launched in 2004 in an effort to bring together natural language researchers, computer scientists, and cognitive and linguistic scientists to collaborate and advance research in natural language processing. The chapters cover areas related to language understanding, language generation, word association, word sense disambiguation, word predictability, text production and authorship attribution. This book will be relevant to students and researchers interested in the interdisciplinary nature of language processing.
Natural language understanding is central to the goals of artificial intelligence. Any truly intelligent machine must be capable of carrying on a conversation: dialogue, particularly clarification dialogue, is essential if we are to avoid disasters caused by the misunderstanding of the intelligent interactive systems of the future. This book is an interim report on the grand enterprise of devising a machine that can use natural language as fluently as a human. What has really been achieved since this goal was first formulated in Turing's famous test? What obstacles still need to be overcome?
The impact of computer systems that can understand natural language will be tremendous. To develop this capability we need to be able to automatically and efficiently analyze large amounts of text. Manually devised rules are not sufficient to provide coverage of the complex structure of natural language, necessitating systems that can automatically learn from examples. To handle the flexibility of natural language, it has become standard practice to use statistical models, which assign probabilities, for example, to the different meanings of a word or the plausibility of grammatical constructions. This book develops a general coarse-to-fine framework for learning and inference in large statistical models for natural language processing. Coarse-to-fine approaches exploit a sequence of models which introduce complexity gradually. At the top of the sequence is a trivial model in which learning and inference are both cheap. Each subsequent model refines the previous one, until a final, full-complexity model is reached. Applications of this framework to syntactic parsing, speech recognition and machine translation are presented, demonstrating the effectiveness of the approach in terms of accuracy and speed. The book is intended for students and researchers interested in statistical approaches to Natural Language Processing. "Slav's work Coarse-to-Fine Natural Language Processing represents a major advance in the area of syntactic parsing, and a great advertisement for the superiority of the machine-learning approach." Eugene Charniak (Brown University)
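The sequence-of-models idea in the blurb above can be sketched in a few lines: a cheap coarse model prunes the candidate space, and each more expensive model is run only on the survivors. The models, scores, and beam width below are hypothetical stand-ins for illustration, not the book's actual parsers.

```python
# Minimal coarse-to-fine pruning sketch: score candidates with each model
# in turn (ordered cheap -> expensive), keeping only those whose score is
# within `beam` of the best before invoking the next, costlier model.
# (Models and numbers here are made-up toy examples.)

def coarse_to_fine(candidates, models, beam=0.5):
    survivors = list(candidates)
    for score in models:                       # cheapest model first
        scored = [(score(c), c) for c in survivors]
        best = max(s for s, _ in scored)
        survivors = [c for s, c in scored if s >= best - beam]
    return survivors

# Hypothetical example: choosing a part-of-speech tag for "book".
coarse = lambda tag: {'N': 1.0, 'V': 0.8, 'ADJ': 0.1}.get(tag, 0.0)  # cheap
fine   = lambda tag: {'N': 0.9, 'V': 0.3}.get(tag, 0.0)              # costly
print(coarse_to_fine(['N', 'V', 'ADJ'], [coarse, fine]))  # ['N']
```

The coarse pass discards 'ADJ' before the fine model ever sees it, which is where the speed-ups come from: the full-complexity model is only evaluated on a small, pre-filtered set.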
The content of this textbook is organized as a theory of language for the construction of talking robots. The main topic is the mechanism of natural language communication in both the speaker and the hearer. In the third edition the author has modernized the text, leaving the overview of traditional, theoretical, and computational linguistics, analytic philosophy of language, and mathematical complexity theory with their historical backgrounds intact. The format of the empirical analyses of English and German syntax and semantics has been adapted to current practice; and Chaps. 22-24 have been rewritten to focus more sharply on the construction of a talking robot.
This book records a unique attempt over a ten-year period to use stochastic optimization in the natural language processing domain. Setting the work against the background of the logical rule-based approach, the author provides a context for understanding the differences in assumptions about the nature of language and cognition.
The 1990s saw a paradigm change in the use of corpus-driven methods in NLP. In the field of multilingual NLP (such as machine translation and terminology mining) this implied the use of parallel corpora. However, parallel resources are relatively scarce: many more texts are produced daily by native speakers of any given language than are translated. This situation resulted in a natural drive towards the use of comparable corpora, i.e. non-parallel texts in the same domain or genre. Nevertheless, this research direction has not produced a single authoritative source suitable for researchers and students coming to the field. This volume provides such a reference source, identifying the state of the art in the field as well as future trends. The book is intended for specialists and students in natural language processing, machine translation and computer-assisted translation.
In light of the rapid rise of new trends and applications in various natural language processing tasks, this book presents high-quality research in the field. Each chapter addresses a common challenge in a theoretical or applied aspect of intelligent natural language processing related to Arabic language. Many challenges encountered during the development of the solutions can be resolved by incorporating language technology and artificial intelligence. The topics covered include machine translation; speech recognition; morphological, syntactic, and semantic processing; information retrieval; text classification; text summarization; sentiment analysis; ontology construction; Arabizi translation; Arabic dialects; Arabic lemmatization; and building and evaluating linguistic resources. This book is a valuable reference for scientists, researchers, and students from academia and industry interested in computational linguistics and artificial intelligence, especially for Arabic linguistics and related areas.
This text introduces the semantic aspects of natural language processing and its applications. Topics covered include measuring word-meaning similarity, multilingual querying, parametric theory, named entity recognition, semantics, query languages, and the nature of language. The book also emphasizes the portions of mathematics needed to understand the discussed algorithms.
This Pivot reconsiders the controversial literary figure of Lin Shu and the debate surrounding his place in the history of Modern Chinese Literature. Although recent Chinese mainland research has recognized some of the innovations introduced by Lin Shu, he has often been labeled a 'rightist reformer' in contrast to 'leftist reformers' such as Chen Duxiu and the new wave scholars of the May Fourth Movement. This book provides a well-documented account of his place in the different polemics between these two circles ('conservatives' and 'reformers') and offers a more nuanced account of the different literary movements of the time. Notably, it argues that these differences were neither in content nor in politics, but in the methodological approach of both parties. Examining how Lin Shu and the 'conservatives' advocated the coexistence of traditional and modern thought, the book provides background to the major changes occurring in the intellectual landscape of Modern China.
In the not so distant future, we can expect a world where humans and robots coexist and interact with each other. For this to occur, we need to understand human traits, such as seeing, hearing, thinking, and speaking, and institute these traits in robots. The most essential capability for robots to achieve is integrative multimedia understanding (IMU), which occurs naturally in humans. It allows us to assimilate pieces of information expressed through different modes such as speech, pictures, and gestures. The book describes how robots acquire traits like natural language understanding (NLU) as the central part of IMU. Mental image directed semantic theory (MIDST) is its core, and is based on the hypothesis that NLU is essentially the processing of the mental images associated with natural language expressions, namely, mental-image based understanding (MBU). MIDST is intended to model omnisensory mental images in humans and to afford a knowledge representation system for the integrative management of knowledge subject to the cognitive mechanisms of intelligent entities such as humans and robots, based on a mental image model visualized as 'Loci in Attribute Spaces' and its description language Lmd (mental image description language), employed as predicate logic with a systematic scheme for symbol grounding. This language works as an interlingua among various kinds of information media, and has been applied to several versions of the intelligent system IMAGES (interlingual understanding model aiming at general systems). Its latest version, the conversation management system (CMS), simulates MBU and comprehends the user's intention through dialogue to find and solve problems, and finally provides a response in text or animation. The book is aimed at researchers and students interested in artificial intelligence, robotics, and cognitive science.
Based on philosophical considerations, the methodology will also appeal to linguistics, psychology, ontology, geography, and cartography. Key features: describes the methodology for providing robots with a human-like capability of natural language understanding (NLU) as the central part of IMU; uses a methodology that also relates to linguistics, psychology, ontology, geography, and cartography; examines current trends in machine translation.
This book reviews ways to improve statistical machine speech translation between Polish and English. Research has been conducted mostly on dictionary-based, rule-based, and syntax-based machine translation techniques. The most popular methodologies and tools are not well suited to the Polish language and therefore require adaptation, and language resources are lacking in parallel and monolingual data. The main objective of this volume is to develop an automatic and robust Polish-to-English translation system to meet specific translation requirements and to develop bilingual textual resources by mining comparable corpora.
This book presents a unique opportunity for constructing a consistent image of collaborative manual annotation for Natural Language Processing (NLP). NLP has witnessed two major evolutions in the past 25 years: firstly, the extraordinary success of machine learning, which is now, for better or for worse, overwhelmingly dominant in the field, and secondly, the multiplication of evaluation campaigns or shared tasks. Both involve manually annotated corpora, for the training and evaluation of the systems. These corpora have progressively become the hidden pillars of our domain, providing food for our hungry machine learning algorithms and reference for evaluation. Annotation is now the place where linguistics hides in NLP. However, manual annotation has largely been ignored for some time, and it has taken a while even for annotation guidelines to be recognized as essential. Although some efforts have been made lately to address some of the issues presented by manual annotation, there has still been little research done on the subject. This book aims to provide some useful insights into the subject. Manual corpus annotation is now at the heart of NLP, and is still largely unexplored. There is a need for manual annotation engineering (in the sense of a precisely formalized process), and this book aims to provide a first step towards a holistic methodology, with a global view on annotation.
A reconsideration of the semantics of a lexical category, prepositions, that has recently witnessed a plethora of investigations. The volume approaches the issue first from a more general perspective, namely the extent to which insights into the meaning of prepositions give clues to their semantic structure.
Research into Natural Language Processing - the use of computers to process language - has developed over the last couple of decades into one of the most vigorous and interesting areas of current work on language and communication. This book introduces the subject through the discussion and development of various computer programs which illustrate some of the basic concepts and techniques in the field. The programming language used is Prolog, which is especially well suited both to Natural Language Processing and to those with little or no background in computing. Following the general introduction, the first section of the book presents Prolog, and the following chapters illustrate how various Natural Language Processing programs may be written using this programming language. Since it is assumed that the reader has no previous experience in programming, great care is taken to provide a simple yet comprehensive introduction to Prolog. Due to the 'user friendly' nature of Prolog, simple yet effective programs may be written from an early stage. The reader is gradually introduced to various techniques for syntactic processing, ranging from Finite State Network recognisers to Chart parsers. An integral element of the book is the comprehensive set of exercises included in each chapter as a means of cementing the reader's understanding of each topic. Suggested answers are also provided. An Introduction to Natural Language Processing Through Prolog is an excellent introduction to the subject for students of linguistics and computer science, and will be especially useful for those with no background in the subject.
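The Finite State Network recognisers mentioned above are simple enough to sketch: a network is a set of labelled transitions between states, and a sentence is accepted if some path over its words leads from the start state to a final state. The book develops this in Prolog; the sketch below is only a Python analogue over a made-up toy network.

```python
# Toy Finite State Network recogniser: `network` is a list of
# (source_state, word, target_state) transitions. A word list is accepted
# if consuming it word by word can reach a final state.
# (The network below is a hypothetical illustration.)

def accepts(network, finals, state, words):
    if not words:
        return state in finals              # input consumed: are we final?
    return any(accepts(network, finals, nxt, words[1:])
               for (src, word, nxt) in network
               if src == state and word == words[0])

# Toy network for sentences like "the dog barks" / "a cat sleeps".
net = [(0, 'the', 1), (0, 'a', 1),
       (1, 'dog', 2), (1, 'cat', 2),
       (2, 'barks', 3), (2, 'sleeps', 3)]
print(accepts(net, {3}, 0, ['the', 'dog', 'barks']))  # True
print(accepts(net, {3}, 0, ['dog', 'the', 'barks']))  # False
```

Such recognisers only accept or reject; the Chart parsers the book moves on to additionally build structure and share partial results, which is why they handle ambiguity far more efficiently.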