1. Main assumptions, objectives and conditionings 1.1. The present book is concerned with certain problems in the logical philosophy of language. It is written in the spirit of the Polish logical, philosophical, and semiotic tradition, and presents two conceptions of the syntax of categorial languages: the theory of simple languages, i.e., languages which include neither variables nor the operators that bind them (for instance, large fragments of natural languages, languages of well-known sentential calculi, the language of Aristotle's traditional syllogistic, and languages of equationally definable algebras), and the theory of ω-languages, i.e., languages which include operators and the variables bound by them.
This two-volume set, consisting of LNCS 8403 and LNCS 8404, constitutes the thoroughly refereed proceedings of the 14th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2014, held in Kathmandu, Nepal, in April 2014. The 85 revised papers presented together with 4 invited papers were carefully reviewed and selected from 300 submissions. The papers are organized in the following topical sections: lexical resources; document representation; morphology, POS-tagging, and named entity recognition; syntax and parsing; anaphora resolution; recognizing textual entailment; semantics and discourse; natural language generation; sentiment analysis and emotion recognition; opinion mining and social networks; machine translation and multilingualism; information retrieval; text classification and clustering; text summarization; plagiarism detection; style and spelling checking; speech processing; and applications.
QUALICO was held for the first time as an international conference to demonstrate the state of the art in quantitative linguistics. This domain of language study and research is gaining considerable interest due to recent advances in linguistic modelling, particularly in computational linguistics, cognitive science, and developments in mathematics like modern systems theory. Progress in hardware and software technology, together with ease of access to data and numerical processing, has provided new means of empirical data acquisition and the application of mathematical models of adequate complexity. This volume contains the papers read at QUALICO 91, and provides a representative overview of the state of the art in quantitative linguistic research.
The general aim of this book is to provide an elementary exposition of some basic concepts in terms of which both classical and non-classical logics may be studied and appraised. Although quantificational logic is dealt with briefly in the last chapter, the discussion is chiefly concerned with propositional calculi. Still, the subject, as it stands today, cannot be covered in one book of reasonable length. Rather than try to include in the volume as much as possible, I have put emphasis on some selected topics. Even these could not be covered completely, but for each topic I have attempted to present a detailed and precise exposition of several basic results, including some which are non-trivial. The roots of some of the central ideas in the volume go back to J. Łukasiewicz's seminar on mathematical logic.
The accurate determination of the speech spectrum, particularly for short frames, is commonly pursued in diverse areas including speech processing, recognition, and acoustic phonetics. With this book the author makes the subject of spectrum analysis understandable to a wide audience, including those with a solid background in general signal processing and those without such background. In keeping with these goals, this is not a book that replaces or attempts to cover the material found in a general signal processing textbook. Some essential signal processing concepts are presented in the first chapter, but even there the concepts are presented in a generally understandable fashion as far as is possible. Throughout the book, the focus is on applications to speech analysis; mathematical theory is provided for completeness, but these developments are set off in boxes for the benefit of those readers with sufficient background. Other readers may proceed through the main text, where the key results and applications will be presented in general heuristic terms, and illustrated with software routines and practical "show-and-tell" discussions of the results. At some points, the book refers to and uses the implementations in the Praat speech analysis software package, which has the advantages that it is used by many scientists around the world, and it is free and open source software. At other points, special software routines have been developed and made available to complement the book, and these are provided in the Matlab programming language. If the reader has the basic Matlab package, he or she will be able to immediately implement the programs in that platform; no extra "toolboxes" are required.
For some time already, discourse within the field of Translation Studies has increasingly focused on the translator, his or her translation competences, and the mental processes resulting from their application. Recent years and advances in technology have opened up many possibilities of gaining a deeper insight into these processes. This publication presents the theoretical foundations, the results of scientific experiments, and a broad range of questions to be asked and answered by eye-tracking supported translation studies. The texts have been arranged into two thematic parts. The first part consists of texts dedicated to the theoretical foundations of Translation Studies-oriented eye-tracking research. The second part includes texts discussing the results of the experiments that were carried out.
Current language technology is dominated by approaches that either enumerate a large set of rules, or are focused on a large amount of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has still not made its way into "real-life" applications yet. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than previous works. A new approach is defined: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the necessary abstractions and generalisations in order to better understand language in the process? After defining the framework of Structure Discovery and shedding light on the nature and the graphic structure of natural language data, several procedures are described that do exactly this: let the computer discover structures without supervision in order to boost the performance of language technology applications. Here, multilingual documents are sorted by language, word classes are identified, and semantic ambiguities are discovered and resolved without using a dictionary or other explicit human input. The book concludes with an outlook on the possibilities implied by this paradigm and sets the methods in perspective to human computer interaction. The target audience are academics on all levels (undergraduate and graduate students, lecturers and professors) working in the fields of natural language processing and computational linguistics, as well as natural language engineers who are seeking to improve their systems.
Speech and Human-Machine Dialog focuses on the dialog management component of a spoken language dialog system. Spoken language dialog systems provide a natural interface between humans and computers. These systems are of special interest for interactive applications, and they integrate several technologies including speech recognition, natural language understanding, dialog management and speech synthesis. Due to the conjunction of several factors throughout the past few years, humans are significantly changing their behavior vis-a-vis machines. In particular, the use of speech technologies will become normal in the professional domain, and in everyday life. The performance of speech recognition components has also significantly improved. This book includes various examples that illustrate the different functionalities of the dialog model in a representative application for train travel information retrieval (train time tables, prices and ticket reservation). Speech and Human-Machine Dialog is designed for a professional audience, composed of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.
This two-volume set, consisting of LNCS 7816 and LNCS 7817, constitutes the thoroughly refereed proceedings of the 13th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2013, held on Samos, Greece, in March 2013. The 91 contributions presented were carefully reviewed and selected for inclusion in the proceedings. The papers are organized in topical sections named: general techniques; lexical resources; morphology and tokenization; syntax and named entity recognition; word sense disambiguation and coreference resolution; semantics and discourse; sentiment, polarity, subjectivity, and opinion; machine translation and multilingualism; text mining, information extraction, and information retrieval; text summarization; stylometry and text simplification; and applications.
Dictation systems, read-aloud software for the blind, speech control of machinery, geographical information systems with speech input and output, and educational software with 'talking head' artificial tutorial agents are already on the market. The field is expanding rapidly, and new methods and applications emerge almost daily. But good sources of systematic information have not kept pace with the body of information needed for development and evaluation of these systems. Much of this information is widely scattered through speech and acoustic engineering, linguistics, phonetics, and experimental psychology. The Handbook of Multimodal and Spoken Dialogue Systems presents current and developing best practice in resource creation for speech input/output software and hardware. This volume brings experts in these fields together to give detailed 'how to' information and recommendations on planning spoken dialogue systems, designing and evaluating audiovisual and multimodal systems, and evaluating consumer off-the-shelf products. In addition to standard terminology in the field, the following topics are covered in depth: * How to collect high quality data for designing, training, and evaluating multimodal and speech dialogue systems; * How to evaluate real-life computer systems with speech input and output; * How to describe and model human-computer dialogue precisely and in depth. Also included: * The first systematic medium-scale compendium of terminology with definitions. This handbook has been especially designed for the needs of development engineers, decision-makers, researchers, and advanced level students in the fields of speech technology, multimodal interfaces, multimedia, computational linguistics, and phonetics.
Natural language is easy for people and hard for machines. For two generations, the tantalizing goal has been to get computers to handle human languages in ways that will be compelling and useful to people. Obstacles are many and legendary. Natural Language Processing: The PLNLP Approach describes one group's decade of research in pursuit of that goal. A very broad coverage NLP system, including a programming language (PLNLP), development tools, and analysis and synthesis components, was developed and incorporated into a variety of well-known practical applications, ranging from text critiquing (CRITIQUE) to machine translation (e.g. SHALT). This book represents the first published collection of papers describing the system and how it has been used. Twenty-six authors from nine countries contributed to this volume. Natural language analysis, in the PLNLP approach, is done in six stages that move smoothly from syntax through semantics into discourse. The initial syntactic sketch is provided by an Augmented Phrase Structure Grammar (APSG) that uses exclusively binary rules and aims to produce some reasonable analysis for any input string. Its `approximate' analysis passes to the reassignment component, which takes the default syntactic attachments and adjusts them, using semantic information obtained by parsing definitions and example sentences from machine-readable dictionaries. This technique is an example of one facet of the PLNLP approach: the use of natural language itself as a knowledge representation language -- an innovation that permits a wide variety of online text materials to be exploited as sources of semantic information. The next stage computes the intrasential argument structure and resolves all references, both NP- and VP-anaphora, that can be treated at this point in the processing. 
Subsequently, additional components, currently not so well developed as the earlier ones, handle the further disambiguation of word senses, the normalization of paraphrases, and the construction of a paragraph (discourse) model by joining sentential semantic graphs. Natural Language Processing: The PLNLP Approach acquaints the reader with the theory and application of a working, real-world, domain-free NLP system, and attempts to bridge the gap between computational and theoretical models of linguistic structure. It provides a valuable resource for students, teachers, and researchers in the areas of computational linguistics, natural language processing, artificial intelligence, and information science.
Recent advances in the fields of knowledge representation, reasoning and human-computer interaction have paved the way for a novel approach to treating and handling context. The field of research presented in this book addresses the problem of contextual computing in artificial intelligence based on the state of the art in knowledge representation and human-computer interaction. The author puts forward a knowledge-based approach for employing high-level context in order to solve some persistent and challenging problems in the chosen showcase domain of natural language understanding. Specifically, the problems addressed concern the handling of noise due to speech recognition errors, semantic ambiguities, and the notorious problem of underspecification. Consequently the book examines the individual contributions of contextual computing for different types of context. Therefore, contextual information stemming from the domain at hand, prior discourse, and the specific user and real world situation are considered and integrated in a formal model that is applied and evaluated employing different multimodal mobile dialog systems. This book is intended to meet the needs of readers from at least three fields - AI and computer science; computational linguistics; and natural language processing - as well as some computationally oriented linguists, making it a valuable resource for scientists, researchers, lecturers, language processing practitioners and professionals as well as postgraduates and some undergraduates in the aforementioned fields. "The book addresses a problem of great and increasing technical and practical importance - the role of context in natural language processing (NLP). It considers the role of context in three important tasks: Automatic Speech Recognition, Semantic Interpretation, and Pragmatic Interpretation. Overall, the book represents a novel and insightful investigation into the potential of contextual information processing in NLP." 
Jerome A Feldman, Professor of Electrical Engineering and Computer Science, UC Berkeley, USA http://dm.tzi.de/research/contextual-computing/
It was in the course of 1980 that it dawned upon several friends and colleagues of Manfred Bierwisch that a half century had passed since his birth in 1930. Manfred's youthful appearance had prevented a timely appreciation of this fact, and these friends and colleagues are, therefore, not at all embarrassed to be presenting him, almost a year late, with a Festschrift which will leave a trace of this noteworthy occasion in the archives of linguistics. It should be realized, however, that the delay would have easily extended to 1990 if all those who had wanted to contribute to this book had in fact written their chapters. Under the pressure of actuality, several colleagues who had genuinely hoped or even promised to contribute just couldn't make it in time. Still, their greetings and best wishes are also, be it tacitly, expressed by this volume. Especially important for the archives would be a record of the celebrated one's works and physical appearance. For the convenience of present and future generations this Festschrift contains a bibliography of Manfred Bierwisch's scientific publications, which forms a chapter in itself. The frontispiece photograph was taken unawares by one of our accomplices. The title of this Festschrift may allow for free associations of various sorts.
Computer parsing technology, which breaks down complex linguistic structures into their constituent parts, is a key research area in the automatic processing of human language. This volume is a collection of contributions from leading researchers in the field of natural language processing technology, each of whom detail their recent work which includes new techniques as well as results. The book presents an overview of the state of the art in current research into parsing technologies, focusing on three important themes: dependency parsing, domain adaptation, and deep parsing. The technology, which has a variety of practical uses, is especially concerned with the methods, tools and software that can be used to parse automatically. Applications include extracting information from free text or speech, question answering, speech recognition and comprehension, recommender systems, machine translation, and automatic summarization. New developments in the area of parsing technology are thus widely applicable, and researchers and professionals from a number of fields will find the material here required reading. As well as the other four volumes on parsing technology in this series this book has a breadth of coverage that makes it suitable both as an overview of the field for graduate students, and as a reference for established researchers in computational linguistics, artificial intelligence, computer science, language engineering, information science, and cognitive science. It will also be of interest to designers, developers, and advanced users of natural language processing systems, including applications such as spoken dialogue, text mining, multimodal human-computer interaction, and semantic web technology.
This book presents the first computer program automating the task of componential analysis of kinship vocabularies. The book examines the program in relation to two basic problems: the commonly occurring inconsistency of componential models; and the huge number of alternative componential models.
"Emotion Recognition Using Speech Features" provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions. The content of this book is important for designing and developing natural and sophisticated speech systems. In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features includes discussion of: * Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; * Exploiting complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance; * Proposed multi-stage and hybrid models for improving the emotion recognition performance. This brief is for researchers working in areas related to speech-based products such as mobile phone manufacturing companies, automobile companies, and entertainment products as well as researchers involved in basic and applied speech processing research.
The description, automatic identification and further processing of web genres is a novel field of research in computational linguistics, NLP and related areas such as text-technology, digital humanities and web mining. One of the driving forces behind this research is the idea of genre-enabled search engines which enable users to additionally specify web genres that the documents to be retrieved should comply with (e.g., personal homepage, weblog, scientific article etc.). This book offers a thorough foundation of this upcoming field of research on web genres and document types in web-based social networking. It provides theoretical foundations of web genres, presents corpus linguistic approaches to their analysis and computational models for their classification. This includes research in the areas of web genre identification and web genre modelling, as well as related fields such as: genres and registers in web-based communication; social software-based document networks; web genre ontologies and classification schemes; text-technological models of web genres; web content, structure and usage mining; web genre classification; and the web as corpus. The book addresses researchers who want to become acquainted with theoretical developments, computational models and their empirical evaluation in this field of research. It also addresses researchers who are interested in standards for the creation of corpora of web documents. Thus, the book concerns readers from many disciplines such as corpus linguistics, computational linguistics, text-technology and computer science.
This volume is dedicated to Dov Gabbay, who celebrated his 50th birthday in October 1995. Dov is one of the most outstanding and most productive researchers we have ever met. He has exerted a profound influence in major fields of logic, linguistics and computer science. His contributions in the areas of logic, language and reasoning are so numerous that a comprehensive survey would already fill half of this book. Instead of summarizing his work we decided to let him speak for himself. Sitting in a car on the way to Amsterdam airport he gave an interview to Jelle Gerbrandy and Anne-Marie Mineur. This recorded conversation with him, which is included, gives a deep insight into his motivations and into his view of the world, the Almighty and, of course, the role of logic. In addition, this volume contains a partially annotated bibliography of his main papers and books. The length of the bibliography and the broadness of the topics covered there speak for themselves.
Reversible grammar allows computational models to be built that are equally well suited for the analysis and generation of natural language utterances. This task can be viewed from very different perspectives by theoretical and computational linguists, and computer scientists. The papers in this volume present a broad range of approaches to reversible, bi-directional, and non-directional grammar systems that have emerged in recent years. This is also the first collection entirely devoted to the problems of reversibility in natural language processing. Most papers collected in this volume are derived from presentations at a workshop held at the University of California at Berkeley in the summer of 1991 organised under the auspices of the Association for Computational Linguistics. This book will be a valuable reference to researchers in linguistics and computer science with interests in computational linguistics, natural language processing, and machine translation, as well as in practical aspects of computability.
Trajectories through Knowledge Space: A Dynamic Framework for Machine Comprehension provides an overview of many of the main ideas of connectionism (neural networks) and probabilistic natural language processing. Several areas of common overlap between these fields are described in which each community can benefit from the ideas and techniques of the other. The author's perspective on comprehension pulls together the most significant research of the last ten years and illustrates how we can move forward to the next level of intelligent text processing systems. A central focus of the book is the development of a framework for comprehension connecting research themes from cognitive psychology, cognitive science, corpus linguistics and artificial intelligence. The book proposes a new architecture for semantic memory, providing a framework for addressing the problem of how to represent background knowledge in a machine. This architectural framework supports a computational model of comprehension. Trajectories through Knowledge Space: A Dynamic Framework for Machine Comprehension is an excellent reference for researchers and professionals, and may be used as an advanced text for courses on the topic.
1. Structuralist Versus Analogical Descriptions ONE important purpose of this book is to compare two completely different approaches to describing language. The first of these approaches, commonly called structuralist, is the traditional method for describing behavior. Its methods are found in many diverse fields - from biological taxonomy to literary criticism. A structuralist description can be broadly characterized as a system of classification. The fundamental question that a structuralist description attempts to answer is how a general contextual space should be partitioned. For each context in the partition, a rule is defined. The rule either specifies the behavior of that context or (as in a taxonomy) assigns a name to that context. Structuralists have implicitly assumed that descriptions of behavior should not only be correct, but should also minimize the number of rules and permit only the simplest possible contextual specifications. It turns out that these intuitive notions can actually be derived from more fundamental statements about the uncertainty of rule systems. Traditionally, linguistic analyses have been based on the idea that a language is a system of rules. Saussure, of course, is well known as an early proponent of linguistic structuralism, as exemplified by his characterization of language as "a self-contained whole and principle of classification" (Saussure 1966:9). Yet linguistic structuralism did not originate with Saussure - nor did it end with "American structuralism."
Speech-to-Speech Translation: A Massively Parallel Memory-Based Approach describes one of the world's first successful speech-to-speech machine translation systems. This system accepts speaker-independent continuous speech, and produces translations as audio output. Subsequent versions of this machine translation system have been implemented on several massively parallel computers, and these systems have attained translation performance in the milliseconds range. The success of this project triggered several massively parallel projects, as well as other massively parallel artificial intelligence projects throughout the world. Dr. Hiroaki Kitano received the distinguished 'Computers and Thought Award' from the International Joint Conferences on Artificial Intelligence in 1993 for his work in this area, and that work is reported in this book.
The editors of the Applied Logic Series are happy to present to the reader the fifth volume in the series, a collection of papers on Logic, Language and Computation. One very striking feature of the application of logic to language and to computation is that it requires the combination, the integration and the use of many diverse systems and methodologies - all in the same single application. The papers in this volume will give the reader a glimpse into the problems of this active frontier of logic. The Editors. CONTENTS: Preface; 1. S. Akama, Recent Issues in Logic, Language and Computation; 2. M. J. Cresswell, Restricted Quantification; 3. B. H. Slater, The Epsilon Calculus' Problematic; 4. K. von Heusinger, Definite Descriptions and Choice Functions; 5. N. Asher, Spatio-Temporal Structure in Text; 6. Y. Nakayama, DRT and Many-Valued Logics; 7. S. Akama, On Constructive Modality; 8. H. Wansing, Displaying as Temporalizing: Sequent Systems for Subintuitionistic Logics; 9. L. Farinas del Cerro and V. Lugardon, Quantification and Dependence Logics; 10. R. Sylvan, Relevant Conditionals, and Relevant Application Thereof; Index. Preface: This is a collection of papers by distinguished researchers on Logic, Linguistics, Philosophy and Computer Science. The aim of this book is to address a broad picture of the recent research on related areas. In particular, the contributions focus on natural language semantics and non-classical logics from different viewpoints.
Connection science is a new information-processing paradigm which attempts to imitate the architecture and process of the brain, and brings together researchers from disciplines as diverse as computer science, physics, psychology, philosophy, linguistics, biology, engineering, neuroscience and AI. Work in Connectionist Natural Language Processing (CNLP) is now expanding rapidly, yet much of the work is still only available in journals, some of them quite obscure. To make this research more accessible this book brings together an important and comprehensive set of articles from the journal CONNECTION SCIENCE which represent the state of the art in Connectionist natural language processing; from speech recognition to discourse comprehension. While it is quintessentially Connectionist, it also deals with hybrid systems, and will be of interest to both theoreticians as well as computer modellers. Range of topics covered: Connectionism and Cognitive Linguistics; Motion, Chomsky's Government-binding Theory; Syntactic Transformations on Distributed Representations; Syntactic Neural Networks; A Hybrid Symbolic/Connectionist Model for Understanding of Nouns; Connectionism and Determinism in a Syntactic Parser; Context Free Grammar Recognition; Script Recognition with Hierarchical Feature Maps; Attention Mechanisms in Language; Script-Based Story Processing; A Connectionist Account of Similarity in Vowel Harmony; Learning Distributed Representations; Connectionist Language Users; Representation and Recognition of Temporal Patterns; A Hybrid Model of Script Generation; Networks that Learn about Phonological Features; Pronunciation in Text-to-Speech Systems.