This book provides an overview of a recent and flexible approach to speech synthesis design, used to develop the first statistical parametric speech synthesizer for Ibibio, a West African tonal language. The design avoids the inflexibility encountered when modeling the tonal features of the language and can be applied to other tonal African languages. Mobile use and technological innovation in developing African nations have exploded; with mobile technology, many of the barriers caused by infrastructure issues have vanished. To address issues that are unique to African tonal languages, the book uses Ibibio as a model. The text reviews the language's speech characteristics, required for building the front-end components of the design, and proposes a finite state transducer (FST) useful for modelling the language's tonetactics. The statistical parametric approach discussed in the text implements the Hidden Markov Model (HMM) technique, with the goal of creating a generic structure that learns the model from the text itself and uses a data-driven approach to input specification.
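The tonetactic idea can be pictured as a small finite-state machine: states remember the most recent tone, and transitions license which tone may follow. A minimal sketch, assuming an invented three-tone inventory and invented constraints (this is not the book's actual Ibibio model):

```python
# Toy finite-state acceptor for tone sequences. The tone inventory
# ('H' high, 'L' low, 'D' downstepped high) and the constraint that
# downstep may only follow a high tone are illustrative assumptions.

TRANSITIONS = {
    ("start", "H"): "afterH",
    ("start", "L"): "afterL",
    ("afterH", "H"): "afterH",
    ("afterH", "L"): "afterL",
    ("afterH", "D"): "afterH",   # downstep licensed only after high
    ("afterL", "H"): "afterH",
    ("afterL", "L"): "afterL",
}
ACCEPTING = {"afterH", "afterL"}

def accepts(tones):
    """Return True if the tone sequence is licensed by the model."""
    state = "start"
    for tone in tones:
        state = TRANSITIONS.get((state, tone))
        if state is None:          # no licensed transition: reject
            return False
    return state in ACCEPTING
```

A full transducer would additionally emit output symbols (e.g. pitch targets) on each transition; an acceptor is enough to show how tonetactic constraints become state transitions.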
This book features contributions to the XVIIth International Conference "Linguistic and Cultural Studies: Traditions and Innovations" (LKTI 2017), providing insights into theory, research, scientific achievements, and best practices in the fields of pedagogics, linguistics, and language teaching and learning, with a particular focus on Siberian perspectives and collaborations between academics from other Russian regions. Covering topics including curriculum development and the design and delivery of courses and vocational training, the book is intended for academics working at all levels of education who strive to improve educational environments in their context: school, tertiary education and continuing professional development.
Stress and accent are central, organizing features of grammar, but their precise nature continues to be a source of mystery and wonder. These issues come to the forefront in acquisition, where the tension between the abstract mental representations and the concrete physical manifestations of stress and accent is deeply reflected. Understanding the nature of the representations of stress and accent patterns, and understanding how stress and accent patterns are learned, informs all aspects of linguistic theory and language acquisition. These two themes - representation and acquisition - form the organizational backbone of this book. Each is addressed along different dimensions of stress and accent, including the position of an accent or stress within various prosodic domains and the acoustic dimensions along which the pronunciation of stress and accent may vary. The research presented in the book is multidisciplinary, encompassing theoretical linguistics, speech science, and computational and experimental research.
This book constitutes the refereed proceedings of the 13th International Conference on Computational Processing of the Portuguese Language, PROPOR 2018, held in Canela, RS, Brazil, in September 2018. The 42 full papers, 3 short papers and 4 other papers presented in this volume were carefully reviewed and selected from 92 submissions. The papers are organized in topical sections named: Corpus Linguistics, Information Extraction, Language Applications, Language Resources, Sentiment Analysis and Opinion Mining, Speech Processing, and Syntax and Parsing.
This book constitutes the refereed proceedings of the 15th International Conference of the Pacific Association for Computational Linguistics, PACLING 2017, held in Yangon, Myanmar, in August 2017. The 28 revised full papers presented were carefully reviewed and selected from 50 submissions. The papers are organized in topical sections on semantics and semantic analysis; statistical machine translation; corpora and corpus-based language processing; syntax and syntactic analysis; document classification; information extraction and text mining; text summarization; text and message understanding; automatic speech recognition; spoken language and dialogue; speech pathology; speech analysis.
This book constitutes the thoroughly refereed post-workshop proceedings of the 18th Chinese Lexical Semantics Workshop, CLSW 2017, held in Leshan, China, in May 2017. The 48 full papers and 5 short papers included in this volume were carefully reviewed and selected from 176 submissions. They are organized in the following topical sections: lexical semantics; applications of natural language processing; lexical resources; and corpus linguistics.
This book constitutes the refereed proceedings of the 11th International Conference, NooJ 2017, held in Kenitra and Rabat, Morocco, in May 2017. The 20 revised full papers presented in this volume were carefully reviewed and selected from 56 submissions. NooJ is a linguistic development environment that provides tools for linguists to construct linguistic resources that formalize a large gamut of linguistic phenomena: typography, orthography, lexicons for simple words, multiword units and discontinuous expressions, inflectional and derivational morphology, local, structural and transformational syntax, and semantics. The papers in this volume are organized in topical sections on vocabulary and morphology; syntactic analysis; natural language processing applications; NooJ's future.
The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to get familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics, ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines, such as physics, linguistics, computer science and information science. It may also be of interest for the upcoming area of systems biology, with which the chapters collected here share the view on systems from the point of view of network analysis.
This book introduces formal semantics techniques for a natural language processing audience. Methods discussed involve: (i) the denotational techniques used in model-theoretic semantics, which make it possible to determine whether a linguistic expression is true or false with respect to some model of the way things happen to be; and (ii) stages of interpretation, i.e., ways to arrive at meanings by evaluating and converting source linguistic expressions, possibly with respect to contexts, into output (logical) forms that could be used with (i). The book demonstrates that the methods allow wide coverage without compromising the quality of semantic analysis. Access to unrestricted, robust and accurate semantic analysis is widely regarded as an essential component for improving natural language processing tasks, such as: recognizing textual entailment, information extraction, summarization, automatic reply, and machine translation.
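The denotational idea in (i) can be made concrete with a toy evaluator: a logical form is judged true or false against a model. The model, the entities, and the tiny predicate-logic fragment below are invented for illustration and are not taken from the book:

```python
# Toy model-theoretic evaluation: truth of a logical form with
# respect to a model. Entities and predicates are invented examples.

MODEL = {
    "entities": {"kim", "lee"},
    "sleeps": {"kim"},          # denotation: the set of sleepers
    "smiles": {"kim", "lee"},
}

def evaluate(expr):
    """Evaluate a nested-tuple logical form against MODEL."""
    op = expr[0]
    if op == "pred":                      # ("pred", "sleeps", "kim")
        _, name, arg = expr
        return arg in MODEL[name]
    if op == "and":
        return evaluate(expr[1]) and evaluate(expr[2])
    if op == "not":
        return not evaluate(expr[1])
    if op == "exists":                    # ("exists", lambda x: form)
        return any(evaluate(expr[1](e)) for e in MODEL["entities"])
    raise ValueError(f"unknown operator: {op}")
```

Stage (ii) would sit in front of this: a parser mapping a sentence like "Kim sleeps" to the form `("pred", "sleeps", "kim")` before evaluation.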
The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation, as in Grice's contributions to pragmatics or in interpretation by abduction.
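The maximization step described above is easy to sketch: among candidate readings of an utterance, pick the one with the largest product of prior and likelihood. The candidate readings and the probability tables here are invented toy numbers:

```python
# Bayesian selection of an interpretation: argmax over candidates of
# prior(i) * likelihood(utterance | i). All numbers are toy examples.

def map_interpretation(utterance, interpretations, prior, likelihood):
    """Return the interpretation with the maximal posterior score."""
    return max(interpretations,
               key=lambda i: prior[i] * likelihood[(utterance, i)])

prior = {"literal": 0.7, "ironic": 0.3}
likelihood = {("nice weather", "literal"): 0.2,
              ("nice weather", "ironic"): 0.9}

best = map_interpretation("nice weather", ["literal", "ironic"],
                          prior, likelihood)
# 0.7 * 0.2 = 0.14 versus 0.3 * 0.9 = 0.27: the ironic reading wins.
```

The likelihood table plays the role of the production model: how probable the speaker's utterance is under each intended reading.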
This book offers the first detailed, comprehensible scientific presentation of Confabulation Theory, addressing a pressing scientific question: How does brain information processing, or cognition, work? With only elementary mathematics as a prerequisite, this book will prove accessible to technologists, scientists, and the educated public.
This book explores the direct relation of modern CALL (Computer-Assisted Language Learning) to aspects of natural language processing for theoretical and practical applications, and the worldwide demand for formal language education and training that focuses on restricted or specialized professional domains. It is unique in its broad-based, state-of-the-art coverage of current knowledge and research in the interrelated fields of computer-based learning and teaching and the processing of specialized linguistic domains. The articles in this book offer insights on, or analyses of, the current state and future directions of many recent key concepts regarding the application of computers to natural languages, such as authenticity, personalization, normalization and evaluation. Other articles present fundamental research on major techniques, strategies and methodologies that are currently the focus of international language research projects, of both a theoretical and an applied nature.
The volume brings together papers emerging from the GlobE conference (University of Warsaw). The authors explore major topics in Discourse Studies, offering insights into the field's theoretical foundations and discussing the results of its empirical applications. The book integrates different lines of research in Discourse Studies as undertaken at academic centres Europe-wide and beyond. In this diversity, the editors identify certain dominant lines of study, including (new) media discourse, political discourse in the age of social/digital media, or professional discourse in globalized workplace contexts. At the same time, the volume shows that Discourse Studies not only investigate emerging language phenomena, but also critically reassess research issues formerly addressed.
This book is an excellent introduction to multiword expressions. It provides a unique, comprehensive and up-to-date overview of this exciting topic in computational linguistics. The first part describes the diversity and richness of multiword expressions, including many examples in several languages. These constructions are not only complex and arbitrary, but also much more frequent than one would guess, making them a real nightmare for natural language processing applications. The second part introduces a new generic framework for automatic acquisition of multiword expressions from texts. Furthermore, it describes the accompanying free software tool, the mwetoolkit, which comes in handy when looking for expressions in texts (regardless of the language). Evaluation is greatly emphasized, underlining the fact that results depend on parameters like corpus size, language, MWE type, etc. The last part contains solid experimental results and evaluates the mwetoolkit, demonstrating its usefulness for computer-assisted lexicography and machine translation. This is the first book to cover the whole pipeline of multiword expression acquisition in a single volume. It addresses the needs of students and researchers in computational and theoretical linguistics, cognitive sciences, artificial intelligence and computer science. Its good balance between computational and linguistic views makes it the perfect starting point for anyone interested in multiword expressions, and in language and text processing in general.
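One of the simplest candidate-extraction strategies in such an acquisition pipeline is raw n-gram frequency: recurring word pairs are flagged as possible expressions. A minimal sketch (the mwetoolkit itself is far richer, with association measures and pattern-based filters):

```python
# Frequency-based extraction of bigram MWE candidates: any word pair
# occurring at least min_count times is kept. Purely illustrative.
from collections import Counter

def bigram_candidates(tokens, min_count=2):
    """Return bigrams occurring at least min_count times."""
    counts = Counter(zip(tokens, tokens[1:]))
    return {bg: c for bg, c in counts.items() if c >= min_count}

text = "kick the bucket he said and kick the bucket he did".split()
cands = bigram_candidates(text)
```

Real systems then rank candidates with association measures (e.g. pointwise mutual information) to separate true expressions like "kick the bucket" from frequent but compositional pairs.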
This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid the quality and intelligibility of degraded speech. They present powerful optimization methods for speech enhancement that can help to solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, and how speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.
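The flavor of heuristic optimization in this setting can be sketched with random-restart-free hill climbing over a single enhancement parameter. The objective below is a stand-in quadratic, not a real speech-quality score, and the parameter name is invented:

```python
# Toy hill climbing for an enhancement parameter (e.g. a spectral
# subtraction factor). The cost function is an invented stand-in for
# a real quality/intelligibility measure of the enhanced speech.
import random

def objective(alpha):
    # pretend residual-noise-plus-distortion cost, minimized at 2.0
    return (alpha - 2.0) ** 2 + 0.1

def hill_climb(start, step=0.1, iters=300):
    """Accept a random nearby candidate whenever it lowers the cost."""
    best, best_cost = start, objective(start)
    for _ in range(iters):
        cand = best + random.uniform(-step, step)
        cost = objective(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

random.seed(0)
alpha = hill_climb(start=0.0)   # converges near the optimum at 2.0
```

Metaheuristics covered in such books (genetic algorithms, particle swarm optimization, etc.) follow the same accept-if-better logic while adding mechanisms to escape local minima.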
The book collects contributions from well-established researchers at the interface between language and cognition. It provides an overview of the latest insights into this interdisciplinary field from the perspectives of natural language processing, computer science, psycholinguistics and cognitive science. One of the pioneers in cognitive natural language processing is Michael Zock, to whom this volume is dedicated. The structure of the book reflects his main research interests: lexicon and lexical analysis, semantics, language and speech generation, reading and writing technologies, language resources and language engineering. The book is a valuable reference work and authoritative information source, giving an overview of the field and describing the state of the art as well as future developments. It is intended for researchers and advanced students interested in the subject.
To date, the relation between multilingualism and the Semantic Web has not yet received enough attention in the research community. One major challenge for the Semantic Web community is to develop architectures, frameworks and systems that can help in overcoming national and language barriers, facilitating equal access to information produced in different cultures and languages. As such, this volume aims at documenting the state-of-the-art with regard to the vision of a Multilingual Semantic Web, in which semantic information will be accessible in and across multiple languages. The Multilingual Semantic Web as envisioned in this volume will support the following functionalities: (1) responding to information needs in any language with regard to semantically structured data available on the Semantic Web and Linked Open Data (LOD) cloud, (2) verbalizing and accessing semantically structured data, ontologies or other conceptualizations in multiple languages, (3) harmonizing, integrating, aggregating, comparing and repurposing semantically structured data across languages and (4) aligning and reconciling ontologies or other conceptualizations across languages. The volume is divided into three main sections: Principles, Methods and Applications. The section on "Principles" discusses models, architectures and methodologies that enrich the current Semantic Web architecture with features necessary to handle multiple languages. The section on "Methods" describes algorithms and approaches for solving key issues related to the construction of the Multilingual Semantic Web. The section on "Applications" describes the use of Multilingual Semantic Web based approaches in the context of several application domains. 
This volume is essential reading for all academic and industrial researchers who want to embark on this new research field at the intersection of various research topics, including the Semantic Web, Linked Data, natural language processing, computational linguistics, terminology and information retrieval. It will also be of great interest to practitioners who are interested in re-examining their existing infrastructure and methodologies for handling multiple languages in Web applications or information retrieval systems.
The ubiquity of mobile devices has opened the way to extending learning environments far beyond the constraints of the traditional foreign language classroom. This book seeks to advance the knowledge about effective learning and teaching of English for Medical Purposes supported by mobile environments. The author investigates the effectiveness of the use of a mobile version of a flashcard spaced-repetition learning platform. In conclusion, she presents core principles of an educational solution that supports the ongoing and situated learning of English for Medical Purposes by designing a mobile spaced-repetition medical vocabulary tutor ("Mobile Medical English Companion").
The areas of natural language processing and computational linguistics have continued to grow in recent years, driven by the demand to automatically process text and spoken data. With the processing power and techniques now available, research is scaling up from lab prototypes to real-world, proven applications. This book teaches the principles of natural language processing, first covering practical linguistics issues such as encoding and annotation schemes, defining words, tokens, parts of speech and morphology, as well as key concepts in machine learning, such as entropy, regression and classification, which are used throughout the book. It then details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques, using Prolog to write phrase-structure grammars, syntactic formalisms and parsing techniques, semantics, predicate logic and lexical semantics, and analysis of discourse and applications in dialogue systems. A key feature of the book is the author's hands-on approach throughout, with sample code in Prolog and Perl, extensive exercises, and a detailed introduction to Prolog. The reader is supported with a companion website that contains teaching slides, programs and additional material. The second edition is a complete revision of the techniques exposed in the book to reflect advances in the field: the author redesigned or updated all the chapters, added two new ones, and considerably expanded the sections on machine-learning techniques.
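Of the machine-learning concepts listed above, entropy is the quickest to illustrate: the uncertainty of a distribution over, say, candidate part-of-speech tags. A minimal sketch (in Python rather than the book's Prolog/Perl):

```python
# Shannon entropy of a probability distribution, H(p) = -sum p*log2(p),
# a core quantity in stochastic tagging and classification.
import math

def entropy(probs):
    """Entropy in bits; zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A tagger that assigns a word the distribution [0.5, 0.5] over two tags is maximally uncertain (1 bit); a distribution of [1.0] is certain (0 bits).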
This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue is the ultimate challenge in natural language processing, and the key to a wide range of exciting applications. The breadth and depth of coverage of this book makes it suitable as a reference and overview of the state of the field for researchers in Computational Linguistics, Semantics, Computer Science, Cognitive Science, and Artificial Intelligence.
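The distributional-compositional idea mentioned above can be sketched in a few lines: word meanings as vectors, phrase meaning approximated by vector addition (one of the simplest composition operators), and similarity measured by cosine. The three-dimensional toy vectors are invented; real systems use corpus-derived embeddings with hundreds of dimensions:

```python
# Additive composition of toy word vectors plus cosine similarity,
# illustrating distributional compositional semantics. Vectors are
# invented small examples, not corpus-derived.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm

def compose(*vectors):
    """Additive composition: sum the vectors component-wise."""
    return [sum(components) for components in zip(*vectors)]

dog, cat, runs = [2.0, 0.0, 1.0], [1.9, 0.1, 1.0], [0.0, 3.0, 1.0]
sim = cosine(compose(dog, runs), compose(cat, runs))
```

Since the toy vectors for "dog" and "cat" are close, the composed phrase vectors come out highly similar; richer composition operators (tensor-based, neural) are what the statistical chapters of such collections explore.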
Due to the increasing lingua-cultural heterogeneity of today's users of English, it has become necessary to examine politeness, translation and transcultural communication from a different perspective. This book proposes a concept for a transdisciplinary methodology to shed some light onto the opaque relationship between the lingua-cultural biographies of users of English and their patterns of perceiving and realizing politeness in speech acts. The methodology incorporates aspects of CAT tools and business intelligence systems, and is designed for long-term research that can serve as a foundation for theoretical studies or practical contexts, such as customer relationship management and marketing.