It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high-quality speech communication. This practical, hands-on book presents speech intelligibility measurement methods so that readers can start measuring or estimating the speech intelligibility of their own systems. The book also introduces subjective and objective speech quality measures, and describes speech intelligibility measurement methods in detail. It introduces a diagnostic rhyme test which uses rhyming word pairs, and includes: an investigation into the effect of word familiarity on speech intelligibility; speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test; and estimation of speech intelligibility using objective measures, including the ITU-standard PESQ measure and automatic speech recognizers.
Novel Techniques for Dialectal Arabic Speech describes approaches to improving automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, on the assumption that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect: ECA ranks first among Arabic dialects in number of speakers, and a high-quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. To use MSA cross-lingually in dialectal Arabic speech recognition, the authors normalized the phoneme sets for MSA and ECA. After this normalization, they applied state-of-the-art acoustic model adaptation techniques such as Maximum Likelihood Linear Regression (MLLR) and Maximum A Posteriori (MAP) adaptation to adapt existing phonemic MSA acoustic models with a small amount of dialectal ECA speech data. Speech recognition results indicate a significant increase in recognition accuracy compared to a baseline model trained with only ECA data.
This book discusses the impact of spectral features extracted from frame-level, glottal closure region, and pitch-synchronous analysis on the performance of language identification systems. In addition to spectral features, the authors explore prosodic features such as intonation, rhythm, and stress for discriminating between languages. They show how the proposed spectral and prosodic features capture language-specific information from two complementary aspects, demonstrating how a language identification (LID) system developed using the combination of spectral and prosodic features enhances identification accuracy as well as the robustness of the system. The book provides methods to extract spectral and prosodic features at various levels, and suggests appropriate models for developing robust LID systems according to specific spectral and prosodic features. Finally, the book discusses various combinations of spectral and prosodic features and the models best suited to enhance the performance of LID systems.
Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book "Modelling, Learning and Processing of Text-Technological Data Structures" deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.
This book presents state-of-the-art research in speech emotion recognition. Readers are first presented with basic research and applications; gradually more advanced information is provided, giving readers comprehensive guidance for classifying emotions through speech. Simulated databases are used and results extensively compared, with the features and algorithms implemented using MATLAB. Various emotion recognition models, including Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machines (SVM), and K-Nearest Neighbor (KNN), are explored in detail using prosody and spectral features, and feature fusion techniques.
This is the first volume of a unique collection that brings together the best English-language problems created for students competing in the Computational Linguistics Olympiad. These problems are representative of the diverse areas presented in the competition and were designed with the following principles in mind: * To challenge the student analytically, without requiring any explicit knowledge or experience in linguistics or computer science; * To expose the student to the different kinds of reasoning required when encountering a new phenomenon in a language, both as a theoretical topic and as an applied problem; * To foster the natural curiosity students have about the workings of their own language, as well as to introduce them to the beauty and structure of other languages; * To teach the student about the models and techniques used by computers to understand human language. Aside from being a fun intellectual challenge, the Olympiad mimics the skills used by researchers and scholars in the field of computational linguistics. In an increasingly global economy where businesses operate across borders and languages, having a strong pool of computational linguists is a competitive advantage, and an important component of both security and growth in the 21st century. This collection of problems is a wonderful general introduction to the field of linguistics through the analytic problem-solving technique. "A fantastic collection of problems for anyone who is curious about how human language works! These books take serious scientific questions and present them in a fun, accessible way. Readers exercise their logical thinking capabilities while learning about a wide range of human languages, linguistic phenomena, and computational models." - Kevin Knight, USC Information Sciences Institute
1. Main assumptions, objectives and conditionings 1.1. The present book is concerned with certain problems in the logical philosophy of language. It is written in the spirit of the Polish logical, philosophical, and semiotic tradition of syntax, and presents two conceptions of categorial languages: the theory of simple languages, i.e., languages which include neither variables nor the operators that bind them (for instance, large fragments of natural languages, languages of well-known sentential calculi, the language of Aristotle's traditional syllogistic, and languages of equationally definable algebras), and the theory of ω-languages, i.e., languages which include operators and the variables bound by them.
This two-volume set, consisting of LNCS 8403 and LNCS 8404, constitutes the thoroughly refereed proceedings of the 14th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2014, held in Kathmandu, Nepal, in April 2014. The 85 revised papers presented together with 4 invited papers were carefully reviewed and selected from 300 submissions. The papers are organized in the following topical sections: lexical resources; document representation; morphology, POS-tagging, and named entity recognition; syntax and parsing; anaphora resolution; recognizing textual entailment; semantics and discourse; natural language generation; sentiment analysis and emotion recognition; opinion mining and social networks; machine translation and multilingualism; information retrieval; text classification and clustering; text summarization; plagiarism detection; style and spelling checking; speech processing; and applications.
QUALICO was held for the first time as an international conference to demonstrate the state of the art in quantitative linguistics. This domain of language study and research is gaining considerable interest due to recent advances in linguistic modelling, particularly in computational linguistics, cognitive science, and developments in mathematics such as modern systems theory. Progress in hardware and software technology, together with ease of access to data and numerical processing, has provided new means of empirical data acquisition and the application of mathematical models of adequate complexity. This volume contains the papers read at QUALICO 91, and provides a representative overview of the state of the art in quantitative linguistic research.
The general aim of this book is to provide an elementary exposition of some basic concepts in terms of which both classical and non-classical logics may be studied and appraised. Although quantificational logic is dealt with briefly in the last chapter, the discussion is chiefly concerned with propositional calculi. Still, the subject, as it stands today, cannot be covered in one book of reasonable length. Rather than try to include as much as possible in the volume, I have put emphasis on some selected topics. Even these could not be covered completely, but for each topic I have attempted to present a detailed and precise exposition of several basic results, including some which are non-trivial. The roots of some of the central ideas in the volume go back to J. Łukasiewicz's seminar on mathematical logic.
The accurate determination of the speech spectrum, particularly for short frames, is commonly pursued in diverse areas including speech processing, recognition, and acoustic phonetics. With this book the author makes the subject of spectrum analysis understandable to a wide audience, including those with a solid background in general signal processing and those without such background. In keeping with these goals, this is not a book that replaces or attempts to cover the material found in a general signal processing textbook. Some essential signal processing concepts are presented in the first chapter, but even there the concepts are presented in a generally understandable fashion as far as is possible. Throughout the book, the focus is on applications to speech analysis; mathematical theory is provided for completeness, but these developments are set off in boxes for the benefit of those readers with sufficient background. Other readers may proceed through the main text, where the key results and applications will be presented in general heuristic terms, and illustrated with software routines and practical "show-and-tell" discussions of the results. At some points, the book refers to and uses the implementations in the Praat speech analysis software package, which has the advantages that it is used by many scientists around the world, and it is free and open source software. At other points, special software routines have been developed and made available to complement the book, and these are provided in the Matlab programming language. If the reader has the basic Matlab package, he/she will be able to immediately implement the programs in that platform---no extra "toolboxes" are required.
The practical task of building a talking robot requires a theory of how natural language communication works. Conversely, the best way to computationally verify a theory of natural language communication is to demonstrate its functioning concretely in the form of a talking robot, the epitome of human-machine communication. Building an actual robot requires hardware that provides appropriate recognition and action interfaces, and because such hardware is hard to develop, the approach in this book is theoretical: the author presents an artificial cognitive agent with language as a software system called Database Semantics (DBS). Because a theoretical approach does not have to deal with the technical difficulties of hardware engineering, there is no reason to simplify the system; instead, the software components of DBS aim at completeness of function and of data coverage in word form recognition, syntactic-semantic interpretation and inferencing, leaving the procedural implementation of elementary concepts for later. In this book the author first examines the universals of natural language and explains the Database Semantics approach. Then in Part I he examines the following natural language communication issues: using external surfaces; the cycle of natural language communication; memory structure; autonomous control; and learning. In Part II he analyzes the coding of content according to the aspects: semantic relations of structure; simultaneous amalgamation of content; graph-theoretical considerations; computing perspective in dialogue; and computing perspective in text. The book ends with a concluding chapter, a bibliography and an index. The book will be of value to researchers, graduate students and engineers in the areas of artificial intelligence and robotics, in particular those who deal with natural language processing.
This book is written for both linguists and computer scientists working in the field of artificial intelligence, as well as for anyone interested in intelligent text processing. A lexical function is a concept that formalizes semantic and syntactic relations between lexical units. A collocational relation is a type of institutionalized lexical relation which holds between the base and its partner in a collocation. Knowledge of collocation is important for natural language processing because collocation comprises the restrictions on how words can be used together. The book shows how collocations can be annotated with lexical functions in a computer-readable dictionary, allowing their precise semantic analysis in texts and their effective use in natural language applications including parsers, high-quality machine translation, periphrasis systems and computer-aided learning of lexica. The book also shows how to extract collocations from corpora and annotate them with lexical functions automatically. To train the algorithms, the authors created a dictionary of lexical functions containing more than 900 disambiguated and annotated Spanish examples, which is part of this book. The results obtained show that machine learning is a feasible approach to the task of automatic detection of lexical functions.
Current language technology is dominated by approaches that either enumerate a large set of rules or rely on a large amount of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has still not made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than in previous work. A new approach is defined: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the necessary abstractions and generalisations in order to better understand language in the process? After defining the framework of Structure Discovery and shedding light on the nature and the graph structure of natural language data, several procedures are described that do exactly this: let the computer discover structures without supervision in order to boost the performance of language technology applications. Here, multilingual documents are sorted by language, word classes are identified, and semantic ambiguities are discovered and resolved without using a dictionary or other explicit human input. The book concludes with an outlook on the possibilities implied by this paradigm and puts the methods in perspective with respect to human-computer interaction. The target audience is academics at all levels (undergraduate and graduate students, lecturers and professors) working in the fields of natural language processing and computational linguistics, as well as natural language engineers who are seeking to improve their systems.
This book focuses on the parts of audio conversations not related to language, such as speaking rate (in terms of number of syllables per unit time) and emotion-centric features. The text examines the use of non-linguistic features to infer information from phone calls to call centers. The author analyzes "how" the conversation happens rather than "what" the conversation is about, by means of audio signal processing and analysis.
The aim of this book and its accompanying audio files is to make accessible a corpus of 40 authentic job interviews conducted in English. The recordings and transcriptions of the interviews published here may be used by students, teachers and researchers alike for linguistic analyses of spoken discourse and as authentic material for language learning in the classroom. The book includes an introduction to corpus linguistics, offering insight into different kinds of corpora and discussing their main characteristics. Furthermore, major features of the discourse genre job interview are outlined and detailed information is given concerning the job interview corpus published in this book.
Dictation systems, read-aloud software for the blind, speech control of machinery, geographical information systems with speech input and output, and educational software with 'talking head' artificial tutorial agents are already on the market. The field is expanding rapidly, and new methods and applications emerge almost daily. But good sources of systematic information have not kept pace with the body of information needed for development and evaluation of these systems. Much of this information is widely scattered through speech and acoustic engineering, linguistics, phonetics, and experimental psychology. The Handbook of Multimodal and Spoken Dialogue Systems presents current and developing best practice in resource creation for speech input/output software and hardware. This volume brings experts in these fields together to give detailed 'how to' information and recommendations on planning spoken dialogue systems, designing and evaluating audiovisual and multimodal systems, and evaluating consumer off-the-shelf products. In addition to standard terminology in the field, the following topics are covered in depth: * How to collect high quality data for designing, training, and evaluating multimodal and speech dialogue systems; * How to evaluate real-life computer systems with speech input and output; * How to describe and model human-computer dialogue precisely and in depth. Also included: * The first systematic medium-scale compendium of terminology with definitions. This handbook has been especially designed for the needs of development engineers, decision-makers, researchers, and advanced level students in the fields of speech technology, multimodal interfaces, multimedia, computational linguistics, and phonetics.
It was in the course of 1980 that it dawned upon several friends and colleagues of Manfred Bierwisch that half a century had passed since his birth in 1930. Manfred's youthful appearance had prevented a timely appreciation of this fact, and these friends and colleagues are, therefore, not at all embarrassed to be presenting him, almost a year late, with a Festschrift which will leave a trace of this noteworthy occasion in the archives of linguistics. It should be realized, however, that the delay would have easily extended to 1990 if all those who had wanted to contribute to this book had in fact written their chapters. Under the pressure of actuality, several colleagues who had genuinely hoped or even promised to contribute just couldn't make it in time. Still, their greetings and best wishes are also, be it tacitly, expressed by this volume. Especially important for the archives would be a record of the celebrated one's works and physical appearance. For the convenience of present and future generations this Festschrift contains a bibliography of Manfred Bierwisch's scientific publications, which forms a chapter in itself. The frontispiece photograph was taken unawares by one of our accomplices. The title of this Festschrift may allow for free associations of various sorts.
Recent advances in the fields of knowledge representation, reasoning and human-computer interaction have paved the way for a novel approach to treating and handling context. The field of research presented in this book addresses the problem of contextual computing in artificial intelligence based on the state of the art in knowledge representation and human-computer interaction. The author puts forward a knowledge-based approach for employing high-level context in order to solve some persistent and challenging problems in the chosen showcase domain of natural language understanding. Specifically, the problems addressed concern the handling of noise due to speech recognition errors, semantic ambiguities, and the notorious problem of underspecification. Consequently, the book examines the individual contributions of contextual computing for different types of context. Contextual information stemming from the domain at hand, prior discourse, and the specific user and real-world situation is considered and integrated in a formal model that is applied and evaluated employing different multimodal mobile dialog systems. This book is intended to meet the needs of readers from at least three fields - AI and computer science; computational linguistics; and natural language processing - as well as some computationally oriented linguists, making it a valuable resource for scientists, researchers, lecturers, language processing practitioners and professionals as well as postgraduates and some undergraduates in the aforementioned fields. "The book addresses a problem of great and increasing technical and practical importance - the role of context in natural language processing (NLP). It considers the role of context in three important tasks: Automatic Speech Recognition, Semantic Interpretation, and Pragmatic Interpretation. Overall, the book represents a novel and insightful investigation into the potential of contextual information processing in NLP."
- Jerome A. Feldman, Professor of Electrical Engineering and Computer Science, UC Berkeley, USA (http://dm.tzi.de/research/contextual-computing/)
Computer parsing technology, which breaks down complex linguistic structures into their constituent parts, is a key research area in the automatic processing of human language. This volume is a collection of contributions from leading researchers in the field of natural language processing technology, each of whom detail their recent work which includes new techniques as well as results. The book presents an overview of the state of the art in current research into parsing technologies, focusing on three important themes: dependency parsing, domain adaptation, and deep parsing. The technology, which has a variety of practical uses, is especially concerned with the methods, tools and software that can be used to parse automatically. Applications include extracting information from free text or speech, question answering, speech recognition and comprehension, recommender systems, machine translation, and automatic summarization. New developments in the area of parsing technology are thus widely applicable, and researchers and professionals from a number of fields will find the material here required reading. As well as the other four volumes on parsing technology in this series this book has a breadth of coverage that makes it suitable both as an overview of the field for graduate students, and as a reference for established researchers in computational linguistics, artificial intelligence, computer science, language engineering, information science, and cognitive science. It will also be of interest to designers, developers, and advanced users of natural language processing systems, including applications such as spoken dialogue, text mining, multimodal human-computer interaction, and semantic web technology.
"Emotion Recognition Using Speech Features" provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions. The content of this book is important for designing and developing natural and sophisticated speech systems. In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features includes discussion of: * Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; * Exploiting complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance; * Proposed multi-stage and hybrid models for improving the emotion recognition performance. This brief is for researchers working in areas related to speech-based products such as mobile phone manufacturing companies, automobile companies, and entertainment products as well as researchers involved in basic and applied speech processing research.
The description, automatic identification and further processing of web genres is a novel field of research in computational linguistics, NLP and related areas such as text-technology, digital humanities and web mining. One of the driving forces behind this research is the idea of genre-enabled search engines, which let users additionally specify web genres that the retrieved documents should comply with (e.g., personal homepage, weblog, scientific article, etc.). This book offers a thorough foundation for this upcoming field of research on web genres and document types in web-based social networking. It provides theoretical foundations of web genres, presents corpus-linguistic approaches to their analysis, and offers computational models for their classification. This includes research in the areas of web genre identification and web genre modelling, and related fields such as: genres and registers in web-based communication; social software-based document networks; web genre ontologies and classification schemes; text-technological models of web genres; web content, structure and usage mining; web genre classification; and the web as corpus. The book addresses researchers who want to become acquainted with theoretical developments, computational models and their empirical evaluation in this field of research. It also addresses researchers who are interested in standards for the creation of corpora of web documents. Thus, the book concerns readers from many disciplines, such as corpus linguistics, computational linguistics, text-technology and computer science.
This volume is dedicated to Dov Gabbay, who celebrated his 50th birthday in October 1995. Dov is one of the most outstanding and most productive researchers we have ever met. He has exerted a profound influence in major fields of logic, linguistics and computer science. His contributions in the areas of logic, language and reasoning are so numerous that a comprehensive survey would already fill half of this book. Instead of summarizing his work we decided to let him speak for himself. Sitting in a car on the way to Amsterdam airport, he gave an interview to Jelle Gerbrandy and Anne-Marie Mineur. This recorded conversation, which is included here, gives a deep insight into his motivations and into his view of the world, the Almighty and, of course, the role of logic. In addition, this volume contains a partially annotated bibliography of his main papers and books. The length of the bibliography and the breadth of the topics covered there speak for themselves.
Reversible grammar allows computational models to be built that are equally well suited for the analysis and generation of natural language utterances. This task can be viewed from very different perspectives by theoretical and computational linguists, and computer scientists. The papers in this volume present a broad range of approaches to reversible, bi-directional, and non-directional grammar systems that have emerged in recent years. This is also the first collection entirely devoted to the problems of reversibility in natural language processing. Most papers collected in this volume are derived from presentations at a workshop held at the University of California at Berkeley in the summer of 1991 organised under the auspices of the Association for Computational Linguistics. This book will be a valuable reference to researchers in linguistics and computer science with interests in computational linguistics, natural language processing, and machine translation, as well as in practical aspects of computability.