Presenting the digital humanities both as a domain of practice and as a set of methodological approaches to be applied to corpus linguistics and translation, the chapters in this volume provide a novel framework for triangulating research to pursue both scientific and educational goals within the digital humanities. They also highlight more broadly the importance of data triangulation in corpus linguistics and translation studies. Putting forward practical applications for digging into data, this book is a detailed examination of how to integrate quantitative and qualitative approaches through case studies, sample analyses and practical examples.
This volume brings together a number of corpus-based studies dealing with language varieties. These contributions focus on contemporary lines of research, including language teaching and learning, translation, domain-specific grammatical and textual phenomena, and linguistic variation and gender, among others. Corpora used in these studies range from highly specialized texts, including earlier scientific texts, to regional varieties. Under the umbrella of corpus linguistics, the scholars also apply other distinct methodological approaches to their data in order to offer new insights into old and new topics in linguistics and applied linguistics. Another important contribution of this book lies in the obvious didactic implications of the results obtained in the individual chapters for domain-based language teaching.
This book presents state-of-the-art research in speech emotion recognition. Readers are first presented with basic research and applications; gradually more advanced information is provided, giving readers comprehensive guidance for classifying emotions through speech. Simulated databases are used and results extensively compared, with the features and the algorithms implemented in MATLAB. Various emotion recognition models, such as Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machines (SVM) and K-Nearest Neighbour (KNN), are explored in detail using prosodic and spectral features, along with feature fusion techniques.
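The classification step such a book describes can be sketched in a few lines. The following is a minimal illustration (not the book's MATLAB code) of feature-level fusion and KNN classification, using scikit-learn; the feature dimensions, the randomly generated stand-in features, and the four emotion classes are assumptions for the example.

```python
# Minimal sketch: fuse prosodic and spectral feature vectors, then
# classify emotions with k-nearest neighbours. Real feature extraction
# (pitch contours, MFCCs, etc.) is assumed done; arrays are stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
prosody = rng.normal(size=(n, 4))    # e.g., pitch mean/range, energy, rate
spectral = rng.normal(size=(n, 13))  # e.g., 13 MFCC means per utterance
labels = rng.integers(0, 4, size=n)  # 4 emotion classes (toy labels)

# Feature-level fusion: concatenate the two feature vectors.
features = np.hstack([prosody, spectral])
features = StandardScaler().fit_transform(features)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", knn.score(X_te, y_te))
```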
- Donation refusal is high in all regions of Argentina.
- The deficient operative structure is a negative reality that results in inadequate donor maintenance and organ procurement.
- In the more developed regions, a high number of organs go unutilized. This is true for hearts, livers and lungs. The small waiting lists for these organs probably reflect inadequate economic coverage for these transplant activities.
- There is a long waiting list for cadaveric kidney transplants, which reflects poor procurement and transplant activity.
- Lack of awareness among many physicians means that brain deaths often go unreported.
In spite of these factors, we can say that there was significant growth in organ procurement and transplantation in 1993, after the regionalization of the INCUCAI. Conclusions: Is there a shortage of organs in Argentina? There may be. But the situation in Argentina differs from that in Europe, as we have a pool of organs which are not utilized (donation refusal, operational deficits, failure to report brain deaths). Perhaps, in the future, when we are able to make good use of all the organs submitted for transplantation, we will be able to say objectively whether the number of organs is sufficient or not. Acknowledgements: I would like to thank the University of Lyon and the Merieux Foundation, especially Professors Traeger and Touraine and Dr. Dupuy, for the honour of being invited to talk about the issue of organ procurement.
It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high-quality speech communication. This practical, hands-on book presents speech intelligibility measurement methods so that readers can start measuring or estimating the speech intelligibility of their own systems. The book also introduces subjective and objective speech quality measures, and describes speech intelligibility measurement methods in detail. It introduces a diagnostic rhyme test which uses rhyming word-pairs, and includes:
- An investigation into the effect of word familiarity on speech intelligibility.
- Speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test.
- Estimation of speech intelligibility using objective measures, including the ITU standard PESQ measures, and automatic speech recognizers.
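For a flavour of how a two-alternative rhyme test is scored, here is a minimal sketch of the standard chance-corrected score (right minus wrong over total). The response counts are invented; this illustrates the usual scoring convention for diagnostic rhyme tests, not code from the book.

```python
# Chance-corrected intelligibility score for a two-alternative test.
def drt_score(right: int, wrong: int) -> float:
    """Return the score in percent.

    With two response alternatives, a purely guessing listener is
    right half the time, so wrong answers are subtracted from right
    ones before normalizing by the total number of responses.
    """
    total = right + wrong
    if total == 0:
        raise ValueError("no responses")
    return 100.0 * (right - wrong) / total

# Example: 86 correct and 14 incorrect word-pair judgements.
print(drt_score(86, 14))  # 72.0
```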
Novel Techniques for Dialectal Arabic Speech describes approaches to improving automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, on the assumption that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect: ECA ranks first among Arabic dialects in number of speakers, and a high-quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to use MSA cross-lingually in dialectal Arabic speech recognition, the authors normalized the phoneme sets for MSA and ECA. After this normalization, they applied state-of-the-art acoustic model adaptation techniques such as Maximum Likelihood Linear Regression (MLLR) and Maximum A-Posteriori (MAP) adaptation to adapt existing phonemic MSA acoustic models with a small amount of dialectal ECA speech data. Speech recognition results indicate a significant increase in recognition accuracy compared to a baseline model trained with only ECA data.
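As a rough illustration of the MAP idea mentioned above, here is a minimal numpy sketch of adapting a single Gaussian mean toward a small amount of adaptation data. The relevance factor and the stand-in arrays are assumptions for the example, not the authors' settings.

```python
# MAP adaptation of one Gaussian mean: interpolate a prior mean
# (trained on resource-rich source data, e.g. MSA) toward statistics
# from a small amount of adaptation data (e.g. dialectal ECA frames).
import numpy as np

def map_adapt_mean(prior_mean, adapt_frames, tau=16.0):
    """Return the MAP estimate of a Gaussian mean.

    prior_mean:   mean trained on the source data
    adapt_frames: (n, d) feature frames aligned to this Gaussian
    tau:          relevance factor; larger means trust the prior more
    """
    n = len(adapt_frames)
    data_mean = adapt_frames.mean(axis=0)
    return (tau * prior_mean + n * data_mean) / (tau + n)

prior = np.zeros(3)                   # stand-in source-trained mean
frames = np.ones((8, 3))              # stand-in adaptation frames
print(map_adapt_mean(prior, frames))  # pulled 8/(8+16) of the way to 1
```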
This two-volume set, consisting of LNCS 8403 and LNCS 8404, constitutes the thoroughly refereed proceedings of the 15th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2014, held in Kathmandu, Nepal, in April 2014. The 85 revised papers presented together with 4 invited papers were carefully reviewed and selected from 300 submissions. The papers are organized in the following topical sections: lexical resources; document representation; morphology, POS-tagging, and named entity recognition; syntax and parsing; anaphora resolution; recognizing textual entailment; semantics and discourse; natural language generation; sentiment analysis and emotion recognition; opinion mining and social networks; machine translation and multilingualism; information retrieval; text classification and clustering; text summarization; plagiarism detection; style and spelling checking; speech processing; and applications.
The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby makes it possible to integrate information from such a diverse set of resources. The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions of the workshop participants and, beyond this, it summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
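As a small illustration of the Linked Data paradigm applied to a language resource, here is a sketch using the Python rdflib library to publish a lexical entry as RDF and link it to DBpedia. The example.org namespace and the entry URI are invented for illustration.

```python
# Publish a lexical entry as RDF triples and interlink it with an
# external knowledge base (DBpedia), the "network effect" the Linked
# Data paradigm is after.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/lexicon/")  # hypothetical namespace
g = Graph()

entry = EX["lemma/bank"]
g.add((entry, RDFS.label, Literal("bank", lang="en")))
# The interlinking step: point the entry at a DBpedia resource.
g.add((entry, RDFS.seeAlso, URIRef("http://dbpedia.org/resource/Bank")))

print(g.serialize(format="turtle"))
```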
This book discusses the impact of spectral features extracted from frame level, glottal closure regions, and pitch-synchronous analysis on the performance of language identification systems. In addition to spectral features, the authors explore prosodic features such as intonation, rhythm, and stress for discriminating between languages. They show how the proposed spectral and prosodic features capture language-specific information from two complementary aspects, and how developing a language identification (LID) system with a combination of spectral and prosodic features enhances identification accuracy as well as the robustness of the system. The book provides methods to extract the spectral and prosodic features at various levels, and also suggests appropriate models for developing robust LID systems for specific spectral and prosodic features. Finally, the book discusses various combinations of spectral and prosodic features, and the models best suited to enhancing the performance of LID systems.
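A minimal sketch of score-level fusion of the kind such systems use: per-language scores from a spectral system and a prosodic system combined with a weighted sum. The weight, the score values, and the language set are invented for illustration; the book's own fusion schemes and models may differ.

```python
# Combine per-language scores from two complementary systems with a
# weighted linear fusion, then identify the top-scoring language.
import numpy as np

def fuse_scores(spectral_scores, prosodic_scores, w=0.7):
    """Weighted linear fusion of two systems' per-language scores."""
    return w * spectral_scores + (1.0 - w) * prosodic_scores

langs = ["lang_A", "lang_B", "lang_C"]          # placeholder labels
spectral = np.array([0.55, 0.30, 0.15])         # system 1 scores
prosodic = np.array([0.40, 0.45, 0.15])         # system 2 scores

fused = fuse_scores(spectral, prosodic)
print(langs[int(np.argmax(fused))])             # identified language
```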
Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book "Modelling, Learning and Processing of Text-Technological Data Structures" deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.
1. Main assumptions, objectives and conditionings 1.1. The present book is concerned with certain problems in the logical philosophy of language. It is written in the spirit of the Polish logical, philosophical, and semiotic tradition of syntax, and presents two conceptions of categorial languages: the theory of simple languages, i.e., languages which include neither variables nor the operators that bind them (for instance, large fragments of natural languages, languages of well-known sentential calculi, the language of Aristotle's traditional syllogistic, languages of equationally definable algebras), and the theory of ω-languages, i.e., languages which include operators and variables bound by the latter.
The general aim of this book is to provide an elementary exposition of some basic concepts in terms of which both classical and non-classical logics may be studied and appraised. Although quantificational logic is dealt with briefly in the last chapter, the discussion is chiefly concerned with propositional calculi. Still, the subject, as it stands today, cannot be covered in one book of reasonable length. Rather than try to include in the volume as much as possible, I have put emphasis on some selected topics. Even these could not be covered completely, but for each topic I have attempted to present a detailed and precise exposition of several basic results, including some which are non-trivial. The roots of some of the central ideas in the volume go back to J. Łukasiewicz's seminar on mathematical logic.
QUALICO was held for the first time as an international conference to demonstrate the state of the art in quantitative linguistics. This domain of language study and research is gaining considerable interest due to recent advances in linguistic modelling, particularly in computational linguistics, cognitive science, and developments in mathematics such as modern systems theory. Progress in hardware and software technology, together with ease of access to data and numerical processing, has provided new means of empirical data acquisition and the application of mathematical models of adequate complexity. This volume contains the papers read at QUALICO 91, and provides a representative overview of the state of the art in quantitative linguistic research.
Semantic fields are lexically coherent - the words they contain co-occur in texts. In this book the authors introduce and define semantic domains, a computational model for lexical semantics inspired by the theory of semantic fields. Semantic domains allow us to exploit domain features for texts, terms and concepts, and they can significantly boost the performance of natural-language processing systems. Semantic domains can be derived from existing lexical resources or can be acquired from corpora in an unsupervised manner. They also have the property of interlinguality, and they can be used to relate terms in different languages in multilingual application scenarios. The authors give a comprehensive explanation of the computational model, with detailed chapters on semantic domains, domain models, and applications of the technique in text categorization, word sense disambiguation, and cross-language text categorization. This book is suitable for researchers and graduate students in computational linguistics.
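The core representation can be illustrated compactly: below is a minimal numpy sketch in which a text is mapped to a vector of domain associations and texts are compared in domain space. The toy word-domain weights are invented; as the book describes, real domain models are derived from lexical resources or induced from corpora.

```python
# Represent texts as vectors of semantic-domain associations and
# compare them by cosine similarity in domain space.
import numpy as np

domains = ["sport", "finance"]
word_domain = {                 # toy association of words with domains
    "goal":  np.array([0.9, 0.1]),
    "bank":  np.array([0.1, 0.9]),
    "match": np.array([0.8, 0.2]),
    "loan":  np.array([0.0, 1.0]),
}

def domain_vector(words):
    """Sum the domain vectors of known words and L2-normalize."""
    vec = sum(word_domain[w] for w in words if w in word_domain)
    return vec / np.linalg.norm(vec)

t1 = domain_vector(["goal", "match"])
t2 = domain_vector(["bank", "loan"])
print(domains[int(np.argmax(t1))], float(t1 @ t2))  # dominant domain, similarity
```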
The accurate determination of the speech spectrum, particularly for short frames, is commonly pursued in diverse areas including speech processing, recognition, and acoustic phonetics. With this book the author makes the subject of spectrum analysis understandable to a wide audience, including those with a solid background in general signal processing and those without such background. In keeping with these goals, this is not a book that replaces or attempts to cover the material found in a general signal processing textbook. Some essential signal processing concepts are presented in the first chapter, but even there the concepts are presented in a generally understandable fashion as far as is possible. Throughout the book, the focus is on applications to speech analysis; mathematical theory is provided for completeness, but these developments are set off in boxes for the benefit of those readers with sufficient background. Other readers may proceed through the main text, where the key results and applications will be presented in general heuristic terms, and illustrated with software routines and practical "show-and-tell" discussions of the results. At some points, the book refers to and uses the implementations in the Praat speech analysis software package, which has the advantages that it is used by many scientists around the world, and it is free and open source software. At other points, special software routines have been developed and made available to complement the book, and these are provided in the Matlab programming language. If the reader has the basic Matlab package, he/she will be able to immediately implement the programs in that platform; no extra "toolboxes" are required.
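As a taste of the basic computation the book builds on, here is a minimal numpy sketch of estimating the spectrum of one short frame with a window and the FFT. It is the textbook operation, not one of the book's Matlab or Praat routines, and the test signal is synthetic.

```python
# Short-time spectrum of one 25 ms frame: window, FFT, power in dB.
import numpy as np

fs = 16000                          # sample rate, Hz
t = np.arange(0, 0.025, 1 / fs)     # one 25 ms frame (400 samples)
frame = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

window = np.hamming(len(frame))     # taper to reduce spectral leakage
spectrum = np.fft.rfft(frame * window)
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
freqs = np.fft.rfftfreq(len(frame), 1 / fs)

print(freqs[np.argmax(power_db)])   # strongest component, near 200 Hz
```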
The practical task of building a talking robot requires a theory of how natural language communication works. Conversely, the best way to computationally verify a theory of natural language communication is to demonstrate its functioning concretely in the form of a talking robot, the epitome of human-machine communication. To build an actual robot requires hardware that provides appropriate recognition and action interfaces, and because such hardware is hard to develop, the approach in this book is theoretical: the author presents an artificial cognitive agent with language as a software system called database semantics (DBS). Because a theoretical approach does not have to deal with the technical difficulties of hardware engineering, there is no reason to simplify the system - instead the software components of DBS aim at completeness of function and of data coverage in word form recognition, syntactic-semantic interpretation and inferencing, leaving the procedural implementation of elementary concepts for later. In this book the author first examines the universals of natural language and explains the Database Semantics approach. Then in Part I he examines the following natural language communication issues: using external surfaces; the cycle of natural language communication; memory structure; autonomous control; and learning. In Part II he analyzes the coding of content according to the aspects: semantic relations of structure; simultaneous amalgamation of content; graph-theoretical considerations; computing perspective in dialogue; and computing perspective in text. The book ends with a concluding chapter, a bibliography and an index. The book will be of value to researchers, graduate students and engineers in the areas of artificial intelligence and robotics, in particular those who deal with natural language processing.
This book introduces an approach that can be used to ground a variety of intelligent systems, ranging from simple fact-based systems to highly sophisticated reasoning systems. As the popularity of AI-related fields has grown over the last decade, the number of people interested in building intelligent systems has increased exponentially. Some of these people are highly skilled and experienced in the use of AI techniques, but many lack that kind of expertise. Much of the literature that might otherwise interest those in the latter category is not appreciated by them because the material is too technical, often needlessly so. The so-called logicists see logic as a primary tool and favor a formal approach to AI, whereas others are more content to rely on informal methods. This polarity has resulted in different styles of writing and reporting, and people entering the field from other disciplines often find themselves hard pressed to keep abreast of current differences in style. This book attempts to strike a balance between these approaches by covering points from both technical and nontechnical perspectives and by doing so in a way that is designed to hold the interest of readers of each persuasion. During recent years, a somewhat overwhelming number of books that present general overviews of AI-related subjects have been placed on the market. These books serve an important function by providing researchers and others entering the field with progress reports and new developments.
Current language technology is dominated by approaches that either enumerate a large set of rules, or are focused on a large amount of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has still not made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a much more automated setting than in previous work. A new approach is defined: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the necessary abstractions and generalisations in order to better understand language in the process? After defining the framework of Structure Discovery and shedding light on the nature and the graph structure of natural language data, several procedures are described that do exactly this: let the computer discover structures without supervision in order to boost the performance of language technology applications. Here, multilingual documents are sorted by language, word classes are identified, and semantic ambiguities are discovered and resolved without using a dictionary or other explicit human input. The book concludes with an outlook on the possibilities implied by this paradigm and places the methods in the perspective of human-computer interaction. The target audience is academics at all levels (undergraduate and graduate students, lecturers and professors) working in the fields of natural language processing and computational linguistics, as well as natural language engineers who are seeking to improve their systems.
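One of the tasks mentioned above, sorting multilingual documents by language without supervision, can be approximated in a few lines. The sketch below clusters character-trigram profiles with scikit-learn; it is only an analogy, since the book's Structure Discovery procedures are graph-based rather than vector-space clustering, and the toy documents are invented.

```python
# Unsupervised language sorting: cluster documents by their
# character-trigram profiles, with no dictionary or labels.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "a dog ran in the park",
    "der hund lief durch den park",
    "die katze sass auf der matte",
]

# Character trigrams capture language identity without any lexicon.
X = TfidfVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # documents in the same language share a cluster label
```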
This two-volume set, consisting of LNCS 7816 and LNCS 7817, constitutes the thoroughly refereed proceedings of the 14th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2013, held on Samos, Greece, in March 2013. The 91 contributions presented were carefully reviewed and selected for inclusion in the proceedings. The papers are organized in topical sections named: general techniques; lexical resources; morphology and tokenization; syntax and named entity recognition; word sense disambiguation and coreference resolution; semantics and discourse; sentiment, polarity, subjectivity, and opinion; machine translation and multilingualism; text mining, information extraction, and information retrieval; text summarization; stylometry and text simplification; and applications.
Audio Signal Processing for Next-Generation Multimedia Communication Systems presents cutting-edge digital signal processing theory and implementation techniques for problems including speech acquisition and enhancement using microphone arrays, new adaptive filtering algorithms, multichannel acoustic echo cancellation, sound source tracking and separation, audio coding, and realistic sound stage reproduction. This book's focus is almost exclusively on the processing, transmission, and presentation of audio and acoustic signals in multimedia communications for telecollaboration where immersive acoustics will play a great role in the near future.
The aim of this book and its accompanying audio files is to make accessible a corpus of 40 authentic job interviews conducted in English. The recordings and transcriptions of the interviews published here may be used by students, teachers and researchers alike for linguistic analyses of spoken discourse and as authentic material for language learning in the classroom. The book includes an introduction to corpus linguistics, offering insight into different kinds of corpora and discussing their main characteristics. Furthermore, major features of the discourse genre of the job interview are outlined, and detailed information is given concerning the job interview corpus published in this book.
Recent advances in the fields of knowledge representation, reasoning and human-computer interaction have paved the way for a novel approach to treating and handling context. The field of research presented in this book addresses the problem of contextual computing in artificial intelligence based on the state of the art in knowledge representation and human-computer interaction. The author puts forward a knowledge-based approach for employing high-level context in order to solve some persistent and challenging problems in the chosen showcase domain of natural language understanding. Specifically, the problems addressed concern the handling of noise due to speech recognition errors, semantic ambiguities, and the notorious problem of underspecification. Consequently the book examines the individual contributions of contextual computing for different types of context. Therefore, contextual information stemming from the domain at hand, prior discourse, and the specific user and real world situation are considered and integrated in a formal model that is applied and evaluated employing different multimodal mobile dialog systems. This book is intended to meet the needs of readers from at least three fields - AI and computer science; computational linguistics; and natural language processing - as well as some computationally oriented linguists, making it a valuable resource for scientists, researchers, lecturers, language processing practitioners and professionals as well as postgraduates and some undergraduates in the aforementioned fields. "The book addresses a problem of great and increasing technical and practical importance - the role of context in natural language processing (NLP). It considers the role of context in three important tasks: Automatic Speech Recognition, Semantic Interpretation, and Pragmatic Interpretation. Overall, the book represents a novel and insightful investigation into the potential of contextual information processing in NLP." Jerome A Feldman, Professor of Electrical Engineering and Computer Science, UC Berkeley, USA http://dm.tzi.de/research/contextual-computing/
It was in the course of 1980 that it dawned upon several friends and colleagues of Manfred Bierwisch that a half century had passed since his birth in 1930. Manfred's youthful appearance had prevented a timely appreciation of this fact, and these friends and colleagues are, therefore, not at all embarrassed to be presenting him, almost a year late, with a Festschrift which will leave a trace of this noteworthy occasion in the archives of linguistics. It should be realized, however, that the delay would have easily extended to 1990 if all those who had wanted to contribute to this book had in fact written their chapters. Under the pressure of actuality, several colleagues who had genuinely hoped or even promised to contribute just couldn't make it in time. Still, their greetings and best wishes are also, be it tacitly, expressed by this volume. Especially important for the archives would be a record of the celebrated one's works and physical appearance. For the convenience of present and future generations this Festschrift contains a bibliography of Manfred Bierwisch's scientific publications, which forms a chapter in itself. The frontispiece photograph was taken unawares by one of our accomplices. The title of this Festschrift may allow for free associations of various sorts.