Aiming to exemplify the methodology of learner corpus profiling, this book describes salient features of Romanian Learner English. As a starting point, the volume offers a comprehensive presentation of Romanian-English contrastive studies. Another innovative aspect of the book is its use of the first Romanian Corpus of Learner English, whose compilation is the object of a methodological discussion. In one of the main chapters, the book introduces the methodology of learner corpus profiling and compares it with existing approaches. The profiling approach is illustrated by corpus-based quantitative and qualitative investigations of Romanian Learner English. Part of the investigation is dedicated to the lexico-grammatical profiles of articles, prepositions and genitives. The frequency-based collocation analyses are integrated with error analyses and extended into error pattern samples. Furthermore, contrasting typical Romanian Learner English constructions with examples from the German and Italian learner corpora opens the path to new contrastive interlanguage analyses.
This book grew out of the Fourth Conference on Computers and the Writing Process, held at the University of Sussex in March 1991. The conference brought together a wide variety of people interested in most aspects of computers and the writing process, including computers and writing education, computer-supported fiction, computers and technical writing, evaluation of computer-based writing, and hypertext. Fifteen papers were selected from the twenty-five delivered at the conference. The authors were asked to develop them into articles, incorporating any insights they had gained from their conference presentations. This book offers a survey of the wide area of computers and writing, and describes current work in the design and use of computer-based tools for writing. University of Sussex, M.S., October 1991. Note from the Publisher: This collection of articles is being published simultaneously as a special issue, Volume 21(1-3), of Instructional Science - An International Journal of Learning and Cognition. Instructional Science 21: 1-4 (1992) (c) Kluwer Academic Publishers, Dordrecht. Introduction: MIKE SHARPLES, School of Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton BN1 9QH, United Kingdom.
The information revolution is upon us. Whereas the industrial revolution heralded the systematic augmentation of human physical limitations by harnessing external energy sources, the information revolution strives to augment human memory and mental processing limitations by harnessing external computational resources. Computers can accumulate, transmit and output much more information, and in a more timely fashion, than more conventional printed or spoken media. Of greater interest, however, is the computer's ability to process, classify and retrieve information selectively in response to the needs of each human user. One cannot drink from the fire hydrant of information without being immediately flooded with irrelevant text. Recent technological advances such as optical character readers only exacerbate the problem by increasing the volume of electronic text. Just as steam and internal combustion engines brought powerful energy sources under control to yield useful work in the industrial revolution, so must we build computational engines that control and apply the vast information sources so that they may yield useful knowledge. Information science is the study of systematic means to control, classify, process and retrieve vast amounts of information in electronic form. In particular, several methodologies have been developed to classify texts manually by armies of human indexers, as illustrated quite clearly at the National Library of Medicine, and many computational techniques have been developed to search textual databases automatically, such as full-text keyword searches.
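The full-text keyword search mentioned above rests on a simple data structure, the inverted index, which maps each token to the documents containing it. A minimal sketch (the toy documents and function names are invented for illustration, not taken from the book):

```python
from collections import defaultdict

def build_index(docs):
    """Toy inverted index: token -> set of ids of documents containing it.
    This is the basic data structure behind full-text keyword search."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, *terms):
    """Conjunctive keyword query: ids of documents containing every term."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {
    1: "steam engines yield useful work",
    2: "information retrieval from text",
    3: "useful information in electronic form",
}
```

A query such as `search(build_index(docs), "useful", "information")` intersects the posting sets of both terms and returns `{3}`, the only document containing both words.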
The aim of this book and its accompanying audio files is to make accessible a corpus of 40 authentic job interviews conducted in English. The recordings and transcriptions of the interviews published here may be used by students, teachers and researchers alike for linguistic analyses of spoken discourse and as authentic material for language learning in the classroom. The book includes an introduction to corpus linguistics, offering insight into different kinds of corpora and discussing their main characteristics. Furthermore, major features of the discourse genre job interview are outlined and detailed information is given concerning the job interview corpus published in this book.
Methods for studying writing processes have developed significantly over the last two decades. The rapid development of software tools which support the collection, display and analysis of writing process data, together with new input from various neighboring disciplines, contributes to increasingly detailed knowledge of the complex cognitive processes of writing. This volume, which focuses on research methods, mixed methods designs, conceptual considerations of writing process research, interdisciplinary research influences and the application of research methods in educational settings, provides an insight into the current status of the methodological development of writing process research in Europe.
Derivation or Representation? Hubert Haider & Klaus Netter 1 The Issue Derivation and Representation - these keywords refer to a conceptual as well as an empirical issue. Transformational grammar was at its outset (Chomsky 1957, 1975) a derivational theory which characterized a well-formed sentence by its derivation, i.e. a set of syntactic representations defined by a set of rules that map one representation into another. The set of mapping rules, the transformations, eventually became more and more abstract and were trivialized into a single one, namely "move α," a general movement rule. The constraints on movement were singled out in systems of principles that apply to the resulting representations, i.e. the configurations containing a moved element and its extraction site, the trace. The introduction of trace theory (cf. Chomsky 1977, ch. 3, ch. 4) in principle opened up the possibility of completely abandoning movement and generating the possible outputs of movement directly, i.e. as structures that contain gaps representing the extraction sites.
Parsing technology traditionally consists of two branches, which correspond to the two main application areas of context-free grammars and their generalizations. Efficient deterministic parsing algorithms have been developed for parsing programming languages, and quite different algorithms are employed for analyzing natural language. The Functional Treatment of Parsing provides a functional framework within which the different traditional techniques are restated and unified. The resulting theory provides new recursive implementations of parsers for context-free grammars. The new implementations, called recursive ascent parsers, avoid explicit manipulation of parse stacks and parse matrices, and are in many ways superior to conventional implementations. They are applicable to grammars for programming languages as well as natural languages. The book has been written primarily for students and practitioners of parsing technology. With its emphasis on modern functional methods, however, the book will also be of benefit to scientists interested in functional programming. The Functional Treatment of Parsing is an excellent reference and can be used as a text for a course on the subject.
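The key idea behind the recursive ascent parsers described above is that each LR state becomes a function, so the host language's call stack replaces the explicit parse stack and parse matrices. A minimal sketch for the toy grammar S -> 'a' S 'b' | 'c' (the grammar and the return-count encoding are illustrative assumptions, not the book's own formulation):

```python
def parse(tokens):
    """Recursive ascent recognizer for the toy grammar S -> 'a' S 'b' | 'c'.
    Each LR(0) state is a function; the host call stack plays the role of
    the explicit parse stack. A reduce state returns len(rhs) - 1, the
    number of additional stack frames still to unwind."""
    pos = 0
    ACCEPT = object()          # sentinel produced by the accepting state

    def peek():
        return tokens[pos] if pos < len(tokens) else '$'

    def state0():              # S' -> . S
        nonlocal pos
        t = peek()
        if t == 'a':
            pos += 1; k = state2()
        elif t == 'c':
            pos += 1; k = state3()
        else:
            raise SyntaxError("unexpected %r" % t)
        while k == 0:          # a finished S lands here: goto(0, S) = state1
            k = state1()
        return k               # only ACCEPT escapes the start state

    def state1():              # S' -> S .
        if peek() == '$':
            return ACCEPT
        raise SyntaxError("trailing input")

    def state2():              # S -> 'a' . S 'b'
        nonlocal pos
        t = peek()
        if t == 'a':
            pos += 1; k = state2()
        elif t == 'c':
            pos += 1; k = state3()
        else:
            raise SyntaxError("unexpected %r" % t)
        while k == 0:          # goto(2, S) = state4
            k = state4()
        return k - 1

    def state3():              # S -> 'c' .       reduce, |rhs| = 1
        return 0

    def state4():              # S -> 'a' S . 'b'
        nonlocal pos
        if peek() != 'b':
            raise SyntaxError("expected 'b'")
        pos += 1
        return state5() - 1

    def state5():              # S -> 'a' S 'b' . reduce, |rhs| = 3
        return 2

    return state0() is ACCEPT
```

For input "acb" or "aacbb" the recognizer accepts; for "ab" it raises SyntaxError. Notice that no stack object is ever built: reductions simply unwind the right number of recursive calls.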
The volume brings together a selection of invited articles and papers presented at the 4th International CILC Conference, held in Jaen, Spain, in March 2012. The chapters describe English using a range of corpora and other resources. There are two parts, one dealing with diachronic research and the other with synchronic research. Both parts investigate several aspects of the English language from various perspectives and illustrate the use of corpora in current research. The structure of the volume allows for the same linguistic aspect to be discussed from both the diachronic and the synchronic point of view. The chapters are also useful examples of the use of corpora, as well as of other resources such as dictionaries. They investigate a broad array of issues, mainly using corpora of English as a native language, with a focus on corpus tools and corpus description.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimize any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
"Cognitive and Computational Strategies for Word Sense Disambiguation" examines cognitive strategies used by humans and computational strategies used by machines for WSD, in parallel.
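One classic computational strategy for WSD, offered here only as an illustration and not necessarily the book's approach, is the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the surrounding context. A toy sketch with invented sense labels and glosses:

```python
def lesk(word_senses, context_words):
    """Simplified Lesk: pick the sense whose gloss shares the most words
    with the context. Sense labels and glosses here are toy examples."""
    context = set(w.lower() for w in context_words)
    def overlap(sense):
        return len(context & set(word_senses[sense].lower().split()))
    return max(word_senses, key=overlap)

senses = {
    "bank/finance": "a financial institution that accepts deposits of money",
    "bank/river": "sloping land beside a body of water such as a river",
}
```

Given the context ["deposits", "money", "account"], the finance gloss overlaps in two words and the river gloss in none, so `lesk` returns "bank/finance".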
"Predicting Prosody from Text for Text-to-Speech Synthesis" covers the specific aspects of prosody, mainly focusing on how to predict prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge in various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.
Originally published in 1997, this book is concerned with human language technology. This technology provides computers with the capability to handle spoken and written language. One major goal is to improve communication between humans and machines. If people can use their own language to access information, working with software applications and controlling machinery, the greatest obstacle for the acceptance of new information technology is overcome. Another important goal is to facilitate communication among people. Machines can help to translate texts or spoken input from one human language to the other. Programs that assist people in writing by checking orthography, grammar and style are constantly improving. This book was sponsored by the Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 7th International Conference on Logical Aspects of Computational Linguistics, LACL 2012, held in Nantes, France, in July 2012. The 15 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 24 submissions. The papers are organized in topical sections on logical foundation of syntactic formalisms, logics for semantics of lexical items, sentences, discourse and dialog, applications of these models to natural language processing, type theoretic, proof theoretic, model theoretic and other logically based formal methods for describing natural language syntax, semantics and pragmatics, as well as the implementation of natural language processing software relying on such methods.
"Phonetic Search Methods for Large Databases" focuses on Keyword Spotting (KWS) within large speech databases. The brief begins by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It then continues by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus is on the Phonetic Search method and its efficient implementation. This includes a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors' own research, which entails a comparative analysis of the Phonetic Search method, including algorithmic details. This brief is useful for researchers and developers in academia and industry from the fields of speech processing and speech recognition, specifically Keyword Spotting.
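A common baseline for phonetic-search keyword spotting, sketched here for illustration rather than as the authors' method, is to slide the keyword's phoneme sequence over the recognizer's phoneme output and accept windows that lie within a small edit distance of it:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two phoneme sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[m][n]

def spot_keyword(keyword_phones, stream_phones, max_dist=1):
    """Report start offsets in the phoneme stream where the keyword
    matches within max_dist edits (fixed-length windows for simplicity)."""
    k = len(keyword_phones)
    return [start
            for start in range(len(stream_phones) - k + 1)
            if edit_distance(keyword_phones,
                             stream_phones[start:start + k]) <= max_dist]
```

For the keyword /k ae t/ against the stream /dh ax k ae t s/, only the window starting at offset 2 matches exactly, so `spot_keyword` returns [2]. Real systems score against phone lattices with confusion-weighted costs rather than a flat edit distance.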
The contributions to this volume are drawn from the interdisciplinary research carried out within the Sonderforschungsbereich (SFB 378), a special long-term funding scheme of the German National Science Foundation (DFG). Sonderforschungsbereich 378 was situated at Saarland University, with colleagues from artificial intelligence, computational linguistics, computer science, philosophy, psychology and, in its final phases, cognitive neuroscience and psycholinguistics. The funding covered a period of 12 years, which was split into four phases of 3 years each, ending in December of 2007. Every sub-period culminated in an intensive reviewing process, comprising written reports as well as on-site presentations and demonstrations to the external reviewers. We are most grateful to these reviewers for their extensive support and critical feedback; they contributed their time and labor freely to the DFG, the independent and self-organized institution of German scientists. The final evaluation of the DFG reviewers judged the overall performance and the actual work with the highest possible mark, i.e. "excellent".
In opposition to the classical set theory of natural language, Novak's highly original monograph offers a theory based on alternative and fuzzy sets. This new approach is firmly grounded in semantics and pragmatics, and accounts for the vagueness inherent in natural language, filling a large gap in our current knowledge. The theory will foster fruitful debate among researchers in linguistics and artificial intelligence.
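The core idea of a fuzzy-set treatment of a vague predicate such as "tall" can be sketched as follows; the membership breakpoints are invented for illustration and do not come from Novak's book:

```python
def tall(height_cm):
    """Degree in [0, 1] to which a height counts as 'tall'.
    The 160/190 cm breakpoints are invented for illustration."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

# Zadeh's standard fuzzy connectives
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a
```

A height of 175 cm is "tall" to degree 0.5, so `f_and(tall(175), f_not(tall(175)))` is also 0.5: unlike in classical set theory, "tall and not tall" need not be false, which is precisely how vagueness is accommodated.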
This two-volume set, consisting of LNCS 7181 and LNCS 7182, constitutes the thoroughly refereed proceedings of the 13th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2012, held in New Delhi, India, in March 2012. The total of 92 full papers were carefully reviewed and selected for inclusion in the proceedings. The contents have been ordered according to the following topical sections: NLP system architecture; lexical resources; morphology and syntax; word sense disambiguation and named entity recognition; semantics and discourse; sentiment analysis, opinion mining, and emotions; natural language generation; machine translation and multilingualism; text categorization and clustering; information extraction and text mining; information retrieval and question answering; document summarization; and applications.
"Corpora and Language Education" critically examines key concepts and issues in corpus linguistics, with a particular focus on the expanding interdisciplinary nature of the field and the role that written and spoken corpora now play in the fields of professional communication, teacher education, translation studies, lexicography, literature, critical discourse analysis, and forensic linguistics. The book also presents a series of corpus-based case studies illustrating central themes and best practices in the field.
In knowledge-based natural language generation, issues of formal knowledge representation meet with the linguistic problems of choosing the most appropriate verbalization in a particular situation of utterance. Lexical Semantics and Knowledge Representation in Multilingual Text Generation presents a new approach to systematically linking the realms of lexical semantics and knowledge represented in a description logic. For language generation from such abstract representations, lexicalization is taken as the central step: when choosing words that cover the various parts of the content representation, the principal decisions on conveying the intended meaning are made. A preference mechanism is used to construct the utterance that is best tailored to parameters representing the context. Lexical Semantics and Knowledge Representation in Multilingual Text Generation develops the means for systematically deriving a set of paraphrases from the same underlying representation with the emphasis on events and verb meaning. Furthermore, the same mapping mechanism is used to achieve multilingual generation: English and German output are produced in parallel, on the basis of an adequate division between language-neutral and language-specific (lexical and grammatical) knowledge. Lexical Semantics and Knowledge Representation in Multilingual Text Generation provides detailed insights into designing the representations and organizing the generation process. Readers with a background in artificial intelligence, cognitive science, knowledge representation, linguistics, or natural language processing will find a model of language production that can be adapted to a variety of purposes.
Modal Logic is a branch of logic with applications in many related disciplines such as computer science, philosophy, linguistics and artificial intelligence. Over the last twenty years, in all of these neighbouring fields, modal systems have been developed that we call multi-dimensional. (Our definition of multi-dimensionality in modal logic is a technical one: we call a modal formalism multi-dimensional if, in its intended semantics, the universe of a model consists of states that are tuples over some more basic set.) This book treats such multi-dimensional modal logics in a uniform way, linking their mathematical theory to the research tradition in algebraic logic. We will define and discuss a number of systems in detail, focusing on such aspects as expressiveness, definability, axiomatics, decidability and interpolation. Although the book will be mathematical in spirit, we take care to give motivations from the disciplines mentioned earlier on.
0. PRELIMINARY REMARKS Initial drafts of the papers in this collection were presented at a conference entitled 'Views on Phrase Structure', held at the University of Florida, Gainesville, in March 1989. Eleven of the twenty-three participants in the conference were able to contribute to this volume. The purpose of the conference was to explore theories of phrase structure in their relation to other subsystems of grammar and/or systems of nonlinguistic knowledge. Some of the grammatical subsystems which the authors consider are theta-theory, movement, Case, and binding; a number of papers address how the conceptual system and/or aspects of language use may interact. Unifying the various approaches and perspectives is an attempt to furnish hypotheses concerning principles of phrase structure with some sort of independent justification. 1. PHRASE STRUCTURE THEORY: A BRIEF HISTORY A basic outline for a theory of phrase structure is accepted by all of the authors here; it is known as 'X-bar theory'. The concepts of X-bar theory are expressed in some form by a number of pre-generative linguists. For example, Bloomfield (1933) contrasted endocentric structures such as noun phrases and verb phrases with those he considered exocentric, e.g. prepositional phrases and clauses. Jespersen (1933), while presenting a functional system of description (in terms of 'ranks', where rank one is 'nominal', for example), clarified the relations among the head of a phrase, its modifier, and a phrase which modifies the modifier.
Speech and Human-Machine Dialog focuses on the dialog management component of a spoken language dialog system. Spoken language dialog systems provide a natural interface between humans and computers. These systems are of special interest for interactive applications, and they integrate several technologies including speech recognition, natural language understanding, dialog management and speech synthesis. Due to the conjunction of several factors throughout the past few years, humans are significantly changing their behavior vis-a-vis machines. In particular, the use of speech technologies will become normal in the professional domain and in everyday life. The performance of speech recognition components has also significantly improved. This book includes various examples that illustrate the different functionalities of the dialog model in a representative application for train travel information retrieval (train timetables, prices and ticket reservation). Speech and Human-Machine Dialog is designed for a professional audience, composed of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science and engineering.