This book constitutes the refereed proceedings of the 4th International Conference on Text, Speech and Dialogue, TSD 2001, held in Zelezna Ruda, Czech Republic, in September 2001.
This book constitutes the refereed proceedings of the scientific track of the 7th Congress of the Italian Association for Artificial Intelligence, AI*IA 2001, held in Bari, Italy, in September 2001.
This 1992 collection takes the exciting step of examining natural language phenomena from the perspective of both computational linguistics and formal semantics. Computational linguistics has until now been primarily concerned with the construction of computational models for handling the complexities of linguistic form, but has not tackled the questions of representing or computing meaning. Formal semantics, on the other hand, has attempted to account for the relations between forms and meanings, without necessarily attending to computational concerns. The book introduces the reader to the two disciplines and considers the prospects for the more unified and comprehensive computational theory of language which might obtain from their amalgamation. Of great interest to those working in the fields of computation, logic, semantics, artificial intelligence and linguistics generally.
Anaphora is a central topic in syntax, semantics, and pragmatics and to the interface between them. It is the subject of advanced undergraduate and graduate courses in linguistics and computational linguistics. In this book, Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He also provides by far the fullest cross-linguistic account of anaphora yet published.
Understanding any communication depends on the listener or reader recognizing that some words refer to what has already been said or written (his, its, he, there, etc.). This mode of reference, anaphora, involves complicated cognitive and syntactic processes, which people usually perform unerringly, but which present formidable problems for the linguist and cognitive scientist trying to explain precisely how comprehension is achieved. Anaphora is thus a central research focus in syntactic and semantic theory, while understanding and modelling its operation in discourse are important targets in computational linguistics and cognitive science. Yan Huang provides an extensive and accessible overview of the major contemporary issues surrounding anaphora and gives a critical survey of the many and diverse contemporary approaches to it. He provides by far the fullest cross-linguistic account yet published: Dr Huang's survey and analysis are based on a rich collection of data drawn from around 450 of the world's languages.
One of the most hotly debated phenomena in natural language is that of leftward argument scrambling. This book investigates the properties of Hindi-Urdu scrambling to show that it must be analyzed as uniformly a focality-driven XP-adjunction operation. It proposes a novel theory of binding and coreference that not only derives the coreference effects in scrambled constructions, but has important consequences for the proper formulation of binding, crossover, reconstruction, and representational economy in the minimalist program. The book will be of interest not only to specialists in Hindi-Urdu syntax and/or scrambling, but to all students of generative syntax.
Information extraction (IE) is a new technology enabling relevant content to be extracted from textual information available electronically. IE essentially builds on natural language processing and computational linguistics, but it is also closely related to the well established area of information retrieval and involves learning. In concert with other promising intelligent information processing technologies like data mining, intelligent data analysis, text summarization, and information agents, IE plays a crucial role in dealing with the vast amounts of information accessible electronically, for example from the Internet. The book is based on the Second International School on Information Extraction, SCIE-99, held in Frascati near Rome, Italy in June/July 1999.
A primary problem in the area of natural language processing has been that of semantic analysis. This book aims to look at the semantics of natural languages in context. It presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in Artificial Intelligence with the latest linguistic views of lexical semantics. This results in distinct advantages for relating the semantic analysis of a sentence to its context. A key feature is the clear separation of the lexical entries that represent the domain-specific linguistic information from the semantic interpreter that performs the analysis. The criteria for defining the lexical entries are firmly grounded in current linguistic theories, facilitating integration with existing parsers. This approach has been implemented and tested in Prolog on a domain for physics word problems and full details of the algorithms and code are presented. Semantic Processing for Finite Domains will appeal to postgraduates and researchers in computational linguistics, and to industrial groups specializing in natural language processing.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
The goal of this book is to integrate the research being carried out in the field of lexical semantics in linguistics with the work on knowledge representation and lexicon design in computational linguistics. Rarely do these two camps meet and discuss the demands and concerns of each other's fields. This book is therefore interesting in that it provides a stimulating and unique discussion between the computational perspective on lexical meaning and the concerns of the linguist for the semantic description of lexical items in the context of syntactic descriptions. The book grew out of the papers presented at a workshop held at Brandeis University in April 1988, funded by the American Association for Artificial Intelligence. The entire workshop, as well as the discussion periods accompanying each talk, was recorded. Once complete copies of each paper were available, they were distributed to participants, who were asked to provide written comments on the texts for review purposes. There is currently a growing interest in the content of lexical entries from a theoretical perspective, as well as a growing need to understand the organization of the lexicon from a computational view. This volume attempts to define the directions that need to be taken in order to achieve the goal of a coherent theory of lexical organization.
From tech giants to plucky startups, the world is full of companies boasting that they are on their way to replacing human interpreters, but are they right? Interpreters vs Machines offers a solid introduction to recent theory and research on human and machine interpreting, and then invites the reader to explore the future of interpreting. With a foreword by Dr Henry Liu, the 13th International Federation of Translators (FIT) President, and written by consultant interpreter and researcher Jonathan Downie, this book offers a unique combination of research and practical insight into the field of interpreting. Written in an innovative, accessible style with humorous touches and real-life case studies, this book is structured around the metaphor of playing and winning a computer game. It takes interpreters of all experience levels on a journey to better understand their own work, learn how computers attempt to interpret and explore possible futures for human interpreters. With five levels and split into 14 chapters, Interpreters vs Machines is key reading for all professional interpreters as well as students and researchers of Interpreting and Translation Studies, and those with an interest in machine interpreting.
Since the emergence of modern text linguistics in the 1960s, a multitude of sometimes highly specialized analytical approaches has been developed in this field, many of which have already been covered in various introductions. The aim of this workbook is to present the foundations of linguistic text analysis as needed in particular by students of philological subjects when analysing literary texts and demanding non-fiction. Text linguistics is not understood here as a special subdiscipline of linguistics concerned only with the "highest" level of description, but rather in the sense of the "usage-oriented linguistics" conceived by Peter Hartmann. Particular emphasis is placed on situating the "new" text linguistics within the tradition of earlier engagements with its subject matter (rhetoric, hermeneutics, literary studies, pre-structuralist grammar). The focus of the presentation lies on explaining, with many examples, the four central dimensions of description: situational context, function, theme, and linguistic form. Discussed here are not only the cohesive devices that ensure textual coherence, but the full range of linguistic means, above all at the levels of lexis and grammar. The aim is to make clear the connection between variational linguistics and text linguistics: among the tasks of the latter is describing the prescriptive and actual norms of language varieties and text types.
The breadth and spread of corpus-assisted discourse studies (CADS) indicate its usefulness for exploring language use within a social context. However, its theoretical foundations, limitations, and its epistemological implications must be considered so that we can adjust our research designs accordingly. This Element focuses on important meta-level questions around epistemology, while also offering a compact guide to which corpus linguistic tools are available and how they can contribute to finding out more about discourse. This Element will appeal to researchers both new and experienced, both within the CADS community and beyond.
Corpus linguistics continues to be a vibrant methodology applied across highly diverse fields of research in the language sciences. With the current steep rise in corpus sizes, computational power, statistical literacy and multi-purpose software tools, and inspired by neighbouring disciplines, approaches have diversified to an extent that calls for an intensification of the accompanying critical debate. Bringing together a team of leading experts, this book follows a unique design, comparing advanced methods and approaches current in corpus linguistics, to stimulate reflective evaluation and discussion. Each chapter explores the strengths and weaknesses of different datasets and techniques, presenting a case study and allowing readers to gauge methodological options in practice. Contributions also provide suggestions for further reading, and data and analysis scripts are included in an online appendix. This is an important and timely volume, and will be essential reading for any linguist interested in corpus-linguistic approaches to variation and change.
Corpora are ubiquitous in linguistic research, yet to date, there has been no consensus on how to conceptualize corpus representativeness and collect corpus samples. This pioneering book bridges this gap by introducing a conceptual and methodological framework for corpus design and representativeness. Written by experts in the field, it shows how corpora can be designed and built in a way that is both optimally suited to specific research agendas, and adequately representative of the types of language use in question. It considers questions such as 'what types of texts should be included in the corpus?', and 'how many texts are required?' - highlighting that the degree of representativeness rests on the dual pillars of domain considerations and distribution considerations. The authors introduce, explain, and illustrate all aspects of this corpus representativeness framework in a step-by-step fashion, using examples and activities to help readers develop practical skills in corpus design and evaluation.
This book reviews ways to improve statistical machine speech translation between Polish and English. Research has been conducted mostly on dictionary-based, rule-based, and syntax-based machine translation techniques. The most popular methodologies and tools are not well suited to the Polish language and therefore require adaptation, and language resources are lacking in parallel and monolingual data. The main objective of this volume is to develop an automatic and robust Polish-to-English translation system that meets specific translation requirements, and to develop bilingual textual resources by mining comparable corpora.
Ruslan Mitkov's highly successful Oxford Handbook of Computational Linguistics has been substantially revised and expanded in this second edition. Alongside updated accounts of the topics covered in the first edition, it includes 17 new chapters on subjects such as semantic role-labelling, text-to-speech synthesis, translation technology, opinion mining and sentiment analysis, and the application of Natural Language Processing in educational and biomedical contexts, among many others. The volume is divided into four parts that examine, respectively: the linguistic fundamentals of computational linguistics; the methods and resources used, such as statistical modelling, machine learning, and corpus annotation; key language processing tasks including text segmentation, anaphora resolution, and speech recognition; and the major applications of Natural Language Processing, from machine translation to author profiling. The book will be an essential reference for researchers and students in computational linguistics and Natural Language Processing, as well as those working in related industries.
Deep learning is revolutionizing how machine translation systems are built today. This book introduces the challenge of machine translation and evaluation, including historical, linguistic, and applied context, and then develops the core deep learning methods used for natural language applications. Code examples in Python give readers a hands-on blueprint for understanding and implementing their own machine translation systems. The book also provides extensive coverage of machine learning tricks, issues involved in handling various forms of data, model enhancements, and current challenges and methods for analysis and visualization. Summaries of the current research in the field make this a state-of-the-art textbook for undergraduate and graduate classes, as well as an essential reference for researchers and developers interested in other applications of neural methods in the broader field of human language processing.
Specifically designed for linguists, this book provides an introduction to programming using Python for those with little to no experience of coding. Python is one of the most popular and widely used programming languages: it is free and runs on any operating system. All examples in the text involve language data and can be adapted or used directly for language research. The text focuses on key language-related issues: searching, text manipulation, text encoding, and internet data, providing an excellent resource for language research. More experienced users of Python will also benefit from the advanced chapters on graphical user interfaces and functional programming.
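To give a flavour of the kind of text-manipulation task such a book addresses, here is a minimal, hypothetical sketch (not taken from the book itself) that tokenizes a text and counts word frequencies using only Python's standard library:

```python
# Illustrative example only: count word frequencies in a text.
# The function name and sample text are our own, not the book's.
from collections import Counter
import re

def word_frequencies(text):
    """Lowercase the text, extract alphabetic word tokens, and count them."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

sample = "The cat sat on the mat. The mat was flat."
freqs = word_frequencies(sample)
print(freqs.most_common(2))  # the two most frequent tokens
```

A few lines like these already support simple corpus queries; the same pattern extends naturally to reading files and handling different text encodings.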
At present, Web 2.0 technologies are making traditional research genres evolve and form complex genre assemblages with other genres online. This book takes the perspective of genre analysis to provide a timely examination of professional and public communication of science. It gives an updated overview of the increasing diversification of genres for communicating scientific research today by reviewing relevant theories that contribute to an understanding of genre evolution and innovation in Web 2.0. The book also offers a much-needed critical enquiry into the dynamics of languages for academic and research communication and reflects on current language-related issues such as academic Englishes, ELF lects, translanguaging, polylanguaging, and the multilingualisation of science. Additionally, it complements the critical reflections with data from small-scale specialised corpora and exploratory survey research. The book also includes pedagogical orientations for teaching and training researchers in the STEMM disciplines and proposes several avenues for future enquiry into research genres across languages.
The Lexicon provides an introduction to the study of words, their main properties, and how we use them to create meaning. It offers a detailed description of the organizing principles of the lexicon, and of the categories used to classify a wide range of lexical phenomena, including polysemy, meaning variation in composition, and the interplay with ontology, syntax, and pragmatics. Elisabetta Jezek uses empirical data from digitalized corpora and speakers' judgements, combined with the formalisms developed in the field of general and theoretical linguistics, to propose representations for each of these phenomena. The key feature of the book is that it merges theoretical accounts with lexicographic approaches and computational insights. Its clear structure and accessible approach make The Lexicon an ideal textbook for all students of linguistics-theoretical, applied, and computational-and a valuable resource for scholars and students of language in the fields of cognitive science and philosophy.
The content of this textbook is organized as a theory of language for the construction of talking robots. The main topic is the mechanism of natural language communication in both the speaker and the hearer. In the third edition the author has modernized the text, leaving the overview of traditional, theoretical, and computational linguistics, analytic philosophy of language, and mathematical complexity theory with their historical backgrounds intact. The format of the empirical analyses of English and German syntax and semantics has been adapted to current practice; and Chaps. 22-24 have been rewritten to focus more sharply on the construction of a talking robot.
The question of what types of data and evidence can be used is one of the most important topics in linguistics. This book is the first to comprehensively present the methodological problems associated with linguistic data and evidence. Its originality is twofold. First, the authors' approach accounts for a series of unexplained characteristics of linguistic theorising: the uncertainty and diversity of data, the role of evidence in the evaluation of hypotheses, and the problem solving strategies, as well as the emergence and resolution of inconsistencies. Second, the findings are obtained by the application of a new model of plausible argumentation which is also of relevance from a general argumentation theoretical point of view. All concepts and theses are systematically introduced and illustrated by a number of examples from different linguistic theories, and a detailed case-study section shows how the proposed model can be applied to specific linguistic problems.
This dictionary provides a full and authoritative guide to the meanings of the terms, concepts, and theories employed in pragmatics, the study of language in use.