Computational linguistics
Specifically designed for linguists, this book provides an introduction to programming using Python for those with little to no experience of coding. Python is one of the most popular and widely used programming languages: it is available for free and runs on any operating system. All examples in the text involve language data and can be adapted or used directly for language research. The text focuses on key language-related issues: searching, text manipulation, text encoding and internet data. More experienced users of Python will also benefit from the advanced chapters on graphical user interfaces and functional programming.
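As a flavour of the kind of task the book covers, here is a minimal sketch (not taken from the book; the sample sentence and patterns are invented for illustration) that searches language data with regular expressions and counts word forms:

```python
# A minimal sketch of two tasks of the kind the book covers: searching text
# with regular expressions and counting word frequencies. The sample text
# and patterns are invented, not taken from the book.
import re
from collections import Counter

text = "The cat sat on the mat. The cats sat on the mats."

# Tokenise into lowercase word forms.
tokens = re.findall(r"[a-z]+", text.lower())

# Count how often each form occurs.
freq = Counter(tokens)
print(freq.most_common(3))  # [('the', 4), ('sat', 2), ('on', 2)]

# Search for forms ending in -s.
print([t for t in tokens if re.fullmatch(r"[a-z]+s", t)])  # ['cats', 'mats']
```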
This textbook approaches second language acquisition from the perspective of generative linguistics. Roumyana Slabakova reviews and discusses paradigms and findings from the last thirty years of research in the field, focussing in particular on how the second or additional language is represented in the mind and how it is used in communication. The adoption and analysis of a specific model of acquisition, the Bottleneck Hypothesis, provides a unifying perspective. The book assumes some non-technical knowledge of linguistics, but important concepts are clearly introduced and defined throughout, making it a valuable resource not only for undergraduate and graduate students of linguistics, but also for researchers in cognitive science and language teachers.
Dynamical Grammar explores the consequences for language acquisition, language evolution, and linguistic theory of taking the underlying architecture of the language faculty to be that of a complex adaptive dynamical system. It contains the first results of a new and complex model of language acquisition which the authors have developed to measure how far language input is reflected in language output and thereby get a better idea of just how far the human language faculty is hard-wired.
The use of literature in second language teaching has been advocated for a number of years, yet despite this there have only been a limited number of studies which have sought to investigate its effects. Fewer still have focused on its potential effects as a model of spoken language or as a vehicle to develop speaking skills. Drawing upon multiple research studies, this volume fills that gap to explore how literature is used to develop speaking skills in second language learners. The volume is divided into two sections: literature and spoken language and literature and speaking skills. The first section focuses on studies exploring the use of literature to raise awareness of spoken language features, whilst the second investigates its potential as a vehicle to develop speaking skills. Each section contains studies with different designs and in various contexts including China, Japan and the UK. The research designs used mean that the chapters contain clear implications for classroom pedagogy and research in different contexts.
In this book, application-related studies for acoustic biomedical sensors are covered in depth. The book features an array of different biomedical signals, including acoustic, thermal, magnetic, and optical biomedical signals, to support healthcare. It employs signal processing approaches such as filtering, Fourier transform, spectral estimation, and wavelet transform. The book presents applications of acoustic biomedical sensors and bio-signal processing for prediction, detection, and monitoring of some diseases from phonocardiogram (PCG) signal analysis. Several challenges and future perspectives related to acoustic sensor applications are highlighted. The book supports engineers, researchers, designers, and physicians across the interdisciplinary domains that serve healthcare.
A landmark in linguistics and cognitive science. Ray Jackendoff proposes a new holistic theory of the relation between the sounds, structure, and meaning of language and their relation to mind and brain. Foundations of Language exhibits the most fundamental new thinking in linguistics since Noam Chomsky's Aspects of the Theory of Syntax in 1965 -- yet is readable, stylish, and accessible to a wide readership. Along the way it provides new insights on the evolution of language, thought, and communication.
In this handbook, renowned scholars from a range of backgrounds provide a state of the art review of key developmental findings in language acquisition. The book places language acquisition phenomena in a richly linguistic and comparative context, highlighting the link between linguistic theory, language development, and theories of learning. The book is divided into six parts. Parts I and II examine the acquisition of phonology and morphology respectively, with chapters covering topics such as phonotactics and syllable structure, prosodic phenomena, compound word formation, and processing continuous speech. Part III moves on to the acquisition of syntax, including argument structure, questions, mood alternations, and possessives. In Part IV, chapters consider semantic aspects of language acquisition, including the expression of genericity, quantification, and scalar implicature. Finally, Parts V and VI look at theories of learning and aspects of atypical language development respectively.
What is the lexicon, what does it contain, and how is it structured? What principles determine the functioning of the lexicon as a component of natural language grammar? What role does lexical information play in linguistic theory? This accessible introduction aims to answer these questions, and explores the relation of the lexicon to grammar as a whole. It includes a critical overview of major theoretical frameworks, and puts forward a unified treatment of lexical structure and design. The text can be used for introductory and advanced courses, and for courses that touch upon different aspects of the lexicon, such as lexical semantics, lexicography, syntax, general linguistics, computational lexicology and ontology design. The book provides students with a set of tools for working with lexical data for all kinds of purposes, and includes an abundance of exercises and in-class activities designed to keep students actively engaged with the content and to help them acquire the necessary knowledge and skills.
This book provides linguists with a clear, critical, and comprehensive overview of theoretical and experimental work on information structure. Leading researchers survey the main theories of information structure in syntax, phonology, and semantics as well as perspectives from psycholinguistics and other relevant fields. Following the editors' introduction the book is divided into four parts. The first, on theories of and theoretical perspectives on information structure, includes chapters on focus, topic, and givenness. Part 2 covers a range of current issues in the field, including quantification, dislocation, and intonation, while Part 3 is concerned with experimental approaches to information structure, including language processing and acquisition. The final part contains a series of linguistic case studies drawn from a wide variety of the world's language families. This volume will be the standard guide to current work in information structure and a major point of departure for future research.
The first contains papers on modal logics, with developments both within that area and in applications to linguistics, formal epistemology, and the study of norms. The second contains papers on non-classical and many-valued logics, with an eye on applications in computer science and through it to engineering. The third concerns the logic of belief management, which is likewise closely connected with recent work in computer science but also links directly with epistemology, the philosophy of science, the study of legal and other normative systems, and cognitive science. The grouping is of course rough, for there are contributions to the volume that lie astride a boundary; at least one of them is relevant, from a very abstract perspective, to all three areas. We say a few words about each of the individual chapters, to relate them to each other and the general outlook of the volume. Modal Logics: The first bundle of papers in this volume contains contributions to modal logic. Three of them examine general problems that arise for all kinds of modal logics. The first paper is essentially semantical in its approach, the second proof-theoretic, the third semantical again: Commutativity of quantifiers in varying-domain Kripke models, by R. Goldblatt and I. Hodkinson, investigates the possibility of commutation (i.e. reversing the order) for quantifiers in first-order modal logics interpreted over relational models with varying domains. The authors study a possible-worlds style structural model theory that does not validate commutation, but satisfies all the axioms originally presented by Kripke for his familiar semantics for first-order modal logic.
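For orientation, quantifier commutation is standardly a schema of the following form (an illustrative formulation, not quoted from the chapter); the chapter's point is that a semantics can satisfy Kripke's original axioms while failing to validate it:

```latex
% Quantifier commutation, schematically (illustrative, not quoted):
\[
  \forall x\,\forall y\,\varphi \;\rightarrow\; \forall y\,\forall x\,\varphi
\]
```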
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.
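The book develops everything in Haskell; purely as a rough sketch of the compositional idea in Python (the toy domain, lexicon, and truth conditions below are invented for illustration), word meanings can be coded as functions and the meaning of a sentence computed by applying them to one another:

```python
# A sketch of compositional semantics over a toy model. The domain and
# lexicon are invented; the book's own implementation is in Haskell.
domain = {"alice", "bob", "carol"}

def sleeps(x):  # one-place predicates as characteristic functions
    return x in {"alice", "carol"}

def linguist(x):
    return x in {"alice", "bob"}

def every(p):  # determiners as generalized quantifiers:
    return lambda q: all(q(x) for x in domain if p(x))

def some(p):
    return lambda q: any(q(x) for x in domain if p(x))

# "Every linguist sleeps" is computed by function application:
print(every(linguist)(sleeps))  # False: bob is a linguist who does not sleep
print(some(linguist)(sleeps))   # True: alice is a linguist who sleeps
```

In this sketch the denotational meaning of a sentence is simply the truth value the composed functions return against the model.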
With the first publication of this book in 1988, the centrality of the lexicon in language research was becoming increasingly apparent and the use of relational models of the lexicon had been the particular focus of research in a variety of disciplines since the early 1980s. This convergence of approach made the present collection especially welcome for bringing together reports of theoretical developments and applications in relational semantics in computer science, linguistics, cognitive science, anthropology and industrial research. It explains in detail some important applications of relational models to the construction of natural language interfaces, the building of thesauri for bibliographic information retrieval systems and the compilation of terminology banks for machine translation systems. Relational Models of the Lexicon not only provides an invaluable survey of research in relational semantics, but offers a stimulus for potential research advances in semantics, natural language processing and knowledge representation.
This book is the first comprehensive presentation of Functional Discourse Grammar, a new and important theory of language structure. The authors set out its nature and origins and show how it relates to contemporary linguistic theory. They demonstrate and test its explanatory power and descriptive utility against linguistic facts from over 150 languages across a wide range of linguistic families.
This book collects and introduces some of the best and most useful work in practical lexicography. It has been designed as a resource for students and scholars of lexicography and lexicology and to be an essential reference for professional lexicographers. It focusses on central issues in the field and covers topics hotly debated in lexicography circles. After a full contextual introduction, Thierry Fontenelle divides the book into twelve parts: theoretical perspectives, corpus design, lexicographical evidence, word senses and polysemy, collocations and idioms, definitions, examples, grammar and usage, bilingual lexicography, tools and methods, semantic networks, and how dictionaries are used. The book is fully referenced and indexed.
Memory-based language processing - a machine learning and problem solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
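TiMBL is the authors' own package; the following is only a minimal Python sketch of the underlying idea (the task, features, and data are invented): training examples are stored verbatim, and a new item is classified by its most similar stored neighbour, with no rules abstracted away from the data.

```python
# A minimal sketch of the memory-based idea on an invented toy task
# (English past-tense formation); TiMBL itself is a separate package.

def features(word, n=3):
    """Represent a word by its last n letters, padded on the left."""
    return tuple(("_" * n + word)[-n:])

def overlap(a, b):
    """Distance = number of mismatching feature positions."""
    return sum(x != y for x, y in zip(a, b))

# The 'memory': training examples stored verbatim, exceptions included.
memory = [
    ("walk", "-ed"), ("talk", "-ed"), ("play", "-ed"),
    ("sing", "irregular"), ("ring", "irregular"), ("bring", "irregular"),
]

def classify(word):
    # 1-nearest-neighbour over the stored examples; no rule extraction.
    return min(memory, key=lambda ex: overlap(features(ex[0]), features(word)))[1]

print(classify("stalk"))  # '-ed'        (suffix matches walk, talk)
print(classify("cling"))  # 'irregular'  (suffix matches sing, ring, bring)
```

Because the exceptional items sit in memory alongside the regular ones, low-frequency sub-regularities are kept rather than abstracted away, which is the comparative strength the book argues for.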
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This volume of newly commissioned essays examines current theoretical and computational work on polysemy, the term used in semantic analysis to describe words with more than one meaning. Such words present few difficulties in everyday language, but pose central problems for linguists and lexicographers, especially for those involved in lexical semantics and in computational modelling. The contributors to this book - leading researchers in theoretical and computational linguistics - consider the implications of these problems for linguistic theory and how they may be addressed by computational means. The theoretical essays in the book examine polysemy as an aspect of a broader theory of word meaning. Three theoretical approaches are presented: the Classical (or Aristotelian), the Prototypical, and the Relational. Their authors describe the nature of polysemy, the criteria for detecting it, and its manifestations across languages. They examine the issues arising from the regularity of polysemy and the theoretical principles proposed to account for the interaction of lexical meaning with the semantics and syntax of the context in which it occurs. Finally they consider the formal representations of meaning in the lexicon, and their implications for dictionary construction. The computational essays are concerned with the challenge of polysemy to automatic sense disambiguation - how the intended meaning for a word occurrence can be identified. The approaches presented include the exploitation of lexical information in machine-readable dictionaries, machine learning based on patterns of word co-occurrence, and hybrid approaches that combine the two. As a whole the volume shows how on the one hand theoretical work provides the motivation and may suggest the basis for computational algorithms, while on the other computational results may validate, or reveal problems in, the principles set forth by theories.
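As a much-simplified illustration of the co-occurrence style of disambiguation discussed in the computational essays (the senses and signature words below are invented), a sense can be picked by the overlap between its typical context words and the words surrounding the target:

```python
# A much-simplified sketch of sense disambiguation by context overlap,
# in the spirit of the co-occurrence approaches the essays discuss.
# The senses and signature words are invented for illustration.

senses = {
    "bank/finance": {"money", "loan", "account", "deposit", "interest"},
    "bank/river":   {"water", "shore", "fishing", "mud", "flood"},
}

def disambiguate(context_words):
    """Pick the sense whose signature shares most words with the context."""
    context = set(context_words)
    return max(senses, key=lambda s: len(senses[s] & context))

print(disambiguate("she opened an account at the bank for her loan".split()))
# bank/finance
print(disambiguate("they sat on the bank fishing in the muddy water".split()))
# bank/river
```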
This book is about investigating the way people use language in speech and writing. It introduces the corpus-based approach to the study of language, based on analysis of large databases of real language examples, and illustrates exciting new findings about language and the different ways that people speak and write. The book is important both for its step-by-step descriptions of research methods and for its findings about grammar and vocabulary, language use, language learning, and differences in language use across texts and user groups.
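As an illustration of one basic corpus technique of the kind the book describes (the toy corpus and helper below are invented for this sketch), a keyword-in-context concordance lists every occurrence of a word together with its immediate context:

```python
# A minimal keyword-in-context (KWIC) concordance over an invented
# toy corpus; real corpus work uses much larger text collections.
corpus = ("we use language every day and language use varies across "
          "speakers and across registers of use").split()

def kwic(tokens, keyword, width=2):
    """Print each occurrence of keyword with `width` words of context."""
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            print(f"{left:>20} [{tok}] {right}")

kwic(corpus, "use")
```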
Provides a valuable overview of the problems of syntax analysis, semantic analysis, text analysis and natural language generation. Although the text is written for readers with a background in computer science and finite mathematics, advanced knowledge of programming languages or linguistics is unnecessary.
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
Recent decades have seen a fundamental change and transformation in the commercialisation and popularisation of sports and sporting events. Corpus Approaches to the Language of Sports uses corpus resources to offer new perspectives on the language and discourse of this increasingly popular and culturally significant area of research. Bringing together a range of empirical studies from leading scholars, this book bridges the gap between quantitative corpus approaches and more qualitative, multimodal discourse methods. Covering a wide range of sports, including football, cycling and basketball, the linguistic aspects of sports language are analysed across different genres and contexts. Highlighting the importance of studying the language of sports alongside its accompanying audio-visual modes of communication, chapters draw on new digitised collections of language to fully describe and understand the complexities of communication through various channels. In doing so, Corpus Approaches to the Language of Sports not only offers exciting new insights into the language of sports but also extends the scope of corpus linguistics beyond traditional monomodal approaches to put multimodality firmly on the agenda.
**Shortlisted for the 2021 BAAL Book Prize for an outstanding book in the field of Applied Linguistics** Situated at the interface of corpus linguistics and health communication, Corpus, Discourse and Mental Health provides insights into the linguistic practices of members of three online support communities as they describe their experiences of living with and managing different mental health problems, including anorexia nervosa, depression and diabulimia. In examining contemporary health communication data, the book combines quantitative corpus linguistic methods with qualitative discourse analysis that draws upon recent theoretical insights from critical health sociology. Using this mixed-methods approach, the analysis identifies patterns and consistencies in the language used by people experiencing psychological distress and their role in realising varying representations of mental illness, diagnosis and treatment. Far from being neutral accounts of suffering and treating illness, corpus analysis illustrates that these interactions are suffused with moral and ideological tensions as sufferers seek to collectively negotiate responsibility for the onset and treatment of recalcitrant mental health problems. Integrating corpus linguistics, critical discourse analysis and health sociology, this book showcases the capacity of linguistic analysis for understanding mental health discourse as well as critically exploring the potential of corpus linguistics to offer an evidence-based approach to health communication research.
You may like...
- Multibiometric Watermarking with… by Rohit M. Thanki, Vedvyas J. Dwivedi, … (Hardcover, R1,555)
- Language, Music and Gesture… by Tatiana Chernigovskaya, Polina Eismont, … (Hardcover, R4,136)
- Linguistic Inquiries into Donald… by Ulrike Schneider, Matthias Eitelmann (Hardcover, R4,138)
- The Art and Science of Machine… by Walker H. Land Jr., J. David Schaffer (Hardcover, R4,471)
- Foundation Models for Natural Language… by Gerhard Paaß, Sven Giesselbach (Hardcover)
- The Oxford Handbook of Information… by Caroline Fery, Shinichiro Ishihara (Hardcover, R4,740)