In this handbook, renowned scholars from a range of backgrounds provide a state of the art review of key developmental findings in language acquisition. The book places language acquisition phenomena in a richly linguistic and comparative context, highlighting the link between linguistic theory, language development, and theories of learning. The book is divided into six parts. Parts I and II examine the acquisition of phonology and morphology respectively, with chapters covering topics such as phonotactics and syllable structure, prosodic phenomena, compound word formation, and processing continuous speech. Part III moves on to the acquisition of syntax, including argument structure, questions, mood alternations, and possessives. In Part IV, chapters consider semantic aspects of language acquisition, including the expression of genericity, quantification, and scalar implicature. Finally, Parts V and VI look at theories of learning and aspects of atypical language development respectively.
Experimental syntax is an area that is rapidly growing as linguistic research becomes increasingly focused on replicable language data, in both fieldwork and laboratory environments. The first of its kind, this handbook provides an in-depth overview of current issues and trends in this field, with contributions from leading international scholars. It pays special attention to sentence acceptability experiments, outlining current best practices in conducting tests, and pointing out promising new avenues for future research. Separate sections review research results from the past 20 years, covering specific syntactic phenomena and language types. The handbook also outlines other common psycholinguistic and neurolinguistic methods for studying syntax, comparing and contrasting them with acceptability experiments, and giving useful perspectives on the interplay between theoretical and experimental linguistics. Providing an up-to-date reference on this exciting field, it is essential reading for students and researchers in linguistics interested in using experimental methods to conduct syntactic research.
In this book, application-related studies for acoustic biomedical sensors are covered in depth. The book features an array of different biomedical signals, including acoustic biomedical signals as well as the thermal biomedical signals, magnetic biomedical signals, and optical biomedical signals to support healthcare. It employs signal processing approaches, such as filtering, Fourier transform, spectral estimation, and wavelet transform. The book presents applications of acoustic biomedical sensors and bio-signal processing for prediction, detection, and monitoring of some diseases from the phonocardiogram (PCG) signal analysis. Several challenges and future perspectives related to the acoustic sensors applications are highlighted. This book supports the engineers, researchers, designers, and physicians in several interdisciplinary domains that support healthcare.
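The blurb above names filtering, Fourier transform, and spectral estimation as the core signal-processing approaches. A minimal illustrative sketch of spectral estimation (not taken from the book; the sampling rate and 40 Hz test tone standing in for a heart-sound component are invented assumptions) might look like this:

```python
import numpy as np

# Hypothetical illustration: find the dominant frequency of a simulated
# heart-sound-like signal, as one might do in PCG spectral analysis.
fs = 1000                       # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of samples
signal = np.sin(2 * np.pi * 40 * t)  # a pure 40 Hz component for the demo

# Magnitude spectrum via the real FFT, and the matching frequency bins.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # 40.0
```

A real PCG pipeline would precede this with band-pass filtering and segmentation, but the same FFT step underlies the spectral-estimation approaches the book surveys.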
Event structures are central in Linguistics and Artificial Intelligence research: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and have expectations of what can happen next. Part of this process is based on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, like automated deep learning, to capture events and narrative structures involved in a 'story'. This overview of the current state-of-the-art on event extraction, temporal and causal relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.
This book constitutes the refereed proceedings of the 4th International Conference of the CLEF Initiative, CLEF 2013, held in Valencia, Spain, in September 2013. The 32 papers and 2 keynotes presented were carefully reviewed and selected for inclusion in this volume. The papers are organized in topical sections named: evaluation and visualization; multilinguality and less-resourced languages; applications; and Lab overviews.
area and in applications to linguistics, formal epistemology, and the study of norms. The second contains papers on non-classical and many-valued logics, with an eye on applications in computer science and through it to engineering. The third concerns the logic of belief management, which is likewise closely connected with recent work in computer science but also links directly with epistemology, the philosophy of science, the study of legal and other normative systems, and cognitive science. The grouping is of course rough, for there are contributions to the volume that lie astride a boundary; at least one of them is relevant, from a very abstract perspective, to all three areas. We say a few words about each of the individual chapters, to relate them to each other and the general outlook of the volume. Modal Logics: The first bundle of papers in this volume contains contributions to modal logic. Three of them examine general problems that arise for all kinds of modal logics. The first paper is essentially semantical in its approach, the second proof-theoretic, the third semantical again: Commutativity of quantifiers in varying-domain Kripke models, by R. Goldblatt and I. Hodkinson, investigates the possibility of commutation (i.e. reversing the order) for quantifiers in first-order modal logics interpreted over relational models with varying domains. The authors study a possible-worlds style structural model theory that does not validate commutation, but satisfies all the axioms originally presented by Kripke for his familiar semantics for first-order modal logic.
Language, Cognition, and Human Nature collects together for the first time much of Steven Pinker's most influential scholarly work on language and cognition. Pinker's seminal research explores the workings of language and its connections to cognition, perception, social relationships, child development, human evolution, and theories of human nature. This eclectic collection spans Pinker's thirty-year career, exploring his favorite themes in greater depth and scientific detail. It includes thirteen of Pinker's classic articles, ranging over topics such as language development in children, mental imagery, the recognition of shapes, the computational architecture of the mind, the meaning and uses of verbs, the evolution of language and cognition, the nature-nurture debate, and the logic of innuendo and euphemism. Each outlines a major theory or takes up an argument with another prominent scholar, such as Stephen Jay Gould, Noam Chomsky, or Richard Dawkins. Featuring a new introduction by Pinker that discusses his books and scholarly work, this collection reflects essential contributions to cognitive science by one of our leading thinkers and public intellectuals.
What is a language? What do scientific grammars tell us about the structure of individual languages and human language in general? What kind of science is linguistics? These and other questions are the subject of Ryan M. Nefdt's Language, Science, and Structure. Linguistics presents a unique and challenging subject matter for the philosophy of science. As a special science, its formalisation and naturalisation inspired what many consider to be a scientific revolution in the study of mind and language. Yet radical internal theory change, multiple competing frameworks, and issues of modelling and realism have largely gone unaddressed in the field. Nefdt develops a structural realist perspective on the philosophy of linguistics which aims to confront the aforementioned topics in new ways while expanding the outlook toward new scientific connections and novel philosophical insights. On this view, languages are real patterns which emerge from complex biological systems. Nefdt's exploration of this novel view will be especially valuable to those working in formal and computational linguistics, cognitive science, and the philosophies of science, mathematics, and language.
This groundbreaking book offers a new and compelling perspective on the structure of human language. The fundamental issue it addresses is the proper balance between syntax and semantics, between structure and derivation, and between rule systems and lexicon. It argues that the balance struck by mainstream generative grammar is wrong. It puts forward a new basis for syntactic theory, drawing on a wide range of frameworks, and charts new directions for research. In the past four decades, theories of syntactic structure have become more abstract, and syntactic derivations have become ever more complex. Peter Culicover and Ray Jackendoff trace this development through the history of contemporary syntactic theory, showing how much it has been driven by theory-internal rather than empirical considerations. They develop an alternative that is responsive to linguistic, cognitive, computational, and biological concerns. At the core of this alternative is the Simpler Syntax Hypothesis: the most explanatory syntactic theory is one that imputes the minimum structure necessary to mediate between phonology and meaning. A consequence of this hypothesis is a far richer mapping between syntax and semantics than is generally assumed. Through concrete analyses of numerous grammatical phenomena, some well studied and some new, the authors demonstrate the empirical and conceptual superiority of the Simpler Syntax approach. Simpler Syntax is addressed to linguists of all persuasions. It will also be of central interest to those concerned with language in psychology, human biology, evolution, computational science, and artificial intelligence.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimize any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
This volume of newly commissioned essays examines current theoretical and computational work on polysemy, the term used in semantic analysis to describe words with more than one meaning. Such words present few difficulties in everyday language, but pose central problems for linguists and lexicographers, especially for those involved in lexical semantics and in computational modelling. The contributors to this book - leading researchers in theoretical and computational linguistics - consider the implications of these problems for linguistic theory and how they may be addressed by computational means. The theoretical essays in the book examine polysemy as an aspect of a broader theory of word meaning. Three theoretical approaches are presented: the Classical (or Aristotelian), the Prototypical, and the Relational. Their authors describe the nature of polysemy, the criteria for detecting it, and its manifestations across languages. They examine the issues arising from the regularity of polysemy and the theoretical principles proposed to account for the interaction of lexical meaning with the semantics and syntax of the context in which it occurs. Finally they consider the formal representations of meaning in the lexicon, and their implications for dictionary construction. The computational essays are concerned with the challenge of polysemy to automatic sense disambiguation - how the intended meaning for a word occurrence can be identified. The approaches presented include the exploitation of lexical information in machine-readable dictionaries, machine learning based on patterns of word co-occurrence, and hybrid approaches that combine the two. As a whole the volume shows how on the one hand theoretical work provides the motivation and may suggest the basis for computational algorithms, while on the other computational results may validate, or reveal problems in, the principles set forth by theories.
Specifically designed for linguists, this book provides an introduction to programming using Python for those with little to no experience of coding. Python is one of the most popular and widely used programming languages; it is also free and runs on any operating system. All examples in the text involve language data and can be adapted or used directly for language research. The text focuses on key language-related issues: searching, text manipulation, text encoding and internet data, providing an excellent resource for language research. More experienced users of Python will also benefit from the advanced chapters on graphical user interfaces and functional programming.
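The searching and text-manipulation tasks the blurb mentions can be sketched with the standard library alone. This example is illustrative, not drawn from the book; the sample sentence and the `kwic` helper are invented for the demonstration:

```python
import re
from collections import Counter

# Tokenize a tiny invented text and count word frequencies, the kind of
# basic language-data task the book introduces.
text = "The cat sat on the mat. The mat was flat."
tokens = re.findall(r"[a-z']+", text.lower())
freq = Counter(tokens)
print(freq.most_common(2))  # [('the', 3), ('mat', 2)]

# A simple keyword-in-context (KWIC) concordance: show each occurrence
# of a word with a few tokens of surrounding context.
def kwic(tokens, word, width=2):
    for i, tok in enumerate(tokens):
        if tok == word:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            print(f"{left} [{tok}] {right}")

kwic(tokens, "mat")
```

Frequency counts and concordance lines like these are the building blocks of most corpus searches, and scale directly from a sentence to a whole corpus read from files.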
The book will appeal to scholars and advanced students of morphology, syntax, computational linguistics and natural language processing (NLP). It provides a critical and practical guide to computational techniques for handling morphological and syntactic phenomena, showing how these techniques have been used and modified in practice.
This book analyses diverse public discourses to investigate how wealth inequality has been portrayed in the British media from the time of the Second World War to the present day. Using a variety of corpus-assisted methods of discourse analysis, chapters present an historicized perspective on how the mass media have helped to make sharply increased wealth inequality seem perfectly normal. Print, radio and online media sources are interrogated using methodologies grounded in critical discourse analysis, critical stylistics and corpus linguistics in order to examine the influence of the media on the British electorate, who have passively consented to the emergence of an even less egalitarian Britain. Covering topics such as Second World War propaganda, the 'Change4Life' anti-obesity campaign and newspaper, parliamentary and TV news programme attitudes to poverty and austerity, this book will be of value to all those interested in the mass media's contribution to the entrenched inequality in modern Britain.
**Shortlisted for the 2021 BAAL Book Prize for an outstanding book in the field of Applied Linguistics** Situated at the interface of corpus linguistics and health communication, Corpus, Discourse and Mental Health provides insights into the linguistic practices of members of three online support communities as they describe their experiences of living with and managing different mental health problems, including anorexia nervosa, depression and diabulimia. In examining contemporary health communication data, the book combines quantitative corpus linguistic methods with qualitative discourse analysis that draws upon recent theoretical insights from critical health sociology. Using this mixed-methods approach, the analysis identifies patterns and consistencies in the language used by people experiencing psychological distress and their role in realising varying representations of mental illness, diagnosis and treatment. Far from being neutral accounts of suffering and treating illness, corpus analysis illustrates that these interactions are suffused with moral and ideological tensions as sufferers seek to collectively negotiate responsibility for the onset and treatment of recalcitrant mental health problems. Integrating corpus linguistics, critical discourse analysis and health sociology, this book showcases the capacity of linguistic analysis for understanding mental health discourse as well as critically exploring the potential of corpus linguistics to offer an evidence-based approach to health communication research.
Recent decades have seen a fundamental change and transformation in the commercialisation and popularisation of sports and sporting events. Corpus Approaches to the Language of Sports uses corpus resources to offer new perspectives on the language and discourse of this increasingly popular and culturally significant area of research. Bringing together a range of empirical studies from leading scholars, this book bridges the gap between quantitative corpus approaches and more qualitative, multimodal discourse methods. Covering a wide range of sports, including football, cycling and basketball, the linguistic aspects of sports language are analysed across different genres and contexts. Highlighting the importance of studying the language of sports alongside its accompanying audio-visual modes of communication, chapters draw on new digitised collections of language to fully describe and understand the complexities of communication through various channels. In doing so, Corpus Approaches to the Language of Sports not only offers exciting new insights into the language of sports but also extends the scope of corpus linguistics beyond traditional monomodal approaches to put multimodality firmly on the agenda.
Multi-Dimensional Analysis: Research Methods and Current Issues provides a comprehensive guide both to the statistical methods in Multi-Dimensional Analysis (MDA) and its key elements, such as corpus building, tagging, and tools. The major goal is to explain the steps involved in the method so that readers may better understand this complex research framework and conduct MD research on their own. Multi-Dimensional Analysis is a method that allows the researcher to describe different registers (textual varieties defined by their social use) such as academic settings, regional discourse, social media, movies, and pop songs. Through multivariate statistical techniques, MDA identifies complementary correlation groupings of dozens of variables, including variables which belong both to the grammatical and semantic domains. Such groupings are then associated with situational variables of texts like information density, orality, and narrativity to determine linguistic constructs known as dimensions of variation, which provide a scale for the comparison of a large number of texts and registers. This book is a comprehensive research guide to MDA.
This book is open access and available on www.bloomsburycollections.com. It is funded by Knowledge Unlatched. Corpus linguistics has much to offer history, as both disciplines engage so heavily in the analysis of large amounts of textual material. This book demonstrates the opportunities for exploring corpus linguistics as a method in historiography and the humanities and social sciences more generally. Focussing on the topic of prostitution in 17th-century England, it shows how corpus methods can assist in social research and deepen our understanding. McEnery and Baker draw principally on two sources - the newsbook Mercurius Fumigosis and the Early English Books Online Corpus. This scholarship on prostitution and the sex trade offers insight into the social position of women in history.
A comprehensive corpus analysis of adolescent health communication is long overdue - and this book provides it. We know comparatively little about the language adolescents use to articulate their health concerns, and discourse analysis of their choices can shed light on their attitudes towards and beliefs about health and illness. This book interrogates a two million word corpus of messages posted by adolescents to an online health forum. It adopts a mixed method corpus approach to health communication, combining both quantitative and qualitative techniques. Analysis in this way gives voice to an age group whose subjective experiences of illness have often been marginalized or simply overlooked in favour of the concerns of older populations.
In this volume, Matthew L. Jockers introduces readers to large-scale literary computing and the revolutionary potential of macroanalysis--a new approach to the study of the literary record designed for probing the digital-textual world as it exists today, in digital form and in large quantities. Using computational analysis to retrieve key words, phrases, and linguistic patterns across thousands of texts in digital libraries, researchers can draw conclusions based on quantifiable evidence regarding how literary trends are employed over time, across periods, within regions, or within demographic groups, as well as how cultural, historical, and societal linkages may bind individual authors, texts, and genres into an aggregate literary culture. Moving beyond the limitations of literary interpretation based on the "close-reading" of individual works, Jockers describes how this new method of studying large collections of digital material can help us to better understand and contextualize the individual works within those collections.
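The macroanalytic move described above, retrieving key words and comparing them across many texts, reduces at its simplest to relative word frequencies. A toy sketch, with two invented miniature "texts" standing in for whole digitized works (none of this is from Jockers's book):

```python
# Tiny invented corpora standing in for full digitized novels.
texts = {
    "novel_a": "whale whale sea ship sea",
    "novel_b": "love letter love garden",
}

def relative_freq(text, word):
    """Share of tokens in `text` that are `word` (naive whitespace tokens)."""
    tokens = text.split()
    return tokens.count(word) / len(tokens)

# Compare how prominent one keyword is in each text.
for name, body in texts.items():
    print(name, relative_freq(body, "sea"))
```

Normalizing counts to relative frequencies is what makes texts of very different lengths comparable, which is the precondition for tracking trends across periods, regions, or genres as the blurb describes.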
This is the first study to explore the complex nature of idiomaticity by bringing together a quantitative corpus-linguistic approach and judgement data. Presented with phrases such as 'take the plunge' and 'write a letter', native speakers of English tend to agree that the former is more idiomatic than the latter. What exactly is it about these two phrases that guides speakers' judgements? Adopting a usage-based perspective, this study addresses the question 'which factors do speakers rely upon when assessing the idiomaticity of a construction?'. "Rethinking Idiomaticity" combines quantitative corpus-linguistic evidence with quantitative judgement data to explore the nature of idiomaticity as a complex concept that comprises semantic and formal variation parameters. Wulff's fascinating book is suitable for researchers and postgraduates in the fields of lexicography, phraseology and corpus linguistics, and for those employing quantitative approaches. Cognitive linguists interested in the empirical underpinnings of their theoretical assumptions will also find this required reading. The Corpus and Discourse series consists of two strands. The first, "Research in Corpus and Discourse", features innovative contributions to various aspects of corpus linguistics and a wide range of applications, from language technology via the teaching of a second language to a history of mentalities. The second strand, "Studies in Corpus and Discourse", comprises key texts bridging the gap between social studies and linguistics. Although equally academically rigorous, this strand is aimed at a wider audience of academics and postgraduate students working in both disciplines.
This book demonstrates how corpus-based research can advance the understanding of linguistic phenomena in a given language. By presenting a detailed analysis of collocations and idioms in a digital corpus of English and German, the contributors to this volume show how the use of collocations and idioms has changed over time, and suggest possible triggers for this change. The book not only examines what these collocations and idioms are, but also what their purpose is within languages. Idioms and Collocations is divided into three sections. The first section discusses the construction, composition and annotation of the corpus. Chapters in the second section describe the methods for querying the corpus, the generation and maintenance of the example subcorpora, and the linguistic-lexicographic analyses of the target idioms. Finally, the third section presents the results of specific investigations into the syntactic, semantic, and historical properties of collocations. This book presents original work in corpus linguistics, computational linguistics, theoretical linguistics and lexicography. It will be useful for researchers in academic and industrial settings, and lexicographers. The editorial board includes: Paul Baker (Lancaster), Frantisek Cermak (Prague), Susan Conrad (Portland), Geoffrey Leech (Lancaster), Dominique Maingueneau (Paris XII), Christian Mair (Freiburg), Alan Partington (Bologna), Elena Tognini-Bonelli (Siena and TWC), Ruth Wodak (Lancaster), and Feng Zhiwei (Beijing). "Corpus Linguistics" provides the methodology to extract meaning from texts. Taking as its starting point the fact that language is not a mirror of reality but lets us share what we know, believe and think about reality, it focuses on language as a social phenomenon, and makes visible the attitudes and beliefs expressed by the members of a discourse community. Consisting of both spoken and written language, discourse always has historical, social, functional, and regional dimensions.
Discourse can be monolingual or multilingual, interconnected by translations. Discourse is where language and social studies meet.