This collection brings together the authors' previous research with
new work on the Register-Functional (RF) approach to grammatical
complexity, offering a unified theoretical account for its further
study. The book traces the development of the RF approach from its
foundations in two major research strands of linguistics: the study
of sociolinguistic variation and the text-linguistic study of
register variation. Building on this foundation, the authors
demonstrate the RF framework at work across a series of
corpus-based research studies focused specifically on grammatical
complexity in English. The volume highlights early work exploring
patterns of grammatical complexity in present-day spoken and
written registers as well as subsequent studies which extend this
research to historical patterns of register variation and the
application of RF research to the study of writing development for
L1 and L2 English university students. Taken together with the introductory chapters that connect the different studies, the volume offers readers a comprehensive resource for better understanding the RF approach to grammatical complexity and its implications for future research. The volume will appeal to
students and scholars with research interests in either descriptive
linguistics or applied linguistics, especially those interested in
grammatical complexity and empirical, corpus-based approaches.
This book builds on Baker and Egbert's previous work on
triangulating methodological approaches in corpus linguistics and
takes triangulation one step further to highlight its broader
applicability when implemented with other linguistic research
methods. The volume showcases research methods from other
linguistic disciplines and draws on ten empirical studies from a
range of topics in psycholinguistics, applied linguistics, and
discourse analysis to demonstrate how these methods might be most
effectively triangulated with corpus-linguistic methods. A concluding chapter synthesizes these findings, pointing the way toward future directions for triangulation and its implications for linguistic research. The combined effect reveals the potential for triangulating these methods not only to enhance rigor in empirical linguistic research but also to deepen our understanding of linguistic phenomena and variation by studying them from multiple perspectives, making this book essential reading
for graduate students and researchers in corpus linguistics,
applied linguistics, psycholinguistics, and discourse analysis.
Contemporary corpus linguists use a wide variety of methods to
study discourse patterns. This volume provides a systematic
comparison of various methodological approaches in corpus
linguistics through a series of parallel empirical studies that use
a single corpus dataset to answer the same overarching research
question. Ten contributing experts each use a different method to
address the same broadly framed research question: In what ways
does language use in online Q+A forum responses differ across four
world English varieties (India, Philippines, United Kingdom, and
United States)? Contributions are based on analysis of the same 400,000-word corpus from online Q+A forums, and contributors employ
methodologies including corpus-based discourse analysis, audience
perceptions, Multi-Dimensional analysis, pragmatic analysis, and
keyword analysis. In their introductory and concluding chapters,
the volume editors compare and contrast the findings from each
method and assess the degree to which 'triangulating' multiple
approaches may provide a more nuanced understanding of a research
question, with the aim of identifying a set of complementary approaches that could compensate for the analytical blind spots of any single method. Baker and Egbert also consider the importance of issues such
as researcher subjectivity, type of annotation, the limitations and
affordances of different corpus tools, the relative strengths of
qualitative and quantitative approaches, and the value of
considering data or information beyond the corpus. Rather than attempting to find the 'best' approach, the volume focuses on how different corpus-linguistic methodologies may complement one another, and it offers suggestions for further methodological studies that use triangulation to enrich corpus-related research.
Paradoxically, doing corpus linguistics is both easier and harder
than it has ever been before. On the one hand, it is easier because
we have access to more existing corpora, more corpus analysis
software tools, and more statistical methods than ever before. On
the other hand, reliance on these existing corpora and corpus
linguistic methods can potentially create layers of distance
between the researcher and the language in a corpus, making it a
challenge to do linguistics with a corpus. The goal of this Element
is to explore ways for us to improve how we approach linguistic
research questions with quantitative corpus data. We introduce and illustrate the major steps in the research process, including how to select and evaluate corpora; establish linguistically motivated research questions, observational units, and variables; select linguistically interpretable variables; understand and evaluate existing corpus software tools; adopt minimally sufficient statistical methods; and qualitatively interpret quantitative findings.
Corpora are ubiquitous in linguistic research, yet to date, there
has been no consensus on how to conceptualize corpus
representativeness and collect corpus samples. This pioneering book
bridges this gap by introducing a conceptual and methodological
framework for corpus design and representativeness. Written by
experts in the field, it shows how corpora can be designed and
built in a way that is both optimally suited to specific research
agendas, and adequately representative of the types of language use
in question. It considers questions such as 'What types of texts should be included in the corpus?' and 'How many texts are required?', highlighting that the degree of representativeness
rests on the dual pillars of domain considerations and distribution
considerations. The authors introduce, explain, and illustrate all
aspects of this corpus representativeness framework in a
step-by-step fashion, using examples and activities to help readers
develop practical skills in corpus design and evaluation.
While other books focus on specialized internet registers, such as tweets or texting, no previous study describes the full range of everyday registers found on the searchable web. These are the documents that readers encounter every time they do a Google search, spanning registers such as news reports, product reviews, travel blogs, discussion forums, and FAQs. Based on analysis of a large,
near-random corpus of web documents, this monograph provides
comprehensive situational, lexical, and grammatical descriptions of
those registers. Beginning with a coding of each document in the
corpus, the description identifies the registers that are
especially common on the searchable web versus those that are less
commonly found. Multi-dimensional analysis is used to describe the
overall patterns of linguistic variation among web registers, while
the second half of the book provides an in-depth description of
each individual register, including analyses of situational
contexts and communicative purposes, together with the typical
lexical and grammatical characteristics associated with those
contexts.