This book presents a comprehensive overview of semi-supervised approaches to dependency parsing. These approaches have become increasingly popular in recent years, largely because they can combine large amounts of unlabeled data with relatively small labeled datasets, and they have shown their advantages in dependency parsing for many languages. Recent work has proposed a variety of semi-supervised dependency parsing approaches that exploit different types of information gleaned from unlabeled data. The book offers readers a comprehensive introduction to these approaches, making it ideally suited as a textbook for advanced undergraduate and graduate students and researchers in the fields of syntactic parsing and natural language processing.
The lexicon is now a major focus of research in computational linguistics and natural language processing (NLP), as more linguistic theories concentrate on the lexicon and as the acquisition of an adequate vocabulary has become the chief bottleneck in developing practical NLP systems. This collection describes techniques of lexical representation within a unification-based framework and their linguistic application, concentrating on the issue of structuring the lexicon using inheritance and defaults. Topics covered include typed feature structures, default unification, lexical rules, multiple inheritance and non-monotonic reasoning. The contributions describe both theoretical results and implemented languages and systems, including DATR, the Stuttgart TFS and ISSCO's ELU. This book arose out of a workshop on default inheritance in the lexicon organized as a part of the Esprit ACQUILEX project on computational lexicography. Besides the contributed papers mentioned above, it contains a detailed description of the ACQUILEX lexical knowledge base (LKB) system and its use in the representation of lexicons extracted semi-automatically from machine-readable dictionaries.
This book unites a range of approaches to the collection and digitization of diverse language corpora. Its specific focus is on best practices identified in the exploitation of these resources in landmark impact initiatives across different parts of the globe. The development of increasingly accessible digital corpora has coincided with improvements in the standards governing the collection, encoding and archiving of 'Big Data'. Less attention has been paid to the importance of developing standards for enriching and preserving other types of corpus data, such as that which captures the nuances of regional dialects, for example. This book takes these best practices another step forward by addressing innovative methods for enhancing and exploiting specialized corpora so that they become accessible to wider audiences beyond the academy.
Semantic Interpretation and the Resolution of Ambiguity presents an important advance in computer understanding of natural language. While parsing techniques have been greatly improved in recent years, the approach to semantics has generally been ad hoc and has had little theoretical basis. Graeme Hirst offers a new, theoretically motivated foundation for conceptual analysis by computer, and shows how this framework facilitates the resolution of lexical and syntactic ambiguities. His approach is interdisciplinary, drawing on research in computational linguistics, artificial intelligence, Montague semantics, and cognitive psychology.
This book advances the growing area of language policy and planning (LPP) by examining the epistemological and theoretical foundations that engendered and sustain the field, drawing on insights and approaches from anthropology, linguistics, economics, political science, and education to create an accessible and inter-disciplinary overview of LPP as a coherent discipline. Throughout the book, the authors address LPP from different perspectives, exploring the interface between planning in theory and its practical problems in implementation. This volume will be of interest to students and scholars with an interest in LPP in particular, and educational, social, and public policy more broadly.
Corpus linguistics is one of the most exciting approaches to studies in applied linguistics today. From its quantitative beginnings it has grown to become an essential aspect of research methodology in a range of fields, often combining with text analysis, CDA, pragmatics and organizational studies to reveal important new insights about how language works. This important new book of specially commissioned chapters by academics from across the applied linguistics spectrum demonstrates the range and rigour of corpus research in applied linguistics. The volume captures some of the most stimulating and significant developments in the field, including chapters on language teaching, institutional and professional discourse, English as an International Language, translation, forensics and media studies. As a result it goes beyond traditional, limited presentations of corpus work and shows how corpora inform a diverse and growing number of applied linguistic domains.
TRENDS IN LINGUISTICS is a series of books that open new perspectives in our understanding of language. The series publishes state-of-the-art work on core areas of linguistics across theoretical frameworks, as well as studies that provide new insights by approaching language from an interdisciplinary perspective. TRENDS IN LINGUISTICS considers itself a forum for cutting-edge research based on solid empirical data on language in its various manifestations, including sign languages. It regards linguistic variation in its synchronic and diachronic dimensions as well as in its social contexts as important sources of insight for a better understanding of the design of linguistic systems and the ecology and evolution of language. TRENDS IN LINGUISTICS publishes monographs and outstanding dissertations as well as edited volumes, which provide the opportunity to address controversial topics from different empirical and theoretical viewpoints. High quality standards are ensured through anonymous reviewing.
This book builds on decades of research and provides contemporary theoretical foundations for practical applications to intelligent technologies and advances in artificial intelligence (AI). Reflecting the growing realization that computational models of human reasoning and interactions can be improved by integrating heterogeneous information resources and AI techniques, its ultimate goal is to promote integrated computational approaches to intelligent computerized systems. The book covers a range of interrelated topics, in particular, computational reasoning, language, syntax, semantics, memory, and context information. The respective chapters use and develop logically oriented methods and techniques, and the topics selected are from those areas of logic that contribute to AI and provide its mathematical foundations. The intended readership includes researchers working in the areas of traditional logical foundations, and on new approaches to intelligent computational systems.
The Yearbook of Corpus Linguistics and Pragmatics 2013 discusses current methodological debates on the synergy of Corpus Linguistics and Pragmatics research. The volume presents insightful pragmatic analyses of corpora in new technological domains and devotes some chapters to the pragmatic description of spoken corpora from various theoretical traditions. The Yearbook of Corpus Linguistics and Pragmatics series will give readers insight into how pragmatics can be used to explain real corpus data, and, in addition, how corpora can explain pragmatic intuitions, and from there, develop and refine theory. Corpus Linguistics can offer a meticulous methodology based on mathematics and statistics, while Pragmatics is characterized by its efforts to interpret intended meaning in real language. This yearbook offers a platform to scholars who combine both research methodologies to present rigorous and interdisciplinary findings about language in real use.
Contemporary data analytics involves extracting insights from data and translating them into action. With its turn towards empirical methods and convergent data sources, cognitive linguistics is a fertile context for data analytics. There are key differences between data analytics and statistical analysis as typically conceived. Though the former requires the latter, it emphasizes the role of domain-specific knowledge. Statistical analysis also tends to be associated with preconceived hypotheses and controlled data. Data analytics, on the other hand, can help explore unstructured datasets and inspire emergent questions. This volume addresses two key aspects in data analytics for cognitive linguistic work. Firstly, it elaborates the bottom-up guiding role of data analytics in the research trajectory, and how it helps to formulate and refine questions. Secondly, it shows how data analytics can suggest concrete courses of research-based action, which is crucial for cognitive linguistics to be truly applied. The papers in this volume impart various data analytic methods and report empirical studies across different areas of research and application. They aim to benefit new and experienced researchers alike.
The next big area within the information and communication technology field is Artificial Intelligence (AI). The industry is moving to automate networks, cloud-based systems (e.g., Salesforce), databases (e.g., Oracle), AWS machine learning (e.g., Amazon Lex), and creating infrastructure that has the ability to adapt in real-time to changes and learn what to anticipate in the future. It is an area of technology that is coming faster and penetrating more areas of business than any other in our history. AI will be used from the C-suite to the distribution warehouse floor. Replete with case studies, this book provides a working knowledge of AI's current and future capabilities and the impact it will have on every business. It covers everything from healthcare to warehousing, banking, finance and education. It is essential reading for anyone involved in industry.
This volume collects peer-reviewed articles from the Natural Language Processing and Cognitive Science (NLPCS) 2014 workshop, held in October 2014. The meeting fosters interaction among researchers and practitioners in NLP by taking a Cognitive Science perspective. Articles cover topics such as artificial intelligence, computational linguistics, psycholinguistics, cognitive psychology and language learning.
The book features recent attempts to construct corpora for specific purposes - e.g. multifactorial Dutch (parallel), Geasy Easy Language Corpus (intralingual), HK LegCo interpreting corpus - and showcases sophisticated and innovative corpus analysis methods. It proposes new approaches to address classical themes - i.e. translation pedagogy, translation norms and equivalence, principles of translation - and brings interdisciplinary perspectives - e.g. contrastive linguistics, cognition and metaphor studies - to cast new light. It is a timely reference for researchers as well as postgraduate students who are interested in the application of corpus technology to solving translation and interpreting problems.
The rapid advancement in the theoretical understanding of statistical and machine learning methods for semisupervised learning has made it difficult for nonspecialists to keep up to date in the field. Providing a broad, accessible treatment of the theory as well as linguistic applications, Semisupervised Learning for Computational Linguistics offers self-contained coverage of semisupervised methods that includes background material on supervised and unsupervised learning. The book presents a brief history of semisupervised learning and its place in the spectrum of learning methods before moving on to discuss well-known natural language processing methods, such as self-training and co-training. It then centers on machine learning techniques, including the boundary-oriented methods of perceptrons, boosting, support vector machines (SVMs), and the null-category noise model. In addition, the book covers clustering, the expectation-maximization (EM) algorithm, related generative methods, and agreement methods. It concludes with the graph-based method of label propagation as well as a detailed discussion of spectral methods. Taking an intuitive approach to the material, this lucid book facilitates the application of semisupervised learning methods to natural language processing and provides the framework and motivation for a more systematic study of machine learning.
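The self-training procedure mentioned in the blurb above can be sketched in a few lines: a supervised learner is trained on labeled data, then repeatedly labels the unlabeled examples it is most confident about and retrains on the enlarged set. The toy classifier below (nearest centroid on 1-D points, with the margin between the two nearest centroids as the confidence score) and all names are illustrative assumptions, not the book's implementation:

```python
# Minimal self-training sketch. The classifier and data are toy choices
# made for illustration only.

def train_centroids(points, labels):
    # Nearest-centroid "classifier": one mean per class.
    groups = {}
    for x, y in zip(points, labels):
        groups.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in groups.items()}

def predict(cents, x):
    # Return (label, confidence); confidence is the margin between the
    # two nearest centroids (larger margin = more confident).
    dists = sorted((abs(x - c), y) for y, c in cents.items())
    margin = (dists[1][0] - dists[0][0]) if len(dists) > 1 else 1.0
    return dists[0][1], margin

def self_train(labeled, unlabeled, rounds=5, k=2):
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        cents = train_centroids(*zip(*labeled))
        # Move the k most confidently predicted points into the labeled set.
        for x in sorted(pool, key=lambda x: -predict(cents, x)[1])[:k]:
            labeled.append((x, predict(cents, x)[0]))
            pool.remove(x)
    return train_centroids(*zip(*labeled))

cents = self_train([(0.0, "a"), (10.0, "b")], [1.0, 2.0, 8.5, 9.0, 5.2])
print(predict(cents, 1.5)[0])  # points near 0 end up classified as "a"
```

Co-training follows the same loop but uses two classifiers trained on different feature views, each labeling data for the other.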
This book is about machine translation (MT) and the classic problems associated with this language technology. It examines the causes of these problems and, for linguistic, rule-based systems, attributes the cause to language's ambiguity and complexity and their interplay in logic-driven processes. For non-linguistic, data-driven systems, the book attributes translation shortcomings to the very lack of linguistics. It then proposes a demonstrable way to relieve these drawbacks in the shape of a working translation model (Logos Model) that has taken its inspiration from key assumptions about psycholinguistic and neurolinguistic function. The book suggests that this brain-based mechanism is effective precisely because it bridges both linguistically driven and data-driven methodologies. It shows how simulation of this cerebral mechanism has freed this one MT model from the all-important, classic problem of complexity when coping with the ambiguities of language. Logos Model accomplishes this by a data-driven process that does not sacrifice linguistic knowledge, but that, like the brain, integrates linguistics within a data-driven process. As a consequence, the book suggests that the brain-like mechanism embedded in this model has the potential to contribute to further advances in machine translation in all its technological instantiations.
The aim of this book is to advocate and promote network models of linguistic systems that are both based on thorough mathematical models and substantiated in terms of linguistics. In this way, the book contributes first steps towards establishing a statistical network theory as a theoretical basis of linguistic network analysis at the border of the natural sciences and the humanities. This book addresses researchers who want to become familiar with theoretical developments, computational models and their empirical evaluation in the field of complex linguistic networks. It is intended for all those who are interested in statistical models of linguistic systems from the point of view of network research. This includes all relevant areas of linguistics, ranging from phonological, morphological and lexical networks on the one hand to syntactic, semantic and pragmatic networks on the other. In this sense, the volume concerns readers from many disciplines, such as physics, linguistics, computer science and information science. It may also be of interest for the emerging area of systems biology, with which the chapters collected here share a view of systems grounded in network analysis.
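As an illustration of the kind of lexical network such a volume analyzes, the sketch below builds a simple word co-occurrence network from raw sentences. The function name and windowing scheme are assumptions made for demonstration, not taken from the book:

```python
# Illustrative sketch: a lexical co-occurrence network, where edges link
# words that appear within `window` positions of each other and edge
# weights count co-occurrences.
from collections import defaultdict

def cooccurrence_network(sentences, window=2):
    edges = defaultdict(int)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            # Pair w with the next (window - 1) words in the sentence.
            for v in words[i + 1:i + window]:
                if v != w:
                    edges[tuple(sorted((w, v)))] += 1
    return dict(edges)

net = cooccurrence_network(["the cat sat", "the cat ran"])
print(net[("cat", "the")])  # "the cat" co-occurs in both sentences -> 2
```

Once built, such a graph can be handed to standard network statistics (degree distribution, clustering coefficient) of the sort statistical network theory studies.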
This book encompasses a collection of topics covering recent advances that are important to the Arabic language in areas of natural language processing, speech and image analysis. This book presents state-of-the-art reviews and fundamentals as well as applications and recent innovations. The book chapters by top researchers present basic concepts and challenges for the Arabic language in linguistic processing, handwritten recognition, document analysis, text classification and speech processing. In addition, it reports on selected applications in sentiment analysis, annotation, text summarization, speech and font analysis, word recognition and spotting and question answering. Moreover, it highlights and introduces some novel applications in vital areas for the Arabic language. The book is therefore a useful resource for young researchers who are interested in the Arabic language and are still developing their fundamentals and skills in this area. It is also interesting for scientists who wish to keep track of the most recent research directions and advances in this area.
This book deals with two fundamental issues in the semiotics of the image. The first is the relationship between image and observer: how does one look at an image? To answer this question, this book sets out to transpose the theory of enunciation formulated in linguistics over to the visual field. It also aims to clarify the gains made in contemporary visual semiotics relative to the semiology of Roland Barthes and Emile Benveniste. The second issue addressed is the relation between the forces, forms and materiality of the images. How do different physical mediums (pictorial, photographic and digital) influence visual forms? How does materiality affect the generativity of forms? On the forces within the images, the book addresses the philosophical thought of Gilles Deleuze and Rene Thom as well as the experiment of Aby Warburg's Atlas Mnemosyne. The theories discussed in the book are tested on a variety of corpora for analysis, including both paintings and photographs, taken from traditional as well as contemporary sources in a variety of social sectors (arts and sciences). Finally, semiotic methodology is contrasted with the computational analysis of large collections of images (Big Data), such as the "Media Visualization" analyses proposed by Lev Manovich and Cultural Analytics in the field of Computer Science to evaluate the impact of automatic analysis of visual forms on Digital Art History and more generally on the image sciences.
What led Shakespeare to write his most cryptic poem, 'The Phoenix and Turtle'? Could the Phoenix represent Queen Elizabeth, on the verge of death as Shakespeare wrote? Is the Earl of Essex, recently executed for treason, the Turtledove lover of the Phoenix? Questions such as these dominate scholarship of both Shakespeare's poem and the book in which it first appeared: Robert Chester's enigmatic collection of verse, Love's Martyr (1601), where Shakespeare's allegory sits next to erotic love lyrics by Ben Jonson, George Chapman and John Marston, as well as work by the much lesser-known Chester. Don Rodrigues critiques and revises traditional computational attribution studies by integrating the insights of queer theory to a study of Love's Martyr. A book deeply engaged in current debates in computational literary studies, it is particularly attuned to questions of non-normativity, deviation and departures from style when assessing stylistic patterns. Gathering insights from decades of computational and traditional analyses, it presents, most radically, data that supports the once-outlandish theory that Shakespeare may have had a significant hand in editing works signed by Chester. At the same time, this book insists on the fundamentally collaborative nature of production in Love's Martyr. Developing a compelling account of how collaborative textual production could work among early modern writers, Shakespeare's Queer Analytics is a much-needed methodological intervention in computational attribution studies. It articulates what Rodrigues describes as 'queer analytics': an approach to literary analysis that joins the non-normative close reading of queer theory to the distant attention of computational literary studies - highlighting patterns that traditional readings often overlook or ignore.
This readable introductory textbook presents a concise survey of corpus linguistics. The first section of the book introduces the key concepts in corpus linguistics and provides a brief history of the discipline. The second section expands the study of language and shows how corpus linguistics can advance our study of words and meaning, the benefits of studying corpora, and how meaning can best be conceptualised. Explaining corpus linguistics in easy-to-understand terms, and including a glossary and suggestions for further reading, this book will be useful to students trying to get a grasp of the subject.
This book presents the concept of the double hierarchy linguistic term set and its extensions, which can deal with dynamic and complex decision-making problems. With the rapid development of science and technology and the acceleration of information updating, the complexity of decision-making problems has become increasingly apparent. This book provides a comprehensive and systematic introduction to the latest research in the field, including measurement methods, consistency methods, group consensus and large-scale group consensus decision-making methods, as well as their practical applications. Intended for engineers, technicians, and researchers in the fields of computational linguistics, operations research, information science, management science and engineering, it also serves as a textbook for postgraduate and senior undergraduate university students.
This book investigates various aspects of Computer Assisted Language Learning (CALL) that address the challenges arising due to increasing learner and teacher mobility. The chapters deal with two broad areas, i.e. mobile technology for teacher and translator education and technology for mobile language learning. The authors allow for insights into how mobile learning activities can be used in educational settings by providing research on classroom practice. This book aims at helping readers gain a better understanding of the function and implementation of mobile technologies in local classroom contexts to support mobility, professional development, and language and culture learning.
This groundbreaking book offers a new and compelling perspective on the structure of human language. The fundamental issue it addresses is the proper balance between syntax and semantics, between structure and derivation, and between rule systems and lexicon. It argues that the balance struck by mainstream generative grammar is wrong. It puts forward a new basis for syntactic theory, drawing on a wide range of frameworks, and charts new directions for research. In the past four decades, theories of syntactic structure have become more abstract, and syntactic derivations have become ever more complex. Peter Culicover and Ray Jackendoff trace this development through the history of contemporary syntactic theory, showing how much it has been driven by theory-internal rather than empirical considerations. They develop an alternative that is responsive to linguistic, cognitive, computational, and biological concerns. Simpler Syntax is addressed to linguists of all persuasions. It will also be of central interest to those concerned with language in psychology, human biology, evolution, computational science, and artificial intelligence.
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors have been drawn from departments of linguistics, cognitive science, psychology, and computer science. They show what light can be thrown on fundamental problems when powerful computational techniques are combined with real data. The book considers the extent to which linguistic structure is readily available in the environment, the degree to which language learning is inductive or deductive, and the power of different modelling formalisms for different problems and approaches. It will appeal to linguists, psychologists, cognitive scientists working in language acquisition, and to those involved in computational modelling in linguistic and behavioural science.