The book will appeal to scholars and advanced students of
morphology, syntax, computational linguistics and natural language
processing (NLP). It provides a critical and practical guide to
computational techniques for handling morphological and syntactic
phenomena, showing how these techniques have been used and modified
in practice.
Research monograph presenting a new approach to Computational Linguistics. The ultimate goal of Computational Linguistics is to teach the computer to understand Natural Language. This research monograph presents a description of English according to algorithms which can be programmed into a computer to analyse natural language texts. The algorithmic approach uses a series of instructions, written in Natural Language and organised in flow charts, with the aim of analysing certain aspects of the grammar of a sentence. One problem with text processing is the difficulty in distinguishing word forms that belong to parts of speech taken out of context. In order to solve this problem, Hristo Georgiev starts with the assumption that every word is either a verb or a non-verb. From here he presents an algorithm which allows the computer to recognise parts of speech which to a human would be obvious through the meaning of the words. Emphasis is placed on verbs, nouns, participles and adjectives. English Algorithmic Grammar presents information for computers to recognise tenses, syntax, parsing, reference, and clauses. The final chapters of the book examine the further applications of an algorithmic approach to English grammar, and suggest ways in which the computer can be programmed to recognise meaning. This is an innovative, cutting-edge approach to computational linguistics that will be essential reading for academics researching computational linguistics, machine translation and natural language processing.
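The verb/non-verb split described above lends itself to a small illustration. The following Python sketch is hypothetical (it is not Georgiev's actual algorithm; the auxiliary list and suffix rules are invented for illustration), but it shows the flow-chart style of a rule cascade that classifies words from context rather than from a lexicon:

```python
# Toy rule cascade in the spirit of an algorithmic grammar (illustrative
# only): each token is first tested against verb cues, then refined
# using the preceding token as context.

AUX = {"is", "are", "was", "were", "has", "have", "had", "will", "can"}
DETERMINERS = {"the", "a", "an"}

def tag(tokens):
    """Assign a coarse VERB / NOUN / NONVERB tag to each token."""
    tags = []
    for i, tok in enumerate(tokens):
        prev = tokens[i - 1] if i else None
        if tok in AUX:
            tags.append("VERB")                 # auxiliaries are verbs
        elif tok.endswith("ed") or (tok.endswith("ing") and prev in AUX):
            tags.append("VERB")                 # contextual cue: AUX + -ing
        elif prev in DETERMINERS:
            tags.append("NOUN")                 # a determiner signals a noun
        else:
            tags.append("NONVERB")
    return tags
```

Running `tag("the cat is sleeping".split())` classifies "is" and "sleeping" as verbs and "cat" as a noun, purely from form and local context.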
Corpus linguistics is often regarded as a methodology in its own right, but little attention has been given to the theoretical perspectives from which the subject can be approached. The present book contributes to filling this gap. Bringing together original contributions by internationally renowned authors, the chapters include coverage of the lexical priming theory, parole-linguistics, a four-part model of language system and language use, and the concept of local textual functions. The theoretical arguments are illustrated and complemented by case studies using data from large corpora such as the BNC, smaller purpose-built corpora, and Google searches. By presenting theoretical positions in corpus linguistics, "Text, Discourse, and Corpora" provides an essential overview for advanced undergraduate, postgraduate and academic readers. "Corpus and Discourse Series" editors are: Wolfgang Teubert, University of Birmingham, and Michaela Mahlberg, Liverpool Hope University College. Editorial Board: Frantisek Cermak (Prague), Susan Conrad (Portland), Geoffrey Leech (Lancaster), Elena Tognini-Bonelli (Lecce and TWC), Ruth Wodak (Lancaster and Vienna), and Feng Zhiwei (Beijing). Corpus linguistics provides the methodology to extract meaning from texts. Taking as its starting point the fact that language is not a mirror of reality but lets us share what we know, believe and think about reality, it focuses on language as a social phenomenon, and makes visible the attitudes and beliefs expressed by the members of a discourse community. Consisting of both spoken and written language, discourse always has historical, social, functional, and regional dimensions. Discourse can be monolingual or multilingual, interconnected by translations. Discourse is where language and social studies meet. "The Corpus and Discourse" series consists of two strands. 
The first, "Research in Corpus and Discourse", features innovative contributions to various aspects of corpus linguistics and a wide range of applications, from language technology via the teaching of a second language to a history of mentalities. The second strand, "Studies in Corpus and Discourse", is comprised of key texts bridging the gap between social studies and linguistics. Although equally academically rigorous, this strand will be aimed at a wider audience of academics and postgraduate students working in both disciplines.
This book presents multibiometric watermarking techniques for the security of biometric data. It also covers transform-domain multibiometric watermarking techniques and their advantages and limitations. The authors have developed novel watermarking techniques that combine Compressive Sensing (CS) theory for the security of biometric data at the system database of the biometric system. The authors show how these techniques offer higher robustness, authenticity, better imperceptibility, increased payload capacity, and secure biometric watermarks. They show how to use CS theory to secure biometric watermarks before embedding them into the host biometric data. The suggested methods may find potential applications in the security of biometric data in various banking applications and in access control of laboratories, nuclear power stations, military bases, and airports.
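The embed/extract cycle at the heart of any watermarking scheme can be made concrete with a far simpler method than the transform-domain, CS-based techniques the book develops. The sketch below is a hypothetical least-significant-bit (LSB) embedder, included only to illustrate the round trip:

```python
# Toy spatial-domain LSB watermarking (illustrative only; not the
# book's CS-based method): hide one watermark bit in the least
# significant bit of each host pixel value.

def embed_watermark(host_pixels, bits):
    """Return a stego copy of host_pixels carrying the watermark bits."""
    out = list(host_pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b   # clear the LSB, then set it to b
    return out

def extract_watermark(stego_pixels, n_bits):
    """Recover the first n_bits watermark bits from the stego pixels."""
    return [p & 1 for p in stego_pixels[:n_bits]]
```

Because only the lowest bit changes, each pixel moves by at most 1, which is the "imperceptibility" the blurb refers to; robustness and capacity are what the transform-domain schemes improve on.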
This book explains advanced theoretical and application-related issues in grammatical inference, a research area inside the inductive inference paradigm for machine learning. The first three chapters of the book deal with issues regarding theoretical learning frameworks; the next four chapters focus on the main classes of formal languages according to Chomsky's hierarchy, in particular regular and context-free languages; and the final chapter addresses the processing of biosequences. The topics chosen are of foundational interest with relatively mature and established results, algorithms and conclusions. The book will be of value to researchers and graduate students in areas such as theoretical computer science, machine learning, computational linguistics, bioinformatics, and cognitive psychology who are engaged with the study of learning, especially of the structure underlying the concept to be learned. Some knowledge of mathematics and theoretical computer science, including formal language theory, automata theory, formal grammars, and algorithmics, is a prerequisite for reading this book.
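As a concrete entry point to regular-language inference, most algorithms in this area (for example, state-merging methods such as RPNI) start from a prefix tree acceptor built from positive examples. A minimal Python sketch, assuming only positive strings are available:

```python
# Prefix tree acceptor (PTA): the standard starting point for
# inferring a regular language from positive examples. States are
# prefixes of the sample; full sample strings are accepting states.

def build_pta(positive_samples):
    states = {""}            # the empty prefix is the start state
    accepting = set()
    transitions = {}         # (state, symbol) -> next state
    for w in positive_samples:
        for i, sym in enumerate(w):
            src, dst = w[:i], w[:i + 1]
            states.add(dst)
            transitions[(src, sym)] = dst
        accepting.add(w)
    return states, transitions, accepting

def accepts(transitions, accepting, w):
    """Run the PTA as a DFA on the string w."""
    state = ""
    for sym in w:
        if (state, sym) not in transitions:
            return False
        state = transitions[(state, sym)]
    return state in accepting
```

The PTA accepts exactly the training sample; generalisation to a smaller automaton comes from subsequently merging compatible states, which is where the learning frameworks discussed in the book diverge.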
In the course of his career, Professor Halliday has continued to address the application of linguistic scholarship to computational and quantitative studies. The sixth volume in the collected works of Professor M.A.K. Halliday includes works that span the last five decades, covering such topics as the early years of machine translation and probabilistic grammar. The last section of this volume discusses recent collaborative efforts bringing together those working in systemic functional grammar, fuzzy logic and "intelligent computing," engaging in what Halliday refers to as computing with meaning. The Collected Works of M.A.K. Halliday is a series that brings together Halliday's publications in many branches of linguistics, both theoretical and applied (a distinction which he himself rejects), including grammar and semantics, discourse analysis and stylistics, phonology, sociolinguistics, computational linguistics, language education and child language development.
This book reports on an outstanding thesis that has significantly advanced the state-of-the-art in the automated analysis and classification of speech and music. It defines several standard acoustic parameter sets and describes their implementation in a novel, open-source, audio analysis framework called openSMILE, which has been accepted and intensively used worldwide. The book offers extensive descriptions of key methods for the automatic classification of speech and music signals in real-life conditions and reports on the evaluation of the framework developed and the acoustic parameter sets that were selected. It is not only intended as a manual for openSMILE users, but also and primarily as a guide and source of inspiration for students and scientists involved in the design of speech and music analysis methods that can robustly handle real-life conditions.
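Acoustic parameter sets of the kind standardised in openSMILE are built up from frame-level low-level descriptors. The sketch below is plain Python, not the openSMILE API; it computes two classic descriptors, RMS energy and zero-crossing rate, per analysis frame (frame and hop sizes here correspond to 25 ms / 10 ms at 16 kHz, a common but assumed choice):

```python
import math

def frame_features(samples, frame_len=400, hop=160):
    """Return (RMS energy, zero-crossing rate) per frame - two of the
    classic low-level descriptors that acoustic parameter sets build on."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(x * x for x in frame) / frame_len)
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (frame_len - 1)
        feats.append((rms, zcr))
    return feats
```

Real parameter sets then summarise such frame-level contours with functionals (means, percentiles, slopes) to yield a fixed-length vector per utterance, which is what classifiers consume.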
This book introduces Meaningful Purposive Interaction Analysis (MPIA) theory, which combines social network analysis (SNA) with latent semantic analysis (LSA) to help create and analyse a meaningful learning landscape from the digital traces left by a learning community in the co-construction of knowledge. The hybrid algorithm is implemented in the statistical programming language and environment R, introducing packages which capture - through matrix algebra - elements of learners' work with more knowledgeable others and resourceful content artefacts. The book provides comprehensive package-by-package application examples, and code samples that guide the reader through the MPIA model to show how the MPIA landscape can be constructed and the learner's journey mapped and analysed. This building block application will allow the reader to progress to using and building analytics to guide students and support decision-making in learning.
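The latent semantic analysis half of MPIA can be sketched compactly. The book's implementation is in R; the following hypothetical NumPy rendering (toy corpus and dimensionality invented for illustration) shows the core LSA step: a term-document matrix, a truncated SVD, and document comparison in the latent space:

```python
import numpy as np

# Toy LSA pipeline (illustrative sketch, not the MPIA R packages):
# count terms per document, truncate the SVD, compare documents there.
docs = [["learning", "analytics", "data"],
        ["data", "analytics", "network"],
        ["poetry", "rhyme"]]
vocab = sorted({w for d in docs for w in d})
A = np.array([[d.count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                     # number of latent dimensions
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # one row per document

def cos(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the truncated space the two analytics documents end up close together while the poetry document is nearly orthogonal to both, which is the proximity structure MPIA then overlays with social network analysis.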
In this thesis, the author makes several contributions to the study of the design of graphical materials. The thesis begins with a review of the relationship between design and aesthetics, and the use of mathematical models to capture this relationship. Then, a novel method for linking linguistic concepts to colors using the Latent Dirichlet Allocation Dual Topic Model is proposed. Next, the thesis studies the relationship between aesthetics and spatial layout by formalizing the notion of visual balance. Applying principles of salience and Gaussian mixture models over a body of about 120,000 aesthetically rated professional photographs, the author provides confirmation of Arnheim's theory about spatial layout. The thesis concludes with a description of tools to support automatically generating personalized designs.
This book explores novel aspects of social robotics, spoken dialogue systems, human-robot interaction, spoken language understanding, multimodal communication, and system evaluation. It offers a variety of perspectives on and solutions to the most important questions about advanced techniques for social robots and chat systems. Chapters by leading researchers address key research and development topics in the field of spoken dialogue systems, focusing in particular on three special themes: dialogue state tracking, evaluation of human-robot dialogue in social robotics, and socio-cognitive language processing. The book offers a valuable resource for researchers and practitioners in both academia and industry whose work involves advanced interaction technology and who are seeking an up-to-date overview of the key topics. It also provides supplementary educational material for courses on state-of-the-art dialogue system technologies, social robotics, and related research fields.
This textbook examines empirical linguistics from a theoretical linguist's perspective. It provides both a theoretical discussion of what quantitative corpus linguistics entails and detailed, hands-on, step-by-step instructions to implement the techniques in the field. The statistical methodology and R-based coding from this book teach readers the basic and then more advanced skills to work with large data sets in their linguistics research and studies. Massive data sets are now more than ever the basis for work that ranges from usage-based linguistics to the far reaches of applied linguistics. This book presents much of the methodology in a corpus-based approach. However, the corpus-based methods in this book are also essential components of recent developments in sociolinguistics, historical linguistics, computational linguistics, and psycholinguistics. Material from the book will also be appealing to researchers in digital humanities and the many non-linguistic fields that use textual data analysis and text-based sensorimetrics. Chapters cover topics including corpus processing, frequency data, and clustering methods. Case studies illustrate each chapter with accompanying data sets, R code, and exercises for use by readers. This book may be used in advanced undergraduate courses, graduate courses, and self-study.
All human speech has expression. It is part of the 'humanness' of speech, and is a quality listeners expect to find. Without expression, speech sounds lifeless and artificial. Remove expression, and what's left is the bare bones of the intended message, but none of the feelings which surround the message. The purpose of this book is to present research examining expressive content in speech with a view to simulating expression in computer speech. Human beings communicate expressively with each other in conversation: now in the computer age there is a perceived need for machines to communicate expressively with humans in dialogue.
This contributed volume explores the achievements gained and the puzzling questions that remain when dynamical systems theory is applied to linguistic inquiry. The book is divided into three parts, each addressing one of the following topics: (1) facing complexity in the right way: mathematics and complexity; (2) complexity and the theory of language; (3) from empirical observation to formal models: the investigation of specific linguistic phenomena, such as enunciation, deixis, or the meaning of metaphorical phrases. The application of complexity theory to describe cognitive phenomena is a recent and very promising trend in cognitive science. When dynamical approaches triggered a paradigm shift in cognitive science some decades ago, the major topics of research were the challenges posed by classical computational approaches to the explanation of cognitive phenomena such as consciousness, decision making and language. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate and post-graduate students who want to enter the field.
This book integrates current advances in biology, economics of information and linguistics research through applications using agent-based modeling and social network analysis to develop scenarios of communication and language emergence in the social aspects of biological communications. The book presents a model of communication emergence that can be applied both to human and non-human living organism networks. The model is based on economic concepts and individual behavior fundamental for the study of trust and reputation networks in social science, particularly in economics; it is also based on the theory of the emergence of norms and historical path dependence that has been influential in institutional economics. Also included are mathematical models and code for agent-based models to explore various scenarios of language evolution, as well as a computer application that explores language and communication in biological versus social organisms, and the emergence of various meanings and grammars in human networks. Emergence of Communication in Socio-Biological Networks offers both a completely novel approach to communication emergence and language evolution and provides a path for the reader to explore various scenarios of language and communication that are not constrained to the human networks alone. By illustrating how computational social science and the complex systems approach can incorporate multiple disciplines and offer an integrated theory-model approach to the evolution of language, the book will be of interest to researchers working with computational linguistics, mathematical linguistics, and complex systems.
This book offers an introduction to modern natural language processing using machine learning, focusing on how neural networks create a machine interpretable representation of the meaning of natural language. Language is crucially linked to ideas - as Webster's 1923 "English Composition and Literature" puts it: "A sentence is a group of words expressing a complete thought". Thus the representation of sentences and the words that make them up is vital in advancing artificial intelligence and other "smart" systems currently being developed. Providing an overview of the research in the area, from Bengio et al.'s seminal work on a "Neural Probabilistic Language Model" in 2003, to the latest techniques, this book enables readers to gain an understanding of how the techniques are related and what is best for their purposes. As well as an introduction to neural networks in general and recurrent neural networks in particular, this book details the methods used for representing words, senses of words, and larger structures such as sentences or documents. The book highlights practical implementations and discusses many aspects that are often overlooked or misunderstood. The book includes thorough instruction on challenging areas such as hierarchical softmax and negative sampling, to ensure the reader fully and easily understands the details of how the algorithms function. Combining practical aspects with a more traditional review of the literature, it is directly applicable to a broad readership. It is an invaluable introduction for early graduate students working in natural language processing; a trustworthy guide for industry developers wishing to make use of recent innovations; and a sturdy bridge for researchers already familiar with linguistics or machine learning wishing to understand the other.
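Negative sampling, one of the challenging areas mentioned above, replaces the softmax over the whole vocabulary with a handful of binary decisions: push the true context word's score up and a few sampled noise words' scores down. A minimal Python sketch of the per-pair objective (the vectors here are invented for illustration, and sampling of the noise words is left out):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_sampling_loss(center, positive, negatives):
    """Skip-gram negative-sampling loss for one (center, context) pair:
    -log sigma(c . pos) - sum over noise words of log sigma(-c . neg)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    loss = -math.log(sigmoid(dot(center, positive)))     # true context
    for neg in negatives:                                # sampled noise
        loss += -math.log(sigmoid(-dot(center, neg)))
    return loss
```

The loss is small when the center vector aligns with the true context word and opposes the noise words, which is exactly the geometry word-embedding training converges to.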
In this book, leading researchers in morphology, syntax, language acquisition, psycholinguistics, and computational linguistics address central questions about the form and acquisition of analogy in grammar. What kinds of patterns do speakers select as the basis for analogical extension? What types of items are particularly susceptible or resistant to analogical pressures? At what levels do analogical processes operate and how do processes interact? What formal mechanisms are appropriate for modelling analogy? The novel synthesis of typological, theoretical, computational, and developmental paradigms in this volume brings us closer to answering these questions than ever before.
Cross-Disciplinary Advances in Applied Natural Language Processing: Issues and Approaches defines the role of ANLP within NLP, and alongside other disciplines such as linguistics, computer science, and cognitive science. The description also includes the categorization of current ANLP research, and examples of current research in ANLP. This book is a useful reference for teachers, students, and materials developers in fields spanning linguistics, computer science, and cognitive science.
This book presents the consolidated acoustic data for all phones in Standard Colloquial Bengali (SCB), commonly known as Bangla, a language used by 350 million people in India, Bangladesh, and the Bengali diaspora. The book analyzes the real speech of selected native speakers of the Bangla dialect to ensure that a proper acoustical database is available for the development of speech technologies. The acoustic data presented consists of averages and their normal spread, represented by the standard deviations of necessary acoustic parameters including e.g. formant information for multiple native speakers of both sexes. The study employs two important speech technologies: (1) text to speech synthesis (TTS) and (2) automatic speech recognition (ASR). The procedures, particularly those related to the use of technologies, are described in sufficient detail to enable researchers to use them to create technical acoustic databases for any other Indian dialect. The book offers a unique resource for scientists and industrial practitioners who are interested in the acoustic analysis and processing of Indian dialects to develop similar dialect databases of their own.
Tense and aspect are means by which language refers to time: how an event takes place in the past, present, or future. They play a key role in understanding the grammar and structure of all languages, and interest in them reaches across linguistics. The Oxford Handbook of Tense and Aspect is a comprehensive, authoritative, and accessible guide to the topics and theories that currently form the front line of research into tense, aspect, and related areas. The volume contains 36 chapters, divided into 6 sections, written by internationally known experts in theoretical linguistics.
This book brings together scientists, researchers, practitioners, and students from academia and industry to present recent and ongoing research activities concerning the latest advances, techniques, and applications of natural language processing systems, and to promote the exchange of new ideas and lessons learned. Taken together, the chapters of this book provide a collection of high-quality research works that address broad challenges in both theoretical and applied aspects of intelligent natural language processing. The book presents the state-of-the-art in research on natural language processing, computational linguistics, applied Arabic linguistics and related areas. New trends in natural language processing systems are rapidly emerging - and finding application in various domains including education, travel and tourism, and healthcare, among others. Many issues encountered during the development of these applications can be resolved by incorporating language technology solutions. The topics covered by the book include: Character and Speech Recognition; Morphological, Syntactic, and Semantic Processing; Information Extraction; Information Retrieval and Question Answering; Text Classification and Text Mining; Text Summarization; Sentiment Analysis; Machine Translation; Building and Evaluating Linguistic Resources; and Intelligent Language Tutoring Systems.
"Corpora and Language Education" critically examines key concepts and issues in corpus linguistics, with a particular focus on the expanding interdisciplinary nature of the field and the role that written and spoken corpora now play in the fields of professional communication, teacher education, translation studies, lexicography, literature, critical discourse analysis and forensic linguistics. The book also presents a series of corpus-based case studies illustrating central themes and best practices in the field.
Recent advances in the fields of knowledge representation, reasoning and human-computer interaction have paved the way for a novel approach to treating and handling context. The field of research presented in this book addresses the problem of contextual computing in artificial intelligence based on the state of the art in knowledge representation and human-computer interaction. The author puts forward a knowledge-based approach for employing high-level context in order to solve some persistent and challenging problems in the chosen showcase domain of natural language understanding. Specifically, the problems addressed concern the handling of noise due to speech recognition errors, semantic ambiguities, and the notorious problem of underspecification. Consequently, the book examines the individual contributions of contextual computing for different types of context. To this end, contextual information stemming from the domain at hand, prior discourse, and the specific user and real-world situation is considered and integrated in a formal model that is applied and evaluated employing different multimodal mobile dialog systems. This book is intended to meet the needs of readers from at least three fields - AI and computer science; computational linguistics; and natural language processing - as well as some computationally oriented linguists, making it a valuable resource for scientists, researchers, lecturers, language processing practitioners and professionals as well as postgraduates and some undergraduates in the aforementioned fields. "The book addresses a problem of great and increasing technical and practical importance - the role of context in natural language processing (NLP). It considers the role of context in three important tasks: Automatic Speech Recognition, Semantic Interpretation, and Pragmatic Interpretation. Overall, the book represents a novel and insightful investigation into the potential of contextual information processing in NLP."
Jerome A Feldman, Professor of Electrical Engineering and Computer Science, UC Berkeley, USA http://dm.tzi.de/research/contextual-computing/
Collaboratively Constructed Language Resources (CCLRs) such as
Wikipedia, Wiktionary, Linked Open Data, and various resources
developed using crowdsourcing techniques such as Games with a
Purpose and Mechanical Turk have substantially contributed to the
research in natural language processing (NLP). Various NLP tasks
utilize such resources to substitute for or supplement conventional
lexical semantic resources and linguistically annotated corpora.
These resources also provide an extensive body of texts from which
valuable knowledge is mined. There are an increasing number of
community efforts to link and maintain multiple linguistic
resources.
This book provides a gradual introduction to the naming game, starting from the minimal naming game, where the agents have infinite memories (Chapter 2), before moving on to various new and advanced settings: the naming game with agents possessing finite-sized memories (Chapter 3); the naming game with group discussions (Chapter 4); the naming game with learning errors in communications (Chapter 5); the naming game on multi-community networks (Chapter 6); the naming game with multiple words or sentences (Chapter 7); and the naming game with multiple languages (Chapter 8). Presenting the authors' own research findings and developments, the book provides a solid foundation for future advances. This self-study resource is intended for researchers, practitioners, graduate and undergraduate students in the fields of computer science, network science, linguistics, data engineering, statistical physics, social science and applied mathematics.
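The minimal naming game of Chapter 2 is short enough to simulate directly. A Python sketch under the standard rules (a fully connected population, random speaker-hearer pairs, success collapsing both inventories to the agreed name, failure adding the name to the hearer's inventory; population size and round limit are illustrative choices):

```python
import random

def naming_game(n_agents=10, max_rounds=20000, seed=0):
    """Minimal naming game for one object: agents with unbounded
    inventories negotiate a shared name. Returns the number of
    interactions until the whole population converges on one name."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for t in range(1, max_rounds + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:          # empty inventory: invent
            inventories[speaker].add(next_name)
            next_name += 1
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[hearer]:       # success: both collapse
            inventories[speaker] = {name}
            inventories[hearer] = {name}
        else:                                 # failure: hearer learns it
            inventories[hearer].add(name)
        if all(inv == inventories[0] and len(inv) == 1 for inv in inventories):
            return t
    return max_rounds
```

For small populations the game converges quickly; the later chapters' variants (finite memories, communities, learning errors) change exactly these update rules.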
This book focuses on information literacy for the younger generation of learners and library readers. It is divided into four sections: 1. Information Literacy for Life; 2. Searching Strategies, Disciplines and Special Topics; 3. Information Literacy Tools for Evaluating and Utilizing Resources; 4. Assessment of Learning Outcomes. Written by librarians with wide experience in research and services, and a strong academic background in disciplines such as the humanities, social sciences, information technology, and library science, this valuable reference resource combines both theory and practice. In today's ever-changing era of information, it offers students of library and information studies insights into information literacy as well as learning tips they can use for life.