This book provides linguists with a clear, critical, and comprehensive overview of theoretical and experimental work on information structure. Leading researchers survey the main theories of information structure in syntax, phonology, and semantics as well as perspectives from psycholinguistics and other relevant fields. Following the editors' introduction, the book is divided into four parts. The first, on theories of and theoretical perspectives on information structure, includes chapters on topic, prosody, and implicature. Part 2 covers a range of current issues in the field, including focus, quantification, and sign languages, while Part 3 is concerned with experimental approaches to information structure, including processes involved in its acquisition and comprehension. The final part contains a series of linguistic case studies drawn from a wide variety of the world's language families. This volume will be the standard guide to current work in information structure and a major point of departure for future research.
This book investigates the nature of generalization in language and examines how language is known by adults and acquired by children. It looks at how and why constructions are learned, the relation between their forms and functions, and how cross-linguistic and language-internal generalizations can be explained.
Machine Learning for Biometrics: Concepts, Algorithms and Applications highlights the fundamental concepts of machine learning for processing and analyzing biometric data, and reviews intelligent and cognitive learning tools that can be adopted in this direction. Each chapter of the volume is supported by real-life case studies, illustrative examples, and video demonstrations. The book elucidates various biometric concepts, algorithms, and applications with machine intelligence solutions, providing guidance on best practices for new technologies such as e-health solutions, data science, cloud computing, and the Internet of Things. Each section draws on different machine learning concepts and algorithms, including object detection techniques, image enhancement techniques, global and local feature extraction techniques, and classifiers commonly used in data science. These biometric techniques can serve as tools in cloud computing, mobile computing, IoT-based applications, and e-health care systems for secure login, device access control, personal recognition, and surveillance.
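The local feature extraction this blurb mentions can be made concrete with the local binary pattern (LBP), a texture descriptor widely used in face and fingerprint recognition. The NumPy sketch below is a generic illustration under my own naming, not code from the book.

```python
import numpy as np

def lbp_image(gray):
    """Compute the basic 3x3 local binary pattern for each interior pixel.

    `gray` is a 2-D array of grayscale intensities; each pixel is encoded
    as an 8-bit number by thresholding its 8 neighbours against it.
    """
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Histogram of LBP codes, usable as a fixed-length biometric feature."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```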
Artificial Intelligence for Healthcare Applications and Management introduces application domains of various AI algorithms across healthcare management. Instead of discussing AI first and then exploring its applications in healthcare afterward, the authors attack the problems in context directly, in order to accelerate the path of an interested reader toward building industrial-strength healthcare applications. Readers will be introduced to a wide spectrum of AI applications supporting all stages of patient flow in a healthcare facility. The authors explain how AI supports patients throughout a healthcare facility, including diagnosis and treatment recommendations needed to get patients from the point of admission to the point of discharge while maintaining quality, patient safety, and patient/provider satisfaction. AI methods are expected to decrease the burden on physicians, improve the quality of patient care, and decrease overall treatment costs. Current conditions affected by COVID-19 pose new challenges for healthcare management, and learning how to apply AI will be important for a broad spectrum of students and mature professionals working in medical informatics. This book focuses on predictive analytics, health text processing, data aggregation, management of patients, and other fields which have all turned out to be bottlenecks for the efficient management of coronavirus patients.
The Natural Language for Artificial Intelligence presents the biological and logical structure typical of human language in its dynamic mediating process between reality and the human mind. The book explains linguistic functioning in the dynamic process of human cognition when forming meaning. After that, an approach to artificial intelligence (AI) is outlined, which works with a more restricted concept of natural language that leads to flaws and ambiguities. Subsequently, the characteristics of natural language and patterns of how it behaves in different branches of science are revealed to indicate ways to improve the development of AI in specific fields of science. A brief description of the universal structure of language is also presented as an algorithmic model to be followed in the development of AI. Since AI aims to imitate the process of the human mind, the book shows how the cross-fertilization between natural language and AI should be done using the logical-axiomatic structure of natural language adjusted to the logical-mathematical processes of the machine.
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models, BERT, GPT, and the sequence-to-sequence transformer, are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, including question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
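To make the blurb's mention of self-attention and context-sensitive embeddings concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention, the core operation inside BERT- and GPT-style transformers. The shapes and weight initialisation are illustrative assumptions, not taken from the book.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k)
    projection matrices. Returns context-sensitive embeddings in which
    each position is a weighted mixture of all value vectors.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V

# Toy usage: 4 tokens, 8-dimensional embeddings, 8-dimensional head.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```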
Information in today's advancing world is rapidly expanding and becoming widely available. This eruption of data has made handling it a daunting and time-consuming task. Natural language processing (NLP) is a method that applies linguistics and algorithms to large amounts of this data to make it more valuable. NLP improves the interaction between humans and computers, yet there remains a lack of research that focuses on the practical implementations of this trending approach. Neural Networks for Natural Language Processing is a collection of innovative research on the methods and applications of linguistic information processing and its computational properties. This publication will support readers in performing sentence classification and language generation using neural networks, applying deep learning models to solve machine translation and conversation problems, and applying deep structured semantic models to information retrieval and natural language applications. While highlighting topics including deep learning, query entity recognition, and information retrieval, this book is ideally designed for research and development professionals, IT specialists, industrialists, technology developers, data analysts, data scientists, academics, researchers, and students seeking current research on the fundamental concepts and techniques of natural language processing.
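As a hedged illustration of the sentence classification the publication covers, the following scikit-learn sketch trains a small feed-forward network on TF-IDF features. The toy sentences, labels, and hyperparameters are invented for demonstration and are not drawn from the book.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus; a real task would use thousands of sentences.
sentences = [
    "the film was wonderful and moving",
    "a delightful, well acted story",
    "the plot was dull and predictable",
    "a tedious, poorly written script",
]
labels = ["pos", "pos", "neg", "neg"]

# TF-IDF features feeding a small feed-forward neural network.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(sentences, labels)
print(model.predict(["a wonderful, moving script"]))  # typically ['pos']
```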
Electromyography (EMG) is a procedure for assessing and recording the electrical activity produced by skeletal muscles. Since contracting skeletal muscles are largely responsible for loading the bones and joints, EMG information is important for understanding musculoskeletal biomechanics. Applications, Challenges, and Advancements in Electromyography Signal Processing provides an updated overview of signal processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. Presenting new results, concepts, and further developments in the field of EMG signal processing, this publication is an ideal resource for graduate and post-graduate students, academicians, engineers, and scientists in the fields of signal processing and biomedical engineering.
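A concrete, if simplified, example of the kind of processing surveyed here is linear-envelope extraction, a standard first step in EMG analysis: remove the baseline, rectify, and low-pass filter. The SciPy sketch below uses illustrative parameter choices, not values from the book.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(signal, fs, cutoff_hz=6.0, order=4):
    """Classic linear-envelope extraction for a raw EMG trace.

    `fs` is the sampling rate in Hz; a cutoff around 6 Hz is a common
    (but by no means universal) choice for muscle-activation envelopes.
    """
    centred = signal - np.mean(signal)          # remove baseline offset
    rectified = np.abs(centred)                 # full-wave rectification
    b, a = butter(order, cutoff_hz / (fs / 2))  # low-pass Butterworth
    return filtfilt(b, a, rectified)            # zero-phase filtering

# Synthetic demo: 2 s of noise amplitude-modulated by a "muscle burst".
fs = 1000
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.5) & (t < 1.5)
raw = np.random.default_rng(0).normal(size=t.size) * (0.1 + burst)
print(emg_envelope(raw, fs).max())
```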
Thanks to the digital revolution, even a traditional discipline like philology has been enjoying a renaissance within academia and beyond. Decades of work have produced groundbreaking results, raised new research questions and created innovative educational resources. This book describes the rapidly developing state of the art of digital philology, with a focus on Ancient Greek and Latin, the classical languages of Western culture. Contributions cover a wide range of topics concerning the accessibility and analysis of Greek and Latin sources. The discussion is organized in five sections concerning open data of Greek and Latin texts; catalogs and citations of authors and works; data entry, collection and analysis for classical philology; critical editions and annotations of sources; and finally linguistic annotations and lexical databases. As a whole, the volume provides a comprehensive outline of an emergent research field for a new generation of scholars and students, explaining what technological and accessibility advances now make reachable and analyzable for the first time.
Is the internet a suitable linguistic corpus? How can we use it in corpus techniques? What are the special properties that we need to be aware of? This book answers those questions. The Web is an exponentially increasing source of language and corpus linguistics data. From gigantic static information resources to user-generated Web 2.0 content, the breadth and depth of information available is breathtaking - and bewildering. This book explores the theory and practice of the "web as corpus". It looks at the most common tools and methods used and features a plethora of examples based on the author's own teaching experience. This book also bridges the gap between studies in computational linguistics, which emphasize technical aspects, and studies in corpus linguistics, which focus on the implications for language theory and use.
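The basic web-as-corpus workflow the book discusses, downloading a page, extracting its visible text, and counting word frequencies, can be sketched with the Python standard library alone. This is a generic illustration, not one of the tools the book features.

```python
import re
import urllib.request
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style elements."""
    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def page_frequencies(url):
    """Download one page and return a word-frequency Counter."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[a-z']+", " ".join(parser.parts).lower())
    return Counter(words)

# Example: the 20 most frequent words on one page of a web corpus.
# print(page_frequencies("https://example.com").most_common(20))
```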
From an abundance of intensifiers to frequent repetition and parallelisms, Donald Trump’s idiolect is highly distinct from that of other politicians and previous Presidents of the United States. Combining quantitative and qualitative analyses, this book identifies the characteristic features of Trump’s language and argues that his speech style, often sensationalized by the media, differs from the usual political rhetoric on more levels than is immediately apparent. Chapters examine Trump’s tweets, inaugural address, political speeches, interviews, and presidential debates, revealing populist language traits that establish his idiolect as a direct reflection of changing social and political norms. The authors scrutinize Trump’s conspicuous use of nicknames, the definite article, and conceptual metaphors as strategies of othering and antagonising his opponents. They further shed light on Trump’s fake news agenda and his mutation of the conventional political apology, which are strategically implemented for a political purpose. Drawing on methods from corpus linguistics, conversation analysis, and critical discourse analysis, this book provides a multifaceted investigation of Trump’s language use and addresses essential questions about Trump as a political phenomenon.
This dictionary provides a full and authoritative guide to the meanings of the terms, concepts, and theories employed in pragmatics, the study of language in use.
Language, Cognition, and Human Nature collects together for the first time Steven Pinker's most influential scholarly work on language and cognition. Pinker is a highly eminent cognitive scientist, and his research emphasizes the importance of language and its connections to cognition, social relationships, child development, human evolution, and theories of human nature. The thirteen essays in this eclectic collection span Pinker's thirty-year career, ranging over topics such as language acquisition, visual cognition, the meaning and syntax of verbs, regular and irregular phenomena in language and their implications for the mechanisms of cognition, and the social psychology of direct and indirect speech. Each outlines a major theory - such as evolution, or nature vs. nurture - or takes up an argument with other prominent scholars such as Stephen Jay Gould, Noam Chomsky, or Richard Dawkins. Featuring a new introduction by Pinker that discusses his books and scholarly work, this book represents a major contribution to the field of cognitive science, by one of the field's leading thinkers.
This book explores the interaction between corpus stylistics and translation studies. It shows how corpus methods can be used to compare literary texts to their translations, through the analysis of Joseph Conrad's Heart of Darkness and four of its Italian translations. The comparison focuses on stylistic features related to the major themes of Heart of Darkness. By combining quantitative and qualitative techniques, Mastropierro discusses how alterations to the original's stylistic features can affect the interpretation of the themes in translation. The discussion illuminates the manipulative effects that translating can have on the reception of a text, showing how textual alterations can trigger different readings. This book advances the multidisciplinary dialogue between corpus linguistics and translation studies and is a valuable resource for students and researchers interested in the application of corpus approaches to stylistics and translation.
This book demonstrates how corpus-based research can advance the understanding of linguistic phenomena in a given language. By presenting a detailed analysis of collocations and idioms in a digital corpus of English and German, the contributors to this volume show how the use of collocations and idioms has changed over time, and suggest possible triggers for this change. The book not only examines what these collocations and idioms are, but also what their purpose is within languages. Idioms and Collocations is divided into three sections. The first section discusses the construction, composition and annotation of the corpus. Chapters in the second section describe the methods for querying the corpus, the generation and maintenance of the example subcorpora, and the linguistic-lexicographic analyses of the target idioms. Finally, the third section presents the results of specific investigations into the syntactic, semantic, and historical properties of collocations. This book presents original work in corpus linguistics, computational linguistics, theoretical linguistics and lexicography. It will be useful for researchers in academic and industrial settings, and lexicographers.
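Collocation analyses of the kind described above typically rest on association measures; one standard choice is pointwise mutual information (PMI) over adjacent word pairs. The sketch below is a generic implementation of that measure, not the volume's own method; the token pattern (which admits German letters, matching the blurb's English-German corpus) and the frequency threshold are my assumptions.

```python
import math
import re
from collections import Counter

def pmi_collocations(text, min_count=3):
    """Rank adjacent word pairs by pointwise mutual information.

    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ); high-PMI bigrams are
    candidate collocations such as fixed expressions and idiom parts.
    """
    words = re.findall(r"[a-zäöüß']+", text.lower())
    unigrams = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    total = max(len(words) - 1, 1)
    scores = {}
    for (x, y), c in bigrams.items():
        if c >= min_count:
            p_xy = c / total
            p_x, p_y = unigrams[x] / len(words), unigrams[y] / len(words)
            scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Usage: pmi_collocations(open("corpus.txt", encoding="utf-8").read())[:20]
```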
"Language Engineering" examines the processes involved in dictionary-making using computational linguistics, including tagging, parsing, spell-checking, lexical semantics and machine translation. The book examines dictionary-building in English, but also includes appendices applying natural language processing to French and German. This book is the first detailed description of syntax and semantics using the C(C++) programming language, and as such should be essential reading for researchers in natural language processing and computational linguistics.
This volume presents several machine intelligence technologies, developed over recent decades, and illustrates how they can be combined in application. One application, the detection of dementia from patterns in speech, is used throughout to illustrate these combinations. This application is a classic stationary pattern detection task, so readers may easily see how these combinations can be applied to other similar tasks. The expositions of the methods are supported by the basic theory they rest upon, and their application is clearly illustrated. The book's goal is to allow readers to select one or more of these methods to apply quickly to their own tasks. It includes a variety of machine intelligence technologies and illustrates how they can work together; shows evolutionary feature subset selection combined with support vector machines, as well as combinations of multiple classifiers (a generic sketch follows this entry); and includes a running case study on intelligent processing for Alzheimer's/dementia detection, in addition to several applications of the hybrid machine algorithms.
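The combination named above, evolutionary feature subset selection wrapped around a support vector machine, can be sketched generically: a genetic algorithm evolves binary feature masks scored by cross-validated SVM accuracy. Population size, mutation rate, and the synthetic data below are illustrative assumptions, not the book's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           random_state=0)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

# Minimal genetic algorithm: tournament selection, crossover, mutation.
pop = rng.random((20, X.shape[1])) < 0.5
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    children = []
    for _ in range(len(pop)):
        i, j = rng.choice(len(pop), size=2, replace=False)
        a = pop[i] if scores[i] >= scores[j] else pop[j]  # tournament pick
        k, l = rng.choice(len(pop), size=2, replace=False)
        b = pop[k] if scores[k] >= scores[l] else pop[l]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < 0.05            # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))
```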
The general focus of this book is on multimodal communication, which captures the temporal patterns of behavior in various dialogue settings. After an overview of current theoretical models of verbal and nonverbal communication cues, it presents studies on a range of related topics: paraverbal behavior patterns in the classroom setting; a proposed optimal methodology for conversational analysis; a study of time and mood at work; an experiment on the dynamics of multimodal interaction from the observer's perspective; formal cues of uncertainty in conversation; how machines can know we understand them; and detecting topic changes using neural network techniques. A joint work bringing together psychologists, communication scientists, information scientists and linguists, the book will be of interest to those working on a wide range of applications from industry to home, and from health to security, with the main goals of revealing, embedding and implementing a rich spectrum of information on human behavior.
This book provides information on digital audio watermarking, its applications, and its evaluation for copyright protection of audio signals, covering both basic and advanced techniques. The author covers various advanced digital audio watermarking algorithms that can be used for copyright protection of audio signals. These algorithms are implemented by hybridizing advanced signal processing transforms such as the fast discrete curvelet transform (FDCuT) and the redundant discrete wavelet transform (RDWT) with other transforms such as the discrete cosine transform (DCT). In these algorithms, Arnold scrambling is used to enhance the security of the watermark logo. The book is divided into three parts: basic audio watermarking and its classification, audio watermarking algorithms, and audio watermarking algorithms using advanced signal transforms. The book also covers optimization-based audio watermarking. It describes the basics of digital audio watermarking and its applications, including evaluation parameters for digital audio watermarking algorithms; provides audio watermarking algorithms using advanced signal transformations; and provides optimization-based audio watermarking algorithms.
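Arnold scrambling, which these algorithms use to protect the watermark logo, is Arnold's cat map applied to a square image. The NumPy sketch below shows only the forward and inverse map, not the book's full FDCuT/RDWT/DCT embedding pipelines; the logo size and iteration count are arbitrary examples.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold's cat map on a square N x N array:
    (x, y) -> (x + y mod N, x + 2y mod N). Repeated application shuffles
    a watermark logo before embedding, so an attacker who extracts the
    raw bits still faces a scrambled image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, iterations=1):
    """Invert the map: (x, y) -> (2x - y mod N, y - x mod N)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

# Round trip on a random binary "logo" recovers the original exactly.
logo = np.random.default_rng(0).random((64, 64)) > 0.5
assert np.array_equal(arnold_unscramble(arnold_scramble(logo, 5), 5), logo)
```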
The main focus of this book is the investigation of linguistic variation in Spanish, considering spoken and written, specialised and non-specialised registers from a corpus linguistics approach and employing up-to-date computational tools. The ten chapters represent a range of research on Spanish using a number of different corpora drawn from, amongst others, research articles, student writing, formal conversation and technical reports. A variety of methodologies are brought to bear upon these corpora, including multi-dimensional and multi-register analysis, latent semantics and lexical bundles (a generic sketch of bundle extraction follows this entry). This in-depth analysis of Spanish corpora will be of interest to researchers in corpus linguistics or the Spanish language.

The "Corpus and Discourse" series editors are Wolfgang Teubert, University of Birmingham, and Michaela Mahlberg, Liverpool Hope University College. The editorial board includes Frantisek Cermak (Prague), Susan Conrad (Portland), Geoffrey Leech (Lancaster), Elena Tognini-Bonelli (Lecce and TWC), Ruth Wodak (Lancaster and Vienna), and Feng Zhiwei (Beijing). Corpus linguistics provides the methodology to extract meaning from texts. Taking as its starting point the fact that language is not a mirror of reality but lets us share what we know, believe and think about reality, it focuses on language as a social phenomenon, and makes visible the attitudes and beliefs expressed by the members of a discourse community. Consisting of both spoken and written language, discourse always has historical, social, functional, and regional dimensions. Discourse can be monolingual or multilingual, interconnected by translations. Discourse is where language and social studies meet. The "Corpus and Discourse" series consists of two strands. The first, "Research in Corpus and Discourse", features innovative contributions to various aspects of corpus linguistics and a wide range of applications, from language technology via the teaching of a second language to a history of mentalities. The second strand, "Studies in Corpus and Discourse", comprises key texts bridging the gap between social studies and linguistics. Although equally academically rigorous, this strand is aimed at a wider audience of academics and postgraduate students working in both disciplines.
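Lexical bundles, one of the methodologies mentioned for the Spanish corpora, reduce computationally to counting recurrent contiguous n-grams above a frequency threshold. The sketch below is a generic illustration; the file name, thresholds, and Spanish token pattern are my own assumptions.

```python
import re
from collections import Counter

def lexical_bundles(text, n=4, min_freq=5):
    """Frequent contiguous n-grams ("lexical bundles") in a corpus.

    Bundle studies typically use 3- to 5-word sequences that recur above
    a frequency threshold, e.g. "por otra parte" in Spanish.
    """
    words = re.findall(r"[a-záéíóúüñ']+", text.lower())
    grams = Counter(zip(*(words[i:] for i in range(n))))
    return [(" ".join(g), c) for g, c in grams.most_common() if c >= min_freq]

# Usage: lexical_bundles(open("corpus_es.txt", encoding="utf-8").read(), n=3)
```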
The key assumption in this text is that machine translation is not merely a mechanical process but in fact requires a high level of linguistic sophistication, as the nuances of syntax, semantics and intonation cannot always be conveyed by modern technology. The increasing dependence on artificial communication by private and corporate users makes this research area an invaluable element when teaching linguistic theory.
Describing technologies for combining language resources flexibly as web services, this book provides valuable case studies for those who work in services computing, language resources, human-computer interaction (HCI), computer-supported cooperative work (CSCW), and service science. The authors operate the Language Grid, which wraps existing language resources as atomic language services and enables users to compose new services by combining them. From the architecture level to the service composition level, the book explains how to resolve infrastructural and operational difficulties in sharing and combining language resources, including interoperability of language service infrastructures, various types of language service policies, human services, and service failures. The research, based on the authors' experience of handling complicated issues such as intellectual property and interoperability of language resources, contributes to the exploitation of language resources as a service. Both the analysis of how services are used and the design of new services can bring significant results: a new style of multilingual communication supported by language services is worthy of analysis in HCI/CSCW, and the design process of language services is the focus of valuable case studies in service science. Many activities that use language resources in different ways on the Language Grid are highly regarded by diverse communities. The book consists of four parts: (1) two types of language service platforms to interconnect language services across service grids; (2) various language service composition technologies that improve the reusability, efficiency, and accuracy of composite services; (3) research work and activities in creating language resources and services; and (4) various applications and tools for understanding and designing language services that support intercultural collaboration well.
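The composition idea at the heart of the Language Grid, wrapping resources as atomic services and chaining them into composite services, can be caricatured in a few lines. Everything below is hypothetical: the service names are invented stand-ins, and real Language Grid services are web services rather than local functions.

```python
from typing import Callable

# Hypothetical atomic language services, invented for illustration only.
def tokenize_ja(text: str) -> list[str]:
    return text.split()      # stand-in for a Japanese morphological analyser

def translate_ja_en(tokens: list[str]) -> str:
    return " ".join(tokens)  # stand-in for a machine-translation service

def compose(*services: Callable):
    """Chain atomic services into one composite service, the core idea
    behind combining wrapped language resources on a service grid."""
    def composite(data):
        for service in services:
            data = service(data)
        return data
    return composite

ja_to_en = compose(tokenize_ja, translate_ja_en)
print(ja_to_en("kore wa pen desu"))
```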
Using a corpus of data drawn from naturally-occurring second language conversations, this book explores the role of idiomaticity in English as a native language, and its comparative role in English as a lingua franca. Through examining how idiomaticity enables first language learners to achieve a greater degree of fluency, the book explores why idiomatic language poses such a challenge for users of English as a lingua franca. The book puts forward a new definition of competence and fluency within the context of English as a lingua franca, concluding with an analysis of practical implications for the lingua franca classroom.
The book will appeal to scholars and advanced students of morphology, syntax, computational linguistics and natural language processing (NLP). It provides a critical and practical guide to computational techniques for handling morphological and syntactic phenomena, showing how these techniques have been used and modified in practice.
A research monograph presenting a new approach to computational linguistics. The ultimate goal of computational linguistics is to teach the computer to understand natural language. This monograph presents a description of English according to algorithms which can be programmed into a computer to analyse natural language texts. The algorithmic approach uses series of instructions, written in natural language and organised in flow charts, with the aim of analysing certain aspects of the grammar of a sentence. One problem with text processing is the difficulty of distinguishing the parts of speech of word forms taken out of context. To solve this problem, Hristo Georgiev starts with the assumption that every word is either a verb or a non-verb. From here he presents an algorithm which allows the computer to recognise parts of speech which to a human would be obvious through the meaning of the words. For a computer, the emphasis is placed on verbs, nouns, participles and adjectives. English Algorithmic Grammar presents information for computers to recognise tenses, syntax, parsing, reference, and clauses. The final chapters of the book examine further applications of an algorithmic approach to English grammar and suggest ways in which the computer can be programmed to recognise meaning. This is an innovative, cutting-edge approach to computational linguistics that will be essential reading for academics researching computational linguistics, machine translation and natural language processing.
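In the spirit of Georgiev's verb/non-verb starting assumption, here is a deliberately crude sketch of the first branch of such a flow chart. The word list and suffix rules are invented for illustration and are far simpler than the book's algorithms.

```python
# Hypothetical, heavily simplified illustration of the flow-chart idea:
# every word form is first tested for verb evidence; whatever fails the
# tests is treated as a non-verb.
KNOWN_VERBS = {"be", "is", "are", "was", "were", "have", "has", "do", "does"}
VERBAL_SUFFIXES = ("ise", "ize", "ises", "izes", "ised", "ized",
                   "ising", "izing")

def classify(word):
    """First branch of the flow chart: verb or non-verb?"""
    w = word.lower()
    if w in KNOWN_VERBS or w.endswith(VERBAL_SUFFIXES):
        return "VERB"
    return "NONVERB"

print([(w, classify(w)) for w in "the parser recognises tenses".split()])
# [('the', 'NONVERB'), ('parser', 'NONVERB'),
#  ('recognises', 'VERB'), ('tenses', 'NONVERB')]
```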
You may like...
Smart Sensors and MEMS - Intelligent… (S. Nihtianov, A. Luque), Paperback
Adversarial Robustness for Machine… (Pin-Yu Chen, Cho-Jui Hsieh), Paperback, R2,204
Number Theory and Discrete Mathematics (A.K. Agarwal, Bruce C. Berndt, …), Hardcover, R2,437