This book explores novel aspects of social robotics, spoken dialogue systems, human-robot interaction, spoken language understanding, multimodal communication, and system evaluation. It offers a variety of perspectives on and solutions to the most important questions about advanced techniques for social robots and chat systems. Chapters by leading researchers address key research and development topics in the field of spoken dialogue systems, focusing in particular on three special themes: dialogue state tracking, evaluation of human-robot dialogue in social robotics, and socio-cognitive language processing. The book offers a valuable resource for researchers and practitioners in both academia and industry whose work involves advanced interaction technology and who are seeking an up-to-date overview of the key topics. It also provides supplementary educational material for courses on state-of-the-art dialogue system technologies, social robotics, and related research fields.
Weighted finite-state transducers (WFSTs) are commonly used by engineers and computational linguists for processing and generating speech and text. This book first provides a detailed introduction to this formalism. It then introduces Pynini, a Python library for compiling finite-state grammars and for combining, optimizing, applying, and searching finite-state transducers. This book illustrates this library's conventions and use with a series of case studies. These include the compilation and application of context-dependent rewrite rules, the construction of morphological analyzers and generators, and text generation and processing applications.
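The rewrite-rule case study described above can be made concrete with a minimal, self-contained sketch. This is plain Python, not the Pynini API; the rule n -> m / _ b (rewrite 'n' as 'm' immediately before 'b') and the three-letter alphabet are invented for illustration:

```python
# Minimal finite-state transducer sketch (plain Python, not Pynini):
# applies the hypothetical context-dependent rewrite rule "n -> m / _ b".

class FST:
    def __init__(self, start, transitions, final_output):
        self.start = start                  # initial state
        self.transitions = transitions      # (state, symbol) -> (next_state, output)
        self.final_output = final_output    # state -> output flushed at end of input

    def apply(self, text):
        state, out = self.start, []
        for ch in text:
            state, emitted = self.transitions[(state, ch)]
            out.append(emitted)
        out.append(self.final_output[state])
        return "".join(out)

# Build transitions over a toy alphabet. State 0: nothing pending.
# State 1: an 'n' has been read but not yet emitted (we need lookahead).
trans = {}
for ch in "abn":
    if ch == "n":
        trans[(0, "n")] = (1, "")        # hold the 'n' until we see what follows
    else:
        trans[(0, ch)] = (0, ch)
    if ch == "b":
        trans[(1, "b")] = (0, "mb")      # rule fires: n -> m before b
    elif ch == "n":
        trans[(1, "n")] = (1, "n")       # emit the held 'n', hold the new one
    else:
        trans[(1, ch)] = (0, "n" + ch)   # rule does not fire

fst = FST(0, trans, {0: "", 1: "n"})
print(fst.apply("anba"))     # -> "amba"
print(fst.apply("banana"))   # -> "banana" (no 'n' is followed by 'b')
```

Pynini compiles such rules from a high-level specification (context-dependent rewrite rules over a transducer algebra) rather than from hand-written state tables; the sketch only shows the underlying formalism.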
What is the lexicon, what does it contain, and how is it structured? What principles determine the functioning of the lexicon as a component of natural language grammar? What role does lexical information play in linguistic theory? This accessible introduction aims to answer these questions, and explores the relation of the lexicon to grammar as a whole. It includes a critical overview of major theoretical frameworks, and puts forward a unified treatment of lexical structure and design. The text can be used for introductory and advanced courses, and for courses that touch upon different aspects of the lexicon, such as lexical semantics, lexicography, syntax, general linguistics, computational lexicology and ontology design. The book provides students with a set of tools which will enable them to work with lexical data for all kinds of purposes, along with an abundance of exercises and in-class activities designed to keep students actively engaged with the content and to help them acquire the necessary knowledge and skills.
This book builds on decades of research and provides contemporary theoretical foundations for practical applications to intelligent technologies and advances in artificial intelligence (AI). Reflecting the growing realization that computational models of human reasoning and interactions can be improved by integrating heterogeneous information resources and AI techniques, its ultimate goal is to promote integrated computational approaches to intelligent computerized systems. The book covers a range of interrelated topics, in particular, computational reasoning, language, syntax, semantics, memory, and context information. The respective chapters use and develop logically oriented methods and techniques, and the topics selected are from those areas of logic that contribute to AI and provide its mathematical foundations. The intended readership includes researchers working in the areas of traditional logical foundations, and on new approaches to intelligent computational systems.
This book constitutes the refereed proceedings of the 14th International Conference on Formal Grammar 2009, held in Bordeaux, France, in July 2009.
This book provides a timely and comprehensive overview of current theories and methods in fuzzy logic, as well as relevant applications in a variety of fields of science and technology. Dedicated to Lotfi A. Zadeh on the first anniversary of his death, the book goes beyond a purely commemorative text, offering a fresh perspective on a number of relevant topics, such as computing with words, the theory of perceptions, possibility theory, and decision-making in a fuzzy environment. Written by Zadeh's closest colleagues and friends, the chapters are intended both as a timely reference guide and as a source of inspiration for scientists, developers and researchers who have been working with fuzzy sets or would like to learn more about their potential for future research.
"Opportunity and Curiosity find similar rocks on Mars." One can generally understand this statement if one knows that Opportunity and Curiosity are instances of the class of Mars rovers, and recognizes that, as signalled by the word "on", rocks are located on Mars. Two mental operations contribute to understanding: recognizing how entities/concepts mentioned in a text interact, and recalling already known facts (which often themselves consist of relations between entities/concepts). Concept interactions one identifies in the text can be added to the repository of known facts, and aid the processing of future texts. The amassed knowledge can assist many advanced language-processing tasks, including summarization, question answering and machine translation. Semantic relations are the connections we perceive between things which interact. The book explores two now intertwined threads in the study of semantic relations: how they are expressed in texts and what role they play in knowledge repositories. A historical perspective takes us back more than 2000 years to their beginnings, and then to developments much closer to our time: various attempts at producing lists of semantic relations, necessary and sufficient to express the interaction between entities/concepts. A look at relations outside context, then in general texts, and then in texts in specialized domains, has gradually brought new insights, and led to essential adjustments in how the relations are seen. At the same time, datasets which encompass these phenomena have become available. They started small, then grew somewhat, then became truly large. The large resources are inevitably noisy because they are constructed automatically. The available corpora (to be analyzed, or used to gather relational evidence) have also grown, and some systems now operate at the Web scale. The learning of semantic relations has proceeded in parallel, in adherence to supervised, unsupervised or distantly supervised paradigms.
Detailed analyses of annotated datasets in supervised learning have granted insights useful in developing unsupervised and distantly supervised methods. These in turn have contributed to the understanding of what relations are and how to find them, and that has led to methods scalable to Web-sized textual data. The size and redundancy of information in very large corpora, which at first seemed problematic, have been harnessed to improve the process of relation extraction/learning. The newest technology, deep learning, supplies innovative and surprising solutions to a variety of problems in relation learning. This book aims to paint a big picture and to offer interesting details.
This book is about investigating the way people use language in speech and writing. It introduces the corpus-based approach to the study of language, based on analysis of large databases of real language examples and illustrates exciting new findings about language and the different ways that people speak and write. The book is important both for its step-by-step descriptions of research methods and for its findings about grammar and vocabulary, language use, language learning, and differences in language use across texts and user groups.
Embeddings have undoubtedly been one of the most influential research areas in Natural Language Processing (NLP). Encoding information into a low-dimensional vector representation, which is easily integrable in modern machine learning models, has played a central role in the development of NLP. Embedding techniques initially focused on words, but the attention soon started to shift to other forms: from graph structures, such as knowledge bases, to other types of textual content, such as sentences and documents. This book provides a high-level synthesis of the main embedding techniques in NLP, in the broad sense. The book starts by explaining conventional word vector space models and word embeddings (e.g., Word2Vec and GloVe) and then moves to other types of embeddings, such as word sense, sentence and document, and graph embeddings. The book also provides an overview of recent developments in contextualized representations (e.g., ELMo and BERT) and explains their potential in NLP. Throughout the book, the reader can find both essential information for understanding a certain topic from scratch and a broad overview of the most successful techniques developed in the literature.
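The core idea of the blurb above, encoding meaning as low-dimensional vectors whose geometry reflects similarity, can be illustrated with a toy example. The four-dimensional vectors below are hand-made for illustration; real Word2Vec or GloVe embeddings have hundreds of dimensions learned from large corpora:

```python
import math

# Toy "embeddings": invented 4-dimensional vectors for three words.
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.2, 0.1, 0.8],
    "apple": [0.1, 0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words should score higher than unrelated ones.
print(cosine(vectors["king"], vectors["queen"]))  # relatively high
print(cosine(vectors["king"], vectors["apple"]))  # relatively low
```

The same similarity computation carries over unchanged to sentence, document, and graph embeddings; what varies between the techniques the book surveys is how the vectors are produced, not how they are compared.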
This book focuses mainly on logical approaches to computational linguistics, but also discusses integrations with other approaches, presenting both classic and newly emerging theories and applications. Decades of research on theoretical work and practical applications have demonstrated that computational linguistics is a distinctively interdisciplinary area. There is convincing evidence that computational approaches to linguistics can benefit from research on the nature of human language, including from the perspective of its evolution. This book addresses various topics in computational theories of human language, covering grammar, syntax, and semantics. The common thread running through the research presented is the role of computer science, mathematical logic and other subjects of mathematics in computational linguistics and natural language processing (NLP). Promoting intelligent approaches to artificial intelligence (AI) and NLP, the book is intended for researchers and graduate students in the field.
This open access book provides an overview of the recent advances in representation learning theory, algorithms and applications for natural language processing (NLP). It is divided into three parts. Part I presents the representation learning techniques for multiple language entries, including words, phrases, sentences and documents. Part II then introduces the representation techniques for those objects that are closely related to NLP, including entity-based world knowledge, sememe-based linguistic knowledge, networks, and cross-modal entries. Lastly, Part III provides open resource tools for representation learning techniques, and discusses the remaining challenges and future research directions. The theories and algorithms of representation learning presented can also benefit other related domains such as machine learning, social network analysis, semantic Web, information retrieval, data mining and computational biology. This book is intended for advanced undergraduate and graduate students, post-doctoral fellows, researchers, lecturers, and industrial engineers, as well as anyone interested in representation learning and natural language processing.
Now in its second edition, Text Analysis with R provides a practical introduction to computational text analysis using the open source programming language R. R is an extremely popular programming language, used throughout the sciences; due to its accessibility, R is now used increasingly in other research areas. In this volume, readers immediately begin working with text, and each chapter examines a new technique or process, allowing readers to obtain a broad exposure to core R procedures and a fundamental understanding of the possibilities of computational text analysis at both the micro and the macro scale. Each chapter builds on its predecessor as readers move from small scale "microanalysis" of single texts to large scale "macroanalysis" of text corpora, and each concludes with a set of practice exercises that reinforce and expand upon the chapter lessons. The book's focus is on making the technical palatable and making the technical useful and immediately gratifying. Text Analysis with R is written with students and scholars of literature in mind but will be applicable to other humanists and social scientists wishing to extend their methodological toolkit to include quantitative and computational approaches to the study of text. Computation provides access to information in text that readers simply cannot gather using traditional qualitative methods of close reading and human synthesis. This new edition features two new chapters: one that introduces dplyr and tidyr in the context of parsing and analyzing dramatic texts to extract speaker and receiver data, and one on sentiment analysis using the syuzhet package. It is also filled with updated material in every chapter to integrate new developments in the field, current practices in R style, and the use of more efficient algorithms.
This book introduces a novel type of expert finder system that can determine the knowledge that specific users within a community hold, using explicit and implicit data sources to do so. Further, it details how this is accomplished by combining granular computing, natural language processing and a set of metrics that it introduces to measure and compare candidates' suitability. The book describes profiling techniques that can be used to assess knowledge requirements on the basis of a given problem statement or question, so as to ensure that only the most suitable candidates are recommended. The book brings together findings from natural language processing, artificial intelligence and big data, which it subsequently applies to the context of expert finder systems. Accordingly, it will appeal to researchers, developers and innovators alike.
This book focuses on information literacy for the younger generation of learners and library readers. It is divided into four sections: 1. Information Literacy for Life; 2. Searching Strategies, Disciplines and Special Topics; 3. Information Literacy Tools for Evaluating and Utilizing Resources; 4. Assessment of Learning Outcomes. Written by librarians with wide experience in research and services, and a strong academic background in disciplines such as the humanities, social sciences, information technology, and library science, this valuable reference resource combines both theory and practice. In today's ever-changing era of information, it offers students of library and information studies insights into information literacy as well as learning tips they can use for life.
This volume presents several machine intelligence technologies, developed over recent decades, and illustrates how they can be combined in application. One application, the detection of dementia from patterns in speech, is used throughout to illustrate these combinations. This application is a classic stationary pattern detection task, so readers may easily see how these combinations can be applied to other similar tasks. The expositions of the methods are supported by the basic theory they rest upon, and their application is clearly illustrated. The book's goal is to allow readers to select one or more of these methods to quickly apply to their own tasks. It covers a variety of machine intelligence technologies and illustrates how they can work together; shows evolutionary feature-subset selection combined with support vector machines and with combinations of multiple classifiers; and includes a running case study on intelligent processing for Alzheimer's/dementia detection, in addition to several other applications of the hybrid machine-intelligence algorithms.
The general focus of this book is on multimodal communication, which captures the temporal patterns of behavior in various dialogue settings. After an overview of current theoretical models of verbal and nonverbal communication cues, it presents studies on a range of related topics: paraverbal behavior patterns in the classroom setting; a proposed optimal methodology for conversational analysis; a study of time and mood at work; an experiment on the dynamics of multimodal interaction from the observer's perspective; formal cues of uncertainty in conversation; how machines can know we understand them; and detecting topic changes using neural network techniques. A joint work bringing together psychologists, communication scientists, information scientists and linguists, the book will be of interest to those working on a wide range of applications from industry to home, and from health to security, with the main goals of revealing, embedding and implementing a rich spectrum of information on human behavior.
The research described in this book shows that conversation analysis can effectively model dialogue. Specifically, this work shows that the multidisciplinary field of communicative ICALL may greatly benefit from including Conversation Analysis. As a consequence, this research makes several contributions to the related research disciplines, such as conversation analysis, second-language acquisition, computer-mediated communication, artificial intelligence, and dialogue systems. The book will be of value for researchers and engineers in the areas of computational linguistics, intelligent assistants, and conversational interfaces.
This book provides information on digital audio watermarking, its applications, and its evaluation for copyright protection of audio signals, covering both basic and advanced material. The author covers various advanced digital audio watermarking algorithms that can be used for copyright protection of audio signals. These algorithms are implemented by hybridizing advanced signal processing transforms such as the fast discrete curvelet transform (FDCuT) and the redundant discrete wavelet transform (RDWT) with other transforms such as the discrete cosine transform (DCT). In these algorithms, Arnold scrambling is used to enhance the security of the watermark logo. The book is divided into three parts: basic audio watermarking and its classification, audio watermarking algorithms, and audio watermarking algorithms using advanced signal transforms. It also covers optimization-based audio watermarking, describes the basics of digital audio watermarking and its applications (including evaluation parameters for audio watermarking algorithms), and presents watermarking algorithms built on advanced signal transforms.
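One ingredient named in the description, Arnold scrambling, is simple enough to sketch. Below is the standard Arnold cat map applied to a square matrix; the 4x4 "logo" is invented toy data, and real schemes apply the map to a binary watermark image before embedding:

```python
# Sketch of Arnold scrambling: the Arnold cat map permutes the pixels of a
# square image, and the permutation is exactly invertible, so the watermark
# logo can be recovered after extraction.

def arnold(image, rounds=1):
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod n, `rounds` times."""
    n = len(image)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
        image = out
    return image

def arnold_inverse(image, rounds=1):
    """Undo `rounds` applications of the Arnold cat map."""
    n = len(image)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(2 * x - y) % n][(y - x) % n] = image[x][y]
        image = out
    return image

logo = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]]
scrambled = arnold(logo, rounds=2)
restored = arnold_inverse(scrambled, rounds=2)
print(restored == logo)   # scrambling is exactly reversible
```

The number of rounds acts as a simple secret key: without it, an extracted watermark remains a scrambled bitmap.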
This book constitutes the refereed proceedings of the 16th International Conference of the Pacific Association for Computational Linguistics, PACLING 2019, held in Hanoi, Vietnam, in October 2019. The 28 full papers and 14 short papers presented were carefully reviewed and selected from 70 submissions. The papers are organized in topical sections on text summarization; relation and word embedding; machine translation; text classification; web analyzing; question and answering, dialog analyzing; speech and emotion analyzing; parsing and segmentation; information extraction; and grammar error and plagiarism detection.
Focusing on methodologies, applications and challenges of textual data analysis and related fields, this book gathers selected and peer-reviewed contributions presented at the 14th International Conference on Statistical Analysis of Textual Data (JADT 2018), held in Rome, Italy, on June 12-15, 2018. Statistical analysis of textual data is a multidisciplinary field of research that has been mainly fostered by statistics, linguistics, mathematics and computer science. The respective sections of the book focus on techniques, methods and models for text analytics, dictionaries and specific languages, multilingual text analysis, and the applications of text analytics. The interdisciplinary contributions cover topics including text mining, text analytics, network text analysis, information extraction, sentiment analysis, web mining, social media analysis, corpus and quantitative linguistics, statistical and computational methods, and textual data in sociology, psychology, politics, law and marketing.
This book explains speech enhancement in the fractional Fourier transform (FRFT) domain, which has proven to be an ideal time-frequency analysis tool in many speech signal processing applications, and investigates the use of different FRFT algorithms in both single-channel and multi-channel enhancement systems. The authors discuss the complexities involved in processing highly non-stationary signals and the concepts behind applying the FRFT to speech enhancement. The book explains the fundamentals of the FRFT as well as its implementation in speech enhancement, and the theories of the different FRFT methods are also discussed. It helps readers understand the new fractional domains, preparing them to develop new algorithms. A comprehensive literature survey on the topic is also provided.
Data-driven experimental analysis has become the main evaluation tool of Natural Language Processing (NLP) algorithms. In fact, in the last decade, it has become rare to see an NLP paper, particularly one that proposes a new algorithm, that does not include extensive experimental analysis, and the number of involved tasks, datasets, domains, and languages is constantly growing. This emphasis on empirical results highlights the role of statistical significance testing in NLP research: If we, as a community, rely on empirical evaluation to validate our hypotheses and reveal the correct language processing mechanisms, we better be sure that our results are not coincidental. The goal of this book is to discuss the main aspects of statistical significance testing in NLP. Our guiding assumption throughout the book is that the basic question NLP researchers and engineers deal with is whether or not one algorithm can be considered better than another one. This question drives the field forward as it allows the constant progress of developing better technology for language processing challenges. In practice, researchers and engineers would like to draw the right conclusion from a limited set of experiments, and this conclusion should hold for other experiments with datasets they do not have at their disposal or that they cannot perform due to limited time and resources. The book hence discusses the opportunities and challenges in using statistical significance testing in NLP, from the point of view of experimental comparison between two algorithms. We cover topics such as choosing an appropriate significance test for the major NLP tasks, dealing with the unique aspects of significance testing for non-convex deep neural networks, accounting for a large number of comparisons between two NLP algorithms in a statistically valid manner (multiple hypothesis testing), and, finally, the unique challenges yielded by the nature of the data and practices of the field.
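The central question the book above poses, whether one algorithm can be considered better than another given a limited set of experiments, is often operationalized with an approximate randomization (paired permutation) test. A minimal sketch, using invented per-example accuracies for two hypothetical systems evaluated on the same test set:

```python
import random

def permutation_test(scores_a, scores_b, trials=10000, seed=0):
    """Approximate randomization test for paired per-example scores.

    Under the null hypothesis that the two systems are interchangeable,
    swapping each aligned pair of outputs at random should produce mean
    differences at least as extreme as the observed one fairly often.
    Returns an estimate of the two-sided p-value.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    hits = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:   # randomly swap the paired outputs
                a, b = b, a
            diff += a - b
        if abs(diff) / n >= observed:
            hits += 1
    return hits / trials

# Invented per-example correctness scores (1 = correct, 0 = incorrect).
a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1]
b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
p = permutation_test(a, b)
print(p)   # small p-value: system a's advantage is unlikely under the null
```

The pairing matters: because both systems are scored on the same examples, only the examples on which they disagree contribute to the randomization, which makes the test sensitive even on small test sets.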
This volume presents a linguistic and textual study of a corpus of diary-style blog posts. The proposed analysis lies at the intersection of two lines of inquiry, one textual and one more strictly linguistic. Within the first, the online diary is studied in its textual and communicative particularities as a genre of discourse situated within three sets: autobiographical genres, genres of computer-mediated communication (CMC), and weakly constrained texts. Within the second, the corpus is examined for the presence of a series of morpho-syntactic features, with the aim of characterizing online diaries in terms of their distance from, or closeness to, the norm of standard Italian. This is followed by an analysis of a series of syntactic features typical of spoken language, aimed at discovering to what extent the texts of the corpus are oriented toward orality.
In recent years, online social networking has revolutionized interpersonal communication. Research on language analysis in social media has increasingly focused on social media's impact on our daily lives, both personal and professional. Natural language processing (NLP) is one of the most promising avenues for social media data processing. It is a scientific challenge to develop powerful methods and algorithms that extract relevant information from a large volume of data coming from multiple sources and languages in various formats or in free form. This book will discuss the challenges in analyzing social media texts in contrast with traditional documents. Research methods in information extraction, automatic categorization and clustering, automatic summarization and indexing, and statistical machine translation need to be adapted to a new kind of data. This book reviews the current research on NLP tools and methods for processing the non-traditional information from social media data that is available in large amounts, and it shows how innovative NLP approaches can integrate appropriate linguistic information in various fields such as social media monitoring, health care, and business intelligence. The book further covers the existing evaluation metrics for NLP and social media applications and the new efforts in evaluation campaigns or shared tasks on new datasets collected from social media. Such tasks are organized by the Association for Computational Linguistics (such as SemEval tasks), the National Institute of Standards and Technology via the Text REtrieval Conference (TREC) and the Text Analysis Conference (TAC), or the Conference and Labs of the Evaluation Forum (CLEF). In this third edition of the book, the authors added information about recent progress in NLP for social media applications, including more about the modern techniques provided by deep neural networks (DNNs) for modeling language and analyzing social media data.
This book constitutes the proceedings of the 14th International Conference on Computational Processing of the Portuguese Language, PROPOR 2020, held in Evora, Portugal, in March 2020. The 36 full papers presented together with 5 short papers were carefully reviewed and selected from 70 submissions. They are grouped in topical sections on speech processing; resources and evaluation; natural language processing applications; semantics; natural language processing tasks; and multilinguality.