Globalization has increased the number of individuals in criminal proceedings who are unable to understand the language of the courtroom, and as a result the number of court interpreters has also increased. But unsupervised interpreters can severely undermine the fairness of a criminal proceeding. In this innovative and methodologically rigorous new study, Dingfelder Stone comprehensively examines the multitude of mistakes made by interpreters, and explores the resultant legal and practical implications. Whilst scholars of interpreting studies have researched the prevalence of interpreter error for decades, the effect of these mistakes on criminal proceedings has largely gone unanalyzed by legal scholars. Drawing upon interpreting studies research and legal scholarship alike, this engaging and timely study analyzes the impact of court interpreters on the right to a fair trial under international law, which forms the minimum baseline standard for national systems.
This open access book describes the results of natural language processing and machine learning methods applied to clinical text from electronic patient records. It is divided into twelve chapters. Chapters 1-4 discuss the history and background of the original paper-based patient records, their purpose, and how they are written and structured. These initial chapters do not require any technical or medical background knowledge. The remaining eight chapters are more technical in nature and describe various medical classifications and terminologies such as ICD diagnosis codes, SNOMED CT, MeSH, UMLS, and ATC. Chapters 5-10 cover basic tools for natural language processing and information retrieval, and how to apply them to clinical text. The differences between rule-based and machine learning-based methods, and between supervised and unsupervised machine learning methods, are also explained. Next, ethical concerns regarding the use of sensitive patient records for research purposes are discussed, including methods for de-identifying electronic patient records and safely storing patient records. The book's closing chapters present a number of applications in clinical text mining and summarise the lessons learned from the previous chapters. The book provides a comprehensive overview of technical issues arising in clinical text mining, and offers a valuable guide for advanced students in health informatics, computational linguistics, and information retrieval, and for researchers entering these fields.
Describing the technologies to combine language resources flexibly as web services, this book provides valuable case studies for those who work in services computing, language resources, human-computer interaction (HCI), computer-supported cooperative work (CSCW), and service science. The authors have been operating the Language Grid, which wraps existing language resources as atomic language services and enables users to compose new services by combining them. From architecture level to service composition level, the book explains how to resolve infrastructural and operational difficulties in sharing and combining language resources, including interoperability of language service infrastructures, various types of language service policies, human services, and service failures. The research, based on the authors' operating experience of handling complicated issues such as intellectual property and interoperability of language resources, contributes to the exploitation of language resources as a service. On the other hand, both the analysis based on using services and the design of new services can bring significant results. A new style of multilingual communication supported by language services is worthy of analysis in HCI/CSCW, and the design process of language services is the focus of valuable case studies in service science. Because the Language Grid allows language resources to be used in many different ways, the resulting activities have been highly regarded by diverse communities. This book consists of four parts: (1) two types of language service platforms to interconnect language services across service grids, (2) various language service composition technologies that improve the reusability, efficiency, and accuracy of composite services, (3) research work and activities in creating language resources and services, and (4) various applications and tools for understanding and designing language services that well support intercultural collaboration.
This book discusses some of the basic issues relating to corpus generation and the methods normally used to generate a corpus. Since corpus-related research goes beyond corpus generation, the book also addresses other major topics connected with the use and application of language corpora, namely, corpus readiness in the context of corpus sanitation and pre-editing of corpus texts; the application of statistical methods; and various text processing techniques. Importantly, it explores how corpora can be used as a primary or secondary resource in English language teaching, in creating dictionaries, in word sense disambiguation, in various language technologies, and in other branches of linguistics. Lastly, the book sheds light on the status quo of corpus generation in Indian languages and identifies current and future needs. Discussing various technical issues in the field in a lucid manner, providing extensive new diagrams and charts for easy comprehension, and using simplified English, the book is an ideal resource for non-native English readers. Written by academics with many years of experience teaching and researching corpus linguistics, its focus on Indian languages and on English corpora makes it applicable to graduate and postgraduate students of applied linguistics, computational linguistics and language processing in South Asia and across countries where English is spoken as a first or second language.
This book is about machine translation (MT) and the classic problems associated with this language technology. It examines the causes of these problems and, for linguistic, rule-based systems, attributes them to language's ambiguity and complexity and their interplay in logic-driven processes. For non-linguistic, data-driven systems, the book attributes translation shortcomings to the very lack of linguistics. It then proposes a demonstrable way to relieve these drawbacks in the shape of a working translation model (Logos Model) that has taken its inspiration from key assumptions about psycholinguistic and neurolinguistic function. The book suggests that this brain-based mechanism is effective precisely because it bridges both linguistically driven and data-driven methodologies. It shows how simulation of this cerebral mechanism has freed this one MT model from the all-important, classic problem of complexity when coping with the ambiguities of language. Logos Model accomplishes this by a data-driven process that does not sacrifice linguistic knowledge, but that, like the brain, integrates linguistics within a data-driven process. As a consequence, the book suggests that the brain-like mechanism embedded in this model has the potential to contribute to further advances in machine translation in all its technological instantiations.
This book applies formal language and automata theory in the context of Tibetan computational linguistics; further, it constructs a Tibetan-spelling formal grammar system that generates a Tibetan-spelling formal language group, and an automata group that can recognize the language group. In addition, it investigates the application technologies of Tibetan-spelling formal language and automata. Given its creative and original approach, the book offers a valuable reference guide for researchers, teachers and graduate students in the field of computational linguistics.
This is the first volume to bring together research and practice from academic and industry settings on both human and machine translation evaluation. Its comprehensive collection of papers by leading experts in human and machine translation quality and evaluation, who situate current developments and chart future trends, fills a clear gap in the literature. This is critical to the successful integration of translation technologies in the industry today, where the lines between human and machine are becoming increasingly blurred by technology: this affects the whole translation landscape, from students and trainers to project managers and professionals, including in-house and freelance translators, as well as, of course, translation scholars and researchers. The editors have broad experience in translation quality evaluation research, including investigations into professional practice with qualitative and quantitative studies, and the contributors are leading experts in their respective fields, providing a unique set of complementary perspectives on human and machine translation quality and evaluation, combining theoretical and applied approaches.
This Pivot reconsiders the controversial literary figure of Lin Shu and the debate surrounding his place in the history of Modern Chinese Literature. Although recent Chinese mainland research has recognized some of the innovations introduced by Lin Shu, he has often been labeled a 'rightist reformer' in contrast to 'leftist reformers' such as Chen Duxiu and the new wave scholars of the May Fourth Movement. This book provides a well-documented account of his place in the different polemics between these two circles ('conservatives' and 'reformers') and provides a more nuanced account of the different literary movements of the time. Notably, it argues that these differences were neither in content nor in politics, but in the methodological approach of both parties. Examining how Lin Shu and the 'conservatives' advocated the coexistence of traditional and modern thought, the book provides background to the major changes occurring in the intellectual landscape of Modern China.
This book comprehensively examines the development of translator and interpreter training using bibliometric reviews of the state of the field and empirical studies on classroom practice. It starts by introducing databases in bibliometric reviews and presents a detailed account of the reasons behind the project and its objectives as well as a description of the methods of constructing databases. The introduction is followed by full-scale review studies on various aspects of translator and interpreter training, providing not only an overall picture of the research themes and methods, but also valuable information on active authors, institutions and countries in the subfields of translator training, interpreter training, and translator and interpreter training in general. The book also compares publications from different subfields of research, regions and journals to show the special features within this discipline. Further, it provides a series of empirical studies conducted by the authors, covering a wide array of topics in translator and interpreter training, with an emphasis on learner factors. This collective volume, with its unique perspective on bibliometric data and empirical studies, highlights the latest development in the field of translator and interpreter training research. The findings presented will help researchers, trainers and practitioners to reflect on the important issues in the discipline and find possible new directions for future research.
The two-volume set LNCS 10761 + 10762 constitutes revised selected papers from the CICLing 2017 conference which took place in Budapest, Hungary, in April 2017. The total of 90 papers presented in the two volumes was carefully reviewed and selected from numerous submissions. In addition, the proceedings contain 4 invited papers. The papers are organized in the following topical sections: Part I: general; morphology and text segmentation; syntax and parsing; word sense disambiguation; reference and coreference resolution; named entity recognition; semantics and text similarity; information extraction; speech recognition; applications to linguistics and the humanities. Part II: sentiment analysis; opinion mining; author profiling and authorship attribution; social network analysis; machine translation; text summarization; information retrieval and text classification; practical applications.
Many applications within natural language processing involve performing text-to-text transformations, i.e., given a text in natural language as input, systems are required to produce a version of this text (e.g., a translation), also in natural language, as output. Automatically evaluating the output of such systems is an important component in developing text-to-text applications. Two approaches have been proposed for this problem: (i) to compare the system outputs against one or more reference outputs using string matching-based evaluation metrics and (ii) to build models based on human feedback to predict the quality of system outputs without reference texts. Despite their popularity, reference-based evaluation metrics are faced with the challenge that multiple good (and bad) quality outputs can be produced by text-to-text approaches for the same input. This variation is very hard to capture, even with multiple reference texts. In addition, reference-based metrics cannot be used in production (e.g., online machine translation systems), when systems are expected to produce outputs for any unseen input. In this book, we focus on the second set of metrics, so-called Quality Estimation (QE) metrics, where the goal is to provide an estimate on how good or reliable the texts produced by an application are without access to gold-standard outputs. QE enables different types of evaluation that can target different types of users and applications. Machine learning techniques are used to build QE models with various types of quality labels and explicit features or learnt representations, which can then predict the quality of unseen system outputs. This book describes the topic of QE for text-to-text applications, covering quality labels, features, algorithms, evaluation, uses, and state-of-the-art approaches. It focuses on machine translation as application, since this represents most of the QE work done to date. 
It also briefly describes QE for several other applications, including text simplification, text summarization, grammatical error correction, and natural language generation.
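As a toy illustration of the QE idea sketched in the blurb above, the following snippet fits a linear model on hand-crafted features of (source, output) pairs against human-style quality labels, then scores unseen outputs without any reference text. The features (length ratio, token overlap), the training data, and all names here are invented for illustration; they are not taken from the book.

```python
# Minimal Quality Estimation (QE) sketch: predict output quality from
# features of the (source, output) pair alone -- no reference translation.

def features(source, output):
    """Toy feature vector: bias, length ratio, and token-overlap proxy."""
    src, out = source.split(), output.split()
    len_ratio = len(out) / max(len(src), 1)
    overlap = len(set(src) & set(out)) / max(len(set(src)), 1)
    return [1.0, len_ratio, overlap]

def fit(pairs, labels, lr=0.1, epochs=500):
    """Fit linear weights to quality labels by stochastic gradient descent."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (s, o), y in zip(pairs, labels):
            x = features(s, o)
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def predict(w, source, output):
    """Estimate quality of an unseen system output, with no gold reference."""
    return sum(wi * xi for wi, xi in zip(w, features(source, output)))

# Tiny invented training set: quality labels in [0, 1] assigned by a human.
train = [(("the cat sat", "the cat sat"), 1.0),
         (("the cat sat", "cat cat cat cat cat"), 0.2)]
w = fit([p for p, _ in train], [y for _, y in train])
```

In a realistic QE system the hand-crafted features would be replaced by richer explicit features or learnt representations, as the blurb notes, but the pipeline shape (features, labels, regressor, prediction on unseen outputs) is the same.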
This updated book expands upon the role of prosody in recognition applications of speech processing. It explains the importance of prosody for speech processing applications; shows why prosody needs to be incorporated into such applications; and presents methods for the extraction and representation of prosody for applications such as speaker recognition, language recognition and speech recognition. The updated book also includes information on the significance of prosody for emotion recognition and various prosody-based approaches for automatic emotion recognition from speech.
This book constitutes the refereed proceedings of the 9th International Conference of the CLEF Initiative, CLEF 2018, jointly organized by Avignon, Marseille and Toulon universities and held in Avignon, France, in September 2018. The conference has a clear focus on experimental information retrieval, with special attention to the challenges of multimodality, multilinguality, and interactive search ranging from unstructured to semi-structured and structured data. The 13 papers presented in this volume were carefully reviewed and selected from 39 submissions. Many papers tackle the medical eHealth and multimedia retrieval challenges; other research topics include document clustering, social biases in IR, social book search, and personality profiling. Further, this volume presents 9 "best of the labs" papers, which were reviewed as full paper submissions under the same review criteria. The labs represented scientific challenges based on new data sets and real-world problems in multimodal and multilingual information access. In addition, 10 benchmarking labs reported the results of their year-long activities in overview talks and lab sessions. The papers address all aspects of information access in any modality and language and cover a broad range of topics in the field of multilingual and multimodal information access evaluation.
This two-volume set, LNAI 11108 and LNAI 11109, constitutes the refereed proceedings of the 7th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2018, held in Hohhot, China, in August 2018. The 55 full papers and 31 short papers presented were carefully reviewed and selected from 308 submissions. The papers of the first volume are organized in the following topics: conversational Bot/QA/IR; knowledge graph/IE; machine learning for NLP; machine translation; and NLP applications. The papers of the second volume are organized as follows: NLP for social network; NLP fundamentals; text mining; and short papers.
This book constitutes the refereed proceedings of the 18th International Conference on Artificial Intelligence: Methodology, Systems, and Applications, AIMSA 2018, held in Varna, Bulgaria, in September 2018. The 22 revised full papers and 7 poster papers presented were carefully reviewed and selected from 72 submissions. They cover a wide range of topics in AI: from machine learning to natural language systems, from information extraction to text mining, from knowledge representation to soft computing; from theoretical issues to real-world applications.
An informative and comprehensive overview of the state-of-the-art in natural language generation (NLG) for interactive systems, this guide serves to introduce graduate students and new researchers to the field of natural language processing and artificial intelligence, while inspiring them with ideas for future research. Detailing the techniques and challenges of NLG for interactive applications, it focuses on the research into systems that model collaborativity and uncertainty, are capable of being scaled incrementally, and can engage with the user effectively. A range of real-world case studies is also included. The book and the accompanying website feature a comprehensive bibliography, and refer the reader to corpora, data, software and other resources for pursuing research on natural language generation and interactive systems, including dialog systems, multimodal interfaces and assistive technologies. It is an ideal resource for students and researchers in computational linguistics, natural language processing and related fields.
This two-volume set LNAI 10934 and LNAI 10935 constitutes the refereed proceedings of the 14th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2018, held in New York, NY, USA in July 2018. The 92 regular papers presented in this two-volume set were carefully reviewed and selected from 298 submissions. The topics range from theoretical topics for classification, clustering, association rule and pattern mining to specific data mining methods for the different multi-media data types such as image mining, text mining, video mining, and Web mining.
This book constitutes the thoroughly refereed post-conference proceedings of the Satellite Events of the 15th Extended Semantic Web Conference, ESWC 2018, held in Heraklion, Crete, Greece, in June 2018. The volume contains 41 poster and demonstration papers, 11 invited workshop papers, and 9 full papers, selected out of a total of 70 submissions. They deal with all areas of semantic web research, semantic technologies on the Web and Linked Data.
Sentiment analysis and opinion mining is the field of study that analyzes people's opinions, sentiments, evaluations, attitudes, and emotions from written language. It is one of the most active research areas in natural language processing and is also widely studied in data mining, Web mining, and text mining. In fact, this research has spread outside of computer science to the management sciences and social sciences due to its importance to business and society as a whole. The growing importance of sentiment analysis coincides with the growth of social media such as reviews, forum discussions, blogs, micro-blogs, Twitter, and social networks. For the first time in human history, we now have a huge volume of opinionated data recorded in digital form for analysis. Sentiment analysis systems are being applied in almost every business and social domain because opinions are central to almost all human activities and are key influencers of our behaviors. Our beliefs and perceptions of reality, and the choices we make, are largely conditioned on how others see and evaluate the world. For this reason, when we need to make a decision we often seek out the opinions of others. This is true not only for individuals but also for organizations. This book is a comprehensive introductory and survey text. It covers all important topics and the latest developments in the field with over 400 references. It is suitable for students, researchers and practitioners who are interested in social media analysis in general and sentiment analysis in particular. Lecturers can readily use it in class for courses on natural language processing, social media analysis, text mining, and data mining. Lecture slides are also available online. 
Table of Contents: Preface / Sentiment Analysis: A Fascinating Problem / The Problem of Sentiment Analysis / Document Sentiment Classification / Sentence Subjectivity and Sentiment Classification / Aspect-Based Sentiment Analysis / Sentiment Lexicon Generation / Opinion Summarization / Analysis of Comparative Opinions / Opinion Search and Retrieval / Opinion Spam Detection / Quality of Reviews / Concluding Remarks / Bibliography / Author Biography
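The sentiment-lexicon techniques surveyed in the book above can be sketched in a few lines: score a text by counting matches against positive and negative word lists. The word lists below are invented for illustration and are not from the book; real systems use much larger, automatically generated lexicons and handle negation, aspect, and context.

```python
# Minimal lexicon-based sentiment scorer (toy word lists, for illustration).
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "poor"}

def sentiment(text):
    """Return a score in [-1, 1]: positive minus negative word fraction."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

This document-level scorer corresponds to the simplest setting in the book's table of contents; aspect-based analysis refines it by attaching each sentiment word to the entity or aspect it describes.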
This book focuses on speech signal phenomena, presenting a robustification of the usual speech generation models with regard to the presumed types of excitation signals, which is equivalent to the introduction of a class of nonlinear models and the corresponding criterion functions for parameter estimation. Compared to the general class of nonlinear models, such as various neural networks, these models possess good properties of controlled complexity, the option of working in "online" mode, as well as a low information volume for efficient speech encoding and transmission. Providing comprehensive insights, the book is based on the authors' research, which has already been published, supplemented by additional texts discussing general considerations of speech modeling, linear predictive analysis and robust parameter estimation.
This book constitutes the refereed proceedings of the 7th Language and Technology Conference: Challenges for Computer Science and Linguistics, LTC 2015, held in Poznan, Poland, in November 2015. The 31 revised papers presented in this volume were carefully reviewed and selected from 108 submissions. The papers selected for this volume belong to various fields: Speech Processing; Multiword Expressions; Parsing; Language Resources and Tools; Ontologies and Wordnets; Machine Translation; Information and Data Extraction; Text Engineering and Processing; Applications in Language Learning; Emotions, Decisions and Opinions; Less-Resourced Languages.
This book serves as a starting point for Semantic Web (SW) students and researchers interested in discovering what Natural Language Processing (NLP) has to offer. NLP can effectively help uncover the large portions of data held as unstructured text in natural language, thus augmenting the real content of the Semantic Web in a significant and lasting way. The book covers the basics of NLP, with a focus on Natural Language Understanding (NLU), referring to semantic processing, information extraction and knowledge acquisition, which are seen as the key links between the SW and NLP communities. Major emphasis is placed on mining sentences in search of entities and relations. In the course of this "quest", challenges will be encountered for various text analysis tasks, including part-of-speech tagging, parsing, semantic disambiguation, named entity recognition and relation extraction. Standard algorithms associated with these tasks are presented to provide an understanding of the fundamental concepts. Furthermore, the importance of experimental design and result analysis is emphasized, and accordingly, most chapters include small experiments on corpus data with quantitative and qualitative analysis of the results. This book is divided into four parts. Part I "Searching for Entities in Text" is dedicated to the search for entities in textual data. Next, Part II "Working with Corpora" investigates corpora as valuable resources for NLP work. In turn, Part III "Semantic Grounding and Relatedness" focuses on the process of linking surface forms found in text to entities in resources. Finally, Part IV "Knowledge Acquisition" delves into the world of relations and relation extraction. The book also includes three appendices: "A Look into the Semantic Web" gives a brief overview of the Semantic Web and is intended to bring readers less familiar with the Semantic Web up to speed, so that they too can fully benefit from the material of this book. 
"NLP Tools and Platforms" provides information about NLP platforms and tools, while "Relation Lists" gathers lists of relations under different categories, showing how relations can be varied and serve different purposes. And finally, the book includes a glossary of over 200 terms commonly used in NLP. The book offers a valuable resource for graduate students specializing in SW technologies and professionals looking for new tools to improve the applicability of SW techniques in everyday life - or, in short, everyone looking to learn about NLP in order to expand his or her horizons. It provides a wealth of information for readers new to both fields, helping them understand the underlying principles and the challenges they may encounter.
This book constitutes the thoroughly refereed proceedings of the 15th International Conference on Image Analysis and Recognition, ICIAR 2018, held in Povoa de Varzim, Portugal, in June 2018. The 91 full papers presented together with 15 short papers were carefully reviewed and selected from 179 submissions. The papers are organized in the following topical sections: Enhancement, Restoration and Reconstruction; Image Segmentation; Detection, Classification and Recognition; Indexing and Retrieval; Computer Vision; Activity Recognition; Traffic and Surveillance; Applications; Biomedical Image Analysis; Diagnosis and Screening of Ophthalmic Diseases; and Challenge on Breast Cancer Histology Images.