This book is about the nature of expression in speech. It is a comprehensive exploration of how such expression is produced and understood, and of how the emotional content of spoken words may be analysed, modelled, tested, and synthesized. Listeners can interpret tone-of-voice, assess emotional pitch, and effortlessly detect the finest modulations of speaker attitude; yet these processes present almost intractable difficulties to the researchers seeking to identify and understand them. In seeking to explain the production and perception of emotive content, Mark Tatham and Katherine Morton review the potential of biological and cognitive models. They examine how the features that make up the speech production and perception systems have been studied by biologists, psychologists, and linguists, and assess how far biological, behavioural, and linguistic models generate hypotheses that provide insights into the nature of expressive speech. The authors use recent techniques in speech synthesis and automatic speech recognition as a test bed for models of expression in speech. Acknowledging that such testing presupposes a comprehensive computational model of speech production, they put forward original proposals for its foundations and show how the relevant data structures may be modelled within its framework. This pioneering book will be of central interest to researchers in linguistics and in speech science, pathology, and technology. It will also be valuable for behavioural and cognitive scientists wanting to know more about this vital and elusive aspect of human behaviour.
One of the challenges brought on by the digital revolution of recent decades is the mechanism by which information carried by texts can be extracted in order to access their contents. The processing of named entities remains a very active area of research, which plays a central role in natural language processing technologies and their applications. Named entity recognition, a tool used in information extraction tasks, focuses on recognizing small pieces of information in order to extract information on a larger scale. The authors use written text and examples in French and English to present the necessary elements for the readers to familiarize themselves with the main concepts related to named entities and to discover the problems associated with them, as well as the methods available in practice for solving these issues.
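The "small pieces of information" the blurb mentions can be made concrete with a minimal sketch of named entity recognition. This is a toy gazetteer-based recognizer, not a method from the book; the entity list and labels are illustrative assumptions, and real systems use statistical or neural models.

```python
import re

# Toy gazetteer: surface forms mapped to entity labels (illustrative only).
GAZETTEER = {
    "Marie Curie": "PER",
    "Paris": "LOC",
    "France": "LOC",
}

def toy_ner(text):
    """Return (start, end, surface form, label) tuples found in text."""
    entities = []
    # Longest entries first, so "Marie Curie" would win over a shorter "Marie".
    for name in sorted(GAZETTEER, key=len, reverse=True):
        for match in re.finditer(re.escape(name), text):
            entities.append((match.start(), match.end(), name, GAZETTEER[name]))
    return sorted(entities)

print(toy_ner("Marie Curie moved to Paris, France."))
```

Even this naive lookup shows the shape of the task: locating short spans, typing them, and handing the typed spans to a larger extraction pipeline.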
When we speak, we configure the vocal tract which shapes the visible motions of the face and the patterning of the audible speech acoustics. Similarly, we use these visible and audible behaviors to perceive speech. This book showcases a broad range of research investigating how these two types of signals are used in spoken communication, how they interact, and how they can be used to enhance the realistic synthesis and recognition of audible and visible speech. The volume begins by addressing two important questions about human audiovisual performance: how auditory and visual signals combine to access the mental lexicon and where in the brain this and related processes take place. It then turns to the production and perception of multimodal speech and how structures are coordinated within and across the two modalities. Finally, the book presents overviews and recent developments in machine-based speech recognition and synthesis of AV speech.
In this pioneering book Katarzyna Jaszczolt lays down the foundations of an original theory of meaning in discourse, reveals the cognitive foundations of discourse interpretation, and puts forward a new basis for the analysis of discourse processing. She provides a step-by-step introduction to the theory and its application, and explains new terms and formalisms as required. Dr. Jaszczolt unites the precision of truth-conditional, dynamic approaches with insights from neo-Gricean pragmatics into the role of speaker's intentions in communication. She shows that the compositionality of meaning may be understood as merger representations combining information from various sources including word meaning and sentence structure, various kinds of default interpretations, and conscious pragmatic inference.
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training models for NLP. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. Then, they are fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models. After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT, and sequence-to-sequence transformers) are described, as well as the concepts of self-attention and context-sensitive embedding. Then, different approaches to improving these models are discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g. question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
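The self-attention and context-sensitive embedding concepts mentioned above can be sketched in a few lines. This is a minimal single-head, unmasked self-attention computation; the dimensions, random toy input, and weight matrices are illustrative assumptions, not taken from BERT, GPT, or any particular model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token matrix X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # one context-sensitive vector per token

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                 # 4 toy tokens, embedding dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                            # each token's output mixes all tokens
```

The key point for "context-sensitive embedding" is visible in the last line: each output row is a weighted mixture over all input tokens, so the same word receives a different vector in different contexts.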
This text explores the consequences for language acquisition, language evolution and linguistic theory of taking the underlying architecture of the language faculty to be that of a dynamical system. The authors investigate whether it is possible for a complex adaptive system to identify the categories, structures and rules of a language given access only to instances of grammatical utterances of that language. The linguistic tradition says that this is impossible, but there is a growing body of literature in psychology and computer science arguing that grammar can be uncovered using purely statistical techniques applied to the distribution of forms in a string of words. The book goes on to discuss whether a learner requires information about structure that goes beyond the information that is contained in the meaning. Does the learner have to have knowledge of grammar per se prior to language acquisition, as has been traditionally assumed? The authors ask whether it is possible to adequately describe and explain linguistic phenomena if we restrict ourselves to the relatively impoverished apparatus that we require for language acquisition. They explore the consequences of adopting a radical form of minimalism to try to reconcile the linguistic facts with the book's perspective of language acquisition. Culicover and Nowak investigate to what extent it is possible to account for language variation in dynamical terms, as a consequence of the behaviour of the complex social network in which languages and the properties of languages are acquired by learners through interactions with other speakers over time.
This book adopts a corpus-based critical discourse analysis approach and examines a corpus of newspaper articles from Pakistani and Indian publications to gain comparative insights into the ideological construction of China's Belt and Road Initiative (BRI) and the China-Pakistan Economic Corridor (CPEC) within news discourses. This book contributes to the works on perceptions of BRI in English newspapers of India and Pakistan. A multi-billion-dollar project within BRI, also known as "One Belt One Road" (OBOR), CPEC symbolizes a vision for regional revival under China's economic leadership and clout. Propelled by the Chinese Premier's dream to revive the Chinese economy as well as to restructure and catalyze infrastructural development in Asia, BRI is aimed at connecting Asia via land and sea routes with Europe, Africa, and the Middle Eastern states.
The two-volume set LNCS 13396 and 13397 constitutes revised selected papers from the CICLing 2018 conference, which took place in Hanoi, Vietnam, in March 2018. The 67 papers presented in the two volumes were carefully reviewed and selected from 181 submissions. The conference focused on topics such as computational linguistics and intelligent text and speech processing. The papers are organized in the following topical sections: General, Author profiling and authorship attribution, Social network analysis, Information retrieval, Information extraction, Lexical resources, Machine translation, Morphology and syntax, Semantics and text similarity, Sentiment analysis, Syntax and parsing, Text categorization and clustering, Text generation, and Text mining.
This case study-based textbook in multivariate analysis for advanced students in the humanities emphasizes descriptive, exploratory analyses of various types of datasets from a wide range of sub-disciplines, promoting the use of multivariate analysis and illustrating its wide applicability. Fields featured include, but are not limited to, historical agriculture, arts (music and painting), theology, and stylometrics (authorship issues). Most analyses are based on existing data, earlier analysed in published peer-reviewed papers. Four preliminary methodological and statistical chapters provide general technical background to the case studies. The multivariate statistical methods presented and illustrated include data inspection, several varieties of principal component analysis, correspondence analysis, multidimensional scaling, cluster analysis, regression analysis, discriminant analysis, and three-mode analysis. The bulk of the text is taken up by 14 case studies that lean heavily on graphical representations of statistical information such as biplots, using descriptive statistical techniques to support substantive conclusions. Each study features a description of the substantive background to the data, followed by discussion of appropriate multivariate techniques, and detailed results interpreted through graphical illustrations. Each study is concluded with a conceptual summary. Datasets in SPSS are included online.
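Principal component analysis, the first of the multivariate techniques the textbook covers, can be sketched via an eigen-decomposition of the covariance matrix. This is a generic illustration, not the book's own worked example; the random toy data are an assumption for demonstration.

```python
import numpy as np

# Toy dataset: 50 observations of 3 variables, with column 1 strongly
# correlated with column 0 so the first component dominates.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
X[:, 1] = X[:, 0] * 2 + rng.normal(scale=0.1, size=50)

Xc = X - X.mean(axis=0)                  # centre the data
cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]           # principal axes, strongest first
scores = Xc @ components                 # observation coordinates, as in a biplot

explained = eigvals[order] / eigvals.sum()
print(explained.round(3))                # share of variance per component
```

The `scores` matrix is what a biplot displays for the observations, with `components` supplying the variable axes, mirroring the graphical emphasis of the case studies.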
In the not so distant future, we can expect a world where humans and robots coexist and interact with each other. For this to occur, we need to understand human traits, such as seeing, hearing, thinking, and speaking, and instill these traits in robots. The most essential capability for robots to achieve is integrative multimedia understanding (IMU), which occurs naturally in humans. It allows us to assimilate pieces of information expressed through different modes such as speech, pictures, and gestures. The book describes how robots acquire traits like natural language understanding (NLU) as the central part of IMU. Its core is mental image directed semantic theory (MIDST), based on the hypothesis that NLU is essentially the processing of the mental images associated with natural language expressions, namely mental-image based understanding (MBU). MIDST is intended to model omnisensory mental images in humans and to provide a knowledge representation system for the integrative management of knowledge subject to the cognitive mechanisms of intelligent entities such as humans and robots. It is based on a mental image model visualized as 'Loci in Attribute Spaces' and its description language Lmd (mental image description language), employed in predicate logic with a systematic scheme for symbol grounding. This language works as an interlingua among various kinds of information media, and has been applied to several versions of the intelligent system IMAGES (interlingual understanding model aiming at general system). Its latest version, the conversation management system (CMS), simulates MBU and comprehends the user's intention through dialogue to find and solve problems, finally providing a response in text or animation. The book is aimed at researchers and students interested in artificial intelligence, robotics, and cognitive science.
Based on philosophical considerations, the methodology will also appeal to readers in linguistics, psychology, ontology, geography, and cartography. Key Features: Describes a methodology for providing robots with a human-like capability for natural language understanding (NLU) as the central part of IMU; uses a methodology that also relates to linguistics, psychology, ontology, geography, and cartography; examines current trends in machine translation.
The two-volume set LNCS 13451 and 13452 constitutes revised selected papers from the CICLing 2019 conference, which took place in La Rochelle, France, in April 2019. The 95 papers presented in the two volumes were carefully reviewed and selected from 335 submissions. The book also contains 3 invited papers. The papers are organized in the following topical sections: General, Information extraction, Information retrieval, Language modeling, Lexical resources, Machine translation, Morphology, syntax, and parsing, Named entity recognition, Semantics and text similarity, Sentiment analysis, Speech processing, Text categorization, Text generation, and Text mining.
This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is with endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. In spite of the fact that these two schools both have 'linguistics' in their name, so far there has been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures, by treating each dictionary definition as an equation, and the entire lexicon as a set of equations mutually constraining all meanings.
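The word-vector side of the story the blurb describes can be illustrated with the standard cosine-similarity comparison of vectors. This is a generic sketch, not the book's linear-polytope formalism; the tiny hand-made vectors are illustrative assumptions, not corpus-derived embeddings.

```python
import math

# Hand-made 3-dimensional toy "embeddings" (illustrative only).
vectors = {
    "king":  [1.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0],
    "apple": [0.0, 0.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity: the standard closeness measure for word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# In this toy space "king" is closer to "queen" than to "apple".
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))
```

Where computational linguists stop at geometric proximity of this kind, the book's contribution is to connect such spaces to conceptual structures by treating dictionary definitions as mutually constraining equations.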
The book covers theoretical work, approaches, applications, and techniques for computational models of information, language, and reasoning. Computational and technological developments that incorporate natural language are proliferating. Adequate coverage of natural language processing in artificial intelligence encounters problems in the development of specialized computational approaches and algorithms. Many difficulties are due to ambiguities in natural language and the dependency of interpretations on contexts and agents. Classical approaches proceed with relevant updates, and new developments emerge in theories of formal and natural languages, computational models of information and reasoning, and related computerized applications. Its focus is on computational processing of human language and relevant medium languages, which can be theoretically formal, or for programming and specification of computational systems. The goal is to promote intelligent natural language processing, along with models of computation, language, reasoning, and other cognitive processes.
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 - to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
Computational semantics is the art and science of computing meaning in natural language. The meaning of a sentence is derived from the meanings of the individual words in it, and this process can be made so precise that it can be implemented on a computer. Designed for students of linguistics, computer science, logic and philosophy, this comprehensive text shows how to compute meaning using the functional programming language Haskell. It deals with both denotational meaning (where meaning comes from knowing the conditions of truth in situations), and operational meaning (where meaning is an instruction for performing cognitive action). Including a discussion of recent developments in logic, it will be invaluable to linguistics students wanting to apply logic to their studies, logic students wishing to learn how their subject can be applied to linguistics, and functional programmers interested in natural language processing as a new application area.
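The idea that sentence meaning is computed from word meanings precisely enough to run on a computer can be shown in miniature. The book itself works in Haskell; the following is a Montague-style sketch in Python, with a toy model and lexicon that are illustrative assumptions.

```python
# A toy model: a domain of entities and the set of entities that run.
entities = {"alice", "bob"}
runs = {"alice"}

# Lexicon: proper names denote entities; an intransitive verb denotes a
# predicate (the characteristic function of a set), as in denotational semantics.
lexicon = {
    "alice": "alice",
    "bob": "bob",
    "runs": lambda x: x in runs,
}

def meaning(sentence):
    """Compose word meanings by function application: predicate(subject)."""
    subject, verb = sentence.split()
    return lexicon[verb](lexicon[subject])

print(meaning("alice runs"))  # True in this model
print(meaning("bob runs"))    # False in this model
```

The truth value returned is exactly the "conditions of truth in situations" reading of denotational meaning: the sentence is true just in case its subject is in the verb's denotation.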
The book features recent attempts to construct corpora for specific purposes - e.g. multifactorial Dutch (parallel), the Geasy Easy Language Corpus (intralingual), and the HK LegCo interpreting corpus - and showcases sophisticated and innovative corpus analysis methods. It proposes new approaches to address classical themes - i.e. translation pedagogy, translation norms and equivalence, and principles of translation - and brings interdisciplinary perspectives - e.g. contrastive linguistics, cognition, and metaphor studies - to cast new light on them. It is a timely reference for researchers as well as postgraduate students who are interested in the applications of corpus technology to solving translation and interpreting problems.
This work presents a discourse-aware Text Simplification approach that splits and rephrases complex English sentences within the semantic context in which they occur. Based on a linguistically grounded transformation stage, complex sentences are transformed into shorter utterances with a simple canonical structure that can be easily analyzed by downstream applications. To avoid breaking down the input into a disjointed sequence of statements that is difficult to interpret, the author incorporates the semantic context between the split propositions in the form of hierarchical structures and semantic relationships, thus generating a novel representation of complex assertions that puts a semantic layer on top of the simplified sentences. In a second step, she leverages the semantic hierarchy of minimal propositions to improve the performance of Open IE frameworks. She shows that such systems benefit in two dimensions. First, the canonical structure of the simplified sentences facilitates the extraction of relational tuples, leading to an improved precision and recall of the extracted relations. Second, the semantic hierarchy can be leveraged to enrich the output of existing Open IE approaches with additional meta-information, resulting in a novel lightweight semantic representation for complex text data in the form of normalized and context-preserving relational tuples.
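The split-and-rephrase step described above can be illustrated with a deliberately naive example: breaking a complex sentence at a coordinating conjunction into minimal standalone propositions. The author's system is linguistically grounded and context-preserving; this regex-based sketch, with its hypothetical helper, only shows the shape of the output.

```python
import re

def split_propositions(sentence):
    """Naively split a sentence at 'and'/'but' into standalone propositions."""
    subject = sentence.split()[0]              # naive assumption: first word is the subject
    clauses = re.split(r",? (?:and|but) ", sentence.rstrip("."))
    props = [clauses[0] + "."]
    # Re-attach the subject so each later clause reads as a full sentence.
    props += [f"{subject} {clause}." for clause in clauses[1:]]
    return props

print(split_propositions("Anna moved to Berlin and started a new job."))
```

What the sketch omits is precisely the book's contribution: the hierarchical structures and semantic relationships that keep the split propositions from becoming a disjointed, hard-to-interpret sequence.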
This book brings together selected revised papers representing a multidisciplinary approach to language, music, and gesture, as well as their interaction. Among the many multidisciplinary and comparative studies of the structure and organization of language and music, this book broadens the scope with the inclusion of gesture problems in the analyzed spectrum. A unique feature of the collection is that the papers, compiled in one volume, allow readers to see similarities and differences between gesture as an element of non-verbal communication and gesture as the main element of dance. In addition to enhancing the analysis, the data on the perception and comprehension of speech, music, and dance, in regard to both their functioning in a natural situation and their reflection in various forms of performing arts, make this collection extremely useful for those who are interested in human cognitive abilities and performing skills. The book begins with a philosophical overview of recent neurophysiological studies reflecting the complexity of higher cognitive functions, which references the idea of the baroque style in art being neither linear nor stable. The following papers are allocated to five sections. The papers of the section "Language-Music-Gesture as Semiotic Systems" discuss the issues of symbolic and semiotic aspects of language, music, and gesture, including from the perspective of their notation. This is followed by the issues of "Language-Music-Gesture Onstage" and interaction within the idea of the "World as a Text." The papers of "Teaching Language and Music" present new teaching methods that take into account the interaction of all the cognitive systems examined.
The papers of the last two sections focus on issues related primarily to language: the section "Verbalization of Music and Gesture" considers the problem of describing musical text and non-verbal behavior with language, and papers in the final section, "Emotions in Linguistics and AI Communication Systems," analyze the ways of expressing emotions in speech and the problems of organizing emotional communication with computer agents.
With the first publication of this book in 1988, the centrality of the lexicon in language research was becoming increasingly apparent and the use of relational models of the lexicon had been the particular focus of research in a variety of disciplines since the early 1980s. This convergence of approach made the present collection especially welcome for bringing together reports of theoretical developments and applications in relational semantics in computer science, linguistics, cognitive science, anthropology and industrial research. It explains in detail some important applications of relational models to the construction of natural language interfaces, the building of thesauri for bibliographic information retrieval systems and the compilation of terminology banks for machine translation systems. Relational Models of the Lexicon not only provides an invaluable survey of research in relational semantics, but offers a stimulus for potential research advances in semantics, natural language processing and knowledge representation.
When viewed through a political lens, the act of defining terms in natural language arguably transforms knowledge into values. This unique volume explores how corporate, military, academic, and professional values shaped efforts to define computer terminology and establish an information engineering profession as a precursor to what would become computer science. As the Cold War heated up, U.S. federal agencies increasingly funded university researchers and labs to develop technologies, like the computer, that would ensure that the U.S. maintained economic prosperity and military dominance over the Soviet Union. At the same time, private corporations saw opportunities for partnering with university labs and military agencies to generate profits as they strengthened their business positions in civilian sectors. They needed a common vocabulary and principles of streamlined communication to underpin the technology development that would ensure national prosperity and military dominance. This book: investigates how language standardization contributed to the professionalization of computer science as separate from mathematics, electrical engineering, and physics; examines traditions of language standardization in earlier eras of rapid technology development around electricity and radio; highlights the importance of the analogy "the computer is like a human" to early explanations of computer design and logic; traces the design and development of electronic computers within political and economic contexts; and foregrounds the importance of human relationships in decisions about computer design. This in-depth humanistic study argues for the importance of natural language in shaping what people come to think of as possible and impossible relationships between computers and humans. The work is a key reference in the history of technology and serves as a source textbook on the human-level history of computing.
In addition, it addresses those with interests in sociolinguistic questions around technology studies, as well as technology development at the nexus of politics, business, and human relations.
This book addresses the research, analysis, and description of the methods and processes that are used in the annotation and processing of language corpora in advanced, semi-advanced, and non-advanced languages. It provides the background information and empirical data needed to understand the nature and depth of problems related to corpus annotation and text processing and shows readers how the linguistic elements found in texts are analyzed and applied to develop language technology systems and devices. As such, it offers valuable insights for researchers, educators, and students of linguistics and language technology.
This book presents a method of linking the ordered structure of the cosmos with human thoughts: the theory of language holography. In the view presented here, the cosmos is in harmony with the human body and language, and human thoughts are holographic with the cosmos at the level of language. In a word, the holographic relation is nothing more than the bridge by means of which Guanlian Qian connects the cosmos, human, and language. This is a vitally important contribution to linguistic and philosophical studies that cannot be ignored. The book has two main focus areas: outer language holography and inner language holography. These two areas constitute the core of the dynamic and holistic view put forward in the theory of language holography. The book's main properties can be summarized into the following points: First and foremost, it is a book created in toto by a Chinese scholar devoted to pragmatics, theoretical linguistics, and philosophy of language. Secondly, the book was accepted by a top Chinese publisher and was republished the second year, which reflected its value and appeal. Thirdly, in terms of writing style, the book is characterized by succinctness and logic. As a result, it reads fluidly and smoothly without redundancies, which is not that common in linguistic or even philosophical works. Lastly, as stated by the author in the introduction, "Creation is the development of previous capacities, but it is also the generation of new ones"; this book can be said to put this concept into practice. Overall, the book offers a unique resource to readers around the world who want to know more about the truly original and innovative studies of language in Chinese academia.
This manual contains an up-to-date description of the existing anthologies (with a linguistic focus) and corpora that have so far been compiled for the different Romance languages. This description takes into account both the standard languages and a selection of well-attested diatopic and diastratic varieties as well as Romance-based Creoles. Representative texts and detailed commentaries are provided for all the languages and varieties discussed.