Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, and database front-ends, knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
This book investigates the nature of generalization in language and
examines how language is known by adults and acquired by children.
It looks at how and why constructions are learned, the relation
between their forms and functions, and how cross-linguistic and
language-internal generalizations about them can be explained.
EsTAL - España for Natural Language Processing - continued on from the three previous conferences: FracTAL, held at the Université de Franche-Comté, Besançon (France), in December 1997; VexTAL, held at Venice International University, Ca' Foscari (Italy), in November 1999; and PorTAL, held at the Universidade do Algarve, Faro (Portugal), in June 2002. The main goals of these conferences have been: (i) to bring together the international NLP community; (ii) to strengthen the position of local NLP research in the international NLP community; and (iii) to provide a forum for discussion of new research and applications. EsTAL contributed to achieving these goals and increasing the already high international standing of these conferences, largely due to its Program Committee, composed of renowned researchers in the field of natural language processing and its applications. This clearly contributed to the significant number of papers submitted (72) by researchers from 18 different countries. The scope of the conference was structured around the following main topics: (i) computational linguistics research (spoken and written language analysis and generation; pragmatics, discourse, semantics, syntax and morphology; lexical resources; word sense disambiguation; linguistic, mathematical, and psychological models of language; knowledge acquisition and representation; corpus-based and statistical language modelling; machine translation and translation aids; computational lexicography), and (ii) monolingual and multilingual intelligent language processing and applications (information retrieval, extraction and question answering; automatic summarization; document categorization; natural language interfaces; dialogue systems and evaluation of systems).
This book discusses the connection between two areas of semantics, namely the semantics of databases and the semantics of natural language, and links them via a common view of the semantics of time. It is argued that a coherent theory of the semantics of time is an essential ingredient for the success of efforts to incorporate more 'real world' semantics into database models. This idea is a relatively recent concern of database research, but it is receiving growing interest. The book begins with a discussion of database querying which motivates the use of the paradigm of Montague Semantics and discusses the details of the intensional logic ILs. This is followed by a description of the author's own model, the Historical Relational Data Model (HRDM), which extends the RDM to include a temporal dimension. Finally, the database querying language QE-III is defined and examples illustrate its use. The work also presents a formal model for the interpretation of questions, which will form the basis for much further research.
This book constitutes the refereed proceedings of the 19th International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2017, held in Moscow, Russia, in October 2017. The 16 revised full papers presented together with three invited papers were carefully reviewed and selected from 75 submissions. The papers are organized in the following topical sections: data analytics; next generation genomic sequencing: challenges and solutions; novel approaches to analyzing and classifying of various astronomical entities and events; ontology population in data intensive domains; heterogeneous data integration issues; data curation and data provenance support; and temporal summaries generation.
Recent developments in artificial intelligence, especially neural network and deep learning technology, have led to rapidly improving performance in voice assistants such as Siri and Alexa. Over the next few years, capability will continue to improve and become increasingly personalised. Today's voice assistants will evolve into virtual personal assistants firmly embedded within our everyday lives. Told through the view of a fictitious personal assistant called Cyba, this book provides an accessible but detailed overview of how a conversational voice assistant works, especially how it understands spoken language, manages conversations, answers questions and generates responses. Cyba explains through examples and diagrams the neural network technology underlying speech recognition and synthesis, natural language understanding, knowledge representation, conversation management, language translation and chatbot technology. Cyba also explores the implications of this rapidly evolving technology for security, privacy and bias, and gives a glimpse of future developments. Cyba's website can be found at HeyCyba.com.
This book explains how to build Natural Language Generation (NLG) systems--computer software systems that automatically generate understandable texts in English or other human languages. NLG systems use knowledge about language and the application domain to automatically produce documents, reports, explanations, help messages, and other kinds of texts. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization, using a case study to show how these components fit together. It is essential reading for researchers interested in NLP, AI, and HCI; and for developers interested in advanced document-creation technology.
This book presents a collection of papers on the issue of focus in its broadest sense. While commonly being considered as related to phenomena such as presupposition and anaphora, focusing is much more widely spread, and it is this pervasiveness that this collection addresses. The volume explicitly aims to bring together theoretical, psychological, and descriptive approaches to focus, at the same time maintaining the overall interest in how these notions apply to the larger problem of evolving some formal representation of the semantic aspects of linguistic content. The papers in this volume are reworked versions of a selection of original work presented at a conference held in 1994 at Schloss Wolfsbrunnen in Germany.
Originally published in 1997, this book is concerned with human language technology. This technology provides computers with the capability to handle spoken and written language. One major goal is to improve communication between humans and machines. If people can use their own language to access information, work with software applications and control machinery, the greatest obstacle for the acceptance of new information technology is overcome. Another important goal is to facilitate communication among people. Machines can help to translate texts or spoken input from one human language to another. Programs that assist people in writing by checking orthography, grammar and style are constantly improving. This book was sponsored by the Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA.
Argumentation mining is an application of natural language processing (NLP) that emerged a few years ago and has recently enjoyed considerable popularity, as demonstrated by a series of international workshops and by a rising number of publications at the major conferences and journals of the field. Its goals are to identify argumentation in text or dialogue; to construct representations of the constellation of claims, supporting and attacking moves (in different levels of detail); and to characterize the patterns of reasoning that appear to license the argumentation. Furthermore, recent work also addresses the difficult tasks of evaluating the persuasiveness and quality of arguments. Some of the linguistic genres that are being studied include legal text, student essays, political discourse and debate, newspaper editorials, scientific writing, and others. The book starts with a discussion of the linguistic perspective, characteristics of argumentative language, and their relationship to certain other notions such as subjectivity. Besides the connection to linguistics, argumentation has for a long time been a topic in Artificial Intelligence, where the focus is on devising adequate representations and reasoning formalisms that capture the properties of argumentative exchange. It is generally very difficult to connect the two realms of reasoning and text analysis, but we are convinced that it should be attempted in the long term, and therefore we also touch upon some fundamentals of reasoning approaches. Then the book turns to its focus, the computational side of mining argumentation in text. We first introduce a number of annotated corpora that have been used in the research. From the NLP perspective, argumentation mining shares subtasks with research fields such as subjectivity and sentiment analysis, semantic relation extraction, and discourse parsing. Therefore, many technical approaches are being borrowed from those (and other) fields. 
We break argumentation mining into a series of subtasks, starting with the preparatory steps of classifying text as argumentative (or not) and segmenting it into elementary units. Then, central steps are the automatic identification of claims, and finding statements that support or oppose the claim. For certain applications, it is also of interest to compute a full structure of an argumentative constellation of statements. Next, we discuss a few steps that try to 'dig deeper': to infer the underlying reasoning pattern for a textual argument, to reconstruct unstated premises (so-called 'enthymemes'), and to evaluate the quality of the argumentation. We also take a brief look at 'the other side' of mining, i.e., the generation or synthesis of argumentative text. The book finishes with a summary of the argumentation mining tasks, a sketch of potential applications, and a--necessarily subjective--outlook for the field.
This book will help readers understand fundamental and advanced statistical models and deep learning models for robust speaker recognition and domain adaptation. This useful toolkit enables readers to apply machine learning techniques to address practical issues, such as robustness under adverse acoustic environments and domain mismatch, when deploying speaker recognition systems. Presenting state-of-the-art machine learning techniques for speaker recognition and featuring a range of probabilistic models, learning algorithms, case studies, and new trends and directions for speaker recognition based on modern machine learning and deep learning, this is the perfect resource for graduates, researchers, practitioners and engineers in electrical engineering, computer science and applied mathematics.
This book is for developers who are looking for an overview of basic concepts in Natural Language Processing. It casts a wide net of techniques to help developers who have a range of technical backgrounds. Numerous code samples and listings are included to support myriad topics. The first chapter shows you various details of managing data that are relevant for NLP. The next pair of chapters contain NLP concepts, followed by another pair of chapters with Python code samples to illustrate those NLP concepts. Chapter 6 explores applications, e.g., sentiment analysis, recommender systems, COVID-19 analysis, spam detection, and a short discussion regarding chatbots. The final chapter presents the Transformer architecture, BERT-based models, and the GPT family of models, all of which were developed during the past three years and considered SOTA ("state of the art"). The appendices contain introductory material (including Python code samples) on regular expressions and probability/statistical concepts. Companion files with source code and figures are included. FEATURES: covers extensive topics related to natural language processing; includes separate appendices on regular expressions and probability/statistics; features companion files with source code and figures from the book.
In this brief, the authors discuss recently explored spectral (sub-segmental and pitch synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidences obtained from excitation source, vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. Proposed spectral and prosodic features are evaluated on real life emotional speech corpus.
In everyday communication, Europe's citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, who cooperate with stakeholders from economy, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET's vision is high-quality language technology for all European languages. "The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society." - Dr. Pedro Passos Coelho (Prime-Minister of Portugal) "It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world." - Dr. Danilo Turk (President of the Republic of Slovenia) "For such small languages like Latvian keeping up with the ever increasing pace of time and technological development is crucial. The only way to ensure future existence of our language is to provide its users with equal opportunities as the users of larger languages enjoy. Therefore being on the forefront of modern technologies is our opportunity." - Valdis Dombrovskis (Prime Minister of Latvia) "Europe's inherent multilingualism and our scientific expertise are the perfect prerequisites for significantly advancing the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies." - Prof. Dr. Annette Schavan (German Minister of Education and Research)
Designing machines that can read handwriting like human beings has been an ambitious goal for more than half a century, driving talented researchers to explore diverse approaches. Obstacles have often been encountered that at first appeared insurmountable but were indeed overcome before long. Yet some open issues remain to be solved. As an indispensable branch, Chinese handwriting recognition has been termed one of the most difficult Pattern Recognition tasks. Chinese handwriting recognition poses its own unique challenges, such as huge variations in strokes, diversity of writing styles, and a large set of confusable categories. With ever-increasing training data, researchers have pursued elaborate algorithms to discern characters from different categories and compensate for the sample variations within the same category. As a result, Chinese handwriting recognition has evolved substantially and amazing achievements can be seen. This book introduces integral algorithms used in Chinese handwriting recognition and the applications of Chinese handwriting recognizers. The first part of the book covers both the widespread canonical algorithms behind a reliable recognizer and newly developed scalable methods in Chinese handwriting recognition. The recognition of Chinese handwritten text is presented systematically, including instructive guidelines for collecting samples, novel recognition paradigms, distributed discriminative learning of appearance models and distributed estimation of contextual models for large categories, in addition to celebrated methods, e.g. Gradient features, MQDF and HMMs. In the second part of this book, endeavors are made to create a friendlier human-machine interface through application of Chinese handwriting recognition. Four scenarios are exemplified: grid-assisted input, shortest moving input, handwritten micro-blog, and instant handwriting messenger.
All the while, the book moves from basic to more complex approaches, also providing a list for further reading with literature comments.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International ICST Conference on Ambient Systems and Media, AMBI-SYS 2011, held in Porto, Portugal in March 2011. The 10 revised full papers presented were carefully reviewed and selected, and cover a wide range of topics, such as innovative solutions in the field of ambient assisted living, providing a new physical basis for ambient intelligence while also leveraging contributions offered by interaction design methods and approaches.
In the light of upcoming global issues, concerning population, energy, the environment, and food, information and communication technologies are required to overcome difficulties in communication among cultures. In this context, the First International Conference on Culture and Computing, which was held in Kyoto, Japan, in February 2010, was conceived as a collection of symposia, panels, workshops, exhibitions, and guided tours intended to share issues, activities, and research results regarding culture and computing. This volume includes 17 invited and selected papers dealing with state-of-the-art topics in culturally situated agents, intercultural collaboration and support systems, culture and computing for art and heritage, as well as culture and computing within regional communities.
This book takes an idea first explored by medieval logicians 800 years ago and revisits it armed with the tools of contemporary linguistics, logic, and computer science. The idea - the Holy Grail of the medieval logicians - was the thought that all of logic could be reduced to two very simple rules that are sensitive to logical polarity (for example, the presence and absence of negations). Ludlow and Zivanovic pursue this idea and show how it has profound consequences for our understanding of the nature of human inferential capacities. They also show its consequences for some of the deepest issues in contemporary linguistics, including the nature of quantification, puzzles about discourse anaphora and pragmatics, and even insights into the source of aboutness in natural language. The key to their enterprise is a formal relation they call "p-scope" - a polarity-sensitive relation that controls the operations that can be carried out in their Dynamic Deductive System. They show that with p-scope in play, deductions can be carried out using sublogical operations like those they call COPY and PRUNE - operations that are simple syntactic operations on sentences. They prove that the resulting deductive system is complete and sound. The result is a beautiful formal tapestry in which p-scope unlocks important properties of natural language, including the property of "restrictedness," which they prove to be equivalent to the semantic notion of conservativity. More than that, they show that restrictedness is also a key to understanding quantification and discourse anaphora, and many other linguistic phenomena.
The rich programme of ICIDS 2009, comprising invited talks, technical presentations and posters, demonstrations, and co-located post-conference workshops, clearly underscores the event's status as the premier international meeting in the domain. It thereby confirms the decision taken by the Constituting Committee of the conference series to take the step forward: out of the national cocoons of its precursors, ICVS and TIDSE, and towards an itinerant platform reflecting its global constituency. This move reflects the desire and the will to take on the challenge to stay on the lookout, critically reflect upon and integrate views and ideas, findings and experiences, and to promote interdisciplinary exchange, while ensuring overall coherence and maintaining a sense of direction. This is a significant enterprise: the challenges sought are multifarious and must be addressed consistently at all levels. The desire to involve all research communities and stakeholders must be matched by acknowledging the differences in established practices and by providing suitable means of guidance and introduction, exposition and direct interaction at the event itself, and of lasting (and increasingly: living) documentation, of which the present proceedings are but an important part.
This volume contains papers presented at the 19th International Conference on Algorithmic Learning Theory (ALT 2008), which was held in Budapest, Hungary during October 13-16, 2008. The conference was co-located with the 11th International Conference on Discovery Science (DS 2008). The technical program of ALT 2008 contained 31 papers selected from 46 submissions, and 5 invited talks. The invited talks were presented in joint sessions of both conferences. ALT 2008 was the 19th in the ALT conference series, established in Japan in 1990. The series Analogical and Inductive Inference is a predecessor of this series: it was held in 1986, 1989 and 1992, co-located with ALT in 1994, and subsequently merged with ALT. ALT maintains its strong connections to Japan, but has also been held in other countries, such as Australia, Germany, Italy, Singapore, Spain and the USA. The ALT conference series is supervised by its Steering Committee: Naoki Abe (IBM T. J.
The Reasoning Web summer school series is a well-established event, attracting experts from academia and industry as well as PhD students interested in foundational and applicational aspects of the Semantic Web. This volume contains the lecture notes of the fourth summer school, which took place in Venice, Italy, in September 2008. This year, the school focussed on a number of important application domains in which semantic web techniques have proved to be particularly effective or promising in tackling problems. The first three chapters provide introductory material on: languages, formalisms, and standards adopted to encode semantic information; "soft" extensions that might be useful in contexts such as multimedia or social network applications; and controlled natural language techniques to bring ontology authoring closer to end users. The remaining chapters cover major application areas such as social networks, semantic multimedia indexing and retrieval, bioinformatics, and semantic web services. The presentations highlighted which techniques are already being successfully applied for purposes such as improving the performance of information retrieval algorithms, enabling the interoperation of heterogeneous agents, modelling users' profiles and social relations, and standardizing and improving the accuracy of very large and dynamic scientific databases. Furthermore, the lectures pointed out which aspects are still waiting for a solution, and the possible role that semantic techniques may play, especially those reasoning methods that have not yet been exploited to their full potential. We hope that the school's material will inspire further exciting research in these areas. We are grateful to all the lecturers and their co-authors for their excellent contributions, to the Reasoning Web School Board, and Norbert Eisinger in particular, who helped in several critical phases, and to the organizations that supported this event: the University of Padua, the MOST project, and the Network of Excellence REWERSE.
This book is the first to provide a comprehensive survey of the computational models and methodologies used for studying the evolution and origin of language and communication. Comprising contributions from the most influential figures in the field, it presents and summarises the state of the art in computational approaches to language evolution, and highlights new lines of development. Essential reading for researchers and students in the fields of evolutionary and adaptive systems, language evolution modelling and linguistics, it will also be of interest to researchers working on applications of neural networks to language problems. Furthermore, because language evolution models use multi-agent methodologies, it will also be of great interest to computer scientists working on multi-agent systems, robotics and internet agents.