Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
This book investigates the nature of generalization in language and examines how language is known by adults and acquired by children. It looks at how and why constructions are learned, the relation between their forms and functions, and how cross-linguistic and language-internal generalizations about them can be explained.
EsTAL (España for Natural Language Processing) continued on from the three previous conferences: FracTAL, held at the Université de Franche-Comté, Besançon (France), in December 1997; VexTAL, held at Venice International University, Ca' Foscari (Italy), in November 1999; and PorTAL, held at the Universidade do Algarve, Faro (Portugal), in June 2002. The main goals of these conferences have been: (i) to bring together the international NLP community; (ii) to strengthen the position of local NLP research in the international NLP community; and (iii) to provide a forum for discussion of new research and applications. EsTAL contributed to achieving these goals and increasing the already high international standing of these conferences, largely due to its Program Committee, composed of renowned researchers in the field of natural language processing and its applications. This clearly contributed to the significant number of papers submitted (72) by researchers from 18 different countries. The scope of the conference was structured around the following main topics: (i) computational linguistics research (spoken and written language analysis and generation; pragmatics, discourse, semantics, syntax and morphology; lexical resources; word sense disambiguation; linguistic, mathematical, and psychological models of language; knowledge acquisition and representation; corpus-based and statistical language modelling; machine translation and translation aids; computational lexicography), and (ii) monolingual and multilingual intelligent language processing and applications (information retrieval, extraction and question answering; automatic summarization; document categorization; natural language interfaces; dialogue systems and evaluation of systems).
This book discusses the connection between two areas of semantics, namely the semantics of databases and the semantics of natural language, and links them via a common view of the semantics of time. It is argued that a coherent theory of the semantics of time is an essential ingredient for the success of efforts to incorporate more 'real world' semantics into database models. This idea is a relatively recent concern of database research, but it is receiving growing interest. The book begins with a discussion of database querying, which motivates the use of the paradigm of Montague Semantics and discusses the details of the intensional logic ILs. This is followed by a description of the author's own model, the Historical Relational Data Model (HRDM), which extends the RDM to include a temporal dimension. Finally, the database querying language QE-III is defined and examples illustrate its use. A formal model for the interpretation of questions is presented in this work, which will form the basis for much further research.
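The core idea of a temporal dimension in a relational model can be illustrated compactly: every tuple carries an interval over which it is valid, and queries are evaluated at a time point. The sketch below is a toy illustration under that assumption, not the book's HRDM or its QE-III query language.

```python
# Toy "historical relation": each fact carries a validity interval.
from datetime import date

employees = [
    # (name, dept, valid_from, valid_to)
    ("Ada", "R&D",   date(2020, 1, 1), date(2021, 6, 30)),
    ("Ada", "Sales", date(2021, 7, 1), date(9999, 12, 31)),  # open-ended
]

def dept_at(name, when):
    """Evaluate the query 'which department?' at a given time point."""
    for n, dept, start, end in employees:
        if n == name and start <= when <= end:
            return dept
    return None

print(dept_at("Ada", date(2021, 1, 15)))  # -> R&D
print(dept_at("Ada", date(2022, 1, 15)))  # -> Sales
```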
This book presents recent advances by leading researchers in computational modelling of language acquisition. The contributors, from departments of linguistics, cognitive science, psychology, and computer science, combine powerful computational techniques with real data and in doing so throw new light on the operations of the brain and the mind. They explore the extent to which linguistic structure is innate and/or available in a child's environment, and the degree to which language learning is inductive or deductive. They assess the explanatory power of different models. The book will appeal to all those working in language acquisition.
This book constitutes the refereed proceedings of the 19th International Conference on Data Analytics and Management in Data Intensive Domains, DAMDID/RCDL 2017, held in Moscow, Russia, in October 2017. The 16 revised full papers presented together with three invited papers were carefully reviewed and selected from 75 submissions. The papers are organized in the following topical sections: data analytics; next generation genomic sequencing: challenges and solutions; novel approaches to analyzing and classifying various astronomical entities and events; ontology population in data intensive domains; heterogeneous data integration issues; data curation and data provenance support; and temporal summaries generation.
This book explains how to build Natural Language Generation (NLG) systems--computer software systems that automatically generate understandable texts in English or other human languages. NLG systems use knowledge about language and the application domain to automatically produce documents, reports, explanations, help messages, and other kinds of texts. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization, using a case study to show how these components fit together. It is essential reading for researchers interested in NLP, AI, and HCI; and for developers interested in advanced document-creation technology.
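The document planning / microplanning / surface realization decomposition the blurb describes can be sketched in a few lines. The following is a minimal illustrative pipeline; the message format and function names are invented for illustration, not code from the book.

```python
# Minimal sketch of the three core NLG tasks (names and message
# format are illustrative assumptions).

def document_planner(data):
    """Decide which messages to convey and in what order."""
    return [("temperature", data["temp_c"]), ("outlook", data["outlook"])]

def microplanner(messages):
    """Choose wording and turn messages into sentence specifications."""
    specs = []
    for kind, value in messages:
        if kind == "temperature":
            specs.append(f"the temperature is {value} degrees Celsius")
        elif kind == "outlook":
            specs.append(f"the outlook is {value}")
    return specs

def surface_realizer(specs):
    """Render sentence specifications as grammatical text."""
    return " ".join(s[0].upper() + s[1:] + "." for s in specs)

weather = {"temp_c": 21, "outlook": "partly cloudy"}
print(surface_realizer(microplanner(document_planner(weather))))
# -> The temperature is 21 degrees Celsius. The outlook is partly cloudy.
```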
ThoughtTreasure is a commonsense knowledge base and architecture for natural language processing. It uses multiple representations including logic, finite automata, grids, and scripts. The ThoughtTreasure architecture consists of: the text agency, containing text agents for recognizing words, phrases, and names, and mechanisms for learning new words and inflections; the syntactic component, containing a syntactic parser, base rules, and filters; the semantic component, containing a semantic parser for producing a surface-level understanding of a sentence, a natural language generator, and an anaphoric parser for resolving anaphoric entities such as pronouns; the planning agency, containing planning agents for achieving goals on behalf of simulated actors; and the understanding agency, containing understanding agents for producing a more detailed understanding of a discourse.
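As a rough picture of how such a layered architecture hands a sentence from one component to the next, here is a toy sketch; the class names, data shapes, and the omission of the planning agency are all simplifying assumptions, not ThoughtTreasure's actual implementation.

```python
# Toy sketch of a layered NLU architecture (illustrative only).

class TextAgency:
    def recognize(self, text):
        # Text agents recognize words, phrases, and names; here: tokenize.
        return text.split()

class SyntacticComponent:
    def parse(self, tokens):
        # A real parser applies base rules and filters; here: a flat tree.
        return ("S", tokens)

class SemanticComponent:
    def understand(self, tree):
        # Produce a surface-level reading; anaphora resolution would go here.
        _, tokens = tree
        return {"assertion": tokens}

class UnderstandingAgency:
    def deepen(self, surface):
        # Understanding agents enrich the surface reading with commonsense.
        return {**surface, "level": "discourse"}

stages = [TextAgency().recognize, SyntacticComponent().parse,
          SemanticComponent().understand, UnderstandingAgency().deepen]
result = "Mary drinks coffee"
for stage in stages:
    result = stage(result)
print(result)  # {'assertion': ['Mary', 'drinks', 'coffee'], 'level': 'discourse'}
```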
Originally published in 1997, this book is concerned with human language technology. This technology provides computers with the capability to handle spoken and written language. One major goal is to improve communication between humans and machines. If people can use their own language to access information, work with software applications, and control machinery, the greatest obstacle to the acceptance of new information technology is overcome. Another important goal is to facilitate communication among people. Machines can help to translate texts or spoken input from one human language to another. Programs that assist people in writing by checking orthography, grammar and style are constantly improving. This book was sponsored by the Directorate General XIII of the European Union and the Information Science and Engineering Directorate of the National Science Foundation, USA.
Computer processing of natural language is a burgeoning field, but until now there has been no agreement on a standardized classification of the diverse structural elements that occur in real-life language material. This book attempts to define a 'Linnaean taxonomy' for the English language: an annotation scheme, the SUSANNE scheme, which yields a labelled constituency structure for any string of English, comprehensively identifying all of its surface and logical structural properties. The structure is specified with sufficient rigour that analysts working independently must produce identical annotations for a given example. The scheme is based on large samples of real-life use of British and American written and spoken English. The book also describes the SUSANNE electronic corpus of English, which is annotated in accordance with the scheme. It is freely available as a research resource to anyone working at a computer connected to the Internet, and since 1992 it has come into widespread use in academic and commercial research environments on four continents.
This book constitutes the refereed proceedings of the 4th International Symposium on Information Management in a Changing World, IMCW 2013, held in Limerick, Ireland, in September 2013. The 12 revised full papers presented together with three keynotes were carefully reviewed and selected from 31 submissions. The papers deal with the following topics: Cloud Architectures and Cultural Memory; Cloud Computing Beyond the Obvious: An Approach for Innovation; Cloud Computing: A New Generation of Technology Enables Deeper Collaboration; Evaluation of Conditions Regarding Cloud Computing Applications in Turkey, EU and the USA; Trustworthy Digital Images and the Cloud: Early Findings of the Records in the Cloud Project; Cloud Computing and Copyright: New Challenges in Legal Protection?; Clouding Big Data: Information Privacy Considerations; The Influence of Recent Court Cases Relating to Copyright Changes in Cloud Computing Services in Japan; Government Participation in Digital Copyright Licensing in the Cloud Computing Environment; Evaluation of Information Security Approaches: A Defense Industry Organization Case; Information-Seeking Behavior of Undergraduate, Graduate, and Doctoral Students: A Survey of Istanbul University, Turkey; Students' Readiness for E-Learning: An Assessment on Hacettepe University Department of Information Management; Evaluation of Scientific Disciplines in Turkey: A Citation Analysis Study.
Designing machines that can read handwriting like human beings has been an ambitious goal for more than half a century, driving talented researchers to explore diverse approaches. Obstacles have often been encountered that at first appeared insurmountable but were overcome before long. Yet some open issues remain to be solved. Chinese handwriting recognition, an indispensable branch of the field, has been termed one of the most difficult pattern recognition tasks. It poses its own unique challenges, such as huge variations in strokes, diversity of writing styles, and a large set of confusable categories. With ever-increasing training data, researchers have pursued elaborate algorithms to discern characters from different categories and to compensate for the sample variations within the same category. As a result, Chinese handwriting recognition has evolved substantially and impressive achievements can be seen. This book introduces the integral algorithms used in Chinese handwriting recognition and the applications of Chinese handwriting recognizers. The first part of the book covers both the widespread canonical algorithms behind a reliable recognizer and newly developed scalable methods in Chinese handwriting recognition. The recognition of Chinese handwritten text is presented systematically, including instructive guidelines for collecting samples, novel recognition paradigms, distributed discriminative learning of appearance models and distributed estimation of contextual models for large category sets, in addition to celebrated methods, e.g. gradient features, MQDF and HMMs. In the second part of the book, endeavors are made to create a friendlier human-machine interface through the application of Chinese handwriting recognition. Four scenarios are exemplified: grid-assisted input, shortest moving input, handwritten micro-blog, and instant handwriting messenger. All the while, the book moves from basic to more complex approaches, also providing a further-reading list with comments on the literature.
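Of the celebrated methods the blurb names, gradient direction features are the easiest to sketch: local gradient directions of the character image are quantized into a few orientation planes and pooled over a coarse grid. The code below is a hedged illustration of that idea in plain NumPy; the grid size, number of directions, and pooling scheme are assumptions, not the book's exact recipe.

```python
import numpy as np

def gradient_direction_features(img, grid=8, directions=8):
    """Quantize local gradient directions and pool over a grid x grid map."""
    gy, gx = np.gradient(img.astype(float))      # gradients along rows/cols
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # direction in [0, 2*pi)
    bins = (ang / (2 * np.pi) * directions).astype(int) % directions
    h, w = img.shape
    feats = np.zeros((grid, grid, directions))
    for i in range(h):
        for j in range(w):
            feats[i * grid // h, j * grid // w, bins[i, j]] += mag[i, j]
    return feats.ravel()

# A synthetic "stroke" image: 64x64 with one vertical bar.
img = np.zeros((64, 64))
img[10:54, 30:34] = 1.0
print(gradient_direction_features(img).shape)  # (512,) = 8 * 8 * 8
```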
In this brief, the authors discuss recently explored spectral features (sub-segmental and pitch-synchronous) and prosodic features (global and local features at word and syllable levels in different parts of the utterance) for discerning emotions in a robust manner. The authors also delve into the complementary evidence obtained from the excitation source, the vocal tract system, and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking-rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
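To make the spectral-versus-prosodic distinction concrete, here is a minimal feature extraction sketch using librosa. MFCCs stand in for the spectral side and pitch/energy statistics for the prosodic side; this is an illustrative assumption, not the authors' actual feature set.

```python
import numpy as np
import librosa

def emotion_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)

    # Spectral evidence: frame-level MFCCs summarized over the utterance.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    spectral = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Prosodic evidence: pitch (F0) contour and energy statistics.
    f0, _, _ = librosa.pyin(y, fmin=50, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]  # keep voiced frames only
    energy = librosa.feature.rms(y=y)[0]
    prosodic = np.array([f0.mean(), f0.std(), energy.mean(), energy.std()])

    return np.concatenate([spectral, prosodic])  # 30-dimensional vector
```

A classifier (e.g. an SVM or a GMM per emotion class) would then be trained on such vectors; the multi-stage and hybrid models the brief describes combine several such evidence streams.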
In everyday communication, Europe's citizens, business partners and politicians are inevitably confronted with language barriers. Language technology has the potential to overcome these barriers and to provide innovative interfaces to technologies and knowledge. This document presents a Strategic Research Agenda for Multilingual Europe 2020. The agenda was prepared by META-NET, a European Network of Excellence. META-NET consists of 60 research centres in 34 countries, which cooperate with stakeholders from business, government agencies, research organisations, non-governmental organisations, language communities and European universities. META-NET's vision is high-quality language technology for all European languages. "The research carried out in the area of language technology is of utmost importance for the consolidation of Portuguese as a language of global communication in the information society." - Dr. Pedro Passos Coelho (Prime Minister of Portugal) "It is imperative that language technologies for Slovene are developed systematically if we want Slovene to flourish also in the future digital world." - Dr. Danilo Turk (President of the Republic of Slovenia) "For such small languages like Latvian keeping up with the ever increasing pace of time and technological development is crucial. The only way to ensure future existence of our language is to provide its users with equal opportunities as the users of larger languages enjoy. Therefore being on the forefront of modern technologies is our opportunity." - Valdis Dombrovskis (Prime Minister of Latvia) "Europe's inherent multilingualism and our scientific expertise are the perfect prerequisites for significantly advancing the challenge that language technology poses. META-NET opens up new opportunities for the development of ubiquitous multilingual technologies." - Prof. Dr. Annette Schavan (German Minister of Education and Research)
This book constitutes the thoroughly refereed post-conference proceedings of the Second International ICST Conference on Ambient Systems and Media, AMBI-SYS 2011, held in Porto, Portugal, in March 2011. The 10 revised full papers presented were carefully reviewed and selected. They cover a wide range of topics, such as innovative solutions in the field of ambient assisted living, providing a new physical basis for ambient intelligence while also leveraging contributions offered by interaction design methods and approaches.
This book takes an idea first explored by medieval logicians 800 years ago and revisits it armed with the tools of contemporary linguistics, logic, and computer science. The idea - the Holy Grail of the medieval logicians - was the thought that all of logic could be reduced to two very simple rules that are sensitive to logical polarity (for example, the presence and absence of negations). Ludlow and Zivanovic pursue this idea and show how it has profound consequences for our understanding of the nature of human inferential capacities. They also show its consequences for some of the deepest issues in contemporary linguistics, including the nature of quantification, puzzles about discourse anaphora and pragmatics, and even insights into the source of aboutness in natural language. The key to their enterprise is a formal relation they call "p-scope" - a polarity-sensitive relation that controls the operations that can be carried out in their Dynamic Deductive System. They show that with p-scope in play, deductions can be carried out using sublogical operations like those they call COPY and PRUNE - operations that are simple syntactic operations on sentences. They prove that the resulting deductive system is complete and sound. The result is a beautiful formal tapestry in which p-scope unlocks important properties of natural language, including the property of "restrictedness," which they prove to be equivalent to the semantic notion of conservativity. More than that, they show that restrictedness is also a key to understanding quantification and discourse anaphora, and many other linguistic phenomena.
This book constitutes the refereed proceedings of the International Conference on Information Systems for Indian Languages, ICISIL 2011, held in Patiala, India, in March 2011. The 63 revised papers presented were carefully reviewed and selected from 126 paper submissions (full papers as well as poster papers) and 25 demo submissions. The papers address all current aspects of localization, e-governance, Web content accessibility, search engines and information retrieval systems, online and offline OCR, handwriting recognition, machine translation and transliteration, and text-to-speech and speech recognition, all with a particular focus on Indic scripts and languages.
In light of looming global issues concerning population, energy, the environment, and food, information and communication technologies are required to overcome difficulties in communication among cultures. In this context, the First International Conference on Culture and Computing, which was held in Kyoto, Japan, in February 2010, was conceived as a collection of symposia, panels, workshops, exhibitions, and guided tours intended to share issues, activities, and research results regarding culture and computing. This volume includes 17 invited and selected papers dealing with state-of-the-art topics in culturally situated agents, intercultural collaboration and support systems, culture and computing for art and heritage, and culture and computing within regional communities.
The use of differing input and output equipment (scanners, monitors, printers, etc.) in computer-aided publishing often results in the unsatisfactory reproduction of color originals in print and online media. This is the first book to present the basics and strategies of color management in the print publishing workflow, with a focus on production according to ISO 12647-2 and other standards. The reader learns what to expect from color management according to the ICC standard and how to avoid the pitfalls. The terminology is oriented toward practicing professionals in print production.
The rich programme of ICIDS 2009, comprising invited talks, technical presentations and posters, demonstrations, and co-located post-conference workshops, clearly underscores the event's status as the premier international meeting in the domain. It thereby confirms the decision taken by the Constituting Committee of the conference series to take the step forward: out of the national cocoons of its precursors, ICVS and TIDSE, and towards an itinerant platform reflecting its global constituency. This move reflects the desire and the will to take on the challenge to stay on the lookout, critically reflect upon and integrate views and ideas, findings and experiences, and to promote interdisciplinary exchange, while ensuring overall coherence and maintaining a sense of direction. This is a significant enterprise: the challenges sought are multifarious and must be addressed consistently at all levels. The desire to involve all research communities and stakeholders must be matched by acknowledging the differences in established practices and by providing suitable means of guidance and introduction, exposition and direct interaction at the event itself and of lasting (and increasingly: living) documentation, of which the present proceedings are but an important part.
The Reasoning Web summer school series is a well-established event, attracting experts from academia and industry as well as PhD students interested in foundational and applicational aspects of the Semantic Web. This volume contains the lecture notes of the fourth summer school, which took place in Venice, Italy, in September 2008. This year, the school focussed on a number of important application domains in which semantic web techniques have proved to be particularly effective or promising in tackling problems. The first three chapters provide introductory material on: languages, formalisms, and standards adopted to encode semantic information; "soft" extensions that might be useful in contexts such as multimedia or social network applications; and controlled natural language techniques to bring ontology authoring closer to end users. The remaining chapters cover major application areas such as social networks, semantic multimedia indexing and retrieval, bioinformatics, and semantic web services. The presentations highlighted which techniques are already being successfully applied for purposes such as improving the performance of information retrieval algorithms, enabling the interoperation of heterogeneous agents, modelling users' profiles and social relations, and standardizing and improving the accuracy of very large and dynamic scientific databases. Furthermore, the lectures pointed out which aspects are still waiting for a solution, and the possible role that semantic techniques may play, especially those reasoning methods that have not yet been exploited to their full potential. We hope that the school's material will inspire further exciting research in these areas. We are grateful to all the lecturers and their co-authors for their excellent contributions, to the Reasoning Web School Board, and Norbert Eisinger in particular, who helped in several critical phases, and to the organizations that supported this event: the University of Padua, the MOST project, and the Network of Excellence REWERSE.
Unraveling the Voynich Codex reviews the historical, botanical, zoological, and iconographic evidence related to the Voynich Codex, one of the most enigmatic historic texts of all time. The bizarre Voynich Codex has often been referred to as the most mysterious book in the world. Discovered in an Italian Catholic college in 1912 by the Polish book dealer Wilfrid Voynich, it was eventually bequeathed to the Beinecke Rare Book and Manuscript Library of Yale University. It contains symbolic language that has defied translation by eminent cryptologists. The codex is encyclopedic in scope and contains sections known as herbal, pharmaceutical, balneological (nude nymphs bathing in pools), astrological, and cosmological, plus a final section of text that may be prescriptions but could be poetry or incantations. Because the vellum has been carbon-dated to the early 15th century and the manuscript was known to be in the collection of Emperor Rudolf II of the Holy Roman Empire sometime between 1607 and 1622, current dogma had assumed it to be a European manuscript of the 15th century. However, based on the identification of New World plants, animals, a mineral, as well as cities and volcanoes of Central Mexico, the authors of this book reveal that the codex is clearly a document of colonial New Spain. Furthermore, the illustrator and author are identified as native to Mesoamerica based on a name and ligated initials in the first botanical illustration. This breakthrough in Voynich studies indicates that the failure to decipher the manuscript has been the result of a basic misinterpretation of its origin in time and place. Tentative assignment of the Voynichese symbols also provides a key to decipherment based on Mesoamerican languages. A document from this time, free from filtering or censorship by either Spanish or Inquisitorial authorities, has major importance in our understanding of life in 16th-century Mexico. Publisher's Note: For the eBook editions, Voynichese symbols are only rendered properly in the PDF format.
This volume contains papers presented at the 19th International Conference on Algorithmic Learning Theory (ALT 2008), which was held in Budapest, Hungary, during October 13-16, 2008. The conference was co-located with the 11th International Conference on Discovery Science (DS 2008). The technical program of ALT 2008 contained 31 papers selected from 46 submissions, and 5 invited talks. The invited talks were presented in joint sessions of both conferences. ALT 2008 was the 19th in the ALT conference series, established in Japan in 1990. The series Analogical and Inductive Inference is a predecessor of this series: it was held in 1986, 1989 and 1992, co-located with ALT in 1994, and subsequently merged with ALT. ALT maintains its strong connections to Japan, but has also been held in other countries, such as Australia, Germany, Italy, Singapore, Spain and the USA. The ALT conference series is supervised by its Steering Committee: Naoki Abe (IBM T. J.
You may like...
Advances in Generative Lexicon Theory - James Pustejovsky, Pierrette Bouillon, … (Hardcover) R5,211
Foundation Models for Natural Language… - Gerhard Paaß, Sven Giesselbach (Hardcover) R935
Python Programming for Computations… - Computer Language (Hardcover)
Annotation, Exploitation and Evaluation… - Silvia Hansen-Schirra, Sambor Grucza (Hardcover) R991