This book constitutes the refereed proceedings of the Third
International Colloquium on Grammatical Inference, ICGI-96, held in
Montpellier, France, in September 1996.
This volume of papers grew out of a research project on "Cross-Linguistic Quantification" originated by Emmon Bach, Angelika Kratzer and Barbara Partee in 1987 at the University of Massachusetts at Amherst, and supported by National Science Foundation Grant BNS 871999. The publication also reflects directly or indirectly several other related activities. Bach, Kratzer, and Partee organized a two-evening symposium on cross-linguistic quantification at the 1988 Annual Meeting of the Linguistic Society of America in New Orleans (held without financial support) in order to bring the project to the attention of the linguistic community and solicit ideas and feedback from colleagues who might share our concern for developing a broader typological basis for research in semantics and a better integration of descriptive and theoretical work in the area of quantification in particular. The same trio organized a six-week workshop and open lecture series and related one-day conference on the same topic at the 1989 LSA Linguistic Institute at the University of Arizona in Tucson, supported by a supplementary grant, NSF grant BNS-8811250, and Partee offered a seminar on the same topic as part of the Institute course offerings. Eloise Jelinek, who served as a consultant on the principal grant and was a participant in the LSA symposium and the Arizona workshops, joined the group of editors for this volume in 1989.
This volume presents the proceedings of the Second International
Colloquium on Grammatical Inference (ICGI-94), held in Alicante,
Spain, in September 1994.
This book constitutes the refereed proceedings of the 13th International Tbilisi Symposium on Logic, Language and Computation, TbiLLC 2019, held in Batumi, Georgia, in September 2019. The volume contains 17 full revised papers presented at the conference from 17 submissions. The scientific program consisted of tutorials, invited lectures, contributed talks, and two workshops. The symposium offered two tutorials, one in language and one in logic, aimed at students as well as researchers working in other areas: * Language: Sign language linguistics: state of the art, by Fabian Bross (University of Stuttgart, Germany) * Logic: Axiomatic semantics, by Graham E. Leigh (University of Gothenburg, Sweden)
From tech giants to plucky startups, the world is full of companies boasting that they are on their way to replacing human interpreters, but are they right? Interpreters vs Machines offers a solid introduction to recent theory and research on human and machine interpreting, and then invites the reader to explore the future of interpreting. With a foreword by Dr Henry Liu, the 13th International Federation of Translators (FIT) President, and written by consultant interpreter and researcher Jonathan Downie, this book offers a unique combination of research and practical insight into the field of interpreting. Written in an innovative, accessible style with humorous touches and real-life case studies, this book is structured around the metaphor of playing and winning a computer game. It takes interpreters of all experience levels on a journey to better understand their own work, learn how computers attempt to interpret and explore possible futures for human interpreters. With five levels and split into 14 chapters, Interpreters vs Machines is key reading for all professional interpreters as well as students and researchers of Interpreting and Translation Studies, and those with an interest in machine interpreting.
The Translator's Workbench Project was a European Community sponsored research and development project which dealt with issues in multi-lingual communication and documentation. This book presents an integrated toolset as a solution to problems in translation and documentation. Professional translators and teachers of translation were involved in the process of software development, starting with a detailed study of the user requirements and ending with several evaluation-and-improvement cycles of the resulting toolset. English, German, Greek, and Spanish are addressed in the contributions; however, some of the techniques are inherently language-independent and can thus be extended to cover other languages as well. Translation can be viewed broadly as the execution of three cognitive processes, and this book has been structured along these lines: * First, the translation pre-process, understanding the target language text at a lexico-semantic level on the one hand, and making sense of the source language document on the other hand. The tools for the pre-translation process include access to electronic networks, conversion of documents from one format to another, creation of terminology data banks and access to existing data banks, and terminology dictionaries. * Second, the translation process, rendering sentences in the source language into equivalent target sentences. The translation process refers to the potential of conventional machine translation systems, like METAL, and of the statistically oriented translation memory.
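The translation memory mentioned above can be illustrated with a minimal sketch in Python (not part of the Translator's Workbench toolset and unrelated to METAL): a translation memory stores previously translated sentence pairs and, for a new sentence, retrieves the closest stored source sentence as a candidate translation. The sample pairs, the similarity threshold, and the use of difflib for fuzzy matching are illustrative assumptions only.

```python
from difflib import SequenceMatcher

# Toy translation memory: previously translated (source, target) pairs.
# The example pairs are invented for illustration only.
memory = [
    ("The printer is out of paper.", "Dem Drucker ist das Papier ausgegangen."),
    ("Close all open documents.", "Schliessen Sie alle geoeffneten Dokumente."),
]

def lookup(sentence, threshold=0.7):
    """Return the best fuzzy match from the memory, or None if nothing is similar enough."""
    best_pair, best_score = None, 0.0
    for source, target in memory:
        score = SequenceMatcher(None, sentence.lower(), source.lower()).ratio()
        if score > best_score:
            best_pair, best_score = (source, target), score
    return (best_pair, best_score) if best_score >= threshold else (None, best_score)

match, score = lookup("Close all the open documents.")
print(match, round(score, 2))  # the stored pair is offered as a candidate translation
```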
This volume constitutes the proceedings of the Third International
Workshop of the European Association for Machine Translation, held
in Heidelberg, Germany in April 1993. The EAMT Workshops
traditionally aim at bringing together researchers, developers,
users, and others interested in the field of machine or
computer-assisted translation research, development and use.
Computer processing of natural language is a burgeoning field, but until now there has been no agreement on a standardized classification of the diverse structural elements that occur in real-life language material. This book attempts to define a 'Linnaean taxonomy' for the English language: an annotation scheme, the SUSANNE scheme, which yields a labelled constituency structure for any string of English, comprehensively identifying all of its surface and logical structural properties. The structure is specified with sufficient rigour that analysts working independently must produce identical annotations for a given example. The scheme is based on large samples of real-life use of British and American written and spoken English. The book also describes the SUSANNE electronic corpus of English which is annotated in accordance with the scheme. It is freely available as a research resource to anyone working at a computer connected to the Internet, and since 1992 has come into widespread use in academic and commercial research environments on four continents.
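To make the phrase "labelled constituency structure" concrete, here is a minimal Python sketch of a bracketed parse with category labels; the node labels (S, NP, VP, DT, NN, VBD) are generic Penn-style illustrations, not the SUSANNE scheme's own categories.

```python
# A toy labelled constituency tree for "The cat sat", represented as nested tuples
# (label, children...). The labels are generic illustrations, not the SUSANNE tagset.
tree = ("S",
        ("NP", ("DT", "The"), ("NN", "cat")),
        ("VP", ("VBD", "sat")))

def bracketed(node):
    """Render the tree in the familiar bracketed notation, e.g. [S [NP ...] ...]."""
    label, *children = node
    if len(children) == 1 and isinstance(children[0], str):
        return f"[{label} {children[0]}]"
    return "[" + label + " " + " ".join(bracketed(c) for c in children) + "]"

print(bracketed(tree))  # [S [NP [DT The] [NN cat]] [VP [VBD sat]]]
```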
Humans do a great job of reading text, identifying key ideas, summarizing, making connections, and other tasks that require comprehension and context. Recent advances in deep learning make it possible for computer systems to achieve similar results. Deep Learning for Natural Language Processing teaches you to apply deep learning methods to natural language processing (NLP) to interpret and use text effectively. In this insightful book, NLP expert Stephan Raaijmakers distills his extensive knowledge of the latest state-of-the-art developments in this rapidly emerging field. Key features: * An overview of NLP and deep learning * Models for textual similarity * Deep memory-based NLP * Semantic role labeling * Sequential NLP. Audience: For those with intermediate Python skills and general knowledge of NLP. No hands-on experience with Keras or deep learning toolkits is required. About the technology: Natural language processing is the science of teaching computers to interpret and process human language. Recently, NLP technology has leapfrogged to exciting new levels with the application of deep learning, a form of neural network-based machine learning. About the author: Stephan Raaijmakers is a senior scientist at TNO and holds a PhD in machine learning and text analytics. He's the technical coordinator of two large European Union-funded security-related research projects. He's currently anticipating an endowed professorship in deep learning and NLP at a major Dutch university.
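As a minimal, self-contained illustration of the "models for textual similarity" topic listed above (not taken from the book, which works with Keras and neural models), the sketch below computes a plain bag-of-words cosine similarity between two texts.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity of two texts over simple bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = set(va) & set(vb)
    dot = sum(va[w] * vb[w] for w in shared)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(round(cosine_similarity("deep learning for language", "deep learning for text"), 2))
```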
With this volume in honour of Don Walker, Linguistica Computazionale continues the series of special issues dedicated to outstanding personalities who have made a significant contribution to the progress of our discipline and maintained a special collaborative relationship with our Institute in Pisa. I take the liberty of quoting in this preface some of the initiatives Pisa and Don Walker have jointly promoted and developed during our collaboration, because I think that they might serve to illustrate some outstanding features of Don's personality, in particular his capacity for identifying areas of potential convergence among the different scientific communities within our field and establishing concrete forms of cooperation. These initiatives also testify to his continuous and untiring work, dedicated to putting people into contact and opening up communication between them, collecting and disseminating information, knowledge and resources, and creating shareable basic infrastructures needed for progress in our field. Our collaboration began within the Linguistics in Documentation group of the FID and continued in the framework of the CCL (International Committee for Computational Linguistics). In 1982 this collaboration was strengthened when, at COLING in Prague, I was invited by Don to join him in the organization of a series of workshops with participants of the various communities interested in the study, development, and use of computational lexica.
Although natural language processing has come far, the technology has not achieved a major impact on society. Is this because of some fundamental limitation that cannot be overcome? Or because there has not been enough time to refine and apply theoretical work already done? Editors Madeleine Bates and Ralph Weischedel believe it is neither; they feel that several critical issues have never been adequately addressed in either theoretical or applied work, and they have invited capable researchers in the field to do that in Challenges in Natural Language Processing. This volume will be of interest to researchers of computational linguistics in academic and non-academic settings and to graduate students in computational linguistics, artificial intelligence and linguistics.
This volume presents the proceedings of the Sixth International Workshop on Automated Natural Language Generation held in Castel Ivano, Trento, Italy, April 5-7, 1992. Besides an invited lecture by Nadia Magnenat-Thalmann, a well-known researcher in computer animation, on creating and visualizing speech and emotion, the volume includes the 17 thoroughly reviewed papers accepted for presentation, selected out of the submissions to the Workshop, as well as 11 statements contributed to panels on multilinguality and generation or extending language generation to multiple media. The accepted papers by leading researchers from Japan, North America and Europe fall into sections on generator system architecture, issues in realisation, issues in discourse structure, and beyond traditional generation.
In recent years, computer science, and artificial intelligence in particular, has seen a change in how the computer and its use are understood: from the notion of a sequentially operating functional unit to a distributed, interactive, parallel system of agents/actors. Computers are thus used not only as personal tools, but also as media for communication and as one among many intelligent partners in a distributed working environment. This volume contains all contributions to the 4th International GI Congress "Wissensbasierte Systeme" (Knowledge-Based Systems), which dealt mainly with distributed artificial intelligence and the support of cooperative decision-making and action, a topic of great importance for the practical use of knowledge processing, as well as with closely related areas such as knowledge representation, human-machine interaction, and natural language systems. Further focal points of the congress were the theory and application of neural networks, including presentations of all BMFT joint projects on neuroinformatics, and the field of qualitative model-based reasoning, which has recently become increasingly useful for modelling technical systems.
The 7th Austrian Artificial Intelligence Conference took place from 24 to 27 September 1991 at the Vienna University of Technology. Owing to strong participation from abroad it had a distinctly international character, which is why the present proceedings volume has been published bilingually. The topics covered from the field of artificial intelligence (AI) are represented by sixteen refereed contributions and two invited lectures. They span a broad thematic range, with particular emphases emerging in the areas of natural language, knowledge-based systems, and logic and reasoning.
Text and Context: Document Storage and Processing describes information processing techniques, including those which do not appear in conventional textbooks on database systems. It focuses on the input, storage, retrieval and presentation of primarily textual information, together with auxiliary material about graphic and video data. There are chapters on text analysis as a basis for lexicography, full-text databases and information retrieval, the use of optical storage for both ASCII text and scanned document images, hypertext and multi-media systems, abstract document definition, and document formatting and imaging. The material is treated in an informal way with an emphasis on real applications and software. There are, among others, case studies from Reuters, British Airways, St. Bartholomew's Hospital, Sony, and HMSO. Relevant industry standards are discussed, including ISO 9660 for CD-ROM file storage, CCITT Group 4 data compression, the Standard Generalised Markup Language and Office Document Architecture, and the PostScript language. Readers will benefit from the way Susan Jones has brought together this information, in a logical sequence, to highlight the connections between related topics. This book will be of interest to second and third year undergraduates and MSc students in computer science, to B/TEC HND final year computing and information science students either specialising in IT or taking an IT option, and to students taking courses in IT and in business computing systems.
The Conference on Natural Language Processing (KONVENS) is the first conference organised jointly by the following scientific societies: GI (Gesellschaft für Informatik e.V., Technical Committee 1.3 "Natural Language"), DGfS (Deutsche Gesellschaft für Sprachwissenschaft, Computational Linguistics Section), GLDV (Gesellschaft für Linguistische Datenverarbeitung), ITG/DEGA (Informationstechnische Gesellschaft in cooperation with the Deutsche Gesellschaft für Akustik), and ÖGAI (Österreichische Gesellschaft für Artificial Intelligence). It is intended to be the first in a series of conferences on natural language processing in the German-speaking countries, which the participating societies plan to hold every two years. Responsibility for the organisation will rotate among the organising societies; for KONVENS 92 the Gesellschaft für Informatik has the lead. KONVENS aims to offer a cross-section of current research in all areas of language processing. This requires the participation of all disciplines relevant to language processing, such as computer science, linguistics, psychology, and communications engineering. Besides fundamental research aspects and results, the contributions are also intended to cover innovative applications. Reports on successfully completed and implemented projects are particularly welcome. In addition, a focus topic is set in order to give impetus to important research directions. The focus topic chosen for KONVENS 92 is "Integration of acoustic and linguistic approaches". Three invited lectures on the focus topic are offered, as well as two introductory courses preceding the conference.
The European Workshop on Logics in Artificial Intelligence was held at the Centre for Mathematics and Computer Science in Amsterdam, September 10-14, 1990. This volume includes the 29 papers selected and presented at the workshop together with 7 invited papers. The main themes are: - Logic programming and automated theorem proving, - Computational semantics for natural language, - Applications of non-classical logics, - Partial and dynamic logics.
Attribute grammars were introduced over twenty years ago, but they are still not as widely used as could have been hoped initially. This is particularly so in industry, despite their qualities as a specification tool. The aim of this International Workshop on Attribute Grammars and their Applications (WAGA), the first to be entirely devoted to this topic, was to show that they are still the subject of active research and now lead to important, useful and practical applications in various areas. The workshop covered all aspects of attribute grammars, with an emphasis on practical results. This volume includes the text of the three invited talks and 21 submitted papers presented at the workshop. This selection provides a wide view of the diverse research being done in the area. Topics include: - Fundamentals: efficient exhaustive and incremental attribute evaluation methods, parallel evaluation, space optimization, relationships with functional, logic and object-oriented programming, and systems. - Applications: compiler construction, natural language processing, and interactive program manipulation.
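For readers unfamiliar with the formalism, the sketch below shows the core idea behind attribute grammars: a synthesized attribute ("value") computed bottom-up over a tiny expression grammar. It is a generic illustration in Python, not one of the evaluation methods presented at the workshop.

```python
# Toy attribute grammar: the synthesized attribute 'value' of an expression node
# is computed bottom-up from the values of its children.
def value(node):
    """Evaluate the synthesized 'value' attribute of a parse-tree node."""
    kind = node[0]
    if kind == "num":                 # Num  -> digits          value = int(digits)
        return int(node[1])
    if kind == "add":                 # Expr -> Expr '+' Term   value = left + right
        return value(node[1]) + value(node[2])
    if kind == "mul":                 # Term -> Term '*' Num    value = left * right
        return value(node[1]) * value(node[2])
    raise ValueError(f"unknown node kind: {kind}")

# Parse tree for "2 + 3 * 4" with the usual precedence.
tree = ("add", ("num", "2"), ("mul", ("num", "3"), ("num", "4")))
print(value(tree))  # 14
```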
This volume is the proceedings of the Second Advanced School on Artificial Intelligence (EAIA '90) held in Guarda, Portugal, October 8-12, 1990. The focus of the contributions is natural language processing. Two types of subject are covered: - Linguistically motivated theories, presented at an introductory level, such as X-bar theory and head-driven phrase structure grammar, - Recent trends in formalisms which will be familiar to readers with a background in AI, such as Montague semantics and situation semantics. The topics were chosen to provide a balanced overview of the most important ideas in natural language processing today. Some of the results presented were worked out very recently, are the subject of ongoing research, and have not previously appeared in book form. This book may serve as a textbook: in fact its contents were intended as lecture notes.
We met because we both share the same views of language. Language is a living organism, produced by neural mechanisms relating in large numbers as a society. Language exists between minds, as a way of communicating between them, not as an autonomous process. The logical 'rules' seem to us an epiphenomenon of the neural mechanism, rather than an essential component in language. This view of language has been advocated by an increasing number of workers, as the view that language is simply a collection of logical rules has had less and less success. People like Yorick Wilks have been able to show in paper after paper that almost any rule which can be devised can be shown to have exceptions. The meaning does not lie in the rules. David Powers is a teacher of computer science. Christopher Turk, like many workers who have come into the field of AI (Artificial Intelligence), was originally trained in literature. He moved into linguistics, and then into computational linguistics. In 1983 he took a sabbatical in Roger Schank's AI project in the Computer Science Department at Yale University. Like an earlier visitor to the project, John Searle from California, Christopher Turk was increasingly uneasy at the view of language which was used at Yale.
This volume contains the papers presented at the International Scientific Symposium "Natural Language and Logic" held in Hamburg in May 1989. The aim of the papers is to present and discuss the latest developments in the application of logic-based methods for natural language understanding. Logic-based methods have gained in importance in the field of computational linguistics as well as for representing various types of knowledge in natural language understanding systems. The volume gives an overview of recent results achieved within the LILOG project (LInguistic and LOgic methods for understanding German texts) - one of the largest research projects in the field of text understanding - as well as within related natural language understanding systems.
This book springs from a conference held in Stockholm in May-June 1988 on Culture, Language and Artificial Intelligence. It assembled more than 300 researchers and practitioners in the fields of technology, philosophy, history of ideas, literature, linguistics, social science, etc. It was an initiative from the Swedish Center for Working Life, based on the project AI-Based Systems and the Future of Language, Knowledge and Responsibility in Professions within the COST 13 programme of the European Commission. Participants in the conference, or in some cases researchers related to its aims, were chosen to contribute to this book. It was preceded by Knowledge, Skill and Artificial Intelligence (ed. B. Göranzon and I. Josefson, Springer-Verlag, London, 1988) and will be followed by Dialogue and Technology (ed. M. Florin and B. Göranzon, Springer-Verlag, London, 1990). The contributors' thinking in this field varies greatly; so do their styles of writing. For example: contributors have varied in their choice of 'he' or 'he/she' for the third person. No distinction is intended but chapters have been left with the original usage to avoid extensive changes. Similarly, individual contributors' preferences as to notes or reference lists have been followed. We want to thank our researcher Satinder P. Gill for excellent work with summaries and indexes, and Sandi Irvine of Springer-Verlag for eminent editorial work.
Dr Alvy Ray Smith, Executive Vice President, Pixar: The polyglot language of computer animation has arisen piecemeal as a collection of terms borrowed from geometry, film, video, painting, conventional animation, computer graphics, computer science, and publishing - in fact, from every older art or science which has anything to do with pictures and picture making. Robi Roncarelli, who has already demonstrated his foresight by formally identifying a nascent industry and addressing his Computer Animation Newsletter to it, here again makes a useful contribution to it by codifying its jargon. My pleasure in reading his dictionary comes additionally from the many historical notes sprinkled throughout and from surprise entries such as the one referring to Zimbabwe. Just as Samuel Johnson's dictionary of the English language was a major force in stabilizing the spelling of English, perhaps this one will serve a similar purpose for computer animation. Two of my pets are "color" for "colour" and "modeling" for "modelling", under the rule that the shorter accepted spelling is always preferable. [Robi, are you reading this?] [Yes, Alvy!] Now I commend this book to you, whether you be a newcomer or an oldtimer.
You may like...
Handbook of Research on Recent… - Siddhartha Bhattacharyya, Nibaran Das, … (Hardcover, R9,028)
Eyetracking and Applied Linguistics - Silvia Hansen-Schirra, Sambor Grucza (Hardcover, R835)
Natural Language Processing for Global… - Fatih Pinarbasi, M. Nurdan Taskiran (Hardcover, R6,306)
Cross-Disciplinary Advances in Applied… - Chutima Boonthum-Denecke, Philip M. McCarthy, … (Hardcover, R4,976)