This book provides system developers and researchers in natural language processing and computational linguistics with the necessary background information for working with the Arabic language. The goal is to introduce Arabic linguistic phenomena and review the state-of-the-art in Arabic processing. The book discusses Arabic script, phonology, orthography, morphology, syntax and semantics, with a final chapter on machine translation issues. The chapter sizes correspond more or less to what is linguistically distinctive about Arabic, with morphology getting the lion's share, followed by Arabic script. No previous knowledge of Arabic is needed. This book is designed for computer scientists and linguists alike. The focus of the book is on Modern Standard Arabic; however, notes on practical issues related to Arabic dialects and languages written in the Arabic script are presented in different chapters. Table of Contents: What is "Arabic"? / Arabic Script / Arabic Phonology and Orthography / Arabic Morphology / Computational Morphology Tasks / Arabic Syntax / A Note on Arabic Semantics / A Note on Arabic and Machine Translation
This book explains how to build Natural Language Generation (NLG) systems - computer software systems which use techniques from artificial intelligence and computational linguistics to automatically generate understandable texts in English or other human languages, either in isolation or as part of multimedia documents, Web pages, and speech output systems. Typically starting from some non-linguistic representation of information as input, NLG systems use knowledge about language and the application domain to automatically produce documents, reports, explanations, help messages, and other kinds of texts. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization, using a case study to show how these components fit together. It also discusses engineering issues such as system architecture, requirements analysis, and the integration of text generation into multimedia and speech output systems.
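The document planning, microplanning, and surface realization stages described above can be illustrated with a deliberately tiny sketch. All function names and the input schema here are invented for illustration and are not taken from the book; a real NLG system would use far richer representations at each stage.

```python
# Toy three-stage NLG pipeline: non-linguistic input -> text.
# The stage boundaries follow the classic architecture; the data
# format and vocabulary are made-up examples.

def document_plan(data):
    # Document planning: decide WHAT to say and in what order.
    return [("temperature", data["temp_c"]), ("sky", data["sky"])]

def microplan(messages):
    # Microplanning: choose words for each message.
    phrases = []
    for kind, value in messages:
        if kind == "temperature":
            phrases.append(f"the temperature will reach {value} degrees")
        elif kind == "sky":
            phrases.append(f"skies will be {value}")
    return phrases

def realize(phrases):
    # Surface realization: produce one grammatical sentence string.
    body = " and ".join(phrases)
    return body[0].upper() + body[1:] + "."

report = realize(microplan(document_plan({"temp_c": 21, "sky": "clear"})))
print(report)
```

Each stage consumes the previous stage's output, which is the modularity argument the book's architecture discussion builds on.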
Search for information is no longer limited to the user's native language, but is more and more extended to other languages. This gives rise to the problem of cross-language information retrieval (CLIR), whose goal is to find relevant information written in a language different from that of the query. In addition to the problems of monolingual information retrieval (IR), translation is the key problem in CLIR: one must translate either the query or the documents from one language to another. However, this translation problem is not identical to full-text machine translation (MT): the goal is not to produce a human-readable translation, but a translation suitable for finding relevant documents. Specific translation methods are thus required. The goal of this book is to provide a comprehensive description of the specific problems arising in CLIR, the solutions proposed in this area, and the remaining problems. The book starts with a general description of the monolingual IR and CLIR problems. Different classes of approaches to translation are then presented: approaches using an MT system, dictionary-based translation, and approaches based on parallel and comparable corpora. In addition, the typical retrieval effectiveness of the different approaches is compared. It is shown that translation approaches specifically designed for CLIR can rival and outperform high-quality MT systems. Finally, the book offers a look into the future that draws a strong parallel between query expansion in monolingual IR and query translation in CLIR, suggesting that many approaches developed in monolingual IR can be adapted to CLIR. The book can be used as an introduction to CLIR; advanced readers will also find technical details and discussions of the remaining research challenges. It is suitable for new researchers who intend to carry out research on CLIR.
Table of Contents: Preface / Introduction / Using Manually Constructed Translation Systems and Resources for CLIR / Translation Based on Parallel and Comparable Corpora / Other Methods to Improve CLIR / A Look into the Future: Toward a Unified View of Monolingual IR and CLIR? / References / Author Biography
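The dictionary-based translation approach mentioned in the blurb can be sketched minimally: each query term is replaced by all of its candidate translations, and untranslatable terms pass through unchanged. The bilingual dictionary below is an invented fragment; real CLIR systems additionally weight candidates and handle phrases and ambiguity.

```python
# Toy dictionary-based query translation for CLIR.
# The French-to-English dictionary is a made-up fragment for illustration.

BILINGUAL = {
    "traitement": ["processing", "treatment"],
    "langage":    ["language"],
    "naturel":    ["natural"],
}

def translate_query(query):
    """Replace each source term by all its target-language candidates,
    keeping unknown terms as-is (useful for names and acronyms)."""
    translated = []
    for term in query.lower().split():
        translated.extend(BILINGUAL.get(term, [term]))
    return translated

print(translate_query("traitement langage naturel"))
```

Keeping all candidates, rather than picking one "best" translation, mirrors the book's point that CLIR translation need not be human-readable: the ambiguity can be left for the retrieval model to resolve.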
This volume contains the papers presented at the 23rd Canadian Conference on Artificial Intelligence (AI 2010). The conference was held in Ottawa, Ontario, from May 31 to June 2, 2010, and was collocated with the 36th Graphics Interface Conference (GI 2010) and the 7th Canadian Conference on Computer and Robot Vision (CRV 2010). The Program Committee received 90 submissions for the main conference, AI 2010, from across Canada and around the world. Each submission was reviewed by up to four reviewers. For the final conference program and for inclusion in these proceedings, 22 regular papers, with an allocation of 12 pages each, were selected. Additionally, 26 short papers, with an allocation of 4 pages each, were accepted. The papers from the Graduate Student Symposium are also included in the proceedings: six oral (four pages) and six poster (two pages) presentation papers. The conference program featured three keynote presentations by Dekang Lin (Google Inc.), Guy Lapalme (Université de Montréal), and Evangelos Milios (Dalhousie University). The one-page abstracts of their talks are also included in the proceedings. Two pre-conference workshops, each with their own proceedings, were held on May 30, 2010. The Workshop on Intelligent Methods for Protecting Privacy and Confidentiality in Data was organized by Khaled El Emam and Marina Sokolova. The Workshop on Teaching AI in Computing and Information Technology (AI-CIT 2010) was organized by Danny Silver, Leila Kosseim, and Sajid Hussain. This conference would not have been possible without the hard work of many people. We would like to thank all Program Committee members and external reviewers for their effort in providing high-quality reviews in a timely manner. We thank all the authors of submitted papers for submitting their work, and the authors of selected papers for their collaboration in preparation of the final copy. Many thanks to Ebrahim Bagheri and Marina Sokolova for organizing the Graduate Student Symposium and chairing the Program Committee of the symposium.
We are indebted to Andrei Voronkov for developing the EasyChair conference management system and making it freely available to the academic world. It is an amazingly elegant and functional Web-based system, which saved us much time.
Our world is being revolutionized by data-driven methods: access to large amounts of data has generated new insights and opened exciting new opportunities in commerce, science, and computing applications. Processing the enormous quantities of data necessary for these advances requires large clusters, making distributed computing paradigms more crucial than ever. MapReduce is a programming model for expressing distributed computations on massive datasets and an execution framework for large-scale data processing on clusters of commodity servers. The programming model provides an easy-to-understand abstraction for designing scalable algorithms, while the execution framework transparently handles many system-level details, ranging from scheduling to synchronization to fault tolerance. This book focuses on MapReduce algorithm design, with an emphasis on text processing algorithms common in natural language processing, information retrieval, and machine learning. We introduce the notion of MapReduce design patterns, which represent general reusable solutions to commonly occurring problems across a variety of problem domains. This book not only intends to help the reader "think in MapReduce", but also discusses limitations of the programming model as well. Table of Contents: Introduction / MapReduce Basics / MapReduce Algorithm Design / Inverted Indexing for Text Retrieval / Graph Algorithms / EM Algorithms for Text Processing / Closing Remarks
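The programming model described above can be illustrated with the canonical word-count example, simulated here in plain Python rather than on a real cluster: `groupby` over sorted pairs stands in for the framework's shuffle-and-sort phase.

```python
# Word count in the MapReduce model, simulated in-process.
# A real framework would partition and shuffle key-value pairs across
# machines; here sorting + groupby plays the role of shuffle/sort.

from itertools import groupby

def mapper(doc):
    # Map: emit (word, 1) for every word in the document.
    for word in doc.split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce: sum the partial counts for one key.
    return (word, sum(counts))

docs = ["a rose is a rose", "a daisy is not"]
pairs = sorted(kv for doc in docs for kv in mapper(doc))   # map + shuffle/sort
result = dict(reducer(w, [c for _, c in group])
              for w, group in groupby(pairs, key=lambda kv: kv[0]))
print(result)
```

The separation is the point: mapper and reducer contain only application logic, while distribution, scheduling, and fault tolerance belong to the (here simulated) execution framework.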
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, and computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
This book is aimed at providing an overview of several aspects of semantic role labeling. Chapter 1 begins with linguistic background on the definition of semantic roles and the controversies surrounding them. Chapter 2 describes how the theories have led to structured lexicons such as FrameNet, VerbNet and the PropBank Frame Files that in turn provide the basis for large scale semantic annotation of corpora. This data has facilitated the development of automatic semantic role labeling systems based on supervised machine learning techniques. Chapter 3 presents the general principles of applying both supervised and unsupervised machine learning to this task, with a description of the standard stages and feature choices, as well as giving details of several specific systems. Recent advances include the use of joint inference to take advantage of context sensitivities, and attempts to improve performance by closer integration of the syntactic parsing task with semantic role labeling. Chapter 3 also discusses the impact the granularity of the semantic roles has on system performance. Having outlined the basic approach with respect to English, Chapter 4 goes on to discuss applying the same techniques to other languages, using Chinese as the primary example. Although substantial training data is available for Chinese, this is not the case for many other languages, and techniques for projecting English role labels onto parallel corpora are also presented. Table of Contents: Preface / Semantic Roles / Available Lexical Resources / Machine Learning for Semantic Role Labeling / A Cross-Lingual Perspective / Summary
This volume contains the proceedings of NOLISP 2009, an ISCA Tutorial and Workshop on Non-Linear Speech Processing held at the University of Vic (Catalonia, Spain) during June 25-27, 2009. NOLISP 2009 was preceded by three editions of this biannual event, held in 2003 in Le Croisic (France), in 2005 in Barcelona, and in 2007 in Paris. The main idea of the NOLISP workshops is to present and discuss new ideas, techniques and results related to alternative approaches in speech processing that may depart from the mainstream. In order to work at the front end of the subject area, the following domains of interest were defined for NOLISP 2009: non-linear approximation and estimation; non-linear oscillators and predictors; higher-order statistics; independent component analysis; nearest neighbors; neural networks; decision trees; non-parametric models; dynamics of non-linear systems; fractal methods; chaos modeling; and non-linear differential equations. The initiative to organize NOLISP 2009 at the University of Vic (UVic) came from the UVic Research Group on Signal Processing and was supported by the Hardware-Software Research Group. We would like to acknowledge the financial support obtained from the Ministry of Science and Innovation of Spain (MICINN), the University of Vic, ISCA, and EURASIP. All contributions to this volume are original. They were subject to a double-blind refereeing procedure before their acceptance for the workshop and were revised after being presented at NOLISP 2009.
This book teaches the principles of natural language processing and covers linguistic issues. It also details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques. A key feature of the book is the author's hands-on approach throughout, with extensive exercises, sample code in Prolog and Perl, and a detailed introduction to Prolog. The book is suitable for researchers and students of natural language processing and computational linguistics.
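The combination of stochastic and rule-based tagging the blurb mentions can be illustrated in miniature: a most-frequent-tag lookup learned from data, with a hand-written suffix rule as fallback for unknown words. The training sample and the tagset below are invented for illustration (the book's own examples are in Prolog and Perl).

```python
# Minimal POS tagger combining a stochastic and a rule-based component.
# The tag counts come from a tiny invented sample, not real corpus data.

from collections import Counter, defaultdict

training = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
            ("the", "DET"), ("cat", "NOUN"), ("runs", "VERB"),
            ("runs", "VERB"), ("runs", "NOUN")]  # 'runs' is ambiguous

freq = defaultdict(Counter)
for word, tag_ in training:
    freq[word][tag_] += 1

def tag(word):
    if word in freq:                       # stochastic: most frequent tag
        return freq[word].most_common(1)[0][0]
    if word.endswith("s"):                 # rule: crude guess for unknowns
        return "VERB"
    return "NOUN"

print([tag(w) for w in ["the", "dog", "runs", "quickly"]])
```

Even this toy shows the division of labor: frequency statistics resolve seen words, while rules handle the open vocabulary.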
This volume contains the full and short papers of SAMT 2009, the 4th International Conference on Semantic and Digital Media Technologies, held in Graz, Austria. SAMT brings together researchers dealing with a broad range of research topics related to semantic multimedia and a great diversity of application areas. Current research shows that adding and using semantics of multimedia content is broadening its scope from search and retrieval to the complete media life cycle, from content creation to distribution and consumption, thus leveraging new possibilities in creating, sharing and reusing multimedia content. While some of the contributions present improvements in automatic analysis and annotation methods, there is increasingly more work dealing with visualization, user interaction and collaboration. We can also observe ongoing standardization activities related to semantic multimedia in both W3C and MPEG, forming a solid basis for wide adoption. The conference received 41 submissions this year, of which the Program Committee selected 13 full papers for oral presentation and 8 short papers for poster presentation. In addition to the scientific papers, the conference program included two invited talks by Ricardo Baeza-Yates and Stefan Rüger and a demo session showing results from three European projects. The day before the main conference offered an industry day with presentations and demos that showed the growing importance of semantic technologies in real-world applications as well as the research challenges coming from them.
Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides an overview of the basic issues such as system architectures, various dialogue management methods, system evaluation, and also surveys advanced topics concerning extensions of the basic model to more conversational setups. The goal of the book is to provide an introduction to the methods, problems, and solutions that are used in dialogue system development and evaluation. It presents dialogue modelling and system development issues relevant in both academic and industrial environments and also discusses requirements and challenges for advanced interaction management and future research. Table of Contents: Preface / Introduction to Spoken Dialogue Systems / Dialogue Management / Error Handling / Case Studies: Advanced Approaches to Dialogue Management / Advanced Issues / Methodologies and Practices of Evaluation / Future Directions / References / Author Biographies
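Among the dialogue management methods such a book surveys, the simplest is a finite-state script: the system's prompts and the legal transitions are fixed in advance. The states, prompts, and travel scenario below are invented for illustration; real systems add error handling, confirmation strategies, and speech recognition.

```python
# Finite-state dialogue manager sketch. State names and prompts are
# illustrative inventions; each state maps to (system prompt, next state).

STATES = {
    "greet":    ("Where would you like to travel?", "ask_date"),
    "ask_date": ("What day do you want to leave?",  "confirm"),
    "confirm":  ("Shall I book that ticket?",       "done"),
}

def run_dialogue(user_turns):
    """Step through the script, pairing each prompt with a user reply."""
    state, transcript = "greet", []
    for user_input in user_turns:
        prompt, next_state = STATES[state]
        transcript.append((prompt, user_input))
        state = next_state
        if state == "done":
            break
    return transcript, state

log, final = run_dialogue(["to Helsinki", "on Friday", "yes"])
print(final)
```

The rigidity of this model (the user cannot volunteer the date early) is exactly what motivates the frame-based and more conversational approaches covered in the book's later chapters.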
The distribution of anaphora in natural language and the complexity of its resolution have resulted in a wide range of disciplines focusing their research on this grammatical phenomenon. It has emerged as one of the most productive topics of multi- and interdisciplinary research, spanning cognitive science, artificial intelligence and human language technology, theoretical, cognitive, corpus and computational linguistics, philosophy of language, psycholinguistics and cognitive psychology. Anaphora plays a major role in understanding a language and also accounts for the cohesion of a text. Correct interpretation of anaphora is necessary in all high-level natural language processing applications. Given the growing importance of the study of anaphora in the last few decades, it has emerged as a frontier area of research. This is evident from the high-quality submissions received for the 7th DAARC, from which the 10 excellent reports on research findings in this volume were selected. These are the regular papers that were presented at DAARC.
This book introduces Chinese language-processing issues and techniques to readers who already have a basic background in natural language processing (NLP). Since the major difference between Chinese and Western languages is at the word level, the book primarily focuses on Chinese morphological analysis and introduces the concept, structure, and interword semantics of Chinese words. The following topics are covered: a general introduction to Chinese NLP; Chinese characters, morphemes, and words and the characteristics of Chinese words that have to be considered in NLP applications; Chinese word segmentation; unknown word detection; word meaning and Chinese linguistic resources; interword semantics based on word collocation and NLP techniques for collocation extraction. Table of Contents: Introduction / Words in Chinese / Challenges in Chinese Morphological Processing / Chinese Word Segmentation / Unknown Word Identification / Word Meaning / Chinese Collocations / Automatic Chinese Collocation Extraction / Appendix / References / Author Biographies
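A standard baseline for the word segmentation task described above is forward maximum matching: greedily take the longest dictionary word at each position. The four-word lexicon below is a toy example; the classic sentence 北京大学生 also illustrates why greedy matching can go wrong ("Peking University" + "student" vs. "Beijing" + "university students").

```python
# Forward maximum matching (FMM) segmentation, a baseline method for
# Chinese word segmentation. The lexicon here is a toy example.

DICT = {"北京", "大学", "北京大学", "生"}       # toy lexicon
MAX_LEN = max(len(w) for w in DICT)

def fmm_segment(text):
    """Greedily match the longest dictionary word at each position;
    fall back to a single character when nothing matches."""
    i, words = 0, []
    while i < len(text):
        for j in range(min(MAX_LEN, len(text) - i), 0, -1):
            if text[i:i + j] in DICT or j == 1:
                words.append(text[i:i + j])
                i += j
                break
    return words

print(fmm_segment("北京大学生"))
```

Such ambiguities and the unknown-word problem are exactly what the book's chapters on segmentation and unknown word identification address with richer, statistical methods.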
We are pleased to present this LNCS volume, the proceedings of the 22nd Australasian Joint Conference on Artificial Intelligence (AI 2009), held in Melbourne, Australia, December 1-4, 2009. This long-established annual regional conference is a forum both for the presentation of research advances in artificial intelligence and for scientific interchange among researchers and practitioners in the field of artificial intelligence. Conference attendees were also able to enjoy AI 2009 being co-located with the Australasian Data Mining Conference (AusDM 2009) and the 4th Australian Conference on Artificial Life (ACAL 2009). This year AI 2009 received 174 submissions from authors of 30 different countries. After an extensive peer review process where each submitted paper was rigorously reviewed by at least 2 (and in most cases 3) independent reviewers, the best 68 papers were selected by the senior Program Committee for oral presentation at the conference and included in this volume, resulting in an acceptance rate of 39%. The papers included in this volume cover a wide range of topics in artificial intelligence: from machine learning to natural language systems, from knowledge representation to soft computing, from theoretical issues to real-world applications. AI 2009 also included 11 tutorials, available through the First Australian Computational Intelligence Summer School (ACISS 2009). These tutorials - some introductory, some advanced - covered a wide range of research topics within artificial intelligence, including data mining, games, evolutionary computation, swarm optimization, intelligent agents, and Bayesian and belief networks.
The book presents a cross-section of state-of-the-art research on multimodal corpora, a highly interdisciplinary area that is a prerequisite for various specialized disciplines. A number of the papers included are revised and expanded versions of papers accepted to the International Workshop on Multimodal Corpora: From Models of Natural Interaction to Systems and Applications, held in conjunction with the 6th International Conference on Language Resources and Evaluation (LREC) on May 27, 2008, in Marrakech, Morocco. This international workshop series started in 2000 and has since grown into a regular satellite event of the biannual LREC conference, attracting researchers from fields as diverse as psychology, artificial intelligence, robotics, signal processing, computational linguistics and human-computer interaction. To complement the selected papers from the 2008 workshop, we invited well-known researchers from corpus collection initiatives to contribute to this volume. We were able to obtain seven invited research articles, including contributions from major international multimodal corpus projects like AMI and SmartWeb, which complement the six selected workshop contributions. All papers underwent a special review process for this volume, resulting in significant revisions and extensions based on the experts' advice. While we were pleased that the 2006 edition of the workshop resulted in a special issue of the Journal of Language Resources and Evaluation, published in 2007, we felt that this was the time for another major publication, given not only the rapid progress and increased interest in this research area but especially in order to acknowledge the difficulty of disseminating results across discipline borders. The Springer LNAI series is the perfect platform for doing so. We also created the website www.multimodal-corpora.
The ninth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2008. There were seven main evaluation tracks in CLEF 2008 plus two pilot tasks. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, 100 groups, mainly but not only from academia, participated in the campaign. Most of the groups were from Europe but there was also a good contingent from North America and Asia plus a few participants from South America and Africa. Full details regarding the design of the tracks, the methodologies used for evaluation, and the results obtained by the participants can be found in the different sections of these proceedings. The results of the CLEF 2008 campaign were presented at a two-and-a-half-day workshop held in Aarhus, Denmark, September 17-19, and attended by 150 researchers and system developers. The annual workshop, held in conjunction with the European Conference on Digital Libraries, plays an important role by providing the opportunity for all the groups that have participated in the evaluation campaign to get together, comparing approaches and exchanging ideas. The schedule of the workshop was divided between plenary track overviews and parallel, poster and breakout sessions presenting this year's experiments and discussing ideas for the future. There were several invited talks.
Half a century ago not many people had realized that a new epoch in the history of homo sapiens had just started. The term "Information Society Age" seems an appropriate name for this epoch. Communication was without a doubt a lever of the conquest of the human race over the rest of the animate world. There is little doubt that the human race began when our predecessors started to communicate with each other using language. This highly abstract means of communication was probably one of the major factors contributing to the evolutionary success of the human race within the animal world. Physically weak and imperfect, humans started to dominate the rest of the world through the creation of communication-based societies, where individuals communicated initially to satisfy immediate needs, and then to create, accumulate and process knowledge for future use. The crucial step in the history of humanity was the invention of writing. It is worth noting that writing is a human invention, not a phenomenon resulting from natural evolution. Humans invented writing as a technique for recording speech as well as for storing and facilitating the dissemination of knowledge across the world. Humans continue to be born illiterate, and therefore teaching and conscious supervised learning are necessary to maintain this basic social skill.
This book constitutes the refereed proceedings of the 8th International Conference on Flexible Query Answering Systems, FQAS 2009, held in Roskilde, Denmark, in October 2009. The 57 papers included in this volume were carefully reviewed and selected from 90 submissions. They are structured in topical sections on database management, information retrieval, extraction and mining, ontologies and semantic web, intelligent information extraction from texts, advances in fuzzy querying, personalization, preferences, context and recommendation, and Web as a stream.
TSD 2009 was the 12th event in the series of International Conferences on Text, Speech and Dialogue, supported by the International Speech Communication Association (ISCA) and the Czech Society for Cybernetics and Informatics (CSKI). This year, TSD was held in Plzeň (Pilsen), in the Primavera Conference Center, during September 13-17, 2009, and it was organized by the University of West Bohemia in Plzeň in cooperation with Masaryk University of Brno, Czech Republic. Like its predecessors, TSD 2009 highlighted to both the academic and scientific world the importance of text and speech processing and its most recent breakthroughs in current applications. Both experienced researchers and professionals as well as newcomers to the text and speech processing field, interested in designing or evaluating interactive software, developing new interaction technologies, or investigating overarching theories of text and speech processing, found in the TSD conference a forum to communicate with people sharing similar interests. The conference is an interdisciplinary forum, intertwining research in speech and language processing with its applications in everyday practice. We feel that the mixture of different approaches and applications offered a great opportunity to get acquainted with current activities in all aspects of language communication and to witness the amazing vitality of researchers from developing countries too. This year's conference was partially oriented toward semantic processing, which was chosen as the main topic of the conference. All invited speakers (Frederick Jelinek, Louise Guthrie, Roberto Pieraccini, Tilman Becker, and Elmar Nöth) gave lectures on the newest results in the relatively broad and still unexplored area of semantic processing.
From the point of view of computational linguistics, morphological resources are the basis for all higher-level applications. This is especially true for languages with a rich morphology, such as German or Finnish. A morphology component should thus be capable of analyzing single word forms as well as whole corpora. For many practical applications, not only morphological analysis but also generation is required, i.e., the production of surfaces corresponding to specific categories. Apart from uses in computational linguistics, there are also numerous practical applications that either require morphological analysis and generation or can greatly benefit from it, for example in text processing, user interfaces, or information retrieval. These applications have specific requirements for morphological components, including requirements from software engineering, such as programming interfaces or robustness. In 1994, the First Morpholympics took place at the University of Erlangen-Nuremberg, a competition between several systems for the analysis and generation of German word forms. Eight systems participated in the First Morpholympics; the conference proceedings [1] thus give a very good overview of the state of the art in computational morphology for German as of 1994.
For many years Leonard Bolc has played an important role in the Polish computer science community. He is especially known for his clear vision in the development of artificial intelligence, inspiring research, and organizational and editorial achievements in areas such as logic, automatic reasoning, natural language processing, and computer applications of natural language or human-like reasoning. This Festschrift volume, published to honor Leonard Bolc on his 75th birthday, includes 17 refereed papers by leading researchers, his friends, former students and colleagues to celebrate his scientific career. The essays present research in the areas which Leonard Bolc and his colleagues investigated during his long scientific career. The volume is organized in three parts; the first is devoted to logic, the domain most explored by Leonard Bolc himself. The second part contains papers focusing on different aspects of computational linguistics; the third part comprises papers describing different applications in which natural language processing or automatic reasoning plays an important role.
This volume collects the papers selected for presentation at the Third International Conference on Metadata and Semantic Research (MTSR 2009), held in Milan at the University of Milano-Bicocca (October 1-2, 2009). Metadata and semantic research is today a growing, complex set of conceptual, theoretical, methodological, and technological frameworks, offering innovative computational solutions in the design and development of computer-based systems. From this perspective, researchers working in this area must tackle a broad range of issues on methods, results, and solutions coming from different classic areas of this discipline. The conference has been designed as a forum allowing researchers to present and discuss specialized results as general contributions to the field. In order to give a novel perspective in which both theoretical and application aspects of metadata research contribute to the growth of the area, this book mirrors the structure of the conference, grouping the papers into three main categories: (1) Theoretical Research: Results and Proposals; (2) Applications: Case Studies and Proposals; (3) Special Track: Metadata and Semantics for Agriculture, Food and Environment. The book contains 31 full papers (10 for the first category, 10 for the second, and 12 for the third), selected from an initial set of about 70 submissions. Many people contributed to the success of the conference and the creation of this volume, from the initial idea to its implementation. Our first acknowledgement is to the members of the Steering Committee, George Bokos and David Raitt. We would also like to thank all Program Committee members and reviewers for their collaboration. Special thanks to Carlo Batini, on behalf of the Department of Computer Science, Systems and Communication of the University of Milano-Bicocca, who kindly hosted our conference.
This volume brings together the peer-reviewed contributions of the participants at the COST 2102 International Conference on "Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions" held in Prague, Czech Republic, October 15-18, 2008. The conference was sponsored by COST (European Cooperation in the Field of Scientific and Technical Research, www.cost.esf.org/domains_actions/ict) in the domain of Information and Communication Technologies (ICT) for disseminating the research advances developed within COST Action 2102: "Cross-Modal Analysis of Verbal and Nonverbal Communication", http://cost2102.cs.stir.ac.uk. COST 2102 research networking has contributed to modifying the conventional theoretical approach to the cross-modal analysis of verbal and nonverbal communication, changing the concept of face-to-face communication to that of body-to-body communication, as well as developing the idea of embodied information. Information is no longer the result of a difference in perception and is no longer measured in terms of quantity of stimuli, since the research developed in COST 2102 has proved that human information processing is a nonlinear process that cannot be seen as the sum of the numerous pieces of information available. Considering simply the pieces of information available results in a model of the receiver as a mere decoder, and produces a huge simplification of the communication process.
Since 1993 the conference Developments in Language Theory (DLT) has been held in Europe every odd year and, since 2002, outside Europe every even year. The 13th conference in this series was DLT 2009. It took place in Stuttgart from June 30 to July 3. Previous meetings occurred in Turku (1993), Magdeburg (1995), Thessaloniki (1997), Aachen (1999), Vienna (2001), Kyoto (2002), Szeged (2003), Auckland (2004), Palermo (2005), Santa Barbara (2006), Turku (2007), and Kyoto (2008). The DLT conference has developed into the main forum for language theory and related topics. This has also been reflected in the high quality of the 70 submissions received in 2009. Most submissions were reviewed by four Programme Committee members and their sub-referees. The Programme Committee selected the best 35 papers for presentation during the conference. These 35 papers are also published in this proceedings volume. Members of the Programme Committee were not allowed to submit papers. The work of the Programme Committee was organized using the EasyChair conference system, thanks to Andrei Voronkov. The conference programme included five invited lectures. They were given by Mikołaj Bojańczyk (Warsaw), Paul Gastin (Cachan), Tero Harju (Turku), Christos Kapoutsis (Nicosia), and Benjamin Steinberg (Ottawa). We are grateful to the invited speakers for accepting the invitation and presenting their lectures and for their contributions to the proceedings. The Informatik Forum Stuttgart provided a best paper award, which was selected by the Programme Committee. The recipient was "Magic Numbers and Ternary Alphabet" by Galina Jirásková.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the 4th volume of the FoLLI LNAI subline, containing the refereed proceedings of the 16th International Workshop on Logic, Language, Information and Computation, WoLLIC 2009, held in Tokyo, Japan, in June 2009. The 25 revised full papers presented together with six tutorials and invited talks were carefully reviewed and selected from 57 submissions. The papers cover some of the most active areas of research on the frontiers between computation, logic, and linguistics, with particular interest in cross-disciplinary topics. Typical areas of interest are: foundations of computing and programming; novel computation models and paradigms; broad notions of proof and belief; formal methods in software and hardware development; logical approach to natural language and reasoning; logics of programs, actions and resources; foundational aspects of information organization, search, flow, sharing, and protection.