Among all information systems available today, web sites are the ones with the widest potential audience and the most significant impact on people's everyday lives. Web sites contribute largely to the information society: they provide visitors with a large array of services and information and allow them to perform various tasks without prior assumptions about their computer literacy. Web sites are expected to be accessible and usable to the widest possible audience. Consequently, usability has been recognized as a critical success factor for web sites of every kind. Beyond this universal recognition, usability still remains a notion that is hard to grasp. Summative evaluation methods have been introduced to identify potential usability problems and to assess the quality of web sites. However, summative evaluation remains limited in impact, as it does not necessarily deliver constructive comments to web site designers and developers on how to solve the usability problems. Formative evaluation methods have been introduced to address this issue. Evaluation remains a process that is hard to drive and perform, while its potential impact is probably maximal for the benefit of the final user. This complexity is exacerbated when web sites are very large, potentially up to several hundreds of thousands of pages, leading to a situation where evaluating the web site manually is almost impossible. Therefore, many attempts have been made to support evaluation with: * Models that capture some characteristics of the web site of interest.
This book constitutes the refereed proceedings of the 4th Language and Technology Conference: Challenges for Computer Science and Linguistics, LTC 2009, held in Poznan, Poland, in November 2009. The 52 revised and in many cases substantially extended papers presented in this volume were carefully reviewed and selected from 103 submissions. The contributions are organized in topical sections on speech processing, computational morphology/lexicography, parsing, computational semantics, dialogue modeling and processing, digital language resources, WordNet, document processing, information processing, and machine translation.
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability therefore is closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship appears to manifest in three strands of work. Application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling, and view synchrony are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. Replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. Application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets, and CCS are useful tools to specify and verify fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
The distribution of anaphora in natural language and the complexity of its resolution have led a wide range of disciplines to focus their research on this grammatical phenomenon. It has emerged as one of the most productive topics of multi- and interdisciplinary research in cognitive science, artificial intelligence and human language technology, theoretical, cognitive, corpus and computational linguistics, philosophy of language, psycholinguistics and cognitive psychology. Anaphora plays a major role in understanding a language and also accounts for the cohesion of a text. Correct interpretation of anaphora is necessary in all high-level natural language processing applications. Given the growing importance of the study of anaphora in the last few decades, it has emerged as a frontier area of research. This is evident from the high-quality submissions received for the 7th DAARC, from which the 10 excellent reports on research findings were selected for this volume. These are the regular papers that were presented at DAARC.
The ninth campaign of the Cross-Language Evaluation Forum (CLEF) for European languages was held from January to September 2008. There were seven main evaluation tracks in CLEF 2008 plus two pilot tasks. The aim, as usual, was to test the performance of a wide range of multilingual information access (MLIA) systems or system components. This year, 100 groups, mainly but not only from academia, participated in the campaign. Most of the groups were from Europe, but there was also a good contingent from North America and Asia plus a few participants from South America and Africa. Full details regarding the design of the tracks, the methodologies used for evaluation, and the results obtained by the participants can be found in the different sections of these proceedings. The results of the CLEF 2008 campaign were presented at a two-and-a-half-day workshop held in Aarhus, Denmark, September 17-19, and attended by 150 researchers and system developers. The annual workshop, held in conjunction with the European Conference on Digital Libraries, plays an important role by providing the opportunity for all the groups that have participated in the evaluation campaign to get together, comparing approaches and exchanging ideas. The schedule of the workshop was divided between plenary track overviews, and parallel, poster and breakout sessions presenting this year's experiments and discussing ideas for the future. There were several invited talks.
The book presents a cross-section of state-of-the-art research on multimodal corpora, a highly interdisciplinary area that is a prerequisite for various specialized disciplines. A number of the papers included are revised and expanded versions of papers accepted to the International Workshop on Multimodal Corpora: From Models of Natural Interaction to Systems and Applications, held in conjunction with the 6th International Conference for Language Resources and Evaluation (LREC) on May 27, 2008, in Marrakech, Morocco. This international workshop series started in 2000 and has since grown into a regular satellite event of the bi-annual LREC conference, attracting researchers from fields as diverse as psychology, artificial intelligence, robotics, signal processing, computational linguistics and human-computer interaction. To complement the selected papers from the 2008 workshop, we invited well-known researchers from corpus collection initiatives to contribute to this volume. We were able to obtain seven invited research articles, including contributions from major international multimodal corpus projects like AMI and SmartWeb, which complement the six selected workshop contributions. All papers underwent a special review process for this volume, resulting in significant revisions and extensions based on the experts' advice. While we were pleased that the 2006 edition of the workshop resulted in a special issue of the Journal of Language Resources and Evaluation, published in 2007, we felt that this was the time for another major publication, given not only the rapid progress and increased interest in this research area but especially in order to acknowledge the difficulty of disseminating results across discipline borders. The Springer LNAI series is the perfect platform for doing so. We also created the website www.multimodal-corpora.
This book constitutes the refereed proceedings of the 8th International Conference on Flexible Query Answering Systems, FQAS 2009, held in Roskilde, Denmark, in October 2009. The 57 papers included in this volume were carefully reviewed and selected from 90 submissions. They are structured in topical sections on database management, information retrieval, extraction and mining, ontologies and semantic web, intelligent information extraction from texts, advances in fuzzy querying, personalization, preferences, context and recommendation, and Web as a stream.
TSD 2009 was the 12th event in the series of International Conferences on Text, Speech and Dialogue supported by the International Speech Communication Association (ISCA) and the Czech Society for Cybernetics and Informatics (CSKI). This year, TSD was held in Plzeň (Pilsen), in the Primavera Conference Center, during September 13-17, 2009, and it was organized by the University of West Bohemia in Plzeň in cooperation with Masaryk University of Brno, Czech Republic. Like its predecessors, TSD 2009 highlighted to both the academic and scientific world the importance of text and speech processing and its most recent breakthroughs in current applications. Both experienced researchers and professionals as well as newcomers to the text and speech processing field, interested in designing or evaluating interactive software, developing new interaction technologies, or investigating overarching theories of text and speech processing, found in the TSD conference a forum to communicate with people sharing similar interests. The conference is an interdisciplinary forum, intertwining research in speech and language processing with its applications in everyday practice. We feel that the mixture of different approaches and applications offered a great opportunity to get acquainted with current activities in all aspects of language communication and to witness the amazing vitality of researchers from developing countries too. This year's conference was partially oriented toward semantic processing, which was chosen as the main topic of the conference. All invited speakers (Frederick Jelinek, Louise Guthrie, Roberto Pieraccini, Tilman Becker, and Elmar Nöth) gave lectures on the newest results in the relatively broad and still unexplored area of semantic processing.
In 1992 it seemed very difficult to answer the question whether it would be possible to develop a portable system for the automatic recognition and translation of spontaneous speech. Previous research work on speech processing had focused on read speech only, and international projects aimed at automated text translation had just been terminated without achieving their objectives. Within this context, the German Federal Ministry of Education and Research (BMBF) made a careful analysis of all national and international research projects conducted in the field of speech and language technology before deciding to launch an eight-year basic-research lead project in which research groups were to cooperate in an interdisciplinary and international effort covering the disciplines of computer science, computational linguistics, translation science, signal processing, communication science and artificial intelligence. At some point, the project comprised up to 135 work packages with up to 33 research groups working on these packages. The project was controlled by means of a network plan. Every two years the project situation was assessed and the project goals were updated. An international scientific advisory board provided advice for BMBF. A new scientific approach was chosen for this project: coping with the complexity of spontaneous speech with all its pertinent phenomena such as ambiguities, self-corrections, hesitations and disfluencies took precedence over the intended lexicon size. Another important aspect was that prosodic information was exploited at all processing stages.
New material treats such contemporary subjects as automatic speech recognition and speaker verification for banking by computer and for controlling access to privileged (medical, military, diplomatic) information. The book also focuses on speech and audio compression for mobile communication and the Internet. The importance of subjective quality criteria is stressed. The book also contains introductions to human monaural and binaural hearing, and the basic concepts of signal analysis. Beyond speech processing, this revised and extended new edition of Computer Speech gives an overview of natural language technology and presents the nuts and bolts of state-of-the-art speech dialogue systems.
People engage in discourse every day - from writing letters and presenting papers to simple discussions. Yet discourse is a complex and fascinating phenomenon that is not well understood. This volume stems from a multidisciplinary workshop in which eminent scholars in linguistics, sociology and computational linguistics presented various aspects of discourse. The topics treated range from multi-party conversational interactions to deconstructing text from various perspectives, considering topic-focus development and discourse structure, and an empirical study of discourse segmentation. The chapters not only describe each author's favorite burning issue in discourse but also provide a fascinating view of the research methodology and style of argumentation in each field.
Provides a broad sample of current information processing applications. Includes examples of successful applications that will encourage practitioners to apply the techniques described in the book to real-life problems.
Current Work and Open Problems: A Road-Map for Research into the Emergence of Communication and Language Chrystopher L. Nehaniv, Caroline Lyon, and Angelo Cangelosi 1.1. Introduction This book brings together work on the emergence of communication and language from researchers working in a broad array of scientific paradigms in North America, Europe, Japan and Africa. We hope that its multi-disciplinary approach will encourage cross-fertilization and promote further advances in this active research field. The volume draws on diverse disciplines, including linguistics, psychology, neuroscience, ethology, anthropology, robotics, and computer science. Computational simulations of the emergence of phenomena associated with communication and language play a key role in illuminating some of the most significant issues, and the renewed scientific interest in language emergence has benefited greatly from research in Artificial Intelligence and Cognitive Science. The book starts with this road map chapter by the editors, pointing to the ways in which disparate disciplines can inform and stimulate each other. It examines the role of simulations as a novel way to express theories in science, and their contribution to the development of a new approach to the study of the emergence of communication and language. We will also discuss and collect the most promising directions and grand challenge problems for future research. The present volume is organized into three parts: I. Empirical Investigations on Human Language, II. Synthesis and Simulation of Communication and Language in Artificial Systems, and III. Insights from Animal Communication.
Automatic text categorization and clustering are becoming more and more important as the amount of text in electronic format grows and access to it becomes more necessary and widespread. Well-known applications are spam filtering and web search, but a large number of everyday uses exist (intelligent web search, data mining, law enforcement, etc.). Currently, researchers are employing many intelligent techniques for text categorization and clustering, ranging from support vector machines and neural networks to Bayesian inference and algebraic methods, such as Latent Semantic Indexing. This volume offers a wide spectrum of research work developed for intelligent text categorization and clustering. In the following, we give a brief introduction to the chapters that are included in this book.
Internet and web technology penetrates many aspects of our daily life. Its importance as a medium for business transactions will grow exponentially during the next few years. In terms of the market volume involved, the B2B area will be the most interesting one. It will also be the place where the new technology leads to drastic changes in established customer relationships and business models. In an era where open and flexible electronic commerce provides new types of services to its users, simple 1-1 connections will be replaced by n-m relationships between customers and vendors. This new flexibility in electronic trading will generate serious challenges. The main problem stems from the heterogeneity of information descriptions used by vendors and customers, creating problems in both manual trading and direct 1-1 electronic trading. In the case of B2B marketplaces, this problem becomes too serious to be neglected. Product descriptions, catalog formats and business documents are often unstructured and non-standardized. Intelligent solutions that mechanize the structuring, standardizing, aligning, and personalizing process are a key requisite for successfully overcoming the current bottlenecks of B2B electronic commerce while enabling its further growth. Intelligent Information Integration in B2B Electronic Commerce discusses the main problems of information integration in this area and sketches several technological solution paths. Intelligent Information Integration in B2B Electronic Commerce is designed to meet the needs of a professional audience composed of researchers and practitioners in industry and graduate-level students in Computer Science.
This book brings all the major and frontier topics in the field of document analysis together into a single volume, creating a unique reference source that will be invaluable to a large audience of researchers, lecturers and students working in this field. With chapters written by some of the most distinguished researchers active in this field, this book addresses recent advances in digital document processing research and development.
Before designing a speech application system, three key questions have to be answered: who will use it, why, and how often? This book focuses on these high-level questions and gives criteria for when and how to design speech systems. After an introduction, the state of the art in modern voice user interfaces is presented. The book goes on to develop criteria for designing and evaluating successful voice user interfaces. Trends in this fast-growing area are also presented.
The evolution of technology has set the stage for the rapid growth of the video Web: broadband Internet access is ubiquitous, and streaming media protocols, systems, and encoding standards are mature. In addition to Web video delivery, users can easily contribute content captured on low-cost camera phones and other consumer products. The media and entertainment industry no longer views these developments as a threat to their established business practices, but as an opportunity to provide services for more viewers in a wider range of consumption contexts. The emergence of IPTV and mobile video services offers unprecedented access to an ever-growing number of broadcast channels and provides the flexibility to deliver new, more personalized video services. Highly capable portable media players allow us to take this personalized content with us, and to consume it even in places where the network does not reach. Video search engines enable users to take advantage of these emerging video resources for a wide variety of applications including entertainment, education and communications. However, the task of information extraction from video for retrieval applications is challenging, providing opportunities for innovation. This book aims first to describe the current state of video search engine technology and second to inform those with the requisite technical skills of the opportunities to contribute to the development of this field. Today's Web search engines have greatly improved the accessibility and therefore the value of the Web.
This book teaches the principles of natural language processing and covers linguistics issues. It also details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques. A key feature of the book is the author's hands-on approach throughout, with extensive exercises, sample code in Prolog and Perl, and a detailed introduction to Prolog. The book is suitable for researchers and students of natural language processing and computational linguistics.
Artificial intelligence has recently been re-energized to provide the clues needed to resolve complicated problems. AI is also expected to play a central role in enhancing a wide variety of daily activities. JSAI (The Japanese Society for Artificial Intelligence) is responsible for boosting the activities of AI researchers in Japan, and their series of annual conferences offers attractive forums for the exposition of the latest achievements and inter-group communication. In the past, the best papers of the conferences were published in the LNAI series. This book consists of award papers from the 22nd annual conference of the JSAI (JSAI 2008) and selected papers from the three co-located workshops. Eight papers were selected among more than 400 presentations at the conference and 18 papers were selected from the 34 presentations at the co-located workshops: Logic and Engineering of Natural Language Semantics 5 (LENLS 2008), the 2nd International Workshop on Juris-informatics (JURISIN 2008), and the First International Workshop on Laughter in Interaction and Body Movement (LIBM 2008). The award papers from JSAI 2008 were selected through a rigorous selection process. In the process, papers recommended by session chairs, session commentators, and PC members were carefully reviewed before the final decision was made.
This volume contains the papers presented at the 23rd Canadian Conference on Artificial Intelligence (AI 2010). The conference was held in Ottawa, Ontario, from May 31 to June 2, 2010, and was collocated with the 36th Graphics Interface Conference (GI 2010) and the 7th Canadian Conference on Computer and Robot Vision (CRV 2010). The Program Committee received 90 submissions for the main conference, AI 2010, from across Canada and around the world. Each submission was reviewed by up to four reviewers. For the final conference program and for inclusion in these proceedings, 22 regular papers, with an allocation of 12 pages each, were selected. Additionally, 26 short papers, with an allocation of 4 pages each, were accepted. The papers from the Graduate Student Symposium are also included in the proceedings: six oral (four pages) and six poster (two pages) presentation papers. The conference program featured three keynote presentations by Dekang Lin (Google Inc.), Guy Lapalme (Université de Montréal), and Evangelos Milios (Dalhousie University). The one-page abstracts of their talks are also included in the proceedings. Two pre-conference workshops, each with their own proceedings, were held on May 30, 2010. The Workshop on Intelligent Methods for Protecting Privacy and Confidentiality in Data was organized by Khaled El Emam and Marina Sokolova. The workshop on Teaching AI in Computing and Information Technology (AI-CIT 2010) was organized by Danny Silver, Leila Kosseim, and Sajid Hussain. This conference would not have been possible without the hard work of many people. We would like to thank all Program Committee members and external reviewers for their effort in providing high-quality reviews in a timely manner. We thank all the authors of submitted papers for submitting their work, and the authors of selected papers for their collaboration in preparation of the final copy. Many thanks to Ebrahim Bagheri and Marina Sokolova for organizing the Graduate Student Symposium and chairing the Program Committee of the symposium.
We are in debt to Andrei Voronkov for developing the EasyChair conference management system and making it freely available to the academic world. It is an amazingly elegant and functional Web-based system, which saved us much time.
CICLing 2009 marked the 10th anniversary of the Annual Conference on Intelligent Text Processing and Computational Linguistics. The CICLing conferences provide a wide-scope forum for the discussion of the art and craft of natural language processing research as well as the best practices in its applications. This volume contains five invited papers and the regular papers accepted for oral presentation at the conference. The papers accepted for poster presentation were published in a special issue of another journal (see the website for more information). Since 2001, the proceedings of CICLing conferences have been published in Springer's Lecture Notes in Computer Science series, as volumes 2004, 2276, 2588, 2945, 3406, 3878, 4394, and 4919. This volume has been structured into 12 sections: - Trends and Opportunities - Linguistic Knowledge Representation Formalisms - Corpus Analysis and Lexical Resources - Extraction of Lexical Knowledge - Morphology and Parsing - Semantics - Word Sense Disambiguation - Machine Translation and Multilinguism - Information Extraction and Text Mining - Information Retrieval and Text Comparison - Text Summarization - Applications to the Humanities A total of 167 papers by 392 authors from 40 countries were submitted for evaluation by the International Program Committee, see Tables 1 and 2. This volume contains revised versions of 44 papers, by 120 authors, selected for oral presentation; the acceptance rate was 26.3%.
This volume presents the proceedings of the Third International Sanskrit Computational Linguistics Symposium hosted by the University of Hyderabad, Hyderabad, India during January 15-17, 2009. The series of symposia on Sanskrit Computational Linguistics began in 2007. The first symposium was hosted by INRIA at Rocquencourt, France in October 2007 as a part of the joint collaboration between INRIA and the University of Hyderabad. This joint collaboration expanded both geographically as well as academically, covering more facets of Sanskrit Computational Linguistics, when the second symposium was hosted by Brown University, USA in May 2008. We received 16 submissions, which were reviewed by the members of the Program Committee. After discussion, nine of them were selected for presentation. These nine papers fall under four broad categories: four papers deal with the structure of Pāṇini's Aṣṭādhyāyī, two with parsing issues, two with various aspects of machine translation, and the last one with the Web concordance of an important Sanskrit text. If we look retrospectively over the last two years, the three symposia in succession have seen not only continuity of some of the themes, but also steady growth of the community. As is evident, researchers from diverse disciplines such as linguistics, computer science, philology, and vyākaraṇa are collaborating with scholars from other disciplines, witnessing the growth of Sanskrit computational linguistics as an emergent discipline. We are grateful to S.D. Joshi, Jan Houben, and K.V.R. Krishnamacharyulu for accepting our invitation to deliver the invited speeches.
This volume brings together the peer-reviewed contributions of the participants at the COST 2102 International Conference on "Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions" held in Prague, Czech Republic, October 15-18, 2008. The conference was sponsored by COST (European Cooperation in the Field of Scientific and Technical Research, www.cost.esf.org/domains_actions/ict) in the domain of Information and Communication Technologies (ICT) for disseminating the research advances developed within COST Action 2102: "Cross-Modal Analysis of Verbal and Nonverbal Communication," http://cost2102.cs.stir.ac.uk. COST 2102 research networking has contributed to modifying the conventional theoretical approach to the cross-modal analysis of verbal and nonverbal communication, changing the concept of face-to-face communication to that of body-to-body communication, as well as developing the idea of embodied information. Information is no longer the result of a difference in perception and is no longer measured in terms of quantity of stimuli, since the research developed in COST 2102 has shown that human information processing is a nonlinear process that cannot be seen as the sum of the numerous pieces of information available. Considering simply the pieces of information available results in a model of the receiver as a mere decoder, and produces a huge simplification of the communication process.
This book constitutes the refereed proceedings of the 6th International Conference on Natural Language Processing, GoTAL 2008, held in Gothenburg, Sweden, in August 2008. The 44 revised full papers presented together with 3 invited talks were carefully reviewed and selected from 107 submissions. The papers address all current issues in computational linguistics and monolingual and multilingual intelligent language processing - theory, methods and applications.