We have written this book principally for users and practitioners of computer graphics. In particular, system designers, independent software vendors, graphics system implementers, and application program developers need to understand the basic standards being put in place at the so-called Virtual Device Interface and how they relate to other industry standards, both formal and de facto. Secondarily, the book has been targeted at technical managers and advanced students who need some understanding of the graphics standards and how they fit together, along with a good overview of the Computer Graphics Interface (CGI) proposal and the Computer Graphics Metafile (CGM) standard in particular. Part I, Chapters 1, 2, and 3; Part II, Chapters 10 and 11; Part III, Chapters 15, 16, and 17; and some of the Appendices will be of special interest. Finally, these same sections will interest users in government and industry who are responsible for selecting, buying, and installing commercial implementations of the standards. The CGM is already a US Federal Information Processing Standard (FIPS 126), and we expect the same status for the CGI when its development is completed and it receives formal approval by the standards-making bodies.
HTML and the Art of Authoring For the World Wide Web is devoted to teaching the Web user how to generate good hypertext. "As a result of (this) rapid uncontrolled growth, the Web community may be facing a 'hypertext crisis'. Thousands of hastily written or ill-conceived documents may soon be presented to readers poorly formatted or unusable..." (From the Preface.) "The clear and practical ways in which HTML and the Art of Authoring For the World Wide Web sets forth the principles of the Web, the operation of its servers and browsers, and its publishing concept is commendable. It will be an indispensable guide to the Web author as well as the sophisticated user." (From the Foreword by Robert Cailliau.) "Despite its user friendliness, the Web has, by its own virtue, a default that makes it difficult for people to know where to begin: there is no starting point to the Web. Bebo White's HTML and the Art of Authoring For the World Wide Web will fill this gap immediately, as it provides a clear, introductory and sequential description of the fundamental concepts that lie underneath the Web. It describes HTML as an SGML application, explains the relationship between HTML and SGML, and gives a complete description of all the structure that HTML provides." (From the Foreword by Eric van Herwijnen.)
Intelligent Integration of Information presents a collection of chapters bringing the science of intelligent integration forward. The focus on integration defines tasks that increase the value of information when information from multiple sources is accessed, related, and combined. This contributed volume has also been published as a special double issue of the Journal of Intelligent Information Systems (JIIS), Volume 6:2/3.
Explorations in Automatic Thesaurus Discovery presents an automated method for creating a first-draft thesaurus from raw text. It describes the natural language processing steps of tokenization, surface syntactic analysis, and syntactic attribute extraction. From these attributes, word and term similarity is calculated and a thesaurus is created showing important common terms and their relation to each other, common verb-noun pairings, common expressions, and word family members. The techniques are tested on twenty different corpora, ranging from baseball newsgroups, assassination archives, medical X-ray reports, and abstracts on AIDS to encyclopedia articles on animals, and even the text of the book itself. The corpora range from 40,000 to 6 million characters of text, and results are presented for each in the Appendix. The methods described in the book have undergone extensive evaluation. Their time and space complexity are shown to be modest, and the results are shown to converge to a stable state as the corpus grows. The similarities calculated are compared to those produced by psychological testing, and a method of evaluation using Artificial Synonyms is tested. Gold-standard evaluations show that the techniques significantly outperform non-linguistic-based techniques for the most important words in the corpora. Explorations in Automatic Thesaurus Discovery includes applications to information retrieval using established testbeds, to the enrichment of existing thesauri, and to semantic analysis. Also included are applications showing how to create, implement, and test a first-draft thesaurus.
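As a rough, hypothetical sketch of the general idea (not the book's actual pipeline, which relies on surface syntactic analysis rather than a simple co-occurrence window), the Python fragment below derives word similarities from shared contexts, the kind of calculation from which a first-draft thesaurus can be assembled:

```python
# Illustrative sketch only: score word similarity by comparing the contexts
# in which words appear; similar words become thesaurus candidates.
from collections import Counter, defaultdict
from math import sqrt

def context_profiles(sentences, window=2):
    """Build a co-occurrence profile per word (a crude stand-in for the
    syntactic attributes used in the book)."""
    profiles = defaultdict(Counter)
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    profiles[word][tokens[j]] += 1
    return profiles

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[k] * b[k] for k in a if k in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

corpus = [["the", "pitcher", "threw", "the", "ball"],
          ["the", "catcher", "caught", "the", "ball"]]
profiles = context_profiles(corpus)
print(cosine(profiles["pitcher"], profiles["catcher"]))  # shared contexts -> nonzero score
```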
This book deals with the computational application of systemic functional grammar (SFG) for natural language generation. More particularly, it first describes the implementation of a fragment of the grammar of German in the computational framework of KOMET-PENMAN for multilingual generation. Second, it presents a specification of explicit well-formedness constraints on syntagmatic structure which are defined in the form of typed feature structures. It thus achieves a model of systemic functional grammar that unites both the strengths of systemics, such as stratification, functional diversification, the orientation to context, etc., and the kinds of syntactic generalizations that are typically found in modern, syntagmatically focused computational grammars. Elke Teich worked as a researcher in the KOMET project for text generation at the German National Research Centre for Information Technology, Institute for Integrated Publication and Information Systems, Darmstadt, from 1990 to 1996. She is now a Research Associate at the Institute for Applied Linguistics, Translating and Interpreting, University of the Saarland, Saarbrücken.
The general markup language XML has played an outstanding role in the multiple ways of processing electronic documents, XML being used either in the design of interface structures or as a formal framework for the representation of structure or content-related properties of documents. This book in its 13 chapters discusses aspects of XML-based linguistic information modeling, combining methodological issues, especially with respect to text-related information modeling, application-oriented research, and issues of formal foundations. The contributions in this book are based on current research in Text Technology, Computational Linguistics and the international domain of evolving standards for language resources. Recurrent themes in this book are markup languages, explored from different points of view, and topics of text-related information modeling. These topics have been core areas of the research unit "Text-technological Information Modeling" (www.text-technology.de), funded from 2002 to 2009 by the German Research Foundation (DFG). Positions developed in this book could also benefit from the presentations and discussion at the conference "Modelling Linguistic Information Resources" at the Center for Interdisciplinary Research (Zentrum für interdisziplinäre Forschung, ZiF) at Bielefeld, a center for advanced studies known for its international and interdisciplinary meetings and research. The editors would like to thank the DFG and ZiF for their financial support, the publisher, the series editors, the reviewers and those people who helped to prepare the manuscript, especially Carolin Kram, Nils Diewald, Jens Stegmann and Peter M. Fischer, and, last but not least, all of the authors.
This book constitutes the refereed proceedings of the 6th Metadata and Semantics Research Conference, MTSR 2012, held in Cadiz, Spain, in November 2012. The 33 revised papers presented were carefully reviewed and selected from 85 submissions. The papers are organized in a general main track and several special tracks: one on metadata and semantics for open access repositories, research information systems and infrastructures; a second on metadata and semantics for cultural collections and applications; and a third on metadata and semantics for agriculture, food and environment.
DSSSL (Document Style Semantics and Specification Language) is an ISO standard (ISO/IEC 10179:1996) published in 1996. DSSSL is a standard of the SGML family (Standard Generalized Markup Language, ISO 8879:1986), whose aim is to establish a processing model for SGML documents. For a good understanding of the SGML standard, many books exist, including the Author's Guide [Bryan 1988] and The SGML Handbook [Goldfarb 1990]. A DSSSL document is an SGML document, written with the same rules that guide any SGML document. The structure of a DSSSL document is explained in Chapter 2. DSSSL is based, in part, on Scheme, a standard functional programming language. The DSSSL subset of Scheme, along with the procedures supported by DSSSL, is explained in Chapter 3. The DSSSL standard starts with the supposition of a pre-existing SGML document, and offers a series of processes that can be performed on it: * Groves: the first process performed on an SGML document in DSSSL is always the analysis of the document and the creation of a grove. The DSSSL standard shares many common characteristics with another standard of the SGML family, HyTime (ISO/IEC 10744). These standards were developed in parallel, and their developers designed a common data model, the grove, that would support the processing needs of each standard.
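As a loose illustration of the grove idea (a tree of nodes built by analysing a marked-up document), here is a minimal Python sketch using the standard library's XML parser; DSSSL itself operates on SGML and expresses processing in a Scheme subset, so this is an analogy rather than DSSSL code:

```python
# Analogy only: parse a small marked-up document into a tree of nodes and
# walk it, roughly what grove construction does for an SGML document.
import xml.etree.ElementTree as ET

doc = "<article><title>Groves</title><para>A tree of nodes.</para></article>"
root = ET.fromstring(doc)

def walk(node, depth=0):
    # Print each node with its element name and any text content.
    print("  " * depth + node.tag, (node.text or "").strip())
    for child in node:
        walk(child, depth + 1)

walk(root)
```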
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
Document Computing: Technologies for Managing Electronic Document Collections discusses the important aspects of document computing and recommends technologies and techniques for document management, with an emphasis on the processes that are appropriate when computers are used to create, access, and publish documents. This book includes descriptions of the nature of documents, their components and structure, and how they can be represented; examines how documents are used and controlled; explores the issues and factors affecting design and implementation of a document management strategy; and gives a detailed case study. The analysis and recommendations are grounded in the findings of the latest research. Document Computing: Technologies for Managing Electronic Document Collections brings together concepts, research, and practice from diverse areas including document computing, information retrieval, librarianship, records management, and business process re-engineering. It will be of value to anyone working in these areas, whether as a researcher, a developer, or a user. Document Computing: Technologies for Managing Electronic Document Collections can be used for graduate classes in document computing and related fields, by developers and integrators of document management systems and document management applications, and by anyone wishing to understand the processes of document management.
Cooperative Computer-Aided Authoring and Learning: A Systems Approach describes in detail a practical system for computer-assisted authoring and learning. Drawing from the experiences gained during the Nestor project, jointly run between the Universities of Karlsruhe, Kaiserslautern and Freiburg and the Digital Equipment Corp. Center for Research and Advanced Development, the book presents a concrete example of new concepts in the domain of computer-aided authoring and learning. The conceptual foundation is laid by a reference architecture for an integrated environment for authoring and learning. This overall architecture represents the nucleus, shell and common denominator for the R&D activities carried out. From its conception, the reference architecture was centered around three major issues: * Cooperation among and between authors and learners in an open, multimedia and distributed system as the most important attribute; * Authoring/learning as the central topic; * Laboratory as the term which evoked the most suitable association with the envisioned authoring/learning environment. Within this framework, the book covers four major topics which denote the most important technical domains, namely: * The system kernel, based on object orientation and hypermedia; * Distributed multimedia support; * Cooperation support, and * Reusable instructional design support. Cooperative Computer-Aided Authoring and Learning: A Systems Approach is a major contribution to the emerging field of collaborative computing and is essential reading for researchers and practitioners alike. Its pedagogic flavor also makes it suitable for use as a text for a course on the subject.
"Cognitive and Computational Strategies for Word Sense
Disambiguation" examines cognitive strategies by humans and
computational strategies by machines, for WSD in parallel.
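For readers who want to see one computational WSD strategy in action, the following Python sketch applies the classic Lesk algorithm as packaged in NLTK; this is only an illustrative example, not a method taken from the book:

```python
# Illustrative computational WSD: pick the WordNet sense of "bank" whose
# gloss best overlaps the surrounding context (the Lesk algorithm).
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

sentence = "I went to the bank to deposit my money".split()
sense = lesk(sentence, "bank", "n")  # returns a WordNet Synset, or None
print(sense, "-", sense.definition() if sense else "no sense found")
```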
This volume constitutes the refereed proceedings of the 4th International Conference on Internationalization, Design and Global Development, IDGD 2011, held in Orlando, FL, USA, in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011. The 71 revised papers presented were carefully reviewed and selected from numerous submissions. The papers accepted for presentation cover the field of internationalization, design and global development and address the following major topics: cultural and cross-cultural design; culture and usability; design, emotion, trust and aesthetics; cultural issues in business and industry; and culture, communication and society.
This book constitutes the refereed proceedings of the 24th Conference on Artificial Intelligence, Canadian AI 2011, held in St. John's, Canada, in May 2011. The 23 revised full papers presented together with 22 revised short papers and 5 papers from the graduate student symposium were carefully reviewed and selected from 81 submissions. The papers cover a broad range of topics presenting original work in all areas of artificial intelligence, either theoretical or applied.
Text classification is becoming a crucial task for analysts in many different areas. In the last few decades, the production of textual documents in digital form has increased exponentially, and their applications range from web pages to scientific documents, including emails, news and books. Despite the widespread use of digital texts, handling them is inherently difficult: the large amount of data needed to represent them and the subjectivity of classification complicate matters. This book gives a concise view of how to use kernel approaches for inductive inference in large-scale text classification, and it presents a series of new techniques to enhance, scale and distribute text classification tasks. It is not intended to be a comprehensive survey of the state of the art of the whole field of text classification. Its purpose is less ambitious and more practical: to explain and illustrate some of the important methods used in this field, in particular kernel approaches and techniques.
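As a minimal sketch of the kind of kernel method the book discusses (using scikit-learn, which is an assumption of this example rather than anything prescribed by the book), a TF-IDF representation can be fed into a support vector machine with a chosen kernel:

```python
# Minimal kernel-based text classification: TF-IDF features into an SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

docs = ["stock markets fell sharply today",
        "the striker scored a late goal",
        "central bank raises interest rates",
        "the match ended in a penalty shootout"]
labels = ["finance", "sport", "finance", "sport"]

# A linear kernel is shown here; other kernels (e.g. rbf, string kernels)
# plug into the same interface.
clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
clf.fit(docs, labels)
print(clf.predict(["goal scored in the final minute"]))  # expected: ['sport']
```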
As its title suggests, "Uncertainty Management in Information Systems" is a book about how information systems can be made to manage information permeated with uncertainty. This subject is at the intersection of two areas of knowledge: information systems is an area that concentrates on the design of practical systems that can store and retrieve information; uncertainty modeling is an area in artificial intelligence concerned with accurate representation of uncertain information and with inference and decision-making under conditions infused with uncertainty. New applications of information systems require stronger capabilities in the area of uncertainty management. Our hope is that lasting interaction between these two areas will facilitate a new generation of information systems capable of servicing these applications. Although there are researchers in information systems who have addressed themselves to issues of uncertainty, as well as researchers in uncertainty modeling who have considered the pragmatic demands and constraints of information systems, to a large extent there has been only limited interaction between these two areas. As the subtitle, "From Needs to Solutions," indicates, this book presents the viewpoints of information systems experts on the needs that challenge the uncertainty capabilities of present information systems, and it provides a forum for researchers in uncertainty modeling to describe models and systems that can address these needs.
This book was written primarily for people who intend or wish to develop new machines for the output of typefaces. It is practical to categorize equipment into three groups for which digital alphabets are required: 1) display devices, 2) typesetting machines and 3) numerically controlled (NC) machines. Until now, development of typefaces has been overly dependent upon the design of the respective machine on which it was to be used. This need not be the case. Digitization of type should be undertaken in two steps: the preparation of a database using hand-digitization, and the subsequent automatic generation of machine formats by soft scanning, through the use of a computer-based program. Digital formats for typefaces are ideally suited to systematic ordering, as are coding techniques. In this volume, various formats are investigated, their properties discussed and relative production requirements analyzed. Appendices provide readers additional information, largely on digital formats for typeface storage introduced by the IKARUS system. This book was composed in Latino type, developed by Hermann Zapf from his Melior for URW in 1990. Composition was accomplished on a Linotronic 300, as well as on an Agfa 9400 typesetter using PostScript. The book was first brought out by URW Publishers in 1986 with the title "Digital Formats for Typefaces". It was translated into English in 1987, Japanese in 1989 and French in 1991.
"Predicting Prosody from Text for Text-to-Speech Synthesis"covers thespecific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing."
This book is based on the NATO Advanced Research Workshop on Structures of Communication and Intelligent Help for Hypermedia Courseware, which was held at Espinho, Portugal, April 19-24, 1990. The texts included here should not be regarded as untouched proceedings of this meeting, but as the result of the reflections which took place there and which led the authors to revise their texts in that light. The Espinho ARW was itself to some extent the continuation of the ARW on Designing Hypermedia/Hypertext for Learning, held in Germany in 1989 (D. H. Jonassen, H. Mandl (eds.): Designing Hypermedia for Learning. NATO ASI Series F, Vol. 67. Springer 1990). At that meeting an essential conclusion became apparent: the importance and interest of hypermedia products as potential pedagogical tools. It was then already predictable that the enormous evolution of hypermedia would lead to its association with multimedia technologies, namely for the production of courseware. Parallel to the improvement of the didactic potential and quality which results from this association, it nevertheless brought along a natural array of difficulties, some old, some new, in the conception and use of hypermedia products. Today there is agreement that one of the most promising technological advances for education is represented by the use of text, sound and images based on nonlinear techniques of information handling and searching of hypermedia architectures. The problem of hypermedia is fundamentally one of communication; this leads to an attempt at defining a language for hypermedia.
This book contains the reports of selected projects involving natural language communication with pictorial information systems. More than just a record of research results, however, it presents concrete applications to the solution of a wide variety of problems. The authors are all prominent figures in the field whose authoritative contributions help ensure its continued expansion in both size and significance. Y. C. Lee and K. S. Fu (Purdue University, USA) survey picture query languages, which form an interface between the pictorial database system and the user and support information retrieval, data entry and manipulation, data analysis and output generation. They include explicit picture query languages that augment alphanumeric data query languages as well as languages and command sets which are implicitly embedded in a pictorial information system but perform similar functions. It is worth mentioning that some forms of query languages can be transformed from a given set of natural language sentences by using ATNs (Augmented Transition Networks), which consequently allows for natural language communication with information systems.