The recent rapid development of transformational grammars has incorporated some strong claims in the areas of semantics and co-occurrence. The earlier structuralists relied on a minimum of information about the meaning of strings of a language. They asked only if strings of sounds were different in meaning - or simply were different words or phrases. Current transformational grammars, on the other hand, set as their goal the production of exactly the meaningful strings of a language. Stated slightly differently, they wish to specify exactly which strings of a language can occur together (meaningfully) in a given order. The present book purports to show that transformational grammar is independent of the current trends in semantics. I claim that exciting and sophisticated transformational grammars are required for describing when strings of a language mean the same, that is, for describing when strings of a language are paraphrases of each other. This task can be quite naturally limited to a project of much weaker semantic claims than those which are current in transformational linguistics.
In this book common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions they convey. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques is exploited on two common sense knowledge bases to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data.
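To make the pipeline above concrete, here is a minimal sketch of clause-level, concept-based polarity inference. It is an illustrative toy under stated assumptions: the concept-polarity dictionary, the function names and the sample sentence are invented and merely stand in for the common sense knowledge bases and the actual sentic computing engine described in the book.

```python
# Minimal sketch of clause-level, concept-based polarity inference.
# The concept dictionary below is a toy stand-in for the common sense
# knowledge bases described above; all entries, names and scores are
# illustrative assumptions, not the authors' data or engine.
import re

CONCEPT_POLARITY = {          # hypothetical concept -> polarity in [-1, 1]
    "small room": -0.3,
    "long wait": -0.6,
    "friendly staff": 0.8,
    "great view": 0.7,
}

def clauses(text: str) -> list[str]:
    """Split text into rough clauses on punctuation and coordinators."""
    parts = re.split(r"[,;.]| but | and ", text.lower())
    return [p.strip() for p in parts if p.strip()]

def clause_polarity(clause: str) -> float:
    """Average the polarity of every known concept found in the clause."""
    hits = [p for concept, p in CONCEPT_POLARITY.items() if concept in clause]
    return sum(hits) / len(hits) if hits else 0.0

if __name__ == "__main__":
    opinion = ("The hotel had a small room and a long wait, "
               "but friendly staff and a great view.")
    for c in clauses(opinion):
        print(f"{c!r:35} -> {clause_polarity(c):+.2f}")
```

The point of the sketch is the granularity: polarity is attached to multi-word concepts and aggregated per clause, rather than per word or per document.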
Rethinking Hypermedia: The Microcosm Approach is essentially the story of the Microcosm hypermedia research and development project that started in the late 1980s and from which has emerged a philosophy that re-examines the whole concept of hypermedia and its role in the evolution of multimedia information systems. The book presents the complete story of Microcosm to date. It sets the development of Microcosm in the context of the history of the subject from which it evolved, as well as the developments in the wider world of technology over the last two decades including personal computing, high-speed communications, and the growth of the Internet. These all lead us towards a world of global integrated information environments: the publishing revolution of the 20th century, in principle making vast amounts of information available to anybody anywhere in the world. Rethinking Hypermedia: The Microcosm Approach explains the role that open hypermedia systems and link services will play in the integrated information environments of the future. It considers issues such as authoring, legacy systems and data integrity, and looks beyond the simple hypertext model provided in the World Wide Web and other systems today to the world of intelligent information processing agents that will help us deal with the problems of information overload and maintenance. Rethinking Hypermedia: The Microcosm Approach will be of interest to all those who are involved in designing, implementing and maintaining hypermedia systems such as the World Wide Web, as it lays the groundwork for producing a system that is both easy to use and easy to maintain. Rethinking Hypermedia: The Microcosm Approach is essential reading for anyone involved in the provision of online information.
Office automation and associated hardware and software technologies are producing significant changes in traditional typing, printing, and publishing techniques and strategies. The long-term impact of current developments is likely to be even more far-reaching, as reducing hardware costs, improved human-computer interfacing, uniformity through standardization, and sophisticated software facilities will all combine to provide systems of power, capability and flexibility. The configuration of the system can be matched to the requirements of the user, whether typist, clerk, secretary, scientist, manager, director, or publisher. Enormous advances are currently being made in publication systems, bringing together text and pictures and aggregating a greater variety of multi-media documents. Advances in technology and reductions in cost and size have produced many 'desk-top' publishing systems in the marketplace. More sophisticated systems are targeted at the high end of the market for newspaper production and quality color output. Outstanding issues in desk-top publishing systems include interactive editing of structured documents, integration of text and graphics, page description languages, standards, and the human-computer interface to documentation systems. The latter area is becoming increasingly important: usability by non-specialists and flexibility across application areas are two current concerns. One of the objectives of current work is to bring the production of high quality documents within the capability of naive users as well as experts.
Information Retrieval (IR) has concentrated on the development of information management systems to support user retrieval from large collections of homogeneous textual material. A variety of approaches have been tried and tested with varying degrees of success over many decades of research. Hypertext (HT) systems, on the other hand, provide a retrieval paradigm based on browsing through a structured information space, following pre-defined connections between information fragments until an information need is satisfied, or appears to be. Information Retrieval and Hypertext addresses the confluence of the areas of IR and HT and explores the work done to date in applying techniques from one area to the other, leading to the development of 'hypertext information retrieval' (HIR) systems. An important aspect of the work in IR/HT and in any user-centred information system is the emergence of multimedia information, and such multimedia information is treated as an integral information type in this text. The contributed chapters cover the development of integrated hypertext information retrieval models, the application of IR and HT techniques in hypertext construction, and the approaches that can be taken in searching HIR systems. These chapters are complemented by two overview chapters covering, respectively, information retrieval and hypertext research and developments. Information Retrieval and Hypertext is important as it is the first text to directly address the combined searching/browsing paradigm of information discovery which is becoming so important in modern computing environments. It will be of interest to researchers and professionals working in a range of areas related to information discovery.
The second edition of the book on language comprehension in honor of Pim Levelt's sixtieth birthday has been released before he turns sixty-one. Some things move faster than the years of age. This seems to be especially true for advances in science. Therefore, the present edition entails changes in some of the chapters and incorporates an update of the current literature. I would like to thank all contributors for their cooperation in making a second edition possible such a short time after the completion of the first one. Angela D. Friederici, Leipzig, November 23, 1998. Preface to the first edition: Language comprehension and production is a uniquely human capability. We know little about the evolution of language as a human trait, possibly because our direct ancestors lived several million years ago. This fact certainly impedes the desirable advances in the biological basis of any theory of language evolution. Our knowledge about language as an existing species-specific biological system, however, has advanced dramatically over the last two decades. New experimental techniques have allowed the investigation of language and language use within the methodological framework of the natural sciences. The present book provides an overview of the experimental research in the area of language comprehension in particular.
This book is about the automatic handling of non-rigid or deformable objects like cables, fabric, or foam rubber. Automation by robots in industrial environments is examined in particular. It discusses several important automation aspects, such as material modelling and simulation, planning and control strategies, collaborative systems, and industrial applications. The book collects contributions from various countries and international projects and therefore provides a representative overview of the state of the art in this field. It is of particular interest for scientists and practitioners in the area of robotics and automation.
This book constitutes the refereed proceedings of the International Conference on Theory and Practice of Digital Libraries, TPDL 2011 - formerly known as ECDL (European Conference on Research and Advanced Technology for Digital Libraries) - held in Berlin, Germany, in September 2011. The 27 full papers, 13 short papers, 9 posters and 9 demos presented in this volume were carefully reviewed and selected from 162 initial submissions. In addition the book contains the abstracts of 2 keynote speeches and an appendix with information on the doctoral consortium and the panel held at the conference. The papers are grouped in topical sections on networked information, semantics and interoperability, systems and architectures, text and multimedia retrieval, collaborative information spaces, DL applications and legal aspects, user interaction and information visualization, user studies, archives and repositories, Europeana, and preservation.
This book constitutes the refereed proceedings of the Second International Conference on Multilingual and Multimodal Information Access Evaluation, CLEF 2011, held in Amsterdam, The Netherlands, in September 2011, in continuation of the popular CLEF campaigns and workshops that have run for the last decade.
This book constitutes the proceedings of the 5th International Conference on Nonlinear Speech Processing, NoLISP 2011, held in Las Palmas de Gran Canaria, Spain, in November 2011. The purpose of the workshop is to present and discuss new ideas, techniques and results related to alternative approaches in speech processing that may depart from the mainstream. The 33 papers presented together with 2 keynote talks were carefully reviewed and selected for inclusion in this book. The topics of NoLISP 2011 were non-linear approximation and estimation; non-linear oscillators and predictors; higher-order statistics; independent component analysis; nearest neighbors; neural networks; decision trees; non-parametric models; dynamics of non-linear systems; fractal methods; chaos modeling; and non-linear differential equations.
Document imaging is a new discipline in applied computer science. It builds bridges between computer graphics, the world of prepress and press, and the areas of color vision and color reproduction. The book is of special relevance to people learning how to utilize and integrate available technology such as digital printing or short-run color, how to make use of CIM techniques for print products, and how to evaluate related technologies that will become relevant in the next few years. This book is the first to give a comprehensive overview of document imaging, the areas involved, and how they relate. For readers with a background in computer graphics it gives insight into all problems related to putting information in print, a field only very thinly covered in textbooks on computer graphics.
We have written this book principally for users and practitioners of computer graphics. In particular, system designers, independent software vendors, graphics system implementers, and application program developers need to understand the basic standards being put in place at the so-called Virtual Device Interface and how they relate to other industry standards, both formal and de facto. Secondarily, the book has been targeted at technical managers and advanced students who need some understanding of the graphics standards and how they fit together, along with a good overview of the Computer Graphics Interface (CGI) proposal and Computer Graphics Metafile (CGM) standard in particular. Part I, Chapters 1, 2, and 3; Part II, Chapters 10 and 11; Part III, Chapters 15, 16, and 17; and some of the Appendices will be of special interest. Finally, these same sections will interest users in government and industry who are responsible for selecting, buying and installing commercial implementations of the standards. The CGM is already a US Federal Information Processing Standard (FIPS 126), and we expect the same status for the CGI when its development is completed and it receives formal approval by the standards-making bodies.
HTML and the Art of Authoring For the World Wide Web is devoted to teaching the Web user how to generate good hypertext. 'As a result of (this) rapid uncontrolled growth, the Web community may be facing a "hypertext crisis". Thousands of hastily written or ill conceived documents may soon be presented to readers poorly formatted or unusable...' (From the Preface.) 'The clear and practical ways in which HTML and the Art of Authoring For the World Wide Web sets forth the principles of the Web, the operation of its servers and browsers, and its publishing concept is commendable. It will be an indispensable guide to the Web author as well as the sophisticated user.' (From the Foreword by Robert Cailliau.) 'Despite its user friendliness, the Web has, by its own virtue, a default that makes it difficult for people to know where to begin: there is no starting point to the Web. Bebo White's HTML and the Art of Authoring For the World Wide Web will fill this gap immediately, as it provides a clear, introductory and sequential description of the fundamental concepts that lie underneath the Web. It describes HTML as an SGML application, explains the relationship between HTML and SGML, and gives a complete description of all the structure that HTML provides.' (From the Foreword by Eric van Herwijnen.)
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimise any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
Intelligent Integration of Information presents a collection of chapters bringing the science of intelligent integration forward. The focus on integration defines tasks that increase the value of information when information from multiple sources is accessed, related, and combined. This contributed volume has also been published as a special double issue of the Journal of Intelligent Information Systems (JIIS), Volume 6:2/3.
The general markup language XML has played an outstanding role in the multiple ways of processing electronic documents, XML being used either in the design of interface structures or as a formal framework for the representation of structure or content-related properties of documents. This book in its 13 chapters discusses aspects of XML-based linguistic information modeling combining: methodological issues, especially with respect to text-related information modeling, application-oriented research and issues of formal foundations. The contributions in this book are based on current research in Text Technology, Computational Linguistics and in the international domain of evolving standards for language resources. Recurrent themes in this book are markup languages, explored from different points of view, and topics of text-related information modeling. These topics have been core areas of the research unit "Text-technological Information Modeling" (www.text-technology.de) funded from 2002 to 2009 by the German Research Foundation (DFG). Positions developed in this book could also benefit from the presentations and discussion at the conference "Modelling Linguistic Information Resources" at the Center for Interdisciplinary Research (Zentrum für interdisziplinäre Forschung, ZiF) at Bielefeld, a center for advanced studies known for its international and interdisciplinary meetings and research. The editors would like to thank the DFG and ZiF for their financial support, the publisher, the series editors, the reviewers and those people that helped to prepare the manuscript, especially Carolin Kram, Nils Diewald, Jens Stegmann and Peter M. Fischer and, last but not least, all of the authors.
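As a small, hedged illustration of the kind of XML-based linguistic information modeling discussed here, the sketch below builds and serializes a token-level annotation inline; the element and attribute names are invented for the example and do not follow any particular standard covered in the book.

```python
# Tiny illustration of inline XML markup for token-level linguistic
# information. The element and attribute names (<sentence>, <token>,
# lemma, pos) are invented for this sketch and do not follow any
# particular standard or format discussed in the book.
import xml.etree.ElementTree as ET

sentence = ET.Element("sentence", id="s1")
for form, lemma, pos in [("Dogs", "dog", "NOUN"), ("bark", "bark", "VERB")]:
    token = ET.SubElement(sentence, "token", lemma=lemma, pos=pos)
    token.text = form

# Serialize the annotated sentence to an XML string.
print(ET.tostring(sentence, encoding="unicode"))
```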
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 8th International Tbilisi Symposium on Logic, Language, and Computation, TbiLLC 2009, held in Bakuriani, Georgia, in September 2009. The 20 revised full papers included in the book were carefully reviewed and selected from numerous presentations given at the symposium. The focus of the papers is on the following topics: natural language syntax, semantics, and pragmatics; constructive, modal and algebraic logic; linguistic typology and semantic universals; logics for artificial intelligence; information retrieval, query answer systems; logic, games, and formal pragmatics; language evolution and learnability; computational social choice; historical linguistics, history of logic.
This book constitutes the refereed proceedings of the 16th International Conference on Applications of Natural Language to Information Systems, held in Alicante, Spain, in June 2011. The 11 revised full papers and 11 revised short papers presented together with 23 poster papers, 1 invited talk and 6 papers of the NLDB 2011 doctoral symposium were carefully reviewed and selected from 74 submissions. The papers address all aspects of Natural Language Processing and related areas, and present current research on topics such as natural language in conceptual modeling, NL interfaces for database querying/retrieval, NL-based integration of systems, large-scale online linguistic resources, applications of computational linguistics in information systems, management of textual databases, NL on data warehouses and data mining, NLP applications, as well as NL and ubiquitous computing.
This book constitutes the refereed proceedings of the 24th Conference on Artificial Intelligence, Canadian AI 2011, held in St. John's, Canada, in May 2011. The 23 revised full papers presented together with 22 revised short papers and 5 papers from the graduate student symposium were carefully reviewed and selected from 81 submissions. The papers cover a broad range of topics presenting original work in all areas of artificial intelligence, either theoretical or applied.
This volume constitutes the refereed proceedings of the 4th International Conference on Internationalization, Design and Global Development, IDGD 2011, held in Orlando, FL, USA, in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011. The 71 revised papers presented were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the entire field of internationalization, design and global development and address the following major topics: Cultural and cross-cultural design, culture and usability, design, emotion, trust and aesthetics, cultural issues in business and industry, culture, communication and society.
Text classification is becoming a crucial task for analysts in different areas. In the last few decades, the production of textual documents in digital form has increased exponentially. Their applications range from web pages to scientific documents, including emails, news and books. Despite the widespread use of digital texts, handling them is inherently difficult - the large amount of data necessary to represent them and the subjectivity of classification complicate matters. This book gives a concise view of how to use kernel approaches for inductive inference in large-scale text classification; it presents a series of new techniques to enhance, scale and distribute text classification tasks. It is not intended to be a comprehensive survey of the state of the art of the whole field of text classification. Its purpose is less ambitious and more practical: to explain and illustrate some of the important methods used in this field, in particular kernel approaches and techniques.
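As a minimal, hedged illustration of the general idea rather than the book's own techniques, the sketch below classifies a toy corpus with a support vector machine whose kernel operates on TF-IDF document vectors; the corpus, labels and choice of scikit-learn are assumptions made for the example.

```python
# Minimal illustration of kernel-based text classification: documents are
# mapped to TF-IDF vectors and a support vector machine separates the
# classes via a kernel. The tiny corpus, labels and library choice are
# assumptions made for this sketch; the book's own large-scale and
# distributed techniques are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

docs = [
    "stock markets rally as earnings beat forecasts",
    "central bank raises interest rates again",
    "midfielder scores twice in cup final",
    "coach praises defence after narrow win",
]
labels = ["finance", "finance", "sport", "sport"]

# Linear kernel on TF-IDF vectors; other kernels (e.g. RBF) can be
# substituted via the `kernel` argument.
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(docs, labels)

print(model.predict(["rates rise as markets fall"]))   # expected: ['finance']
print(model.predict(["striker scores in the final"]))  # expected: ['sport']
```

Scaling such classifiers to large collections and distributing the work are exactly the concerns the book then takes up.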
As its title suggests, "Uncertainty Management in Information Systems" is a book about how information systems can be made to manage information permeated with uncertainty. This subject is at the intersection of two areas of knowledge: information systems is an area that concentrates on the design of practical systems that can store and retrieve information; uncertainty modeling is an area in artificial intelligence concerned with accurate representation of uncertain information and with inference and decision-making under conditions infused with uncertainty. New applications of information systems require stronger capabilities in the area of uncertainty management. Our hope is that lasting interaction between these two areas would facilitate a new generation of information systems that will be capable of servicing these applications. Although there are researchers in information systems who have addressed themselves to issues of uncertainty, as well as researchers in uncertainty modeling who have considered the pragmatic demands and constraints of information systems, to a large extent there has been only limited interaction between these two areas. As the subtitle, "From Needs to Solutions," indicates, this book presents viewpoints of information systems experts on the needs that challenge the uncertainty capabilities of present information systems, and it provides a forum to researchers in uncertainty modeling to describe models and systems that can address these needs.
When individuals read or listen to prose they try to understand what it means. This is quite obvious. However, the cognitive mechanisms that participate in prose comprehension are far from obvious. Even simple stories involve complexities that have stymied many cognitive scientists. Why is prose comprehension so difficult to study? Perhaps because comprehension is guided by so many domains of knowledge. Perhaps because some critical mysteries of prose comprehension reside between the lines - in the mind of the comprehender. Ten years ago very few psychologists were willing to dig beyond the surface of explicit code in their studies of discourse processing. Tacit knowledge, world knowledge, inferences, and expectations were slippery notions that experimental psychologists managed to circumvent rather than understand. In many scientific circles it was taboo to investigate mechanisms and phenomena that are not directly governed by the physical stimulus. Fortunately, times have changed. Cognitive scientists are now vigorously exploring the puzzles of comprehension that lie beyond the word. The study of discourse processing is currently growing at a frenetic pace.
Summary: This book was written primarily for people who intend or wish to develop new machines for the output of typefaces. It is practical to categorize equipment into three groups for which digital alphabets are required: 1) display devices, 2) typesetting machines and 3) numerically controlled (NC) machines. Until now, development of typefaces has been overly dependent upon the design of the respective machine on which it was to be used. This need not be the case. Digitization of type should be undertaken in two steps: the preparation of a database using hand-digitization, and the subsequent automatic generation of machine formats by soft scanning, through the use of a computer-based program. Digital formats for typefaces are ideally suited to systematic ordering, as are coding techniques. In this volume, various formats are investigated, their properties discussed and relative production requirements analyzed. Appendices provide readers additional information, largely on digital formats for typeface storage introduced by the IKARUS system. This book was composed in Latino type, developed by Hermann Zapf from his Melior for URW in 1990. Composition was accomplished on a Linotronic 300, as well as on an Agfa 9400 typesetter using PostScript. Preface: This book was brought out by URW Publishers in 1986 with the title "Digital Formats for Typefaces". It was translated into English in 1987, Japanese in 1989 and French in 1991.
8.5 Summary. In this chapter we have identified three basic patterns of influences that lead to ambiguity in the QP analysis of the basic active furnace state. We have then shown how modification of these patterns, by adding equilibrium values and sensitivity annotations on influence arcs, could permit resolution of the ambiguities. Finally, we have described in detail the extensions needed to the basic influence resolution algorithm in QP theory to operate on these extended descriptions. We have also shown that the modified influence resolution algorithm corrects an error in Forbus' original method for combining influences. We have then presented an extended example in which introduction of equilibrium assumptions eliminates all ambiguity in the influence resolution deduction. In the next chapter we extend these techniques further, by developing a qualitative perturbation analysis technique that permits us to answer "what if" control questions; then we extend this technique to obtain quantitative, as well as qualitative, effects of hypothetical control actions.
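A schematic sketch may help convey the underlying issue: when a quantity is subject to both positive and negative influences, purely qualitative combination is ambiguous unless extra information resolves it. The toy below uses invented names and a simplistic dominance hint; it is not the extended influence resolution algorithm developed in the chapter.

```python
# Schematic toy illustrating qualitative influence combination: the sign
# of a quantity's net change is the combination of the signs of its
# influences, and opposing influences are ambiguous unless additional
# information resolves them. The names and the simple "dominant" hint are
# invented for this sketch; this is not the chapter's extended algorithm.
from typing import Optional

def combine(signs: list[str], dominant: Optional[str] = None) -> str:
    """Combine influence signs '+', '-', '0' into '+', '-', '0' or '?'."""
    has_pos, has_neg = "+" in signs, "-" in signs
    if has_pos and has_neg:          # opposing influences: ambiguous without a hint
        return dominant if dominant in ("+", "-") else "?"
    if has_pos:
        return "+"
    if has_neg:
        return "-"
    return "0"

# Furnace-style example: heat inflow (+) and heat loss (-) both influence
# temperature; qualitatively the net effect is ambiguous until extra
# knowledge (here, which influence dominates) is supplied.
print(combine(["+", "-"]))                # -> '?'
print(combine(["+", "-"], dominant="+"))  # -> '+'
```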