1.1 OBJECTIVES The main objective of this joint work is to bring together some ideas that have played central roles in two disparate theoretical traditions in order to contribute to a better understanding of the relationship between focus and the syntactic and semantic structure of sentences. Within the Prague School tradition and the branch of its contemporary development represented by Hajicova and Sgall (HS in the sequel), topic-focus articulation has long been a central object of study, and it has long been a tenet of Prague School linguistics that topic-focus structure has systematic relevance to meaning. Within the formal semantics tradition represented by Partee (BHP in the sequel), focus has much more recently become an area of concerted investigation, but a number of the semantic phenomena to which focus is relevant have been extensively investigated and given explicit compositional semantic analyses. The emergence of 'tripartite structures' (see Chapter 2) in formal semantics and the partial similarities that can be readily observed between some aspects of tripartite structures and some aspects of Praguian topic-focus articulation have led us to expect that a closer investigation of the similarities and differences in these different theoretical constructs would be a rewarding undertaking, with mutual benefits for the further development of our respective theories and potential benefit for the study of semantic effects of focus in other theories as well.
In recent years, computer scientists have shown an increasing interest in the structure of biological molecules and the ways in which they can be manipulated in vitro, with a view to defining theoretical models of computation based on genetic engineering tools. Along the same lines, a parallel interest is growing in the process of evolution of living organisms. Much of the current data for genomes is expressed in the form of maps, which are now becoming available and permit the study of the evolution of organisms at the scale of the genome for the first time. On the other hand, there is an active trend throughout the field of computational biology toward abstracted, hierarchical views of biological sequences, which is very much in the spirit of computational linguistics. In recent decades, results and methods in the field of formal language theory that might be applied to the description of biological sequences have been pointed out.
Web Personalization can be defined as any set of actions that can tailor the Web experience to a particular user or set of users. To achieve effective personalization, organizations must rely on all available data, including usage and click-stream data (reflecting user behaviour), the site content, the site structure, domain knowledge, as well as user demographics and profiles. In addition, efficient and intelligent techniques are needed to mine this data for actionable knowledge, and to effectively use the discovered knowledge to enhance the users' Web experience. These techniques must address important challenges emanating from the size and the heterogeneous nature of the data itself, as well as the dynamic nature of user interactions with the Web. These challenges include the scalability of the personalization solutions, data integration, and successful integration of techniques from machine learning, information retrieval and filtering, databases, agent architectures, knowledge representation, data mining, text mining, statistics, user modelling and human-computer interaction. The Semantic Web adds one more dimension to this. The workshop will focus on the semantic web approach to personalization and adaptation. The Web has become an integral part of numerous applications in which a user interacts with a service provider, product sellers, governmental organisations, friends and colleagues. Content and services are available at different sources and places. Hence, Web applications need to combine all available knowledge in order to form personalized, user-friendly, and business-optimal services.
In the fall of 1985 Carnegie Mellon University established a Department of Philosophy. The focus of the department is logic broadly conceived, philosophy of science, in particular of the social sciences, and linguistics. To mark the inauguration of the department, a daylong celebration was held on April 5, 1986. This celebration consisted of two keynote addresses by Patrick Suppes and Thomas Schwartz, seminars directed by members of the department, and a panel discussion on the computational model of mind moderated by Dana S. Scott. The various contributions, in modified and expanded form, are the core of this collection of essays, and they are, I believe, of more than parochial interest: they turn attention to substantive and reflective interdisciplinary work. The collection is divided into three parts. The first part gives perspectives (i) on general features of the interdisciplinary enterprise in philosophy (by Patrick Suppes, Thomas Schwartz, Herbert A. Simon, and Clark Glymour), and (ii) on a particular topic that invites such interaction, namely computational models of the mind (with contributions by Gilbert Harman, John Haugeland, Jay McClelland, and Allen Newell). The second part contains (mostly informal) reports on concrete research done within that enterprise; the research topics range from decision theory and the philosophy of economics through foundational problems in mathematics to issues in aesthetics and computational linguistics. The third part is a postscriptum by Isaac Levi, analyzing directions of (computational) work from his perspective.
The contributors present the main results and techniques of their specialties in an easily accessible way, accompanied by many references: historical notes, hints for complete proofs or solutions to exercises, and directions for further research. This volume contains applications which have not appeared in any collection of this type. The book is a general source of information on computation theory, at both the undergraduate and research levels.
This book introduces Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in the form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 the verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.
ABOUT THIS BOOK This book is intended for researchers who want to keep abreast of current developments in corpus-based natural language processing. It is not meant as an introduction to this field; for readers who need one, several entry-level texts are available, including those of (Church and Mercer, 1993; Charniak, 1993; Jelinek, 1997). This book captures the essence of a series of highly successful workshops held in the last few years. The response in 1993 to the initial Workshop on Very Large Corpora (Columbus, Ohio) was so enthusiastic that we were encouraged to make it an annual event. The following year, we staged the Second Workshop on Very Large Corpora in Kyoto. As a way of managing these annual workshops, we then decided to register a special interest group called SIGDAT with the Association for Computational Linguistics. The demand for international forums on corpus-based NLP has been expanding so rapidly that in 1995 SIGDAT was led to organize not only the Third Workshop on Very Large Corpora (Cambridge, Mass.) but also a complementary workshop entitled From Texts to Tags (Dublin). Obviously, the success of these workshops was in some measure a reflection of the growing popularity of corpus-based methods in the NLP community. But first and foremost, it was due to the fact that the workshops attracted so many high-quality papers.
This book is a revised version of my doctoral thesis, which was submitted in April 1993. The main extension is a chapter on evaluation of the system described in Chapter 8, as this is clearly an issue which was not treated in the original version. This required the collection of data, the development of a concept for diagnostic evaluation of linguistic word recognition systems and, of course, the actual evaluation of the system itself. The revisions made primarily concern the presentation of the latest version of the SILPA system, described in an additional Subsection 8.3, the development environment for SILPA in Section 8.4, and the diagnostic evaluation of the system as an additional Chapter 9. Some updates are included in the discussion of phonology and computation in Chapter 2 and finite-state techniques in computational phonology in Chapter 3. The thesis was designed primarily as a contribution to the area of computational phonology. However, it addresses issues which are relevant within the disciplines of general linguistics, computational linguistics and, in particular, speech technology, in providing a detailed declarative, computationally interpreted linguistic model for application in spoken language processing. Time Map Phonology is a novel, constraint-based approach based on a two-stage temporal interpretation of phonological categories as events.
A history of machine translation (MT) from the point of view of a major writer and innovator in the field is the subject of this book. It details the deep differences between rival groups on how best to do MT, and presents a global perspective covering historical and contemporary systems in Europe, the US and Japan. The author considers MT as a fundamental part of Artificial Intelligence and the ultimate test-bed for all computational linguistics.
Most books about computational (lexical) semantic lexicons deal with the depth (or content) aspect of lexicons, ignoring the breadth (or coverage) aspect. This book presents a first attempt in the community to address both issues, content and coverage of computational semantic lexicons, in a thorough manner. Moreover, it addresses issues which have not yet been tackled in implemented systems, such as the application time of lexical rules. Lexical rules and lexical underspecification are also contrasted in implemented systems. The main approaches in the field of computational (lexical) semantics are represented in the present book (including WordNet, Cyc, Mikrokosmos, and the Generative Lexicon). This book embraces several fields (and subfields) as different as linguistics (theoretical, computational, semantics, pragmatics), psycholinguistics, cognitive science, computer science, artificial intelligence, knowledge representation, statistics and natural language processing. The book also constitutes a very good introduction to the state of the art in computational semantic lexicons of the late 1990s.
The last decade has been one of dramatic progress in the field of Natural Language Processing (NLP). This hitherto largely academic discipline has found itself at the center of an information revolution ushered in by the Internet age, as demand for human-computer communication and information access has exploded. Emerging applications in computer-assisted information production and dissemination, automated understanding of news, understanding of spoken language, and processing of foreign languages have given impetus to research that resulted in a new generation of robust tools, systems, and commercial products. Well-positioned government research funding, particularly in the U.S., has helped to advance the state of the art at an unprecedented pace, in no small measure thanks to the rigorous evaluations. This volume focuses on the use of Natural Language Processing in Information Retrieval (IR), an area of science and technology that deals with cataloging, categorization, classification, and search of large amounts of information, particularly in textual form. An outcome of an information retrieval process is usually a set of documents containing information on a given topic, and may consist of newspaper-like articles, memos, reports of any kind, entire books, as well as annotated image and sound files. Since we assume that the information is primarily encoded as text, IR is also a natural language processing problem: in order to decide if a document is relevant to a given information need, one needs to be able to understand its content.
It is with great pleasure that we are presenting to the community the second edition of this extraordinary handbook. It has been over 15 years since the publication of the first edition and there have been great changes in the landscape of philosophical logic since then. The first edition has proved invaluable to generations of students and researchers in formal philosophy and language, as well as to consumers of logic in many applied areas. The main logic article in the Encyclopaedia Britannica 1999 described the first edition as 'the best starting point for exploring any of the topics in logic'. We are confident that the second edition will prove to be just as good! The first edition was the second handbook published for the logic community. It followed the North Holland one-volume Handbook of Mathematical Logic, published in 1977, edited by the late Jon Barwise. The four-volume Handbook of Philosophical Logic, published 1983-1989, came at a fortunate temporal junction in the evolution of logic. This was the time when logic was gaining ground in computer science and artificial intelligence circles. These areas were under increasing commercial pressure to provide devices which help and/or replace the human in his daily activity. This pressure required the use of logic in the modelling of human activity and organisation on the one hand, and to provide the theoretical basis for computer program constructs on the other.
Researchers in a number of disciplines deal with large text sets requiring both text management and text analysis. Faced with a large amount of textual data collected in marketing surveys, literary investigations, historical archives and documentary data bases, these researchers require assistance with organizing, describing and comparing texts. Exploring Textual Data demonstrates how exploratory multivariate statistical methods such as correspondence analysis and cluster analysis can be used to help investigate, assimilate and evaluate textual data. The main text does not contain any strictly mathematical demonstrations, making it accessible to a large audience. This book is very user-friendly with proofs abstracted in the appendices. Full definitions of concepts, implementations of procedures and rules for reading and interpreting results are fully explored. A succession of examples is intended to allow the reader to appreciate the variety of actual and potential applications and the complementary processing methods. A glossary of terms is provided.
Marcus Contextual Grammars is the first monograph to present a class of grammars introduced about three decades ago, based on the fundamental linguistic phenomenon of strings-contexts interplay (selection). Most of the theoretical results obtained so far about the many variants of contextual grammars are presented with emphasis on classes of questions with relevance for applications in the study of natural language syntax: generative powers, descriptive and computational complexity, automata recognition, semilinearity, structure of the generated strings, ambiguity, regulated rewriting, etc. Constant comparison with families of languages in the Chomsky hierarchy is made. Connections with non-linguistic areas are established, such as molecular computing. Audience: Researchers and students in theoretical computer science (formal language theory and automata theory), computational linguistics, mathematical methods in linguistics, and linguists interested in formal models of syntax.
In the late 1990s, AI witnessed an increasing use of the term 'argumentation' within its bounds: in natural language processing, in user interface design, in logic programming and nonmonotonic reasoning, in AI's interface with the legal community, and in the newly emerging field of multi-agent systems. It seemed to me that many of these uses of argumentation were inspired by (often inspired) guesswork, and that a great majority of the AI community were unaware that there was a maturing, rich field of research in Argumentation Theory (and Critical Thinking and Informal Logic) that had been steadily rebuilding a scholarly approach to the area over the previous twenty years or so. Argumentation Theory, on its side, was developing theories and approaches that many in the field felt could have a role more widely in research and society, but were for the most part unaware that AI was one of the best candidates for such application.
The study of prosody is perhaps the area of speech research which has undergone the most noticeable development during the past ten to fifteen years. As an indication of this, one can note, for example, that at the latest International Conference on Spoken Language Processing in Philadelphia (October 1996), there were more sessions devoted to prosody than to any other area. Not only that, but within other sessions, in particular those dealing with dialogue, several of the presentations dealt specifically with prosodic aspects of dialogue research. Even at the latest Eurospeech meeting in Rhodes (September 1997), prosody and speech recognition (where several contributions dealt with how prosodic cues can be exploited to improve recognition processes) were the most frequent session topics, despite the fact that there was a separate ESCA satellite workshop on intonation in conjunction with the main Eurospeech meeting which included over 80 contributions. This focus on prosodic research is partly due to the fact that developments in speech technology have made it possible to examine the acoustic parameters associated with prosodic phenomena (in particular fundamental frequency and duration) to an extent which has not been possible in other domains of speech research. It is also due to the fact that significant theoretical advances in linguistics and phonetics have been made during this time which have made it possible to obtain a better understanding of how prosodic parameters function in expressing different kinds of meaning in the languages of the world.
This book celebrates the work of Yorick Wilks in the form of a selection of his papers which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The selected papers reflect Yorick's contribution to both practical and theoretical aspects of automatic language processing.
The Language of Design articulates the theory that there is a language of design. Drawing upon insights from computational language processing, the language of design is modeled computationally through latent semantic analysis (LSA), lexical chain analysis (LCA), and sentiment analysis (SA). The statistical co-occurrence of semantics (LSA), semantic relations (LCA), and semantic modifiers (SA) in design text is used to illustrate how the reality-producing effect of language is itself an enactment of design, allowing a new understanding of the connections between creative behaviors. The computation of the language of design makes it possible to take direct measurements of creative behaviors which are distributed across social spaces and mediated through language. The book demonstrates how machine understanding of design texts based on computation over the language of design yields practical applications for design management.
The ideal of using human language to control machines requires a practical theory of natural language communication that includes grammatical analysis of language signs, plus a model of the cognitive agent, with interfaces for recognition and action, an internal database, and an algorithm for reading content in and out. This book offers a functional framework for theoretical analysis of natural language communication and for practical applications of natural language processing.
Natural Language Processing and Text Mining not only discusses applications of Natural Language Processing techniques to certain Text Mining tasks, but also the converse, the use of Text Mining to assist NLP. It assembles diverse views from internationally recognized researchers and emphasizes caveats in the attempt to apply Natural Language Processing to text mining. This state-of-the-art survey is a must-have for advanced students, professionals, and researchers.
How far can you take fuzzy logic, the brilliant conceptual framework made famous by George Klir? With this book, you can find out. The authors of this updated edition have extended Klir's work by taking fuzzy logic into even more areas of application. It serves a number of functions, from an introductory text on the concept of fuzzy logic to a treatment of cutting-edge research problems suitable for a fully paid-up member of the fuzzy logic community.
The eleven chapters of this book represent an original contribution to the field of multimodal spoken dialogue systems. The material includes highly relevant topics, such as dialogue modeling in research systems versus industrial systems. The book contains detailed application studies, including speech-controlled MP3 players in a car environment, negotiation training with a virtual human in a military context and the application of spoken dialogue to question-answering systems.
Due to the increasing lingua-cultural heterogeneity of today's users of English, it has become necessary to examine politeness, translation and transcultural communication from a different perspective. This book proposes a concept for a transdisciplinary methodology to shed some light onto the opaque relationship between the lingua-cultural biographies of users of English and their patterns of perceiving and realizing politeness in speech acts. The methodology incorporates aspects of CAT tools and business intelligence systems, and is designed for long-term research that can serve as a foundation for theoretical studies or practical contexts, such as customer relationship management and marketing.
In both the linguistic and the language engineering community, the creation and use of annotated text collections (or annotated corpora) is currently a hot topic. Annotated texts are of interest for research as well as for the development of natural language processing (NLP) applications. Unfortunately, the annotation of text material, especially more interesting linguistic annotation, is as yet a difficult task and can entail a substantial amount of human involvement. All over the world, work is being done to replace as much as possible of this human effort by computer processing. At the frontier of what can already be done (mostly) automatically we find syntactic wordclass tagging, the annotation of the individual words in a text with an indication of their morphosyntactic classification. This book describes the state of the art in syntactic wordclass tagging. As an attempt to give an overall view of the field, this book is of interest to (at least) two, possibly very different, types of reader. The first type consists of those people who are using, or are planning to use, tagged material and taggers. They will want to know what the possibilities and impossibilities of tagging are, but are not necessarily interested in the internal working of automatic taggers. This, on the other hand, is the main interest of our second type of reader, the builders of automatic taggers and other natural language processing software.