Books > Language & Literature > Language & linguistics > Computational linguistics

Naive Semantics for Natural Language Understanding (Paperback, Softcover reprint of the original 1st ed. 1988)
Kathleen Dahlgren
R4,005 Discovery Miles 40 050 Ships in 18 - 22 working days

This book introduces Naive Semantics (NS), a theory of the knowledge underlying natural language understanding. The basic assumption of NS is that knowing what a word means is not very different from knowing anything else, so that there is no difference in form of cognitive representation between lexical semantics and encyclopedic knowledge. NS represents word meanings as commonsense knowledge, and builds no special representation language (other than elements of first-order logic). The idea of teaching computers commonsense knowledge originated with McCarthy and Hayes (1969), and has been extended by a number of researchers (Hobbs and Moore, 1985; Lenat et al., 1986). Commonsense knowledge is a set of naive beliefs, at times vague and inaccurate, about the way the world is structured. Traditionally, word meanings have been viewed as criterial, as giving truth conditions for membership in the classes words name. The theory of NS, in identifying word meanings with commonsense knowledge, sees word meanings as typical descriptions of classes of objects, rather than as criterial descriptions. Therefore, reasoning with NS representations is probabilistic rather than monotonic. This book is divided into two parts. Part I elaborates the theory of Naive Semantics. Chapter 1 illustrates and justifies the theory. Chapter 2 details the representation of nouns in the theory, and Chapter 4 that of verbs, originally published as "Commonsense Reasoning with Verbs" (McDowell and Dahlgren, 1987). Chapter 3 describes kind types, which are naive constraints on noun representations.

Inductive Dependency Parsing (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Joakim Nivre
R2,638 Discovery Miles 26 380 Ships in 18 - 22 working days

This book describes the framework of inductive dependency parsing, a methodology for robust and efficient syntactic analysis of unrestricted natural language text. Coverage includes a theoretical analysis of central models and algorithms, and an empirical evaluation of memory-based dependency parsing using data from Swedish and English. A one-stop reference to dependency-based parsing of natural language, it will interest researchers and system developers in language technology, and is suitable for graduate or advanced undergraduate courses.
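The shift-reduce style of dependency parsing covered in this framework can be sketched in a few lines. The transition names and the precomputed toy oracle below are illustrative assumptions for this sketch, not Nivre's actual implementation:

```python
# A compact sketch of arc-standard transition parsing: a stack, a buffer
# of word indices, and three transitions that build head-dependent arcs.

def parse(words, transitions):
    """Apply SHIFT / LEFT / RIGHT transitions; return head index per word."""
    stack, buffer, heads = [], list(range(len(words))), {}
    for action in transitions:
        if action == "SHIFT":
            stack.append(buffer.pop(0))       # move next word onto the stack
        elif action == "LEFT":                # stack[-2] becomes dependent of stack[-1]
            dep = stack.pop(-2)
            heads[dep] = stack[-1]
        elif action == "RIGHT":               # stack[-1] becomes dependent of stack[-2]
            dep = stack.pop()
            heads[dep] = stack[-1]
    return heads

# "the cat sleeps": 'the' attaches to 'cat', 'cat' attaches to 'sleeps'.
print(parse(["the", "cat", "sleeps"],
            ["SHIFT", "SHIFT", "LEFT", "SHIFT", "LEFT"]))
# -> {0: 1, 1: 2}
```

In inductive dependency parsing the fixed transition sequence above is replaced by a classifier trained (e.g. with memory-based learning) to predict the next transition from features of the parser state.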

Entropy Guided Transformation Learning: Algorithms and Applications (Paperback, 2012 ed.)
Cicero Nogueira dos Santos, Ruy Luiz Milidiu
R1,408 Discovery Miles 14 080 Ships in 18 - 22 working days

Entropy Guided Transformation Learning: Algorithms and Applications (ETL) presents a machine learning algorithm for classification tasks. ETL generalizes Transformation Based Learning (TBL) by solving the TBL bottleneck: the construction of good template sets. ETL automatically generates templates using Decision Tree decomposition.
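The core idea, that each root-to-node path in a decision tree yields a feature-combination template, can be sketched as follows. The nested-dict tree encoding and feature names are hypothetical toys for illustration, not the authors' code:

```python
# Toy sketch of ETL-style template extraction: a decision tree is a nested
# dict {feature: {value: subtree_or_leaf}}; each root-to-node path yields a
# template, i.e. an ordered combination of features.

def extract_templates(tree, prefix=()):
    """Collect every feature combination seen on a root-to-node path."""
    templates = []
    for feature, branches in tree.items():
        path = prefix + (feature,)
        templates.append(path)
        for subtree in branches.values():
            if isinstance(subtree, dict):     # internal node: keep descending
                templates.extend(extract_templates(subtree, path))
    return templates

# A tiny tree that splits first on the current word, then on the previous tag.
toy_tree = {
    "word[0]": {
        "the": "DT",                          # leaf: predicted tag
        "run": {"tag[-1]": {"DT": "NN", "PRP": "VB"}},
    }
}

print(extract_templates(toy_tree))
# -> [('word[0]',), ('word[0]', 'tag[-1]')]
```

In ETL the templates harvested this way are then handed to a TBL-style rule learner, replacing the handcrafted template sets that were TBL's bottleneck.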

The authors describe ETL Committee, an ensemble method that uses ETL as the base learner. Experimental results show that ETL Committee improves the effectiveness of ETL classifiers. ETL is applied to four Natural Language Processing (NLP) tasks: part-of-speech tagging, phrase chunking, named entity recognition and semantic role labeling. Extensive experimental results demonstrate that ETL is an effective way to learn accurate transformation rules, and that it outperforms TBL with handcrafted templates on all four tasks. By avoiding handcrafted templates, ETL extends transformation rules to a greater range of tasks.

Suitable for both advanced undergraduate and graduate courses, Entropy Guided Transformation Learning: Algorithms and Applications provides a comprehensive introduction to ETL and its NLP applications.

Natural Language Processing Using Very Large Corpora (Paperback, Softcover reprint of the original 1st ed. 1999)
S. Armstrong, Kenneth W. Church, Pierre Isabelle, Sandra Manzi, Evelyne Tzoukermann, …
R4,020 Discovery Miles 40 200 Ships in 18 - 22 working days

ABOUT THIS BOOK This book is intended for researchers who want to keep abreast of current developments in corpus-based natural language processing. It is not meant as an introduction to this field; for readers who need one, several entry-level texts are available, including those of Church and Mercer (1993), Charniak (1993) and Jelinek (1997). This book captures the essence of a series of highly successful workshops held in the last few years. The response in 1993 to the initial Workshop on Very Large Corpora (Columbus, Ohio) was so enthusiastic that we were encouraged to make it an annual event. The following year, we staged the Second Workshop on Very Large Corpora in Kyoto. As a way of managing these annual workshops, we then decided to register a special interest group called SIGDAT with the Association for Computational Linguistics. The demand for international forums on corpus-based NLP has been expanding so rapidly that in 1995 SIGDAT was led to organize not only the Third Workshop on Very Large Corpora (Cambridge, Mass.) but also a complementary workshop entitled From Texts to Tags (Dublin). Obviously, the success of these workshops was in some measure a reflection of the growing popularity of corpus-based methods in the NLP community. But first and foremost, it was due to the fact that the workshops attracted so many high-quality papers.

Refined Verisimilitude (Paperback, Softcover reprint of hardcover 1st ed. 2002)
S.D. Zwart
R2,649 Discovery Miles 26 490 Ships in 18 - 22 working days

The subject of the present inquiry is approach-to-the-truth research, which started with the publication of Sir Karl Popper's Conjectures and Refutations. In the decade before this publication, Popper fiercely attacked the ideas of Rudolf Carnap about confirmation and induction; and ten years later, in the famous tenth chapter of Conjectures, he introduced his own ideas about scientific progress and verisimilitude (cf. the quotation on page 6). Abhorring inductivism for its appreciation of logical weakness rather than strength, Popper tried to show that fallibilism could serve the purpose of approach to the truth. To substantiate this idea he formalized the common sense intuition about preferences, that is: B is to be preferred to A if B has more advantages and fewer drawbacks than A. In 1974, however, David Miller and Pavel Tichy proved that Popper's formal explication could not be used to compare false theories. Subsequently, many researchers proposed alternatives or tried to improve Popper's original definition.

Topic-Focus Articulation, Tripartite Structures, and Semantic Content (Paperback, Softcover reprint of hardcover 1st ed. 1998)
Eva Hajicova, Barbara B.H. Partee, P. Sgall
R2,636 Discovery Miles 26 360 Ships in 18 - 22 working days

1.1 OBJECTIVES The main objective of this joint work is to bring together some ideas that have played central roles in two disparate theoretical traditions in order to contribute to a better understanding of the relationship between focus and the syntactic and semantic structure of sentences. Within the Prague School tradition and the branch of its contemporary development represented by Hajicova and Sgall (HS in the sequel), topic-focus articulation has long been a central object of study, and it has long been a tenet of Prague School linguistics that topic-focus structure has systematic relevance to meaning. Within the formal semantics tradition represented by Partee (BHP in the sequel), focus has much more recently become an area of concerted investigation, but a number of the semantic phenomena to which focus is relevant have been extensively investigated and given explicit compositional semantic analyses. The emergence of 'tripartite structures' (see Chapter 2) in formal semantics, and the partial similarities that can readily be observed between some aspects of tripartite structures and some aspects of Praguian topic-focus articulation, have led us to expect that a closer investigation of the similarities and differences in these theoretical constructs would be a rewarding undertaking, with mutual benefits for the further development of our respective theories and potential benefit for the study of semantic effects of focus in other theories as well.

Machine Translation - Its Scope and Limits (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Yorick Wilks
R3,106 Discovery Miles 31 060 Ships in 18 - 22 working days

A history of machine translation (MT) from the point of view of a major writer and innovator in the field is the subject of this book. It details the deep differences between rival groups on how best to do MT, and presents a global perspective covering historical and contemporary systems in Europe, the US and Japan. The author considers MT as a fundamental part of Artificial Intelligence and the ultimate test-bed for all computational linguistics.

Diamonds and Defaults - Studies in Pure and Applied Intensional Logic (Paperback, Softcover reprint of the original 1st ed. 1993)
Maarten de Rijke
R4,038 Discovery Miles 40 380 Ships in 18 - 22 working days

This volume contains a selection of papers presented at a Seminar on Intensional Logic held at the University of Amsterdam during the period September 1990-May 1991. Modal logic, either as a topic or as a tool, is common to most of the papers in this volume. A number of the papers are concerned with what may be called well-known or traditional modal systems, but, as a quick glance through this volume will reveal, this by no means implies that they walk the beaten tracks. Indeed, such contributions display new directions, new results, and new techniques to obtain familiar results. Other papers in this volume are representative examples of a current trend in modal logic: the study of extensions or adaptations of the standard systems that have been introduced to overcome various shortcomings of the latter, especially their limited expressive power. Finally, there is another major theme that can be discerned in the volume, a theme that may be described by the slogan 'representing changing information.' Papers falling under this heading address long-standing issues in the area or present a systematic approach, while a critical survey and a report contributing new techniques are also included. The bulk of the papers on pure modal logic deal with theoretical or even foundational aspects of modal systems.

Locality in WH Quantification - Questions and Relative Clauses in Hindi (Paperback, Softcover reprint of hardcover 1st ed. 1996)
Veneeta Dayal
R2,646 Discovery Miles 26 460 Ships in 18 - 22 working days

Locality in WH Quantification argues that Logical Form, the level that mediates between syntax and semantics, is derived from S-structure by local movement. The primary data for the claim of locality at LF is drawn from Hindi but English data is used in discussing the semantics of questions and relative clauses. The book takes a cross-linguistic perspective showing how the Hindi and English facts can be brought to bear on the theory of universal grammar. There are several phenomena generally thought to involve long-distance dependencies at LF, such as scope marking, long-distance list answers and correlatives. In this book they are handled by explicating novel types of local relationships that interrogative and relative clauses can enter. A more articulated semantics is shown leading to a simpler syntax. Among other issues addressed is the switch from uniqueness/maximality effects in single wh constructions to list readings in multiple wh constructions. These effects are captured by adapting the treatment of wh expressions as quantifying over functions to the cases of multiple wh questions and correlatives. List readings due to functional dependencies are systematically distinguished from those that are based on plurality.

An Introduction to Mathematical Logic and Type Theory - To Truth Through Proof (Paperback, 2nd ed. 2002. Softcover reprint of the original 2nd ed. 2002)
Peter B. Andrews
R2,458 Discovery Miles 24 580 Ships in 18 - 22 working days

If you are considering adopting this book for courses with over 50 students, please contact [email protected] for more information.

This introduction to mathematical logic starts with propositional calculus and first-order logic. Topics covered include syntax, semantics, soundness, completeness, independence, normal forms, vertical paths through negation normal formulas, compactness, Smullyan's Unifying Principle, natural deduction, cut-elimination, semantic tableaux, Skolemization, Herbrand's Theorem, unification, duality, interpolation, and definability.

The last three chapters of the book provide an introduction to type theory (higher-order logic). It is shown how various mathematical concepts can be formalized in this very expressive formal language. This expressive notation facilitates proofs of the classical incompleteness and undecidability theorems which are very elegant and easy to understand. The discussion of semantics makes clear the important distinction between standard and nonstandard models which is so important in understanding puzzling phenomena such as the incompleteness theorems and Skolem's Paradox about countable models of set theory.

Some of the numerous exercises require giving formal proofs. A computer program called ETPS which is available from the web facilitates doing and checking such exercises.

Audience: This volume will be of interest to mathematicians, computer scientists, and philosophers in universities, as well as to computer scientists in industry who wish to use higher-order logic for hardware and software specification and verification.

Prosody: Theory and Experiment - Studies Presented to Goesta Bruce (Paperback, Softcover reprint of hardcover 1st ed. 2000)
M. Horne
R4,031 Discovery Miles 40 310 Ships in 18 - 22 working days

The study of prosody is perhaps the area of speech research which has undergone the most noticeable development during the past ten to fifteen years. As an indication of this, one can note, for example, that at the latest International Conference on Spoken Language Processing in Philadelphia (October 1996), there were more sessions devoted to prosody than to any other area. Not only that, but within other sessions, in particular those dealing with dialogue, several of the presentations dealt specifically with prosodic aspects of dialogue research. Even at the latest Eurospeech meeting in Rhodes (September 1997), prosody and speech recognition (where several contributions dealt with how prosodic cues can be exploited to improve recognition processes) were the most frequent session topics, despite the fact that there was a separate ESCA satellite workshop on intonation in conjunction with the main Eurospeech meeting, which included over 80 contributions. This focus on prosodic research is partly due to the fact that developments in speech technology have made it possible to examine the acoustic parameters associated with prosodic phenomena (in particular fundamental frequency and duration) to an extent which has not been possible in other domains of speech research. It is also due to the fact that significant theoretical advances in linguistics and phonetics have been made during this time which have made it possible to obtain a better understanding of how prosodic parameters function in expressing different kinds of meaning in the languages of the world.

Parallel Text Processing - Alignment and Use of Translation Corpora (Paperback, Softcover reprint of hardcover 1st ed. 2000)
Jean Veronis
R5,180 Discovery Miles 51 800 Ships in 18 - 22 working days

This book evolved from the ARCADE evaluation exercise that started in 1995. The project's goal is to evaluate alignment systems for parallel texts, i.e., texts accompanied by their translation. Thirteen teams from various places around the world have participated so far, and for the first time, some ten to fifteen years after the first alignment techniques were designed, the community has been able to get a clear picture of the behaviour of alignment systems. Several chapters in this book describe the details of competing systems, and the last chapter is devoted to the description of the evaluation protocol and results. The remaining chapters were especially commissioned from researchers who have been major figures in the field in recent years, in an attempt to address a wide range of topics that describe the state of the art in parallel text processing and use. As I recalled in the introduction, the Rosetta stone won eternal fame as the prototype of parallel texts, but such texts are probably almost as old as the invention of writing. Nowadays, parallel texts are electronic, and they are becoming an increasingly important resource for building the natural language processing tools needed in the "multilingual information society" that is currently emerging at an incredible speed. Applications are numerous, and they are expanding every day: multilingual lexicography and terminology, machine and human translation, cross-language information retrieval, language learning, etc.

Natural Language Information Retrieval (Paperback, Softcover reprint of the original 1st ed. 1999)
T. Strzalkowski
R2,699 Discovery Miles 26 990 Ships in 18 - 22 working days

The last decade has been one of dramatic progress in the field of Natural Language Processing (NLP). This hitherto largely academic discipline has found itself at the center of an information revolution ushered in by the Internet age, as demand for human-computer communication and information access has exploded. Emerging applications in computer-assisted information production and dissemination, automated understanding of news, understanding of spoken language, and processing of foreign languages have given impetus to research that resulted in a new generation of robust tools, systems, and commercial products. Well-positioned government research funding, particularly in the U.S., has helped to advance the state of the art at an unprecedented pace, in no small measure thanks to rigorous evaluations. This volume focuses on the use of Natural Language Processing in Information Retrieval (IR), an area of science and technology that deals with cataloging, categorization, classification, and search of large amounts of information, particularly in textual form. The outcome of an information retrieval process is usually a set of documents containing information on a given topic, and may consist of newspaper-like articles, memos, reports of any kind, entire books, as well as annotated image and sound files. Since we assume that the information is primarily encoded as text, IR is also a natural language processing problem: in order to decide if a document is relevant to a given information need, one needs to be able to understand its content.

Recent Advances in Formal Languages and Applications (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Zoltan Esik, Carlos Martin-Vide, Victor Mitrana
R4,013 Discovery Miles 40 130 Ships in 18 - 22 working days

The contributors present the main results and techniques of their specialties in an easily accessible way, accompanied by many references: historical notes, hints for complete proofs or solutions to exercises, and directions for further research. The volume contains applications that have not appeared in any collection of this type. The book is a general source of information in computation theory, at the undergraduate and research level.

Argumentation Machines - New Frontiers in Argument and Computation (Paperback, Softcover reprint of hardcover 1st ed. 2004)
C. Reed, T.J. Norman
R4,007 Discovery Miles 40 070 Ships in 18 - 22 working days

In the late 1990s, AI witnessed an increasing use of the term 'argumentation' within its bounds: in natural language processing, in user interface design, in logic programming and nonmonotonic reasoning, in AI's interface with the legal community, and in the newly emerging field of multi-agent systems. It seemed to me that many of these uses of argumentation were inspired by (often inspired) guesswork, and that a great majority of the AI community were unaware that there was a maturing, rich field of research in Argumentation Theory (and Critical Thinking and Informal Logic) that had been steadily rebuilding a scholarly approach to the area over the previous twenty years or so. Argumentation Theory, on its side, was developing theories and approaches that many in the field felt could have a wider role in research and society, but were for the most part unaware that AI was one of the best candidates for such application.

A Computational Model of Natural Language Communication - Interpretation, Inference, and Production in Database Semantics (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Roland R Hausser
R1,431 Discovery Miles 14 310 Ships in 18 - 22 working days

The ideal of using human language to control machines requires a practical theory of natural language communication that includes grammatical analysis of language signs, plus a model of the cognitive agent, with interfaces for recognition and action, an internal database, and an algorithm for reading content in and out. This book offers a functional framework for theoretical analysis of natural language communication and for practical applications of natural language processing.

The Language of Design - Theory and Computation (Paperback, Softcover reprint of hardcover 1st ed. 2009)
Andy An-Si Dong
R2,653 Discovery Miles 26 530 Ships in 18 - 22 working days

The Language of Design articulates the theory that there is a language of design. Drawing upon insights from computational language processing, the language of design is modeled computationally through latent semantic analysis (LSA), lexical chain analysis (LCA), and sentiment analysis (SA). The statistical co-occurrence of semantics (LSA), semantic relations (LCA), and semantic modifiers (SA) in design text is used to illustrate how the reality producing effect of language is itself an enactment of design, allowing a new understanding of the connections between creative behaviors. The computation of the language of design makes it possible to make direct measurements of creative behaviors which are distributed across social spaces and mediated through language. The book demonstrates how machine understanding of design texts based on computation over the language of design yields practical applications for design management.

Knowledge Processing and Data Analysis - First International Conference, KONT 2007, Novosibirsk, Russia, September 14-16, 2007, and First International Conference, KPP 2007, Darmstadt, Germany, September 28-30, 2007. Revised Selected Papers (Paperback, 2011)
Karl Erich Wolff, Dmitry E. Palchunov, Nikolay G. Zagoruiko, Urs Andelfinger
R1,419 Discovery Miles 14 190 Ships in 18 - 22 working days

This book constitutes the proceedings of the First International Conference on Knowledge - Ontology - Theory (KONT 2007) held in Novosibirsk, Russia, in September 2007 and the First International Conference on Knowledge Processing in Practice (KPP 2007) held in Darmstadt, Germany, in September 2007. The 21 revised full papers were carefully reviewed and selected from numerous submissions and cover four main focus areas: applications of conceptual structures; concept based software; ontologies as conceptual structures; and data analysis.

Parsing with Principles and Classes of Information (Paperback, Softcover reprint of the original 1st ed. 1996)
Paola Merlo
R2,647 Discovery Miles 26 470 Ships in 18 - 22 working days

Parsing with Principles and Classes of Information presents a parser based on current principle-based linguistic theories for English. It argues that differences in the kind of information being computed, whether lexical, structural or syntactic, play a crucial role in the mapping from grammatical theory to parsing algorithms. The direct encoding of homogeneous classes of information has computational and cognitive advantages, which are discussed in detail. Phrase structure is built by using a fast algorithm and compact reference tables. A quantified comparison of different compilation methods shows that lexical and structural information are most compactly represented by separate tables. This finding is reconciled with evidence on the resolution of lexical ambiguity, as an approach to the modularization of information. The same design is applied to the efficient computation of long-distance dependencies. Incremental parsing using bottom-up tabular algorithms is discussed in detail. Finally, locality restrictions are calculated by a parametric algorithm. Students of linguistics, parsing and psycholinguistics will find this book a useful resource on issues related to the implementation of current linguistic theories, using computationally and cognitively plausible algorithms.

Fuzzy Logic - A Spectrum of Theoretical & Practical Issues (Paperback, Softcover reprint of hardcover 1st ed. 2007)
Paul P. Wang, Da Ruan, Etienne E. Kerre
R4,060 Discovery Miles 40 600 Ships in 18 - 22 working days

How far can you take fuzzy logic, the brilliant conceptual framework made famous by George Klir? With this book, you can find out. The authors of this updated edition have extended Klir's work by taking fuzzy logic into even more areas of application. It serves a number of functions, from an introductory text on the concept of fuzzy logic to a treatment of cutting-edge research problems suitable for a fully paid-up member of the fuzzy logic community.

Knowledge-Driven Multimedia Information Extraction and Ontology Evolution - Bridging the Semantic Gap (Paperback, 2011)
Georgios Paliouras, Constantine D. Spyropoulos, George Tsatsaronis
R1,397 Discovery Miles 13 970 Ships in 18 - 22 working days

This book presents the state of the art in the areas of ontology evolution and knowledge-driven multimedia information extraction, placing an emphasis on how the two can be combined to bridge the semantic gap. This was also the goal of the EC-sponsored BOEMIE (Bootstrapping Ontology Evolution with Multimedia Information Extraction) project, to which the authors of this book have all contributed. The book addresses researchers and practitioners in the field of computer science and more specifically in knowledge representation and management, ontology evolution, and information extraction from multimedia data. It may also constitute an excellent guide to students attending courses within a computer science study program, addressing information processing and extraction from any type of media (text, images, and video). Among other things, the book gives concrete examples of how several of the methods discussed can be applied to athletics (track and field) events.

Philosophical Logic in Poland (Paperback, Softcover reprint of hardcover 1st ed. 1993)
Jan Wolenski
R5,164 Discovery Miles 51 640 Ships in 18 - 22 working days

Poland has played an enormous role in the development of mathematical logic. Leading Polish logicians, like Lesniewski, Lukasiewicz and Tarski, produced several works related to philosophical logic, a field covering different topics relevant to the philosophical foundations of logic itself, as well as various individual sciences. This collection presents contemporary Polish work in philosophical logic which in many respects continues the Polish way of doing philosophical logic. This book will be of interest to logicians, mathematicians, philosophers, and linguists.

Fuzzy Quantifiers - A Computational Theory (Paperback, Softcover reprint of hardcover 1st ed. 2006)
Ingo Gloeckner
R4,061 Discovery Miles 40 610 Ships in 18 - 22 working days

From a linguistic perspective, it is quantification which makes all the difference between "having no dollars" and "having a lot of dollars." And it is the meaning of the quantifier "most" which eventually decides if "Most Americans voted Kerry" or "Most Americans voted Bush" (as it stands). Natural language (NL) quantifiers like "all," "almost all," "many" etc. serve an important purpose because they permit us to speak about properties of collections, as opposed to describing specific individuals only; in technical terms, quantifiers are a 'second-order' construct. Thus the quantifying statement "Most Americans voted Bush" asserts that the set of voters of George W. Bush comprises the majority of Americans, while "Bush sneezes" only tells us something about a specific individual. By describing collections rather than individuals, quantifiers extend the expressive power of natural languages far beyond that of propositional logic and make them a universal communication medium. Hence language heavily depends on quantifying constructions. These often involve fuzzy concepts like "tall," and they frequently refer to fuzzy quantities in agreement like "about ten," "almost all," "many" etc. In order to exploit this expressive power and make fuzzy quantification available to technical applications, a number of proposals have been made how to model fuzzy quantifiers in the framework of fuzzy set theory. These approaches usually reduce fuzzy quantification to a comparison of scalar or fuzzy cardinalities [197, 132].
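The cardinality-comparison approach mentioned in the blurb can be illustrated with a toy model; the piecewise-linear truth function for "most" below is an invented example, not Gloeckner's construction:

```python
# Toy evaluation of the fuzzy quantifier "most" via the relative sigma-count
# of a fuzzy set (Zadeh's scalar cardinality: the sum of membership degrees).

def sigma_count(memberships):
    """Scalar cardinality of a fuzzy set: sum of membership degrees."""
    return sum(memberships)

def most(memberships):
    """Truth of 'most elements belong to the fuzzy set': 0 when the
    membership proportion is below 50%, 1 above 80%, linear in between
    (the 50%/80% breakpoints are arbitrary choices for this sketch)."""
    proportion = sigma_count(memberships) / len(memberships)
    return min(1.0, max(0.0, (proportion - 0.5) / 0.3))

tall = [0.9, 0.8, 1.0, 0.2, 0.7]   # fuzzy set "tall" over five people
print(round(most(tall), 2))        # -> 0.73
```

So "most people here are tall" comes out as true to degree 0.73: the fuzzy proportion (0.72) falls between the quantifier's "clearly false" and "clearly true" thresholds.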

Advances in Probabilistic and Other Parsing Technologies (Paperback, Softcover reprint of hardcover 1st ed. 2000)
H Bunt, Anton Nijholt
R2,651 Discovery Miles 26 510 Ships in 18 - 22 working days

Parsing technology is concerned with finding syntactic structure in language. In parsing we have to deal with incomplete and not necessarily accurate formal descriptions of natural languages. Robustness and efficiency are among the main issues in parsing. Corpora can be used to obtain frequency information about language use. This allows probabilistic parsing, an approach that aims at increasing both robustness and efficiency. Approximation techniques, applied at the level of language description, parsing strategy, and syntactic representation, have the same objective. Approximation at the level of syntactic representation is also known as underspecification, a traditional technique for dealing with syntactic ambiguity. This book collects new parsing technologies that attack the problems of robustness and efficiency with exactly these techniques: the design of probabilistic grammars and efficient probabilistic parsing algorithms, approximation techniques applied to grammars and parsers to increase parsing efficiency, and techniques for underspecification and the integration of semantic information into syntactic analysis to deal with massive ambiguity. The book gives a state-of-the-art overview of current research and development in parsing technologies. In its chapters we see how probabilistic methods have entered the toolbox of computational linguistics in order to be applied in both parsing theory and parsing practice. The book is both a unique reference for researchers and an introduction to the field for interested graduate students.
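The core idea behind probabilistic parsing can be shown in a few lines. A minimal sketch, assuming a toy probabilistic context-free grammar (PCFG) with made-up rule probabilities: each parse tree scores the product of the probabilities of the rules in its derivation, and the parser resolves an ambiguity (here, PP attachment) by keeping the likelier tree.

```python
from math import prod

# Toy PCFG: (parent, children) -> rule probability (illustrative numbers)
pcfg = {
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V", "NP")): 0.7,
    ("VP", ("VP", "PP")): 0.3,
    ("NP", ("NP", "PP")): 0.2,
}

def tree_prob(tree):
    """Probability of a parse tree: the product of the probabilities
    of all grammar rules used in its derivation (leaves count as 1)."""
    label, children = tree
    if isinstance(children, str):          # lexical leaf
        return 1.0
    rule = (label, tuple(child[0] for child in children))
    return pcfg[rule] * prod(tree_prob(child) for child in children)

# Two parses of "she saw him with binoculars": high vs. low PP attachment
high = ("S", [("NP", "she"),
              ("VP", [("VP", [("V", "saw"), ("NP", "him")]),
                      ("PP", "with binoculars")])])
low = ("S", [("NP", "she"),
             ("VP", [("V", "saw"),
                     ("NP", [("NP", "him"), ("PP", "with binoculars")])])])
```

With these rule probabilities the high attachment (0.3 x 0.7 = 0.21) beats the low one (0.7 x 0.2 = 0.14), which is the frequency-driven disambiguation the blurb refers to; real systems estimate the rule probabilities from treebank corpora.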

A Computational Theory of Writing Systems (Paperback, New ed)
Richard Sproat
R1,378 Discovery Miles 13 780 Ships in 10 - 15 working days

This book develops a formal computational theory of writing systems. It offers specific proposals about the linguistic objects that are represented by orthographic elements; what levels of linguistic representation are involved and how they may differ across writing systems; and what formal constraints hold of the mapping relation between linguistic and orthographic elements. Based on the insights gained, Sproat then proposes a taxonomy of writing systems. The treatment of theoretical linguistic issues and their computational implementation is complemented with discussion of empirical psycholinguistic work on reading and its relevance for the computational model developed here. Throughout, the model is illustrated with a number of detailed case studies of writing systems around the world. This book will be of interest to students and researchers in a variety of fields, including theoretical and computational linguistics, the psycholinguistics of reading and writing, and speech technology.
