This volume contains 21 new and original contributions to the study of formal semantics, written by distinguished experts in response to landmark papers in the field. The chapters make the target articles more accessible by providing background, modernizing the notation, offering critical commentary, explaining the afterlife of the proposals, and supplying a useful bibliography for further study. The chapters were commissioned by the series editors to mark the 100th volume in the book series Studies in Linguistics and Philosophy. The target articles are amongst the most widely read and cited papers up to the end of the 20th century, and cover most of the important subfields of formal semantics. The authors are all prominent researchers in the field, making this volume a valuable addition to the literature for researchers, students, and teachers of formal semantics. Chapter 19 is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read at Oxford Scholarship Online and offered as a free PDF download from OUP and selected open access locations. We need to understand the impossible. Francesco Berto and Mark Jago start by considering what the concepts of meaning, information, knowledge, belief, fiction, conditionality, and counterfactual supposition have in common. They are all concepts which divide the world up more finely than logic does. Logically equivalent sentences may carry different meanings and information and may differ in how they're believed. Fictions can be inconsistent yet meaningful. We can suppose impossible things without collapsing into total incoherence. Yet for the leading philosophical theories of meaning, these phenomena are an unfathomable mystery. To understand these concepts, we need a metaphysical, logical, and conceptual grasp of situations that could not possibly exist: Impossible Worlds. This book discusses the metaphysics of impossible worlds and applies the concept to a range of central topics and open issues in logic, semantics, and philosophy. It considers problems in the logic of knowledge, the meaning of alternative logics, models of imagination and mental simulation, the theory of information, truth in fiction, the meaning of conditional statements, and reasoning about the impossible. In all these cases, impossible worlds have an essential role to play.
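To see concretely why such phenomena strain standard possible-worlds semantics (a textbook illustration rather than an excerpt from the book): if the meaning of a sentence A is identified with the set of possible worlds at which it is true,

\[ [\![A]\!] \;=\; \{\, w \in W : w \models A \,\} \]

then any two logically equivalent sentences are true at exactly the same possible worlds and so receive exactly the same meaning, even though one may be believed or known while the other is not. Enlarging W with impossible worlds, at which logical equivalents can come apart, is the remedy Berto and Jago develop.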
Frege's Theorem collects eleven essays by Richard G. Heck, Jr, one of the world's leading authorities on Frege's philosophy. The Theorem is the central contribution of Gottlob Frege's formal work on arithmetic. It tells us that the axioms of arithmetic can be derived, purely logically, from a single principle: the number of these things is the same as the number of those things just in case these can be matched up one-to-one with those. But that principle seems so utterly fundamental to thought about number that it might almost count as a definition of number. If so, Frege's Theorem shows that arithmetic follows, purely logically, from a near definition. As Crispin Wright was the first to make clear, that means that Frege's logicism, long thought dead, might yet be viable.
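The principle Heck discusses is known in the literature as Hume's Principle. As a point of reference (a standard second-order formalization, not a quotation from the book), it can be written

\[ \#F = \#G \;\leftrightarrow\; F \approx G \]

where \#F denotes the number of Fs and F \approx G abbreviates the second-order claim that some relation correlates the Fs one-to-one with the Gs. Frege's Theorem is then the result that the Dedekind-Peano axioms of arithmetic are derivable in second-order logic from this single principle.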
Berkeley: Ideas, Immaterialism, and Objective Presence offers a novel interpretation of the arc of George Berkeley's philosophical thought, from his theory of vision through his immaterialism and finally to his proof of God's existence. Keota Fields unifies these themes to focus on Berkeley's use of the Cartesian doctrine of objective presence, which demands causal explanations of the content of ideas. This is particularly so with respect to Berkeley's arguments for immaterialism. One of those arguments is typically read as a straightforward transitivity argument. After identifying material bodies with sensible objects, and the latter with ideas of sense, Berkeley concludes that putative material bodies are actually identical to collections of ideas of sense. George Pappas has recently defended an alternative reading that grounds Berkeley's immaterialism in his rejection of what Pappas calls category-transcendent abstract ideas: abstract ideas of beings, entia, or existence. Fields uses Pappas's interpretation as a framework for understanding Berkeley's immaterialism in terms of transcendental arguments. Early moderns routinely used the doctrine of objective presence to justify transcendental arguments for the existence of material substance. The claim was that physical qualities are necessary for any causal explanation of the content of sensory ideas; since those qualities are represented to perceivers as ontologically dependent, material substance is the necessary condition for the existence of physical qualities and a fortiori any causal explanation of the content of sensory ideas. On the reading defended here, Berkeley rejects Locke's transcendental argument for the existence of material substratum on the grounds that it turns decisively on the aforementioned category-transcendent abstract ideas, which Berkeley rejects as logically inconsistent. In its place, Berkeley offers his own transcendental argument designed to show that only minds and ideas exist, and he uses that argument as a foundation for his proof of God's existence.
Timothy Williamson is one of the most influential living philosophers working in the areas of logic and metaphysics. His work in these areas has been particularly influential in shaping debates about metaphysical modality, which is the topic of his recent provocative and closely-argued book Modal Logic as Metaphysics (2013). This book comprises ten essays by metaphysicians and logicians responding to Williamson's work on metaphysical modality, as well as replies by Williamson to each essay. In addition, it contains an original essay by Williamson, 'Modal science,' concerning the role of modal claims in natural science. This book was originally published as a special issue of the Canadian Journal of Philosophy.
Gary Kemp presents a penetrating investigation of key issues in the philosophy of language, by means of a comparative study of two great figures of late twentieth-century philosophy. So far as language and meaning are concerned, Willard Van Orman Quine and Donald Davidson are usually regarded as birds of a feather. The two disagreed in print on various matters over the years, but fundamentally they seem to be in agreement; most strikingly, Davidson's thought experiment of Radical Interpretation looks to be a more sophisticated, technically polished version of Quinean Radical Translation. Yet Quine's most basic and general philosophical commitment is to his methodological naturalism, which is ultimately incompatible with Davidson's main commitments. In particular, it is impossible to endorse, from Quine's perspective, the roles played by the concepts of truth and reference in Davidson's philosophy of language: Davidson's employment of the concept of truth is from Quine's point of view needlessly adventurous, and his use of the concept of reference cannot be divorced from unscientific 'intuition'. From Davidson's point of view, Quine's position looks needlessly scientistic, and seems blind to the genuine problems of language and meaning. Gary Kemp offers a powerful argument for Quine's position, and in favour of methodological naturalism and its corollary, naturalized epistemology. It is possible to give a consistent and explanatory account of language and meaning without problematic uses of the concepts truth and reference, which in turn makes a strident naturalism much more plausible.
Reference is a central topic in philosophy of language, and has been the main focus of discussion about how language relates to the world. R. M. Sainsbury sets out a new approach to the concept, which promises to bring to an end some long-standing debates in semantic theory. There is a single category of referring expressions, all of which deserve essentially the same kind of semantic treatment. Included in this category are both singular and plural referring expressions ('Aristotle', 'The Pleiades'), complex and non-complex referring expressions ('The President of the USA in 1970', 'Nixon'), and empty and non-empty referring expressions ('Vulcan', 'Neptune'). Referring expressions are to be described semantically by a reference condition, rather than by being associated with a referent. In arguing for these theses, Sainsbury's book promises to end the fruitless oscillation between Millian and descriptivist views. Millian views insist that every name has a referent, and find it hard to give a good account of names which appear not to have referents, or at least are not known to do so, like ones introduced through error ('Vulcan'), ones where it is disputed whether they have a bearer ('Patanjali') and ones used in fiction. Descriptivist theories require that each name be associated with some body of information. These theories fly in the face of the fact names are useful precisely because there is often no overlap of information among speakers and hearers. The alternative position for which the book argues is firmly non-descriptivist, though it also does not require a referent. A much broader view can be taken of which expressions are referring expressions: not just names and pronouns used demonstratively, but also some complex expressions and some anaphoric uses of pronouns. Sainsbury's approach brings reference into line with truth: no one would think that a semantic theory should associate a sentence with a truth value, but it is commonly held that a semantic theory should associate a sentence with a truth condition, a condition which an arbitrary state of the world would have to satisfy in order to make the sentence true. The right analogy is that a semantic theory should associate a referring expression with a reference condition, a condition which an arbitrary object would have to satisfy in order to be the expression's referent. Lucid and accessible, and written with a minimum of technicality, Sainsbury's book also includes a useful historical survey. It will be of interest to those working in logic, mind, and metaphysics as well as essential reading for philosophers of language.
If there is one utterly inescapable problem for the metaphysician, it is this: is metaphysics itself a theoretically legitimate discipline? Is it, in other words, capable of a systematic and well-confirmed set of theoretical results? And if not, why not? From its inception, metaphysics has found itself exercised by the nagging worry that its own inquiries might reveal it to be a subject without an object, or a mode of inquiry without a method. Such concerns were voiced as early as Plato's discussion of the battle between the Gods and Giants. Since then, no era of its history has spared metaphysics some rehearsal of this question. In Empiricism and the Problem of Metaphysics, Paul Studtmann defends an empiricist critique of metaphysical theorizing. At the heart of the critique is an empiricist view of a priori knowledge, according to which all a priori knowledge is empirical knowledge of the results of effective procedures. Such a view of a priori knowledge places severe limits on the scope of a priori speculation and indeed places beyond our ken the types of claims that metaphysicians as well as traditional epistemologists and ethicists have typically wanted to make.
Ludwig Wittgenstein's On Certainty explores a myriad of new and important ideas regarding our notions of belief, knowledge, skepticism, and certainty. During the course of his exploration, Wittgenstein makes a fascinating new discovery about certitude, namely, that it is categorically distinct from knowledge. As his investigation advances, he recognizes that certainty must be non-propositional and non-ratiocinated; borne out not in the things we say, but in our actions, our deeds. Many philosophers working outside of epistemology recognized Wittgenstein's insights and determined that his work's abrupt end might serve as an excellent launching point for still further philosophical expeditions. In Exploring Certainty: Wittgenstein and Wide Fields of Thought, Robert Greenleaf Brice surveys some of this rich topography. Wittgenstein's writings serve as a point of departure for Brice's own ideas about certainty. He shows how Wittgenstein's rough and unpolished notion of certitude might be smoothed out and refined in a way that benefits studies of morality, aesthetics, cognitive science, and the philosophy of mathematics. Brice's work opens new avenues of thought for scholars and students of the Wittgensteinian tradition, while introducing original philosophies concerning issues central to human knowledge and cognition.
Necessary Beings is concerned with two central areas of metaphysics: modality, the theory of necessity, possibility, and other related notions; and ontology, the general study of what kinds of entities there are. Bob Hale's overarching purpose is to develop and defend two quite general theses about what is required for the existence of entities of various kinds: that questions about what kinds of things there are cannot be properly understood or adequately answered without recourse to considerations about possibility and necessity, and that, conversely, questions about the nature and basis of necessity and possibility cannot be satisfactorily tackled without drawing on what might be called the methodology of ontology. Taken together, these two theses claim that ontology and modality are mutually dependent upon one another, neither more fundamental than the other. Hale defends a broadly Fregean approach to metaphysics, according to which ontological distinctions among different kinds of things (objects, properties, and relations) are to be drawn on the basis of prior distinctions between different logical types of expression. The claim that facts about what kinds of things exist depend upon facts about what is possible makes little sense unless one accepts that at least some modal facts are fundamental, and not reducible to facts of some other, non-modal, sort. He argues that facts about what is absolutely necessary or possible have this character, and that they have their source or basis, not in meanings or concepts nor in facts about alternative 'worlds', but in the natures or essences of things.
This collection presents the first sustained examination of the nature and status of the idea of principles in early modern thought. Principles are almost ubiquitous in the seventeenth and eighteenth centuries: the term appears in famous book titles, such as Newton's Principia; the notion plays a central role in the thought of many leading philosophers, such as Leibniz's Principle of Sufficient Reason; and many of the great discoveries of the period, such as the Law of Gravitational Attraction, were described as principles. Ranging from mathematics and law to chemistry, from natural and moral philosophy to natural theology, and covering some of the leading thinkers of the period, this volume presents ten compelling new essays that illustrate the centrality and importance of the idea of principles in early modern thought. It contains chapters by leading scholars in the field, including the Leibniz scholar Daniel Garber and the historian of chemistry William R. Newman, as well as exciting, emerging scholars, such as the Newton scholar Kirsten Walsh and a leading expert on experimental philosophy, Alberto Vanzo. The Idea of Principles in Early Modern Thought: Interdisciplinary Perspectives charts the terrain of one of the period's central concepts for the first time, and opens up new lines for further research.
From the point of view of non-classical logics, Heyting's implication is the smallest implication for which the deduction theorem holds. This book studies properties of logical systems having some of the classical connectives and implication in the neighbourhood of Heyting's implication. I have not included anything on entailment, although it belongs to this neighbourhood, mainly because of the appearance of the Anderson-Belnap book on entailment. In the later chapters of this book, I have included material that might be of interest to the intuitionist mathematician. Originally, I intended to include more material in that spirit but I decided against it. There is no coherent body of material to include that builds naturally on the present book. There are some serious results on topological models, second order Beth and Kripke models, theories of types, etc., but it would require further research to be able to present a general theory, possibly using sheaves. That would have postponed publication for too long. I would like to dedicate this book to my colleagues, Professors G. Kreisel, M.O. Rabin and D. Scott. I have benefited greatly from Professor Kreisel's criticism and suggestions. Professor Rabin's fundamental results on decidability and undecidability provided the powerful tools used in obtaining the majority of the results reported in this book. Professor Scott's approach to non-classical logics and especially his analysis of the Scott consequence relation makes it possible to present Heyting's logic as a beautiful, integral part of non-classical logics.
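For reference, the deduction theorem invoked in the opening sentence is standardly stated as

\[ \Gamma \cup \{A\} \vdash B \quad\text{if and only if}\quad \Gamma \vdash A \rightarrow B \]

and Heyting's implication, the implication connective of intuitionistic logic, is the weakest implication validating this equivalence; that is the sense in which it is the smallest implication for which the deduction theorem holds.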
Our finances, politics, media, opportunities, information, shopping and knowledge production are mediated through algorithms and their statistical approaches to knowledge; increasingly, these methods form the organizational backbone of contemporary capitalism. Revolutionary Mathematics traces the revolution in statistics and probability that has quietly underwritten the explosion of machine learning, big data and predictive algorithms that now decide many aspects of our lives. Exploring shifts in the philosophical understanding of probability in the late twentieth century, Justin Joque shows how this was not merely a technical change but a wholesale philosophical transformation in the production of knowledge and the extraction of value. This book provides a new and unique perspective on the dangers of allowing artificial intelligence and big data to manage society. It is essential reading for those who want to understand the underlying ideological and philosophical changes that have fueled the rise of algorithms and convinced so many to blindly trust their outputs, reshaping our current political and economic situation.
The founder of both American pragmatism and semiotics, Charles Sanders Peirce (1839-1914) is widely regarded as an enormously important and pioneering theorist. In this book, scholars from around the world examine the nature and significance of Peirce's work on perception, iconicity, and diagrammatic thinking. Abjuring any strict dichotomy between presentational and representational mental activity, Peirce's theories transform the Aristotelian, Humean, and Kantian paradigms that continue to hold sway today and, in so doing, forge a new path for understanding the centrality of visual thinking in science, education, art, and communication. The essays in this collection cover a wide range of issues related to Peirce's theories, including the perception of generality; the legacy of ideas being copies of impressions; imagination and its contribution to knowledge; logical graphs, diagrams, and the question of whether their iconicity distinguishes them from other sorts of symbolic notation; how images and diagrams contribute to scientific discovery and make it possible to perceive formal relations; and the importance and danger of using diagrams to convey scientific ideas. This book is a key resource for scholars interested in Peirce's philosophy and its relation to contemporary issues in mathematics, philosophy of mind, philosophy of perception, semiotics, logic, visual thinking, and cognitive science.
Plato's formulation of the Principle of Non-contradiction (PNC) in Republic IV is the first full statement of the principle in western philosophy. His use of the principle might seem to suggest that he endorses the PNC. After all, how could one possibly deny so fundamental a principle, especially when it seems difficult to deny it without relying on it? However, the endorsement in the text is qualified. Socrates refers to the principle as one that he and his interlocutors will hypothesize and warns that if it should ever be shown to be false, all that follows from it will also be refuted. Scholars who have noticed this issue have tended to assume that the truth of the hypothesis in question can be guaranteed. Laurence Bloom argues against unthinkingly accepting this claim. He suggests that what emerges from the text is more sophisticated: Plato's concession that the PNC is hypothetical is a textual clue pointing us to a complex philosophical argument that grounds the PNC, as well as the sort of reasoning it grounds, in form. Indeed, in framing the problem in this way, we can read the Republic as providing an extended argument for form. The argument for forms that emerges is complex and difficult. It is not and cannot be a normal, discursive argument. Indeed, the argument cannot even be one that assumes the PNC; if it did so, it would fall prey to a vicious circularity. Rather, the argument rests on the very possibility of our hypothesizing the PNC in the first place. Our ability to hypothesize the PNC, and perhaps our inability not to hypothesize it, is the linchpin. When we ask questions such as "to what objects does the PNC apply?" or "how is it possible that we apply the PNC?", we are asking questions that lead us to the existence of form. The Principle of Non-contradiction in Plato's Republic also explores the soul of the knower, the very entity to which and by which the principle is applied in the text, and its underlying unity.
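For orientation, the Republic IV formulation, that the same thing cannot do or undergo opposites in the same respect, in relation to the same thing, at the same time, is often rendered schematically as

\[ \neg\big(\varphi(a, r, t) \wedge \neg\varphi(a, r, t)\big) \]

where a is the object, r the respect, and t the time at which the predicate \varphi is affirmed or denied. This is a common modern gloss, not Bloom's own notation.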
A classic of how to think clearly and critically, ahead of its time in anticipating the threats posed to democracy by poor argument and shoddy reasoning. Engaging, clear, and witty, it is a brilliant example of how philosophy can connect with the concerns of everyone, and it requires no knowledge of the subject. Susan Stebbing was the first woman in the UK to be appointed a professor of philosophy, in 1933. A new foreword by Nigel Warburton and an introduction by Peter West help to set Stebbing's book in helpful context.
Contents: Introduction
I. ONTOLOGY: 1. Existence (1987); 2. Nonexistence (1998); 3. Mythical Objects (2002)
II. NECESSITY: 4. Modal Logic Kalish-and-Montague Style (1994); 5. Impossible Worlds (1984); 6. An Empire of Thin Air (1988); 7. The Logic of What Might Have Been (1989)
III. IDENTITY: 8. The fact that x=y (1987); 9. This Side of Paradox (1993); 10. Identity Facts (2003); 11. Personal Identity: What's the Problem? (1995)
IV. PHILOSOPHY OF MATHEMATICS: 12. Wholes, Parts, and Numbers (1997); 13. The Limits of Human Mathematics (2001)
V. THEORY OF MEANING AND REFERENCE: 14. On Content (1992); 15. On Designating (1997); 16. A Problem in the Frege-Church Theory of Sense and Denotation (1993); 17. The Very Possibility of Language (2001); 18. Tense and Intension (2003); 19. Pronouns as Variables (2005)
Were the most serious philosophers of the millennium from 200 A.D. to 1200 A.D. just confused mystics? This book shows otherwise. John Martin rehabilitates Neoplatonism, founded by Plotinus and brought into Christianity by St. Augustine. The Neoplatonists devise ranking predicates like 'good', 'excellent', and 'perfect' to divide the Chain of Being, and use the predicate intensifier 'hyper' so that it becomes a valid logical argument to reason from 'God is not (merely) good' to 'God is hyper-good'. In this way the relational facts underlying reality find expression in Aristotle's subject-predicate statements, and the Platonic tradition proves able to subsume Aristotle's logic while at the same time rejecting his metaphysics. In the Middle Ages, when Aristotle's larger philosophy was recovered and joined again to the Neoplatonic tradition, which was never lost, Neoplatonic logic lived alongside Aristotle's metaphysics in a sometimes confusing and unsettled way. Showing Neoplatonism to be significantly richer in its logical and philosophical ideas than it is usually given credit for, this book will be of interest not just to historians of logic, but to philosophers, logicians, linguists, and theologians.
This book offers a comprehensive account of logic that addresses fundamental issues concerning the nature and foundations of the discipline. The authors claim that these foundations can be established not only without strong metaphysical assumptions, but also without hypostasizing logical forms as specific entities. They present a systematic argument that the primary subject matter of logic is our linguistic interaction rather than our private reasoning, and that it is thus misleading to see logic as revealing "the laws of thought". In this sense, fundamental logical laws are implicit in our "language games" and are thus more similar to social norms than to the laws of nature. Peregrin and Svoboda also show that logical theories, despite the fact that they rely on rules implicit in our actual linguistic practice, firm up these rules and make them explicit. By carefully scrutinizing the project of logical analysis, the authors demonstrate that logical rules can best be seen as products of so-called reflective equilibrium. They suggest that we can profit from viewing languages as "inferential landscapes" and logicians as "geographers" who map them and try to pave safe routes through them. This book is an essential resource for scholars and researchers engaged with the foundations of logical theories and the philosophy of language.
In this book, Lorraine Besser-Jones develops a eudaimonistic virtue ethics based on a psychological account of human nature. While her project maintains the fundamental features of the eudaimonistic virtue ethical framework-virtue, character, and well-being-she constructs these concepts from an empirical basis, drawing support from the psychological fields of self-determination and self-regulation theory. Besser-Jones's resulting account of "eudaimonic ethics" presents a compelling normative theory and offers insight into what is involved in being a virtuous person and "acting well." This original contribution to contemporary ethics and moral psychology puts forward a provocative hypothesis of what an empirically-based moral theory would look like.
'Don't hope that events will turn out the way you want, welcome events in whichever way they happen' How can we cope when life's events seem beyond our control? These words of consolation and inspiration from the three great Stoic philosophers - Epictetus, Seneca and Marcus Aurelius - offer ancient wisdom on how to face life's adversities and live well in the world. One of twenty new books in the bestselling Penguin Great Ideas series. This new selection showcases a diverse list of thinkers who have helped shape our world today, from anarchists to stoics, feminists to prophets, satirists to Zen Buddhists.
Moral Inferences is the first volume to thoroughly explore the relationship between morality and reasoning. Drawing on the expertise of world-leading researchers, this text provides ground-breaking insight into the importance of studying these distinct fields together. The volume integrates the latest research into morality with current theories in reasoning to consider the prominent role reasoning plays in everyday moral judgements. Featuring contributions on topics such as moral arguments, causal models, and dual process theory, this text provides new perspectives on previous studies, encouraging researchers to adopt a more integrated approach in the future. Moral Inferences will be essential reading for students and researchers of moral psychology, specifically those interested in reasoning, rationality and decision-making.
This publication introduces how two different countries promote high-quality learning with technology in very different educational systems. The book presents inspiring scenarios of how new technological tools and services can be used to promote students' learning in schools and higher education, to enhance collaboration in educational communities, and to support teachers' professional development. The publication focuses on three major themes: students as knowledge and art creators in playful learning systems; personalized learning supported by mobile devices and intelligent tutoring systems with games and new web-based tools for identifying learning difficulties; and technology in digitalized learning environments. The book is based on systematic research work in universities.
Epistemic Principles: A Primer of the Theory of Knowledge presents a compact account of the basic principles of the theory of knowledge. In doing this, Nicholas Rescher aims to fill the current gap in contemporary philosophical theory of knowledge with a comprehensive analysis of epistemological fundamentals. The book is not a mere inventory of such rules and principles, but rather interweaves them into a continuous exposition of basic issues. Written at a user-friendly and accessible level, Epistemic Principles is an essential addition for both advanced undergraduate and graduate courses in epistemology.