Often people have wondered why there is no introductory text on category theory aimed at philosophers working in related areas. The answer is simple: what makes categories interesting and significant is their specific use for specific purposes. These uses and purposes, however, vary over many areas, both "pure", e.g., mathematical, foundational and logical, and "applied", e.g., applied to physics, biology and the nature and structure of mathematical models. Borrowing from the title of Saunders Mac Lane's seminal work "Categories for the Working Mathematician", this book aims to bring the concepts of category theory to philosophers working in areas ranging from mathematics to proof theory to computer science to ontology, from physics to biology to cognition, from mathematical modeling to the structure of scientific theories to the structure of the world. Moreover, it aims to do this in a way that is accessible to non-specialists. Each chapter is written by either a category theorist or a philosopher working in one of the represented areas, and in a way that builds on the concepts that are already familiar to philosophers working in these areas.
According to the cognitive penetrability hypothesis, our beliefs, desires, and possibly our emotions literally affect how we see the world. This book elucidates the nature of the cognitive penetrability and impenetrability hypotheses, assesses their plausibility, and explores their philosophical consequences. It connects the topic's multiple strands (the psychological findings, computationalist background, epistemological consequences of cognitive architecture, and recent philosophical developments) at a time when the outcome of many philosophical debates depends on knowing whether and how cognitive states can influence perception. All sixteen chapters were written especially for the book. The first chapters provide methodological and conceptual clarification of the topic and give an account of the relations between penetrability, encapsulation, modularity, and cross-modal interactions in perception. Assessments of psychological and neuroscientific evidence for cognitive penetration are given by several chapters. Most of the contributions analyse the impact of cognitive penetrability and impenetrability on specific philosophical topics: high-level perceptual contents, the epistemological consequences of penetration, nonconceptual content, the phenomenology of late perception, metacognitive feelings, and action. The book includes a comprehensive introduction which explains the history of the debate, its key technical concepts (informational encapsulation, early and late vision, the perception-cognition distinction, hard-wired perceptual processing, perceptual learning, theory-ladenness), and the debate's relevance to current topics in the philosophy of mind and perception, epistemology, and philosophy of psychology.
Metaphysicians should pay attention to quantum mechanics. Why? Not because it provides definitive answers to many metaphysical questions-the theory itself is remarkably silent on the nature of the physical world, and the various interpretations of the theory on offer present conflicting ontological pictures. Rather, quantum mechanics is essential to the metaphysician because it reshapes standard metaphysical debates and opens up unforeseen new metaphysical possibilities. Even if quantum mechanics provides few clear answers, there are good reasons to think that any adequate understanding of the quantum world will result in a radical reshaping of our classical world-view in some way or other. Whatever the world is like at the atomic scale, it is almost certainly not the swarm of particles pushed around by forces that is often presupposed. This book guides readers through the theory of quantum mechanics and its implications for metaphysics in a clear and accessible way. The theory and its various interpretations are presented with a minimum of technicality. The consequences of these interpretations for metaphysical debates concerning realism, indeterminacy, causation, determinism, holism, and individuality (among other topics) are explored in detail, stressing the novel form that the debates take given the empirical facts in the quantum domain. While quantum mechanics may not deliver unconditional pronouncements on these issues, the range of possibilities consistent with our knowledge of the empirical world is relatively small-and each possibility is metaphysically revisionary in some way. This book will appeal to researchers, students, and anybody else interested in how science informs our world-view.
Science is the most reliable means available for understanding the world around us and our place in it. But, since science draws conclusions based on limited empirical evidence, there is always a chance that a scientific inference will be incorrect. That chance, known as inductive risk, is endemic to science. Though inductive risk has always been present in scientific practice, the role of values in responding to it has only recently gained extensive attention from philosophers, scientists, and policy-makers. Exploring Inductive Risk brings together a set of eleven concrete case studies with the goals of illustrating the pervasiveness of inductive risk, assisting scientists and policymakers in responding to it, and moving theoretical discussions of this phenomenon forward. The case studies range over a wide variety of scientific contexts, including the drug approval process, high energy particle physics, dual-use research, climate science, research on gender disparities in employment, clinical trials, and toxicology. The book includes an introductory chapter that provides a conceptual introduction to the topic and a historical overview of the argument that values have an important role to play in responding to inductive risk, as well as a concluding chapter that synthesizes important themes from the book and maps out issues in need of further consideration.
Ordinary language and scientific language enable us to speak about, in a singular way (using demonstratives and names), what we recognize not to exist: fictions, the contents of our hallucinations, abstract objects, and various idealized but nonexistent objects that our scientific theories are often couched in terms of. Indeed, references to such nonexistent items-especially in the case of the application of mathematics to the sciences-are indispensable. We cannot avoid talking about such things. Scientific and ordinary languages thus enable us to say things about Pegasus or about hallucinated objects that are true (or false), such as "Pegasus was believed by the ancient Greeks to be a flying horse," or "That elf I'm now hallucinating over there is wearing blue shoes." Standard contemporary metaphysical views and semantic analyses of singular idioms on offer in contemporary philosophy of language have not successfully accommodated these routine practices of saying true and false things about the nonexistent while simultaneously honoring the insight that such things do not exist in any way at all (and have no properties). That is, philosophers often feel driven to claim that such objects do exist, or they claim that all our talk isn't genuine truth-apt talk, but only pretence. This book reconfigures metaphysics (and the role of metaphysics in semantics) in radical ways that allow the accommodation of our ordinary ways of speaking of what does not exist while retaining the absolutely crucial presupposition that such objects exist in no way at all, have no properties, and so are not the truth-makers for the truths and falsities that are about them.
Richard Pettigrew offers an extended investigation into a particular way of justifying the rational principles that govern our credences (or degrees of belief). The main principles that he justifies are the central tenets of Bayesian epistemology, though many other related principles are discussed along the way. These are: Probabilism, the claim that credences should obey the laws of probability; the Principal Principle, which says how credences in hypotheses about the objective chances should relate to credences in other propositions; the Principle of Indifference, which says that, in the absence of evidence, we should distribute our credences equally over all possibilities we entertain; and Conditionalization, the Bayesian account of how we should plan to respond when we receive new evidence. Ultimately, then, this book is a study in the foundations of Bayesianism. To justify these principles, Pettigrew looks to decision theory. He treats an agent's credences as if they were a choice she makes between different options, gives an account of the purely epistemic utility enjoyed by different sets of credences, and then appeals to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate the principles listed above are ruled out as irrational. The account of epistemic utility set out here is the veritist's: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, Pettigrew conducts an investigation in the version of epistemic utility theory known as accuracy-first epistemology. The book can also be read as an extended reply on behalf of the veritist to the evidentialist's objection that veritism cannot account for certain evidential principles of credal rationality, such as the Principal Principle, the Principle of Indifference, and Conditionalization.
This is a new volume of original essays on the metaphysics of quantum mechanics. The essays address questions such as: What fundamental metaphysics is best motivated by quantum mechanics? What is the ontological status of the wave function? Does quantum mechanics support the existence of any other fundamental entities, e.g. particles? What is the nature of the fundamental space (or space-time manifold) of quantum mechanics? What is the relationship between the fundamental ontology of quantum mechanics and ordinary, macroscopic objects like tables, chairs, and persons? The volume includes a comprehensive introduction with a history of quantum mechanics and the debate over its metaphysical interpretation focusing especially on the main realist alternatives.
The experience of illness is a universal and substantial part of human existence. Like death, illness raises important philosophical issues. But unlike death, illness, and in particular the experience of being ill, has received little philosophical attention. This may be because illness is often understood as a physiological process that falls within the domain of medical science, and is thus outside the purview of philosophy. In Phenomenology of Illness Havi Carel argues that the experience of illness has been wrongly neglected by philosophers and proposes to fill the lacuna. Phenomenology of Illness provides a distinctively philosophical account of illness. Using phenomenology, the philosophical method for first-person investigation, Carel explores how illness modifies the ill person's body, values, and world. The aim of Phenomenology of Illness is twofold: to contribute to the understanding of illness through the use of philosophy and to demonstrate the importance of illness for philosophy. Contra the philosophical tendency to resist thinking about illness, Carel proposes that illness is a philosophical tool. Through its pathologising effect, illness distances the ill person from taken-for-granted routines and habits and reveals aspects of human existence that normally go unnoticed. Phenomenology of Illness develops a phenomenological framework for illness and a systematic understanding of illness as a philosophical tool.
Social epistemology has been flourishing in recent years, expanding and making connections with political philosophy, virtue epistemology, philosophy of science, and feminist philosophy. The philosophy of the social world too is flourishing, with burgeoning work in the metaphysics of the social world, collective responsibility, group action, and group belief. The new philosophical vista now more clearly presenting itself is collective epistemology-the epistemology of groups and institutions. Groups engage in epistemic activity all the time-whether it be the active collective inquiry of scientific research groups or crime detection units, or the evidential deliberations of tribunals and juries, or the informational efforts of the voting population in general-and yet in philosophy there is still relatively little epistemology of groups to help explore these epistemic practices and their various dimensions of social and philosophical significance. The aim of this book is to address this lack, by presenting original essays in the field of collective epistemology, exploring these regions of epistemic practice and their significance for Epistemology, Political Philosophy, Ethics, and the Philosophy of Science.
In the last decade, science in the United States has become increasingly politicized, as government officials have been accused of manipulating, distorting, subverting, and censoring science for ideological purposes. Political gamesmanship has played a major role in many different areas of science, including the debate over global climate change, embryonic stem cell research, government funding of research, the FDA's approval process, military intelligence related to Iraq, research with human subjects, and the teaching of evolution in public schools.
Science and its philosophical companion, Naturalism, represent reality in wholly nonpersonal terms. How, if at all, can a nonpersonal scheme accommodate the first-person perspective that we all enjoy? In this volume, Lynne Rudder Baker explores that question by considering both reductive and eliminative approaches to the first-person perspective. After finding both approaches wanting, she mounts an original constructive argument to show that a non-Cartesian first-person perspective belongs in the basic inventory of what exists. That is, the world that contains us persons is irreducibly personal. After arguing for the irreducibility and ineliminability of the first-person perspective, Baker develops a theory of this perspective. The first-person perspective has two stages, rudimentary and robust. Human infants and nonhuman animals with consciousness and intentionality have rudimentary first-person perspectives. In learning a language, a person acquires a robust first-person perspective: the capacity to conceive of oneself as oneself, in the first person. By developing an account of personal identity, Baker argues that her theory is coherent, and she shows various ways in which first-person perspectives contribute to reality.
In the 1950s, John Reber convinced many Californians that the best way to solve the state's water shortage problem was to dam up the San Francisco Bay. Against massive political pressure, Reber's opponents persuaded lawmakers that doing so would lead to disaster. They did this not by empirical measurement alone, but also through the construction of a model. Simulation and Similarity explains why this was a good strategy while simultaneously providing an account of modeling and idealization in modern scientific practice. Michael Weisberg focuses on concrete, mathematical, and computational models in his consideration of the nature of models, the practice of modeling, and the nature of the relationship between models and real-world phenomena.
There is a widely held conception that progress in science and technology is our salvation, and the more of it, the better. This, however, is an oversimplified and even dangerous attitude. While the future will certainly offer huge changes due to such progress, it is far from certain that all of these changes will be for the better. The unprecedented rate of technological development that the 20th century witnessed has made our lives today vastly different from those in 1900. No slowdown is in sight, and the 21st century will most likely see even more revolutionary changes than the 20th, due to advances in science, technology and medicine. Particular areas where extraordinary and perhaps disruptive advances can be expected include biotechnology, nanotechnology, and machine intelligence. We may also look forward to various ways of enhancing human cognitive and other abilities using, e.g., pharmaceuticals, genetic engineering or machine-brain interfaces - perhaps to the extent of changing human nature beyond what we currently think of as human, and into a posthuman era. The potential benefits of all these technologies are enormous, but so are the risks, including the possibility of human extinction. This book is a passionate plea for doing our best to map the territories ahead of us, and for acting with foresight, so as to maximize our chances of reaping the benefits of the new technologies while avoiding the dangers.
The Oxford Handbook of German Philosophy in the Nineteenth Century is the first collective critical study of this important period in intellectual history. The volume is divided into four parts. The first part explores individual philosophers, including Fichte, Hegel, Schopenhauer, Marx, and Nietzsche, amongst other great thinkers of the period. The second addresses key philosophical movements: Idealism, Romanticism, Neo-Kantianism, and Existentialism. The essays in the third part engage with different areas of philosophy that received particular attention at this time, including philosophy of nature, philosophy of mind, philosophy of language, philosophy of history, and hermeneutics. Finally, the contributors turn to discuss central philosophical topics, from skepticism to materialism, from dialectics to ideas of historical and cultural Otherness, and from the reception of antiquity to atheism. Written by a team of leading experts, this Handbook will be an essential resource for anyone working in the area and will lead the direction of future research.
In 1687 Isaac Newton ushered in a new scientific era in which laws of nature could be used to predict the movements of matter with almost perfect precision. Newton's physics also posed a profound challenge to our self-understanding, however, for the very same laws that keep airplanes in the air and rivers flowing downhill tell us that it is in principle possible to predict what each of us will do every second of our entire lives, given the early conditions of the universe. Can it really be that even while you toss and turn late at night in the throes of an important decision, and it seems like the scales of fate hang in the balance, your decision is a foregone conclusion? Can it really be that everything you have done and everything you ever will do is determined by facts that were in place long before you were born? This problem is one of the staples of philosophical discussion. It is discussed by everyone from freshmen in their first philosophy class to theoretical physicists in bars after conferences. And yet there is no topic that remains more unsettling, and less well understood. If you want to get behind the facade, past the bare statement of determinism, and really try to understand what physics is telling us in its own terms, read this book. The problem of free will raises all kinds of questions. What does it mean to make a decision, and what does it mean to say that our actions are determined? What are laws of nature? What are causes? What sorts of things are we, when viewed through the lenses of physics, and how do we fit into the natural order? Ismael provides a deeply informed account of what physics tells us about ourselves. The result is a vision that is abstract, alien, illuminating, and-Ismael argues-affirmative of most of what we all believe about our own freedom. Written in a jargon-free style, How Physics Makes Us Free provides an accessible and innovative take on a central question of human existence.
Three-fourths of scientific research in the United States is funded by special interests. Many of these groups have specific practical goals, such as developing pharmaceuticals or establishing that a pollutant causes only minimal harm. For groups with financial conflicts of interest, their scientific findings often can be deeply flawed.
This volume contains ten new essays focused on the exploration and articulation of a narrative that considers the notion of order within medieval and modern philosophy-its various kinds (natural, moral, divine, and human), the different ways in which each is conceived, and the diverse dependency relations that are thought to obtain among them. Descartes, with the help of others, brought about an important shift in what was understood by the order of nature by placing laws of nature at the foundation of his natural philosophy. Vigorous debate then ensued about the proper formulation of the laws of nature and the moral law, about whether such laws can be justified, and if so, how-through some aspect of the divine order or through human beings-and about what consequences these laws have for human beings and the moral and divine orders. That is, philosophers of the period were thinking through what the order of nature consists in and how to understand its relations to the divine, human, and moral orders. No two major philosophers in the modern period took exactly the same stance on these issues, but these issues are clearly central to their thought. The Divine Order, the Human Order, and the Order of Nature is devoted to investigating their positions from a vantage point that has the potential to combine metaphysical, epistemological, scientific, and moral considerations into a single narrative.
Calls for a "consilient" or "vertically integrated" approach to the study of human mind and culture have, for the most part, been received by scholars in the humanities with either indifference or hostility. One reason for this is that consilience has often been framed as bringing the study of humanistic issues into line with the study of non-human phenomena, rather than as something to which humanists and scientists contribute equally. The other major reason that consilience has yet to catch on in the humanities is a dearth of compelling examples of the benefits of adopting a consilient approach. Creating Consilience is the product of a workshop that brought together internationally-renowned scholars from a variety of fields to address both of these issues. It includes representative pieces from workshop speakers and participants that examine how adopting such a consilient stance -- informed by cognitive science and grounded in evolutionary theory -- would concretely impact specific topics in the humanities, examining each topic in a manner that not only cuts across the humanities-natural science divide, but also across individual humanistic disciplines. By taking seriously the fact that science-humanities integration is a two-way exchange, this volume takes a new approach to bridging the cultures of science and the humanities. The editors and contributors formulate how to develop a new shared framework of consilience beyond mere interdisciplinarity, in a way that both sides can accept.
The biological and social sciences often generalize causal conclusions from one context or location to others that may differ in some relevant respects, as is illustrated by inferences from animal models to humans or from a pilot study to a broader population. Inferences like these are known as extrapolations. The question of how and when extrapolation can be legitimate is a fundamental issue for the biological and social sciences that has not received the attention it deserves. In Across the Boundaries, Steel argues that previous accounts of extrapolation are inadequate and proposes a better approach that is able to answer methodological critiques of extrapolation from animal models to humans.
Restoring Layered Landscapes brings together historians, geographers, philosophers, and interdisciplinary scholars to explore ecological restoration in landscapes with complex histories shaped by ongoing interactions between humans and nature. For many decades, ecological restoration - particularly in the United States - focused on returning degraded sites to conditions that prevailed prior to human influence. This model has been broadened in recent decades, and restoration now increasingly focuses on the recovery of ecological functions and processes rather than on returning a site to a specific historical state. Nevertheless, neither the theory nor the practice of restoration has fully come to terms with the challenges of restoring layered landscapes, where nature and culture shape one another in deep and ongoing relationships. Former military and industrial sites provide paradigmatic examples of layered landscapes. Many of these sites are not only characterized by natural ecosystems worth preserving and restoring, but also embody significant political, social, and cultural histories. This volume grapples with the challenges of restoring and interpreting such complex sites: What should we aim to restore in such places? How can restoration adequately take the legacies of human use into account? Should traces of the past be left on the landscape, and how can interpretive strategies be creatively employed to make visible the complex legacies of an open pit mine or chemical weapons manufacturing plant? Restoration aims to create new value, but not always without loss. Restoration often disrupts existing ecosystems, infrastructure, and artifacts. The chapters in this volume consider what restoration can tell us more generally about the relationship between continuity and change, and how the past can and should inform our thinking about the future. These insights, in turn, will help foster a more thoughtful approach to human-environment relations in an era of unprecedented anthropogenic global environmental change.
The theory of relativity convinced many philosophers that space and time are fundamentally alike, and that they are mere aspects of a more fundamental space-time. In The Nature of Time, Ulrich Meyer argues against this consensus view. Instead of a 'spatial' account of time that treats instants like positions in space, he presents the first comprehensive defense of a 'modal' account that emphasizes the similarities between times and the possible worlds in modal logic. Modal accounts of time are naturally cast in terms of a tense logic that accounts for temporal distinctions in terms of primitive tense operators. Tense logic was originally developed to provide a linguistic theory of verb tense in natural languages, but here Meyer proposes that it can be treated as a metaphysical theory of the nature of time. Contrary to popular belief, such modal accounts of time do not commit us to the view that there is something metaphysically special about the present moment, and they are easily reconciled with the theory of relativity.
Storrs McCall presents an original philosophical theory of the nature of the universe based on a striking new model of its space-time structure. He shows that this theory can illuminate a wide variety of hitherto unresolved philosophical problems. These include: the direction and flow of time; the nature of scientific laws; the interpretation of quantum mechanics; the definition of probability; counterfactual semantics; and the notions of identity, essential properties, deliberation, decision, and free will. A particular instance of the explanatory powers of the proposed space-time model is its account of quantum non-locality in the EPR and GHZ experiments. Professor McCall argues that the fact that the model explains and throws light on such a broad range of problems constitutes strong evidence that the universe is as the model portrays it.
This volume brings together fourteen major essays on truth, naturalism, expressivism and representationalism, by one of contemporary philosophy's most challenging thinkers. Huw Price weaves together Quinean minimalism about truth, Carnapian deflationism about metaphysics, Wittgensteinian pluralism about the functions of declarative language, and Rortyian skepticism about representation to craft a powerful and sustained critique of contemporary naturalistic metaphysics. In its place, he offers us not nonnaturalistic metaphysics, or philosophical quietism, but a new positive program for philosophy, cast from a pragmatist mold. This collection will be essential reading for anyone interested in naturalism, pragmatism, truth, expressivism, pluralism and representationalism, or in deep questions about the direction and foundations of contemporary philosophy. It will be especially important to practitioners of analytic metaphysics, if they wish to confront the presuppositions of their own discipline. Price recommends a modest explanatory naturalism, in the sense of Hume: naturalism about our own linguistic behavior, regarded as the behavior of natural creatures in a natural environment. He shows how this viewpoint privileges use and function over truth and reference, and expression over representation, as useful theoretical categories for the core philosophical project; and thereby undermines the semantic presuppositions of contemporary analytic metaphysics. At the same time, it offers an attractive resolution of the so-called "placement problems" that so preoccupy metaphysical naturalists - a global expressivism, with affinities both to the more local expressivism of writers such as Blackburn and Gibbard, and to Brandom's global inferentialism.
What do we see? We are visually conscious of colors and shapes, but are we also visually conscious of complex properties such as being John Malkovich? In this book, Susanna Siegel develops a framework for understanding the contents of visual experience, and argues that these contents involve all sorts of complex properties. Siegel starts by analyzing the notion of the contents of experience, and by arguing that theorists of all stripes should accept that experiences have contents. She then introduces a method for discovering the contents of experience: the method of phenomenal contrast. This method relies only minimally on introspection, and allows rigorous support for claims about experience. She then applies the method to make the case that we are conscious of many kinds of properties, of all sorts of causal properties, and of many other complex properties. She goes on to use the method to help analyze difficult questions about our consciousness of objects and their role in the contents of experience, and to reconceptualize the distinction between perception and sensation. Siegel's results are important for many areas of philosophy, including the philosophy of mind, epistemology, and the philosophy of science. They are also important for the psychology and cognitive neuroscience of vision.
Colin Howson offers a solution to one of the central, unsolved problems of Western philosophy, the problem of induction. In the mid-eighteenth century David Hume argued that successful prediction tells us nothing about the truth or probable truth of the predicting theory. Howson claims that Hume's argument is correct, and examines what follows about the relation between science and its empirical base.