This collection of papers, celebrating the contributions of Swedish logician Dag Prawitz to Proof Theory, has been assembled from those presented at the Natural Deduction conference organized in Rio de Janeiro to honour his seminal research. Dag Prawitz’s work forms the basis of intuitionistic type theory and his inversion principle constitutes the foundation of most modern accounts of proof-theoretic semantics in Logic, Linguistics and Theoretical Computer Science. The range of contributions includes material on the extension of natural deduction with higher-order rules, as opposed to higher-order connectives, and a paper discussing the application of natural deduction rules to dealing with equality in predicate calculus. The volume continues with a key chapter summarizing work on the extension of the Curry-Howard isomorphism (itself a by-product of the work on natural deduction), via methods of category theory that have been successfully applied to linear logic, as well as many other contributions from highly regarded authorities. With an illustrious group of contributors addressing a wealth of topics and applications, this volume is a valuable addition to the libraries of academics in the multiple disciplines whose development has been given added scope by the methodologies supplied by natural deduction. The volume is representative of the rich and varied directions that Prawitz’s work has inspired in the area of natural deduction.
We humans are collectively driven by a powerful - yet not fully explained - instinct to understand. We would like to see everything established, proven, laid bare. The more important an issue, the more we desire to see it clarified, stripped of all secrets, all shades of gray. What could be more important than to understand the Universe and ourselves as a part of it? To find a window onto our origin and our destiny? This book examines how far our modern cosmological theories - with their sometimes audacious models, such as inflation, cyclic histories, quantum creation, parallel universes - can take us towards answering these questions. Can such theories lead us to ultimate truths, leaving nothing unexplained? Last, but not least, Heller addresses the thorny problem of why and whether we should expect to find theories with all-encompassing explicative power.
As a student moves from basic calculus courses into upper-division courses in linear and abstract algebra, real and complex analysis, number theory, topology, and so on, a "bridge" course can help ensure a smooth transition. Introduction to Mathematical Structures and Proofs is a textbook intended for such a course, or for self-study. This book introduces an array of fundamental mathematical structures. It also explores the delicate balance of intuition and rigor, and the flexible thinking, required to prove a nontrivial result. In short, this book seeks to enhance the mathematical maturity of the reader. The new material in this second edition includes a section on graph theory, several new sections on number theory (including primitive roots, with an application to card-shuffling), and a brief introduction to the complex numbers (including a section on the arithmetic of the Gaussian integers). Solutions for even-numbered exercises are available on springer.com for instructors adopting the text for a course.
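The card-shuffling application of primitive roots mentioned above is presumably of the following classical kind (a standard illustration, not necessarily the book's own treatment): a perfect out-shuffle of a 52-card deck sends the card at position i to position 2i mod 51, so the deck returns to its original order after a number of shuffles equal to the multiplicative order of 2 modulo 51:

```python
def out_shuffle(deck):
    """Perfect out-shuffle: cut the deck in half, then interleave,
    keeping the original top card on top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [card for pair in zip(top, bottom) for card in pair]

deck = list(range(52))
shuffled, count = out_shuffle(deck), 1
while shuffled != deck:
    shuffled = out_shuffle(shuffled)
    count += 1
print(count)  # 8, because 2**8 = 256 = 5*51 + 1, i.e. 2**8 is 1 mod 51
```

The experiment matches the number-theoretic prediction: the order of 2 modulo 51 divides phi(51) = 32 and turns out to be 8. Decks of other sizes behave analogously.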
Two prisoners are told that they will be brought to a room and seated so that each can see the other. Hats will be placed on their heads; each hat is either red or green. The two prisoners must simultaneously submit a guess of their own hat color, and they both go free if at least one of them guesses correctly. While no communication is allowed once the hats have been placed, they will, however, be allowed to have a strategy session before being brought to the room. Is there a strategy ensuring their release? The answer turns out to be yes, and this is the simplest non-trivial example of a "hat problem." This book deals with the question of how successfully one can predict the value of an arbitrary function at one or more points of its domain based on some knowledge of its values at other points. Topics range from hat problems that are accessible to everyone willing to think hard, to some advanced topics in set theory and infinitary combinatorics. For example, there is a method of predicting the value f(a) of a function f mapping the reals to the reals, based only on knowledge of f's values on the open interval (a - 1, a), and for every such function the prediction is incorrect only on a countable set that is nowhere dense. The monograph progresses from topics requiring fewer prerequisites to those requiring more, with most of the text being accessible to any graduate student in mathematics. The broad range of readership includes researchers, postdocs, and graduate students in the fields of set theory, mathematical logic, and combinatorics. The hope is that this book will bring together mathematicians from different areas to think about set theory via a very broad array of coordinated inference problems.
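The two-prisoner puzzle can be settled by brute force. In the following sketch (my own illustration, not code from the book), prisoner 0 guesses the colour seen on prisoner 1, while prisoner 1 guesses the opposite of the colour seen on prisoner 0; the two hats either match or differ, so exactly one guess is always correct:

```python
from itertools import product

COLORS = ("red", "green")

def opposite(colour):
    return "green" if colour == "red" else "red"

def guesses(hat0, hat1):
    """Each prisoner sees only the other's hat.
    Prisoner 0 bets the hats match; prisoner 1 bets they differ."""
    return hat1, opposite(hat0)

# Verify: under every hat assignment, at least one guess is correct.
for hat0, hat1 in product(COLORS, repeat=2):
    guess0, guess1 = guesses(hat0, hat1)
    assert guess0 == hat0 or guess1 == hat1
print("the strategy frees the prisoners in all", len(COLORS) ** 2, "cases")
```

The same parity idea scales: with k colours and k prisoners, prisoner i can guess the colour that would make the sum of all hat colours congruent to i modulo k, and exactly one prisoner is then always right.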
This volume is the first ever collection devoted to the field of proof-theoretic semantics. Contributions address topics including the systematics of introduction and elimination rules and proofs of normalization, the categorial characterization of deductions, the relation between Heyting's and Gentzen's approaches to meaning, knowability paradoxes, proof-theoretic foundations of set theory, Dummett's justification of logical laws, Kreisel's theory of constructions, paradoxical reasoning, and the defence of model theory. The field of proof-theoretic semantics has existed for almost 50 years, but the term itself was proposed by Schroeder-Heister in the 1980s. Proof-theoretic semantics explains the meaning of linguistic expressions in general and of logical constants in particular in terms of the notion of proof. This volume emerges from presentations at the Second International Conference on Proof-Theoretic Semantics in Tübingen in 2013, where contributing authors were asked to provide a self-contained description and analysis of a significant research question in this area. The contributions are representative of the field and should be of interest to logicians, philosophers, and mathematicians alike.
This textbook addresses the mathematical description of sets, categories, topologies and measures, as part of the basis for advanced areas in theoretical computer science like semantics, programming languages, probabilistic process algebras, modal and dynamic logics and Markov transition systems. Using motivations, rigorous definitions, proofs and various examples, the author systematically introduces the Axiom of Choice, explains Banach-Mazur games and the Axiom of Determinacy, discusses the basic constructions of sets and the interplay of coalgebras and Kripke models for modal logics with an emphasis on Kleisli categories, monads and probabilistic systems. The text further shows various ways of defining topologies, building on selected topics like uniform spaces, Gödel's Completeness Theorem and topological systems. Finally, measurability, general integration, Borel sets and measures on Polish spaces, as well as the coalgebraic side of Markov transition kernels along with applications to probabilistic interpretations of modal logics are presented. Special emphasis is given to the integration of (co-)algebraic and measure-theoretic structures, a fairly new and exciting field, which is demonstrated through the interpretation of game logics. Readers familiar with basic mathematical structures like groups, Boolean algebras and elementary calculus including mathematical induction will discover a wealth of useful research tools. Throughout the book, exercises offer additional information, and case studies give examples of how the techniques can be applied in diverse areas of theoretical computer science and logics. References to the relevant mathematical literature enable the reader to find the original works and classical treatises, while the bibliographic notes at the end of each chapter provide further insights and discussions of alternative approaches.
This meticulous critical assessment of the ground-breaking work of philosopher Stanislaw Lesniewski focuses exclusively on primary texts and explores the full range of output by one of the master logicians of the Lvov-Warsaw school. The author's nuanced survey eschews secondary commentary, analyzing Lesniewski's core philosophical views and evaluating the formulations that were to have such a profound influence on the evolution of mathematical logic. One of the undisputed leaders of the cohort of brilliant logicians that congregated in Poland in the early twentieth century, Lesniewski was a guide and mentor to a generation of celebrated analytical philosophers (Alfred Tarski was his PhD student). His primary achievement was a system of foundational mathematical logic intended as an alternative to the Principia Mathematica of Alfred North Whitehead and Bertrand Russell. Its three strands, 'protothetic', 'ontology', and 'mereology', are detailed in discrete sections of this volume, alongside a wealth of other chapters grouped to provide the fullest possible coverage of Lesniewski's academic output. With material on his early philosophical views, his contributions to set theory and his work on nominalism and higher-order quantification, this book offers a uniquely expansive critical commentary on one of analytical philosophy's great pioneers.
The purpose of the book is to advance the understanding of brain function by defining a general framework for representation based on category theory. The idea is to bring this mathematical formalism into the domain of neural representation of physical spaces, setting the basis for a theory of mental representation able to relate empirical findings and unite them into a sound theoretical corpus. The innovative approach presented in the book provides a horizon of interdisciplinary collaboration that aims to set up a common agenda synthesizing mathematical formalization and empirical procedures in a systemic way. Category theory has been successfully applied to qualitative analysis, mainly in theoretical computer science to deal with programming language semantics. Nevertheless, the potential of category-theoretic tools for quantitative analysis of networks has not been tackled so far. Statistical methods to investigate graph structure typically rely on network parameters. Category theory can be seen as an abstraction of graph theory, so new categorical properties can be added into network analysis and graph-theoretic constructs can accordingly be extended on a more fundamental basis. By generalizing networks using category theory we can address questions and elaborate answers in a more fundamental way without waiving graph-theoretic tools. The vital issue is to establish a new framework for quantitative analysis of networks using the theory of categories, in which computational neuroscientists and network theorists may tackle the dynamics of brain cognitive networks more efficiently. The intended audience of the book is researchers who wish to explore the validity of mathematical principles in the understanding of cognitive systems. All the actors in cognitive science (philosophers, engineers, neurobiologists, cognitive psychologists, computer scientists, etc.) are invited to discover in its pages new and unforeseen connections through the development of the concepts and formal theories described in the book. Practitioners of both pure and applied mathematics, e.g. network theorists, will be delighted with the mapping of abstract mathematical concepts onto the terra incognita of cognition.
The two main themes of this book, logic and complexity, are both essential for understanding the main problems about the foundations of mathematics. Logical Foundations of Mathematics and Computational Complexity covers a broad spectrum of results in logic and set theory that are relevant to the foundations, as well as the results in computational complexity and the interdisciplinary area of proof complexity. The author presents his ideas on how these areas are connected, what the most fundamental problems are and how they should be approached. In particular, he argues that complexity is as important for foundations as are the more traditional concepts of computability and provability. Emphasis is on explaining the essence of concepts and the ideas of proofs, rather than presenting precise formal statements and full proofs. Each section starts with concepts and results easily explained, and gradually proceeds to more difficult ones. The notes after each section present some formal definitions, theorems and proofs. Logical Foundations of Mathematics and Computational Complexity is aimed at graduate students of all fields of mathematics who are interested in logic, complexity and foundations. It will also be of interest for both physicists and philosophers who are curious to learn the basics of logic and complexity theory.
This book exclusively deals with the study of almost convergence and statistical convergence of double sequences. The notion of “almost convergence” is perhaps the most useful notion in order to obtain a weak limit of a bounded non-convergent sequence. There is another notion of convergence known as the “statistical convergence”, introduced by H. Fast, which is an extension of the usual concept of sequential limits. This concept arises as an example of “convergence in density” which is also studied as a summability method. Even unbounded sequences can be dealt with by using this method. The book also discusses the applications of these non-matrix methods in approximation theory. Written in a self-contained style, the book discusses in detail the methods of almost convergence and statistical convergence for double sequences along with applications and suitable examples. The last chapter is devoted to the study of the convergence of double series and describes various convergence tests analogous to those of single sequences. In addition to applications in approximation theory, the results are expected to find application in many other areas of pure and applied mathematics such as mathematical analysis, probability, fixed point theory and statistics.
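The notion can be made concrete with a short sketch (a standard single-sequence example, not taken from this book): the sequence that equals n at perfect squares and 0 elsewhere is unbounded and not convergent, yet statistically convergent to 0, because the set of indices where it leaves any neighbourhood of 0 has natural density zero:

```python
import math

def x(n):
    """x_n = n if n is a perfect square, else 0: unbounded, non-convergent."""
    r = math.isqrt(n)
    return n if r * r == n else 0

def exception_density(N, eps=0.5):
    """Fraction of indices 1..N with |x_n - 0| >= eps."""
    return sum(1 for n in range(1, N + 1) if abs(x(n)) >= eps) / N

for N in (100, 10_000, 1_000_000):
    print(N, exception_density(N))  # 0.1, 0.01, 0.001: density tends to 0
```

So x_n is statistically convergent to 0 even though it is unbounded, which is exactly the remark above that "even unbounded sequences can be dealt with"; the double-sequence case studied in the book replaces the density of indices by the density of index pairs.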
In this new text, Steven Givant—the author of several acclaimed books, including works co-authored with Paul Halmos and Alfred Tarski—develops three theories of duality for Boolean algebras with operators. Givant addresses the two most recognized dualities (one algebraic and the other topological) and introduces a third duality, best understood as a hybrid of the first two. This text will be of interest to graduate students and researchers in the fields of mathematics, computer science, logic, and philosophy who are interested in exploring special or general classes of Boolean algebras with operators. Readers should be familiar with the basic arithmetic and theory of Boolean algebras, as well as the fundamentals of point-set topology.
An ontology is a formal description of concepts and relationships that can exist for a community of human and/or machine agents. The notion of ontologies is crucial for the purpose of enabling knowledge sharing and reuse. The Handbook on Ontologies provides a comprehensive overview of the current status and future prospects of the field of ontologies considering ontology languages, ontology engineering methods, example ontologies, infrastructures and technologies for ontologies, and how to bring this all into ontology-based infrastructures and applications that are among the best of their kind. The field of ontologies has tremendously developed and grown in the five years since the first edition of the "Handbook on Ontologies". Therefore, its revision includes 21 completely new chapters as well as a major re-working of 15 chapters transferred to this second edition.
The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems to prove that the various interpretations of probability need not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered one of the most important contributions to the analysis of probability interpretation in the last 10-15 years.
This book is based on two premises: one cannot understand philosophy of mathematics without understanding mathematics and one cannot understand mathematics without doing mathematics. It draws readers into philosophy of mathematics by having them do mathematics. It offers 298 exercises, covering philosophically important material, presented in a philosophically informed way. The exercises give readers opportunities to recreate some mathematics that will illuminate important readings in philosophy of mathematics. Topics include primitive recursive arithmetic, Peano arithmetic, Gödel's theorems, interpretability, the hierarchy of sets, Frege arithmetic and intuitionist sentential logic. The book is intended for readers who understand basic properties of the natural and real numbers and have some background in formal logic.
This book presents four mathematical essays which explore the foundations of mathematics and related topics ranging from philosophy and logic to modern computer mathematics. While connected to the historical evolution of these concepts, the essays place strong emphasis on developments still to come. The book originated in a 2002 symposium celebrating the work of Bruno Buchberger, Professor of Computer Mathematics at Johannes Kepler University, Linz, Austria, on the occasion of his 60th birthday. Among many other accomplishments, Professor Buchberger in 1985 was the founding editor of the Journal of Symbolic Computation; the founder of the Research Institute for Symbolic Computation (RISC) and its chairman from 1987 to 2000; the founder in 1990 of the Softwarepark Hagenberg, Austria, and since then its director. More than a decade in the making, Mathematics, Computer Science and Logic - A Never Ending Story includes essays by leading authorities, on such topics as mathematical foundations from the perspective of computer verification; a symbolic-computational philosophy and methodology for mathematics; the role of logic and algebra in software engineering; and new directions in the foundations of mathematics. These inspiring essays invite general, mathematically interested readers to share state-of-the-art ideas which advance the never ending story of mathematics, computer science and logic. Mathematics, Computer Science and Logic - A Never Ending Story is edited by Professor Peter Paule, Bruno Buchberger's successor as director of the Research Institute for Symbolic Computation.
This book contains the proceedings of the 23rd International Workshop on Operator Theory and its Applications (IWOTA 2012), which was held at the University of New South Wales (Sydney, Australia) from 16 July to 20 July 2012. It includes twelve articles presenting both surveys of current research in operator theory and original results.
This ambitious and original book sets out to introduce to mathematicians (including graduate students) the mathematical methods of theoretical and experimental quantum field theory, with an emphasis on coordinate-free presentations of the mathematical objects in use. This in turn promotes the interaction between mathematicians and physicists by supplying a common and flexible language for the good of both communities, though mathematicians are the primary target. This reference work provides a coherent and complete mathematical toolbox for classical and quantum field theory, based on categorical and homotopical methods, representing an original contribution to the literature. The first part of the book introduces the mathematical methods needed to work with the physicists' spaces of fields, including parameterized and functional differential geometry, functorial analysis, and the homotopical geometric theory of non-linear partial differential equations, with applications to general gauge theories. The second part presents a large family of examples of classical field theories, both from experimental and theoretical physics, while the third part provides an introduction to quantum field theory, presents various renormalization methods, and discusses the quantization of factorization algebras.
This text offers an extension to the traditional Kripke semantics for non-classical logics by adding the notion of reactivity. Reactive Kripke models change their accessibility relation as we progress in the evaluation process of formulas in the model. This feature makes the reactive Kripke semantics strictly stronger and more applicable than the traditional one. Here we investigate the properties and axiomatisations of this new and most effective semantics, and we offer a wide landscape of applications of the idea of reactivity. Applied topics include reactive automata, reactive grammars, reactive products, reactive deontic logic and reactive preferential structures. Reactive Kripke semantics is the next step in the evolution of possible world semantics for non-classical logics, and this book, written by one of the leading authorities in the field, is essential reading for graduate students and researchers in applied logic, and it offers many research opportunities for PhD students.
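The reactive idea admits a tiny executable sketch (my own toy model with invented worlds, not an example from the book): traversing one edge can deactivate another, so evaluating a formula such as "diamond diamond p" may fail reactively even when it succeeds in the static model:

```python
# Static frame: a -> b -> c, with p true only at c.
EDGES = frozenset({("a", "b"), ("b", "c")})
REACTIVE = {("a", "b"): ("b", "c")}  # traversing a->b switches b->c off
VAL = {"p": {"c"}}

def holds(world, formula, edges):
    """Evaluate an atom or ("dia", subformula) at `world`,
    where `edges` is the currently active accessibility relation."""
    if isinstance(formula, str):                 # atomic proposition
        return world in VAL[formula]
    _dia, sub = formula
    for (u, v) in sorted(edges):
        if u != world:
            continue
        if (u, v) in REACTIVE:                   # traversal reacts on the frame
            new_edges = edges - {REACTIVE[(u, v)]}
        else:
            new_edges = edges
        if holds(v, sub, new_edges):
            return True
    return False

print(holds("a", ("dia", ("dia", "p")), EDGES))  # False; statically it is True
```

With REACTIVE empty the same evaluator computes ordinary Kripke satisfaction, which is one way to see that the reactive semantics generalizes the traditional one.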
This is a text in methods of applied statistics for researchers who design and conduct experiments, perform statistical inference, and write technical reports. These research activities rely on an adequate knowledge of applied statistics. The reader both builds on basic statistics skills and learns to apply them to practical scenarios without over-emphasis on the technical aspects. Demonstrations are a very important part of this text. Mathematical expressions are exhibited only if they are defined or intuitively comprehensible. This text may be used as a self-review guidebook for applied researchers or as an introductory statistical methods textbook for students not majoring in statistics. Discussion includes essential probability models, inference of means, proportions, correlations and regressions, methods for censored survival time data analysis, and sample size determination. The author has over twenty years of experience in applying statistical methods to study design and data analysis in collaborative medical research settings, as well as in teaching. He received his PhD from the University of Southern California Department of Preventive Medicine, received post-doctoral training at the Harvard Department of Biostatistics, has held faculty appointments at the UCLA School of Medicine and Harvard Medical School, and is currently a biostatistics faculty member at Massachusetts General Hospital and Harvard Medical School in Boston, Massachusetts, USA.
Substantial effort has for years been devoted to the development of (possibly high-order) numerical techniques for the scalar homogeneous conservation law, an equation which is strongly dissipative in L1 thanks to shock wave formation. Such a dissipation property is generally lost when considering hyperbolic systems of conservation laws, or simply inhomogeneous scalar balance laws involving accretive or space-dependent source terms, because of complex wave interactions. An overall weaker dissipation can reveal intrinsic numerical weaknesses through specific nonlinear mechanisms: Hugoniot curves being deformed by local averaging steps in Godunov-type schemes, low-order errors propagating along expanding characteristics after having hit a discontinuity, exponential amplification of truncation errors in the presence of accretive source terms... This book aims at presenting rigorous derivations of different numerical schemes, sometimes called well-balanced schemes, which succeed in reconciling high accuracy with stronger robustness even in the aforementioned accretive contexts. It is divided into two parts. The first deals with hyperbolic systems of balance laws, such as arise from quasi-one-dimensional nozzle flow computations, multiphase WKB approximation of linear Schrödinger equations, or gravitational Navier-Stokes systems; stability results for viscosity solutions of one-dimensional balance laws are sketched. The second is entirely devoted to the treatment of weakly nonlinear kinetic equations in the discrete ordinate approximation, such as those of radiative transfer, chemotaxis dynamics, semiconductor conduction, spray dynamics or linearized Boltzmann models. "Caseology" is one of the main techniques used in these derivations. Lagrangian techniques for filtration equations are evoked too. Two-dimensional methods are studied in the context of non-degenerate semiconductor models.
Filling a gap in the literature, this book takes the reader to the frontiers of equivariant topology, the study of objects with specified symmetries. The discussion is motivated by reference to a list of instructive "toy" examples and calculations in what is a relatively unexplored field. The authors also provide a reading path for the first-time reader less interested in working through sophisticated machinery but still desiring a rigorous understanding of the main concepts. The subject's classical counterparts, ordinary homology and cohomology, dating back to the work of Henri Poincaré in topology, are calculational and theoretical tools which are important in many parts of mathematics and theoretical physics, particularly in the study of manifolds. Similarly powerful tools have been lacking, however, in the context of equivariant topology. Aimed at advanced graduate students and researchers in algebraic topology and related fields, the book assumes knowledge of basic algebraic topology and group actions.
Karl Menger, one of the founders of dimension theory, is among the most original mathematicians and thinkers of the twentieth century. He was a member of the Vienna Circle and the founder of its mathematical equivalent, the Viennese Mathematical Colloquium. Both during his early years in Vienna and, after his emigration, in the United States, Karl Menger made significant contributions to a wide variety of mathematical fields, and greatly influenced many of his colleagues. These two volumes contain Menger's major mathematical papers, based on his own selection from his extensive writings. They deal with topics as diverse as topology, geometry, analysis and algebra, and also include material on economics, sociology, logic and philosophy. The Selecta Mathematica is a monument to the diversity and originality of Menger's ideas.
Although introduced more than 60 years ago, it is only during the last 15 years that there has been a systematic development of saddlepoint approximations. These approximations give highly accurate expressions for a distribution, not only in its centre but also for very small tail probabilities. The price for this is a more cumbersome formula, the evaluation of which may require the use of a personal computer. This text explains the ideas behind the saddlepoint approximations as well as giving a detailed mathematical description of the subject. The emphasis of the book is two-fold. One is on popularizing the formulae through many worked-out, ready-to-use examples. The second is on giving a comprehensive mathematical background for further research in the field. Some of the subjects treated are uniformity of the approximations, tests in exponential families and compound sums with applications in insurance mathematics. This book is intended for researchers and graduate students in statistics and consulting statisticians.
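The accuracy claim can be illustrated with the textbook case of a sum of n independent Exp(1) variables, whose exact density is Gamma(n, 1) (a standard example, not drawn from this book). For this family the first-order saddlepoint density approximation has a relative error that is constant in x, so it stays accurate arbitrarily far into the tail:

```python
import math

def saddlepoint_density(x, n):
    """First-order saddlepoint approximation to the density of a sum of
    n iid Exp(1) variables, with cumulant generating function
    K(t) = -n*log(1 - t), t < 1."""
    s = 1 - n / x                 # saddlepoint: solves K'(s) = n/(1-s) = x
    K = -n * math.log(1 - s)
    K2 = n / (1 - s) ** 2         # K''(s)
    return math.exp(K - s * x) / math.sqrt(2 * math.pi * K2)

def gamma_density(x, n):
    """Exact density of the sum: Gamma(n, 1)."""
    return x ** (n - 1) * math.exp(-x) / math.factorial(n - 1)

n = 5
for x in (2.0, 5.0, 15.0):
    approx, exact = saddlepoint_density(x, n), gamma_density(x, n)
    print(f"x={x}: relative error {approx / exact - 1:+.4f}")
```

For the Gamma family the ratio of approximate to exact density is a constant (about 1.017 for n = 5), so renormalizing removes the error entirely; more generally the relative error is of order 1/n uniformly in x, which is the "uniformity of the approximations" the book treats.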
This book introduces a theory of higher matrix factorizations for regular sequences and uses it to describe the minimal free resolutions of high syzygy modules over complete intersections. Such resolutions have attracted attention ever since the elegant construction of the minimal free resolution of the residue field by Tate in 1957. The theory extends the theory of matrix factorizations of a non-zero divisor, initiated by Eisenbud in 1980, which yields a description of the eventual structure of minimal free resolutions over a hypersurface ring. Matrix factorizations have had many other uses in a wide range of mathematical fields, from singularity theory to mathematical physics.
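A matrix factorization of a ring element f, in Eisenbud's sense, is a pair of square matrices A, B over the ring with AB = BA = f*I. As a hedged illustration (the standard example for f = x^2 + y^2, not taken from this book), the identity can be checked numerically at sample points:

```python
def A(x, y):
    # entries of A lie in the polynomial ring k[x, y]
    return [[x, y], [-y, x]]

def B(x, y):
    return [[x, -y], [y, x]]

def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A*B = B*A = (x^2 + y^2) * I, checked at a few numeric points.
for x, y in [(1.0, 2.0), (3.0, -4.0)]:
    f = x * x + y * y
    assert matmul(A(x, y), B(x, y)) == [[f, 0.0], [0.0, f]]
    assert matmul(B(x, y), A(x, y)) == [[f, 0.0], [0.0, f]]
print("A*B = B*A = f*I verified at sample points")
```

The eventual 2-periodicity of minimal free resolutions over the hypersurface ring k[x, y]/(x^2 + y^2) comes precisely from alternating the maps A and B.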
This volume is the first systematic and thorough attempt to investigate the relation and the possible applications of mereology to contemporary science. It gathers contributions from leading scholars in the field and covers a wide range of scientific theories and practices such as physics, mathematics, chemistry, biology, computer science and engineering. Throughout the volume, a variety of foundational issues are investigated both from the formal and the empirical point of view. The first section looks at the topic as it applies to physics. The section addresses questions of persistence and composition within quantum and relativistic physics and concludes by scrutinizing the possibility of capturing continuity of motion, as described by our best physical theories, within gunky spacetimes. The second part tackles mathematics and shows how to provide a foundation for point-free geometry of space by switching to fuzzy logic. The relation between mereological sums and set-theoretic suprema is investigated and issues about different mereological perspectives such as classical and natural mereology are thoroughly discussed. The third section in the volume looks at natural science. Several questions from biology, medicine and chemistry are investigated. From the perspective of biology, there is an attempt to provide axioms for inferring statements about parthood between two biological entities from statements about their spatial relation. From the perspective of chemistry, it is argued that classical mereological frameworks are not adequate to capture the practices of chemistry in that they consider neither temporal nor modal parameters. The final part introduces computer science and engineering. A new formal mereological framework in which an indeterminate relation of parthood is taken as a primitive notion is constructed and then applied to a wide variety of disciplines from robotics to knowledge engineering.
A formal framework for discrete mereotopology and its applications is developed, and finally the importance of mereology for the relatively new science of domain engineering is discussed.