We see numbers on automobile license plates, addresses, weather reports, and, of course, on our smartphones. Yet we look at these numbers for their role as descriptors, not as entities in and of themselves. Each number has its own history of meaning, usage, and connotation in the larger world. The Secret Lives of Numbers takes readers on a journey through integers, considering their numerological assignments as well as their significance beyond mathematics and in the realm of popular culture. Of course we all know that the number 13 carries a certain air of unluckiness with it. The phobia of the number is called triskaidekaphobia; Franklin Delano Roosevelt was known to invite and disinvite guests to parties to avoid having 13 people in attendance; high-rise buildings often skip the 13th floor out of superstition. There are many explanations as to how the number 13 received this negative honor, but from a mathematical point of view, 13 is also the smallest prime number that yields a different prime (31) when its digits are reversed. It is honored with a place among the Fibonacci numbers and integral Pythagorean triples, as well as many other interesting and lesser-known occurrences. In The Secret Lives of Numbers, popular mathematician Alfred S. Posamentier provides short and engaging mini-biographies of more than 100 numbers, starting with 1 and featuring some especially interesting numbers, like 6,174, a number with most unusual properties, to provide readers with a more comprehensive picture of the lives of numbers both mathematically and socially.
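The arithmetic facts mentioned in this blurb are easy to check by computer. The short Python sketch below is illustrative and not taken from the book: it verifies that reversing the digits of 13 gives the prime 31, and demonstrates one well-known property of 6,174 (the Kaprekar routine). The blurb does not say which property of 6,174 it has in mind, so that choice is an assumption.

```python
# A small illustrative script (not from the book) checking two arithmetic facts.

def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small numbers like these."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def kaprekar_step(n: int) -> int:
    """One step of the Kaprekar routine on a four-digit number."""
    digits = f"{n:04d}"
    largest = int("".join(sorted(digits, reverse=True)))
    smallest = int("".join(sorted(digits)))
    return largest - smallest

# 13 is prime, and reversing its digits gives 31, which is also prime.
assert is_prime(13) and is_prime(int(str(13)[::-1]))

# Repeatedly applying the Kaprekar routine to a four-digit number with at least
# two distinct digits reaches 6174 (the property assumed to be meant above).
n = 3524
while n != 6174:
    n = kaprekar_step(n)
print("Reached Kaprekar's constant:", n)
```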
This book is the final version of a course on algorithmic information theory and the epistemology of mathematics and physics. It discusses Einstein's and Gödel's views on the nature of mathematics in the light of information theory, and defends the thesis that mathematics is quasi-empirical. There is a foreword by Cris Calude of the University of Auckland, and supplementary material is available at the author's web site. The special feature of this book is that it presents a new "hands-on" didactic approach using LISP and Mathematica software. The reader will be able to derive an understanding of the close relationship between mathematics and physics. "The Limits of Mathematics is a very personal and idiosyncratic account of Greg Chaitin's entire career in developing algorithmic information theory. The combination of the edited transcripts of his three introductory lectures maintains all the energy and content of the oral presentations, while the material on AIT itself gives a full explanation of how to implement Greg's ideas on real computers for those who want to try their hand at furthering the theory." (John Casti, Santa Fe Institute)
Reasoning under uncertainty is always based on a specified language or formalism, including its particular syntax and semantics, but also on its associated inference mechanism. The present volume of the handbook treats the last aspect, the algorithmic aspects of uncertainty calculi. Theory has sufficiently advanced to unfold some generally applicable fundamental structures and methods. On the other hand, particular features of specific formalisms and approaches to uncertainty of course still strongly influence the computational methods to be used. Both general and specific methods are included in this volume. Broadly speaking, symbolic or logical approaches to uncertainty and numerical approaches are often distinguished. Although this distinction is somewhat misleading, it is used as a means to structure the present volume. This is even to some degree reflected in the first two chapters, which treat fundamental, general methods of computation in systems designed to represent uncertainty. It was noted early on by Shenoy and Shafer that computations in different domains have an underlying common structure. Essentially, pieces of knowledge or information are to be combined together and then focused on some particular question or domain. This can be captured in an algebraic structure called a valuation algebra, which is described in the first chapter. Here the basic operations of combination and focusing (marginalization) of knowledge and information are modeled abstractly, subject to simple axioms.
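To make the two basic operations concrete, the Python sketch below implements one standard instance of a valuation algebra: discrete potentials over binary variables, combined by pointwise multiplication and focused by summing variables out. The class and method names are illustrative assumptions, not notation taken from the handbook.

```python
# A minimal sketch of one concrete valuation algebra: discrete potentials over
# binary variables. Combination is a pointwise product on the union of the
# domains; focusing (marginalization) sums out the variables that are dropped.
# Names and representation are illustrative, not the handbook's notation.
from itertools import product


class Potential:
    def __init__(self, variables, table):
        # variables: tuple of variable names; table: dict mapping value tuples
        # (one 0/1 entry per variable, in order) to non-negative numbers.
        self.variables = tuple(variables)
        self.table = dict(table)

    def combine(self, other):
        """Combination: pointwise product on the union of the two domains."""
        joint = self.variables + tuple(
            v for v in other.variables if v not in self.variables)
        table = {}
        for assignment in product((0, 1), repeat=len(joint)):
            a = self.table[tuple(assignment[joint.index(v)] for v in self.variables)]
            b = other.table[tuple(assignment[joint.index(v)] for v in other.variables)]
            table[assignment] = a * b
        return Potential(joint, table)

    def marginalize(self, keep):
        """Focusing: project onto `keep` by summing out all other variables."""
        kept = tuple(v for v in self.variables if v in keep)
        idx = [self.variables.index(v) for v in kept]
        table = {}
        for assignment, value in self.table.items():
            key = tuple(assignment[i] for i in idx)
            table[key] = table.get(key, 0) + value
        return Potential(kept, table)


# Two pieces of knowledge over overlapping domains, combined and then focused:
p = Potential(("a", "b"), {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.8})
q = Potential(("b",), {(0,): 0.3, (1,): 0.7})
print(p.combine(q).marginalize(("a",)).table)  # approximately {(0,): 0.5, (1,): 0.62}
```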
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to Complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximations, providing a better understanding of efficient algorithmic solutions to problems. Divided into three parts, it covers:
- Model Theory and Recursive Functions: introducing the basic model theory of propositional, first-order, inductive definitions and second-order logic. Recursive functions, Turing computability and decidability are also examined.
- Descriptive Complexity: looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity.
- Approximation: explaining how some optimization problems and counting problems can be approximated according to their logical form.
Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
Goal Directed Proof Theory presents a uniform and coherent methodology for automated deduction in non-classical logics, the relevance of which to computer science is now widely acknowledged. The methodology is based on goal-directed provability. It is a generalization of the logic programming style of deduction, and it is particularly favourable for proof search. The methodology is applied for the first time in a uniform way to a wide range of non-classical systems, covering intuitionistic, intermediate, modal and substructural logics. The book can also be used as an introduction to these logical systems from a procedural perspective. Readership: Computer scientists, mathematicians and philosophers, and anyone interested in the automation of reasoning based on non-classical logics. The book is suitable for self-study, its only prerequisite being some elementary knowledge of logic and proof theory.
* Written by an interdisciplinary group of specialists from the arts, humanities and sciences at Oxford University
* Suitable for a wide non-academic readership, and will appeal to anyone with an interest in mathematics, science and philosophy.
This is a two-volume collection presenting the selected works of Herbert Busemann, one of the leading geometers of the twentieth century and one of the main founders of metric geometry, convexity theory and convexity in metric spaces. Busemann also did substantial work, probably the most important, on Hilbert's Problem IV. These collected works include Busemann's most important published articles on these topics. Volume I of the collection features Busemann's papers on the foundations of geodesic spaces and on the metric geometry of Finsler spaces. Volume II includes Busemann's papers on convexity and integral geometry, on Hilbert's Problem IV, and other papers on miscellaneous subjects. Each volume offers biographical documents, documents from Busemann's correspondence, and introductory essays on his work written by leading specialists. They are a valuable resource for researchers in synthetic and metric geometry, convexity theory and the foundations of geometry.
Since 1990 the German Research Society (Deutsche Forschungsgemeinschaft, DFG) has been funding PhD courses (Graduiertenkollegs) at selected universities in the Federal Republic of Germany. TU Berlin was one of the first universities to join that new funding program of the DFG. The PhD courses have been funded over a period of 9 years. The grant for the nine years amounts to approximately 5 million DM. Our Graduiertenkolleg on Communication-based Systems has been assigned to the Computer Science Department of TU Berlin, although it is a joint effort of all three universities in Berlin: Technische Universität (TU), Freie Universität (FU), and Humboldt-Universität (HU). The Graduiertenkolleg started its program in October 1991. The professors responsible for the program are: Hartmut Ehrig (TU), Günter Hommel (TU), Stefan Jähnichen (TU), Peter Löhr (FU), Miroslaw Malek (HU), Peter Pepper (TU), Radu Popescu-Zeletin (TU), Herbert Weber (TU), and Adam Wolisz (TU). The Graduiertenkolleg is a PhD program for highly qualified persons in the field of computer science. Twenty scholarships have been granted to fellows of the Graduiertenkolleg for a maximum period of three years. During this time the fellows take part in a selected educational program and work on their PhD theses.
From the table of contents:
2. The Algorithm
3. Convergence Analysis
4. Complexity Analysis
5. Conclusions
References
A Simple Proof for a Result of Ollerenshaw on Steiner Trees (Xiufeng Du, Ding-Zhu Du, Biao Gao, and Lixue Qi)
1. Introduction
2. In the Euclidean Plane
3. In the Rectilinear Plane
4. Discussion
References
Optimization Algorithms for the Satisfiability (SAT) Problem (Jun Gu)
1. Introduction
2. A Classification of SAT Algorithms
3. Preliminaries
4. Complete Algorithms and Incomplete Algorithms
5. Optimization: An Iterative Refinement Process
6. Local Search Algorithms for SAT
7. Global Optimization Algorithms for SAT Problem
8. Applications
9. Future Work
10. Conclusions
References
Ergodic Convergence in Proximal Point Algorithms with Bregman Functions (Osman Güler)
1. Introduction
2. Convergence for Function Minimization
3. Convergence for Arbitrary Maximal Monotone Operators
This volume was produced in conjunction with the Thematic Program in o-Minimal Structures and Real Analytic Geometry, held from January to June of 2009 at the Fields Institute. Five of the six contributions consist of notes from graduate courses associated with the program: Felipe Cano on a new proof of resolution of singularities for planar analytic vector fields; Chris Miller on o-minimality and Hardy fields; Jean-Philippe Rolin on the construction of o-minimal structures from quasianalytic classes; Fernando Sanz on non-oscillatory trajectories of vector fields; and Patrick Speissegger on pfaffian sets. The sixth contribution, by Antongiulio Fornasiero and Tamara Servi, is an adaptation to the nonstandard setting of A.J. Wilkie's construction of o-minimal structures from infinitely differentiable functions. Most of this material is either unavailable elsewhere or spread across many different sources such as research papers, conference proceedings and PhD theses. This book will be a useful tool for graduate students or researchers from related fields who want to learn about expansions of o-minimal structures by solutions, or images thereof, of definable systems of differential equations.
The book presents in a mathematically clear way the fundamentals of algorithmic information theory and a few selected applications. This 2nd edition presents new and important results obtained in recent years: the characterization of computably enumerable random reals, the construction of an Omega Number for which ZFC cannot determine any digits, and the first successful attempt to compute the exact values of 64 bits of a specific Omega Number. Finally, the book contains a discussion of some interesting philosophical questions related to randomness and mathematical knowledge. "Professor Calude has produced a first-rate exposition of up-to-date work in information and randomness." D.S. Bridges, Canterbury University, co-author, with Errett Bishop, of Constructive Analysis "The second edition of this classic work is highly recommended to anyone interested in algorithmic information and randomness." G.J. Chaitin, IBM Research Division, New York, author of Conversations with a Mathematician "This book is a must for a comprehensive introduction to algorithmic information theory and for anyone interested in its applications in the natural sciences." K. Svozil, Technical University of Vienna, author of Randomness & Undecidability in Physics
This monograph details several important advances in the area known as the proofs-as-programs paradigm, a set of approaches to developing programs from proofs in constructive logic. It serves the dual purpose of providing a state-of-the-art overview of the field and detailing tools and techniques to stimulate further research. One of the book's central themes is a general, abstract framework for developing new systems of program synthesis by adapting proofs-as-programs to new contexts, which the authors call the Curry-Howard Protocol. This protocol is used to provide two novel applications for industrial-scale, complex software engineering: contractual imperative program synthesis and structured software synthesis. These applications constitute an exemplary justification for the applicability of the protocol to different contexts. The book is intended for graduate students in computer science or mathematics who wish to extend their background in logic and type theory as well as gain experience working with logical frameworks and practical proof systems. In addition, the proofs-as-programs research community, and the wider computational logic, formal methods and software engineering communities will benefit. The applications given in the book should be of interest for researchers working in the target problem domains.
The purpose of the book is to advance the understanding of brain function by defining a general framework for representation based on category theory. The idea is to bring this mathematical formalism into the domain of neural representation of physical spaces, setting the basis for a theory of mental representation able to relate empirical findings and unite them into a sound theoretical corpus. The innovative approach presented in the book provides a horizon of interdisciplinary collaboration that aims to set up a common agenda synthesizing mathematical formalization and empirical procedures in a systemic way. Category theory has been successfully applied to qualitative analysis, mainly in theoretical computer science to deal with programming language semantics. Nevertheless, the potential of category theoretic tools for quantitative analysis of networks has not been tackled so far. Statistical methods to investigate graph structure typically rely on network parameters. Category theory can be seen as an abstraction of graph theory. Thus, new categorical properties can be added into network analysis, and graph theoretic constructs can be extended accordingly on a more fundamental basis. By generalizing networks using category theory we can address questions and elaborate answers in a more fundamental way without giving up graph theoretic tools. The vital issue is to establish a new framework for quantitative analysis of networks using the theory of categories, in which computational neuroscientists and network theorists may tackle in more efficient ways the dynamics of brain cognitive networks. The intended audience of the book is researchers who wish to explore the validity of mathematical principles in the understanding of cognitive systems. All the actors in cognitive science, philosophers, engineers, neurobiologists, cognitive psychologists, computer scientists and so on, are likely to discover in its pages new and unforeseen connections through the development of the concepts and formal theories described in the book. Practitioners of both pure and applied mathematics, e.g. network theorists, will be delighted with the mapping of abstract mathematical concepts onto the terra incognita of cognition.
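The remark that category theory can be seen as an abstraction of graph theory has a standard concrete instance: the free category on a directed graph, where objects are nodes, morphisms are paths, and composition is path concatenation. The Python sketch below is illustrative only and not taken from the book; the graph and representation are made-up assumptions used to check the category laws on a small example.

```python
# A tiny illustrative sketch (not from the book): paths in a directed graph form
# a category (the free category on the graph). Objects are nodes, morphisms are
# paths, and composition is concatenation of paths.

def identity(node):
    # The identity morphism at a node: the empty path starting and ending there.
    return {"src": node, "dst": node, "edges": []}

def path(*es):
    # Build a morphism from a sequence of consecutive edges.
    assert all(es[i][1] == es[i + 1][0] for i in range(len(es) - 1))
    return {"src": es[0][0], "dst": es[-1][1], "edges": list(es)}

def compose(f, g):
    # A path A -> B composed with a path B -> C gives a path A -> C.
    assert f["dst"] == g["src"], "morphisms must be composable"
    return {"src": f["src"], "dst": g["dst"], "edges": f["edges"] + g["edges"]}

f = path(("A", "B"))
g = path(("B", "C"))
h = path(("C", "D"))

# Category laws, checked on this example: unit laws and associativity.
assert compose(identity("A"), f) == f and compose(f, identity("B")) == f
assert compose(compose(f, g), h) == compose(f, compose(g, h))
print("The free category on this graph satisfies the unit and associativity laws.")
```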
This essential companion volume to Chaitin's highly successful "The Limits of Mathematics", also published by Springer, gives a brilliant historical survey of the work of this century on the foundations of mathematics, in which the author was a major participant. The Unknowable is a very readable and concrete introduction to Chaitin's ideas, and it includes a detailed explanation of the programming language used by Chaitin in both volumes. It will enable computer users to interact with the author's proofs and discover for themselves how they work. The software for The Unknowable can be downloaded from the author's Web site.
As is known, the book "Multivariate Spline Functions and Their Applications" was published by Science Press in 1994. The present book is an English edition based on the original book mentioned above, with many changes, including changes to the structure of cubic interpolation in n-dimensional spline spaces; more detail on triangulations has also been added. Special cases of multivariate spline functions (such as step functions, polygonal functions, and piecewise polynomials) have been examined mathematically for a long time. I. J. Schoenberg (Contributions to the problem of approximation of equidistant data by analytic functions, Quart. Appl. Math., 4 (1946), 45-99; 112-141) and W. Quade & L. Collatz (Zur Interpolationstheorie der reellen periodischen Funktionen, Preuss. Akad. Wiss. (Phys.-Math. Kl.), 30 (1938), 383-429) systematically established the theory of spline functions. W. Quade & L. Collatz mainly discussed periodic functions, while I. J. Schoenberg's work was systematic and complete. I. J. Schoenberg outlined three viewpoints for studying univariate splines: Fourier transformations, truncated polynomials and Taylor expansions. Based on the first two viewpoints, I. J. Schoenberg deduced the B-spline function and its basic properties, especially the basis functions. Based on the latter viewpoint, he represented the spline functions in terms of truncated polynomials. These viewpoints and methods significantly affected the development of spline functions.
The expression of uncertainty in measurement poses a challenge since it involves physical, mathematical, and philosophical issues. This problem is intensified by the limitations of the probabilistic approach used by the current standard (the GUM Instrumentation Standard). This text presents an alternative approach. It makes full use of the mathematical theory of evidence to express the uncertainty in measurements. Coverage provides an overview of the current standard, then pinpoints and constructively resolves its limitations. Numerous examples throughout help explain the book's unique approach.
This valuable resource provides an overview of recent research and strategies in developing and applying modelling to promote practice-based research in STEM education. In doing so, it bridges barriers across academic disciplines by suggesting activities that promote integration of qualitative science concepts with the tools of mathematics and engineering. The volume's three parts offer a comprehensive review by 1) presenting a conceptual background of how scientific inquiry can be induced in mathematics classes considering recommendations of prior research, 2) collecting case studies that were designed using a scientific inquiry process for math classes, and 3) exploring future possibilities and directions for the research included within. Among the topics discussed:
* STEM education: A platform for multidisciplinary learning.
* Teaching and learning representations in STEM.
* Formulating conceptual framework for multidisciplinary STEM modeling.
* Exploring function continuity in context.
* Exploring function transformations using a dynamic system.
Scientific Inquiry in Mathematics - Theory and Practice delivers hands-on and concrete strategies for effective STEM teaching in practice to educators within the fields of mathematics, science, and technology. It will be of interest to practicing and future mathematics teachers at all levels, as well as teacher educators, mathematics education researchers, and undergraduate and graduate mathematics students interested in research-based methods for integrating inquiry-based learning into STEM classrooms.
Mathematics plays a key role in computer science; some researchers would consider computers nothing but the physical embodiment of mathematical systems. And whether you are designing a digital circuit, a computer program or a new programming language, you need mathematics to be able to reason about the design: its correctness, robustness and dependability. This book covers the foundational mathematics necessary for courses in computer science. The common approach to presenting mathematical concepts and operators is to define them in terms of the properties they satisfy, and then, based on these definitions, develop ways of computing the result of applying the operators and prove them correct. This book is mainly written for computer science students, so here the author takes a different approach: he starts by defining ways of calculating the results of applying the operators and then proves that they satisfy various properties. After justifying his underlying approach the author offers detailed chapters covering propositional logic, predicate calculus, sets, relations, discrete structures, structured types, numbers, and reasoning about programs. The book contains chapter and section summaries, detailed proofs and many end-of-section exercises, key to the learning process. The book is suitable for undergraduate and graduate students, and although the treatment focuses on areas with frequent applications in computer science, the book is also suitable for students of mathematics and engineering.
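The "compute first, prove properties afterwards" approach described in this blurb can be illustrated in miniature. The toy Python example below is ours, not the book's: it defines a set-union operator by an explicit calculation and then checks, on sample inputs, the algebraic properties one would subsequently prove in general.

```python
# A toy illustration (not from the book) of defining an operator by an explicit
# calculation first, then checking the properties it should satisfy.

def union(xs: list, ys: list) -> list:
    """Compute the union of two 'sets' represented as duplicate-free lists."""
    result = list(xs)
    for y in ys:
        if y not in result:
            result.append(y)
    return result

# Properties one would then prove in general; here we just test instances.
a, b = [1, 2, 3], [3, 4]
assert sorted(union(a, b)) == sorted(union(b, a))                          # commutativity
assert sorted(union(a, a)) == sorted(a)                                    # idempotence
assert sorted(union(union(a, b), [5])) == sorted(union(a, union(b, [5])))  # associativity
```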
"In case you are considering to adopt this book for courses with over 50 students, please contact ""[email protected]"" for more information. "
The last three chapters of the book provide an introduction to type theory (higher-order logic). It is shown how various mathematical concepts can be formalized in this very expressive formal language. This expressive notation facilitates proofs of the classical incompleteness and undecidability theorems which are very elegant and easy to understand. The discussion of semantics makes clear the important distinction between standard and nonstandard models, which is so important in understanding puzzling phenomena such as the incompleteness theorems and Skolem's Paradox about countable models of set theory. Some of the numerous exercises require giving formal proofs. A computer program called ETPS, which is available from the web, facilitates doing and checking such exercises. Audience: This volume will be of interest to mathematicians, computer scientists, and philosophers in universities, as well as to computer scientists in industry who wish to use higher-order logic for hardware and software specification and verification.
Data fusion and information fusion are names that have been primarily assigned to military-oriented problems. In military applications, typical data fusion problems are: multisensor, multitarget detection, object identification, tracking, threat assessment, mission assessment and mission planning, among many others. However, it is clear that the basic concepts underlying such fusion procedures can often be used in nonmilitary applications as well. The purpose of this book is twofold: first, to point out present gaps in the way data fusion problems are conceptually treated; second, to address this issue by exhibiting mathematical tools which treat combination of evidence in the presence of uncertainty in a more systematic and comprehensive way. These techniques are based essentially on two novel ideas relating to probability theory: the newly developed fields of random set theory and conditional and relational event algebra. This volume is intended to be both an update on research progress on data fusion and an introduction to potentially powerful new techniques: fuzzy logic, random set theory, and conditional and relational event algebra. Audience: This volume can be used as a reference book for researchers and practitioners in data fusion or expert systems theory, or for graduate students as a text for a research seminar or graduate-level course.
Fuzzy knowledge and fuzzy systems affect our lives today as such systems enter the world of commerce. Fuzzy systems are incorporated in domestic appliances (washing machine, air conditioning, microwave, telephone) and in transport systems (a pilotless helicopter has recently completed a test flight). Future applications are expected to have dramatic implications for the demand for labor, among other things. It was with such thoughts in mind that this first international survey of future applications of fuzzy logic was undertaken. The results are likely to be predictive for a decade beyond the millennium. The predictive element is combined with a bibliography which serves as an historical anchor as well as being both extensive and extremely useful. Analysis and Evaluation of Fuzzy Systems is thus a milestone in the development of fuzzy logic and applications of three representative subsystems: Fuzzy Control, Fuzzy Pattern Recognition and Fuzzy Communications.
Fuzzy Logic Foundations and Industrial Applications is an organized edited collection of contributed chapters covering basic fuzzy logic theory, fuzzy linear programming, and applications. Special emphasis has been given to coverage of recent research results, and to industrial applications of fuzzy logic. The chapters are new works that have been written exclusively for this book by many of the leading and prominent researchers (such as Ronald Yager, Ellen Hisdal, Etienne Kerre, and others) in this field. The contributions are original and each chapter is self-contained. The authors have been careful to indicate direct links between fuzzy set theory and its industrial applications. Fuzzy Logic Foundations and Industrial Applications is an invaluable work that provides researchers and industrial engineers with up-to-date coverage of new results on fuzzy logic and relates these results to their industrial use.
The series is devoted to the publication of monographs and high-level textbooks in mathematics, mathematical methods and their applications. Apart from covering important areas of current interest, a major aim is to make topics of an interdisciplinary nature accessible to the non-specialist. The works in this series are addressed to advanced students and researchers in mathematics and theoretical physics. In addition, they can serve as a guide for lectures and seminars at the graduate level. The series de Gruyter Studies in Mathematics was founded ca. 35 years ago by the late Professor Heinz Bauer and Professor Peter Gabriel with the aim of establishing a series of monographs and textbooks of high standard, written by scholars with an international reputation and presenting current fields of research in pure and applied mathematics. While the editorial board of the Studies has changed over the years, the aspirations of the Studies are unchanged. In times of rapid growth of mathematical knowledge, carefully written monographs and textbooks written by experts are needed more than ever, not least to pave the way for the next generation of mathematicians. In this sense the editorial board and the publisher of the Studies are devoted to continuing the Studies as a service to the mathematical community. Please submit any book proposals to Niels Jacob. Titles in planning include:
Flavia Smarazzo and Alberto Tesei, Measure Theory: Radon Measures, Young Measures, and Applications to Parabolic Problems (2019)
Elena Cordero and Luigi Rodino, Time-Frequency Analysis of Operators (2019)
Mark M. Meerschaert, Alla Sikorskii, and Mohsen Zayernouri, Stochastic and Computational Models for Fractional Calculus, second edition (2020)
Mariusz Lemanczyk, Ergodic Theory: Spectral Theory, Joinings, and Their Applications (2020)
Marco Abate, Holomorphic Dynamics on Hyperbolic Complex Manifolds (2021)
Miroslava Antic, Joeri Van der Veken, and Luc Vrancken, Differential Geometry of Submanifolds: Submanifolds of Almost Complex Spaces and Almost Product Spaces (2021)
Kai Liu, Ilpo Laine, and Lianzhong Yang, Complex Differential-Difference Equations (2021)
Rajendra Vasant Gurjar, Kayo Masuda, and Masayoshi Miyanishi, Affine Space Fibrations (2022)
Logic and Philosophy of Mathematics in the Early Husserl focuses on the first ten years of Edmund Husserl's work, from the publication of his Philosophy of Arithmetic (1891) to that of his Logical Investigations (1900/01), and aims to precisely locate his early work in the fields of logic, philosophy of logic and philosophy of mathematics. Unlike most phenomenologists, the author refrains from reading Husserl's early work as a more or less immature sketch of claims consolidated only in his later phenomenology, and unlike the majority of historians of logic she emphasizes the systematic strength and the originality of Husserl's logico-mathematical work. The book attempts to reconstruct the discussion between Husserl and those philosophers and mathematicians who contributed to new developments in logic, such as Leibniz, Bolzano, the logical algebraists (especially Boole and Schröder), Frege, and Hilbert and his school. It presents both a comprehensive critical examination of some of the major works produced by Husserl and his antagonists in the last decade of the 19th century and a formal reconstruction of many texts from Husserl's Nachlass that have not yet been the object of systematic scrutiny. This volume will be of particular interest to researchers working in the history, and in the philosophy, of logic and mathematics, and more generally, to analytical philosophers and phenomenologists with a background in standard logic.
This book contains leading survey papers on the various aspects of Abduction, covering both logical and numerical approaches. Abduction is central to all areas of applied reasoning, including artificial intelligence, philosophy of science, machine learning, data mining and decision theory, as well as logic itself.