Welcome to Loot.co.za!
Books > Science & Mathematics > Mathematics > Mathematical foundations
While it is well known that the Delian problem is impossible to solve with straightedge and compass - it is impossible, for example, to construct a segment whose length is the cube root of 2 with these instruments - the discovery by the Italian mathematician Margherita Beloch Piazzolla in 1934 that one can in fact construct such a segment with a single paper fold was completely ignored until the end of the 1980s. This comes as no surprise, since, with few exceptions, paper folding was seldom considered a mathematical practice, let alone a mathematical procedure of inference or proof that could prompt novel mathematical discoveries. A few questions immediately arise: Why did paper folding become a non-instrument? What caused the marginalisation of this technique? And how was the mathematical knowledge that was nevertheless transmitted and prompted by paper folding later treated and conceptualised? Aiming to answer these questions, this volume provides the first extensive study of the history of folding in mathematics, spanning the 16th to the 20th century, and offers a general study of the ways mathematical knowledge is marginalised, disappears, is ignored or becomes obsolete. In doing so, it makes a valuable contribution to the history and philosophy of science, particularly the history and philosophy of mathematics, and is highly recommended for anyone interested in these topics.
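As an illustrative sketch of why a single fold yields the cube root of 2 (our own choice of setup, not necessarily Beloch's original one): folding a point onto a line makes the crease tangent to the parabola with that focus and directrix, so a fold placing two points onto two lines simultaneously is a common tangent of two parabolas, and such tangents satisfy a cubic. Take the parabolas $y^2 = -4x$ (focus $(-1,0)$, directrix $x=1$) and $x^2 = -8y$ (focus $(0,-2)$, directrix $y=2$), and a crease $y = mx + c$:

```latex
\text{Tangency to } y^2 = -4x:\quad \tfrac{m}{4}y^2 + y - c = 0 \text{ has a double root}
  \iff 1 + mc = 0 \iff c = -\tfrac{1}{m}.
\text{Tangency to } x^2 = -8y:\quad x^2 + 8mx + 8c = 0 \text{ has a double root}
  \iff 64m^2 = 32c \iff c = 2m^2.
\text{Eliminating } c:\quad 2m^3 = -1, \qquad\text{hence}\qquad
  c = -\frac{1}{m} = 2^{1/3} = \sqrt[3]{2}.
```

The crease's $y$-intercept thus has length $\sqrt[3]{2}$, which straightedge and compass alone cannot produce.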
This volume is a collection of essays in honour of Professor Mohammad Ardeshir. It examines topics which, in one way or another, are connected to the various aspects of his multidisciplinary research interests. On this basis, the book is divided into three general categories. The first category includes papers on non-classical logics, including intuitionistic logic, constructive logic, basic logic, and substructural logic. The second category is made up of papers discussing issues in the contemporary philosophy of mathematics and logic. The third category contains papers on Avicenna's logic and philosophy. Mohammad Ardeshir is a full professor of mathematical logic at the Department of Mathematical Sciences, Sharif University of Technology, Tehran, Iran, where he has taught generations of students for around a quarter of a century. He is known above all for his prominent work in basic logic and constructive mathematics. His areas of interest are, however, much broader, and include topics in the intuitionistic philosophy of mathematics and the Arabic philosophy of logic and mathematics. In addition to numerous research articles in leading international journals, Ardeshir is the author of a highly praised Persian textbook on mathematical logic. Partly through his writings and translations, the school of mathematical intuitionism was introduced to the Iranian academic community.
This monograph is a defence of the Fregean take on logic. The author argues that Frege's projects, in logic and the philosophy of language, are essentially connected, and that the formalist shift produced by the work of Peano, Boole and Schröder, and continued by Hilbert and Tarski, is completely alien to Frege's approach in the Begriffsschrift. A central thesis of the book is that judgeable contents, i.e. propositions, are the primary bearers of logical properties, which makes logic embedded in our conceptual system. This approach allows coherent and correct definitions of logical constants, logical consequence, and truth, and connects their use to the practices of rational agents in science and everyday life.
This book treats bounded arithmetic and propositional proof complexity from the point of view of computational complexity. The first seven chapters include the necessary logical background for the material and are suitable for a graduate course. Associated with each of many complexity classes are both a two-sorted predicate calculus theory, with induction restricted to concepts in the class, and a propositional proof system. The complexity classes range from AC0 for the weakest theory up to the polynomial hierarchy. Each bounded theorem in a theory translates into a family of (quantified) propositional tautologies with polynomial size proofs in the corresponding proof system. The theory proves the soundness of the associated proof system. The result is a uniform treatment of many systems in the literature, including Buss's theories for the polynomial hierarchy and many disparate systems for complexity classes such as AC0, AC0(m), TC0, NC1, L, NL, NC, and P.
The Equation of Knowledge: From Bayes' Rule to a Unified Philosophy of Science introduces readers to the Bayesian approach to science, teasing out the link between probability and knowledge. The author strives to make this book accessible to a very broad audience: professionals, students, and academics, as well as the enthusiastic amateur scientist or mathematician. The book also shows how Bayesianism sheds new light on nearly all areas of knowledge, from philosophy to mathematics, science and engineering, but also law, politics and everyday decision-making. Bayesian thinking is an important topic for research, which has seen dramatic progress in recent years, and has a significant role to play in the understanding and development of AI and Machine Learning, among many other things. This book seeks to act as a tool for proselytising the benefits and limits of Bayesianism to a wider public.
Features:
- Presents the Bayesian approach as a unifying scientific method for a wide range of topics
- Suitable for a broad audience, including professionals, students, and academics
- Provides a more accessible, philosophical introduction to the subject than is offered elsewhere
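As a minimal taste of the book's subject (our own illustrative sketch, not an example from the text), Bayes' rule updates a prior belief in a hypothesis H given evidence E. The helper name and the numbers below are hypothetical:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior P(H | E) via Bayes' rule:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|not H) P(not H)]."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 1% prior, a test with 90% sensitivity
# and a 5% false-positive rate.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
```

Even with a fairly accurate test, the low prior keeps the posterior near 15%, the kind of counter-intuitive result Bayesian reasoning is meant to expose.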
This book introduces new models based on R-calculus and theories of belief revision for dealing with large and changing data. It extends R-calculus from first-order logic to propositional logic, description logics, modal logic and logic programming, and from minimal change semantics to subset minimal change, pseudo-subformula minimal change and deduction-based minimal change (the last two minimal changes are newly defined). It also proves soundness and completeness theorems with respect to these minimal changes in the respective logics. To make R-calculus computable, an approximate R-calculus is given which uses the finite injury priority method from recursion theory. Moreover, two applications of R-calculus are given, to default theory and to semantic inheritance networks. This book offers a rich blend of theory and practice. It is suitable for students, researchers and practitioners in the field of logic. It is also very useful for all those who are interested in data, digitization and the correctness and consistency of information, in modal logics, non-monotonic logics, decidable/undecidable logics, logic programming, description logics, default logics and semantic inheritance networks.
This book has a fundamental relationship to the International Seminar on Fuzzy Set Theory held each September in Linz, Austria. First, this volume is an extended account of the eleventh Seminar of 1989. Second, and more importantly, it is the culmination of the tradition of the preceding ten Seminars. The purpose of the Linz Seminar, since its inception, was and is to foster the development of the mathematical aspects of fuzzy sets. In the earlier years, this was accomplished by bringing together for a week small groups of mathematicians in various fields in an intimate, focused environment which promoted much informal, critical discussion in addition to formal presentations. Beginning with the tenth Seminar, the intimate setting was retained, but each Seminar narrowed in theme; and participation was broadened to include both younger scholars within, and established mathematicians outside, the mathematical mainstream of fuzzy set theory. Most of the material of this book was developed over the years in close association with the Seminar or influenced by what transpired at Linz. For much of the content, the Seminar played a crucial role, either in stimulating this material or in providing feedback and the necessary screening of ideas. Thus we may fairly say that the book, and the eleventh Seminar to which it is directly related, are in many respects a culmination of the previous Seminars.
An introductory textbook, Logic for Justice covers, in full detail, the language and semantics of both propositional logic and first-order logic. It motivates the study of those logical systems by drawing on social and political issues. Basically, Logic for Justice frames propositional logic and first-order logic as two theories of the distinction between good arguments and bad arguments. And the book explains why, for the purposes of social justice and political reform, we need theories of that distinction. In addition, Logic for Justice is extremely lucid, thorough, and clear. It explains, and motivates, many different features of the formalism of propositional logic and first-order logic, always connecting those features back to real-world issues.
Key Features:
- Connects the study of logic to real-world social and political issues, drawing in students who might not otherwise be attracted to the subject.
- Offers extremely clear and thorough presentations of technical material, allowing students to learn directly from the book without having to rely on instructor explanations.
- Carefully explains the value of arguing well throughout one's life, with several discussions about how to argue and how arguments, when done with care, can be helpful personally.
- Includes examples that appear throughout the entire book, allowing students to see how the ideas presented in the book build on each other.
- Provides a large and diverse set of problems for each chapter.
- Teaches logic by connecting formal languages to natural languages with which students are already familiar, making it much easier for students to learn how logic works.
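The distinction between good and bad arguments that propositional logic draws can be made concrete with the truth-table method: an argument is valid exactly when every assignment of truth values that makes all premises true also makes the conclusion true. A minimal sketch (our own illustration; the function name and encodings are hypothetical, with formulas represented as Python predicates):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Truth-table test of validity: no assignment may make every
    premise true while making the conclusion false."""
    for vals in product([False, True], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False
    return True

# Modus ponens: from P and P -> Q, infer Q  (a good argument)
mp = valid([lambda p, q: p, lambda p, q: (not p) or q],
           lambda p, q: q, n_vars=2)

# Affirming the consequent: from Q and P -> Q, infer P  (a bad argument)
ac = valid([lambda p, q: q, lambda p, q: (not p) or q],
           lambda p, q: p, n_vars=2)
```

Here `mp` comes out valid while `ac` does not: the assignment P false, Q true makes both premises of the second argument true and its conclusion false.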
This book explores the research of Professor Hilary Putnam, a Harvard professor as well as a leading philosopher, mathematician and computer scientist. It features the work of distinguished scholars in the field as well as a selection of young academics who have studied topics closely connected to Putnam's work. It includes 12 papers that analyze, develop, and constructively criticize this notable professor's research in mathematical logic, the philosophy of logic and the philosophy of mathematics. In addition, it features a short essay presenting reminiscences and anecdotes about Putnam from his friends and colleagues, and also includes an extensive bibliography of his work in mathematics and logic. The book offers readers a comprehensive review of outstanding contributions in logic and mathematics as well as an engaging dialogue between prominent scholars and researchers. It provides those interested in mathematical logic, the philosophy of logic, and the philosophy of mathematics unique insights into the work of Hilary Putnam.
Topos Theory is an important branch of mathematical logic of interest to theoretical computer scientists, logicians and philosophers who study the foundations of mathematics, and to those working in differential geometry and continuum physics. This compendium contains material that was previously available only in specialist journals. This is likely to become the standard reference work for all those interested in the subject.
Today the notion of the algorithm is familiar not only to mathematicians. It forms a conceptual base for information processing; the existence of a corresponding algorithm makes automatic information processing possible. The theory of algorithms (together with mathematical logic) forms the theoretical basis for modern computer science (see [Sem Us 86]; this article is called "Mathematical Logic in Computer Science and Computing Practice", and in its title mathematical logic is understood in a broad sense that includes the theory of algorithms). However, not everyone realizes that the word "algorithm" includes a transformed toponym, Khorezm. Algorithms were named after a great scientist of the medieval East, al-Khwarizmi (where al-Khwarizmi means "from Khorezm"). He lived between c. 783 and 850 A.D., and the year 1983 was chosen to celebrate his 1200th birthday. A short biography of al-Khwarizmi compiled in the tenth century starts as follows: "al-Khwarizmi. His name is Muhammad ibn Musa, he is from Khoresm" (cited according to [Bul Rozen Ah 83, p.8]).
This book is an exploration and defense of the coherence of classical theism's doctrine of divine aseity in the face of the challenge posed by Platonism with respect to abstract objects. A synoptic work in analytic philosophy of religion, the book engages discussions in philosophy of mathematics, philosophy of language, metaphysics, and metaontology. It addresses absolute creationism, non-Platonic realism, fictionalism, neutralism, and alternative logics and semantics, among other topics. The book offers a helpful taxonomy of the wide range of options available to the classical theist for dealing with the challenge of Platonism. It probes in detail the diverse views on the reality of abstract objects and their compatibility with classical theism. It contains a most thorough discussion, rooted in careful exegesis, of the biblical and patristic basis of the doctrine of divine aseity. Finally, it challenges the influential Quinean metaontological theses concerning the way in which we make ontological commitments.
This book provides a concise introduction to the mathematical foundations of time series analysis, with an emphasis on mathematical clarity. The text is reduced to the essential logical core, mostly using the symbolic language of mathematics, thus enabling readers to very quickly grasp the essential reasoning behind time series analysis. It appeals to anybody wanting to understand time series in a precise, mathematical manner. It is suitable for graduate courses in time series analysis but is equally useful as a reference work for students and researchers alike.
The purpose of this book is to present the classical analytic function theory of several variables as a standard subject in a course of mathematics, after the elementary materials (sets, general topology, algebra, one complex variable) have been learned. This includes the essential parts of Grauert-Remmert's two volumes, GL227(236) (Theory of Stein spaces) and GL265 (Coherent analytic sheaves), with a lowering of the level for novice graduate students (here, Grauert's direct image theorem is limited to the case of finite maps). The core of the theory is "Oka's Coherence", found and proved by Kiyoshi Oka. It is indispensable, not only in the study of complex analysis and complex geometry, but also in a large area of modern mathematics. In this book, just after an introductory chapter on holomorphic functions (Chap. 1), we prove Oka's First Coherence Theorem for holomorphic functions in Chap. 2. This gives the book a unique character compared with other books on this subject, in which the notion of coherence appears much later. The present book, consisting of nine chapters, gives complete treatments of the following items: Coherence of sheaves of holomorphic functions (Chap. 2); Oka-Cartan's Fundamental Theorem (Chap. 4); Coherence of ideal sheaves of complex analytic subsets (Chap. 6); Coherence of the normalization sheaves of complex spaces (Chap. 6); Grauert's Finiteness Theorem (Chaps. 7, 8); Oka's Theorem for Riemann domains (Chap. 8). The theories of sheaf cohomology and domains of holomorphy are also presented (Chaps. 3, 5). Chapter 6 deals with the theory of complex analytic subsets. Chapter 8 is devoted to the applications of formerly obtained results, proving Cartan-Serre's Theorem and Kodaira's Embedding Theorem. In Chap. 9, we discuss the historical development of "Coherence". It is difficult to find a book at this level that treats all of the above subjects in a completely self-contained manner. In the present volume, a number of classical proofs are improved and simplified, so that the contents are easily accessible to beginning graduate students.
This monograph introduces and explores the notions of a commutator equation and the equationally-defined commutator from the perspective of abstract algebraic logic. An account of the commutator operation associated with equational deductive systems is presented, with an emphasis placed on logical aspects of the commutator for equational systems determined by quasivarieties of algebras. The author discusses the general properties of the equationally-defined commutator, various centralization relations for relative congruences, the additivity and correspondence properties of the equationally-defined commutator and its behavior in finitely generated quasivarieties. Presenting new and original research not yet considered in the mathematical literature, The Equationally-Defined Commutator will be of interest to professional algebraists and logicians, as well as graduate students and other researchers interested in problems of modern algebraic logic.
This book explores the classical and beautiful character theory of finite groups. It does so using some rudiments of the language of categories. Originally emerging from two courses offered at Peking University (PKU), primarily for third-year students, it is now better suited for graduate courses, and provides broader coverage than books that focus almost exclusively on groups. The book presents the basic tools, notions and theorems of character theory (including a new treatment of the control of fusion and isometries), and introduces readers to the categorical language at several levels. It includes and proves the major results on characteristic zero representations without any assumptions about the base field. The book includes a dedicated chapter on graded representations and applications of polynomial invariants of finite groups, and its closing chapter addresses the more recent notion of the Drinfeld double of a finite group and the corresponding representation of GL_2(Z).
This book addresses mechanisms for reducing the model heterogeneity induced by the absence of explicit semantics expression in the formal techniques used to specify design models. More precisely, it highlights advances in handling both implicit and explicit semantics in formal system developments, and discusses different contributions expressing different views and perceptions of implicit and explicit semantics. The book is based on the discussions at the Shonan meeting on this topic held in 2016, and includes contributions from the participants summarising their perspectives on the problem and offering solutions. It is divided into five parts: domain modelling, knowledge-based modelling, proof-based modelling, assurance cases, and refinement-based modelling. It offers inspiration for researchers and practitioners in the fields of formal methods, system and software engineering, domain knowledge modelling, requirement analysis, and the explicit and implicit semantics of modelling languages.
This book provides a general survey of the main concepts, questions and results that have been developed in the recent interactions between quantum information, quantum computation and logic. Divided into 10 chapters, the book starts with an introduction to the main concepts of the quantum-theoretic formalism used in quantum information. It then gives a synthetic presentation of the main "mathematical characters" of the quantum computational game: qubits, quregisters, mixtures of quregisters, and quantum logical gates. Next, the book investigates the puzzling entanglement phenomena, logically analyses the Einstein-Podolsky-Rosen paradox, and introduces the reader to quantum computational logics and new forms of quantum logic. The middle chapters investigate the possibility of a quantum computational semantics for a language that can express sentences like "Alice knows that everybody knows that she is pretty", explore the mathematical concept of the quantum Turing machine, and illustrate some characteristic examples that arise in the framework of musical languages. The book concludes with an analysis of recent discussions, and contains a Mathematical Appendix surveying the definitions of all the main mathematical concepts used in the book.
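The "mathematical characters" mentioned above can be made concrete in a few lines: a qubit is a 2-component vector of complex amplitudes, a quantum logical gate is a unitary 2x2 matrix, and measurement probabilities follow the Born rule. A minimal sketch in plain Python (our own illustration, not the book's formalism):

```python
import math

def apply_gate(gate, state):
    """Apply a 2x2 gate matrix to a qubit's 2-vector of amplitudes."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude moduli."""
    return [abs(a) ** 2 for a in state]

# The Hadamard gate, a standard quantum logical gate.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

ket0 = [1 + 0j, 0 + 0j]              # the basis state |0>
superposition = apply_gate(H, ket0)  # equal superposition of |0> and |1>
```

Measuring the resulting state yields 0 or 1 with equal probability, and applying H a second time returns the qubit to |0>, illustrating why such gates have no classical logical counterpart.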
This adaptation of an earlier work by the authors is a graduate text and professional reference on the fundamentals of graph theory. It covers the theory of graphs, its applications to computer networks and the theory of graph algorithms. Also includes exercises and an updated bibliography.
Automata Theory and its Applications is a uniform treatment of the theory of finite state machines on finite and infinite strings and trees. Many books deal with automata on finite strings, but there are very few expositions that prove the fundamental results of automata on infinite strings and trees. These results have important applications to modeling parallel computation and concurrency, the specification and verification of sequential and concurrent programs, databases, operating systems, computational complexity, and decision methods in logic and algebra. Thus, this textbook fills an important gap in the literature by presenting the early fundamental results in automata theory and its applications. Beginning with coverage of all standard fundamental results regarding finite automata, the book deals in great detail with Büchi and Rabin automata and their applications to various logical theories such as S1S and S2S, and describes game-theoretic models of concurrent operating and communication systems. The book is self-contained with numerous examples, illustrations, and exercises, and is suitable for a two-semester undergraduate course for computer science or mathematics majors, or for a one-semester graduate course/seminar. Since no advanced mathematical background is required, the text is also useful for self-study by computer science professionals who wish to understand the foundations of modern formal approaches to software development, validation, and verification.
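The simplest objects the book starts from, deterministic finite automata on finite strings, fit in a few lines of code. A minimal sketch (our own illustration; the class and the example automaton are hypothetical): a DFA is a transition table plus a start state and a set of accepting states, and it accepts a word exactly when reading the word leaves it in an accepting state.

```python
class DFA:
    """A deterministic finite automaton over finite strings."""

    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta = delta          # transition table: (state, symbol) -> state
        self.start = start
        self.accepting = accepting

    def accepts(self, word):
        """Run the automaton on `word`; accept iff the final state is accepting."""
        state = self.start
        for sym in word:
            state = self.delta[(state, sym)]
        return state in self.accepting

# Example: a DFA over {0, 1} accepting exactly the strings
# containing an even number of 1s.
even_ones = DFA(
    states={"even", "odd"}, alphabet={"0", "1"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even", accepting={"even"})
```

The Büchi and Rabin automata treated later in the book generalise this picture to infinite strings and trees, where "final state" is replaced by conditions on the states visited infinitely often.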
This volume presents essays by pioneering thinkers including Tyler Burge, Gregory Chaitin, Daniel Dennett, Barry Mazur, Nicholas Humphrey, John Searle and Ian Stewart. Together they illuminate the Map/Territory Distinction that lies at the foundation of the scientific method, of thought, and of reality itself. It is imperative to distinguish the Map from the Territory when analyzing any subject, but we often mistake the map for the territory: meaning for reference, the computational tool for what it computes. Representations are so handy and tempting that we often end up committing the category error of conflating the representation with what is represented, so much so that the distinction between the two is lost. This error, which has its roots in pedagogy, often generates a plethora of paradoxes and confusions that hinder proper understanding of the subject. What are wave functions? Fields? Forces? Numbers? Sets? Classes? Operators? Functions? Alphabets and sentences? Are they part of our map (theory/representation)? Or do they actually belong to the territory (reality)? A researcher, like a cartographer, clothes (or creates?) reality by stitching together the multitudes of maps that simultaneously co-exist. A simple apple, for example, can be analyzed from several viewpoints, beginning with evolution and biology, all the way down to its microscopic quantum mechanical components. Is there a reality (or a real apple) out there apart from these maps? How do these various maps interact and intermingle with each other to produce the coherent reality that we interact with? Or do they not? Does our brain use its own internal maps to enable the "physicist/mathematician" in us to construct maps of external territories in turn? If so, what is the nature of these internal maps? Are there meta-maps? Evolution definitely fences our perception, and thereby our ability to construct maps, revealing to us only those aspects beneficial for our survival.
But the question is: to what extent? Is there a way out of the metaphorical Platonic cave erected around us by nature? While "the map is not the territory", as Alfred Korzybski remarked, join us on this journey to know more, as we inquire into the nature and reality of the maps which try to map the reality out there. The book also includes a foreword by Sir Roger Penrose and an afterword by Dagfinn Follesdal.
This monograph offers a critical introduction to current theories of how scientific models represent their target systems. Representation is important because it allows scientists to study a model to discover features of reality. The authors provide a map of the conceptual landscape surrounding the issue of scientific representation, arguing that it consists of multiple intertwined problems. They provide an encyclopaedic overview of existing attempts to answer these questions, and they assess their strengths and weaknesses. The book also presents a comprehensive statement of their alternative proposal, the DEKI account of representation, which they have developed over the last few years. They show how the account works in the case of material as well as non-material models; how it accommodates the use of mathematics in scientific modelling; and how it sheds light on the relation between representation in science and in art. The issue of representation has generated a sizeable literature, which has grown particularly fast over the last decade. This makes it hard for novices to get a handle on the topic, because so far there has been no book-length introduction to guide them through the discussion. Likewise, researchers may require a comprehensive review that they can refer to for critical evaluations. This book meets the needs of both groups.