Books > Science & Mathematics > Mathematics > Mathematical foundations
The two volumes in this advanced textbook present results, proof methods, and translations of motivational and philosophical considerations into formal constructions. In this Vol. I the author explains preferential structures and abstract size. In the associated Vol. II he presents chapters on theory revision and sums, defeasible inheritance theory, interpolation, neighbourhood semantics and deontic logic, abstract independence, and various aspects of nonmonotonic and other logics. In both volumes the text contains many exercises and some solutions, and the author limits the discussion of motivation and general context throughout, offering this only when it aids understanding of the formal material, in particular to illustrate the path from intuition to formalisation. Together these books are a suitable compendium for graduate students and researchers in the area of computer science and mathematical logic.
Very Short Introductions: Brilliant, Sharp, Inspiring. Kurt Goedel first published his celebrated theorem, showing that no axiomatization can determine the whole truth and nothing but the truth concerning arithmetic, nearly a century ago. The theorem challenged prevalent presuppositions about the nature of mathematics and was consequently of considerable mathematical interest, while also raising various deep philosophical questions. Goedel's Theorem has since established itself as a landmark intellectual achievement, having a profound impact on today's mathematical ideas. Goedel and his theorem have attracted something of a cult following, though his theorem is often misunderstood. This Very Short Introduction places the theorem in its intellectual and historical context, and explains the key concepts as well as common misunderstandings of what it actually states. A. W. Moore provides a clear statement of the theorem, presenting two proofs, each of which has something distinctive to teach about its content. Moore also discusses the most important philosophical implications of the theorem. In particular, Moore addresses the famous question of whether the theorem shows the human mind to have mathematical powers beyond those of any possible computer. ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly. Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.
The aim of this book is to explain the importance of Markov state models to molecular simulation, how they work, and how they can be applied to a range of problems. The Markov state model (MSM) approach aims to address two key challenges of molecular simulation: 1) How to reach long timescales using short simulations of detailed molecular models. 2) How to systematically gain insight from the resulting sea of data. MSMs do this by providing a compact representation of the vast conformational space available to biomolecules by decomposing it into states (sets of rapidly interconverting conformations) and the rates of transitioning between states. This kinetic definition allows one to easily vary the temporal and spatial resolution of an MSM, from high-resolution models capable of quantitative agreement with (or prediction of) experiment to low-resolution models that facilitate understanding. Additionally, MSMs facilitate the calculation of quantities that are difficult to obtain from more direct MD analyses, such as the ensemble of transition pathways. This book introduces the mathematical foundations of Markov models, how they can be used to analyze simulations and drive efficient simulations, and some of the insights these models have yielded in a variety of applications of molecular simulation.
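The MSM construction described above — discrete states plus transition rates — can be sketched in a few lines of Python. This is a toy illustration with a made-up state trajectory and lag time, not material from the book: count transitions between states, row-normalize to get a transition matrix, and read off the stationary (equilibrium) populations.

```python
import numpy as np

# Hypothetical discretized trajectory: each entry is the state label
# (cluster of rapidly interconverting conformations) visited at that frame.
traj = [0, 0, 1, 1, 1, 0, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1]
n_states = 3
lag = 1  # lag time in frames, an illustrative choice

# Count observed transitions at the chosen lag time.
counts = np.zeros((n_states, n_states))
for i in range(len(traj) - lag):
    counts[traj[i], traj[i + lag]] += 1

# Row-normalize counts to obtain transition probabilities.
T = counts / counts.sum(axis=1, keepdims=True)

# The stationary distribution is the left eigenvector of T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

print(T)   # estimated transition matrix
print(pi)  # long-run population of each state
```

Varying the lag time and the number of states is exactly how one trades off between the high-resolution and low-resolution models the blurb mentions.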
The two volumes in this advanced textbook present results, proof methods, and translations of motivational and philosophical considerations into formal constructions. In the associated Vol. I the author explains preferential structures and abstract size. In this Vol. II he presents chapters on theory revision and sums, defeasible inheritance theory, interpolation, neighbourhood semantics and deontic logic, abstract independence, and various aspects of nonmonotonic and other logics. In both volumes the text contains many exercises and some solutions, and the author limits the discussion of motivation and general context throughout, offering this only when it aids understanding of the formal material, in particular to illustrate the path from intuition to formalisation. Together these books are a suitable compendium for graduate students and researchers in the area of computer science and mathematical logic.
Fourier analysis aims to decompose functions into a superposition of simple trigonometric functions, whose special features can be exploited to isolate specific components into manageable clusters before reassembling the pieces. This two-volume text presents a largely self-contained treatment, comprising not just the major theoretical aspects (Part I) but also links to other areas of mathematics and applications to science and technology (Part II). Following the historical and conceptual genesis, this book (Part I) provides overviews of basic measure theory and functional analysis, with added insight into complex analysis and the theory of distributions. The material is intended for both beginning and advanced graduate students with a thorough knowledge of advanced calculus and linear algebra. Historical notes are provided and topics are illustrated at every stage by examples and exercises, with separate hints and solutions, thus making the exposition useful both as a course textbook and for individual study.
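The decompose-then-reassemble idea can be made concrete with the discrete Fourier transform. This is a minimal sketch with an artificial signal of my own choosing, not an example from the book: a signal built from two sinusoids is split into trigonometric components, the components present are identified, and the signal is reconstructed exactly from them.

```python
import numpy as np

n = 64
t = np.arange(n) / n
# Signal assembled from a constant offset plus two known frequencies.
signal = 1.0 + 2.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

# Decompose: one complex amplitude per trigonometric component.
coeffs = np.fft.fft(signal)

# Only the frequencies actually present carry significant energy.
dominant = {k for k in range(n // 2 + 1) if abs(coeffs[k]) > 1e-8}
print(dominant)  # {0, 3, 7}

# Reassemble the pieces: the inverse transform recovers the signal.
reconstructed = np.fft.ifft(coeffs).real
print(np.allclose(reconstructed, signal))  # True
```

Isolating "manageable clusters" of components — here, the sets of dominant frequencies — before inverting is the computational counterpart of the strategy the blurb describes.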
Handbook of Sinc Numerical Methods presents an ideal road map for handling general numeric problems. Reflecting the author's advances with Sinc since 1995, the text most notably provides a detailed exposition of the Sinc separation of variables method for numerically solving the full range of partial differential equations (PDEs) of interest to scientists and engineers. This new theory, which combines Sinc convolution with the boundary integral equation (IE) approach, makes for exponentially faster convergence to solutions of differential equations. The basis for the approach is the Sinc method of approximating almost every type of operation stemming from calculus via easily computed matrices of very low dimension. The downloadable resources of this handbook contain roughly 450 MATLAB (R) programs corresponding to exponentially convergent numerical algorithms for solving nearly every computational problem of science and engineering. While the book makes Sinc methods accessible to users wanting to bypass the complete theory, it also offers sufficient theoretical details for readers who do want a full working understanding of this exciting area of numerical analysis.
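The core approximation underlying Sinc methods — representing a function by a sinc expansion over equally spaced samples — can be illustrated briefly. The step size, truncation, and test function below are simplified illustrative choices, not the handbook's notation or its MATLAB programs:

```python
import numpy as np

def sinc_approx(f, x, h=0.25, N=40):
    """Approximate f(x) by the truncated sinc expansion
    sum_k f(k*h) * sinc((x - k*h) / h) over k = -N..N."""
    k = np.arange(-N, N + 1)
    nodes = k * h
    # np.sinc is the normalized sinc: sin(pi*u) / (pi*u).
    return np.sum(f(nodes) * np.sinc((x - nodes) / h))

f = lambda x: np.exp(-x ** 2)  # a smooth, rapidly decaying test function
x0 = 0.3
approx = sinc_approx(f, x0)
print(abs(approx - f(x0)))  # error shrinks exponentially as h decreases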
Algorithms and Theory of Computation Handbook, Second Edition: General Concepts and Techniques provides an up-to-date compendium of fundamental computer science topics and techniques. It also illustrates how the topics and techniques come together to deliver efficient solutions to important practical problems. Along with updating and revising many of the existing chapters, this second edition contains four new chapters that cover external memory and parameterized algorithms as well as computational number theory and algorithmic coding theory. This best-selling handbook continues to help computer professionals and engineers find significant information on various algorithmic topics. The expert contributors clearly define the terminology, present basic results and techniques, and offer a number of current references to the in-depth literature. They also provide a glimpse of the major research issues concerning the relevant topics.
Transform theory and methods are useful to many professionals from various mathematical backgrounds. This introduction to the theory and practice of continuous and discrete transforms integrates knowledge from many branches of mathematics. It combines heuristic argument and discussion with careful, defensible mathematical statements, frequently in the form of theorems without proof.
Proofs play a central role in advanced mathematics and theoretical computer science, yet many students struggle the first time they take a course in which proofs play a significant role. This bestselling text's third edition helps students transition from solving problems to proving theorems by teaching them the techniques needed to read and write proofs. Featuring over 150 new exercises and a new chapter on number theory, this new edition introduces students to the world of advanced mathematics through the mastery of proofs. The book begins with the basic concepts of logic and set theory to familiarize students with the language of mathematics and how it is interpreted. These concepts are used as the basis for an analysis of techniques that can be used to build up complex proofs step by step, using detailed 'scratch work' sections to expose the machinery of proofs about numbers, sets, relations, and functions. Assuming no background beyond standard high school mathematics, this book will be useful to anyone interested in logic and proofs: computer scientists, philosophers, linguists, and, of course, mathematicians.
This book offers a historical explanation of important philosophical problems in logic and mathematics, which have been neglected by the official history of modern logic. It offers extensive information on Gottlob Frege's logic, discussing which aspects of his logic can be considered truly innovative in its revolution against Aristotelian logic. It presents the work of Hilbert and his associates and followers with the aim of understanding the revolutionary change in the axiomatic method. Moreover, it offers useful tools to understand Tarski's and Goedel's work, explaining why the problems they discussed are still unsolved. Finally, the book reports on some of the most influential positions in contemporary philosophy of mathematics, i.e., Maddy's mathematical naturalism and Shapiro's mathematical structuralism. Last but not least, the book introduces Biancani's Aristotelian philosophy of mathematics, as this is considered important to understand current philosophical issues in the applications of mathematics. One of the main purposes of the book is to stimulate readers to reconsider the Aristotelian position, which disappeared almost completely from the scene in logic and mathematics in the early twentieth century.
Presents Results from a Very Active Area of Research
Exploring an active area of mathematics that studies the complexity of equivalence relations and classification problems, Invariant Descriptive Set Theory presents an introduction to the basic concepts, methods, and results of this theory. It brings together techniques from various areas of mathematics, such as algebra, topology, and logic, which have diverse applications to other fields. After reviewing classical and effective descriptive set theory, the text studies Polish groups and their actions. It then covers Borel reducibility results on Borel, orbit, and general definable equivalence relations. The author also provides proofs for numerous fundamental results, such as the Glimm-Effros dichotomy, the Burgess trichotomy theorem, and the Hjorth turbulence theorem. The next part describes connections with the countable model theory of infinitary logic, along with Scott analysis and the isomorphism relation on natural classes of countable models, such as graphs, trees, and groups. The book concludes with applications to classification problems and many benchmark equivalence relations. By illustrating the relevance of invariant descriptive set theory to other fields of mathematics, this self-contained book encourages readers to further explore this very active area of research.
Praise for the First Edition "...complete, up-to-date coverage of computational complexity theory...the book promises to become the standard reference on computational complexity." - Zentralblatt MATH
A thorough revision based on advances in the field of computational complexity and readers' feedback, the Second Edition of Theory of Computational Complexity presents updates to the principles and applications essential to understanding modern computational complexity theory. The new edition continues to serve as a comprehensive resource on the use of software and computational approaches for solving algorithmic problems and the related difficulties that can be encountered. Maintaining extensive and detailed coverage, Theory of Computational Complexity, Second Edition, examines the theory and methods behind complexity theory, such as computational models, decision tree complexity, circuit complexity, and probabilistic complexity. The Second Edition also features recent developments on areas such as NP-completeness theory, as well as:
* A new combinatorial proof of the PCP theorem based on the notion of expander graphs, a research area in the field of computer science
* Additional exercises at varying levels of difficulty to further test comprehension of the presented material
* End-of-chapter literature reviews that summarize each topic and offer additional sources for further study
Theory of Computational Complexity, Second Edition, is an excellent textbook for courses on computational theory and complexity at the graduate level. The book is also a useful reference for practitioners in the fields of computer science, engineering, and mathematics who utilize state-of-the-art software and computational methods to conduct research.
This book delves into finite mathematics and its application in physics, particularly quantum theory. It is shown that quantum theory based on finite mathematics is more general than standard quantum theory, whilst finite mathematics is itself more general than standard mathematics. As a consequence, the mathematics describing nature at the most fundamental level involves only a finite number of numbers, while the notions of limit, infinite/infinitesimal and continuity are needed only in calculations that describe nature approximately. It is also shown that the concepts of particle and antiparticle are likewise approximate notions, valid only in special situations, and that the electric charge and the baryon and lepton quantum numbers can be only approximately conserved.
Continuing in the bestselling, informative tradition of the first edition, the Handbook of Combinatorial Designs, Second Edition remains the only resource to contain all of the most important results and tables in the field of combinatorial design. This handbook covers the constructions, properties, and applications of designs as well as existence results. Over 30% longer than the first edition, the book builds upon the groundwork of its predecessor while retaining the original contributors' expertise. The first part contains a brief introduction and history of the subject. The following parts focus on four main classes of combinatorial designs: balanced incomplete block designs, orthogonal arrays and Latin squares, pairwise balanced designs, and Hadamard and orthogonal designs. Closely connected to the preceding sections, the next part surveys 65 additional classes of designs, such as balanced ternary, factorial, graphical, Howell, quasi-symmetric, and spherical. The final part presents mathematical and computational background related to design theory. New to the Second Edition:
* An introductory part that provides a general overview and a historical perspective of the area
* New chapters on the history of design theory, various codes, bent functions, and numerous types of designs
* Fully updated tables, including BIBDs, MOLS, PBDs, and Hadamard matrices
* Nearly 2,200 references in a single bibliographic section
Meeting the need for up-to-date and accessible tabular and reference information, this handbook provides the tools to understand combinatorial design theory and applications that span the entire discipline. The author maintains a website with more information.
This textbook offers an introduction to the philosophy of science. It helps undergraduate students from the natural, human and social sciences to gain an understanding of what science is, how it has developed, what its core traits are, how to distinguish between science and pseudo-science, and to discover what a scientific attitude is. It argues against the common assumption that there is a fundamental difference between natural and human science, with natural science being concerned with testing hypotheses and discovering natural laws, and the aim of human and some social sciences being to understand the meanings of individual and social group actions. Instead, it examines the similarities between the sciences and shows how the testing of hypotheses and doing interpretation/hermeneutics are similar activities. The book makes clear that lessons from natural scientists are relevant to students and scholars within the social and human sciences, and vice versa. It teaches its readers how to effectively demarcate between science and pseudo-science and sets criteria for true scientific thinking. Divided into three parts, the book first examines the question 'What is Science?' It describes the evolution of science, defines knowledge, and explains the use of and need for hypotheses and hypothesis testing. The second half of Part I deals with scientific data and observation, qualitative data and methods, and ends with a discussion of theories on the development of science. Part II offers philosophical reflections on four of the most important concepts in science: causes, explanations, laws and models. Part III presents discussions on philosophy of mind, the relation between mind and body, value-free and value-related science, and reflections on actual trends in science.
'Points, questions, stories, and occasional rants introduce the 24 chapters of this engaging volume. With a focus on mathematics and peppered with a scattering of computer science settings, the entries range from lightly humorous to curiously thought-provoking. Each chapter includes sections and sub-sections that illustrate and supplement the point at hand. Most topics are self-contained within each chapter, and a solid high school mathematics background is all that is needed to enjoy the discussions. There certainly is much to enjoy here.' - CHOICE
Ever notice how people sometimes use math words inaccurately? Or how sometimes you instinctively know a math statement is false (or not known)? Each chapter of this book makes a point like those above and then illustrates the point by doing some real mathematics through step-by-step mathematical techniques. This book gives readers valuable information about how mathematics and theoretical computer science work, while teaching them some actual mathematics and computer science through examples and exercises. Much of the mathematics could be understood by a bright high school student. The points made can be understood by anyone with an interest in math, from the bright high school student to a Fields Medal winner.
This book lays out the theory of Mordell-Weil lattices, a very powerful and influential tool at the crossroads of algebraic geometry and number theory, which offers many fruitful connections to other areas of mathematics. The book presents all the ingredients entering into the theory of Mordell-Weil lattices in detail, notably, relevant portions of lattice theory, elliptic curves, and algebraic surfaces. After defining Mordell-Weil lattices, the authors provide several applications in depth. They start with the classification of rational elliptic surfaces. Then a useful connection with Galois representations is discussed. By developing the notion of excellent families, the authors are able to design many Galois representations with given Galois groups such as the Weyl groups of E6, E7 and E8. They also explain a connection to the classical topic of the 27 lines on a cubic surface. Two chapters deal with elliptic K3 surfaces, a vibrant area of recent research activity which highlights many central properties of Mordell-Weil lattices. Finally, the book turns to the rank problem, one of the key motivations for the introduction of Mordell-Weil lattices. The authors present the state of the art of the rank problem for elliptic curves both over Q and over C(t) and work out applications to the sphere packing problem. Throughout, the book includes many instructive examples illustrating the theory.
In fall 2000, the Notre Dame logic community hosted Greg Hjorth, Rodney G. Downey, Zoé Chatzidakis, and Paola D'Aquino as visiting lecturers. Each of them presented a month-long series of expository lectures at the graduate level. The articles in this volume are refinements of these excellent lectures.
Goedel's Incompleteness Theorems are among the most significant results in the foundations of mathematics. These results have a positive consequence: any system of axioms for mathematics that we recognize as correct can be properly extended by adding as a new axiom a formal statement expressing that the original system is consistent. This suggests that our mathematical knowledge is inexhaustible, an essentially philosophical topic to which this book is devoted. Basic material in predicate logic, set theory and recursion theory is presented, leading to a proof of the incompleteness theorems. The inexhaustibility of mathematical knowledge is treated based on the concept of transfinite progressions of theories as conceived by Turing and Feferman. All concepts and results necessary to understand the arguments are introduced as needed, making the presentation self-contained and thorough.
Boolean valued analysis is a technique for studying properties of an arbitrary mathematical object by comparing its representations in two different set-theoretic models whose construction utilises principally distinct Boolean algebras. The use of two models for studying a single object is a characteristic of the so-called non-standard methods of analysis. Application of Boolean valued models to problems of analysis rests ultimately on the procedures of ascending and descending, the two natural functors acting between a new Boolean valued universe and the von Neumann universe. This book demonstrates the main advantages of Boolean valued analysis which provides the tools for transforming, for example, function spaces to subsets of the reals, operators to functionals, and vector-functions to numerical mappings. Boolean valued representations of algebraic systems, Banach spaces, and involutive algebras are examined thoroughly. Audience: This volume is intended for classical analysts seeking new tools, and for model theorists in search of challenging applications of nonstandard models.
Quantitative thinking is our inclination to view natural and everyday phenomena through a lens of measurable events, with forecasts, odds, predictions, and likelihood playing a dominant part. The Error of Truth recounts the astonishing and unexpected tale of how quantitative thinking came to be, and its rise to primacy in the nineteenth and early twentieth centuries. Additionally, it considers how seeing the world through a quantitative lens has shaped our perception of the world we live in, and explores the lives of the individuals behind its early establishment. This worldview was unlike anything humankind had before, and it came about because of a momentous human achievement: we had learned how to measure uncertainty. Probability as a science was conceptualised. As a result of probability theory, we now had correlations, reliable predictions, regressions, the bell-shaped curve for studying social phenomena, and the psychometrics of educational testing. Significantly, these developments happened during a relatively short period in world history: roughly, the 130-year period from 1790 to 1920, from about the close of the Napoleonic era, through the Enlightenment and the Industrial Revolutions, to the end of World War I, a time when transportation had advanced rapidly due to the invention of the steam engine and literacy rates had increased dramatically. This brief period was ripe for fresh intellectual activity, and it gave a kind of impetus for the probability inventions. Quantification is now everywhere in our daily lives, such as in the ubiquitous microchip in smartphones, cars, and appliances; in the Bayesian logic of artificial intelligence; and in applications in business, engineering, medicine, economics, and elsewhere. Probability is the foundation of quantitative thinking. The Error of Truth tells its story: when, why, and how it happened.
This book proves some important new theorems in the theory of canonical inner models for large cardinal hypotheses, a topic of central importance in modern set theory. In particular, the author 'completes' the theory of Fine Structure and Iteration Trees (FSIT) by proving a comparison theorem for mouse pairs parallel to the FSIT comparison theorem for pure extender mice, and then using the underlying comparison process to develop a fine structure theory for strategy mice. Great effort has been taken to make the book accessible to non-experts so that it may also serve as an introduction to the higher reaches of inner model theory. It contains a good deal of background material, some of it unpublished folklore, and includes many references to the literature to guide further reading. An introductory essay serves to place the new results in their broader context. This is a landmark work in inner model theory that should be in every set theorist's library.
The proceedings of the Los Angeles Caltech-UCLA 'Cabal Seminar' were originally published in the 1970s and 1980s. Wadge Degrees and Projective Ordinals is the second of a series of four books collecting the seminal papers from the original volumes together with extensive unpublished material, new papers on related topics and discussion of research developments since the publication of the original volumes. Focusing on the subjects of 'Wadge Degrees and Pointclasses' (Part III) and 'Projective Ordinals' (Part IV), each of the two sections is preceded by an introductory survey putting the papers into present context. These four volumes will be a necessary part of the book collection of every set theorist.
This textbook reviews the foundational topics that are typically covered in an introduction to proof course and studies the language of sentential logic, as well as investigating the more powerful language of first-order logic and the notion of a formal deduction in first-order logic. In addition, it proves Goedel's Completeness Theorem and discusses incompleteness and the concept of computability.