In decision theory there are basically two approaches to the modeling of individual choice: one is based on an absolute representation of preferences leading to a numerical expression of preference intensity. This is utility theory. The other approach is based on binary relations that encode pairwise preference. While the former has mainly blossomed in the Anglo-Saxon academic world, the latter is mostly advocated in continental Europe, including Russia. The advantage of the utility theory approach is that it integrates uncertainty about the state of nature, which may affect the consequences of decisions. The problems of choice and ranking from the knowledge of preferences then become trivial once the utility function is known. In the case of the relational approach, the model does not explicitly account for uncertainty, hence it looks less sophisticated. On the other hand, it is at the outset more descriptive than normative, because it takes the pairwise preference pattern expressed by the decision-maker as it is and tries to make the best of it. In particular, the preference relation is not assumed to have any special property. The main problem with the utility theory approach is the gap between what decision-makers are and can express, and what the theory would like them to be and to be capable of expressing. With the relational approach this gap does not exist, but the main difficulty is then to build up convincing choice rules and ranking rules that may help the decision process.
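The contrast between the two approaches can be sketched in a few lines of code. With a utility function, choice and ranking reduce to taking maxima; with a bare pairwise preference relation, which may even be cyclic, a choice rule such as Copeland's (score each item by how many others it beats) is needed. The items and preferences below are invented for illustration:

```python
# Utility approach: a numerical utility makes choice and ranking trivial.
utility = {"a": 3.0, "b": 2.0, "c": 1.0}
best = max(utility, key=utility.get)                    # choice = argmax
ranking = sorted(utility, key=utility.get, reverse=True)

# Relational approach: raw pairwise preferences, with no assumed properties.
# This relation is cyclic (a > b, b > c, c > a), so no item beats all others;
# Copeland's rule scores each item by the number of items it beats.
prefers = {("a", "b"), ("b", "c"), ("c", "a")}          # (x, y): x preferred to y
items = {"a", "b", "c"}
copeland = {x: sum((x, y) in prefers for y in items if y != x) for x in items}
# In a full cycle each item beats exactly one other, so Copeland ties them all.
```

The tie produced by the cycle is exactly the kind of situation where the relational approach must supply a convincing choice rule rather than a simple argmax.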
This book is concerned with cardinal number valued functions defined for any Boolean algebra. Examples of such functions are independence, which assigns to each Boolean algebra the supremum of the cardinalities of its free subalgebras, and cellularity, which gives the supremum of cardinalities of sets of pairwise disjoint elements. Twenty-one such functions are studied in detail, and many more in passing. The questions considered are the behaviour of these functions under algebraic operations such as products, free products, ultraproducts, and their relationships to one another. Assuming familiarity with only the basics of Boolean algebras and set theory, through simple infinite combinatorics and forcing, the book reviews current knowledge about these functions, giving complete proofs for most facts. A special feature of the book is the attention given to open problems, of which 185 are formulated. Based on Cardinal Functions on Boolean Algebras (1990) and Cardinal Invariants on Boolean Algebras (1996) by the same author, the present work is much larger than either of these. It contains solutions to many of the open problems of the earlier volumes. Among the new topics are continuum cardinals on Boolean algebras, with a lengthy treatment of the reaping number. Diagrams at the end of the book summarize the relationships between the functions for many important classes of Boolean algebras, including interval algebras, tree algebras and superatomic algebras.
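Cellularity as defined above has a direct finite analogue that can be computed by brute force: in the powerset algebra on an n-element set, the largest family of pairwise disjoint nonzero elements is the set of atoms, so the cellularity is n. A minimal sketch (exhaustive search, feasible only for tiny algebras; not code from the book):

```python
from itertools import combinations

def cellularity(n):
    """Largest size of a family of pairwise disjoint nonzero elements of
    the powerset algebra on an n-element set (exhaustive search)."""
    # All nonzero elements of the algebra: the nonempty subsets of {0,...,n-1}.
    nonzero = [frozenset(c)
               for r in range(1, n + 1)
               for c in combinations(range(n), r)]
    best = 0
    for size in range(1, len(nonzero) + 1):
        # Is there a size-element family whose members are pairwise disjoint?
        if any(all(a.isdisjoint(b) for a, b in combinations(fam, 2))
               for fam in combinations(nonzero, size)):
            best = size
    return best
```

For infinite algebras the function is of course a supremum of cardinals, not a search; the finite case merely illustrates what is being measured.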
This book describes the latest Russian research covering the structure and algorithmic properties of Boolean algebras from the algebraic and model-theoretic points of view. A significantly revised version of the author's Countable Boolean Algebras (Nauka, Novosibirsk, 1989), the text presents new results as well as a selection of open questions on Boolean algebras. Other current features include discussions of the Ketonen algebras in enrichments by ideals and automorphisms, and the properties of the automorphism groups.
Graph Theory, Combinatorics and Algorithms: Interdisciplinary Applications focuses on discrete mathematics and combinatorial algorithms interacting with real world problems in computer science, operations research, applied mathematics and engineering. The book contains eleven chapters written by experts in their respective fields, and covers a wide spectrum of high-interest problems across these discipline domains. Among the contributing authors are Richard Karp of UC Berkeley and Robert Tarjan of Princeton; both are at the pinnacle of research scholarship in Graph Theory and Combinatorics. The chapters from the contributing authors focus on "real world" applications, all of which will be of considerable interest across the areas of Operations Research, Computer Science, Applied Mathematics, and Engineering. These problems include Internet congestion control, high-speed communication networks, multi-object auctions, resource allocation, software testing, data structures, etc. In sum, this is a book focused on major, contemporary problems, written by the top research scholars in the field, using cutting-edge mathematical and computational techniques.
The book is an authoritative and up-to-date introduction to the field of analysis and potential theory dealing with the distribution of zeros of classical systems of polynomials such as orthogonal polynomials, Chebyshev, Fekete and Bieberbach polynomials, and best or near-best approximating polynomials on compact sets and on the real line. The main feature of the book is the combination of potential theory with conformal invariants, such as the module of a family of curves and harmonic measure, to derive discrepancy estimates for signed measures if bounds for their logarithmic potentials or energy integrals are known a priori.
Mathematical Problems from Applied Logic II presents chapters from selected, world-renowned logicians. Important topics of logic are discussed from the point of view of their further development in light of requirements arising from their successful application in areas such as Computer Science and AI. Fields covered include: logic of provability; applications of computability theory to biology, psychology, physics, chemistry, economics, and other basic sciences; computability theory and computable models; logic and space-time geometry; hybrid systems; logic and region-based theory of space.
In this volume specialists in mathematics, physics, and linguistics present the first comprehensive analysis of the ideas and influence of Hermann G. Grassmann (1809-1877), the remarkable universalist whose work recast the foundations of these disciplines and shaped the course of their modern development.
This book, translated from the French, is an introduction to first-order model theory. The first six chapters are very basic: starting from scratch, they quickly reach the essential, namely, the back-and-forth method and compactness, which are illustrated with examples taken from algebra. The next chapter introduces logic via the study of the models of arithmetic, and the following is a combinatorial tool-box preparing for the chapters on saturated and prime models. The last ten chapters form a rather complete but nevertheless accessible exposition of stability theory, which is the core of the subject.
At first glance, Robinson's original form of nonstandard analysis appears nonconstructive in essence, because it makes a rather unrestricted use of classical logic and set theory and, in particular, of the axiom of choice. Recent developments, however, have given rise to the hope that the distance between constructive and nonstandard mathematics is actually much smaller than it appears. So the time was ripe for the first meeting dedicated simultaneously to both ways of doing mathematics and to the current and future reunion of these seeming opposites. Consisting of peer-reviewed research and survey articles written on the occasion of such an event, this volume offers views of the continuum from various standpoints. Including historical and philosophical issues, the topics of the contributions range from the foundations, the practice, and the applications of constructive and nonstandard mathematics, to the interplay of these areas and the development of a unified theory.
Starting with a simple formulation accessible to all mathematicians, this second edition is designed to provide a thorough introduction to nonstandard analysis. Nonstandard analysis is now a well-developed, powerful instrument for solving open problems in almost all disciplines of mathematics; it is often used as a 'secret weapon' by those who know the technique. This book illuminates the subject with some of the most striking applications in analysis, topology, functional analysis, probability and stochastic analysis, as well as applications in economics and combinatorial number theory. The first chapter is designed to facilitate the beginner in learning this technique by starting with calculus and basic real analysis. The second chapter provides the reader with the most important tools of nonstandard analysis: the transfer principle, Keisler's internal definition principle, the spill-over principle, and saturation. The remaining chapters of the book study different fields for applications; each begins with a gentle introduction before then exploring solutions to open problems. All chapters within this second edition have been reworked and updated, with several completely new chapters on compactifications and number theory. Nonstandard Analysis for the Working Mathematician will be accessible to both experts and non-experts, and will ultimately provide many new and helpful insights into the enterprise of mathematics.
This book presents eleven peer-reviewed papers from the 3rd International Conference on Applications of Mathematics and Informatics in Natural Sciences and Engineering (AMINSE2017) held in Tbilisi, Georgia in December 2017. Written by researchers from the region (Georgia, Russia, Turkey) and from Western countries (France, Germany, Italy, Luxembourg, Spain, USA), it discusses key aspects of mathematics and informatics, and their applications in natural sciences and engineering. Featuring theoretical, practical and numerical contributions, the book appeals to scientists from various disciplines interested in applications of mathematics and informatics in natural sciences and engineering.
Intuitionistic type theory can be described, somewhat boldly, as a partial fulfillment of the dream of a universal language for science. This book expounds several aspects of intuitionistic type theory, such as the notion of set, reference vs. computation, assumption, and substitution. Moreover, the book includes philosophically relevant sections on the principle of compositionality, lingua characteristica, epistemology, propositional logic, intuitionism, and the law of excluded middle. Ample historical references are given throughout the book.
The fundamental theorem of algebra states that any complex polynomial must have a complex root. This book examines three pairs of proofs of the theorem from three different areas of mathematics: abstract algebra, complex analysis and topology. The first proof in each pair is fairly straightforward and depends only on what could be considered elementary mathematics. However, each of these first proofs leads to more general results from which the fundamental theorem can be deduced as a direct consequence. These general results constitute the second proof in each pair. To arrive at each of the proofs, enough of the general theory of each relevant area is developed to understand the proof. In addition to the proofs and techniques themselves, many applications such as the insolvability of the quintic and the transcendence of e and pi are presented. Finally, a series of appendices give six additional proofs including a version of Gauss' original first proof. The book is intended for junior/senior level undergraduate mathematics students or first year graduate students, and would make an ideal "capstone" course in mathematics.
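The theorem can also be illustrated numerically. The sketch below, which is not one of the proofs treated in the book, finds all complex roots of z^3 - 1 with the classical Durand-Kerner iteration, working entirely in complex arithmetic:

```python
def durand_kerner(coeffs, iterations=100):
    """All complex roots of the monic polynomial with coefficient list
    coeffs = [1, c_{n-1}, ..., c_0], by the Durand-Kerner iteration."""
    n = len(coeffs) - 1

    def p(z):
        acc = 0
        for c in coeffs:                  # Horner evaluation of p at z
            acc = acc * z + c
        return acc

    # Standard starting guesses: powers of a point that is neither real
    # nor on the unit circle.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        new = []
        for i, z in enumerate(roots):
            denom = 1
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w        # product over the other approximants
            new.append(z - p(z) / denom)  # simultaneous Newton-like update
        roots = new
    return roots

# Roots of z^3 - 1: the three cube roots of unity.
cube_roots = durand_kerner([1, 0, 0, -1])
```

That a degree-n complex polynomial always yields n such roots (with multiplicity) is precisely the content of the theorem the book proves six ways.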
Alfred Tarski was one of the two giants of the twentieth-century development of logic, along with Kurt Goedel. The four volumes of this collection contain all of Tarski's published papers and abstracts, as well as a comprehensive bibliography. Here will be found many of the works, spanning the period 1921 through 1979, which are the bedrock of contemporary areas of logic, whether in mathematics or philosophy. These areas include the theory of truth in formalized languages, decision methods and undecidable theories, foundations of geometry, set theory, model theory, algebraic logic, and universal algebra.
It is the business of science not to create laws, but to discover them. We do not originate the constitution of our own minds, greatly as it may be in our power to modify their character. And as the laws of the human intellect do not depend upon our will, so the forms of science, of which they constitute the basis, are in all essential regards independent of individual choice. (George Boole [10, p. 11]) 1.1 Comparison with Traditional Logic: The logic of this book is a probability logic built on top of a yes-no or 2-valued logic. It is divided into two parts, part I: BP Logic, and part II: M Logic. 'BP' stands for 'Bayes Postulate'. This postulate says that in the absence of knowledge concerning a probability distribution over a universe or space one should assume a uniform distribution. The M logic of part II does not make use of Bayes' postulate or of any other postulates or axioms. It relies exclusively on purely deductive reasoning following from the definition of probabilities. The M logic goes an important step further than the BP logic in that it can distinguish between certain types of information supply sentences which have the same representation in the BP logic as well as in traditional first-order logic, although they clearly have different meanings (see example 6.1.2; also comments to the Paris-Rome problem of eqs. (1.8), (1.9) below).
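Bayes' postulate as quoted above has a direct computational reading: absent any knowledge of the distribution over a finite universe, every point receives equal probability, so the probability of a sentence is the fraction of points satisfying it. A minimal sketch (the universe and predicate are invented for illustration, not taken from the book):

```python
from fractions import Fraction

def bp_probability(universe, satisfies):
    """Probability of a sentence under Bayes' postulate: assume a uniform
    distribution over the universe, so P = |satisfying points| / |universe|."""
    favorable = sum(1 for x in universe if satisfies(x))
    return Fraction(favorable, len(universe))

# Hypothetical universe: the outcomes of rolling one die.
universe = range(1, 7)
p_even = bp_probability(universe, lambda x: x % 2 == 0)   # 3 of 6 outcomes
```

The M logic of part II, by contrast, refrains from assuming any such distribution and derives only what follows deductively from the definition of probabilities.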
This is a thorough and comprehensive treatment of the theory of NP-completeness in the framework of algebraic complexity theory. Coverage includes Valiant's algebraic theory of NP-completeness; interrelations with the classical theory as well as the Blum-Shub-Smale model of computation, questions of structural complexity; fast evaluation of representations of general linear groups; and complexity of immanants.
Term rewriting techniques are applicable to various fields of computer science, including software engineering, programming languages, computer algebra, program verification, automated theorem proving and Boolean algebra. These powerful techniques can be successfully applied in all areas that demand efficient methods for reasoning with equations. One of the major problems encountered is the characterization of classes of rewrite systems that have a desirable property, like confluence or termination. In a system that is both terminating and confluent, every computation leads to a result that is unique, regardless of the order in which the rewrite rules are applied. This volume provides a comprehensive and unified presentation of termination and confluence, as well as related properties. Topics and features: *unified presentation and notation for important advanced topics *comprehensive coverage of conditional term-rewriting systems *state-of-the-art survey of modularity in term rewriting *presentation of unified framework for term and graph rewriting *up-to-date discussion of transformational methods for proving termination of logic programs, including the TALP system This unique book offers a comprehensive and unified view of the subject that is suitable for all computer scientists, program designers, and software engineers who study and use term rewriting techniques. Practitioners, researchers and professionals will find the book an essential and authoritative resource and guide for the latest developments and results in the field.
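The interplay of termination and confluence described above can be made concrete with a toy string-rewriting system. The single rule "ba" -> "ab" is terminating (each step removes one inversion) and confluent, so every strategy for choosing the next redex reaches the same normal form, the sorted string. A small sketch (illustrative, not from the book):

```python
def normal_form(s, pick):
    """Rewrite 'ba' -> 'ab' until no redex remains; `pick` is the strategy
    that chooses which occurrence to contract (e.g. min = leftmost)."""
    while True:
        redexes = [i for i in range(len(s) - 1) if s[i:i + 2] == "ba"]
        if not redexes:
            return s                      # no redex left: s is a normal form
        i = pick(redexes)                 # the strategy selects one redex
        s = s[:i] + "ab" + s[i + 2:]      # contract it

# Termination + confluence: every strategy yields the same unique result,
# here all a's before all b's.
leftmost = normal_form("babab", min)
rightmost = normal_form("babab", max)
```

The point of the book's characterization results is to identify, for far richer classes of rewrite systems, when this happy situation of a unique strategy-independent result is guaranteed.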
George Boole (1815-1864) is well known to mathematicians for his research and textbooks on the calculus, but his name has spread world-wide for his innovations in symbolic logic and the developments and applications made since his day. The utility of "Boolean algebra" in computing has greatly increased curiosity about the nature and extent of his achievements. His work is most accessible in his two books on logic, "A mathematical analysis of logic" (1847) and "An investigation of the laws of thought" (1854). But at various times he wrote manuscript essays, especially after the publication of the second book; several were intended for a non-technical work, "The Philosophy of logic," which he was not able to complete. This volume contains an edited selection which not only relates the essays to Boole's publications and the historical context of his time, but also describes the strange history of how family, followers and scholars have tried to compile an edition of them. The book will appeal to logicians, mathematicians and philosophers, and those interested in the histories of the corresponding subjects; and also to students of the early Victorian Britain in which the essays were written.
The purpose of this book is to provide the reader who is interested in applications of fuzzy set theory, in the first place, with a text to which he or she can refer for the basic theoretical ideas, concepts and techniques in this field, and in the second place, with a vast and up-to-date account of the literature. Although there are now many books about fuzzy set theory, mainly about its applications, e.g. in control theory, there is not really a book available which introduces the elementary theory of fuzzy sets in what I would like to call "a good degree of generality." To write a book which would treat the entire range of results concerning the basic theoretical concepts in great detail, and which would also deal with all possible variants and alternatives of the theory, such as rough sets and L-fuzzy sets for arbitrary lattices L, with the possibility-probability theories and interpretations, with the foundation of fuzzy set theory via multi-valued logic or via categorical methods, and so on, would have been an altogether different project. This book is far more modest in its mathematical content and in its scope.
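The elementary operations such an introduction starts from fit in a few lines: a fuzzy set over a finite universe is a membership function into [0, 1], and the standard (Zadeh) union, intersection and complement act pointwise by max, min and 1 - membership. A minimal sketch with invented example sets:

```python
# A fuzzy set over a finite universe: element -> membership degree in [0, 1].
tall  = {"ann": 0.9, "bob": 0.5, "cid": 0.1}
young = {"ann": 0.2, "bob": 0.6, "cid": 0.8}

def f_union(a, b):         # standard (Zadeh) union: pointwise max
    return {x: max(a[x], b[x]) for x in a}

def f_intersection(a, b):  # standard intersection: pointwise min
    return {x: min(a[x], b[x]) for x in a}

def f_complement(a):       # standard complement: 1 - membership
    return {x: 1 - a[x] for x in a}

tall_or_young = f_union(tall, young)
tall_and_young = f_intersection(tall, young)
```

The variants the author sets aside (L-fuzzy sets, alternative t-norms, and so on) generalize precisely these pointwise operations.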
This volume presents a unified approach to the mathematical theory of a wide class of non-additive set functions, the so called null-additive set functions, which also includes classical measure theory. It includes such important set functions as capacities, triangular set functions, some fuzzy measures, submeasures, decomposable measures, possibility measures, distorted probabilities, autocontinuous set functions, etc. The usefulness of the theory is demonstrated by applications in nonlinear differential and difference equations; fractal geometry in the theory of chaos; the approximation of functions in modular spaces by nonlinear singular integral operators; and in the theory of diagonal theorems as a universal method for proving general and fundamental theorems in functional analysis and measure theory. Audience: This book will be of value to researchers and postgraduate students in mathematics, as well as in such diverse fields as knowledge engineering, artificial intelligence, game theory, statistics, economics, sociology and industry.
The book is intended for students who want to learn how to prove theorems and be better prepared for the rigors required in more advanced mathematics. One of the key components in this textbook is the development of a methodology to lay bare the structure underpinning the construction of a proof, much as diagramming a sentence lays bare its grammatical structure. Diagramming a proof is a way of presenting the relationships between the various parts of a proof. A proof diagram provides a tool for showing students how to write correct mathematical proofs.
Despite decades of work in evolutionary algorithms, there remains a lot of uncertainty as to when it is beneficial or detrimental to use recombination or mutation. This book provides a characterization of the roles that recombination and mutation play in evolutionary algorithms. It integrates prior theoretical work and introduces new theoretical techniques for studying evolutionary algorithms. An aggregation algorithm for Markov chains is introduced which is useful for studying not only evolutionary algorithms specifically, but also complex systems in general. Practical consequences of the theory are explored and a novel method for comparing search and optimization algorithms is introduced. A focus on discrete rather than real-valued representations allows the book to bridge multiple communities, including evolutionary biologists and population geneticists.
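The two operators under study can be seen in even a minimal genetic algorithm. The sketch below runs on the OneMax problem (maximize the number of 1-bits in a bitstring), using one-point crossover for recombination and independent per-bit flips for mutation; all parameter choices are illustrative rather than taken from the book:

```python
import random

def one_max(bits):
    return sum(bits)                       # fitness: number of 1-bits

def evolve(n_bits=20, pop_size=20, generations=60, p_mut=0.05, seed=0):
    rng = random.Random(seed)              # fixed seed for a reproducible run
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            p1 = max(rng.sample(pop, 2), key=one_max)
            p2 = max(rng.sample(pop, 2), key=one_max)
            # Recombination: one-point crossover.
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit independently with probability p_mut.
            child = [b ^ (rng.random() < p_mut) for b in child]
            nxt.append(child)
        pop = nxt
    return max(one_max(ind) for ind in pop)
```

Deleting either operator (setting p_mut to 0, or copying a single parent instead of crossing two) changes the search dynamics, and characterizing when each change helps or hurts is exactly the book's subject.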
Mathematics has stood as a bridge between the Humanities and the Sciences since the days of classical antiquity. For Plato, mathematics was evidence of Being in the midst of Becoming, garden variety evidence apparent even to small children and the unphilosophical, and therefore of the highest educational significance. In the great central similes of The Republic it is the touchstone of intelligibility for discourse, and in the Timaeus it provides in an oddly literal sense the framework of nature, ensuring the intelligibility of the material world. For Descartes, mathematical ideas had a clarity and distinctness akin to the idea of God, as the fifth of the Meditations makes especially clear. Cartesian mathematicals are constructions as well as objects envisioned by the soul; in the Principles, the work of the physicist who provides a quantified account of the machines of nature hovers between description and constitution. For Kant, mathematics reveals the possibility of universal and necessary knowledge that is neither the logical unpacking of concepts nor the record of perceptual experience. In the Critique of Pure Reason, mathematics is one of the transcendental instruments the human mind uses to apprehend nature, and by apprehending to construct it under the universal and necessary laws of Newtonian mechanics.