Since the late 1980s, a large number of very user-friendly tools for fuzzy control, fuzzy expert systems, and fuzzy data analysis have emerged. This has changed the character of the field and given rise to the area of `fuzzy technology'. The next large step in the development occurred in 1992 when, almost independently in Europe, Japan and the USA, the three areas of fuzzy technology, artificial neural nets and genetic algorithms joined forces under the title of `computational intelligence' or `soft computing'. The synergies which were possible between these three areas have been exploited very successfully. Practical Applications of Fuzzy Sets focuses on model and real applications of fuzzy sets, and is structured into four major parts: engineering and natural sciences; medicine; management; and behavioral, cognitive and social sciences. This book will be useful for practitioners of fuzzy technology, scientists and students who are looking for applications of their models and methods, for topics of their theses, and even for venture capitalists who look for attractive possibilities for investments.
In this two-volume compilation of articles, leading researchers reevaluate the success of Hilbert's axiomatic method, which not only laid the foundations for our understanding of modern mathematics, but also found applications in physics, computer science and elsewhere. The title takes its name from David Hilbert's seminal talk Axiomatisches Denken, given at a meeting of the Swiss Mathematical Society in Zurich in 1917. This marked the beginning of Hilbert's return to his foundational studies, which ultimately resulted in the establishment of proof theory as a new branch in the emerging field of mathematical logic. Hilbert also used the opportunity to bring Paul Bernays back to Goettingen as his main collaborator in foundational studies in the years to come. The contributions are addressed to mathematical and philosophical logicians, but also to philosophers of science as well as physicists and computer scientists with an interest in foundations.
The nationwide research project `Deduktion', funded by the `Deutsche Forschungsgemeinschaft (DFG)' for a period of six years, brought together almost all research groups within Germany engaged in the field of automated reasoning. Intensive cooperation and exchange of ideas led to considerable progress both in the theoretical foundations and in the application of deductive knowledge. This three-volume book covers these original contributions moulded into the state of the art of automated deduction. The three volumes are intended to document and advance a development in the field of automated deduction that can now be observed all over the world. Rather than restricting the interest to purely academic research, the focus now is on the investigation of problems derived from realistic applications. In fact industrial applications are already pursued on a trial basis. In consequence the emphasis of the volumes is not on the presentation of the theoretical foundations of logical deduction as such, as in a handbook; rather the books present the concepts and methods now available in automated deduction in a form which can be easily accessed by scientists working in applications outside of the field of deduction. This reflects the strong conviction that automated deduction is on the verge of being fully included in the evolution of technology. Volume I focuses on basic research in deduction and on the knowledge on which modern deductive systems are based. Volume II presents techniques of implementation and details about system building. Volume III deals with applications of deductive techniques mainly, but not exclusively, to mathematics and the verification of software. Each chapter was read by two referees, one an international expert from abroad and the other a knowledgeable participant in the national project. It has been accepted for inclusion on the basis of these review reports. Audience: Researchers and developers in software engineering, formal methods, certification, verification, validation, specification of complex systems and software, expert systems, natural language processing.
This book defines a logical system called the Protocol-theoretic Logic of Epistemic Norms (PLEN), develops PLEN into a formal framework for representing and reasoning about epistemic norms, and shows that PLEN is theoretically interesting and useful with regard to the aims of such a framework. In order to motivate the project, the author defends an account of epistemic norms called epistemic proceduralism. The core of this view is the idea that, in virtue of their indispensable, regulative role in cognitive life, epistemic norms are closely intertwined with procedural rules that restrict epistemic actions, procedures, and processes. The resulting organizing principle of the book is that epistemic norms are protocols for epistemic planning and control. The core of the book is the development of PLEN, which is essentially a novel variant of propositional dynamic logic (PDL) distinguished by more or less elaborate revisions of PDL's syntax and semantics. The syntax encodes the procedural content of epistemic norms by means of the well-known protocol or program constructions of dynamic and epistemic logics. It then provides a novel language of operators on protocols, including a range of unique protocol equivalence relations, syntactic operations on protocols, and various procedural relations among protocols in addition to the standard dynamic (modal) operators of PDL. The semantics of the system then interprets protocol expressions and expressions embedding protocols over a class of directed multigraph-like structures rather than the standard labeled transition systems or modal frames. The intent of the system is to better represent epistemic dynamics, build a logic of protocols atop it, and then show that the resulting logic of protocols is useful as a logical framework for epistemic norms. The resulting theory of epistemic norms centers on notions of norm equivalence derived from theories of process equivalence familiar from the study of dynamic and modal logics. The canonical account of protocol equivalence in PLEN turns out to possess a number of interesting formal features, including satisfaction of important conditions on hyperintensional equivalence, a matter of recently recognized importance in the logic of norms generally. To show that the system is interesting and useful as a framework for representing and reasoning about epistemic norms, the author applies the logical system to the analysis of epistemic deontic operators, and, partly on the basis of this, establishes representation theorems linking protocols to the action-guiding content of epistemic norms. The protocol-theoretic logic of epistemic norms is then shown to almost immediately validate the main principles of epistemic proceduralism.
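For orientation, and not as a quotation from the book, the standard PDL program ("protocol") constructions that PLEN is said to build on can be sketched as follows, where a ranges over atomic programs and p over atomic propositions:
\[
\pi ::= a \mid \pi_1 ; \pi_2 \mid \pi_1 \cup \pi_2 \mid \pi^{*} \mid \varphi? \qquad\qquad \varphi ::= p \mid \neg\varphi \mid \varphi_1 \wedge \varphi_2 \mid [\pi]\varphi ,
\]
with $[\pi]\varphi$ read as "after every terminating execution of $\pi$, $\varphi$ holds"; PLEN's own language of protocol operators and its multigraph semantics differ from this in the ways the description above indicates.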
This monograph provides the first up-to-date and self-contained presentation of a recently discovered mathematical structure: the Schrödinger-Virasoro algebra. Just as Poincaré invariance or conformal (Virasoro) invariance play a key role in understanding, respectively, elementary particles and two-dimensional equilibrium statistical physics, this algebra of non-relativistic conformal symmetries may be expected to apply naturally to the study of some models of non-equilibrium statistical physics, or more specifically in the context of recent developments related to the non-relativistic AdS/CFT correspondence. The study of the structure of this infinite-dimensional Lie algebra touches upon topics as various as statistical physics, vertex algebras, Poisson geometry, integrable systems and supergeometry, as well as representation theory, the cohomology of infinite-dimensional Lie algebras, and the spectral theory of Schrödinger operators.
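For readers who want a concrete picture (this is the presentation commonly found in the literature, not an excerpt from the monograph), the centreless Schrödinger-Virasoro algebra has generators $L_n$, $M_n$ ($n \in \mathbb{Z}$) and $Y_m$ ($m \in \tfrac{1}{2} + \mathbb{Z}$) with brackets
\[
[L_n, L_{n'}] = (n - n') L_{n+n'}, \qquad [L_n, Y_m] = \bigl(\tfrac{n}{2} - m\bigr) Y_{n+m}, \qquad [L_n, M_{n'}] = -n' M_{n+n'},
\]
\[
[Y_m, Y_{m'}] = (m - m') M_{m+m'}, \qquad [Y_m, M_n] = 0, \qquad [M_n, M_{n'}] = 0 .
\]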
The axiomatic theory of sets is a vibrant part of pure mathematics, with its own basic notions, fundamental results, and deep open problems. It is also viewed as a foundation of mathematics so that "to make a notion precise" simply means "to define it in set theory." This book gives a solid introduction to "pure set theory" through transfinite recursion and the construction of the cumulative hierarchy of sets, and also attempts to explain how mathematical objects can be faithfully modeled within the universe of sets. In this new edition the author has added solutions to the exercises, and rearranged and reworked the text to improve the presentation.
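The cumulative hierarchy referred to here is built by transfinite recursion on the ordinals; in the standard formulation,
\[
V_0 = \emptyset, \qquad V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \text{for limit } \lambda, \qquad V = \bigcup_{\alpha \in \mathrm{Ord}} V_\alpha ,
\]
and, on the standard picture, modelling a mathematical object within the universe of sets amounts to exhibiting a set-theoretic surrogate for it, which then lives in some $V_\alpha$.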
At the beginning of the new millennium, fuzzy logic opens a new challenging perspective in information processing. This perspective emerges out of the ideas of the founder of fuzzy logic, Lotfi Zadeh, to develop 'soft' tools for direct computing with human perceptions. The enigmatic nature of human perceptions manifests in their unique capacity to generalize, extract patterns and capture both the essence and the integrity of the events and phenomena in human life. This capacity goes together with an intrinsic imprecision of the perception-based information. According to Zadeh, it is because of this imprecision that human perceptions do not lend themselves to meaning representation through the use of precise methods based on predicate logic. This is the principal reason why existing scientific theories do not have the capability to operate on perception-based information. We are on the eve of the emergence of a theory with such a capability. Its applicative effectiveness has already been demonstrated through the industrial implementation of soft computing, a powerful intelligent technology centred on fuzzy logic. At the focus of the papers included in this book is the knowledge and experience of the researchers in relation both to the engineering applications of soft computing and to its social and philosophical implications at the dawn of the third millennium. The papers clearly demonstrate that fuzzy logic revolutionizes general approaches for solving applied problems and reveals deep connections between them and their solutions.
Fuzzy hardware developments have been a major force driving the applications of fuzzy set theory and fuzzy logic in both science and engineering. This volume provides the reader with a comprehensive, up-to-date look at recent works describing new innovative developments of fuzzy hardware. An important research trend is the design of improved fuzzy hardware. There is an increasing interest in both analog and digital implementations of fuzzy controllers in particular and fuzzy systems in general. Specialized analog and digital VLSI implementations of fuzzy systems, in the form of dedicated architectures, aim at the highest implementation efficiency. This particular efficiency is asserted in terms of processing speed and silicon utilization. Processing speed in particular has caught the attention of developers of fuzzy hardware and researchers in the field. The volume includes detailed material on a variety of fuzzy hardware related topics, such as: a historical review of fuzzy hardware research; fuzzy hardware based on encoded trapezoids; pulse stream techniques for fuzzy hardware; hardware realization of fuzzy neural networks; design of analog neuro-fuzzy systems in CMOS digital technologies; a fuzzy controller synthesis method; automatic design of digital and analog neuro-fuzzy controllers; electronic implementation of complex controllers; silicon compilation of fuzzy hardware systems; digital fuzzy hardware processing; a parallel processor architecture for real-time fuzzy applications; and fuzzy cellular systems. Fuzzy Hardware: Architectures and Applications is a technical reference book for researchers, engineers and scientists interested in fuzzy systems in general and in building fuzzy systems in particular.
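As a purely illustrative, software-level sketch of the kind of fuzzy controller such hardware implements (the rule base, membership functions and function names below are hypothetical and are not taken from the book), a minimal Mamdani-style controller with min-max inference and centroid defuzzification might look like this:

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_controller(error):
    # Map an error signal in [-1, 1] to a control action in [-1, 1].
    u = np.linspace(-1.0, 1.0, 201)  # discretised output universe
    # Hypothetical rule base: IF error is negative/zero/positive
    # THEN output is negative/zero/positive.
    rules = [
        (tri(error, -2.0, -1.0, 0.0), tri(u, -2.0, -1.0, 0.0)),
        (tri(error, -1.0,  0.0, 1.0), tri(u, -0.5,  0.0, 0.5)),
        (tri(error,  0.0,  1.0, 2.0), tri(u,  0.0,  1.0, 2.0)),
    ]
    # Mamdani inference: clip each consequent at its rule strength (min),
    # then aggregate the clipped consequents pointwise (max).
    aggregated = np.max(
        [np.minimum(strength, consequent) for strength, consequent in rules],
        axis=0,
    )
    # Centroid defuzzification.
    return float(np.sum(u * aggregated) / np.sum(aggregated))

if __name__ == "__main__":
    print(fuzzy_controller(0.3))  # a small positive error gives a small positive action

Dedicated analog or digital hardware essentially trades this interpreted arithmetic for fixed data paths, which is where the processing-speed and silicon-utilization concerns discussed in the volume arise.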
This open access book offers a self-contained introduction to the homotopy theory of simplicial and dendroidal sets and spaces. These are essential for the study of categories, operads, and algebraic structure up to coherent homotopy. The dendroidal theory combines the combinatorics of trees with the theory of Quillen model categories. Dendroidal sets are a natural generalization of simplicial sets from the point of view of operads. In this book, the simplicial approach to higher category theory is generalized to a dendroidal approach to higher operad theory. This dendroidal theory of higher operads is carefully developed in this book. The book also provides an original account of the more established simplicial approach to infinity-categories, which is developed in parallel to the dendroidal theory to emphasize the similarities and differences. Simplicial and Dendroidal Homotopy Theory is a complete introduction, carefully written with the beginning researcher in mind and ideally suited for seminars and courses. It can also be used as a standalone introduction to simplicial homotopy theory and to the theory of infinity-categories, or a standalone introduction to the theory of Quillen model categories and Bousfield localization.
Features: provides a uniquely historical perspective on the mathematical underpinnings of a comprehensive list of games; is suitable for a broad audience of differing mathematical levels (anyone with a passion for games, game theory, and mathematics will enjoy this book, whether they be students, academics, or game enthusiasts); and covers a wide selection of topics at a level that can be appreciated on a historical, recreational, and mathematical level.
This book offers an original introduction to the representation theory of algebras, suitable for beginning researchers in algebra. It includes many results and techniques not usually covered in introductory books, some of which appear here for the first time in book form. The exposition employs methods from linear algebra (spectral methods and quadratic forms), as well as categorical and homological methods (module categories, Galois coverings, Hochschild cohomology) to present classical aspects of ring theory under new light. This includes topics such as rings with several objects, the Harada-Sai lemma, chain conditions, and Auslander-Reiten theory. Noteworthy and significant results covered in the book include the Brauer-Thrall conjectures, Drozd's theorem, and criteria to distinguish tame from wild algebras. This text may serve as the basis for a second graduate course in algebra or as an introduction to research in the field of representation theory of algebras. The originality of the exposition and the wealth of topics covered also make it a valuable resource for more established researchers.
Lectori salutem! The kind reader opens a book that its authors would have liked to read themselves, but which had not yet been written. Their only choice, then, was to write this book, to fill a gap in the mathematical literature. The idea of convexity has been present in the human mind since antiquity, and its fertility has led to a huge diversity of notions and of applications. A student intending a thoroughgoing study of convexity has the sensation of swimming in an ocean. This is due to two reasons: the first is the great number of properties and applications of classical convexity, and the second is the great number of generalisations made for various purposes. As a consequence, a tendency towards writing large books guiding the reader through convexity has appeared during the last twenty years (for example, the books of P. M. Gruber and J. M. Willis (1993) and R. J. Webster (1994)). Another recent tendency is to order, from some point of view, as many convexity notions as possible (for example, the book of I. Singer (1997)). These approaches to the domain of convexity follow the earlier point of view of axiomatizing it (A. Ghika (1955), W. Prenowitz (1961), D. Voiculescu (1967), V. W. Bryant and R. J. Webster (1969)). Following this last tendency, our book proposes to the reader two classifications of convexity properties for sets, both of them starting from the internal mechanism of defining them.
In this presentation of the Galois correspondence, modern theories of groups and fields are used to study problems, some of which date back to the ancient Greeks. The techniques used to solve these problems, rather than the solutions themselves, are of primary importance. The ancient Greeks were concerned with constructibility problems. For example, they tried to determine whether it was possible, using straightedge and compass alone, to perform any of the following tasks: (1) Double an arbitrary cube; in particular, construct a cube with volume twice that of the unit cube. (2) Trisect an arbitrary angle. (3) Square an arbitrary circle; in particular, construct a square with area π. (4) Construct a regular polygon with n sides for n > 2. If we define a real number c to be constructible if, and only if, the point (c, 0) can be constructed starting with the points (0,0) and (1,0), then we may show that the set of constructible numbers is a subfield of the field R of real numbers containing the field Q of rational numbers. Such a subfield is called an intermediate field of R over Q. We may thus gain insight into the constructibility problems by studying intermediate fields of R over Q. In Chapter 4 we will show that (1) through (3) are not possible, and we will determine necessary and sufficient conditions that the integer n must satisfy in order that a regular polygon with n sides be constructible.
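The impossibility results promised for Chapter 4 ultimately rest on a degree argument; in outline (standard facts, not quoted from the text), every constructible number lies at the top of a tower of quadratic extensions of Q, so
\[
c \ \text{constructible} \ \Longrightarrow \ [\mathbb{Q}(c) : \mathbb{Q}] = 2^k \ \text{for some } k \ge 0, \qquad \text{whereas} \qquad [\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}] = 3 ,
\]
which rules out doubling the cube; trisecting 60 degrees fails similarly because cos 20° has degree 3 over Q, and squaring the circle fails because π is transcendental.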
This book gives a comprehensive introduction to Universal Algebraic Logic. The three main themes are (i) universal logic and the question of what logic is, (ii) duality theories between the world of logics and the world of algebra, and (iii) Tarskian algebraic logic proper including algebras of relations of various ranks, cylindric algebras, relation algebras, polyadic algebras and other kinds of algebras of logic. One of the strengths of our approach is that it is directly applicable to a wide range of logics including not only propositional logics but also e.g. classical first order logic and other quantifier logics. Following the Tarskian tradition, besides the connections between logic and algebra, related logical connections with geometry and eventually spacetime geometry leading up to relativity are also part of the perspective of the book. Besides Tarskian algebraizations of logics, category theoretical perspectives are also touched upon. This book, apart from being a monograph containing state of the art results in algebraic logic, can be used as the basis for a number of different courses intended for both novices and more experienced students of logic, mathematics, or philosophy. For instance, the first two chapters can be used in their own right as a crash course in Universal Algebra.
This monograph shows that, through a recourse to the concepts and methods of abstract algebraic logic, the algebraic theory of regular varieties and the concept of analyticity in formal logic can profitably interact. By extending the technique of Plonka sums from algebras to logical matrices, the authors investigate the different classes of models for logics of variable inclusion and shed new light on their formal properties. The book opens with the historical origins of logics of variable inclusion and their philosophical motivations. It includes the basics of the algebraic theory of regular varieties and the construction of Plonka sums over semilattice direct systems of algebras. The core of the book is devoted to an abstract definition of logics of left and right variable inclusion, respectively, and the authors study their semantics using the construction of Plonka sums of matrix models. The authors also cover Paraconsistent Weak Kleene logic and survey its abstract algebraic logical properties. This book is of interest to scholars of formal logic.
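For orientation, the Plonka sum construction leaned on here may be sketched as follows (a standard definition, hedged here rather than quoted from the book): given a join semilattice $I$ and algebras $\{\mathbf{A}_i\}_{i \in I}$ of the same type, together with homomorphisms $\varphi_{i,j} : \mathbf{A}_i \to \mathbf{A}_j$ for $i \le j$ satisfying $\varphi_{i,i} = \mathrm{id}$ and $\varphi_{j,k} \circ \varphi_{i,j} = \varphi_{i,k}$, their Plonka sum is the algebra on the disjoint union $\bigcup_{i \in I} A_i$ in which an $n$-ary operation $g$ is evaluated at $a_1 \in A_{i_1}, \dots, a_n \in A_{i_n}$ by
\[
g(a_1, \dots, a_n) \;=\; g^{\mathbf{A}_j}\bigl(\varphi_{i_1, j}(a_1), \dots, \varphi_{i_n, j}(a_n)\bigr), \qquad j = i_1 \vee \dots \vee i_n .
\]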
In his studies of cyclotomic fields, in view of establishing his monumental theorem about Fermat's last theorem, Kummer introduced "local" methods. They are concerned with divisibility of "ideal numbers" of cyclotomic fields by lambda = 1 - psi, where psi is a primitive p-th root of 1 (p any odd prime). Hensel developed Kummer's ideas, constructed the field of p-adic numbers and proved the fundamental theorem known today as Hensel's lemma. Kurschak formally introduced the concept of a valuation of a field, as being a real-valued function on the set of non-zero elements of the field satisfying certain properties, like the p-adic valuations. Ostrowski, Hasse, Schmidt and others developed this theory, and collectively these topics form the primary focus of this book.
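To fix ideas (a standard definition, not a quotation from the book): an additive, rank-one valuation on a field $K$ is a map $v : K^{\times} \to \mathbb{R}$ satisfying
\[
v(xy) = v(x) + v(y), \qquad v(x + y) \ \ge\ \min\{v(x), v(y)\},
\]
and the prototype is the p-adic valuation on $\mathbb{Q}$, given by $v_p\bigl(p^{k}\tfrac{a}{b}\bigr) = k$ whenever the integers a and b are not divisible by p.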
Spline functions entered Approximation Theory as solutions of natural extremal problems. A typical example is the problem of drawing a function curve through n + k given points that has a minimal norm of its k-th derivative. Isolated facts about the functions now called splines can be found in the papers of L. Euler, A. Lebesgue, G. Birkhoff, J. Favard and L. Tschakaloff. However, the Theory of Spline Functions has been developed over the last 30 years through the efforts of dozens of mathematicians. Recent fundamental results on multivariate polynomial interpolation and multivariate splines have initiated a new wave of theoretical investigations and a variety of applications. The purpose of this book is to introduce the reader to the theory of spline functions. The emphasis is given to some new developments, such as general Birkhoff-type interpolation, the extremal properties of the splines and their prominent role in the optimal recovery of functions, and multivariate interpolation by polynomials and splines. The material presented is based on the lectures of the authors, given to students at the University of Sofia and Yerevan University during the last 10 years. Some more elementary results are left as exercises and detailed hints are given.
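The extremal problem mentioned at the outset has a classical answer worth keeping in mind (a standard result, stated here only for orientation): among sufficiently smooth functions f interpolating prescribed values $f(x_i) = y_i$ at sufficiently many nodes, the quantity
\[
\|f^{(k)}\|_{L_2}^2 = \int \bigl(f^{(k)}(t)\bigr)^2 \, dt
\]
is minimised by a natural spline of degree $2k - 1$ with knots at the interpolation points; the case $k = 2$ gives the familiar natural cubic interpolating spline.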
This book offers an up-to-date, comprehensive account of determinantal rings and varieties, presenting a multitude of methods used in their study, with tools from combinatorics, algebra, representation theory and geometry. After a concise introduction to Groebner and Sagbi bases, determinantal ideals are studied via the standard monomial theory and the straightening law. This opens the door for representation theoretic methods, such as the Robinson-Schensted-Knuth correspondence, which provide a description of the Groebner bases of determinantal ideals, yielding homological and enumerative theorems on determinantal rings. Sagbi bases then lead to the introduction of toric methods. In positive characteristic, the Frobenius functor is used to study properties of singularities, such as F-regularity and F-rationality. Castelnuovo-Mumford regularity, an important complexity measure in commutative algebra and algebraic geometry, is introduced in the general setting of a Noetherian base ring and then applied to powers and products of ideals. The remainder of the book focuses on algebraic geometry, where general vanishing results for the cohomology of line bundles on flag varieties are presented and used to obtain asymptotic values of the regularity of symbolic powers of determinantal ideals. In characteristic zero, the Borel-Weil-Bott theorem provides sharper results for GL-invariant ideals. The book concludes with a computation of cohomology with support in determinantal ideals and a survey of their free resolutions. Determinants, Groebner Bases and Cohomology provides a unique reference for the theory of determinantal ideals and varieties, as well as an introduction to the beautiful mathematics developed in their study. Accessible to graduate students with basic grounding in commutative algebra and algebraic geometry, it can be used alongside general texts to illustrate the theory with a particularly interesting and important class of varieties.
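For readers meeting the subject for the first time, the central object can be pinned down as follows (a standard definition, not an excerpt from the book): if $X = (x_{ij})$ is an $m \times n$ matrix of indeterminates over a field $k$, the determinantal ideal $I_t(X) \subseteq k[x_{ij}]$ is generated by all $t \times t$ minors of $X$, and the associated determinantal ring is
\[
R_t(X) = k[x_{ij}] / I_t(X) .
\]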
Algorithmic Information Theory treats the mathematics of many important areas in digital information processing. It has been written as a read-and-learn book on concrete mathematics, for teachers, students and practitioners in electronic engineering, computer science and mathematics. The presentation is dense, and the examples and exercises are numerous. It is based on lectures on information technology (Data Compaction, Cryptography, Polynomial Coding) for engineers.
The theory presented in this book is developed constructively, is based on a few axioms encapsulating the notion of objects (points and sets) being apart, and encompasses both point-set topology and the theory of uniform spaces. While the classical-logic-based theory of proximity spaces provides some guidance for the theory of apartness, the notion of nearness/proximity does not embody enough algorithmic information for a deep constructive development. The use of constructive (intuitionistic) logic in this book requires much more technical ingenuity than one finds in classical proximity theory - algorithmic information does not come cheaply - but it often reveals distinctions that are rendered invisible by classical logic. In the first chapter the authors outline informal constructive logic and set theory, and, briefly, the basic notions and notations for metric and topological spaces. In the second they introduce axioms for a point-set apartness and then explore some of the consequences of those axioms. In particular, they examine a natural topology associated with an apartness space, and relations between various types of continuity of mappings. In the third chapter the authors extend the notion of point-set (pre-)apartness axiomatically to one of (pre-)apartness between subsets of an inhabited set. They then provide axioms for a quasiuniform space, perhaps the most important type of set-set apartness space. Quasiuniform spaces play a major role in the remainder of the chapter, which covers such topics as the connection between uniform and strong continuity (arguably the most technically difficult part of the book), apartness and convergence in function spaces, types of completeness, and neat compactness. Each chapter has a Notes section, in which are found comments on the definitions, results, and proofs, as well as occasional pointers to future work. The book ends with a Postlude that refers to other constructive approaches to topology, with emphasis on the relation between apartness spaces and formal topology. Largely an exposition of the authors' own research, this is the first book dealing with the apartness approach to constructive topology, and is a valuable addition to the literature on constructive mathematics and on topology in computer science. It is aimed at graduate students and advanced researchers in theoretical computer science, mathematics, and logic who are interested in constructive/algorithmic aspects of topology.
Can you really keep your eye on the ball? How is massive data collection changing sports? Sports science courses are growing in popularity. The author's course at Roanoke College is a mix of physics, physiology, mathematics, and statistics. Many students of both genders find it exciting to think about sports. Sports problems are easy to create and state, even for students who do not live sports 24/7. Sports are part of their culture and knowledge base, and the opportunity to be an expert on some area of sports is invigorating. This should be the primary reason for the growth of mathematics of sports courses: the topic provides intrinsic motivation for students to do their best work. From the Author: "The topics covered in Sports Science and Sports Analytics courses vary widely. To use a golfing analogy, writing a book like this is like hitting a drive at a driving range; there are many directions you can go without going out of bounds. At the driving range, I pick out a small target to focus on, and that is what I have done here. I have chosen a sample of topics I find very interesting. Ideally, users of this book will have enough to choose from to suit whichever version of a sports course is being run." "The book is very appealing to teach from as well as to learn from. Students seem to have a growing interest in ways to apply traditionally different areas to solve problems. This, coupled with an enthusiasm for sports, makes Dr. Minton's book appealing to me."-Kevin Hutson, Furman University
Problems books are popular with instructors and students alike, as well as among general readers. The key to this book is the many alternative solutions to single problems. Mathematics educators, secondary mathematics teachers, and university instructors will find the book interesting and useful.
Model theory is used in every theoretical branch of analytic philosophy: in philosophy of mathematics, in philosophy of science, in philosophy of language, in philosophical logic, and in metaphysics. But these wide-ranging uses of model theory have created a highly fragmented literature. On the one hand, many philosophically significant results are found only in mathematics textbooks: these are aimed squarely at mathematicians; they typically presuppose that the reader has a serious background in mathematics; and little clue is given as to their philosophical significance. On the other hand, the philosophical applications of these results are scattered across disconnected pockets of papers. The first aim of this book, then, is to explore the philosophical uses of model theory, focusing on the central topics of reference, realism, and doxology. Its second aim is to address important questions in the philosophy of model theory, such as: sameness of theories and structure, the boundaries of logic, and the classification of mathematical structures. Philosophy and Model Theory will be accessible to anyone who has completed an introductory logic course. It does not assume that readers have encountered model theory before, but starts right at the beginning, discussing philosophical issues that arise even with conceptually basic model theory. Moreover, the book is largely self-contained: model-theoretic notions are defined as and when they are needed for the philosophical discussion, and many of the most philosophically significant results are given accessible proofs.
This second volume of the book series shows that the R-calculus is a combination of a monotonic tableau proof system and a non-monotonic one. The R-calculus is a Gentzen-type deduction system which is non-monotonic, and is a concrete belief revision operator which is proved to satisfy the AGM postulates and the DP postulates. The volume discusses the algebraic and logical properties of tableau proof systems and R-calculi in many-valued logics. This book offers a rich blend of theory and practice. It is suitable for students, researchers and practitioners in the field of logic. It is also very useful for all those who are interested in data, digitization and the correctness and consistency of information, in modal logics, non-monotonic logics, decidable/undecidable logics, logic programming, description logics, default logics and semantic inheritance networks.
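For reference, the basic AGM postulates mentioned here, in their usual formulation for revising a belief set K by a sentence φ (the book's exact statements may differ), are:
\[
\begin{aligned}
&(K{*}1)\ \ K * \varphi = \mathrm{Cn}(K * \varphi) && (K{*}2)\ \ \varphi \in K * \varphi\\
&(K{*}3)\ \ K * \varphi \subseteq \mathrm{Cn}(K \cup \{\varphi\}) && (K{*}4)\ \ \neg\varphi \notin K \ \Rightarrow\ \mathrm{Cn}(K \cup \{\varphi\}) \subseteq K * \varphi\\
&(K{*}5)\ \ K * \varphi \ \text{is inconsistent only if} \ \vdash \neg\varphi && (K{*}6)\ \ \vdash \varphi \leftrightarrow \psi \ \Rightarrow\ K * \varphi = K * \psi
\end{aligned}
\]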