This edited volume focuses on the work of Professor Larisa Maksimova, providing a comprehensive account of her outstanding contributions to different branches of non-classical logic. The book covers themes ranging from rigorous implication, relevance and algebraic logic, to interpolation, definability and recognizability in superintuitionistic and modal logics. It features both her scientific autobiography and original contributions from experts in the field of non-classical logics. Professor Larisa Maksimova's influential work involved combining methods of algebraic and relational semantics. Readers will be able to trace both influences on her work, and the ways in which her work has influenced other logicians. In the historical part of this book, it is possible to trace important milestones in Maksimova's career. Early on, she developed an algebraic semantics for relevance logics and relational semantics for the logic of entailment. Later, Maksimova discovered that among the continuum of superintuitionistic logics there are exactly three pretabular logics. She went on to obtain results on the decidability of tabularity and local tabularity problems for superintuitionistic logics and for extensions of S4. Further investigations by Maksimova were aimed at the study of fundamental properties of logical systems (different versions of interpolation and definability, disjunction property, etc.) in large classes of logics, and on decidability and recognizability of such properties. To this end she developed a powerful combination of algebraic and semantic methods, which essentially determines the modern state of investigations in the area, as can be seen in the later chapters of this book authored by leading experts in non-classical logics. These original contributions bring the reader up to date on the very latest work in this field.
The variational spline theory which originates from the well-known paper by J. C. Holladay (1957) is today a well-developed field in approximation theory. The general definition of splines in the Hilbert space, existence, uniqueness, and characterization theorems were obtained about 35 years ago by M. Atteia, P. J. Laurent, and P. M. Anselone, but in recent years important new results have been obtained in the abstract variational spline theory.
The importance of having efficient and effective methods for data mining and knowledge discovery (DM&KD), to which the present book is devoted, grows every day and numerous such methods have been developed in recent decades. There exists a great variety of different settings for the main problem studied by data mining and knowledge discovery, and it seems that a very popular one is formulated in terms of binary attributes. In this setting, states of nature of the application area under consideration are described by Boolean vectors defined on some attributes. That is, by data points defined in the Boolean space of the attributes. It is postulated that there exists a partition of this space into two classes, which should be inferred as patterns on the attributes when only several data points are known, the so-called positive and negative training examples. The main problem in DM&KD is defined as finding rules for recognizing (classifying) new data points of unknown class, i.e., deciding which of them are positive and which are negative. In other words, to infer the binary value of one more attribute, called the goal or class attribute. To solve this problem, some methods have been suggested which construct a Boolean function separating the two given sets of positive and negative training data points.
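The binary-attribute setting described above can be sketched in a few lines of code. Everything here is illustrative: the data points, the `classify` helper, and the nearest-example rule by Hamming distance are hypothetical stand-ins for the Boolean-function constructions that such methods actually build.

```python
# Toy sketch of the DM&KD binary-attribute setting: data points are
# Boolean vectors, and a new point of unknown class is labelled using
# the given positive and negative training examples.

def hamming(u, v):
    """Number of attributes on which two Boolean vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def classify(point, positives, negatives):
    """Label a new Boolean vector by its nearest training example
    (a stand-in for an inferred separating Boolean function)."""
    d_pos = min(hamming(point, p) for p in positives)
    d_neg = min(hamming(point, n) for n in negatives)
    return "positive" if d_pos <= d_neg else "negative"

# Hypothetical training data over four binary attributes.
positives = [(1, 1, 0, 0), (1, 1, 1, 0)]
negatives = [(0, 0, 1, 1), (0, 1, 0, 1)]

print(classify((1, 1, 0, 1), positives, negatives))  # → positive
```

Any Boolean function that outputs 1 on all positive and 0 on all negative examples would serve as a separator in this sense; the methods surveyed in such books differ mainly in how that function is constructed and how well it generalizes.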
This is the first treatment in book format of proof-theoretic transformations - known as proof interpretations - that focuses on applications to ordinary mathematics. It covers both the necessary logical machinery behind the proof interpretations used in recent applications and, via extended case studies, carries out some of these applications in full detail. This subject has historical roots in the 1950s. This book for the first time tells the whole story.
This monograph provides a theoretical treatment of the problems related to the embeddability of graphs. Among these problems are the planarity and planar embeddings of a graph, the Gaussian crossing problem, the isomorphisms of polyhedra, surface embeddability, problems concerning graphic and cographic matroids, and the knot problem from topology to combinatorics. Rectilinear embeddability, and the net-embeddability of a graph, which arises from VLSI circuit design and has been much improved by the author recently, are also illustrated. Furthermore, some optimization problems related to planar and rectilinear embeddings of graphs, including those of finding the shortest convex embedding with a boundary condition and the shortest triangulation for given points on the plane, the bend and the area minimizations of rectilinear embeddings, and several kinds of graph decompositions, are specially described under conditions that make them efficiently solvable. At the end of each chapter, the Notes Section sets out the progress of related problems, the background in theory and practice, and some historical remarks. Some open problems with suggestions for their solutions are mentioned for further research.
From the Introduction: "We shall base our discussion on a set-theoretical foundation like that used in developing analysis, or algebra, or topology. We may consider our task as that of giving a mathematical analysis of the basic concepts of logic and mathematics themselves. Thus we treat mathematical and logical practice as given empirical data and attempt to develop a purely mathematical theory of logic abstracted from these data." There are 31 chapters in 5 parts and approximately 320 exercises marked by difficulty and whether or not they are necessary for further work in the book.
This research text addresses the logical aspects of the visualization of information with papers especially commissioned for this book. The authors explore the logical properties of diagrams, charts, maps, and the like, and their use in problem solving and in teaching basic reasoning skills. As computers make visual presentations of information even more commonplace, it becomes increasingly important for the research community to develop an understanding of such tools.
This IMA Volume in Mathematics and its Applications, RANDOM SETS: THEORY AND APPLICATIONS, is based on the proceedings of a very successful 1996 three-day Summer Program on "Application and Theory of Random Sets." We would like to thank the scientific organizers: John Goutsias (Johns Hopkins University), Ronald P.S. Mahler (Lockheed Martin), and Hung T. Nguyen (New Mexico State University) for their excellent work as organizers of the meeting and for editing the proceedings. We also take this opportunity to thank the Army Research Office (ARO), the Office of Naval Research (ONR), and the Eagan, Minnesota Engineering Center of Lockheed Martin Tactical Defense Systems, whose financial support made the summer program possible. Avner Friedman, Robert Gulliver. PREFACE: "Later generations will regard set theory as a disease from which one has recovered." - Henri Poincare. Random set theory was independently conceived by D.G. Kendall and G. Matheron in connection with stochastic geometry. It was however G.
The forms and scope of logic rest on assumptions of how language and reasoning connect to experience. In this volume an analysis of meaning and truth provides a foundation for studying modern propositional and predicate logics. Chapters on propositional logic, parsing propositions, and meaning, truth and reference give a basis for criteria that can be used to judge formalizations of ordinary language arguments. Over 120 worked examples of formalizations of propositions and arguments illustrate the scope and limitations of modern logic, as analyzed in chapters on identity, quantifiers, descriptive names, functions, and second-order logic. The chapter on second-order logic illustrates how different conceptions of predicates and propositions do not lead to a common basis for quantification over predicates, as they do for quantification over things. Notable for its clarity of presentation, and supplemented by many exercises, this volume is suitable for philosophers, linguists, mathematicians, and computer scientists who wish to better understand the tools they use in formalizing reasoning.
The main aim of this monograph is to provide a structured study of the algebraic method in metalogic. In contrast to traditional algebraic logic, where the focus is on the algebraic forms of specific deductive systems, abstract algebraic logic is concerned with the process of algebraization itself. This book presents in a systematic way recent ideas in abstract algebraic logic centered around the notion of the Leibniz operator. The stress is put on the taxonomy of deductive systems. Isolating a list of plausible properties of the Leibniz operator serves as a basis for distinguishing certain natural classes of sentential logics. The hierarchy of deductive systems presented in the book comprises, among others, the following classes: protoalgebraic logics, equivalential logics, algebraizable logics, and Fregean logics. Because of the intimate connection between algebraic and logical structures, the book also provides a uniform treatment of various topics concerning deduction theorems and quasivarieties of algebras. The presentation of the above classes of logics is accompanied by a wealth of examples illustrating the general theory. An essential part of the book is formed by the numerous exercises integrated into the text. This book is suitable both for logically and algebraically minded graduate and advanced graduate students of mathematics, computer science and philosophy, and as a reference work for the expert.
Logic and the Modalities in the Twentieth Century is an indispensable research tool for anyone interested in the development of logic, including researchers, graduate and senior undergraduate students in logic, history of logic, mathematics, history of mathematics, computer science and artificial intelligence, linguistics, cognitive science, argumentation theory, philosophy, and the history of ideas.
The chief purpose of the book is to present, in detail, a compilation of proofs of the Cantor-Bernstein Theorem (CBT) published through the years since the 1870s. Over thirty such proofs are surveyed. The book comprises five parts. In the first part the discussion covers the role of CBT and related notions in the writings of Cantor and Dedekind. New views are presented, especially regarding the general proof of CBT obtained by Cantor, his proof of the Comparability Theorem, the ruptures in the Cantor-Dedekind correspondence and the origin of Dedekind's proof of CBT. The second part covers the first CBT proofs published (1896-1901). The works of the following mathematicians are considered in detail: Schroder, Bernstein, Borel, Schoenflies and Zermelo. Here a subtheme of the book is launched; it concerns the research project following Bernstein's Division Theorem (BDT). In its third part the book covers proofs that emerged during the period when the logicist movement was developed (1902-1912). It covers the works of Russell and Whitehead, Jourdain, Harward, Poincare, J. Konig, D. Konig (his results in graph theory), Peano, Zermelo, Korselt. Also Hausdorff's paradox is discussed, linking it to BDT. The fourth part of the book discusses the developments of CBT and BDT (including the inequality-BDT) in the hands of the mathematicians of the Polish School of Logic, including Sierpinski, Banach, Tarski, Lindenbaum, Kuratowski, Sikorski, Knaster, the British Whittaker, and Reichbach. Finally, in the fifth part, the main discussion concentrates on the attempts to port CBT to intuitionist mathematics (with results by Brouwer, Myhill, van Dalen and Troelstra) and to Category Theory (by Trnkova and Koubek). The second purpose of the book is to develop a methodology for the comparison of proofs. The core idea of this methodology is that a proof can be described by two descriptors, called gestalt and metaphor.
It is by comparison of their descriptors that the comparison of proofs is obtained. The process by which proof descriptors are extracted from a proof is named 'proof-processing', and it is conjectured that mathematicians perform proof-processing habitually, in the study of proofs.
This book is addressed primarily to researchers specializing in mathematical logic. It may also be of interest to students completing a Master's degree in mathematics and desiring to embark on research in logic, as well as to teachers at universities and high schools, mathematicians in general, or philosophers wishing to gain a more rigorous conception of deductive reasoning. The material stems from lectures read from 1962 to 1968 at the Faculte des Sciences de Paris and since 1969 at the Universities of Provence and Paris-VI. The only prerequisites demanded of the reader are elementary combinatorial theory and set theory. We lay emphasis on the semantic aspect of logic rather than on syntax; in other words, we are concerned with the connection between formulas and the multirelations, or models, which satisfy them. In this context considerable importance attaches to the theory of relations, which yields a novel approach and algebraization of many concepts of logic. The present two-volume edition considerably widens the scope of the original French one-volume edition (1967: Relation, Formule logique, Compacite, Completude). The new Volume 1 (1971: Relation et Formule logique) reproduces the old Chapters 1, 2, 3, 4, 5 and 8, redivided as follows: Word, formula (Chapter 1), Connection (Chapter 2), Relation, operator (Chapter 3), Free formula (Chapter 4), Logical formula, denumerable-model theorem (Löwenheim-Skolem) (Chapter 5), Completeness theorem (Gödel-Herbrand) and Interpolation theorem (Craig-Lyndon) (Chapter 6), Interpretability of relations (Chapter 7).
Information is precious. It reduces our uncertainty in making decisions. Knowledge about the outcome of an uncertain event gives the possessor an advantage. It changes the course of lives, nations, and history itself. Information is the food of Maxwell's demon. His power comes from knowing which particles are hot and which particles are cold. His existence was paradoxical to classical physics and only the realization that information too was a source of power led to his taming. Information has recently become a commodity, traded and sold like orange juice or hog bellies. Colleges give degrees in information science and information management. Technology of the computer age has provided access to information in overwhelming quantity. Information has become something worth studying in its own right. The purpose of this volume is to introduce key developments and results in the area of generalized information theory, a theory that deals with uncertainty-based information within mathematical frameworks that are broader than classical set theory and probability theory. The volume is organized as follows.
This monograph provides a thorough analysis of two important formalisms for nonmonotonic reasoning: default logic and modal nonmonotonic logics. It is also shown how they are related to each other and how they provide the formal foundations for logic programming. The discussion is rigorous, and all main results are formally proved. Many of the results are deep and surprising, some of them previously unpublished. The book has three parts, on default logic, modal nonmonotonic logics, and connections and complexity issues, respectively. The study of general default logic is followed by a discussion of normal default logic and its connections to the closed world assumption, and also a presentation of related aspects of logic programming. The general theory of the family of modal nonmonotonic logics introduced by McDermott and Doyle is followed by studies of autoepistemic logic, the logic of reflexive knowledge, and the logic of pure necessitation, and also a short discussion of algorithms for computing knowledge and belief sets. The third part explores connections between default logic and modal nonmonotonic logics and contains results on the complexity of nonmonotonic reasoning. The ideas are presented with an elegance and unity of perspective that set a new standard of scholarship for books in this area, and the work indicates that the field has reached a very high level of maturity and sophistication. The book is intended as a reference on default logic, nonmonotonic logics, and related computational issues, and is addressed to researchers, programmers, and graduate students in the Artificial Intelligence community.
This is the first book presenting cardinality theory of fuzzy sets with triangular norms, including its scalar and "fuzzy" streams. This theory constitutes not only a powerful basis but also a useful tool for modelling and processing vague and imprecise quantitative information. The multiple application areas of the theory encompass computer science, soft computing, computing with words, and decision-making. Starting with a presentation of the fundamentals of triangular norms and fuzzy set theory, the book offers a self-contained, concise and systematic exposition of cardinalities of fuzzy sets that includes many examples.
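To give a flavour of the scalar stream of this theory, a small illustration follows. The membership values, the example sets, and the helper names are all hypothetical; the sketch shows only the standard sigma-count (sum of membership degrees) and pointwise intersections under three classical triangular norms, not the book's own notation or full framework.

```python
# Illustrative sketch: scalar cardinality (sigma-count) of finite fuzzy
# sets, with intersections formed under different triangular norms.

def t_min(a, b):          # Gödel (minimum) t-norm
    return min(a, b)

def t_product(a, b):      # product t-norm
    return a * b

def t_lukasiewicz(a, b):  # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def sigma_count(fuzzy_set):
    """Scalar cardinality: sum of the membership degrees."""
    return sum(fuzzy_set.values())

def intersect(f, g, t_norm):
    """Pointwise intersection of two fuzzy sets under a t-norm."""
    return {x: t_norm(f.get(x, 0.0), g.get(x, 0.0)) for x in set(f) | set(g)}

# Hypothetical fuzzy sets over a three-element universe.
tall = {"ann": 0.9, "bob": 0.6, "cid": 0.2}
young = {"ann": 0.5, "bob": 0.8, "cid": 1.0}

print(sigma_count(tall))                               # 0.9 + 0.6 + 0.2 = 1.7
print(sigma_count(intersect(tall, young, t_min)))      # 0.5 + 0.6 + 0.2 = 1.3
print(sigma_count(intersect(tall, young, t_product)))  # 0.45 + 0.48 + 0.2 = 1.13
```

Note how the cardinality of the intersection depends on the chosen t-norm; this sensitivity is one reason a systematic theory of cardinalities of fuzzy sets with triangular norms is needed.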
This book presents a collection of contributions ranging from related logics to applied paraconsistency. All of them are dedicated to Jair Minoro Abe, on the occasion of his sixtieth birthday. He is one of the experts in Paraconsistent Engineering, who developed the so-called annotated logics. The book includes important contributions on foundations and applications of paraconsistent logics in connection with engineering, mathematical logic, philosophical logic, computer science, physics, economics, and biology. It will be of interest to students and researchers who are working on engineering and logic.
This book features a unique approach to the teaching of mathematical logic by putting it in the context of the puzzles and paradoxes of common language and rational thought. It serves as a bridge from the author's puzzle books to his technical writing in the fascinating field of mathematical logic. Using the logic of lying and truth-telling, the author introduces readers to informal reasoning, preparing them for the formal study of symbolic logic, from propositional logic to first-order logic, a subject that has many important applications to philosophy, mathematics, and computer science. The book includes a journey through the amazing labyrinths of infinity, which have stirred the imagination of mankind as much, if not more, than any other subject.
Many philosophers have considered logical reasoning as an inborn ability of mankind and as a distinctive feature in the human mind; but we all know that the distribution of this capacity, or at any rate its development, is very unequal. Few people are able to set up a cogent argument; others are at least able to follow a logical argument and even to detect logical fallacies. Nevertheless, even among educated persons there are many who do not even attain this relatively modest level of development. According to my personal observations, lack of logical ability may be due to various circumstances. In the first place, I mention lack of general intelligence, insufficient power of concentration, and absence of formal education. Secondly, however, I have noticed that many people are unable, or sometimes rather unwilling, to argue ex hypothesi; such persons cannot, or will not, start from premisses which they know or believe to be false or even from premisses whose truth is not, in their opinion, sufficiently warranted. Or, if they agree to start from such premisses, they sooner or later stray away from the argument into attempts first to settle the truth or falsehood of the premisses. Presumably this attitude results either from lack of imagination or from undue moral rectitude. On the other hand, proficiency in logical reasoning is not in itself a guarantee for a clear theoretic insight into the principles and foundations of logic.
The main idea of statistical convergence is to demand convergence only for a majority of elements of a sequence. This method of convergence has been investigated in many fundamental areas of mathematics such as: measure theory, approximation theory, fuzzy logic theory, summability theory, and so on. In this monograph we consider this concept in approximating a function by linear operators, especially when the classical limit fails. The results of this book not only cover the classical and statistical approximation theory, but also are applied in the fuzzy logic via the fuzzy-valued operators. The authors in particular treat the important Korovkin approximation theory of positive linear operators in statistical and fuzzy sense. They also present various statistical approximation theorems for some specific real and complex-valued linear operators that are not positive. This is the first monograph in Statistical Approximation Theory and Fuzziness. The chapters are self-contained, and several advanced courses can be taught from them. The research findings will be useful in various applications including applied and computational mathematics, stochastics, engineering, artificial intelligence, vision and machine learning. This monograph is directed to graduate students, researchers, practitioners and professors of all disciplines.
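The idea of demanding convergence only for a majority of elements can be made concrete with a toy numerical check (the example sequence is mine, not from the monograph): take x_n = n when n is a perfect square and x_n = 0 otherwise. The sequence is unbounded, so it has no classical limit, yet it converges statistically to 0 because the squares have natural density zero.

```python
# Toy illustration of statistical convergence: the proportion of "bad"
# indices n <= N (those with |x_n - 0| >= eps) tends to 0 as N grows,
# even though x_n itself is unbounded along the squares.

import math

def x(n):
    """x_n = n if n is a perfect square, else 0."""
    r = math.isqrt(n)
    return n if r * r == n else 0.0

def bad_density(N, eps=0.5):
    """Proportion of indices n <= N with |x_n - 0| >= eps."""
    bad = sum(1 for n in range(1, N + 1) if abs(x(n)) >= eps)
    return bad / N

for N in (100, 10_000, 1_000_000):
    print(N, bad_density(N))  # density of bad indices shrinks as N grows
```

Since there are exactly isqrt(N) squares up to N, the bad proportion is isqrt(N)/N, which vanishes as N grows; this is precisely what makes 0 the statistical limit despite the classical limit failing.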
This book is concerned with advances in serial-data computational architectures, and the CAD tools for their implementation in silicon. The bit-serial tradition at Edinburgh University (EU) stretches back some 6 years to the conception of the FIRST silicon compiler. FIRST owes much of its inspiration to Dick Lyon, then at Xerox PARC, who proposed a 'structured-design' methodology for construction of signal processing systems from bit-serial building blocks. Based on an nMOS cell-library, FIRST automates much of Lyon's physical design process. More recently, we began to feel that FIRST should be able to exploit more modern technologies. Before this could be achieved, we were faced with a massive manual re-design task, i.e. the porting of the FIRST cell-library to a new technology. As it was to avoid such tasks that FIRST was conceived in the first place, we decided to move the level of user-specification much nearer to the silicon level (while still hiding details of transistor circuit design, place and route etc., from the user), and by so doing, enable the specification of more functionally powerful libraries in technology-free form. The results of this work are in evidence as advances in serial-data design techniques, and the SECOND silicon compiler, introduced later in this book. These achievements could not have been accomplished without help from various sources. We take this opportunity to thank Profs.
Towards the end of the nineteenth century, Frege gave us the abstraction principles and the general notion of functions. Self-application of functions was at the heart of Russell's paradox. This led Russell to introduce type theory in order to avoid the paradox. Since then, the twentieth century has seen an amazing number of theories concerned with types and functions and many applications. Progress in computer science also meant more and more emphasis on the use of logic, types and functions to study the syntax, semantics, design and implementation of programming languages and theorem provers, and the correctness of proofs and programs. The authors of this book have themselves been leading the way by providing various extensions of type theory which have been shown to bring many advantages. This book gathers much of their influential work and is highly recommended for anyone interested in type theory. The main emphasis is on:
Mathematics is often considered as a body of knowledge that is essentially independent of linguistic formulations, in the sense that, once the content of this knowledge has been grasped, there remains only the problem of professional ability, that of clearly formulating and correctly proving it. However, the question is not so simple, and P. Weingartner's paper (Language and Coding-Dependency of Results in Logic and Mathematics) deals with some results in logic and mathematics which reveal that certain notions are in general not invariant with respect to different choices of language and of coding processes. Five examples are given: 1) The validity of axioms and rules of classical propositional logic depends on the interpretation of sentential variables; 2) The language dependency of verisimilitude; 3) The proof of the weak and strong anti-inductivist theorems in Popper's theory of inductive support is not invariant with respect to limitative criteria put on classical logic; 4) The language-dependency of the concept of provability; 5) The language dependency of the existence of ungrounded and paradoxical sentences (in the sense of Kripke). The requirements of logical rigour and consistency are not the only criteria for the acceptance and appreciation of mathematical propositions and theories.