This is the second volume of a two-volume collection on Structural Complexity. It assumes as a prerequisite familiarity with the topics treated in Volume I, but is otherwise nearly self-contained. As in Volume I, each chapter of this book ends with a section entitled "Bibliographical Remarks", in which the relevant references for the chapter are briefly discussed. These sections should also interest readers who want an overview of the evolution of the field, as well as pointers to related results not included in the text. Each chapter includes a section of exercises, and the reader is encouraged to spend some time on them. Some results presented as exercises are occasionally used later in the text, and references are provided for the most interesting and most useful exercises. Exercises marked with a * are those whose solution, to the best of the authors' knowledge, has a certain degree of difficulty. Many topics from the field of Structural Complexity are not treated in depth, or not treated at all; the authors bear all responsibility for the choice of topics, which reflects their own interests. Many friends and colleagues have made suggestions or corrections. In particular, we would like to express our gratitude to Richard Beigel, Ron Book, Rafael Casas, Jozef Gruska, Uwe Schöning, Pekka Orponen, and Osamu Watanabe.
This volume has a dual significance for the ESPRIT Basic Research efforts towards forging strong links between European academic and industrial teams carrying out research, often interdisciplinary, at the forefront of Information Technology. Firstly, it constitutes the proceedings of the "Symposium on Computational Logic", held on the occasion of the 7th ESPRIT Conference Week in November 1990, whose organisation was inspired by the work of Basic Research Action 3012 (COMPULOG), a consortium which has attracted worldwide interest, with requests for collaboration throughout Europe, the US and Japan. The work of COMPULOG acts as a focal point of the symposium, which was broadened to cover the work of other eminent researchers in the field, thus providing a review of the state of the art in computational logic, new and important contributions to the field, and a vision of the future. Secondly, this volume is the first in an ESPRIT Basic Research Series of publications of research results. It is expected that the quality of content and broad distribution of this series will have a major impact in making the advances achieved accessible to the worlds of academic and industrial research alike. At this time, all ESPRIT Basic Research Actions have completed their first year, and it is most encouraging and stimulating to see the flow of results, such as the fine examples presented in this symposium.
This book corresponds to a mathematical course given in 1986/87 at the Louis Pasteur University, Strasbourg. It is primarily intended for graduate students. The necessary prerequisites are: a few standard definitions in set theory, the definition of the rational integers, some elementary facts of combinatorics (perhaps only Newton's binomial formula), some theorems of analysis at high-school level, and some elementary algebra (basic results about groups, rings, fields and linear algebra). An important place is given to exercises. These exercises are only rarely direct applications of the course; more often, they constitute complements to the text, and hints or references are mostly given so that the reader should be able to find solutions. Chapters one and two deal with elementary results of number theory, for example the Euclidean algorithm, the Chinese remainder theorem and Fermat's little theorem. These results are useful in themselves, but they also constitute a concrete introduction to some notions of abstract algebra (for example, Euclidean rings, principal rings, ...). Algorithms are given for arithmetical operations with long integers. The rest of the book, chapters 3 through 7, deals with polynomials. We give general results on polynomials over arbitrary rings. Then polynomials with complex coefficients are studied in chapter 4, including many estimates on the complex roots of polynomials; some of these estimates are very useful in the subsequent chapters.
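As a small illustration of the kind of algorithm named above (a minimal Python sketch; the function names and toy inputs are ours, not the book's), here are the Euclidean algorithm, its extended form, and a two-modulus Chinese remainder computation:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(r1, m1, r2, m2):
    """Chinese remainder theorem for coprime moduli: find the x (mod m1*m2)
    with x = r1 (mod m1) and x = r2 (mod m2)."""
    g, p, q = extended_gcd(m1, m2)   # m1*p + m2*q == 1
    assert g == 1, "moduli must be coprime"
    return (r1 * m2 * q + r2 * m1 * p) % (m1 * m2)

print(gcd(252, 198))   # 18
print(crt(2, 3, 3, 5)) # 8, since 8 = 2 (mod 3) and 8 = 3 (mod 5)
```

The extended form returns the Bézout coefficients, which is exactly what turns the Chinese remainder step into a one-line combination.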
This book grew out of my interest in what is common to three disciplines: mathematics, philosophy, and history. The origins of Zermelo's Axiom of Choice, as well as the controversy that it engendered, certainly lie in that intersection. Since the time of Aristotle, mathematics has been concerned alternately with its assumptions and with the objects, such as number and space, about which those assumptions were made. In the historical context of Zermelo's Axiom, I have explored both the vagaries and the fertility of this alternating concern. Though Zermelo's research has provided the focus for this book, much of it is devoted to the problems from which his work originated and to the later developments which, directly or indirectly, he inspired. A few remarks about format are in order. In this book a publication is indicated by a date after a name; so Hilbert 1926, 178 refers to page 178 of an article written by Hilbert, published in 1926, and listed in the bibliography.
Since its inception by Professor Lotfi Zadeh about 18 years ago, the theory of fuzzy sets has evolved in many directions, and is finding applications in a wide variety of fields in which the phenomena under study are too complex or too ill-defined to be analyzed by conventional techniques. Thus, by providing a basis for a systematic approach to approximate reasoning and inexact inference, the theory of fuzzy sets may well have a substantial impact on scientific methodology in the years ahead, particularly in the realms of psychology, economics, engineering, law, medicine, decision analysis, information retrieval, and artificial intelligence. This volume consists of 24 selected papers invited by the editor, Professor Paul P. Wang. These papers cover the theory and the applications of fuzzy sets in almost equal numbers. We are very fortunate to have Professor A. Kaufmann contribute an overview paper on the advances in fuzzy sets. One special feature of this volume is the strong participation of Chinese researchers in this area: Chinese mathematicians, scientists and engineers have made important contributions to the theory and applications of fuzzy sets over the past decade. However, not until the visits of Professor A. Kaufmann to China in 1974 and again in 1980 did the Western world become fully aware of this important work. Professor Paul Wang has now initiated the effort to document these important contributions in this volume and expose them to Western researchers.
This book is about some recent work in a subject usually considered part of "logic" and the "foundations of mathematics," but also having close connections with philosophy and computer science: namely, the creation and study of "formal systems for constructive mathematics." The general organization of the book is described in the "User's Manual" which follows this introduction, and the contents of the book are described in more detail in the introductions to Part One, Part Two, Part Three, and Part Four. This introduction has a different purpose; it is intended to provide the reader with a general view of the subject. This requires, to begin with, an elucidation of both the concepts mentioned in the phrase "formal systems for constructive mathematics." "Constructive mathematics" refers to mathematics in which, when you prove that a thing exists (having certain desired properties), you show how to find it. Proof by contradiction is the most common way of proving that something exists without showing how to find it: one assumes that nothing exists with the desired properties, and derives a contradiction. It was only in the last two decades of the nineteenth century that mathematicians began to exploit this method of proof in ways that nobody had previously done; that was made possible in part by the creation and development of set theory by Georg Cantor and Richard Dedekind.
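A classic illustration of the distinction (a standard textbook example, not an excerpt from this book) is the non-constructive proof that there exist irrational numbers $a$, $b$ with $a^b$ rational:

```latex
% Non-constructive existence proof: there are irrationals a, b
% with a^b rational. (Standard example, not from the book.)
Let $c = \sqrt{2}^{\sqrt{2}}$. Either $c$ is rational or it is not.
\begin{itemize}
  \item If $c$ is rational, take $a = b = \sqrt{2}$.
  \item If $c$ is irrational, take $a = c$ and $b = \sqrt{2}$; then
        $a^b = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}}
             = \sqrt{2}^{\,2} = 2$, which is rational.
\end{itemize}
```

The proof guarantees that a witness exists but never tells us which case actually holds, which is exactly what a constructive proof is required to do.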
Rough Sets and Data Mining: Analysis of Imprecise Data is an edited collection of research chapters on the most recent developments in rough set theory and data mining. The chapters cover a range of topics that focus on discovering dependencies among data and on reasoning about vague, uncertain and imprecise information. The authors have been careful to include fundamental research with explanations, as well as coverage of rough set tools that can be used for mining databases. The contributors include some of the leading scholars in the fields of rough sets, data mining, machine learning and other areas of artificial intelligence, among them Z. Pawlak, J. Grzymala-Busse, K. Slowinski, and others. Rough Sets and Data Mining: Analysis of Imprecise Data will be a useful reference work for rough set researchers, database designers and developers, and researchers new to the areas of data mining and rough sets.
The concept of likeness to truth, like that of truth itself, is fundamental to a realist conception of inquiry. To demonstrate this we need only make two rather modest realist assumptions: the truth doctrine (that the aim of an inquiry, as an inquiry, is the truth of some matter) and the progress doctrine (that one false theory may realise this aim better than another). Together these yield the conclusion that a false theory may be more truthlike, or closer to the truth, than another. It is the aim of this book to give a rigorous philosophical analysis of the concept of likeness to truth, and to examine the consequences, some of them no doubt surprising to those who have been unduly impressed by the (admittedly important) true/false dichotomy. Truthlikeness is not only a requirement of a particular philosophical outlook; it is as deeply embedded in common sense as the concept of truth. Everyone seems to be capable of grading various propositions, in different (hypothetical) situations, according to their closeness to the truth in those situations. And (if my experience is anything to go by) there is remarkable unanimity on these pretheoretical judgements. This is not proof that there is a single coherent concept underlying these judgements; the whole point of engaging in philosophical analysis is to make this claim plausible.
Problems in decision making and in other areas such as pattern recognition, control, structural engineering, etc. involve numerous aspects of uncertainty. Additional vagueness is introduced as models become more complex, but not necessarily more meaningful, by the added details. During the last two decades we have become more and more aware of the fact that not all this uncertainty is of stochastic (random) character, and that it therefore cannot be modelled appropriately by probability theory. This becomes the more obvious the more we want to represent human knowledge formally. As far as uncertain data are concerned, we have neither instruments nor reasoning at our disposal as well defined and unquestionable as those used in probability theory; that almost infallible domain is the result of a tremendous work by the whole scientific world. But when measures are dubious, bad or no longer possible, and when we really have to make use of the richness of human reasoning in all its variety, then the theories dealing with the treatment of uncertainty, some quite new and others older, provide the required complement and fill the gap left in the field of knowledge representation. Nowadays, various such theories are widely used: fuzzy sets, belief functions, the convenient associations between probability and fuzziness, etc. We are more and more in need of a wide range of instruments and theories to build models that are better and better adapted to the most complex systems.
The words "microdifferential systems in the complex domain" refer to several branches of mathematics: microlocal analysis, linear partial differential equations, algebra, and complex analysis. The microlocal point of view first appeared in the study of propagation of singularities of differential equations, and is now spreading to other fields of mathematics such as algebraic geometry or algebraic topology. However, it seems that many analysts neglect very elementary tools of algebra, which forces them to confine themselves to the study of a single equation or of particular square matrices, or to carry on with heavy and non-intrinsic formulations when studying more general systems. On the other hand, many algebraists ignore everything about partial differential equations, such as, for example, the "Cauchy problem," although it is a very natural and geometrical setting of the "inverse image." Our aim will be to present to the analyst the algebraic methods which naturally appear in such problems, and to make available to the algebraist some topics from the theory of partial differential equations, stressing its geometrical aspects. Keeping this goal in mind, one can only remain at an elementary level.
The Heyting '88 Summer School and Conference on Mathematical Logic, held September 13-23, 1988 in Chaika, Bulgaria, was dedicated to the 90th anniversary of Arend Heyting's birth. It was organized by Sofia University "Kliment Ohridski", on the occasion of its centenary, and by the Bulgarian Academy of Sciences, with the sponsorship of the Association for Symbolic Logic. The meeting gathered some 115 participants from 19 countries. The present volume consists of invited and selected papers: all the invited lectures submitted for publication, together with 14 contributions chosen out of 56 submissions by the Selection Committee on the basis of reports by committee members, an average of 4 per submission. All the papers concentrate on the topics of the meeting: recursion theory, modal and non-classical logics, intuitionism and constructivism, related applications to computer and other sciences, and the life and work of Arend Heyting. I am pleased to thank all persons and institutions that contributed to the success of the meeting: sponsors, Programme Committee members and additional referees, the members of the Organizing Committee, our secretaries K. Lozanova and L. Nikolova, as well as K. Angelov, V. Bozhichkova, A. Ditchev, D. Dobrev, N. Dimitrov, R. Draganova, G. Gargov, N. Georgieva, M. Janchev, P. Marinov, S. Nikolova, S. Radev, I. Soskov, A. Soskova and V. Sotirov, who helped in the organization, Plenum Press, and, last but not least, all participants in the meeting and contributors to this volume.
This volume presents the results of approximately 15 years of work from researchers around the world on the use of fuzzy set theory to represent imprecision in databases. The maturity of the research in the discipline and the recent developments in commercial/industrial fuzzy databases provided an opportunity to produce this survey. In this introduction we briefly describe how fuzzy databases fit into the overall design of database systems, and then give an overview of the organization of the text. The fuzzy database landscape: the last five years have been witness to a revolution in the database research community. The dominant data models have changed and the consensus on what constitutes worthwhile research is in flux. Also, at this time, it is possible to gain a perspective on what has been accomplished in the area of fuzzy databases. Therefore, now is an opportune time to take stock of the past and establish a framework. A framework should assist in evaluating future research through a better understanding of the different aspects of imprecision that a database can model [1].
It has been shown how the common structure that defines a family of proofs can be expressed as a proof plan [5]. This common structure can be exploited in the search for particular proofs. A proof plan has two complementary components: a proof method and a proof tactic. By prescribing the structure of a proof at the level of primitive inferences, a tactic [11] provides the guarantee part of the proof. In contrast, a method provides a more declarative explanation of the proof by means of preconditions. Each method has associated effects, and the execution of the effects simulates the application of the corresponding tactic. Theorem proving in the proof planning framework is a two-phase process: 1. Tactic construction proceeds by method composition: given a goal, an applicable method is selected. The applicability of a method is determined by evaluating the method's preconditions; the method's effects are then used to calculate subgoals. This process is applied recursively until no subgoals remain. Because of the one-to-one correspondence between methods and tactics, the output of this process is a composite tactic tailored to the given goal. 2. Tactic execution generates a proof in the object-level logic. Note that no search is involved in the execution of the tactic; all the search is taken care of during the planning process. The real benefits of having separate planning and execution phases become apparent when a proof attempt fails.
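A minimal Python sketch of the two-phase scheme just described (the Method representation, names, and toy goals are ours, illustrating the idea rather than any actual proof-planning system):

```python
# Sketch of proof planning: methods with preconditions and effects are
# composed into a plan (phase 1); the corresponding tactic is then
# executed without further search (phase 2). All names are illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Method:
    name: str
    applicable: Callable[[str], bool]     # precondition on a goal
    subgoals: Callable[[str], List[str]]  # effects: compute subgoals

def plan(goal: str, methods: List[Method]) -> List[str]:
    """Phase 1: method composition. Recursively select an applicable
    method and plan its subgoals; the result is a composite tactic,
    represented here simply as a list of method names."""
    for m in methods:
        if m.applicable(goal):
            tactic = [m.name]
            for sub in m.subgoals(goal):
                tactic += plan(sub, methods)
            return tactic
    raise ValueError(f"no applicable method for goal {goal!r}")

def execute(tactic: List[str]) -> None:
    """Phase 2: tactic execution. No search happens here; the plan has
    already fixed the sequence of inference steps."""
    for step in tactic:
        print("applying tactic:", step)

# Toy example: peel conjunctions apart, then close atomic goals.
split = Method("split",
               lambda g: "&" in g,
               lambda g: [s.strip() for s in g.split("&", 1)])
close = Method("close", lambda g: "&" not in g, lambda g: [])

execute(plan("p & (q & r)", [split, close]))
```

The separation is visible even in this toy: `plan` does all the searching over methods, while `execute` merely replays the composite tactic it produced.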
This volume is the outgrowth of a workshop with the same title held at MSRI in the week of November 13-17, 1989, and, for those who did not get it, Logic from Computer Science is the converse of Logic in Computer Science, the full name of the highly successful annual LICS conferences. We meant to have a conference which would bring together the LICS community with some of the more traditional "mathematical logicians", and where the emphasis would be on the flow of ideas from computer science to logic rather than the other way around. In a LICS talk, sometimes, the speaker presents a perfectly good theorem about (say) the λ-calculus or finite model theory in terms of its potential applications rather than its (often more obvious) intrinsic, foundational interest and intricate proof. This is not meant as a criticism; the LICS meetings are, after all, organized by the IEEE Computer Society. We thought, for once, it would be fun to see what we would get if we asked the speakers to emphasize the relevance of their work for logic rather than computer science, and to point out what is involved in the proofs. I think, mostly, it worked. In any case, the group of people represented as broad a selection of logicians as I have seen in recent years, and the quality of the talks was (in my view) exceptionally, unusually high. I learned a lot and (I think) others did too.
By North American standards, philosophy is not new in Quebec: the first mention of philosophy lectures given by a Jesuit in the College de Quebec (founded 1635) dates from 1665, and the oldest logic manuscript dates from 1679. In English-speaking universities such as McGill (founded 1829), philosophy began to be taught later, during the second half of the 19th century. The major influence on English-speaking philosophers was, at least initially, that of Scottish Empiricism. On the other hand, the strong influence of the Catholic Church on French-Canadian society meant that the staff of the facultés of the French-speaking universities consisted, until recently, almost entirely of Thomist philosophers. There was accordingly little or no work in modern Formal Logic and Philosophy of Science, and precious few contacts between the philosophical communities. In the late forties, Hugues Leblanc was a young student wanting to learn Formal Logic. He could not find anyone in Quebec to teach him, and he went to study at Harvard University under the supervision of W. V. Quine. His best friend Maurice L'Abbé had left, a year earlier, for Princeton to study with Alonzo Church. After receiving his Ph.D. from Harvard in 1948, Leblanc started his professional career at Bryn Mawr College, where he stayed until 1967. He then went to Temple University, where he taught until his retirement in 1992, serving as Chair of the Department of Philosophy from 1973 until 1979.
The present volume is the proceedings of the 14th International Conference on Nearrings and Nearfields, held in Hamburg at the Universität der Bundeswehr Hamburg, from July 30 to August 6, 1995. The conference was attended by 70 mathematicians, and many accompanying persons, representing 22 different countries from all five continents; it was thus the largest conference devoted entirely to nearrings and nearfields. The first of these conferences took place in 1968 at the Mathematisches Forschungsinstitut Oberwolfach, Germany, which was also the site of the conferences in 1972, 1976, 1980 and 1989. The other eight conferences held before the Hamburg conference took place in eight different countries. For details about this and, moreover, for a general historical overview of the development of the subject, we refer to the article "On the beginnings and development of near-ring theory" by G. Betsch [3]. During the last forty years the theory of nearrings and related algebraic structures like nearfields, nearmodules, nearalgebras and seminearrings has developed into an extensive branch of algebra with its own features. In its position between group theory and ring theory, this relatively young branch of algebra has not only a close relationship to these two better-known areas of algebra, but, just like them, it also has very intensive connections to many further branches of mathematics.
It is a pleasure and an honor to be able to present this collection of papers to Ray Reiter on the occasion of his 60th birthday. To say that Ray's research has had a deep impact on the field of Artificial Intelligence is a considerable understatement. Better to say that anyone thinking of doing work in areas like deductive databases, default reasoning, diagnosis, reasoning about action, and others should realize that they are likely to end up proving corollaries to Ray's theorems. Sometimes studying related work makes us think harder about the way we approach a problem; studying Ray's work is as likely to make us want to drop our way of doing things and take up his. This is because, more than a mere visionary, Ray has always been a true leader. He shows us how to proceed not by pointing from his armchair, but by blazing a trail himself, setting up camp, and waiting for the rest of us to arrive. The International Joint Conference on Artificial Intelligence clearly recognized this and awarded Ray its highest honor, the Research Excellence award, in 1993, before it had even finished acknowledging all the founders of the field. The papers collected here sample from many of the areas where Ray has done pioneering work. One of his earliest areas of application was databases, and this is reflected in the chapters by Bertossi et al. and the survey chapter by Minker.
The present text resulted from lectures given by the authors at the Rijksuniversiteit Utrecht. These lectures were part of a series on the 'History of Contemporary Mathematics'. The need for such an enterprise was generally felt, since the curriculum at many universities is designed to suit an efficient treatment of advanced subjects rather than to reflect the development of notions and techniques. As it is very likely that this trend will continue, we decided to offer lectures of a less technical nature to provide students and interested listeners with a survey of the history of topics in our present-day mathematics. We consider it very useful for a mathematician to have an acquaintance with the history of the development of his subject, especially in the nineteenth century, where the germs of many modern disciplines can be found. Our attention has therefore been directed mainly to relatively young developments. In the lectures we tried to stay clear of both oversimplification and extreme technicality. The result is a text that should not cause difficulties to a reader with a working knowledge of mathematics. The developments sketched in this book are fundamental for many areas of mathematics, and the notions considered are crucial almost everywhere. The book may be most useful, in particular, for those teaching mathematics.
Mathematical Foundations of Computer Science, Volume I is the first of two volumes presenting topics from mathematics (mostly discrete mathematics) which have proven relevant and useful to computer science. This volume treats basic topics, mostly of a set-theoretical nature (sets, functions and relations, partially ordered sets, induction, enumerability, and diagonalization), and illustrates the usefulness of mathematical ideas by presenting applications to computer science. Readers will find useful applications in algorithms, databases, semantics of programming languages, formal languages, theory of computation, and program verification. The material is treated in a straightforward, systematic, and rigorous manner. The volume is organized by mathematical area, making the material easily accessible to upper-undergraduate students in mathematics as well as in computer science, and each chapter contains a large number of exercises. The volume can be used as a textbook, but it will also be useful to researchers and professionals who want a thorough presentation of the mathematical tools they need in a single source. In addition, the book can be used effectively as supplementary reading material in computer science courses, particularly those which involve the semantics of programming languages, formal languages and automata, and logic programming.
Drinfeld Moduli Schemes and Automorphic Forms: The Theory of Elliptic Modules with Applications is based on the author's original work establishing the correspondence between ℓ-adic rank r Galois representations and automorphic representations of GL(r) over a function field, in the local case, and, in the global case, under a restriction at a single place. It develops Drinfeld's theory of elliptic modules, their moduli schemes and covering schemes, the simple trace formula, the fixed point formula, as well as the congruence relations and a "simple" converse theorem, not yet published anywhere. This version, based on a recent course taught by the author at The Ohio State University, is updated with references to research that has extended and developed the original work. The use of the theory of elliptic modules makes the present work accessible to graduate students, and it will serve as a valuable resource facilitating an entrance to this fascinating area of mathematics.
This is the first book-length treatment of hybrid logic and its proof-theory. Hybrid logic is an extension of ordinary modal logic which allows explicit reference to individual points in a model (where the points represent times, possible worlds, states in a computer, or something else). This is useful for many applications: for example, when reasoning about time, one often wants to formulate a series of statements about what happens at specific times. There is little consensus about proof-theory for ordinary modal logic: many modal-logical proof systems lack important properties, and the relationships between proof systems for different modal logics are often unclear. In the present book we demonstrate that hybrid-logical proof-theory remedies these deficiencies by giving a spectrum of well-behaved proof systems (natural deduction, Gentzen, tableau, and axiom systems) for a spectrum of different hybrid logics (propositional, first-order, intensional first-order, and intuitionistic).
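A small illustration of the mechanism described (standard hybrid-logic notation, not an excerpt from the book): a nominal $i$ is a special atom true at exactly one point of the model, and the satisfaction operator $@_i$ shifts evaluation to that point.

```latex
% Standard hybrid-logic notation (illustrative, not from the book):
% a nominal i names exactly one point; @_i phi says phi holds there.
\[
  \mathcal{M}, w \models @_i \varphi
  \quad\Longleftrightarrow\quad
  \mathcal{M}, v \models \varphi ,
  \text{ where } v \text{ is the unique point with }
  \mathcal{M}, v \models i .
\]
% Example: "at time i it is raining" becomes $@_i\,\mathit{rain}$,
% a statement about a specific point that ordinary modal logic
% cannot express directly.
```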
This book constitutes the proceedings of the 12th Biennial Meeting on Mathematics in Language, MOL 12, held in Nara, Japan, in September 2011. Presented in this volume are 12 carefully selected papers, as well as the paper of the invited speaker Andreas Maletti. The papers cover such diverse topics as formal languages (string and tree transducers, grammar-independent syntactic structures, probabilistic and weighted context-free grammars, formalization of minimalist syntax), parsing and unification, lexical and compositional semantics, statistical language models, and theories of truth.
Resolution Proof Systems: An Algebraic Theory presents a new algebraic framework for the design and analysis of resolution-based automated reasoning systems for a range of non-classical logics. It develops an algebraic theory of resolution proof systems, focusing on the problems of proof theory, representation, and efficiency of the deductive process. A new class of logical calculi, the class of resolution logics, emerges as a second theme of the book. The logical and computational aspects of the relationship between resolution logics and resolution proof systems are explored in the context of monotonic as well as nonmonotonic reasoning. The book is aimed primarily at researchers and graduate students in artificial intelligence and in symbolic and computational logic. The material is suitable as a reference for researchers and as a textbook for graduate courses on the theoretical aspects of automated reasoning and computational logic.
Assuming that the reader is familiar with sheaf theory, the book gives a self-contained introduction to the theory of constructible sheaves related to many kinds of singular spaces, such as cell complexes, triangulated spaces, semialgebraic and subanalytic sets, complex algebraic or analytic sets, stratified spaces, and quotient spaces. The relations to the underlying geometrical ideas are worked out in detail, together with many applications to the topology of such spaces. Each chapter has its own detailed introduction containing the main results and definitions, illustrated in simple terms by a number of examples. The technical details of the proofs are postponed to later sections, since they are not needed for the applications.