As of today, Evolutionary Computing and Fuzzy Set Computing are two mature, well-developed, and highly advanced technologies of information processing. Each of them has its own clearly defined research agenda, specific goals to be achieved, and a well-settled algorithmic environment. Concisely speaking, Evolutionary Computing (EC) is aimed at a coherent, population-oriented methodology of structural and parametric optimization of a diversity of systems. In addition to this broad spectrum of optimization applications, the paradigm offers an important ability to cope with realistic goals and design objectives, reflected in the form of relevant fitness functions. GA search (often regarded as the dominant domain among other EC techniques such as evolutionary strategies, genetic programming, and evolutionary programming) delivers a great deal of efficiency in navigating large search spaces. The main thrust of fuzzy sets is in representing and managing nonnumeric (linguistic) information. The key notion, whose conceptual as well as algorithmic importance has grown in recent years, is that of information granularity. It concurs with the principle of incompatibility coined by L. A. Zadeh. Fuzzy sets form a vehicle for expressing the granular character of the information to be captured. Once quantified via fuzzy sets or fuzzy relations, domain knowledge can be used efficiently, very often reducing the heavy computational burden of analyzing and optimizing complex systems.
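To make the GA search idea above concrete, here is a minimal sketch of a genetic algorithm on the toy OneMax problem, where fitness is simply the number of 1-bits; the problem, parameters, and helper names are illustrative assumptions and are not taken from the book.

```python
import random

# Minimal genetic-algorithm sketch for the toy OneMax problem:
# maximise the number of 1-bits in a fixed-length bit string.
# All parameter choices below are illustrative assumptions.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # OneMax: count of 1-bits

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)  # best of k random individuals

def crossover(a, b):
    cut = random.randrange(1, len(a))                # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

Even this toy run shows the ingredients the blurb refers to: a population, a fitness function encoding the design objective, and selection/crossover/mutation steering the search through a large space of candidate solutions.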
We dedicate this volume to Professor Parimala on the occasion of her 60th birthday. It contains a variety of papers related to the themes of her research. Parimala's first striking result was a counterexample to a quadratic analogue of Serre's conjecture (Bulletin of the American Mathematical Society, 1976). Her influence has continued through her tenure at the Tata Institute of Fundamental Research in Mumbai (1976-2006), and now her time at Emory University in Atlanta (2005-present). A conference was held from 30 December 2008 to 4 January 2009, at the University of Hyderabad, India, to celebrate Parimala's 60th birthday (see the conference's Web site at http://mathstat.uohyd.ernet.in/conf/quadforms2008). The organizing committee consisted of J.-L. Colliot-Thélène, Skip Garibaldi, R. Sujatha, and V. Suresh. The present volume is an outcome of this event. We would like to thank all the participants of the conference, the authors who have contributed to this volume, and the referees who carefully examined the submitted papers. We would also like to thank Springer-Verlag for readily agreeing to publish the volume. In addition, the other three editors of the volume would like to place on record their deep appreciation of Skip Garibaldi's untiring efforts toward the final publication.
The importance of having efficient and effective methods for data mining and knowledge discovery (DM&KD), to which the present book is devoted, grows every day, and numerous such methods have been developed in recent decades. There exists a great variety of different settings for the main problem studied by data mining and knowledge discovery, and a very popular one is formulated in terms of binary attributes. In this setting, states of nature of the application area under consideration are described by Boolean vectors defined on some attributes, that is, by data points defined in the Boolean space of the attributes. It is postulated that there exists a partition of this space into two classes, which should be inferred as patterns on the attributes when only a few data points are known, the so-called positive and negative training examples. The main problem in DM&KD is defined as finding rules for recognizing (classifying) new data points of unknown class, i.e., deciding which of them are positive and which are negative. In other words, the task is to infer the binary value of one more attribute, called the goal or class attribute. To solve this problem, methods have been suggested which construct a Boolean function separating the two given sets of positive and negative training data points.
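As a concrete toy illustration of this setting (not a method taken from the book), the sketch below infers a simple conjunctive rule from made-up positive Boolean examples and checks that it also rejects the negative ones.

```python
# Toy illustration of the binary-attribute setting: data points are Boolean
# vectors, and we look for a conjunction of attributes that separates the
# positive examples from the negative ones. The example data are made up.
positives = [(1, 1, 0, 1), (1, 1, 1, 1), (1, 1, 0, 0)]
negatives = [(0, 1, 0, 1), (1, 0, 1, 0), (0, 0, 0, 0)]

# Keep only the attributes that are 1 in *every* positive example;
# the candidate rule is the conjunction (AND) of those attributes.
rule = [i for i in range(len(positives[0])) if all(p[i] for p in positives)]

def classify(point):
    return all(point[i] for i in rule)   # positive iff all rule attributes are set

separates = (all(classify(p) for p in positives)
             and not any(classify(n) for n in negatives))
print("rule: AND of attributes", rule, "| separates training data:", separates)
```

Here the inferred rule plays the role of the Boolean separating function mentioned above; real DM&KD methods search much richer classes of Boolean patterns than a single conjunction.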
Text Retrieval and Filtering: Analytical Models of Performance is the first book that addresses the problem of analytically computing the performance of retrieval and filtering systems. The book describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case of queries with multiple terms (both with term independence and with term dependence), and the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. Text Retrieval and Filtering: Analytical Models of Performance focuses on the performance of systems that retrieve natural language text, considering full sentences as well as phrases and individual words. The last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The book builds toward solving this problem, although the material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists. Those interested in grammatical information should be cautioned to carefully examine earlier chapters, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. Text Retrieval and Filtering: Analytical Models of Performance is suitable as a secondary text for a graduate-level course on Information Retrieval or Linguistics, and as a reference for researchers and practitioners in industry.
Industrial development is essential to improving the standard of living in all countries. Yet people's health and the environment can be affected, directly or indirectly, by routine waste discharges or by accidents. A series of recent major industrial accidents, and the effects of pollution, have highlighted once again the need for better management of routine and accidental risks. Moreover, the existence of natural hazards complicates the situation even further in any given region. In the past, efforts to cope with these risks, if made at all, have been largely on a plant-by-plant basis; some plants are well equipped to manage environmental and health hazards, while others are not. Managing the hazards of modern technological systems has become a key activity in highly industrialised countries. Decision makers are often confronted with complex issues concerning economic and social development, industrialisation and associated infrastructure needs, population and land use planning. Such issues have to be addressed in a way that ensures that public health will not be disrupted or substantially degraded. Owing to the increasing complexity of technological systems and the higher geographical density of point hazard sources, new methodologies and a novel approach to these problems are challenging risk managers and regional planners. Risks from these new complex technological systems are inherently different from those addressed by risk managers decades ago.
In this monograph we study two generalizations of standard unification, E-unification and higher-order unification, using an abstract approach originated by Herbrand and developed in the case of standard first-order unification by Martelli and Montanari. The formalism presents the unification computation as a set of non-deterministic transformation rules for converting a set of equations to be unified into an explicit representation of a unifier (if such exists). This provides an abstract and mathematically elegant means of analysing the properties of unification in various settings by providing a clean separation of the logical issues from the specification of procedural information, and amounts to a set of 'inference rules' for unification, hence the title of this book. We derive the set of transformations for general E-unification and higher-order unification from an analysis of the sense in which terms are 'the same' after application of a unifying substitution. In both cases, this results in a simple extension of the set of basic transformations given by Herbrand and Martelli-Montanari for standard unification, and shows clearly the basic relationships of the fundamental operations necessary in each case, and thus the underlying structure of the most important classes of term unification problems.
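For the standard first-order case from which this rule-based view starts, a minimal sketch of the delete/decompose/orient/eliminate transformations might look as follows; the term representation and helper names here are illustrative assumptions, not the book's notation.

```python
# Sketch of first-order unification as equation transformations in the
# Martelli-Montanari style: delete, decompose, orient, eliminate.
# Representation (an assumption of this sketch): variables are plain strings,
# compound terms and constants are tuples ("f", arg1, ..., argN).

def is_var(t):
    return isinstance(t, str)

def occurs(v, t):
    return t == v if is_var(t) else any(occurs(v, a) for a in t[1:])

def substitute(t, v, s):
    if is_var(t):
        return s if t == v else t
    return (t[0],) + tuple(substitute(a, v, s) for a in t[1:])

def unify(equations):
    solved, eqs = {}, list(equations)
    while eqs:
        left, right = eqs.pop()
        if left == right:                                    # delete
            continue
        if is_var(right) and not is_var(left):               # orient
            left, right = right, left
        if is_var(left):                                     # eliminate
            if occurs(left, right):
                return None                                  # occurs-check failure
            eqs = [(substitute(a, left, right), substitute(b, left, right)) for a, b in eqs]
            solved = {v: substitute(t, left, right) for v, t in solved.items()}
            solved[left] = right
        elif left[0] == right[0] and len(left) == len(right):  # decompose
            eqs.extend(zip(left[1:], right[1:]))
        else:
            return None                                      # symbol clash: no unifier
    return solved

# f(X, g(a)) =? f(g(Y), g(Y))  should yield X -> g(a), Y -> a
print(unify([(("f", "X", ("g", ("a",))), ("f", ("g", "Y"), ("g", "Y")))]))
```

The E-unification and higher-order cases studied in the monograph extend this same small rule set rather than replacing it, which is exactly the point the blurb makes.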
Fuzzy data such as marks, scores, verbal evaluations, imprecise observations, experts' opinions and grey-tone pictures are quite common. In Fuzzy Data Analysis the authors collect their recent results, providing the reader with ideas, approaches and methods for processing such data when looking for sub-structures in knowledge bases for the evaluation of functional relationships, e.g. in order to specify diagnostic or control systems. The modelling presented uses ideas from fuzzy set theory, and the suggested methods solve problems usually tackled by data analysis when the data are real numbers. Fuzzy Data Analysis is self-contained and is addressed to mathematicians oriented towards applications and to practitioners in any field of application who have some background in mathematics and statistics.
Call-by-push-value is a programming language paradigm that, surprisingly, breaks down the call-by-value and call-by-name paradigms into simple primitives. This monograph, written for graduate students and researchers, exposes the call-by-push-value structure underlying a remarkable range of semantics, including operational semantics, domains, possible worlds, continuations and games.
The theory of constructive (recursive) models follows from the work of Froehlich, Shepherdson, Mal'tsev, Kuznetsov, Rabin, and Vaught in the 1950s. Within the framework of this theory, algorithmic properties of abstract models are investigated by constructing representations on the set of natural numbers and studying relations between algorithmic and structural properties of these models. This book is a very readable exposition of the modern theory of constructive models and describes methods and approaches developed by representatives of the Siberian school of algebra and logic and some other researchers (in particular, Nerode and his colleagues). The main themes are the existence of recursive models and applications to fields, algebras, and ordered sets (Ershov), the existence of decidable prime models (Goncharov, Harrington), the existence of decidable saturated models (Morley), the existence of decidable homogeneous models (Goncharov and Peretyat'kin), properties of Ehrenfeucht theories (Millar, Ash, and Reed), the theory of algorithmic dimension and conditions of autostability (Goncharov, Ash, Shore, Khusainov, Ventsov, and others), and the theory of computable classes of models with various properties. Future perspectives of the theory of constructive models are also discussed. Most of the results in the book are presented in monograph form for the first time. The theory of constructive models serves as a basis for recursive mathematics. It is also useful in computer science, in particular in the study of programming languages, higher-level languages of specification, abstract data types, and problems of synthesis and verification of programs. Therefore, the book will be useful not only for specialists in mathematical logic and the theory of algorithms but also for scientists interested in the mathematical foundations of computer science. The authors are eminent specialists in mathematical logic. They have established fundamental results on elementary theories, model theory, the theory of algorithms, field theory, group theory, applied logic, computable numberings, the theory of constructive models, and theoretical computer science.
Model theory has made substantial contributions to semialgebraic, subanalytic, p-adic, rigid and diophantine geometry. These applications range from a proof of the rationality of certain Poincaré series associated to varieties over p-adic fields, to a proof of the Mordell-Lang conjecture for function fields in positive characteristic. In some cases (such as the latter) it is the most abstract aspects of model theory that are relevant. This book, originally published in 2000 and arising from a series of introductory lectures for graduate students, provides the necessary background to understanding both the model theory and the mathematics behind these applications. The book is unique in that the whole spectrum of contemporary model theory (stability, simplicity, o-minimality and variations) is covered and diverse areas of geometry (algebraic, diophantine, real analytic, p-adic, and rigid) are introduced and discussed, all by leading experts in their fields.
Since their inception, fuzzy sets and fuzzy logic have become popular. The reason is that the very idea of fuzzy sets and fuzzy logic attacks an old tradition in science, namely bivalent (black-or-white, all-or-none) judgment and reasoning, and the resulting approach to the formation of scientific theories and models of reality. The idea of fuzzy logic, briefly speaking, is just the opposite of this tradition: instead of full truth and falsity, our judgment and reasoning also involve intermediate truth values. Application of this idea to various fields has become known under the term fuzzy approach (or graded truth approach). Both practice (many successful engineering applications) and theory (interesting nontrivial contributions and broad interest of mathematicians, logicians, and engineers) have proven the usefulness of the fuzzy approach. One of the most successful areas of fuzzy methods is the application of fuzzy relational modeling. Fuzzy relations represent formal means for modeling rather nontrivial phenomena (reasoning, decision, control, knowledge extraction, systems analysis and design, etc.) in the presence of a particular kind of indeterminacy called vagueness. Models and methods based on fuzzy relations are often described by logical formulas (or by natural language statements that can be translated into logical formulas). Therefore, in order to approach these models and methods in an appropriate formal way, it is desirable to have a general theory of fuzzy relational systems with basic connections to (formal) language which enables us to describe relationships in these systems.
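As a small, hypothetical illustration of fuzzy relational modeling (not an example from the book), the sketch below composes two fuzzy relations with the standard sup-min (max-min) composition, using invented membership degrees.

```python
# Sup-min (max-min) composition of two fuzzy relations, a standard
# operation in fuzzy relational modelling. Membership degrees below
# are made-up illustrative values in [0, 1].
symptoms = ["fever", "cough"]
patients = ["p1", "p2"]
diseases = ["flu", "cold"]

# R: patient x symptom, S: symptom x disease
R = {("p1", "fever"): 0.9, ("p1", "cough"): 0.4,
     ("p2", "fever"): 0.2, ("p2", "cough"): 0.8}
S = {("fever", "flu"): 0.8, ("fever", "cold"): 0.3,
     ("cough", "flu"): 0.5, ("cough", "cold"): 0.9}

# (R o S)(x, z) = max over y of min(R(x, y), S(y, z))
composed = {(p, d): max(min(R[(p, s)], S[(s, d)]) for s in symptoms)
            for p in patients for d in diseases}

for (p, d), degree in sorted(composed.items()):
    print(f"{p} -- {d}: {degree:.2f}")
```

The composed relation expresses, in graded terms, how strongly each patient is associated with each disease; this is the kind of vague, intermediate-truth-value relationship the paragraph above describes.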
This monograph covers the recent major advances in various areas of set theory. From the reviews: "One of the classical textbooks and reference books in set theory....The present Third Millennium edition...is a whole new book. In three parts the author offers us what in his view every young set theorist should learn and master....This well-written book promises to influence the next generation of set theorists, much as its predecessor has done." --MATHEMATICAL REVIEWS
From the Introduction: "We shall base our discussion on a set-theoretical foundation like that used in developing analysis, or algebra, or topology. We may consider our task as that of giving a mathematical analysis of the basic concepts of logic and mathematics themselves. Thus we treat mathematical and logical practice as given empirical data and attempt to develop a purely mathematical theory of logic abstracted from these data." There are 31 chapters in 5 parts and approximately 320 exercises marked by difficulty and whether or not they are necessary for further work in the book.
Chapter 1. The algebraic prerequisites for the book are covered here and in the appendix. This chapter should be used as reference material and should be consulted as needed. A systematic treatment of algebras, coalgebras, bialgebras, Hopf algebras, and representations of these objects, to the extent needed for the book, is given. The material here not specifically cited can be found for the most part in [Sweedler, 1969] in one form or another, with a few exceptions. A great deal of emphasis is placed on the coalgebra which is the dual of n x n matrices over a field. This is the most basic example of a coalgebra for our purposes and is at the heart of most algebraic constructions described in this book. We have found pointed bialgebras useful in connection with solving the quantum Yang-Baxter equation. For this reason we develop their theory in some detail. The class of examples described in Chapter 6 in connection with the quantum double consists of pointed Hopf algebras. We note that the quantized enveloping algebras described elsewhere are pointed Hopf algebras. Thus, for many reasons, pointed bialgebras are of fundamental interest in the study of the quantum Yang-Baxter equation and quantum groups.
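For reference, the coalgebra dual to the n x n matrices over a field, emphasized above, has the following standard structure maps; this is classical background material, stated here rather than quoted from the book. Taking a basis $\{E_{ij}\}_{1 \le i, j \le n}$ dual to the matrix units, one has

```latex
% Comultiplication and counit of the comatrix coalgebra, the dual of the
% algebra of n x n matrices over a field:
\Delta(E_{ij}) \;=\; \sum_{\ell = 1}^{n} E_{i\ell} \otimes E_{\ell j},
\qquad
\varepsilon(E_{ij}) \;=\; \delta_{ij}.
```

Coassociativity of this comultiplication is just the dual statement of the associativity of matrix multiplication, which is one reason the example is so central.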
This book presents a unifying framework for using priority arguments to prove theorems in computability. Priority arguments provide the most powerful theorem-proving technique in the field, but most applications of this technique are ad hoc, masking the unifying principles used in the proofs. The framework presented here isolates many of these unifying combinatorial principles and uses them to give shorter and easier-to-follow proofs of computability-theoretic theorems. Standard theorems of priority levels 1, 2, and 3 are chosen to demonstrate the framework's use, with all proofs following the same pattern. The last section features a new example requiring priority at all finite levels. The book will serve as a resource and reference for researchers in logic and computability, helping them to prove theorems in a shorter and more transparent manner.
Geometric properties and relations play central roles in the description and processing of spatial data. The properties and relations studied by mathematicians usually have precise definitions, but verbal descriptions often involve imprecisely defined concepts such as elongatedness or proximity. The methods used in soft computing provide a framework for formulating and manipulating such concepts. This volume contains eight papers on the soft definition and manipulation of spatial relations and gives a comprehensive summary of the subject.
Formal Languages and Applications provides a comprehensive study aid and self-tutorial for graduate students and researchers. The main results and techniques are presented in a readily accessible manner and accompanied by many references and directions for further research. This carefully edited monograph is intended to be the gateway to formal language theory and its applications, and so is very useful as a review and reference source of information in formal language theory.
Quantitative Evaluation of Fire and EMS Mobilization Times presents comprehensive empirical data on fire emergency and EMS call processing and turnout times, and aims to improve the operational benchmarks of NFPA peer consensus standards through a close examination of real-world data. The book also identifies and analyzes the elements that can influence EMS mobilization response times. Quantitative Evaluation of Fire and EMS Mobilization Times is intended for practitioners as a tool for analyzing fire emergency response times and developing methods for improving them. Researchers working in a related field will also find the book valuable.
Computing systems are of growing importance because of their wide use in many areas, including safety-critical systems. This book describes the basic models and approaches to the reliability analysis of such systems. An extensive review is provided and models are categorized into different types. Some Markov models are extended to the analysis of specific computing systems, such as combined software and hardware, imperfect debugging processes, failure correlation, multi-state systems, heterogeneous subsystems, etc. One aim of the presentation is that, thanks to the sound analysis and simplicity of the approaches, Markov models can be better applied to computing system reliability.
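As a minimal illustration of the kind of Markov model involved (a standard textbook construction, not necessarily the book's own example), the classic two-state availability model with constant failure rate \lambda and repair rate \mu gives the steady-state availability shown below.

```latex
% Two-state (up/down) Markov availability model with constant failure
% rate \lambda (up -> down) and repair rate \mu (down -> up):
Q \;=\;
\begin{pmatrix}
  -\lambda & \lambda \\
  \mu      & -\mu
\end{pmatrix},
\qquad
\pi Q = 0, \quad \pi_{\mathrm{up}} + \pi_{\mathrm{down}} = 1
\;\;\Longrightarrow\;\;
A \;=\; \pi_{\mathrm{up}} \;=\; \frac{\mu}{\lambda + \mu}.
```

The richer models mentioned above (imperfect debugging, failure correlation, multi-state and heterogeneous systems) enlarge the state space and transition structure, but the analysis follows the same steady-state pattern.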
The theory of oppositions based on the Aristotelian foundations of logic has been pictured in a striking square diagram which can be understood and applied in many different ways, with repercussions in various fields: epistemology, linguistics, mathematics, sociology, physics. The square can also be generalized into other two-dimensional or multi-dimensional objects, extending in breadth and depth the original Aristotelian theory. The square of opposition, from its origin in antiquity to the present day, continues to exert a profound impact on the development of deductive logic. Over the last ten years there has been renewed and growing interest in the square, due to recent discoveries and challenging interpretations. This book presents a collection of previously unpublished papers by leading specialists on the square from all over the world.
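For background (this is the standard classical picture, not content specific to this volume), the four categorical forms at the corners of the square and the relations among them can be summarized as:

```latex
% The classical square of opposition: A and O (and E and I) are
% contradictories across the diagonals, A and E are contraries,
% I and O are subcontraries, and A entails I, E entails O (subalternation).
\begin{array}{ccc}
  \mathrm{A}:\ \forall x\,(Sx \to Px)     & \text{contraries}      & \mathrm{E}:\ \forall x\,(Sx \to \neg Px) \\
  \downarrow\ \text{subalternation}       & \text{contradictories} & \downarrow\ \text{subalternation} \\
  \mathrm{I}:\ \exists x\,(Sx \wedge Px)  & \text{subcontraries}   & \mathrm{O}:\ \exists x\,(Sx \wedge \neg Px)
\end{array}
```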
Compactness in topology and finite generation in algebra are nice properties to start with. However, the study of compact spaces leads naturally to non-compact spaces and infinitely generated chain complexes; a classical example is the theory of covering spaces. In handling non-compact spaces we must take into account the infinity behaviour of such spaces. This necessitates modifying the usual topological and algebraic categories to obtain "proper" categories in which objects are equipped with a "topologized infinity" and in which morphisms are compatible with the topology at infinity. The origins of proper (topological) category theory go back to 1923, when Kerékjártó [VT] established the classification of non-compact surfaces by adding to orientability and genus a new invariant, consisting of a set of "ideal points" at infinity. Later, Freudenthal [ETR] gave a rigorous treatment of the topology of "ideal points" by introducing the space of "ends" of a non-compact space. In spite of its early appearance, proper category theory was not recognized as a distinct area of topology until the late 1960s with the work of Siebenmann [OFB], [IS], [DES] on non-compact manifolds.
Dr. Kurt Gödel's sixtieth birthday (April 28, 1966) and the thirty-fifth anniversary of the publication of his theorems on undecidability were celebrated during the 75th Anniversary Meeting of the Ohio Academy of Science at The Ohio State University, Columbus, on April 22, 1966. The celebration took the form of a Festschrift Symposium on a theme supported by the late Director of The Institute for Advanced Study at Princeton, New Jersey, Dr. J. Robert Oppenheimer: "Logic, and Its Relations to Mathematics, Natural Science, and Philosophy." The symposium also celebrated the founding of Section L (Mathematical Sciences) of the Ohio Academy of Science. Salutations to Dr. Gödel were followed by the reading of papers by S. F. Barker, H. B. Curry, H. Rubin, G. E. Sacks, and G. Takeuti, and by the announcement of in-absentia papers contributed in honor of Dr. Gödel by A. Levy, B. Meltzer, R. M. Solovay, and E. Wette. A short discussion of "The II Beyond Gödel's I" concluded the session.
Henkin-Keisler models emanate from a modification of the Henkin construction introduced by Keisler to motivate the definition of ultraproducts. Keisler modified the Henkin construction at that point at which 'new' individual constants are introduced and did so in a way that illuminates a connection between Henkin-Keisler models and ultraproducts. The resulting construction can be viewed both as a specialization of the Henkin construction and as an alternative to the ultraproduct construction. These aspects of the Henkin-Keisler construction are utilized here to present a perspective on ultraproducts and their applications accessible to the reader familiar with Henkin's proof of the completeness of first order logic and naive set theory. This approach culminates in proofs of various forms of the Keisler-Shelah characterizations of elementary equivalence and elementary classes via Henkin-Keisler models. The presentation is self-contained and proofs of more advanced results from set theory are introduced as needed. Audience: Logicians in philosophy, computer science, linguistics and mathematics.
This book, which is based on Pólya's method of problem solving, aids students in their transition from calculus (or precalculus) to higher-level mathematics. The book begins by providing a great deal of guidance on how to approach definitions, examples, and theorems in mathematics and ends with suggested projects for independent study. Students will follow Pólya's four-step approach: analyzing the problem, devising a plan to solve the problem, carrying out that plan, and then determining the implications of the result. In addition to the Pólya approach to proofs, this book places special emphasis on reading proofs carefully and writing them well. The authors have included a wide variety of problems, examples, illustrations and exercises, some with hints and solutions, designed specifically to improve the student's ability to read and write proofs. Historical connections are made throughout the text, and students are encouraged to use the rather extensive bibliography to begin making connections of their own. While standard texts in this area prepare students for future courses in algebra, this book also includes chapters on sequences, convergence, and metric spaces for those wanting to bridge the gap between the standard course in calculus and one in analysis.