This book is for researchers in computer science, mathematical logic, and philosophical logic. It presents the state of the art in current investigations of process calculi, where mainly two major paradigms are at work: linear logic and modal logic. The combination of approaches, together with pointers for further integration, also suggests a grander vision for the field.
In many areas of engineering, economics and science, new developments are only possible through the application of modern optimization methods. The optimization problems arising nowadays in applications are mostly multiobjective, i.e. many competing objectives are aspired to all at once. In contrast to scalar-valued problems, these optimization problems with a vector-valued objective function generally do not have a single minimal solution; instead, the solution set is very large. Thus the development of efficient numerical methods for special classes of multiobjective optimization problems is, due to the complexity of the solution set, of special interest. This relevance is pointed out in many recent publications in application areas such as medicine ([63, 118, 100, 143]), engineering ([112, 126, 133, 211, 224], references in [81]), environmental decision making ([137, 227]) or economics ([57, 65, 217, 234]). Considering multiobjective optimization problems demands first the definition of minimality for such problems. A first minimality notion traces back to Edgeworth [59], 1881, and Pareto [180], 1896, using the natural ordering in the image space. A first mathematical consideration of this topic was done by Kuhn and Tucker [144] in 1951. Since that time multiobjective optimization has become an active research field. Several books and survey papers have been published giving introductions to this topic, for instance [28, 60, 66, 76, 112, 124, 165, 188, 189, 190, 215]. In the last decades the main focus was on the development of interactive methods for determining one single solution in an iterative process.
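As a hedged illustration of the Edgeworth-Pareto minimality notion mentioned above (a minimal sketch of ours, not taken from the book): under the natural componentwise ordering of the image space, the minimal solutions are exactly the points not dominated by any other point, and there are typically many of them.

```python
# Sketch (ours): Pareto minimality under the natural componentwise ordering.

def dominates(a, b):
    """True if objective vector a dominates b: a <= b componentwise, a < b somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_minimal(points):
    """Return the points not dominated by any other point (the 'solution set')."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two competing objectives (hypothetical data, e.g. cost and weight).
points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_minimal(points))   # [(1, 5), (2, 3), (4, 1)] -- a whole set, not one minimum
```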
There are many surprising connections between the theory of numbers, which is one of the oldest branches of mathematics, and computing and information theory. Number theory has important applications in computer organization and security, coding and cryptography, random number generation, hash functions, and graphics. Conversely, number theorists use computers in factoring large integers, determining primes, testing conjectures, and solving other problems. This book takes the reader from elementary number theory, via algorithmic number theory, to applied number theory in computer science. It introduces basic concepts, results, and methods, and discusses their applications in the design of hardware and software, cryptography, and security. It is aimed at undergraduates in computing and information technology, but will also be valuable to mathematics students interested in applications. This second edition adds full proofs of many theorems and makes a number of corrections.
The area of adaptive systems, which encompasses recursive identification, adaptive control, filtering, and signal processing, has been one of the most active areas of the past decade. Since adaptive controllers are fundamentally nonlinear controllers which are applied to nominally linear, possibly stochastic and time-varying systems, their theoretical analysis is usually very difficult. Nevertheless, over the past decade much fundamental progress has been made on some key questions concerning their stability, convergence, performance, and robustness. Moreover, adaptive controllers have been successfully employed in numerous practical applications, and have even entered the marketplace.
This book presents the basic concepts and algorithms of computer algebra using practical examples that illustrate their actual use in symbolic computation. A wide range of topics is presented, including Groebner bases, real algebraic geometry, Lie algebras, factorization of polynomials, integer programming, permutation groups, differential equations, coding theory, automatic theorem proving, and polyhedral geometry. This book is a must-read for anyone working in the areas of computer algebra, symbolic computation, and computer science.
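As a small taste of the first topic on that list (our illustration, using the sympy library rather than any system from the book): a Groebner basis with respect to a lexicographic order rewrites a polynomial system into a triangular-like form from which solutions can be read off one variable at a time.

```python
# Sketch (ours, with sympy): a lexicographic Groebner basis eliminates variables.
from sympy import groebner, symbols

x, y = symbols('x y')
# The system x*y = 1, x**2 = y, with lex order x > y.
G = groebner([x*y - 1, x**2 - y], x, y, order='lex')
print(G)   # basis [x - y**2, y**3 - 1]: x is expressed in y, y satisfies y**3 = 1
```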
The twenty-six papers in this volume reflect the wide and still expanding range of Anil Nerode's work. A conference on Logical Methods was held in honor of Nerode's sixtieth birthday (4 June 1992) at the Mathematical Sciences Institute, Cornell University, 1-3 June 1992. Some of the conference papers are here, but others are from students, co-workers and other colleagues. The intention of the conference was to look forward, and to see the directions currently being pursued, in the development of work by, or with, Nerode. Here is a brief summary of the contents of this book. We give a retrospective view of Nerode's work. A number of specific areas are readily discerned: recursive equivalence types, recursive algebra and model theory, the theory of Turing degrees and r.e. sets, polynomial-time computability and computer science. Nerode began with automata theory and has also taken a keen interest in the history of mathematics. All these areas are represented. The one area missing is Nerode's applied mathematical work relating to the environment. Kozen's paper builds on Nerode's early work on automata. Recursive equivalence types are covered by Dekker and Barback, the latter using directly a fundamental metatheorem of Nerode. Recursive algebra is treated by Ge & Richards (group representations). Recursive model theory is the subject of papers by Hird, Moses, and Khoussainov & Dadajanov, while a combinatorial problem in recursive model theory is discussed in Cherlin & Martin's paper. Cenzer presents a paper on recursive dynamics.
Since the beginning of the seventies, computer hardware has been available to use programmable computers for various tasks. During the nineties the hardware developed from big mainframes to personal workstations. Nowadays it is not only the hardware which is much more powerful: compared to the seventies, workstations can do much more work than a mainframe could. In parallel we find a specialization in the software. Languages like COBOL for business-oriented programming or Fortran for scientific computing only marked the beginning. The introduction of personal computers in the eighties gave new impulses for even further development; already at the beginning of the seventies some special languages like SAS or SPSS were available for statisticians. Now that personal computers have become very popular, the number of programs has started to explode. Today we find a wide variety of programs for almost any statistical purpose (Koch & Haag 1995).
The classical theory of computation has its origins in the work of Goedel, Turing, Church, and Kleene and has been an extraordinarily successful framework for theoretical computer science. The thesis of this book, however, is that it provides an inadequate foundation for modern scientific computation, where most of the algorithms are real number algorithms. The goal of this book is to develop a formal theory of computation which integrates major themes of the classical theory and which is more directly applicable to problems in mathematics, numerical analysis, and scientific computing. Along the way, the authors consider such fundamental problems as: * Is the Mandelbrot set decidable? * For simple quadratic maps, is the Julia set a halting set? * What is the real complexity of Newton's method? * Is there an algorithm for deciding the knapsack problem in a polynomial number of steps? * Is the Hilbert Nullstellensatz intractable? * Is the problem of locating a real zero of a degree four polynomial intractable? * Is linear programming tractable over the reals? The book is divided into three parts: the first part provides an extensive introduction and then proves the fundamental NP-completeness theorems of Cook-Karp and their extensions to more general number fields such as the real and complex numbers; the later parts then develop the formal theory of computation itself.
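Two of the questions above concern Newton's method and real zeros of degree-four polynomials. As a hedged illustration (ours, not the authors'), here is the kind of real-number algorithm whose cost over the reals the book asks about, run on a degree-four example; the function and tolerance are our choices.

```python
# Sketch (ours): Newton's method as a real-number algorithm.

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Iterate x <- x - f(x)/df(x) until |f(x)| <= tol."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) <= tol:
            return x
        x -= f(x) / df(x)
    return x

# Locate a real zero of a degree-four polynomial, echoing the last question above.
f  = lambda x: x**4 - 2*x**2 - 3     # factors as (x**2 - 3)(x**2 + 1): real roots +-sqrt(3)
df = lambda x: 4*x**3 - 4*x
print(newton(f, df, x0=2.0))         # ~1.7320508075688772
```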
Since the publication of the first edition of this book, advances in algorithms, logic and software tools have transformed the field of data fusion. The latest edition covers these areas as well as smart agents, human-computer interaction, cognitive aids to analysis and data system fusion control. Besides aiding you in selecting the appropriate algorithm for implementing a data fusion system, this book guides you through the process of determining the trade-offs among competing data fusion algorithms, selecting commercial off-the-shelf (COTS) tools, and understanding when data fusion improves systems processing. Completely new chapters in this second edition explain data fusion system control, DARPA's recently developed TRIP model, and the latest applications of data fusion in data warehousing and medical equipment, as well as defence systems.
The notion of Fuzziness stands as one of the really new concepts that have recently enriched the world of science. Science grows not only through technical and formal advances on one side and useful applications on the other side, but also as a consequence of the introduction and assimilation of new concepts in its corpus. These, in turn, produce new developments and applications. And this is what Fuzziness, one of the few new concepts to arise in the twentieth century, has been doing so far. This book aims at paying homage to Professor Lotfi A. Zadeh, the "father of fuzzy logic", and also at giving credit to his exceptional work and personality. In a way, this is reflected in the variety of contributions collected in the book. In some of them the authors chose to speak of personal meetings with Lotfi; in others, they discussed how certain papers of Zadeh were able to open for them a new research horizon. Some contributions document results obtained by the authors after taking inspiration from a particular idea of Zadeh's, thus implicitly acknowledging him. Finally, there are contributions of several "third generation fuzzysists or softies" who were first led into the world of Fuzziness by a disciple of Lotfi Zadeh, who, following his example, took care of opening for them a new road in science. Rudolf Seising is Adjoint Researcher at the European Centre for Soft Computing in Mieres, Asturias (Spain). Enric Trillas and Claudio Moraga are Emeritus Researchers at the European Centre for Soft Computing, Mieres, Asturias (Spain). Settimo Termini is Professor of Theoretical Computer Science at the University of Palermo, Italy, and Affiliated Researcher at the European Centre for Soft Computing, Mieres, Asturias (Spain).
In this monograph we study two generalizations of standard unification, E-unification and higher-order unification, using an abstract approach originated by Herbrand and developed in the case of standard first-order unification by Martelli and Montanari. The formalism presents the unification computation as a set of non-deterministic transformation rules for converting a set of equations to be unified into an explicit representation of a unifier (if such exists). This provides an abstract and mathematically elegant means of analysing the properties of unification in various settings by providing a clean separation of the logical issues from the specification of procedural information, and amounts to a set of 'inference rules' for unification, hence the title of this book. We derive the set of transformations for general E-unification and higher-order unification from an analysis of the sense in which terms are 'the same' after application of a unifying substitution. In both cases, this results in a simple extension of the set of basic transformations given by Herbrand and by Martelli and Montanari for standard unification, and shows clearly the basic relationships of the fundamental operations necessary in each case, and thus the underlying structure of the most important classes of term unification problems.
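A minimal sketch (ours, not the authors') of the rule-based view the monograph builds on: repeatedly transform a set of equations until nothing remains but an explicit substitution. This covers only the standard first-order base case in the Herbrand/Martelli-Montanari style; the book's subject is its extension to E-unification and higher-order unification. The term representation here (variables as strings, compound terms as name/argument pairs) is our own convention.

```python
# Sketch (ours): transformation-rule unification for first-order terms.
# Variables are strings like 'X'; compound terms are (name, args) tuples.

def is_var(t):
    return isinstance(t, str)

def occurs(v, t):
    return t == v if is_var(t) else any(occurs(v, a) for a in t[1])

def substitute(t, v, s):
    if is_var(t):
        return s if t == v else t
    f, args = t
    return (f, tuple(substitute(a, v, s) for a in args))

def unify(eqs):
    """Transform equations into a substitution dict, or None on failure."""
    subst, eqs = {}, list(eqs)
    while eqs:
        s, t = eqs.pop()
        if s == t:                                   # rule: delete
            continue
        if is_var(t) and not is_var(s):              # rule: orient
            s, t = t, s
        if is_var(s):
            if occurs(s, t):                         # occurs check fails
                return None
            eqs = [(substitute(a, s, t), substitute(b, s, t)) for a, b in eqs]
            subst = {v: substitute(u, s, t) for v, u in subst.items()}
            subst[s] = t                             # rule: eliminate
        else:
            (f, fa), (g, ga) = s, t
            if f != g or len(fa) != len(ga):         # rule: clash
                return None
            eqs.extend(zip(fa, ga))                  # rule: decompose
    return subst

# Unify f(X, g(Y)) with f(a, g(X)); 'a' is a constant, i.e. ('a', ()).
print(unify([(('f', ('X', ('g', ('Y',)))), ('f', (('a', ()), ('g', ('X',)))))]))
# {'Y': ('a', ()), 'X': ('a', ())}
```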
This is the first book to present an up-to-date and self-contained account of Algebraic Complexity Theory that is both comprehensive and unified. Requiring of the reader only some basic algebra and offering over 350 exercises, it is well-suited as a textbook for beginners at graduate level. With its extensive bibliography covering about 500 research papers, this text is also an ideal reference book for the professional researcher. The subdivision of the contents into 21 more or less independent chapters enables readers to familiarize themselves quickly with a specific topic, and facilitates the use of this book as a basis for complementary courses in other areas such as computer algebra.
Genetic algorithms provide a powerful range of methods for solving complex engineering search and optimization problems. Their power can also lead to difficulty for new researchers and students who wish to apply such evolution-based methods. "Applied Evolutionary Algorithms in Java" offers a practical, hands-on guide to applying such algorithms to engineering and scientific problems. The concepts are illustrated through clear examples, ranging from simple to more complex problem domains, all based on real-world industrial problems. Examples are taken from image processing, fuzzy-logic control systems, mobile robots, and telecommunication network optimization problems. The Java-based toolkit provides an easy-to-use and essential visual interface, with integrated graphing and analysis tools. Topics and features: * inclusion of a complete Java toolkit for exploring evolutionary algorithms * strong use of visualization techniques to increase understanding * coverage of all major evolutionary algorithms in common usage * broad range of industrially based example applications * examples and an appendix based on fuzzy logic. This book is intended for students, researchers, and professionals interested in using evolutionary algorithms in their work. No mathematics beyond basic algebra and Cartesian graphs is required, as the aim is to encourage applying the Java toolkit to develop the power of these techniques.
The volume is devoted to the interaction of modern scientific computation and classical number theory. The contributions, ranging from effective finiteness results to efficient algorithms in elementary, analytical and algebraic number theory, provide a broad view of the methods and results encountered in the new and rapidly developing area of computational number theory. Topics covered include finite fields, quadratic forms, number fields, modular forms, elliptic curves and diophantine equations. In addition, two new number theoretical software packages, KANT and SIMATH, are described in detail with emphasis on algorithms in algebraic number theory.
Sparse grids are a popular tool for the numerical treatment of high-dimensional problems. Where classical numerical discretization schemes fail in more than three or four dimensions, sparse grids, in their different flavors, are frequently the method of choice. This volume of LNCSE presents selected papers from the proceedings of the fourth workshop on sparse grids and applications, and demonstrates once again the importance of this numerical discretization scheme. The articles present recent advances in the numerical analysis of sparse grids in connection with a range of applications including computational chemistry, computational fluid dynamics, and big data analytics, to name but a few.
Floating-point arithmetic is the most widely used way of implementing real-number arithmetic on modern computers. However, making such an arithmetic reliable and portable, yet fast, is a very difficult task. As a result, floating-point arithmetic is far from being exploited to its full potential. This handbook aims to provide a complete overview of modern floating-point arithmetic. So that the techniques presented can be put directly into practice in actual coding or design, they are illustrated, whenever possible, by a corresponding program. The handbook is designed for programmers of numerical applications, compiler designers, programmers of floating-point algorithms, designers of arithmetic operators, and more generally, students and researchers in numerical analysis who wish to better understand a tool used in their daily work and research.
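By way of a hedged illustration of why reliability is hard (our example, not one of the handbook's own programs): even 0.1 + 0.2 is not exact in binary floating point, yet an error-free transformation such as the classic TwoSum recovers the rounding error exactly, the kind of building block such a handbook covers.

```python
# Sketch (ours): the TwoSum error-free transformation (Knuth/Moller style).

def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bp = s - a                     # the part of b actually absorbed into s
    e = (a - (s - bp)) + (b - bp)  # what rounding threw away, recovered exactly
    return s, e

s, e = two_sum(0.1, 0.2)
print(s == 0.3)   # False: fl(0.1 + 0.2) is not the double nearest 0.3
print(e)          # the exact rounding error, about -2.8e-17
```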
Our Subjects and Objectives. This book is about algebraic and symbolic computation and numerical computing (with matrices and polynomials). It greatly extends the study of these topics presented in the celebrated books of the seventies, [AHU] and [BM] (these topics have been under-represented in [CLR], which is a highly successful extension and updating of [AHU] otherwise). Compared to [AHU] and [BM], our volume adds extensive material on parallel computations with general matrices and polynomials, on the bit-complexity of arithmetic computations (including some recent techniques of data compression and the study of numerical approximation properties of polynomial and matrix algorithms), and on computations with Toeplitz matrices and other dense structured matrices. The latter subject should attract people working in numerous areas of application (in particular, coding, signal processing, control, algebraic computing and partial differential equations). The authors' teaching experience at the Graduate Center of the City University of New York and at the University of Pisa suggests that the book may serve as a text for advanced graduate students in mathematics and computer science who have some knowledge of algorithm design and wish to enter the exciting area of algebraic and numerical computing. The potential readership may also include algorithm and software designers and researchers specializing in the design and analysis of algorithms, computational complexity, algebraic and symbolic computing, and numerical computation.
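A hedged sketch (ours, using numpy; not code from the book) of what exploiting Toeplitz structure buys: the n x n matrix-vector product runs in O(n log n) by embedding the Toeplitz matrix in a circulant one and using the FFT, instead of the O(n^2) dense product.

```python
# Sketch (ours): fast Toeplitz matrix-vector product via circulant embedding.
import numpy as np

def toeplitz_matvec(c, r, x):
    """y = T x, where T has first column c and first row r (c[0] == r[0])."""
    n = len(x)
    # First column of a 2n x 2n circulant whose top-left n x n block is T.
    col = np.concatenate([c, [0], r[:0:-1]])
    xp  = np.concatenate([x, np.zeros(n)])
    y   = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))  # circulant product
    return y[:n].real

# Check against the dense product on a tiny example.
c = np.array([1.0, 2.0, 3.0]); r = np.array([1.0, 4.0, 5.0])
T = np.array([[1, 4, 5], [2, 1, 4], [3, 2, 1]], dtype=float)
x = np.array([1.0, 1.0, 2.0])
print(toeplitz_matvec(c, r, x), T @ x)   # both ~ [15. 11. 7.]
```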
This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and code generation to RSL. The book is structured in eight chapters. The first two chapters present the main concepts and give an introduction to requirements modelling in RSL. The next two chapters concentrate on presenting RSL in a formal way, suitable for automated processing. Subsequently, chapters 5 and 6 concentrate on model transformations with the emphasis on those involving RSL and UML. Finally, chapters 7 and 8 provide a summary in the form of a systematic methodology with a comprehensive case study. Presenting technical details of requirements modelling and model transformations for requirements, this book is of interest to researchers, graduate students and advanced practitioners from industry. While researchers will benefit from the latest results and possible research directions in MDRE, students and practitioners can exploit the presented information and practical techniques in several areas, including requirements engineering, architectural design, software language construction and model transformation. Together with a tool suite available online, the book supplies the reader with what it promises: the means to get from requirements to code "in a snap".
Reasoning under uncertainty is always based on a specified language or formalism, including its particular syntax and semantics, but also on its associated inference mechanism. The present volume of the handbook treats this last aspect: the algorithmic aspects of uncertainty calculi. Theory has sufficiently advanced to unfold some generally applicable fundamental structures and methods. On the other hand, particular features of specific formalisms and approaches to uncertainty of course still strongly influence the computational methods to be used. Both general and specific methods are included in this volume. Broadly speaking, symbolic or logical approaches to uncertainty and numerical approaches are often distinguished. Although this distinction is somewhat misleading, it is used as a means to structure the present volume. This is even to some degree reflected in the first two chapters, which treat fundamental, general methods of computation in systems designed to represent uncertainty. It was noted early by Shenoy and Shafer that computations in different domains have an underlying common structure. Essentially, pieces of knowledge or information are to be combined together and then focused on some particular question or domain. This can be captured in an algebraic structure called a valuation algebra, which is described in the first chapter. Here the basic operations of combination and focusing (marginalization) of knowledge and information are modeled abstractly, subject to simple axioms.
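A toy model (ours; the chapter's treatment is axiomatic and far more general) of those two operations, instantiated with probability tables over binary variables: combination multiplies tables pointwise on the union of their variables, and focusing sums out the variables one is not asking about.

```python
# Sketch (ours): combination and focusing on probability tables.
from itertools import product

def combine(f, g):
    """Pointwise product of tables given as (vars, {assignment_tuple: value})."""
    (fv, ft), (gv, gt) = f, g
    vs = tuple(dict.fromkeys(fv + gv))          # union of variables, order kept
    table = {}
    for a in product((0, 1), repeat=len(vs)):   # binary variables here
        env = dict(zip(vs, a))
        table[a] = ft[tuple(env[v] for v in fv)] * gt[tuple(env[v] for v in gv)]
    return vs, table

def marginalize(f, keep):
    """Focus the table on the variables in `keep` by summing out the rest."""
    fv, ft = f
    kv = tuple(v for v in fv if v in keep)
    table = {}
    for a, val in ft.items():
        key = tuple(x for v, x in zip(fv, a) if v in keep)
        table[key] = table.get(key, 0.0) + val
    return kv, table

# P(A) and P(B|A) as tables; combining then focusing yields P(B).
pa  = (('A',), {(0,): 0.6, (1,): 0.4})
pba = (('A', 'B'), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(marginalize(combine(pa, pba), {'B'}))   # P(B=0) ~ 0.62, P(B=1) ~ 0.38
```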
This book is the final version of a course on algorithmic information theory and the epistemology of mathematics and physics. It discusses Einstein and Goedel's views on the nature of mathematics in the light of information theory, and sustains the thesis that mathematics is quasi-empirical. There is a foreword by Cris Calude of the University of Auckland, and supplementary material is available at the author's web site. The special feature of this book is that it presents a new "hands-on" didactic approach using LISP and Mathematica software. The reader will be able to derive an understanding of the close relationship between mathematics and physics. "The Limits of Mathematics is a very personal and idiosyncratic account of Greg Chaitin's entire career in developing algorithmic information theory. The combination of the edited transcripts of his three introductory lectures maintains all the energy and content of the oral presentations, while the material on AIT itself gives a full explanation of how to implement Greg's ideas on real computers for those who want to try their hand at furthering the theory." (John Casti, Santa Fe Institute)
"This book covers the dominant theoretical approaches to the approximate solution of hard combinatorial optimization and enumeration problems. It contains elegant combinatorial theory, useful and interesting algorithms, and deep results about the intrinsic complexity of combinatorial problems. Its clarity of exposition and excellent selection of exercises will make it accessible and appealing to all those with a taste for mathematics and algorithms." (Richard Karp, University Professor, University of California at Berkeley) "Following the development of basic combinatorial optimization techniques in the 1960s and 1970s, a main open question was to develop a theory of approximation algorithms. In the 1990s, parallel developments in techniques for designing approximation algorithms as well as methods for proving hardness of approximation results have led to a beautiful theory. The need to solve truly large instances of computationally hard problems, such as those arising from the Internet or the human genome project, has also increased interest in this theory. The field is currently very active, with the toolbox of approximation algorithm design techniques always getting richer. It is a pleasure to recommend Vijay Vazirani's well-written and comprehensive book on this important and timely topic. I am sure the reader will find it most useful both as an introduction to approximability as well as a reference to the many aspects of approximation algorithms." (László Lovász, Senior Researcher, Microsoft Research)
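To make "approximation algorithm" concrete, a hedged sketch (ours; this classical result is among the simplest in the theory the book presents): taking both endpoints of a maximal matching yields a vertex cover at most twice the minimum size, in linear time.

```python
# Sketch (ours): the classic 2-approximation for minimum vertex cover.

def vertex_cover_2approx(edges):
    """Greedily build a maximal matching; return its endpoints as a cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered:
            cover.update((u, v))                # match it, take both endpoints
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(vertex_cover_2approx(edges))   # {1, 2, 3, 4}; the optimum {1, 4} has size 2
```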
This book has been designed to deal with the topics which are indispensable in the advanced age of computer science. The first three chapters cover mathematical logic, sets, relations and functions. Next come the chapters on ordered sets, Boolean algebra and switching circuits, and matrices. Finally there are individual chapters on combinatorics, discrete numeric functions, generating functions, recurrence relations, algebraic structures and graph theory, including graphs and binary trees. The purpose of this book is to present the principles and concepts of discrete structures as relevant to student learning. The matter has been presented in as simple and lucid a manner as possible, and a large number of solved examples have been included to help the reader understand the concepts and principles of the theory.
This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2015, presenting recent advances in computational optimization. The volume includes important real-life problems such as parameter settings for controlling processes in a bioreactor, control of ethanol production, minimal convex hulls with application in routing algorithms, graph coloring, flow design in photonic data transport systems, predicting indoor temperature, crisis control center monitoring, fuel consumption of helicopters, portfolio selection, and GPS surveying. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization problems.
Quantum information science is a rapidly developing field that not only promises a revolution in computer science but also touches deeply the very foundations of quantum physics. This book consists of a set of lectures by leading experts in the field that bridge the gap between standard textbook material and the research literature, thus providing the necessary background for postgraduate students and non-specialist researchers wishing to familiarize themselves with the subject thoroughly and at a high level. This volume is ideally suited as a course book for postgraduate students, and lecturers will find in it a large choice of material for bringing their courses up to date.
The increasing number of research papers that have appeared in recent years and that either make use of aggregation functions or contribute to their theoretical study assesses their growing importance in the field of Fuzzy Logic and in others where uncertainty and imprecision play a relevant role. Since these papers are published in many journals, few books and several proceedings of conferences, books on aggregation are particularly welcome. To my knowledge, "Aggregation Operators. New Trends and Applications" is the first book aiming at generality, and I take it as an honour to write this Foreword in response to the gentle demand of its editors, Radko Mesiar, Tomasa Calvo and Gaspar Mayor. My pleasure also derives from the fact that twenty years ago I was one of the first Spaniards interested in the study of aggregation functions, and this book includes work by several Spanish authors. The book contains nice and relevant original papers, authored by some of the most outstanding researchers in the field, and since it can serve, as the editors point out in the Preface, as a small handbook on aggregation, the book is very useful for those entering the subject for the first time. The book also contains a part dealing with potential areas of application, so it can be helpful in gaining insight on future developments.