Belief change is an emerging field of artificial intelligence and information science dedicated to the dynamics of information, and the present book provides a state-of-the-art picture of its formal foundations. It deals with the addition, deletion and combination of pieces of information and, more generally, with the revision, updating and fusion of knowledge bases. The book offers an extensive coverage of, and seeks to reconcile, two traditions in the kinematics of belief that often ignore each other - the symbolic and the numerical (often probabilistic) approaches. Moreover, the work encompasses both revision and fusion problems, even though these two are also commonly investigated by different communities. Finally, the book presents the numerical view of belief change, beyond the probabilistic framework, covering such approaches as possibility theory, belief functions and convex gambles. The work thus presents a unified view of belief change operators, drawing from a widely scattered literature embracing philosophical logic, artificial intelligence, uncertainty modelling and database systems. The material is a clearly organised guide to the literature on the dynamics of epistemic states, knowledge bases and uncertain information, suitable for scholars and graduate students familiar with applied logic, knowledge representation and uncertain reasoning.
Learning spaces offer a rigorous mathematical foundation for practical systems of educational technology. Learning spaces generalize partially ordered sets and are special cases of knowledge spaces. The various structures are investigated from the standpoints of combinatorial properties and stochastic processes. Learning spaces have become the essential structures used in assessing students' competence in various topics. A practical example is offered by ALEKS, a Web-based, artificially intelligent assessment and learning system in mathematics and other scholarly fields. At the heart of ALEKS is an artificial intelligence engine that assesses each student individually and continuously. The book is of interest to mathematically oriented readers in education, computer science, engineering, and combinatorics at research and graduate levels. Numerous examples and exercises are included, together with an extensive bibliography.
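As a rough illustration of the structures involved (not an example taken from the book), the Python sketch below checks two properties commonly used to characterize finite learning spaces - closure under union and accessibility (every nonempty state contains an item whose removal is again a state) - on a small hypothetical family of knowledge states.

```python
from itertools import combinations

def is_learning_space(states):
    """Sketch: check union closure and accessibility for a family of knowledge states."""
    states = {frozenset(s) for s in states}
    # closure under union (pairwise closure suffices for a finite family)
    for a, b in combinations(states, 2):
        if a | b not in states:
            return False
    # accessibility: each nonempty state can lose one item and remain a state
    for k in states:
        if k and not any(k - {q} in states for q in k):
            return False
    return True

# hypothetical toy knowledge structure on items a, b, c
toy = [set(), {"a"}, {"b"}, {"a", "b"}, {"a", "b", "c"}]
print(is_learning_space(toy))  # True for this family
```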
This volume consists of papers presented at the Variational Analysis and Aerospace Engineering Workshop II held in Erice, Italy, in September 2010 at the International School of Mathematics "Guido Stampacchia." The workshop provided a platform for aerospace engineers and mathematicians (from universities, research centers and industry) to discuss advanced problems requiring an extensive application of mathematics. The presentations were dedicated to the most advanced subjects in engineering and, in particular, to computational fluid dynamics methods, the introduction of new materials, optimization in aerodynamics, structural optimization, space missions, flight mechanics, control theory and optimization, variational methods and applications, etc. This book will capture the interest of researchers from both academia and industry.
1.1. What This Book is About. This book is a study of: subrecursive programming systems; efficiency/program-size trade-offs between such systems; and how these systems can serve as tools in complexity theory. Section 1.1 states our basic themes, and Sections 1.2 and 1.3 give a general outline of the book. Our first task is to explain what subrecursive programming systems are and why they are of interest. 1.1.1. Subrecursive Programming Systems. A subrecursive programming system is, roughly, a programming language for which the result of running any given program on any given input can be completely determined algorithmically. Typical examples are: 1. the Meyer-Ritchie LOOP language [MR67, DW83], a restricted assembly language with bounded loops as the only allowed deviation from straight-line programming; 2. multi-tape Turing machines, each explicitly clocked to halt within a time bound given by some polynomial in the length of the input (see [BH79, HB79]); 3. the set of seemingly unrestricted programs for which one can prove termination on all inputs (see [Kre51, Kre58, Ros84]) - or, more precisely, the collection of programs p of some particular general-purpose programming language (e.g., Lisp or Modula-2) for which there is a proof in some particular formal system (e.g., Peano Arithmetic) that p halts on all inputs; and 4. finite state and pushdown automata from formal language theory (see [HU79]).
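As a loose illustration of the first example (a Python stand-in, not the LOOP language itself), the sketch below mimics LOOP's key restriction: the only looping construct is one whose trip count is fixed before the body runs, so every such program halts on every input.

```python
def add(x, y):
    # LOOP-style bounded loop: the trip count y is fixed before the loop starts,
    # so termination is guaranteed regardless of what the body does.
    for _ in range(y):
        x = x + 1
    return x

def mul(x, y):
    acc = 0
    for _ in range(y):      # again a bounded loop; no while-loops, no recursion
        acc = add(acc, x)
    return acc

print(mul(6, 7))  # 42
```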
This clearly written and enlightening textbook provides a concise, introductory guide to the key mathematical concepts and techniques used by computer scientists. Topics and features: ideal for self-study, offering many pedagogical features such as chapter-opening key topics, chapter introductions and summaries, review questions, and a glossary; places our current state of knowledge within the context of the contributions made by early civilizations, such as the ancient Babylonians, Egyptians and Greeks; examines the building blocks of mathematics, including sets, relations and functions; presents an introduction to logic, formal methods and software engineering; explains the fundamentals of number theory, and its application in cryptography; describes the basics of coding theory, language theory, and graph theory; discusses the concept of computability and decidability; includes concise coverage of calculus, probability and statistics, matrices, complex numbers and quaternions.
"Incomplete Information System and Rough Set Theory: Models and Attribute Reductions" covers theoretical study of generalizations of rough set model in various incomplete information systems. It discusses not only the regular attributes but also the criteria in the incomplete information systems. Based on different types of rough set models, the book presents the practical approaches to compute several reducts in terms of these models. The book is intended for researchers and postgraduate students in machine learning, data mining and knowledge discovery, especially for those who are working in rough set theory, and granular computing. Dr. Xibei Yang is a lecturer at the School of Computer Science and Engineering, Jiangsu University of Science and Technology, China; Jingyu Yang is a professor at the School of Computer Science, Nanjing University of Science and Technology, China.
This graduate-level text provides a language for understanding, unifying, and implementing a wide variety of algorithms for digital signal processing - in particular, to provide rules and procedures that can simplify or even automate the task of writing code for the newest parallel and vector machines. It thus bridges the gap between digital signal processing algorithms and their implementation on a variety of computing platforms. The mathematical concept of tensor product is a recurring theme throughout the book, since these formulations highlight the data flow, which is especially important on supercomputers. Because of their importance in many applications, much of the discussion centres on algorithms related to the finite Fourier transform and to multiplicative FFT algorithms.
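As a minimal illustration of the tensor-product viewpoint (not drawn from the book itself), the two-dimensional DFT can be written as a Kronecker product of one-dimensional DFT matrices; the NumPy sketch below verifies the identity vec(Fm X Fn) = (Fn ⊗ Fm) vec(X) numerically, using column-major vectorization.

```python
import numpy as np

m, n = 4, 6
X = np.random.randn(m, n) + 1j * np.random.randn(m, n)

Fm = np.fft.fft(np.eye(m))   # m-point DFT matrix
Fn = np.fft.fft(np.eye(n))   # n-point DFT matrix

# 2-D DFT computed directly ...
Y_direct = np.fft.fft2(X)

# ... and via the Kronecker (tensor) product acting on the vectorized array
Y_kron = (np.kron(Fn, Fm) @ X.flatten(order="F")).reshape((m, n), order="F")

print(np.allclose(Y_direct, Y_kron))  # True
```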
This introduction to random variables and signals provides engineering students with the analytical and computational tools for processing random signals using linear systems. It presents the underlying theory as well as examples and applications using computational aids throughout; in particular, computer-based symbolic computation programs are used for performing the analytical manipulations and the numerical calculations. The accompanying CD-ROM provides Mathcad and Matlab notebooks and sheets to develop processing methods. Intended for a one-semester course for advanced undergraduate or beginning graduate students, the book covers such topics as: set theory and probability; random variables, distributions, and processes; deterministic signals, spectral properties, and transformations; and filtering and detection theory. The large number of worked examples together with the programming aids make the book eminently suited for self-study as well as classroom use.
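The book's own worked examples use Mathcad and Matlab; purely as a rough NumPy/SciPy sketch of the kind of computation involved (not from the text), the following passes unit-variance white noise through a 5-tap moving-average filter and checks that the output variance matches the theoretical value, the sum of the squared impulse-response coefficients.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(500_000)   # white noise, unit variance
b = np.ones(5) / 5                 # 5-tap moving-average (FIR) filter

y = signal.lfilter(b, [1.0], x)    # pass the random signal through the LTI system

# For unit-variance white noise, the output variance equals sum(h[k]^2).
print(y.var(), np.sum(b ** 2))     # both close to 0.2
```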
This book is devoted to a novel conceptual theoretical framework of neuroscience and is an attempt to show that we can postulate a very small number of assumptions and utilize their heuristics to explain a very large spectrum of brain phenomena. The major assumption made in this book is that inborn and acquired neural automatisms are generated according to the same functional principles. Accordingly, the principles that have been revealed experimentally to govern inborn motor automatisms, such as locomotion and scratching, are used to elucidate the nature of acquired or learned automatisms. This approach allowed me to apply the language of control theory to describe functions of biological neural networks. You, the reader, can judge the logic of the conclusions regarding brain phenomena that the book derives from these assumptions. If you find the argument flawless, one can call it common sense and consider that to be the best praise for a chain of logical conclusions. For the sake of clarity, I have attempted to make this monograph as readable as possible. Special attention has been given to describing some of the concepts of optimal control theory in such a way that it will be understandable to a biologist or physician. I have also included plenty of illustrative examples and references designed to demonstrate the appropriateness and applicability of these conceptual theoretical notions for the neurosciences.
This book contains a large amount of information not found in standard textbooks. Written for the advanced undergraduate/beginning graduate student, it combines the modern mathematical standards of numerical analysis with an understanding of the needs of the computer scientist working on practical applications. Among its many particular features are: fully worked-out examples; many carefully selected and formulated problems; fast Fourier transform methods; a thorough discussion of some important minimization methods; solution of stiff or implicit ordinary differential equations and of differential algebraic systems; modern shooting techniques for solving two-point boundary value problems; and basics of multigrid methods. This new edition features an expanded presentation of Hermite interpolation and B-splines, with a new section on multi-resolution methods and B-splines. New material on differential equations and the iterative solution of linear equations includes: solving differential equations in the presence of discontinuities whose locations are not known at the outset; techniques for sensitivity analyses of differential equations dependent on additional parameters; new advanced techniques in multiple shooting; and Krylov space methods for non-symmetric systems of linear equations.
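As a hedged illustration of the simplest form of shooting (single shooting, not the book's advanced multiple-shooting techniques), the SciPy sketch below solves the two-point boundary value problem y'' = -y, y(0) = 0, y(1) = 1 by adjusting the unknown initial slope until the far boundary condition is met; the exact slope is 1/sin(1).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Two-point BVP: y'' = -y, y(0) = 0, y(1) = 1 (exact solution sin(x)/sin(1)).
def rhs(x, z):                 # first-order system z = (y, y')
    return [z[1], -z[0]]

def miss(s):                   # residual at x = 1 when shooting with y'(0) = s
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1] - 1.0

s_star = brentq(miss, 0.0, 5.0)      # slope that hits the far boundary condition
print(s_star, 1.0 / np.sin(1.0))     # both approximately 1.1884
```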
This volume comprises a collection of twenty written versions of invited as well as contributed papers presented at the conference held from 20-24 May 1996 in Beijing, China. It covers many areas of logic and the foundations of mathematics, as well as computer science. Also included is an article by M. Yasugi on the Asian Logic Conference which first appeared in Japanese, to provide a glimpse into the history and development of the series.
Fibonacci Cubes have been an extremely popular area of research since the 1990s. This unique compendium features the state of research into Fibonacci Cubes. It expands the knowledge of graph-theoretic and combinatorial properties of Fibonacci Cubes and their variants. By highlighting various approaches with numerous examples, it provides a fundamental source for further research in the field. This useful reference text will benefit advanced students in computer science and mathematics and serves as an archival record of the current state of the field.
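For readers new to the object itself: the Fibonacci cube Γ_n is commonly defined on the binary strings of length n with no two consecutive 1s, two strings being adjacent when they differ in exactly one position. The short Python sketch below (not from the book) builds Γ_n for small n and shows that the vertex counts follow the Fibonacci numbers.

```python
from itertools import product

def fibonacci_cube(n):
    """Vertices: length-n binary strings without '11'; edges: Hamming distance 1."""
    vertices = ["".join(bits) for bits in product("01", repeat=n)
                if "11" not in "".join(bits)]
    edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

for n in range(1, 6):
    v, e = fibonacci_cube(n)
    print(n, len(v), len(e))   # vertex counts 2, 3, 5, 8, 13, ... (Fibonacci numbers)
```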
This monograph studies the logical aspects of domains as used in denotational semantics of programming languages. Frameworks of domain logics are introduced; these serve as foundations for systematic derivations of proof systems from denotational semantics of programming languages. Any proof system so derived is guaranteed to agree with denotational semantics in the sense that the denotation of any program coincides with the set of assertions true of it. The study focuses on two categories for denotational semantics: SFP domains, and the less standard, but important, category of stable domains. The intended readership of this monograph includes researchers and graduate students interested in the relation between semantics of programming languages and formal means of reasoning about programs. A basic knowledge of denotational semantics, mathematical logic, general topology, and category theory is helpful for a full understanding of the material. Part I: SFP Domains. Chapter 1: Introduction. This chapter provides a brief exposition of domain theory, denotational semantics, program logics, and proof systems. It discusses the importance of ideas and results on logic and topology to the understanding of the relation between denotational semantics and program logics. It also describes the motivation for the work presented by this monograph, and how that work fits into a more general program. Finally, it gives a short summary of the results of each chapter. 1.1 Domain Theory. Programming languages are languages with which to perform computation.
This book aids in the rehabilitation of the wrongfully deprecated work of William Parry, and is the only full-length investigation into Parry-type propositional logics. A central tenet of the monograph is that the sheer diversity of the contexts in which the mereological analogy emerges - its effervescence with respect to fields ranging from metaphysics to computer programming - provides compelling evidence that the study of logics of analytic implication can be instrumental in identifying connections between topics that would otherwise remain hidden. More concretely, the book identifies and discusses a host of cases in which analytic implication can play an important role in revealing distinct problems to be facets of a larger, cross-disciplinary problem. It introduces an element of constancy and cohesion that has previously been absent in a regrettably fractured field, shoring up those who are sympathetic to the worth of mereological analogy. Moreover, it generates new interest in the field by illustrating a wide range of interesting features present in such logics - and highlighting these features to appeal to researchers in many fields.
Graph algorithms form a well-established subject in mathematics and computer science. Beyond classical application fields, such as approximation, combinatorial optimization, graphics, and operations research, graph algorithms have recently attracted increased attention from computational molecular biology and computational chemistry. Centered around the fundamental issue of graph isomorphism, this text goes beyond classical graph problems of shortest paths, spanning trees, flows in networks, and matchings in bipartite graphs. Advanced algorithmic results and techniques of practical relevance are presented in a coherent and consolidated way. This book introduces graph algorithms on an intuitive basis followed by a detailed exposition in a literate programming style, with correctness proofs as well as worst-case analyses. Furthermore, full C++ implementations of all algorithms presented are given using the LEDA library of efficient data structures and algorithms.
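The book's implementations are in C++ with LEDA; purely as an illustration of the central problem (and nothing like the refined algorithms the book presents), the Python sketch below is a brute-force isomorphism test, exponential in the number of vertices, applied to two hypothetical toy graphs.

```python
from itertools import permutations

def are_isomorphic(adj_a, adj_b):
    """Brute-force graph isomorphism test on adjacency dicts (illustration only)."""
    nodes_a, nodes_b = sorted(adj_a), sorted(adj_b)
    if len(nodes_a) != len(nodes_b):
        return False
    edges_b = {frozenset((u, v)) for u in adj_b for v in adj_b[u]}
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))           # candidate vertex bijection
        edges_a_mapped = {frozenset((mapping[u], mapping[v]))
                          for u in adj_a for v in adj_a[u]}
        if edges_a_mapped == edges_b:
            return True
    return False

# two labelled 4-cycles (hypothetical toy inputs)
c4_a = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
c4_b = {"a": ["c", "d"], "b": ["c", "d"], "c": ["a", "b"], "d": ["a", "b"]}
print(are_isomorphic(c4_a, c4_b))  # True
```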
Genetic programming (GP), one of the most advanced forms of evolutionary computation, has been highly successful as a technique for getting computers to automatically solve problems without having to tell them explicitly how. Since its inception more than ten years ago, GP has been used to solve practical problems in a variety of application fields. Alongside these ad hoc engineering approaches, interest has increased in how and why GP works. This book provides a coherent consolidation of recent work on the theoretical foundations of GP. A concise introduction to GP and genetic algorithms (GA) is followed by a discussion of fitness landscapes and other theoretical approaches to natural and artificial evolution. Having surveyed early approaches to GP theory, the book presents a new exact schema analysis, showing that it applies to GP as well as to the simpler GAs. New results on the potentially infinite number of possible programs are followed by two chapters applying these new techniques.
The communication complexity of two-party protocols is a complexity measure only about 15 years old, but it is already considered one of the fundamental complexity measures of modern complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for studying the complexity of concrete computing problems in parallel information processing. In particular, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute a given task. Besides estimating the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that have already been designed. In some cases, knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for that problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery to handle the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of modern complexity theory.
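As a classic illustration of the flavour of the field (a standard textbook example, not necessarily the book's own): two parties can decide whether their n-bit inputs are equal, with small error probability, by exchanging only O(log n) bits via a random-prime fingerprint, whereas deterministic protocols require about n bits. A minimal Python sketch:

```python
import random

def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def equality_protocol(x_bits, y_bits, rng=random):
    """Randomized fingerprint protocol for EQUALITY (illustrative sketch):
    Alice sends (p, x mod p) for a random small prime p; Bob compares."""
    n = len(x_bits)
    x, y = int(x_bits, 2), int(y_bits, 2)
    # with primes up to about n^2, only a small fraction can divide x - y when x != y
    primes = [m for m in range(2, max(n * n, 16)) if is_prime(m)]
    p = rng.choice(primes)
    message = (p, x % p)            # the only communication: O(log n) bits
    return message[1] == y % message[0]

x = "1011010011101011"
y = "1011010011101011"
print(equality_protocol(x, y))      # True; unequal inputs are caught with high probability
```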
This is the first book where mathematics and computer science are directly confronted and joined to tackle intricate problems in computer science with deep mathematical approaches. It contains a collection of refereed papers presented at the Colloquium on Mathematics and Computer Science held at the University of Versailles-St-Quentin on September 18-20, 2000. The colloquium was a meeting place for researchers in mathematics and computer science and thus an important opportunity to exchange ideas and points of view, and to present new approaches and new results in common areas such as the analysis of algorithms, trees, combinatorics, optimization, performance evaluation and probabilities. The book is intended for a broad audience in applied mathematics, discrete mathematics and computer science, including researchers, teachers, graduate students and engineers. It provides an overview of current questions in computer science and related modern mathematical methods. The range of applications is very wide and reaches beyond computer science.
The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples, modeling techniques, model-driven prediction, measurement and metrics, testing techniques, case studies, and conclusions. The core is formed by 12 technical papers, which are framed by motivating real-world examples and case studies, thus illustrating the necessity and the application of the presented methods. While the technical chapters are independent of each other and can be read in any order, the reader will benefit more from the case studies if he or she reads them together with the related techniques. The papers combine topics like modeling, benchmarking, testing, performance evaluation, and dependability, and aim at academic and industrial researchers in these areas as well as graduate students and lecturers in related fields. In this volume, they will find a comprehensive overview of the state of the art in a field of continuously growing practical importance.
This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
This book explores alternative ways of accomplishing secure information transfer with incoherent multi-photon pulses in contrast to conventional Quantum Key Distribution techniques. Most of the techniques presented in this book do not need conventional encryption. Furthermore, the book presents a technique whereby any symmetric key can be securely transferred using the polarization channel of an optical fiber for conventional data encryption. The work presented in this book has largely been practically realized, albeit in a laboratory environment, to offer proof of concept rather than building a rugged instrument that can withstand the rigors of a commercial environment.
The expansion of digital data has transformed various sectors of business such as healthcare, industrial manufacturing, and transportation. A new way of solving business problems has emerged through the use of machine learning techniques in conjunction with big data analytics. Deep Learning Innovations and Their Convergence With Big Data is a pivotal reference for the latest scholarly research on upcoming trends in data analytics and potential technologies that will facilitate insight in various domains of science, industry, business, and consumer applications. Featuring extensive coverage on a broad range of topics and perspectives such as deep neural networks, domain adaptation modeling, and threat detection, this book is ideally designed for researchers, professionals, and students seeking current research on the latest trends in the field of deep learning techniques in big data analytics. Contents include: Deep Auto-Encoders, Deep Neural Networks, Domain Adaptation Modeling, Multilayer Perceptrons (MLP), Natural Language Processing (NLP), Restricted Boltzmann Machines (RBM), and Threat Detection.
G. J. Chaitin is at the IBM Thomas J. Watson Research Center in New York. He has shown that God plays dice not only in quantum mechanics, but even in the foundations of mathematics, where Chaitin discovered mathematical facts that are true for no reason, that are true by accident. This book collects his most wide-ranging and non-technical lectures and interviews, and it will be of interest to anyone concerned with the philosophy of mathematics, with the similarities and differences between physics and mathematics, or with the creative process and mathematics as an art. "Chaitin has put a scratch on the rock of eternity." Jacob T. Schwartz, Courant Institute, New York University, USA. "(Chaitin is) one of the great ideas men of mathematics and computer science." Marcus Chown, author of The Magic Furnace, in NEW SCIENTIST. "Finding the right formalization is a large component of the art of doing great mathematics." John Casti, author of Mathematical Mountaintops, on Godel, Turing and Chaitin in NATURE. "What mathematicians over the centuries - from the ancients, through Pascal, Fermat, Bernoulli, and de Moivre, to Kolmogorov and Chaitin - have discovered, is that it [randomness] is a profoundly rich concept." Jerrold W. Grossman in the MATHEMATICAL INTELLIGENCER.
Alfred Tarski was one of the two giants of the twentieth-century development of logic, along with Kurt Goedel. The four volumes of this collection contain all of Tarski's published papers and abstracts, as well as a comprehensive bibliography. Here will be found many of the works, spanning the period 1921 through 1979, which are the bedrock of contemporary areas of logic, whether in mathematics or philosophy. These areas include the theory of truth in formalized languages, decision methods and undecidable theories, foundations of geometry, set theory, and model theory, algebraic logic, and universal algebra.
A best-seller in its French edition, this book has an original construction, and its success in the French market demonstrates its appeal. It is based on three principles: 1. An organization of the chapters by families of algorithms: exhaustive search, divide and conquer, etc. By contrast, there is no chapter devoted solely to a systematic exposition of, say, algorithms on strings; some of these are found in different chapters. 2. For each family of algorithms, an introduction is given to the mathematical principles and the issues of a rigorous design, with one or two pedagogical examples. 3. For the most part, the book details 150 problems, spanning seven families of algorithms. For each problem, a precise and progressive statement is given. More importantly, a complete solution is detailed, with respect to the design principles that have been presented; often, classical errors are pointed out. Roughly speaking, two thirds of the book are devoted to the detailed rational construction of the solutions.
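To give the flavour of the "families of algorithms" organization, here is a small divide-and-conquer example in Python; it is an illustrative sketch, not one of the book's 150 problems.

```python
def merge_sort(xs):
    """Divide and conquer: split, solve each half recursively, then merge."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```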