This is a thorough introduction to the fundamental concepts of functional programming. KEY TOPICS: The book clearly expounds the construction of functional programming as a process of mathematical calculation, but restricts itself to the mathematics relevant to actual program construction. It covers simple and abstract datatypes, numbers, lists, examples, trees, and efficiency. It includes a simple, yet coherent treatment of the Haskell class system; a calculus of time complexity; and new coverage of monadic input-output. MARKET: For anyone interested in the theory and practice of functional programming.
The aim of this textbook is to present an account of the theory of computation. After introducing the concept of a model of computation and presenting various examples, the author explores the limitations of effective computation via basic recursion theory. Self-reference and other methods are introduced as fundamental and basic tools for constructing and manipulating algorithms. From there the book considers the complexity of computations and the notion of a complexity measure is introduced. Finally, the book culminates in considering time and space measures and in classifying computable functions as being either feasible or not. The author assumes only a basic familiarity with discrete mathematics and computing, making this textbook ideal for a graduate-level introductory course. It is based on many such courses presented by the author and so numerous exercises are included. In addition, the solutions to most of these exercises are provided.
In 1994 Peter Shor [65] published a factoring algorithm for a quantum computer that finds the prime factors of a composite integer N more efficiently than is possible with the known algorithms for a classical computer. Since the difficulty of the factoring problem is crucial for the security of a public key encryption system, interest (and funding) in quantum computing and quantum computation suddenly blossomed. Quantum computing had arrived. The study of the role of quantum mechanics in the theory of computation seems to have begun in the early 1980s with the publications of Paul Benioff [6], [7] who considered a quantum mechanical model of computers and the computation process. A related question was discussed shortly thereafter by Richard Feynman [35] who began from a different perspective by asking what kind of computer should be used to simulate physics. His analysis led him to the belief that with a suitable class of "quantum machines" one could imitate any quantum system.
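To make the factoring claim concrete: the quantum speedup in Shor's algorithm lies entirely in the order-finding step; the surrounding number theory is classical. The following Python sketch is purely illustrative (it is not from the sources cited above) and uses brute-force order finding where a quantum computer would run the efficient quantum subroutine.

```python
from math import gcd
from random import randrange

def order(a, n):
    """Brute-force multiplicative order of a modulo n -- the step that
    Shor's algorithm performs efficiently on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    """Classical reduction from factoring to order finding.
    Assumes n is odd, composite and not a prime power (illustrative only)."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g                      # lucky guess already shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:                # a^(r/2) is a nontrivial square root of 1 mod n
                return gcd(y - 1, n)

print(shor_classical(15))                 # prints 3 or 5
```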
The nationwide research project `Deduktion', funded by the `Deutsche Forschungsgemeinschaft (DFG)' for a period of six years, brought together almost all research groups within Germany engaged in the field of automated reasoning. Intensive cooperation and exchange of ideas led to considerable progress both in the theoretical foundations and in the application of deductive knowledge. This three-volume book covers these original contributions moulded into the state of the art of automated deduction. The three volumes are intended to document and advance a development in the field of automated deduction that can now be observed all over the world. Rather than restricting the interest to purely academic research, the focus now is on the investigation of problems derived from realistic applications. In fact industrial applications are already pursued on a trial basis. In consequence the emphasis of the volumes is not on the presentation of the theoretical foundations of logical deduction as such, as in a handbook; rather the books present the concepts and methods now available in automated deduction in a form which can be easily accessed by scientists working in applications outside of the field of deduction. This reflects the strong conviction that automated deduction is on the verge of being fully included in the evolution of technology. Volume I focuses on basic research in deduction and on the knowledge on which modern deductive systems are based. Volume II presents techniques of implementation and details about system building. Volume III deals with applications of deductive techniques mainly, but not exclusively, to mathematics and the verification of software. Each chapter was read by two referees, one an international expert from abroad and the other a knowledgeable participant in the national project. It has been accepted for inclusion on the basis of these review reports. Audience: Researchers and developers in software engineering, formal methods, certification, verification, validation, specification of complex systems and software, expert systems, natural language processing.
In recent years, deep learning has fundamentally changed the landscapes of a number of areas in artificial intelligence, including speech, vision, natural language, robotics, and game playing. In particular, the striking success of deep learning in a wide variety of natural language processing (NLP) applications has served as a benchmark for the advances in one of the most important tasks in artificial intelligence. This book reviews the state of the art of deep learning research and its successful applications to major NLP tasks, including speech recognition and understanding, dialogue systems, lexical analysis, parsing, knowledge graphs, machine translation, question answering, sentiment analysis, social computing, and natural language generation from images. Outlining and analyzing various research frontiers of NLP in the deep learning era, it features self-contained, comprehensive chapters written by leading researchers in the field. A glossary of technical terms and commonly used acronyms in the intersection of deep learning and NLP is also provided. The book appeals to advanced undergraduate and graduate students, post-doctoral researchers, lecturers and industrial researchers, as well as anyone interested in deep learning and natural language processing.
This book reviews the theoretical concepts, leading-edge techniques and practical tools involved in the latest multi-disciplinary approaches addressing the challenges of big data. Illuminating perspectives from both academia and industry are presented by an international selection of experts in big data science. Topics and features: describes the innovative advances in theoretical aspects of big data, predictive analytics and cloud-based architectures; examines the applications and implementations that utilize big data in cloud architectures; surveys the state of the art in architectural approaches to the provision of cloud-based big data analytics functions; identifies potential research directions and technologies to facilitate the realization of emerging business models through big data approaches; provides relevant theoretical frameworks, empirical research findings, and numerous case studies; discusses real-world applications of algorithms and techniques to address the challenges of big datasets.
This second volume of the book series shows that the R-calculus is a combination of one monotonic tableau proof system and one non-monotonic one. The R-calculus is a Gentzen-type deduction system which is non-monotonic, and is a concrete belief revision operator which is proved to satisfy the AGM postulates and the DP postulates. The book discusses the algebraic and logical properties of tableau proof systems and R-calculi in many-valued logics. It offers a rich blend of theory and practice and is suitable for students, researchers and practitioners in the field of logic. It is also very useful for all those who are interested in data, digitization and the correctness and consistency of information, in modal logics, non-monotonic logics, decidable/undecidable logics, logic programming, description logics, default logics and semantic inheritance networks.
Recent years have seen an explosion of new mathematical results on learning and processing in neural networks. This body of results rests on a breadth of mathematical background which few specialists possess. In a format intermediate between a textbook and a collection of research articles, this book has been assembled to present a sample of these results, and to fill in the necessary background, in such areas as computability theory, computational complexity theory, the theory of analog computation, stochastic processes, dynamical systems, control theory, time-series analysis, Bayesian analysis, regularization theory, information theory, computational learning theory, and mathematical statistics.
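As a minimal sketch of what "learning in neural networks" means at the smallest possible scale (not an example from the book), the following Python snippet trains a single sigmoid neuron by gradient descent on a squared-error loss; the toy data, learning rate and iteration count are illustrative assumptions.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: learn the OR function with a single neuron (illustrative only).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0, 0.5

for epoch in range(2000):
    for (x1, x2), t in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = (y - t) * y * (1 - y)      # derivative of squared error w.r.t. pre-activation
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

print([round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2) for (x1, x2), _ in data])
```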
This book provides insight into the mathematics of the Galerkin finite element method as applied to parabolic equations. The revised second edition has been influenced by recent progress in the application of semigroup theory to stability and error analysis, particularly in the maximum-norm. Two new chapters have also been added, dealing with problems in polygonal, particularly nonconvex, spatial domains, and with time discretization based on using Laplace transformation and quadrature.
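For readers new to the topic, here is a minimal, hedged illustration (not taken from the book) of a Galerkin finite element discretization of a parabolic model problem: piecewise-linear elements in space and backward Euler in time for the heat equation u_t = u_xx on (0,1) with homogeneous Dirichlet boundary conditions.

```python
import numpy as np

# Model problem: u_t = u_xx on (0,1), u(0,t) = u(1,t) = 0,
# u(x,0) = sin(pi x); exact solution exp(-pi^2 t) sin(pi x).
n, T, steps = 50, 0.1, 100              # interior nodes, final time, time steps
h, dt = 1.0 / (n + 1), T / steps
x = np.linspace(h, 1 - h, n)

# Piecewise-linear Galerkin matrices on a uniform mesh: mass M and stiffness A.
M = (h / 6.0) * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
A = (1.0 / h) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

u = np.sin(np.pi * x)                   # nodal values of the initial condition
for _ in range(steps):
    # Backward Euler step: (M + dt*A) u_new = M u_old
    u = np.linalg.solve(M + dt * A, M @ u)

# Maximum nodal error against the exact solution (should be small).
print(np.max(np.abs(u - np.exp(-np.pi**2 * T) * np.sin(np.pi * x))))
```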
With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim at taking their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on its application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically PageRank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenues for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge. It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
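Since the book singles out PageRank, a minimal power-iteration sketch may help fix ideas. This is the generic textbook formulation in Python (not the authors' code), assuming a toy link graph given as an adjacency list and a damping factor of 0.85.

```python
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:                          # distribute rank over out-links
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                             # dangling page: spread rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
```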
This book is concerned with the processing of signals that have been sampled and digitized. The authors present algorithms for the optimization, random simulation, and numerical integration of probability densities for applications of Bayesian inference to signal processing. In particular, methods are developed for the computation of marginal densities and evidence, and are applied to previously intractable problems either involving large numbers of parameters or where the signal model is of a complex form. The emphasis is on the applications of these methods notably to the restoration of digital audio recordings and biomedical data. After a chapter which sets out the main principles of Bayesian inference applied to signal processing, subsequent chapters cover numerical approaches to these techniques, the use of Markov chain Monte Carlo methods, the identification of abrupt changes in data using the Bayesian piecewise linear model, and identifying missing samples in digital audio signals.
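As a hedged illustration of the Markov chain Monte Carlo machinery referred to above (a generic random-walk Metropolis sampler, not one of the book's audio-restoration algorithms), the following Python sketch draws samples from an unnormalised one-dimensional density.

```python
import math, random

def metropolis(log_target, x0=0.0, n=10000, step=1.0):
    """Random-walk Metropolis sampler for an unnormalised log-density."""
    x, samples = x0, []
    for _ in range(n):
        proposal = x + random.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Example target: standard normal density, up to a constant.
draws = metropolis(lambda x: -0.5 * x * x)
print(sum(draws) / len(draws))   # close to 0, the mean of the target
```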
Biology is in the midst of an era yielding many significant discoveries and promising many more. Unique to this era is the exponential growth in the size of information-packed databases. Inspired by a pressing need to analyze that data, Introduction to Computational Biology explores a new area of expertise that emerged from this fertile field: the combination of biological and information sciences.
Bayesian probability theory and maximum entropy methods are at the core of a new view of scientific inference. These new ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. This volume records the Proceedings of the Eleventh Annual Maximum Entropy Workshop, held at Seattle University in June 1991. These workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this volume. There are tutorial papers, theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. The contributions contained in this volume present a state-of-the-art review that will be influential and useful for many years to come.
This monograph offers an original, broad and very diverse exploration of the seriation domain in data analysis, together with building a specific relation to clustering. Relative to a data table crossing a set of objects and a set of descriptive attributes, the search for orders which correspond respectively to these two sets is formalized mathematically and statistically. State-of-the-art methods are created and compared with classical methods, and a thorough understanding of the mutual relationships between these methods is clearly expressed. The authors distinguish two families of methods: geometric representation methods, and algorithmic and combinatorial methods. Original and accurate methods are provided within the framework of both families; their basis and comparison are presented at both theoretical and experimental levels. The experimental analysis is very varied and very comprehensive. Seriation in Combinatorial and Statistical Data Analysis has a unique character in the literature, falling within the fields of Data Analysis, Data Mining and Knowledge Discovery. It will be a valuable resource for students and researchers in these fields.
This long-awaited revision offers a comprehensive introduction to natural language understanding with developments and research in the field today. Building on the effective framework of the first edition, the new edition gives the same balanced coverage of syntax, semantics, and discourse, and offers a uniform framework based on feature-based context-free grammars and chart parsers used for syntactic and semantic processing. Thorough treatment of issues in discourse and context-dependent interpretation is also provided. In addition, this title offers coverage of two entirely new subject areas. First, the text features a new chapter on statistically-based methods using large corpora. Second, it includes an appendix on speech recognition and spoken language understanding. Also, the information on semantics that was covered in the first edition has been greatly expanded in this edition to include an emphasis on compositional interpretation.
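To give a flavour of chart parsing in the simplest possible setting (the book itself works with feature-based grammars, which are richer than this), here is a hedged Python sketch of a CKY chart recognizer for a context-free grammar in Chomsky normal form; the toy grammar is an illustrative assumption.

```python
def cky_recognize(words, grammar, start="S"):
    """CKY chart recognizer for a CFG in Chomsky normal form.
    grammar: list of (lhs, rhs) rules, rhs a 1-tuple (terminal) or 2-tuple."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                      # lexical rules
        chart[i][i + 1] = {lhs for lhs, rhs in grammar if rhs == (w,)}
    for span in range(2, n + 1):                       # binary rules, shortest spans first
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, rhs in grammar:
                    if len(rhs) == 2 and rhs[0] in chart[i][k] and rhs[1] in chart[k][j]:
                        chart[i][j].add(lhs)
    return start in chart[0][n]

rules = [("S", ("NP", "VP")), ("NP", ("Det", "N")), ("Det", ("the",)),
         ("N", ("dog",)), ("VP", ("V", "NP")), ("V", ("sees",)),
         ("NP", ("the",)), ("NP", ("dog",))]
print(cky_recognize("the dog sees the dog".split(), rules))   # True
```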
The book provides an introduction to common programming tools and methods in numerical mathematics and scientific computing. Unlike standard approaches, it does not focus on any specific language, but aims to explain the underlying ideas. Typically, new concepts are first introduced in the particularly user-friendly Python language and then transferred and extended in various programming environments from C/C++, Julia and MATLAB to Maple and Mathematica. This includes various approaches to distributed computing. By examining and comparing different languages, the book is also helpful for mathematicians and practitioners in deciding which programming language to use for which purposes. At a more advanced level, special tools for the automated solution of partial differential equations using the finite element method are discussed. On a more experimental level, the basic methods of scientific machine learning in artificial neural networks are explained and illustrated.
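In the spirit of the book's approach of introducing ideas in Python first (though this particular snippet is not from the book), here is how a basic numerical method, Newton's iteration for a root of f(x) = 0, might look before being transferred to C/C++, Julia or MATLAB.

```python
def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: square root of 2 as the positive root of x^2 - 2 = 0.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))   # 1.41421356...
```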
As organizations continue to develop, there is an increasing need for technological methods that can keep up with the rising amount of data and information being generated. Machine learning has become a powerful tool due to its ability to analyze large amounts of data quickly, and it is one of many technological advancements being implemented in a multitude of specialized fields. An extensive study of the execution of these advancements within professional industries is necessary. Advanced Multi-Industry Applications of Big Data Clustering and Machine Learning is an essential reference source that synthesizes the analytic principles of clustering and machine learning applied to big data and provides an interface between the main disciplines of engineering/technology and the organizational, administrative, and planning abilities of management. Featuring research on topics such as project management, contextual data modeling, and business information systems, this book is ideally designed for engineers, economists, finance officers, marketers, decision makers, business professionals, industry practitioners, academicians, students, and researchers seeking coverage on the implementation of big data and machine learning within specific professional fields.
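As a hedged sketch of the kind of clustering primitive such applications rest on (a generic Lloyd's-algorithm k-means in Python, not anything specific to this volume), assuming a small in-memory set of two-dimensional points:

```python
import random

def kmeans(points, k, iters=100):
    """Lloyd's algorithm for k-means on a list of (x, y) points."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                            # assignment step
            i = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                            + (p[1] - centers[i][1]) ** 2)
            clusters[i].append(p)
        for i, c in enumerate(clusters):            # update step
            if c:
                centers[i] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
print(kmeans(data, 2))   # two centers, near (0.33, 0.33) and (9.33, 9.33)
```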
This third volume of the book series shows that the R-calculus is a Gentzen-type deduction system which is non-monotonic, and is a concrete belief revision operator which is proved to satisfy the AGM postulates and the DP postulates. In this book, the R-calculus is taken to be tableau-based, sequent-based or multisequent-based, so as to preserve the satisfiability of the theory, sequent or multisequent being revised. The R-calculi for Post and three-valued logics are given. This book offers a rich blend of theory and practice. It is suitable for students, researchers and practitioners in the field of logic.
This book demonstrates that fundamental concepts and methods from phenomenological particle physics can be derived rigorously from well-defined general assumptions in a mathematically clean way. Starting with the Wightman formulation of relativistic quantum field theory, the perturbative formulation of quantum electrodynamics is derived avoiding the usual formalism based on the canonical commutation relations. A scattering formalism based on the local-observables approach is developed, directly yielding expressions for the observable inclusive cross-sections without having to introduce the S-matrix. Neither ultraviolet nor infrared regularizations are required in this approach. Although primarily intended for researchers working in this field, anyone with a basic working knowledge of relativistic quantum field theory can benefit from this book.
This book features a selection of extended papers presented at the 9th IFIP WG 12.6 International Workshop on Artificial Intelligence for Knowledge Management, AI4KM 2021, and the 1st International Workshop on Energy and Sustainability, AIES 2021, named AI4KMES 2021 and held in conjunction with IJCAI 2021 in August 2021. The conference was planned to take place in Montreal, Canada, but changed to an online event due to the COVID-19 pandemic. The 15 papers included in this book were carefully reviewed and selected from 17 submissions. They deal with knowledge management and sustainability challenges, focusing on methodological, technical and organizational aspects of AI used for facing related complex problems. This year's topic was AI for Knowledge Management, Energy and Sustainable Future.
This book presents essential concepts of the traditional Flower Pollination Algorithm (FPA) and its recent variants, together with its application in finding optimal solutions to a variety of real-world engineering and medical problems. Swarm intelligence-based meta-heuristic algorithms are extensively used to solve real-world optimization problems due to their adaptability and robustness. FPA is one of the most successful swarm intelligence procedures; developed in 2012, it has been used extensively in various optimization tasks for more than a decade. The mathematical model of FPA is quite straightforward and easy to understand and enhance, compared to other swarm approaches. Hence, FPA has attracted the attention of researchers working to find optimal solutions in a variety of domains, such as N-dimensional numerical optimization, constrained/unconstrained optimization, and linear/nonlinear optimization problems. Along with the traditional FPA, enhanced versions of FPA are also considered for solving a variety of optimization problems in science, engineering, and medical applications.
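A minimal sketch of the standard FPA update rules may be useful here: global pollination via Levy flights toward the current best solution with switch probability p, and local pollination mixing two random solutions. The Python code below is illustrative only; the objective function, bounds and parameter values are assumptions, not the book's settings.

```python
import numpy as np
from math import gamma, sin, pi

def levy(dim, lam=1.5):
    """Levy-distributed step lengths via Mantegna's algorithm."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2)
             / (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa(objective, dim=5, n_flowers=20, p=0.8, iters=500, bounds=(-5.0, 5.0)):
    """Basic Flower Pollination Algorithm with a greedy replacement rule."""
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, (n_flowers, dim))
    fit = np.array([objective(x) for x in pop])
    best_i = int(fit.argmin())
    best, best_f = pop[best_i].copy(), fit[best_i]
    for _ in range(iters):
        for i in range(n_flowers):
            if np.random.rand() < p:          # global pollination: Levy flight toward best
                cand = pop[i] + levy(dim) * (best - pop[i])
            else:                             # local pollination: mix two random flowers
                j, k = np.random.choice(n_flowers, 2, replace=False)
                cand = pop[i] + np.random.rand() * (pop[j] - pop[k])
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fit[i]:                    # keep the candidate only if it improves
                pop[i], fit[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

print(fpa(lambda x: float(np.sum(x ** 2))))   # best solution near the origin
```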
The theory presented in this book is developed constructively, is based on a few axioms encapsulating the notion of objects (points and sets) being apart, and encompasses both point-set topology and the theory of uniform spaces. While the classical-logic-based theory of proximity spaces provides some guidance for the theory of apartness, the notion of nearness/proximity does not embody enough algorithmic information for a deep constructive development. The use of constructive (intuitionistic) logic in this book requires much more technical ingenuity than one finds in classical proximity theory - algorithmic information does not come cheaply - but it often reveals distinctions that are rendered invisible by classical logic. In the first chapter the authors outline informal constructive logic and set theory, and, briefly, the basic notions and notations for metric and topological spaces. In the second they introduce axioms for a point-set apartness and then explore some of the consequences of those axioms. In particular, they examine a natural topology associated with an apartness space, and relations between various types of continuity of mappings. In the third chapter the authors extend the notion of point-set (pre-)apartness axiomatically to one of (pre-)apartness between subsets of an inhabited set. They then provide axioms for a quasiuniform space, perhaps the most important type of set-set apartness space. Quasiuniform spaces play a major role in the remainder of the chapter, which covers such topics as the connection between uniform and strong continuity (arguably the most technically difficult part of the book), apartness and convergence in function spaces, types of completeness, and neat compactness. Each chapter has a Notes section, in which are found comments on the definitions, results, and proofs, as well as occasional pointers to future work. The book ends with a Postlude that refers to other constructive approaches to topology, with emphasis on the relation between apartness spaces and formal topology. Largely an exposition of the authors' own research, this is the first book dealing with the apartness approach to constructive topology, and is a valuable addition to the literature on constructive mathematics and on topology in computer science. It is aimed at graduate students and advanced researchers in theoretical computer science, mathematics, and logic who are interested in constructive/algorithmic aspects of topology.
Random Generation of Trees is about a field at the crossroads between computer science, combinatorics and probability theory. Computer scientists need random generators for performance analysis, simulation, image synthesis, etc. In this context random generation of trees is of particular interest. The algorithms presented here are efficient and easy to code. Some aspects of Horton-Strahler numbers, programs written in C and pictures are presented in the appendices. The complexity analysis is done rigorously both in the worst and average cases. Random Generation of Trees is intended for students in computer science and applied mathematics as well as researchers interested in random generation.
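As a hedged illustration of the flavour of such algorithms (not one of the book's own, which are given in C), the Python sketch below generates a binary tree with n internal nodes uniformly at random by choosing the left-subtree size with probability proportional to a product of Catalan numbers.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def catalan(n):
    """Number of binary trees with n internal nodes."""
    if n == 0:
        return 1
    return sum(catalan(k) * catalan(n - 1 - k) for k in range(n))

def random_binary_tree(n):
    """Uniformly random binary tree with n internal nodes, as nested tuples."""
    if n == 0:
        return None                               # a leaf
    r = random.randrange(catalan(n))
    acc = 0
    for k in range(n):                            # k = size of the left subtree
        acc += catalan(k) * catalan(n - 1 - k)
        if r < acc:
            return (random_binary_tree(k), random_binary_tree(n - 1 - k))

print(random_binary_tree(4))
```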
Gaussian linear modelling cannot address current signal processing demands. In modern contexts, such as Independent Component Analysis (ICA), progress has been made specifically by imposing non-Gaussian and/or non-linear assumptions. Hence, standard Wiener and Kalman theories no longer enjoy their traditional hegemony in the field, revealing the standard computational engines for these problems. In their place, diverse principles have been explored, leading to a consequent diversity in the implied computational algorithms. The traditional on-line and data-intensive preoccupations of signal processing continue to demand that these algorithms be tractable. Increasingly, full probability modelling (the so-called Bayesian approach) - or partial probability modelling using the likelihood function - is the pathway for design of these algorithms. However, the results are often intractable, and so the area of distributional approximation is of increasing relevance in signal processing. The Expectation-Maximization (EM) algorithm and Laplace approximation, for example, are standard approaches to handling difficult models, but these approximations (certainty equivalence, and Gaussian, respectively) are often too drastic to handle the high-dimensional, multi-modal and/or strongly correlated problems that are encountered. Since the 1990s, stochastic simulation methods have come to dominate Bayesian signal processing. Markov Chain Monte Carlo (MCMC) sampling, and related methods, are appreciated for their ability to simulate possibly high-dimensional distributions to arbitrary levels of accuracy. More recently, the particle filtering approach has addressed on-line stochastic simulation. Nevertheless, the wider acceptability of these methods - and, to some extent, Bayesian signal processing itself - has been undermined by the large computational demands they typically make.
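As a toy-sized, hedged illustration of one of the approximation tools mentioned above, the following Python sketch runs the EM algorithm on a one-dimensional two-component Gaussian mixture; the synthetic data and initialisation are illustrative assumptions, not the book's models.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture (means, variances, weights)."""
    mu = np.array([x.min(), x.max()])        # crude initialisation
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)
    return mu, var, w

data = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 1, 300)])
print(em_gmm_1d(data))   # means close to -2 and 3
```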
Discrete Mathematics for New Technology has been designed to cover the core mathematics requirement for undergraduate computer science students in the UK and the USA. This has been approached in a comprehensive way whilst maintaining an easy-to-follow progression from the basic mathematical concepts covered by the GCSE in the UK and by high-school algebra in the USA, to the more sophisticated mathematical concepts examined in the latter stages of the book. The rigorous treatment of theory is punctuated by frequent use of pertinent examples. This is then reinforced with exercises to allow the reader to achieve a "feel" for the subject at hand. Hints and solutions are provided for these brain-teasers at the end of the book. Although aimed primarily at computer science students, the structured development of the mathematics enables this text to be used by undergraduate mathematicians, scientists and others who require an understanding of discrete mathematics. The topics covered include: logic and the nature of mathematical proof; set theory, relations and functions; matrices and systems of linear equations; algebraic structures; Boolean algebras; and a thorough treatise on graph theory. The authors have extensive experience of teaching undergraduate mathematics at colleges and universities in the British and American systems. They have developed and taught courses for a variety of non-specialists and have established reputations for presenting rigorous mathematical concepts in a manner which is accessible to this audience. Their current research interests lie in the fields of algebra, topology and mathematics education. Discrete Mathematics for New Technology is therefore a rare thing: a readable, friendly textbook designed for non-mathematicians, presenting material which is at the foundations of mathematics itself. It is essential reading.
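As one hedged example of the kind of material covered (not taken from the book), a short Python routine that checks a propositional identity by enumerating its truth table:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force truth-table check: formula is a function of num_vars booleans."""
    return all(formula(*values) for values in product([False, True], repeat=num_vars))

# Example: De Morgan's law, not (p and q) <=> (not p) or (not q).
print(is_tautology(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2))   # True
```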