This book is a product of the Third International Conference on Computing, Mathematics and Statistics (iCMS2017), held in Langkawi in November 2017. It is divided into four sections according to the thrust areas: Computer Science, Mathematics, Statistics, and Multidisciplinary Applications. All sections confront current issues that society faces. The book brings together quantitative as well as qualitative research methods that are also suitable for future research undertakings. Researchers in Computer Science, Mathematics and Statistics can use this book as a sourcebook to enrich their research.
Although the computing facilities available to scientists are becoming more powerful, the problems they are addressing are increasingly complex. The mathematical methods for simplifying the computing procedures are therefore as important as ever. Microcomputer Algorithms: Action from Algebra stresses the mathematical basis behind the use of many algorithms of computational mathematics, providing detailed descriptions of how to generate algorithms for a wide variety of uses. Covering a broad range of mathematical and physical applications, the book contains the theory of 25 algorithms. The mathematical theory for each algorithm is described in detail before the algorithm itself is discussed in full, with complete program listings. The book presents the algorithms in modular form, allowing for easy interpretation, for adaptation to readers' specific requirements without difficulty, and for use with various microcomputers. Blending mathematics and programming in one volume, this book will be of broad interest to all scientists and engineers, particularly physicists using microcomputers for scientific problem solving. Students handling numerical data for research projects will also find the book useful.
In this book, the development of the English dictionary is examined, along with the kinds of dictionary available, the range of information they contain, factors affecting their usage, and public attitudes towards them. As well as a descriptive analysis of word meaning, the author considers whether a thematic, thesaurus-like presentation might be better suited than the traditional alphabetical format to the description of words and their meaning.
The main focus of this textbook is the basic unit of information and the way in which our understanding of it has evolved over time. In particular, the author covers concepts related to information, classical computing, logic, reversible computing, quantum mechanics, quantum computing, thermodynamics and some artificial intelligence and biology, all approached from the viewpoint of computer science. The book begins by asking a nontrivial question: what is a bit? The author then discusses logic, logic gates, reversible computing and reversible architectures, and the concept of disorder. He then establishes the relationship between three essential questions that justify quantum approaches in computer science: the energy required to perform a real-life computation, the size of current processors, and the reversibility of quantum operations. Based on these concepts, the author establishes the conditions that justify the use of quantum techniques for certain kinds of computational tasks, and he uses formal descriptions and formal argumentation to introduce key quantum mechanical concepts and approaches. The rest of the book differs in style, focusing on practical issues, including a discussion of remarkable quantum algorithms in a treatment based on quantum circuit theory. The book is valuable for graduate students in computer science, and for students of other disciplines who are engaged with physical models of information and computing.
This book is a specialized monograph on the development of the mathematical and computational metatheory of reductive logic and proof-search, areas of logic that are becoming important in computer science. A systematic foundational text on these emerging topics, it includes proof-theoretic, semantic/model-theoretic and algorithmic aspects. The scope ranges from the conceptual background to reductive logic, through its mathematical metatheory, to its modern applications in the computational sciences. Suitable for researchers and graduate students in mathematical, computational and philosophical logic, and in theoretical computer science and artificial intelligence, this is the latest in the prestigious, world-renowned Oxford Logic Guides, which also contains Michael Dummett's Elements of Intuitionism (2nd edition); Dov M. Gabbay, Mark A. Reynolds, and Marcelo Finger's Temporal Logic: Mathematical Foundations and Computational Aspects; J. M. Dunn and G. Hardegree's Algebraic Methods in Philosophical Logic; H. Rott's Change, Choice and Inference: A Study of Belief Revision and Nonmonotonic Reasoning; and P. T. Johnstone's Sketches of an Elephant: A Topos Theory Compendium, Volumes 1 and 2.
Taking a highly pragmatic approach to presenting the principles and applications of chemical engineering, this companion text for students and working professionals offers an easily accessible guide to solving problems using computers. The primer covers the core concepts of chemical engineering, from conservation laws all the way up to chemical kinetics, without heavy stress on theory, and is designed to accompany traditional larger core texts. The book presents the basic principles and techniques of chemical engineering processes and helps readers identify typical problems and how to solve them. The focus is on systematic algorithms that employ numerical methods to solve different chemical engineering problems by describing and transforming the information. Problems are assigned for each chapter, ranging from simple to difficult, allowing readers to gradually build their skills and tackle a broad range of problems. MATLAB(R) and Excel(R) are used to solve many examples, and the more than 70 real examples throughout the book include computer or hand solutions, or in many cases both. The book:
- Introduces the reader to chemical engineering computation without the distractions found in many texts
- Provides the principles underlying all of the major processes a chemical engineer may encounter, and offers insight into their analysis, which is essential for design calculations
- Shows how to solve chemical engineering problems that require numerical methods, using standard software such as MATLAB(R) and Excel(R)
- Contains selected solved examples of many problems within the chemical process industry to demonstrate how to solve them using the techniques presented in the text
- Includes a variety of case studies to illustrate the concepts, and a downloadable file containing fully worked solutions to the book's problems on the publisher's website
- Offers non-chemical engineers who are expected to work with chemical engineers on projects, scale-ups and process evaluations a solid understanding of basic chemical engineering analysis, design, and calculations
Visualisation and Processing of Tensor Fields provides researchers with an inspirational look at how to process and visualize complicated 2D and 3D images known as tensor fields. Tensor fields are the natural representation for many physical quantities; they can describe how water moves around in the brain, how gravity varies around the earth, or how materials are stressed and deformed. With its numerous color figures, this book helps the reader understand both the underlying mathematics and the applications of tensor fields. The reader will also learn about the most recent research topics and open research questions.
In this book, Professor Salomaa gives an introduction to certain mathematical topics central to theoretical computer science: computability and recursive functions, formal languages and automata, computational complexity and cryptography. The presentation is essentially self-contained with detailed proofs of all statements provided, yet without sacrificing readability. Professor Salomaa is well known for his books in this area; the present work will be welcomed as an exposition that begins with basics familiar to advanced undergraduate students yet proceeds to some of the most important recent developments in theoretical computer science.
Randomized algorithms have become a central part of the algorithms curriculum, based on their increasingly widespread use in modern applications. This book presents a coherent and unified treatment of probabilistic techniques for obtaining high probability estimates on the performance of randomized algorithms. It covers the basic toolkit from the Chernoff-Hoeffding bounds to more sophisticated techniques like martingales and isoperimetric inequalities, as well as some recent developments like Talagrand's inequality, transportation cost inequalities and log-Sobolev inequalities. Along the way, variations on the basic theme are examined, such as Chernoff-Hoeffding bounds in dependent settings. The authors emphasise comparative study of the different methods, highlighting respective strengths and weaknesses in concrete example applications. The exposition is tailored to discrete settings sufficient for the analysis of algorithms, avoiding unnecessary measure-theoretic details, thus making the book accessible to computer scientists as well as probabilists and discrete mathematicians.
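As a minimal illustration of the kind of tail bound the book's toolkit begins with (this example is not taken from the book, and the function name is our own): for n independent samples bounded in [0, 1], the two-sided Hoeffding inequality bounds the probability that the empirical mean strays from its expectation by eps or more.

```python
import math

def hoeffding_bound(n, eps):
    """Two-sided Hoeffding bound: P(|empirical mean - true mean| >= eps)
    is at most 2*exp(-2*n*eps**2) for n independent samples in [0, 1]."""
    return 2 * math.exp(-2 * n * eps ** 2)

# For 1000 fair-coin flips, the chance that the observed frequency of
# heads deviates from 1/2 by 0.1 or more is at most about 4.1e-9:
print(hoeffding_bound(1000, 0.1))
```

Exponentially small tails of this kind are what let a randomized algorithm's success probability be amplified cheaply by independent repetition, the recurring theme of the techniques the blurb lists.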
This essential companion volume to Chaitin's highly successful books The Unknowable and The Limits of Mathematics, also published by Springer, presents the technical core of his theory of program-size complexity, also known as algorithmic information theory. (The two previous volumes are more concerned with applications to metamathematics.) LISP is used to present the key algorithms and to enable computer users to interact with the author's proofs and discover for themselves how they work. The LISP code for this book is available at the author's Web site, together with a Java applet LISP interpreter: http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ait/ "No one has looked deeper and farther into the abyss of randomness and its role in mathematics than Greg Chaitin. This book tells you everything he's seen. Don't miss it." - John Casti, Santa Fe Institute, author of "Goedel: A Life of Logic"
Fast Solvers for Mesh-Based Computations presents an alternative way of constructing multi-frontal direct solver algorithms for mesh-based computations, and describes how to design and implement those algorithms. The book's structure follows that of the matrices, starting from tri-diagonal matrices resulting from one-dimensional mesh-based methods, through multi-diagonal or block-diagonal matrices, and ending with general sparse matrices. Each chapter explains how to design and implement a parallel sparse direct solver specific to a particular matrix structure. All the solvers presented are either designed from scratch or based on previously designed and implemented solvers. Each chapter also derives the complete Java or Fortran code of the parallel sparse direct solver. The exemplary Java codes can be used as references for designing parallel direct solvers in more efficient languages for specific parallel machine architectures. The author also derives exemplary element frontal matrices for different one-, two-, or three-dimensional mesh-based computations. These matrices can be used as references for testing the developed parallel direct solvers. Based on more than 10 years of the author's experience in the area, this book is a valuable resource for researchers and graduate students who would like to learn how to design and implement parallel direct solvers for mesh-based computations.
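The tri-diagonal starting point of the book can be illustrated with the classic Thomas algorithm, a sequential O(n) direct solver. This is a standard textbook sketch in Python, not one of the book's Java/Fortran parallel solvers, and the function name is our own.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, c the super-diagonal (c[-1] unused).
    Forward elimination followed by back substitution; O(n) work."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# The 1D Laplacian stencil [-1, 2, -1] with right-hand side [1, 0, 1]
# has the exact solution x = [1, 1, 1]:
print(thomas_solve([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
                   [1.0, 0.0, 1.0]))
```

The multi-frontal solvers the book develops generalize this idea: they eliminate unknowns front by front over the mesh rather than row by row along a single diagonal band.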
Combining concepts of mathematics and computer science, this book is about the sequences of symbols that can be generated by simple models of computation called "finite automata". Suitable for graduate students or advanced undergraduates, it starts from elementary principles and develops the basic theory. The study then progresses to show how these ideas can be applied to solve problems in number theory and physics.
Computational Mathematics: Models, Methods, and Analysis with MATLAB (R) and MPI is a unique book covering the concepts and techniques at the core of computational science. The author delivers a hands-on introduction to nonlinear, 2D, and 3D models; nonrectangular domains; systems of partial differential equations; and large algebraic problems requiring high-performance computing. The book shows how to apply a model, select a numerical method, implement computer simulations, and assess the ensuing results. Providing a wealth of MATLAB, Fortran, and C++ code online for download, the Second Edition of this very popular text:
- Includes a new chapter with two sections on the finite element method, two sections on shallow water waves, and two sections on the driven cavity problem
- Introduces multiprocessor/multicore computers, parallel MATLAB, and the message passing interface (MPI) in the chapter on high-performance computing
- Updates and adds code and documentation
Computational Mathematics: Models, Methods, and Analysis with MATLAB (R) and MPI, Second Edition is an ideal textbook for an undergraduate course taught to mathematics, computer science, and engineering students. By using code in practical ways, students take their first steps toward more sophisticated numerical modeling.
The genesis of this book goes back to a conference held at the University of Bologna in June 1999 on collaborative work between the University of California at Berkeley and the University of Bologna. The book, in its present form, is a compilation of some of the recent work using geometric partial differential equations and the level set methodology in medical and biomedical image analysis. The book not only gives a good overview of some of the traditional applications in medical imagery, such as CT, MR and ultrasound, but also shows some new and exciting applications in the life sciences, such as confocal microscope image understanding.
The authors' treatment of data structures in Data Structures and Algorithms is unified by an informal notion of "abstract data types," allowing readers to compare different implementations of the same concept. Algorithm design techniques are also stressed and basic algorithm analysis is covered. Most of the programs are written in Pascal.
This book provides an up-to-date account of current research in quantum information theory, at the intersection of theoretical computer science, quantum physics, and mathematics. The book confronts many unprecedented theoretical challenges generated by infinite dimensionality and memory effects in quantum communication. The book will also equip readers with all the mathematical tools required to understand these essential questions.
Automata theory lies at the foundation of computer science, and is vital to a theoretical understanding of how computers work and what constitutes formal methods. This treatise gives a rigorous account of the topic and illuminates its real meaning by looking at the subject in a variety of ways. The first part of the book is organised around notions of rationality and recognisability. The second part deals with relations between words realised by finite automata, which not only exemplifies the automata theory but also illustrates the variety of its methods and its fields of application. Many exercises are included, ranging from those that test the reader, to those that are technical results, to those that extend ideas presented in the text. Solutions or answers to many of these are included in the book.
The Primality Testing Problem (PTP) has now been proved solvable in deterministic polynomial time (P) by the AKS (Agrawal-Kayal-Saxena) algorithm, whereas the Integer Factorization Problem (IFP) still has no known polynomial-time algorithm. Many practical public-key cryptosystems and protocols, such as RSA (Rivest-Shamir-Adleman), rely for their security on the computational intractability of IFP. Primality Testing and Integer Factorization in Public Key Cryptography, Second Edition, provides a survey of recent progress in primality testing and integer factorization, with implications for factoring-based public key cryptography. Notable new features include the comparison of the Rabin-Miller probabilistic test in RP, the Atkin-Morain elliptic curve test in ZPP and the AKS deterministic test. This volume is designed for advanced-level students in computer science and mathematics, and as a secondary text or reference book it is also suitable for practitioners and researchers in industry. The first edition was very positively reviewed by Prof. Samuel Wagstaff of Purdue University in AMS Mathematical Reviews (see MR2028480 2004j:11148), and by Professor J. T. Ayuso of the University of Simon Bolivar in the European Mathematical Society's review journal Zentralblatt für Mathematik (see Zbl 1048.11103).
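The Rabin-Miller test mentioned above admits a compact sketch. The following is a standard textbook formulation in Python, not code from the book; a "composite" verdict is always correct, while a "probably prime" verdict errs with probability at most 4 to the power of minus the number of rounds.

```python
import random

def miller_rabin(n, rounds=20):
    """Rabin-Miller probabilistic primality test (standard formulation)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)       # random base
        x = pow(a, d, n)                     # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):               # repeated squaring
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a is a witness: n is composite
    return True                              # probably prime
```

The AKS test discussed in the book decides the same question deterministically in polynomial time, but Rabin-Miller remains the workhorse in practice because of its speed.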
Many machine learning tasks involve solving complex optimization problems, such as working on non-differentiable, non-continuous, and non-unique objective functions; in some cases it can prove difficult to even define an explicit objective function. Evolutionary learning applies evolutionary algorithms to address optimization problems in machine learning, and has yielded encouraging outcomes in many applications. However, due to the heuristic nature of evolutionary optimization, most outcomes to date have been empirical and lack theoretical support. This shortcoming has kept evolutionary learning from being well received in the machine learning community, which favors solid theoretical approaches. Recently there have been considerable efforts to address this issue. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
This volume contains the articles presented at the 20th International Meshing Roundtable (IMR), organized in part by Sandia National Laboratories and held in Paris, France, on October 23-26, 2011. This was the first year the IMR was held outside the United States. Other sponsors of the 20th IMR were the Systematic Paris Region Systems & ICT Cluster, AIAA, NAFEMS, CEA, and NSF. Sandia National Laboratories started the first IMR in 1992, and the conference has been held annually since. Each year the IMR brings together researchers, developers, and application experts from a variety of disciplines to present and discuss ideas on mesh generation and related topics. The topics covered by the IMR have applications in numerical analysis, computational geometry, and computer graphics, as well as other areas, and the presentations describe novel work ranging from theory to application.
DLP denotes a dynamic-linear modeling and optimization approach to computational decision support for resource planning problems that arise, typically, within the natural resource sciences and the disciplines of operations research and operational engineering. It integrates techniques of dynamic programming (DP) and linear programming (LP) and can be realized in an immediate, practical and usable way. Simultaneously, DLP connotes a broad and very general modeling/algorithmic concept that has numerous areas of application and possibilities for extension. Two motivating examples provide a linking thread through the main chapters, and an appendix provides a demonstration program, executable on a PC, for hands-on experience with the DLP approach.
This book was written to serve as an introduction to logic, with special emphasis in each chapter, where applicable, on the interplay between logic and philosophy, mathematics, language and (theoretical) computer science. The reader is provided with an introduction not only to classical logic, but also to philosophical (modal, epistemic, deontic, temporal) and intuitionistic logic. The first chapter is an easy-to-read, non-technical introduction to the topics in the book. The next chapters cover, consecutively, Propositional Logic; Sets (finite and infinite); Predicate Logic; Arithmetic and Goedel's Incompleteness Theorems; Modal Logic; Philosophy of Language; Intuitionism and Intuitionistic Logic; Applications (Prolog; Relational Databases and SQL; Social Choice Theory, in particular Majority Judgment); and finally, Fallacies and Unfair Discussion Methods. Throughout the text, the author provides some impressions of the historical development of logic: Stoic and Aristotelian logic, logic in the Middle Ages and Frege's Begriffsschrift, together with the works of George Boole (1815-1864) and Augustus De Morgan (1806-1871), the origin of modern logic. Since "if ..., then ..." can be considered the heart of logic, much attention is paid to conditionals throughout this book: material, strict and relevant implication, entailment, counterfactuals and conversational implicature are treated, and many references for further reading are given. Each chapter concludes with answers to the exercises. "Philosophical and Mathematical Logic is a very recent book (2018), but with every aspect of a classic. What a wonderful book! A work written with all the necessary rigor, with immense depth, but without giving up clarity and good taste. Philosophy and mathematics go hand in hand with the most diverse themes of logic. An introductory text, but not only that. It goes much further. It's worth diving into the pages of this book, dear reader!" - Paulo Sergio Argolo
Deep Learning with R introduces deep learning and neural networks using the R programming language. The book builds an understanding of the theoretical and mathematical constructs and enables the reader to create applications in computer vision, natural language processing and transfer learning. The book starts with an introduction to machine learning and moves on to describe the basic architecture, different activation functions, forward propagation, cross-entropy loss and backward propagation of a simple neural network. It goes on to create different code segments to construct deep neural networks. It discusses in detail the initialization of network parameters, optimization techniques, and some of the common issues surrounding neural networks, such as dealing with NaNs and the vanishing/exploding gradient problem. Advanced variants of multilayered perceptrons, namely convolutional neural networks and sequence models, are explained, followed by their application to different use cases. The book makes extensive use of the Keras and TensorFlow frameworks.
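Forward propagation, the first of the mechanics the book walks through, can be sketched for a one-hidden-layer network with sigmoid activations. This is a minimal pure-Python illustration under our own naming, not the book's R/Keras code.

```python
import math

def forward(x, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer network with sigmoid activations.
    x: input vector; w1, b1: hidden-layer weight rows and biases;
    w2, b2: output-layer weights and bias. Returns a scalar in (0, 1)."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    # Hidden layer: one sigmoid unit per (weight row, bias) pair.
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    # Output layer: weighted sum of hidden activations, squashed again.
    return sigmoid(sum(wi * hi for wi, hi in zip(w2, h)) + b2)
```

Backward propagation then runs these steps in reverse, pushing the derivative of the cross-entropy loss through each weighted sum and activation; frameworks such as Keras and TensorFlow automate exactly that bookkeeping.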
The first edition of "Integrated Methods for Optimization" was published in January 2007. Because the book covers a rapidly developing field, the time is right for a second edition. The book provides a unified treatment of optimization methods, bringing ideas from mathematical programming (MP), constraint programming (CP), and global optimization (GO) into a single volume. There is no reason these must be learned as separate fields, as they normally are, and there are three reasons they should be studied together. (1) There is much in common among them intellectually, and to a large degree they can be understood as special cases of a single underlying solution technology. (2) A growing literature reports how they can be profitably integrated to formulate and solve a wide range of problems. (3) Several software packages now incorporate techniques from two or more of these fields. The book provides a unique resource for graduate students and practitioners who want a well-rounded background in optimization methods within a single course of study. Engineering students are a particularly large potential audience, because engineering optimization problems often benefit from a combined approach, particularly where design, scheduling, or logistics are involved. The text is also of value to those studying operations research, because their educational programs rarely cover CP, and to those studying computer science and artificial intelligence (AI), because their curricula typically omit MP and GO. The text is also useful for practitioners in any of these areas who want to learn about another, because it provides a more concise and accessible treatment than other texts. The book can cover so wide a range of material because it focuses on ideas that are relevant to the methods used in general-purpose optimization and constraint solvers.
The book focuses on ideas behind the methods that have proved useful in general-purpose optimization and constraint solvers, as well as integrated solvers of the present and foreseeable future. The second edition updates results in this area and includes several major new topics:
- Background material in linear, nonlinear, and dynamic programming
- Network flow theory, due to its importance in filtering algorithms
- A chapter on generalized duality theory that more explicitly develops a unifying primal-dual algorithmic structure for optimization methods
- An extensive survey of search methods from both MP and AI, using the primal-dual framework as an organizing principle
- Coverage of several additional global constraints used in CP solvers
The book continues to focus on exact as opposed to heuristic methods. It is possible to bring heuristic methods into the unifying scheme described in the book, and the new edition retains the brief discussion of how this might be done.