The book comprises a collection of benchmarks and examples for porous media mechanics gathered over the last twenty years. Analysis of thermo-hydro-mechanical-chemical (THMC) processes is essential to many applications in environmental engineering, such as geological waste deposition, geothermal energy utilisation, carbon capture and storage, water resources management, hydrology, and even climate change. In order to assess the feasibility as well as the safety of geotechnical applications, process-based modelling is the only tool able to put numbers on, i.e. to quantify, future scenarios. This places a huge responsibility on the reliability of computational tools. Benchmarking is an appropriate methodology to verify the quality of modelling tools based on best practices. Moreover, benchmarking and code comparison foster community efforts. The benchmark book is part of the OpenGeoSys initiative - an open source project to share knowledge and experience in environmental analysis and scientific computation.
This book opens the door to the new, interesting and ambitious world of reversible and quantum computing research. It presents the state of the art required to travel around that world safely. Top universities, companies and government institutions worldwide are racing to develop new methodologies, algorithms and circuits for reversible logic, quantum logic, reversible and quantum computing and nano-technologies. In this book, twelve reversible logic synthesis methodologies are presented for the first time in a single volume, along with some new proposals. Sequential reversible logic circuits are also discussed for the first time in book form. Reversible logic plays an important role in quantum computing, and any progress in the domain of reversible logic can be directly applied to quantum logic. One of the goals of this book is to show the application of reversible logic in quantum computing; a new implementation of wavelet and multiwavelet transforms using quantum computing is presented for this purpose. Researchers in academia or industry and graduate students who work in logic synthesis, quantum computing, nano-technology, and low power VLSI circuit design will be interested in this book.
This book covers the new topic of GPU computing, with many applications taken from diverse fields such as networking, seismology, fluid mechanics, nano-materials, data mining, earthquakes, mantle convection, and visualization. It shows why GPU computing is important and easy to use, why it is useful, and how to implement codes in everyday situations.
This book contains a selection of the papers presented at the ITACOSM 2013 Conference, held in Milan in June 2013. It is intended as an international forum for scientific discussion of developments in the theory and application of survey sampling methodologies in the human and natural sciences. The book gathers research papers carefully selected from both invited and contributed sessions of the conference. The book as a whole is a relevant contribution to various key aspects of sampling methodology and techniques; it deals with some hot topics in sampling theory, such as calibration, quantile-regression and multiple frame surveys, and with innovative methodologies in important topics of both sampling theory and applications. Contributions cut across current sampling methodologies such as interval estimation for complex samples, randomized responses, bootstrap, weighting, modeling, imputation, small area estimation and effective use of auxiliary information; applications cover a wide and enlarging range of subjects in official household surveys, Bayesian networks, auditing, business and economic surveys, geostatistics and agricultural statistics. The book is an updated, high-level reference addressed to researchers, professionals and practitioners in many fields.
This book offers a self-contained exposition of the theory of computability in a higher-order context, where 'computable operations' may themselves be passed as arguments to other computable operations. The subject originated in the 1950s with the work of Kleene, Kreisel and others, and has since expanded in many different directions under the influence of workers from both mathematical logic and computer science. The ideas of higher-order computability have proved valuable both for elucidating the constructive content of logical systems, and for investigating the expressive power of various higher-order programming languages. In contrast to the well-known situation for first-order functions, it turns out that at higher types there are several different notions of computability competing for our attention, and each of these has given rise to its own strand of research. In this book, the authors offer an integrated treatment that draws together many of these strands within a unifying framework, revealing not only the range of possible computability concepts but the relationships between them. The book will serve as an ideal introduction to the field for beginning graduate students, as well as a reference for advanced researchers.
Thirty years ago mathematical computation, as opposed to applied numerical computation, was difficult to perform and so relatively little used. Three threads changed that: the emergence of the personal computer; the discovery of fiber-optics and the consequent development of the modern internet; and the building of the three "M's": Maple, Mathematica and Matlab. We intend to persuade readers that Mathematica and other similar tools are worth knowing, assuming only that one wishes to be a mathematician, a mathematics educator, a computer scientist, an engineer or scientist, or anyone else who wishes or needs to use mathematics better. We also hope to explain how to become an "experimental mathematician" while learning to be better at proving things. To accomplish this, our material is divided into three main chapters followed by a postscript. These cover elementary number theory, calculus of one and several variables, introductory linear algebra, and visualization and interactive geometric computation.
Pattern Classification: A Unified View of Statistical and Neural Approaches. The product of years of research and practical experience in pattern classification, this book offers a theory-based engineering perspective on neural networks and statistical pattern classification. Pattern Classification sheds new light on the relationship between seemingly unrelated approaches to pattern recognition, including statistical methods, polynomial regression, multilayer perceptrons, and radial basis functions. Important topics such as feature selection, reject criteria, classifier performance measurement, and classifier combinations are fully covered, as well as material on techniques that, until now, would have required an extensive literature search to locate. A full program of illustrations, graphs, and examples helps make the operations and general properties of different classification approaches intuitively understandable. Offering a lucid presentation of complex applications and their algorithms, Pattern Classification is an invaluable resource for researchers, engineers, and graduate students in this rapidly developing field.
This book offers a snapshot of the state-of-the-art in classification at the interface between statistics, computer science and application fields. The contributions span a broad spectrum, from theoretical developments to practical applications; they all share a strong computational component. The topics addressed are from the following fields: Statistics and Data Analysis; Machine Learning and Knowledge Discovery; Data Analysis in Marketing; Data Analysis in Finance and Economics; Data Analysis in Medicine and the Life Sciences; Data Analysis in the Social, Behavioural, and Health Care Sciences; Data Analysis in Interdisciplinary Domains; Classification and Subject Indexing in Library and Information Science. The book presents selected papers from the Second European Conference on Data Analysis, held at Jacobs University Bremen in July 2014. This conference unites diverse researchers in the pursuit of a common topic, creating truly unique synergies in the process.
This proceedings volume is a collection of peer-reviewed papers presented at the 8th International Conference on Soft Methods in Probability and Statistics (SMPS 2016), held in Rome, Italy. The book is dedicated to data science, which aims at developing automated methods to analyze massive amounts of data and to extract knowledge from them. It shows how data science employs various programming techniques and methods of data wrangling, data visualization, machine learning, probability and statistics. The soft methods proposed in this volume represent a collection of tools in these fields that can also be useful for data science.
This book constitutes the revised selected papers of the 11th Italian Workshop on Advances in Artificial Life, Evolutionary Computation and Systems Chemistry, WIVACE 2016, held in Fisciano, Italy, in October 2016. The 16 full papers together with 1 short paper presented have been thoroughly reviewed and selected from 54 submissions. They cover the following topics: evolutionary computation, bioinspired algorithms, genetic algorithms, bioinformatics and computational biology, modelling and simulation of artificial and biological systems, complex systems, synthetic and systems biology, systems chemistry.
R for College Mathematics and Statistics encourages the use of R in mathematics and statistics courses. Instructors are no longer limited to "nice" functions in calculus classes. They can require reports and homework with graphs. They can do simulations and experiments. R can be useful for student projects, for creating graphics for teaching, as well as for scholarly work. This book presents ways R, which is freely available, can enhance the teaching of mathematics and statistics. R has the potential to help students learn mathematics due to the need for precision, understanding of symbols and functions, and the logical nature of code. Moreover, the text provides students the opportunity for experimenting with concepts in any mathematics course. Features: does not require previous experience with R; promotes the use of R in typical mathematics and statistics course work; is organized by mathematics topics; utilizes an example-based approach; chapters are largely independent of each other.
This book constitutes the refereed proceedings of the Fourth Computer Games Workshop, CGW 2015, and the Fourth Workshop on General Intelligence in Game-Playing Agents, GIGA 2015, held in conjunction with the 24th International Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, in July 2015. The 12 revised full papers presented were carefully reviewed and selected from 27 submissions. The papers address all aspects of artificial intelligence and computer game playing. They discuss topics such as Monte-Carlo methods; heuristic search; board games; card games; video games; perfect and imperfect information games; puzzles and single player games; multi-player games; combinatorial game theory; applications; computational creativity; computational game theory; evaluation and analysis; game design; knowledge representation; machine learning; multi-agent systems; opponent modeling; planning; reasoning; search.
This book presents the algorithms used to provide recommendations by exploiting matrix factorization and tensor decomposition techniques. It highlights well-known decomposition methods for recommender systems, such as Singular Value Decomposition (SVD), UV-decomposition, Non-negative Matrix Factorization (NMF), etc., and describes in detail the pros and cons of each method for matrices and tensors. This book provides a detailed theoretical mathematical background of matrix/tensor factorization techniques and a step-by-step analysis of each method on the basis of an integrated toy example that runs throughout all its chapters and helps the reader to understand the key differences among methods. It also contains two chapters, where different matrix and tensor methods are compared experimentally on real data sets, such as Epinions, GeoSocialRec, Last.fm, BibSonomy, etc., and provides further insights into the advantages and disadvantages of each method. The book offers a rich blend of theory and practice, making it suitable for students, researchers and practitioners interested in both recommenders and factorization methods. Lecturers can also use it for classes on data mining, recommender systems and dimensionality reduction methods.
This book constitutes the refereed proceedings of the Second Russian Supercomputing Days, RuSCDays 2016, held in Moscow, Russia, in September 2016. The 28 revised full papers presented were carefully reviewed and selected from 94 submissions. The papers are organized in topical sections on the present of supercomputing (experience in solving large tasks) and the future of supercomputing (new technologies).
Thomas Hubauer addresses the challenge of providing reasonable interpretations of incomplete observational data in the context of imperfect domain models - a situation often encountered in industrial diagnostics. To tackle this problem, the author proposes a novel approach called Relaxed Abduction, which is able to derive pragmatic interpretations in situations where existing methods either fail or provide overly complex solutions. To strengthen the link to applications in industrial diagnostics, he develops a methodology to structure diagnosis problems according to ISO 13379 and express them using multiple description logic knowledge bases.
This book constitutes the revised selected papers of the 10th Italian Workshop on Advances in Artificial Life, Evolutionary Computation and Systems Chemistry, WIVACE 2015, held at Bari, Italy, in September 2015. The 18 papers presented have been thoroughly reviewed and selected from 45 submissions. They cover the following topics: evolutionary computation, bioinspired algorithms, genetic algorithms, bioinformatics and computational biology, modeling and simulation of artificial and biological systems, complex systems, synthetic and systems biology, systems chemistry.
This book provides an introduction to logic and mathematical induction, which are the basis of any deductive computational framework. A strong mathematical foundation in the logical engines available in modern proof assistants, such as the PVS verification system, is essential for computer scientists, mathematicians and engineers to increase their ability to provide formal proofs of theorems and to certify the robustness of software and hardware systems. The authors present a concise overview of the necessary computational and mathematical aspects of 'logic', placing emphasis on both natural deduction and sequent calculus. Differences between constructive and classical logic are highlighted through several examples and exercises. Without neglecting classical aspects of computational logic, the authors also highlight the connections between logical deduction rules and proof commands in proof assistants, presenting simple examples of formalizations of the correctness of algebraic functions and algorithms in PVS. Applied Logic for Computer Scientists will not only benefit students of computer science and mathematics but also software, hardware, automation, electrical and mechatronic engineers who are interested in the application of formal methods and the related computational tools to provide mathematical certificates of the quality and accuracy of their products and technologies.
This timely text presents a comprehensive overview of fault tolerance techniques for high-performance computing (HPC). The text opens with a detailed introduction to the concepts of checkpoint protocols and scheduling algorithms, prediction, replication, silent error detection and correction, together with some application-specific techniques such as ABFT. Emphasis is placed on analytical performance models. This is then followed by a review of general-purpose techniques, including several checkpoint and rollback recovery protocols. Relevant execution scenarios are also evaluated and compared through quantitative models. Features: provides a survey of resilience methods and performance models; examines the various sources for errors and faults in large-scale systems; reviews the spectrum of techniques that can be applied to design a fault-tolerant MPI; investigates different approaches to replication; discusses the challenge of energy consumption of fault-tolerance methods in extreme-scale systems.
Applied Numerical Linear Algebra introduces students to numerical issues that arise in linear algebra and its applications. A wide range of techniques is touched on, including direct and iterative methods, orthogonal factorizations, least squares, eigenproblems, and nonlinear equations. Inside Applied Numerical Linear Algebra, readers will find: Clear and detailed explanations of a wide range of topics from condition numbers to the singular value decomposition. Material on nonlinear systems as well as linear systems. Frequent illustrations using discretizations of boundary-value problems or demonstrating other concepts. Exercises with detailed solutions at the end of the book. Supplemental material available at https://bookstore.siam.org/cl87/bonus. This textbook is appropriate for junior and senior undergraduate students and beginning graduate students in the following courses: Advanced Numerical Analysis, Special Topics on Numerical Analysis, Topics on Data Science, Topics on Numerical Optimization, and Topics on Approximation Theory.
This book constitutes the refereed proceedings of the 18th International Conference on Formal Engineering Methods, ICFEM 2016, held in Tokyo, Japan, in November 2016. The 27 revised full papers presented together with three invited talks were carefully reviewed and selected from 64 submissions. The conference focuses on all areas related to formal engineering methods, such as verification and validation, software engineering, formal specification and modeling, software security, and software reliability.
This book considers specific inferential issues arising from the analysis of dynamic shapes, with the attempt to solve the problems at hand using probability models and nonparametric tests. The models are simple to understand and interpret and provide a useful tool to describe the global dynamics of the landmark configurations. However, because of the non-Euclidean nature of shape spaces, distributions in shape spaces are not straightforward to obtain. The book explores the use of the Gaussian distribution in the configuration space, with similarity transformations integrated out. Specifically, it works with the offset-normal shape distribution as a probability model for statistical inference on a sample of a temporal sequence of landmark configurations. This enables inference for Gaussian processes from configurations onto the shape space. The book is divided into two parts, with the first three chapters covering material on the offset-normal shape distribution, and the remaining chapters covering the theory of NonParametric Combination (NPC) tests. The chapters offer a collection of applications which are bound together by the theme of this book. They refer to the analysis of data from the FG-NET (Face and Gesture Recognition Research Network) database of facial expressions. For these data, it may be desirable to provide a description of the dynamics of the expressions, to test whether there is a difference between the dynamics of two facial expressions, or to test which of the landmarks are more informative in explaining the pattern of an expression.
This book presents the basics of quantum information, e.g., the foundations of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The required knowledge is only elementary calculus and linear algebra, so the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundations of quantum theory. This book describes them from a more operational viewpoint, which is suitable for quantum information but which traditional textbooks of quantum theory lack. The book treats Shor's algorithm, Grover's algorithm and the Deutsch-Jozsa algorithm as basic algorithms. To treat several topics in quantum information, this book covers several kinds of information quantities in quantum systems, including the von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book covers quantum teleportation, quantum dense coding and quantum data compression. In particular, the conversion theory of entanglement via local operations and classical communication is also treated. This theory provides the quantification of entanglement, which coincides with the von Neumann entropy. The next part treats quantum hypothesis testing: the decision problem between two candidates for the unknown state is given, and the asymptotic performance of this problem is characterized by information quantities. Using this result, the optimal performance of classical information transmission via a noisy quantum channel is derived. Quantum information transmission via a noisy quantum channel using quantum error correction is also discussed. Based on this topic, secure quantum communication is explained; in particular, the quantification of quantum security, which has not been treated in existing books, is explained. The book treats quantum cryptography from a more practical viewpoint.
Since their inception, the Perspectives in Logic and Lecture Notes in Logic series have published seminal works by leading logicians. Many of the original books in the series have been unavailable for years, but they are now in print once again. In this volume, the first publication in the Lecture Notes in Logic series, Shoenfield gives a clear and focused introduction to recursion theory. The fundamental concept of recursion makes the idea of computability accessible to a mathematical analysis, thus forming one of the pillars on which modern computer science rests. This introduction is an ideal instrument for teaching and self-study that prepares the reader for the study of advanced monographs and the current literature on recursion theory.
Since their inception, the Perspectives in Logic and Lecture Notes in Logic series have published seminal works by leading logicians. Many of the original books in the series have been unavailable for years, but they are now in print once again. This volume, the second publication in the Lecture Notes in Logic series, is the proceedings of the Association for Symbolic Logic meeting held in Helsinki, Finland, in July 1990. It contains eighteen papers by leading researchers, covering all fields of mathematical logic from the philosophy of mathematics, through model theory, proof theory, recursion theory, and set theory, to the connections of logic to computer science. The articles published here are still widely cited and continue to provide ideas for ongoing research projects.
This book constitutes the thoroughly refereed post-workshop proceedings of the 13th International Workshop on Approximation and Online Algorithms, WAOA 2015, held in Patras, Greece, in September 2015 as part of ALGO 2015. The 17 revised full papers presented were carefully reviewed and selected from 40 submissions. Topics of interest for WAOA 2015 were: algorithmic game theory, algorithmic trading, coloring and partitioning, competitive analysis, computational advertising, computational finance, cuts and connectivity, geometric problems, graph algorithms, inapproximability, mechanism design, natural algorithms, network design, packing and covering, paradigms for the design and analysis of approximation and online algorithms, parameterized complexity, scheduling problems, and real-world applications.