This brief focuses on two main problems in the domain of optical flow and trajectory estimation: (i) finding convex optimization methods for applying sparsity to optical flow, and (ii) extending sparsity to improve trajectories in a computationally tractable way. Beginning with a review of optical flow fundamentals, it discusses the commonly used flow estimation strategies and the advantages and shortcomings of each. The brief also introduces the concepts associated with sparsity, including dictionaries and low-rank matrices. Next, it provides context for optical flow and trajectory methods, including algorithms, data sets, and performance measurement. The second half of the brief covers sparse regularization of total variation optical flow and robust low-rank trajectories. The authors describe a new approach that uses partially overlapping patches to accelerate the calculation and is implemented in a coarse-to-fine strategy. Experimental results show that combining total variation and a sparse constraint from a learned dictionary is more effective than employing total variation alone. The brief is targeted at researchers and practitioners in the fields of engineering and computer science. It caters particularly to new researchers looking for cutting-edge topics in optical flow, as well as veterans of optical flow wishing to learn about the latest advances in multi-frame methods.
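For orientation, here is a minimal sketch of the kind of energy functional such methods minimize; the notation is illustrative and not necessarily the authors' exact formulation. With $I_0, I_1$ consecutive frames, $u$ the flow field, $P_i$ an operator extracting the $i$-th (partially overlapping) patch of $u$, and $D$ a learned dictionary with sparse codes $\alpha_i$:

$$E(u, \alpha) = \int_\Omega \bigl| I_1(x + u(x)) - I_0(x) \bigr| \, dx \; + \; \lambda \, \mathrm{TV}(u) \; + \; \mu \sum_i \left( \| P_i u - D \alpha_i \|_2^2 + \tau \| \alpha_i \|_1 \right).$$

Dropping the third term recovers the classical TV-L1 flow objective; the experiments summarized above indicate that retaining it improves accuracy.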
This book brings together historical notes, reviews of research developments, fresh ideas on how to make VC (Vapnik-Chervonenkis) guarantees tighter, and new technical contributions in the areas of machine learning, statistical inference, classification, algorithmic statistics, and pattern recognition. The contributors are leading scientists in domains such as statistics, mathematics, and theoretical computer science, and the book will be of interest to researchers and graduate students in these domains.
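As a reminder of the kind of guarantee at stake (a standard textbook form, not a result specific to this volume): for a hypothesis class of VC dimension $d$, with probability at least $1 - \delta$ over $n$ i.i.d. samples, uniformly over all hypotheses $h$ in the class,

$$R(h) \;\le\; \hat{R}_n(h) + O\!\left( \sqrt{ \frac{d \ln(n/d) + \ln(1/\delta)}{n} } \right),$$

where $R$ is the true risk and $\hat{R}_n$ the empirical risk; making VC guarantees "tighter" means reducing this complexity term.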
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of parallel iterative linear system solvers with emphasis on scalable preconditioners, (b) parallel schemes for obtaining a few of the extreme eigenpairs or those contained in a given interval in the spectrum of a standard or generalized symmetric eigenvalue problem, and (c) parallel methods for computing a few of the extreme singular triplets. Part IV focuses on the development of parallel algorithms for matrix functions and special characteristics such as the matrix pseudospectrum and the determinant. The book also reviews the theoretical and practical background necessary when designing these algorithms and includes an extensive bibliography that will be useful to researchers and students alike. The book brings together many existing algorithms for the fundamental matrix computations that have a proven track record of efficient implementation in terms of data locality and data transfer on state-of-the-art systems, as well as several algorithms that are presented for the first time, focusing on the opportunities for parallelism and algorithm robustness.
This book constitutes the refereed proceedings of the 9th International Symposium on Algorithmic Game Theory, SAGT 2016, held in Liverpool, UK, in September 2016. The 26 full papers presented together with 2 one-page abstracts were carefully reviewed and selected from 62 submissions. The accepted submissions cover various important aspects of algorithmic game theory such as computational aspects of games, congestion games and networks, matching and voting, auctions and markets, and mechanism design.
Quantum physics started in the 1920s with wave mechanics and the wave-particle duality. However, the last 20 years have seen a second quantum revolution, centered around non-locality and quantum correlations between measurement outcomes. The associated key property, entanglement, is recognized today as the signature of quantumness. This second revolution opened the possibility of studying quantum correlations without any assumption on the internal functioning of the measurement apparatuses, the so-called Device-Independent Approach to Quantum Physics. This thesis explores this new approach using the powerful geometrical tool of polytopes. Emphasis is placed on the study of non-locality in the case of three or more parties, where it is shown that a whole variety of new phenomena appears compared to the bipartite case. Genuine multiparty entanglement is also studied for the first time within the device-independent framework. Finally, these tools are used to answer a long-standing open question: could quantum non-locality be explained by influences that propagate from one party to the others faster than light, but that remain hidden so that one cannot use them to communicate faster than light? This would provide a way around Einstein's notion of action at a distance that would be compatible with relativity. However, the answer is shown to be negative, as such influences could not remain hidden.
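To ground the polytope language (standard background rather than a result of the thesis): in the simplest bipartite scenario, the correlations achievable by local hidden-variable models form a convex polytope whose nontrivial facets are Bell inequalities, the best known being CHSH,

$$S = \langle A_0 B_0 \rangle + \langle A_0 B_1 \rangle + \langle A_1 B_0 \rangle - \langle A_1 B_1 \rangle \;\le\; 2,$$

which quantum correlations violate up to $2\sqrt{2}$. The thesis extends this geometric picture to scenarios with three or more parties.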
This volume contains the articles presented at the 22nd International Meshing Roundtable (IMR), organized, in part, by Sandia National Laboratories and held on October 13-16, 2013, in Orlando, Florida, USA. The first IMR was held in 1992, and the conference series has been held annually since. Each year the IMR brings together researchers, developers, and application experts in a variety of disciplines, from all over the world, to present and discuss ideas on mesh generation and related topics. The technical papers in this volume present theoretical and novel ideas and algorithms with practical potential, as well as technical applications in science and engineering, geometric modeling, computer graphics, and visualization.
The book provides a bottom-up approach to understanding how a computer works and how to use computing to solve real-world problems. It covers the basics of digital logic through the lens of computer organization and programming. By the end of the book, readers should be able to design their own computer from the ground up. Logic simulation with Verilog is used throughout, assembly languages are introduced and discussed, and the fundamentals of computer architecture and embedded systems are touched upon, all in a cohesive design-driven framework suitable for class or self-study.
This book primarily addresses Intelligent Information Systems (IIS) and the integration of artificial intelligence, intelligent systems and technologies, database technologies and information systems methodologies to create the next generation of information systems. It includes original and state-of-the-art research on theoretical and practical advances in IIS, system architectures, tools and techniques, as well as "success stories" in intelligent information systems. Intended as an interdisciplinary forum in which scientists and professionals can share their research results and report on new developments and advances in intelligent information systems, technologies and related areas, as well as their applications, it offers a valuable resource for researchers and practitioners alike.
This book explores the two major elements of Hintikka's model of inquiry: the underlying game-theoretical motivations and the central role of questioning. The chapters build on the Hintikkan tradition, extending Hintikka's model, and present a wide variety of approaches to the philosophy of inquiry from different directions, ranging from erotetic logic to Lakatosian philosophy, and from socio-epistemological approaches to strategic reasoning and mathematical practice. Hintikka's theory of inquiry is a well-known example of a dynamic epistemic procedure. In an interrogative inquiry, the inquirer is given a theory and a question. He then tries to answer the question based on the theory by posing questions to nature or an oracle. The initial formulation of this procedure by Hintikka is rather broad and informal. This volume introduces a carefully selected set of responses to the issues discussed by Hintikka. The articles in the volume were contributed by various authors associated with a research project on Hintikka's interrogative theory of inquiry conducted at the Institut d'Histoire et de Philosophie des Sciences et des Techniques (IHPST) of Paris, including those who visited to share their insight.
This proceedings volume collects review articles that summarize research conducted at the Munich Centre of Advanced Computing (MAC) from 2008 to 2012. The articles address the increasing gap between what should be possible in Computational Science and Engineering due to recent advances in algorithms, hardware, and networks, and what can actually be achieved in practice; they also examine novel computing architectures, where computation itself is a multifaceted process, with hardware awareness or ubiquitous parallelism due to many-core systems being just two of the challenges faced. Topics cover both the methodological aspects of advanced computing (algorithms, parallel computing, data exploration, software engineering) and cutting-edge applications from the fields of chemistry, the geosciences, civil and mechanical engineering, etc., reflecting the highly interdisciplinary nature of the Munich Centre of Advanced Computing.
This volume contains the articles presented at the 20th International Meshing Roundtable (IMR), organized, in part, by Sandia National Laboratories and held in Paris, France, on October 23-26, 2011. This was the first year the IMR was held outside the United States. Other sponsors of the 20th IMR were the Systematic Paris Region Systems & ICT Cluster, AIAA, NAFEMS, CEA, and NSF. Sandia National Laboratories started the first IMR in 1992, and the conference has been held annually since. Each year the IMR brings together researchers, developers, and application experts from a variety of disciplines to present and discuss ideas on mesh generation and related topics. The topics covered by the IMR have applications in numerical analysis, computational geometry, and computer graphics, as well as other areas, and the presentations describe novel work ranging from theory to application.
This book honours the outstanding contributions of Vladimir Vapnik, a rare example of a scientist for whom the following statements hold true simultaneously: his work led to the inception of a new field of research, the theory of statistical learning and empirical inference; he has lived to see the field blossom; and he is still as active as ever. He started analyzing learning algorithms in the 1960s and he invented the first version of the generalized portrait algorithm. He later developed one of the most successful methods in machine learning, the support vector machine (SVM) - more than just an algorithm, this was a new approach to learning problems, pioneering the use of functional analysis and convex optimization in machine learning. Part I of this book contains three chapters describing and witnessing some of Vladimir Vapnik's contributions to science. In the first chapter, Leon Bottou discusses the seminal paper published in 1968 by Vapnik and Chervonenkis that laid the foundations of statistical learning theory, and the second chapter is an English-language translation of that original paper. In the third chapter, Alexey Chervonenkis presents a first-hand account of the early history of SVMs and valuable insights into the first steps in the development of the SVM in the framework of the generalized portrait method. The remaining chapters, by leading scientists in domains such as statistics, theoretical computer science, and mathematics, address substantial topics in the theory and practice of statistical learning theory, including SVMs and other kernel-based methods, boosting, PAC-Bayesian theory, online and transductive learning, loss functions, learnable function classes, notions of complexity for function classes, multitask learning, and hypothesis selection. These contributions include historical and context notes, short surveys, and comments on future research directions. This book will be of interest to researchers, engineers, and graduate students engaged with all aspects of statistical learning.
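For readers who want the central object in front of them, the soft-margin SVM can be written as a convex program (a standard formulation, included here for orientation):

$$\min_{w,\,b,\,\xi} \ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i\bigl(\langle w, x_i\rangle + b\bigr) \ge 1 - \xi_i, \quad \xi_i \ge 0,$$

whose dual depends on the data only through inner products $\langle x_i, x_j \rangle$, which is precisely what permits kernel substitution and connects the method to the functional-analytic viewpoint mentioned above.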
The book provides the first full-length exploration of fuzzy computability. It describes the notion of fuzziness and presents the foundations of computability theory, and then presents the various approaches to fuzzy computability. This text thus provides a glimpse into the different approaches in this area, giving researchers a clear view of the field. It contains a detailed literature review, and the author includes all proofs to make the presentation accessible. Ideas for future research and explorations are also provided. Students and researchers in computer science and mathematics will benefit from this work.
This volume is the first ever collection devoted to the field of proof-theoretic semantics. Contributions address topics including the systematics of introduction and elimination rules and proofs of normalization, the categorial characterization of deductions, the relation between Heyting's and Gentzen's approaches to meaning, knowability paradoxes, proof-theoretic foundations of set theory, Dummett's justification of logical laws, Kreisel's theory of constructions, paradoxical reasoning, and the defence of model theory. The field of proof-theoretic semantics has existed for almost 50 years, but the term itself was proposed by Schroeder-Heister in the 1980s. Proof-theoretic semantics explains the meaning of linguistic expressions in general and of logical constants in particular in terms of the notion of proof. This volume emerges from presentations at the Second International Conference on Proof-Theoretic Semantics in Tübingen in 2013, where contributing authors were asked to provide a self-contained description and analysis of a significant research question in this area. The contributions are representative of the field and should be of interest to logicians, philosophers, and mathematicians alike.
Ontologies are now found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
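As a concrete, deliberately naive illustration of what a matcher computes (this sketch is not from the book; it uses only string similarity between entity labels, one family of techniques among the many the book classifies):

```python
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity between two entity labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(entities1, entities2, threshold=0.8):
    """Return candidate equivalence correspondences (e1, e2, confidence).

    Real matchers combine many measures (terminological, structural,
    extensional, semantic) and may emit relations beyond equivalence,
    such as subsumption or disjointness.
    """
    alignment = []
    for e1 in entities1:
        for e2 in entities2:
            score = label_similarity(e1, e2)
            if score >= threshold:
                alignment.append((e1, e2, round(score, 2)))
    return alignment

# Toy example with hypothetical entity labels from two ontologies.
print(match(["Author", "Publication", "Journal"],
            ["Writer", "Publications", "Periodical"]))
```

A real system would combine such terminological evidence with structural and semantic measures and tune the threshold, which is exactly the design space the book maps out.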
This book presents the theory and the main practical techniques of the Finite Element Method (FEM), with an introduction to FEM and many case studies of its use in engineering practice. It helps engineers and students solve primarily linear problems in mechanical engineering, with a main focus on static and dynamic structural problems. Readers of this text are encouraged to discover the proper relationship between theory and practice within the finite element method: practice without theory is blind, but theory without practice is sterile. Beginning with basic concepts of elasticity and the classical theories of stressed materials, the work goes on to apply the relationship between forces, displacements, stresses and strains to the process of modeling, simulating and designing engineered technical systems. Chapters discuss the finite element equations for static, eigenvalue, and transient analyses. Students and practitioners using commercial FEM software will find this book very helpful. It uses straightforward examples to demonstrate a complete and detailed finite element procedure, emphasizing the differences between exact and numerical procedures.
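For orientation, the three analysis types just mentioned reduce, after discretization, to standard matrix equations (textbook forms, not notation specific to this book):

$$K u = f \ \ \text{(static)}, \qquad (K - \omega^2 M)\,\phi = 0 \ \ \text{(eigenvalue)}, \qquad M\ddot{u} + C\dot{u} + K u = f(t) \ \ \text{(transient)},$$

where $K$, $M$ and $C$ are the assembled stiffness, mass and damping matrices, $u$ the nodal displacement vector, and $f$ the load vector.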
The sheer computing power of modern information technology is changing the face of research not just in science, technology and mathematics, but in humanities and cultural studies too. Recent decades have seen a major shift both in attitudes and deployment of computers, which are now vital and highly effective tools in disciplines where they were once viewed as elaborate typewriters. This revealing volume details the vast array of computing applications that researchers in the humanities now have recourse to, including the dissemination of scholarly information through virtual 'co-laboratories', data retrieval, and the modeling of complex processes that contribute to our natural and cultural heritage. One key area covered in this book is the versatility of computers in presenting images and graphics, which is transforming the analysis of data sets and archaeological reconstructions alike. The papers published here are grouped into three broad categories that cover mathematical and computational methods, research developments in information systems, and a detailed portrayal of ongoing work on documenting, restoring and presenting cultural monuments including the temples in Pompeii and the Banteay Chhmar temples of the Angkorian period in present-day Cambodia. Originally presented at a research workshop in Heidelberg, Germany, they reflect the rapidly developing identity of computational humanities as an interdisciplinary field in its own right, as well as demonstrating the breadth of perspectives in this young and vibrant research area.
This volume consists of papers presented at the Variational Analysis and Aerospace Engineering Workshop II, held in Erice, Italy, in September 2010 at the International School of Mathematics "Guido Stampacchia". The workshop provided a platform for aerospace engineers and mathematicians (from universities, research centers and industry) to discuss advanced problems requiring extensive application of mathematics. The presentations were dedicated to the most advanced subjects in engineering, in particular to computational fluid dynamics methods, the introduction of new materials, optimization in aerodynamics, structural optimization, space missions, flight mechanics, control theory and optimization, variational methods and applications, etc. This book will capture the interest of researchers from both academia and industry.
This textbook addresses the mathematical description of sets, categories, topologies and measures, as part of the basis for advanced areas in theoretical computer science like semantics, programming languages, probabilistic process algebras, modal and dynamic logics and Markov transition systems. Using motivations, rigorous definitions, proofs and various examples, the author systematically introduces the Axiom of Choice, explains Banach-Mazur games and the Axiom of Determinacy, discusses the basic constructions of sets and the interplay of coalgebras and Kripke models for modal logics with an emphasis on Kleisli categories, monads and probabilistic systems. The text further shows various ways of defining topologies, building on selected topics like uniform spaces, Gödel's Completeness Theorem and topological systems. Finally, measurability, general integration, Borel sets and measures on Polish spaces, as well as the coalgebraic side of Markov transition kernels along with applications to probabilistic interpretations of modal logics are presented. Special emphasis is given to the integration of (co-)algebraic and measure-theoretic structures, a fairly new and exciting field, which is demonstrated through the interpretation of game logics. Readers familiar with basic mathematical structures like groups, Boolean algebras and elementary calculus including mathematical induction will discover a wealth of useful research tools. Throughout the book, exercises offer additional information, and case studies give examples of how the techniques can be applied in diverse areas of theoretical computer science and logics. References to the relevant mathematical literature enable the reader to find the original works and classical treatises, while the bibliographic notes at the end of each chapter provide further insights and discussions of alternative approaches.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2010. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of various architectures. As HLRS operates the largest NEC SX-8 vector system in the world, this book gives an excellent insight into the potential of vector systems, covering the main methods in high performance computing. Its outstanding results in achieving the highest performance for production codes are of particular interest for both scientists and engineers. The book includes a wealth of color illustrations and tables.
This work presents the Clifford-Cauchy-Dirac (CCD) technique for solving problems involving the scattering of electromagnetic radiation from materials of all kinds. It allows anyone who is interested to master techniques that lead to simpler and more efficient solutions to problems of electromagnetic scattering than those currently in use. The technique is formulated in terms of the Cauchy kernel, single integrals, Clifford algebra and a whole-field approach. This is in contrast to many conventional techniques that are formulated in terms of Green's functions, double integrals, vector calculus and the combined field integral equation (CFIE). Whereas these conventional techniques lead to an implementation using the method of moments (MoM), the CCD technique is implemented as alternating projections onto convex sets in a Banach space. The ultimate outcome is an integral formulation that lends itself to a more direct and efficient solution than is conventionally the case, and applies without exception to all types of materials. On any particular machine, it results in either a faster solution for a given problem or the ability to solve problems of greater complexity. The Clifford-Cauchy-Dirac technique offers very real and significant advantages in uniformity, complexity, speed, storage, stability, consistency and accuracy.
The problem of counting the number of self-avoiding polygons on a square grid, either by their perimeter or their enclosed area, is a problem that is so easy to state that, at first sight, it seems surprising that it hasn't been solved. It is however perhaps the simplest member of a large class of such problems that have resisted all attempts at their exact solution. These are all problems that are easy to state and look as if they should be solvable. They include percolation, in its various forms, the Ising model of ferromagnetism, polyomino enumeration, Potts models and many others. These models are of intrinsic interest to mathematicians and mathematical physicists, but can also be applied to many other areas, including economics, the social sciences, the biological sciences and even to traffic models. It is the widespread applicability of these models to interesting phenomena that makes them so deserving of our attention. Here however we restrict our attention to the mathematical aspects. Here we are concerned with collecting together most of what is known about polygons, and the closely related problems of polyominoes. We describe what is known, taking care to distinguish between what has been proved, and what is certainly true, but has not been proved. The earlier chapters focus on what is known and on why the problems have not been solved, culminating in a proof of unsolvability, in a certain sense. The next chapters describe a range of numerical and theoretical methods and tools for extracting as much information about the problem as possible, in some cases permitting exact conjectures to be made.
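In the standard notation (added here for orientation; polygons on the square lattice necessarily have even perimeter), if $p_n$ denotes the number of self-avoiding polygons of perimeter $n$ counted up to translation, the central object is the generating function

$$P(x) = \sum_{n} p_n x^n,$$

and while concatenation arguments show that a growth constant $\mu = \lim_{n \to \infty} p_n^{1/n}$ (limit taken over even $n$) exists, no closed form for $P(x)$ is known.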
This work explores the scope and flexibility afforded by integrated quantum photonics, both in terms of practical problem-solving, and for the pursuit of fundamental science. The author demonstrates and fully characterizes a two-qubit quantum photonic chip, capable of arbitrary two-qubit state preparation. Making use of the unprecedented degree of reconfigurability afforded by this device, a novel variation on Wheeler's delayed choice experiment is implemented, and a new technique to obtain nonlocal statistics without a shared reference frame is tested. Also presented is a new algorithm for quantum chemistry, simulating the helium hydride ion. Finally, multiphoton quantum interference in a large Hilbert space is demonstrated, and its implications for computational complexity are examined.
This book brings together contributions by leading researchers in computational complexity theory written in honor of Somenath Biswas on the occasion of his sixtieth birthday. They discuss current trends and exciting developments in this flourishing area of research and offer fresh perspectives on various aspects of complexity theory. The topics covered include arithmetic circuit complexity, lower bounds and polynomial identity testing, the isomorphism conjecture, space-bounded computation, graph isomorphism, resolution and proof complexity, entropy and randomness. Several chapters have a tutorial flavor. The aim is to make recent research in these topics accessible to graduate students and senior undergraduates in computer science and mathematics. It can also be useful as a resource for teaching advanced level courses in computational complexity.
This book explains in detail how to define requirements modelling languages - formal languages used to solve requirement-related problems in requirements engineering. It moves from simple languages to more complicated ones and uses these languages to illustrate a discussion of major topics in requirements modelling language design. The book positions requirements problem solving within the framework of broader research on ill-structured problem solving in artificial intelligence and engineering in general. Further, it introduces the reader to many complicated issues in requirements modelling language design, starting from trivial questions and the definition of corresponding simple languages used to answer them, and progressing to increasingly complex issues and languages. In this way the reader is led step by step (and with the help of illustrations) to learn about the many challenges involved in designing modelling languages for requirements engineering. The book offers the first comprehensive treatment of a major challenge in requirements engineering and business analysis, namely, how to design and define requirements modelling languages. It is intended for researchers and graduate students interested in advanced topics of requirements engineering and formal language design.