This introduction to the field of hyper-heuristics presents the required foundations and tools and illustrates some of their applications. The authors organized the 13 chapters into three parts. The first, hyper-heuristic fundamentals and theory, provides an overview of selection constructive, selection perturbative, generation constructive and generation perturbative hyper-heuristics, and then a formal definition of hyper-heuristics. The chapters in the second part of the book examine applications of hyper-heuristics in vehicle routing, nurse rostering, packing and examination timetabling. The third part of the book presents advanced topics and then a summary of the field and future research directions. Finally, the appendices offer details of the HyFlex framework and the EvoHyp toolkit, and then the definition, problem model and constraints for the most tested combinatorial optimization problems. The book will be of value to graduate students, researchers, and practitioners.
This thesis describes experimental work in the field of trapped-ion quantum computation. It outlines the theory of Raman interactions, examines the various sources of error in two-qubit gates, and describes in detail experimental explorations of the sources of infidelity in implementations of single- and two-qubit gates. Lastly, it presents an experimental demonstration of a mixed-species entangling gate.
* The book offers a well-balanced mathematical analysis of modelling physical systems. * Summarizes basic principles in differential geometry and convex analysis as needed. * Covers a wide range of industrial and social applications, bridging the gap between core theory and costly experiments through simulation and modelling. * The book's focus is manifold, ranging from stability of fluid flows, nanofluids, drug delivery, and security of image data to pandemic modelling.
This is a textbook for undergraduate students of chemical and biological engineering. It is also useful for graduate students, professional engineers, and numerical analysts. All reactive chemical and biological processes are highly nonlinear, allowing for multiple steady states. This book addresses the bifurcation characteristics of chemical and biological processes as the general case and treats systems with a unique steady state as special cases. It uses a system approach, which is the most efficient for knowledge organization and transfer. The book develops mathematical models for many commercial processes, utilizing the mass-, momentum-, and heat-balance equations coupled to the rates of the processes that take place within the boundaries of the system. These models support the design and optimization of chemical and biological industrial equipment and plants, such as single CSTRs and batteries of CSTRs, porous and nonporous catalyst pellets and their effectiveness factors, tubular catalytic and noncatalytic reactors, fluidized-bed catalytic reactors, coupled fluidized beds such as reactor-regenerator systems (industrial fluid catalytic cracking units), fluidized-bed reformers for producing hydrogen or syngas, fermenters for fuel ethanol, simulation of the brain acetylcholine neurocycle, anaerobic digesters, co- and countercurrent absorption columns, and many more. The book also includes verification against industrial data. The book's CD contains nearly 100 MATLAB programs which are meant to teach readers how to solve a variety of important chemical and biological engineering problems. The algorithms include solving transcendental and algebraic equations, with and without bifurcation, as well as initial- and boundary-value ordinary differential equations. Said Elnashaie is Professor of Chemical and Biological Engineering at the University of British Columbia. The co-author is a Ph.D. candidate in Applied Mathematics at Auburn with a B.S. in Chemical Engineering.
The active interaction of these authors has brought about this new and modern interdisciplinary book.
This book on optimization includes forewords by Michael I. Jordan, Zongben Xu and Zhi-Quan Luo. Machine learning relies heavily on optimization to solve problems with its learning models, and first-order optimization algorithms are the mainstream approaches. The acceleration of first-order optimization algorithms is crucial for the efficiency of machine learning. Written by leading experts in the field, this book provides a comprehensive introduction to, and state-of-the-art review of accelerated first-order optimization algorithms for machine learning. It discusses a variety of methods, including deterministic and stochastic algorithms, where the algorithms can be synchronous or asynchronous, for unconstrained and constrained problems, which can be convex or non-convex. Offering a rich blend of ideas, theories and proofs, the book is up-to-date and self-contained. It is an excellent reference resource for users who are seeking faster optimization algorithms, as well as for graduate students and researchers wanting to grasp the frontiers of optimization in machine learning in a short time.
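The flavor of these accelerated methods can be conveyed with a short sketch (an illustration under stated assumptions, not code from the book): Nesterov's accelerated gradient method in its constant-momentum form for an L-smooth, mu-strongly-convex objective, applied here to a two-dimensional quadratic with a known minimizer.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=200):
    """Accelerated gradient descent with constant momentum
    (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)), the standard choice
    for an L-smooth, mu-strongly-convex objective."""
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)   # momentum (look-ahead) point
        x_prev = x
        x = y - grad(y) / L           # gradient step with step size 1/L
    return x

# Test problem: f(x) = 0.5 x^T A x - b^T x, minimizer x* = A^{-1} b
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: A @ x - b

eigs = np.linalg.eigvalsh(A)          # give L = largest, mu = smallest eigenvalue
x_star = np.linalg.solve(A, b)
x_hat = nesterov_agd(grad, np.zeros(2), L=eigs[-1], mu=eigs[0])
print(np.linalg.norm(x_hat - x_star))  # converges to machine precision
```

The momentum step is what distinguishes the accelerated method from plain gradient descent: it improves the convergence rate from O((1 - mu/L)^k) to O((1 - sqrt(mu/L))^k).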
Over the past decade, many major advances have been made in the field of graph colouring via the probabilistic method. This monograph provides an accessible and unified treatment of these results, using tools such as the Lovász Local Lemma and Talagrand's concentration inequality. The topics covered include: Kahn's proofs that the Goldberg-Seymour and List Colouring Conjectures hold asymptotically; a proof that for some absolute constant C, every graph of maximum degree Delta has a (Delta+C)-total colouring; Johansson's proof that a triangle-free graph has an O(Delta/log Delta) colouring; and algorithmic variants of the Local Lemma which permit the efficient construction of many optimal and near-optimal colourings. The book begins with a gentle introduction to the probabilistic method and will be useful to researchers and graduate students in graph theory, discrete mathematics, theoretical computer science and probability.
Fourier analysis is one of the most useful tools in many applied sciences. The recent developments of wavelet analysis indicate that in spite of its long history and well-established applications, the field is still one of active research. This text bridges the gap between engineering and mathematics, providing a rigorously mathematical introduction of Fourier analysis, wavelet analysis and related mathematical methods, while emphasizing their uses in signal processing and other applications in communications engineering. The interplay between Fourier series and Fourier transforms is at the heart of signal processing, which is couched most naturally in terms of the Dirac delta function and Lebesgue integrals. The exposition is organized into four parts. The first is a discussion of one-dimensional Fourier theory, including the classical results on convergence and the Poisson sum formula. The second part is devoted to the mathematical foundations of signal processing: sampling, filtering, digital signal processing. Fourier analysis in Hilbert spaces is the focus of the third part, and the last part provides an introduction to wavelet analysis, time-frequency issues, and multiresolution analysis. An appendix provides the necessary background on Lebesgue integrals.
Solving nonsmooth optimization (NSO) problems is critical in many practical applications and real-world modeling systems. The aim of this book is to survey various numerical methods for solving NSO problems and to provide an overview of the latest developments in the field. Experts from around the world share their perspectives on specific aspects of numerical NSO. The book is divided into four parts, the first of which considers general methods including subgradient, bundle and gradient sampling methods. In turn, the second focuses on methods that exploit the problem's special structure, e.g. algorithms for nonsmooth DC programming, VU decomposition techniques, and algorithms for minimax and piecewise differentiable problems. The third part considers methods for special problems like multiobjective and mixed integer NSO, and problems involving inexact data, while the last part highlights the latest advancements in derivative-free NSO. Given its scope, the book is ideal for students attending courses on numerical nonsmooth optimization, for lecturers who teach optimization courses, and for practitioners who apply nonsmooth optimization methods in engineering, artificial intelligence, machine learning, and business. Furthermore, it can serve as a reference text for experts dealing with nonsmooth optimization.
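The subgradient method that opens the first part can be sketched in a few lines (a hedged illustration, not taken from the book): minimizing the nonsmooth function f(x) = ||x - c||_1, which is nondifferentiable exactly at its minimizer, with diminishing step sizes. Because the method is not a descent method, the best iterate seen so far is tracked.

```python
import numpy as np

# Objective: f(x) = ||x - c||_1, nonsmooth at the solution x* = c.
c = np.array([1.0, -2.0])
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.sign(x - c)   # a valid subgradient of f at x

x = np.zeros(2)
f_best = f(x)
for k in range(10000):
    t = 1.0 / np.sqrt(k + 1)         # diminishing, nonsummable step sizes
    x = x - t * subgrad(x)
    f_best = min(f_best, f(x))       # not monotone: track the best iterate

print(f_best)  # approaches the optimal value 0
```

The diminishing, nonsummable step-size rule is essential here: with a fixed step size the iterates would only oscillate around the kink at x* instead of converging in value.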
The 2017 PIMS-CRM Summer School in Probability was held at the Pacific Institute for the Mathematical Sciences (PIMS) at the University of British Columbia in Vancouver, Canada, during June 5-30, 2017. It had 125 participants from 20 different countries, and featured two main courses, three mini-courses, and twenty-nine lectures. The lecture notes contained in this volume provide introductory accounts of three of the most active and fascinating areas of research in modern probability theory, especially designed for graduate students entering research: Scaling limits of random trees and random graphs (Christina Goldschmidt) Lectures on the Ising and Potts models on the hypercubic lattice (Hugo Duminil-Copin) Extrema of the two-dimensional discrete Gaussian free field (Marek Biskup) Each of these contributions provides a thorough introduction that will be of value to beginners and experts alike.
This book reviews selected topics characterized by great progress, covering the field from theoretical areas to experimental ones. It covers fundamental areas: quantum query complexity, quantum statistical inference, quantum cloning, quantum entanglement, and additivity. It treats three types of quantum security systems: quantum public-key cryptography, quantum key distribution, and quantum steganography. A photonic system is highlighted for the realization of quantum information processing.
The mathematical theory of wavelets is less than 15 years old, yet already wavelets have become a fundamental tool in many areas of applied mathematics and engineering. This introduction to wavelets assumes a basic background in linear algebra (reviewed in Chapter 1) and real analysis at the undergraduate level. Fourier and wavelet analyses are first presented in the finite-dimensional context, using only linear algebra. Then Fourier series are introduced in order to develop wavelets in the infinite-dimensional, but discrete context. Finally, the text discusses Fourier transform and wavelet theory on the real line. The computation of the wavelet transform via filter banks is emphasized, and applications to signal compression and numerical differential equations are given. This text is ideal for a topics course for mathematics majors, because it exhibits an emerging mathematical theory with many applications. It also allows engineering students without graduate mathematics prerequisites to gain a practical knowledge of wavelets.
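The filter-bank computation emphasized here can be sketched for the simplest case, the Haar wavelet (an illustrative example, not the book's own code): one analysis level splits a signal into lowpass averages and highpass differences, each downsampled by two, and the synthesis step reconstructs the signal exactly.

```python
import numpy as np

def haar_level(x):
    """One analysis level of the orthonormal Haar filter bank:
    lowpass (average) and highpass (difference) branches, each
    downsampled by 2."""
    s = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s   # lowpass branch: local averages
    detail = (x[0::2] - x[1::2]) / s   # highpass branch: local differences
    return approx, detail

def haar_inverse(approx, detail):
    """Synthesis step: perfect reconstruction from the two branches."""
    s = np.sqrt(2.0)
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / s
    x[1::2] = (approx - detail) / s
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_level(x)
print(np.allclose(haar_inverse(a, d), x))  # perfect reconstruction
```

Because the Haar filter bank is orthonormal, the transform also preserves energy: the squared norms of the two branches sum to that of the input, which is what makes thresholding the detail coefficients a sensible compression strategy.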
People, problems, and proofs are the lifeblood of theoretical computer science. Behind the computing devices and applications that have transformed our lives are clever algorithms, and for every worthwhile algorithm there is a problem that it solves and a proof that it works. Before this proof there was an open problem: can one create an efficient algorithm to solve the computational problem? And, finally, behind these questions are the people who are excited about these fundamental issues in our computational world. In this book the authors draw on their outstanding research and teaching experience to showcase some key people and ideas in the domain of theoretical computer science, particularly in computational complexity and algorithms, and related mathematical topics. They show evidence of the considerable scholarship that supports this young field, and they balance an impressive breadth of topics with the depth necessary to reveal the power and the relevance of the work described. Beyond this, the authors discuss the sustained effort of their community, revealing much about the culture of their field. A career in theoretical computer science at the top level is a vocation: the work is hard, and in addition to the obvious requirements such as intellect and training, the vignettes in this book demonstrate the importance of human factors such as personality, instinct, creativity, ambition, tenacity, and luck. The authors' style is characterized by personal observations, enthusiasm, and humor, and this book will be a source of inspiration and guidance for graduate students and researchers engaged with or planning careers in theoretical computer science.
Contents: The Possibility of Using Computer to Study the Equation of Gravitation (Q K Lu); Solving Polynomial Systems by Homotopy Continuation Methods (T Y Li); Sketch of a New Discipline of Modeling (E Engeler); The Symmetry Groups of Computer Programs and Program Equivalence (J R Gabriel); Computations with Rational Parametric Equations (S C Chou et al.); Computer Versus Paper and Pencil (M Mignotte); The Finite Basis of an Irreducible Ascending Set (H Shi); A Note on Wu Wen-Tsun's Non-Degenerate Condition (J Z Zhang et al.); Mechanical Theorem Proving in Riemann Geometry Using Wu's Method (S C Chou & X S Gao); and other papers.
The third edition of this authoritative and comprehensive handbook is the definitive work on the current state of the art of Biometric Presentation Attack Detection (PAD), also known as Biometric Anti-Spoofing. Building on the success of the previous editions, this thoroughly updated third edition has been considerably revised to provide even greater coverage of PAD methods, spanning biometric systems based on face, fingerprint, iris, voice, vein, and signature recognition. New material is also included on major PAD competitions, important databases for research, and on the impact of recent international legislation. Valuable insights are supplied by a selection of leading experts in the field, complete with results from reproducible research, supported by source code and further information available at an associated website. Topics and features: reviews the latest developments in PAD for fingerprint biometrics, covering recent technologies like Vision Transformers, and a review of competition series; examines methods for PAD in iris recognition systems, including the use of pupil-size measurement or multiple spectra for this purpose; discusses advancements in PAD methods for face recognition-based biometrics, such as recent progress on the detection of 3D facial masks and the use of multiple spectra with Deep Neural Networks; presents an analysis of PAD for automatic speaker verification (ASV), including a study of the generalization to unseen attacks; describes the results yielded by key competitions on fingerprint liveness detection, iris liveness detection, and face anti-spoofing; provides analyses of PAD in finger-vein recognition, in signature biometrics, and in mobile biometrics; includes coverage of international standards in PAD and legal aspects of image manipulations like morphing. This text/reference is essential reading for anyone involved in biometric identity verification, be they students, researchers, practitioners, engineers, or technology consultants.
Those new to the field will also benefit from a number of introductory chapters, outlining the basics for the most important biometrics.
A description of 148 algorithms fundamental to number-theoretic computations, in particular for computations related to algebraic number theory, elliptic curves, primality testing and factoring. The first seven chapters guide readers to the heart of current research in computational algebraic number theory, including recent algorithms for computing class groups and units, as well as elliptic curve computations, while the last three chapters survey factoring and primality testing methods, including a detailed description of the number field sieve algorithm. The whole is rounded off with a description of available computer packages and some useful tables, backed by numerous exercises. Written by an authority in the field, and one with great practical and teaching experience, this is certain to become the standard and indispensable reference on the subject.
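As a taste of the primality-testing material, here is a minimal sketch of the Miller-Rabin probabilistic primality test (a standard algorithm, written here as an illustration rather than as the book's own pseudocode): write n - 1 = 2^s * d with d odd, then check whether random bases witness compositeness.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: declares n composite with certainty, or
    'probably prime' with error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division by small primes
        if n % p == 0:
            return n == p
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)             # modular exponentiation a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is composite
    return True

print(is_probable_prime(2**61 - 1))   # a known Mersenne prime
print(is_probable_prime(10**12 + 1))  # composite: divisible by 10**4 + 1
```

The test is fast because modular exponentiation costs O(log n) multiplications, which is why methods of this kind appear alongside the heavier factoring machinery such as the number field sieve.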
The main idea of statistical convergence is to demand convergence only for a majority of elements of a sequence. This method of convergence has been investigated in many fundamental areas of mathematics, such as measure theory, approximation theory, fuzzy logic theory, and summability theory. In this monograph we consider this concept in approximating a function by linear operators, especially when the classical limit fails. The results of this book not only cover classical and statistical approximation theory, but are also applied in fuzzy logic via fuzzy-valued operators. The authors in particular treat the important Korovkin approximation theory of positive linear operators in the statistical and fuzzy sense. They also present various statistical approximation theorems for some specific real- and complex-valued linear operators that are not positive. This is the first monograph on Statistical Approximation Theory and Fuzziness. The chapters are self-contained, and several advanced courses can be taught from them. The research findings will be useful in various applications including applied and computational mathematics, stochastics, engineering, artificial intelligence, vision and machine learning. This monograph is directed to graduate students, researchers, practitioners and professors of all disciplines.
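The defining notion can be made precise with a worked equation (the standard definition from the literature, not quoted from this monograph): a sequence (x_k) converges statistically to L when, for every epsilon > 0, the indices at which the terms stray from L have natural density zero,

```latex
\lim_{n \to \infty} \frac{1}{n}\,
  \bigl|\{\, k \le n : |x_k - L| \ge \varepsilon \,\}\bigr| = 0 .
```

For example, a sequence that vanishes except on the perfect squares converges statistically to 0, since the squares have density zero among the naturals, even though it need not converge in the classical sense.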
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Gauss-Allianz, the association of High-Performance Computing centers in Germany. The reports cover all fields of computational science and engineering, ranging from CFD to Computational Physics and Biology to Computer Science, with a special emphasis on industrially relevant applications. Presenting results for large-scale parallel microprocessor-based systems and GPU and FPGA-supported systems, the book makes it possible to compare the performance levels and usability of various architectures. Its outstanding results in achieving the highest performance for production codes are of particular interest for both scientists and engineers. The book includes a wealth of color illustrations and tables.
The revised edition of this book offers an extended overview of quantum walks and explains their role in building quantum algorithms, in particular search algorithms. Updated throughout, the book focuses on core topics including Grover's algorithm and the most important quantum walk models, such as the coined, continuous-time, and Szegedy's quantum walk models. There is a new chapter describing the staggered quantum walk model. The chapter on spatial search algorithms has been rewritten to offer a more comprehensive approach, and a new chapter describing the element distinctness algorithm has been added. There is a new appendix on graph theory highlighting the importance of graph theory to quantum walks. As before, the reader will benefit from the pedagogical elements of the book, which include exercises and references to deepen the reader's understanding, and guidelines for the use of computer programs to simulate the evolution of quantum walks. Review of the first edition: "The book is nicely written, the concepts are introduced naturally, and many meaningful connections between them are highlighted. The author proposes a series of exercises that help the reader get some working experience with the presented concepts, facilitating a better understanding. Each chapter ends with a discussion of further references, pointing the reader to major results on the topics presented in the respective chapter." - Florin Manea, zbMATH.
In 1965 Juris Hartmanis and Richard E. Stearns published a paper "On the Computational Complexity of Algorithms." The field of complexity theory takes its name from this seminal paper and many of the major concepts and issues of complexity theory were introduced by Hartmanis in subsequent work. In honor of the contribution of Juris Hartmanis to the field of complexity theory, a special session of invited talks by Richard E. Stearns, Allan Borodin and Paul Young was held at the third annual meeting of the Structure in Complexity conference, and the first three chapters of this book are the final versions of these talks. They recall intellectual and professional trends in Hartmanis' contributions. All but one of the remainder of the chapters in this volume originated as a presentation at one of the recent meetings of the Structure in Complexity Theory Conference and appeared in preliminary form in the conference proceedings. In all, these expositions form an excellent description of much of contemporary complexity theory.
Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems. 
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
This book focuses on the developing field of building probability models with the power of symbolic algebra systems. It combines symbolic algebra with probabilistic/stochastic applications and highlights uses in a variety of contexts. The research explored in each chapter is unified by the use of A Probability Programming Language (APPL) to achieve the modeling objectives. APPL, as a research tool, gives a probabilist or statistician the ability to explore new ideas, methods, and models. Furthermore, as an open-source language, it sets the foundation for future algorithms to augment the original code. Computational Probability Applications comprises fifteen chapters, each presenting a specific application of computational probability using the APPL modeling and computer language. The chapter topics include using the inverse gamma as a survival distribution, linear approximations of probability density functions, and moment-ratio diagrams for univariate distributions. These works highlight interesting examples, often done by undergraduate and graduate students, that can serve as templates for future work. In addition, this book should appeal to researchers and practitioners in a range of fields including probability, statistics, engineering, finance, neuroscience, and economics.
The proceedings represent the state of knowledge in the area of algorithmic differentiation (AD). The 31 contributed papers presented at the AD2012 conference cover the application of AD to many areas in science and engineering as well as aspects of AD theory and its implementation in tools. For all papers, the referees, selected from the program committee and the greater community, as well as the editors, have emphasized the accessibility of the presented ideas to non-AD experts. In the AD tools arena, new implementations are introduced covering, for example, Java and graphical modeling environments, or join the set of existing tools for Fortran. New developments in AD algorithms target the efficiency of matrix-operation derivatives, detection and exploitation of sparsity, partial separability, the treatment of nonsmooth functions, and other high-level mathematical aspects of the numerical computations to be differentiated. Applications stem from the Earth sciences, nuclear engineering, fluid dynamics, and chemistry, to name just a few. In many cases the applications in a given area of science or engineering share characteristics that require specific approaches to enable AD capabilities or provide an opportunity for efficiency gains in the derivative computation. The description of these characteristics and of the techniques for successfully using AD should make the proceedings a valuable source of information for users of AD tools.
This textbook presents a survey of research on Boolean functions, circuits, parallel computation models, function algebras, and proof systems. Its main aim is to elucidate the structure of "fast" parallel computation. The complexity of parallel computation is emphasized through a variety of techniques ranging from finite combinatorics, probability theory and finite group theory to finite model theory and proof theory. Nonuniform computation models are studied in the form of Boolean circuits; uniform ones in a variety of forms. Steps in the investigation of non-deterministic polynomial time are surveyed, as is the complexity of various proof systems. The book will benefit advanced undergraduates and graduate students as well as researchers in the field of complexity theory.
This book presents the best papers from the 1st International Conference on Mathematical Research for Blockchain Economy (MARBLE) 2019, held in Santorini, Greece. While most blockchain conferences and forums are dedicated to business applications, product development or Initial Coin Offering (ICO) launches, this conference focused on the mathematics behind blockchain to bridge the gap between practice and theory. Every year, thousands of blockchain projects are launched and circulated in the market, and there is a tremendous wealth of blockchain applications, from finance to healthcare, education, media, logistics and more. However, due to theoretical and technical barriers, most of these applications are impractical for use in a real-world business context. The papers in this book reveal the challenges and limitations, such as scalability, latency, privacy and security, and showcase solutions and developments to overcome them.
This work explores the scope and flexibility afforded by integrated quantum photonics, both in terms of practical problem-solving, and for the pursuit of fundamental science. The author demonstrates and fully characterizes a two-qubit quantum photonic chip, capable of arbitrary two-qubit state preparation. Making use of the unprecedented degree of reconfigurability afforded by this device, a novel variation on Wheeler's delayed choice experiment is implemented, and a new technique to obtain nonlocal statistics without a shared reference frame is tested. Also presented is a new algorithm for quantum chemistry, simulating the helium hydride ion. Finally, multiphoton quantum interference in a large Hilbert space is demonstrated, and its implications for computational complexity are examined.