This textbook is intended for the practical laboratory sessions of a course on quantum computing and quantum algorithms, as well as for self-study. It presents the basic theoretical concepts and methods for solving the main types of problems, and gives an overview of basic qubit operations, entangled states, quantum circuits, implementing functions, the quantum Fourier transform, phase estimation, etc. The book serves as a basis for applying new information technologies in education and corporate technical training: the theoretical material, examples of practical problems, and exercises with, in most cases, detailed solutions all relate to information technologies. A large number of detailed examples helps readers develop professional competencies in computer science.
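For orientation, the most basic of the qubit operations and entangled states the book surveys can be written out in a few lines of linear algebra; this numpy sketch is an illustration, not material from the text:

```python
import numpy as np

# Single-qubit basis state |0>.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: the most basic superposition-creating qubit operation.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0                      # (|0> + |1>)/sqrt(2)

# CNOT on two qubits (first qubit is the control).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Entangled Bell state: H on qubit 1 of |00>, then CNOT.
bell = CNOT @ np.kron(plus, ket0)    # (|00> + |11>)/sqrt(2)
print(np.round(bell.real, 3))        # [0.707 0.    0.    0.707]
```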
Optimization methods have been considered in many articles, monographs, and handbooks. However, experts continue to experience difficulties in correctly stating optimization problems in engineering. These difficulties typically arise when defining the set of feasible solutions, i.e. the constraints imposed on the design variables, functional relationships, and criteria. The Parameter Space Investigation (PSI) method was developed specifically for the correct statement and solution of engineering optimization problems. It is implemented in the MOVI 1.0 software package, a tutorial version of which is included in this book. The PSI method and the MOVI 1.0 software package have a wide range of applications. The PSI method can be successfully used for the statement and solution of the following multicriteria problems: design, identification, design with control, the optimal development of prototypes, finite element models, and the decomposition and aggregation of large-scale systems. Audience: The PSI method will be of interest to researchers, graduate students, and engineers who work in engineering, mathematical modelling, industrial mathematics, and computer and information science.
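As a rough illustration of the PSI workflow described above (probe the parameter space, discard infeasible trial points, keep the non-dominated ones), here is a minimal Python sketch; the toy criteria, the constraint, and the use of plain uniform sampling in place of the method's LP-tau quasi-random sequences are all assumptions for illustration, not the book's MOVI implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-variable design problem: box 0 <= x1, x2 <= 2,
# one functional constraint g(x) <= 3, two criteria to minimize.
def g(x):  return x[:, 0] + x[:, 1]
def f1(x): return (x[:, 0] - 1.0) ** 2 + x[:, 1] ** 2
def f2(x): return x[:, 0] ** 2 + (x[:, 1] - 1.0) ** 2

# 1. Probe the parameter space with trial points.
pts = rng.uniform(0.0, 2.0, size=(1000, 2))

# 2. Keep only the feasible trial points.
feasible = pts[g(pts) <= 3.0]

# 3. Tabulate the criteria and keep the Pareto-optimal (non-dominated) points.
crit = np.column_stack([f1(feasible), f2(feasible)])
pareto = [i for i, c in enumerate(crit)
          if not np.any(np.all(crit <= c, axis=1) & np.any(crit < c, axis=1))]
print(f"{len(pareto)} non-dominated designs out of {len(feasible)} feasible")
```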
Optimization is a field important in its own right but is also integral to numerous applied sciences, including operations research, management science, economics, finance and all branches of mathematics-oriented engineering. Constrained optimization models are among the most widely used mathematical models in operations research and management science. This book gives a modern and well-balanced presentation of the subject, focusing on theory but also including algorithms and examples from various real-world applications. The text is easy to read and accessible to anyone with a knowledge of multi-dimensional calculus, linear algebra and basic numerical methods. Detailed examples and counter-examples are provided, as are exercises, solutions and helpful hints, and Matlab/Maple supplements. The intended readership is advanced undergraduates, graduates, and professionals in any of the applied fields.
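A minimal example of a constrained optimization model of the kind the book treats, sketched here with SciPy's general-purpose `minimize` (the quadratic objective and the single equality constraint are illustrative choices, not taken from the text):

```python
from scipy.optimize import minimize

# Minimize f(x, y) = x^2 + y^2 subject to x + y = 1.
# The analytic solution (via Lagrange multipliers) is x = y = 1/2.
objective = lambda v: v[0] ** 2 + v[1] ** 2
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] - 1.0}

result = minimize(objective, x0=[0.0, 0.0], constraints=[constraint])
print(result.x)  # approximately [0.5, 0.5]
```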
This volume assesses approaches to the construction of computer vision systems. It shows that there is a spectrum of approaches with different degrees of maturity and robustness. The useful exploitation of computer vision in industry and elsewhere and the development of the discipline itself depend on understanding the way these approaches influence one another. The chief topic discussed is autonomy. True autonomy may not be achievable in machines in the near future, and the workshop concluded that it may be more desirable - and is certainly more pragmatic - to leave a person in the processing loop. The second conclusion of the workshop concerns the manner in which a system is designed for an application. It was agreed that designers should first specify the required functionality, then identify the knowledge appropriate to that task, and finally choose the appropriate techniques and algorithms. The third conclusion concerns the methodologies employed in developing vision systems: craft, engineering, and science are mutually relevant and contribute to one another. The contributors place heavy emphasis on providing the reader with concrete examples of operational systems. The book is based on a workshop held as part of the activities of an ESPRIT Basic Research Action.
A more accessible approach than most competitor texts, which move into advanced, research-level topics too quickly for today's students. Part I is comprehensive in providing all necessary mathematical underpinning, particularly for those who need more opportunity to develop their mathematical competence. More confident students may move directly to Part II and dip back into Part I as a reference. Ideal for use as an introductory text for courses in quantum computing. Fully worked examples illustrate the application of mathematical techniques. Exercises throughout develop concepts and enhance understanding. End-of-chapter exercises offer more practice in developing a secure foundation.
This textbook provides a hands-on treatment of the subject of optimization. A comprehensive set of problems and exercises makes it suitable for use in one or two semesters of an advanced undergraduate course or a first-year graduate course. Each half of the book contains a full semester's worth of complementary yet stand-alone material. The practical orientation of the topics chosen and a wealth of useful examples also make the book suitable as a reference work for practitioners in the field. In this second edition the authors have added sections on recent innovations, techniques, and methodologies.
This book provides a comprehensive introduction to rough set-based feature selection. Rough set theory, first proposed by Zdzislaw Pawlak in 1982, continues to evolve. Concerned with the classification and analysis of imprecise or uncertain information and knowledge, it has become a prominent tool for data analysis. The book enables the reader to systematically study all topics in rough set theory (RST), including preliminaries, advanced concepts, and feature selection using RST. The book is supplemented with an RST-based API library that can be used to implement several RST concepts and RST-based feature selection algorithms. The book provides an essential reference guide for students, researchers, and developers working in the areas of feature selection, knowledge discovery, and reasoning with uncertainty, especially those who are working in RST and granular computing. The primary audience of this book is the research community using rough set theory (RST) to perform feature selection (FS) on large-scale datasets in various domains. However, any community interested in feature selection, such as medical, banking, and finance, can also benefit from the book. This second edition also covers the dominance-based rough set approach and fuzzy rough sets. The dominance-based rough set approach (DRSA) is an extension of the conventional rough set approach and supports the preference order using the dominance principle. In turn, fuzzy rough sets are fuzzy generalizations of rough sets. An API library for the DRSA is also provided with the second edition of the book.
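For readers unfamiliar with the rough set machinery behind RST-based feature selection, a tiny self-contained sketch of lower and upper approximations (the toy objects and attribute values are invented for illustration; the book's API library is not used here):

```python
# Toy illustration of rough set approximations.
objects = {
    "o1": ("high", "yes"), "o2": ("high", "yes"),
    "o3": ("low",  "no"),  "o4": ("low",  "yes"),
}
X = {"o1", "o3"}  # target concept to approximate

# Equivalence classes of the indiscernibility relation: objects with
# identical attribute values are indistinguishable.
classes = {}
for name, attrs in objects.items():
    classes.setdefault(attrs, set()).add(name)

# Lower approximation: classes certainly contained in X.
lower = {o for c in classes.values() if c <= X for o in c}
# Upper approximation: classes that possibly overlap X.
upper = {o for c in classes.values() if c & X for o in c}

print(sorted(lower))  # ['o3']              -- certainly in X
print(sorted(upper))  # ['o1', 'o2', 'o3']  -- possibly in X
# The boundary upper - lower = {'o1', 'o2'} is where X is "rough";
# RST feature selection keeps attribute subsets (reducts) that
# preserve the lower approximations.
```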
A best-seller in its French edition, this book has an original structure, and its success in the French market demonstrates its appeal. It is based on three principles: 1. The chapters are organized by families of algorithms: exhaustive search, divide and conquer, etc. By contrast, there is no chapter devoted solely to a systematic exposition of, say, algorithms on strings; some of these appear in different chapters. 2. For each family of algorithms, an introduction is given to the mathematical principles and the issues of rigorous design, with one or two pedagogical examples. 3. For the most part, the book details 150 problems, spanning seven families of algorithms. For each problem, a precise and progressive statement is given. More importantly, a complete solution is detailed, with respect to the design principles that have been presented; classical errors are often pointed out. Roughly speaking, two thirds of the book is devoted to the detailed, rational construction of the solutions.
The contributions included in the volume are drawn from presentations at ODS2019 - International Conference on Optimization and Decision Science, which was the 49th annual meeting of the Italian Operations Research Society (AIRO), held in Genoa, Italy, on 4-7 September 2019. This book presents very recent results in the field of Optimization and Decision Science. While the book is addressed primarily to the Operations Research (OR) community, the interdisciplinary contents ensure that it will also be of very high interest for scholars and researchers from many scientific disciplines, including computer science, economics, mathematics, and engineering. Operations Research is known as the discipline of optimization applied to real-world problems and to complex decision-making fields. The focus is on mathematical and quantitative methods aimed at determining optimal or near-optimal solutions in acceptable computation times. This volume not only presents theoretical results but also covers real industrial applications, making it interesting for practitioners facing decision problems in logistics, manufacturing production, and services. Readers will accordingly find innovative ideas from both a methodological and an applied perspective.
Advanced Problem Solving Using Maple(TM): Applied Mathematics, Operations Research, Business Analytics, and Decision Analysis applies the mathematical modeling process by formulating, building, solving, analyzing, and criticizing mathematical models. Scenarios are developed within the scope of the problem-solving process. The text focuses on discrete dynamical systems, optimization techniques, single-variable unconstrained optimization and applied problems, and numerical search methods. Additional coverage includes multivariable unconstrained and constrained techniques. Linear algebra techniques to model and solve problems such as the Leontief model are covered, as are advanced regression techniques including nonlinear, logistic, and Poisson regression. Game theory, the Nash equilibrium, and Nash arbitration are also included. Features:
- The text's case studies and student projects involve students in real-world problem solving.
- Focuses on numerical solution techniques in dynamical systems, optimization, and numerical analysis.
- The numerical procedures discussed in the text are algorithmic and iterative.
- Maple is utilized throughout the text as a tool for computation and analysis.
- All algorithms are provided in step-by-step formats.
About the Authors: William P. Fox is an emeritus professor in the Department of Defense Analysis at the Naval Postgraduate School. Currently, he is an adjunct professor in the Department of Mathematics at the College of William and Mary. He received his PhD at Clemson University and has many publications and scholarly activities, including twenty books and over one hundred and fifty journal articles. William C. Bauldry, Prof. Emeritus and Adjunct Research Prof. of Mathematics at Appalachian State University, received his PhD in Approximation Theory from Ohio State. He has published many papers on pedagogy and technology, often using Maple, and has been the PI of several NSF-funded projects incorporating technology and modeling into math courses. He currently serves as Associate Director of COMAP's Math Contest in Modeling (MCM).
In many practical situations, we are interested in statistics characterizing a population of objects: e.g. in the mean height of people from a certain area. Most algorithms for estimating such statistics assume that the sample values are exact. In practice, sample values come from measurements, and measurements are never absolutely accurate. Sometimes, we know the exact probability distribution of the measurement inaccuracy, but often, we only know the upper bound on this inaccuracy. In this case, we have interval uncertainty: e.g. if the measured value is 1.0, and inaccuracy is bounded by 0.1, then the actual (unknown) value of the quantity can be anywhere between 1.0 - 0.1 = 0.9 and 1.0 + 0.1 = 1.1. In other cases, the values are expert estimates, and we only have fuzzy information about the estimation inaccuracy. This book shows how to compute statistics under such interval and fuzzy uncertainty. The resulting methods are applied to computer science (optimal scheduling of different processors), to information technology (maintaining privacy), to computer engineering (design of computer chips), and to data processing in geosciences, radar imaging, and structural mechanics.
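To make the interval setting concrete: for statistics that are monotone in each sample value, such as the mean, the tightest enclosure comes from evaluating at the interval endpoints. A minimal sketch with hypothetical data follows; harder statistics, such as the variance, need the specialized methods the book develops:

```python
# Mean under interval uncertainty: each x_i is only known to lie in
# [x_i - d_i, x_i + d_i]. The mean is monotone in every argument, so its
# tightest enclosure is obtained at the interval endpoints.
xs = [1.0, 2.3, 0.8]   # measured values (hypothetical data)
ds = [0.1, 0.2, 0.1]   # upper bounds on the measurement inaccuracies

n = len(xs)
mean_lo = sum(x - d for x, d in zip(xs, ds)) / n
mean_hi = sum(x + d for x, d in zip(xs, ds)) / n
print(f"mean lies in [{mean_lo:.4f}, {mean_hi:.4f}]")  # [1.2333, 1.5000]
```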
In contrast to mainstream economics, complexity theory conceives the economy as a complex system of heterogeneous interacting agents characterised by limited information and bounded rationality. Agent Based Models (ABMs) are the analytical and computational tools developed by the proponents of this emerging methodology. Aimed at students and scholars of contemporary economics, this book includes a comprehensive toolkit for agent-based computational economics, now quickly becoming the new way to study evolving economic systems. Leading scholars in the field explain how ABMs can be applied fruitfully to many real-world economic examples and represent a great advancement over mainstream approaches. The essays discuss the methodological bases of agent-based approaches and demonstrate step-by-step how to build, simulate and analyse ABMs and how to validate their outputs empirically using the data. They also present a wide set of applications of these models to key economic topics, including the business cycle, labour markets, and economic growth.
Introduction to Quantitative Macroeconomics Using Julia: From Basic to State-of-the-Art Computational Techniques facilitates access to fundamental techniques in computational and quantitative macroeconomics. It focuses on the recent and very promising language Julia, which offers MATLAB-like syntax at speeds comparable to C/Fortran. It also discusses the modeling challenges that make quantitative macroeconomics dynamic, a key feature that few books on the topic include, giving macroeconomists the basic tools to build, solve and simulate macroeconomic models. This book neatly fills the gap between intermediate macroeconomics books and the modern DSGE models used in research.
Complex high-technology devices are in growing use in industry, service sectors, and everyday life. Their reliability and maintenance are of utmost importance in view of their cost and critical functions. This book focuses on this theme and is intended to serve as a graduate-level textbook and reference book for scientists and academics in the field. The chapters are grouped into five complementary parts that cover the most important aspects of reliability and maintenance: stochastic models of reliability and maintenance, decision models involving optimal replacement and repair, stochastic methods in software engineering, computational methods and simulation, and maintenance management systems. This wide range of topics provides the reader with a complete picture in a self-contained volume.
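One classic optimal-replacement decision model of the kind the book covers is the age-replacement policy; the sketch below numerically minimizes its long-run cost rate for a hypothetical Weibull lifetime (the costs and distribution parameters are invented for illustration):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Age-replacement policy: replace preventively at age T at cost c_p, or on
# failure at cost c_f > c_p, whichever comes first. Long-run cost rate:
#   C(T) = (c_f * F(T) + c_p * R(T)) / integral_0^T R(t) dt,
# with F the lifetime CDF and R = 1 - F the survival function.
c_p, c_f = 1.0, 5.0
t = np.linspace(1e-3, 3.0, 3000)
R = np.exp(-t ** 2)                              # Weibull(shape=2) survival
E_cycle = cumulative_trapezoid(R, t, initial=0)  # expected cycle length
C = (c_f * (1 - R) + c_p * R) / np.maximum(E_cycle, 1e-12)

print(f"optimal replacement age ~ {t[np.argmin(C)]:.3f}")
```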
This book is for graduate students and researchers, introducing modern foundational research in mathematics, computer science, and philosophy from an interdisciplinary point of view. Its scope includes Predicative Foundations, Constructive Mathematics and Type Theory, Computation in Higher Types, Extraction of Programs from Proofs, and Algorithmic Aspects in Financial Mathematics. By filling the gap between (under-)graduate level textbooks and advanced research papers, the book gives a scholarly account of recent developments and emerging branches of the aforementioned fields.
Many-valued logics are becoming increasingly important in many branches of science. This is the second volume of a comprehensive two-volume handbook on many-valued logics by two leading members of the famous Polish school of logic. While the first volume of 1992 was mainly concerned with theoretical foundations, this volume emphasizes automated reasoning, practical applications, and the latest developments in closely related fields, such as fuzzy logics and rough set theory. It offers an extensive overview of Gentzen deduction systems and multi-sequential systems in many-valued logics and shows the application of the resolution principle to these logics. It discusses applications in such areas as software specification and electronic circuit verification, and presents fuzzy logics and rough set theory in detail.
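As a concrete taste of many-valued logic, here is a sketch of the Lukasiewicz three-valued connectives, one standard system from the Polish school, chosen here purely for illustration (the handbook covers many other systems):

```python
from itertools import product

# Lukasiewicz three-valued logic: truth values 0 (false), 1/2 (unknown),
# 1 (true). Negation: !a = 1 - a. Implication: a -> b = min(1, 1 - a + b).
neg = lambda a: 1 - a
imp = lambda a, b: min(1, 1 - a + b)

vals = (0, 0.5, 1)
for a, b in product(vals, repeat=2):
    print(f"{a:>3} -> {b:>3} = {imp(a, b)}")

# With the lattice disjunction max(a, b), the excluded middle a OR !a
# evaluates to 1/2 when a = 1/2, so it is no longer a tautology -- a
# hallmark of many-valued logics.
```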
This monograph deals with theoretical aspects and numerical simulations of the interaction of electromagnetic fields with nonlinear materials. It focuses in particular on media with nonlinear polarization properties. It addresses the direct problem of nonlinear electrodynamics, that is, to understand the nonlinear behavior of the induced polarization and to analyze or even control its impact on the propagation of electromagnetic fields in the matter. The book gives a comprehensive presentation of the results obtained by the authors during the last decade, puts those findings in a broader, unified context, and extends them in several directions. It is divided into eight chapters and three appendices. Chapter 1 starts from Maxwell's equations and develops a wave propagation theory in plate-like media with nonlinear polarizability. In Chapter 2 a theoretical framework in terms of weak solutions is given in order to prove the existence and uniqueness of a solution of the semilinear boundary-value problem derived in the first chapter. Chapter 3 presents a different approach to the solvability theory of the reduced frequency-domain model. Here the boundary-value problem is reduced to finding solutions of a system of one-dimensional nonlinear Hammerstein integral equations. Chapter 4 describes an approach to the spectral analysis of the linearized system of integral equations. Chapters 5 and 6 are devoted to the numerical approximation of the solutions of the corresponding mathematical models. Chapter 7 contains detailed descriptions, discussions and evaluations of the numerical experiments. Finally, Chapter 8 gives a summary of the results and an outlook for future work.
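For orientation, the standard schematic form of such a nonlinear constitutive law is the power-series expansion of the induced polarization in the electric field (the monograph's precise model may differ):

```latex
% Schematic constitutive law for a medium with nonlinear polarizability:
\[
  \mathbf{P} \;=\; \varepsilon_0\left(\chi^{(1)}\mathbf{E}
      + \chi^{(2)}\mathbf{E}^{2} + \chi^{(3)}\mathbf{E}^{3} + \cdots\right).
\]
% Substituted into Maxwell's equations, a nonlinearity of this kind leads
% to semilinear boundary-value problems of the type the monograph analyzes.
```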
"Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, "provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented for knowledge elicitation, model construction and verification, modeling techniques and tricks, learning models from data, and analyses of models have all been developed and refined on the basis of numerous courses that the authors have held for practitioners worldwide. "
In this popular text for a Numerical Analysis course, the authors introduce several major methods for solving partial differential equations (PDEs), including elliptic, parabolic, and hyperbolic equations. It covers traditional techniques, including the classic finite difference method and the finite element method, as well as state-of-the-art numerical methods. The text uniquely emphasizes both theoretical numerical analysis and practical implementation of the algorithms in MATLAB. This new edition includes a new chapter on the Finite Volume Method, the presentation has been tightened, new exercises and applications are included, and the text now refers to the latest release of MATLAB. Key Selling Points:
- A successful undergraduate textbook on numerical analysis and methods, as taught in mathematics and computer engineering; such a course is offered at virtually every university with an engineering department or school.
- Competitive advantage: broader coverage of numerical methods (including finite difference, finite element, meshless, and finite volume methods), with MATLAB source code for the most popular PDEs and detailed explanations of the implementation and theoretical analysis.
- No other existing textbook on the market offers such a combination of theoretical depth and practical source code.
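To show the flavor of the classic finite difference method the book starts from, here is a sketch in Python rather than the book's MATLAB (the model problem and grid are illustrative):

```python
import numpy as np

# 1-D model problem: -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# discretized with the standard second-order central difference.
n = 50                                # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)    # chosen so u(x) = sin(pi*x) exactly

# Tridiagonal matrix of the -u'' operator: stencil (-1, 2, -1) / h^2.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2

u = np.linalg.solve(A, f)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # O(h^2) discretization error
```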
This book explains exactly what human knowledge is. The key concepts in this book are structures and algorithms, i.e., what the readers "see" and how they make use of what they see. Thus in comparison with some other books on the philosophy (or methodology) of science, which employ a syntactic approach, the author's approach is model theoretic or structural. Properly understood, it extends the current art and science of mathematical modeling to all fields of knowledge. The link between structure and algorithms is mathematics. But viewing "mathematics" as such a link is not exactly what readers most likely learned in school; thus, the task of this book is to explain what "mathematics" should actually mean. Chapter 1, an introductory essay, presents a general analysis of structures, algorithms and how they are to be linked. Several examples from the natural and social sciences, and from the history of knowledge, are provided in Chapters 2-6. In turn, Chapters 7 and 8 extend the analysis to include language and the mind. Structures are what the readers see. And, as abstract cultural objects, they can almost always be seen in many different ways. But certain structures, such as natural numbers and the basic theory of grammar, seem to have an absolute character. Any theory of knowledge grounded in human culture must explain how this is possible. The author's analysis of this cultural invariance, combining insights from evolutionary theory and neuroscience, is presented in the book's closing chapter. The book will be of interest to researchers, students and those outside academia who seek a deeper understanding of knowledge in our present-day society.
This book covers a variety of problems, and offers solutions to some, in:
- Statistical state and parameter estimation in nonlinear stochastic dynamical systems, in both the classical and quantum scenarios
- Propagation of electromagnetic waves in a plasma as described by the Boltzmann kinetic transport equation
- Classical and quantum general relativity
It will be of use to engineering undergraduate students interested in analysing the motion of robots subject to random perturbation, and also to research scientists working in quantum filtering.
This book surveys some of the important research work carried out by Indian scientists in the field of pure and applied probability, quantum probability, quantum scattering theory, group representation theory and general relativity. It reviews the axiomatic foundations of probability theory by A.N. Kolmogorov and how the Indian school of probabilists and statisticians used this theory effectively to study a host of applied probability and statistics problems like parameter estimation, convergence of a sequence of probability distributions, and martingale characterization of diffusions. It will be an important resource to students and researchers of Physics and Engineering, especially those working with Advanced Probability and Statistics.
This book offers a self-contained exposition of the theory of computability in a higher-order context, where 'computable operations' may themselves be passed as arguments to other computable operations. The subject originated in the 1950s with the work of Kleene, Kreisel and others, and has since expanded in many different directions under the influence of workers from both mathematical logic and computer science. The ideas of higher-order computability have proved valuable both for elucidating the constructive content of logical systems, and for investigating the expressive power of various higher-order programming languages. In contrast to the well-known situation for first-order functions, it turns out that at higher types there are several different notions of computability competing for our attention, and each of these has given rise to its own strand of research. In this book, the authors offer an integrated treatment that draws together many of these strands within a unifying framework, revealing not only the range of possible computability concepts but the relationships between them. The book will serve as an ideal introduction to the field for beginning graduate students, as well as a reference for advanced researchers.
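The phrase "computable operations passed as arguments to other computable operations" can be made concrete with an ordinary higher-order function; the sketch below is only an informal illustration of a type-2 functional, not the book's formal machinery:

```python
# A second-order functional: it takes a first-order function f as input.
def bounded_search(f, bound):
    """Return the least n < bound with f(n) == 0, or None.

    Since the argument f is itself a (computable) function, bounded_search
    is a type-2 object -- computation at higher types in miniature."""
    for n in range(bound):
        if f(n) == 0:
            return n
    return None

print(bounded_search(lambda n: n * n - 49, 100))  # 7
```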
This book introduces readers to fundamental concepts in fuzzy logic. It describes the necessary theoretical background and a number of basic mathematical models. Moreover, it makes them familiar with fuzzy control, an important topic in the engineering field. The book offers an unconventional introductory textbook on fuzzy logic, presenting theory together with examples and not always following the typical mathematical theorem-corollary style. Primarily intended to support engineers during their university studies, and to spark their curiosity about fuzzy logic and its applications, the book is also suitable for self-study, providing a valuable resource for engineers and professionals who deal with imprecision and non-random uncertainty in real-world applications.
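A minimal sketch of the kind of building block such an introduction starts from, a triangular fuzzy membership function (the parameters and the "about 20 degrees" example are invented for illustration):

```python
# Triangular membership function: the simplest fuzzy-set building block,
# with parameters a <= b <= c (zero outside [a, c], peak 1 at b).
def triangular(x, a, b, c):
    """Degree, in [0, 1], to which x belongs to the fuzzy set peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# E.g. a fuzzy set "temperature is about 20 degrees":
for t in (10, 15, 20, 25, 30):
    print(t, triangular(t, 10, 20, 30))   # 0.0, 0.5, 1.0, 0.5, 0.0
```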
In the field of economic analysis, computability in the formation of economic hypotheses is seen as the way forward. In this book, Professor Velupillai implements a theoretical research program along these lines. Choice theory, learning rational expectations equilibria, the persistence of adaptive behaviour, arithmetical games, aspects of production theory, and economic dynamics are given recursion theoretic (i.e. computable) interpretations. These interpretations lead to new kinds of questions being posed by the economic theorist. In particular, recursion theoretic decision problems replace standard optimisation paradigms in economic analysis. Economic theoretic questions, posed recursion-theoretically, lead to answers that are ambiguous: undecidable choices, uncomputable learning processes, and algorithmically unplayable games become standard answers. Professor Velupillai argues that a recursion theoretic formalisation of economic analysis, Computable Economics, makes the subject intrinsically inductive and computational.