This book discusses an important area of numerical optimization: the interior-point method. The topic has been popular since the 1980s, when it gradually became clear that simplex algorithms are not polynomial-time in the worst case, while many interior-point algorithms can be proved to converge in polynomial time. For a long time, however, there was a noticeable gap between the theoretical polynomial bounds of interior-point algorithms and the practical efficiency of those algorithms. Strategies that were important to computational efficiency became barriers in the proof of good polynomial bounds: the more such strategies an algorithm used, the worse its polynomial bound became. To further exacerbate the problem, Mehrotra's predictor-corrector (MPC) algorithm, until recently the most popular and efficient interior-point algorithm, uses all of these strategies, and no convergence proof is known for it. MPC therefore lacks a polynomiality guarantee, the same critical issue that afflicts the simplex method. This book discusses recent developments that resolve this dilemma. It has three major parts. The first, comprising Chapters 1 through 4, presents some of the most important algorithms from the development of the interior-point method around the 1990s, most of which are widely known. The main purpose of this part is to explain the dilemma described above by analyzing these algorithms' polynomial bounds and summarizing the computational experience associated with them. The second part, comprising Chapters 5 through 8, describes how to resolve the dilemma step by step using arc-search techniques; at the end of this part, a very efficient algorithm with the best known polynomial bound is presented. The last part, comprising Chapters 9 through 12, extends arc-search techniques to more general problems, such as convex quadratic programming, the linear complementarity problem, and semidefinite programming.
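For readers meeting the topic for the first time, the object that path-following interior-point algorithms track is the primal-dual central path. The sketch below uses conventional notation and is a standard formulation, not a statement of this book's arc-search variants.

```latex
% Primal-dual pair:  min c^T x  s.t. Ax = b, x >= 0,
%                    max b^T y  s.t. A^T y + s = c, s >= 0.
% The central path is the set of solutions, parameterized by mu > 0, of
\begin{aligned}
A x &= b, & x &> 0,\\
A^\top y + s &= c, & s &> 0,\\
x_i s_i &= \mu, & i &= 1,\dots,n.
\end{aligned}
% Path-following methods drive mu -> 0 along search directions; arc-search
% methods instead approximate the path locally by an arc (e.g., part of an
% ellipse) rather than a straight line.
```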
Businesses today face a highly competitive market and fast-changing technologies, and to meet demanding customers' needs they rely on high-quality software. Soft computing techniques offer a way to estimate the effort invested in component-based software. Component-Based Systems: Estimating Efforts Using Soft Computing Techniques is an important resource that uses computer-based models for estimating software development effort. It provides an overview of component-based software engineering while addressing the uncertainty involved in effort estimation and expert opinion, and it instructs the reader in how to develop mathematical models. The book is an excellent source of information for students and researchers learning soft computing models and their applications in software management, and it will help software developers, managers, and practitioners in industry apply soft computing techniques to effort estimation.
This is the first rigorous, self-contained treatment of the theory of deep learning. Starting with the foundations of the theory and building up from them, it is essential reading for scientists, instructors, and students interested in artificial intelligence and deep learning. It provides guidance on how to think about scientific questions, and leads readers through the history of the field and its fundamental connections to neuroscience. The author discusses many applications to beautiful problems in the natural sciences, in physics, chemistry, and biomedicine: examples include the search for exotic particles and dark matter in experimental physics, the prediction of molecular properties and reaction outcomes in chemistry, and the prediction of protein structures and the diagnostic analysis of biomedical images. The text is accompanied by a full set of exercises at different difficulty levels and encourages out-of-the-box thinking.
A Computational Approach to Statistical Learning gives a novel introduction to predictive modeling by focusing on the algorithmic and numeric motivations behind popular statistical methods. The text contains annotated code for over 80 original reference functions. These functions provide minimal working implementations of common statistical learning algorithms. Every chapter concludes with a fully worked-out application that illustrates predictive modeling tasks using a real-world dataset. The text begins with a detailed analysis of linear models and ordinary least squares. Subsequent chapters explore extensions such as ridge regression, generalized linear models, and additive models. The second half focuses on the use of general-purpose algorithms for convex optimization and their application to tasks in statistical learning. Models covered include the elastic net, dense neural networks, convolutional neural networks (CNNs), and spectral clustering. A unifying theme throughout the text is the use of optimization theory in the description of predictive models, with a particular focus on the singular value decomposition (SVD). Through this theme, the computational approach motivates and clarifies the relationships between various predictive models. Taylor Arnold is an assistant professor of statistics at the University of Richmond. His work at the intersection of computer vision, natural language processing, and digital humanities has been supported by multiple grants from the National Endowment for the Humanities (NEH) and the American Council of Learned Societies (ACLS). His first book, Humanities Data in R, was published in 2015. Michael Kane is an assistant professor of biostatistics at Yale University. He is the recipient of grants from the National Institutes of Health (NIH), DARPA, and the Bill and Melinda Gates Foundation. His R package bigmemory won the Chambers prize for statistical software in 2010. Bryan Lewis is an applied mathematician and author of many popular R packages, including irlba, doRedis, and threejs.
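As a flavor of the book's SVD theme, here is a minimal sketch of ridge regression computed directly from the singular value decomposition. It is written in Python and is not taken from the book; the book's own reference functions may differ.

```python
import numpy as np

def ridge_svd(X, y, lam):
    """Ridge regression coefficients via the SVD of X.

    Solves min_b ||y - X b||^2 + lam ||b||^2 using X = U S V^T,
    which gives b = V diag(s / (s^2 + lam)) U^T y.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s**2 + lam)              # shrunken inverse singular values
    return Vt.T @ (d * (U.T @ y))

# Usage on a small synthetic example (illustrative data, not from the book).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ beta + rng.normal(scale=0.1, size=100)
print(ridge_svd(X, y, lam=1.0))       # close to beta for small lam
```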
Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.
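To make the low-rank-plus-sparse decomposition concrete, here is a toy Python sketch using alternating proximal steps. It is a simplified stand-in for the benchmarked algorithms the handbook surveys (such as inexact ALM for principal component pursuit), and the parameter defaults are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_sketch(D, lam=None, mu=None, n_iter=100):
    """Decompose D ~ L + S with L low-rank and S sparse.

    A heuristic alternating-minimization sketch of robust PCA; production
    solvers add dual updates, adaptive penalties, and stopping tests.
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(D).mean()   # illustrative choice
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, mu)                                   # low-rank update
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0) # sparse update
    return L, S
```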
This new work is an introduction to the numerical solution of the initial value problem for a system of ordinary differential equations. The first three chapters are general in nature, while Chapters 4 through 8 derive the basic numerical methods, prove their convergence, study their stability, and consider how to implement them effectively. The book focuses on the methods most important in practice, develops them fully, uses examples throughout, and emphasizes practical problem solving.
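As a pointer to the kind of method such a book derives, here is a minimal Python implementation of the classical fourth-order Runge-Kutta method for an initial value problem. It is a generic textbook scheme, not code from this book.

```python
import numpy as np

def rk4(f, t0, y0, h, n_steps):
    """Classical 4th-order Runge-Kutta for the IVP y' = f(t, y), y(t0) = y0."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

# Usage: y' = -y, y(0) = 1, whose exact solution is exp(-t).
ts, ys = rk4(lambda t, y: -y, 0.0, [1.0], h=0.1, n_steps=10)
print(ys[-1, 0], np.exp(-1.0))   # the two values agree to ~1e-7
```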
Accurate, robust and fast image reconstruction is a critical task in many scientific, industrial and medical applications. Over the last decade, image reconstruction has been revolutionized by the rise of compressive imaging. It has fundamentally changed the way modern image reconstruction is performed. This in-depth treatment of the subject commences with a practical introduction to compressive imaging, supplemented with examples and downloadable code, intended for readers without extensive background in the subject. Next, it introduces core topics in compressive imaging - including compressed sensing, wavelets and optimization - in a concise yet rigorous way, before providing a detailed treatment of the mathematics of compressive imaging. The final part is devoted to recent trends in compressive imaging: deep learning and neural networks. With an eye to the next decade of imaging research, and using both empirical and mathematical insights, it examines the potential benefits and the pitfalls of these latest approaches.
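A small, self-contained illustration of the compressed-sensing core: the sketch below implements ISTA (iterative shrinkage-thresholding) for the l1-regularized least-squares problem. It stands in for the optimization methods the book treats rigorously and is not the book's downloadable code; all problem sizes and parameters are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """ISTA for min_x 0.5 ||Ax - y||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)            # gradient of the smooth term
        z = x - g / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

# Usage: recover a 5-sparse signal from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 42, 60, 88]] = [2.0, -1.5, 1.0, 3.0, -2.0]
x_hat = ista(A, A @ x_true, lam=0.01)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # large entries should sit on the true support
```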
The goal of Computer Algebra: Concepts and Techniques is to demystify computer algebra systems for a wide audience, including students, faculty, and professionals in scientific fields such as computer science, mathematics, engineering, and physics. Unlike previous books, it assumes only knowledge of first-year calculus and a little programming experience, a background that can be expected of the intended audience. The book is written in a lean and lively style, with numerous examples to illustrate the issues and techniques discussed. It presents the principal algorithms and data structures, while also discussing the inherent and practical limitations of these systems.
Exact eigenvalues, eigenvectors, and principal vectors of operators with infinite-dimensional ranges can rarely be found. One must therefore approximate such operators by finite-rank operators and then solve the original eigenvalue problem approximately. Spectral Computations for Bounded Operators addresses the issue of solving eigenvalue problems for operators on infinite-dimensional spaces. From a review of classical spectral theory, through concrete approximation techniques, to finite-dimensional situations that can be implemented on a computer, this volume illustrates the marriage of pure and applied mathematics. It contains a variety of recent developments, including a new type of approximation that encompasses a variety of approximation methods yet is simple to verify in practice. It also suggests a new stopping criterion for the QR method and outlines advances in both iterative refinement and acceleration techniques for improving the accuracy of approximations. The authors illustrate all definitions and results with elementary examples and include numerous exercises. The book thus serves both as an outstanding text for second-year graduate students and as a source of current results for research scientists.
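For a concrete, finite-dimensional glimpse of the QR method the book refines, here is a bare-bones unshifted QR iteration in Python. Practical implementations add shifts, deflation, and stopping tests (the book proposes a new stopping criterion); this sketch is far from the operator-theoretic setting of the text.

```python
import numpy as np

def qr_iteration(A, n_iter=500):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k.

    For many matrices, A_k converges to (quasi-)triangular form with the
    eigenvalues on the diagonal.
    """
    Ak = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                      # similarity transform preserves eigenvalues
    return np.diag(Ak)

# Usage on a small symmetric tridiagonal matrix (illustrative example).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(sorted(qr_iteration(A)))          # compare with np.linalg.eigvalsh(A)
```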
More than ever before, complicated mathematical procedures are integral to the success and advancement of technology, engineering, and even industrial production. Knowledge of and experience with these procedures are therefore vital to present and future scientists, engineers, and technologists. Mathematical Methods in Physics and Engineering with Mathematica clearly demonstrates how to solve difficult practical problems involving ordinary and partial differential equations and boundary value problems using the software package Mathematica (4.x). Avoiding mathematical theorems and numerical methods, and requiring no prior experience with the software, the author helps readers learn by doing, with step-by-step recipes useful in both new and classical applications. Mathematica and FORTRAN codes used in the book's examples and exercises are available for download from the Internet. The author's clear explanation of each Mathematica command, along with a wealth of examples and exercises, makes the book an outstanding choice both as a reference for practical problem solving and as a quick-start guide to a leading mathematics software package.
Formal Languages and Computation: Models and Their Applications gives a clear, comprehensive introduction to formal language theory and its applications in computer science. It covers all rudimentary topics concerning formal languages and their models, especially grammars and automata, and sketches the basic ideas underlying the theory of computation, including computability, decidability, and computational complexity. Emphasizing the relationship between theory and application, the book describes many real-world applications, including computer science engineering techniques for language processing and their implementation. The book:
* Covers the theory of formal languages and their models, including all essential concepts and properties
* Explains how language models underlie language processors
* Pays special attention to programming language analyzers, such as scanners and parsers, based on four language models: regular expressions, finite automata, context-free grammars, and pushdown automata
* Discusses the mathematical notion of a Turing machine as a universally accepted formalization of the intuitive notion of a procedure
* Reviews the general theory of computation, particularly computability and decidability
* Considers problem-deciding algorithms in terms of their computational complexity, measured according to time and space requirements
* Points out that some problems are decidable in principle but are in fact intractable because of the absurdly high computational requirements of the algorithms that decide them
In short, this book is a theoretically oriented treatment of formal languages and their models with a focus on their applications, introducing all formalisms with enough rigor to make the results clear and valid.
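To ground one of the four language models named above, here is a minimal Python simulation of a deterministic finite automaton, the model underlying scanners. The example automaton is an illustrative assumption, not taken from the book.

```python
def run_dfa(transitions, start, accepting, s):
    """Simulate a DFA: follow the transition table over input string s."""
    state = start
    for ch in s:
        state = transitions.get((state, ch))
        if state is None:        # missing transition: reject
            return False
    return state in accepting

# Usage: a DFA over {0, 1} accepting strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",   ("odd", "1"): "even"}
print(run_dfa(delta, "even", {"even"}, "1011"))  # False (three 1s)
print(run_dfa(delta, "even", {"even"}, "1001"))  # True  (two 1s)
```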
This book takes recent theoretical advances in finance and economics and shows how they can be implemented in the real world. It presents tactics for using mathematical and simulation models to solve complex tasks of forecasting income, valuing businesses, predicting retail sales, and evaluating markets and tax and regulatory problems. Business Economics and Finance with MATLAB, GIS, and Simulation Models provides a unique overview of sophisticated business and financial applications. It describes models that have been developed for the analysis of retail sales, tax policy, location, economic impact, public policy issues, and other challenges faced by executives, investors, and economists on a daily basis. It also offers groundbreaking insight into the many calculation and modeling tools that can be remotely hosted and run over the Internet, resulting in substantial user benefits and cost savings. This book is the first to fully explore the capabilities of MATLAB in the field of business economics and to explain how the benefits of sophisticated mathematical models can be provided to users via the Internet, using a thin-client environment. Many techniques directly incorporate geographic information and GIS in a way that was impossible until quite recently. Some techniques, such as fuzzy logic, retail sales, economic and fiscal impact models, and other MATLAB and Simulink models, are described for the first time in print in this book. The sections on business income and value break new ground by directly incorporating uncertainty, real option value, and prediction of variables using Ito and jump processes. Using dozens of examples, hundreds of references, and rigorous explanations of both theory and practice, it will become a prized reference for analysts demanding the best techniques.
Energy Power Risk: Derivatives, Computation and Optimization is a comprehensive guide presenting the latest mathematical and computational tools required for the quantification and management of energy power risk. Written by a practitioner with many years' experience in the field, it provides readers with valuable insights into the latest practices and methodologies used in today's markets, showing readers how to create innovative quantitative models for energy and power risk and derivative valuation. The book begins with an introduction to the mathematics of Brownian motion and stochastic processes, covering geometric Brownian motion, Ito's lemma, Ito's isometry, the Ornstein-Uhlenbeck process, and more. It then moves on to the simulation of power prices and the valuation of energy derivatives, before considering software engineering techniques for energy risk and portfolio optimization. The book also covers additional topics including wind and solar generation, intraday storage, and generation and demand optionality. Written in a highly practical manner, with example C++ and VBA code provided throughout, Energy Power Risk will be an essential reference for quantitative analysts, financial engineers, and other practitioners in the field of energy risk management, as well as researchers and students interested in the industry and how it works.
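As a taste of the mathematics the book begins with, here is a small Python sketch simulating geometric Brownian motion and an Ornstein-Uhlenbeck process (a common mean-reverting model for power prices). The book's own examples are in C++ and VBA; the code and all parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, T, n, rng):
    """Simulate geometric Brownian motion dS = mu*S dt + sigma*S dW
    using the exact log-space solution."""
    dt = T / n
    z = rng.standard_normal(n)
    log_s = np.log(s0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z)
    return np.concatenate([[s0], np.exp(log_s)])

def simulate_ou(x0, kappa, theta, sigma, T, n, rng):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
    dX = kappa*(theta - X) dt + sigma dW."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = (x[i] + kappa * (theta - x[i]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

rng = np.random.default_rng(42)
print(simulate_gbm(100.0, 0.05, 0.2, 1.0, 252, rng)[-1])  # terminal price
print(simulate_ou(50.0, 2.0, 45.0, 5.0, 1.0, 252, rng)[-1])  # reverts toward 45
```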
The present volume contains selected contributed papers from the BIOMAT 2008 Symposium and lectures delivered by keynote speakers during the plenary sessions. All chapters are centered on fundamental interdisciplinary areas of the mathematical modeling of biosystems, such as mathematical biology, biological physics, evolutionary biology, and bioinformatics. It contains new results on the mathematical analysis of reaction-diffusion equations, demographic Allee effects, and the dynamics of infection. Recent approaches to the modeling of biosystem structure, comprehensive reviews on icosahedral viral capsids and the classification of biological data via neural networks with prior knowledge, and a new perspective on a theoretical basis for bioinformatics are also discussed. The book contains original results on reaction-diffusion waves: the population dynamics of fishing resources and the effectiveness of marine protected areas; an approach to language evolution within a population dynamics framework; the analysis of bacterial genome evolution with Markov chains; and the choice of defense strategies and the study of the arms-race phenomenon in a host-parasite system.
The main focus of this book is on presenting advances in fuzzy statistics and on proposing a methodology for testing hypotheses in the fuzzy environment based on the estimation of fuzzy confidence intervals, a context in which not only the data but also the hypotheses are considered to be fuzzy. The proposed method for estimating these intervals is based on the likelihood method and employs the bootstrap technique. A new metric generalizing the signed-distance measure is also developed. In turn, the book presents two conceptually diverse applications in which such intervals play a role: one is a novel methodology for evaluating linguistic questionnaires at the global and individual levels; the other is an extension of multi-way analysis of variance to the space of fuzzy sets. To illustrate these approaches, the book presents several empirical and simulation-based studies with synthetic and real data sets. In closing, it presents a coherent R package called "FuzzySTs" which covers all the previously mentioned concepts with full documentation and selected use cases. Given its scope, the book will be of interest to all researchers whose work involves advanced fuzzy statistical methods.
In recent years, stylized forms of the Boltzmann equation, now going by the name of "Lattice Boltzmann equation" (LBE), have emerged, which relinquish most mathematical complexities of the true Boltzmann equation without sacrificing physical fidelity in the description of many situations involving complex fluid motion. This book provides the first detailed survey of LBE theory and its major applications to date. Accessible to a broad audience of scientists dealing with complex system dynamics, the book also portrays future developments in allied areas of science (material science, biology etc.) where fluid motion plays a distinguished role.
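For reference, the single-relaxation-time (BGK) form of the lattice Boltzmann equation reads as follows. This is the conventional textbook statement of the LBE, set down here to fix notation, not a formula quoted from the book.

```latex
% Lattice Boltzmann equation, BGK (single-relaxation-time) form:
% f_i = particle distribution along discrete lattice velocity c_i,
% tau = relaxation time, f_i^eq = local equilibrium distribution.
f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\; t + \Delta t) - f_i(\mathbf{x}, t)
  = -\frac{1}{\tau}\,\bigl[\, f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \,\bigr].
% Macroscopic fields are recovered as moments:
% \rho = \sum_i f_i, \qquad \rho\,\mathbf{u} = \sum_i \mathbf{c}_i f_i.
```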
Taking an interdisciplinary approach, this new book provides a modern introduction to scientific computing, exploring numerical methods, computer technology, and their interconnections, which are treated with the goal of facilitating scientific research across all disciplines. Each chapter provides an insightful lesson, and viewpoints from several subject areas are often compounded within a single chapter. Written with an eye on usefulness, longevity, and breadth, Lessons in Scientific Computing will serve as a "one-stop shop" for students taking a unified course in scientific computing, or seeking a single cohesive text spanning multiple courses. Features:
* Provides a unique combination of numerical analysis, computer programming, and computer hardware in a single text
* Includes essential topics such as numerical methods, approximation theory, parallel computing, algorithms, and examples of computational discoveries in science
* Not wedded to a specific programming language
This book gathers papers presented at the Workshop on Computational Diffusion MRI, CDMRI 2020, held under the auspices of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), which took place virtually on October 8th, 2020, having originally been planned to take place in Lima, Peru. The book presents the latest developments in the highly active and rapidly growing field of diffusion MRI. While offering new perspectives on the most recent research challenges in the field, the selected articles also provide a valuable starting point for anyone interested in learning computational techniques for diffusion MRI. The book includes rigorous mathematical derivations, a large number of rich, full-colour visualizations, and clinically relevant results. As such, it is of interest to researchers and practitioners in the fields of computer science, MRI physics, and applied mathematics. The reader will find numerous contributions covering a broad range of topics, from the mathematical foundations of the diffusion process and signal generation to new computational methods and estimation techniques for the in-vivo recovery of microstructural and connectivity features, as well as diffusion-relaxometry and frontline applications in research and clinical practice.
Lectures on Stochastic Programming: Modeling and Theory, Third Edition covers optimization problems involving uncertain parameters for which stochastic models are available. These problems occur in almost all areas of science and engineering. This substantial revision of the previous edition presents a modern theory of stochastic programming, including expanded coverage of sample complexity, risk measures, and distributionally robust optimization. Chapter 6 is updated, and the interchangeability principle for risk measures is discussed in detail. Two new chapters, 'Distributionally Robust Stochastic Programming' and 'Computational Methods', provide readers with a solid understanding of emerging topics, and Chapter 8 presents new material on the formulation of, and numerical approaches to solving, periodical multistage stochastic programs. This book is written for researchers and graduate students working on the theory and applications of optimization.
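For orientation, the canonical two-stage stochastic linear program with recourse, the starting point on which this theory builds, can be written as follows. The notation is the standard one and is not quoted from the book.

```latex
% Two-stage stochastic linear program with recourse:
\min_{x \ge 0} \; c^\top x + \mathbb{E}_{\xi}\!\left[\, Q(x,\xi) \,\right]
  \quad \text{s.t.} \quad A x = b,
\qquad \text{where} \qquad
Q(x,\xi) = \min_{y \ge 0} \left\{\, q(\xi)^\top y \;:\; T(\xi)\,x + W(\xi)\,y = h(\xi) \,\right\}.
% The expectation is taken over the random data xi. Risk-averse and
% distributionally robust variants replace the expectation by a risk
% measure or a worst case over a set of probability distributions.
```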
The book provides many of the basic papers in computer arithmetic. These papers describe the concepts and basic operations (in the words of the original developers) that would be useful to the designers of computers and embedded systems. Although the main focus is on the basic operations of addition, multiplication, and division, advanced concepts such as logarithmic arithmetic and the calculation of elementary functions are also covered. This volume is part of a three-volume set: Computer Arithmetic Volume I, Computer Arithmetic Volume II, and Computer Arithmetic Volume III. The full set is available for sale in a print-only version.
Problem solving is essential for dealing with real-world problems. Advanced Problem Solving with Maple: A First Course applies the mathematical modeling process by formulating, building, solving, analyzing, and criticizing mathematical models. It is intended for a course introducing students to mathematical topics they will revisit in their further studies. The authors present mathematical modeling and problem-solving topics using Maple as the computer algebra system for mathematical explorations, as well as for obtaining plots that help readers perform analyses. The book presents cogent applications that demonstrate an effective use of Maple, provides discussions of the results obtained using Maple, and stimulates thought and analysis of additional applications. Highlights:
* The book's real-world case studies prepare the student for modeling applications
* Bridges the study of topics and applications to various fields of mathematics, science, and engineering
* A flexible format and tiered approach offer courses for students at various levels
* The book can be used by students with only algebra or calculus behind them
About the authors: Dr. William P. Fox is an emeritus professor in the Department of Defense Analysis at the Naval Postgraduate School. Currently, he is an adjunct professor in the Department of Mathematics at the College of William and Mary. He received his Ph.D. from Clemson University and has many publications and scholarly activities, including twenty books and over one hundred and fifty journal articles. William C. Bauldry, Professor Emeritus and Adjunct Research Professor of Mathematics at Appalachian State University, received his PhD in approximation theory from Ohio State. He has published many papers on pedagogy and technology, often using Maple, and has been the PI of several NSF-funded projects incorporating technology and modeling into math courses. He currently serves as Associate Director of COMAP's Math Contest in Modeling (MCM). Please note that the Maple package "PSM" is now in the public area of the Maple Cloud. To access it from the web:
1. Go to the website https://maple.cloud
2. Click on "packages" in the left navigation pane
3. Click on "PSM" in the list of packages
4. Click the "Download" button to capture the package
To access it from Maple:
1. Click on the Maple Cloud icon (far right in the Maple window toolbar), or click on the Maple Cloud button on Maple's Start page, to go to the website
2. Click on "packages" in the navigation pane
3. Click on "PSM" in the list of packages; the package then downloads into Maple directly
Unique selling point:
* Industry-standard book for merchants, banks, and consulting firms looking to learn more about PCI DSS compliance
Core audience:
* Retailers (both physical and electronic), firms that handle credit or debit cards (such as merchant banks and processors), and firms that deliver PCI DSS products and services
Place in the market:
* Currently there are no PCI DSS 4.0 books
Randomized search heuristics such as evolutionary algorithms, genetic algorithms, evolution strategies, ant colony optimization, and particle swarm optimization turn out to be highly successful for optimization in practice. The theory of randomized search heuristics, which has been growing rapidly in the last five years, attempts to explain the success of these methods in practical applications. This book covers both classical results and the most recent theoretical developments in the field, such as runtime analysis, drift analysis, and convergence. Each chapter provides an overview of a particular domain and gives insights into the proofs and proof techniques of more specialized areas. Since randomized search heuristics is a relatively young and vast field, many open problems remain; these problems and directions for future research are addressed and discussed in this book. The book will be an essential source of reference for experts in the domain of randomized search heuristics and for researchers who are involved in, or ready to embark on, this field. As an advanced textbook, it will also benefit graduate students through its comprehensive coverage of topics.
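As a concrete example of the objects this theory studies, here is a minimal Python implementation of the (1+1) evolutionary algorithm on the OneMax benchmark. The Theta(n log n) expected runtime noted in the comment is a classical result of the field; the code itself is an illustrative sketch, not taken from the book.

```python
import random

def one_plus_one_ea(n, max_iters=100_000, seed=0):
    """(1+1) EA on OneMax: maximize the number of ones in a bit string.

    Runtime analysis shows an expected optimization time of Theta(n log n),
    the kind of result proved with the drift arguments this theory develops.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for t in range(1, max_iters + 1):
        # Mutate: flip each bit independently with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = sum(y)
        if fy >= fx:                     # accept the offspring if not worse
            x, fx = y, fy
        if fx == n:                      # optimum reached
            return t
    return None

print(one_plus_one_ea(100))   # typical runs finish in a few thousand iterations
```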