Toward the late 1990s, several research groups independently began developing new, related theories in mathematical finance. These theories did away with the standard stochastic geometric diffusion "Samuelson" market model (also known as the Black-Scholes model, because it underlies that most famous theory), instead opting for models that allow minimax approaches to complement or replace stochastic methods. Among the most fruitful models were those utilizing game-theoretic tools and the so-called interval market model. Over time, these models have slowly but steadily gained influence in the financial community, providing a useful alternative to classical methods. A self-contained monograph, The Interval Market Model in Mathematical Finance: Game-Theoretic Methods, assembles some of the most important results, old and new, in this area of research. Written by seven of the most prominent pioneers of the interval market model and game-theoretic finance, the work provides a detailed account of several closely related modeling techniques for an array of problems in mathematical economics. The book is divided into five parts, which successively address:
* probability-free Black-Scholes theory;
* the fair-price interval of an option;
* representation formulas and fast algorithms for option pricing;
* rainbow options;
* the tychastic approach to mathematical finance based upon viability theory.
This book is a welcome addition to the literature, complementing the myriad titles on the market that take a classical approach to mathematical finance. It is a worthwhile resource for researchers in applied mathematics and quantitative finance, and it is also written in a manner accessible to financially inclined readers with a limited technical background.
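To give a flavour of the minimax style of pricing in an interval market model, the sketch below prices a European call by backward induction when the one-step price ratio is only known to lie in an interval [d, u] and interest is ignored. It uses the standard robust-hedging observation that, for a convex continuation value, the worst case over the interval is attained at its endpoints; the function name, parameters, and the zero-rate simplification are illustrative assumptions and are not taken from the book.

```python
# Minimal sketch: robust (minimax) pricing of a European call in an interval
# market model where the one-step price ratio lies in [d, u].
# For a convex continuation value the worst case over [d, u] is attained at an
# endpoint, so the super-hedging price satisfies a binomial-style recursion
# with the weight q = (1 - d) / (u - d).  Zero interest rate is assumed.
# All names and parameter values below are illustrative, not the authors' code.

def robust_call_price(S0, K, d, u, n_steps):
    """Backward induction on the recombining tree built from the endpoints d, u."""
    q = (1.0 - d) / (u - d)      # weight that equalises the two endpoint scenarios
    # terminal stock prices and payoffs
    prices = [S0 * (u ** k) * (d ** (n_steps - k)) for k in range(n_steps + 1)]
    values = [max(p - K, 0.0) for p in prices]
    # roll back one period at a time
    for _ in range(n_steps):
        values = [q * values[k + 1] + (1.0 - q) * values[k]
                  for k in range(len(values) - 1)]
    return values[0]

if __name__ == "__main__":
    # call struck at 100 with one-step multipliers confined to [0.95, 1.05]
    print(robust_call_price(S0=100.0, K=100.0, d=0.95, u=1.05, n_steps=50))
```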
This book examines in detail the planning and modelling of local infrastructure like energy systems, including the complexities resulting from various uncertainties. Readers will discover the individual steps involved in infrastructure planning in cities and territories, as well as the primary requirements and supporting quality factors. Further topics covered concern the field of uncertainty and its synergies with infrastructure planning. Theories, methodological backgrounds and concrete case studies will not only help readers to understand the proposed methodologies for modelling and uncertainty analysis, but will also show them how these approaches are implemented in practice.
This work presents a study of methods useful for modeling and understanding dynamical systems in the Galaxy. A natural coordinate system for the study of such systems is the action-angle coordinate system. New methods for approximating action-angle variables in general potentials are presented and discussed. These new tools are applied to the construction of dynamical models for two of the Galaxy's components: tidal streams and the Galactic disc. Tidal streams are remnants of tidally stripped satellites of the Milky Way that experience the effects of the large-scale structure of the Galactic gravitational potential, while the Galactic disc provides insights into the nature of the Galaxy near the Sun. Appropriate action-based models are constructed for these components and extended to include further information, such as the metallicity of stars.
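For readers unfamiliar with action-angle variables, the textbook definitions (standard material, not specific to this thesis) are that the actions are loop integrals of the momenta over the fundamental orbital paths and the conjugate angles evolve linearly in time:

```latex
% Standard definitions of action-angle variables (textbook material)
\[
  J_i \;=\; \frac{1}{2\pi}\oint_{\gamma_i} \mathbf{p}\cdot d\mathbf{q},
  \qquad
  H = H(\mathbf{J}),
  \qquad
  \dot{\theta}_i \;=\; \frac{\partial H}{\partial J_i} \equiv \Omega_i(\mathbf{J})
  \;\;\Rightarrow\;\;
  \theta_i(t) = \theta_i(0) + \Omega_i t .
\]
```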
The volume "Storing and Transmitting Data" is based on Rudolf Ahlswede's introductory course on "Information Theory I" and presents an introduction to Shannon Theory. Readers, whether familiar or unfamiliar with the technical intricacies of Information Theory, will benefit considerably from working through the book; Chapter VI in particular, with its lively comments and uncensored insider views from the world of science and research, offers informative and revealing insights. This is the first of several volumes that will serve as a collected research documentation of Rudolf Ahlswede's lectures on information theory. Each volume includes comments from an invited well-known expert; Holger Boche contributed his insights in the supplement of the present volume. Classical information processing concerns the main tasks of gaining knowledge and of storing, transmitting, and hiding data. The first task is the prime goal of Statistics. For the next two, Shannon presented an impressive mathematical theory, called Information Theory, which he based on probabilistic models. The theory largely involves the concept of codes with small error probabilities in spite of noise in the transmission, which is modeled by channels. The lectures presented in this work are suitable for graduate students in Mathematics, and also in Theoretical Computer Science, Physics, and Electrical Engineering with a background in basic Mathematics. The lectures can be used as the basis for courses or to supplement courses in many ways. Ph.D. students will also find research problems, often with conjectures, that offer potential subjects for a thesis. More advanced researchers may find the basis of entire research programs.
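For orientation, the two quantities at the heart of the Shannon theory treated in such lectures are the entropy of a source and the capacity of a noisy channel (standard definitions, stated here rather than quoted from the volume):

```latex
% Shannon entropy of a source and capacity of a noisy channel (standard definitions)
\[
  H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x),
  \qquad
  C \;=\; \max_{p(x)} I(X;Y),
\]
% The noisy channel coding theorem states that reliable transmission at rate R
% with vanishing error probability is possible if and only if R < C.
```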
This monograph tackles three challenges. First, show a mathematics-based meta-model that matches known elementary particles. Second, apply models, based on the meta-model, to match other known physics data. Third, predict future physics data. The math features solutions to pairs of isotropic quantum harmonic oscillators. This monograph matches some solutions to known elementary particles. Matched properties include spin, types of interactions in which the particles partake, and (for elementary bosons) approximate masses. Other solutions point to possible elementary particles. This monograph applies the models and the extended particle list. Results narrow gaps between physics data and theory. Results pertain to elementary particles, astrophysics, and cosmology. For example, this monograph predicts properties for beyond-the-Standard-Model elementary particles, proposes descriptions of dark matter and dark energy, provides new relationships between known physics constants (including masses of some elementary particles), includes theory that dovetails with the ratio of dark matter to ordinary matter, includes math that dovetails with the number of elementary-fermion generations, suggests forces that govern the rate of expansion of the universe, and suggests additions to and details for the cosmology timeline.
This book presents several new findings in the field of turbulent duct flows, which are important for a range of industrial applications. It presents both high-quality experiments and cutting-edge numerical simulations, providing a level of insight and rigour rarely found in PhD theses. The scientific advancements concern the effect of the Earth's rotation on large duct flows, the experimental confirmation of marginal turbulence in a pressure-driven square duct flow (previously only predicted in simulations), the identification of similar marginal turbulence in wall-driven flows using simulations (for the first time by any means) and, on a separate but related topic, a comprehensive experimental study on the phenomenon of drag reduction via polymer additives in turbulent duct flows. In turn, the work on drag reduction resulted in a correlation that provides a quantitative prediction of drag reduction based on a single, measurable material property of the polymer solution, regardless of the flow geometry or concentration. The first correlation of its kind, it represents an important advancement from both a scientific and practical perspective.
This second edition of Elements of Operator Theory is a concept-driven textbook that includes a significant expansion of the problems and solutions used to illustrate the principles of operator theory. Written in a user-friendly, motivating style intended to avoid the formula-computational approach, the book presents fundamental topics in a systematic fashion: set theory, algebraic structures, topological structures, Banach spaces, and Hilbert spaces, culminating with the Spectral Theorem. Included in this edition are more than 150 examples, with several interesting counterexamples that demonstrate the frontiers of important theorems; as many as 300 fully rigorous proofs, specially tailored to the presentation; 300 problems, many with hints; and an additional 20 pages of problems for the second edition. This self-contained work is an excellent text for the classroom as well as a self-study resource for researchers.
This book considers various nanoclusters and microparticles in excited and ionized gases, as well as the processes in which they take part. The concepts underlying these processes were developed 50-100 years ago, mostly for dense media, and building on these concepts we analyze the processes in gases in two opposite regimes: in the kinetic regime, the surrounding atoms of a buffer gas do not take part in processes involving small particles, while the diffusion regime corresponds to a dense gas in which the interaction of small particles with the buffer gas is governed by the laws of hydrodynamics. To calculate or estimate the rates of these processes, we rely on the liquid drop model for small particles, introduced into physics by N. Bohr about 80 years ago for the analysis of the properties of atomic nuclei, including nuclear fusion, and on the hard sphere model (the model of billiard balls), used by J. C. Maxwell 150 years ago and instrumental in creating the kinetic theory of gases. These models, together with an analysis of their accuracy, allow one to study various processes, such as transport processes in gases involving small particles, charging of small particles in gases, chemical processes, atom attachment and quenching of excited atomic particles on the surface of a small particle, and nucleation processes for small particles, including coagulation, coalescence, and the growth of fractal aggregates, chain aggregates, fractal fibres and aerogels. Each analysis concludes with analytic formulas or simple models that allow the rate of a given real process to be calculated or estimated with known accuracy, and criteria of validity are given for the expressions obtained. Examples of real objects and processes involving small particles are analyzed.
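As a concrete illustration of the two regimes, the attachment rate of buffer-gas atoms of number density N to a spherical particle of radius r takes the familiar hard-sphere (kinetic) and Smoluchowski (diffusion) forms; these standard expressions are given here for orientation only and are not quoted from the book:

```latex
% Attachment rate of atoms (number density N, mean thermal speed \bar v,
% diffusion coefficient D) to a spherical particle of radius r
\[
  \nu_{\mathrm{kin}} \;\simeq\; \pi r^{2}\,\bar v\,N
  \quad (\lambda \gg r,\ \text{kinetic regime}),
  \qquad
  \nu_{\mathrm{dif}} \;=\; 4\pi D r\,N
  \quad (\lambda \ll r,\ \text{diffusion regime}),
\]
% where \lambda is the mean free path of the buffer-gas atoms.
```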
Problems of Point Blast Theory covers all the main topics of modern theory with the exception of applications to nova and supernova outbursts. All the presently known theoretical results are given and problems which are still to be resolved are indicated. A special feature of the book is the sophisticated mathematical approach. Of interest to specialists and graduate students working in hydrodynamics, explosion theory, plasma physics, mathematical physics, and applied mathematics.
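The canonical result of point blast theory, included here only for context, is the Sedov-Taylor self-similar solution for a strong explosion of energy E in a uniform medium of density ρ₀, whose shock radius grows as:

```latex
% Sedov-Taylor strong-explosion scaling (classical result of point blast theory)
\[
  R(t) \;=\; \xi_{0}\left(\frac{E\,t^{2}}{\rho_{0}}\right)^{1/5},
\]
% with \xi_{0} a dimensionless constant of order unity fixed by the adiabatic
% index of the gas.
```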
This book provides the mathematical foundations of the theory of hyperhamiltonian dynamics, together with a discussion of physical applications. In addition, some open problems are discussed. Hyperhamiltonian mechanics represents a generalization of Hamiltonian mechanics, in which the role of the symplectic structure is taken by a hyperkähler one (thus there are three Kähler/symplectic forms satisfying quaternionic relations). This has proved to be of use in the description of physical systems with spin, including those which do not admit a Hamiltonian formulation. The book is the first monograph on the subject, which has previously been treated only in research papers.
"Networks of Echoes: Imitation, Innovation and Invisible Leaders" is a mathematically rigorous and data rich book on a fascinating area of the science and engineering of social webs. There are hundreds of complex network phenomena whose statistical properties are described by inverse power laws. The phenomena of interest are not arcane events that we encounter only fleetingly, but are events that dominate our lives. We examine how this intermittent statistical behavior intertwines itself with what appears to be the organized activity of social groups. The book is structured as answers to a sequence of questions such as: How are decisions reached in elections and boardrooms? How is the stability of a society undermined by zealots and committed minorities and how is that stability re-established? Can we learn to answer such questions about human behavior by studying the way flocks of birds retain their formation when eluding a predator? These questions and others are answered using a generic model of a complex dynamic network one whose global behavior is determined by a symmetric interaction among individuals based on social imitation. The complexity of the network is manifest in time series resulting from self-organized critical dynamics that have divergent first and second moments, are non-stationary, non-ergodic and non-Poisson. How phase transitions in the network dynamics influence such activity as decision making is a fascinating story and provides a context for introducing many of the mathematical ideas necessary for understanding complex networks in general. The decision making model (DMM) is selected to emphasize that there are features of complex webs that supersede specific mechanisms and need to be understood from a general perspective. This insightful overview of recent tools and their uses may serve as an introduction and curriculum guide in related courses."
This book contains the results in numerical analysis and optimization presented at the ECCOMAS thematic conference "Computational Analysis and Optimization" (CAO 2011), held in Jyväskylä, Finland, June 9-11, 2011. Both the conference and this volume are dedicated to Professor Pekka Neittaanmäki on the occasion of his sixtieth birthday. The volume consists of five parts that are closely related to his scientific activities and interests: Numerical Methods for Nonlinear Problems; Reliable Methods for Computer Simulation; Analysis of Noised and Uncertain Data; Optimization Methods; and Mathematical Models Generated by Modern Technological Problems. The book also includes a short biography of Professor Neittaanmäki.
This book sheds new light on the development and use of quantitative models to describe the process of skin permeation. It critically reviews the development of quantitative predictive models of skin absorption and discusses key recommendations for model development. Topics presented include an introduction to skin physiology; the underlying theories of skin absorption; the physical laboratory-based processes used to generate skin absorption data, which are in turn used to construct mathematical models describing the skin permeation process; algorithms for skin permeability, including quantitative structure-activity (or permeability) relationships (QSARs or QSPRs); relationships between permeability and molecular properties; the development of formulation-focused approaches to models of skin permeability prediction; the use of artificial membranes, e.g. polydimethylsiloxane, as alternatives to mammalian skin; and lastly, the use of novel machine learning methods in developing the next generation of predictive skin permeability models. The book will be of interest to all researchers in academia and industry working in pharmaceutical discovery and development, as well as readers from the field of occupational exposure and risk assessment, especially those whose work involves agrochemicals, bulk chemicals and cosmetics.
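As a simple illustration of the kind of QSPR the book reviews, the snippet below estimates a skin permeability coefficient from the octanol-water partition coefficient and molecular weight using a Potts-Guy-type relation. The coefficients shown are the commonly quoted literature values for that relation, included purely as an example; they are not taken from this book and should be checked against the primary literature before use.

```python
# Illustrative QSPR of the Potts-Guy type: estimate a skin permeability
# coefficient kp (in cm/h) from the octanol-water partition coefficient and the
# molecular weight of the permeant.  Coefficients are commonly quoted literature
# values, shown only as an example of this class of model (an assumption here).

def log_kp_potts_guy(log_kow: float, mol_weight: float) -> float:
    """Return log10 of the permeability coefficient kp in cm/h."""
    return -2.74 + 0.71 * log_kow - 0.0061 * mol_weight

if __name__ == "__main__":
    # hypothetical permeant with log Kow = 2.0 and MW = 150 g/mol
    log_kp = log_kp_potts_guy(2.0, 150.0)
    print(f"estimated kp ~ {10 ** log_kp:.2e} cm/h")
```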
This book describes a promising approach to problems in the foundations of quantum mechanics, including the measurement problem. The dynamics of ensembles on configuration space is shown here to be a valuable tool for unifying the formalisms of classical and quantum mechanics, for deriving and extending the latter in various ways, and for addressing the quantum measurement problem. A description of physical systems by means of ensembles on configuration space can be introduced at a very fundamental level: the basic building blocks are a configuration space, probabilities, and Hamiltonian equations of motion for the probabilities. The formalism can describe both classical and quantum systems, and their thermodynamics, with the main difference being the choice of ensemble Hamiltonian. Furthermore, there is a natural way of introducing ensemble Hamiltonians that describe the evolution of hybrid systems; i.e., interacting systems that have distinct classical and quantum sectors, allowing for consistent descriptions of quantum systems interacting with classical measurement devices and quantum matter fields interacting gravitationally with a classical spacetime.
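To make these building blocks concrete: an ensemble on configuration space is specified by a probability density P and a conjugate function S, evolving under an ensemble Hamiltonian via Hamiltonian equations of motion, with the classical and quantum cases differing only by an extra term in the Hamiltonian. The equations below give the standard form of this formalism, summarized here for orientation rather than quoted from the book:

```latex
% Ensemble Hamiltonian formalism on configuration space (standard form)
\[
  \frac{\partial P}{\partial t} = \frac{\delta \mathcal{H}}{\delta S},
  \qquad
  \frac{\partial S}{\partial t} = -\,\frac{\delta \mathcal{H}}{\delta P},
\]
\[
  \mathcal{H}_{C}[P,S] = \int \! P\left(\frac{|\nabla S|^{2}}{2m} + V\right) dx,
  \qquad
  \mathcal{H}_{Q}[P,S] = \mathcal{H}_{C}[P,S]
    + \frac{\hbar^{2}}{8m}\int \! P\,\bigl|\nabla \ln P\bigr|^{2}\, dx .
\]
% \mathcal{H}_C yields the classical continuity and Hamilton-Jacobi equations;
% \mathcal{H}_Q makes the pair (P, S) equivalent to the Schroedinger equation
% for \psi = \sqrt{P}\, e^{iS/\hbar}.
```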
Based on the analytical methods and computer programs presented in this book, all that may be needed to perform MRI tissue diagnosis is the availability of relaxometric data and basic proficiency with the programs. These programs are easy to use and highly interactive, and the data processing is fast and unambiguous. Laboratories (with or without sophisticated facilities) can perform computational magnetic resonance diagnosis with only T1 and T2 relaxation data. The results have motivated the use of relaxometric data to produce the data-driven predictions required for machine learning, artificial intelligence (AI) and deep learning in multidisciplinary and interdisciplinary research. Consequently, this book is intended to be very useful for students, scientists, engineers, medical personnel and researchers who are interested in developing new concepts for a deeper appreciation of computational magnetic resonance imaging for medical diagnosis, prognosis, therapy and management of tissue diseases.
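The relaxometric quantities referred to here are the standard longitudinal (T1) and transverse (T2) relaxation times, which govern the recovery and decay of the magnetization; the textbook relations are stated below for readers new to MRI:

```latex
% Standard mono-exponential relaxation of the magnetization after excitation
\[
  M_{z}(t) = M_{0}\left(1 - e^{-t/T_{1}}\right),
  \qquad
  M_{xy}(t) = M_{xy}(0)\, e^{-t/T_{2}} .
\]
```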
Regularized equations of motion can improve numerical integration for the propagation of orbits, and simplify the treatment of mission design problems. This monograph discusses standard techniques and recent research in the area. While each scheme is derived analytically, its accuracy is investigated numerically. Algebraic and topological aspects of the formulations are studied, as well as their application to practical scenarios such as spacecraft relative motion and new low-thrust trajectories.
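A representative example of such a regularization, stated here only for orientation and not necessarily one of the specific schemes treated in the monograph, is the Sundman time transformation, which removes the collision singularity of the two-body problem by rescaling time with the orbital radius:

```latex
% Sundman time transformation used in regularized formulations of orbital motion
\[
  dt = c\, r\, ds,
  \qquad
  \ddot{\mathbf{r}} = -\frac{\mu}{r^{3}}\,\mathbf{r}
  \;\;\longrightarrow\;\;
  \text{equations in the fictitious time } s \text{ that remain regular as } r \to 0,
\]
% where r = |\mathbf{r}|, \mu is the gravitational parameter and c a constant;
% combined with Kustaanheimo-Stiefel variables this turns the Kepler problem
% into a linear (harmonic-oscillator) system.
```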
This work deals with matrix methods of continuous signal and image processing based on the strip-transformation. The authors suggest ways to solve the problems of evaluating potential noise immunity and synthesizing an optimal filter for the case of pulse noise, and of applying the two-dimensional strip-transformation to the storage and noise-immune transmission of images. The strip-transformation of images is illustrated by examples and by classes of images invariant under symmetric orthogonal transformations. The monograph is intended for scientists and specialists whose activities are connected with computer processing of signals and images, instrumentation and metrology. It can also be used by undergraduate and postgraduate students studying computer methods of signal and image processing.
This volume reports results from the German research initiative MUNA (Management and Minimization of Errors and Uncertainties in Numerical Aerodynamics), which combined development activities of the German Aerospace Center (DLR), German universities and the German aircraft industry. The main objective of this five-year project was the development of methods and procedures aimed at reducing the various types of uncertainties that are typical of numerical flow simulations. The activities focused on methods for grid manipulation, techniques for increasing the simulation accuracy, sensors for turbulence modelling, methods for handling uncertainties of the geometry and grid deformation, as well as stochastic methods for quantifying aleatoric uncertainties.
Quarks are the main constituents of protons and neutrons and hence are important building blocks of all the matter that surrounds us. However, quarks have the intriguing property that they never appear as isolated single particles but only in bound states. This phenomenon is called confinement and has been a central research topic of elementary particle physics for the last few decades. In order to find the mechanism that forbids the existence of free quarks, many approaches and ideas are being followed, but by now it has become clear that they are not mutually exclusive but illuminate the problem from different perspectives. Two such confinement scenarios are investigated in this thesis: firstly, the importance of Abelian field components for the low-energy regime is corroborated, thus supporting the dual superconductor picture of confinement; and secondly, the influence of the Gribov horizon on non-perturbative solutions is studied.
This book offers a comprehensive and timely review of the fracture behavior of bimaterial composites consisting of periodically connected components, i.e. of bimaterial composites possessing periodical cracks along the interface. It first presents an overview of the literature, and then analyzes the isotropic, anisotropic and piezoelectric/dielectric properties of bimaterial components, gradually increasing the difficulty of the solutions discussed up to the coupled electromechanical problems. While in the case of isotropic and anisotropic materials it covers the problems generated by an arbitrary set of cracks, for the piezoelectric materials it focuses on studying the influence of the electric permittivity of the crack's filler, using not only a simple, fully electrically permeable model, but also a physically realistic, semi-permeable model. Throughout the analyses, the effects of the contact of the crack faces are taken into account so as to exclude the physically unrealistic interpenetration of the composite components that is typical of the classical open model. Further, the book derives and examines the mechanical and electromechanical fields, stress and electric intensity factors in detail. Providing extensive information on the fracture processes taking place in composite materials, the book helps readers become familiar with mathematical methods of complex function theory for obtaining exact analytical solutions.
This book surveys significant modern contributions to the mathematical theories of generalized heat wave equations. The first three chapters form a comprehensive survey of most modern contributions also describing in detail the mathematical properties of each model. Acceleration waves and shock waves are the focus in the next two chapters. Numerical techniques, continuous data dependence, and spatial stability of the solution in a cylinder, feature prominently among other topics treated in the following two chapters. The final two chapters are devoted to a description of selected applications and the corresponding formation of mathematical models. Illustrations are taken from a broad range that includes nanofluids, porous media, thin films, nuclear reactors, traffic flow, biology, and medicine, all of contemporary active technological importance and interest. This book will be of value to applied mathematicians, theoretical engineers and other practitioners who wish to know both the theory and its relevance to diverse applications.
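The prototype of the generalized heat wave equations surveyed in such work is the Maxwell-Cattaneo modification of Fourier's law, which introduces a relaxation time τ and turns the parabolic heat equation into a damped wave (telegraph) equation with finite propagation speed; this standard example is included here for orientation only:

```latex
% Maxwell-Cattaneo law and the resulting damped heat-wave (telegraph) equation
\[
  \tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\,\kappa\,\nabla T
  \;\;\Longrightarrow\;\;
  \tau\,\frac{\partial^{2} T}{\partial t^{2}} + \frac{\partial T}{\partial t}
  = \frac{\kappa}{\rho c}\,\nabla^{2} T ,
\]
% so thermal disturbances propagate with the finite speed
% \sqrt{\kappa/(\rho c \tau)} rather than instantaneously.
```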
This book presents multiprecision algorithms used in number theory and elsewhere, such as extrapolation, numerical integration, numerical summation (including multiple zeta values and the Riemann-Siegel formula), evaluation and speed of convergence of continued fractions, Euler products and Euler sums, inverse Mellin transforms, and complex $L$-functions. For each task, many algorithms are presented, such as Gaussian and doubly-exponential integration, and Euler-Maclaurin, Abel-Plana, Lagrange, and Monien summation. Each algorithm is given in detail, together with a complete implementation in the free Pari/GP system. These implementations serve both to make even more precise the inner workings of the algorithms and to gently introduce advanced features of the Pari/GP language. This book will be appreciated by anyone interested in number theory, specifically in practical implementations, computer experiments and numerical algorithms that can be scaled to produce thousands of digits of accuracy.
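To give a flavour of the doubly-exponential integration mentioned above (the book's own reference implementations are in Pari/GP), here is a minimal Python sketch of tanh-sinh quadrature on [-1, 1]; the step size and truncation are chosen naively and are illustrative assumptions rather than a tuned, arbitrary-precision implementation:

```python
# Minimal sketch of doubly-exponential (tanh-sinh) quadrature on [-1, 1].
# The nodes x_k = tanh((pi/2) sinh(k h)) cluster doubly exponentially at the
# endpoints, which gives the scheme its rapid convergence even for
# endpoint-singular integrands.  Step size h and truncation k_max are chosen
# naively here; a production version (e.g. in Pari/GP, as in the book) tunes
# both to the working precision.
import math

def tanh_sinh_quad(f, h=0.05, k_max=120):
    """Approximate the integral of f over [-1, 1]."""
    total = 0.0
    for k in range(-k_max, k_max + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)                                        # node in (-1, 1)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2    # weight dx/dt
        total += w * f(x)
    return h * total

if __name__ == "__main__":
    # integral of 1 / (1 + x^2) over [-1, 1] equals pi / 2
    approx = tanh_sinh_quad(lambda x: 1.0 / (1.0 + x * x))
    print(approx, math.pi / 2)
```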
This book grew out of the need to provide students with a solid introduction to modern fluid dynamics. It offers a broad grounding in the underlying principles and techniques used, with some emphasis on applications in astrophysics and planetary science. The book comprehensively covers recent developments, methods and techniques, including, for example, new ideas on transitions to turbulence (via transiently growing stable linear modes), new approaches to turbulence (which remains the enigma of fluid dynamics), and the use of asymptotic approximation methods, which can give analytical or semi-analytical results and complement fully numerical treatments. The authors also briefly discuss some important considerations to be taken into account when developing a numerical code for the computer simulation of fluid flows. Although the text is populated throughout with examples and problems from astrophysics and planetary science, it is eminently suitable as a general introduction to fluid dynamics. It is assumed that readers are mathematically equipped with a reasonable knowledge of analysis, including the basics of ordinary and partial differential equations and a good command of vector calculus and linear algebra. Each chapter concludes with bibliographical notes in which the authors briefly discuss the chapter's essential literature and give recommendations for further, deeper reading. Each chapter also includes a number of problems, some of them relevant to astrophysics and planetary science. The book is written for advanced undergraduate and graduate students, but will also prove a valuable source of reference for established researchers.
You may like...
- Quantum Anharmonic Oscillator, by Alexander Turbiner and Juan Carlos Del Valle Rosales (Hardcover, R2,877 / Discovery Miles 28 770)
- Mathematical Statistics with…, by William Mendenhall, Dennis Wackerly, … (Paperback)
- View of Sir Isaac Newton's Philosophy, by Henry Pemberton (1694-1771) (Hardcover, R1,037 / Discovery Miles 10 370)
- Dark Silicon and Future On-chip Systems…, by Suyel Namasudra and Hamid Sarbazi-Azad (Hardcover, R4,084 / Discovery Miles 40 840)
- Infinite Words, Volume 141 - Automata…, by Dominique Perrin and Jean-Eric Pin (Hardcover, R4,214 / Discovery Miles 42 140)
- Conway's Game of Life - Mathematics and…, by Nathaniel Johnston and Dave Greene (Hardcover, R1,933 / Discovery Miles 19 330)