These days, the nature of services and the volume of demand in the telecommunication industry are changing radically, with the replacement of analog transmission and traditional copper cables by digital technology and fiber optic transmission equipment. Moreover, we see increasing competition among providers of telecommunication services, and the development of a broad range of new services for users, combining voice, data, graphics and video. Telecommunication network planning has thus become an important problem area for developing and applying optimization models. Telephone companies have initiated extensive modeling and planning efforts to expand and upgrade their transmission facilities, which are, for most national telecommunication networks, divided into three main levels (see Balakrishnan et al. [5]), namely: 1. the long-distance or backbone network, which typically connects city pairs through gateway nodes; 2. the inter-office or switching center network within each city, which interconnects switching centers in different subdivisions (clusters of customers) and provides access to the gateway node(s); 3. the local access network, which connects individual subscribers belonging to a cluster to the corresponding switching center. These three levels differ in several ways, including their design criteria. Ideally, the design of a telecommunication network should account for all three levels simultaneously. However, to simplify the planning task, the overall planning problem is decomposed by considering each level separately.
Everything should be made as simple as possible, but not simpler. (Albert Einstein, Reader's Digest, 1977) The modern practice of creating technical systems and technological processes of high efficiency includes, besides the employment of new principles, new materials, new physical effects and other new solutions (which is very traditional and plays the key role in the selection of the general structure of the object to be designed), the choice of the best combination of values for the set of parameters (geometrical sizes, electrical and strength characteristics, etc.) concretizing this general structure, because the variation of these parameters (with the structure or linkage already defined) can essentially affect the objective performance indexes. The mathematical tools for choosing these best combinations are exactly what this book is about. With the advent of computers and computer-aided design, the testing of the selected variants is usually performed not on real prototypes (this may require very expensive construction of sample options and of special installations to test them), but by analysis of the corresponding mathematical models. The sophistication of the mathematical models for the objects to be designed, which is the natural consequence of the rising complexity of these objects, greatly complicates the objective performance analysis. Today, the main (and very often the only) available instrument for such an analysis is computer-aided simulation of an object's behavior, based on numerical experiments with its mathematical model.
This book is intended to provide economists with the mathematical tools necessary to handle the concepts of evolution under uncertainty and adaptation arising in economics, pursuing the Arrow-Debreu-Hahn legacy. It applies the techniques of viability theory to the study of economic systems evolving under contingent uncertainty, faced with scarcity constraints, and obeying various implementations of the inertia principle. The book illustrates how new tools can be used to move from a static analysis, built on concepts of optima, equilibria and attractors, to a contingent dynamic framework.
Algebraic, differential, and integral equations are used in the applied sciences, engineering, economics, and the social sciences to characterize the current state of a physical, economic, or social system and forecast its evolution in time. Generally, the coefficients of and/or the input to these equations are not precisely known because of insufficient information, limited understanding of some underlying phenomena, and inherent randomness. For example, the orientation of the atomic lattice in the grains of a polycrystal varies randomly from grain to grain; the spatial distribution of a phase of a composite material is not known precisely for a particular specimen; bone properties needed to develop reliable artificial joints vary significantly with individual and age; forces acting on a plane from takeoff to landing depend in a complex manner on the environmental conditions and flight pattern; and stock prices and their evolution in time depend on a large number of factors that cannot be described by deterministic models. Problems that can be defined by algebraic, differential, and integral equations with random coefficients and/or input are referred to as stochastic problems. The main objective of this book is the solution of stochastic problems, that is, the determination of the probability law, moments, and/or other probabilistic properties of the state of a physical, economic, or social system. It is assumed that the operators and inputs defining a stochastic problem are specified.
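To make the notion of a stochastic problem concrete, here is a minimal Monte Carlo sketch in Python (an illustration, not taken from the book): it estimates the first two moments of the state of a system governed by dx/dt = -a*x with a random coefficient a; the lognormal law and all numerical values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertain coefficient: decay rate a drawn from a lognormal law.
a = rng.lognormal(mean=0.0, sigma=0.3, size=100_000)

# Exact solution of dx/dt = -a*x, x(0) = 1, evaluated at t = 1 per sample.
t = 1.0
x_t = np.exp(-a * t)

# Probabilistic properties of the state: here, its first two moments.
print("E[x(1)]   =", x_t.mean())
print("Var[x(1)] =", x_t.var())
```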
Developed over a period of two years at the University of Utah Department of Computer Science, this course has been designed to encourage the integration of computation into the science and engineering curricula. Intended as an introductory course in computing expressly for science and engineering students, the course was created to satisfy the standard programming requirement, while preparing students to immediately exploit the broad power of modern computing in their science and engineering courses.
In this book we analyze the error caused by numerical schemes for the approximation of semilinear stochastic evolution equations (SEEq) in a Hilbert space-valued setting. The numerical schemes considered combine Galerkin finite element methods with Euler-type temporal approximations. Starting from a precise analysis of the spatio-temporal regularity of the mild solution to the SEEq, we derive and prove optimal error estimates of the strong error of convergence in the first part of the book. The second part deals with a new approach to the so-called weak error of convergence, which measures the distance between the law of the numerical solution and the law of the exact solution. This approach is based on Bismut's integration by parts formula and the Malliavin calculus for infinite dimensional stochastic processes. These techniques are developed and explained in a separate chapter, before the weak convergence is proven for linear SEEq.
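As a finite-dimensional analogue of the strong error of convergence discussed above (a sketch only; the book works with Hilbert-space-valued equations and Galerkin discretizations), the following Python snippet measures the strong error of the Euler-Maruyama scheme against the exact solution of geometric Brownian motion, driving both with the same Brownian increments; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T, x0, n_paths = 0.05, 0.2, 1.0, 1.0, 20_000

for n_steps in (8, 16, 32, 64):
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

    # Euler-Maruyama time stepping.
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, k]

    # Exact solution driven by the same Brownian path.
    x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))

    # Strong error E|X_T - X_T^h|: shrinks roughly like sqrt(dt).
    print(n_steps, np.mean(np.abs(x - x_exact)))
```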
Algorithms for the numerical computation of definite integrals have been proposed for more than 300 years, but practical considerations have led to problems of ever-increasing complexity, so that, even with current computing speeds, numerical integration may be a difficult task. High dimension and complicated structure of the region of integration and singularities of the integrand are the main sources of difficulties.
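As a small worked example of the last point (a generic Python sketch, not tied to the book): the classical adaptive Simpson rule must subdivide heavily near a point where the integrand is not smooth, which is exactly why singular integrands make numerical integration expensive.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    # Simpson's rule on [a, b] from precomputed endpoint/midpoint values.
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m, lm, rm = (a + b) / 2, (3 * a + b) / 4, (a + 3 * b) / 4
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # Richardson-style error estimate: accept, or split further.
        if abs(left + right - whole) <= 15 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2)
                + recurse(m, b, fm, frm, fb, right, tol / 2))

    m = (a + b) / 2
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# sqrt has an unbounded derivative at 0, forcing deep subdivision there.
print(adaptive_simpson(math.sqrt, 0.0, 1.0))  # exact value: 2/3
```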
Matrix-analytic and related methods have become recognized as an important and fundamental approach for the mathematical analysis of general classes of complex stochastic models. Research in the area of matrix-analytic and related methods seeks to discover underlying probabilistic structures intrinsic in such stochastic models, develop numerical algorithms for computing functionals (e.g., performance measures) of the underlying stochastic processes, and apply these probabilistic structures and/or computational algorithms within a wide variety of fields. This volume presents recent research results on: the theory, algorithms and methodologies concerning matrix-analytic and related methods in stochastic models; and the application of matrix-analytic and related methods in various fields, which includes but is not limited to computer science and engineering, communication networks and telephony, electrical and industrial engineering, operations research, management science, financial and risk analysis, and bio-statistics. These research studies provide deep insights and understanding of the stochastic models of interest from a mathematics and/or applications perspective, as well as identify directions for future research.
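For a flavor of what such computational algorithms look like, here is a minimal Python sketch (an illustration, not taken from this volume) of one classical matrix-analytic computation: successive substitution for the rate matrix R of a quasi-birth-death chain, the minimal non-negative solution of A0 + R A1 + R^2 A2 = 0.

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Successive substitution for the minimal non-negative solution R of
    A0 + R A1 + R^2 A2 = 0 (matrix-geometric method for QBD chains)."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("no convergence")

# Sanity check with 1x1 blocks: an M/M/1 queue with arrival rate 1 and
# service rate 2, where the minimal solution is R = lambda/mu = 0.5.
A0, A1, A2 = np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]])
print(qbd_rate_matrix(A0, A1, A2))  # ~0.5
```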
Model-based recursive partitioning (MOB) provides a powerful synthesis between machine-learning-inspired recursive partitioning methods and regression models. Hanna Birke extends this approach by additionally allowing for measurement error in covariates, as frequently occurs in biometric (or econometric) studies, for instance when measuring blood pressure or daily caloric intake. After an introduction to the background, the extended methodology is developed in detail for the Cox model and the Weibull model, carefully implemented in R, and investigated in a comprehensive simulation study.
Real Analysis is a discipline of intensive study in many institutions of higher education, because it contains useful concepts and fundamental results in the study of mathematics and physics, of the technical disciplines and geometry. This book is the first of its kind to solve mathematical analysis problems with all four major mathematical software packages: Matlab, Mathcad, Mathematica and Maple. Besides the fundamental theoretical notions, the book contains many exercises, solved both mathematically and by computer, using Matlab 7.9, Mathcad 14, Mathematica 8 or Maple 15. The book is divided into nine chapters, which illustrate the application of the mathematical concepts using the computer. Each chapter presents the fundamental concepts and the elements required to solve the problems contained in that chapter, and finishes with some problems left to be solved by the readers. The calculations can be verified using specific software such as Matlab, Mathcad, Mathematica or Maple.
This book introduces the essential concepts of algorithm analysis required by core undergraduate and graduate computer science courses, in addition to providing a review of the fundamental mathematical notions necessary to understand these concepts. Features: includes numerous fully-worked examples and step-by-step proofs, assuming no strong mathematical background; describes the foundation of the analysis of algorithms theory in terms of the big-Oh, Omega, and Theta notations; examines recurrence relations; discusses the concepts of basic operation, traditional loop counting, and best case and worst case complexities; reviews various algorithms of a probabilistic nature, and uses elements of probability theory to compute the average complexity of algorithms such as Quicksort; introduces a variety of classical finite graph algorithms, together with an analysis of their complexity; provides an appendix on probability theory, reviewing the major definitions and theorems used in the book.
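For instance, the average-case result mentioned for Quicksort (about 2 n ln n expected comparisons on random input) can be checked empirically; the Python experiment below is illustrative and not taken from the book.

```python
import math
import random

def quicksort(a, counter):
    """Return a sorted copy of a; counter[0] accumulates comparisons."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    counter[0] += len(rest)  # one comparison against the pivot per element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

for n in (1_000, 10_000, 100_000):
    c = [0]
    quicksort(random.sample(range(10 * n), n), c)
    # The ratio creeps up toward 2, matching the ~2 n ln n expectation.
    print(n, c[0], round(c[0] / (n * math.log(n)), 2))
```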
Elasticity theory is a classical discipline. The mathematical theory of elasticity in mechanics, especially the linearized theory, is quite mature, and is one of the foundations of several engineering sciences. In the last twenty years, there has been significant progress in several areas closely related to this classical field; this applies in particular to the following two areas. First, progress has been made in numerical methods, especially the development of the finite element method. The finite element method, which was independently created and developed in different ways by scientists both in China and in the West, is a systematic and modern numerical method for solving partial differential equations, especially elliptic equations. Experience has shown that the finite element method is efficient enough to solve problems in an extremely wide range of applications of elastic mechanics. In particular, the finite element method is very suitable for highly complicated problems. One of the authors (Feng) of this book had the good fortune to participate in the work of creating and establishing the theoretical basis of the finite element method. He thought in the early sixties that the method could be used to solve computational problems of solid mechanics by computers. Later practice justified, and still continues to justify, this point of view. The authors believe that it is now time to include the finite element method as an important part of the content of a textbook of modern elastic mechanics.
In a previous volume (Hénon 1997, hereafter called Vol. I), the study of generating families in the restricted three-body problem was initiated. (We recall that generating families are defined as the limits of families of periodic orbits for μ → 0.) The main problem in the determination of generating orbits was found to lie in the junctions between the branches at a bifurcation orbit, where two or more families of generating orbits intersect. A partial solution to this problem was given by the use of invariants: symmetries and sides of passage. Many simple bifurcations can be solved in this way. In particular, the evolution of the nine natural families of periodic orbits can be described almost completely. However, as the bifurcations become more complex, i.e. when the number of families passing through the bifurcation orbit increases, the method fails. This volume describes another approach to the problem, consisting in a detailed, quantitative analysis of the families in the vicinity of the bifurcation orbit. This requires more work than the qualitative approach used in Vol. I. However, it has the advantage of allowing us, at least in principle, to determine in all cases how the branches are joined. In fact it gives more than that: we will see that, in almost all cases, the first-order asymptotic approximation of the families in the neighbourhood of the bifurcation can be derived. This allows, in particular, a quantitative comparison with numerically found families. Chapter 11 deals with the relevant definitions and general equations. The study of bifurcations of type 1 is described in Chaps. 12 to 16. The quantitative analysis of type 2 is more involved; it is described in Chaps. 17 to 23. Type 3 is even more complex; its analysis had not yet been completed at the time of writing.
Everybody now lives in a world surrounded by computers. Computers determine our professional activity and penetrate increasingly deeper into our everyday life. Therein we also need increasingly refined computer technology. Sometimes we think that the next generation of computers will satisfy all our dreams, giving us hope that most of our urgent problems will be solved very soon. However, the future comes and illusions dissipate. This phenomenon occurs and vanishes sporadically, and, possibly, is a fundamental law of our life. Experience shows that indeed 'systematically remaining' problems are mainly of a complex technological nature (the creation of a new generation of especially perfect microchips, elements of memory, etc.). But let us note that amongst these problems there are always ones solved by our purely intellectual efforts alone. Progress in this direction does not require the invention of any 'superchip' or other similar elements. It is important to note that the results obtained in this way very often turn out to be more significant than the 'fruits' of relevant technological progress. The hierarchical asymptotic analytical-numerical methods can be regarded as results of such 'purely intellectual efforts'. Their application allows us to essentially simplify computer calculational procedures and, consequently, to reduce the calculation time required. It is obvious that this circumstance is very attractive to any computer user.
Stochastic instantaneous volatility models such as Heston, SABR or SV-LMM have mostly been developed to control the shape and joint dynamics of the implied volatility surface. In principle, they are well suited for pricing and hedging vanilla and exotic options, for relative value strategies or for risk management. In practice, however, most SV models lack a closed-form valuation for European options. This book presents the recently developed Asymptotic Chaos Expansions (ACE) methodology, which addresses that issue. Indeed, its generic algorithm provides, for any regular SV model, the pure asymptotes at any order for both the static and dynamic maps of the implied volatility surface. Furthermore, ACE is programmable and can complement other approximation methods. Hence it allows a systematic approach to designing, parameterising, calibrating and exploiting SV models, typically for Vega hedging or American Monte-Carlo. "Asymptotic Chaos Expansions in Finance" illustrates the ACE approach for single underlyings (such as a stock price or FX rate), baskets (indexes, spreads) and term structure models (especially SV-HJM and SV-LMM). It also establishes fundamental links between the Wiener chaos of the instantaneous volatility and the small-time asymptotic structure of the stochastic implied volatility framework. It is addressed primarily to financial mathematics researchers and graduate students interested in stochastic volatility, asymptotics or market models. Moreover, as it contains many self-contained approximation results, it will be useful to practitioners modelling the shape of the smile and its evolution.
Providing an up-to-date overview of the geometry of manifolds with non-negative sectional curvature, this volume gives a detailed account of the most recent research in the area. The lectures cover a wide range of topics such as general isometric group actions, circle actions on positively curved four-manifolds, cohomogeneity one actions on Alexandrov spaces, isometric torus actions on Riemannian manifolds of maximal symmetry rank, 3-Sasakian manifolds, isoparametric hypersurfaces in spheres, contact CR and CR submanifolds, Riemannian submersions and the Hopf conjecture with symmetry. Also included is an introduction to the theory of exterior differential systems.
Primary Audience for the Book * Specialists in numerical computations who are interested in algorithms with automatic result verification. * Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. * Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new, growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: the result of a single arithmetic operation is the set of all possible results as the operands range over the domain. For example, [0.9, 1.1] + [2.9, 3.1] = [3.8, 4.2], where [3.8, 4.2] = {x + y | x ∈ [0.9, 1.1] and y ∈ [2.9, 3.1]}. The power of interval arithmetic comes from the fact that (i) the elementary operations and standard functions can be computed for intervals with formulas and subroutines; and (ii) directed roundings can be used, so that the images of these operations (e.g.
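A minimal Python sketch of these two ingredients (illustrative only; real interval libraries handle rounding modes, standard functions and edge cases far more carefully):

```python
import math

class Interval:
    """Toy interval type: addition with outward (directed) rounding."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Rounding the bounds outward keeps the enclosure guaranteed
        # even though each floating-point sum may be inexact.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

print(Interval(0.9, 1.1) + Interval(2.9, 3.1))  # encloses [3.8, 4.2]
```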
The finite element method (FEM) has been understood, at least in principle, for more than 50 years. The integral formulation on which it is based has been known for a longer time (thanks to the work of Galerkin, Ritz, Courant and Hilbert, to mention the most important). However, the method could not be applied in a practical way since it involved the solution of a large number of linear or non-linear algebraic equations. Today it is quite common, with the aid of computers, to solve non-linear algebraic problems of several thousand equations. The necessary numerical methods and programming techniques are now an integral part of the teaching curriculum in most engineering schools. Mechanical engineers, confronted with very complicated structural problems, were the first to take advantage of advanced computational methods and high level languages (FORTRAN) to transform the mechanical models into algebraic equations (1956). In recent times (1960), the FEM has been studied by applied mathematicians and, having received rigorous treatment, has become a part of the more general study of partial differential equations, gradually replacing the finite difference method which had been considered the universal tool to solve these types of problems.
This volume is an attempt to provide a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, with special emphasis placed on fundamental classes of models and algorithms as well as on their applications, e.g. in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R which are widely used in the mathematical community. It can be seen as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered with a focus on asymptotic methods.
This book collects the refereed proceedings of the First International Conference on Algorithms and Discrete Applied Mathematics, CALDAM 2015, held in Kanpur, India, in February 2015. The volume contains 26 revised full papers selected from 58 submissions, along with 2 invited talks presented at the conference. The conference covered a diverse range of topics in algorithms and discrete mathematics, including computational geometry, approximation algorithms, graph theory and computational complexity.
Gathering and updating results scattered in journal articles over thirty years, this self-contained monograph gives a comprehensive introduction to the subject. Its goal is to: - motivate and explain the method for general Lie groups, reducing the proof of deep results in invariant analysis to the verification of two formal Lie bracket identities related to the Campbell-Hausdorff formula (the "Kashiwara-Vergne conjecture"); - give a detailed proof of the conjecture for quadratic and solvable Lie algebras, which is relatively elementary; - extend the method to symmetric spaces; here an obstruction appears, embodied in a single remarkable object called an "e-function"; - explain the role of this function in invariant analysis on symmetric spaces, its relation to invariant differential operators, mean value operators and spherical functions; - give an explicit e-function for rank one spaces (the hyperbolic spaces); - construct an e-function for general symmetric spaces, in the spirit of Kashiwara and Vergne's original work for Lie groups. The book includes a complete rewriting of several articles by the author, updated and improved following Alekseev, Meinrenken and Torossian's recent proofs of the conjecture. The chapters are largely independent of each other. Some open problems are suggested to encourage future research. It is aimed at graduate students and researchers with a basic knowledge of Lie theory.
This new edition strives yet again to provide readers with a working knowledge of chaos theory and dynamical systems. It does so through parallel introductory explanations in the book and interaction with carefully selected programs supplied on the accompanying disk. The programs enable readers, especially advanced undergraduate students in physics, engineering, and math, to tackle relevant physical systems quickly on their PCs, without distraction from algorithmic details. For the third edition of Chaos: A Program Collection for the PC, each of the previous twelve programs is polished and rewritten in C++ (both Windows and Linux versions are included). A new program treats kicked systems, an important class of two-dimensional problems.
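The book ships its own C++ programs; purely to illustrate what a kicked system is, here is the Chirikov standard map (the textbook kicked rotor) in a few lines of Python, with arbitrary parameter values. This is not one of the book's programs.

```python
import math

def standard_map(theta, p, K, n_steps):
    """Iterate the Chirikov standard map, the canonical kicked rotor:
    p' = p + K*sin(theta), theta' = theta + p' (both taken mod 2*pi)."""
    orbit = []
    for _ in range(n_steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        orbit.append((theta, p))
    return orbit

# Kick strengths K above ~0.97 produce a largely chaotic phase space.
for theta, p in standard_map(theta=1.0, p=0.5, K=1.5, n_steps=5):
    print(round(theta, 4), round(p, 4))
```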
Digital geometry emerged as an independent discipline in the second half of the last century. It deals with geometric properties of digital objects and is developed with the unambiguous goal to provide rigorous theoretical foundations for devising new advanced approaches and algorithms for various problems of visual computing. Different aspects of digital geometry have been addressed in the literature. This book is the first one that explicitly focuses on the presentation of the most important digital geometry algorithms. Each chapter provides a brief survey on a major research area related to the general volume theme, description and analysis of related fundamental algorithms, as well as new original contributions by the authors. Every chapter contains a section in which interesting open problems are addressed.
The rapid development of numerical analysis as a subject in its own right, as well as its increasing applicability to mathematical modeling in sciences and engineering, have led to a plethora of journals in its various subdisciplines, ranging from Computational Fluid Dynamics to Linear Algebra. These journals obviously represent the frontiers of research in their area. However, each specialization of numerical analysis is intricately linked and a broad knowledge of the subject is necessary for the solution of any "real" problem. Such an overview cannot be successfully achieved through either a single volume or a journal since the subject is constantly evolving and researchers need to be kept continuously informed of recent developments in a wide range of topics. Acta Numerica is an annual publication containing invited survey papers by leading researchers in a number of areas of applied mathematics. The papers included present overviews of recent developments in their area and provide "state of the art" techniques and analysis. Volume 1 aptly represents the flavor of the series and includes papers on such diverse topics as wavelets, optimization, and dynamical systems.
Two-armed response-adaptive clinical trials are modelled as Markov decision problems to pursue two overriding objectives: firstly, to identify the superior treatment at the end of the trial and, secondly, to keep the number of patients receiving the inferior treatment small. Such clinical trial designs are very important, especially for rare diseases. Thomas Ondra presents the main solution techniques for Markov decision problems and provides a detailed description of how to obtain optimal allocation sequences.
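As a toy version of such a Markov decision formulation (a sketch under simplifying assumptions that are not Ondra's actual model: Bernoulli responses, uniform priors, reward counted as treatment successes), backward induction over the posterior state yields the value of the optimal allocation policy:

```python
from functools import lru_cache

def bandit_value(horizon):
    """Backward induction for a two-armed Bernoulli bandit: the state is
    the (success, failure) count on each arm; each patient is allocated
    to the arm maximizing expected future successes."""
    @lru_cache(maxsize=None)
    def value(s1, f1, s2, f2):
        if s1 + f1 + s2 + f2 == horizon:  # all patients treated
            return 0.0
        p1 = (s1 + 1) / (s1 + f1 + 2)     # posterior mean, Beta(1,1) prior
        p2 = (s2 + 1) / (s2 + f2 + 2)
        v1 = p1 * (1 + value(s1 + 1, f1, s2, f2)) \
             + (1 - p1) * value(s1, f1 + 1, s2, f2)
        v2 = p2 * (1 + value(s1, f1, s2 + 1, f2)) \
             + (1 - p2) * value(s1, f1, s2, f2 + 1)
        return max(v1, v2)                # optimal arm for the next patient
    return value(0, 0, 0, 0)

print(bandit_value(20))  # expected successes under the optimal policy
```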