Over the years, research in the life sciences has benefited greatly from the quantitative tools of mathematics and modeling. Many aspects of complex biological systems can be more deeply understood when mathematical techniques are incorporated into a scientific investigation. Modeling can be fruitfully applied in many types of biological research, from studies on the molecular, cellular, and organ level, to experiments in whole animals and in populations. Using the field of nutrition as an example, one can find many cases of recent advances in knowledge and understanding that were facilitated by the application of mathematical modeling to kinetic data. The availability of biologically important stable isotope-labeled compounds, developments in sensitive mass spectrometry and other analytical techniques, and advances in the powerful modeling software applied to data have each contributed to our ability to carry out ever more sophisticated kinetic studies that are relevant to nutrition and the health sciences at many levels of organization. Furthermore, we anticipate that modeling is on the brink of another major advance: the application of kinetic modeling to clinical practice. With advances in the ability of models to access large databases (e.g., a population of individual patient records) and the development of user interfaces that are "friendly" enough to be used by clinicians who are not modelers, we predict that health applications modeling will be an important new direction for modeling in the 21st century. This book contains manuscripts that are based on presentations at the seventh conference in a series focused on advancing nutrition and health research by fostering exchange among scientists from such disciplines as nutrition, biology, mathematics, statistics, kinetics, and computing.
The themes of the six previous conferences included general nutrition modeling (Canolty and Cain, 1985; Hoover-Plow and Chandra, 1988), amino acids and carbohydrates (Abumrad, 1991), minerals (Siva Subramanian and Wastney, 1995), vitamins, proteins, and modeling theory (Coburn and Townsend, 1996), and physiological compartmental modeling (Clifford and Muller, 1998). The seventh conference in the series was held at The Pennsylvania State University from July 29 through August 1, 2000. The meeting began with an instructive and entertaining keynote address by Professor Britton Chance, Eldridge Reeves Johnson University Professor Emeritus of Biophysics, Physical Chemistry, and Radiologic Physics, University of Pennsylvania.
Along with the traditional material concerning linear programming (the simplex method, the theory of duality, the dual simplex method), In-Depth Analysis of Linear Programming contains new results of research carried out by the authors. For the first time, the criteria of stability (in geometrical and algebraic forms) of the general linear programming problem are formulated and proved. New regularization methods based on the idea of extending the admissible set are proposed for solving unstable (ill-posed) linear programming problems. In contrast to well-known regularization methods, the methods proposed in this book replace the initial unstable problem with a new, stable auxiliary problem. This auxiliary problem is also a linear programming problem, which can be solved by standard finite methods. In addition, the authors give conditions on the parameters of the auxiliary problem that guarantee its stability, a circumstance that advantageously distinguishes the regularization methods proposed in this book from existing methods, in which the stability of the auxiliary problem is usually only presupposed but not explicitly investigated. The traditional material contained in the first three chapters is expounded in much simpler terms than in the majority of books on linear programming, which makes it accessible to beginners as well as those more familiar with the area.
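The kind of problem the book treats can be felt even in a toy setting. The sketch below is not from the book: it solves a small two-variable LP by brute-force vertex enumeration (intersecting constraint pairs) rather than the simplex method, and the sample problem is made up for illustration.

```python
from itertools import combinations

def solve_lp_2d(c, A, b, tol=1e-9):
    """Maximize c.x subject to A x <= b by enumerating the vertices
    (intersections of constraint pairs) of the 2-D feasible polygon."""
    best_val, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < tol:                 # parallel constraints: no vertex
            continue
        # Cramer's rule for the 2x2 system a1.x = b1, a2.x = b2
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + tol for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best_val is None or val > best_val:
                best_val, best_x = val, (x, y)
    return best_val, best_x

# maximize x + y subject to x + y <= 2, x <= 1, x >= 0, y >= 0
val, pt = solve_lp_2d([1, 1],
                      [[1, 1], [1, 0], [-1, 0], [0, -1]],
                      [2, 1, 0, 0])
```

Vertex enumeration scales hopelessly in higher dimensions; it is only meant to make the geometry of the feasible set tangible.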
Technological, economic, and regulatory changes are some of the driving forces in the modern world of finance. For instance, financial markets now trade twenty-four hours a day, and securities are increasingly traded via real-time computer-based systems rather than trading-floor-based systems. Equally important, new security forms and pricing models are coming into existence in response to changes in domestic and international regulatory action. Accounting and risk management systems now enable financial and investment firms to manage risk more efficiently while meeting regulatory concerns. The challenge for academics and practitioners alike is how to keep themselves, and others, current with these changing markets, as well as with the technology and current investment and risk management tools. Applications in Finance, Investments, and Banking offers presentations by twelve leading investment professionals and academics on a wide range of finance, investment, and banking issues. Chapters include analysis of the basic foundations of financial analysis, as well as current approaches to managing risk. Presentations also review means of measuring the volatility of the underlying return process and show how investment performance measurement can be used to better understand the benefits of active management. Finally, articles present advances in the pricing of new financial assets (e.g., swaps), as well as in the understanding of the factors (e.g., earnings estimates) affecting the pricing of traditional assets (e.g., stocks). Applications in Finance, Investments, and Banking provides information beneficial to the understanding of both traditional and modern approaches to financial and investment management.
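As a minimal illustration of one topic mentioned above, measuring the volatility of a return process: the sketch below computes an annualized sample volatility from daily returns. The sample data and the 252-trading-day convention are illustrative assumptions, not material from the book.

```python
import math

def annualized_volatility(returns, periods_per_year=252):
    """Sample standard deviation of periodic returns, scaled to one year
    under the usual square-root-of-time convention."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)  # sample variance
    return math.sqrt(var) * math.sqrt(periods_per_year)

daily = [0.01, -0.02, 0.015, -0.005, 0.0]   # hypothetical daily returns
vol = annualized_volatility(daily)
```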
All phenomena in nature are characterized by motion. Mechanics deals with the objective laws of mechanical motion of bodies, the simplest form of motion. In the study of a science of nature, mathematics plays an important role. Mechanics is the first science of nature that was expressed in terms of mathematics, by considering various mathematical models associated with phenomena of the surrounding nature. Thus, its development was influenced by the use of powerful mathematical tools. As was already seen in the first two volumes of the present book, its guideline is precisely the mathematical model of mechanics. The classical models to which we refer are in fact based on the Newtonian model of mechanics, that is, on its five principles: inertia, the action of forces, action and reaction, the independence of force actions, and the principle of initial conditions. Other models, e.g., the model of attraction forces between the particles of a discrete mechanical system, are part of the considered Newtonian model. Kepler's laws brilliantly verify this model in the case of velocities much smaller than the speed of light in vacuum.
As the telecommunication industry introduces new sophisticated technologies, the nature of services and the volume of demands have changed. Indeed, a broad range of new services for users has appeared, combining voice, data, graphics, video, etc. This implies new planning issues. Fiber transmission systems that can carry large amounts of data on a few strands of wire were introduced. These systems have such a large bandwidth that the failure of even a single transmission link in the network can create a severe service loss to customers. Therefore, a very high level of service reliability is becoming imperative for both system users and service providers. Since equipment failures and accidents cannot be avoided entirely, networks have to be designed so as to "survive" failures. This is done by judiciously installing spare capacity over the network so that all traffic interrupted by a failure may be diverted around that failure by way of this spare or reserve capacity. This of course translates into huge investments for network operators. Designing such survivable networks while minimizing spare capacity costs is, not surprisingly, a major concern of operating companies, and it gives rise to very difficult combinatorial problems. In order to make telecommunication networks survivable, one can essentially use two different strategies: protection or restoration. The protection approach preassigns spare capacity to protect each element of the network independently, while the restoration approach spreads the redundant capacity over the whole network and uses it as required in order to restore the disrupted traffic.
In this paper we propose a model of tax incentive optimization for investment projects using the mechanism of accelerated depreciation. Unlike tax holidays, which influence the effective income tax rate, accelerated depreciation affects taxable income. In modern economic practice the state actively uses mechanisms such as accelerated depreciation and tax holidays to attract investment into the creation of new enterprises. The problem under consideration is the following. Assume that the state (region) is interested in the realization of a certain investment project, for example, the creation of a new enterprise. In order to attract a potential investor, the state decides to use a mechanism of accelerated tax depreciation. The following question arises: what is a reasonable principle for choosing the depreciation rate? From the state's point of view, the future investor's behavior will be rational. This means that, observing the economic environment, the investor chooses the moment of investment that maximizes his expected net present value (NPV) from the given project. In this case both the criterion and the "investment rule" depend on the depreciation policy proposed by the state. For simplicity we suppose that the state's goal for a given project is to maximize the discounted tax payments into the budget from the enterprise after its creation. Of course, these payments depend on the moment of the investor's entry and, therefore, on the depreciation policy established by the state.
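The trade-off described above can be sketched numerically. The toy model below is not the authors' model: all parameters are hypothetical, taxable income is simply floored at zero (no loss carryforward), and declining-balance depreciation stands in for "accelerated" depreciation. It shows that a faster depreciation rate lowers the state's discounted tax stream for the same project.

```python
def discounted_taxes(profit, capital, dep_rate, tax_rate, discount, years):
    """Discounted stream of tax payments when the asset is depreciated
    by the declining-balance method at rate dep_rate (toy model)."""
    book, total = capital, 0.0
    for t in range(1, years + 1):
        dep = book * dep_rate                  # depreciation deduction this year
        book -= dep
        taxable = max(profit - dep, 0.0)       # no loss carryforward
        total += tax_rate * taxable / (1 + discount) ** t
    return total

# same project, two depreciation policies (all figures hypothetical)
slow = discounted_taxes(100.0, 300.0, 0.10, 0.25, 0.05, 10)
fast = discounted_taxes(100.0, 300.0, 0.40, 0.25, 0.05, 10)
```

Accelerated depreciation shifts deductions toward early years, where discounting weighs tax payments most heavily, so `fast` comes out below `slow`.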
This book covers the topic of eddy current nondestructive evaluation, the most commonly practiced method of electromagnetic nondestructive evaluation (NDE). It emphasizes a clear presentation of the concepts, laws and relationships of electricity and magnetism upon which eddy current inspection methods are founded. The chapters include material on signals obtained using many common eddy current probe types in various testing environments. Introductory mathematical and physical concepts in electromagnetism are introduced in sufficient detail and summarized in the Appendices for easy reference. Worked examples and simple calculations that can be done by hand are distributed throughout the text. These and more complex end-of-chapter examples and assignments are designed to impart a working knowledge of the connection between electromagnetic theory and the practical measurements described. The book is intended to equip readers with sufficient knowledge to optimize routine eddy current NDE inspections, or design new ones. It is useful for graduate engineers and scientists seeking a deeper understanding of electromagnetic methods of NDE than can be found in a guide for practitioners.
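One worked calculation of the kind the book describes is the classical skin depth, which governs how deeply eddy currents penetrate a conductor. The sketch below uses the standard formula delta = 1/sqrt(pi * f * mu * sigma) and textbook values for copper; the function name is our own.

```python
import math

MU0 = 4 * math.pi * 1e-7     # permeability of free space (H/m)

def skin_depth(freq_hz, conductivity, rel_permeability=1.0):
    """Skin depth delta = 1/sqrt(pi * f * mu * sigma): the depth at which
    the eddy-current density falls to 1/e of its surface value."""
    mu = MU0 * rel_permeability
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * conductivity)

# copper (sigma ~ 5.8e7 S/m) at 60 Hz: roughly 8.5 mm
delta_cu = skin_depth(60.0, 5.8e7)
```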
Extending the well-known connection between classical linear potential theory and probability theory (through the interplay between harmonic functions and martingales) to the nonlinear case of tug-of-war games and their related partial differential equations, this unique book collects several results in this direction and presents them from an elementary perspective in a lucid and self-contained fashion.
Introduction to Dynamical Systems and Geometric Mechanics provides a comprehensive tour of two fields that are intimately entwined: dynamical systems is the study of the behavior of physical systems that may be described by a set of nonlinear first-order ordinary differential equations in Euclidean space, whereas geometric mechanics explores similar systems that instead evolve on differentiable manifolds. The first part discusses the linearization and stability of trajectories and fixed points, invariant manifold theory, periodic orbits, Poincaré maps, Floquet theory, the Poincaré-Bendixson theorem, bifurcations, and chaos. The second part of the book begins with a self-contained chapter on differential geometry that introduces notions of manifolds, mappings, vector fields, the Jacobi-Lie bracket, and differential forms.
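A minimal example of the first part's subject, linearization and stability of a fixed point: the sketch below (our own illustration, not the book's) classifies the origin of a damped oscillator by the eigenvalues of its Jacobian.

```python
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 Jacobian [[a, b], [c, d]] from the
    characteristic polynomial lambda^2 - tr*lambda + det = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# damped oscillator x' = y, y' = -x - 0.5*y; Jacobian at the fixed point (0, 0)
l1, l2 = eig2(0.0, 1.0, -1.0, -0.5)

# all eigenvalues in the open left half-plane => asymptotically stable
stable = l1.real < 0 and l2.real < 0
```

Here both eigenvalues are -0.25 +/- i*sqrt(3.75)/2, a stable spiral, matching the physical intuition that damping drains the oscillator's energy.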
This volume contains fourteen papers on mathematical problems of flow and transport through porous media presented at the conference held at Oberwolfach, June 21-27, 1992. Among the topics covered are miscible and immiscible displacement, groundwater contamination, reaction-diffusion instabilities and moving boundaries, random and fractal media, microstructure models, homogenization, spatial heterogeneities, inverse problems, and degenerate equations. The papers deal with aspects of modelling, mathematical theory, numerical methods and applications in the engineering sciences.
To describe the true behavior of most real-world systems with sufficient accuracy, engineers have to overcome difficulties arising from their lack of knowledge about certain parts of a process or from the impossibility of characterizing it with absolute certainty. Depending on the application at hand, uncertainties in modeling and measurements can be represented in different ways. For example, bounded uncertainties can be described by intervals, affine forms or general polynomial enclosures such as Taylor models, whereas stochastic uncertainties can be characterized in the form of a distribution described, for example, by the mean value, the standard deviation and higher-order moments. The goal of this Special Volume on "Modeling, Design, and Simulation of Systems with Uncertainties" is to cover modern methods for dealing with the challenges presented by imprecise or unavailable information. All contributions tackle the topic from the point of view of control, state and parameter estimation, optimization and simulation. Thematically, this volume can be divided into two parts. In the first we present works highlighting the theoretic background and current research on algorithmic approaches in the field of uncertainty handling, together with their reliable software implementation. The second part is concerned with real-life application scenarios from various areas including but not limited to mechatronics, robotics, and biomedical engineering.
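Bounded uncertainty described by intervals, mentioned above, can be sketched in a few lines. The class below is a deliberately minimal model (only addition and multiplication, no outward rounding), not any contributor's implementation.

```python
class Interval:
    """Closed interval [lo, hi]: a minimal model of bounded uncertainty."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # sums of endpoints bound the sum of any members
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product range is bounded by the four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# a measurement known only to within bounds propagates through a model
x = Interval(1.9, 2.1)      # e.g. an uncertain sensor reading
y = Interval(-0.1, 0.1)     # e.g. an additive disturbance
z = x * x + y               # guaranteed enclosure of x^2 + y
```

Note the characteristic over-estimation: evaluating `x * x` by endpoint products treats the two factors as independent, which is exactly the dependency problem that Taylor models and affine forms are designed to mitigate.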
This is a state-of-the-art introduction to the work of Reidemeister, Meng, Taubes, Turaev, and the author on the concept of torsion and its generalizations. Torsion is the oldest topological (but not homotopy) invariant; in its almost eight decades of existence it has been at the center of many important and surprising discoveries. During the past decade, in the work of Vladimir Turaev, new points of view have emerged, which turned out to be the "right ones" as far as gauge theory is concerned. The book features mostly the new aspects of this venerable concept. The theoretical foundations of this subject are presented in a style accessible to those who wish to learn and understand the main ideas of the theory. Particular emphasis is placed on the many and rather diverse concrete examples and techniques which capture the subtleties of the theory better than any abstract general result. Many of these examples and techniques have never appeared in print before, and their choice is often justified by ongoing current research on the topology of surface singularities. The text is addressed to mathematicians with geometric interests who want to become comfortable users of this versatile invariant.
grams of which the objective is given by the ratio of a convex by a positive (over a convex domain) concave function. As observed by Sniedovich (Ref. [102, 103]), most of the properties of fractional programs can be found in other programs, given that the objective function can be written as a particular composition of functions. He called this new field C-programming, standing for composite concave programming. In his seminal book on dynamic programming (Ref. [104]), Sniedovich shows how the study of such compositions can help tackle non-separable dynamic programs that would otherwise defeat solution. Barros and Frenk (Ref. [9]) developed a cutting plane algorithm capable of optimizing C-programs. More recently, this algorithm has been used by Carrizosa and Plastria to solve a global optimization problem in facility location (Ref. [16]). The distinction between global optimization problems (Ref. [54]) and generalized convex problems can sometimes be hard to establish. That is exactly the reason why so much effort has been placed into finding an exhaustive classification of the different weak forms of convexity, establishing a new definition just to satisfy some desirable property in the most general way possible. This book does not aim at all the subtleties of the different generalizations of convexity, but concentrates on the most general of them all, quasiconvex programming. Chapter 5 shows clearly where the real difficulties appear.
This work provides the current theory and observations behind the cosmological phenomenon of dark energy. The approach is comprehensive with rigorous mathematical theory and relevant astronomical observations discussed in context. The book treats the background and history starting with the new-found importance of Einstein's cosmological constant (proposed long ago) in dark energy formulation, as well as the frontiers of dark energy. The authors do not presuppose advanced knowledge of astronomy, and basic mathematical concepts used in modern cosmology are presented in a simple, but rigorous way. All this makes the book useful for both astronomers and physicists, and also for university students of physical sciences.
The second volume of this work contains Parts 2 and 3 of the "Handbook of Coding Theory". Part 2, "Connections", is devoted to connections between coding theory and other branches of mathematics and computer science. Part 3, "Applications", deals with a variety of applications for coding.
Increasing demands on output performance, exhaust emissions, and fuel consumption necessitate the development of a new generation of automotive engine functionality. This monograph is written by a longtime automotive development engineer and offers wide coverage of automotive engine control and estimation problems and their solutions. It addresses idle speed control, cylinder flow estimation, engine torque and friction estimation, engine misfire and CAM profile switching diagnostics, as well as engine knock detection. The book provides a wide and well-structured collection of tools and new techniques useful for automotive engine control and estimation problems, such as input estimation, composite adaptation, threshold detection adaptation, real-time algorithms, and important statistical techniques. It demonstrates the statistical detection of engine problems such as misfire or knock events and how this can be used to build a new generation of robust engine functionality. This book will be useful for practising automotive engineers and black belts working in the automotive industry, as well as for lecturers and students, since it provides wide coverage of engine control and estimation problems, detailed and well-structured descriptions of techniques useful in automotive applications, and future trends and challenges in engine functionality.
This book discusses the latest advances in algorithms for symbolic summation, factorization, symbolic-numeric linear algebra and linear functional equations. It presents a collection of papers on original research topics from the Waterloo Workshop on Computer Algebra (WWCA-2016), a satellite workshop of the International Symposium on Symbolic and Algebraic Computation (ISSAC'2016), which was held at Wilfrid Laurier University (Waterloo, Ontario, Canada) on July 23-24, 2016. This workshop and the resulting book celebrate the 70th birthday of Sergei Abramov (Dorodnicyn Computing Centre of the Russian Academy of Sciences, Moscow), whose highly regarded and inspirational contributions to symbolic methods have become a crucial benchmark of computer algebra and have been broadly adopted by many Computer Algebra systems.
Mathematical Programming and Financial Objectives for Scheduling Projects focuses on decision problems where performance is measured in terms of money. As the title suggests, special attention is paid to financial objectives and their relationship to project schedules and scheduling. In addition, how schedules relate to other decisions is treated in detail. The book demonstrates that scheduling must be combined with project selection and financing, and that scheduling helps to answer the planning question of the amount of resources required for a project. The author makes clear the relevance of scheduling to cutting budget costs. The book is divided into six parts. The first part gives a brief introduction to project management. Part two examines scheduling projects in order to maximize their net present value. Part three considers capital rationing. Many decisions on selecting or rejecting a project cannot be made in isolation, and multiple projects must be taken fully into account. Since the requests for capital resources depend on the schedules of the projects, scheduling takes on more complexity. Part four studies the resource usage of a project in greater detail. Part five discusses cases where the processing time of an activity is a decision to be made. Part six summarizes the main results that have been accomplished.
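The link between a schedule and net present value can be made concrete. The sketch below is our illustration (the cash flows are invented): it shows that postponing a project's receipts lowers its NPV at a positive discount rate, which is why scheduling to maximize NPV matters.

```python
def npv(cash_flows, rate):
    """Net present value of (period, amount) cash flows at a discount rate."""
    return sum(amount / (1 + rate) ** t for t, amount in cash_flows)

# the same project cash flows under two schedules: the later schedule
# delays both receipts by one period (amounts are hypothetical)
early = [(0, -100.0), (1, 60.0), (2, 70.0)]
late  = [(0, -100.0), (2, 60.0), (3, 70.0)]

npv_early = npv(early, 0.10)
npv_late = npv(late, 0.10)
```

At a 10% discount rate the early schedule is worth roughly 12.4 while the delayed one drops to about 2.2, even though the undiscounted totals are identical.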
Mathematical Visualization is a young discipline. It offers efficient visualization tools to the classical subjects of mathematics, and applies mathematical techniques to problems in computer graphics and scientific visualization. Originally, it started in the interdisciplinary area of differential geometry, numerical mathematics, and computer graphics. In recent years, the methods developed have found important applications.
Researchers working with nonlinear programming often claim "the world is nonlinear", indicating that real applications require nonlinear modeling. The same is true for other areas such as multi-objective programming (there are always several goals in a real application), stochastic programming (all data is uncertain and therefore stochastic models should be used), and so forth. In this spirit we claim: the world is multilevel. In many decision processes there is a hierarchy of decision makers, and decisions are made at different levels in this hierarchy. One way to handle such hierarchies is to focus on one level and include other levels' behaviors as assumptions. Multilevel programming is the research area that focuses on the whole hierarchy structure. In terms of modeling, the constraint domain associated with a multilevel programming problem is implicitly determined by a series of optimization problems which must be solved in a predetermined sequence. If only two levels are considered, we have one leader (associated with the upper level) and one follower (associated with the lower level).
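A two-level leader-follower problem of the kind described can be sketched by brute force over small finite choice sets. The objective functions below are invented for illustration; real multilevel programs have continuous variables and are far harder.

```python
def follower_best(x, ys):
    """The follower's rational reaction: minimize its own cost given
    the leader's decision x (toy quadratic cost)."""
    return min(ys, key=lambda y: (y - x) ** 2 + y)

def leader_best(xs, ys):
    """The leader optimizes its own objective while anticipating the
    follower's rational reaction -- the defining feature of bilevel programs."""
    return max(xs, key=lambda x: x * follower_best(x, ys))

xs = [0.0, 1.0, 2.0, 3.0]           # leader's admissible decisions
ys = [0.0, 0.5, 1.0, 1.5, 2.0]      # follower's admissible decisions

x_star = leader_best(xs, ys)
y_star = follower_best(x_star, ys)
```

Note how the leader's feasible outcomes are implicitly determined by the follower's optimization, exactly as the paragraph above describes: the leader cannot pick `y` directly, only influence it through `x`.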
This thesis discusses two key topics: strangeness and charge symmetry violation (CSV) in the nucleon. It also provides a pedagogical introduction to chiral effective field theory tailored to the high-precision era of lattice quantum chromodynamics (QCD). Because the nucleon has zero net strangeness, strange observables give tremendous insight into the nature of the vacuum; they can only arise through quantum fluctuations in which strange-antistrange quark pairs are generated. As a result, the precise values of these quantities within QCD are important in physics arenas as diverse as precision tests of QCD, searches for physics beyond the Standard Model, and the interpretation of dark matter direct-detection experiments. Similarly, the precise knowledge of CSV observables has, with increasing experimental precision, become essential to the interpretation of many searches for physics beyond the Standard Model. In this thesis, the numerical lattice gauge theory approach to QCD is combined with the chiral perturbation theory formalism to determine strange and CSV quantities in a diverse range of observables including the octet baryon masses, sigma terms, electromagnetic form factors, and parton distribution functions. This thesis builds a comprehensive and coherent picture of the current status of understanding of strangeness and charge symmetry violation in the nucleon.
The chapters in this volume, and the volume itself, celebrate the life and research of Roberto Tempo, a leader in the study of complex networked systems, their analysis and control under uncertainty, and robust designs. Contributors include authorities on uncertainty in systems, robustness, networked and network systems, social networks, distributed and randomized algorithms, and multi-agent systems, all fields to which Roberto Tempo made vital contributions. Additionally, at least one author of each chapter was a research collaborator of Roberto Tempo's. This volume is structured in three parts. The first covers robustness and includes topics like time-invariant uncertainties, robust static output feedback design, and the uncertainty quartet. The second part is focused on randomization and probabilistic methods, which covers topics such as compressive sensing and stochastic optimization. Finally, the third part deals with distributed systems and algorithms, and explores matters involving mathematical sociology, fault diagnoses, and PageRank computation. Each chapter presents exposition, provides new results, and identifies fruitful future directions in research. This book will serve as a valuable reference volume to researchers interested in uncertainty, complexity, robustness, optimization, algorithms, and networked systems.
This book presents the basic algorithms, the main theoretical results, and some applications of spectral methods. Particular attention is paid to the applications of spectral methods to nonlinear problems arising in fluid dynamics, quantum mechanics, weather prediction, heat conduction and other fields. The book consists of three parts. The first part deals with orthogonal approximations in Sobolev spaces and the stability and convergence of approximations for nonlinear problems, as the mathematical foundation of spectral methods. In the second part, various spectral methods are described, with some applications. It includes the Fourier spectral method, the Legendre spectral method, the Chebyshev spectral method, the spectral penalty method, the spectral vanishing viscosity method, spectral approximation of isolated solutions, the multi-dimensional spectral method, the spectral method for high-order equations, the spectral-domain decomposition method and the spectral multigrid method. The third part is devoted to some recent developments of spectral methods, such as mixed spectral methods, combined spectral methods and spectral methods on the surface.
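The core operation of the Fourier spectral method mentioned above, differentiating a periodic function by multiplying each Fourier mode by ik, can be sketched with a naive DFT. This example is ours, not the book's; in practice an FFT replaces the O(n^2) transforms below.

```python
import cmath
import math

def dft(a):
    """Naive discrete Fourier transform (an FFT would be used in practice)."""
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(A):
    """Naive inverse discrete Fourier transform."""
    n = len(A)
    return [sum(A[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def spectral_derivative(u):
    """Differentiate a periodic sample on [0, 2*pi) by multiplying each
    Fourier mode by i*k; the Nyquist mode is zeroed for odd derivatives."""
    n = len(u)
    U = dft(u)
    ks = list(range(0, n // 2)) + [0] + list(range(-n // 2 + 1, 0))
    return [v.real for v in idft([1j * k * Uk for k, Uk in zip(ks, U)])]

n = 16
xs = [2 * math.pi * j / n for j in range(n)]
du = spectral_derivative([math.sin(x) for x in xs])   # approximates cos(x)
```

For band-limited functions like sin(x) the result is exact to rounding error at every grid point, which is the "spectral accuracy" that makes these methods attractive.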
The "Turbulence and Interactions 2006" (TI2006) conference was held on the island of Porquerolles, France, May 29-June 2, 2006. The scientific sponsors of the conference were: Association Francaise de Mecanique; CD-adapco; DGA; Ecole Polytechnique Federale de Lausanne (EPFL); ERCOFTAC, the European Research Community on Flow, Turbulence and Combustion; FLUENT; the French Ministry of Foreign Affairs; Laboratoire de Modelisation en Mecanique, Paris 6; and ONERA. The conference was a unique event. Never before had so many organisations concerned with turbulence work come together in one conference. As the title "Turbulence and Interactions" anticipated, the workshop was not run with parallel sessions but instead as one united gathering where people had strong interactions and discussions. Many of the 85 or so attendants were veterans of previous ERCOFTAC conferences. Some young researchers attended their very first international meeting. The organisers were fortunate in obtaining the presence of the following invited speakers: N. Adams (TUM, Germany), C. Cambon (ECL, France), J.-P. Dussauge (Polytech Marseille, France), D.A. Gosman (Imperial College, UK), Y. Kaneda (Nagoya University, Japan), O. Simonin (IMFT, France), G. Tryggvason (WPI, USA), D. Veynante (ECP, France), F. Waleffe (University of Wisconsin, USA), Y.K. Zhou (University of California, USA). The topics covered by the 59 papers ranged from experimental results through theory to computations. The papers of the conference went through the usual reviewing process for two special issues of international journals: Computers and Fluids, and Flow, Turbulence and Combustion.