This book presents the current knowledge about nonlinear localized travelling excitations in crystals. Excitations can be vibrational, electronic, magnetic or of many other types, in many different types of crystals, such as silicates, semiconductors and metals. The book is dedicated to the British scientist F. M. Russell, who recently turned 80. Fifty years ago he found that the mineral mica muscovite was able to record elementary charged particles, and much later that it had also recorded a kind of localized excitation, which he called quodons. The tracks therefore provide striking experimental evidence of the existence of quodons. The first chapter, written by him, presents the state of knowledge in this topic. It is followed by about 18 chapters from world leaders in the field, reviewing different aspects, materials and methods, including experiments, molecular dynamics and theory, and also presenting the latest results. The last part is a personal narration by F. M. Russell of the deciphering of the marks in mica. It presents the science in an accessible way and also illustrates the process of discovery in a scientist's mind.
The quantitative and qualitative study of the physical world makes use of many mathematical models governed by a great diversity of ordinary, partial differential, integral, and integro-differential equations. An essential step in such investigations is the solution of these types of equations, which sometimes can be performed analytically, while at other times only numerically. This edited, self-contained volume presents a series of state-of-the-art analytic and numerical methods of solution constructed for important problems arising in science and engineering, all based on the powerful operation of (exact or approximate) integration. The book, consisting of twenty-seven selected chapters presented by well-known specialists in the field, is an outgrowth of the Eighth International Conference on Integral Methods in Science and Engineering, held August 2-4, 2004, in Orlando, FL. Contributors cover a wide variety of topics, from the theoretical development of boundary integral methods to the application of integration-based analytic and numerical techniques that include integral equations, finite and boundary elements, conservation laws, hybrid approaches, and other procedures. The volume may be used as a reference guide and a practical resource. It is suitable for researchers and practitioners in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines.
In the pages of this text readers will find nothing less than a unified treatment of linear programming. Without sacrificing mathematical rigor, the main emphasis of the book is on models and applications. The most important classes of problems are surveyed and presented by means of mathematical formulations, followed by solution methods and a discussion of a variety of "what-if" scenarios. Non-simplex based solution methods and newer developments such as interior point methods are covered.
Over the years, research in the life sciences has benefited greatly from the quantitative tools of mathematics and modeling. Many aspects of complex biological systems can be more deeply understood when mathematical techniques are incorporated into a scientific investigation. Modeling can be fruitfully applied in many types of biological research, from studies on the molecular, cellular, and organ level, to experiments in whole animals and in populations. Using the field of nutrition as an example, one can find many cases of recent advances in knowledge and understanding that were facilitated by the application of mathematical modeling to kinetic data. The availability of biologically important stable isotope-labeled compounds, developments in sensitive mass spectrometry and other analytical techniques, and advances in the powerful modeling software applied to data have each contributed to our ability to carry out ever more sophisticated kinetic studies that are relevant to nutrition and the health sciences at many levels of organization. Furthermore, we anticipate that modeling is on the brink of another major advance: the application of kinetic modeling to clinical practice. With advances in the ability of models to access large databases (e.g., a population of individual patient records) and the development of user interfaces that are "friendly" enough to be used by clinicians who are not modelers, we predict that health applications modeling will be an important new direction for modeling in the 21st century. This book contains manuscripts that are based on presentations at the seventh conference in a series focused on advancing nutrition and health research by fostering exchange among scientists from such disciplines as nutrition, biology, mathematics, statistics, kinetics, and computing.
The themes of the six previous conferences included general nutrition modeling (Canolty and Cain, 1985; Hoover-Plow and Chandra, 1988), amino acids and carbohydrates (Abumrad, 1991), minerals (Siva Subramanian and Wastney, 1995), vitamins, proteins, and modeling theory (Coburn and Townsend, 1996), and physiological compartmental modeling (Clifford and Muller, 1998). The seventh conference in the series was held at The Pennsylvania State University from July 29 through August 1, 2000. The meeting began with an instructive and entertaining keynote address by Professor Britton Chance, Eldridge Reeves Johnson University Professor Emeritus of Biophysics, Physical Chemistry, and Radiologic Physics, University of Pennsylvania.
H-infinity engineering continues to establish itself as a discipline of applied mathematics. As such, this extensively illustrated monograph makes a significant application of H-infinity theory to electronic amplifier design, demonstrating how recent developments in H-infinity engineering equip amplifier designers with new tools and avenues for research. The presentation, at the interface of applied mathematics and engineering, emphasizes how to (1) compute the best possible performance available from any matching circuits; (2) benchmark existing matching solutions; and (3) generalize results to multiple amplifiers. As the monograph develops, many research directions are pointed out for both disciplines. The physical meaning of a mathematical problem is made explicit for the mathematician, while circuit problems are presented in the H-infinity framework for the engineer. A final chapter organizes these research topics into a collection of open problems ranging from electrical engineering and numerical implementation to generalizations of H-infinity theory.
After the first edition of this book was published in early 2005, the world changed dramatically and at a pace never seen before. The changes that occurred in 2008 and 2009 were completely unthinkable two years before. These changes took place not only in the finance sector, the origin of the crisis, but also, as a result, in other economic sectors like the automotive sector. Governments now own substantial parts, if not majorities, in banks or other companies which recorded losses of double-digit billions of USD in 2008. 2008 saw the collapse of leading stand-alone U.S. investment banks. In many countries interest rates fell close to zero. What has happened? While the economy showed strong growth in 2004 to 2006, the Subprime or Credit Crisis changed the picture completely. What started in the U.S. housing market in late 2006 became a full-fledged global financial crisis and has affected financial markets around the world. A decline in U.S. house prices and increasing interest rates caused a higher rate of subprime mortgage delinquencies in the U.S. and, due to the wide distribution of securitized assets, had a negative effect on other markets. As a result, markets realized that risks had been underestimated and volatility increased. This development culminated in the bankruptcy of the investment bank Lehman Brothers in mid-September 2008.
Technological, economic, and regulatory changes are some of the driving forces in the modern world of finance. For instance, financial markets now trade twenty-four hours a day and securities are increasingly being traded via real-time computer-based systems in contrast to trading floor-based systems. Equally important, new security forms and pricing models are coming into existence in response to changes in domestic and international regulatory action. Accounting and risk management systems now enable financial and investment firms to manage risk more efficiently while meeting regulatory concerns. The challenge for academics and practitioners alike is how to keep themselves, and others, current with these changing markets, as well as the technology and current investment and risk management tools. Applications in Finance, Investments, and Banking offers presentations by twelve leading investment professionals and academics on a wide range of finance, investment and banking issues. Chapters include analysis of the basic foundations of financial analysis, as well as current approaches to managing risk. Presentations also include reviews of the means of measuring the volatility of the underlying return process and how investment performance measurement can be used to better understand the benefits of active management. Finally, articles also present advances in the pricing of the new financial assets (e.g., swaps), as well as the understanding of the factors (e.g., earnings estimates) affecting pricing of the traditional assets (e.g., stocks). Applications in Finance, Investments, and Banking provides beneficial information to the understanding of both traditional and modern approaches of financial and investment management.
As the telecommunication industry introduces new sophisticated technologies, the nature of services and the volume of demands have changed. Indeed, a broad range of new services for users has appeared, combining voice, data, graphics, video, etc. This implies new planning issues. Fiber transmission systems that can carry large amounts of data on a few strands of wire were introduced. These systems have such a large bandwidth that the failure of even a single transmission link in the network can create a severe service loss to customers. Therefore, a very high level of service reliability is becoming imperative for both system users and service providers. Since equipment failures and accidents cannot be avoided entirely, networks have to be designed so as to "survive" failures. This is done by judiciously installing spare capacity over the network so that all traffic interrupted by a failure may be diverted around that failure by way of this spare or reserve capacity. This of course translates into huge investments for network operators. Designing such survivable networks while minimizing spare capacity costs is, not surprisingly, a major concern of operating companies, which gives rise to very difficult combinatorial problems. In order to make telecommunication networks survivable, one can essentially use two different strategies: protection or restoration. The protection approach preassigns spare capacity to protect each element of the network independently, while the restoration approach spreads the redundant capacity over the whole network and uses it as required in order to restore the disrupted traffic.
In this thesis, the author considers quantum gravity in order to investigate the mysterious origin of our universe and its mechanisms. He and his collaborators have greatly improved the analytic treatment of two models, causal dynamical triangulations (CDT) and n-DBI gravity, the space-time foliation being the one common factor shared by these two separate models. In the first part, an analytic method for coupling matter to CDT in 2-dimensional toy models is proposed to uncover the underlying mechanisms of the universe and to remove ambiguities remaining in CDT. As a result, the wave function of the 2-dimensional universe with matter coupled is derived. The behavior of the wave function reveals that the Hausdorff dimension can change when the matter is non-unitary. In the second part, the n-DBI gravity model is considered. The author mainly investigates two effects driven by the space-time foliation: the appearance of a new conserved charge in black holes and an extra scalar mode of the graviton. The former implies a breakdown of the black-hole uniqueness theorem, while the latter does not show any pathological behavior.
Polynomial extremal problems (PEP) constitute one of the most important subclasses of nonlinear programming models. Their distinctive feature is that the objective function and constraints can be expressed by polynomial functions in one or several variables. Let $x = (x_1, \dots, x_n)$ be a vector in the $n$-dimensional real linear space $R^n$, and let $P_0(x), P_1(x), \dots, P_m(x)$ be polynomial functions on $R^n$ with real coefficients. In general, a PEP can be formulated in the following form: find $r = \inf P_0(x)$ (0.1) subject to the constraints $P_i(x) = 0$, $i = 1, \dots, m$ (0.2) (a constraint in the form of an inequality can be written in the form of an equality by introducing a new variable: for example, $P(x) \le 0$ is equivalent to $P(x) + y^2 = 0$). Boolean and mixed polynomial problems can be written in the usual form by adding, for each boolean variable $z$, the equality $z^2 - z = 0$. Let $a = (a_1, \dots, a_n)$ be an integer vector with nonnegative entries $\{a_i\}_{i=1}^n$. Denote by $R[a](x)$ the monomial in $n$ variables of the form $R[a](x) = \prod_{i=1}^n x_i^{a_i}$; $d(a) = \sum_{i=1}^n a_i$ is the total degree of the monomial $R[a]$. Each polynomial in $n$ variables can be written as a sum of monomials with nonzero coefficients: $P(x) = \sum_{a \in A(P)} c_a R[a](x)$, where $A(P)$ is the set of monomials contained in the polynomial $P$.
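The two reformulations described above can be checked on a toy case. The sketch below is my own illustration, not taken from the book: a slack variable $y$ turns the inequality $P(x) \le 0$ into the equality $P(x) + y^2 = 0$, and a boolean variable $z$ is captured by the polynomial equality $z^2 - z = 0$.

```python
import math

def slack_residual(P, x, y):
    # Residual of the equality form: zero exactly when P(x) = -y**2 <= 0.
    return P(x) + y ** 2

def is_boolean(z, tol=1e-12):
    # z satisfies z**2 - z = 0 precisely when z is 0 or 1.
    return abs(z ** 2 - z) < tol

# Example: P(x) = x - 3 at x = 1 gives P = -2 <= 0,
# so the slack y = sqrt(2) closes the equality.
P = lambda x: x - 3.0
assert abs(slack_residual(P, 1.0, math.sqrt(2.0))) < 1e-12
assert is_boolean(0.0) and is_boolean(1.0) and not is_boolean(0.5)
```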
Fractional evolution equations provide a unifying framework to investigate the well-posedness of complex systems with fractional order derivatives. This monograph presents existence, attractivity, stability, periodic solutions and control theory for time fractional evolution equations. The book contains an up-to-date and comprehensive treatment of the topic.
This volume contains a selection of papers presented at the first conference of the Society for Computational Economics held at ICC Institute, Austin, Texas, May 21-24, 1995. Twenty-two papers are included in this volume, devoted to applications of computational methods for the empirical analysis of economic and financial systems; the development of computing methodology, including software, related to economics and finance; and the overall impact of developments in computing. The various contributions represented in the volume indicate the growing interest in the topic due to the increased availability of computational concepts and tools and the necessity of analyzing complex decision problems. The papers in this volume are divided into four sections: Computational methods in econometrics, Computational methods in finance, Computational methods for a social environment and New computational methods.
The Virasoro algebra is an infinite dimensional Lie algebra that plays an increasingly important role in mathematics and theoretical physics. This book describes some fundamental facts about the representation theory of the Virasoro algebra in a self-contained manner. Topics include the structure of Verma modules and Fock modules, the classification of (unitarizable) Harish-Chandra modules, tilting equivalence, and the rational vertex operator algebras associated to the so-called minimal series representations. Covering a wide range of material, this book has three appendices which provide background information required for some of the chapters. The authors organize fundamental results in a unified way and refine existing proofs. For instance in chapter three, a generalization of Jantzen filtration is reformulated in an algebraic manner, and geometric interpretation is provided. Statements, widely believed to be true, are collated, and results which are known but not verified are proven, such as the corrected structure theorem of Fock modules in chapter eight. This book will be of interest to a wide range of mathematicians and physicists from the level of graduate students to researchers.
All phenomena in nature are characterized by motion. Mechanics deals with the objective laws of mechanical motion of bodies, the simplest form of motion. In the study of a science of nature, mathematics plays an important role. Mechanics is the first science of nature which has been expressed in terms of mathematics, by considering various mathematical models associated with phenomena of the surrounding nature. Thus, its development was influenced by the use of a strong mathematical tool. As was already seen in the first two volumes of the present book, its guideline is precisely the mathematical model of mechanics. The classical models which we refer to are in fact models based on the Newtonian model of mechanics, that is, on its five principles: inertia, the action of forces, action and reaction, the independence of force actions, and the initial conditions principle. Other models, e.g., the model of attraction forces between the particles of a discrete mechanical system, are part of the considered Newtonian model. Kepler's laws brilliantly verify this model in the case of velocities much smaller than the velocity of light in vacuum.
Analytical solutions to the orbital motion of celestial objects have nowadays been mostly replaced by numerical solutions, but they are still irreplaceable whenever speed is to be preferred to accuracy, or when a dynamical model is to be simplified. In this book, the most common orbital perturbation problems are discussed according to the Lie transforms method, which is the de facto standard in analytical orbital motion calculations.
In the field known as "the mathematical theory of shock waves," very exciting and unexpected developments have occurred in the last few years. Joel Smoller and Blake Temple have established classes of shock wave solutions to the Einstein-Euler equations of general relativity; indeed, the mathematical and physical consequences of these examples constitute a whole new area of research. The stability theory of "viscous" shock waves has received a new, geometric perspective due to the work of Kevin Zumbrun and collaborators, which offers a spectral approach to systems. Due to the intersection of point and essential spectrum, such an approach had for a long time seemed out of reach. The stability problem for "inviscid" shock waves has been given a novel, clear and concise treatment by Guy Metivier and coworkers through the use of paradifferential calculus. The L^1 semigroup theory for systems of conservation laws, itself still a recent development, has been considerably condensed by the introduction of new distance functionals through Tai-Ping Liu and collaborators; these functionals compare solutions to different data by direct reference to their wave structure. The fundamental properties of systems with relaxation have found a systematic description through the papers of Wen-An Yong; for shock waves, this means a first general theorem on the existence of corresponding profiles. The five articles of this book reflect the above developments.
This book introduces mathematicians, physicists, and philosophers to a new, coherent approach to theory and interpretation of quantum physics, in which classical and quantum thinking live peacefully side by side and jointly fertilize the intuition. The formal, mathematical core of quantum physics is cleanly separated from the interpretation issues. The book demonstrates that the universe can be rationally and objectively understood from the smallest to the largest levels of modeling. The thermal interpretation featured in this book succeeds without any change in the theory. It involves one radical step, the reinterpretation of an assumption that was virtually never questioned before - the traditional eigenvalue link between theory and observation is replaced by a q-expectation link: Objective properties are given by q-expectations of products of quantum fields and what is computable from these. Averaging over macroscopic spacetime regions produces macroscopic quantities with negligible uncertainty, and leads to classical physics. - Reflects the actual practice of quantum physics. - Models the quantum-classical interface through coherent spaces. - Interprets both quantum mechanics and quantum field theory. - Eliminates probability and measurement from the foundations. - Proposes a novel solution of the measurement problem.
This book presents simple interdisciplinary stochastic models meant as a gentle introduction to the field of non-equilibrium statistical physics. It focuses on the analysis of two-state models with cooperative effects, which are versatile enough to be applied to many physical and social systems. The book also explores a variety of mathematical techniques to solve the master equations that govern these models: matrix theory, empty-interval methods, mean field theory, a quantum approach, and mapping onto classical Ising models. The models discussed are at the confluence of nanophysics, biology, mathematics, and the social sciences and provide a pedagogical path toward understanding the complex dynamics of particle self-assembly with the tools of statistical physics.
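For a flavor of the two-state models mentioned above, consider the simplest master equation, with transition rates a (state 0 to 1) and b (state 1 to 0). The sketch below is my own illustration, not an example from the book; it uses the closed-form solution p1(t) = p_eq + (p1(0) - p_eq) * exp(-(a+b)t), with equilibrium occupation p_eq = a / (a + b).

```python
import math

def p1(t, a, b, p0):
    # Occupation probability of state 1 at time t for the two-state
    # master equation dp1/dt = a*(1 - p1) - b*p1, starting from p1(0) = p0.
    p_eq = a / (a + b)
    return p_eq + (p0 - p_eq) * math.exp(-(a + b) * t)

# The solution relaxes exponentially to the equilibrium value a/(a+b).
assert abs(p1(100.0, 0.3, 0.7, 1.0) - 0.3) < 1e-9
# The initial condition is recovered at t = 0.
assert p1(0.0, 0.3, 0.7, 1.0) == 1.0
```

Cooperative versions of such models couple the rates a and b to the state of neighboring units, which is where the matrix, empty-interval, and Ising-mapping techniques of the book come into play.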
This volume contains fourteen papers on mathematical problems of flow and transport through porous media presented at the conference held at Oberwolfach, June 21-27, 1992. Among the topics covered are miscible and immiscible displacement, groundwater contamination, reaction-diffusion instabilities and moving boundaries, random and fractal media, microstructure models, homogenization, spatial heterogeneities, inverse problems, and degenerate equations. The papers deal with aspects of modelling, mathematical theory, numerical methods and applications in the engineering sciences.
grams of which the objective is given by the ratio of a convex function by a positive (over a convex domain) concave function. As observed by Sniedovich (Refs. [102, 103]), most of the properties of fractional programs can be found in other programs, given that the objective function can be written as a particular composition of functions. He called this new field C-programming, standing for composite concave programming. In his seminal book on dynamic programming (Ref. [104]), Sniedovich shows how the study of such compositions can help in tackling non-separable dynamic programs that would otherwise defeat solution. Barros and Frenk (Ref. [9]) developed a cutting plane algorithm capable of optimizing C-programs. More recently, this algorithm has been used by Carrizosa and Plastria to solve a global optimization problem in facility location (Ref. [16]). The distinction between global optimization problems (Ref. [54]) and generalized convex problems can sometimes be hard to establish. That is exactly why so much effort has been placed into finding an exhaustive classification of the different weak forms of convexity, establishing a new definition just to satisfy some desirable property in the most general way possible. This book does not aim at all the subtleties of the different generalizations of convexity, but concentrates on the most general of them all, quasiconvex programming. Chapter 5 shows clearly where the real difficulties appear.
In the paper we propose a model of tax incentives optimization for investment projects with the help of the mechanism of accelerated depreciation. Unlike tax holidays, which influence the effective income tax rate, accelerated depreciation affects taxable income. In modern economic practice the state actively uses such mechanisms as accelerated depreciation and tax holidays to attract investment into the creation of new enterprises. The problem under our consideration is the following. Assume that the state (region) is interested in the realization of a certain investment project, for example, the creation of a new enterprise. In order to attract a potential investor the state decides to use a mechanism of accelerated tax depreciation. The following question arises: what is a reasonable principle for choosing the depreciation rate? From the state's point of view the future investor's behavior will be rational. It means that, looking at the economic environment, the investor chooses the moment for investment which maximizes his expected net present value (NPV) from the given project. In this case both the criterion and the "investment rule" depend on the depreciation policy proposed by the state. For simplicity we will suppose that the purpose of the state for a given project is the maximization of discounted tax payments into the budget from the enterprise after its creation. Of course, these payments depend on the moment of the investor's entry and, therefore, on the depreciation policy established by the state.
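The trade-off at the heart of this problem can be made concrete with a toy calculation. The sketch below is a hypothetical illustration of my own, not the paper's model: with declining-balance depreciation at rate dep_rate, faster write-offs reduce the investor's early taxable income, and hence the state's discounted stream of tax payments.

```python
def discounted_taxes(profit, dep_rate, tax_rate=0.2, r=0.1,
                     book_value=100.0, years=10):
    # Discounted tax payments to the budget over a fixed horizon,
    # with declining-balance depreciation at rate dep_rate.
    total, bv = 0.0, book_value
    for t in range(1, years + 1):
        dep = dep_rate * bv          # depreciation charge this year
        bv -= dep                    # remaining book value
        taxable = max(profit - dep, 0.0)
        total += tax_rate * taxable / (1.0 + r) ** t
    return total

# Faster depreciation -> smaller discounted tax take for the state,
# which is why the depreciation rate must be chosen as a compromise.
assert discounted_taxes(30.0, 0.4) < discounted_taxes(30.0, 0.1)
```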
This textbook covers a broad spectrum of developments in QFT, emphasizing those aspects that are now well consolidated and for which satisfactory theoretical descriptions have been provided. The book is unique in that it offers a new approach to the subject and explores many topics merely touched upon, if covered at all, in standard reference works. A detailed and largely non-technical introductory chapter traces the development of QFT from its inception in 1926. The elegant functional differential approach put forward by Schwinger, referred to as the quantum dynamical (action) principle, and its underlying theory are used systematically in order to generate the so-called vacuum-to-vacuum transition amplitude of both abelian and non-abelian gauge theories, in addition to Feynman's well-known functional integral approach, referred to as the path-integral approach. Given the wealth of information also to be found in the abelian case, equal importance is put on both abelian and non-abelian gauge theories. Particular emphasis is placed on the concept of a quantum field and its particle content to provide an appropriate description of physical processes at high energies, where relativity becomes indispensable. Moreover, quantum mechanics implies that a wave function renormalization arises in QFT independently of any perturbation theory - a point not sufficiently emphasized in the literature. The book provides an overview of all the fields encountered in present high-energy physics, together with the details of the underlying derivations. Further, it presents "deep inelastic" experiments as a fundamental application of quantum chromodynamics. Though the author makes a point of deriving results in detail, the book still requires good background knowledge of quantum mechanics, including the Dirac Theory, as well as elements of the Klein-Gordon equation.
The present volume sets the language, the notation and provides additional background for reading Quantum Field Theory II - Introduction to Quantum Gravity, Supersymmetry and String Theory, by the same author. Students in this field might benefit from first reading the book Quantum Theory: A Wide Spectrum (Springer, 2006), by the same author.
Along with the traditional material concerning linear programming (the simplex method, the theory of duality, the dual simplex method), In-Depth Analysis of Linear Programming contains new results of research carried out by the authors. For the first time, the criteria of stability (in the geometrical and algebraic forms) of the general linear programming problem are formulated and proved. New regularization methods based on the idea of extension of an admissible set are proposed for solving unstable (ill-posed) linear programming problems. In contrast to the well-known regularization methods, in the methods proposed in this book the initial unstable problem is replaced by a new stable auxiliary problem. This is also a linear programming problem, which can be solved by standard finite methods. In addition, the authors indicate the conditions imposed on the parameters of the auxiliary problem which guarantee its stability, and this circumstance advantageously distinguishes the regularization methods proposed in this book from the existing methods. In these existing methods, the stability of the auxiliary problem is usually only presupposed but is not explicitly investigated. In this book, the traditional material contained in the first three chapters is expounded in much simpler terms than in the majority of books on linear programming, which makes it accessible to beginners as well as those more familiar with the area.