The first part of this volume gathers the lecture notes of the courses of the "XVII Escuela Hispano-Francesa", held in Gijon, Spain, in June 2016. Each chapter is devoted to an advanced topic and presents state-of-the-art research in a didactic and self-contained way. Young researchers will find a complete guide to beginning advanced work in fields such as High Performance Computing, Numerical Linear Algebra, Optimal Control of Partial Differential Equations and Quantum Mechanics Simulation, while experts in these areas will find a comprehensive reference guide, including some previously unpublished results, and teachers may find these chapters useful as textbooks in graduate courses. The second part features the extended abstracts of selected research work presented by the students during the School. It highlights new results and applications in Computational Algebra, Fluid Mechanics, Chemical Kinetics and Biomedicine, among others, offering interested researchers a convenient reference guide to these latest advances.
Galaxies and Chaos examines the application of tools developed for Nonlinear Dynamical Systems to Galactic Dynamics and Galaxy Formation, as well as to related issues in Celestial Mechanics. The contributions collected in this volume have emerged from selected presentations at a workshop on this topic and key chapters have been suitably expanded in order to be accessible to nonspecialist researchers and postgraduate students wishing to enter this exciting field of research.
Recent years have witnessed a surge of activity in the field of dynamic games, in both theory and applications. Theoretical as well as practical problems in zero-sum and nonzero-sum games, continuous-time differential and discrete-time multistage games, and deterministic and stochastic games are currently being investigated by researchers in diverse disciplines, such as engineering, mathematics, biology, economics, management science, and political science. This surge of interest led to the formation of the International Society of Dynamic Games (ISDG) in 1990, whose primary goal is to foster the development of advanced research and applications in the field of game theory. One important activity of the Society is to organize, every two years, an international symposium which aims at bringing together all those who contribute to the development of this active field of applied science. In 1992 the symposium was organized in Grimentz, Switzerland, under the supervision of an international scientific committee and with the help of a local organizing committee based at the University of Geneva. This book, the first volume in the new series Annals of the International Society of Dynamic Games (see the Preface to the Series), is based on presentations made at this symposium. It is, however, more than a volume of conference proceedings: every paper published in this volume has passed through a very selective refereeing process, as in an archival technical journal.
During the 1980s, the use of log-linear statistical models in behavioral and life-science inquiry increased markedly. Concurrently, log-linear theory, developed largely during the previous decade, has been streamlined and refined. An aim of this second edition is to acquaint old and new readers with these refinements. The most significant change that has occurred is the increased availability of user-oriented computer programs for the performance of log-linear analyses. During this period, all major statistical packages (i.e., BMDP, SAS, and SPSS) introduced either new or improved computer programs designed specifically for the specification and fitting of log-linear models. Consequently, the enhanced ability of practicing researchers to perform log-linear analyses has been accompanied by an enhanced need for didactic explanations of this system of analysis--for explanations of log-linear theory and method that can be readily understood by practitioners and graduate students who do not possess recondite backgrounds in mathematical statistics, yet who desire to obtain a level of understanding beyond that which is typically offered by cookbook approaches to statistical topics. Another aim of this second edition is to fulfill this need. As before, this edition has been prepared for readers who have had at least one intermediate-level course in applied statistics in which the basic principles of factorial analysis of variance and multiple regression were discussed. Also as before, to assist readers with modest preparation in the analysis of quantitative/categorical data, this edition will review topics in such relevant areas as basic probability theory, traditional chi-square goodness-of-fit procedures, and the method of maximum-likelihood estimation. Readers with strong backgrounds in statistics can skim over these preparatory discussions, contained largely in Chapters 2 and 3, without prejudice.
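As a minimal illustration of the kind of computation the packages mentioned above perform, the following sketch (in Python, with invented counts; not from the book) fits the independence log-linear model to a 2x2 table and forms the likelihood-ratio goodness-of-fit statistic G-squared:

```python
import math

def independence_fit(table):
    """Expected counts under the independence (no-association) log-linear model."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    return [[r * c / n for c in col_totals] for r in row_totals]

def g_squared(observed, expected):
    """Likelihood-ratio statistic G^2 = 2 * sum obs * ln(obs / exp)."""
    return 2 * sum(
        o * math.log(o / e)
        for row_o, row_e in zip(observed, expected)
        for o, e in zip(row_o, row_e)
        if o > 0
    )

observed = [[10, 20], [30, 40]]        # illustrative counts only
expected = independence_fit(observed)  # [[12.0, 18.0], [28.0, 42.0]]
g2 = g_squared(observed, expected)     # compare to chi-square with 1 df
```

For a 2x2 table the expected counts under independence have the closed form (row total x column total) / grand total; for higher-way models the packages use iterative proportional fitting instead.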
This book is a useful and accessible introduction to symmetry principles in particle physics. New ideas are explained in a way that throws considerable light on difficult concepts, such as Lie groups and their representations. The book begins with introductions both to the types of symmetries known in physics and to group theory and representation theory. Successive chapters deal with the symmetric groups and their Young diagrams, braid groups, Lie groups and algebras, and Cartan's classification of semi-simple groups; the Lie groups most used in physics are treated in detail. Gauge groups are discussed, and applications to elementary particle physics and multiquark systems are introduced throughout the book where appropriate. Many worked examples are also included. There is a growing interest in the quark structure of hadrons and in theories of particle interactions based on the principle of gauge symmetries. In this book the concepts of group theory are clearly explained and their applications to subnuclear physics brought up to date.
This text takes readers in a clear and progressive format from simple to recent and advanced topics in pure and applied probability, such as contraction and annealed properties of non-linear semi-groups, functional entropy inequalities, empirical process convergence, increasing propagations of chaos, central limit and Berry-Esseen type theorems, as well as large deviation principles for strong topologies on path-distribution spaces. Topics also include a body of powerful branching and interacting particle methods.
A reference for the field of particle modelling - the study of dynamical behaviour of solids and fluids in response to external forces, with the solids and fluids modelled as systems of atoms and molecules.
This book offers a new approach to the long-standing problem of high-Tc copper-oxide superconductors. It has been demonstrated that starting from a strongly correlated Hamiltonian, even within the mean-field regime, the "competing orders" revealed by experiments can be achieved using numerical calculations. In the introduction, readers will find a brief review of the high-Tc problem and the unique challenges it poses, as well as a comparatively simple numerical approach, the renormalized mean-field theory (RMFT), which provides rich results detailed in the following chapters. With an additional phase picked up by the original Hamiltonian, some behaviors of interactive fermions under an external magnetic field, which have since been experimentally observed using cold atom techniques, are also highlighted.
Extending the well-known connection between classical linear potential theory and probability theory (through the interplay between harmonic functions and martingales) to the nonlinear case of tug-of-war games and their related partial differential equations, this unique book collects several results in this direction and puts them in an elementary perspective in a lucid and self-contained fashion.
Multilevel decision theory arises to resolve the contradiction between the increasing requirements placed on the design, synthesis, control and management of complex systems and the limited power of the technical, control, computer and other executive devices that have to perform these actions and satisfy the requirements in real time. The theory suggests how to replace centralised management of a system by hierarchical co-ordination of sub-processes. All sub-processes have lower dimensions, which makes management and decision making easier, but the sub-processes are interconnected and influence each other. Multilevel systems theory supports two main methodological tools: decomposition and co-ordination. Both have been developed and implemented in practical applications concerning the design, control and management of complex systems. In general, it is always beneficial to find the best or optimal solution in processes of system design, control and management. The tendency towards the best (optimal) decision requires that all activities be cast as the definition, and then the solution, of an appropriate optimization problem. These problems belong to two classes: static optimization and dynamic optimization. Static optimization problems are solved by methods of mathematical programming: conditional (constrained) and unconditional (unconstrained) optimization. Dynamic optimization problems are solved by methods of the calculus of variations: the Euler-Lagrange method, the maximum principle, and dynamic programming.
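The simplest static, unconditional case can be sketched with plain fixed-step gradient descent on a quadratic objective; the function, starting point and step size below are arbitrary choices for illustration, not taken from the book:

```python
def grad_descent(grad, x0, lr=0.1, steps=500):
    """Fixed-step gradient descent for unconstrained (unconditional) minimization."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = (x - 1)^2 + 2*(y + 0.5)^2, minimized at (1, -0.5)
grad_f = lambda v: [2 * (v[0] - 1), 4 * (v[1] + 0.5)]
x_opt = grad_descent(grad_f, [0.0, 0.0])
```

Constrained (conditional) problems and dynamic optimization require heavier machinery (Lagrange multipliers, the maximum principle, dynamic programming), but the pattern of stating an objective and iterating toward its optimum is the same.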
The work developed in this thesis addresses very important and relevant issues of accretion processes around black holes. Beginning by studying the time variation of the evolution of inviscid accretion discs around black holes and their properties, the author investigates how the pattern of the flows changes when the strength of the shear viscosity is varied and cooling is introduced. He succeeds in verifying theoretical predictions of the so-called Two-Component Advective Flow (TCAF) solution of the accretion problem onto black holes through numerical simulations under different input parameters. TCAF solutions are found to be stable, and thus explanations of the spectral and timing properties (including Quasi-Periodic Oscillations, QPOs) of galactic and extragalactic black holes based on shocked TCAF models appear to have a firm foundation.
Classical optimal control theory deals with the determination of an optimal control that optimizes a criterion subject to a dynamic constraint expressing the evolution of the system state under the influence of control variables. Extending this to multiple controllers (also called players) with different, and sometimes conflicting, optimization criteria (payoff functions) leads to differential games. Zero-sum differential games, also called differential games of pursuit, constitute the most developed part of differential games and are rigorously investigated. In this book, the full theory of differential games of pursuit with complete and partial information is developed. Numerous concrete pursuit-evasion games are solved ("life-line" games, simple pursuit games, etc.), and new time-consistent optimality principles in n-person differential game theory are introduced and investigated.
Contents: The Possibility of Using Computer to Study the Equation of Gravitation (Q K Lu); Solving Polynomial Systems by Homotopy Continuation Methods (T Y Li); Sketch of a New Discipline of Modeling (E Engeler); The Symmetry Groups of Computer Programs and Program Equivalence (J R Gabriel); Computations with Rational Parametric Equations (S C Chou et al.); Computer Versus Paper and Pencil (M Mignotte); The Finite Basis of an Irreducible Ascending Set (H Shi); A Note on Wu Wen-Tsun's Non-Degenerate Condition (J Z Zhang et al.); Mechanical Theorem Proving in Riemann Geometry Using Wu's Method (S C Chou & X S Gao); and other papers.
Mathematical methods play a significant role in the rapidly growing field of nonlinear optical materials. This volume discusses a number of successful or promising contributions. The overall theme of this volume is twofold: (1) the challenges faced in computing and optimizing nonlinear optical material properties; and (2) the exploitation of these properties in important areas of application. These include the design of optical amplifiers and lasers, as well as novel optical switches. Research topics in this volume include how to exploit the magnetooptic effect, how to work with the nonlinear optical response of materials, how to predict laser-induced breakdown in efficient optical devices, and how to handle electron cloud distortion in femtosecond processes.
The feasibility of extracting porous medium parameters from acoustic recordings is investigated. The thesis gives an excellent discussion of our basic understanding of different wave modes, using a full-waveform and multi-component approach. The focus lies on the dependency on porosity and permeability, where especially the latter is difficult to estimate. In this thesis, this sensitivity is shown for interface-wave and reflected-wave modes. For each of the pseudo-Rayleigh and pseudo-Stoneley interface waves, unique estimates for permeability and porosity can be obtained when impedance and attenuation are combined.
Survival data, or more general time-to-event data, occur in many areas, including medicine, biology, engineering, economics, and demography, but standard methods have previously required that all time variables be univariate and independent. This book extends the field by allowing for multivariate times. Applications where such data appear include the survival of twins, the survival of married couples and families, time to failure of the right and left kidney in diabetic patients, life history data with time to outbreak of disease, complications and death, recurrent episodes of diseases, and cross-over studies with time responses. As the field is rather new, the concepts and the possible types of data are described in detail, and basic aspects of how dependence can appear in such data are discussed. Four different approaches to the analysis of such data are presented. The multi-state models, in which a life history is described as the subject moving from state to state, constitute the most classical approach. The Markov models make up an important special case, but it is also described how easily more general models are set up and analyzed. Frailty models, which are random effects models for survival data, make up a second approach, extending from the simplest shared frailty models, which are considered in detail, to models with more complicated dependence structures over individuals or over time. Marginal modelling has become a popular approach to evaluate the effect of explanatory factors in the presence of dependence, but without having specified a statistical model for the dependence. Finally, the completely non-parametric approach to bivariate censored survival data is described. This book is aimed at investigators who need to analyze multivariate survival data, but due to its focus on the concepts and the modelling aspects, it is also useful for persons interested in such data but without a statistical education.
It can be used as a textbook for a graduate course in multivariate survival data. It is written from an applied point of view and covers all essential aspects of applying multivariate survival models. More theoretical evaluations, such as asymptotic theory, are also described, but only to the extent useful in applications and for understanding the models. For reading the book, it is useful, but not necessary, to have an understanding of univariate survival data. Philip Hougaard is a statistician at the pharmaceutical company Novo Nordisk. He has a Ph.D. in nonlinear regression models and is Doctor of Science based on a thesis on frailty models. He is associate editor of Biometrics and Lifetime Data Analysis. He has published over 80 papers in the statistical and medical literature.
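The univariate background the book builds on can be sketched with the standard Kaplan-Meier (product-limit) estimator of a survival function; the follow-up times below are invented for illustration, and this is classical material, not the book's multivariate methodology:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t); events: 1 = event observed, 0 = censored.
    Convention: at tied times, events are counted before censored subjects
    leave the risk set."""
    survival = {}
    s = 1.0
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - deaths / at_risk
        survival[t] = s
    return survival

# invented data: six subjects, follow-up times and event indicators
times  = [1, 2, 2, 3, 4, 5]
events = [1, 1, 0, 1, 0, 1]
surv = kaplan_meier(times, events)  # e.g. S(3) = 5/6 * 4/5 * 2/3 = 4/9
```

The multivariate extensions the book treats (frailty, multi-state, marginal models) all reduce to estimators of this kind when the dependence structure is switched off.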
Morphometrics is concerned with the study of variation and change in the form (size and shape) of organisms or objects, adding a quantitative element to descriptions and thereby facilitating the comparison of different objects and organisms. This volume provides an introduction to morphometrics in a clear and simple way, without recourse to complex mathematics and statistics. This introduction is followed by a series of case studies describing the variety of applications of morphometrics, from paleontology and evolutionary ecology to the analysis of archaeological artifacts. This is followed by a presentation of future applications of morphometrics and state-of-the-art software for analyzing and comparing shape.
This is the proceedings of the "8th IMACS Seminar on Monte Carlo Methods", held from August 29 to September 2, 2011 in Borovets, Bulgaria, and organized by the Institute of Information and Communication Technologies of the Bulgarian Academy of Sciences in cooperation with the International Association for Mathematics and Computers in Simulation (IMACS). Included are 24 papers which cover all topics presented in the sessions of the seminar: stochastic computation and complexity of high-dimensional problems, sensitivity analysis, high-performance computations for Monte Carlo applications, stochastic metaheuristics for optimization problems, sequential Monte Carlo methods for large-scale problems, and semiconductor devices and nanostructures. The history of the IMACS Seminar on Monte Carlo Methods goes back to April 1997, when the first MCM Seminar was organized in Brussels: 1st IMACS Seminar, 1997, Brussels, Belgium; 2nd IMACS Seminar, 1999, Varna, Bulgaria; 3rd IMACS Seminar, 2001, Salzburg, Austria; 4th IMACS Seminar, 2003, Berlin, Germany; 5th IMACS Seminar, 2005, Tallahassee, USA; 6th IMACS Seminar, 2007, Reading, UK; 7th IMACS Seminar, 2009, Brussels, Belgium; 8th IMACS Seminar, 2011, Borovets, Bulgaria.
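As a minimal illustration of the plain Monte Carlo method underlying these topics (the example, seed and sample size are mine, not from the proceedings), here is the classic estimate of pi from random points in the unit quarter-circle:

```python
import random

def mc_pi(n, seed=0):
    """Estimate pi as 4 times the fraction of uniform random points (x, y)
    in [0,1)^2 that fall inside the unit quarter-circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n

estimate = mc_pi(100_000)  # error shrinks like O(1/sqrt(n))
```

The O(1/sqrt(n)) convergence rate, independent of dimension, is precisely why the seminar's themes center on variance reduction, sensitivity analysis and high-performance computation rather than on the basic estimator itself.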
In September 1997, the Working Week on Resolution of Singularities was held at Obergurgl in the Tyrolean Alps. Its objective was to manifest the state of the art in the field and to formulate major questions for future research. The four courses given during this week were written up by the speakers and make up part I of this volume. They are complemented in part II by fifteen selected contributions on specific topics and resolution theories. The volume is intended to provide a broad and accessible introduction to resolution of singularities leading the reader directly to concrete research problems.
The book aims to prioritise what needs mastering and presents the content in the most understandable, concise and pedagogical way, illustrated by real market examples. Given the variety and complexity of the material the book covers, the author sorts through a vast array of topics in a subjective way, relying upon more than twenty years of experience as a market practitioner. The book only requires the reader to be knowledgeable in the basics of algebra and statistics. The mathematical formulae are fully proven only when the proof brings some useful insight. These formulae are translated from algebra into plain English to aid understanding, as the vast majority of practitioners involved in the financial markets are not required to compute or calculate prices or sensitivities themselves, since they have access to data providers. Thus, the intention of this book is for the practitioner to gain a deeper understanding of these calculations, both for safety reasons - it is better to understand what is behind the data we manipulate - and to be able to appreciate the magnitude of the prices we are confronted with and to draft a rough calculation independently of the market data. The author has avoided excessive formalism where possible. Formalism secures the outputs of research but may, in other circumstances, burden the understanding of non-mathematicians; an example is the chapter dedicated to the basics of stochastic calculus.
The book is divided into two parts: first, the deterministic world, starting from yield curve building and related calculations (spot rates, forward rates, discrete versus continuous compounding, etc.), and continuing with the valuation of spot instruments (short-term rates, bonds, currencies and stocks) and of forward instruments (forward forex, FRAs and variants, swaps and futures); second, the probabilistic world, starting with the basics of stochastic calculus and the alternative ARMA-to-GARCH approach, and continuing with derivative pricing: options, second-generation options, volatility, and credit derivatives. This second part is completed by a chapter dedicated to market performance and risk measures, and a chapter widening the scope of quantitative models beyond the Gaussian hypothesis and evidencing the potential troubles linked to derivative pricing models.
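The deterministic calculations mentioned (spot versus forward rates, discrete versus continuous compounding) can be sketched in a few lines; the relation used is the standard no-arbitrage identity with annual compounding, and the rates are invented for illustration:

```python
import math

def implied_forward(spot1, spot2):
    """One-year forward rate one year out, implied by the 1y and 2y annually
    compounded spot rates via no-arbitrage: (1 + s2)^2 = (1 + s1) * (1 + f)."""
    return (1 + spot2) ** 2 / (1 + spot1) - 1

def continuous_from_discrete(r):
    """Continuously compounded rate equivalent to an annually compounded rate r."""
    return math.log(1 + r)

fwd = implied_forward(0.02, 0.025)       # roughly 3.00%
rc = continuous_from_discrete(0.025)     # slightly below 2.5%
```

The forward rate exceeding both spot rates reflects the upward-sloping invented curve; with real market data the same identity is applied between every pair of adjacent maturities to bootstrap the forward curve.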
This volume presents two reviews from the cutting edge of Russian plasma physics research. "Plasma Models of Atom and Radiative-Collisional Processes", by V.A. Astapenko, L.A. Bureyeva and V.S. Lisitsa, is devoted to a unified description of the atomic core polarization effects in the free-free, free-bound and bound-bound transitions of charged particles in the field of a multielectron atom. These effects were treated independently in various applications for more than 40 years. The universal description is based on statistical plasma models of atomic processes with complex ions and atoms, and makes it possible to extract general scaling laws for the processes above. This review is the first attempt to give a universal approach to the problem. All types of transitions are considered in the framework of both classical and quantum models for the energy scattering of a particle interacting with the atomic core; topics include the polarization effects of atoms and highly charged ions, polarization phenomena in the photoeffect, and a new polarization channel in recombination and in the Bremsstrahlung of electrons and of relativistic and heavy particles on complex atoms and ions. "Asymptotic Theory of Charge Exchange and Mobility Processes for Atomic Ions", by B.M. Smirnov, reviews the process of resonant charge exchange, and also the transport processes (mobility and diffusion coefficients) for ions in parent gases, which are determined by resonant electron transfer. The basis is the asymptotic theory of resonant charge exchange, which allows us to evaluate cross sections for all the elements and estimate their accuracy. A simple version of the asymptotic theory is used, in which a parameter is the ratio between an atomic cross section and the cross section of resonant charge exchange. The cross section of this process is expressed through asymptotic parameters of the transferring electron in the atom. Experimental results are also used, but their accuracy is usually lower than can be obtained by the asymptotic theory.
The centerpiece of the thesis is the search for muon neutrino to electron neutrino oscillations, which would indicate a non-zero mixing angle between the first and third neutrino generations (θ13), currently the holy grail of neutrino physics. The optimal extraction of the electron neutrino oscillation signal is based on the novel library event matching (LEM) method, which Ochoa developed and implemented together with colleagues at Caltech and at Cambridge, and which improves the reach of MINOS (Main Injector Neutrino Oscillation Search) for establishing an oscillation signal over any other method. LEM will now be the basis for MINOS's final results, and will likely keep MINOS at the forefront of this field until it completes its data taking in 2011. Ochoa and his colleagues also developed the successful plan to run MINOS with a beam tuned for antineutrinos, to make a sensitive test of CPT symmetry by comparing the inter-generational mass splitting for neutrinos and antineutrinos. Ochoa's in-depth, creative approach to the solution of a variety of complex experimental problems is an outstanding example for graduate students and longtime practitioners of experimental physics alike. Some of the most exciting results in this field to emerge in the near future may find their foundations in this thesis.
This monograph aims to fill a void by making available a source book which first systematically describes all the available uniqueness and nonuniqueness criteria for ordinary differential equations, and compares and contrasts the merits of these criteria, and second, discusses open problems and offers some directions towards possible solutions.
This book is an introduction to convolution operators with matrix-valued almost periodic or semi-almost periodic symbols. The basic tools for the treatment of the operators are Wiener-Hopf factorization and almost periodic factorization. These factorizations are systematically investigated and explicitly constructed for interesting concrete classes of matrix functions. The material covered by the book ranges from classical results through a first comprehensive presentation of the core of the theory of almost periodic factorization up to the latest achievements, such as the construction of factorizations by means of the Portuguese transformation and the solution of corona theorems.