This research annual presents state-of-the-art studies in the integration of mathematical planning and management. As the literature and techniques in financial planning and management become increasingly complex, our monographs aid in the dissemination of research efforts in quantitative financial analysis. Topics include cash management, capital budgeting, financial decisions, portfolio management and performance analysis, and financial planning models.
This book is the first to report on theoretical breakthroughs in the control of complex dynamical systems developed by collaborative researchers in the two fields of dynamical systems theory and control theory. Its basic point of view encompasses three kinds of complexity: bifurcation phenomena subject to model uncertainty, complex behavior including periodic/quasi-periodic orbits as well as chaotic orbits, and network complexity emerging from dynamical interactions between subsystems. Analysis and Control of Complex Dynamical Systems offers a valuable resource for mathematicians, physicists, and biophysicists, as well as for researchers in nonlinear science and control engineering, allowing them to develop a better fundamental understanding of the analysis and control synthesis of such complex systems.
This book is designed to provide valuable insight into how to improve the return on your investment when playing the lottery. While it does not promise that you will win more often, it does show you how to improve the odds of winning larger amounts when your numbers do come up. So, when you do win that million-dollar jackpot, you will be less likely to have to share it with anyone else. Among the intriguing topics covered are the most popular (and the most foolish) combinations of numbers, why it is impossible to improve the odds of any legitimate lottery, how popular (and thus unprofitable) an attractive-looking ticket might be, why not to follow the suggested numbers from so-called "expert advisors" and why it is important to avoid winning combinations of past drawings. With this book and a little luck, the dream of winning millions might just come true.
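The reasoning here is simple expected-value arithmetic: every combination is equally likely to be drawn, but with a fixed jackpot a winning ticket's payout shrinks in proportion to the number of other winners sharing it, so combinations that many other players also pick are worth less. The following is a minimal sketch of that calculation; the jackpot size, player count, Poisson sharing model, and pick probabilities are illustrative assumptions, not figures from the book.

```python
# Expected payout of one winning ticket when the jackpot may be shared.
# The number of *other* winners is modeled as Poisson with mean
# lam = n_players * pick_probability; this model and all numbers below are
# illustrative assumptions, not taken from the book.
import math

def expected_share(jackpot: float, n_players: int, pick_probability: float) -> float:
    """E[jackpot / (1 + K)] with K ~ Poisson(lam), which equals jackpot * (1 - exp(-lam)) / lam."""
    lam = n_players * pick_probability
    return jackpot if lam == 0 else jackpot * (1.0 - math.exp(-lam)) / lam

JACKPOT = 1_000_000
N_PLAYERS = 5_000_000
P_RANDOM = 1 / 13_983_816   # chance another player picks your exact 6-of-49 combination at random

print(f"randomly chosen combination:  {expected_share(JACKPOT, N_PLAYERS, P_RANDOM):,.0f}")
print(f"combination 50x more popular: {expected_share(JACKPOT, N_PLAYERS, 50 * P_RANDOM):,.0f}")
```

Both tickets are equally likely to win, but under these assumptions the popular combination pays out far less when it does, which is the effect the book exploits.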
This volume contains the articles presented at the 18th International Meshing Roundtable (IMR), organized in part by Sandia National Laboratories and held October 25-28, 2009 in Salt Lake City, Utah, USA. The volume presents recent results in mesh generation and adaptation, which have applications to finite element simulation. It introduces theoretical and novel ideas with practical potential.
This book covers recent developments in the understanding, quantification, and exploitation of entanglement in spin chain models from both condensed matter and quantum information perspectives. Spin chain models are at the foundation of condensed matter physics and quantum information technologies and elucidate many fundamental phenomena such as information scrambling, quantum phase transitions, and many-body localization. Moreover, many quantum materials and emerging quantum devices are well described by spin chains. Comprising accessible, self-contained chapters written by leading researchers, this book is essential reading for graduate students and researchers in quantum materials and quantum information. The coverage is comprehensive, from the fundamental entanglement aspects of quantum criticality, non-equilibrium dynamics, classical and quantum simulation of spin chains through to their experimental realizations, and beyond into machine learning applications.
- The book discusses recent techniques in NGS data analysis, material much needed by biologists (students and researchers) in the wake of numerous genomic projects and the trend toward genomic research.
- The book includes both theory and practice for NGS data analysis, so readers will understand the concepts and learn how to perform the analysis using the most recent programs.
- The steps of the application workflows are written in a manner that can be followed for related projects.
- Each chapter includes worked examples with real data available in the NCBI databases. Programming code and outputs are accompanied by explanations.
- The book's content is suitable as teaching material for biology and bioinformatics students.
- Meets the requirements of a complete semester course on sequencing data analysis.
- Covers the latest applications of next-generation sequencing.
- Covers data preprocessing, genome assembly, variant discovery, gene profiling, epigenetics, and metagenomics.
After about a century of success, physicists feel the need to probe the limits of validity of special-relativity-based theories. This book is the outcome of a special seminar held on this topic. The authors gather in a single volume an extensive collection of introductions and reviews of the various facets involved, including a detailed discussion of philosophical and historical aspects.
This monograph provides the first up-to-date and self-contained presentation of a recently discovered mathematical structure, the Schrödinger-Virasoro algebra. Just as Poincaré invariance and conformal (Virasoro) invariance play a key role in understanding, respectively, elementary particles and two-dimensional equilibrium statistical physics, this algebra of non-relativistic conformal symmetries may be expected to apply naturally to the study of some models of non-equilibrium statistical physics, or more specifically in the context of recent developments related to the non-relativistic AdS/CFT correspondence. The study of the structure of this infinite-dimensional Lie algebra touches upon topics as varied as statistical physics, vertex algebras, Poisson geometry, integrable systems and supergeometry, as well as representation theory, the cohomology of infinite-dimensional Lie algebras, and the spectral theory of Schrödinger operators.
The Second Edition of this book includes an abundance of examples to illustrate advanced concepts and presents, in a textbook setting, the algorithms for bivariate polynomial matrix factorization results that form the basis of two-dimensional systems theory. Algorithms and their implementation using symbolic algebra are emphasized.
The complexity of issues requiring rational decision making continues to grow, and such decisions are becoming more and more difficult despite advances in methodology and tools for decision support and in other areas of research. Globalization, the interlinking of environmental, industrial, social and political issues, and the rapid speed of change all contribute to this increase in complexity. Specialized knowledge about decision-making processes and their support is increasing, but the large spectrum of approaches presented in the literature is typically illustrated only by simple examples. Moreover, the integration of model-based decision support methodologies and tools with the specialized model-based knowledge developed for handling real problems in environmental, engineering, industrial, economic, social and political activities is often not satisfactory. Therefore, there is a need to present the state of the art of methodology and tools for the development of model-based decision support systems, and to illustrate this state by applications to various complex real-world decision problems. The monograph reports many years of experience of many researchers, who have not only contributed to developments in operations research but have also succeeded in integrating the knowledge and craft of various disciplines into several modern decision support systems that have been applied to actual complex decision-making processes in various fields of policy making. The experience presented in this book will be of value to researchers and practitioners in various fields. The issues discussed in this book gain in importance with the development of the new era of the information society, in which information, knowledge, and ways of processing them become a decisive part of human activities. The examples presented in this book illustrate how various methods and tools of model-based decision support can actually be used to help modern decision makers who face complex problems.
Overview of the contents: The first part of this three-part book presents the methodological background and characteristics of the modern decision-making environment and the value of model-based decision support, thus addressing current challenges of decision support. It also provides the methodology for building and analyzing mathematical models that represent underlying physical and economic processes and that are useful for modern decision makers at various stages of decision making. These methods support not only the analysis of Pareto-efficient solutions that correspond best to decision maker preferences but also allow the use of other modeling concepts such as soft constraints, soft simulation, or inverse simulation. The second part describes various types of tools that are used for the development of decision support systems. These include tools for modeling, simulation, and optimization, tools supporting choice, and user interfaces. The described tools comprise both standard, commercially available software and nonstandard, public domain or shareware software robust enough to be used for complex applications. All four environmental applications presented in the third part of the book (regional water quality management, land use planning, cost-effective policies aimed at improving European air quality, and energy planning with environmental implications) rely on many years of cooperation between the authors of the book and several IIASA projects, and with many researchers from IIASA's wide network of collaborating institutions.
All these applications are characterized by an intensive use of model-based decision support. Finally, the appendix contains a short description of some of the tools described in the book that are available from IIASA, free of charge, for research and educational purposes. The experiences reported in this book indicate that the development of DSSs for strategic environmental decision making should be a joint effort involving experts in the subject area, modelers, and decision support experts. For the other experiences discussed in this book, the authors stress the importance of good databases and good libraries of tools. One of the most important requirements is a modular structure of a DSS that enhances the reusability of system modules. In such modular structures, user interfaces play an important role. The book shows how modern achievements in mathematical programming and computer science may be exploited for supporting decision making, especially about strategic environmental problems. It presents the methodological background of various methods for model-based decision support and reviews methods and tools for model development and analysis. The methods and tools are amply illustrated with extensive applications. Audience: This book will be of interest to researchers and practitioners in the fields of model development and analysis, model-based decision analysis and support (particularly in the environment, economics, agriculture, engineering, and negotiations areas), and mathematical programming. Understanding some parts of the text requires a background in mathematics and operational research, but several chapters of the book will also be of value to readers without such a background. The monograph is also suitable for use as a textbook for advanced (Master's and Ph.D.) courses in operations research, decision analysis, decision support, and various environmental studies (depending on the program, different parts of the book may be emphasized).
The last two decades have seen enormous developments in statistical methods for incomplete data. The EM algorithm and its extensions, multiple imputation, and Markov chain Monte Carlo provide a set of flexible and reliable tools for inference in large classes of missing-data problems. Yet, in practical terms, those developments have had surprisingly little impact on the way most data analysts handle missing values on a routine basis.
The book is devoted to the rigorous derivation of macroscopic mathematical models as a homogenization of exact mathematical models at the microscopic level. The idea is quite natural: one first describes the joint motion of the elastic skeleton and the fluid in the pores at the microscopic level by means of classical continuum mechanics, and then uses homogenization to find appropriate approximate models (homogenized equations). The Navier-Stokes equations still hold at this scale, where pore sizes are on the order of 5-15 microns. Thus the macroscopic mathematical models obtained remain within the limits of physical applicability. These mathematical models describe different physical processes of liquid filtration and acoustics in poroelastic media, such as isothermal or non-isothermal filtration, hydraulic shock, isothermal or non-isothermal acoustics, diffusion-convection, and filtration and acoustics in composite media or in porous fractured reservoirs. The research is based upon Nguetseng's two-scale convergence method.
The subject of General Cost Structure Analysis is the quantitative analysis of cost structures with a minimum of a priori assumptions on firm technology and on firm behaviour. The study develops an innovative line of attack building on the primal characterisation of the firm's generalised shadow cost minimisation program. The resulting Flexible Cost Model (FCM) is highly conducive to modern panel data techniques and allows for a flexible specification not only of firm technology but also of firm behaviour, as shadow prices can be made input-, time- and firm-specific. FCM is applied to a panel dataset on several hundred of the largest banking institutions in the G-5 (France, Germany, Japan, United Kingdom, United States) in the 1989-1996 period. The main empirical results are summarised. In particular, FCM provides new insights into the existence of scale economies in banking and an assessment of the extent of excess labour in the G-5 banking industries, particularly as a consequence of labour market rigidities in a context of rapidly declining technology prices. FCM also provides an evaluation of the sources of the cost advantage of American and British banks in comparison to Continental European banks.
Covers the state of the art in superfluidity and superconductivity. Superfluid States of Matter addresses the phenomenon of superfluidity/superconductivity through an emergent, topologically protected constant of motion and covers topics developed over the past 20 years. The approach is based on the idea of separating universal classical-field superfluid properties of matter from the underlying system's "quanta." The text begins by deriving the general physical principles behind superfluidity/superconductivity within the classical-field framework and provides a deep understanding of all key aspects in terms of the dynamics and statistics of a classical-field system. It proceeds by explaining how this framework emerges in realistic quantum systems, with examples that include liquid helium, high-temperature superconductors, ultra-cold atomic bosons and fermions, and nuclear matter. The book also offers several powerful modern approaches to the subject, such as functional and path integrals. Comprising 15 chapters, this text:
- Establishes the fundamental macroscopic properties of superfluids and superconductors within the paradigm of the classical matter field
- Deals with a single-component neutral matter field
- Considers fundamentals and properties of superconductors
- Describes new physics of superfluidity and superconductivity that arises in multicomponent systems
- Presents the quantum-field perspective on the conditions under which the classical-field description is relevant in bosonic and fermionic systems
- Introduces the path integral formalism
- Shows how Feynman path integrals can be efficiently simulated with the worm algorithm
- Explains why nonsuperfluid (insulating) ground states of regular and disordered bosons occur under appropriate conditions
- Explores superfluid solids (supersolids)
- Discusses the rich dynamics of vortices and various aspects of superfluid turbulence at T = 0
- Provides an account of BCS theory for the weakly interacting Fermi gas
- Highlights and analyzes the most crucial developments that have led to the current understanding of superfluidity and superconductivity
- Reviews the variety of superfluid and superconducting systems available today in nature and the laboratory, as well as states whose experimental realization is currently being actively pursued
This open access proceedings volume brings together selected, peer-reviewed contributions presented at the Stochastic Transport in Upper Ocean Dynamics (STUOD) 2021 Workshop, held virtually and in person at Imperial College London, UK, September 20-23, 2021. The STUOD project is supported by an ERC Synergy Grant and led by Imperial College London, the National Institute for Research in Computer Science and Automatic Control (INRIA) and the French Research Institute for Exploitation of the Sea (IFREMER). The project aims to deliver new capabilities for assessing variability and uncertainty in upper ocean dynamics. It will provide decision makers with a means of quantifying the effects of local patterns of sea level rise, heat uptake, carbon storage and change of oxygen content and pH in the ocean. Its multimodal monitoring will enhance the scientific understanding of marine debris transport, tracking of oil spills and accumulation of plastic in the sea. All topics of these proceedings are essential to the scientific foundations of oceanography, which has a vital role in climate science. Studies convened in this volume focus on a range of fundamental areas, including: Observations at a high resolution of upper ocean properties such as temperature, salinity, topography, wind, waves and velocity; Large scale numerical simulations; Data-based stochastic equations for upper ocean dynamics that quantify simulation error; Stochastic data assimilation to reduce uncertainty. These fundamental subjects in modern science and technology are urgently required in order to meet the challenges of climate change faced today by human society. This proceedings volume represents a lasting legacy of crucial scientific expertise to help meet this ongoing challenge, for the benefit of academics and professionals in pure and applied mathematics, computational science, data analysis, data assimilation and oceanography.
For more than five decades Bertram Kostant has been one of the major architects of modern Lie theory. Virtually all his papers are pioneering, with deep consequences, many giving rise to whole new fields of activity. His interests span a tremendous range of Lie theory, from differential geometry to representation theory, abstract algebra, and mathematical physics. It is striking to note that Lie theory (and symmetry in general) now occupies an ever larger role in mathematics than it did in the fifties. Now in the sixth decade of his career, he continues to produce results of astonishing beauty and significance, for which he is invited to lecture all over the world. This is the fourth volume (1985-1995) of a five-volume set of Bertram Kostant's collected papers. A distinguishing feature of this fourth volume is Kostant's commentaries and summaries of his papers in his own words.
Learn the basics of white noise theory with White Noise Distribution Theory. This book covers the mathematical foundation and key applications of white noise theory without requiring advanced knowledge in this area. This instructive text specifically focuses on relevant application topics such as integral kernel operators, Fourier transforms, Laplacian operators, white noise integration, Feynman integrals, and positive generalized functions. Extremely well-written by one of the field's leading researchers, White Noise Distribution Theory is destined to become the definitive introductory resource on this challenging topic.
This book is an enlarged second edition of a monograph published in the Springer AGEM2-Series, 2009. It presents, in a consistent and unified overview, a setup of the theory of spherical functions of mathematical (geo-)sciences. The content shows a twofold transition: First, the natural transition from scalar to vectorial and tensorial theory of spherical harmonics is given in a coordinate-free context, based on variants of the addition theorem, Funk-Hecke formulas, and Helmholtz as well as Hardy-Hodge decompositions. Second, the canonical transition from spherical harmonics via zonal (kernel) functions to the Dirac kernel is given in close orientation to an uncertainty principle classifying the space/frequency (momentum) behavior of the functions for purposes of data analysis and (geo-)application. The whole palette of spherical functions is collected in a well-structured form for modeling and simulating the phenomena and processes occurring in the Earth's system. The result is a work which, while reflecting the present state of knowledge in a time-related manner, claims to be of largely timeless significance in (geo-)mathematical research and teaching.
The main goal of this book is to elucidate what kind of experiment must be performed in order to determine the full set of independent parameters which can be extracted and calculated from theory, where electrons, photons, atoms, ions, molecules, or molecular ions may serve as the interacting constituents of matter. The feasibility of such 'perfect' and/or 'complete' experiments, providing complete quantum mechanical knowledge of the process, is associated with the enormous potential of modern research techniques, in both experiment and theory. It is difficult to overestimate the role of theory in setting up the complete experiment, starting with the fact that an experiment can be complete only within a certain theoretical framework, and ending with the direct prescription of what should be measured, and under what conditions, to make the experiment 'complete'. The language of the related theory is the language of quantum mechanical amplitudes and their relative phases. This book captures the spirit of research in the direction of the complete experiment in atomic and molecular physics, considering some of the basic quantum processes: scattering, Auger decay and photoionization. It includes a description of the experimental methods used to realize, step by step, the complete experiment up to the level of the amplitudes and phases. The corresponding arsenal includes, beyond determining the total cross section, the observation of angle- and spin-resolved quantities, photon polarization and correlation parameters, measurements applying coincidence techniques, the preparation of initially polarized targets, and even more sophisticated methods. The 'complete' experiment remains, to this day, hard to perform. Therefore, much attention is paid to the results of state-of-the-art experiments providing detailed information on the process, and to their comparison with the related theoretical approaches, among them relativistic multi-configurational Dirac-Fock, convergent close-coupling, Breit-Pauli R-matrix, and relativistic distorted wave approaches, as well as Green's operator methods. This book has been written in honor of Herbert Walther and his major contributions to the field, but also to stimulate advanced Bachelor's and Master's students by demonstrating that atomic and molecular scattering physics today offers exciting opportunities for further advancing the field.
Considerable attention from the international scientific community is currently focused on the wide ranging applications of wavelets. For the first time, the field's leading experts have come together to produce a complete guide to wavelet transform applications in medicine and biology. Wavelets in Medicine and Biology provides accessible, detailed, and comprehensive guidelines for all those interested in learning about wavelets and their applications to biomedical problems.
This book introduces the basic concept of a dissipative soliton, before going on to explore recent theoretical and experimental results for various classes of dissipative optical solitons, high-energy dissipative solitons and their applications, and mode-locked fiber lasers. A soliton is a concept which describes various physical phenomena ranging from solitary waves forming on water to ultrashort optical pulses propagating in an optical fiber. While solitons are usually attributed to integrability, in recent years the notion of a soliton has been extended to various systems which are not necessarily integrable. Until now, the main emphasis has been given to well-known conservative soliton systems, but new avenues of inquiry were opened when physicists realized that solitary waves did indeed exist in a wide range of non-integrable and non-conservative systems, leading to the concept of so-called dissipative optical solitons. Dissipative optical solitons have many unique properties which differ from those of their conservative counterparts. For example, except in very few cases, they form zero-parameter families, and their properties are completely determined by the external parameters of the optical system. They can exist indefinitely in time, as long as these parameters stay constant. These features of dissipative solitons are highly desirable for several applications, such as in-line regeneration of optical data streams and generation of stable trains of laser pulses by mode-locked cavities.
Metrological data is known to be blurred by the imperfections of the measuring process. In retrospect, for about two centuries regular or constant errors were not a focal point of experimental activity; only irregular or random errors were. Today's notion of unknown systematic errors is in line with this. Confusingly enough, the worldwide practiced approach of belatedly admitting those unknown systematic errors amounts to treating them as being random, too. This book discusses a new error concept that dispenses with the common practice of randomizing unknown systematic errors. Instead, unknown systematic errors are treated as what they physically are, namely as constants that are unknown with respect to magnitude and sign. The ideas considered in this book yield a procedure that steadily localizes the true values of the measurands and consequently ensures traceability.
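To make the contrast concrete, here is a minimal numerical sketch of combining random scatter with an unknown systematic error treated as a constant bounded in magnitude rather than randomized; the readings, the bound f_s, and the linear combination shown are illustrative assumptions, not the book's specific formalism.

```python
# Illustrative sketch: localize the true value of a measurand from repeated readings
# when an unknown systematic error is treated as an unknown constant bounded by f_s,
# rather than being randomized. All numbers are made up for illustration.
import numpy as np
from scipy import stats

readings = np.array([10.12, 10.08, 10.15, 10.11, 10.09, 10.13])
n = readings.size
mean = readings.mean()
s = readings.std(ddof=1)                  # empirical standard deviation (random scatter)

t_factor = stats.t.ppf(0.975, df=n - 1)   # Student t, 95% two-sided
u_random = t_factor * s / np.sqrt(n)      # confidence half-width for the mean

f_s = 0.05                                # assumed bound on the unknown systematic error
u_total = u_random + f_s                  # added linearly: the systematic part has a fixed
                                          # (but unknown) sign, so it does not average away

print(f"result: {mean:.3f} +/- {u_total:.3f}  (interval intended to localize the true value)")
```

Under this treatment, repeating the measurement shrinks only the random term; the bounded systematic offset stays put, which is why it is kept as a constant rather than folded into the statistical scatter.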
Error detecting codes are very popular for error control in practical systems, for two reasons. First, such codes can be used to provide any desired reliability of communication over any noisy channel. Second, implementation is usually much simpler than for a system using error correcting codes. To consider a particular code for use in such a system, it is very important to be able to calculate or estimate the probability of undetected error. For the binary symmetric channel, the probability of undetected error can be expressed in terms of the weight distribution of the code. The first part of the book gives a detailed description of all known methods to calculate or estimate the probability of undetected error, for the binary symmetric channel in particular, but a number of other channel models are also considered. The second part of the book describes a number of protocols for feedback communication systems (ARQ systems), with methods for the optimal choice of error detecting codes for the protocols. Results have been collected from many sources and given a unified presentation. The results are presented in a form which makes them accessible to the telecommunication system designer as well as the coding theory researcher and student. The system designer may find the presentation of CRC codes and the system performance analysis techniques particularly useful. The coding theorist will find a detailed account of a part of coding theory which is usually just mentioned in most textbooks and which contains a number of interesting and useful results, as well as many challenging open problems. Audience: Essential for students, practitioners and researchers working in communications and coding theory. An excellent text for an advanced course on the subject.
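For a linear code on the binary symmetric channel, the relationship mentioned above is the standard weight-enumerator sum P_ue(p) = sum over w >= 1 of A_w p^w (1-p)^(n-w), where A_w counts codewords of Hamming weight w: an error goes undetected exactly when the channel's error pattern coincides with a nonzero codeword. The following short sketch evaluates it, using the [7,4] Hamming code's weight distribution as an illustrative example (this particular code is a choice made here, not one singled out by the book).

```python
# Probability of undetected error on a binary symmetric channel with crossover
# probability p, computed from a linear code's weight distribution {A_w}:
#     P_ue(p) = sum over w >= 1 of A_w * p**w * (1 - p)**(n - w)
# Example weight distribution: the [7,4] Hamming code, A_0=1, A_3=7, A_4=7, A_7=1.

def undetected_error_prob(weights: dict[int, int], n: int, p: float) -> float:
    """Sum the BSC undetected-error probability over the nonzero codeword weights."""
    return sum(a * p**w * (1 - p) ** (n - w) for w, a in weights.items() if w > 0)

HAMMING_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}

for p in (1e-1, 1e-2, 1e-3):
    print(f"p = {p:g}:  P_ue = {undetected_error_prob(HAMMING_7_4, 7, p):.3e}")
```

The same routine works for any code whose weight distribution is known; the book's methods address the harder cases where the distribution must itself be estimated.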
This book is for students taking either a first-year graduate statistics course or an advanced undergraduate statistics course in Psychology. Enough introductory statistics is briefly reviewed to bring everyone up to speed. The book is highly user-friendly without sacrificing rigor, not only in anticipating students' questions, but also in paying attention to the introduction of new methods and notation. In addition, many topics given only casual or superficial treatment elsewhere are elaborated here, such as: the nature of interaction and its interpretation, in terms of theory and response scale transformations; generalized forms of analysis of covariance; extensive coverage of multiple comparison methods; coverage of nonorthogonal designs; and discussion of functional measurement. The text is structured for reading in multiple passes of increasing depth; for the student who desires deeper understanding, there are optional sections; for the student who is or becomes proficient in matrix algebra, there are still deeper optional sections. The book is also equipped with an excellent set of class-tested exercises and answers.