This volume presents the refereed proceedings of the Guangzhou International Symposium on Computational Mathematics, held at the Zhongshan University, People's Republic of China. Nearly 90 international mathematicians examine numerical optimization methods, wavelet analysis, computational approximation, numerical solutions of differential and integral equations, numerical linear algebra, inverse and ill-posed problems, geometric modelling, and signal and image processing and their applications.
In 1994 Peter Shor [65] published a factoring algorithm for a quantum computer that finds the prime factors of a composite integer N more efficiently than is possible with the known algorithms for a classical computer. Since the difficulty of the factoring problem is crucial for the security of a public key encryption system, interest (and funding) in quantum computing and quantum computation suddenly blossomed. Quantum computing had arrived. The study of the role of quantum mechanics in the theory of computation seems to have begun in the early 1980s with the publications of Paul Benioff [6], [7], who considered a quantum mechanical model of computers and the computation process. A related question was discussed shortly thereafter by Richard Feynman [35], who began from a different perspective by asking what kind of computer should be used to simulate physics. His analysis led him to the belief that with a suitable class of "quantum machines" one could imitate any quantum system.
Hardbound. This research annual presents state-of-the-art studies in the integration of mathematical planning and management. As the literature and techniques in financial planning and management become increasingly complex, our monographs aid in the dissemination of research efforts in quantitative financial analysis. Topics include cash management, capital budgeting, financial decisions, portfolio management and performance analysis, and financial planning models.
This book is the first to report on theoretical breakthroughs on control of complex dynamical systems developed by collaborative researchers in the two fields of dynamical systems theory and control theory. Its basic point of view rests on three kinds of complexity: bifurcation phenomena subject to model uncertainty, complex behavior including periodic/quasi-periodic orbits as well as chaotic orbits, and network complexity emerging from dynamical interactions between subsystems. Analysis and Control of Complex Dynamical Systems offers a valuable resource for mathematicians, physicists, and biophysicists, as well as for researchers in nonlinear science and control engineering, allowing them to develop a better fundamental understanding of the analysis and control synthesis of such complex systems.
This volume contains the articles presented at the 18th International Meshing Roundtable (IMR) organized, in part, by Sandia National Laboratories and held October 25-28, 2009 in Salt Lake City, Utah, USA. The volume presents recent results in mesh generation and adaptation, which have applications in finite element simulation. It introduces theoretical and novel ideas with practical potential.
This book covers recent developments in the understanding, quantification, and exploitation of entanglement in spin chain models from both condensed matter and quantum information perspectives. Spin chain models are at the foundation of condensed matter physics and quantum information technologies and elucidate many fundamental phenomena such as information scrambling, quantum phase transitions, and many-body localization. Moreover, many quantum materials and emerging quantum devices are well described by spin chains. Comprising accessible, self-contained chapters written by leading researchers, this book is essential reading for graduate students and researchers in quantum materials and quantum information. The coverage is comprehensive, from the fundamental entanglement aspects of quantum criticality, non-equilibrium dynamics, classical and quantum simulation of spin chains through to their experimental realizations, and beyond into machine learning applications.
This book is designed to provide valuable insight into how to improve the return on your investment when playing the lottery. While it does not promise that you will win more often, it does show you how to improve the odds of winning larger amounts when your numbers do come up. So, when you do win that million-dollar jackpot, you will be less likely to have to share it with anyone else. Among the intriguing topics covered are the most popular (and the most foolish) combinations of numbers, why it is impossible to improve the odds of any legitimate lottery, how popular (and thus unprofitable) an attractive-looking ticket might be, why not to follow the suggested numbers from so-called "expert advisors" and why it is important to avoid winning combinations of past drawings. With this book and a little luck, the dream of winning millions might just come true.
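The arithmetic behind these claims can be sketched in a few lines. This is a simplified illustration, not material from the book: the jackpot value, co-winner counts, and the `expected_share` helper below are invented for the example, and the model crudely treats the number of co-winners as a fixed quantity.

```python
from math import comb

# In a 6-from-49 lottery every combination is equally likely, so no
# choice of numbers can improve the odds of hitting the jackpot.
total_combinations = comb(49, 6)   # 13,983,816 equally likely draws
p_jackpot = 1 / total_combinations

# What an unpopular combination *can* change is how many winners split
# the prize. Hypothetical helper: payout if `co_winners` other tickets
# happen to hold the same numbers.
def expected_share(jackpot, co_winners):
    return jackpot / (1 + co_winners)

print(total_combinations)
print(expected_share(1_000_000, 0))   # unpopular pick: keep it all
print(expected_share(1_000_000, 4))   # popular pick: split five ways
```

This is why the book's advice targets the size of the win rather than its probability: the denominator of `p_jackpot` is fixed by the rules, while the number of co-winners depends on how popular your combination is.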
After about a century of success, physicists feel the need to probe the limits of validity of special-relativity-based theories. This book is the outcome of a special seminar held on this topic. The authors gather in a single volume an extensive collection of introductions and reviews of the various facets involved, and the volume also includes a detailed discussion of philosophical and historical aspects.
This monograph provides the first up-to-date and self-contained presentation of a recently discovered mathematical structure: the Schrodinger-Virasoro algebra. Just as Poincare invariance or conformal (Virasoro) invariance play a key role in understanding, respectively, elementary particles and two-dimensional equilibrium statistical physics, this algebra of non-relativistic conformal symmetries may be expected to apply naturally to the study of some models of non-equilibrium statistical physics, or more specifically in the context of recent developments related to the non-relativistic AdS/CFT correspondence. The study of the structure of this infinite-dimensional Lie algebra touches upon topics as various as statistical physics, vertex algebras, Poisson geometry, integrable systems and supergeometry, as well as representation theory, the cohomology of infinite-dimensional Lie algebras, and the spectral theory of Schrodinger operators.
The Second Edition of this book includes an abundance of examples to illustrate advanced concepts and brings out in a text book setting the algorithms for bivariate polynomial matrix factorization results that form the basis of two-dimensional systems theory. Algorithms and their implementation using symbolic algebra are emphasized.
The complexity of issues requiring rational decision making grows, and such decisions are becoming more and more difficult despite advances in methodology and tools for decision support and in other areas of research. Globalization, interlinks between environmental, industrial, social and political issues, and the rapid speed of change all contribute to this complexity. Specialized knowledge about decision-making processes and their support is increasing, but the large spectrum of approaches presented in the literature is typically illustrated only by simple examples. Moreover, the integration of model-based decision support methodologies and tools with specialized model-based knowledge developed for handling real problems in environmental, engineering, industrial, economic, social and political activities is often not satisfactory. Therefore, there is a need to present the state of the art of methodology and tools for the development of model-based decision support systems, and to illustrate this state by applications to various complex real-world decision problems. The monograph reports many years of experience of many researchers who have not only contributed to developments in operations research but also succeeded in integrating the knowledge and craft of various disciplines into several modern decision support systems, which have been applied to actual complex decision-making processes in various fields of policy making. The experience presented in this book will be of value to researchers and practitioners in various fields. The issues discussed gain in importance with the development of the information society, in which information, knowledge, and ways of processing them become a decisive part of human activities. The examples presented in this book illustrate how various methods and tools of model-based decision support can actually be used to help modern decision makers who face complex problems.
Overview of the contents: The first part of this three-part book presents the methodological background and characteristics of the modern decision-making environment, and the value of model-based decision support, thus addressing current challenges of decision support. It also provides the methodology of building and analyzing mathematical models that represent underlying physical and economic processes, and that are useful for modern decision makers at various stages of decision making. These methods support not only the analysis of Pareto-efficient solutions that correspond best to decision maker preferences but also allow the use of other modeling concepts like soft constraints, soft simulation, or inverse simulation. The second part describes various types of tools that are used for the development of decision support systems. These include tools for modeling, simulation, and optimization, tools supporting choice, and user interfaces. The described tools include both standard, commercially available software and nonstandard, public domain or shareware software robust enough to be used for complex applications. All four environmental applications (regional water quality management, land use planning, cost-effective policies aimed at improving European air quality, energy planning with environmental implications) presented in the third part of the book rely on many years of cooperation between the authors and several IIASA projects, and with many researchers from the wide IIASA network of collaborating institutions. All these applications are characterized by an intensive use of model-based decision support. Finally, the appendix contains a short description of some of the tools described in the book that are available from IIASA, free of charge, for research and educational purposes.
The experiences reported in this book indicate that the development of DSSs for strategic environmental decision making should be a joint effort involving experts in the subject area, modelers, and decision support experts. For the other experiences discussed in this book, the authors stress the importance of good databases and good libraries of tools. One of the most important requirements is a modular structure of a DSS that enhances the reusability of system modules. In such modular structures, user interfaces play an important role. The book shows how modern achievements in mathematical programming and computer science may be exploited for supporting decision making, especially for strategic environmental problems. It presents the methodological background of various methods for model-based decision support and reviews methods and tools for model development and analysis. The methods and tools are amply illustrated with extensive applications. Audience: This book will be of interest to researchers and practitioners in the fields of model development and analysis, model-based decision analysis and support (particularly in the environment, economics, agriculture, engineering, and negotiations areas), and mathematical programming. Understanding some parts of the text requires a background in mathematics and operations research, but several chapters of the book will also be of value to readers without such a background. The monograph is also suitable for use as a textbook for advanced (Master's and Ph.D.) courses in operations research, decision analysis, decision support, and various environmental studies (depending on the program, different parts of the book may be emphasized).
The book is devoted to the rigorous derivation of macroscopic mathematical models as a homogenization of exact mathematical models at the microscopic level. The idea is quite natural: one first describes the joint motion of the elastic skeleton and the fluid in pores at the microscopic level by means of classical continuum mechanics, and then uses homogenization to find appropriate approximate models (homogenized equations). The Navier-Stokes equations still hold at this scale, with pore sizes on the order of 5-15 microns. Thus, as mentioned above, the macroscopic mathematical models obtained are still within the limits of physical applicability. These mathematical models describe different physical processes of liquid filtration and acoustics in poroelastic media, such as isothermal or non-isothermal filtration, hydraulic shock, isothermal or non-isothermal acoustics, diffusion-convection, and filtration and acoustics in composite media or in porous fractured reservoirs. Our research is based upon Nguetseng's two-scale convergence method.
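For orientation, Nguetseng's notion of two-scale convergence (stated here from the standard homogenization literature, not quoted from this book) says that a bounded sequence $u_\varepsilon$ in $L^2(\Omega)$ two-scale converges to $u \in L^2(\Omega \times Y)$ if, for every smooth test function $\varphi(x,y)$ that is $Y$-periodic in $y$,

```latex
\lim_{\varepsilon \to 0} \int_\Omega u_\varepsilon(x)\,
  \varphi\!\left(x, \tfrac{x}{\varepsilon}\right)\, dx
  = \int_\Omega \int_Y u(x,y)\, \varphi(x,y)\, dy\, dx .
```

The limit $u(x,y)$ keeps track of oscillations at the pore scale $y = x/\varepsilon$, which is what allows the microscopic continuum-mechanics model to be passed to a macroscopic homogenized one.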
The subject of General Cost Structure Analysis is the quantitative analysis of cost structures with a minimum of a priori assumptions on firm technology and on firm behaviour. The study develops an innovative line of attack building on the primal characterisation of the firm's generalised shadow cost minimisation program. The resulting Flexible Cost Model (FCM) is highly conducive to modern panel data techniques and allows for a flexible specification not only of firm technology but also of firm behaviour, as shadow prices can be made input-, time- and firm-specific. FCM is applied to a panel dataset on several hundred of the largest banking institutions in the G-5 (France, Germany, Japan, United Kingdom, United States) in the 1989-1996 period. The main empirical results are summarised. In particular, FCM provides new insights into the existence of scale economies in banking and an assessment of the extent of excess labour in the G-5 banking industries, particularly as a consequence of labour market rigidities in a context of rapidly declining technology prices. FCM also provides an evaluation of the sources of the cost advantage of American and British banks in comparison to Continental European banks.
The last two decades have seen enormous developments in statistical methods for incomplete data. The EM algorithm and its extensions, multiple imputation, and Markov chain Monte Carlo provide a set of flexible and reliable tools for inference in large classes of missing-data problems. Yet, in practical terms, those developments have had surprisingly little impact on the way most data analysts handle missing values on a routine basis.
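The multiple-imputation idea mentioned here can be sketched minimally. This is a deliberately crude, hot-deck-style illustration with invented data and an invented helper name, not the model-based procedures the book develops: a proper implementation would draw imputations from a posterior predictive distribution and also pool variances, not just point estimates.

```python
import random
import statistics

def multiple_imputation_mean(data, m=50, seed=0):
    """Estimate the mean of `data`, where None marks a missing value:
    fill the gaps m times with random draws from the observed values,
    then average the m completed-data estimates (the pooling step of
    Rubin's rules for a point estimate)."""
    rng = random.Random(seed)
    observed = [x for x in data if x is not None]
    estimates = []
    for _ in range(m):
        completed = [x if x is not None else rng.choice(observed)
                     for x in data]
        estimates.append(statistics.mean(completed))
    return statistics.mean(estimates)

data = [2.0, None, 3.0, 5.0, None, 4.0]
print(multiple_imputation_mean(data))
```

Even this toy version shows the contrast with the ad hoc habits the passage criticizes: instead of deleting incomplete cases or filling in a single guess, the uncertainty about the missing values is propagated by repeating the imputation many times.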
This book is an enlarged second edition of a monograph published in the Springer AGEM2-Series, 2009. It presents, in a consistent and unified overview, a setup of the theory of spherical functions of mathematical (geo-)sciences. The content shows a twofold transition: First, the natural transition from scalar to vectorial and tensorial theory of spherical harmonics is given in a coordinate-free context, based on variants of the addition theorem, Funk-Hecke formulas, and Helmholtz as well as Hardy-Hodge decompositions. Second, the canonical transition from spherical harmonics via zonal (kernel) functions to the Dirac kernel is given in close orientation to an uncertainty principle classifying the space/frequency (momentum) behavior of the functions for purposes of data analysis and (geo-)application. The whole palette of spherical functions is collected in a well-structured form for modeling and simulating the phenomena and processes occurring in the Earth's system. The result is a work which, while reflecting the present state of knowledge in a time-related manner, claims to be of largely timeless significance in (geo-)mathematical research and teaching.
The main goal of this book is to elucidate what kind of experiment must be performed in order to determine the full set of independent parameters which can be extracted and calculated from theory, where electrons, photons, atoms, ions, molecules, or molecular ions may serve as the interacting constituents of matter. The feasibility of such 'perfect' and/or 'complete' experiments, providing the complete quantum mechanical knowledge of the process, is associated with the enormous potential of modern research techniques, both in experiment and in theory. It is difficult to overestimate the role of theory in the setting up of the complete experiment, starting with the fact that an experiment can be complete only within a certain theoretical framework, and ending with the direct prescription of what, and under what conditions, should be measured to make the experiment 'complete'. The language of the related theory is the language of quantum mechanical amplitudes and their relative phases. This book captures the spirit of research in the direction of the complete experiment in atomic and molecular physics, considering some of the basic quantum processes: scattering, Auger decay and photo-ionization. It includes a description of the experimental methods used to realize, step by step, the complete experiment up to the level of the amplitudes and phases. The corresponding arsenal includes, beyond determining the total cross section, the observation of angle- and spin-resolved quantities, photon polarization and correlation parameters, measurements applying coincidence techniques, preparing initially polarized targets, and even more sophisticated methods. To this day, however, the 'complete' experiment remains very hard to perform.
Therefore, much attention is paid to the results of state-of-the-art experiments providing detailed information on the process, and their comparison to the related theoretical approaches, to mention relativistic multi-configurational Dirac-Fock, convergent close-coupling, Breit-Pauli R-matrix, and relativistic distorted wave approaches, as well as Green's operator methods. This book has been written in honor of Herbert Walther and his major contributions to the field, but also to stimulate advanced Bachelor's and Master's students by demonstrating that atomic and molecular scattering physics remains an exciting field with much scope for further advances.
This book introduces the basic concept of a dissipative soliton, before going on to explore recent theoretical and experimental results for various classes of dissipative optical solitons, high-energy dissipative solitons and their applications, and mode-locked fiber lasers. A soliton is a concept which describes various physical phenomena ranging from solitary waves forming on water to ultrashort optical pulses propagating in an optical fiber. While solitons are usually attributed to integrability, in recent years the notion of a soliton has been extended to various systems which are not necessarily integrable. Until now, the main emphasis has been given to well-known conservative soliton systems, but new avenues of inquiry were opened when physicists realized that solitary waves did indeed exist in a wide range of non-integrable and non-conservative systems, leading to the concept of so-called dissipative optical solitons. Dissipative optical solitons have many unique properties which differ from those of their conservative counterparts. For example, except for very few cases, they form zero-parameter families and their properties are completely determined by the external parameters of the optical system. They can exist indefinitely in time, as long as these parameters stay constant. These features of dissipative solitons are highly desirable for several applications, such as in-line regeneration of optical data streams and generation of stable trains of laser pulses by mode-locked cavities.
Learn the basics of white noise theory with White Noise Distribution Theory. This book covers the mathematical foundation and key applications of white noise theory without requiring advanced knowledge in this area. This instructive text specifically focuses on relevant application topics such as integral kernel operators, Fourier transforms, Laplacian operators, white noise integration, Feynman integrals, and positive generalized functions. Extremely well-written by one of the field's leading researchers, White Noise Distribution Theory is destined to become the definitive introductory resource on this challenging topic.
Considerable attention from the international scientific community is currently focused on the wide ranging applications of wavelets. For the first time, the field's leading experts have come together to produce a complete guide to wavelet transform applications in medicine and biology. Wavelets in Medicine and Biology provides accessible, detailed, and comprehensive guidelines for all those interested in learning about wavelets and their applications to biomedical problems.
Error detecting codes are very popular for error control in practical systems for two reasons. First, such codes can be used to provide any desired reliability of communication over any noisy channel. Second, implementation is usually much simpler than for a system using error correcting codes. To consider a particular code for use in such a system, it is very important to be able to calculate or estimate the probability of undetected error. For the binary symmetric channel, the probability of undetected error can be expressed in terms of the weight distribution of the code. The first part of the book gives a detailed description of all known methods to calculate or estimate the probability of undetected error, for the binary symmetric channel in particular, but a number of other channel models are also considered. The second part of the book describes a number of protocols for feedback communication systems (ARQ systems), with methods for optimal choice of error detecting codes for the protocols. Results have been collected from many sources and given a unified presentation. The results are presented in a form which makes them accessible to the telecommunication system designer as well as the coding theory researcher and student. The system designer may find the presentation of CRC codes as well as the system performance analysis techniques particularly useful. The coding theorist will find a detailed account of a part of coding theory which is usually just mentioned in most text books and which contains a number of interesting and useful results as well as many challenging open problems. Audience: Essential for students, practitioners and researchers working in communications and coding theory. An excellent text for an advanced course on the subject.
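The expression of the undetected-error probability through the weight distribution can be sketched directly: on a binary symmetric channel with crossover probability p, an error pattern passes undetected exactly when it coincides with a nonzero codeword. The function name below is ours; the weight distribution used in the example is the standard one for the [7,4] Hamming code.

```python
def p_undetected(weights, p):
    """Probability of undetected error on a binary symmetric channel
    with crossover probability p. weights[i] is A_i, the number of
    codewords of Hamming weight i, for a linear code of length
    n = len(weights) - 1. Summing A_i * p^i * (1-p)^(n-i) over the
    nonzero weights counts every error pattern equal to a codeword."""
    n = len(weights) - 1
    return sum(a * p**i * (1 - p)**(n - i)
               for i, a in enumerate(weights) if i > 0)

# Weight distribution of the [7,4] Hamming code: 1 + 7z^3 + 7z^4 + z^7
hamming74 = [1, 0, 0, 7, 7, 0, 0, 1]
print(p_undetected(hamming74, 0.01))
```

As a sanity check, for the length-3 repetition code (codewords 000 and 111, weights [1, 0, 0, 1]) the only undetected pattern is 111, so the formula reduces to p cubed.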
This book deals with the modeling, analysis and simulation of problems arising in the life sciences, and especially in biological processes. The models and findings presented result from intensive discussions with microbiologists, doctors and medical staff, physicists, chemists and industrial engineers and are based on experimental data. They lead to a new class of degenerate density-dependent nonlinear reaction-diffusion convective equations that simultaneously comprise two kinds of degeneracy: porous-medium and fast-diffusion type degeneracy. To date, this class is still not clearly understood in the mathematical literature and thus especially interesting. The author both derives realistic life science models and their above-mentioned governing equations of the degenerate types and systematically studies these classes of equations. In each concrete case well-posedness, the dependence of solutions on boundary conditions reflecting some properties of the environment, and the large-time behavior of solutions are investigated and in some instances also studied numerically.
This volume provides a detailed description of the seminal theoretical construction in 1964, independently by Robert Brout and Francois Englert, and by Peter W. Higgs, of a mechanism for short-range fundamental interactions, now called the Brout-Englert-Higgs (BEH) mechanism. It accounts for the non-zero mass of elementary particles and predicts the existence of a new particle, an elementary massive scalar boson. In addition, the book describes the experimental discovery of this fundamental missing element in the Standard Model of particle physics. The H Boson, also called the Higgs Boson, was produced and detected in the Large Hadron Collider (LHC) of CERN near Geneva by two large experimental collaborations, ATLAS and CMS, which announced its discovery on the 4th of July 2012. This new volume of the Poincare Seminar Series, The H Boson, corresponds to the nineteenth seminar, held on November 29, 2014, at the Institut Henri Poincare in Paris.
Foresight is an area within Futures Studies that focuses on critical thinking concerning long-term developments, whether in the public sector or in industry and management, and is something of a sub-section of complexity and network science. This book examines developments in foresight methodologies and relates in its greater part to the work done in the context of the EU COST A22 network on Foresight Methodologies. Foresight is a professional practice that supports significant decisions, and as such it needs to be more assured of its claims to knowledge (methodology). Foresight is practiced across many domains and is not the preserve of specialized futurists, or indeed of foresight specialists. However, the disciplines of foresight are not well articulated or disseminated across domains, leading to re-inventions and to practice that does not make the best use of experience in other domains. The methodological development of foresight is an important task that aims at strengthening the pool of tools available for application, thereby empowering the actors involved in foresight practice. Elaborating further on methodological issues, such as those presented in this book, enables the actors involved in foresight to critique current practice from this perspective and then to begin to design foresight practice. The present trend towards methodological concerns indicates a move from given, expert-predicted futures to futures that are nurtured through a dialogue among stakeholders. The book has four parts, each elaborating on a set of aspects of foresight methodologies. After an introductory section, Part II considers theorizing about foresight methodologies. Part III covers system content issues, and Part IV presents foresight tools and approaches.
You may like...
- Exploring Quantum Mechanics - A… (Victor Galitski, Boris Karnakov, …), Hardcover, R6,101
- Infinite Words, Volume 141 - Automata… (Dominique Perrin, Jean-Eric Pin), Hardcover, R4,065
- Oscillation Theory of Delay Differential… (I. Gyoeri, G. Ladas), Hardcover, R4,940
- Multiscale Modeling of Vascular Dynamics… (Huilin Ye, Zhiqiang Shen, …), Paperback, R750