The introduction of cross diffusivity opens many questions in the theory of reaction-diffusion systems. This book is the first to investigate such problems, presenting new findings for researchers interested in studying parabolic and elliptic systems where classical methods are not applicable. In addition, the Gagliardo-Nirenberg inequality involving BMO norms is improved, and new techniques of wider interest are covered. The book also provides many open problems suitable for interested Ph.D. students.
This book is an enlarged second edition of a monograph published in the Springer AGEM2-Series, 2009. It presents, in a consistent and unified overview, the theory of spherical functions of the mathematical (geo-)sciences. The content shows a twofold transition: first, the natural transition from the scalar to the vectorial and tensorial theory of spherical harmonics, given in a coordinate-free context based on variants of the addition theorem, Funk-Hecke formulas, and Helmholtz as well as Hardy-Hodge decompositions; second, the canonical transition from spherical harmonics via zonal (kernel) functions to the Dirac kernel, given in close orientation to an uncertainty principle classifying the space/frequency (momentum) behavior of the functions for purposes of data analysis and (geo-)application. The whole palette of spherical functions is collected in a well-structured form for modeling and simulating the phenomena and processes occurring in the Earth's system. The result is a work which, while reflecting the present state of knowledge, claims a largely timeless significance in (geo-)mathematical research and teaching.
The main goal of this book is to elucidate what kind of experiment must be performed in order to determine the full set of independent parameters which can be extracted and calculated from theory, where electrons, photons, atoms, ions, molecules, or molecular ions may serve as the interacting constituents of matter. The feasibility of such `perfect' and/or `complete' experiments, providing the complete quantum mechanical knowledge of the process, is associated with the enormous potential of modern research techniques, both in experiment and in theory. It is difficult to overestimate the role of theory in the setting up of the complete experiment, starting with the fact that an experiment can be complete only within a certain theoretical framework, and ending with the direct prescription of what, and under what conditions, should be measured to make the experiment `complete'. The language of the related theory is the language of quantum mechanical amplitudes and their relative phases. This book captures the spirit of research in the direction of the complete experiment in atomic and molecular physics, considering some of the basic quantum processes: scattering, Auger decay and photo-ionization. It includes a description of the experimental methods used to realize, step by step, the complete experiment up to the level of the amplitudes and phases. The corresponding arsenal includes, beyond determining the total cross section, the observation of angle- and spin-resolved quantities, photon polarization and correlation parameters, measurements applying coincidence techniques, the preparation of initially polarized targets, and even more sophisticated methods. The `complete' experiment remains, to this day, very difficult to perform.
Therefore, much attention is paid to the results of state-of-the-art experiments providing detailed information on the process, and to their comparison with the related theoretical approaches, to mention just relativistic multi-configurational Dirac-Fock, convergent close-coupling, Breit-Pauli R-matrix, and relativistic distorted wave approaches, as well as Green's operator methods. This book has been written in honor of Herbert Walther and his major contributions to the field, but also to stimulate advanced Bachelor's and Master's students by demonstrating that present-day atomic and molecular scattering physics offers much exciting potential for further advancing the field.
This book introduces the basic concept of a dissipative soliton, before going on to explore recent theoretical and experimental results for various classes of dissipative optical solitons, high-energy dissipative solitons and their applications, and mode-locked fiber lasers. A soliton is a concept which describes various physical phenomena, ranging from solitary waves forming on water to ultrashort optical pulses propagating in an optical fiber. While solitons are usually attributed to integrability, in recent years the notion of a soliton has been extended to various systems which are not necessarily integrable. Until recently, the main emphasis was given to well-known conservative soliton systems, but new avenues of inquiry opened when physicists realized that solitary waves do indeed exist in a wide range of non-integrable and non-conservative systems, leading to the concept of so-called dissipative optical solitons. Dissipative optical solitons have many unique properties which differ from those of their conservative counterparts. For example, except in very few cases, they form zero-parameter families, and their properties are completely determined by the external parameters of the optical system. They can exist indefinitely in time, as long as these parameters stay constant. These features of dissipative solitons are highly desirable for several applications, such as in-line regeneration of optical data streams and the generation of stable trains of laser pulses by mode-locked cavities.
Learn the basics of white noise theory with White Noise Distribution Theory. This book covers the mathematical foundation and key applications of white noise theory without requiring advanced knowledge in this area. This instructive text specifically focuses on relevant application topics such as integral kernel operators, Fourier transforms, Laplacian operators, white noise integration, Feynman integrals, and positive generalized functions. Extremely well-written by one of the field's leading researchers, White Noise Distribution Theory is destined to become the definitive introductory resource on this challenging topic.
Considerable attention from the international scientific community is currently focused on the wide ranging applications of wavelets. For the first time, the field's leading experts have come together to produce a complete guide to wavelet transform applications in medicine and biology. Wavelets in Medicine and Biology provides accessible, detailed, and comprehensive guidelines for all those interested in learning about wavelets and their applications to biomedical problems.
Error detecting codes are very popular for error control in practical systems for two reasons. First, such codes can be used to provide any desired reliability of communication over any noisy channel. Second, implementation is usually much simpler than for a system using error correcting codes. To consider a particular code for use in such a system, it is very important to be able to calculate or estimate the probability of undetected error. For the binary symmetric channel, the probability of undetected error can be expressed in terms of the weight distribution of the code. The first part of the book gives a detailed description of all known methods to calculate or estimate the probability of undetected error, for the binary symmetric channel in particular, but a number of other channel models are also considered. The second part of the book describes a number of protocols for feedback communication systems (ARQ systems), with methods for optimal choice of error detecting codes for the protocols. Results have been collected from many sources and given a unified presentation. The results are presented in a form which makes them accessible to the telecommunication system designer as well as the coding theory researcher and student. The system designer may find the presentation of CRC codes as well as the system performance analysis techniques particularly useful. The coding theorist will find a detailed account of a part of coding theory which is usually just mentioned in most textbooks and which contains a number of interesting and useful results as well as many challenging open problems. Audience: Essential for students, practitioners and researchers working in communications and coding theory. An excellent text for an advanced course on the subject.
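The central relationship mentioned in the blurb, the undetected-error probability over a binary symmetric channel expressed through the code's weight distribution, can be sketched in a few lines. The example below uses the well-known weight enumerator of the [7,4] Hamming code (1 + 7x^3 + 7x^4 + x^7); the function name is illustrative, not taken from the book.

```python
# Probability of undetected error for a binary linear [n, k] code over a
# binary symmetric channel with crossover probability p:
#     P_ue(p) = sum_{i=1}^{n} A_i * p^i * (1 - p)^(n - i)
# where A_i counts the codewords of Hamming weight i.

def prob_undetected_error(weight_distribution, n, p):
    """weight_distribution: dict mapping weight i -> codeword count A_i."""
    return sum(a * p**i * (1 - p)**(n - i)
               for i, a in weight_distribution.items() if i > 0)

# Weight distribution of the [7,4] Hamming code.
hamming_7_4 = {0: 1, 3: 7, 4: 7, 7: 1}

print(prob_undetected_error(hamming_7_4, 7, 0.01))
```

A quick sanity check on the formula: at p = 0.5 every nonzero-weight term equals (1/2)^7, so the sum collapses to (2^k - 1)/2^n, here 15/128.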
This book deals with the modeling, analysis and simulation of problems arising in the life sciences, and especially in biological processes. The models and findings presented result from intensive discussions with microbiologists, doctors and medical staff, physicists, chemists and industrial engineers and are based on experimental data. They lead to a new class of degenerate density-dependent nonlinear reaction-diffusion convective equations that simultaneously comprise two kinds of degeneracy: porous-medium and fast-diffusion type degeneracy. To date, this class is still not clearly understood in the mathematical literature and thus especially interesting. The author both derives realistic life science models and their above-mentioned governing equations of the degenerate types and systematically studies these classes of equations. In each concrete case well-posedness, the dependence of solutions on boundary conditions reflecting some properties of the environment, and the large-time behavior of solutions are investigated and in some instances also studied numerically.
For more than five decades Bertram Kostant has been one of the major architects of modern Lie theory. Virtually all his papers are pioneering with deep consequences, many giving rise to whole new fields of activities. His interests span a tremendous range of Lie theory, from differential geometry to representation theory, abstract algebra, and mathematical physics. It is striking to note that Lie theory (and symmetry in general) now occupies an ever increasing larger role in mathematics than it did in the fifties. Now in the sixth decade of his career, he continues to produce results of astonishing beauty and significance for which he is invited to lecture all over the world. This is the third volume (1975-1985) of a five-volume set of Bertram Kostant's collected papers. A distinguished feature of this third volume is Kostant's commentaries and summaries of his papers in his own words.
This volume provides a detailed description of the seminal theoretical construction in 1964, independently by Robert Brout and Francois Englert, and by Peter W. Higgs, of a mechanism for short-range fundamental interactions, now called the Brout-Englert-Higgs (BEH) mechanism. It accounts for the non-zero mass of elementary particles and predicts the existence of a new particle, an elementary massive scalar boson. In addition to this the book describes the experimental discovery of this fundamental missing element in the Standard Model of particle physics. The H Boson, also called the Higgs Boson, was produced and detected in the Large Hadron Collider (LHC) of CERN near Geneva by two large experimental collaborations, ATLAS and CMS, which announced its discovery on the 4th of July 2012. This new volume of the Poincare Seminar Series, The H Boson, corresponds to the nineteenth seminar, held on November 29, 2014, at Institut Henri Poincare in Paris.
Foresight is an area within Futures Studies that focuses on critical thinking concerning long term developments, whether within the public sector or in industry and management, and is something of a sub-section of complexity and network science. This book examines developments in foresight methodologies and relates in its greater part to the work done in the context of the COSTA22 network of the EU on Foresight Methodologies. Foresight is a professional practice that supports significant decisions, and as such it needs to be more assured of its claims to knowledge (methodology). Foresight is practiced across many domains and is not the preserve of specialized futurists, or indeed of foresight specialists. However, the disciplines of foresight are not well articulated or disseminated across domains, leading to re-inventions and practice that does not make best use of experience in other domains. The methodological development of foresight is an important task that aims at strengthening the pool of the tools available for application, thereby empowering the actors involved in foresight practice. Elaborating further on methodological issues, such as those presented in the present book, enables the actors involved in foresight to begin to critique current practice from this perspective and, further, to begin to design foresight practice. The present trends towards methodological concerns indicate a move from given expert-predicted futures to one in which futures are nurtured through a dialogue among stakeholders. The book has four parts, each elaborating on a set of aspects of foresight methodologies. After an introductory section, Part II considers theorizing about foresight methodologies. Part III covers system content issues, and Part IV presents foresight tools and approaches.
This book is for students taking either a first-year graduate statistics course or an advanced undergraduate statistics course in Psychology. Enough introductory statistics is briefly reviewed to bring everyone up to speed. The book is highly user-friendly without sacrificing rigor, not only in anticipating students' questions, but also in paying attention to the introduction of new methods and notation. In addition, many topics given only casual or superficial treatment are elaborated here, such as: the nature of interaction and its interpretation, in terms of theory and response scale transformations; generalized forms of analysis of covariance; extensive coverage of multiple comparison methods; coverage of nonorthogonal designs; and discussion of functional measurement. The text is structured for reading in multiple passes of increasing depth; for the student who desires deeper understanding, there are optional sections; for the student who is or becomes proficient in matrix algebra, there are still deeper optional sections. The book is also equipped with an excellent set of class-tested exercises and answers.
This work presents invited contributions from the second "International Conference on Mathematics and Statistics" jointly organized by the AUS (American University of Sharjah) and the AMS (American Mathematical Society). Addressing several research fields across the mathematical sciences, all of the papers were prepared by faculty members at universities in the Gulf region or prominent international researchers. The current volume is the first of its kind in the UAE and is intended to set new standards of excellence for collaboration and scholarship in the region.
For more than five decades Bertram Kostant has been one of the major architects of modern Lie theory. Virtually all his papers are pioneering with deep consequences, many giving rise to whole new fields of activities. His interests span a tremendous range of Lie theory, from differential geometry to representation theory, abstract algebra, and mathematical physics. It is striking to note that Lie theory (and symmetry in general) now occupies an ever increasing larger role in mathematics than it did in the fifties. Now in the sixth decade of his career, he continues to produce results of astonishing beauty and significance for which he is invited to lecture all over the world. This is the fifth volume (1995-2005) of a five-volume set of Bertram Kostant's collected papers. A distinguished feature of this fifth volume is Kostant's commentaries and summaries of his papers in his own words.
Features
- Provides an accessible introduction to mathematics in art
- Supports the narrative with a self-contained mathematical theory, with complete proofs of the main results (including the classification theorem for similarities)
- Presents hundreds of figures, illustrations, computer-generated graphics, designs, photographs, and art reproductions, mainly in full color
- Includes 21 projects and about 280 exercises, about half of which are fully solved
- Covers Euclidean geometry, golden section, Fibonacci numbers, symmetries, tilings, similarities, fractals, cellular automata, inversion, hyperbolic geometry, perspective drawing, Platonic and Archimedean solids, and topology

New to the Second Edition
- New exercises, projects and artworks
- Revised, reorganised and expanded chapters
- More use of color throughout
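As a small taste of the golden-section and Fibonacci material this book covers, the ratio of consecutive Fibonacci numbers converges to the golden ratio phi = (1 + sqrt(5))/2. A minimal sketch (the function name is illustrative, not from the book):

```python
# Ratios of consecutive Fibonacci numbers converge to the golden ratio
# phi = (1 + sqrt(5)) / 2 ~ 1.618..., the basis of the "golden section".

def fibonacci_ratio(n):
    """Return F(n+1) / F(n) for the Fibonacci sequence F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b / a

phi = (1 + 5 ** 0.5) / 2
print(fibonacci_ratio(30), phi)  # the two values agree to many digits
```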
This work describes several statistical techniques for studying repeated measures data, presenting growth curve methods applicable to biomedical, social, animal, agricultural and business research. It details the multivariate development of growth science and repeated measures experiments, covering time-moving covariates, exchangeable errors, bioassay results, missing data procedures, and nonparametric and Bayesian methods.
This thesis presents profound insights into the origins and dynamics of beam instabilities using both experimental observations and numerical simulations. When the Recycler Ring, a high-intensity proton beam accelerator at Fermi National Accelerator Laboratory, was commissioned, it became evident that the Recycler beam experiences a very fast instability of unknown nature. This instability was so fast that the existing dampers were ineffective at suppressing it. The nature of this phenomenon, alongside several other poorly understood features of the beam, became one of the biggest puzzles in the accelerator community. The author investigated a hypothesis that the instability arises from an interaction with a dense cloud of electrons accompanying the proton beam. He studied the phenomena experimentally by comparing the dynamics of stable and unstable beams, by numerically simulating the build-up of the electron cloud and its interaction with the beam, and by constructing an analytical model of an electron cloud-driven instability with the electrons trapped in combined-function dipole magnets. He has devised a method to stabilize the beam by a clearing bunch, which conclusively revealed that the instability is caused by the electron cloud, trapped in a strong magnetic field. Finally, he conducted measurements of the microwave propagation through a single dipole magnet. These measurements have confirmed the presence of the electron cloud in combined-function magnets.
Automata Theory and Formal Languages: Concepts and Practices presents the difficult concepts of automata theory in a straightforward manner, including discussions on diverse concepts and tools that play major roles in developing computing machines, algorithms and code. Automata theory includes numerous concepts such as finite automata, regular grammar, formal languages, context free and context sensitive grammar, push down automata, Turing machine, and decidability, which constitute the backbone of computing machines. This book enables readers to gain sufficient knowledge and experience to construct and solve complex machines. Each chapter begins with key concepts followed by a number of important examples that demonstrate the solution. The book explains concepts and simultaneously helps readers develop an understanding of their application with real-world examples, including application of Context Free Grammars in programming languages and Artificial Intelligence, and cellular automata in biomedical problems.
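To illustrate the kind of machine the book builds up from, here is a minimal sketch of a deterministic finite automaton (DFA); the state names and function are illustrative examples, not taken from the book:

```python
# A DFA accepting binary strings that contain an even number of 1s.
# States: 'even' (accepting) and 'odd'; reading a '1' toggles the state.

TRANSITIONS = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

def dfa_accepts(string, start='even', accepting=('even',)):
    """Run the DFA on `string` and report whether it ends in an accepting state."""
    state = start
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in accepting

print(dfa_accepts('1001'))  # two 1s  -> True
print(dfa_accepts('1011'))  # three 1s -> False
```

The same transition-table pattern extends directly to the richer machines the book treats, such as pushdown automata (add a stack) and Turing machines (add a movable tape head).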
The three volumes of Interest Rate Modeling present a comprehensive and up-to-date treatment of techniques and models used in the pricing and risk management of fixed income securities. Written by two leading practitioners and seasoned industry veterans, this unique series combines finance theory, numerical methods, and approximation techniques to provide the reader with an integrated approach to the process of designing and implementing industrial-strength models for fixed income security valuation and hedging. Aiming to bridge the gap between advanced theoretical models and real-life trading applications, the pragmatic, yet rigorous, approach taken in this book will appeal to students, academics, and professionals working in quantitative finance. The first half of Volume III contains a detailed study of several classes of fixed income securities, ranging from simple vanilla options to highly exotic cancelable and path-dependent derivatives. The analysis is done in product-specific fashion covering, among other subjects, risk characterization, calibration strategies, and valuation methods. In its second half, Volume III studies the general topic of derivative portfolio risk management, with a particular emphasis on the challenging problem of computing smooth price sensitivities to market input perturbations.
In 1988, E. Verlinde gave a remarkable conjectural formula for the dimension of conformal blocks over a smooth curve in terms of representations of affine Lie algebras. Verlinde's formula arose from physical considerations, but it attracted further attention from mathematicians when it was realized that the space of conformal blocks admits an interpretation as the space of generalized theta functions. A proof followed through the work of many mathematicians in the 1990s. This book gives an authoritative treatment of all aspects of this theory. It presents a complete proof of the Verlinde formula and full details of the connection with generalized theta functions, including the construction of the relevant moduli spaces and stacks of G-bundles. Featuring numerous exercises of varying difficulty, guides to the wider literature and short appendices on essential concepts, it will be of interest to senior graduate students and researchers in geometry, representation theory and theoretical physics.
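For orientation, the simplest instance of the formula discussed here, for G = SU(2) at level k on a smooth curve of genus g, is usually stated in the literature as follows (conventions and normalizations vary; the book itself gives the precise general statement):

```latex
\dim V_{g,k} \;=\; \left(\frac{k+2}{2}\right)^{g-1}
\sum_{j=1}^{k+1} \left(\sin\frac{j\pi}{k+2}\right)^{2-2g}
```

Here \(V_{g,k}\) denotes the space of conformal blocks (equivalently, generalized theta functions) at level \(k\).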
This book should be of interest to statistics lecturers who want ready-made data sets complete with notes for teaching.
This book presents a selection of papers based on the XXXIII Bialowieza Workshop on Geometric Methods in Physics, 2014. The Bialowieza Workshops are among the most important meetings in the field and attract researchers from both mathematics and physics. The articles gathered here are mathematically rigorous and have important physical implications, addressing the application of geometry in classical and quantum physics. Despite their long tradition, the workshops remain at the cutting edge of ongoing research. For the last several years, each Bialowieza Workshop has been followed by a School on Geometry and Physics, where advanced lectures for graduate students and young researchers are presented; some of the lectures are reproduced here. The unique atmosphere of the workshop and school is enhanced by its venue, framed by the natural beauty of the Bialowieza forest in eastern Poland. The volume will be of interest to researchers and graduate students in mathematical physics, theoretical physics and mathematics.
This volume is published as the proceedings of the Russian-German Advanced Research Workshop on Computational Science and High Performance Computing, held in Novosibirsk Academgorodok in September 2003. The contributions to these proceedings were provided and edited by the authors, chosen after a careful selection and reviewing. The workshop was organized by the Institute of Computational Technologies SB RAS (Novosibirsk, Russia) and the High Performance Computing Center Stuttgart (Stuttgart, Germany). The objective was the discussion of the latest results in computational science and the development of close cooperation between Russian and German specialists in the above-mentioned field. The main directions of the workshop are associated with the problems of computational hydrodynamics, application of mathematical methods to the development of a new generation of materials, environment protection problems, development of algorithms, software and hardware support for high-performance computation, and designing modern facilities for visualization of computational modelling results. The importance of the workshop topics was confirmed by the participation of representatives of major research organizations engaged in the solution of the most complex problems of mathematical modelling, development of new algorithms, programs and key elements of new information technologies. Among the Russian participants were researchers of the Institutes of the Siberian Branch of the Russian Academy of Sciences: Institute of Computational Technologies, Institute of Computational Mathematics and Mathematical Geophysics, Institute of Computational Modelling, the Russian Federal Nuclear Center, the All-Russian Research Institute of Experimental Physics, and Kemerovo State University.
Using simple physical examples, this work by Erhard Scheibe presents an important and powerful approach to the reduction of physical theories. Novel to the approach is that it is not based, as usual, on a single reduction concept that is fixed once and for all, but on a series of recursively constructed reductions, with which all reductions appear as combinations of very specific elementary reductions. This leaves the general notion of theory reduction initially open and is beneficial for the treatment of the difficult cases of reduction from the fields of special and general relativity, thermodynamics, statistical mechanics, and quantum mechanics, which are treated in the second volume. The book is systematically organized and intended for readers interested in philosophy of science as well as physicists without deep philosophical knowledge.