An international workshop on Elementary Excitations and Fluctuations in Magnetic Systems was held in San Miniato, Italy, for five days beginning 28 May 1984. The workshop comprised eight working sessions that contained a total of 43 invited talks, and 58 scientists were in attendance from 14 countries. Our aim was to review some topics of current interest in the statistical physics of magnetic materials and models, with an emphasis on theoretical studies and confrontations between these and experimental and computer simulation data. This book contains summary papers written by the invited speakers, and the material will be of immediate interest to graduate students and researchers engaged in studies of magnetic properties. There is, perhaps, no effective way to record and convey the benefit of the numerous discussions between the participants that are a significant integral feature of a workshop. The magnificent venue of the workshop, I Cappuccini, was made available to us by the Cassa di Risparmio San Miniato. Financial support for the workshop was received from Consiglio Nazionale delle Ricerche, Universita degli Studi di Firenze and the Gruppo Nazionale Struttura della Materia. Our administrative load and the burden of preparing the proceedings for publication was made light by the talents of Carla Pardini (CNR, Florence), and Caroline Monypenny and Jane Warren (Rutherford Appleton Laboratory). Finally, we wish to thank all the participants for their attendance and individual contributions to the success of the workshop.
The last decades have demonstrated that quantum mechanics is an inexhaustible source of inspiration for contemporary mathematical physics. Of course, this seems hardly surprising if one casts a glance toward the history of the subject; recall the pioneering works of von Neumann, Weyl, Kato and their followers, which pushed forward some of the classical mathematical disciplines: functional analysis, differential equations, group theory, etc. On the other hand, the evident powerful feedback changed the face of "naive" quantum physics. It created a contemporary quantum mechanics, the mathematical problems of which now constitute the backbone of mathematical physics. The mathematical and physical aspects of these problems cannot be separated, even if one may not share the opinion of Hilbert, who rigorously denied differences between pure and applied mathematics, and the fruitful oscillation between the two creates a powerful stimulus for the development of mathematical physics. The International Conference on Mathematical Results in Quantum Mechanics, held in Blossin (near Berlin), May 17-21, 1993, was the fifth in a series of meetings started in Dubna (in the former USSR) in 1987, which were dedicated to mathematical problems of quantum mechanics. A primary motivation of any meeting is certainly to facilitate an exchange of ideas, but there are also other goals. The first meeting and those that followed (Dubna, 1988; Dubna, 1989; Liblice (in the Czech Republic), 1990) were aimed, in particular, at paving ways to East-West contacts.
Vision-based mobile robot guidance has proved difficult for classical machine vision methods because of the diversity and real-time constraints inherent in the task. This book describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes these difficulties. ALVINN learns to guide mobile robots using the back-propagation training algorithm. Because of its ability to learn from example, ALVINN can adapt to new situations and therefore cope with the diversity of the autonomous navigation task. But real-world problems like vision-based mobile robot guidance present a different set of challenges for the connectionist paradigm. Among them are: how to develop a general representation from a limited amount of real training data; how to understand the internal representations developed by artificial neural networks; how to estimate the reliability of individual networks; how to combine multiple networks trained for different situations into a single system; and how to combine connectionist perception with symbolic reasoning. Neural Network Perception for Mobile Robot Guidance presents novel solutions to each of these problems. Using these techniques, the ALVINN system can learn to control an autonomous van in under 5 minutes by watching a person drive. Once trained, individual ALVINN networks can drive in a variety of circumstances, including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 55 miles per hour. The techniques are also shown to generalize to the task of controlling the precise foot placement of a walking robot.
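To make the learning-from-demonstration idea concrete, here is a toy sketch of training a small feed-forward network with back-propagation to map a coarse camera image to a steering choice. This is not the ALVINN implementation: the retina size, layer sizes, steering discretization and the synthetic "demonstration" data below are all illustrative assumptions.

```python
# Toy sketch only: not ALVINN itself.  All sizes and the synthetic data are
# assumptions for illustration; the real system used a far larger input retina
# and real video recorded while a person drove.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_hidden, n_steer = 8 * 8, 5, 15      # assumed coarse retina and steering units

W1 = rng.normal(0.0, 0.05, (n_pixels, n_hidden))
W2 = rng.normal(0.0, 0.05, (n_hidden, n_steer))

def forward(x):
    h = np.tanh(x @ W1)                          # hidden layer
    z = np.exp(h @ W2)
    return h, z / z.sum(axis=1, keepdims=True)   # distribution over steering units

# Synthetic "watching a person drive": steer hard left or hard right depending
# on which half of the image is brighter (purely invented training signal).
X = rng.normal(size=(200, n_pixels))
targets = np.where(X[:, :n_pixels // 2].mean(1) > X[:, n_pixels // 2:].mean(1),
                   0, n_steer - 1)
T = np.eye(n_steer)[targets]

lr = 0.5
for _ in range(500):                             # batch back-propagation
    h, y = forward(X)
    g_out = (y - T) / len(X)                     # softmax cross-entropy gradient
    g_hid = (g_out @ W2.T) * (1.0 - h ** 2)      # propagate through tanh
    W2 -= lr * (h.T @ g_out)
    W1 -= lr * (X.T @ g_hid)

_, y = forward(X)
print("fraction of demonstrations matched:", (y.argmax(1) == targets).mean())
```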
My aim has never been to occupy myself with these matters as a physicist, but only as a logician ... F. REECH, 1856. I do not think it possible to write the history of a science until that science itself shall have been understood, thanks to a clear, explicit, and decent logical structure. The exuberance of dim, involute, and undisciplined historical essays upon classical thermodynamics reflects the confusion of the theory itself. Thermodynamics, despite its long history, has never had the benefit of a magisterial synthesis like that which EULER gave to hydrodynamics in 1757 or that which MAXWELL gave to electromagnetism in 1873; the expositions in the works of discovery in thermodynamics stand a pole apart from the pellucid directness of the notes in which CAUCHY presented his creation and development of the theory of elasticity from 1822 to 1845. Thermodynamics was born in obscurity and disorder, not to say confusion, and there the common presentations of it have remained. With this tractate I aim to provide a simple logical structure for the classical thermodynamics of homogeneous fluid bodies. Like any logical structure, it is only one of many possible ones. I think it is as simple and pretty as can be.
Cellular automata are fully discrete dynamical systems with dynamical variables defined at the nodes of a lattice and taking values in a finite set. Application of a local transition rule at each lattice site generates the dynamics. The interpretation of systems with a large number of degrees of freedom in terms of lattice gases has received considerable attention recently due to the many applications of this approach, e.g. for simulating fluid flows under nearly realistic conditions, for modeling complex microscopic natural phenomena such as diffusion-reaction or catalysis, and for analysis of pattern-forming systems. The discussion in this book covers aspects of cellular automata theory related to general problems of information theory and statistical physics, lattice gas theory, direct applications, problems arising in the modeling of microscopic physical processes, complex macroscopic behavior (mostly in connection with turbulence), and the design of special-purpose computers.
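As a concrete illustration of "a local transition rule applied at each lattice site" (a minimal example of my own, not code from the book), the sketch below runs a one-dimensional binary cellular automaton on a periodic lattice, using Wolfram's rule 90 as an arbitrary choice of rule.

```python
# Minimal 1D cellular automaton: binary cells on a periodic lattice, updated
# in parallel by a local rule on (left, centre, right).  Rule 90 is just an
# example; any of the 256 elementary rules can be passed in.
import numpy as np

def step(cells, rule=90):
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    code = 4 * left + 2 * cells + right          # neighbourhood as a 3-bit number
    table = (rule >> np.arange(8)) & 1           # rule number unpacked into a lookup table
    return table[code]

cells = np.zeros(64, dtype=int)
cells[32] = 1                                    # single seed cell
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```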
Lectures on Non-linear Plasma Kinetics is an introduction to modern non-linear plasma physics showing how many of the techniques of modern non-linear physics find applications in plasma physics and how, in turn, the results of this research find applications in astrophysics. Emphasis is given to explaining the physics of nonlinear processes and the radical change of cross-sections by collective effects. The author discusses new nonlinear phenomena involving the excitation of coherent nonlinear structures and the dynamics of their random motions in relation to new self-organization processes. He also gives a detailed description of applications of the general theory to various research fields, including the interaction of powerful radiation with matter, controlled thermonuclear research, etc.
by W. J. Freeman. These two volumes on "Brain Oscillations" appear at a most opportune time. As the "Decade of the Brain" draws to its close, brain science is coming to terms with its ultimate problem: understanding the mechanisms by which the immense number of neurons in the human brain interact to produce the higher cognitive functions. The ideas, concepts, methods, interpretations and examples, which are presented here in voluminous detail by a world-class authority in electrophysiology, summarize the intellectual equipment that will be required to construct satisfactory solutions to the problem. Neuroscience is ripe for change. The last revolution of ideas took place in the middle of the century now ending, when the field took a sharp turn into a novel direction. During the preceding five decades the prevailing view, carried forward from the 19th century, was that neurons are the carriers of nerve energy, either in chemical or electrical forms (Freeman, 1995). That point of view was enormously productive in terms of coming to understand the chemical basis for synaptic transmission, the electrochemistry of the action potential, the ionic mechanisms of membrane currents and gates, the functional neuroanatomy that underlies the hierarchy of reflexes, and the neural fields and their resonances that support Gestalt phenomena. No better testimony can be given of the power of the applications of this approach than to point out that it provides the scientific basis for contemporary neurology, neuropsychiatry, and brain imaging.
Our aim in this book is to present and enlarge upon those aspects of parallel computing that are needed by practitioners of computational science. Today almost all classical sciences, such as mathematics, physics, chemistry and biology, employ numerical methods to help gain insight into nature. In addition to the traditional numerical methods, such as matrix inversions and the like, a whole new field of computational techniques has come to assume central importance, namely the numerical simulation methods. These methods are much less fully developed than those which are usually taught in a standard numerical mathematics course. However, they form a whole new set of tools for research in the physical sciences and are applicable to a very wide range of problems. At the same time there have been not only enormous strides forward in the speed and capability of computers but also dramatic new developments in computer architecture, and particularly in parallel computers. These improvements offer exciting prospects for computer studies of physical systems, and it is the new techniques and methods connected with such computer simulations that we seek to present in this book, particularly in the light of the possibilities opened up by parallel computers. It is clearly not possible at this early stage to write a definitive book on simulation methods and parallel computing.
For four decades, information theory has been viewed almost exclusively as a theory based upon the Shannon measure of uncertainty and information, usually referred to as Shannon entropy. Since the publication of Shannon's seminal paper in 1948, the theory has grown extremely rapidly and has been applied with varied success in almost all areas of human endeavor. At this time, the Shannon information theory is a well established and developed body of knowledge. Among its most significant recent contributions have been the use of the complementary principles of minimum and maximum entropy in dealing with a variety of fundamental systems problems such as predictive systems modelling, pattern recognition, image reconstruction, and the like. Since its inception in 1948, the Shannon theory has been viewed as a restricted information theory. It has often been argued that the theory is capable of dealing only with syntactic aspects of information, but not with its semantic and pragmatic aspects. This restriction was considered a virtue by some experts and a vice by others. More recently, however, various arguments have been made that the theory can be appropriately modified to account for semantic aspects of information as well. Some of the most convincing arguments in this regard are included in Fred Dretske's Knowledge and the Flow of Information (The M.I.T. Press, Cambridge, Mass., 1981) and in this book by Guy Jumarie.
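For reference, the Shannon measure and the maximum-entropy principle mentioned above can be written as follows (standard definitions in my own notation, not text from the book):

```latex
% Standard definitions, quoted from common usage rather than from the book.
\[
  H(p_1,\dots,p_n) \;=\; -\sum_{i=1}^{n} p_i \log p_i ,
\]
% Maximum-entropy modelling: choose the distribution of largest entropy that
% satisfies the known constraints (normalization and measured expectations).
\[
  \max_{p}\, H(p)
  \quad\text{subject to}\quad
  \sum_{i} p_i = 1, \qquad \sum_{i} p_i\, g_k(x_i) = \mu_k , \quad k = 1,\dots,m .
\]
```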
Physicists firmly believe that the differential equations of nature should be hyperbolic so as to exclude action at a distance; yet the equations of irreversible thermodynamics - those of Navier-Stokes and Fourier - are parabolic. This incompatibility between the expectation of physicists and the classical laws of thermodynamics has prompted the formulation of extended thermodynamics. After describing the motifs and early evolution of this new branch of irreversible thermodynamics, the authors apply the theory to monatomic gases, mixtures of gases, relativistic gases, and "gases" of phonons and photons. The discussion brings into perspective the various phenomena called second sound, such as heat propagation, propagation of shear stress and concentration, and the second sound in liquid helium. The formal mathematical structure of extended thermodynamics is laid out, and the theory is shown to be fully compatible with the kinetic theory of gases. The study closes with the testing of extended thermodynamics through the exploitation of its predictions for measurements of light scattering and sound propagation.
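To make the parabolic-versus-hyperbolic contrast concrete, the standard textbook comparison is the following (an illustration in my own notation, not an excerpt from the book): Fourier's law yields a parabolic heat equation with infinite signal speed, while the Maxwell-Cattaneo relaxation law, the prototype of extended thermodynamics, yields a hyperbolic equation that supports second sound.

```latex
% Illustration with standard results; alpha is the thermal diffusivity and
% tau the relaxation time of the heat flux.
\[
  \mathbf{q} = -\kappa\,\nabla T
  \;\;\Longrightarrow\;\;
  \frac{\partial T}{\partial t} = \alpha\,\nabla^{2} T
  \qquad \text{(parabolic)},
\]
\[
  \tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\kappa\,\nabla T
  \;\;\Longrightarrow\;\;
  \tau\,\frac{\partial^{2} T}{\partial t^{2}} + \frac{\partial T}{\partial t}
  = \alpha\,\nabla^{2} T
  \qquad \text{(hyperbolic, signal speed } \sqrt{\alpha/\tau}\,\text{)}.
\]
```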
Fuzzy technology has emerged as one of the most exciting new concepts available. Fuzzy Logic and its Applications... covers a wide range of the theory and applications of fuzzy logic and related systems, including industrial applications of fuzzy technology, implementing human intelligence in machines and systems. There are four main themes: intelligent systems, engineering, mathematical foundations, and information sciences. Both academics and the technical community will learn how and why fuzzy logic is appreciated in the conceptual, design and manufacturing stages of intelligent systems, gaining an improved understanding of the basic science and the foundations of human reasoning.
For a system consisting of a random medium with rough boundaries, the governing (Bethe-Salpeter) equation for boundary-value transport problems can be written in a form such that the medium and the boundaries are treated on an equal footing. This enables several expressions for the solution to be obtained by interchanging the roles of the medium and the boundaries, thus allowing the most convenient one to be selected according to the specific situation and the information required. This book presents a unified theory based on the Bethe-Salpeter equation, with particular attention being paid to: boundary-value problems of transport, layer problems, a fixed scatterer embedded in a bounded random medium, construction of an optical scattering matrix for a complete system, and optical wave propagation in a turbulent medium. The last topic is treated in terms of, first, moment equations combined with the cluster expansion and, second, the two-scale method based on the Lagrange variational principle.
"Principles of Statistical Radiophysics" is a four-volume series that introduces the newcomer to the theory of random functions. It aims at providing the background necessary to understand papers and monographs on the subject and to carry out independent research in the fields where fluctuations are of importance, e.g. radiophysics, optics, astronomy, and acoustics. Volume 3, "Elements of Random Fields," gives the basic mathematical definitions, general properties and specific forms of random fields, the generalization from correlation theory to random fields. It deals with stochastic partial differential equations, wave scattering at a chaotic screen, single scattering in random media and thermal fluctuations and radiation of electromagnetic fields.
The basic subjects and main topics covered by this book are: (1) Physics of Black Holes (classical and quantum); (2) Thermodynamics, entropy and internal dynamics; (3) Creation of particles and evaporation; (4) Mini black holes; (5) Quantum mechanics of black holes in curved spacetime; (6) The role of spin and torsion in black hole physics; (7) Equilibrium geometry and membrane paradigm; (8) Black holes in string and superstring theory; (9) Strings, quantum gravity and black holes; (10) The problem of singularity; (11) Astrophysics of black holes; (12) Observational evidence of black holes. The book reveals the deep connection between gravitational, quantum and statistical physics, and also the importance of black hole behaviour in the very early universe. An important new point discussed concerns the introduction of spin into the physics of black holes, showing its central role when correctly put into the Einstein equations through the geometric concept of torsion; this leads to the new concept of a time-temperature uncertainty relation, minimal time, minimal entropy, quantization of entropy, and the connection of black holes with wormholes. Besides theoretical aspects, the reader will also find observational evidence for black holes in active galactic nuclei, in binary X-ray sources and in supernova remnants. The book will thus interest physicists, astronomers, and astrophysicists at different levels of their career who specialize in classical properties, quantum processes, statistical thermodynamics, numerical collapse, observational evidence, general relativity and other related problems.
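For orientation, the link between gravitational, quantum and statistical physics mentioned above is usually summarized by the Hawking temperature and Bekenstein-Hawking entropy of a Schwarzschild black hole (standard relations, not excerpts from the book):

```latex
% Standard relations for a Schwarzschild black hole of mass M.
\[
  T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}},
  \qquad
  S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3} A}{4 G \hbar},
  \qquad
  A = 16\pi \left(\frac{G M}{c^{2}}\right)^{2}.
\]
```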
In this monograph, a statistical description of natural phenomena is used to develop an information processing system capable of modeling non-linear relationships between sensory data. The system, based on self-organized, optimal preservation of empirical information, applies these relationships for prediction and adaptive control.
This volume contains most of the invited papers presented at the International Workshop on Synergetics, Schloss Elmau, Bavaria, May 2 to May 7, 1977. This workshop followed an International Symposium on Synergetics at Schloss Elmau, 1972, and an International Summer School at Erice, Sicily, 1974. Synergetics is a rather new field of interdisciplinary research which studies the self-organized behavior of systems leading to the formation of structures and functioning. Indeed the whole universe seems to be organized, with pronounced structures ranging from spiral galaxies down to living cells. Furthermore, very many of the most interesting phenomena occur in systems which are far from thermal equilibrium. Synergetics in its present form focuses its attention on those phenomena where dramatic changes occur on a macroscopic scale. Here indeed Synergetics was able to reveal profound analogies between systems in different disciplines ranging from physics to sociology. This volume contains contributions from various fields, but the reader will easily discover their common goal. Not only in the natural sciences but also in ecology, sociology, and economics, man is confronted with the problems of complex systems. The principles and analogies unearthed by Synergetics will certainly be very helpful in coping with such difficult problems. I use this opportunity to thank the Volkswagenwerk Foundation for its support of the project Synergetics and in particular for sponsoring the International Workshop on Synergetics.
Neurobiology research suggests that information can be represented by the location of an activity spot in a population of cells ('place coding'), and that this information can be processed by means of networks of interconnections. Place Coding in Analog VLSI defines a representation convention of similar flavor intended for analog integrated circuit design. It investigates its properties and suggests ways to build circuits on the basis of this coding scheme. In this electronic version of place coding, numbers are represented by the state of an array of nodes called a map, and computation is carried out by a network of links. In the simplest case, a link is just a wire connecting a node of an input map to a node of an output map. In other cases, a link is an elementary circuit cell. Networks of links are somewhat reminiscent of look-up tables in that they hardwire an arbitrary function of one or several variables. Interestingly, these structures are also related to fuzzy rules, as well as to some types of artificial neural networks. The place coding approach provides several substantial benefits over conventional analog design: networks of links can be synthesized by a simple procedure whatever the function to be computed; place coding is tolerant to perturbations and noise in current-mode implementations; and this tolerance to noise implies that the fundamental power dissipation limits of conventional analog circuits can be overcome by using place coding. The place coding approach is illustrated by three integrated circuits computing non-linear functions of several variables. The simplest one is made up of 80 links and achieves submicrowatt power consumption in continuous operation. The most complex one incorporates about 1800 links for a power consumption of 6 milliwatts, and controls the operation of an active vision system with a moving field of view. Place Coding in Analog VLSI is primarily intended for researchers and practicing engineers involved in analog and digital hardware design (especially bio-inspired circuits). The book is also a valuable reference for researchers and students in neurobiology, neuroscience, robotics, fuzzy logic and fuzzy control.
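A rough software analogy of the scheme (my own sketch, not circuitry or code from the book): a value is represented by where activity sits in an array of nodes, and a function of that value is "hardwired" as links from input nodes to output nodes, much like a look-up table. The map size, bump width and the example function below are arbitrary assumptions.

```python
# Sketch of the place-coding idea in software.  All parameters are
# illustrative assumptions, not values from the book.
import numpy as np

N = 64                                            # nodes per map (assumed)

def encode(x, lo=0.0, hi=1.0, width=2.0):
    """Represent x in [lo, hi] as a bump of activity over the N nodes."""
    pos = (x - lo) / (hi - lo) * (N - 1)
    return np.exp(-0.5 * ((np.arange(N) - pos) / width) ** 2)

def decode(activity, lo=0.0, hi=1.0):
    """Read the value back as the activity-weighted node position."""
    centers = np.linspace(lo, hi, N)
    return float(activity @ centers / activity.sum())

# "Network of links": wire each input node to the output node coding f(value
# at that node).  Here f(x) = x**2 serves as an arbitrary example function.
f = lambda x: x ** 2
links = np.zeros((N, N))
for i, v in enumerate(np.linspace(0.0, 1.0, N)):
    links[i, int(round(f(v) * (N - 1)))] = 1.0

x = 0.7
y = decode(encode(x) @ links)                     # propagate activity through the links
print(y, f(x))                                    # decoded value is close to 0.49
```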
This book contains the lectures given at the Conference on Dynamics and Randomness held at the Centro de Modelamiento Matematico of the Universidad de Chile from December 11th to 15th, 2000. This meeting brought together mathematicians, theoretical physicists, theoretical computer scientists, and graduate students interested in fields related to probability theory, ergodic theory, and symbolic and topological dynamics. We would like to express our gratitude to all the participants of the conference and to the people who contributed to its organization. In particular, to Pierre Collet, Bernard Host and Mike Keane for their scientific advice. We want to thank especially the authors of each chapter for their well-prepared manuscripts and the stimulating lectures they gave at Santiago. We are also indebted to our sponsors and supporting institutions, whose interest and help was essential to organize this meeting: ECOS-CONICYT, FONDAP Program in Applied Mathematics, French Cooperation, Fundacion Andes, Presidential Fellowship and Universidad de Chile. We are grateful to Ms. Gladys Cavallone for her excellent work during the preparation of the meeting as well as for the considerable task of unifying the typography of the different chapters of this book.
Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question. 'The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders.

It isn't that they can't see the solution. It is that they can't see the problem. G. K. Chesterton, The Scandal of Father Brown, 'The Point of a Pin'.

Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the "tree" of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowski lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering; and prediction and electrical engineering can use Stein spaces. And in addition to this there are such new emerging subdisciplines as "experimental mathematics", "CFD", "completely integrable systems", and "chaos, synergetics and large-scale order", which are almost impossible to fit into the existing classification schemes. They draw upon widely different sections of mathematics.
Substances possessing heterogeneous microstructure on the nanometer and micron scales are scientifically fascinating and technologically useful. Examples of such substances include liquid crystals, microemulsions, biological matter, polymer mixtures and composites, vycor glasses, and zeolites. In this volume, an interdisciplinary group of researchers report their developments in this field. Topics include statistical mechanical free energy theories which predict the appearance of various microstructures, the topological and geometrical methods needed for a mathematical description of the subparts and dividing surfaces of heterogeneous materials, and modern computer-aided mathematical models and graphics for effective exposition of the salient features of microstructured materials.
Controlling Chaos achieves three goals: the suppression, synchronisation and generation of chaos, each of which is the focus of a separate part of the book. The text deals with the well-known Lorenz, Roessler and Henon attractors and the Chua circuit and with less celebrated novel systems. Modelling of chaos is accomplished using difference equations and ordinary and time-delayed differential equations. The methods directed at controlling chaos benefit from the influence of advanced nonlinear control theory: inverse optimal control is used for stabilization; exact linearization for synchronization; and impulsive control for chaotification. Notably, a fusion of chaos and fuzzy systems theories is employed. Time-delayed systems are also studied. The results presented are general for a broad class of chaotic systems. This monograph is self-contained with introductory material providing a review of the history of chaos control and the necessary mathematical preliminaries for working with dynamical systems.
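As a small, self-contained illustration of the kind of system treated (generic demonstration code, not an algorithm from the book), the sketch below integrates the Lorenz equations named above and shows the sensitive dependence on initial conditions that makes such systems candidates for control:

```python
# Generic illustration: integrate the Lorenz system with a fixed-step RK4
# scheme and watch two nearby trajectories separate.  The classic parameter
# values sigma = 10, rho = 28, beta = 8/3 are used.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])                # tiny perturbation
dt = 0.01
for _ in range(4000):                             # 40 time units
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
print("separation of the two trajectories:", np.linalg.norm(a - b))
```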
In this volume we continue the logical development of the work begun in Volume I, and the equilibrium theory now becomes a very special case of the exposition presented here. Once a departure is made from equilibrium, however, the problems become deeper and more subtle-and unlike the equilibrium theory, many aspects of nonequilibrium phenomena remain poorly understood. For over a century a great deal of effort has been expended on the attempt to develop a comprehensive and sensible description of nonequilibrium phenomena and irreversible processes. What has emerged is a hodgepodge of ad hoc constructs that do little to provide either a firm foundation, or a systematic means for proceeding to higher levels of understanding with respect to ever more complicated examples of nonequilibria. Although one should rightfully consider this situation shameful, the amount of effort invested testifies to the degree of difficulty of the problems. In Volume I it was emphasized strongly that the traditional exposition of equilibrium theory lacked a certain cogency which tended to impede progress with extending those considerations to more complex nonequilibrium problems. The reasons for this were adduced to be an unfortunate reliance on ergodicity and the notions of kinetic theory, but in the long run little harm was done regarding the treatment of equilibrium problems. On the nonequilibrium level the potential for disaster increases enormously, as becomes evident already in Chapter 1.
Correlation Effects in Low-Dimensional Electron Systems describes recent developments in theoretical condensed-matter physics, emphasizing exact solutions in one dimension including conformal-field theoretical approaches, the application of quantum groups, and numerical diagonalization techniques. Various key properties are presented for two-dimensional, highly correlated electron systems.
This text takes readers in a clear and progressive format from simple to recent and advanced topics in pure and applied probability, such as contraction and annealed properties of non-linear semigroups, functional entropy inequalities, empirical process convergence, increasing propagation of chaos, central limit and Berry-Esseen type theorems, as well as large deviation principles for strong topologies on path-distribution spaces. Topics also include a body of powerful branching and interacting particle methods.
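As one concrete example of the results listed, the classical Berry-Esseen bound quantifies the rate of convergence in the central limit theorem (a standard statement in my notation, not a quotation from the book): for i.i.d. centred variables with variance sigma^2 and finite third absolute moment rho,

```latex
% Classical Berry-Esseen bound; C is a universal constant and Phi the standard
% normal distribution function.
\[
  \sup_{x \in \mathbb{R}}
  \left|\, \mathbb{P}\!\left( \frac{X_1 + \cdots + X_n}{\sigma\sqrt{n}} \le x \right)
         - \Phi(x) \right|
  \;\le\; \frac{C\,\rho}{\sigma^{3}\sqrt{n}} .
\]
```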
Many novel cooperative phenomena found in a variety of systems studied by scientists can be treated using the uniting principles of synergetics. Examples are frustrated and random systems, polymers, spin glasses, neural networks, chemical and biological systems, and fluids. In this book attention is focused on two main problems. First, how local, topological constraints (frustrations) can cause macroscopic cooperative behavior: related ideas initially developed for spin glasses are shown to play key roles also for optimization and the modeling of neural networks. Second, the dynamical constraints that arise from the nonlinear dynamics of the systems: the discussion covers turbulence in fluids, pattern formation, and conventional 1/f noise. The volume will be of interest to anyone wishing to understand the current development of work on complex systems, which is presently one of the most challenging subjects in statistical and condensed matter physics. |