Adaptive Resonance Theory Microchips describes circuit strategies resulting in efficient and functional adaptive resonance theory (ART) hardware systems. While ART algorithms have been developed in software by their creators, this is the first book that addresses efficient VLSI design of ART systems. All systems described in the book have been designed and fabricated (or are nearing completion) as VLSI microchips in anticipation of the impending proliferation of ART applications to autonomous intelligent systems. To accommodate these systems, the book not only provides circuit design techniques, but also validates them through experimental measurements. The book also includes a tutorial chapter describing four ART architectures (ART1, ARTMAP, Fuzzy-ART and Fuzzy-ARTMAP) and providing easily understandable MATLAB code examples that implement these four algorithms in software. In addition, an entire chapter is devoted to other potential applications for real-time data clustering and category learning.
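For orientation, the core of the ART1 algorithm covered in that tutorial chapter can be sketched in a few lines (a minimal fast-learning sketch for binary inputs, assuming the standard choice function and vigilance test; it is not the book's MATLAB code):

```python
import numpy as np

def art1(inputs, rho=0.5, alpha=0.001):
    """Minimal fast-learning ART1 clustering of binary patterns.

    inputs: array of shape (n_patterns, n_features) with 0/1 integer entries.
    rho:    vigilance parameter in (0, 1]; higher values create more categories.
    Returns the category index assigned to each pattern and the learned templates.
    """
    categories, labels = [], []
    for I in inputs:
        # Category choice T_j = |I & w_j| / (alpha + |w_j|), tried best-first.
        order = np.argsort([-(I & w).sum() / (alpha + w.sum()) for w in categories])
        for j in order:
            match = (I & categories[j]).sum() / max(I.sum(), 1)
            if match >= rho:                        # vigilance test passed: resonance
                categories[j] = I & categories[j]   # fast learning: intersect templates
                labels.append(j)
                break
        else:                                       # every category was reset: create a new one
            categories.append(I.copy())
            labels.append(len(categories) - 1)
    return labels, categories

patterns = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]])
print(art1(patterns)[0])   # e.g. [0, 0, 1, 1]: two learned categories
```

Raising the vigilance rho splits these four patterns into more categories, which is the behaviour the vigilance parameter is meant to control.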
This sixth volume of the International Workshop on Instabilities and Nonequilibrium Structures is dedicated to the memory of my friend Walter Zeller, Professor of the Universidad Católica de Valparaíso and Vice-Director of the Workshop. Walter Zeller was much more than an organizer of this meeting: his enthusiasm, dedication and critical views were many times the essential ingredients needed to continue with a task which on occasions faced difficulties and incomprehension. It is in great part due to him that the Workshop has acquired its present tradition, maturity and international recognition. This volume should have been coedited by Walter, and it was with deep emotion that I learned that his disciples Javier Martínez and Rolando Tiemann wished, as a last homage to their professor and friend, to coedit this book. I could not finish these lines without thinking of Mrs Adriana Gamonal de Zeller; may she find in this book the admiration and gratitude felt towards her husband by those who were his disciples, colleagues and friends.
As our title suggests, there are two aspects to the subject of this book. The first is the mathematical investigation of the dynamics of infinite systems of interacting particles and the description of the time evolution of their states. The second is the rigorous derivation of kinetic equations starting from the results of the aforementioned investigation. As is well known, statistical mechanics started in the last century with some papers written by Maxwell and Boltzmann. Although some of their statements seemed statistically obvious, we must prove that they do not contradict what mechanics predicts. In some cases, in particular for equilibrium states, it turns out that mechanics easily provides the required justification. However, things are not so easy if we take a step forward and consider a gas that is not in equilibrium, as is, e.g., the case for the air around a flying vehicle. Questions of this kind have been asked since the dawn of the kinetic theory of gases, especially when certain results appeared to lead to paradoxical conclusions. Today this matter is rather well understood and a rigorous kinetic theory is emerging. The importance of these developments stems not only from the need to provide a careful foundation for such a basic physical theory, but also from the desire to exhibit a prototype of a mathematical construct central to the theory of non-equilibrium phenomena of macroscopic size.
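For orientation (standard background, not text from the book), the prototypical kinetic equation whose rigorous derivation is at stake is the Boltzmann equation for the one-particle distribution f(x, v, t); for hard spheres it reads, schematically,

```latex
\partial_t f + v \cdot \nabla_x f
  = \int_{\mathbb{R}^3}\!\int_{S^2} \bigl|(v - v_*)\cdot n\bigr|
    \bigl[ f(x, v', t)\, f(x, v_*', t) - f(x, v, t)\, f(x, v_*, t) \bigr]\, dn\, dv_* ,
```

where v' and v_*' are the outgoing velocities of an elastic collision with impact direction n; the rigorous problem is to obtain this equation from Newtonian particle dynamics in the appropriate (Boltzmann-Grad) limit.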
We have classified the articles presented here in two Sections according to their general content. In Part I we have included papers which deal with statistical mechanics, mathematical aspects of dynamical systems and stochastic effects in nonequilibrium systems. Part II is devoted mainly to instabilities and self-organization in extended nonequilibrium systems. The study of partial differential equations by numerical and analytic methods plays a great role here and many works are related to this subject. Most recent developments in this fascinating and rapidly growing area are discussed. PART I: STATISTICAL MECHANICS AND RELATED TOPICS. Nonequilibrium Potentials for Period Doubling, by R. Graham and A. Hamm (Fachbereich Physik, Universität Gesamthochschule Essen, D-4300 Essen 1, Germany). Abstract: In this lecture we consider the influence of weak stochastic perturbations on period doubling using nonequilibrium potentials, a concept which is explained in section 1 and formulated for the case of maps in section 2. In section 3 nonequilibrium potentials are considered for the family of quadratic maps (a) at the Feigenbaum 'attractor' with Gaussian noise, (b) for more general non-Gaussian noise, and (c) for the case of a strange repeller. Our discussion will be informal. A more detailed account of this and related material can be found in our papers [1-3] and in the reviews [4, 5], where further references to related work are also given.
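The setting of that abstract can be made concrete with a small simulation (an illustration only, with parameter values chosen by us, not the authors' formalism): iterate the quadratic map with weak additive Gaussian noise and histogram the resulting stationary distribution, whose weak-noise asymptotics is what the nonequilibrium potential describes.

```python
import numpy as np

# Quadratic map x_{n+1} = 1 - a*x_n^2 with weak additive Gaussian noise.
# The parameter values below are illustrative choices, not taken from the lecture.
a, sigma, n_steps = 1.40115, 1e-3, 200_000   # a is near the Feigenbaum accumulation point
rng = np.random.default_rng(0)
noise = sigma * rng.normal(size=n_steps)

x, samples = 0.1, []
for n in range(n_steps):
    x = 1.0 - a * x * x + noise[n]
    if n > 1_000:                            # discard the transient
        samples.append(x)

# Histogram of the noisy stationary distribution; in the weak-noise limit,
# -sigma^2 * log(density) is related to the nonequilibrium potential.
density, edges = np.histogram(samples, bins=200, density=True)
print("support: [%.3f, %.3f], peak density: %.1f" % (edges[0], edges[-1], density.max()))
```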
Physicists, when modelling physical systems with a large number of degrees of freedom, and statisticians, when performing data analysis, have developed their own concepts and methods for making the `best' inference. But are these methods equivalent, or not? What is the state of the art in making inferences? The physicists want answers. More: neural computation demands a clearer understanding of how neural systems make inferences; the theory of chaotic nonlinear systems as applied to time series analysis could profit from the experience already gained by the statisticians; and finally, there is a long-standing conjecture that some of the puzzles of quantum mechanics are due to our incomplete understanding of how we make inferences. This is matter enough to stimulate the writing of a book such as the present one. But other considerations also arise, such as the maximum entropy method and Bayesian inference, information theory and the minimum description length. Finally, it is pointed out that an understanding of human inference may require input from psychologists. This lively debate, which is of acute current interest, is well summarized in the present work.
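As a one-line illustration of the maximum entropy method mentioned above (a textbook example, not an excerpt from the book): maximizing the entropy subject to normalization and a fixed mean energy yields the Gibbs distribution,

```latex
\max_{\{p_i\}} \Bigl( -\sum_i p_i \ln p_i \Bigr)
\quad \text{subject to} \quad \sum_i p_i = 1, \;\; \sum_i p_i E_i = \bar{E}
\;\Longrightarrow\;
p_i = \frac{e^{-\beta E_i}}{\sum_j e^{-\beta E_j}},
```

where the Lagrange multiplier beta is fixed by the mean-energy constraint. This is precisely the point of contact between statistical physics and statistical inference that the book explores.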
Dynamic Neural Field Theory for Motion Perception provides a new theoretical framework that permits a systematic analysis of the dynamic properties of motion perception. This framework uses dynamic neural fields as a key mathematical concept. The author demonstrates how neural fields can be applied to the analysis of perceptual phenomena and their underlying neural processes. Similar principles also form a basis for the design of computer vision systems and of artificially behaving systems. The book discusses in detail the application of this theoretical approach to motion perception and will be of great interest to researchers in vision science, psychophysics, and biological visual systems.
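For readers unfamiliar with the concept, a dynamic neural field is typically written in the form of the Amari field equation (one standard formulation; the book's specific models may differ):

```latex
\tau \, \frac{\partial u(x,t)}{\partial t}
  = -u(x,t) + \int w(x - x')\, f\bigl(u(x',t)\bigr)\, dx' + S(x,t),
```

where u(x,t) is the activation of the field at location x, w is a lateral interaction kernel, f is a sigmoidal firing-rate function, and S(x,t) is the external (stimulus) input.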
Human Face Recognition Using Third-Order Synthetic Neural Networks explores the viability of applying high-order synthetic neural network technology to transformation-invariant recognition of complex visual patterns. High-order networks require little training data (hence, short training times) and have been used to perform transformation-invariant recognition of relatively simple visual patterns, achieving very high recognition rates. These successes inspired the move to more practical problems whose patterns are grayscale rather than binary (binary examples being alphanumeric characters or aircraft silhouettes) and more complex than purely edge-extracted images; human face recognition is such a problem. Human Face Recognition Using Third-Order Synthetic Neural Networks serves as an excellent reference for researchers and professionals working on applying neural network technology to the recognition of complex visual patterns.
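For context, a third-order unit forms its response from products of input triples (an illustrative sketch, not the book's networks; the transformation invariance usually comes from tying together the weights of geometrically equivalent triples, which is omitted here):

```python
import numpy as np
from itertools import combinations

def third_order_unit(x, weights, triples):
    """Response of one third-order unit: a weighted sum of input triple products, squashed by a sigmoid."""
    s = sum(w * x[i] * x[j] * x[k] for w, (i, j, k) in zip(weights, triples))
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=8).astype(float)       # a tiny binary input pattern
triples = list(combinations(range(len(x)), 3))     # all input triples (i < j < k)
weights = rng.normal(scale=0.1, size=len(triples)) # illustrative random weights

print("unit output:", third_order_unit(x, weights, triples))
```

The combinatorial growth of the triple set is also why high-order networks are usually applied to coarse input representations.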
In this paper we shall discuss the construction of formal short-wave asymptotic solutions of problems of mathematical physics. The topic is very broad. It can somewhat conveniently be divided into three parts: 1. Finding the short-wave asymptotics of a rather narrow class of problems, which admit a solution in an explicit form, via formulas that represent this solution. 2. Finding formal asymptotic solutions of equations that describe wave processes by basing them on some ansatz or other. We explain what item 2 means. Giving an ansatz is knowing how to give a formula for the desired asymptotic solution in the form of a series or some expression containing a series, where the analytic nature of the terms of these series is indicated up to functions and coefficients that are undetermined at the first stage of consideration. The second stage is to determine these functions and coefficients using a direct substitution of the ansatz in the equation, the boundary conditions and the initial conditions. Sometimes it is necessary to use different ansätze in different domains, and in the overlapping parts of these domains the formal asymptotic solutions must be asymptotically equivalent (the method of matched asymptotic expansions). The basis for success in the search for formal asymptotic solutions is a suitable choice of ansätze. The study of the asymptotics of explicit solutions of special model problems allows us to "surmise" what the correct ansätze are for the general solution.
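A standard example of this two-stage procedure (the WKB ansatz for the Helmholtz equation, given here as an illustration rather than taken from the paper): for the short-wave problem Delta u + k^2 n^2(x) u = 0, one seeks

```latex
u(x) \sim e^{i k S(x)} \sum_{m \ge 0} \frac{A_m(x)}{(ik)^m},
```

where the phase S and the amplitudes A_m are the undetermined functions of the ansatz. Substituting and collecting powers of k gives, at the two leading orders, the eikonal equation |\nabla S|^2 = n^2(x) for the phase and the transport equation 2\nabla S \cdot \nabla A_0 + A_0 \,\Delta S = 0 for the leading amplitude, which is exactly the "second stage" of determining the unknowns by direct substitution.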
One of the most spectacular consequences of the description of the superfluid condensate in superfluid He or in superconductors as a single macroscopic quantum state is the quantization of circulation, resulting in quantized vortex lines. The book treats superfluid He3, superfluid He4 and superconductors on an equal footing. The reader will find the essential introductory chapters and the most recent theoretical and experimental progress in our understanding of the vortex state in both superconductors and superfluids, drawn from lectures given by leading experts in the field, both experimentalists and theoreticians, who gathered in Cargèse for a NATO ASI. The peculiar features related to short coherence lengths, 2D geometry, high temperatures, disorder, and pinning are thoroughly discussed.
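For reference, the quantization referred to here takes the familiar forms (standard results, stated schematically): in a neutral superfluid the circulation of the superfluid velocity around a vortex is quantized, while in a superconductor it is the magnetic flux through a loop that is quantized,

```latex
\oint \mathbf{v}_s \cdot d\boldsymbol{\ell} = n\,\frac{h}{m},
\qquad
\Phi = n\,\Phi_0, \quad \Phi_0 = \frac{h}{2e},
```

with n an integer and m the mass of the condensing object (a He4 atom, or an atom pair in He3).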
Polymers are substances made of macromolecules formed by thousands of atoms organized in one (homopolymers) or more (copolymers) groups that repeat themselves to form linear or branched chains, or lattice structures. The concept of the polymer traces back to the 1920s and is one of the most significant ideas of the last century. It has given great impulse not only to industry but also to fundamental research, including the life sciences. Macromolecules are made of small molecules known as monomers. The process that brings monomers into polymers is known as polymerization. A fundamental contribution to the industrial production of polymers, particularly polypropylene and polyethylene, is due to the Nobel prize winners Giulio Natta and Karl Ziegler. The ideas of Ziegler and Natta date back to 1954, and the process has been improved continuously over the years, particularly concerning the design and shaping of the catalysts. Chapter 1 (by A. Fasano) is devoted to a review of some results concerning the modelling of Ziegler-Natta polymerization. The specific example is the production of polypropylene. The process is extremely complex, all studies with relevant mathematical content are fairly recent, and several problems are still open.
The motion of a particle in a random potential in two or more dimensions is chaotic, and the trajectories in deterministically chaotic systems are effectively random. It is therefore no surprise that there are links between the quantum properties of disordered systems and those of simple chaotic systems. The question is, how deep do the connections go? And to what extent do the mathematical techniques designed to understand one problem lead to new insights into the other? The canonical problem in the theory of disordered mesoscopic systems is that of a particle moving in a random array of scatterers. The aim is to calculate the statistical properties of, for example, the quantum energy levels, wavefunctions, and conductance fluctuations by averaging over different arrays; that is, by averaging over an ensemble of different realizations of the random potential. In some regimes, corresponding to energy scales that are large compared to the mean level spacing, this can be done using diagrammatic perturbation theory. In others, where the discreteness of the quantum spectrum becomes important, such an approach fails. A more powerful method, developed by Efetov, involves representing correlation functions in terms of a supersymmetric nonlinear sigma-model. This applies over a wider range of energy scales, covering both the perturbative and non-perturbative regimes. It was proved using this method that energy level correlations in disordered systems coincide with those of random matrix theory when the dimensionless conductance tends to infinity.
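That last statement can be illustrated numerically (a random-matrix-theory illustration, not an excerpt from the book): the nearest-neighbour spacing distribution of eigenvalues of large real symmetric random (GOE) matrices follows the Wigner surmise p(s) = (pi/2) s exp(-pi s^2/4), which is the same level statistics exhibited by disordered conductors in the limit of large dimensionless conductance.

```python
import numpy as np

def semicircle_cdf(x):
    """CDF of the Wigner semicircle law on [-2, 2] (limiting level density of the GOE)."""
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + x * np.sqrt(4.0 - x * x) / (4.0 * np.pi) + np.arcsin(x / 2.0) / np.pi

rng = np.random.default_rng(0)
N, trials, spacings = 200, 50, []
for _ in range(trials):
    a = rng.normal(size=(N, N))
    h = (a + a.T) / np.sqrt(2.0 * N)                 # GOE matrix, eigenvalues in ~[-2, 2]
    e = np.sort(np.linalg.eigvalsh(h))
    unfolded = N * semicircle_cdf(e)                 # rescale so the mean spacing is 1
    spacings.extend(np.diff(unfolded[N // 4: 3 * N // 4]))   # keep the spectral bulk

s = np.asarray(spacings)
# The Wigner surmise predicts <s> = 1 and <s^2> = 4/pi ~ 1.273.
print("mean spacing:", s.mean(), " mean s^2:", (s ** 2).mean(), " Wigner 4/pi:", 4 / np.pi)
```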
Simple random walks - or equivalently, sums of independent random variables - have long been a standard topic of probability theory and mathematical physics. In the 1950s, non-Markovian random-walk models, such as the self-avoiding walk, were introduced into theoretical polymer physics, and gradually came to serve as a paradigm for the general theory of critical phenomena. In the past decade, random-walk expansions have evolved into an important tool for the rigorous analysis of critical phenomena in classical spin systems and of the continuum limit in quantum field theory. Among the results obtained by random-walk methods are the proof of triviality of the φ4 quantum field theory in space-time dimension d ≥ 4, and the proof of mean-field critical behavior for φ4 and Ising models in space dimension d ≥ 4. The principal goal of the present monograph is to present a detailed review of these developments. It is supplemented by a brief excursion to the theory of random surfaces and various applications thereof. This book has grown out of research carried out by the authors mainly from 1982 until the middle of 1985. Our original intention was to write a research paper. However, the writing of such a paper turned out to be a very slow process, partly because of our geographical separation, partly because each of us was involved in other projects that may have appeared more urgent.
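To fix ideas (a toy illustration, unrelated to the rigorous expansions developed in the monograph): the contrast between simple and self-avoiding walks already shows up in the mean-square end-to-end distance, which the following sketch estimates on the square lattice by direct sampling.

```python
import numpy as np

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]
rng = np.random.default_rng(0)

def simple_walk(n):
    """Squared end-to-end distance of an n-step simple random walk on Z^2."""
    pos = np.array([0, 0])
    for _ in range(n):
        pos += STEPS[rng.integers(4)]
    return pos @ pos

def self_avoiding_walk(n, max_tries=1_000_000):
    """Uniform n-step self-avoiding walk via naive rejection sampling (feasible only for small n)."""
    for _ in range(max_tries):
        pos, visited = (0, 0), {(0, 0)}
        for _ in range(n):
            dx, dy = STEPS[rng.integers(4)]
            pos = (pos[0] + dx, pos[1] + dy)
            if pos in visited:
                break                                # walk intersects itself: reject and retry
            visited.add(pos)
        else:
            return pos[0] ** 2 + pos[1] ** 2
    raise RuntimeError("no self-avoiding walk found")

n, samples = 12, 500
print("simple walk  <R^2> ~", np.mean([simple_walk(n) for _ in range(samples)]), "(exact value: n =", n, ")")
print("self-avoiding <R^2> ~", np.mean([self_avoiding_walk(n) for _ in range(samples)]), "(grows faster than n)")
```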
Deng Feng Wang was born February 8, 1965 in Chongqing City, China and died August 15, 1999 while swimming with friends in the Atlantic Ocean off Island Beach State Park, New Jersey. In his brief life, he was to have an influence far beyond his years. On August 12th 2000, The Deng Feng Wang Memorial Conference was held at his alma mater, Princeton University, during which Deng Feng's mentors, collaborators and friends presented scientific talks in a testimonial to his tremendous influence on their work and careers. The first part of this volume contains proceedings contributions from the conference, with plenary talks by Nobel Laureate Professor Phil Anderson of Princeton University and leading Condensed Matter Theorists Professor Piers Coleman of Rutgers University and Professor Christian Gruber of the University of Lausanne. Other talks, given by collaborators, friends and classmates testify to the great breadth of Deng Feng Wang's influence, with remarkable connections shown between seemingly unrelated areas in physics such as Condensed Matter Physics, Superconductivity, One-Dimensional Models, Statistical Physics, Mathematical Physics, Quantum Field Theory, High Energy Theory, Nuclear Magnetic Resonance, Supersymmetry, M-Theory and String Theory, as well as fields outside physics as varied as Oil Drilling, Mixed Signal Circuits and Neurology. The second part of the volume consists of reprints of some of Deng Feng Wang's most important papers in the areas of Condensed Matter Physics, Statistical Physics, Magnetism, Mathematical Physics and Mathematical Finance. This volume represents a fascinating synthesis of a wide variety of topics, and ultimately points to the universality of physics and of science as a whole. As such, it represents a fitting tribute to a remarkable individual, whose tragic death will never erase his enduring influence.
An Analog VLSI System for Stereoscopic Vision investigates the interaction of the physical medium and the computation in both biological and analog VLSI systems by synthesizing a functional neuromorphic system in silicon. In both the synthesis and analysis of the system, a point of view from within the system is adopted rather than that of an omniscient designer drawing a blueprint. This perspective projects the design and the designer into a living landscape. The motivation for a machine-centered perspective is explained in the first chapter. The second chapter describes the evolution of the silicon retina. The retina accurately encodes visual information over orders of magnitude of ambient illumination, using mismatched components that are calibrated as part of the encoding process. The visual abstraction created by the retina is suitable for transmission through a limited bandwidth channel. The third chapter introduces a general method for interchip communication, the address-event representation, which is used for transmission of retinal data. The address-event representation takes advantage of the speed of CMOS relative to biological neurons to preserve the information of biological action potentials using digital circuitry in place of axons. The fourth chapter describes a collective circuit that computes stereo disparity. In this circuit, the processing that corrects for imperfections in the hardware compensates for inherent ambiguity in the environment. The fifth chapter demonstrates a primitive working stereovision system. An Analog VLSI System for Stereoscopic Vision contributes to both computer engineering and neuroscience at a concrete level. Through the construction of a working analog of biological vision subsystems, new circuits for building brain-style analog computers have been developed. Specific neurophysiological and psychophysical results are explained in terms of underlying electronic mechanisms. These examples demonstrate the utility of using biological principles for building brain-style computers and the significance of building brain-style computers for understanding the nervous system.
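The address-event idea can be made concrete with a schematic sketch (ours, not the chip's actual bus protocol): each spike is transmitted as the address of the neuron that fired, time-multiplexed on a shared digital channel, so spike timing is carried implicitly by when the address appears.

```python
def address_event_stream(spike_trains):
    """Merge per-neuron spike times into a single (time, address) event stream.

    spike_trains: dict mapping neuron address -> list of spike times.
    Returns the time-multiplexed address-event representation (AER) of the activity.
    """
    events = ((t, addr) for addr, times in spike_trains.items() for t in times)
    return sorted(events)          # the shared bus carries events in time order

# Toy example: three 'neurons' with hypothetical spike times in milliseconds.
spikes = {0: [1.0, 5.2, 9.1], 1: [2.3, 5.9], 2: [0.4, 7.7]}
for t, addr in address_event_stream(spikes):
    print(f"t = {t:4.1f} ms  ->  address {addr}")
```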
Neural Networks in Telecommunications consists of a carefully edited collection of chapters that provides an overview of a wide range of telecommunications tasks being addressed with neural networks. These tasks range from the design and control of the underlying transport network to the filtering, interpretation and manipulation of the transported media. The chapters focus on specific applications, describe specific solutions and demonstrate the benefits that neural networks can provide. By doing this, the authors demonstrate that neural networks should be another tool in the telecommunications engineer's toolbox. Neural networks offer the computational power of nonlinear techniques, while providing a natural path to efficient massively-parallel hardware implementations. In addition, the ability of neural networks to learn allows them to be used on problems where straightforward heuristic or rule-based solutions do not exist. Together these capabilities mean that neural networks offer unique solutions to problems in telecommunications. For engineers and managers in telecommunications, Neural Networks in Telecommunications provides a single point of access to the work being done by leading researchers in this field, and furnishes an in-depth description of neural network applications.
Over the past five decades researchers have sought to develop a new framework that would resolve the anomalies attributable to a patchwork formulation of relativistic quantum mechanics. This book chronicles the development of a new paradigm for describing relativistic quantum phenomena. What makes the new paradigm unique is its inclusion of a physically measurable, invariant evolution parameter. The resulting theory has been sufficiently well developed in the refereed literature that it is now possible to present a synthesis of its ideas and techniques. My synthesis is intended to encourage and enhance future research, and is presented in six parts. The environment within which the conventional paradigm exists is described in the Introduction. Part I eases the mainstream reader into the ideas of the new paradigm by providing the reader with a discussion that should look very familiar, but contains subtle nuances. Indeed, I try to provide the mainstream reader with familiar "landmarks" throughout the text. This is possible because the new paradigm contains the conventional paradigm as a subset. The foundation of the new paradigm is presented in Part II, followed by numerous applications in the remaining three parts. The reader should notice that the new paradigm handles not only the broad class of problems typically dealt with in conventional relativistic quantum theory, but also contains fertile research areas for both experimentalists and theorists. To avoid developing a theoretical framework without physical validity, numerous comparisons between theory and experiment are provided, and several predictions are made.
Observation, Prediction and Simulation of Phase Transitions in Complex Fluids presents an overview of the phase transitions that occur in a variety of soft-matter systems: colloidal suspensions of spherical or rod-like particles and their mixtures, directed polymers and polymer blends, colloid-polymer mixtures, and liquid-forming mesogens. This modern and fascinating branch of condensed matter physics is presented from three complementary viewpoints. The first section, written by experimentalists, emphasises the observation of basic phenomena (by light scattering, for example). The second section, written by theoreticians, focuses on the necessary theoretical tools (density functional theory, path integrals, free energy expansions). The third section is devoted to the results of modern simulation techniques (Gibbs ensemble, free energy calculations, configurational bias Monte Carlo). The interplay between the disciplines is clearly illustrated. The book is intended for all those interested in modern research in equilibrium statistical mechanics.
arise automatically as a result of the recursive structure of the task and the continuous nature of the SRN's state space. Elman also introduces a new graphical technique for studying network behavior based on principal components analysis. He shows that sentences with multiple levels of embedding produce state space trajectories with an intriguing self-similar structure. The development and shape of a recurrent network's state space is the subject of Pollack's paper, the most provocative in this collection. Pollack looks more closely at a connectionist network as a continuous dynamical system. He describes a new type of machine learning phenomenon: induction by phase transition. He then shows that under certain conditions, the state space created by these machines can have a fractal or chaotic structure, with a potentially infinite number of states. This is graphically illustrated using a higher-order recurrent network trained to recognize various regular languages over binary strings. Finally, Pollack suggests that it might be possible to exploit the fractal dynamics of these systems to achieve a generative capacity beyond that of finite-state machines.
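The principal-components technique can be sketched in a few lines (purely illustrative, using a randomly initialized recurrent network rather than Elman's trained SRN): record the hidden-state vectors produced while a recurrent network processes a symbol sequence, then project them onto the leading principal components to visualize the state-space trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_symbols, seq_len = 16, 4, 300

# A randomly initialized simple recurrent network, used only to generate
# hidden-state trajectories; no training is performed in this sketch.
W_in = rng.normal(scale=0.5, size=(n_hidden, n_symbols))
W_rec = rng.normal(scale=0.4, size=(n_hidden, n_hidden))

h, states = np.zeros(n_hidden), []
for _ in range(seq_len):
    x = np.eye(n_symbols)[rng.integers(n_symbols)]      # one-hot input symbol
    h = np.tanh(W_in @ x + W_rec @ h)
    states.append(h.copy())
states = np.array(states)

# Principal components of the hidden states via SVD of the centered data.
centered = states - states.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S ** 2 / (S ** 2).sum()
trajectory_2d = centered @ Vt[:2].T        # trajectory in the plane of the top two PCs
print("variance explained by first two PCs:", explained[:2].sum())
print("2-D trajectory shape:", trajectory_2d.shape)
```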
Building on Wilson's renormalization group, the authors have developed a unified approach that not only reproduces known results but also yields new results. A systematic exposition of the contemporary theory of phase transitions, the book includes detailed discussions of phenomena in Heisenberg magnets, granular superconducting alloys, anisotropic systems of dipoles, and liquid-vapor transitions. Suitable for advanced undergraduates as well as graduate students in physics, the text assumes some knowledge of statistical mechanics, but is otherwise self-contained.
The basic subjects and main topics covered by this book are: (1) Physics of black holes (classical and quantum); (2) Thermodynamics, entropy and internal dynamics; (3) Creation of particles and evaporation; (4) Mini black holes; (5) Quantum mechanics of black holes in curved spacetime; (6) The role of spin and torsion in black hole physics; (7) Equilibrium geometry and the membrane paradigm; (8) Black holes in string and superstring theory; (9) Strings, quantum gravity and black holes; (10) The problem of singularity; (11) Astrophysics of black holes; (12) Observational evidence of black holes. The book reveals the deep connection between gravitational, quantum and statistical physics and also the importance of black hole behaviour in the very early universe. An important new point discussed concerns the introduction of spin into the physics of black holes, showing its central role when correctly put into the Einstein equations through the geometric concept of torsion. Related new concepts include a time-temperature uncertainty relation, minimal time, minimal entropy, the quantization of entropy and the connection of black holes with wormholes. Besides theoretical aspects, the reader will also find observational evidence for black holes in active galactic nuclei, in binary X-ray sources and in supernova remnants. The book will thus interest physicists, astronomers, and astrophysicists at different levels of their career who specialize in classical properties, quantum processes, statistical thermodynamics, numerical collapse, observational evidence, general relativity and other related problems.
Vision-based mobile robot guidance has proved difficult for classical machine vision methods because of the diversity and real-time constraints inherent in the task. This book describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes these difficulties. ALVINN learns to guide mobile robots using the back-propagation training algorithm. Because of its ability to learn from example, ALVINN can adapt to new situations and therefore cope with the diversity of the autonomous navigation task. But real-world problems like vision-based mobile robot guidance present a different set of challenges for the connectionist paradigm. Among them are: * how to develop a general representation from a limited amount of real training data; * how to understand the internal representations developed by artificial neural networks; * how to estimate the reliability of individual networks; * how to combine multiple networks trained for different situations into a single system; * how to combine connectionist perception with symbolic reasoning. Neural Network Perception for Mobile Robot Guidance presents novel solutions to each of these problems. Using these techniques, the ALVINN system can learn to control an autonomous van in under 5 minutes by watching a person drive. Once trained, individual ALVINN networks can drive in a variety of circumstances, including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 55 miles per hour. The techniques are also shown to generalize to the task of controlling the precise foot placement of a walking robot.
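The learning setup can be sketched as follows (a deliberately tiny stand-in, not the ALVINN code: a one-hidden-layer network trained by back-propagation to map a coarse synthetic road image to a steering bin; the image generator, network sizes and learning rate are all our illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)
H, W_img, n_hidden, n_bins = 8, 8, 8, 8          # coarse 8x8 'retina', 8 steering bins

def synthetic_road(col):
    """An 8x8 image whose bright vertical line sits at column `col` (a stand-in for camera input)."""
    img = np.zeros((H, W_img))
    img[:, col] = 1.0
    return img.ravel()

# Training set: the road position directly determines the target steering bin.
X = np.array([synthetic_road(c) for c in range(W_img) for _ in range(20)])
T = np.eye(n_bins)[np.repeat(np.arange(W_img), 20)]     # one-hot steering targets

W1 = rng.normal(scale=0.1, size=(H * W_img, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_bins))
lr = 0.5
for _ in range(3000):                            # plain batch back-propagation
    h = np.tanh(X @ W1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))
    err = out - T                                # cross-entropy gradient at the sigmoid output
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ ((err @ W2.T) * (1.0 - h ** 2)) / len(X)

test = synthetic_road(3)
scores = 1.0 / (1.0 + np.exp(-(np.tanh(test @ W1) @ W2)))
print("steering bin chosen for a road at column 3:", scores.argmax())
```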
Covers a wide spectrum of applications and contains a thorough discussion of the foundations and the scope of the most current theories of non-equilibrium thermodynamics. The new edition reflects new developments and contains a new chapter on the interplay between hydrodynamics and thermodynamics.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' ('And I, had I known how to come back, I would never have gone.') - Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
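The evaluation idea can be illustrated generically (a sketch using a centered moving average as the trend estimator; the book's algorithms for generating artificial series and estimating trends are its own): generate many artificial series as a known trend plus noise, estimate the trend on each, and report the distribution of the estimation error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, runs, window = 500, 200, 41          # series length, Monte Carlo runs, smoothing window

def moving_average(x, w):
    """Centered moving average: a generic stand-in for a trend-estimation algorithm."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

t = np.linspace(0, 1, n)
true_trend = 2.0 * np.sin(2 * np.pi * t) + t        # an illustrative slowly varying trend

errors = []
for _ in range(runs):
    series = true_trend + rng.normal(scale=0.5, size=n)   # artificial series: trend + noise
    estimate = moving_average(series, window)
    errors.append(np.sqrt(np.mean((estimate - true_trend) ** 2)))

print(f"RMSE over {runs} Monte Carlo runs: {np.mean(errors):.3f} +/- {np.std(errors):.3f}")
```

Because the trend is known by construction, the error statistics characterize the estimator itself, which is exactly the point of the Monte Carlo evaluation.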
The contents of this book correspond to Sessions VII and VIII of the International Workshop on Instabilities and Nonequilibrium Structures which took place in Viña del Mar, Chile, in December 1997 and December 1999, respectively. We were not able to publish this book earlier and we apologize for this to the authors and participants of the meetings. We have made an effort to update the courses and articles, which have been reviewed by the authors. Both Workshops were organized by the Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, the Instituto de Física of Universidad Católica de Valparaíso and the Centro de Física No Lineal y Sistemas Complejos de Santiago. We are glad to acknowledge here the support of the Facultad de Ingeniería of Universidad de los Andes of Santiago, which will from now on also be one of the organizing institutions of future Workshops. Enrique Tirapegui. PREFACE: This book is divided into two parts. In Part I we have collected the courses given in Sessions VII and VIII of the Workshop, and in Part II we include a selection of the invited Conferences and Seminars presented at both meetings.