Nonlinear Modeling: Advanced Black-Box Techniques discusses methods including: neural nets and related model structures for nonlinear system identification; enhanced multi-stream Kalman filter training for recurrent networks; the support vector method of function estimation; parametric density estimation for the classification of acoustic feature vectors in speech recognition; wavelet-based modeling of nonlinear systems; nonlinear identification based on fuzzy models; statistical learning in control and matrix theory; and nonlinear time-series analysis. It also contains the results of the K.U. Leuven time series prediction competition, held within the framework of an international workshop at the K.U. Leuven, Belgium, in July 1998.
Econophysics is a newborn field of science bridging economics and physics. A special feature of this new science is the analysis of high-precision market data. In economics the existence of arbitrage opportunities is strictly denied; by observing high-precision data, however, their existence can be demonstrated. Likewise, financial technology neglects the possibility of market prediction, yet this book presents many examples of predicted events, along with other surprising findings. This volume is the proceedings of a workshop on the application of econophysics, at which leading international researchers discussed their most recent results.
Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
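The book's own algorithms are not reproduced here, but the evaluation method the blurb describes is easy to sketch. In the minimal Python sketch below, all names and constants are illustrative: a known trend plus Gaussian noise stands in for the book's artificial series generator, and a centered moving average stands in for a candidate trend estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def artificial_series(n=500, noise=1.0):
    """A known smooth trend plus additive Gaussian noise (illustrative)."""
    t = np.linspace(0.0, 1.0, n)
    trend = 3.0 * t**2 - 2.0 * t          # the 'true' trend we try to recover
    return trend, trend + noise * rng.standard_normal(n)

def moving_average(x, window=31):
    """A simple stand-in trend estimator (centered moving average)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Monte Carlo experiment: average RMS error over many noise realizations.
errors = []
for _ in range(200):
    trend, series = artificial_series()
    estimate = moving_average(series)
    inner = slice(20, -20)                 # ignore edge effects of the window
    errors.append(np.sqrt(np.mean((estimate[inner] - trend[inner]) ** 2)))

print(f"mean RMS error over 200 runs: {np.mean(errors):.3f}")
```

Because the trend is known by construction, the estimator's error can be measured directly, which is exactly what real time series do not allow.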
Neural nets offer a fascinating new strategy for spatial analysis, and their application holds enormous potential for the geographic sciences. However, the number of studies that have utilized these techniques is limited. This lack of interest can be attributed, in part, to lack of exposure, to the use of extensive and often confusing jargon, and to the misapprehension that, without an underlying statistical model, the explanatory power of the neural net is very low. Neural Nets: Applications for Geography attacks all three issues; the text demonstrates a wide variety of neural net applications in geography in a simple manner, with minimal jargon. The volume presents an introduction to neural nets that describes some of the basic concepts, as well as providing a more mathematical treatise for those wishing further details on neural net architecture. The bulk of the text, however, is devoted to descriptions of neural net applications in such broad-ranging fields as census analysis, predicting the spread of AIDS, describing synoptic controls on mountain snowfall, examining the relationships between atmospheric circulation and tropical rainfall, and the remote sensing of polar cloud and sea ice characteristics. The text illustrates neural nets employed in modes analogous to multiple regression analysis, cluster analysis, and maximum likelihood classification. Not only are the neural nets shown to be equal or superior to these more conventional methods, particularly where the relationships have a strong nonlinear component, but they are also shown to contain significant explanatory power. Several chapters demonstrate that the nets themselves can be decomposed to illuminate causative linkages between different events in both the physical and human environments.
In the last ten to fifteen years there have been many important developments in the theory of integrable equations. This period is marked in particular by the strong impact of soliton theory in many diverse areas of mathematics and physics; for example, algebraic geometry (the solution of the Schottky problem), group theory (the discovery of quantum groups), topology (the connection of Jones polynomials with integrable models), and quantum gravity (the connection of the KdV with matrix models). This is the first book to present a comprehensive overview of these developments. Numbered among the authors are many of the most prominent researchers in the field.
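For reference, the KdV equation mentioned in connection with matrix models, in one standard normalization (conventions for the coefficients vary), together with its one-soliton solution:

```latex
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^{2}\!\Bigl(\tfrac{\sqrt{c}}{2}\,(x - ct - x_0)\Bigr).
```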
The first part is devoted to colloidal particles and stochastic dynamics, mainly concerned with recent authoritative results in the study of interactions between colloidal particles and transport properties in colloids and ferrocolloids. Recent advances in non-equilibrium statistical physics, such as stochastic resonance, Brownian motors, ratchets and noise-induced transport, are also reported. The second part deals with biological systems and polymers. Here, standard simulation methodology to treat the diffusional dynamics of multi-protein systems and proton transport in macromolecules is presented. Results on the nervous system, spectroscopy of biological membrane models, and Monte Carlo simulations of polymer chains are also discussed. The third part is concerned with granular materials and quantum systems; in particular, an effective-medium theory for a random system is reported. Additionally, a comprehensive treatment of spin and charge order in the vortex lattice of the cuprates, both theoretical and experimental, is included. Thermodynamic analogies between Bose-Einstein condensation and black-body radiation are also presented. The last part of the book contains recent developments in certain topics of liquid crystals and molecular fluids, including nonequilibrium thermal light scattering from nematic liquid crystals, relaxation in the kinetic Ising model on a periodic inhomogeneous chain, models for thermotropic liquid crystals, thermodynamic properties of fluids with discrete potentials as well as of fluids with effective potentials determined from the speed of sound, and the second virial coefficient for polar fluids.
One of the most spectacular consequences of the description of the superfluid condensate in superfluid helium or in superconductors as a single macroscopic quantum state is the quantization of circulation, resulting in quantized vortex lines. This book draws no distinction between superfluid He3 and He4 and superconductors. The reader will find the essential introductory chapters and the most recent theoretical and experimental progress in our understanding of the vortex state in both superconductors and superfluids, from lectures given by leading experts in the field, both experimentalists and theoreticians, who gathered in Cargese for a NATO ASI. The peculiar features related to short coherence lengths, 2D geometry, high temperatures, disorder, and pinning are thoroughly discussed.
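The quantization referred to here can be stated in one line: the circulation of the superfluid velocity around a vortex is an integer multiple of h/m, with m the mass of the condensing boson (a pair mass in He3 and in superconductors),

```latex
\oint \mathbf{v}_s \cdot d\boldsymbol{\ell} = n\,\frac{h}{m}, \qquad n \in \mathbb{Z},
```

while in superconductors the charge 2e of the Cooper pairs turns the same statement into flux quantization, with flux quantum h/2e.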
We have classified the articles presented here in two sections according to their general content. In Part I we have included papers which deal with statistical mechanics, mathematical aspects of dynamical systems, and stochastic effects in nonequilibrium systems. Part II is devoted mainly to instabilities and self-organization in extended nonequilibrium systems. The study of partial differential equations by numerical and analytic methods plays a great role here, and many works are related to this subject. The most recent developments in this fascinating and rapidly growing area are discussed. PART I: STATISTICAL MECHANICS AND RELATED TOPICS. Nonequilibrium Potentials for Period Doubling, by R. Graham and A. Hamm, Fachbereich Physik, Universität Gesamthochschule Essen, D-4300 Essen 1, Germany. ABSTRACT: In this lecture we consider the influence of weak stochastic perturbations on period doubling using nonequilibrium potentials, a concept which is explained in Section 1 and formulated for the case of maps in Section 2. In Section 3, nonequilibrium potentials are considered for the family of quadratic maps (a) at the Feigenbaum 'attractor' with Gaussian noise, (b) for more general non-Gaussian noise, and (c) for the case of a strange repeller. Our discussion will be informal. A more detailed account of this and related material can be found in our papers [1-3] and in the reviews [4, 5], where further references to related work are also given.
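A minimal numerical companion to the abstract above (illustrative, not the authors' computation): iterate the quadratic map with weak Gaussian noise and histogram the stationary density. In the weak-noise theory the density is expected to behave roughly like exp(-Phi/eta), with eta the noise intensity, so -eta ln p gives a numerical proxy for the nonequilibrium potential Phi. All parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

a, sigma = 3.5699456, 1e-3       # near the Feigenbaum point; noise strength is illustrative
burn_in, n_steps = 1_000, 200_000

x = 0.5
samples = np.empty(n_steps)
for i in range(burn_in + n_steps):
    x = a * x * (1.0 - x) + sigma * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)    # keep the noisy iterate inside [0, 1]
    if i >= burn_in:
        samples[i - burn_in] = x

p, _ = np.histogram(samples, bins=400, range=(0.0, 1.0), density=True)
eta = sigma**2                    # noise intensity
phi = -eta * np.log(p + 1e-300)   # weak-noise proxy for the potential
print(phi.min(), phi.max())
```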
Deeply rooted in fundamental research in Mathematics and Computer Science, Cellular Automata (CA) are recognized as an intuitive modeling paradigm for Complex Systems. Already very basic CA, with extremely simple micro dynamics such as the Game of Life, show an almost endless display of complex emergent behavior. Conversely, CA can also be designed to produce a desired emergent behavior, using either theoretical methodologies or evolutionary techniques. Meanwhile, beyond the original realm of applications - Physics, Computer Science, and Mathematics - CA have also become workhorses in very different disciplines such as epidemiology, immunology, sociology, and finance. In this context of fast and impressive progress, spurred further by the enormous attraction these topics have for students, this book emerges as a welcome overview of the field for its practitioners, as well as a good starting point for detailed study at the graduate and post-graduate level. The book contains three parts: two major parts on theory and applications, and a smaller part on software. The theory part contains fundamental chapters on how to design and/or apply CA for many different areas. In the applications part a number of representative examples of really using CA in a broad range of disciplines are provided; this part will give the reader a good idea of the real strength of this kind of modeling as well as the incentive to apply CA in their own field of study. Finally, we included a smaller section on software, to highlight the important work that has been done to create high-quality problem solving environments that allow one to quickly and relatively easily implement a CA model and run simulations, both on the desktop and, if needed, on High Performance Computing infrastructures.
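The Game of Life mentioned above is small enough to state completely. A minimal Python sketch of one synchronous update on a periodic grid, exercised with a glider, the classic example of emergent transport from purely local rules:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbours of every cell with periodic wrap-around.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: five live cells whose pattern translates across the grid.
grid = np.zeros((16, 16), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(8):
    grid = life_step(grid)
print(grid.sum())   # the glider's five live cells persist as it moves
```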
Adaptive Resonance Theory Microchips describes circuit strategies resulting in efficient and functional adaptive resonance theory (ART) hardware systems. While ART algorithms have been developed in software by their creators, this is the first book that addresses efficient VLSI design of ART systems. All systems described in the book have been designed and fabricated (or are nearing completion) as VLSI microchips in anticipation of the impending proliferation of ART applications to autonomous intelligent systems. To accommodate these systems, the book not only provides circuit design techniques, but also validates them through experimental measurements. The book also includes a tutorial chapter describing four ART architectures (ART1, ARTMAP, Fuzzy-ART and Fuzzy-ARTMAP), with easily understandable MATLAB code examples implementing these four algorithms in software. In addition, an entire chapter is devoted to other potential applications for real-time data clustering and category learning.
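To give a flavour of what ART1 does, here is a minimal, software-only ART1-style sketch (Python rather than the book's MATLAB, and with the search-and-reset loop simplified to a single ranked pass over categories). The vigilance rho, the small beta, and the data are illustrative assumptions, not taken from the book.

```python
import numpy as np

def art1(inputs, rho=0.7, beta=1e-6):
    """Minimal ART1-style clustering of binary vectors (illustrative sketch)."""
    templates = []                         # one binary template per category
    labels = []
    for I in inputs:
        I = np.asarray(I, dtype=bool)
        # Rank existing categories by an ART1-style choice function.
        order = sorted(range(len(templates)),
                       key=lambda j: -(I & templates[j]).sum()
                                     / (beta + templates[j].sum()))
        for j in order:
            match = (I & templates[j]).sum() / max(I.sum(), 1)
            if match >= rho:               # vigilance test passed: resonate
                templates[j] = I & templates[j]   # fast learning: shrink template
                labels.append(j)
                break
        else:                              # no category matched: create one
            templates.append(I.copy())
            labels.append(len(templates) - 1)
    return labels

data = [[1,1,0,0,0], [1,1,1,0,0], [0,0,0,1,1], [0,0,1,1,1]]
print(art1(data, rho=0.6))   # [0, 0, 1, 1]
```

Raising the vigilance rho forces finer categories; this stability/plasticity dial is the property the book's microchips implement in analog circuitry.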
It is our pleasure to contribute the foreword to this book on symbiotic multi-robot organisms, which is largely based on the scientific findings and explorations of two major EU research projects, Symbrion and Replicator, funded under the Seventh Framework Programme for Research and Technological Development (FP7). FP7 emphasises consortia of European partners, transnational collaboration, open coordination, flexibility and excellence of research, and plays a leading role in multidisciplinary research and cooperative activities in Europe and beyond. Its impact is major in terms of integrating and structuring research communities across national borders to achieve a critical mass, providing the leverage for high-potential fields to take off, and encouraging healthy competition at European level while avoiding unnecessary duplication of research capacities. Research proposals are evaluated through a demanding peer-review process and only the best are selected to be funded by the European Commission (EC). The Information and Communication Technologies (ICT) theme has set out a number of challenges within this context, which cover topics such as cognitive systems, modular robotics, adaptive systems and societies of artefacts. Symbrion was selected following the Call "Pervasive Adaptation" of the "Future and Emerging Technologies (FET)" programme area. It started on 1 February 2008 and will run for 5 years. FET Proactive addresses evolutionary and revolutionary approaches through multidisciplinary cooperation and investigates new future technology options in response to emerging societal and industrial needs and identifies new drivers for research.
Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to erect suspension bridges, although both are products of technological advancement and reflect an increased understanding of our world. In this book, we show how to unify aspects of learning and adaptation within the computational intelligence framework. While a number of algorithms exist that fall under the umbrella of computational intelligence, with new ones added every year, all of them focus on the capabilities of learning, adapting, and helping us seek. So, the term unified computational intelligence relates not to the individual algorithms but to the underlying goals driving them. This book focuses on the computational intelligence areas of neural networks and dynamic programming, showing how to unify aspects of these areas to create new, more powerful, computational intelligence architectures to apply to new problem domains.
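On the dynamic-programming side of this unification, the core primitive is the Bellman backup, which the book's neural architectures approximate. A minimal tabular value-iteration sketch on a toy two-state MDP; the MDP, rewards and constants below are invented for illustration and are not from the book:

```python
import numpy as np

# A toy 2-state, 2-action MDP (illustrative).
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):                       # value iteration to a fixed point
    Q = R + gamma * P @ V                  # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
print(V, Q.argmax(axis=1))                 # optimal values and greedy policy
```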
This sixth volume of the International Workshop on Instabilities and Nonequilibrium Structures is dedicated to the memory of my friend Walter Zeller, Professor of the Universidad Católica de Valparaíso and Vice-Director of the Workshop. Walter Zeller was much more than an organizer of this meeting: his enthusiasm, dedication and critical views were many times the essential ingredients to continue with a task which on occasions faced difficulties and incomprehensions. It is in great part due to him that the workshop has acquired the tradition, maturity and international recognition it enjoys today. This volume should have been coedited by Walter, and it was with deep emotion that I learned that his disciples Javier Martínez and Rolando Tiemann wanted, as a last homage to their professor and friend, to coedit this book. I could not finish these lines without thinking of Mrs. Adriana Gamonal de Zeller; may she find in this book the admiration and recognition felt for her husband by those who were his disciples, colleagues and friends.
Hybrid Neural Network and Expert Systems presents the basics of expert systems and neural networks, and the important characteristics relevant to the integration of these two technologies. Through case studies of actual working systems, the author demonstrates the use of these hybrid systems in practical situations. Guidelines and models are described to help those who want to develop their own hybrid systems. Neural networks and expert systems together represent two major aspects of human intelligence and therefore are appropriate for integration. Neural networks represent the visual, pattern-recognition types of intelligence, while expert systems represent the logical, reasoning processes. Together, these technologies allow applications to be developed that are more powerful than when each technique is used individually. Hybrid Neural Network and Expert Systems provides frameworks for understanding how the combination of neural networks and expert systems can produce useful hybrid systems, and illustrates the issues and opportunities in this dynamic field.
An Analog VLSI System for Stereoscopic Vision investigates the interaction of the physical medium and the computation in both biological and analog VLSI systems by synthesizing a functional neuromorphic system in silicon. In both the synthesis and analysis of the system, a point of view from within the system is adopted rather than that of an omniscient designer drawing a blueprint. This perspective projects the design and the designer into a living landscape. The motivation for a machine-centered perspective is explained in the first chapter. The second chapter describes the evolution of the silicon retina. The retina accurately encodes visual information over orders of magnitude of ambient illumination, using mismatched components that are calibrated as part of the encoding process. The visual abstraction created by the retina is suitable for transmission through a limited-bandwidth channel. The third chapter introduces a general method for interchip communication, the address-event representation, which is used for transmission of retinal data. The address-event representation takes advantage of the speed of CMOS relative to biological neurons to preserve the information of biological action potentials using digital circuitry in place of axons. The fourth chapter describes a collective circuit that computes stereo disparity. In this circuit, the processing that corrects for imperfections in the hardware compensates for inherent ambiguity in the environment. The fifth chapter demonstrates a primitive working stereovision system. An Analog VLSI System for Stereoscopic Vision contributes to both computer engineering and neuroscience at a concrete level. Through the construction of a working analog of biological vision subsystems, new circuits for building brain-style analog computers have been developed. Specific neurophysiological and psychophysical results are explained in terms of underlying electronic mechanisms. These examples demonstrate the utility of using biological principles for building brain-style computers and the significance of building brain-style computers for understanding the nervous system.
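A toy software model of the address-event idea (not the chip's circuitry): each spike is transmitted as the address of the neuron that fired, with timing carried implicitly by when the event appears on the shared bus. The neuron addresses and spike times below are invented.

```python
# Per-neuron spike trains: neuron address -> list of spike times (invented data).
spike_trains = {0: [1.2, 5.0], 3: [0.7, 5.1], 7: [2.4]}

# Encode: merge all spikes into a single time-ordered stream of addresses,
# as if multiplexing many axons onto one digital bus.
events = sorted((t, addr) for addr, times in spike_trains.items() for t in times)
print(events)   # [(0.7, 3), (1.2, 0), (2.4, 7), (5.0, 0), (5.1, 3)]

# Decode: the receiver rebuilds per-neuron spike trains from the address stream.
decoded = {}
for t, addr in events:
    decoded.setdefault(addr, []).append(t)
assert decoded == spike_trains
```

The trade exploited by the representation is exactly the one the blurb names: CMOS is fast enough that many slow biological-rate channels can share one wire without losing spike identity or order.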
Deng Feng Wang was born February 8, 1965 in Chongqing City, China and died August 15, 1999 while swimming with friends in the Atlantic Ocean off Island Beach State Park, New Jersey. In his brief life, he was to have an influence far beyond his years. On August 12, 2000, the Deng Feng Wang Memorial Conference was held at his alma mater, Princeton University, during which Deng Feng's mentors, collaborators and friends presented scientific talks in a testimonial to his tremendous influence on their work and careers. The first part of this volume contains proceedings contributions from the conference, with plenary talks by Nobel Laureate Professor Phil Anderson of Princeton University and leading condensed matter theorists Professor Piers Coleman of Rutgers University and Professor Christian Gruber of the University of Lausanne. Other talks, given by collaborators, friends and classmates, testify to the great breadth of Deng Feng Wang's influence, with remarkable connections shown between seemingly unrelated areas in physics such as Condensed Matter Physics, Superconductivity, One-Dimensional Models, Statistical Physics, Mathematical Physics, Quantum Field Theory, High Energy Theory, Nuclear Magnetic Resonance, Supersymmetry, M-Theory and String Theory, in addition to varied fields outside of physics such as oil drilling, mixed-signal circuits and neurology. The second part of the volume consists of reprints of some of Deng Feng Wang's most important papers in the areas of Condensed Matter Physics, Statistical Physics, Magnetism, Mathematical Physics and Mathematical Finance. This volume represents a fascinating synthesis of a wide variety of topics, and ultimately points to the universality of physics and of science as a whole. As such, it represents a fitting tribute to a remarkable individual, whose tragic death will never erase his enduring influence.
Human Face Recognition Using Third-Order Synthetic Neural Networks explores the viability of applying high-order synthetic neural network technology to transformation-invariant recognition of complex visual patterns. High-order networks require little training data (hence, short training times) and have been used to perform transformation-invariant recognition of relatively simple binary visual patterns (e.g., alphanumeric characters, aircraft silhouettes), achieving very high recognition rates. The successful results of these methods provided inspiration to address more practical problems whose patterns are grayscale rather than binary and more complex in nature than purely edge-extracted images; human face recognition is such a problem. Human Face Recognition Using Third-Order Synthetic Neural Networks serves as an excellent reference for researchers and professionals working on applying neural network technology to the recognition of complex visual patterns.
In this paper we shall discuss the construction of formal short-wave asymptotic solutions of problems of mathematical physics. The topic is very broad. It can somewhat conveniently be divided into three parts: 1. Finding the short-wave asymptotics of a rather narrow class of problems, which admit a solution in an explicit form, via formulas that represent this solution. 2. Finding formal asymptotic solutions of equations that describe wave processes by basing them on some ansatz or other. We explain what 2 means. Giving an ansatz is knowing how to give a formula for the desired asymptotic solution in the form of a series or some expression containing a series, where the analytic nature of the terms of these series is indicated up to functions and coefficients that are undetermined at the first stage of consideration. The second stage is to determine these functions and coefficients by direct substitution of the ansatz into the equation, the boundary conditions and the initial conditions. Sometimes it is necessary to use different ansätze in different domains, and in the overlapping parts of these domains the formal asymptotic solutions must be asymptotically equivalent (the method of matched asymptotic expansions). The basis for success in the search for formal asymptotic solutions is a suitable choice of ansätze. The study of the asymptotics of explicit solutions of special model problems allows us to "surmise" what the correct ansätze are for the general solution.
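A standard example of such an ansatz (textbook material, not specific to this book) is the short-wave expansion for the Helmholtz equation with large wavenumber k:

```latex
\Delta u + k^{2} n^{2}(x)\,u = 0,
\qquad
u(x) \sim e^{\,ik S(x)} \sum_{j \ge 0} (ik)^{-j} a_j(x).
```

Here the "analytic nature of the terms" is fixed while S and the a_j are undetermined; substituting and collecting powers of k then determines them in the second stage: the eikonal equation |grad S|^2 = n^2 at order k^2, and the transport equation 2 grad S . grad a_0 + (Delta S) a_0 = 0 at order k.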
Observation, Prediction and Simulation of Phase Transitions in Complex Fluids presents an overview of the phase transitions that occur in a variety of soft-matter systems: colloidal suspensions of spherical or rod-like particles and their mixtures, directed polymers and polymer blends, colloid-polymer mixtures, and liquid-forming mesogens. This modern and fascinating branch of condensed matter physics is presented from three complementary viewpoints. The first section, written by experimentalists, emphasises the observation of basic phenomena (by light scattering, for example). The second section, written by theoreticians, focuses on the necessary theoretical tools (density functional theory, path integrals, free energy expansions). The third section is devoted to the results of modern simulation techniques (Gibbs ensemble, free energy calculations, configurational bias Monte Carlo). The interplay between the disciplines is clearly illustrated. The book is intended for all those interested in modern research in equilibrium statistical mechanics.
As our title suggests, there are two aspects in the subject of this book. The first is the mathematical investigation of the dynamics of infinite systems of interacting particles and the description of the time evolution of their states. The second is the rigorous derivation of kinetic equations starting from the results of the aforementioned investigation. As is well known, statistical mechanics started in the last century with some papers written by Maxwell and Boltzmann. Although some of their statements seemed statistically obvious, we must prove that they do not contradict what mechanics predicts. In some cases, in particular for equilibrium states, it turns out that mechanics easily provides the required justification. However, things are not so easy if we take a step forward and consider a gas that is not in equilibrium, as is, e.g., the case for air around a flying vehicle. Questions of this kind have been asked since the dawn of the kinetic theory of gases, especially when certain results appeared to lead to paradoxical conclusions. Today this matter is rather well understood and a rigorous kinetic theory is emerging. The importance of these developments stems not only from the need to provide a careful foundation for such a basic physical theory, but also from the wish to exhibit a prototype of a mathematical construct central to the theory of non-equilibrium phenomena of macroscopic size.
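The kinetic equation at the centre of this programme is the Boltzmann equation for the one-particle distribution f(x, v, t), stated here in its usual form for orientation:

```latex
\partial_t f + v \cdot \nabla_x f = Q(f,f),
\qquad
Q(f,f)(v) = \int_{\mathbb{R}^3}\!\int_{S^2}
\bigl(f(v')f(v_1') - f(v)f(v_1)\bigr)\, B(v - v_1, \omega)\, d\omega\, dv_1,
```

where primes denote post-collisional velocities and B is the collision kernel. The rigorous derivation discussed in the book concerns the passage from Newtonian many-particle dynamics to equations of this type in a suitable low-density (Boltzmann-Grad) scaling limit.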
Polymers are substances made of macromolecules formed by thousands of atoms organized in one (homopolymers) or more (copolymers) groups that repeat themselves to form linear or branched chains, or lattice structures. The concept of the polymer traces back to the 1920s and is one of the most significant ideas of the last century. It has given great impulse to industry but also to fundamental research, including the life sciences. Macromolecules are made of small molecules known as monomers. The process that brings monomers into polymers is known as polymerization. A fundamental contribution to the industrial production of polymers, particularly polypropylene and polyethylene, is due to the Nobel prize winners Giulio Natta and Karl Ziegler. The ideas of Ziegler and Natta date back to 1954, and the process has been improved continuously over the years, particularly concerning the design and shaping of the catalysts. Chapter 1 (due to A. Fasano) is devoted to a review of some results concerning the modelling of Ziegler-Natta polymerization. The specific example is the production of polypropylene. The process is extremely complex and all studies with relevant mathematical content are fairly recent, and several problems are still open.
Simple random walks - or equivalently, sums of independent random variables - have long been a standard topic of probability theory and mathematical physics. In the 1950s, non-Markovian random-walk models, such as the self-avoiding walk, were introduced into theoretical polymer physics, and gradually came to serve as a paradigm for the general theory of critical phenomena. In the past decade, random-walk expansions have evolved into an important tool for the rigorous analysis of critical phenomena in classical spin systems and of the continuum limit in quantum field theory. Among the results obtained by random-walk methods are the proof of triviality of the φ4 quantum field theory in space-time dimension d ≥ 4, and the proof of mean-field critical behavior for φ4 and Ising models in space dimension d ≥ 4. The principal goal of the present monograph is to present a detailed review of these developments. It is supplemented by a brief excursion to the theory of random surfaces and various applications thereof. This book has grown out of research carried out by the authors mainly from 1982 until the middle of 1985. Our original intention was to write a research paper. However, the writing of such a paper turned out to be a very slow process, partly because of our geographical separation, partly because each of us was involved in other projects that may have appeared more urgent.
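Two one-screen illustrations of the objects discussed (standard material, not the book's methods): the diffusive scaling of the mean-square end-to-end distance for the simple random walk on the square lattice, and exact enumeration of self-avoiding walks, whose counts 4, 12, 36, 100, 284, ... have no known closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

# Simple random walk on Z^2: mean-square end-to-end distance <R^2> grows like N.
N, trials = 200, 5000
ends = steps[rng.integers(0, 4, size=(trials, N))].sum(axis=1)
print("<R^2>/N ~", (ends**2).sum(axis=1).mean() / N)    # close to 1

# Self-avoiding walks: count c_N by depth-first enumeration (small N only).
def count_saw(n, pos=(0, 0), visited=None):
    visited = visited or {(0, 0)}
    if n == 0:
        return 1
    total = 0
    for dx, dy in steps:
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt not in visited:
            total += count_saw(n - 1, nxt, visited | {nxt})
    return total

print([count_saw(n) for n in range(1, 6)])   # [4, 12, 36, 100, 284]
```

The contrast between the two models (Markovian diffusion versus self-avoidance with its anomalous exponents below four dimensions) is precisely what makes the random-walk expansions reviewed in the book nontrivial.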
Neural Networks in Telecommunications consists of a carefully edited collection of chapters that provides an overview of a wide range of telecommunications tasks being addressed with neural networks. These tasks range from the design and control of the underlying transport network to the filtering, interpretation and manipulation of the transported media. The chapters focus on specific applications, describe specific solutions and demonstrate the benefits that neural networks can provide. By doing this, the authors demonstrate that neural networks should be another tool in the telecommunications engineer's toolbox. Neural networks offer the computational power of nonlinear techniques, while providing a natural path to efficient massively-parallel hardware implementations. In addition, the ability of neural networks to learn allows them to be used on problems where straightforward heuristic or rule-based solutions do not exist. Together these capabilities mean that neural networks offer unique solutions to problems in telecommunications. For engineers and managers in telecommunications, Neural Networks in Telecommunications provides a single point of access to the work being done by leading researchers in this field, and furnishes an in-depth description of neural network applications.
Neural Information Processing and VLSI provides a unified treatment of this important subject for use in classrooms, industry, and research laboratories, in order to develop advanced artificial and biologically-inspired neural networks using compact analog and digital VLSI parallel processing techniques. Neural Information Processing and VLSI systematically presents various neural network paradigms, computing architectures, and the associated electronic/optical implementations using efficient VLSI design methodologies. Conventional digital machines cannot perform computationally-intensive tasks with satisfactory performance in such areas as intelligent perception, including visual and auditory signal processing, recognition, understanding, and logical reasoning (where the human being and even a small living animal can do a superb job). Recent research advances in artificial and biological neural networks have established an important foundation for high-performance information processing with more efficient use of computing resources. The secret lies in the design optimization at various levels of computing and communication of intelligent machines. Each neural network system consists of massively parallel and distributed signal processors, with every processor performing very simple operations, thus consuming little power. Large computational capabilities of these systems, in the range of some hundred giga to several tera operations per second, are derived from collective parallel processing and efficient data routing, through well-structured interconnection networks. Deep-submicron very large-scale integration (VLSI) technologies can integrate tens of millions of transistors in a single silicon chip for complex signal processing and information manipulation. The book is suitable for those interested in efficient neurocomputing as well as those curious about neural network system applications. It has been especially prepared for use as a text for advanced undergraduate and first year graduate students, and is an excellent reference book for researchers and scientists working in the fields covered.
The motion of a particle in a random potential in two or more dimensions is chaotic, and the trajectories in deterministically chaotic systems are effectively random. It is therefore no surprise that there are links between the quantum properties of disordered systems and those of simple chaotic systems. The question is, how deep do the connections go? And to what extent do the mathematical techniques designed to understand one problem lead to new insights into the other? The canonical problem in the theory of disordered mesoscopic systems is that of a particle moving in a random array of scatterers. The aim is to calculate the statistical properties of, for example, the quantum energy levels, wavefunctions, and conductance fluctuations by averaging over different arrays; that is, by averaging over an ensemble of different realizations of the random potential. In some regimes, corresponding to energy scales that are large compared to the mean level spacing, this can be done using diagrammatic perturbation theory. In others, where the discreteness of the quantum spectrum becomes important, such an approach fails. A more powerful method, developed by Efetov, involves representing correlation functions in terms of a supersymmetric nonlinear sigma-model. This applies over a wider range of energy scales, covering both the perturbative and non-perturbative regimes. It was proved using this method that energy level correlations in disordered systems coincide with those of random matrix theory when the dimensionless conductance tends to infinity.
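The random-matrix statement at the end can be checked numerically in a few lines (a sketch, with matrix sizes and sample counts chosen for speed, not accuracy): sample GOE matrices, take nearest-neighbour level spacings in the bulk of the spectrum, and compare with the Wigner surmise p(s) = (pi/2) s exp(-pi s^2/4), whose variance is 4/pi - 1, about 0.27.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample GOE matrices and collect nearest-neighbour spacings of bulk
# eigenvalues; after normalisation to unit mean they should follow the
# Wigner surmise, the random-matrix prediction that disordered-system
# level statistics approach at large dimensionless conductance.
n, samples, spacings = 100, 200, []
for _ in range(samples):
    A = rng.standard_normal((n, n))
    H = (A + A.T) / 2.0                    # real symmetric: GOE up to scale
    ev = np.linalg.eigvalsh(H)
    mid = ev[n // 4 : 3 * n // 4]          # stay in the bulk of the spectrum
    s = np.diff(mid)
    spacings.extend(s / s.mean())          # crude local unfolding
spacings = np.array(spacings)
print("mean:", spacings.mean(), " var:", spacings.var())  # var ~ 0.27 for GOE
```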