Mathematical modelling is ubiquitous. Almost every book in exact science touches on mathematical models of a certain class of phenomena, on more or less specific approaches to the construction and investigation of models, on their applications, etc. Like many textbooks with similar titles, Part I of our book is devoted to general questions of modelling. Part II reflects our professional interests as physicists who have spent much time on investigations in the field of non-linear dynamics and mathematical modelling from discrete sequences of experimental measurements (time series). The latter direction of research has long been known as "system identification" in the framework of mathematical statistics and automatic control theory. It has its roots in the problem of approximating experimental data points on a plane with a smooth curve. Currently, researchers aim at the description of complex behaviour (irregular, chaotic, non-stationary and noise-corrupted signals, which are typical of real-world objects and phenomena) with relatively simple non-linear differential or difference model equations rather than with cumbersome explicit functions of time. In the second half of the twentieth century, it became clear that such equations of a sufficiently low order can exhibit non-trivial solutions that promise sufficiently simple modelling of complex processes; according to the concepts of non-linear dynamics, chaotic regimes can already be demonstrated by a third-order non-linear ordinary differential equation, while complex behaviour in a linear model can be induced either by random influence (noise) or by a very high order of equations.
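The claim above, that a third-order non-linear ODE already suffices for chaos, is easy to illustrate numerically. The sketch below is our own illustration (not taken from the book): it integrates the Lorenz system, three coupled first-order non-linear equations, and measures the divergence of two trajectories started 10⁻⁸ apart; the parameter values and solver tolerances are standard but arbitrary choices.

```python
# A minimal sketch (not from the book): a third-order nonlinear ODE system,
# the Lorenz equations, integrated from two nearly identical initial conditions.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4000)
sol_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9)
sol_b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval, rtol=1e-9)

# In a chaotic regime the separation of nearby trajectories grows roughly
# exponentially until it saturates at the size of the attractor.
separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
print(f"initial separation: {separation[0]:.2e}, final separation: {separation[-1]:.2e}")
```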
This work addresses time-delay in complex nonlinear systems and, in particular, its applications in complex networks; its role in control theory and nonlinear optics is also investigated. Delays arise naturally in networks of coupled systems due to finite signal propagation speeds and are thus a key issue in many areas of physics, biology, medicine, and technology. Synchronization phenomena in these networks play an important role, e.g., in the context of learning, cognitive and pathological states in the brain, for secure communication with chaotic lasers or for gene regulation. The thesis includes both novel results on the control of complex dynamics by time-delayed feedback and fundamental new insights into the interplay of delay and synchronization. One of the most interesting results here is a solution to the problem of complete synchronization in general networks with large coupling delay, i.e., large distances between the nodes, by giving a universal classification of networks that has a wide range of interdisciplinary applications.
Connection science is a new information-processing paradigm which attempts to imitate the architecture and processes of the brain, and brings together researchers from disciplines as diverse as computer science, physics, psychology, philosophy, linguistics, biology, engineering, neuroscience and AI. Work in Connectionist Natural Language Processing (CNLP) is now expanding rapidly, yet much of the work is still only available in journals, some of them quite obscure. To make this research more accessible, this book brings together an important and comprehensive set of articles from the journal CONNECTION SCIENCE which represent the state of the art in Connectionist natural language processing, from speech recognition to discourse comprehension. While it is quintessentially Connectionist, it also deals with hybrid systems, and will be of interest to theoreticians as well as computer modellers. Topics covered include: Connectionism and Cognitive Linguistics; Motion, Chomsky's Government-binding Theory; Syntactic Transformations on Distributed Representations; Syntactic Neural Networks; A Hybrid Symbolic/Connectionist Model for Understanding of Nouns; Connectionism and Determinism in a Syntactic Parser; Context-Free Grammar Recognition; Script Recognition with Hierarchical Feature Maps; Attention Mechanisms in Language; Script-Based Story Processing; A Connectionist Account of Similarity in Vowel Harmony; Learning Distributed Representations; Connectionist Language Users; Representation and Recognition of Temporal Patterns; A Hybrid Model of Script Generation; Networks that Learn about Phonological Features; and Pronunciation in Text-to-Speech Systems.
"MEMS Linear and Nonlinear Statics and Dynamics" presents the necessary analytical and computational tools for MEMS designers to model and simulate most known MEMS devices, structures, and phenomena. This book also provides an in-depth analysis and treatment of the most common static and dynamic phenomena in MEMS that are encountered by engineers. Coverage alsoincludes nonlinear modeling approaches to modeling various MEMS phenomena of a nonlinear nature, such as those due to electrostatic forces, squeeze-film damping, and large deflection of structures. The book also: Includes examples of numerous MEMS devices and structures that require static or dynamic modelingProvides code for programs in Matlab, Mathematica, and ANSYS for simulating the behavior of MEMS structuresProvides real world problems related to the dynamics of MEMS such as dynamics of electrostatically actuated devices, stiction and adhesion of microbeams due to electrostatic and capillary forces "MEMS Linear and Nonlinear Statics and Dynamics "is an ideal volume for researchers and engineers working in MEMS design and fabrication.
Deeply rooted in fundamental research in Mathematics and Computer Science, Cellular Automata (CA) are recognized as an intuitive modeling paradigm for Complex Systems. Already very basic CA, with extremely simple micro-dynamics such as the Game of Life, show an almost endless display of complex emergent behavior. Conversely, CA can also be designed to produce a desired emergent behavior, using either theoretical methodologies or evolutionary techniques. Meanwhile, beyond the original realm of applications - Physics, Computer Science, and Mathematics - CA have also become workhorses in very different disciplines such as epidemiology, immunology, sociology, and finance. In this context of fast and impressive progress, spurred further by the enormous attraction these topics have for students, this book emerges as a welcome overview of the field for its practitioners, as well as a good starting point for detailed study at the graduate and post-graduate level. The book contains three parts: two major parts on theory and applications, and a smaller part on software. The theory part contains fundamental chapters on how to design and/or apply CA in many different areas. In the applications part, a number of representative examples of actually using CA in a broad range of disciplines are provided; this part will give the reader a good idea of the real strength of this kind of modeling, as well as an incentive to apply CA in their own field of study. Finally, we included a smaller section on software, to highlight the important work that has been done to create high-quality problem-solving environments that allow one to quickly and relatively easily implement a CA model and run simulations, both on the desktop and, if needed, on High Performance Computing infrastructures.
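The Game of Life mentioned above is simple enough that its entire micro-dynamics fits in a few lines; the sketch below is only an illustration of that point (it is not code from the book), using periodic boundary conditions and a glider as the initial pattern.

```python
# A minimal sketch of Conway's Game of Life on a toroidal grid (illustrative only).
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of the Game of Life."""
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider: after 4 steps the pattern reappears shifted by one cell diagonally.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)
```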
Adaptive Resonance Theory Microchips describes circuit strategies resulting in efficient and functional adaptive resonance theory (ART) hardware systems. While ART algorithms have been developed in software by their creators, this is the first book that addresses efficient VLSI design of ART systems. All systems described in the book have been designed and fabricated (or are nearing completion) as VLSI microchips in anticipation of the impending proliferation of ART applications to autonomous intelligent systems. To accommodate these systems, the book not only provides circuit design techniques, but also validates them through experimental measurements. The book also includes a chapter that describes, in tutorial fashion, four ART architectures (ART1, ARTMAP, Fuzzy-ART and Fuzzy-ARTMAP), providing easily understandable MATLAB code examples that implement these four algorithms in software. In addition, an entire chapter is devoted to other potential applications for real-time data clustering and category learning.
As our title suggests, there are two aspects in the subject of this book. The first is the mathematical investigation of the dynamics of infinite systems of interacting particles and the description of the time evolution of their states. The second is the rigorous derivation of kinetic equations starting from the results of the aforementioned investigation. As is well known, statistical mechanics started in the last century with some papers written by Maxwell and Boltzmann. Although some of their statements seemed statistically obvious, we must prove that they do not contradict what mechanics predicts. In some cases, in particular for equilibrium states, it turns out that mechanics easily provides the required justification. However, things are not so easy if we take a step forward and consider a gas that is not in equilibrium, as is, e.g., the case for air around a flying vehicle. Questions of this kind have been asked since the dawn of the kinetic theory of gases, especially when certain results appeared to lead to paradoxical conclusions. Today this matter is rather well understood and a rigorous kinetic theory is emerging. The importance of these developments stems not only from the need to provide a careful foundation for such a basic physical theory, but also from the desire to exhibit a prototype of a mathematical construct central to the theory of non-equilibrium phenomena of macroscopic size.
The aim of this book is to give an overview of transient chaos, based on the results of nearly three decades of intensive research. One belief that motivates us to write this book is that transient chaos may not have been appreciated even within the nonlinear-science community, let alone in other scientific disciplines.
This volume presents the proceedings of the Workshop on Momentum Distributions held on October 24-26, 1988 at Argonne National Laboratory. This workshop was motivated by the enormous progress within the past few years in both experimental and theoretical studies of momentum distributions, by the growing recognition of the importance of momentum distributions to the characterization of quantum many-body systems, and especially by the realization that momentum distribution studies have much in common across the entire range of modern physics. Accordingly, the workshop was unique in that it brought together researchers in nuclear physics, electronic systems, quantum fluids and solids, and particle physics to address the common elements of momentum distribution studies. The topics discussed in the workshop spanned a range of more than ten orders of magnitude in characteristic energy scales. The workshop included an extraordinary variety of interactions, from Coulombic to hard-core repulsive, from non-relativistic to extreme relativistic.
Polymers are substances made of macromolecules formed by thousands of atoms organized in one (homopolymers) or more (copolymers) groups that repeat themselves to form linear or branched chains, or lattice structures. The concept of a polymer traces back to the 1920s and is one of the most significant ideas of the last century. It has given great impulse to industry but also to fundamental research, including the life sciences. Macromolecules are made of small molecules known as monomers. The process that brings monomers into polymers is known as polymerization. A fundamental contribution to the industrial production of polymers, particularly polypropylene and polyethylene, is due to the Nobel prize winners Giulio Natta and Karl Ziegler. The ideas of Ziegler and Natta date back to 1954, and the process has been improved continuously over the years, particularly concerning the design and shaping of the catalysts. Chapter 1 (due to A. Fasano) is devoted to a review of some results concerning the modelling of Ziegler-Natta polymerization. The specific example is the production of polypropylene. The process is extremely complex, all studies with relevant mathematical content are fairly recent, and several problems are still open.
Physicists, when modelling physical systems with a large number of degrees of freedom, and statisticians, when performing data analysis, have developed their own concepts and methods for making the 'best' inference. But are these methods equivalent, or not? What is the state of the art in making inferences? The physicists want answers. More: neural computation demands a clearer understanding of how neural systems make inferences; the theory of chaotic nonlinear systems as applied to time series analysis could profit from the experience already gained by statisticians; and finally, there is a long-standing conjecture that some of the puzzles of quantum mechanics are due to our incomplete understanding of how we make inferences. Matter enough to stimulate the writing of such a book as the present one. But other considerations also arise, such as the maximum entropy method and Bayesian inference, information theory and the minimum description length. Finally, it is pointed out that an understanding of human inference may require input from psychologists. This lively debate, which is of acute current interest, is well summarized in the present work.
Simple random walks - or equivalently, sums of independent random variables - have long been a standard topic of probability theory and mathematical physics. In the 1950s, non-Markovian random-walk models, such as the self-avoiding walk, were introduced into theoretical polymer physics, and gradually came to serve as a paradigm for the general theory of critical phenomena. In the past decade, random-walk expansions have evolved into an important tool for the rigorous analysis of critical phenomena in classical spin systems and of the continuum limit in quantum field theory. Among the results obtained by random-walk methods are the proof of triviality of the φ⁴ quantum field theory in space-time dimension d ≥ 4, and the proof of mean-field critical behavior for φ⁴ and Ising models in space dimension d ≥ 4. The principal goal of the present monograph is to present a detailed review of these developments. It is supplemented by a brief excursion to the theory of random surfaces and various applications thereof. This book has grown out of research carried out by the authors mainly from 1982 until the middle of 1985. Our original intention was to write a research paper. However, the writing of such a paper turned out to be a very slow process, partly because of our geographical separation, partly because each of us was involved in other projects that may have appeared more urgent.
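As a toy illustration of the walk models mentioned above (our own sketch, not material from the monograph), the following code samples short self-avoiding walks on the square lattice by naive rejection and reports their mean squared end-to-end distance, which grows faster with the number of steps than the linear growth of the simple random walk; the walk length and sample size are arbitrary choices.

```python
# A minimal sketch: self-avoiding walks on Z^2 by rejection sampling.
# Fine for short walks; far too slow for serious polymer studies.
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def self_avoiding_walk(n: int, max_tries: int = 100_000):
    """Return an n-step self-avoiding walk on Z^2, or None if sampling fails."""
    for _ in range(max_tries):
        path = [(0, 0)]
        visited = {(0, 0)}
        for _ in range(n):
            dx, dy = random.choice(STEPS)
            nxt = (path[-1][0] + dx, path[-1][1] + dy)
            if nxt in visited:          # self-intersection: reject the whole walk
                break
            path.append(nxt)
            visited.add(nxt)
        else:
            return path
    return None

walks = [self_avoiding_walk(20) for _ in range(200)]
r2 = [w[-1][0] ** 2 + w[-1][1] ** 2 for w in walks if w is not None]
# For SAWs <R^2> grows roughly like n^(3/2) in 2D, versus n for simple walks.
print("SAW <R^2> at n = 20:", sum(r2) / len(r2))
```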
One of the most spectacular consequences of the description of the superfluid condensate in superfluid helium or in superconductors as a single macroscopic quantum state is the quantization of circulation, resulting in quantized vortex lines. This book draws no distinction between superfluid ³He and ⁴He and superconductors. The reader will find the essential introductory chapters and the most recent theoretical and experimental progress in our understanding of the vortex state in both superconductors and superfluids, from lectures given by leading experts in the field, both experimentalists and theoreticians, who gathered in Cargèse for a NATO ASI. The peculiar features related to short coherence lengths, 2D geometry, high temperatures, disorder, and pinning are thoroughly discussed.
Dynamic Neural Field Theory for Motion Perception provides a new theoretical framework that permits a systematic analysis of the dynamic properties of motion perception. This framework uses dynamic neural fields as a key mathematical concept. The author demonstrates how neural fields can be applied to the analysis of perceptual phenomena and their underlying neural processes. Similar principles also form a basis for the design of computer vision systems as well as the design of artificially behaving systems. The book discusses in detail the application of this theoretical approach to motion perception and will be of great interest to researchers in vision science, psychophysics, and biological visual systems.
In the last few years we have witnessed an upsurge of interest in exactly solvable quantum field theoretical models in many branches of theoretical physics, ranging from mathematical physics through high-energy physics to solid-state physics. This book contains six pedagogically written articles meant as an introduction for graduate students to this fascinating area of mathematical physics. It leads them to the front line of present-day research. The topics include conformal field theory and W algebras, the special features of 2d scattering theory as embodied in the exact S matrices and the form factor studies built on them, the Yang-Baxter equations, and the various aspects of the Bethe Ansatz systems.
arise automatically as a result of the recursive structure of the task and the continuous nature of the SRN's state space. Elman also introduces a new graphical technique for studying network behavior based on principal components analysis. He shows that sentences with multiple levels of embedding produce state space trajectories with an intriguing self-similar structure. The development and shape of a recurrent network's state space is the subject of Pollack's paper, the most provocative in this collection. Pollack looks more closely at a connectionist network as a continuous dynamical system. He describes a new type of machine learning phenomenon: induction by phase transition. He then shows that under certain conditions, the state space created by these machines can have a fractal or chaotic structure, with a potentially infinite number of states. This is graphically illustrated using a higher-order recurrent network trained to recognize various regular languages over binary strings. Finally, Pollack suggests that it might be possible to exploit the fractal dynamics of these systems to achieve a generative capacity beyond that of finite-state machines.
Over the past five decades researchers have sought to develop a new framework that would resolve the anomalies attributable to a patchwork formulation of relativistic quantum mechanics. This book chronicles the development of a new paradigm for describing relativistic quantum phenomena. What makes the new paradigm unique is its inclusion of a physically measurable, invariant evolution parameter. The resulting theory has been sufficiently well developed in the refereed literature that it is now possible to present a synthesis of its ideas and techniques. My synthesis is intended to encourage and enhance future research, and is presented in six parts. The environment within which the conventional paradigm exists is described in the Introduction. Part I eases the mainstream reader into the ideas of the new paradigm by providing the reader with a discussion that should look very familiar, but contains subtle nuances. Indeed, I try to provide the mainstream reader with familiar "landmarks" throughout the text. This is possible because the new paradigm contains the conventional paradigm as a subset. The foundation of the new paradigm is presented in Part II, followed by numerous applications in the remaining three parts. The reader should notice that the new paradigm handles not only the broad class of problems typically dealt with in conventional relativistic quantum theory, but also contains fertile research areas for both experimentalists and theorists. To avoid developing a theoretical framework without physical validity, numerous comparisons between theory and experiment are provided, and several predictions are made.
Neural Networks in Telecommunications consists of a carefully edited collection of chapters that provides an overview of a wide range of telecommunications tasks being addressed with neural networks. These tasks range from the design and control of the underlying transport network to the filtering, interpretation and manipulation of the transported media. The chapters focus on specific applications, describe specific solutions and demonstrate the benefits that neural networks can provide. By doing this, the authors demonstrate that neural networks should be another tool in the telecommunications engineer's toolbox. Neural networks offer the computational power of nonlinear techniques, while providing a natural path to efficient massively-parallel hardware implementations. In addition, the ability of neural networks to learn allows them to be used on problems where straightforward heuristic or rule-based solutions do not exist. Together these capabilities mean that neural networks offer unique solutions to problems in telecommunications. For engineers and managers in telecommunications, Neural Networks in Telecommunications provides a single point of access to the work being done by leading researchers in this field, and furnishes an in-depth description of neural network applications.
An Analog VLSI System for Stereoscopic Vision investigates the interaction of the physical medium and the computation in both biological and analog VLSI systems by synthesizing a functional neuromorphic system in silicon. In both the synthesis and analysis of the system, a point of view from within the system is adopted rather than that of an omniscient designer drawing a blueprint. This perspective projects the design and the designer into a living landscape. The motivation for a machine-centered perspective is explained in the first chapter. The second chapter describes the evolution of the silicon retina. The retina accurately encodes visual information over orders of magnitude of ambient illumination, using mismatched components that are calibrated as part of the encoding process. The visual abstraction created by the retina is suitable for transmission through a limited-bandwidth channel. The third chapter introduces a general method for interchip communication, the address-event representation, which is used for transmission of retinal data. The address-event representation takes advantage of the speed of CMOS relative to biological neurons to preserve the information of biological action potentials using digital circuitry in place of axons. The fourth chapter describes a collective circuit that computes stereo disparity. In this circuit, the processing that corrects for imperfections in the hardware compensates for inherent ambiguity in the environment. The fifth chapter demonstrates a primitive working stereovision system. An Analog VLSI System for Stereoscopic Vision contributes to both computer engineering and neuroscience at a concrete level. Through the construction of a working analog of biological vision subsystems, new circuits for building brain-style analog computers have been developed. Specific neurophysiological and psychophysical results are explained in terms of the underlying electronic mechanisms. These examples demonstrate the utility of using biological principles for building brain-style computers and the significance of building brain-style computers for understanding the nervous system.
Deng Feng Wang was born February 8, 1965 in Chongqing City, China and died August 15, 1999 while swimming with friends in the Atlantic Ocean off Island Beach State Park, New Jersey. In his brief life, he was to have an influence far beyond his years. On August 12, 2000, the Deng Feng Wang Memorial Conference was held at his alma mater, Princeton University, during which Deng Feng's mentors, collaborators and friends presented scientific talks in a testimonial to his tremendous influence on their work and careers. The first part of this volume contains proceedings contributions from the conference, with plenary talks by Nobel Laureate Professor Phil Anderson of Princeton University and leading Condensed Matter Theorists Professor Piers Coleman of Rutgers University and Professor Christian Gruber of the University of Lausanne. Other talks, given by collaborators, friends and classmates, testify to the great breadth of Deng Feng Wang's influence, with remarkable connections shown between seemingly unrelated areas in physics such as Condensed Matter Physics, Superconductivity, One-Dimensional Models, Statistical Physics, Mathematical Physics, Quantum Field Theory, High Energy Theory, Nuclear Magnetic Resonance, Supersymmetry, M-Theory and String Theory, in addition to fields outside of physics as varied as Oil Drilling, Mixed Signal Circuits and Neurology. The second part of the volume consists of reprints of some of Deng Feng Wang's most important papers in the areas of Condensed Matter Physics, Statistical Physics, Magnetism, Mathematical Physics and Mathematical Finance. This volume represents a fascinating synthesis of a wide variety of topics, and ultimately points to the universality of physics and of science as a whole. As such, it represents a fitting tribute to a remarkable individual, whose tragic death will never erase his enduring influence.
Covers a wide spectrum of applications and contains a broad discussion of the foundations and the scope of the most current theories of non-equilibrium thermodynamics. The new edition reflects new developments and contains a new chapter on the interplay between hydrodynamics and thermodynamics.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' ('And me, ..., if I had known how to come back, I would never have gone there.') - Jules Verne

'The series is divergent; therefore we may be able to do something with it.' - O. Heaviside

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' - Eric T. Bell

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote from Eric T. Bell above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
The contents of this book correspond to Sessions VII and VIII of the International Workshop on Instabilities and Nonequilibrium Structures, which took place in Viña del Mar, Chile, in December 1997 and December 1999, respectively. We were not able to publish this book earlier, and we apologize for this to the authors and participants of the meetings. We have made an effort to update the courses and articles, which have been reviewed by the authors. Both Workshops were organized by the Facultad de Ciencias Físicas y Matemáticas of Universidad de Chile, the Instituto de Física of Universidad Católica de Valparaíso, and the Centro de Física No Lineal y Sistemas Complejos de Santiago. We are glad to acknowledge here the support of the Facultad de Ingeniería of Universidad de los Andes of Santiago, which will from now on also be one of the organizing institutions of future Workshops. Enrique Tirapegui. PREFACE: This book is divided into two parts. In Part I we have collected the courses given in Sessions VII and VIII of the Workshop, and in Part II we include a selection of the invited Conferences and Seminars presented at both meetings.
Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
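The Monte Carlo idea described above can be sketched in a few lines. In the sketch below, the artificial-series generator and the moving-average trend estimator are our own stand-ins for the book's original algorithms, chosen only to show how accuracy is scored against a known artificial trend.

```python
# A minimal sketch of Monte Carlo evaluation of a trend estimator on artificial
# time series with a known trend (stand-in generator and estimator, not the book's).
import numpy as np

rng = np.random.default_rng(0)

def artificial_series(n=500, noise_std=1.0):
    """Known smooth trend plus white Gaussian noise."""
    t = np.linspace(0.0, 1.0, n)
    trend = 3.0 * np.sin(2.0 * np.pi * t) + 2.0 * t
    return trend, trend + rng.normal(0.0, noise_std, size=n)

def moving_average(x, window=31):
    """Centred moving-average trend estimate (window shrinks at the edges)."""
    half = window // 2
    return np.array([x[max(0, i - half): i + half + 1].mean() for i in range(len(x))])

# Monte Carlo loop: average the RMS estimation error over many realizations.
errors = []
for _ in range(200):
    trend, series = artificial_series()
    errors.append(np.sqrt(np.mean((moving_average(series) - trend) ** 2)))
print(f"mean RMS trend-estimation error over 200 runs: {np.mean(errors):.3f}")
```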
This book focuses mainly on fractional Brownian fields and their extensions. It has been used to teach graduate students at the universities of Grenoble and Toulouse. It is as self-contained as possible and contains numerous exercises, with solutions in an appendix. After a foreword by Stéphane Jaffard, a long first chapter is devoted to classical results from stochastic fields and fractal analysis. A central notion throughout this book is self-similarity, which is dealt with in a second chapter with a particular emphasis on the celebrated Gaussian self-similar fields, called fractional Brownian fields after Mandelbrot and Van Ness's seminal paper. Fundamental properties of fractional Brownian fields are then stated and proved. The second central notion of this book is the so-called local asymptotic self-similarity (lass for short), which is a local version of self-similarity, defined in the third chapter. A lengthy study is devoted to lass fields with finite variance. Among these lass fields, we find both Gaussian fields and non-Gaussian fields, called Lévy fields. The Lévy fields can be viewed as bridges between fractional Brownian fields and stable self-similar fields. A further key issue concerns the identification of fractional parameters. This is the raison d'être of the statistics chapter, where generalized quadratic variations methods are mainly used for estimating fractional parameters. Last but not least, simulation is addressed in the last chapter. Unlike the previous issues, the simulation of fractional fields is still an area of ongoing research. The algorithms presented in this chapter are efficient but do not claim to close the debate.
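To make the last two themes concrete, here is a minimal sketch (not the book's code) that simulates a one-dimensional fractional Brownian motion by the exact but costly Cholesky method and recovers its Hurst parameter from second-order quadratic variations at two scales; the generalized quadratic variations and simulation algorithms discussed in the book are more general and far more efficient.

```python
# A minimal sketch: simulate fractional Brownian motion (Cholesky method, O(n^3))
# and estimate its Hurst parameter H from second-order quadratic variations.
import numpy as np

def fbm_cholesky(n: int, hurst: float, rng) -> np.ndarray:
    """Sample B_H(k/n), k = 1..n, from the exact fBm covariance via Cholesky."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def hurst_quadratic_variation(path: np.ndarray) -> float:
    """Estimate H from second differences taken at two dyadic scales.

    For an fBm path, the mean squared second difference at lag 2 is roughly
    2^(2H) times that at lag 1, hence the log2 of the ratio below.
    """
    d1 = path[2:] - 2 * path[1:-1] + path[:-2]
    d2 = path[4::2] - 2 * path[2:-2:2] + path[:-4:2]
    return 0.5 * np.log2(np.mean(d2 ** 2) / np.mean(d1 ** 2))

rng = np.random.default_rng(1)
path = fbm_cholesky(2000, hurst=0.7, rng=rng)
print(f"true H = 0.7, estimated H = {hurst_quadratic_variation(path):.3f}")
```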
You may like...
Systems of Frequency Distributions for… (Vijay P. Singh, Lan Zhang)
Attractor Dimension Estimates for… (Nikolay Kuznetsov, Volker Reitmann)
Corruption Networks - Concepts and… (Oscar M. Granados, Jose R. Nicolas-Carlock)
Statistical Mechanics - An Introductory… (A. J. Berlinsky, A. B. Harris)
Quantum Signatures of Chaos (Fritz Haake, Sven Gnutzmann, …)
Traffic and Granular Flow 2019 (Iker Zuriguel, Angel Garcimartin, …)
Contemporary Kinetic Theory of Matter (J. R. Dorfman, Henk van Beijeren, …)