During the first week of September 1999, the Second EvoNet Summer School on Theoretical Aspects of Evolutionary Computing was held at the Middelheim campus of the University of Antwerp, Belgium. Originally intended as a small get-together of PhD students interested in the theory of evolutionary computing, the summer school grew to become a successful combination of a four-day workshop with over twenty researchers in the field and a two-day lecture series open to a wider audience. This book is based on the lectures and workshop contributions of this summer school. Its first part consists of tutorial papers which introduce the reader to a number of important directions in the theory of evolutionary computing. The tutorials are at graduate level and assume only a basic background in mathematics and computer science. No prior knowledge of evolutionary computing or its theory is necessary. The second part of the book consists of technical papers, selected from the workshop contributions. A number of them build on the material of the tutorials, exploring the theory to research level. Other technical papers may require a visit to the library.
This book presents a novel approach to neural nets and thus offers a genuine alternative to the hitherto known neuro-computers. The new edition includes a section on transformation properties of the equations of the synergetic computer and on the invariance properties of the order parameter equations. Further additions are a new section on stereopsis and recent developments in the use of pulse-coupled neural nets for pattern recognition.
Recent years have seen a rapid development of neural network control techniques and their successful applications. Numerous simulation studies and actual industrial implementations show that artificial neural networks are good candidates for function approximation and control system design in solving the control problems of complex nonlinear systems in the presence of different kinds of uncertainties. Many control approaches and methods, reporting inventions and control applications within the fields of adaptive control, neural control and fuzzy systems, have been published in various books, journals and conference proceedings. In spite of these remarkable advances in the neural control field, due to the complexity of nonlinear systems, present research on adaptive neural control is still focused on the development of fundamental methodologies. From a theoretical viewpoint, there is, in general, a lack of a firm mathematical basis for the stability, robustness, and performance analysis of neural network adaptive control systems. This book is motivated by the need for systematic design approaches for stable adaptive control using approximation-based techniques. The main objectives of the book are to develop stable adaptive neural control strategies, and to analytically perform transient performance analysis of the resulting neural control systems. Other linear-in-the-parameter function approximators can replace the linear-in-the-parameter neural networks in the controllers presented in the book without any difficulty, including polynomials, splines, fuzzy systems and wavelet networks, among others. Stability is one of the most important concerns if an adaptive neural network controller is to be used in practical applications.
Within the framework of Jaynes' "Predictive Statistical Mechanics," this book presents a detailed derivation of an ensemble formalism for open systems arbitrarily far from equilibrium. This involves a large systematization and extension of the fundamental works and ideas of the outstanding pioneers Gibbs and Boltzmann, and of Bogoliubov, Kirkwood, Green, Mori, Zwanzig, Prigogine and Zubarev, among others.
In this book, the necessary background for understanding viscoelasticity is covered; both the continuum and microstructure approaches to modelling viscoelastic materials are discussed, since neither approach alone is sufficient.
This book contains the courses given at the Fifth School on Complex Systems held at Santiago, Chile, from 9th to 13th December 1996. The school brought together researchers working on areas related to recent trends in Complex Systems, including dynamical systems, cellular automata, symbolic dynamics, spatial systems, statistical physics and thermodynamics. Scientists working on these subjects come from several areas: pure and applied mathematics, physics, biology, computer science and electrical engineering. Each contribution is devoted to one of the above subjects. In most cases the contributions are structured as surveys, presenting an original point of view on the topic along with mostly new results. The paper of Bruno Durand presents the state of the art on the relationships between the notions of surjectivity, injectivity and reversibility in cellular automata when finite, infinite or periodic configurations are considered; he also discusses decidability problems related to the classification of cellular automata as well as the global properties mentioned above. The paper of Eric Goles and Martin Matamala gives a uniform presentation of simulations of Turing machines by cellular automata. The main ingredient is the encoding function, which must be fixed for all Turing machines. In this context known results are revised and new results are presented.
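To make the objects of Durand's survey concrete, here is a minimal sketch (ours, not taken from the courses) of an elementary cellular automaton update on a periodic lattice; the function name `eca_step` and the helper layout are our own. Rule 90 illustrates the injectivity question: distinct configurations can map to the same image, so the global map is not reversible.

```python
# A minimal elementary cellular automaton, the kind of discrete dynamical
# system discussed in the surveys above.  Rule numbers follow Wolfram's
# convention; the lattice is periodic.

def eca_step(cells, rule):
    """Apply one synchronous update of an elementary CA with periodic boundaries."""
    n = len(cells)
    table = [(rule >> k) & 1 for k in range(8)]  # bit k of `rule` = new state for neighborhood k
    return [
        table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# Rule 90 sets each cell to the XOR of its two neighbors.
print(eca_step([0, 0, 0, 1, 0, 0, 0], 90))   # -> [0, 0, 1, 0, 1, 0, 0]

# Non-injectivity on periodic configurations: all-zeros and all-ones
# both map to all-zeros, so rule 90 is not reversible.
print(eca_step([1, 1, 1, 1], 90))            # -> [0, 0, 0, 0]
```

The lookup-table construction generalizes directly to larger neighborhoods and alphabets, which is the setting of the Turing-machine simulations mentioned above.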
Part I of this book is a short review of the classical part of representation theory. The main chapters of representation theory are discussed: representations of finite and compact groups, and finite- and infinite-dimensional representations of Lie groups. It is a typical feature of this survey that the structure of the theory is carefully exposed - the reader can easily see the essence of the theory without being overwhelmed by details. The final chapter is devoted to the method of orbits for different types of groups. Part II deals with representations of the Virasoro and Kac-Moody algebras. The wealth of recent results on representations of infinite-dimensional groups is presented.
In recent years statistical physics has made significant progress as a result of advances in numerical techniques. While good textbooks exist on the general aspects of statistical physics, the numerical methods and the new developments based on large-scale computing are not usually adequately presented. In this book 16 experts describe the application of methods of statistical physics to various areas in physics, such as disordered materials, quasicrystals and semiconductors, and also to areas beyond physics, such as financial markets, game theory, evolution and traffic planning, in which statistical physics has recently become significant. In this way the universality of the underlying concepts and methods, such as fractals, random matrix theory, time series, neural networks and evolutionary algorithms, becomes clear. The topics are covered by introductory, tutorial presentations.
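One of the methods named above, random matrix theory, has a result that is easy to demonstrate numerically. The sketch below (an illustration of the general technique, not an excerpt from the book) samples a large symmetric Gaussian matrix and checks Wigner's semicircle law: with the off-diagonal variance scaled to 1/N, the eigenvalues fill the interval [-2, 2].

```python
import numpy as np

# Eigenvalue spectrum of a GOE-like random matrix: after symmetrizing and
# scaling so the off-diagonal variance is 1/N, the eigenvalues follow
# Wigner's semicircle law supported on [-2, 2].
rng = np.random.default_rng(42)
N = 1000
G = rng.normal(size=(N, N))
H = (G + G.T) / np.sqrt(2 * N)      # symmetric; off-diagonal variance 1/N
eigs = np.linalg.eigvalsh(H)        # exact symmetric eigensolver
print(eigs.min(), eigs.max())       # spectral edges approach -2 and +2
```

A histogram of `eigs` against the density (1/2π)·sqrt(4 - x²) would show the semicircle directly; the edge locations alone already make the point.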
Dry granular materials, such as sand, sugar and powders, can be poured into a container like a liquid and can also form a pile, resisting gravity like a solid, which is why they can be regarded as a fourth state of matter, neither solid nor liquid. This book focuses on defining the physics of dry granular media in a systematic way, providing a collection of articles written by recognised experts. The physics of this field is new and full of challenges, but many questions (such as kinetic theories, plasticity, continuum and discrete modelling) also require the strong participation of mechanical and chemical engineers, soil mechanists, geologists and astrophysicists. The book gathers into a single volume the relevant concepts from all these disciplines, enabling the reader to gain a rapid understanding of the foundations, as well as the open questions, of the physics of granular materials. The contributors have been chosen particularly for their ability to explain new concepts, making the book attractive to students or researchers contemplating a foray into the field. The breadth of the treatment, on the other hand, makes the book a useful reference for scientists who are already experienced in the subject.
In Statistical Physics one of the ambitious goals is to derive rigorously, from statistical mechanics, the thermodynamic properties of models with realistic forces. Elliott Lieb is a mathematical physicist who meets the challenge of statistical mechanics head on, taking nothing for granted and not being content until the purported consequences have been shown, by rigorous analysis, to follow from the premises. The present volume contains a selection of his contributions to the field, in particular papers dealing with general properties of Coulomb systems, phase transitions in systems with a continuous symmetry, lattice crystals, and entropy inequalities. It also includes work on classical thermodynamics, a discipline that, despite many claims to the contrary, is logically independent of statistical mechanics and deserves a rigorous and unambiguous foundation of its own. The articles in this volume have been carefully annotated by the editors.
In recent years there has been an explosion of network data - that is, measurements that are either of or from a system conceptualized as a network - from seemingly all corners of science. The combination of an increasingly pervasive interest in scientific analysis at a systems level and the ever-growing capabilities for high-throughput data collection in various fields has fueled this trend. Researchers from biology and bioinformatics to physics, from computer science to the information sciences, and from economics to sociology are more and more engaged in the collection and statistical analysis of data from a network-centric perspective. Accordingly, the contributions to statistical methods and modeling in this area have come from a similarly broad spectrum of areas, often independently of each other. Many books have already been written addressing network data and network problems in specific individual disciplines. However, there is at present no single book that provides a modern treatment of a core body of knowledge for statistical analysis of network data that cuts across the various disciplines and is organized according to a statistical taxonomy of tasks and techniques. This book seeks to fill that gap and, as such, aims to contribute to a growing trend in recent years to facilitate the exchange of knowledge across the pre-existing boundaries between those disciplines that play a role in what is coming to be called 'network science'.
This textbook covers the basic principles of statistical physics and thermodynamics. The text is pitched at a level equivalent to first-year graduate studies or advanced undergraduate studies. It presents the subject in a straightforward and lively manner. After reviewing basic probability theory and classical thermodynamics, the author addresses the standard topics of statistical physics. The text demonstrates their relevance to other scientific fields using clear and explicit examples. Later chapters introduce phase transitions, critical phenomena and non-equilibrium phenomena.
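A standard first worked example in any such course (this sketch is ours, not the book's; the function name `boltzmann_probs` is hypothetical) is the canonical occupation probability of a discrete energy level, p_i = exp(-E_i/kT) / Z:

```python
import math

# Canonical-ensemble probabilities from the Boltzmann distribution.
# Energies are measured in units of k_B * T by default.

def boltzmann_probs(energies, kT=1.0):
    """Return p_i = exp(-E_i / kT) / Z for each level i."""
    weights = [math.exp(-e / kT) for e in energies]
    Z = sum(weights)                  # the partition function
    return [w / Z for w in weights]

# Two-level system with an energy gap of 1 k_B T:
p0, p1 = boltzmann_probs([0.0, 1.0])
print(p0, p1)                         # ground state is more populated; p1/p0 = e^(-1)
```

The same function handles any finite spectrum, and lowering `kT` concentrates the probability in the ground state, which is the usual gateway to discussing phase behavior.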
This monograph is devoted to an entirely new branch of nonlinear physics - solitary intrinsic states, or autosolitons, which form in a broad class of physical, chemical and biological dissipative systems. Autosolitons are often observed as highly nonequilibrium regions in slightly nonequilibrium systems, in many ways resembling ball lightning which occurs in the atmosphere. We develop a new approach to problems of self-organization and turbulence, treating these phenomena as a result of the spontaneous formation and subsequent evolution of autosolitons. Scenarios of self-organization involve sophisticated interactions between autosolitons, whereas turbulence is regarded as a pattern of autosolitons which appear and disappear at random in different parts of the system. This monograph is the first attempt to provide a comprehensive summary of the theory of autosolitons as developed by the authors over the years of research. The monograph comprises three more or less autonomous parts. Part I deals with the physical nature and experimental studies of autosolitons and self-organization in various physical systems: semiconductor and gas plasma, heated gas mixture, semiconductor structures, composite superconductors, optical and magnetic media, systems with uniformly generated combustion matter, distributed gas-discharge and electronic systems. We discuss the feasibility of autosolitons in the form of highly nonequilibrium regions in slightly nonequilibrium gases and semiconductors, "hot" and "cold" regions in semiconductor and gas plasmas, and static, pulsating and traveling combustion fronts.
In the last two decades extraordinary progress in the experimental handling of single quantum objects has spurred theoretical research into investigating the coupling between quantum systems and their environment. Decoherence, the gradual deterioration of entanglement due to dissipation and noise fed to the system by the environment, has emerged as a central concept. The present set of lectures is intended as a high-level but self-contained introduction to the fields of quantum noise and dissipation. In particular, their influence on decoherence and applications pertaining to quantum information and quantum communication are studied, leading nonspecialist researchers and advanced students gradually to the forefront of research.
As robotic systems make their way into standard practice, they have opened the door to a wide spectrum of complex applications. Such applications usually demand that the robots be highly intelligent. Future robots are likely to have greater sensory capabilities, more intelligence, higher levels of manual dexterity, and adequate mobility, compared to humans. In order to ensure high-quality control and performance in robotics, new intelligent control techniques must be developed which are capable of coping with task complexity, multi-objective decision making, large volumes of perception data and substantial amounts of heuristic information. Hence, the pursuit of intelligent autonomous robotic systems has been a topic of much fascinating research in recent years. On the other hand, as emerging technologies, Soft Computing paradigms consisting of the complementary elements of Fuzzy Logic, Neural Computing and Evolutionary Computation are viewed as the most promising methods towards intelligent robotic systems. Due to their strong learning and cognitive abilities and good tolerance of uncertainty and imprecision, Soft Computing techniques have found wide application in the area of intelligent control of robotic systems.
Artificial neural networks possess several properties that make them particularly attractive for applications to modelling and control of complex non-linear systems. Among these properties are their universal approximation ability, their parallel network structure and the availability of on- and off-line learning methods for the interconnection weights. However, dynamic models that contain neural network architectures might be highly non-linear and difficult to analyse as a result. Artificial Neural Networks for Modelling and Control of Non-Linear Systems investigates the subject from a system theoretical point of view. However, the mathematical theory that is required from the reader is limited to matrix calculus, basic analysis, differential equations and basic linear system theory. No preliminary knowledge of neural networks is explicitly required. The book presents both classical and novel network architectures and learning algorithms for modelling and control. Topics include non-linear system identification, neural optimal control, top-down model based neural control design and stability analysis of neural control systems. A major contribution of this book is to introduce NLq Theory as an extension towards modern control theory, in order to analyze and synthesize non-linear systems that contain linear together with static non-linear operators that satisfy a sector condition: neural state space control systems are an example. Moreover, it turns out that NLq Theory is unifying with respect to many problems arising in neural networks, systems and control. Examples show that complex non-linear systems can be modelled and controlled within NLq theory, including mastering chaos. The didactic flavor of this book makes it suitable for use as a text for a course on Neural Networks. In addition, researchers and designers will find many important new techniques, in particular NLq Theory, that have applications in control theory, system theory, circuit theory and time series analysis.
Independent Component Analysis (ICA) is a signal-processing method to extract independent sources given only observed data that are mixtures of the unknown sources. Recently, blind source separation by ICA has received considerable attention because of its potential signal-processing applications such as speech enhancement systems, telecommunications, medical signal-processing and several data mining issues. This book presents theories and applications of ICA and includes invaluable examples of several real-world applications. Based on theories in probabilistic models, information theory and artificial neural networks, several unsupervised learning algorithms are presented that can perform ICA. The seemingly different theories such as infomax, maximum likelihood estimation, negentropy maximization, nonlinear PCA, the Bussgang algorithm and cumulant-based methods are reviewed and put in an information theoretic framework to unify several lines of ICA research. An algorithm is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. The learning algorithms can be extended to filter systems, which allows the separation of voices recorded in a real environment (the cocktail party problem). The ICA algorithm has been successfully applied to many biomedical signal-processing problems such as the analysis of electroencephalographic data and functional magnetic resonance imaging data. ICA applied to images results in independent image components that can be used as features in pattern classification problems such as visual lip-reading and face recognition systems. The ICA algorithm can furthermore be embedded in an expectation maximization framework for unsupervised classification. Independent Component Analysis: Theory and Applications is the first book to successfully address this fairly new and generally applicable method of blind source separation. It is essential reading for researchers and practitioners with an interest in ICA.
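The core idea of blind source separation can be sketched in a few lines. The demonstration below is our own illustration, not one of the book's algorithms: it whitens two observed mixtures (one sub-Gaussian, one super-Gaussian source, as in the sub/super-Gaussian separation mentioned above), then scans for the rotation that maximizes |kurtosis| as a simple stand-in for the non-Gaussianity measures the book unifies. The mixing matrix `A` and the helper `kurt` are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
s1 = np.sign(np.sin(np.linspace(0, 40 * np.pi, n)))   # square wave (sub-Gaussian)
s2 = rng.laplace(size=n)                               # Laplacian noise (super-Gaussian)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])                 # "unknown" mixing matrix
X = A @ S                                              # observed mixtures

# Whitening: decorrelate and normalize the observed mixtures.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

def kurt(y):
    """Excess kurtosis of a unit-variance signal (zero for a Gaussian)."""
    return np.mean(y ** 4) - 3.0

# After whitening, only a rotation ambiguity remains: scan angles in [0, pi)
# for the direction of maximal non-Gaussianity.
angles = np.linspace(0, np.pi, 400, endpoint=False)
best = max(angles, key=lambda t: abs(kurt(np.cos(t) * Z[0] + np.sin(t) * Z[1])))
y1 = np.cos(best) * Z[0] + np.sin(best) * Z[1]
y2 = -np.sin(best) * Z[0] + np.cos(best) * Z[1]

# Each recovered component should correlate strongly with one true source
# (up to the usual sign and permutation indeterminacy of ICA).
c = np.abs(np.corrcoef(np.vstack([y1, y2, s1, s2]))[:2, 2:])
print(c.max(axis=1))
```

Practical ICA algorithms such as FastICA or infomax replace the brute-force angle scan with fixed-point or gradient updates, which is what makes the method scale beyond two dimensions.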
Applications of Neural Networks gives a detailed description of 13 practical applications of neural networks, selected because the tasks performed by the neural networks are real and significant. The contributions are from leading researchers in neural networks and, as a whole, provide a balanced coverage across a range of application areas and algorithms. The book is divided into three sections. Section A is an introduction to neural networks for nonspecialists. Section B looks at examples of applications using 'Supervised Training'. Section C presents a number of examples of 'Unsupervised Training'. The book is for neural network enthusiasts and interested, open-minded sceptics; it leads the latter through the fundamentals into a convincing and varied series of neural success stories, described carefully and honestly without over-claiming. Applications of Neural Networks is essential reading for all researchers and designers who are tasked with using neural networks in real life applications.
1.1 Overview We are living in a decade recently declared as the "Decade of the Brain." Neuroscientists may soon manage to work out a functional map of the brain, thanks to technologies that open windows on the mind. With the average human brain consisting of 15 billion neurons, roughly equal to the number of stars in our Milky Way, each receiving signals through as many as 10,000 synapses, it is quite a view. "The brain is the last and greatest biological frontier," says James Watson, co-discoverer of the structure of DNA; the brain is considered to be the most complex piece of biological machinery on earth. After many years of research by neuroanatomists and neurophysiologists, the overall organization of the brain is well understood, but many of its detailed neural mechanisms remain to be decoded. In order to understand the functioning of the brain, neurobiologists have taken a bottom-up approach of studying the stimulus-response characteristics of single neurons and networks of neurons, while psychologists have taken a top-down approach of studying brain functions at the cognitive and behavioral level. While these two approaches are gradually converging, it is generally accepted that it may take another fifty years before we achieve a solid microscopic, intermediate, and macroscopic understanding of the brain.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to return, I would never have gone.') Jules Verne
'The series is divergent; therefore we may be able to do something with it.' O. Heaviside
'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell
Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Eric Bell's quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
A dominant feature of our ordinary experience of the world is a sense of irreversible change: things lose form, people grow old, energy dissipates. On the other hand, a major conceptual scheme we use to describe the natural world, molecular dynamics, has reversibility at its core. The need to harmonize conceptual schemes and experience leads to several questions, one of which is the focus of this book. How does irreversibility at the macroscopic level emerge from the reversibility that prevails at the molecular level? Attempts to explain the emergence have emphasized probability, and assigned different probabilities to the forward and reversed directions of processes so that one direction is far more probable than the other. The conclusion is promising, but the reasons for it have been obscure. In many cases the aim has been to find an explanation in the nature of probability itself. Reactions to that have been divided: some think the aim is justified, while others think it is absurd.
Lifelong learning addresses situations in which a learner faces a series of different learning tasks providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess. 'The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm.' From the Foreword by Tom M. Mitchell.
Critical phenomena arise in a wide variety of physical systems. Classical examples are the liquid-vapour critical point or the paramagnetic-ferromagnetic transition. Further examples include multicomponent fluids and alloys, superfluids, superconductors, polymers and fully developed turbulence, and may even extend to the quark-gluon plasma and the early universe as a whole. Early theoretical investigators tried to reduce the problem to a very small number of degrees of freedom, through approaches such as the van der Waals equation and mean-field approximations, culminating in Landau's general theory of critical phenomena. Nowadays, it is understood that the common ground for all these phenomena lies in the presence of strong fluctuations of infinitely many coupled variables. This was first made explicit through the exact solution of the two-dimensional Ising model by Onsager. Systematic subsequent developments led to the scaling theories of critical phenomena and the renormalization group, which allow a precise description of the close neighborhood of the critical point, often in good agreement with experiments. In contrast to the general understanding a century ago, the presence of fluctuations on all length scales at a critical point is emphasized today. This can be briefly summarized by saying that at a critical point a system is scale invariant. In addition, conformal invariance permits also a non-uniform, local rescaling, provided only that angles remain unchanged.
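The 2D Ising model solved by Onsager is also the standard computational playground for these ideas. The sketch below is our own minimal Metropolis Monte Carlo illustration (not material from the book): it starts from the fully ordered state and checks that, deep in the low-temperature phase (beta well above the critical value beta_c ≈ 0.4407 with J = 1), the spontaneous magnetization persists. The function name `metropolis_sweep` and the parameter choices are assumptions made for the example.

```python
import math
import random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of an L x L Ising lattice (J = 1, periodic boundaries)."""
    L = len(spins)
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Energy change of flipping spin (i, j): dE = 2 * s_ij * (sum of neighbors).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn
        # Metropolis rule: always accept downhill moves, uphill with prob e^(-beta*dE).
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

rng = random.Random(1)
L = 16
spins = [[1] * L for _ in range(L)]          # fully ordered start
for _ in range(100):
    metropolis_sweep(spins, beta=1.0, rng=rng)   # beta = 1.0 >> beta_c: ordered phase
m = abs(sum(sum(row) for row in spins)) / L**2
print(m)   # magnetization per spin stays near 1 below T_c
```

Running the same loop at beta well below beta_c would drive `m` toward zero, which is the finite-size shadow of the phase transition; near beta_c the strong fluctuations on all length scales described above dominate.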
This book contains a mathematical exposition of analogies between classical (Hamiltonian) mechanics, geometrical optics, and hydrodynamics. In addition, it details some interesting applications of the general theory of vortices, such as applications in numerical methods, stability theory, and the theory of exact integration of equations of dynamics.
This book offers a systematic and comprehensive exposition of the quantum stochastic methods that have been developed in the field of quantum optics. It includes new treatments of photodetection, quantum amplifier theory, non-Markovian quantum stochastic processes, quantum input-output theory, and positive P-representations. It is the first book in which quantum noise is described by a mathematically complete theory in a form that is also suited to practical applications. Special attention is paid to non-classical effects, such as squeezing and antibunching. The chapters added in the previous edition, on the stochastic Schrödinger equation and on cascaded quantum systems, are now supplemented in the third edition by a chapter on recent developments in various pertinent fields such as laser cooling, Bose-Einstein condensation, quantum feedback and quantum information.