Dry granular materials, such as sand, sugar and powders, can be poured into a container like a liquid and can also form a pile, resisting gravity like a solid, which is why they can be regarded as a fourth state of matter, neither solid nor liquid. This book focuses on defining the physics of dry granular media in a systematic way, providing a collection of articles written by recognised experts. The physics of this field is new and full of challenges, but many questions (such as kinetic theories, plasticity, continuum and discrete modelling) also require the strong participation of mechanical and chemical engineers, soil mechanists, geologists and astrophysicists. The book gathers into a single volume the relevant concepts from all these disciplines, enabling the reader to gain a rapid understanding of the foundations, as well as the open questions, of the physics of granular materials. The contributors have been chosen particularly for their ability to explain new concepts, making the book attractive to students or researchers contemplating a foray into the field. The breadth of the treatment, on the other hand, makes the book a useful reference for scientists who are already experienced in the subject.
When comparing conventional computing architectures to the architectures of biological neural systems, we find several striking differences. Conventional computers use a small number of high-performance computing elements that are programmed with algorithms to perform tasks in a time-sequenced way; they are very successful in administrative applications, in scientific simulations, and in certain signal processing applications. However, biological systems still significantly outperform conventional computers in perception tasks, sensory data processing and motor control. Biological systems use a completely different computing paradigm: a massive network of simple processors that are (adaptively) interconnected and operate in parallel. Exactly this massively parallel processing seems to be the key to their success. On the other hand, the development of VLSI technologies provides us with the technological means to implement very complicated systems on a silicon die. In particular, analog VLSI circuits in standard digital technologies open the way for the implementation of massively parallel analog signal processing systems for sensory signal processing applications and for perception tasks. In chapter 1 the motivations behind the emergence of the analog VLSI of massively parallel systems are discussed in detail, together with the capabilities and limitations of VLSI technologies and the required research and development. Analog parallel signal processing drives the development of very compact, high-speed and low-power circuits. An important technological limitation in the reduction of the size of circuits and the improvement of speed and power consumption is device inaccuracy, or device mismatch.
Applications of Neural Networks gives a detailed description of 13 practical applications of neural networks, selected because the tasks performed by the neural networks are real and significant. The contributions are from leading researchers in neural networks and, as a whole, provide a balanced coverage across a range of application areas and algorithms. The book is divided into three sections. Section A is an introduction to neural networks for nonspecialists. Section B looks at examples of applications using supervised training. Section C presents a number of examples of unsupervised training. For neural network enthusiasts and interested, open-minded sceptics, the book leads the latter through the fundamentals into a convincing and varied series of neural success stories, described carefully and honestly without over-claiming. Applications of Neural Networks is essential reading for all researchers and designers who are tasked with using neural networks in real-life applications.
At what level of physical existence does "quantum behavior" begin? How does it develop from classical mechanics? This book addresses these questions and thereby sheds light on fundamental conceptual problems of quantum mechanics. It elucidates the problem of quantum-classical correspondence by developing a procedure for quantizing stochastic systems (e.g. Brownian systems) described by Fokker-Planck equations. The logical consistency of the scheme is then verified by taking the classical limit of the equations of motion and corresponding physical quantities. Perhaps equally important, conceptual problems concerning the relationship between classical and quantum physics are identified and discussed. Graduate students and physical scientists will find this an accessible entrée to an intriguing and thorny issue at the core of modern physics.
The focus is on the main physical ideas and mathematical methods of the microscopic theory of fluids, starting with the basic principles of statistical mechanics. The detailed derivation of results is accompanied by explanation of their physical meaning. The same approach is applied to several specialized topics of the liquid state, most of which are recent developments, such as a perturbation approach to the surface tension, an algebraic perturbation theory of polar nonpolarizable fluids and ferrocolloids, and a semi-phenomenological theory of the Tolman length, among others.
Nonextensive statistical mechanics is now a rapidly growing field and a new stream in the research of the foundations of statistical mechanics. This generalization of the well-known Boltzmann--Gibbs theory enables the study of systems with long-range interactions, long-term memories or multi-fractal structures. This book consists of a set of self-contained lectures and includes additional contributions where some of the latest developments -- ranging from astro- to biophysics -- are covered. Addressing primarily graduate students and lecturers, this book will also be a useful reference for all researchers working in the field.
Numbers: natural, rational, real, complex, p-adic... What do you know about p-adic numbers? Probably you have never used any p-adic (nonrational) number before now. I was in the same situation a few years ago. p-adic numbers were considered an exotic part of pure mathematics without any application. I, too, had used only real and complex numbers in my investigations in functional analysis and its applications to quantum field theory, and I was sure that these number fields could be the basis of every physical model generated by nature. But recently new models of quantum physics have been proposed on the basis of the field of p-adic numbers Qp. What are p-adic numbers, p-adic analysis, p-adic physics, p-adic probability? p-adic numbers were introduced by K. Hensel (1904) in connection with problems of the pure theory of numbers. The construction of Qp is very similar to the construction of the real numbers R (here p is a fixed prime number, p = 2, 3, 5, ..., 127, ...). Both of these number fields are completions of the field of rational numbers Q, but another valuation |.|p is introduced on Q instead of the usual real valuation |.|oo. We get an infinite sequence of non-isomorphic completions of Q: Q2, Q3, ..., Q127, ..., R = Qoo. By the famous theorem of Ostrowski, these fields are the only possible completions of Q.
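The p-adic valuation behind the norm |.|p mentioned in this blurb is easy to compute for integers; the following is a minimal sketch (the helper names are hypothetical, not from the book):

```python
def vp(n, p):
    """p-adic valuation: the largest k such that p**k divides the nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def p_adic_norm(n, p):
    """|n|_p = p**(-vp(n)); integers highly divisible by p are p-adically *small*."""
    return p ** -vp(n, p)

# Under |.|_2, the number 1024 = 2**10 is tiny, while the odd number 3 has norm 1.
print(p_adic_norm(1024, 2))  # 2**-10 = 0.0009765625
print(p_adic_norm(3, 2))     # 1
```

Unlike the real absolute value, |.|p satisfies the strong triangle (ultrametric) inequality |a + b|p <= max(|a|p, |b|p), which is what makes the completions Qp so different from R.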
The present book has been written by two mathematicians and one physicist: a pure mathematician specializing in Finsler geometry (Makoto Matsumoto), one working in mathematical biology (Peter Antonelli), and a mathematical physicist specializing in information thermodynamics (Roman Ingarden). The main purpose of this book is to present the principles and methods of sprays (path spaces) and Finsler spaces together with examples of applications to physical and life sciences. It is our aim to write an introductory book on Finsler geometry and its applications at a fairly advanced level. It is intended especially for graduate students in pure mathematics, science and applied mathematics, but should also be of interest to those pure "Finslerists" who would like to see their subject applied. After more than 70 years of relatively slow development, Finsler geometry is now a modern subject with a large body of theorems and techniques, and it has mathematical content comparable to any field of modern differential geometry. The time has come to say this in full voice, against those who have thought that Finsler geometry, because of its computational complexity, is only of marginal interest, with practically no interesting applications. Contrary to these outdated, fossilized opinions, we believe "the world is Finslerian" in a true sense, and we will try to show this in our applications to thermodynamics, optics, ecology, evolution and developmental biology. On the other hand, while the complexity of the subject has not disappeared, the modern bundle-theoretic approach has greatly increased its understandability.
The lectures that comprise this volume constitute a comprehensive survey of the many and various aspects of integrable dynamical systems. The present edition is a streamlined, revised and updated version of a 1997 set of notes that was published as Lecture Notes in Physics, Volume 495. This volume will be complemented by a companion book dedicated to discrete integrable systems. Both volumes address primarily graduate students and nonspecialist researchers but will also benefit lecturers looking for suitable material for advanced courses and researchers interested in specific topics.
This two-volume work provides a comprehensive study of the statistical mechanics of lattice models. It introduces readers to the main topics and the theory of phase transitions, building on a firm mathematical and physical basis. Volume 1 contains an account of mean-field and cluster variation methods successfully used in many applications in solid-state physics and theoretical chemistry, as well as an account of exact results for the Ising and six-vertex models and those derivable by transformation methods.
Flux quantization experiments indicate that the carriers, Cooper pairs (pairons), in the supercurrent have charge magnitude 2e, and that they move independently. Josephson interference in a Superconducting Quantum Interference Device (SQUID) shows that the centers of mass (CM) of pairons move as bosons with a linear dispersion relation. Based on this evidence we develop a theory of superconductivity in conventional and cuprate materials from a unified point of view. Following Bardeen, Cooper and Schrieffer (BCS) we regard the phonon exchange attraction as the cause of superconductivity. For cuprate superconductors, however, we take account of both optical- and acoustic-phonon exchange. BCS started with a Hamiltonian containing "electron" and "hole" kinetic energies and a pairing interaction with the phonon variables eliminated. These "electrons" and "holes" were introduced formally in terms of a free-electron model, which we consider unsatisfactory. We define "electrons" and "holes" in terms of the curvatures of the Fermi surface. "Electrons" (1) and "holes" (2) are different and so are assigned different effective masses. Blatt, Schafroth and Butler proposed to explain superconductivity in terms of a Bose-Einstein Condensation (BEC) of electron pairs, each having mass M and a size. A system of free massive bosons with a quadratic dispersion relation e = p^2/2M, moving in three dimensions (3D), undergoes a BEC transition at k_B T_c ~ 3.31 hbar^2 n^(2/3)/M, where n is the pair density.
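The free-boson condensation temperature quoted at the end of this blurb can be evaluated numerically; this sketch (function name and sample inputs are hypothetical) uses the exact form T_c = (2*pi*hbar^2/(M*k_B)) * (n/zeta(3/2))^(2/3), whose prefactor reduces to the rounded 3.31 above:

```python
import math

HBAR = 1.054571817e-34         # reduced Planck constant, J*s
KB = 1.380649e-23              # Boltzmann constant, J/K
ZETA_3_2 = 2.6123753486854883  # Riemann zeta(3/2)

def bec_tc(n, M):
    """BEC transition temperature (K) for free bosons of mass M (kg)
    at number density n (m^-3)."""
    return (2 * math.pi * HBAR**2 / (M * KB)) * (n / ZETA_3_2) ** (2 / 3)

# Hypothetical inputs: a pair density of 1e26 m^-3 and a pair mass of two electron masses.
n, M = 1e26, 2 * 9.1093837015e-31
print(bec_tc(n, M))
```

Since T_c scales as n^(2/3)/M, an eight-fold increase in pair density raises T_c by a factor of four, and doubling the boson mass halves it.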
In this book, the necessary background for understanding viscoelasticity is covered; both the continuum and microstructure approaches to modelling viscoelastic materials are discussed, since neither approach alone is sufficient.
This book contains the courses given at the Fifth School on Complex Systems, held at Santiago, Chile, from 9th to 13th December 1996. The school brought together researchers working on areas related to recent trends in complex systems, including dynamical systems, cellular automata, symbolic dynamics, spatial systems, statistical physics and thermodynamics. Scientists working on these subjects come from several areas: pure and applied mathematics, physics, biology, computer science and electrical engineering. Each contribution is devoted to one of the above subjects. In most cases the contributions are structured as surveys, presenting an original point of view about the topic while showing mostly new results. The paper of Bruno Durand presents the state of the art on the relationships between the notions of surjectivity, injectivity and reversibility in cellular automata when finite, infinite or periodic configurations are considered; he also discusses decidability problems related to the classification of cellular automata as well as the global properties mentioned above. The paper of Eric Goles and Martin Matamala gives a uniform presentation of simulations of Turing machines by cellular automata. The main ingredient is the encoding function, which must be fixed for all Turing machines. In this context known results are revised and new results are presented.
Part I of this book is a short review of the classical part of representation theory. The main chapters of representation theory are discussed: representations of finite and compact groups, and finite- and infinite-dimensional representations of Lie groups. It is a typical feature of this survey that the structure of the theory is carefully exposed; the reader can easily see the essence of the theory without being overwhelmed by details. The final chapter is devoted to the method of orbits for different types of groups. Part II deals with representations of the Virasoro and Kac-Moody algebras, presenting the wealth of recent results on representations of infinite-dimensional groups.
This book presents a novel approach to neural nets and thus offers a genuine alternative to the hitherto known neuro-computers. The new edition includes a section on transformation properties of the equations of the synergetic computer and on the invariance properties of the order parameter equations. Further additions are a new section on stereopsis and recent developments in the use of pulse-coupled neural nets for pattern recognition.
In recent years there has been an explosion of network data - that is, measurements that are either of or from a system conceptualized as a network - from seemingly all corners of science. The combination of an increasingly pervasive interest in scientific analysis at a systems level and the ever-growing capabilities for high-throughput data collection in various fields has fueled this trend. Researchers from biology and bioinformatics to physics, from computer science to the information sciences, and from economics to sociology are more and more engaged in the collection and statistical analysis of data from a network-centric perspective. Accordingly, the contributions to statistical methods and modeling in this area have come from a similarly broad spectrum of areas, often independently of each other. Many books have already been written addressing network data and network problems in specific individual disciplines. However, there is at present no single book that provides a modern treatment of a core body of knowledge for statistical analysis of network data that cuts across the various disciplines and is organized according to a statistical taxonomy of tasks and techniques. This book seeks to fill that gap and, as such, aims to contribute to a growing trend in recent years to facilitate the exchange of knowledge across the pre-existing boundaries between those disciplines that play a role in what is coming to be called 'network science'.
This monograph is devoted to an entirely new branch of nonlinear physics - solitary intrinsic states, or autosolitons, which form in a broad class of physical, chemical and biological dissipative systems. Autosolitons are often observed as highly nonequilibrium regions in slightly nonequilibrium systems, in many ways resembling ball lightning, which occurs in the atmosphere. We develop a new approach to problems of self-organization and turbulence, treating these phenomena as a result of spontaneous formation and subsequent evolution of autosolitons. Scenarios of self-organization involve sophisticated interactions between autosolitons, whereas turbulence is regarded as a pattern of autosolitons which appear and disappear at random in different parts of the system. This monograph is the first attempt to provide a comprehensive summary of the theory of autosolitons as developed by the authors over their years of research. The monograph comprises three more or less autonomous parts. Part I deals with the physical nature and experimental studies of autosolitons and self-organization in various physical systems: semiconductor and gas plasma, heated gas mixtures, semiconductor structures, composite superconductors, optical and magnetic media, systems with uniformly generated combustion matter, and distributed gas-discharge and electronic systems. We discuss the feasibility of autosolitons in the form of highly nonequilibrium regions in slightly nonequilibrium gases and semiconductors, "hot" and "cold" regions in semiconductor and gas plasmas, and static, pulsating and traveling combustion fronts.
In the last two decades extraordinary progress in the experimental handling of single quantum objects has spurred theoretical research into investigating the coupling between quantum systems and their environment. Decoherence, the gradual deterioration of entanglement due to dissipation and noise fed to the system by the environment, has emerged as a central concept. The present set of lectures is intended as a high-level, but self-contained, introduction to the fields of quantum noise and dissipation. In particular, their influence on decoherence and applications pertaining to quantum information and quantum communication are studied, leading nonspecialist researchers and advanced students gradually to the forefront of research.
This textbook covers the basic principles of statistical physics and thermodynamics. The text is pitched at the level equivalent to first-year graduate studies or advanced undergraduate studies. It presents the subject in a straightforward and lively manner. After reviewing the basic probability theory of classical thermodynamics, the author addresses the standard topics of statistical physics. The text demonstrates their relevance in other scientific fields using clear and explicit examples. Later chapters introduce phase transitions, critical phenomena and non-equilibrium phenomena.
Independent Component Analysis (ICA) is a signal-processing method to extract independent sources given only observed data that are mixtures of the unknown sources. Recently, blind source separation by ICA has received considerable attention because of its potential signal-processing applications such as speech enhancement systems, telecommunications, medical signal-processing and several data mining issues. This book presents theories and applications of ICA and includes invaluable examples of several real-world applications. Based on theories in probabilistic models, information theory and artificial neural networks, several unsupervised learning algorithms are presented that can perform ICA. The seemingly different theories such as infomax, maximum likelihood estimation, negentropy maximization, nonlinear PCA, Bussgang algorithm and cumulant-based methods are reviewed and put in an information theoretic framework to unify several lines of ICA research. An algorithm is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. The learning algorithms can be extended to filter systems, which allows the separation of voices recorded in a real environment (cocktail party problem). The ICA algorithm has been successfully applied to many biomedical signal-processing problems such as the analysis of electroencephalographic data and functional magnetic resonance imaging data. ICA applied to images results in independent image components that can be used as features in pattern classification problems such as visual lip-reading and face recognition systems. The ICA algorithm can furthermore be embedded in an expectation maximization framework for unsupervised classification. Independent Component Analysis: Theory and Applications is the first book to successfully address this fairly new and generally applicable method of blind source separation. 
It is essential reading for researchers and practitioners with an interest in ICA.
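As a toy illustration of the blind source separation this blurb describes, the following sketch separates a deterministic two-source mixture with whitening followed by a FastICA-style fixed-point iteration (tanh nonlinearity, symmetric decorrelation). It is a minimal sketch under assumed signals and a hypothetical mixing matrix, not the book's own algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two deterministic, mutually independent-looking sources.
t = np.linspace(0, 8, 4000)
sources = np.vstack([np.sin(2 * np.pi * t),             # smooth sine
                     np.sign(np.sin(3 * np.pi * t))])   # square wave
mixing = np.array([[1.0, 0.5],
                   [0.6, 1.0]])       # hypothetical mixing matrix
X = mixing @ sources                  # observed mixtures ("microphones")

# 1) Center and whiten the observations so cov(Z) is the identity.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# 2) FastICA-style fixed-point iteration with tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)   # symmetric decorrelation:
    W = U @ Vt                        # W <- (W_new W_new^T)^(-1/2) W_new

recovered = W @ Z  # each row should match one source, up to sign and scale
```

Each recovered row correlates strongly (in absolute value) with exactly one source; as with any ICA method, the sign, scale, and ordering of the components are inherently undetermined.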
As robotic systems make their way into standard practice, they have opened the door to a wide spectrum of complex applications. Such applications usually demand that the robots be highly intelligent. Future robots are likely to have greater sensory capabilities, more intelligence, higher levels of manual dexterity, and adequate mobility, compared to humans. In order to ensure high-quality control and performance in robotics, new intelligent control techniques must be developed which are capable of coping with task complexity, multi-objective decision making, large volumes of perception data and substantial amounts of heuristic information. Hence, the pursuit of intelligent autonomous robotic systems has been a topic of much fascinating research in recent years. On the other hand, as emerging technologies, Soft Computing paradigms consisting of complementary elements of Fuzzy Logic, Neural Computing and Evolutionary Computation are viewed as the most promising methods towards intelligent robotic systems. Due to their strong learning and cognitive abilities and good tolerance of uncertainty and imprecision, Soft Computing techniques have found wide application in the area of intelligent control of robotic systems.
A dominant feature of our ordinary experience of the world is a sense of irreversible change: things lose form, people grow old, energy dissipates. On the other hand, a major conceptual scheme we use to describe the natural world, molecular dynamics, has reversibility at its core. The need to harmonize conceptual schemes and experience leads to several questions, one of which is the focus of this book. How does irreversibility at the macroscopic level emerge from the reversibility that prevails at the molecular level? Attempts to explain the emergence have emphasized probability, assigning different probabilities to the forward and reversed directions of processes so that one direction is far more probable than the other. The conclusion is promising, but the reasons for it have been obscure. In many cases the aim has been to find an explanation in the nature of probability itself. Reactions to that have been divided: some think the aim is justified, while others think it is absurd.
Lifelong learning addresses situations in which a learner faces a series of different learning tasks, providing the opportunity for synergy among them. Explanation-based neural network learning (EBNN) is a machine learning algorithm that transfers knowledge across multiple learning tasks. When faced with a new learning task, EBNN exploits domain knowledge accumulated in previous learning tasks to guide generalization in the new one. As a result, EBNN generalizes more accurately from less data than comparable methods. Explanation-Based Neural Network Learning: A Lifelong Learning Approach describes the basic EBNN paradigm and investigates it in the context of supervised learning, reinforcement learning, robotics, and chess. 'The paradigm of lifelong learning - using earlier learned knowledge to improve subsequent learning - is a promising direction for a new generation of machine learning algorithms. Given the need for more accurate learning methods, it is difficult to imagine a future for machine learning that does not include this paradigm.' - From the Foreword by Tom M. Mitchell.
This book represents a thoroughly comprehensive treatment of computational intelligence from an electrical power system engineer's perspective. Thorough, well-organised and up-to-date, it examines in some detail all the important aspects of this very exciting and rapidly emerging technology, including: expert systems, fuzzy logic, artificial neural networks, genetic algorithms and hybrid systems. Written in a concise and flowing manner, by experts in the area of electrical power systems who have had many years of experience in the application of computational intelligence for solving many complex and onerous power system problems, this book is ideal for professional engineers and postgraduate students entering this exciting field. This book would also provide a good foundation for senior undergraduate students entering into their final year of study.
Recent years have seen a rapid development of neural network control techniques and their successful applications. Numerous simulation studies and actual industrial implementations show that artificial neural networks are good candidates for function approximation and control system design when solving the control problems of complex nonlinear systems in the presence of different kinds of uncertainties. Many control approaches and methods, reporting inventions and control applications within the fields of adaptive control, neural control and fuzzy systems, have been published in various books, journals and conference proceedings. In spite of these remarkable advances in the neural control field, due to the complexity of nonlinear systems, present research on adaptive neural control is still focused on the development of fundamental methodologies. From a theoretical viewpoint, there is in general a lack of a firm mathematical basis for the stability, robustness, and performance analysis of neural network adaptive control systems. This book is motivated by the need for systematic design approaches for stable adaptive control using approximation-based techniques. The main objectives of the book are to develop stable adaptive neural control strategies and to analyze the transient performance of the resulting neural control systems analytically. Other linear-in-the-parameter function approximators, including polynomials, splines, fuzzy systems and wavelet networks, can replace the linear-in-the-parameter neural networks in the controllers presented in the book without any difficulty. Stability is one of the most important concerns if an adaptive neural network controller is to be used in practical applications.