Though the reductionist approach to biology and medicine has led to several important advances, further progress on the remaining challenges requires integration of representation, characterization and modeling of the studied systems along a wide range of spatial and time scales. Such an approach, intrinsically related to systems biology, is poised ultimately to turn biology into a more precise and synthetic discipline, paving the way to extensive preventive and regenerative medicine [1], drug discovery [20] and treatment optimization [24]. A particularly appealing and effective approach to addressing the complexity of interactions inherent to biological systems is provided by the new area of complex networks [34, 30, 8, 13, 12]. Basically, it is an extension of graph theory [10], focusing on the modeling, representation, characterization, analysis and simulation of complex systems by considering many elements and their interconnections. Complex networks concepts and methods have been used to study disease [17], transcription networks [5, 6, 4], protein-protein networks [22, 36, 16, 39], metabolic networks [23] and anatomy [40].
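As a rough illustration of the kind of "elements and interconnections" representation the blurb describes, the sketch below builds a random graph and computes a few standard network characterizations. The use of the networkx library, the graph size, and the link probability are assumptions made here for illustration only; they are not taken from the book.

```python
# Minimal sketch: represent a system as a graph of elements and interconnections
# and compute a few common complex-network measures. Library choice and
# parameters are illustrative assumptions, not the book's own examples.
import networkx as nx

# Toy network: 200 elements with random interconnections.
G = nx.erdos_renyi_graph(n=200, p=0.03, seed=42)

degrees = [d for _, d in G.degree()]
print("mean degree:", sum(degrees) / len(degrees))
print("average clustering:", nx.average_clustering(G))

largest_cc = max(nx.connected_components(G), key=len)
print("largest connected component:", len(largest_cc), "nodes")
```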
In our daily lives we conceive of our surroundings as an objectively given reality. The world is perceived through our senses, and these provide us, so we believe, with a faithful image of the world. But occasionally we are forced to realize that our senses deceive us, e.g., by illusions. For a while it was believed that the sensation of color is directly related to the frequency of light waves, until E. Land (the inventor of the polaroid camera) showed in detailed experiments that our perception of, say, a colored spot depends on the colors of its surroundings. On the other hand, we may experience hallucinations or dreams as real. Quite evidently, the relationship between the "world" and our "brain" is intricate. Another strange problem is the way in which we perceive time or the "Now". Psychophysical experiments tell us that the psychological "Now" is an extended period of time in the sense of physics. The situation was made still more puzzling when, in the nineteen-twenties, Heisenberg and others realized that, by observing processes in the microscopic world of electrons and other elementary particles, we strongly interfere with that world. The outcome of experiments - at least in general - can only be predicted statistically. What is the nature of this strange relationship between "object" and "observer"? This is another crucial problem of the inside-outside or endo-exo dichotomy.
The study of cooperative phenomena is one of the dominant features of contemporary physics. Outside physics it has grown into a huge field of interdisciplinary investigation, involving all the natural sciences from physics via biology to sociology. Yet, during the first few decades following the advent of quantum theory, the pursuit of the single particle or the single atom, as the case may be, was so fascinating that only a small number of physicists stressed the importance of collective behaviour. One outstanding personality among these few is Professor HERBERT FROHLICH. He has made an enormous contribution to the modern concept of cooperativity and has stimulated a whole generation of physicists. It therefore seemed to the editors very appropriate to dedicate a volume on "cooperative phenomena" to him on the occasion of his official retirement from his university duties. In the course of carrying out this project, the editors were somewhat amazed to find that they had covered the essentials of contemporary physics and its impact on other scientific disciplines. It thus becomes clear how much HERBERT FROHLICH has inspired research workers and has acted as a stimulating discussion partner for others. FROHLICH is one of those exceptional scientists who have worked in quite different fields and given them an enormous impetus. Unfortunately, the number of scientists of such distinctive personality has been decreasing in our century.
The International Conference on Complex Systems (ICCS) creates a unique atmosphere for scientists of all fields, engineers, physicians, executives, and a host of other professionals to explore common themes and applications of complex system science. With this new volume, Unifying Themes in Complex Systems continues to build common ground between the wide-ranging domains of complex system science.
Reinforcement learning is the learning of a mapping from situations to actions so as to maximize a scalar reward or reinforcement signal. The learner is not told which action to take, as in most forms of machine learning, but instead must discover which actions yield the highest reward by trying them. In the most interesting and challenging cases, actions may affect not only the immediate reward, but also the next situation, and through that all subsequent rewards. These two characteristics -- trial-and-error search and delayed reward -- are the most important distinguishing features of reinforcement learning. Reinforcement learning is both a new and a very old topic in AI. The term appears to have been coined by Minsky (1961), and independently in control theory by Waltz and Fu (1965). The earliest machine learning research now viewed as directly relevant was Samuel's (1959) checkers player, which used temporal-difference learning to manage delayed reward much as it is used today. Of course learning and reinforcement have been studied in psychology for almost a century, and that work has had a very strong impact on the AI/engineering work. One could in fact consider all of reinforcement learning to be simply the reverse engineering of certain psychological learning processes (e.g. operant conditioning and secondary reinforcement). Reinforcement Learning is an edited volume of original research, comprising seven invited contributions by leading researchers.
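To make the two distinguishing features concrete, the sketch below runs tabular Q-learning with an epsilon-greedy policy on a toy chain environment: the learner discovers by trial and error that moving right is best, even though the reward arrives only at the far end. The environment, hyperparameters and policy are assumptions chosen here for illustration; they are not the methods of the book's contributed chapters.

```python
# Minimal sketch of trial-and-error search with delayed reward:
# tabular Q-learning on a 5-state chain where only the right end pays off.
import random

N_STATES = 5          # states 0..4; reward is given only on reaching state 4
ACTIONS = [-1, +1]    # move left or right
alpha, gamma, eps = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic chain dynamics; reward 1 only when the right end is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r, done = step(s, a)
        # Temporal-difference update: propagate the delayed reward backwards.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```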
Leading researchers in the area of the origin, evolution and distribution of life in the universe contributed to Exobiology: Matter, Energy, and Information in the Origin and Evolution of Life in the Universe. This volume provides a review of this interdisciplinary field. In 50 chapters many aspects that contribute to exobiology are reviewed by 90 authors. These include: a historical perspective on biological evolution; cultural aspects of exobiology; cosmic, chemical and biological evolution; molecular biology; geochronology; biogeochemistry; biogeology; and planetology. Some of the current missions are discussed. Other subjects at the frontier of exobiology are reviewed, such as the search for planets outside the solar system, and the possible manifestation of intelligence in those new potential environments. The SETI research effort is well represented in this general overview of exobiology. This book is the proceedings of the Fifth Trieste Conference on Chemical Evolution that took place in September 1997. The volume is dedicated to the memory of Nobel Laureate Abdus Salam who suggested the initiation of the Trieste conferences on chemical evolution and the origin of life. Audience: Graduate students and researchers in the many areas of basic, earth, and life sciences that contribute to the study of chemical evolution and the origin, evolution and distribution of life in the universe.
The analysis of plates and shells under static and dynamic loads is of great interest to scientists and engineers both from the theoretical and the practical viewpoint. The Boundary Element Method (BEM) has some distinct advantages over domain techniques such as the Finite Difference Method (FDM) and the Finite Element Method (FEM) for a wide class of structural analysis problems. This is the first book to deal specifically with the analysis of plates and shells by the BEM and to cover all aspects of their behaviour; it combines tutorial and state-of-the-art articles on the BEM as applied to plates and shells. It aims to inform scientists and engineers about the use and the advantages of this technique, the most recent developments in the field and the pertinent literature for further study.
This volume contains the invited lectures and a selection of the contributed papers and posters of the workshop on "Fluctuations and Sensitivity in Nonequilibrium Systems", held at the Joe C. Thompson Conference Center, University of Texas at Austin, March 12-16, 1984. The workshop dealt with stochastic phenomena and sensitivity in nonequilibrium systems from a macroscopic point of view. During the last few years it has been realized that the role of fluctuations is far less trivial in systems far from equilibrium than in systems under thermodynamic equilibrium conditions. It was found that random fluctuations often are a determining factor for the state adopted by macroscopic systems and cannot be regarded as secondary effects of minor importance. Further, nonequilibrium systems are also very sensitive to small systematic changes in their environment. The main aims of the workshop were: i) to provide scientists with an occasion to acquaint themselves with the state of the art in fluctuation theory and sensitivity analysis; ii) to provide a forum for the presentation of recent advances in theory and experiment; iii) to bring together theoreticians and experimentalists in order to delineate the major open problems and to formulate strategies to tackle these problems. The organizing committee of the workshop consisted of W. Horsthemke, D. K. Kondepudi, G. Dewel, G. Nicolis, I. Prigogine and L. Reichl.
"Over the years enormous effort was invested in proving ergodicity, but for a number of reasons, confidence in the fruitfulness of this approach has waned." - Y. Ben-Menahem and I. Pitowsky [1]. The basic motivation behind the present text is threefold: to give a new explanation for the emergence of thermodynamics, to investigate the interplay between quantum mechanics and thermodynamics, and to explore possible extensions of the common validity range of thermodynamics. Originally, thermodynamics was a purely phenomenological science. Early scientists (Galileo, Santorio, Celsius, Fahrenheit) tried to give definitions for quantities which were intuitively obvious to the observer, like pressure or temperature, and studied their interconnections. The idea that these phenomena might be linked to other fields of physics, like classical mechanics, e.g., was not common in those days. Such a connection was basically introduced when Joule calculated the heat equivalent in 1840, showing that heat was a form of energy, just like kinetic or potential energy in the theory of mechanics. At the end of the 19th century, when the atomic theory became popular, researchers began to think of a gas as a huge number of bouncing balls inside a box.
This book is concerned with Artificial Intelligence (AI) concepts and techniques as applied to industrial decision making, control and automation problems. The field of AI has expanded enormously in recent years because solid theoretical and application results have accumulated. During the first stage of AI development most workers in the field were content with illustrations showing ideas at work on simple problems. Later, as the field matured, emphasis turned to demonstrations that showed the capability of AI techniques to handle problems of practical value. Now we have arrived at the stage where researchers and practitioners are actually building AI systems that face real-world and industrial problems. This volume provides a set of twenty-four well-selected contributions that deal with the application of AI to such real-life and industrial problems. These contributions are grouped and presented in five parts as follows: Part 1: General Issues; Part 2: Intelligent Systems; Part 3: Neural Networks in Modelling, Control and Scheduling; Part 4: System Diagnostics; Part 5: Industrial Robotic, Manufacturing and Organizational Systems. Part 1 comprises four chapters providing background material and dealing with general issues such as the conceptual integration of qualitative and quantitative models, the treatment of timing problems at system integration, and the investigation of correct reasoning in interactive man-robot systems.
The dynamics of transition from laminar to turbulent flow remains to this day a major challenge in theoretical and applied mechanics. A series of IUTAM symposia held over the last twenty-five years at well-known centres of research in the subject - Novosibirsk, Stuttgart, Toulouse, Sendai and Sedona (Arizona) - has proved to be a great catalyst which has given a boost to research and to our understanding of the field. At this point in time, the field is changing significantly, with several emerging directions. The sixth IUTAM meeting in the series, which was held at the Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore, India, focused on the progress after the fifth meeting held at Sedona in 1999. The symposium, which adhered to the IUTAM format of a single session, included seven invited lectures, fifty oral presentations and eight posters. During the course of the symposium, the following became evident. The area of laminar-turbulent transition has progressed considerably since 1999. Better theoretical tools, for handling nonlinearities as well as transient behaviour, are now available. This is accompanied by an enormous increase in the level of sophistication of both experiments and direct numerical simulations. The result has been that our understanding of the early stages of the transition process is now on much firmer footing, and we are now able to study many aspects of the later stages of the transition process.
The ninth Advanced Study Institute (ASI) on Techniques and Concepts of High Energy Physics was almost canceled before it began! A certain visitor to the area (Hurricane Bertha) arrived unexpectedly early in 1996. It was the first hurricane in memory to menace the Caribbean in early July! Fortunately, it passed St. Croix several days before our meeting, and left very little damage. (The Altarellis survived the eye of the storm in the British West Indies!) The meeting was held once again at the hotel on the Cay, on that speck of land in the harbor of Christiansted, St. Croix, U.S. Virgin Islands. After the first two days of, at times, outrageous downpour, the 71 participants from 26 countries began to relax and enjoy the lectures and the lovely surroundings of the Institute. The primary support for the meeting was provided by the Scientific Affairs Division of the North Atlantic Treaty Organization (NATO). The ASI was cosponsored by the U.S. Department of Energy, by the Fermi National Accelerator Laboratory (Fermilab), by the U.S. National Science Foundation, and by the University of Rochester. In addition, the International Science Foundation contributed to the support of a participant from Russia. As in the case of the previous ASIs, the scientific program was designed for advanced graduate students and recent Ph.D. recipients in experimental particle physics.
Emergence and complexity refer to the appearance of higher-level properties and behaviours of a system that arise from the collective dynamics of that system's components. These properties are not directly deducible from the lower-level motion of that system. Emergent properties are properties of the "whole" that are not possessed by any of the individual parts making up that whole. Such phenomena exist in various domains and can be described using complexity concepts and thematic knowledge. This book highlights complexity modelling through dynamical or behavioural systems. The multidisciplinary perspectives developed throughout the chapters build links between a wide range of fundamental and applied sciences. Developing such links - instead of focusing on narrow, specialized research - is characteristic of the science of complexity that we try to promote with this contribution.
Adding one and one makes two, usually. But sometimes things add up to more than the sum of their parts. This observation, now frequently expressed in the maxim "more is different", is one of the characteristic features of complex systems and, in particular, complex networks. Along with their ubiquity in real-world systems, the ability of networks to exhibit emergent dynamics, once they reach a certain size, has rendered them highly attractive targets for research. The resulting network hype has made the word "network" one of the most influential buzzwords seen in almost every corner of science, from physics and biology to economics and the social sciences. The theme of "more is different" appears in a different way in the present volume, from the viewpoint of what we call "adaptive networks". Adaptive networks uniquely combine dynamics on a network with dynamical adaptive changes of the underlying network topology, and thus they link classes of mechanisms that were previously studied in isolation. Here adding one and one certainly does not make two, but gives rise to a number of new phenomena, including highly robust self-organization of topology and dynamics and other remarkably rich dynamical behaviours.
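One way to picture the combination of dynamics on a network with adaptive rewiring of its topology is an adaptive voter-type model: node states change by copying neighbours, while discordant links are sometimes cut and reattached to like-minded nodes. The sketch below is an illustrative toy under assumed parameters (graph size, rewiring probability, update rule); it is not taken from the volume itself.

```python
# Minimal sketch of an adaptive network: node states evolve on the network
# while the topology is rewired in response to those states.
import random
import networkx as nx

G = nx.erdos_renyi_graph(n=100, p=0.05, seed=1)
opinion = {node: random.choice([0, 1]) for node in G.nodes()}
p_rewire = 0.3   # illustrative assumption

for _ in range(10_000):
    i = random.choice(list(G.nodes()))
    neighbors = list(G.neighbors(i))
    if not neighbors:
        continue
    j = random.choice(neighbors)
    if opinion[i] == opinion[j]:
        continue
    if random.random() < p_rewire:
        # Topology adaptation: cut the discordant link, attach to a like-minded node.
        candidates = [k for k in G.nodes() if opinion[k] == opinion[i] and k != i]
        if candidates:
            G.remove_edge(i, j)
            G.add_edge(i, random.choice(candidates))
    else:
        # Dynamics on the network: adopt the neighbour's state.
        opinion[i] = opinion[j]

print("discordant edges remaining:",
      sum(1 for u, v in G.edges() if opinion[u] != opinion[v]))
```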
Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method to incorporate neural network approaches into state-of-the-art continuous speech recognition systems based on hidden Markov models (HMMs) to improve their performance. In this framework, neural networks (and in particular, multilayer perceptrons or MLPs) have been restricted to well-defined subtasks of the whole system, i.e. HMM emission probability estimation and feature extraction. The book describes a successful five-year international collaboration between the authors. The lessons learned form a case study that demonstrates how hybrid systems can be developed to combine neural networks with more traditional statistical approaches. The book illustrates both the advantages and limitations of neural networks in the framework of a statistical system. Using standard databases and comparison with some conventional approaches, it is shown that MLP probability estimation can improve recognition performance. Other approaches are discussed, though there is no such unequivocal experimental result for these methods. Connectionist Speech Recognition is of use to anyone intending to use neural networks for speech recognition or within the framework provided by an existing successful statistical approach. This includes research and development groups working in the field of speech recognition, both with standard and neural network approaches, as well as other pattern recognition and/or neural network researchers. The book is also suitable as a text for advanced courses on neural networks or speech processing.
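A common way to use an MLP for HMM emission probability estimation in hybrid systems is to train it to output state posteriors per acoustic frame and then divide out the state priors, yielding scaled likelihoods that can replace conventional emission probabilities. The sketch below only illustrates that conversion step; the array shapes and all numbers are invented for illustration and are not data or results from the book.

```python
# Schematic sketch: turning MLP state posteriors P(state | frame) into scaled
# likelihoods P(frame | state) / P(frame) for use as HMM emission scores.
# All values are toy numbers, invented purely for illustration.
import numpy as np

# Stand-in for MLP outputs: one posterior distribution over 3 HMM states
# for each of 4 acoustic frames.
posteriors = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
])

# State priors, e.g. estimated from state frequencies in the training alignment.
priors = np.array([0.5, 0.3, 0.2])

# Scaled likelihoods (posterior / prior), usually taken in the log domain.
log_emissions = np.log(posteriors / priors)
print(log_emissions.round(3))
```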
Monte Carlo computer simulations are now a standard tool in scientific fields such as condensed-matter physics, including surface-physics and applied-physics problems (metallurgy, diffusion, segregation, etc.), chemical physics, including studies of solutions, chemical reactions, polymer statistics, etc., and field theory. With the increasing ability of this method to deal with quantum-mechanical problems such as quantum spin systems or many-fermion problems, it will become useful for other questions in the fields of elementary-particle and nuclear physics as well. The large number of recent publications dealing either with applications or with further development of some aspects of this method is a clear indication that the scientific community has realized the power and versatility of Monte Carlo simulations, as well as of related simulation techniques such as "molecular dynamics" and "Langevin dynamics", which are only briefly mentioned in the present book. With the increasing availability of very-high-speed general-purpose computers, many problems become tractable which have so far escaped satisfactory treatment due to practical limitations (too-small systems had to be chosen, or too-short averaging times had to be used). While this approach is admittedly rather expensive, two cheaper alternatives have become available too: (i) array or vector processors specifically suited to wide classes of simulation purposes; (ii) special-purpose processors, which are built for a more specific class of problems or, in the extreme case, for the simulation of one single model system.
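For readers unfamiliar with the method, the sketch below shows the textbook single-spin-flip Metropolis Monte Carlo update for a 2D Ising model, the standard workhorse example in statistical physics. The lattice size, temperature and number of sweeps are assumptions chosen here for illustration, not parameters from the book.

```python
# Minimal sketch of a Metropolis Monte Carlo simulation: single-spin-flip
# updates of a 2D Ising model with periodic boundary conditions.
import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 16, 2.0, 200        # lattice size, temperature (J = k_B = 1), sweeps
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps * L * L):
    i, j = rng.integers(0, L, size=2)
    # Sum of the four nearest neighbours (periodic boundaries).
    nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2.0 * spins[i, j] * nn    # energy cost of flipping spin (i, j)
    # Metropolis acceptance: downhill moves always, uphill with exp(-dE/T).
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

print("magnetization per spin:", abs(spins.mean()))
```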
This work arises from our teaching of this subject over many years. The vast majority of these exercises are the exams we gave to our students in this period. We carefully selected the subjects of the exercises to cover all the material which is most needed and which is treated in the best-known texts on these subjects. Each exercise is carefully solved in full detail, explaining the theory behind the solution with particular care for those issues that, from our experience, are found most difficult by the average student. Indeed, several exercises are designed to throw light on aspects of the theory that, for one reason or another, are usually neglected, with the result that students feel uneasy about them. In fact most students get acquainted only with the more common manipulations, which are illustrated by many examples in textbooks. Our exercises never require extensive calculations but tend to be somewhat unusual and force the solver to think about the problem starting from first principles, rather than by analogy with some previously solved exercise.
This volume explores abductive cognition, an important but, at least until the third quarter of the last century, neglected topic in cognition. It integrates and further develops ideas already introduced in a previous book, which I published in 2001 (Abduction, Reason, and Science. Processes of Discovery and Explanation, Kluwer Academic/Plenum Publishers, New York). The status of abduction is very controversial. When dealing with abductive reasoning, misinterpretations and equivocations are common. What are the differences between abduction and induction? What are the differences between abduction and the well-known hypothetico-deductive method? What did Peirce mean when he considered abduction both a kind of inference and a kind of instinct, or when he considered perception a kind of abduction? Does abduction involve only the generation of hypotheses or their evaluation too? Are the criteria for the best explanation in abductive reasoning epistemic, or pragmatic, or both? Does abduction preserve ignorance or extend truth or both? How many kinds of abduction are there? Is abduction merely a kind of "explanatory" inference or does it involve other non-explanatory ways of guessing hypotheses? The book aims at increasing knowledge about creative and expert inferences. The study of these high-level methods of abductive reasoning is situated at the crossroads of philosophy, logic, epistemology, artificial intelligence, neuroscience, cognitive psychology, animal cognition and evolutionary theories; that is, at the heart of cognitive science. Philosophers of science in the twentieth century have traditionally distinguished between the inferential processes active in the logic of discovery and the ones active in the logic of justification. Most have concluded that no logic of creative processes exists and, moreover, that a rational model of discovery is impossible. In short, scientific creative inferences are irrational and there is no "reasoning" to hypotheses.
Multilayer networks are a rising topic in network science, characterizing the structure and the function of complex systems formed by several interacting networks. Multilayer networks research has been propelled forward by the wide realm of applications in social, biological and infrastructure networks and the large availability of network data, as well as by the significance of recent results, which have produced important advances in this rapidly growing field. This book presents a comprehensive account of this emerging field. It provides a theoretical introduction to the main results of multilayer network science.
The field of ultracold atomic physics has developed rapidly during the last two decades. It currently encompasses a broad range of topics in physics, with a variety of important applications ranging from quantum computing and simulation to quantum metrology, and can be used to probe fundamental many-body effects such as superconductivity and superfluidity. Beginning with the underlying theory and including the most cutting-edge experimental developments, this textbook covers essential topics such as Bose-Einstein condensation of alkali atoms, studies of the BEC-BCS crossover in degenerate Fermi gases, synthetic gauge fields and Hubbard models, and many-body localization and dynamical gauge fields. Key physical concepts, such as symmetry and universality, highlight the connections between different systems, and theory is developed with plain derivations supported by experimental results. This self-contained and modern text will be invaluable for researchers, graduate students and advanced undergraduates studying cold atom physics, from both a theoretical and an experimental perspective.
This book covers developments in the theory of oscillations from diverse viewpoints, reflecting the field's multidisciplinary nature. It introduces the state of the art in the theory and various applications of nonlinear dynamics. It also offers the first treatment of the asymptotic and homogenization methods in the theory of oscillations in combination with Padé approximations. With its wealth of interesting examples, this book will prove useful as an introduction to the field for novices and as a reference for specialists.
In the past decade, a number of different research communities within the computational sciences have studied learning in networks, each starting from its own point of view. There has been substantial progress in these different communities, and a surprising convergence has developed between their formalisms. The awareness of this convergence and the growing interest of researchers in understanding the essential unity of the subject underlie the current volume. Two research communities which have used graphical or network formalisms to particular advantage are the belief network community and the neural network community. Belief networks arose within computer science and statistics and were developed with an emphasis on prior knowledge and exact probabilistic calculations. Neural networks arose within electrical engineering, physics and neuroscience and have emphasised pattern recognition and systems modelling problems. This volume draws together researchers from these two communities and presents both kinds of networks as instances of a general unified graphical formalism. The book focuses on probabilistic methods for learning and inference in graphical models, algorithm analysis and design, theory and applications. Exact methods, sampling methods and variational methods are discussed in detail. Audience: A wide cross-section of computationally oriented researchers, including computer scientists, statisticians, electrical engineers, physicists and neuroscientists.
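As a small illustration of the "exact probabilistic calculations" emphasized by the belief network community, the sketch below performs exact inference by enumeration in a three-variable chain with invented conditional probability tables. The variable names and all numbers are assumptions for illustration; the volume's own chapters cover far more sophisticated exact, sampling and variational methods.

```python
# Minimal sketch of exact inference by enumeration in a tiny belief network:
# a chain Cloudy -> Rain -> WetGrass with toy, invented probability tables.
from itertools import product

p_cloudy = {True: 0.5, False: 0.5}
p_rain_given_cloudy = {True: 0.8, False: 0.2}   # P(Rain=True | Cloudy)
p_wet_given_rain = {True: 0.9, False: 0.1}      # P(Wet=True | Rain)

def joint(cloudy, rain, wet):
    """P(Cloudy, Rain, WetGrass) factorized along the chain."""
    p = p_cloudy[cloudy]
    p *= p_rain_given_cloudy[cloudy] if rain else 1 - p_rain_given_cloudy[cloudy]
    p *= p_wet_given_rain[rain] if wet else 1 - p_wet_given_rain[rain]
    return p

# Enumerate the hidden variables and normalize to get P(Rain | WetGrass=True).
num = sum(joint(c, True, True) for c in (True, False))
den = sum(joint(c, r, True) for c, r in product((True, False), repeat=2))
print("P(Rain | WetGrass=True) =", num / den)
```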
One of the great intellectual challenges for the next few decades is the question of brain organization. What is the basic mechanism for the storage of memory? What are the processes that serve as the interface between the basically chemical processes of the body and the very specific and nonstatistical operations in the brain? Above all, how is concept formation achieved in the human brain? I wonder whether the spirit of the physics that will be involved in these studies will not be akin to that which moved the founders of the "rational foundation of thermodynamics". C. N. Yang. The human brain is said to have roughly 10^10 neurons connected through about 10^14 synapses. Each neuron is itself a complex device which compares and integrates incoming electrical signals and relays a nonlinear response to other neurons. The brain certainly exceeds in complexity any system which physicists have studied in the past. Nevertheless, there do exist many analogies of the brain to simpler physical systems. We have witnessed during the last decade some surprising contributions of physics to the study of the brain. The most significant parallel between biological brains and many physical systems is that both are made of many tightly interacting components.
Computer Simulation Studies in Condensed-Matter Physics VIII covers recent developments in this field presented at the 1995 workshop, such as new algorithms, methods of analysis, and conceptual developments. This volume is composed of three parts. The first part contains invited papers that deal with simulational studies of classical systems. The second part is devoted to invited papers on quantum systems, including new results for strongly correlated electron and quantum spin models. The final part comprises contributed presentations.