Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method for incorporating neural network approaches into state-of-the-art continuous speech recognition systems based on hidden Markov models (HMMs) to improve their performance. In this framework, neural networks (in particular, multilayer perceptrons, or MLPs) are restricted to well-defined subtasks of the whole system, i.e. HMM emission probability estimation and feature extraction. The book describes a successful five-year international collaboration between the authors. The lessons learned form a case study that demonstrates how hybrid systems can be developed to combine neural networks with more traditional statistical approaches. The book illustrates both the advantages and limitations of neural networks within the framework of a statistical system. Using standard databases and comparisons with some conventional approaches, it is shown that MLP probability estimation can improve recognition performance. Other approaches are discussed, though there is no such unequivocal experimental result for these methods. Connectionist Speech Recognition is of use to anyone intending to use neural networks for speech recognition, or within the framework provided by an existing successful statistical approach. This includes research and development groups working in the field of speech recognition, both with standard and neural network approaches, as well as other pattern recognition and/or neural network researchers. The book is also suitable as a text for advanced courses on neural networks or speech processing.
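The emission-probability subtask mentioned above rests on a simple identity: an MLP trained to classify acoustic frames estimates the posterior P(state | features), and dividing by the state prior yields a scaled likelihood that can be plugged into HMM decoding. A minimal sketch of that conversion (the array values are made-up illustrative numbers, not data from the book):

```python
import numpy as np

def scaled_likelihoods(posteriors, priors):
    """Convert MLP state posteriors P(state | features) into scaled
    likelihoods P(features | state) / P(features) by dividing out the
    state priors, as done when an MLP replaces the HMM emission model."""
    return posteriors / priors

# Hypothetical numbers: 3 HMM states, one feature frame.
posteriors = np.array([0.7, 0.2, 0.1])   # MLP outputs (sum to 1)
priors     = np.array([0.5, 0.3, 0.2])   # state priors from training data
print(scaled_likelihoods(posteriors, priors))
```

During Viterbi decoding the unknown factor P(features) is constant across states at each frame, so it cancels and the scaled likelihoods rank hypotheses correctly.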
In May 2002 about 20 scientists from various disciplines were invited by the Berlin-Brandenburg Academy of Sciences and Humanities to participate in an interdisciplinary workshop on structures and structure generating processes. The site was the beautiful little castle of Blankensee, south of Berlin. The disciplines represented ranged from mathematics and information theory, over various fields of engineering, biochemistry and biology, to the economic and social sciences. All participants presented talks explaining the nature of the structures considered in their fields and the associated procedures of analysis. It soon became evident that the study of structures is indeed a common concern of virtually all disciplines. The motivation as well as the methods of analysis, however, differ considerably. In engineering, the generation of artifacts, such as infrastructures or technological processes, is of primary interest. Frequently, the analysis there aims at defining a simplified mathematical model for the optimization of the structures and the structure generating processes. Mathematical or heuristic methods are applied, the latter preferably of the type of biology-based evolutionary algorithms. On the other hand, setting up complex technical structures is not possible by such simplified model calculations but requires a different, less model-based and more knowledge-based type of approach, using empirical rules rather than formal equations. In biochemistry, interest is frequently focussed on the structures of molecules, such as proteins or ribonucleic acids. Again, optimal structures can usually be defined.
Emergence and complexity refer to the appearance of higher-level properties and behaviours of a system that arise from the collective dynamics of that system's components. These properties are not directly deducible from the lower-level motion of that system. Emergent properties are properties of the "whole" that are not possessed by any of the individual parts making up that whole. Such phenomena exist in various domains and can be described using complexity concepts and domain-specific knowledge. This book highlights complexity modelling through dynamical or behavioural systems. The pluridisciplinary perspectives developed across the chapters forge links between a wide range of fundamental and applied sciences. Developing such links - instead of focusing on specific and narrow research questions - is characteristic of the science of complexity that we try to promote with this contribution.
Adding one and one makes two, usually. But sometimes things add up to more than the sum of their parts. This observation, now frequently expressed in the maxim "more is different", is one of the characteristic features of complex systems and, in particular, complex networks. Along with their ubiquity in real world systems, the ability of networks to exhibit emergent dynamics, once they reach a certain size, has rendered them highly attractive targets for research. The resulting network hype has made the word "network" one of the most influential buzzwords seen in almost every corner of science, from physics and biology to economics and the social sciences. The theme of "more is different" appears in a different way in the present volume, from the viewpoint of what we call "adaptive networks." Adaptive networks uniquely combine dynamics on a network with dynamical adaptive changes of the underlying network topology, and thus they link classes of mechanisms that were previously studied in isolation. Here adding one and one certainly does not make two, but gives rise to a number of new phenomena, including highly robust self-organization of topology and dynamics and other remarkably rich dynamical behaviors.
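The defining feedback loop of adaptive networks - states evolving on the network while the topology evolves in response to the states - can be illustrated with a toy rule in the spirit of the adaptive voter model. The specific rule, probabilities, and graph below are illustrative assumptions, not a model taken from the book:

```python
import random

def adaptive_voter_step(edges, state, p_rewire, rng):
    """One update of a minimal adaptive voter model: pick a random edge;
    if its endpoints disagree, either rewire the edge away from the
    conflict (probability p_rewire) or let one endpoint copy the other's
    opinion. State dynamics and topology dynamics thus feed back on
    each other."""
    edge = rng.choice(sorted(edges))
    i, j = edge
    if state[i] == state[j]:
        return
    if rng.random() < p_rewire:
        # Rewire: i drops the link to j and attaches to some other node.
        k = rng.choice([n for n in state if n not in (i, j)])
        edges.discard(edge)
        edges.add((min(i, k), max(i, k)))
    else:
        state[i] = state[j]  # adopt the neighbour's opinion

rng = random.Random(0)
state = {n: n % 2 for n in range(6)}          # alternating opinions
edges = {(min(i, (i + 1) % 6), max(i, (i + 1) % 6)) for i in range(6)}  # 6-cycle
for _ in range(50):
    adaptive_voter_step(edges, state, p_rewire=0.3, rng=rng)
print(len(edges), sorted(state.values()))
```

Depending on p_rewire, runs of such models end either in consensus or in a fragmented network of internally uniform components, which is exactly the kind of emergent outcome neither mechanism produces alone.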
Monte Carlo computer simulations are now a standard tool in scientific fields such as condensed-matter physics, including surface-physics and applied-physics problems (metallurgy, diffusion, segregation, etc.), chemical physics, including studies of solutions, chemical reactions, polymer statistics, etc., and field theory. With the increasing ability of this method to deal with quantum-mechanical problems such as quantum spin systems or many-fermion problems, it will become useful for other questions in the fields of elementary-particle and nuclear physics as well. The large number of recent publications dealing either with applications or further development of some aspects of this method is a clear indication that the scientific community has realized the power and versatility of Monte Carlo simulations, as well as of related simulation techniques such as "molecular dynamics" and "Langevin dynamics," which are only briefly mentioned in the present book. With the increasing availability of very-high-speed general-purpose computers, many problems become tractable which have so far escaped satisfactory treatment due to practical limitations (too small systems had to be chosen, or too short averaging times had to be used). While this approach is admittedly rather expensive, two cheaper alternatives have become available, too: (i) array or vector processors specifically suited for wide classes of simulation purposes; (ii) special purpose processors, which are built for a more specific class of problems or, in the extreme case, for the simulation of one single model system.
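The method's simplest incarnation is Metropolis Monte Carlo for a small 2D Ising model: propose a spin flip and accept it with probability min(1, exp(-beta*dE)). The lattice size, temperature, and step count below are arbitrary illustrative choices:

```python
import math
import random

def metropolis_ising(L, beta, steps, rng):
    """Metropolis Monte Carlo for a 2D Ising model on an L x L lattice
    with periodic boundaries; returns the magnetization per spin."""
    spin = [[1] * L for _ in range(L)]          # start fully ordered
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2 * spin[i][j] * nb                # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spin[i][j] = -spin[i][j]            # accept the flip
    return sum(map(sum, spin)) / L**2

rng = random.Random(42)
m = metropolis_ising(L=8, beta=1.0, steps=5000, rng=rng)
print(m)  # well below the critical temperature, |m| stays close to 1
```

The same accept/reject skeleton carries over to lattice field theory and quantum spin systems; only the energy difference dE changes.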
The state-of-the-art in the theoretical statistical physics treatment of the Janus fluid is reported, with a bridge between new research results published in journal articles and a contextual literature review. Recent Monte Carlo simulations on the Kern and Frenkel model of the Janus fluid have revealed that in the vapor phase, below the critical point, there is the formation of preferred inert clusters made up of a well-defined number of particles: the micelles and the vesicles. This is responsible for a re-entrant gas branch of the gas-liquid binodal. A detailed account of these findings is given in the first chapter, where the Janus fluid is introduced as a product of new sophisticated synthesis laboratory techniques. In the second chapter a cluster theory is developed to approximate the exact clustering properties stemming from the simulations. It is shown that the theory is able to reproduce semi-quantitatively the micellization phenomenon.
Computer Simulation Studies in Condensed-Matter Physics VIII covers recent developments in this field presented at the 1995 workshop, such as new algorithms, methods of analysis, and conceptual developments. This volume is composed of three parts. The first part contains invited papers that deal with simulational studies of classical systems. The second part is devoted to invited papers on quantum systems, including new results for strongly correlated electron and quantum spin models. The final part comprises contributed presentations.
Quantum field theory in curved spacetime has been remarkably fruitful. It can be used to explain how the large-scale structure of the universe and the anisotropies of the cosmic background radiation that we observe today first arose. Similarly, it provides a deep connection between general relativity, thermodynamics, and quantum field theory. This book develops quantum field theory in curved spacetime in a pedagogical style, suitable for graduate students. The authors present detailed, physically motivated, derivations of cosmological and black hole processes in which curved spacetime plays a key role. They explain how such processes in the rapidly expanding early universe leave observable consequences today, and how in the context of evaporating black holes, these processes uncover deep connections between gravitation and elementary particles. The authors also lucidly describe many other aspects of free and interacting quantized fields in curved spacetime.
Interacting many-body systems are the main subjects of research in theoretical condensed matter physics, and they are the source of both the interest and the difficulty in this field. In order to understand the macroscopic properties of matter in terms of microscopic knowledge, many analytic and approximate methods have been introduced. The contributions to this proceedings volume focus on the most recent developments of computational approaches in condensed matter physics. Monte Carlo methods and molecular dynamics simulations applied to strongly correlated classical and quantum systems such as electron systems, quantum spin systems, spin glasses, coupled map systems, polymers and other random and complex systems are reviewed. Comprising easy-to-follow introductions to each field covered and also more specialized contributions, this proceedings volume explains why computational approaches are necessary and how different fields are related to each other.
One high-level ability of the human brain is to understand what it has learned. This seems to be the crucial advantage in comparison to the brain activity of other primates. At present we are technologically almost ready to artificially reproduce human brain tissue, but we still do not fully understand the information processing and the related biological mechanisms underlying this ability. Thus an electronic clone of the human brain is still far from being realizable. At the same time, around twenty years after the revival of the connectionist paradigm, we are not yet satisfied with the typical subsymbolic attitude of devices like neural networks: we can make them learn to solve even difficult problems, but without a clear explanation of why a solution works. Indeed, to use these devices widely in a reliable and non-elementary way we need formal and understandable expressions of the learnt functions. These must be susceptible of being tested, manipulated and composed with other similar expressions to build more structured functions as solutions to complex problems via the usual deductive methods of Artificial Intelligence. Much effort has been directed along these lines in recent years, constructing artificial hybrid systems in which the subsymbolic processing of neural networks merges in various modes with symbolic algorithms. In parallel, neurobiology research keeps on supplying more and more detailed explanations of the low-level phenomena responsible for mental processes.
There are numerous technological materials - such as metals, polymers, ceramics, concrete, and many others - that vary in properties and serviceability. However, the almost universal common theme to most real materials is that their properties depend on the scale at which the analysis or observation takes place, and at each scale "probabilities" play an important role. Here the word "probabilities" is used in a wider than the classical sense. In order to increase the efficiency and serviceability of these materials, researchers from NATO, CP and other countries were brought together to exchange knowledge and develop avenues for progress and applications in the 21st century. The workshop began by reviewing progress in the subject area over the past few years and by identifying key questions that remain open. One point was how to observe/measure material properties at different scales and whether a probabilistic approach, at each scale, was always applicable and advantageous. The wide range of materials, from wood to advanced metals and from concrete to complex advanced composites, and the diversity of applications, e.g. fatigue, fracture, deformation, etc., were recognized as "obstacles" in identifying a "universal" approach.
At the present moment, after the success of the renormalization group in providing a conceptual framework for studying second-order phase transitions, we have a nearly satisfactory understanding of the statistical mechanics of classical systems with a non-random Hamiltonian. The situation is completely different if we consider the theory of systems with a random Hamiltonian or of chaotic dynamical systems. The two fields are connected; in fact, in the latter the effects of deterministic chaos can be modelled by an appropriate stochastic process. Although many interesting results have been obtained in recent years and much progress has been made, we still lack a satisfactory understanding of the extremely wide variety of phenomena which are present in these fields. The study of disordered or chaotic systems is the new frontier where new ideas and techniques are being developed. More interesting and deep results are expected to come in future years. The properties of random matrices and their products form a basic tool, whose importance cannot be overestimated. They play a role as important as Fourier transforms for differential equations. This book is extremely interesting in that it presents a unified approach to the main results which have been obtained in the study of random matrices. It will become a reference book for people working in the subject. The book is written by physicists, uses the language of physics, and I am sure that many physicists will read it with great pleasure.
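The central quantity for products of random matrices is the top Lyapunov exponent: the asymptotic exponential growth rate of |M_n ... M_1 v|. A standard numerical estimate applies a fresh random matrix to a vector at each step and renormalizes, accumulating the log of the growth. The uniform-entry ensemble below is an arbitrary illustrative choice, not one from the book:

```python
import math
import random

def top_lyapunov(draw_matrix, n_steps, rng):
    """Estimate the top Lyapunov exponent of a product of i.i.d. random
    2x2 matrices: apply a random matrix to a unit vector, record the log
    of the stretch factor, renormalize, and average over many steps."""
    v = [1.0, 0.0]
    log_growth = 0.0
    for _ in range(n_steps):
        a, b, c, d = draw_matrix(rng)
        v = [a * v[0] + b * v[1], c * v[0] + d * v[1]]
        norm = math.hypot(v[0], v[1])
        log_growth += math.log(norm)
        v = [v[0] / norm, v[1] / norm]  # keep the vector on the unit circle
    return log_growth / n_steps

# Hypothetical ensemble: all four entries uniform on [-1, 1].
draw = lambda rng: [rng.uniform(-1, 1) for _ in range(4)]
rng = random.Random(1)
print(top_lyapunov(draw, 20000, rng))
```

For this contracting-on-average ensemble the estimate comes out negative; ensembles with larger entries push it positive, signalling exponential growth of the product.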
This volume contains the written versions of lectures held at the "23. Internationale Universitätswochen für Kernphysik" in Schladming, Austria, in February 1984. Once again the generous support of our sponsors, the Austrian Ministry of Science and Research, the Styrian Government and others, made it possible to organize this school. The aim of the topics chosen for the meeting was to present different aspects of stochastic methods and techniques. These methods have opened up new ways to attack problems in a broad field ranging from quantum mechanics to quantum field theory. Thanks to the efforts of the lecturers it was possible to take this development into account and show relations to areas where stochastic methods have been used for a long time. Due to limited space only short manuscript versions of the many seminars presented could be included. The lecture notes were reexamined by the authors after the school and are now published in their final form. It is a pleasure to thank all the lecturers for their efforts which made it possible to speed up publication. Thanks are also due to Mrs. Neuhold for her careful typing of the notes. H. Mitter, L. Pittner. Acta Physica Austriaca, Suppl. XXVI, 3-52 (1984), (c) Springer-Verlag 1984: "Stochastic Processes - Quantum Physics" by L. Streit, Universität Bielefeld, BiBoS, D-4800 Bielefeld, FR Germany.
The field of neural information processing has two main objects: investigation into the functioning of biological neural networks and use of artificial neural networks to solve real world problems. Even before the reincarnation of the field of artificial neural networks in the mid-1980s, researchers had attempted to explore the engineering of human brain function. After the reincarnation, we have seen an emergence of a large number of neural network models and their successful applications to solving real world problems. This volume presents a collection of recent research and developments in the field of neural information processing. The book is organized in three parts, i.e., (1) architectures, (2) learning algorithms, and (3) applications. Artificial neural networks consist of simple processing elements called neurons, which are connected by weights. The number of neurons and how they are connected to each other defines the architecture of a particular neural network. Part 1 of the book has nine chapters, demonstrating some recent neural network architectures derived either to mimic aspects of human brain function or applied to some real world problems. Muresan provides a simple neural network model, based on spiking neurons that make use of shunting inhibition, which is capable of resisting small-scale changes of stimulus. Hoshino and Zheng simulate a neural network of the auditory cortex to investigate the neural basis for encoding and perception of vowel sounds.
The Fifteenth International Workshop on Maximum Entropy and Bayesian Methods was held July 31-August 4, 1995 in Santa Fe, New Mexico, USA. St. John's College, located in the foothills of the Sangre de Cristo Mountains, provided a congenial setting for the Workshop. The relaxed atmosphere of the College, which was thoroughly enjoyed by all the attendees, stimulated free-flowing and thoughtful discussions. Conversations continued at the social events, which included a reception at the Santa Fe Institute, a New Mexican dinner at Richard Silver's home, and an excursion to Los Alamos that ended with a mixed grill at Fuller Lodge, the main hall of the former Los Alamos Ranch School. This volume represents the Proceedings of the Workshop. Articles on the traditional theme of the Workshop, application of the maximum-entropy principle and Bayesian methods for statistical inference in diverse areas of scientific research, are contained in these Proceedings. As is tradition, the Workshop opened with a tutorial on Bayesian methods, lucidly presented by Peter Cheeseman and Wray Buntine (NASA Ames, Moffett Field). The lecture notes for their tutorial are available on the World Wide Web at http://bayes.lanl.gov/~maxent/. In addition, several new thrusts for the Workshop are described below.
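As a flavor of the workshop's theme, the classic maximum-entropy exercise assigns probabilities to a die constrained only by its mean: the least-biased distribution is exponential in the face value, p_i proportional to exp(lam * i), with lam fixed by the constraint. A sketch solving for lam by bisection (the target mean of 4.5 is an illustrative choice):

```python
import math

def maxent_die(mean):
    """Maximum-entropy distribution on the faces {1..6} of a die subject
    to a prescribed mean: p_i proportional to exp(lam * i), with the
    Lagrange multiplier lam found by bisection on the monotone map
    lam -> mean of the resulting distribution."""
    faces = range(1, 7)

    def mean_of(lam):
        w = [math.exp(lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

    lo, hi = -5.0, 5.0
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        if mean_of(mid) < mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # a loaded die whose average roll is 4.5
print([round(pi, 4) for pi in p])
```

A mean of 3.5 recovers the uniform die (lam = 0); any higher mean tilts the weights monotonically toward the large faces, which is the least committal distribution consistent with the single constraint.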
Nervous System Actions and Interactions: Concepts in Neurophysiology approaches the nervous system from a functional, rather than structural, point of view. While all of the central topics of functional neuroscience are covered, these topics are organized from a neurophysiological perspective, yielding chapters on subjects such as information storage and effector actions. Each chapter is organized around general concepts that are then further developed in the text. The authors attempt to establish a dialogue with the reader by means of proposed experiments and open-ended questions that are designed to both reinforce and question the text. This volume is intended to be a book of ideas for the novice or seasoned researcher in neuroscience.
This volume contains papers presented at the Thirteenth Taniguchi Symposium on the Theory of Condensed Matter, which was held at Kashikojima (in Ise Shima National Park), Japan, from 6th to 9th November, 1990. The topic of the symposium was Molecular Dynamics Simulations. The general objective of this series of Taniguchi Symposia is to encourage developing fields of great promise in condensed matter physics. Our theme, molecular dynamics (MD) simulations, certainly fulfills this requirement, because the field is developing at a remarkable pace and its future is considered almost boundless. It was in the 1950s that the original idea of the MD methods was first proposed and applied to the study of physical systems composed of many particles. In fact, the invention of the MD techniques occurred soon after the construction of the first computers. For almost 35 years since then, MD methods, together with Monte Carlo methods, have played major parts in the drama of computer simulations. The triumph of MD simulations is not confined to numerical aspects of detailed analyses of physical systems. MD simulations have verified some unexpected facts and introduced some new concepts, all of which had never been predicted previously from analytical theories. The occurrence of the Alder transition in a system of repulsive particles and the behavior of the long-time tails of the velocity autocorrelation function for a liquid are just two examples of the results achieved by means of MD studies.
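At the heart of every MD code is a time-stepping integrator for Newton's equations; the velocity Verlet scheme is the standard choice because it is time-reversible and conserves energy well over long runs. A minimal sketch for a single particle in a 1D harmonic potential (unit mass and the chosen time step are illustrative assumptions):

```python
def velocity_verlet(x, v, force, dt, steps):
    """Velocity Verlet integration for one particle of unit mass in 1D:
    update the position with the current force, recompute the force at
    the new position, then update the velocity with the average force."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * f * dt * dt
        f_new = force(x)
        v += 0.5 * (f + f_new) * dt
        f = f_new
    return x, v

# Harmonic oscillator F = -x, started at x = 1 with v = 0:
# the total energy 0.5*v**2 + 0.5*x**2 should stay near its initial 0.5.
x, v = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=10000)
energy = 0.5 * v * v + 0.5 * x * x
print(energy)
```

Production codes apply the same three-line update to millions of particles, with the force loop (pair potentials, neighbour lists) consuming essentially all of the runtime.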
The formation and evolution of complex dynamical structures is one of the most exciting areas of nonlinear physics. Such pattern formation problems are common in practically all systems involving a large number of interacting components. Here, the basic problem is to understand how competing physical forces can shape stable geometries and to explain why nature prefers just these. Motivation for the intensive study of pattern formation phenomena during the past few years derives from an increasing appreciation of the remarkable diversity of behaviour encountered in nonlinear systems and of universal features shared by entire classes of nonlinear processes. As physics copes with ever more ambitious problems in pattern formation, summarizing our present state of knowledge becomes a pressing issue. This volume presents an overview of selected topics in this field of current interest. It deals with theoretical models of pattern formation and with simulations that bridge the gap between theory and experiment. The book is a product of the International Symposium on the Physics of Structure Formation, held from October 27 through November 2, 1986, at the Institute for Information Sciences of the University of Tübingen. The symposium brought together a group of distinguished scientists from various disciplines to exchange ideas about recent advances in pattern formation in the physical sciences, and also to introduce young scientists to the field.
The core of the material on large scale dynamics of interacting particles grew out of courses I taught at the Katholieke Universiteit Leuven, Rutgers University, and the Ludwig-Maximilians-Universität München and out of lectures I gave at the workshop "Hydrodynamical Behavior of Microscopic Systems" at the Università dell'Aquila. I had the good luck of being helped through difficult ground by many friends. Amongst them I am deeply indebted to Joel L. Lebowitz. He got me started. Relatively little would have been achieved without his never-ending curiosity and insistence on clarity. Furthermore, I gratefully acknowledge the cooperation of Michael Aizenman, Henk van Beijeren, Carlo Boldrighini, Jean Bricmont, Paola Calderoni, Brian Davies, Anna DeMasi, Roland Dobrushin, Detlef Dürr, Gregory Eyink, Mark Fannes, Pablo Ferrari, Alberto Frigerio, Joseph Fritz, Antonio Galves, Shelly Goldstein, Vittorio Gorini, Reinhard Illner, Claude Kipnis, Joachim Krug, Oscar Lanford, Reinhard Lang, Joel Lebowitz, Christian Maes, Stefano Olla, George Papanicolaou, Errico Presutti, Mario Pulvirenti, Fraydoun Rezakhanlou, Hermann Rost, Yasha Sinai, Yuri Suhov, Domo Szasz, Ragu Varadhan, André Verbeure, David Wick, and Horng-Tzer Yau. The list is somewhat lengthy, perhaps, but besides thanks I want to make clear that what I will describe is the outcome of a common scientific enterprise. I thank Henk van Beijeren and Detlef Dürr for careful reading of and comments on a previous version. Paola Calderoni and Detlef Dürr supplied me with the proof in Part I, Chapter 8.4, which is most appreciated. München, May 1991, Herbert Spohn
Locality is a fundamental restriction in nature. On the other hand, adaptive complex systems, life in particular, exhibit a sense of permanence and timelessness amidst relentless change in their surrounding environments, which makes the global properties of the physical world among the most important problems in understanding their nature and structure. Thus, much of the differential and integral calculus deals with the problem of passing from local information (as expressed, for example, by a differential equation, or the contour of a region) to global features of a system's behavior (an equation of growth, or an area). Fundamental laws in the exact sciences seek to express the observable global behavior of physical objects through equations about local interaction of their components, on the assumption that the continuum is the most accurate model of physical reality. Paradoxically, much of modern physics calls for a fundamental discrete component in our understanding of the physical world. Useful computational models must eventually be constructed in hardware, and as such can only be based on local interaction of simple processing elements.
For any research field to have a lasting impact, there must be a firm theoretical foundation. Neural networks research is no exception. Some of the foundational concepts, established several decades ago, led to the early promise of developing machines exhibiting intelligence. The motivation for studying such machines comes from the fact that the brain is far more efficient in visual processing and speech recognition than existing computers. Undoubtedly, neurobiological systems employ very different computational principles. The study of artificial neural networks aims at understanding these computational principles and applying them in the solutions of engineering problems. Due to the recent advances in both device technology and computational science, we are currently witnessing an explosive growth in the studies of neural networks and their applications. It may take many years before we have a complete understanding of the mechanisms of neural systems. Before this ultimate goal can be achieved, answers are needed to important fundamental questions such as (a) what can neural networks do that traditional computing techniques cannot, (b) how does the complexity of the network for an application relate to the complexity of that problem, and (c) how much training data are required for the resulting network to learn properly? Everyone working in the field has attempted to answer these questions, but general solutions remain elusive. However, encouraging progress in studying specific neural models has been made by researchers from various disciplines.
In this paper we shall discuss the construction of formal short-wave asymptotic solutions of problems of mathematical physics. The topic is very broad. It can somewhat conveniently be divided into three parts: 1. Finding the short-wave asymptotics of a rather narrow class of problems, which admit a solution in an explicit form, via formulas that represent this solution. 2. Finding formal asymptotic solutions of equations that describe wave processes by basing them on some ansatz or other. We explain what point 2 means. Giving an ansatz means knowing how to give a formula for the desired asymptotic solution in the form of a series or some expression containing a series, where the analytic nature of the terms of these series is indicated up to functions and coefficients that are undetermined at the first stage of consideration. The second stage is to determine these functions and coefficients by a direct substitution of the ansatz into the equation, the boundary conditions and the initial conditions. Sometimes it is necessary to use different ansätze in different domains, and in the overlapping parts of these domains the formal asymptotic solutions must be asymptotically equivalent (the method of matched asymptotic expansions). The basis for success in the search for formal asymptotic solutions is a suitable choice of ansätze. The study of the asymptotics of explicit solutions of special model problems allows us to "surmise" what the correct ansätze are for the general solution.
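For a concrete instance of such an ansatz, the short-wave (WKB/eikonal) expansion for the Helmholtz equation Δu + n²(x)u/ε² = 0 posits a rapidly oscillating phase multiplying a slowly varying amplitude series; substituting and collecting powers of ε yields equations for the undetermined phase and amplitudes. This is the standard textbook example, not necessarily the author's notation:

```latex
u(x) \sim e^{iS(x)/\varepsilon}\sum_{k\ge 0}\varepsilon^{k}A_k(x),
\qquad
|\nabla S|^{2}=n^{2}(x)\ \ \text{(eikonal, order }\varepsilon^{-2}\text{)},
\qquad
2\,\nabla S\cdot\nabla A_0+(\Delta S)\,A_0=0\ \ \text{(transport, order }\varepsilon^{-1}\text{)}.
```

The eikonal equation fixes the phase S along rays, and the transport equation then fixes the leading amplitude A_0; the higher A_k follow recursively, which is exactly the two-stage procedure described above.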
Human Face Recognition Using Third-Order Synthetic Neural Networks explores the viability of applying high-order synthetic neural network technology to transformation-invariant recognition of complex visual patterns. High-order networks require little training data (hence, short training times) and have been used to perform transformation-invariant recognition of relatively simple visual patterns, achieving very high recognition rates. The successful results of these methods provided inspiration to address more practical problems which have grayscale, as opposed to binary, patterns (e.g., alphanumeric characters, aircraft silhouettes) and are also more complex in nature, as opposed to purely edge-extracted images - human face recognition is such a problem. Human Face Recognition Using Third-Order Synthetic Neural Networks serves as an excellent reference for researchers and professionals working on applying neural network technology to the recognition of complex visual patterns.
This book gives the first detailed coherent treatment of a relatively young branch of statistical physics - nonlinear nonequilibrium and fluctuation-dissipative thermodynamics. This area of research has taken shape fairly recently: its development began in 1959. The earlier theory - linear nonequilibrium thermodynamics - is in principle a simple special case of the new theory. Despite the fact that the title of this book includes the word "nonlinear", it also covers the results of linear nonequilibrium thermodynamics. The presentation of the linear and nonlinear theories is done within a common theoretical framework that is not subject to the linearity condition. The author hopes that the reader will perceive the intrinsic unity of this discipline, and the uniformity and generality of its constituent parts. This theory has a wide variety of applications in various domains of physics and physical chemistry, enabling one to calculate thermal fluctuations in various nonlinear systems. The book is divided into two volumes. Fluctuation-dissipation theorems (or relations) of various types (linear, quadratic and cubic, classical and quantum) are considered in the first volume. Here one encounters the Markov and non-Markov fluctuation-dissipation theorems (FDTs), theorems of the first, second and third kinds. Nonlinear FDTs are less well known than their linear counterparts.
Ordinary thermodynamics provides reliable results when the thermodynamic fields are smooth, in the sense that there are no steep gradients and no rapid changes. In fluids and gases this is the domain of the equations of Navier-Stokes and Fourier. Extended thermodynamics becomes relevant for rapidly varying and strongly inhomogeneous processes. Thus the propagation of high frequency waves, the shape of shock waves, and the regression of small-scale fluctuations are governed by extended thermodynamics. The field equations of ordinary thermodynamics are parabolic while extended thermodynamics is governed by hyperbolic systems. The main ingredients of extended thermodynamics are: field equations of balance type; constitutive quantities depending on the present local state; and entropy as a concave function of the state variables. This set of assumptions leads to first order quasi-linear symmetric hyperbolic systems of field equations; it guarantees the well-posedness of initial value problems and finite speeds of propagation. Several tenets of irreversible thermodynamics had to be changed in subtle ways to make extended thermodynamics work. Thus, the entropy is allowed to depend on nonequilibrium variables, the entropy flux is a general constitutive quantity, and the equations for stress and heat flux contain inertial terms. New insight is therefore provided into the principle of material frame indifference. With these modifications an elegant formal structure can be set up in which, just as in classical thermostatics, all restrictive conditions - derived from the entropy principle - take the form of integrability conditions.