This is the eighth volume in the series "Mathematics in Industrial Problems." The motivation for these volumes is to foster interaction between Industry and Mathematics at the "grass roots level"; that is, at the level of specific problems. These problems come from Industry: they arise from models developed by the industrial scientists in ventures directed at the manufacture of new or improved products. At the same time, these problems have the potential for mathematical challenge and novelty. To identify such problems, I have visited industries and had discussions with their scientists. Some of the scientists have subsequently presented their problems in the IMA Seminar on Industrial Problems. The book is based on the seminar presentations and on questions raised in subsequent discussions. Each chapter is devoted to one of the talks and is self-contained. The chapters usually provide references to the mathematical literature and a list of open problems that are of interest to industrial scientists. For some problems, a partial solution is indicated briefly. The last chapter of the book contains a short description of solutions to some of the problems raised in the previous volume, as well as references to papers in which such solutions have been published.
The expression 'Neural Networks' refers traditionally to a class of mathematical algorithms that obtain their proper performance while they 'learn' from examples or from experience. As a consequence, they are suitable for performing straightforward and relatively simple tasks like classification, pattern recognition and prediction, as well as more sophisticated tasks like the processing of temporal sequences and the context-dependent processing of complex problems. Also, a wide variety of control tasks can be executed by them, and the suggestion is relatively obvious that neural networks perform adequately in such cases because they are thought to mimic the biological nervous system, which is also devoted to such tasks. As we shall see, this suggestion is false but does not do any harm as long as it is only the final performance of the algorithm which counts. Neural networks are also used in the modelling of the functioning of (subsystems in) the biological nervous system. It will be clear that in such cases it is certainly not irrelevant how similar their algorithm is to what is precisely going on in the nervous system. Standard artificial neural networks are constructed from 'units' (roughly similar to neurons) that transmit their 'activity' (similar to membrane potentials or to mean firing rates) to other units via 'weight factors' (similar to synaptic coupling efficacies).
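As a rough illustration of that last sentence, here is a minimal sketch of a single 'unit'; it is not taken from the book, and the logistic squashing function, the bias term and all names are assumptions made purely for illustration:

```python
import math

def unit_activity(incoming_activities, weight_factors, bias=0.0):
    """Toy neural 'unit': combine upstream activities through 'synaptic' weight
    factors and squash the net input with a logistic function (illustrative only)."""
    net_input = sum(w * a for w, a in zip(weight_factors, incoming_activities)) + bias
    return 1.0 / (1.0 + math.exp(-net_input))

# Two upstream units with activities 0.3 and 0.9, coupled through weights 1.5 and -0.7.
print(unit_activity([0.3, 0.9], [1.5, -0.7]))
```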
The subject of this book is predictive modular neural networks and their application to time series problems: classification, prediction and identification. The intended audience is researchers and graduate students in the fields of neural networks, computer science, statistical pattern recognition, statistics, control theory and econometrics. Biologists, neurophysiologists and medical engineers may also find this book interesting. In the last decade the neural networks community has shown intense interest in both modular methods and time series problems. Similar interest has been expressed for many years in other fields as well, most notably in statistics, control theory, econometrics etc. There is a considerable overlap (not always recognized) of ideas and methods between these fields. Modular neural networks come by many other names, for instance multiple models, local models and mixtures of experts. The basic idea is to independently develop several "subnetworks" (modules), which may perform the same or related tasks, and then use an "appropriate" method for combining the outputs of the subnetworks. Some of the expected advantages of this approach (when compared with the use of "lumped" or "monolithic" networks) are: superior performance, reduced development time and greater flexibility. For instance, if a module is removed from the network and replaced by a new module (which may perform the same task more efficiently), it should not be necessary to retrain the aggregate network.
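For concreteness, one common "appropriate" combination rule is a convex, softmax-gated weighting of module outputs in the spirit of mixtures of experts. The sketch below is our own illustration under that assumption, not the book's method, and the module outputs and gate scores are placeholders:

```python
import math

def combine_modules(module_outputs, gate_scores):
    """Blend predictions from independent modules using softmax-normalised gating weights."""
    exps = [math.exp(s) for s in gate_scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # convex weights, summing to 1
    return sum(w * y for w, y in zip(weights, module_outputs))

# Three hypothetical predictor modules vote on the next value of a time series.
print(combine_modules([1.2, 0.8, 1.1], [2.0, 0.5, 1.0]))
```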
VLSI-Compatible Implementations for Artificial Neural Networks introduces the basic premise of the authors' approach to biologically-inspired and VLSI-compatible definition, simulation, and implementation of artificial neural networks. In addition, the book develops a set of guidelines for general hardware implementation of ANNs. These guidelines are then used to find solutions for the usual difficulties encountered in any potential work, and as guidelines by which to reach the best compromise when several options exist. Furthermore, system-level consequences of using the proposed techniques in future submicron technologies with almost-linear MOS devices are discussed. While the major emphasis in this book is to develop neural networks optimized for compatibility with their implementation media, the work has also been extended to the design and implementation of a fully-quadratic ANN based on the desire to have network definitions epitomized for both efficient discrimination of closed-boundary circular areas and ease of implementation in a CMOS technology. VLSI-Compatible Implementations for Artificial Neural Networks implements a comprehensive approach which starts with an analytical evaluation of specific artificial networks. This provides a clear geometrical interpretation of the behavior of different variants of these networks. In combination with the guidelines developed towards a better final implementation, these concepts have allowed the authors to conquer various problems encountered and to make effective compromises. Then, to facilitate the investigation of the models needed when more difficult problems must be faced, a custom simulating program for various cases is developed. Finally, in order to demonstrate the authors' findings and expectations, several VLSI integrated circuits have been designed, fabricated, and tested. VLSI-Compatible Implementations for Artificial Neural Networks serves as an excellent reference source and may be used as a text for advanced courses on the subject.
No-one who took part in the NATO Advanced Studies Institute from which this book emerges will have forgotten the experience. True, the necessary conditions for a very successful workshop were satisfied: a field of physics bursting with new power and new puzzles, a matchless team of lecturers, an international gathering of students many of whom had themselves contributed at the forefront of their subject, an admirable overlap of experiment and theory, a good mix of experimenters and theorists, an enviable environment. But who could have foreseen the way the workshop became a focus for future directions, how fresh scientific ideas tumbled out of the discussion periods, how the context of teaching the field produced such fruitfulness of research at the highest level? The organisers did have some specific aims in mind. Perhaps foremost was the desire to compare notes among different areas within the subfield of soft condensed matter physics fast becoming known as "complex fluids." For readers seeking a definition, the prosaic "fluids with bits in" can be passed rapidly over in favour of the elegant discussion of slow variables by Scott Milner in his chapter. The uniting goals of the subject are to model the essential molecular or mesoscopic structure theoretically, and to probe this structure as well as the bulk response of the system experimentally. Our famous examples were: colloids, polymers, liquid crystals, block co-polymers and self-assembling surfactant systems.
Hybrid Neural Network and Expert Systems presents the basics of expert systems and neural networks, and the important characteristics relevant to the integration of these two technologies. Through case studies of actual working systems, the author demonstrates the use of these hybrid systems in practical situations. Guidelines and models are described to help those who want to develop their own hybrid systems. Neural networks and expert systems together represent two major aspects of human intelligence and therefore are appropriate for integration. Neural networks represent the visual, pattern-recognition types of intelligence, while expert systems represent the logical, reasoning processes. Together, these technologies allow applications to be developed that are more powerful than when each technique is used individually. Hybrid Neural Network and Expert Systems provides frameworks for understanding how the combination of neural networks and expert systems can produce useful hybrid systems, and illustrates the issues and opportunities in this dynamic field.
Engineers have long been fascinated by how efficiently and how fast biological neural networks are capable of performing such complex tasks as recognition. Such networks are capable of recognizing input data from any of the five senses with the necessary accuracy and speed to allow living creatures to survive. Machines which perform such complex tasks as recognition, with similar accuracy and speed, were difficult to implement until the technological advances of VLSI circuits and systems in the late 1980's. Since then, the field of VLSI Artificial Neural Networks (ANNs) has witnessed an exponential growth and a new engineering discipline was born. Today, many engineering curricula include one or more courses on the subject at the graduate or senior undergraduate levels. Since the pioneering book by Carver Mead, "Analog VLSI and Neural Systems" (Addison-Wesley, 1989), there have been a number of excellent text and reference books on the subject, each dealing with one or two topics. This book attempts to present an integrated approach of a single research team to VLSI ANNs Engineering.
This IMA Volume in Mathematics and its Applications, DYNAMICAL ISSUES IN COMBUSTION THEORY, is based on the proceedings of a workshop which was an integral part of the 1989-90 IMA program on "Dynamical Systems and their Applications." The aim of this workshop was to cross-fertilize research groups working in topics of current interest in combustion dynamics and mathematical methods applicable thereto. We thank Shui-Nee Chow, Martin Golubitsky, Richard McGehee, George R. Sell, Paul Fife, Amable Liñán and Forman Williams for organizing the meeting. We especially thank Paul Fife, Amable Liñán and Forman Williams for editing the proceedings. We also take this opportunity to thank those agencies whose financial support made the workshop possible: the Army Research Office, the National Science Foundation and the Office of Naval Research. Avner Friedman, Willard Miller, Jr. The world of combustion phenomena is rich in problems intriguing to the mathematical scientist. They offer challenges on several fronts: (1) modeling, which involves the elucidation of the essential features of a given phenomenon through physical insight and knowledge of experimental results, (2) devising appropriate asymptotic and computational methods, and (3) developing sound mathematical theories. Papers in the present volume, which are based on talks given at the Workshop on Dynamical Issues in Combustion Theory in November, 1989, describe how all of these challenges have been met for particular examples within a number of common combustion scenarios: reactive shocks, low Mach number premixed reactive flow, nonpremixed phenomena, and solid propellants.
Intelligent Hybrid Systems: Fuzzy Logic, Neural Networks, and Genetic Algorithms is an organized edited collection of contributed chapters covering basic principles, methodologies, and applications of fuzzy systems, neural networks and genetic algorithms. All chapters are original contributions by leading researchers written exclusively for this volume. This book reviews important concepts and models, and focuses on specific methodologies common to fuzzy systems, neural networks and evolutionary computation. The emphasis is on development of cooperative models of hybrid systems. Included are applications related to intelligent data analysis, process analysis, intelligent adaptive information systems, systems identification, nonlinear systems, power and water system design, and many others. Intelligent Hybrid Systems: Fuzzy Logic, Neural Networks, and Genetic Algorithms provides researchers and engineers with up-to-date coverage of new results, methodologies and applications for building intelligent systems capable of solving large-scale problems.
Dr. Ganti has introduced Chemoton Theory to explain the origin of life. Theoretical Foundations of Fluid Machineries is a discussion of the theoretical foundations of fluid automata. It introduces quantitative methods - cycle stoichiometry and stoichiokinetics - in order to describe fluid automata with the methods of algebra, as well as their construction, starting from elementary chemical reactions up to the complex, program-directed, proliferating fluid automata, the chemotons. Chemoton Theory outlines the development of a theoretical biology, based on exact quantitative considerations and the consequences of its application on biotechnology and on the artificial synthesis of living systems.
Neural Information Processing and VLSI provides a unified treatment of this important subject for use in classrooms, industry, and research laboratories, in order to develop advanced artificial and biologically-inspired neural networks using compact analog and digital VLSI parallel processing techniques. Neural Information Processing and VLSI systematically presents various neural network paradigms, computing architectures, and the associated electronic/optical implementations using efficient VLSI design methodologies. Conventional digital machines cannot perform computationally intensive tasks with satisfactory performance in such areas as intelligent perception, including visual and auditory signal processing, recognition, understanding, and logical reasoning (where the human being and even a small living animal can do a superb job). Recent research advances in artificial and biological neural networks have established an important foundation for high-performance information processing with more efficient use of computing resources. The secret lies in the design optimization at various levels of computing and communication of intelligent machines. Each neural network system consists of massively parallel and distributed signal processors, with every processor performing very simple operations, thus consuming little power. Large computational capabilities of these systems, in the range of some hundred giga to several tera operations per second, are derived from collective parallel processing and efficient data routing through well-structured interconnection networks. Deep-submicron very large-scale integration (VLSI) technologies can integrate tens of millions of transistors in a single silicon chip for complex signal processing and information manipulation. The book is suitable for those interested in efficient neurocomputing as well as those curious about neural network system applications. It has been especially prepared for use as a text for advanced undergraduate and first year graduate students, and is an excellent reference book for researchers and scientists working in the fields covered.
One of the most challenging and fascinating problems of the theory of neural nets is that of asymptotic behavior, of how a system behaves as time proceeds. This is of particular relevance to many practical applications. Here we focus on association, generalization, and representation. We turn to the last topic first. The introductory chapter, "Global Analysis of Recurrent Neural Networks," by Andreas Herz presents an in-depth analysis of how to construct a Lyapunov function for various types of dynamics and neural coding. It includes a review of the recent work with John Hopfield on integrate-and-fire neurons with local interactions. The chapter "Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns" by Ken Miller explains how the primary visual cortex may asymptotically gain its specific structure through a self-organization process based on Hebbian learning. His argument has since been shown to be rather susceptible to generalization.
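As a point of reference for the Lyapunov-function idea, the sketch below shows the standard textbook Hopfield-type construction: for symmetric weights with zero self-coupling, the energy E(s) = -1/2 s^T W s never increases under asynchronous sign updates. This is our own illustrative example under those assumptions, not Herz's more general construction:

```python
import numpy as np

def hopfield_energy(W, s):
    """Lyapunov (energy) function E(s) = -1/2 s^T W s for symmetric weights W."""
    return -0.5 * s @ W @ s

def async_update(W, s, i):
    """Asynchronous sign update of unit i; with symmetric W and zero diagonal, E never increases."""
    s = s.copy()
    s[i] = 1 if W[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                     # symmetric coupling, no self-interaction
s = rng.choice([-1, 1], size=8)

for _ in range(50):
    s_next = async_update(W, s, rng.integers(8))
    assert hopfield_energy(W, s_next) <= hopfield_energy(W, s) + 1e-12   # monotone energy descent
    s = s_next
print(hopfield_energy(W, s))
```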
This work addresses time delay in complex nonlinear systems and, in particular, its applications in complex networks; its role in control theory and nonlinear optics is also investigated. Delays arise naturally in networks of coupled systems due to finite signal propagation speeds and are thus a key issue in many areas of physics, biology, medicine, and technology. Synchronization phenomena in these networks play an important role, e.g., in the context of learning, cognitive and pathological states in the brain, for secure communication with chaotic lasers, or for gene regulation. The thesis includes both novel results on the control of complex dynamics by time-delayed feedback and fundamental new insights into the interplay of delay and synchronization. One of the most interesting results here is a solution to the problem of complete synchronization in general networks with large coupling delay, i.e., large distances between the nodes, by giving a universal classification of networks that has a wide range of interdisciplinary applications.
'Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to come back, I would never have gone.') Jules Verne

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell

'The series is divergent; therefore we may be able to do something with it.' O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote by Eric T. Bell above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
Neural Network Simulation Environments describes some of the best examples of neural simulation environments. All current neural simulation tools can be classified into four overlapping categories of increasing sophistication in software engineering. The least sophisticated are undocumented and dedicated programs, developed to solve just one specific problem; these tools cannot easily be used by the larger community and have not been included in this volume. The next category is a collection of custom-made programs, some perhaps borrowed from other application domains, and organized into libraries, sometimes with a rudimentary user interface. More recently, very sophisticated programs started to appear that integrate advanced graphical user interface and other data analysis tools. These are frequently dedicated to just one neural architecture/algorithm as, for example, three layers of interconnected artificial 'neurons' learning to generalize input vectors using a backpropagation algorithm. Currently, the most sophisticated simulation tools are complete, system-level environments, incorporating the most advanced concepts in software engineering that can support experimentation and model development of a wide range of neural networks. These environments include sophisticated graphical user interfaces as well as an array of tools for analysis, manipulation and visualization of neural data. Neural Network Simulation Environments is an excellent reference for researchers in both academia and industry, and can be used as a text for advanced courses on the subject.
Mathematical modelling is ubiquitous. Almost every book in exact science touches on mathematical models of a certain class of phenomena, on more or less specific approaches to construction and investigation of models, on their applications, etc. Like many textbooks with similar titles, Part I of our book is devoted to general questions of modelling. Part II reflects our professional interests as physicists who spent much time on investigations in the field of non-linear dynamics and mathematical modelling from discrete sequences of experimental measurements (time series). The latter direction of research has long been known as "system identification" in the framework of mathematical statistics and automatic control theory. It has its roots in the problem of approximating experimental data points on a plane with a smooth curve. Currently, researchers aim at the description of complex behaviour (irregular, chaotic, non-stationary and noise-corrupted signals which are typical of real-world objects and phenomena) with relatively simple non-linear differential or difference model equations rather than with cumbersome explicit functions of time. In the second half of the twentieth century, it became clear that such equations of a sufficiently low order can exhibit non-trivial solutions that promise sufficiently simple modelling of complex processes; according to the concepts of non-linear dynamics, chaotic regimes can be demonstrated already by a third-order non-linear ordinary differential equation, while complex behaviour in a linear model can be induced either by random influence (noise) or by a very high order of equations.
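A classic concrete instance of such a third-order system is the Lorenz model, three coupled non-linear first-order equations. The sketch below uses the standard textbook parameter values; the crude explicit Euler scheme and the step size are our own assumptions, chosen for brevity rather than accuracy:

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz equations, a set of three coupled
    non-linear ODEs (equivalent to a single third-order equation) exhibiting chaos."""
    x, y, z = state
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

state = np.array([1.0, 1.0, 1.0])
for _ in range(20000):          # integrate for 100 time units; the orbit wanders over the attractor
    state = lorenz_step(state)
print(state)                    # final point (highly sensitive to the initial condition and dt)
```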
Econophysics is a newborn field of science bridging economics and physics. A special feature of this new science is the analysis of high-precision market data. In economics the existence of arbitrage opportunities is strictly denied; however, by observing high-precision data we can demonstrate that such opportunities do arise. Also, financial technology tends to neglect the possibility of market prediction; however, in this book you can find many examples of predicted events. There are other surprising findings as well. This volume is the proceedings of a workshop on the "application of econophysics," at which leading international researchers discussed their most recent results.
Connection science is a new information-processing paradigm which attempts to imitate the architecture and process of the brain, and brings together researchers from disciplines as diverse as computer science, physics, psychology, philosophy, linguistics, biology, engineering, neuroscience and AI. Work in Connectionist Natural Language Processing (CNLP) is now expanding rapidly, yet much of the work is still only available in journals, some of them quite obscure. To make this research more accessible this book brings together an important and comprehensive set of articles from the journal CONNECTION SCIENCE which represent the state of the art in Connectionist natural language processing, from speech recognition to discourse comprehension. While it is quintessentially Connectionist, it also deals with hybrid systems, and will be of interest to both theoreticians as well as computer modellers. Range of topics covered: Connectionism and Cognitive Linguistics; Motion, Chomsky's Government-binding Theory; Syntactic Transformations on Distributed Representations; Syntactic Neural Networks; A Hybrid Symbolic/Connectionist Model for Understanding of Nouns; Connectionism and Determinism in a Syntactic Parser; Context Free Grammar Recognition; Script Recognition with Hierarchical Feature Maps; Attention Mechanisms in Language; Script-Based Story Processing; A Connectionist Account of Similarity in Vowel Harmony; Learning Distributed Representations; Connectionist Language Users; Representation and Recognition of Temporal Patterns; A Hybrid Model of Script Generation; Networks that Learn about Phonological Features; Pronunciation in Text-to-Speech Systems.
Knowledge of the renormalization group and field theory is a key part of physics, and is essential in condensed matter and particle physics. Written for advanced undergraduate and beginning graduate students, this textbook provides a concise introduction to the subject. The textbook deals directly with the loop expansion of the free energy, also known as the background field method. This is a powerful method, especially when dealing with symmetries and statistical mechanics. In focussing on the free energy, the author avoids long developments on field theory techniques. The necessity of renormalization then follows.
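For orientation, the leading (one-loop) term of this expansion has the standard textbook form below, written as the effective action (the free energy as a functional of the background field); the notation is ours and is not taken from this particular book:

```latex
% One-loop (background-field) expansion of the free energy / effective action:
% S is the classical action, \phi_c the background field, and higher loops enter at O(\hbar^2).
\Gamma[\phi_c] \;=\; S[\phi_c]
  \;+\; \frac{\hbar}{2}\,\operatorname{Tr}\ln\!\left.\frac{\delta^{2} S}{\delta\phi\,\delta\phi}\right|_{\phi=\phi_c}
  \;+\; O(\hbar^{2})
```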
The aim of this book is to comment on, and clarify, the mathematical aspects of the theory of thermodynamics. The standard presentations of the subject are often beset by a number of obscurities associated with the words "state", "reversible", "irreversible", and "quasi-static". This book is written in the belief that such obscurities are best removed not by the formal axiomatization of thermodynamics, but by setting the theory in the wider context of a genuine field theory which incorporates the effects of heat conduction and inertia, and proving appropriate results about the governing differential equations of this field theory. Even in the simplest one-dimensional case it is a nontrivial task to carry through the details of this program, and many challenging problems remain open.
Examining important results and analytical techniques, this graduate-level textbook is a step-by-step presentation of the structure and function of complex networks. Using a range of examples, from the stability of the internet to efficient methods of immunizing populations, and from epidemic spreading to how one might efficiently search for individuals, this textbook explains the theoretical methods that can be used, and the experimental and analytical results obtained in the study and research of complex networks. Giving detailed derivations of many results in complex networks theory, this is an ideal text to be used by graduate students entering the field. End-of-chapter review questions help students monitor their own understanding of the materials presented.
This volume presents the proceedings of the Workshop on Momentum Distributions held on October 24 to 26, 1988 at Argonne National Laboratory. This workshop was motivated by the enormous progress within the past few years in both experimental and theoretical studies of momentum distributions, by the growing recognition of the importance of momentum distributions to the characterization of quantum many-body systems, and especially by the realization that momentum distribution studies have much in common across the entire range of modern physics. Accordingly, the workshop was unique in that it brought together researchers in nuclear physics, electronic systems, quantum fluids and solids, and particle physics to address the common elements of momentum distribution studies. The topics discussed in the workshop spanned a range of more than ten orders of magnitude in characteristic energy scales. The workshop included an extraordinary variety of interactions, from Coulombic to hard-core repulsive, from non-relativistic to extreme relativistic.
The aim of this book is to give an overview, based on the results of nearly three decades of intensive research, of transient chaos. One belief that motivates us to write this book is that transient chaos may not have been appreciated even within the nonlinear-science community, let alone in other scientific disciplines.
Deeply rooted in fundamental research in Mathematics and Computer Science, Cellular Automata (CA) are recognized as an intuitive modeling paradigm for Complex Systems. Already very basic CA, with extremely simple micro dynamics such as the Game of Life, show an almost endless display of complex emergent behavior. Conversely, CA can also be designed to produce a desired emergent behavior, using either theoretical methodologies or evolutionary techniques. Meanwhile, beyond the original realm of applications - Physics, Computer Science, and Mathematics - CA have also become workhorses in very different disciplines such as epidemiology, immunology, sociology, and finance. In this context of fast and impressive progress, spurred further by the enormous attraction these topics have for students, this book emerges as a welcome overview of the field for its practitioners, as well as a good starting point for detailed study at the graduate and post-graduate level. The book contains three parts: two major parts on theory and applications, and a smaller part on software. The theory part contains fundamental chapters on how to design and/or apply CA for many different areas. In the applications part a number of representative examples of really using CA in a broad range of disciplines are provided; this part will give the reader a good idea of the real strength of this kind of modeling as well as the incentive to apply CA in their own field of study. Finally, we included a smaller section on software, to highlight the important work that has been done to create high-quality problem solving environments that allow one to quickly and relatively easily implement a CA model and run simulations, both on the desktop and, if needed, on High Performance Computing infrastructures.
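To make the Game of Life remark above concrete, here is a minimal sketch (our own illustration, not taken from the book) of its synchronous update rule on a wrap-around grid, started from a glider:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal (wrap-around) grid."""
    neighbours = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A glider on a 10x10 torus: extremely simple local rules, yet a persistent moving structure emerges.
grid = np.zeros((10, 10), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1
for _ in range(4):             # after four steps the glider reappears shifted one cell diagonally
    grid = life_step(grid)
print(grid)
```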
Adaptive Resonance Theory Microchips describes circuit strategies resulting in efficient and functional adaptive resonance theory (ART) hardware systems. While ART algorithms have been developed in software by their creators, this is the first book that addresses efficient VLSI design of ART systems. All systems described in the book have been designed and fabricated (or are nearing completion) as VLSI microchips in anticipation of the impending proliferation of ART applications to autonomous intelligent systems. To accommodate these systems, the book not only provides circuit design techniques, but also validates them through experimental measurements. The book also includes a chapter tutorially describing four ART architectures (ART1, ARTMAP, Fuzzy-ART and Fuzzy-ARTMAP) while providing easily understandable MATLAB code examples to implement these four algorithms in software. In addition, an entire chapter is devoted to other potential applications for real-time data clustering and category learning.
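For readers unfamiliar with ART, the following is a heavily simplified fast-learning ART1 clustering sketch of the search/vigilance/resonance cycle for binary patterns. It is our own Python approximation, not the book's MATLAB code or its microchip circuits, and the vigilance and choice parameters are illustrative:

```python
import numpy as np

def art1_cluster(patterns, rho=0.7, beta=1.0):
    """Minimal fast-learning ART1 clustering of binary patterns.
    rho is the vigilance parameter, beta the choice parameter (both assumed values)."""
    templates = []                                   # learned binary category templates
    labels = []
    for I in patterns:
        I = np.asarray(I, dtype=int)
        # Search committed categories in order of decreasing choice function T_j.
        order = sorted(range(len(templates)),
                       key=lambda j: -np.sum(I & templates[j]) / (beta + np.sum(templates[j])))
        for j in order:
            match = np.sum(I & templates[j]) / max(np.sum(I), 1)
            if match >= rho:                         # vigilance test passed: resonance
                templates[j] = I & templates[j]      # fast learning: intersect input with template
                labels.append(j)
                break
        else:                                        # no committed category resonates: recruit one
            templates.append(I.copy())
            labels.append(len(templates) - 1)
    return labels, templates

labels, templates = art1_cluster([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], rho=0.6)
print(labels)    # first two patterns share a category, the third starts a new one
```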