Monte Carlo computer simulations are now a standard tool in scientific fields such as condensed-matter physics, including surface-physics and applied-physics problems (metallurgy, diffusion, segregation, etc.), chemical physics, including studies of solutions, chemical reactions, polymer statistics, etc., and field theory. With the increasing ability of this method to deal with quantum-mechanical problems such as quantum spin systems or many-fermion problems, it will become useful for other questions in the fields of elementary-particle and nuclear physics as well. The large number of recent publications dealing either with applications or with further development of some aspects of this method is a clear indication that the scientific community has realized the power and versatility of Monte Carlo simulations, as well as of related simulation techniques such as "molecular dynamics" and "Langevin dynamics," which are only briefly mentioned in the present book. With the increasing availability of very-high-speed general-purpose computers, many problems become tractable which have so far escaped satisfactory treatment due to practical limitations (too small systems had to be chosen, or too short averaging times had to be used). While this approach is admittedly rather expensive, two cheaper alternatives have become available as well: (i) array or vector processors specifically suited for wide classes of simulation purposes; (ii) special-purpose processors, which are built for a more specific class of problems or, in the extreme case, for the simulation of one single model system.
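As an illustration of the Metropolis Monte Carlo method the blurb refers to, here is a minimal sketch (not taken from the book) for the 2D Ising model; the lattice size, inverse temperature, and sweep counts are arbitrary choices for demonstration:

```python
import math
import random

def metropolis_ising(L=8, beta=0.6, sweeps=200, seed=1):
    """Metropolis Monte Carlo for the 2D Ising model on an L x L
    periodic lattice; returns the mean |magnetization| per spin."""
    rng = random.Random(seed)
    # start from the fully ordered configuration (all spins up)
    spins = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # energy cost of flipping spin (i, j): dE = 2 * s_ij * (sum of neighbors)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb
            # Metropolis acceptance rule: always accept downhill moves,
            # accept uphill moves with probability exp(-beta * dE)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] *= -1
        if sweep >= sweeps // 2:  # discard the first half as equilibration
            m = sum(sum(row) for row in spins) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)
```

At beta = 0.6, below the critical temperature, the magnetization stays close to saturation, which is exactly the "too small systems, too short averaging times" trade-off the passage mentions: larger lattices and longer runs reduce finite-size and statistical errors at greater cost.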
The state of the art in the theoretical statistical-physics treatment of the Janus fluid is reported here, bridging new research results published in journal articles and a contextual literature review. Recent Monte Carlo simulations of the Kern-Frenkel model of the Janus fluid have revealed that in the vapor phase, below the critical point, preferred inert clusters made up of a well-defined number of particles form: the micelles and the vesicles. This is responsible for a re-entrant gas branch of the gas-liquid binodal. A detailed account of these findings is given in the first chapter, where the Janus fluid is introduced as a product of new, sophisticated laboratory synthesis techniques. In the second chapter a cluster theory is developed to approximate the exact clustering properties stemming from the simulations. It is shown that the theory is able to reproduce the micellization phenomenon semi-quantitatively.
Computer Simulation Studies in Condensed-Matter Physics VIII covers recent developments in this field presented at the 1995 workshop, such as new algorithms, methods of analysis, and conceptual developments. This volume is composed of three parts. The first part contains invited papers that deal with simulational studies of classical systems. The second part is devoted to invited papers on quantum systems, including new results for strongly correlated electron and quantum spin models. The final part comprises contributed presentations.
Quantum field theory in curved spacetime has been remarkably fruitful. It can be used to explain how the large-scale structure of the universe and the anisotropies of the cosmic background radiation that we observe today first arose. Similarly, it provides a deep connection between general relativity, thermodynamics, and quantum field theory. This book develops quantum field theory in curved spacetime in a pedagogical style, suitable for graduate students. The authors present detailed, physically motivated, derivations of cosmological and black hole processes in which curved spacetime plays a key role. They explain how such processes in the rapidly expanding early universe leave observable consequences today, and how in the context of evaporating black holes, these processes uncover deep connections between gravitation and elementary particles. The authors also lucidly describe many other aspects of free and interacting quantized fields in curved spacetime.
Interacting many-body systems are the main subjects of research in theoretical condensed matter physics, and they are the source of both the interest and the difficulty in this field. In order to understand the macroscopic properties of matter in terms of microscopic knowledge, many analytic and approximate methods have been introduced. The contributions to this proceedings volume focus on the most recent developments of computational approaches in condensed matter physics. Monte Carlo methods and molecular dynamics simulations applied to strongly correlated classical and quantum systems such as electron systems, quantum spin systems, spin glasses, coupled map systems, polymers and other random and complex systems are reviewed. Comprising easy-to-follow introductions to each field covered as well as more specialized contributions, this proceedings volume explains why computational approaches are necessary and how different fields are related to each other.
One high-level ability of the human brain is to understand what it has learned. This seems to be the crucial advantage in comparison to the brain activity of other primates. At present we are technologically almost ready to artificially reproduce human brain tissue, but we still do not fully understand the information processing and the related biological mechanisms underlying this ability. Thus an electronic clone of the human brain is still far from being realizable. At the same time, around twenty years after the revival of the connectionist paradigm, we are not yet satisfied with the typical subsymbolic attitude of devices like neural networks: we can make them learn to solve even difficult problems, but without a clear explanation of why a solution works. Indeed, to use these devices widely in a reliable and non-elementary way we need formal and understandable expressions of the learnt functions. These must be susceptible of being tested, manipulated and composed with other similar expressions to build more structured functions as a solution of complex problems via the usual deductive methods of Artificial Intelligence. Much effort has been steered in this direction in recent years, constructing artificial hybrid systems in which the subsymbolic processing of neural networks merges in various modes with symbolic algorithms. In parallel, neurobiology research keeps supplying more and more detailed explanations of the low-level phenomena responsible for mental processes.
There are numerous technological materials - such as metals, polymers, ceramics, concrete, and many others - that vary in properties and serviceability. However, the almost universal common theme for most real materials is that their properties depend on the scale at which the analysis or observation takes place, and at each scale "probabilities" play an important role. Here the word "probabilities" is used in a wider sense than the classical one. In order to increase the efficiency and serviceability of these materials, researchers from NATO, CP and other countries were brought together to exchange knowledge and develop avenues for progress and applications in the 21st century. The workshop began by reviewing progress in the subject area over the past few years and by identifying key questions that remain open. One point was how to observe/measure material properties at different scales and whether a probabilistic approach, at each scale, was always applicable and advantageous. The wide range of materials, from wood to advanced metals and from concrete to complex advanced composites, and the diversity of applications, e.g. fatigue, fracture, deformation, etc., were recognized as "obstacles" to identifying a "universal" approach.
At the present moment, after the success of the renormalization group in providing a conceptual framework for studying second-order phase transitions, we have a nearly satisfactory understanding of the statistical mechanics of classical systems with a non-random Hamiltonian. The situation is completely different if we consider the theory of systems with a random Hamiltonian or of chaotic dynamical systems. The two fields are connected; in fact, in the latter the effects of deterministic chaos can be modelled by an appropriate stochastic process. Although many interesting results have been obtained in recent years and much progress has been made, we still lack a satisfactory understanding of the extremely wide variety of phenomena which are present in these fields. The study of disordered or chaotic systems is the new frontier where new ideas and techniques are being developed. More interesting and deep results are expected to come in future years. The properties of random matrices and their products form a basic tool, whose importance cannot be overestimated. They play a role as important as Fourier transforms for differential equations. This book is extremely interesting in that it presents a unified approach to the main results which have been obtained in the study of random matrices. It will become a reference book for people working in the subject. The book is written by physicists, uses the language of physics, and I am sure that many physicists will read it with great pleasure.
This volume contains the written versions of lectures held at the "23. Internationale Universitätswochen für Kernphysik" in Schladming, Austria, in February 1984. Once again the generous support of our sponsors, the Austrian Ministry of Science and Research, the Styrian Government and others, made it possible to organize this school. The aim of the topics chosen for the meeting was to present different aspects of stochastic methods and techniques. These methods have opened up new ways to attack problems in a broad field ranging from quantum mechanics to quantum field theory. Thanks to the efforts of the lecturers it was possible to take this development into account and show relations to areas where stochastic methods have been used for a long time. Due to limited space, only short manuscript versions of the many seminars presented could be included. The lecture notes were reexamined by the authors after the school and are now published in their final form. It is a pleasure to thank all the lecturers for their efforts, which made it possible to speed up publication. Thanks are also due to Mrs. Neuhold for her careful typing of the notes. H. Mitter, L. Pittner
The field of neural information processing has two main objects: investigation into the functioning of biological neural networks, and the use of artificial neural networks to solve real-world problems. Even before the reincarnation of the field of artificial neural networks in the mid-1980s, researchers had attempted to explore the engineering of human brain function. After the reincarnation, we have seen the emergence of a large number of neural network models and their successful application to real-world problems. This volume presents a collection of recent research and developments in the field of neural information processing. The book is organized in three parts: (1) architectures, (2) learning algorithms, and (3) applications. Artificial neural networks consist of simple processing elements called neurons, which are connected by weights. The number of neurons and how they are connected to each other defines the architecture of a particular neural network. Part 1 of the book has nine chapters, demonstrating some recent neural network architectures derived either to mimic aspects of human brain function or to be applied to real-world problems. Muresan provides a simple neural network model, based on spiking neurons that make use of shunting inhibition, which is capable of resisting small-scale changes of stimulus. Hoshino and Zheng simulate a neural network of the auditory cortex to investigate the neural basis for encoding and perception of vowel sounds.
The Fifteenth International Workshop on Maximum Entropy and Bayesian Methods was held July 31-August 4, 1995 in Santa Fe, New Mexico, USA. St. John's College, located in the foothills of the Sangre de Cristo Mountains, provided a congenial setting for the Workshop. The relaxed atmosphere of the College, which was thoroughly enjoyed by all the attendees, stimulated free-flowing and thoughtful discussions. Conversations continued at the social events, which included a reception at the Santa Fe Institute, a New Mexican dinner at Richard Silver's home, and an excursion to Los Alamos that ended with a mixed grill at Fuller Lodge, the main hall of the former Los Alamos Ranch School. This volume represents the Proceedings of the Workshop. Articles on the traditional theme of the Workshop, the application of the maximum-entropy principle and Bayesian methods for statistical inference in diverse areas of scientific research, are contained in these Proceedings. As is tradition, the Workshop opened with a tutorial on Bayesian methods, lucidly presented by Peter Cheeseman and Wray Buntine (NASA Ames, Moffett Field). The lecture notes for their tutorial are available on the World Wide Web at http://bayes.lanl.gov/~maxent/. In addition, several new thrusts for the Workshop are described below.
Nervous System Actions and Interactions: Concepts in Neurophysiology approaches the nervous system from a functional, rather than structural, point of view. While all of the central topics of functional neuroscience are covered, these topics are organized from a neurophysiological perspective, yielding chapters on subjects such as information storage and effector actions. Each chapter is organized around general concepts that are then further developed in the text. The authors attempt to establish a dialogue with the reader by means of proposed experiments and open-ended questions that are designed to both reinforce and question the text. This volume is intended to be a book of ideas for the novice or seasoned researcher in neuroscience.
This volume contains papers presented at the Thirteenth Taniguchi Symposium on the Theory of Condensed Matter, which was held at Kashikojima (in Ise Shima National Park), Japan, from 6th to 9th November, 1990. The topic of the symposium was Molecular Dynamics Simulations. The general objective of this series of the Taniguchi Symposia is to encourage developing fields of great promise in condensed matter physics. Our theme, molecular dynamics (MD) simulations, certainly fulfills this requirement, because the field is developing at a remarkable pace and its future is considered almost boundless. It was in the 1950s that the original idea of the MD methods was first proposed and applied to the study of physical systems composed of many particles. In fact, the invention of the MD techniques occurred soon after the construction of the first computers. For almost 35 years since then, MD methods, together with Monte Carlo methods, have played major parts in the drama of computer simulations. The triumph of MD simulations is not confined to numerical aspects of detailed analyses of physical systems. MD simulations have verified some unexpected facts and introduced some new concepts, all of which had never been predicted previously from analytical theories. The occurrence of the Alder transition in a system of repulsive particles and the behavior of the long-time tails of the velocity autocorrelation function for a liquid are just two examples of the results achieved by means of MD studies.
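The core of any MD simulation of the kind described above is a time-stepping integrator. As a minimal sketch (not from the book), here is the standard velocity-Verlet step applied to a 1D harmonic oscillator; the spring constant, mass, and step count are arbitrary demonstration values:

```python
import math

def verlet_harmonic(x0=1.0, v0=0.0, k=1.0, m=1.0, dt=0.01, steps=1000):
    """Velocity-Verlet integration of a 1D harmonic oscillator,
    the simplest instance of a molecular-dynamics time step."""
    x, v = x0, v0
    a = -k * x / m                            # initial acceleration
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt    # position update
        a_new = -k * x / m                    # force at the new position
        v = v + 0.5 * (a + a_new) * dt        # velocity update (trapezoidal)
        a = a_new
    return x, v
```

Velocity Verlet is the workhorse of MD because it is time-reversible and symplectic, so the total energy stays bounded over long runs instead of drifting, which is essential for measuring quantities like the long-time tails of the velocity autocorrelation function mentioned above.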
The formation and evolution of complex dynamical structures is one of the most exciting areas of nonlinear physics. Such pattern formation problems are common in practically all systems involving a large number of interacting components. Here, the basic problem is to understand how competing physical forces can shape stable geometries and to explain why nature prefers just these. Motivation for the intensive study of pattern formation phenomena during the past few years derives from an increasing appreciation of the remarkable diversity of behaviour encountered in nonlinear systems and of universal features shared by entire classes of nonlinear processes. As physics copes with ever more ambitious problems in pattern formation, summarizing our present state of knowledge becomes a pressing issue. This volume presents an overview of selected topics in this field of current interest. It deals with theoretical models of pattern formation and with simulations that bridge the gap between theory and experiment. The book is a product of the International Symposium on the Physics of Structure Formation, held from October 27 through November 2, 1986, at the Institute for Information Sciences of the University of Tübingen. The symposium brought together a group of distinguished scientists from various disciplines to exchange ideas about recent advances in pattern formation in the physical sciences, and also to introduce young scientists to the field.
The core of the material on large-scale dynamics of interacting particles grew out of courses I taught at the Katholieke Universiteit Leuven, Rutgers University, and the Ludwig-Maximilians-Universität München, and out of lectures I gave at the workshop "Hydrodynamical Behavior of Microscopic Systems" at the Università dell'Aquila. I had the good luck of being helped through difficult ground by many friends. Amongst them I am deeply indebted to Joel L. Lebowitz. He got me started. Relatively little would have been achieved without his never-ending curiosity and insistence on clarity. Furthermore, I gratefully acknowledge the cooperation of Michael Aizenman, Henk van Beijeren, Carlo Boldrighini, Jean Bricmont, Paola Calderoni, Brian Davies, Anna DeMasi, Roland Dobrushin, Detlef Dürr, Gregory Eyink, Mark Fannes, Pablo Ferrari, Alberto Frigerio, Joseph Fritz, Antonio Galves, Shelly Goldstein, Vittorio Gorini, Reinhard Illner, Claude Kipnis, Joachim Krug, Oscar Lanford, Reinhard Lang, Joel Lebowitz, Christian Maes, Stefano Olla, George Papanicolaou, Errico Presutti, Mario Pulvirenti, Fraydoun Rezakhanlou, Hermann Rost, Yasha Sinai, Yuri Suhov, Domo Szasz, Ragu Varadhan, André Verbeure, David Wick, and Horng-Tzer Yau. The list is somewhat lengthy, perhaps, but besides thanks I want to make clear that what I will describe is the outcome of a common scientific enterprise. I thank Henk van Beijeren and Detlef Dürr for careful reading of and comments on a previous version. Paola Calderoni and Detlef Dürr supplied me with the proof in Part I, Chapter 8.4, which is most appreciated. München, May 1991, Herbert Spohn
Locality is a fundamental restriction in nature. On the other hand, adaptive complex systems, life in particular, exhibit a sense of permanence and timelessness amidst relentless constant change in surrounding environments, which makes the global properties of the physical world the most important problems in understanding their nature and structure. Thus, much of differential and integral calculus deals with the problem of passing from local information (as expressed, for example, by a differential equation, or the contour of a region) to global features of a system's behavior (an equation of growth, or an area). Fundamental laws in the exact sciences seek to express the observable global behavior of physical objects through equations about the local interaction of their components, on the assumption that the continuum is the most accurate model of physical reality. Paradoxically, much of modern physics calls for a fundamental discrete component in our understanding of the physical world. Useful computational models must eventually be constructed in hardware, and as such can only be based on the local interaction of simple processing elements.
For any research field to have a lasting impact, there must be a firm theoretical foundation. Neural networks research is no exception. Some of the foundational concepts, established several decades ago, led to the early promise of developing machines exhibiting intelligence. The motivation for studying such machines comes from the fact that the brain is far more efficient at visual processing and speech recognition than existing computers. Undoubtedly, neurobiological systems employ very different computational principles. The study of artificial neural networks aims at understanding these computational principles and applying them in the solution of engineering problems. Due to the recent advances in both device technology and computational science, we are currently witnessing an explosive growth in the studies of neural networks and their applications. It may take many years before we have a complete understanding of the mechanisms of neural systems. Before this ultimate goal can be achieved, answers are needed to important fundamental questions such as (a) what can neural networks do that traditional computing techniques cannot, (b) how does the complexity of the network for an application relate to the complexity of that problem, and (c) how much training data are required for the resulting network to learn properly? Everyone working in the field has attempted to answer these questions, but general solutions remain elusive. However, encouraging progress in studying specific neural models has been made by researchers from various disciplines.
In this paper we shall discuss the construction of formal short-wave asymptotic solutions of problems of mathematical physics. The topic is very broad. It can somewhat conveniently be divided into three parts: 1. Finding the short-wave asymptotics of a rather narrow class of problems, which admit a solution in an explicit form, via formulas that represent this solution. 2. Finding formal asymptotic solutions of equations that describe wave processes by basing them on some ansatz or other. We explain what item 2 means. Giving an ansatz is knowing how to give a formula for the desired asymptotic solution in the form of a series, or some expression containing a series, where the analytic nature of the terms of these series is indicated up to functions and coefficients that are undetermined at the first stage of consideration. The second stage is to determine these functions and coefficients by a direct substitution of the ansatz into the equation, the boundary conditions and the initial conditions. Sometimes it is necessary to use different ansätze in different domains, and in the overlapping parts of these domains the formal asymptotic solutions must be asymptotically equivalent (the method of matched asymptotic expansions). The basis for success in the search for formal asymptotic solutions is a suitable choice of ansätze. The study of the asymptotics of explicit solutions of special model problems allows us to "surmise" what the correct ansätze are for the general solution.
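The two-stage procedure described above can be illustrated with the classic WKB short-wave ansatz; the model equation below is a standard textbook example, not taken from this book:

```latex
% Stage 1: posit the ansatz, with phase S and amplitudes A_n undetermined:
u(x) \sim e^{iS(x)/\varepsilon} \sum_{n=0}^{\infty} \varepsilon^{n} A_n(x).
% Stage 2: substitute into the model equation
% \varepsilon^{2} u'' + p(x)\, u = 0 and collect powers of \varepsilon.
% At leading order this determines S via the eikonal equation
\bigl(S'(x)\bigr)^{2} = p(x),
% and at the next order the transport equation fixes the leading amplitude:
2\, S'(x)\, A_0'(x) + S''(x)\, A_0(x) = 0.
```

Here the "analytic nature of the terms" is fixed by the exponential-times-series form, while S and the A_n are exactly the functions left undetermined at the first stage and pinned down at the second.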
Human Face Recognition Using Third-Order Synthetic Neural Networks explores the viability of applying high-order synthetic neural network technology to transformation-invariant recognition of complex visual patterns. High-order networks require little training data (hence, short training times) and have been used to perform transformation-invariant recognition of relatively simple visual patterns, achieving very high recognition rates. The successful results of these methods provided inspiration to address more practical problems which have grayscale as opposed to binary patterns (e.g., alphanumeric characters, aircraft silhouettes) and are also more complex in nature as opposed to purely edge-extracted images; human face recognition is such a problem. Human Face Recognition Using Third-Order Synthetic Neural Networks serves as an excellent reference for researchers and professionals working on applying neural network technology to the recognition of complex visual patterns.
This book gives the first detailed coherent treatment of a relatively young branch of statistical physics: nonlinear nonequilibrium and fluctuation-dissipative thermodynamics. This area of research took shape fairly recently: its development began in 1959. The earlier theory, linear nonequilibrium thermodynamics, is in principle a simple special case of the new theory. Despite the fact that the title of this book includes the word "nonlinear", it also covers the results of linear nonequilibrium thermodynamics. The presentation of the linear and nonlinear theories is done within a common theoretical framework that is not subject to the linearity condition. The author hopes that the reader will perceive the intrinsic unity of this discipline, and the uniformity and generality of its constituent parts. This theory has a wide variety of applications in various domains of physics and physical chemistry, enabling one to calculate thermal fluctuations in various nonlinear systems. The book is divided into two volumes. Fluctuation-dissipation theorems (or relations) of various types (linear, quadratic and cubic, classical and quantum) are considered in the first volume. Here one encounters the Markov and non-Markov fluctuation-dissipation theorems (FDTs): theorems of the first, second and third kinds. Nonlinear FDTs are less well known than their linear counterparts.
Ordinary thermodynamics provides reliable results when the thermodynamic fields are smooth, in the sense that there are no steep gradients and no rapid changes. In fluids and gases this is the domain of the equations of Navier-Stokes and Fourier. Extended thermodynamics becomes relevant for rapidly varying and strongly inhomogeneous processes. Thus the propagation of high-frequency waves, the shape of shock waves, and the regression of small-scale fluctuations are governed by extended thermodynamics. The field equations of ordinary thermodynamics are parabolic, while extended thermodynamics is governed by hyperbolic systems. The main ingredients of extended thermodynamics are: field equations of balance type; constitutive quantities depending on the present local state; and entropy as a concave function of the state variables. This set of assumptions leads to first-order quasi-linear symmetric hyperbolic systems of field equations; it guarantees the well-posedness of initial value problems and finite speeds of propagation. Several tenets of irreversible thermodynamics had to be changed in subtle ways to make extended thermodynamics work. Thus, the entropy is allowed to depend on nonequilibrium variables, the entropy flux is a general constitutive quantity, and the equations for stress and heat flux contain inertial terms. New insight is therefore provided into the principle of material frame indifference. With these modifications an elegant formal structure can be set up in which, just as in classical thermostatics, all restrictive conditions, derived from the entropy principle, take the form of integrability conditions.
This is the Proceedings of the Taniguchi International Symposium on "Relaxation of Elementary Excitations", which was held October 12-16, 1979, at Susono-shi (at the foot of Mt. Fuji) in Japan. The pleasant atmosphere of the Symposium is evidenced in the picture of the participants shown on the next page. The purpose of the symposium was to provide an opportunity for a limited number of active researchers to meet and to discuss relaxation processes and related phenomena, not only of excitons and phonons in solids but also of electronic and vibrational excitations in molecules and biological systems. First, the lattice relaxation, i.e., the multi-phonon process associated with electronic excitation, which plays important roles in the self-trapping of an exciton and of a particle (electron or hole) and also in the degradation of semiconductor lasers, is discussed. Second, this lattice relaxation is studied as the intermediate-state interaction in second-order optical responses, i.e., in connection with the competitive behavior of Raman scattering and luminescence. Third, relaxation mechanisms and relaxation constants are determined by spectroscopic methods as well as by genuine nonlinear optical phenomena. Conversely, the relaxation is decisive in coherent nonlinear optical phenomena such as lasing, superradiance, and optical bistability. Fourth, the role played by relaxation processes is discussed for optical phenomena in macromolecules and biological systems such as photosynthesis.
It is universally recognized that the end of the current and the beginning of the next century will be characterized by a radical change in the existing trends in the economic development of all countries and a transition to new principles of economic management on the basis of a resource and energy conservation policy. Thus there is an urgent necessity to study methods, technical aids and economic consequences of this change, and particularly, to determine the possible amounts of energy resources which could be conserved (energy "reserves") in different spheres of the national economy. An increased interest towards energy conservation in industry, one of the largest energy consumers, is quite natural and is manifested by the large number of publications on this topic. But the majority of publications are devoted to the solution of narrowly defined problems: determination of energy reserves in specific processes and plants, efficiency estimation of individual energy conservation measures, etc. However, it is necessary to develop a general methodological approach to the solution of such problems and create a scientific and methodical base for realizing an energy conservation policy. Such an effort is made in this book, which is concerned with methods for studying energy use efficiency in technological processes and estimation of the theoretical and actual energy reserves in a given process, technology, or industrial sector on the basis of their complete energy balances.
In the past three decades there has been enormous progress in identifying the essential role that nonlinearity plays in physical systems, including supporting soliton-like solutions and self-trapped excitations such as polarons. During the same period, similarly impressive progress has occurred in understanding the effects of disorder in linear quantum problems, especially regarding Anderson localization arising from impurities, random spatial structures, stochastic applied fields, and so forth. These striking consequences of disorder, noise and nonlinearity frequently occur together in physical systems. Yet there have been only limited attempts to develop systematic techniques which can include all of these ingredients, which may reinforce, complement or frustrate each other. This book contains a range of articles which provide important steps toward the goal of systematic understanding and classification of this phenomenology. Experts from Australia, Europe, Japan, the USA, and the USSR describe both mathematical and numerical techniques - especially from the soliton and statistical physics disciplines - and applications to a number of important physical systems and devices, including optical and electronic transmission lines, liquid crystals, biophysics and magnetism.
In March 1997, we launched the Japan Association for Evolutionary Economics (JAFEE) to gather the academic minds that, out of dissatisfaction with established dynamic approaches, were separately searching for new approaches to economics. To our surprise and joy, as many as 500 members, including graduate students, joined us. Later that year Prof. Horst Hanusch, then President of the International Joseph A. Schumpeter Society, remarked that such a start would have taken a couple of decades to prepare for in Europe. Since then we have been developing our activities incessantly, not only in terms of the number of members but also in terms of the intensity of international academic exchange. Originally the planning of this book came about as the successful outcome of our fourth annual conference, organized as an international one, JAFEE 2000. Incorporating other international contributions related to our preceding conferences, this book has eventually turned out to be one of the most enterprising anthologies on evolutionary economics ever published. Specifically, it contains excellent papers on such topics as streams of evolutionary economics, evolutionary nonlinear dynamics, experimental economics and evolution, multiagent systems and complexity, new frontiers for evolutionary economics, and economic heresies. In short, this book provides a vivid and full-fledged picture of up-to-date evolutionary economics.