This work provides descriptions, explanations and examples of the Bayesian approach to statistics, demonstrating the utility of Bayesian methods for analyzing real-world problems in the health sciences. The work considers the individual components of Bayesian analysis. College or university bookstores may order five or more copies at a special student price, available on request from Marcel Dekker, Inc.
In writing this monograph for publication in English, the author has tried to present more or less general notions of the possibilities of mathematics in the new and rapidly developing science of infectious immunology, describing the processes of an organism's defence against antigen invasions. The results presented in this monograph are based on the construction and application of closed models of immune response to infections, which makes it possible to approach problems of optimizing the treatment of chronic and hypertoxic forms of diseases. The author, being a mathematician, has had long-lasting creative contacts with immunologists, geneticists, biologists, and clinicians. As far back as 1976 this resulted in the organization of a special seminar on mathematical models in immunology at the Computing Center of the Siberian Branch of the USSR Academy of Sciences. The seminar attracted the attention of a wide circle of leading specialists in various fields of science. All this made it possible to approach, from a more or less unified standpoint, the construction of models of immune response, the mathematical description of the models, and the interpretation of results.
These are the proceedings of the Third Max Born Symposium, which took place at Sobótka Castle in September 1993. The Symposium is organized annually by the Institute of Theoretical Physics of the University of Wroclaw. Max Born was a student and later an assistant at the University of Wroclaw (the city belonged to Germany at that time and was called Breslau). The topic of the Max Born Symposium varies each year, reflecting the development of theoretical physics. The subject of this Symposium, "Stochasticity and quantum chaos," may well be considered a continuation of the research interests of Max Born. Recall that Born treated his "Lectures on the mechanics of the atom" (published in 1925) as the first volume of a complete monograph (the continuation supposedly to be written by another person). His lectures concern the quantum mechanics of integrable systems. The quantum mechanics of non-integrable systems was the subject of the Third Max Born Symposium. It is known that classical non-integrable Hamiltonian systems show chaotic behaviour. On the other hand, quantum systems bounded in space are quasiperiodic. We believe that quantum systems have a reasonable classical limit. It is not clear how to reconcile the seemingly regular behaviour of quantum systems with the possible chaotic properties of their classical counterparts. The quantum properties of classically chaotic systems constitute the main subject of these Proceedings. Other topics discussed are: the quantum mechanics of dissipative systems, quantum measurement theory, and the role of noise in classical and quantum systems.
The purpose of this book is to collect contributions that deal with the use of nature inspired metaheuristics for solving multi-objective combinatorial optimization problems. Such a collection intends to provide an overview of the state-of-the-art developments in this field, with the aim of motivating more researchers in operations research, engineering, and computer science, to do research in this area. As such, this book is expected to become a valuable reference for those wishing to do research on the use of nature inspired metaheuristics for solving multi-objective combinatorial optimization problems.
Bayesian probability theory and maximum entropy methods are at the core of a new view of scientific inference. These new ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. This volume records the Proceedings of the Eleventh Annual Maximum Entropy Workshop, held at Seattle University in June 1991. These workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this volume. There are tutorial papers, theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. The contributions contained in this volume present a state-of-the-art review that will be influential and useful for many years to come.
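As a minimal illustration of the kind of reasoning these workshops centre on, the following sketch applies Bayes' theorem to a noisy detector reading. It is not taken from the proceedings, and all probabilities are invented for illustration:

```python
# Hypothetical example: update the probability that a signal source is
# present given that a noisy detector fired. All numbers are invented.

def bayes_update(prior, likelihood_signal, likelihood_noise):
    """Posterior P(signal | detector fired) via Bayes' theorem."""
    evidence = prior * likelihood_signal + (1 - prior) * likelihood_noise
    return prior * likelihood_signal / evidence

# Prior belief: 10% chance a source is present.
# Detector fires with P = 0.9 if a source is present, P = 0.2 otherwise.
posterior = bayes_update(prior=0.1, likelihood_signal=0.9, likelihood_noise=0.2)
print(round(posterior, 3))  # 0.09 / (0.09 + 0.18) = 1/3 ≈ 0.333
```

Even a strongly suggestive reading only raises the posterior to one third here, because the prior is low; this interplay of prior and likelihood is the heart of the Bayesian treatment of incomplete, noisy data.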
Biology is in the midst of an era yielding many significant discoveries and promising many more. Unique to this era is the exponential growth in the size of information-packed databases. Inspired by a pressing need to analyze that data, Introduction to Computational Biology explores a new area of expertise that emerged from this fertile field: the combination of biological and information sciences.
Nonlinear measurement data arise in a wide variety of biological and biomedical applications, such as longitudinal clinical trials, studies of drug kinetics and growth, and the analysis of assay and laboratory data. Nonlinear Models for Repeated Measurement Data provides the first unified development of methods and models for data of this type, with a detailed treatment of inference for the nonlinear mixed effects model and its extensions. A particular strength of the book is the inclusion of several detailed case studies from the areas of population pharmacokinetics and pharmacodynamics, immunoassay and bioassay development, and the analysis of growth curves.
This book shows that evolutionary game theory can unravel how mutual cooperation, trust, and credit emerge in organizations and institutions. Some organizations and institutions, such as insurance unions, credit unions, and banks, originated from very simple mutual-aid groups. Members of these early-stage mutual-aid groups help each other, make rules to promote cooperation, and suppress free riders. They then come to "trust" not only each other but also the group itself. The division of labor occurs when society acquires diversity and complexity in a larger group, and the division of labor also requires mutual cooperation and trust among different social roles. In a larger group, people cannot directly interact with everyone, and the reputation of unknown people helps others decide who is trustworthy. However, if gossip spreads untruths about a reputation, trust and cooperation are destroyed. Therefore, how to suppress untrue gossip is also important for trust and cooperation in a larger group. If trustworthiness and credibility can be established, these groups are sustainable; some develop, evolve, and mature into larger organizations and institutions, eventually becoming what they are now. Therefore, not only cooperation but also trust and credit are keys to understanding these organizations and institutions. The evolution of cooperation, a topic of research in evolutionary ecology and evolutionary game theory, can be applied to understanding how to make institutions and organizations sustainable, trustworthy, and credible. It suggests that evolutionary game theory is a good mathematical tool for analyzing trust and credit. This kind of research can be applied to current hot topics such as microfinance and the sustainable use of ecosystems.
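The claim that cooperation survives only when free riders are suppressed can be sketched with standard replicator dynamics from evolutionary game theory. This is a generic toy model, not the book's own; the payoff parameters (benefit b, cost c, fine, enforcement cost) are invented assumptions:

```python
# Toy replicator dynamics for a prisoner's dilemma: without punishment
# of defectors, cooperation collapses; with a fine on defectors (paid
# for by a small enforcement cost), cooperation can become stable.
# All payoff values are illustrative assumptions.

def evolve(x, b, c, fine, enforce, steps=2000, dt=0.1):
    """Euler-integrated replicator dynamics for cooperator fraction x."""
    for _ in range(steps):
        w_coop = x * (b - c) + (1 - x) * (-c - enforce)
        w_defect = x * (b - fine)  # against fellow defectors, payoff is 0
        x += x * (1 - x) * (w_coop - w_defect) * dt
    return x

# Without punishment, free riders take over ...
print(round(evolve(0.7, b=3, c=1, fine=0, enforce=0), 3))
# ... but with a fine on defectors, cooperation fixes in the group.
print(round(evolve(0.7, b=3, c=1, fine=2, enforce=0.5), 3))
```

In this sketch the punished game is bistable: cooperation fixes only if the initial cooperator fraction is high enough, echoing the book's point that early mutual-aid groups must first establish cooperative norms.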
"Offers a comprehensive, unified presentation of statistical designs and methods of analysis for all stages of pharmaceutical development--emphasizing biopharmaceutical applications and demonstrating statistical techniques with real-world examples."
Before you lie the proceedings of the NATO Advanced Study Institute / Newton Institute Workshop "Confinement, duality and non-perturbative aspects of QCD." The school covered the most important techniques for studying Quantum Chromodynamics (QCD) and confinement, from lattice gauge theory, through Wilson's renormalisation group, to electromagnetic duality. The organising committee consisted of Ian Drummond (DAMTP, Cambridge), Mikhail Shifman (Minneapolis), Peter West (King's, London), and Pierre van Baal (Leiden), who acted as director of the school. This summer school was the concluding activity of a six-month programme on "Non-perturbative Aspects of Quantum Field Theory" taking place at the Isaac Newton Institute for Mathematical Sciences in Cambridge, UK, which started in January 1997, organised by David Olive, Pierre van Baal, and Peter West. A large number of the lecturers also participated in the programme, and a few programme participants were asked to present a seminar at the school. Not contained in these proceedings are the seminars by Peter Landshoff (DAMTP, Cambridge) on "The Pomeron" and Ludwig Faddeev (Steklov Math. Inst., St. Petersburg) on "Knot-like solitons in 3+1 dimensional field theory." In addition to the lectures and seminars there were two poster sessions at which participants presented their work. Authors and titles of these posters are listed on a separate page. These proceedings address the longstanding question of understanding how quarks are confined within subnuclear particles.
This text examines the Atiyah-Singer theorem using the heat equation, which gives a local formula for the index of any elliptic complex. Heat equation methods are also used to discuss Lefschetz fixed point formulas, the Gauss-Bonnet theorem for a manifold with smooth boundary, and the geometrical theorem for a manifold with smooth boundary. The book presents a careful treatment of non-self-adjoint operators, asymptotics of the heat equation, and variational formulas. It also introduces spectral geometry and provides a list of asymptotic formulas. The bibliography has been compiled by Herbert Schroeder.
Although several books and conference proceedings have already appeared dealing with either the mathematical aspects or applications of homogenization theory, there seems to be no comprehensive volume dealing with both aspects. The present volume is meant to fill this gap, at least partially, and deals with recent developments in nonlinear homogenization, emphasizing applications of current interest. It contains thirteen key lectures presented at the NATO Advanced Workshop on Nonlinear Homogenization and Its Applications to Composites, Polycrystals and Smart Materials. The list of thirty-one contributed papers is also appended. The key lectures cover both the fundamental mathematical aspects of homogenization, including nonconvex and stochastic problems, and several applications in micromechanics, thin films, smart materials, and structural and topology optimization. One lecture deals with a topic important for nanomaterials: the passage from discrete to continuum problems by using nonlinear homogenization methods. Some papers reveal the role of parameterized or Young measures in the description of microstructures and in optimal design. Other papers deal with recently developed methods, both analytical and computational, for estimating the effective behavior and field fluctuations in composites and polycrystals with nonlinear constitutive behavior. All in all, the volume offers a cross-section of current activity in nonlinear homogenization, including a broad range of physical and engineering applications. The careful reader will be able to identify challenging open problems in this still evolving field. For instance, there is the need to improve bounding techniques for nonconvex problems, as well as for solving geometrically nonlinear optimum shape-design problems, using relaxation and homogenization methods.
We have considered writing the present book for a long time, since the lack of a sufficiently complete textbook about complex analysis in infinite dimensional spaces was apparent. There are, however, some separate topics on this subject covered in the mathematical literature. For instance, the elementary theory of holomorphic vector-functions and mappings on Banach spaces is presented in the monographs of E. Hille and R. Phillips [1] and L. Schwartz [1], whereas some results on Banach algebras of holomorphic functions and holomorphic operator-functions are discussed in the books of W. Rudin [1] and T. Kato [1]. Apparently, the need to study holomorphic mappings in infinite dimensional spaces arose for the first time in connection with the development of nonlinear analysis. A systematic study of integral equations with an analytic nonlinear part was started at the end of the 19th and the beginning of the 20th centuries by A. Liapunov, E. Schmidt, A. Nekrasov and others. Their research work was directed towards the theory of nonlinear waves and used mainly the undetermined coefficients and the majorant power series methods. The most complete presentation of these methods comes from N. Nazarov. In the forties and fifties the interest in Liapunov's and Schmidt's analytic methods diminished temporarily due to the appearance of variational calculus methods (M. Golomb, A. Hammerstein and others) and also to the rapid development of the mapping degree theory (J. Leray, J. Schauder, G. Birkhoff, O. Kellog and others).
For the past several decades, the study of free boundary problems has been a very active subject of research occurring in a variety of applied sciences. What these problems have in common is their formulation in terms of suitably posed initial and boundary value problems for nonlinear partial differential equations. Such problems arise, for example, in the mathematical treatment of the processes of heat conduction, filtration through porous media, flows of non-Newtonian fluids, boundary layers, chemical reactions, semiconductors, and so on. The growing interest in these problems is reflected by the series of meetings held under the title "Free Boundary Problems: Theory and Applications" (Oxford 1974, Pavia 1979, Durham 1978, Montecatini 1981, Maubuisson 1984, Irsee 1987, Montreal 1990, Toledo 1993, Zakopane 1995, Crete 1997, Chiba 1999). From the proceedings of these meetings, we can learn about the different kinds of mathematical areas that fall within the scope of free boundary problems. It is worth mentioning that the European Science Foundation supported a vast research project on free boundary problems from 1993 until 1999. The recent creation of the specialized journal Interfaces and Free Boundaries: Modeling, Analysis and Computation gives us an idea of the vitality of the subject and its present state of development. This book is a result of collaboration among the authors over the last 15 years.
Application of quantum mechanics in physics and chemistry often entails the manipulation and evaluation of sums and products of coupling coefficients in the theory of angular momentum. Challenges encountered in such work can be tamed by graphical techniques that provide both insight and analytical power. This book is the first step-by-step exposition of a graphical method grounded in established work. Copious exercises recover standard results but also demonstrate the power of the method to go beyond them.
The present monograph defines, interprets and uses the matrix of partial derivatives of the state vector (Mpdx), with applications to the study of some common categories of engineering processes. The book covers broad categories of processes that are formed by systems of partial differential equations (PDEs), including systems of ordinary differential equations (ODEs). The work includes numerous applications specific to Systems Theory based on Mpdx, such as parallel, serial, and feedback connections for processes defined by PDEs. For similar, more complex processes based on Mpdx with PDEs and ODEs as components, control schemes with PID effects have been developed for propagation phenomena in continuous media (spaces) or discontinuous ones (chemistry, power systems, thermo-energetics) or in electromechanics (railway traction), and so on. The monograph has a purely engineering focus and is intended for a target audience working in extremely diverse fields of application (propagation phenomena, diffusion, hydrodynamics, electromechanics) in which the use of PDEs and ODEs is justified.
Praise for the First Edition: "A very useful book for self study and reference." "Very well written. It is concise and really packs a lot of material in a valuable reference book." "An informative and well-written book . . . presented in an easy-to-understand style with many illustrative numerical examples taken from engineering and scientific studies." Practicing engineers and scientists often need to use statistical approaches to solve problems in an experimental setting, yet many have little formal training in statistics. Statistical Design and Analysis of Experiments gives such readers a carefully selected, practical background in the statistical techniques that are most useful to experimenters and data analysts who collect, analyze, and interpret data. The First Edition of this now-classic book garnered praise in the field. Now its authors have updated and revised the text, incorporating readers' suggestions as well as a number of new developments. Statistical Design and Analysis of Experiments, Second Edition emphasizes the strategy of experimentation, data analysis, and the interpretation of experimental results, presenting statistics as an integral component of experimentation from the planning stage to the presentation of conclusions, and gives an overview of the conceptual foundations of modern statistical practice.
Ideal for both students and professionals, this focused and cogent reference has proven to be an excellent classroom textbook with numerous examples. It deserves a place among the tools of every engineer and scientist working in an experimental setting.
This updated and enlarged Second Edition provides in-depth, progressive studies of kinematic mechanisms and offers novel, simplified methods of solving typical problems that arise in mechanisms synthesis and analysis, concentrating on the use of algebra and trigonometry and minimizing the need for calculus. It continues to furnish complete coverage of: key concepts, including kinematic terminology, uniformly accelerated motion, and the properties of vectors; graphical techniques for both velocity and acceleration analysis; analytical techniques; and ready-to-use computer and calculator programmes for analyzing basic classes of mechanisms. This edition supplies detailed explications of such new topics as: gears, gear trains, and cams; velocity and acceleration analyses of rolling elements; acceleration analysis of sliding contact mechanisms by the effective component method; four-bar analysis by the parallelogram method; and centre of curvature determination methods.
Important developments in the progress of the theory of rock mechanics during recent years are based on fractals and damage mechanics. The concept of fractals has proved to be a useful way of describing the statistics of naturally occurring geometries. Natural objects, from mountains and coastlines to clouds and forests, are found to have boundaries best described as fractals. Fluid flow through jointed rock masses and clusterings of earthquakes are found to follow fractal patterns in time and space. Fracturing in rocks at all scales, from the microscale (microcracks) to the continental scale (megafaults), can lead to fractal structures. The process of diagenesis and the pore geometry of sedimentary rock can also be quantitatively described by fractals. The book is mainly concerned with these developments, as related to fractal descriptions of fragmentations, damage and fracture of rocks, rock burst, joint roughness, rock porosity and permeability, rock grain growth, rock and soil particles, shear slips, fluid flow through jointed rocks, faults, earthquake clustering, and so on. The prime concerns of the book are to give a simple account of the basic concepts and methods of fractal geometry and their applications to rock mechanics, geology, and seismology, and also to discuss damage mechanics of rocks and its application to mining engineering. The book can be used as a textbook for graduate students, by university teachers to prepare courses and seminars, and by active scientists who want to become familiar with a fascinating new field.
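The box-counting notion of fractal dimension behind such descriptions can be demonstrated on the middle-third Cantor set, whose dimension is exactly log 2 / log 3 ≈ 0.631. This is a generic illustration of the method, not an example taken from the book:

```python
# Box-counting dimension of the middle-third Cantor set: count the
# boxes of side 3**-level that intersect the set, then read off the
# slope of log N against log(1/eps). Integer arithmetic keeps the
# box count exact (floating-point endpoints would risk off-by-one
# box assignments).

import math

def cantor_boxes(level):
    """Indices of the size-(3**-level) boxes covering the Cantor set."""
    starts = [0]
    for _ in range(level):
        # each surviving interval keeps its first and last third
        starts = [3 * s for s in starts] + [3 * s + 2 for s in starts]
    return starts

level = 10
n_boxes = len(cantor_boxes(level))           # 2**level boxes survive
eps = 3.0 ** -level                          # each of side 3**-level
dim = math.log(n_boxes) / math.log(1 / eps)  # log N / log(1/eps)
print(round(dim, 4))  # log 2 / log 3 ≈ 0.6309
```

For real rock data (joint roughness profiles, fragment size distributions) the same slope is estimated by regression over several box sizes rather than read off at a single scale.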
Sensitivity analysis and optimal shape design are key issues in engineering that have been affected by advances in numerical tools currently available. This book, and its supplementary online files, presents basic optimization techniques that can be used to compute the sensitivity of a given design to local change, or to improve its performance by local optimization of these data. The relevance and scope of these techniques have improved dramatically in recent years because of progress in discretization strategies, optimization algorithms, automatic differentiation, software availability, and the power of personal computers. Numerical Methods in Sensitivity Analysis and Shape Optimization will be of interest to graduate students involved in mathematical modeling and simulation, as well as engineers and researchers in applied mathematics looking for an up-to-date introduction to optimization techniques, sensitivity analysis, and optimal design.
This revised edition offers an approach to information theory that is more general than the classical approach of Shannon. Classically, information is defined for an alphabet of symbols or for a set of mutually exclusive propositions (a partition of the probability space) with corresponding probabilities adding up to 1. The new definition is given for an arbitrary cover of the probability space, i.e. for a set of possibly overlapping propositions. The generalized information concept is called novelty, and it is accompanied by two concepts derived from it, designated as information and surprise, which describe "opposite" versions of novelty: information being related more to classical information theory, and surprise being related more to the classical concept of statistical significance. In the discussion of these three concepts and their interrelations, several properties or classes of covers are defined, which turn out to be lattices. The book also presents applications of these concepts, mostly in statistics and in neuroscience.
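In the classical partition case that the book's novelty concept generalizes, the basic quantity is the information content -log2 p of a proposition, whose expectation over a partition is the Shannon entropy. A minimal generic sketch (this is the classical special case, not the book's formalism for overlapping covers):

```python
# Classical special case: surprise of a single proposition and the
# Shannon entropy of a partition (mutually exclusive propositions
# whose probabilities sum to 1).

import math

def surprise(p):
    """Information content of observing an event of probability p, in bits."""
    return -math.log2(p)

def entropy(partition):
    """Expected surprise over a partition."""
    return sum(p * surprise(p) for p in partition)

print(surprise(0.25))                        # a 1-in-4 event carries 2.0 bits
print(round(entropy([0.5, 0.25, 0.25]), 2))  # 1.5 bits
```

The book's generalization replaces the partition by an arbitrary cover, so the same observation may satisfy several overlapping propositions at once; that is where novelty, information, and surprise come apart.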
Gaussian linear modelling cannot address current signal processing demands. In modern contexts, such as Independent Component Analysis (ICA), progress has been made specifically by imposing non-Gaussian and/or non-linear assumptions. Hence, standard Wiener and Kalman theories no longer enjoy their traditional hegemony in the field as the standard computational engines for these problems. In their place, diverse principles have been explored, leading to a consequent diversity in the implied computational algorithms. The traditional on-line and data-intensive preoccupations of signal processing continue to demand that these algorithms be tractable. Increasingly, full probability modelling (the so-called Bayesian approach), or partial probability modelling using the likelihood function, is the pathway for design of these algorithms. However, the results are often intractable, and so the area of distributional approximation is of increasing relevance in signal processing. The Expectation-Maximization (EM) algorithm and Laplace approximation, for example, are standard approaches to handling difficult models, but these approximations (certainty equivalence, and Gaussian, respectively) are often too drastic to handle the high-dimensional, multi-modal and/or strongly correlated problems that are encountered. Since the 1990s, stochastic simulation methods have come to dominate Bayesian signal processing. Markov Chain Monte Carlo (MCMC) sampling, and related methods, are appreciated for their ability to simulate possibly high-dimensional distributions to arbitrary levels of accuracy. More recently, the particle filtering approach has addressed on-line stochastic simulation. Nevertheless, the wider acceptability of these methods, and, to some extent, of Bayesian signal processing itself, has been undermined by the large computational demands they typically make.
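The MCMC idea mentioned above can be sketched with the basic Metropolis algorithm: drawing samples from a target density known only up to its normalizing constant. This is a generic textbook sketch, not code from the book, and the target here is deliberately trivial:

```python
# Metropolis sampling sketch: random-walk proposals, accepted with
# probability min(1, target(proposal)/target(current)). Only an
# unnormalized log-density is needed.

import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # accept/reject in log space to avoid underflow
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, specified only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_samples=50000)
mean = sum(samples) / len(samples)
print(round(mean, 1))  # should be near 0 for a standard normal target
```

The blurb's complaint about computational demands is visible even here: tens of thousands of correlated draws are spent estimating one mean, which is exactly what motivates the cheaper distributional approximations the book develops.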
Graph algorithms is a well-established subject in mathematics and computer science. Beyond classical application fields, such as approximation, combinatorial optimization, graphics, and operations research, graph algorithms have recently attracted increased attention from computational molecular biology and computational chemistry. Centered around the fundamental issue of graph isomorphism, this text goes beyond classical graph problems of shortest paths, spanning trees, flows in networks, and matchings in bipartite graphs. Advanced algorithmic results and techniques of practical relevance are presented in a coherent and consolidated way. This book introduces graph algorithms on an intuitive basis followed by a detailed exposition in a literate programming style, with correctness proofs as well as worst-case analyses. Furthermore, full C++ implementations of all algorithms presented are given using the LEDA library of efficient data structures and algorithms.
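As a reminder of the classical baseline the text goes beyond, here is a breadth-first search for unweighted shortest paths. The book itself presents full C++ implementations using the LEDA library; this Python sketch, with a hypothetical toy graph, is only illustrative:

```python
# BFS shortest path in an unweighted directed graph, given as an
# adjacency dict. The toy graph below is hypothetical.

from collections import deque

def shortest_path(graph, source, target):
    """Fewest-edge path from source to target, or None if unreachable."""
    parents = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:     # walk parent links back to source
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph.get(node, ()):
            if neighbor not in parents:  # first visit is a shortest route
                parents[neighbor] = node
                queue.append(neighbor)
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(shortest_path(graph, "a", "e"))  # ['a', 'b', 'd', 'e']
```

Problems such as graph isomorphism, the book's central theme, lack this kind of simple polynomial-time routine, which is what makes the advanced techniques it presents necessary.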
A matroid is an abstract mathematical structure that captures combinatorial properties of matrices. This book offers a unique introduction to matroid theory, emphasizing motivations from matrix theory and applications to systems analysis. The book also serves as a comprehensive presentation of the theory and application of mixed matrices, developed primarily by the present author in the last decade. A mixed matrix is a convenient mathematical tool for systems analysis, compatible with the physical observation that "fixed constants" and "system parameters" are to be distinguished in the description of engineering systems. This book will be extremely useful to graduate students and researchers in engineering, mathematics and computer science.
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' ('And I, ..., had I known how to return, I would never have gone.') Jules Verne. 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell. 'The series is divergent; therefore we may be able to do something with it.' O. Heaviside. Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Bell's quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.