The book aims at surveying results in the application of fuzzy sets and fuzzy logic to economics and engineering. New results include fuzzy non-linear regression, fully fuzzified linear programming, fuzzy multi-period control, fuzzy network analysis, each using an evolutionary algorithm; fuzzy queuing decision analysis using possibility theory; fuzzy differential equations; fuzzy difference equations; fuzzy partial differential equations; fuzzy eigenvalues based on an evolutionary algorithm; fuzzy hierarchical analysis using an evolutionary algorithm; fuzzy integral equations. Other important topics covered are fuzzy input-output analysis; fuzzy mathematics of finance; fuzzy PERT (project evaluation and review technique). No previous knowledge of fuzzy sets is needed. The mathematical background is assumed to be elementary calculus.
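The most elementary object in this subject is a fuzzy number. As a minimal sketch (not taken from the book; all names and values are illustrative), the membership function of a triangular fuzzy number (a, m, b) can be written as:

```python
# Membership function of a triangular fuzzy number (a, m, b):
# mu rises linearly from 0 at a to 1 at the peak m, then falls
# linearly back to 0 at b. Values outside [a, b] have membership 0.
def triangular(a, m, b):
    def mu(x):
        if a < x <= m:
            return (x - a) / (m - a)
        if m < x < b:
            return (b - x) / (b - m)
        return 1.0 if x == m else 0.0
    return mu

mu = triangular(0.0, 1.0, 3.0)   # "approximately 1", supported on (0, 3)
```

Here `triangular` is a hypothetical helper, not a function from the book; fuzzy regression and fuzzy programming build on parametrized numbers of exactly this kind.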
The book introduces new techniques that imply rigorous lower bounds on the complexity of some number-theoretic and cryptographic problems. It also establishes certain attractive pseudorandom properties of various cryptographic primitives. These methods and techniques are based on bounds of character sums and numbers of solutions of some polynomial equations over finite fields and residue rings. Other number-theoretic techniques, such as sieve methods and lattice reduction algorithms, are used as well. The book also contains a number of open problems and proposals for further research. The emphasis is on obtaining unconditional, rigorously proved statements. The bright side of this approach is that the results do not depend on any assumptions or conjectures; on the downside, the results are much weaker than those which are widely believed to be true. Several lower bounds, exponential in terms of log p, are obtained on the degrees and orders of polynomials, algebraic functions, Boolean functions, and linear recurrence sequences coinciding with values of the discrete logarithm modulo a prime p at sufficiently many points (the number of points can be as small as p^(1/2+ε)). These functions are considered over the residue ring modulo p and over the residue ring modulo an arbitrary divisor d of p - 1. The case d = 2 is of special interest, since it corresponds to the representation of the rightmost bit of the discrete logarithm and determines whether the argument is a quadratic residue.
This book, part of the series Contributions in Mathematical and Computational Sciences, reviews recent developments in the theory of vertex operator algebras (VOAs) and their applications to mathematics and physics. The mathematical theory of VOAs originated from the famous monstrous moonshine conjectures of J.H. Conway and S.P. Norton, which predicted a deep relationship between the characters of the Monster, the largest simple sporadic finite group, and the theory of modular forms, inspired by the observations of J. McKay and J. Thompson. The contributions are based on lectures delivered at the 2011 conference on Conformal Field Theory, Automorphic Forms and Related Topics, organized by the editors as part of a special program offered at Heidelberg University that summer under the sponsorship of the Mathematics Center Heidelberg (MATCH).
Kiyosi Ito, the founder of stochastic calculus, is one of the few central figures of twentieth-century mathematics who reshaped the mathematical world. Today stochastic calculus is a central research field with applications in many other disciplines, for example physics, engineering, biology, economics, and finance. The Abel Symposium 2005 was organized as a tribute to the work of Kiyosi Ito on the occasion of his 90th birthday. Distinguished researchers from all over the world were invited to present the newest developments within the exciting and fast-growing field of stochastic analysis. The present volume combines papers from the invited speakers with contributions by the presenting lecturers. A special feature is the memoir that Kiyosi Ito wrote for this occasion. These are valuable pages for both young and established researchers in the field.
This volume is a substantially revised new edition of the earlier book of the same title. Six new chapters (14-19) deal with topics of current interest: multi-component convection-diffusion, convection in a compressible fluid, convection with temperature-dependent viscosity and thermal conductivity, penetrative convection, nonlinear stability in ocean circulation models, and numerical solution of eigenvalue problems. The book presents convection studies in a variety of fluid and porous media contexts. It begins at an elementary level and should be accessible to a wide audience of applied mathematicians, physicists, and engineers.
Increasingly, neural networks are used and implemented in a wide range of fields and have become useful tools in probabilistic analysis and prediction theory. This book, unique in the literature, studies the application of neural networks to the analysis of time series of sea data, namely significant wave heights and sea levels. The particular problem examined as a starting point is the reconstruction of missing data, a general problem that appears in many cases of data analysis. Specific topics covered include:
* Presentation of general information on the phenomenology of waves and tides, as well as related technical details of the various measuring processes used in the study
* Description of the wind-wave model (WAM) used to determine the spectral function of waves and predict the behavior of significant wave heights (SWH); a comparison is made of the reconstruction of SWH time series obtained by means of neural network algorithms versus SWH computed by WAM
* Principles of artificial neural networks, approximation theory, and extreme-value theory necessary to understand the main applications of the book
* Application of artificial neural networks (ANN) to reconstruct SWH and sea levels (SL)
* Comparison of the ANN approach and the approximation-operator approach, displaying the advantages of ANN
* Examination of extreme-event analysis applied to the time series of sea data at specific locations
* Generalizations of ANN to treat analogous problems for other types of phenomena and data
This book, a careful blend of theory and applications, is an excellent introduction to the use of ANN, which may encourage readers to try analogous approaches in other important application areas. Researchers, practitioners, and advanced graduate students in neural networks, hydraulic and marine engineering, prediction theory, and data analysis will benefit from the results and novel ideas presented in this useful resource.
The term "finite Fermi systems" usually refers to systems where the fermionic nature of the constituents is of dominating importance but the finite spatial extent also cannot be ignored. Historically the prominent examples were atoms, molecules, and nuclei. These should be seen in contrast to solid-state systems, where an infinite extent is usually a good approximation. Recently, new and different types of finite Fermi systems have become important, most noticeably metallic clusters, quantum dots, fermion traps, and compact stars. The theoretical description of finite Fermi systems has a long tradition and developed over decades from the simplest models to highly elaborate methods of many-body theory. In fact, finite Fermi systems are the most demanding ground for theory, as one often does not have any symmetry to simplify classification, and as a possibly large but always finite particle number requires taking all particles into account. In spite of the practical complexity, most methods rely on simple and basic schemes which can be well understood in simple test cases. We therefore felt it a timely undertaking to offer a comprehensive view of the underlying theoretical ideas and techniques used for the description of such systems across physical disciplines. The book demonstrates how theoretical descriptions can be successively refined from the Fermi gas via external-potential and mean-field models to various techniques for dealing with residual interactions, while following the universality of concepts such as shells and magic numbers across the application fields.
Andrej V. Cherkaev and Robert V. Kohn. In the past twenty years we have witnessed a renaissance of theoretical work on the macroscopic behavior of microscopically heterogeneous materials. This activity brings together a number of related themes, including: (1) the use of weak convergence as a rigorous yet general language for the discussion of macroscopic behavior; (2) interest in new types of questions, particularly the "G-closure problem," motivated in large part by applications of optimal control theory to structural optimization; (3) the introduction of new methods for bounding effective moduli, including one based on "compensated compactness"; and (4) the identification of deep links between the analysis of microstructures and the multidimensional calculus of variations. This work has implications for many physical problems involving optimal design, composite materials, and coherent phase transitions. As a result it has received attention and support from numerous scientific communities, including engineering, materials science, and physics as well as mathematics. There is by now an extensive literature in this area. But for various reasons certain fundamental papers were never properly published, circulating instead as mimeographed notes or preprints. Other work appeared in poorly distributed conference proceedings volumes. Still other work was published in standard books or journals, but written in Russian or French. The net effect is a sort of "gap" in the literature, which has made the subject unnecessarily difficult for newcomers to penetrate.
Elliptic curves have been intensively studied in algebraic geometry and number theory. In recent years they have been used in devising efficient algorithms for factoring integers and primality proving, and in the construction of public key cryptosystems. Elliptic Curve Public Key Cryptosystems provides an up-to-date and self-contained treatment of elliptic curve-based public key cryptology. Elliptic curve cryptosystems potentially provide equivalent security to the existing public key schemes, but with shorter key lengths. Having short key lengths means smaller bandwidth and memory requirements and can be a crucial factor in some applications, for example the design of smart card systems. The book examines various issues which arise in the secure and efficient implementation of elliptic curve systems. Elliptic Curve Public Key Cryptosystems is a valuable reference resource for researchers in academia, government and industry who are concerned with issues of data security. Because of the comprehensive treatment, the book is also suitable for use as a text for advanced courses on the subject.
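The group law on an elliptic curve is the operation every such cryptosystem is built on. Below is a toy sketch (not taken from the book; the curve y^2 = x^3 + 2x + 2 over GF(17) and the point are illustrative classroom values, and the code is neither constant-time nor suitable for real use) of point addition over a small prime field:

```python
P_MOD, A = 17, 2      # curve y^2 = x^3 + A*x + 2 over GF(17)

def ec_add(P, Q):
    """Add two points on the curve; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                               # P + (-P) = infinity
    if P == Q:                                    # tangent (doubling) slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                         # chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

G = (5, 1)             # a point on the curve: 1^2 = 5^3 + 2*5 + 2 (mod 17)
two_G = ec_add(G, G)   # doubling G gives (6, 3)
```

Repeated `ec_add` is the scalar multiplication whose inversion (the elliptic curve discrete logarithm) underpins the security of these schemes.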
This self-tutorial offers a concise yet thorough grounding in the mathematics necessary for successfully applying FEMs to practical problems in science and engineering. The unique approach first summarizes and outlines the finite-element mathematics in general and then, in the second and major part, formulates problem examples that clearly demonstrate the techniques of functional analysis via numerous and diverse exercises. The solutions of the problems are given directly afterwards. Using this approach, the author motivates and encourages the reader to actively acquire the knowledge of finite-element methods instead of passively absorbing the material, as in most standard textbooks. The enlarged English-language edition, based on the original French, also contains a chapter on the approximation steps derived from the description of nature with differential equations and then applied to the specific model to be used. Furthermore, an introduction to tensor calculus using distribution theory offers further insight for readers with different mathematical backgrounds.
This thesis presents two analyses of semileptonic b -> s l+ l- decays, which proceed via flavour-changing neutral currents (FCNCs), to test for the presence of new physics and for lepton flavour universality, the equality of couplings to the different lepton flavours, which on the basis of experimental evidence is assumed to hold in the Standard Model; the tests are free from uncertainties arising from limited knowledge of the hadronic matrix elements. It includes the angular analysis of the Lambda_b -> Lambda mu mu decay and the R_K* measurement, both of which are first measurements, not yet performed by any other experiment.
This self-contained book is an up-to-date description of the basic theory of molecular gas dynamics and its various applications. The book, unique in the literature, presents working knowledge, theory, techniques, and typical phenomena in rarefied gases for theoretical development and application. Basic theory is developed in a systematic way and presented in a form easily applied for practical use. In this work, the ghost effect and non-Navier-Stokes effects are demonstrated for typical examples, the Bénard and Taylor-Couette problems, in the context of a new framework. A new type of ghost effect is also discussed.
Computational Issues in High Performance Software for Nonlinear Research brings together in one place important contributions and up-to-date research results in this important area. Computational Issues in High Performance Software for Nonlinear Research serves as an excellent reference, providing insight into some of the most important research issues in the field.
An easy-to-read survey of data analysis, linear regression models, and analysis of variance. The extensive development of the linear model, including the use of the linear-model approach to analysis of variance, provides a strong link to statistical software packages and is complemented by a thorough overview of theory. It is assumed that the reader has background equivalent to an introductory book in statistical inference, and the text can be read easily by those who have had brief exposure to calculus and linear algebra. Intended for first-year graduate students in business, the social sciences, and the biological sciences, it provides the student with the necessary statistics background for a course in research methodology. In addition, undergraduate statistics majors will find this text useful as a survey of linear models and their applications.
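The simplest instance of the linear model surveyed above is ordinary least squares for y = a + b*x. The sketch below (illustrative only, not taken from the book; the data points are made up) computes the closed-form estimates in pure Python:

```python
def ols(xs, ys):
    """Ordinary least squares fit of y = a + b*x.
    b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2),
    a = mean_y - b * mean_x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Points lying exactly on y = 1 + 2x recover a = 1, b = 2.
a, b = ols([0, 1, 2, 3], [1, 3, 5, 7])
```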
Modern geometric methods combine the intuitiveness of spatial visualization with the rigor of analytical derivation. Classical analysis is shown to provide a foundation for the study of geometry while geometrical ideas lead to analytical concepts of intrinsic beauty. Arching over many subdisciplines of mathematics and branching out in applications to every quantitative science, these methods are, notes the Russian mathematician A.T. Fomenko, in tune with the Renaissance traditions. Economists and finance theorists are already familiar with some aspects of this synthetic tradition. Bifurcation and catastrophe theories have been used to analyze the instability of economic models. Differential topology provided useful techniques for deriving results in general equilibrium analysis. But they are less aware of the central role that Felix Klein and Sophus Lie gave to group theory in the study of geometrical systems. Lie went on to show that the special methods used in solving differential equations can be classified through the study of the invariance of these equations under a continuous group of transformations. Mathematicians and physicists later recognized the relation between Lie's work on differential equations and symmetry and, combining the visions of Hamilton, Lie, Klein and Noether, embarked on a research program whose vitality is attested by the innumerable books and articles written by them as well as by biologists, chemists and philosophers.
Relational databases hold data, right? They indeed do, but to think of a database as nothing more than a container for data is to miss out on the profound power that underlies relational technology. Use the expressive power of mathematics to precisely specify designs and business rules. Communicate effectively about design using the universal language of mathematics. Develop and write complex SQL statements with confidence. Avoid pitfalls and problems from common relational bugaboos such as null values and duplicate rows. The math that you learn in this book will put you above the level of understanding of most database professionals today. You'll better understand the technology and be able to apply it more effectively. You'll avoid data anomalies like redundancy and inconsistency. Understanding what's in this book will take your mastery of relational technology to heights you may not have thought possible.
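The "bugaboos" mentioned above are easy to demonstrate concretely. The sketch below (my own illustration, not an example from the book) uses Python's built-in sqlite3 module to show NULL's three-valued logic and the bag-versus-set behavior of duplicate rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (1,), (None,)])

# NULL = NULL evaluates to UNKNOWN, not TRUE, so this matches nothing.
assert cur.execute("SELECT x FROM t WHERE x = NULL").fetchall() == []

# IS NULL is the correct test for missing values.
rows = cur.execute("SELECT x FROM t WHERE x IS NULL").fetchall()
assert rows == [(None,)]

# SQL tables are bags, not sets: the duplicate row survives
# unless DISTINCT is requested explicitly.
assert cur.execute("SELECT COUNT(x) FROM t WHERE x = 1").fetchone()[0] == 2
assert cur.execute("SELECT COUNT(DISTINCT x) FROM t").fetchone()[0] == 1
```

Both behaviors follow from the SQL standard's departure from pure set-theoretic relations, which is exactly the gap the book's mathematical treatment addresses.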
This monograph contributes to the mathematical analysis of systems exhibiting hysteresis effects and phase transitions. Its main part begins with a detailed study of models for scalar rate-independent hysteresis in the form of hysteresis operators. Applications to ferromagnetism, elastoplasticity and fatigue analysis are presented, and two representative distributed systems with hysteresis operators are discussed. The attention then shifts to the mechanisms of energy dissipation and transformation that induce a hysteretic behavior in continuous media undergoing phase transitions. After an introduction to phenomenological thermodynamic theories of phase transitions, in particular the Landau-Ginzburg theory and phase field models, several specific models are discussed in detail. These include Falk's model for the hysteresis in shape memory alloys and the phase field models due to Caginalp and Penrose-Fife. The latter are studied both for conserved and non-conserved order parameters. A chapter presenting a mathematical model for the austenite-pearlite and austenite-martensite phase transitions in eutectoid carbon steels concludes the book.
Many aspects of nature, biology, and even society have become part of the techniques and algorithms used in computer science, or have been used to enhance or hybridize several techniques through the inclusion of advanced evolution, cooperation, or biologically based additions. The previous NICSO workshops were held in Granada, Spain (2006), Acireale, Italy (2007), and Tenerife, Spain (2008). As in the previous editions, NICSO 2010, held in Granada, Spain, was conceived as a forum for the latest ideas and state-of-the-art research related to nature-inspired cooperative strategies. The contributions collected in this book cover topics including nature-inspired techniques like Genetic Algorithms, Evolutionary Algorithms, Ant and Bee Colonies, Swarm Intelligence approaches, Neural Networks, several Cooperation Models, Structures and Strategies, Agents Models, Social Interactions, as well as new algorithms based on the behaviour of fireflies or bats.
This volume contains the papers presented at the NATO Advanced Research Institute on "Non-Linear Dynamics and Fundamental Interactions," held in Tashkent, Uzbekistan, from October 10-16, 2004. The main objective of the workshop was to bring together people working in areas of fundamental physics relating to quantum field theory, finite-temperature field theory, and their applications to problems in particle physics, phase transitions, and regions of overlap with quantum chaos. The other important area is related to aspects of non-linear dynamics, considered under the topic of chaology. Such techniques are applied to mesoscopic systems, nanostructures, quantum information, particle physics, and cosmology. All this forms a very rich area to review critically and then to identify aspects that still need careful consideration, with possible new developments to find appropriate solutions. There were 29 one-hour talks and a total of seven half-hour talks, mostly by the students. In addition, two round-table discussions were organised to bring up the important topics that still need careful consideration. One was devoted to questions and unsolved problems in chaos, in particular quantum chaos; the other considered the outstanding problems in fundamental interactions. There were extensive discussions during the two hours devoted to each area. The application and development of new and diverse techniques was the real focus of these discussions. The conference was ably organised by the local committee consisting of D.U.
The focus of this book is on bilevel programming which combines elements of hierarchical optimization and game theory. The basic model addresses the problem where two decision-makers, each with their individual objectives, act and react in a noncooperative manner. The actions of one affect the choices and payoffs available to the other but neither player can completely dominate the other in the traditional sense. Over the last 20 years there has been a steady growth in research related to theory and solution methodologies for bilevel programming. This interest stems from the inherent complexity and consequent challenge of the underlying mathematics, as well as the applicability of the bilevel model to many real-world situations. The primary aim of this book is to provide a historical perspective on algorithmic development and to highlight those implementations that have proved to be the most efficient in their class. A corollary aim is to provide a sampling of applications in order to demonstrate the versatility of the basic model and the limitations of current technology. What is unique about this book is its comprehensive and integrated treatment of theory, algorithms and implementation issues. It is the first text that offers researchers and practitioners an elementary understanding of how to solve bilevel programs and a perspective on what success has been achieved in the field. Audience: Includes management scientists, operations researchers, industrial engineers, mathematicians and economists.
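The basic model described above can be made concrete with a toy example (my own sketch, not from the book; the objectives and discrete choice set are invented for illustration). The leader picks x, the follower best-responds with y, and the leader's payoff depends on both; with finitely many choices the problem can be solved by brute-force enumeration:

```python
choices = [0, 1, 2, 3]   # hypothetical discrete decision space

def follower_best(x):
    """Follower's reaction: maximize its own objective f(x, y) = -(y - x)^2."""
    return max(choices, key=lambda y: -(y - x) ** 2)

def leader_value(x):
    """Leader's objective F(x, y), evaluated at the follower's reaction."""
    y = follower_best(x)
    return -(x - 2) ** 2 - 0.5 * y

best_x = max(choices, key=leader_value)
best_y = follower_best(best_x)
```

Real bilevel programs replace this enumeration with the algorithmic machinery the book surveys, since the follower's reaction set is generally defined implicitly by an optimization problem rather than a finite list.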
This updated edition provides an introduction to computational physics in order to perform physics experiments on the computer. Computers can be used for a wide variety of scientific tasks, from the simple manipulation of data to simulations of real-world events. This book is designed to provide the reader with a grounding in scientific programming. It contains many examples and exercises developed in the context of physics problems. The new edition now uses C++ as the primary language. The book covers topics such as interpolation, integration, and the numerical solutions to both ordinary and partial differential equations. It discusses simple ideas, such as linear interpolation and root finding through bisection, to more advanced concepts in order to solve complex differential equations. It also contains a chapter on high performance computing which provides an introduction to parallel programming. Features Includes some advanced material as well as the customary introductory topics Uses a comprehensive C++ library and several C++ sample programs ready to use and build into a library of scientific programs Features problem-solving aspects to show how problems are approached and to demonstrate the methods of constructing models and solutions
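Root finding through bisection, one of the simple ideas the book starts from, fits in a few lines. The sketch below (illustrative only, not code from the book, and in Python rather than the book's C++) halves a sign-changing bracket until the root is pinned down:

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign.
    Each iteration halves the bracket, so convergence is guaranteed."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:        # root lies in the left half
            b, fb = m, fm
        else:                   # root lies in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(math.cos, 0.0, 2.0)   # cos changes sign on [0, 2] at pi/2
```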
Algorithmic Principles of Mathematical Programming investigates the
mathematical structures and principles underlying the design of
efficient algorithms for optimization problems. Recent advances in
algorithmic theory have shown that the traditionally separate areas
of discrete optimization, linear programming, and nonlinear
optimization are closely linked. This book offers a comprehensive
introduction to the whole subject and leads the reader to the
frontiers of current research. The prerequisites to use the book
are very elementary. All the tools from numerical linear algebra
and calculus are fully reviewed and developed. Rather than
attempting to be encyclopedic, the book illustrates the important
basic techniques with typical problems. The focus is on efficient
algorithms with respect to practical usefulness. Algorithmic
complexity theory is presented with the goal of helping the reader
understand the concepts without having to become a theoretical
specialist. Further theory is outlined and supplemented with
pointers to the relevant literature.
The book offers a new approach to information theory that is more general than the classical approach of Shannon. The classical definition of information is given for an alphabet of symbols or for a set of mutually exclusive propositions (a partition of the probability space Ω) with corresponding probabilities adding up to 1. The new definition is given for an arbitrary cover of Ω, i.e. for a set of possibly overlapping propositions. The generalized information concept is called novelty, and it is accompanied by two new concepts derived from it, designated as information and surprise, which describe "opposite" versions of novelty: information being related more to classical information theory and surprise being related more to the classical concept of statistical significance. In the discussion of these three concepts and their interrelations, several properties or classes of covers are defined, which turn out to be lattices. The book also presents applications of these new concepts, mostly in statistics and in neuroscience.
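The classical starting point that the book generalizes can be computed directly. The sketch below shows only the Shannon case for a partition (the book's generalized novelty for overlapping covers is not reproduced here; the probabilities are illustrative):

```python
import math

def shannon_entropy(probs):
    """Shannon information of a partition:
    H = sum over cells of p * log2(1/p), with probabilities summing to 1."""
    assert abs(sum(probs) - 1.0) < 1e-12
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# A fair coin carries one bit; a fair 8-sided die carries three.
h_coin = shannon_entropy([0.5, 0.5])
h_die = shannon_entropy([1 / 8] * 8)
```

Novelty replaces the partition argument with an arbitrary cover of Ω, so the cells may overlap and the probabilities need not sum to 1, which is precisely where the classical formula stops applying.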
Symmetry is at the heart of our understanding of matter. This book tells the fascinating story of the constituents of matter from a common symmetry perspective. The standard model of elementary particles and the periodic table of chemical elements share the goal of bringing order to the bewildering chaos of the constituents of matter. Their success relies on the presence of fundamental symmetries at their core. The purpose of Shattered Symmetry is to share the admiration for the power and the beauty of these symmetries. The reader is taken on a journey from the basic geometric symmetry group of a circle to the sublime dynamic symmetries that govern the motions of the particles. Along the way the theory of symmetry groups is gradually introduced, with special emphasis on its use as a classification tool and its graphical representations. This is applied to the unitary symmetry of the eightfold way of quarks and to the four-dimensional symmetry of the hydrogen atom. The final challenge is to open up the structure of Mendeleev's table, which goes beyond the symmetry of the hydrogen atom. Breaking this symmetry to accommodate the multi-electron atoms requires us to leave the common ground of linear algebras and explore the potential of non-linearity.
This book is devoted to one of the main questions of the theory of extremal problems, namely, to necessary and sufficient extremality conditions. It is intended mostly for mathematicians and also for all those who are interested in optimization problems. The book may be useful for advanced students, postgraduate students, and researchers. The book consists of four chapters. In Chap. 1 we study the abstract minimization problem with constraints, which is often called the mathematical programming problem. Chapter 2 is devoted to one of the most important classes of extremal problems, the optimal control problem. In the third chapter we study one of the main objects of the calculus of variations, the integral quadratic form. In the concluding, fourth, chapter we study local properties of smooth nonlinear mappings in a neighborhood of an abnormal point. The problems which are studied in this book (of course, in addition to their extremal nature) are united by our main interest in the study of the so-called abnormal or degenerate problems. This is the main distinction of the present book from the large number of books devoted to the theory of extremal problems, among which there are many excellent textbooks, e.g. [13, 38, 59, 78, 82, 86, 101, 112, 119], to mention a few.