The present book deals with the issues of stability of motion that are most often encountered in the analysis of scientific and technical problems. There are many comprehensive monographs on the theory of stability of motion, each devoted to a separate complicated issue of the theory. The main advantage of this book, however, is its simple yet rigorous presentation of the concepts of the theory, which are often set in the context of applied problems, with detailed examples demonstrating effective methods of solving practical problems.
In his paper Theory of Communication [Gab46], D. Gabor proposed the use of a family of functions obtained from one Gaussian by time and frequency shifts. Each of these is well concentrated in time and frequency; together they are meant to constitute a complete collection of building blocks into which more complicated time-dependent functions can be decomposed. The application to communication proposed by Gabor was to send the coefficients of the decomposition of a signal into this family, rather than the signal itself. This remained a proposal: as far as I know there were no serious attempts to implement it for communication purposes in practice, and in fact, at the critical time-frequency density proposed originally, there is a mathematical obstruction. As was understood later, the family of shifted and modulated Gaussians spans the space of square-integrable functions [BBGK71, Per71] (it even has one function to spare [BGZ75]...), but it does not constitute what we now call a frame, leading to numerical instabilities. The Balian-Low theorem (about which the reader can find more in some of the contributions in this book) and its extensions showed that a similar mishap occurs if the Gaussian is replaced by any other function that is "reasonably" smooth and localized. One is thus led naturally to considering a higher time-frequency density.
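For concreteness, the Gabor family described above can be written in the standard notation as a lattice of time-frequency shifts of a single window g (here a Gaussian); this formula is the usual textbook statement, not a quotation from the book:

\[
  g_{m,n}(t) = e^{2\pi i m b t}\, g(t - n a), \qquad m, n \in \mathbb{Z},
\]

where a > 0 is the time step and b > 0 the frequency step. Gabor's original proposal corresponds to the critical density ab = 1; for the Gaussian window the frame property, and hence numerically stable expansions of square-integrable functions, holds only in the oversampled regime ab < 1, which is the "higher time-frequency density" referred to above.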
The formalism of classical mechanics underlies a number of powerful mathematical methods that are widely used in theoretical and mathematical physics. This book considers the basic facts of Lagrangian and Hamiltonian mechanics, as well as related topics such as canonical transformations, integral invariants, potential motion in a geometric setting, symmetries, the Noether theorem, and systems with constraints. While in some cases the formalism is developed beyond the traditional level adopted in standard textbooks on classical mechanics, only elementary mathematical methods are used in the exposition of the material. The mathematical constructions involved are explicitly described and explained, so the book can be a good starting point for the undergraduate student new to this field. At the same time, and where possible, intuitive motivations are replaced by explicit proofs and direct computations, preserving the level of rigor that makes the book useful for graduate students intending to work in one of the branches of the vast field of theoretical physics. To illustrate how the classical-mechanics formalism works in other branches of theoretical physics, examples related to electrodynamics, as well as to relativistic and quantum mechanics, are included.
Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. This book is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. This book serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to statistical methods and a theoretical linear models course. This book emphasizes the concepts and the analysis of data sets. It provides a review of the key concepts in simple linear regression, matrix operations, and multiple regression. Methods and criteria for selecting regression variables and geometric interpretations are discussed. Polynomial, trigonometric, analysis of variance, nonlinear, time series, logistic, random effects, and mixed effects models are also discussed. Detailed case studies and exercises based on real data sets are used to reinforce the concepts. John O. Rawlings, Professor Emeritus in the Department of Statistics at North Carolina State University, retired after 34 years of teaching, consulting, and research in statistical methods. He was instrumental in developing, and for many years taught, the course on which this text is based. He is a Fellow of the American Statistical Association and the Crop Science Society of America. Sastry G. Pantula is Professor and Director of Graduate Programs in the Department of Statistics at North Carolina State University. He is a member of the Academy of Outstanding Teachers at North Carolina State University. David A. Dickey is Professor of Statistics at North Carolina State University. He is a member of the Academy of Outstanding Teachers at North Carolina State University.
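As a minimal illustration of the kind of least squares fit the book builds on, the following generic sketch fits a simple linear regression with NumPy; the data are made up for demonstration and the code is not taken from the book.

import numpy as np

# Made-up data: response y and a single predictor x (illustrative only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Design matrix with an intercept column; the least squares estimate
# solves min ||y - X beta||^2, computed here with a stable solver.
X = np.column_stack([np.ones_like(x), x])
beta_hat, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope:", beta_hat)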
The problems of conditional optimization of the uniform (or C-) norm for polynomials and rational functions arise in various branches of science and technology. Their numerical solution is notoriously difficult in the case of high-degree functions. The book develops the classical Chebyshev approach, which gives an analytical representation of the solution in terms of Riemann surfaces. Techniques born in seemingly remote branches of mathematics such as complex analysis, Riemann surfaces and Teichmüller theory, foliations, braids, and topology are applied to approximation problems. The key feature of this book is the use of beautiful ideas of contemporary mathematics for the solution of applied problems and their effective numerical realization. This is one of the few books in which the computational aspects of higher-genus Riemann surfaces are illuminated. Effective work with the moduli spaces of algebraic curves provides wide opportunities for numerical experiments in mathematics and theoretical physics.
The goal of this book is to present the new trend of Computational Fluid Dynamics (CFD) for the 21st century. It consists of papers presented at a symposium honoring Prof. Nobuyuki Satofuka on the occasion of his 60th birthday. The symposium, entitled Computational Fluid Dynamics for the 21st Century, was held at Kyoto Institute of Technology (KIT) in Kyoto, Japan, on July 15-17, 2000. The symposium was hosted by KIT as a memorial event celebrating the 100th anniversary of its establishment. The invited speakers were from Japan as well as from the international community in Asia, Europe and North America. It is a great pleasure to dedicate this book to Prof. Satofuka in appreciation of his contributions to this field. During the last 30 years, Prof. Satofuka made many important contributions to CFD, advancing the numerics and our understanding of flow physics in different regimes. The details of his contributions are discussed in the first chapter. The book contains chapters covering related topics with emphasis on new promising directions for the 21st century. The chapters of the book reflect the 10 sessions of the symposium on both the numerics and the applications, including grid generation and adaptation, new numerical schemes, optimization techniques and parallel computations, as well as applications to multi-scale and multi-physics problems, design and flow control, and new topics beyond aeronautics. In the following, the chapters of the book are introduced.
This unique book on the subject addresses fundamental problems and will be the standard reference for a long time to come. The authors have different scientific origins and combine these successfully, creating a text aimed at graduate students and researchers that can be used for courses and seminars.
This book is devoted to current advances in the field of nonlinear mathematical physics and modeling of critical phenomena that can lead to catastrophic events. Pursuing a multidisciplinary approach, it gathers the work of scientists who are developing mathematical and computational methods for the study and analysis of nonlinear phenomena and who are working actively to apply these tools and create conditions to mitigate and reduce the negative consequences of natural and socio-economic disaster risk. This book summarizes the contributions of the International School and Workshop on Nonlinear Mathematical Physics and Natural Hazards, organized within the framework of the South East Europe Network in Mathematical and Theoretical Physics (SEENET-MTP) and supported by UNESCO. It was held at the Bulgarian Academy of Sciences from November 28 to December 2, 2013. The contributions are divided into two major parts in keeping with the scientific program of the meeting. Among the topics covered in Part I (Nonlinear Mathematical Physics towards Critical Phenomena) are predictions and correlations in self-organized criticality, space-time structure of extreme current and activity events in exclusion processes, quantum spin chains and integrability of many-body systems, applications of discriminantly separable polynomials, MKdV-type equations, and chaotic behavior in Yang-Mills theories. Part II (Seismic Hazard and Risk) is devoted to probabilistic seismic hazard assessment, seismic risk mapping, seismic monitoring, networking and data processing in Europe, mainly in South-East Europe. The book aims to promote collaboration at the regional and European level to better understand and model phenomena that can cause natural and socio-economic disasters, and to contribute to the joint efforts to mitigate the negative consequences of natural disasters. This collection of papers reflects contemporary efforts on capacity building through developing skills, exchanging knowledge and practicing mathematical methods for modeling nonlinear phenomena, disaster risk preparedness and natural hazards mitigation. The target audience includes students and researchers in mathematical and theoretical physics, earth physics, applied physics, geophysics, seismology and earthquake danger and risk mitigation.
Optimization in Computational Chemistry and Molecular Biology: Local and Global Approaches covers recent developments in optimization techniques for addressing several computational chemistry and biology problems. A tantalizing problem that cuts across the fields of computational chemistry, biology, medicine, engineering and applied mathematics is how proteins fold. Global and local optimization provide a systematic framework for conformational searches for the prediction of three-dimensional protein structures that represent the global minimum free energy, as well as low-energy biomolecular conformations. Each contribution in the book is essentially expository in nature, but scholarly in treatment. The topics covered include advances in local and global optimization approaches for molecular dynamics and modeling, distance geometry, protein folding, molecular structure refinement, protein and drug design, and molecular and peptide docking. Audience: The book is addressed not only to researchers in mathematical programming, but to all scientists in various disciplines who use optimization methods in solving problems in computational chemistry and biology.
This book focuses on problems at the interplay between the theory of partitions and optimal transport with a view toward applications. Topics covered include problems related to stable marriages and stable partitions, multipartitions, optimal transport for measures and optimal partitions, and finally cooperative and noncooperative partitions. All concepts presented are illustrated by examples from game theory, economics, and learning.
Since the introduction of genetic algorithms in the 1970s, an enormous number of articles together with several significant monographs and books have been published on this methodology. As a result, genetic algorithms have made a major contribution to optimization, adaptation, and learning in a wide variety of unexpected fields. Over the years, many excellent books on genetic algorithm optimization have been published; however, they focus mainly on single-objective discrete or other hard optimization problems under certainty. There appears to be no book designed to present genetic algorithms for solving not only single-objective but also fuzzy and multiobjective optimization problems in a unified way. Genetic Algorithms and Fuzzy Multiobjective Optimization introduces the latest advances in the field of genetic algorithm optimization for 0-1 programming, integer programming, nonconvex programming, and job-shop scheduling problems under multiple objectives and fuzziness. In addition, the book treats a wide range of real-world applications. The theoretical material and applications place special stress on the interactive decision-making aspects of fuzzy multiobjective optimization for human-centered systems in most realistic situations when dealing with fuzziness. The intended readers of this book are senior undergraduate students, graduate students, researchers, and practitioners in the fields of operations research, computer science, industrial engineering, management science, systems engineering, and other engineering disciplines that deal with the subjects of multiobjective programming for discrete or other hard optimization problems under fuzziness. Real-world research applications are used throughout the book to illustrate the presentation. These applications are drawn from complex problems. Examples include flexible scheduling in a machine center, operation planning of district heating and cooling plants, and coal purchase planning in an actual electric power plant.
This scientific monograph of a survey kind deals with fundamental ideas and basic schemes of optimization methods that can be effectively used for solving strategic planning and operations management problems related, in particular, to transportation. The monograph is an English translation of a considerable part of the author's book with a similar title that was published in Russian in 1992. The material of the monograph embraces methods of linear and nonlinear programming; nonsmooth and nonconvex optimization; integer programming, solving problems on graphs, and solving problems with mixed variables; routing, scheduling, solving network flow problems, and solving the transportation problem; stochastic programming, multicriteria optimization, game theory, and optimization on fuzzy sets and under fuzzy goals; optimal control of systems described by ordinary differential equations, partial differential equations, generalized differential equations (differential inclusions), and functional equations with a variable that can assume only discrete values; and some other methods that are based on or adjoin the listed ones.
Emergent Computation emphasizes the interrelationship of the different classes of languages studied in mathematical linguistics (regular, context-free, context-sensitive, and type 0) with aspects of the biochemistry of DNA, RNA, and proteins. In addition, aspects of sequential machines such as parity checking and semi-groups are extended to the study of the biochemistry of DNA, RNA, and proteins. Mention is also made of the relationship of algebraic topology, knot theory, complex fields, quaternions, and universal Turing machines to the biochemistry of DNA, RNA, and proteins. Emergent Computation tries to avoid an emphasis upon mathematical abstraction ("elegance") at the expense of ignoring scientific facts known to biochemists. Emergent Computation is based entirely upon papers published by scientists in well-known and respected professional journals. These papers are based upon current research. A few examples of what is not ignored to gain "elegance":
- DNA exists as triple and quadruple strands
- Watson-Crick complementary bases have mismatches
- There can be more than four bases in DNA
- There are more than sixty-four codons
- There may be more than twenty amino acids in proteins
While Emergent Computation emphasizes bioinformatics applications, the last chapter studies mathematical linguistics applied to areas such as languages found in birds, insects, medical applications, anthropology, etc. Emergent Computation tries to avoid unnecessary mathematical abstraction while still being rigorous. The demands made upon the reader's knowledge of chemistry or mathematics are minimized as well. The collected technical references are valuable in themselves for additional reading.
This book presents contributions and review articles on the theory of copulas and their applications. The authoritative and refereed contributions review the latest findings in the area with emphasis on "classical" topics like distributions with fixed marginals, measures of association, construction of copulas with given additional information, etc. The book celebrates the 75th birthday of Professor Roger B. Nelsen and his outstanding contribution to the development of copula theory. Most of the book's contributions were presented at the conference "Copulas and Their Applications" held in his honor in Almeria, Spain, July 3-5, 2017. The chapter 'When Gumbel met Galambos' is published open access under a CC BY 4.0 license.
This text provides an application-oriented introduction to numerical methods for partial differential equations. It covers finite difference, finite element and finite volume methods, interweaving theory and applications throughout. Extensive exercises are provided throughout the text. Graduate students in mathematics, engineering and physics will find this book useful.
Since I started working in the area of nonlinear programming and, later on, variational inequality problems, I have frequently been surprised to find that many algorithms, however scattered in numerous journals, monographs and books, and described rather differently, are closely related to each other. This book is meant to help the reader understand and relate algorithms to each other in some intuitive fashion, and represents, in this respect, a consolidation of the field. The framework of algorithms presented in this book is called Cost Approximation. (The preface of the Ph.D. thesis [Pat93d] explains the background to the work that led to the thesis, and ultimately to this book.) It describes, for a given formulation of a variational inequality or nonlinear programming problem, an algorithm by means of approximating mappings and problems, a principle for the update of the iteration points, and a merit function which guides and monitors the convergence of the algorithm. One purpose of this book is to offer this framework as an intuitively appealing tool for describing an algorithm. One of the advantages of the framework, or any reasonable framework for that matter, is that two algorithms may be easily related and compared through its use. This framework is distinctive in that it covers a vast number of methods, while still being fairly detailed; the level of abstraction is in fact the same as that of the original problem statement.
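To make the three ingredients mentioned above concrete, here is a minimal generic sketch in Python of that pattern: an approximating subproblem solved at each iterate (here a projected gradient step on a box-constrained quadratic), a simple update rule, and a merit function (the objective itself) that monitors convergence. This is an illustration of the kind of scheme such a framework covers, not the book's own Cost Approximation formulation; the problem data are invented for demonstration.

import numpy as np

# Illustrative problem: minimize f(x) = 0.5*x'Qx - b'x over the box 0 <= x <= 1.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])

def f(x):          # merit function: here simply the objective value
    return 0.5 * x @ Q @ x - b @ x

def grad(x):
    return Q @ x - b

x = np.zeros(2)
step = 0.2         # fixed step length used in the update of the iterates
for k in range(100):
    # Approximating subproblem around x: its solution is the projected
    # gradient step, one simple instance of the general pattern.
    x_new = np.clip(x - step * grad(x), 0.0, 1.0)
    if f(x_new) > f(x) - 1e-12:   # merit function guides the stopping test
        break
    x = x_new
print("approximate minimizer:", x)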
Practical quantum computing still seems more than a decade away, and researchers have not even identified what the best physical implementation of a quantum bit will be. There is a real need in the scientific literature for a dialogue on the topic of lessons learned and looming roadblocks. This reprint from Quantum Information Processing is dedicated to the experimental aspects of quantum computing and includes articles that 1) highlight the lessons learned over the last 10 years, and 2) outline the challenges over the next 10 years. The special issue includes a series of invited articles that discuss the most promising physical implementations of quantum computing. The invited articles were intended to draw grand conclusions about the past and speculate about the future, not just report results from the present.
This interdisciplinary thesis involves the design and analysis of coordination algorithms on networks, identification of dynamic networks, and estimation on networks with random geometries, with implications for networks that support the operation of dynamic systems, e.g., formations of robotic vehicles and distributed estimation via sensor networks. The results have ramifications for fault detection and isolation of large-scale networked systems, and for optimization models and algorithms for next-generation aircraft power systems. The author finds novel applications of the methodology in energy systems, such as residential and industrial smart energy management systems.
Many questions dealing with solvability, stability and solution methods for variational inequalities or equilibrium, optimization and complementarity problems lead to the analysis of certain (perturbed) equations. This often requires a reformulation of the initial model under consideration. Due to the specifics of the original problem, the resulting equation is usually either not differentiable (even if the data of the original model are smooth), or it does not satisfy the assumptions of the classical implicit function theorem. This phenomenon is the main reason why a considerable analytical instrument dealing with generalized equations (i.e., with finding zeros of multivalued mappings) and nonsmooth equations (i.e., the defining functions are not continuously differentiable) has been developed during the last 20 years, and that under very different viewpoints and assumptions. In this theory, the classical hypotheses of convex analysis, in particular monotonicity and convexity, have been weakened or dropped, and the scope of possible applications seems to be quite large. Briefly, this discipline is often called nonsmooth analysis, sometimes also variational analysis. Our book fits into this discipline; however, our main intention is to develop the analytical theory in close connection with the needs of applications in optimization and related subjects. Main topics of the book:
1. Extended analysis of Lipschitz functions and their generalized derivatives, including "Newton maps" and regularity of multivalued mappings.
2. Principle of successive approximation under metric regularity and its application to implicit functions.
This volume of High Performance Computing in Science and Engineering is fully dedicated to the final report of KONWIHR, the Bavarian Competence Network for Technical and Scientific High Performance Computing. It includes the transactions of the final KONWIHR workshop, which was held at Technische Universität München, October 14-15, 2004, as well as additional reports of KONWIHR research groups. KONWIHR was established by the Bavarian State Government in order to support the broad application of high performance computing in science and technology throughout the country. KONWIHR is a supporting measure accompanying the installation of the German supercomputer Hitachi SR 8000 at the Leibniz Computing Center of the Bavarian Academy of Sciences. The report covers projects ranging from basic research in computer science, developing tools for high performance computing, to applications from biology, chemistry, electrical engineering, geology, mathematics, physics, computational fluid dynamics, materials science and computer science.
This is a textbook for a course (or self-instruction) in cryptography with emphasis on algebraic methods. The first half of the book is a self-contained informal introduction to areas of algebra, number theory, and computer science that are used in cryptography. Most of the material in the second half - "hidden monomial" systems, combinatorial-algebraic systems, and hyperelliptic systems - has not previously appeared in monograph form. The Appendix by Menezes, Wu, and Zuccherato gives an elementary treatment of hyperelliptic curves. This book is intended for graduate students, advanced undergraduates, and scientists working in various fields of data security.
This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed that is well suited to describing near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is kept as low as possible, and deeper questions related to the mathematical foundations of scattering theory are passed over. It should be understandable for anyone with a basic knowledge of nonrelativistic quantum mechanics. The book is intended for advanced students and researchers, and it is hoped that it will be useful for theorists and experimentalists alike.
The 1980s and 1990s have been a period of exciting new developments in the modelling of decision-making under risk and uncertainty. Extensions of the theory of expected utility and alternative theories of 'non-expected utility' have been devised to explain many puzzles and paradoxes of individual and collective choice behaviour. This volume presents some of the best recent work on the modelling of risk and uncertainty, with applications to problems in environmental policy, public health, economics and finance. Eighteen papers by distinguished economists, management scientists, and statisticians shed new light on phenomena such as the Allais and St. Petersburg paradoxes, the equity premium puzzle, the demand for insurance, and the valuation of public health, safety, and environmental goods. Audience: This work will be of interest to economists, management scientists, risk and policy analysts, and others who study risky decision-making in economic and environmental contexts.
This is a thorough and comprehensive treatment of the theory of NP-completeness in the framework of algebraic complexity theory. Coverage includes Valiant's algebraic theory of NP-completeness; interrelations with the classical theory as well as the Blum-Shub-Smale model of computation; questions of structural complexity; fast evaluation of representations of general linear groups; and the complexity of immanants.