Welcome to Loot.co.za!
The Adomian decomposition method enables the accurate and efficient analytic solution of nonlinear ordinary or partial differential equations without the need to resort to linearization or perturbation approaches. It unifies the treatment of linear and nonlinear, ordinary or partial differential equations, or systems of such equations, into a single basic method, which is applicable to both initial and boundary-value problems. This volume deals with the application of this method to many problems of physics, including some frontier problems which have previously required much more computationally-intensive approaches. The opening chapters deal with various fundamental aspects of the decomposition method. Subsequent chapters deal with the application of the method to nonlinear oscillatory systems in physics, the Duffing equation, boundary-value problems with closed irregular contours or surfaces, and other frontier areas. The potential application of this method to a wide range of problems in diverse disciplines such as biology, hydrology, semiconductor physics, wave propagation, etc., is highlighted. The book is intended for researchers and graduate students of physics, applied mathematics and engineering whose work involves mathematical modelling and the quantitative solution of systems of equations.
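As a flavour of the decomposition idea described above, here is a minimal illustrative sketch (not taken from the book; the helper names `poly_mul`, `poly_int` and `adomian_y_squared` are invented for this example) for the model problem y' = y**2, y(0) = y0, whose exact solution is y0/(1 - y0*t). For the nonlinearity N(y) = y**2 the Adomian polynomials reduce to A_n = sum over i + j = n of y_i * y_j, and each next component is y_{n+1} = integral of A_n from 0 to t.

```python
def poly_mul(p, q):
    """Multiply polynomials stored as coefficient lists (index = power of t)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_int(p):
    """Integrate from 0 to t: c*t**k -> c*t**(k+1)/(k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def adomian_y_squared(y0, n_terms):
    """Adomian components for y' = y**2, y(0) = y0; returns the summed
    truncated power series as a coefficient list."""
    comps = [[float(y0)]]                       # y_0 = initial value
    for n in range(n_terms - 1):
        A_n = [0.0]
        for i in range(n + 1):                  # A_n = sum_{i+j=n} y_i * y_j
            term = poly_mul(comps[i], comps[n - i])
            if len(term) > len(A_n):
                A_n.extend([0.0] * (len(term) - len(A_n)))
            for k, c in enumerate(term):
                A_n[k] += c
        comps.append(poly_int(A_n))             # y_{n+1} = integral_0^t A_n
    total = [0.0] * max(len(c) for c in comps)
    for c in comps:
        for k, v in enumerate(c):
            total[k] += v
    return total

coeffs = adomian_y_squared(1.0, 5)              # truncated series for 1/(1 - t)
```

For y(0) = 1 the components come out as 1, t, t**2, ..., recovering the geometric series for 1/(1 - t) term by term, with no linearization step anywhere in the recursion.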
This book examines the latest research results from combined multi-component and multi-scale explorations. It provides theory, considers underlying numerical methods and presents extensive computational experimentation. The engineering computations featured in this monograph will be of particular interest to many researchers, engineers and computational scientists working in frontier modeling and applications of multicomponent and multiscale problems. Professor Geiser gives specific attention to the aspects of decomposing and splitting delicate structures, controlling decomposition, and the rationale behind many important applications of multi-component and multi-scale analysis. Multicomponent and Multiscale Systems: Theory, Methods and Applications in Engineering also considers the question of why iterative methods can be powerful and more appropriate for well-balanced multiscale and multicomponent coupled nonlinear problems. The book is ideal for engineers and scientists working in theoretical and applied areas.
This volume contains a collection of papers dealing with applications of orthogonal polynomials and methods for their computation, of interest to a wide audience of numerical analysts, engineers, and scientists. The applications address problems in applied mathematics as well as problems in engineering and the sciences.
The human brain is made up of 85 billion neurons, which are connected by over 100 trillion synapses. For more than a century, a diverse array of researchers searched for a language that could be used to capture the essence of what these neurons do and how they communicate. The language they were looking for was mathematics, and we would not be able to understand the brain as we do today without it. In Models of the Mind, author and computational neuroscientist Grace Lindsay explains how mathematical models have allowed scientists to understand and describe many of the brain's processes. She introduces readers to the most important concepts in modern neuroscience, and highlights the tensions that arise when the abstract world of mathematical modelling collides with the messy details of biology. Each chapter of Models of the Mind focuses on mathematical tools that have been applied in a particular area of neuroscience, progressing from the simplest building block of the brain - the individual neuron - through to circuits of interacting neurons, whole brain areas and even the behaviours that brains command. Grace examines the history of the field, starting with experiments done on frog legs in the late eighteenth century and building to the large models of artificial neural networks that form the basis of modern artificial intelligence. Throughout, she reveals the value of using the elegant language of mathematics to describe the machinery of neuroscience.
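The single-neuron models such a book builds from can be surprisingly compact. As a hedged illustration (not an excerpt from the book; the function name and all parameter values are invented for this sketch), a leaky integrate-and-fire neuron integrated with the forward Euler method:

```python
def simulate_lif(i_ext, dt=1e-4, tau=0.02, v_rest=-0.070, v_thresh=-0.050,
                 v_reset=-0.070, r_m=1e7, t_max=0.5):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    Membrane equation: tau * dV/dt = -(V - v_rest) + r_m * i_ext.
    When V crosses v_thresh the neuron 'spikes' and V is reset.
    Returns the list of spike times in seconds.
    """
    v = v_rest
    spikes = []
    steps = int(t_max / dt)
    for step in range(steps):
        dv = (-(v - v_rest) + r_m * i_ext) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes.append(step * dt)   # record spike time
            v = v_reset
    return spikes

# A constant 3 nA input drives regular spiking; zero input stays silent.
quiet = simulate_lif(0.0)
spike_times = simulate_lif(3e-9)
```

Even this one-equation caricature already exhibits the threshold-and-reset behaviour that the more elaborate circuit models in the book are built on.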
Agent based evolutionary search is an emerging paradigm in computational intelligence offering the potential to conceptualize and solve a variety of complex problems such as currency trading, production planning, disaster response management, business process management etc. There has been significant growth in the number of publications related to the development and applications of agent based systems in recent years, which has prompted special issues of journals and dedicated sessions in premier conferences. The notion of an agent with its ability to sense, learn and act autonomously allows the development of a plethora of efficient algorithms to deal with complex problems. This notion of an agent differs significantly from a restrictive definition of a solution in an evolutionary algorithm and opens up the possibility to model and capture emergent behavior of complex systems through a natural agent-oriented decomposition of the problem space. While this flexibility of representation offered by agent based systems is widely acknowledged, they need to be designed for specific purposes capturing the right level of details and description. This edited volume aims to provide the readers with a brief background of agent based evolutionary search, recent developments, and studies dealing with various levels of information abstraction and applications of agent based evolutionary systems. There are 12 peer reviewed chapters in this book authored by distinguished researchers who have shared their experience and findings spanning a wide range of applications.
The time delays in controllers and actuators can either deteriorate or improve the dynamic performance of a controlled mechanical system. Thus, it is desirable to gain an insight into the effect of time delays on the dynamics of a practical system in its design phase. This monograph presents recent advances in system modeling and in the analysis of stability, robust stability and bifurcation by using some new mathematical tools such as the generalized Sturm criterion and Dixon's resultant elimination of polynomials. The theoretical results are demonstrated through a number of examples involving active vehicle chassis, structural control, and the control of chaos in mechanical systems.
The book presents theory and algorithms for secure networked inference in the presence of Byzantines. It derives fundamental limits of networked inference in the presence of Byzantine data and designs robust strategies to ensure reliable performance for several practical network architectures. In particular, it addresses inference (or learning) processes such as detection, estimation or classification, and parallel, hierarchical, and fully decentralized (peer-to-peer) system architectures. Furthermore, it discusses a number of new directions and heuristics to tackle the problem of design complexity in these practical network architectures for inference.
For several decades since its inception, Einstein's general theory of relativity stood somewhat aloof from the rest of physics. Paradoxically, the attributes which normally boost a physical theory - namely, its perfection as a theoretical framework and the extraordinary intellectual achievement underlying it - prevented the general theory from being assimilated in the mainstream of physics. It was as if theoreticians hesitated to tamper with something that is manifestly so beautiful. Happily, two developments in the 1970s have narrowed the gap. In 1974 Stephen Hawking arrived at the remarkable result that black holes radiate after all. And in the second half of the decade, particle physicists discovered that the only scenario for applying their grand unified theories was offered by the very early phase in the history of the Big Bang universe. In both cases, it was necessary to discuss the ideas of quantum field theory in the background of curved spacetime that is basic to general relativity. This is, however, only half the total story. If gravity is to be brought into the general fold of theoretical physics we have to know how to quantize it. To date this has proved a formidable task, although most physicists would agree that, as in the case of grand unified theories, quantum gravity will have applications to cosmology, in the very early stages of the Big Bang universe. In fact, the present picture of the Big Bang universe necessarily forces us to think of quantum cosmology.
Symmetry and Dynamics have played, sometimes dualistic, sometimes complementary, but always a very essential role in the physicist's description and conception of Nature. These are again the basic underlying themes of the present volume. It collects self-contained introductory contributions on some of the recent developments both in mathematical concepts and in physical applications which are becoming very important in current research. So we see in this volume, on the one hand, differential geometry, group representations, topology and algebras and on the other hand, particle equations, particle dynamics and particle interactions. Specifically, this book contains a complete exposition of the theory of deformations of symplectic algebras and quantization, expository material on topology and geometry in physics, and group representations. On the more physical side, we have studies on the concept of particles, on conformal spinors of Cartan, on gauge and supersymmetric field theories, and on relativistic theory of particle interactions and the theory of magnetic resonances. The contributions collected here were originally delivered at two Meetings in Turkey, at Blacksea University in Trabzon and at the University of Bosphorus in Istanbul. But they have been thoroughly revised, updated and extended for this volume. It is a pleasure for me to acknowledge the support of UNESCO, the support and hospitality of Blacksea and Bosphorus Universities for these two memorable Meetings in Mathematical Physics, and to thank the Contributors for their effort and care in preparing this work.
This volume is concerned with the theoretical description of patterns and instabilities and their relevance to physics, chemistry, and biology. More specifically, the theme of the work is the theory of nonlinear physical systems, with emphasis on the mechanisms leading to the appearance of regular patterns of ordered behavior and chaotic patterns of stochastic behavior. The aim is to present basic concepts and current problems from a variety of points of view. In spite of the emphasis on concepts, some effort has been made to bring together experimental observations and theoretical mechanisms to provide a basic understanding of the aspects of the behavior of nonlinear systems which have a measure of generality. Chaos theory has become a real challenge to physicists with very different interests, and also to many other disciplines, of which astronomy, chemistry, medicine, meteorology, economics, and social theory are already embraced at the time of writing. The study of chaos-related phenomena has a truly interdisciplinary character and makes use of important concepts and methods from other disciplines. As one important example, for the description of chaotic structures the branch of mathematics called fractal geometry (associated particularly with the name of Mandelbrot) has proved invaluable. For the discussion of the richness of ordered structures which appear, one relies on the theory of pattern recognition. It is relevant to mention that, to date, computer studies have greatly aided the analysis of theoretical models describing chaos.
Chaos and nonlinear dynamics initially developed as a new emergent field with its foundation in physics and applied mathematics. The highly generic, interdisciplinary quality of the insights gained in the last few decades has spawned myriad applications in almost all branches of science and technology, and even well beyond. Wherever quantitative modeling and analysis of complex, nonlinear phenomena is required, chaos theory and its methods can play a key role. This volume concentrates on reviewing the most relevant contemporary applications of chaotic nonlinear systems as they apply to the various cutting-edge branches of engineering. The book covers the theory as applied to robotics, electronic and communication engineering (for example chaos synchronization and cryptography) as well as to civil and mechanical engineering, where its use in damage monitoring and control is explored. Featuring contributions from active and leading research groups, this collection is ideal both as a reference and as a 'recipe book' full of tried and tested, successful engineering applications.
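As a small, self-contained taste of the kind of diagnostic such applications rely on (an illustration of ours, not an excerpt from the book), the largest Lyapunov exponent of the logistic map x -> r*x*(1 - x) can be estimated directly from an orbit; a positive value is the usual quantitative signature of chaos:

```python
import math

def lyapunov_logistic(r, x0=0.3, burn_in=1_000, n_iter=10_000):
    """Estimate the largest Lyapunov exponent of the logistic map as the
    orbit average of log|f'(x)| = log|r * (1 - 2x)|, after discarding a
    transient of burn_in iterations."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter
```

At r = 3.2 the map settles onto a stable 2-cycle and the exponent is negative; at r = 4.0 the map is fully chaotic and the exponent approaches ln 2, about 0.693.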
This book brings together aspects of statistics and machine learning to provide a comprehensive guide to evaluating, interpreting and understanding biometric data. It naturally leads to topics including data mining and prediction to be examined in detail. The book places an emphasis on the various performance measures available for biometric systems, what they mean, and when they should and should not be applied. The evaluation techniques are presented rigorously; however, they are always accompanied by intuitive explanations. This is important for the increased acceptance of biometrics among non-technical decision makers, and ultimately the general public.
This edited volume is devoted to the now-ubiquitous use of computational models across most disciplines of engineering and science, led by a trio of world-renowned researchers in the field. Focused on recent advances of modeling and optimization techniques aimed at handling computationally-expensive engineering problems involving simulation models, this book will be an invaluable resource for specialists (engineers, researchers, graduate students) working in areas as diverse as electrical engineering, mechanical and structural engineering, civil engineering, industrial engineering, hydrodynamics, aerospace engineering, microwave and antenna engineering, ocean science and climate modeling, and the automotive industry, where design processes are heavily based on CPU-heavy computer simulations. Various techniques, such as knowledge-based optimization, adjoint sensitivity techniques, and fast replacement models (to name just a few) are explored in-depth along with an array of the latest techniques to optimize the efficiency of the simulation-driven design process. High-fidelity simulation models allow for accurate evaluations of the devices and systems, which is critical in the design process, especially to avoid costly prototyping stages. Despite this and other advantages, the use of simulation tools in the design process is quite challenging due to associated high computational cost. The steady increase of available computational resources does not always translate into the shortening of the design cycle because of the growing demand for higher accuracy and necessity to simulate larger and more complex systems. For this reason, automated simulation-driven design-while highly desirable-is difficult when using conventional numerical optimization routines which normally require a large number of system simulations, each one already expensive.
'Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question.' ('The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders.) 'It isn't that they can't see the solution. It is that they can't see the problem.' (G.K. Chesterton, The Scandal of Father Brown, 'The Point of a Pin'.) Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the "tree" of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowski lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering; and prediction and electrical engineering can use Stein spaces.
The fascinating correspondence between Paul Levy and Maurice Frechet spans an extremely active period in French mathematics during the twentieth century. The letters of these two Frenchmen show the vicissitudes of their research and their passionate enthusiasm for the emerging field of modern probability theory. The letters cover various topics of mathematical importance including academic careers and professional travels, issues concerning students and committees, and the difficulties both mathematicians met in being elected to the Paris Academy of Sciences. The technical questions that occupied Levy and Frechet on almost a daily basis are the primary focus of these letters, which are charged with elation, frustration and humour. Their mathematical victories and setbacks unfolded against the dramatic backdrop of the two World Wars and the occupation of France, during which Levy was obliged to go into hiding. The clear and persistent desire of these mathematicians to continue their work whatever the circumstance testifies to the enlightened spirit of their discipline, which persisted against all odds. The book contains a detailed and comprehensive introduction to the central topics of the correspondence. The original text of the letters is also annotated by numerous footnotes for helpful guidance. Paul Levy and Maurice Frechet will be useful to anybody interested in the history of mathematics in the twentieth century and, in particular, the birth of modern probability theory.
This book presents a mathematically-based introduction to the fascinating topic of Fuzzy Sets and Fuzzy Logic. It may be used as a textbook at both undergraduate and graduate levels, and also as a reference guide for mathematicians, scientists or engineers who would like to get an insight into Fuzzy Logic. Fuzzy Sets were introduced by Lotfi Zadeh in 1965 and have since been used in many applications. As a consequence, there is a vast literature on the practical applications of fuzzy sets, while the theory has a more modest coverage. The main purpose of the present book is to reduce this gap by providing a theoretical introduction to Fuzzy Sets based on Mathematical Analysis and Approximation Theory. Well-known applications, such as fuzzy control, are also discussed in this book and placed on new ground: a theoretical foundation. Moreover, a few advanced chapters and several new results are included. These comprise, among others, a new systematic and constructive approach to fuzzy inference systems of Mamdani and Takagi-Sugeno types that investigates their approximation capability by providing new error estimates.
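To make the flavour of Mamdani-type inference concrete, here is a minimal two-rule sketch (our illustration, not an excerpt from the book; all membership functions, rules and numeric values are invented) using min for implication, max for aggregation, and centroid defuzzification on a sampled output universe:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_fan_speed(temp):
    """Two-rule Mamdani controller.

    Rule 1: if temp is COLD then speed is LOW
    Rule 2: if temp is HOT  then speed is HIGH
    """
    w_cold = tri(temp, 0.0, 10.0, 25.0)          # firing strength of rule 1
    w_hot = tri(temp, 15.0, 30.0, 40.0)          # firing strength of rule 2
    xs = [i * 0.5 for i in range(201)]           # output universe: 0 .. 100
    num = den = 0.0
    for x in xs:
        mu_low = min(w_cold, tri(x, 0.0, 25.0, 60.0))    # clipped LOW set
        mu_high = min(w_hot, tri(x, 40.0, 75.0, 100.0))  # clipped HIGH set
        mu = max(mu_low, mu_high)                # aggregate the rule outputs
        num += mu * x                            # centroid numerator
        den += mu                                # centroid denominator
    return num / den if den else 0.0
```

A cold reading of 5 degrees yields a low defuzzified speed and a hot reading of 35 degrees a high one, illustrating how graded memberships blend the two rules rather than switching between them.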
This volume surveys the rich and interesting dynamical systems that arise in ecology and the environmental sciences. Each chapter introduces students and scholars to the state of the art in an exciting area, presents new results, and inspires future contributions to mathematical modeling in ecology and environmental sciences.
This book is a self-contained, tutorial-based introduction to quantum information theory and quantum biology. It serves as a single-source reference to the topic for researchers in bioengineering, communications engineering, electrical engineering, applied mathematics, biology, computer science, and physics. The book provides all the essential principles of the quantum biological information theory required to describe the quantum information transfer from DNA to proteins, the sources of genetic noise and genetic errors as well as their effects. Integrates quantum information and quantum biology concepts; Assumes only knowledge of basic concepts of vector algebra at undergraduate level; Provides a thorough introduction to basic concepts of quantum information processing, quantum information theory, and quantum biology; Includes in-depth discussion of the quantum biological channel modelling, quantum biological channel capacity calculation, quantum models of aging, quantum models of evolution, quantum models on tumor and cancer development, quantum modeling of bird navigation compass, quantum aspects of photosynthesis, quantum biological error correction.
Poisson Point Processes provides an overview of non-homogeneous and multidimensional Poisson point processes and their numerous applications. Readers will find constructive mathematical tools and applications ranging from emission and transmission computed tomography to multiple target tracking and distributed sensor detection, written from an engineering perspective. A valuable discussion of the basic properties of finite random sets is included. Maximum likelihood estimation techniques are discussed for several parametric forms of the intensity function, including Gaussian sums, together with their Cramer-Rao bounds. These methods are then used to investigate several medical imaging techniques, including positron emission tomography (PET), single photon emission computed tomography (SPECT), and transmission tomography (CT scans); various multi-target and multi-sensor tracking applications; practical applications in areas like distributed sensing and detection; and related finite point processes such as marked processes, hard core processes, cluster processes, and doubly stochastic processes. Perfect for researchers, engineers and graduate students working in electrical engineering and computer science, Poisson Point Processes will prove to be an extremely valuable volume for those seeking insight into the nature of these processes and their diverse applications.
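A non-homogeneous Poisson process of the kind discussed here can be simulated by Lewis-Shedler thinning. The sketch below is ours (the function name is invented) and assumes the intensity is bounded above by lam_max on the whole window:

```python
import random

def sample_nhpp(intensity, lam_max, t_max, seed=0):
    """Sample a non-homogeneous Poisson process on (0, t_max] by thinning.

    Candidates arrive as a homogeneous Poisson process of rate lam_max;
    a candidate at time t is kept with probability intensity(t) / lam_max.
    Requires intensity(t) <= lam_max for all t in (0, t_max].
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)       # next homogeneous candidate
        if t > t_max:
            return events
        if rng.random() * lam_max < intensity(t):
            events.append(t)                # accepted (thinned) point

# Linearly rising rate lambda(t) = 2t, bounded by 20 on (0, 10];
# the expected number of points is the integral of the intensity, here 100.
pts = sample_nhpp(lambda t: 2.0 * t, 20.0, 10.0)
```

The accepted points cluster toward the right end of the window, where the intensity is highest, exactly as the non-homogeneous model predicts.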
Over the past two decades, swarm intelligence has emerged as a powerful approach to solving optimization as well as other complex problems. Swarm intelligence models are inspired by social behaviours of simple agents interacting among themselves as well as with the environment, e.g., flocking of birds, schooling of fish, foraging of bees and ants. The collective behaviours that emerge out of the interactions at the colony level are useful in achieving complex goals. The main aim of this research book is to present a sample of recent innovations and advances in techniques and applications of swarm intelligence. Topics covered in this book include: particle swarm optimization and hybrid methods, ant colony optimization and hybrid methods, bee colony optimization, glowworm swarm optimization, and complex social swarms, application of various swarm intelligence models to operational planning of energy plants, modeling and control of nanorobots, classification of documents, identification of disease biomarkers, and prediction of gene signals. The book is directed to researchers, practicing professionals, and undergraduate as well as graduate students of all disciplines who are interested in enhancing their knowledge in techniques and applications of swarm intelligence.
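As a concrete, hedged illustration of the first topic listed above (a generic textbook sketch, not code from the book), here is a global-best particle swarm optimizer minimizing the sphere function, using the standard velocity update v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x):

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal global-best particle swarm optimizer; returns (best_x, best_f)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and attraction weights
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                   # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:                  # update personal best
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:                 # update global best
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

No gradient information is used anywhere: the swarm converges purely through each particle's memory of its own best position and the colony-level attraction toward the global best, which is the emergent cooperation the blurb describes.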
This book is based on the premise that the entropy concept, a fundamental element of probability theory as logic, governs all of thermal physics, both equilibrium and nonequilibrium. The variational algorithm of J. Willard Gibbs, dating from the 19th century and extended considerably over the following 100 years, is shown to be the governing feature over the entire range of thermal phenomena, such that only the nature of the macroscopic constraints changes. Beginning with a short history of the development of the entropy concept by Rudolph Clausius and his predecessors, along with the formalization of classical thermodynamics by Gibbs, the first part of the book describes the quest to uncover the meaning of thermodynamic entropy, which leads to its relationship with probability and information as first envisioned by Ludwig Boltzmann. Recognition of entropy first of all as a fundamental element of probability theory in the mid-twentieth century led to deep insights into both statistical mechanics and thermodynamics, the details of which are presented here in several chapters. The later chapters extend these ideas to nonequilibrium statistical mechanics in an unambiguous manner, thereby exhibiting the overall unifying role of the entropy.
The analysis, processing, evolution, optimization and/or regulation, and control of shapes and images appear naturally in engineering (shape optimization, image processing, visual control), numerical analysis (interval analysis), physics (front propagation), biological morphogenesis, population dynamics (migrations), and dynamic economic theory. These problems are currently studied with tools forged out of differential geometry and functional analysis, thus requiring shapes and images to be smooth. However, shapes and images are basically sets, most often not smooth. J.-P. Aubin thus constructs another vision, where shapes and images are just any compact set. Hence their evolution -- which requires a kind of differential calculus -- must be studied in the metric space of compact subsets. Despite the loss of linearity, one can transfer most of the basic results of differential calculus and differential equations in vector spaces to mutational calculus and mutational equations in any mutational space, including naturally the space of nonempty compact subsets. "Mutational and Morphological Analysis" offers a structure that embraces and integrates the various approaches, including shape optimization and mathematical morphology. Scientists and graduate students will find here other powerful mathematical tools for studying problems dealing with shapes and images arising in so many fields.
In January 2012 an Oberwolfach workshop took place on the topic of recent developments in the numerics of partial differential equations. Focus was laid on methods of high order and on applications in Computational Fluid Dynamics. The book covers most of the talks presented at this workshop.
Optimization problems abound in most fields of science, engineering, and technology. In many of these problems it is necessary to compute the global optimum (or a good approximation) of a multivariable function. The variables that define the function to be optimized can be continuous and/or discrete and, in addition, many times satisfy certain constraints. Global optimization problems belong to the complexity class of NP-hard problems. Such problems are very difficult to solve. Traditional descent optimization algorithms based on local information are not adequate for solving these problems. In most cases of practical interest the number of local optima increases, on the average, exponentially with the size of the problem (number of variables). Furthermore, most of the traditional approaches fail to escape from a local optimum in order to continue the search for the global solution. Global optimization has received a lot of attention in the past ten years, due to the success of new algorithms for solving large classes of problems from diverse areas such as engineering design and control, computational chemistry and biology, structural optimization, computer science, operations research, and economics. This book contains refereed invited papers presented at the conference on "State of the Art in Global Optimization: Computational Methods and Applications" held at Princeton University, April 28-30, 1995. The conference presented current research on global optimization and related applications in science and engineering. The papers included in this book cover a wide spectrum of approaches for solving global optimization problems and applications.
This book is an introduction to the mathematical description of information in science and engineering. The necessary mathematical theory will be treated in a more vivid way than in the usual theoretical proof structure. This enables the reader to develop an idea of the connections between different information measures and to understand the trains of thought in their derivation. As there exist a great number of different possible ways to describe information, these measures are presented in a coherent manner. Some examples of the information measures examined are: Shannon information, applied in coding theory; the Akaike information criterion, used in system identification to determine auto-regressive models and in neural networks to identify the number of neurons; and the Cramer-Rao bound or Fisher information, describing the minimal variances achieved by unbiased estimators.
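Two of the measures named above are simple enough to state in a few lines. The following sketch is illustrative (our function names, not the book's notation), with the usual conventions that zero-probability terms drop out of the entropy sum and that a lower AIC indicates the preferred model:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum p * log2(p) in bits; 0 * log 0 is taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# A fair 4-way choice carries exactly 2 bits; a certain outcome carries none.
uniform_bits = shannon_entropy([0.25] * 4)
certain_bits = shannon_entropy([1.0])
```

The AIC's penalty term 2k is what lets it trade goodness of fit against model order, for example when choosing the order of an auto-regressive model as mentioned in the blurb.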