Mathematical modeling is both a skill and an art, and it must be practiced in order to maintain and enhance those skills. Though the topics covered in this book are the typical topics of most mathematical modeling courses, this book is best used by individuals or groups who have already taken an introductory mathematical modeling course. Advanced Mathematical Modeling with Technology will be of interest to instructors and students in courses focused on discrete modeling or modeling for decision making. Each chapter begins with a problem to motivate the reader. The problem states "what" the issue is or what problem needs to be solved. In each chapter, the authors apply the principles of mathematical modeling to that problem and present the steps in obtaining a model. The key focus is the mathematical model, and the technology is presented as a method to solve that model or perform sensitivity analysis. We have selected MAPLE, Excel, and R, where applicable to the content, because of their wide accessibility. The authors utilize technology to build, compute, or implement the model and then analyze it. Features: MAPLE (c), Excel (c), and R (c) to support the mathematical modeling process. Excel templates, macros, and programs are available upon request from the authors. Maple templates and example solutions are also available. Includes coverage of mathematical programming. The power and limitations of simulations are covered. Introduces multi-attribute decision making (MADM) and game theory for solving problems. The book provides the decision maker with an overview of the wide range of applications of quantitative approaches to aid in the decision making process, and presents a framework for decision making. Table of Contents 1. Perfect Partners: Mathematical Modeling and Technology 2. Review of Modeling with Discrete Dynamical Systems and Modeling Systems of DDS 3. Modeling with Differential Equations 4. Modeling Systems of Ordinary Differential Equations 5. 
Regression and Advanced Regression Methods and Models 6. Linear, Integer and Mixed Integer Programming 7. Nonlinear Optimization Methods 8. Multivariable Optimization 9. Simulation Models 10. Modeling Decision Making with Multi-Attribute Decision Modeling with Technology 11. Modeling with Game Theory 12. Appendix: Using R Index Biographies Dr. William P. Fox is currently a visiting professor of Computational Operations Research at the College of William and Mary. He is an emeritus professor in the Department of Defense Analysis at the Naval Postgraduate School and teaches a three-course sequence in mathematical modeling for decision making. He received his Ph.D. in Industrial Engineering from Clemson University. He taught at the United States Military Academy for twelve years until retiring, and at Francis Marion University, where he was chair of mathematics for eight years. His many publications and scholarly activities include more than twenty books and one hundred and fifty journal articles. Colonel (R) Robert E. Burks, Jr., Ph.D. is an Associate Professor in the Defense Analysis Department of the Naval Postgraduate School (NPS) and the Director of the NPS Wargaming Center. He holds a Ph.D. in Operations Research from the Air Force Institute of Technology. He is a retired logistics Army Colonel with more than thirty years of military experience in leadership, advanced analytics, decision modeling, and logistics operations, and he served as an Army Operations Research analyst at the Naval Postgraduate School, TRADOC Analysis Center, United States Military Academy, and the United States Army Recruiting Command.
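The discrete dynamical systems (DDS) reviewed in Chapter 2 can be illustrated with a minimal sketch. This example is not taken from the book; the retention rate and dose below are invented values for a classic repeated-dosing model.

```python
# Minimal discrete dynamical system (DDS) sketch: repeated drug dosing.
# a(n+1) = r*a(n) + d, where r is the fraction of drug retained between
# doses and d is the dose. The equilibrium is d / (1 - r).

def iterate_dds(a0, r, d, steps):
    """Iterate a(n+1) = r*a(n) + d and return the full trajectory."""
    traj = [a0]
    for _ in range(steps):
        traj.append(r * traj[-1] + d)
    return traj

trajectory = iterate_dds(a0=0.0, r=0.5, d=100.0, steps=20)
equilibrium = 100.0 / (1 - 0.5)
print(trajectory[-1])  # approaches the equilibrium value 200
```

Because |r| < 1, the trajectory converges to the equilibrium regardless of the starting amount, which is the kind of long-term behavior question a DDS chapter typically asks.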
Includes over 250 solved problems to supplement graduate-level courses in fluid mechanics and turbomachinery. Enables students to practice applying key concepts of fluid mechanics and the governing conservation laws to solve real-world problems. Uses the physics-first approach, allowing for a good understanding of the problem physics and the results obtained. Covers problems on flowpath aerodynamics design. Covers problems on secondary air systems modeling of gas turbines.
Inhomogeneous Random Evolutions and Their Applications explains how to model various dynamical systems in finance and insurance with time-inhomogeneous characteristics. It includes modeling for: financial underlyings and derivatives via Levy processes with time-dependent characteristics; limit order books in algorithmic and high-frequency trading (HFT), with price-change counting processes having time-dependent intensities; risk processes which count the number of claims with time-dependent conditional intensities; multi-asset price impact from distressed selling; and regime-switching Levy-driven diffusion-based price dynamics. The initial models for those systems are very complicated, which is why the author's approach helps to simplify their study. The book uses a very general approach to modeling those systems via abstract inhomogeneous random evolutions in Banach spaces. To simplify their investigation, it first applies the averaging principle (the long-run stability property, or law of large numbers [LLN]) to obtain a deterministic function in the long run. To capture the rate of convergence in the LLN, it then uses the functional central limit theorem (FCLT), such that the associated cumulative process, centered around that deterministic function and suitably scaled in time, may be approximated by an orthogonal martingale measure in general, and by standard Brownian motion in particular, as the scale parameter increases. Thus, this approach allows the author to easily link, for example, microscopic activities with macroscopic ones in HFT, connecting the parameters driving the HFT with the daily volatilities. This method also helps to easily calculate ruin and ultimate ruin probabilities for the risk process. All results in the book are new and original, and can be easily implemented in practice.
David Sandwell developed this advanced textbook over a period of nearly 30 years for his graduate course at Scripps Institution of Oceanography. The book augments the classic textbook Geodynamics by Don Turcotte and Jerry Schubert, presenting more complex and foundational mathematical methods and approaches to geodynamics. The main new tool developed in the book is the multi-dimensional Fourier transform for solving linear partial differential equations. The book comprises nineteen chapters, including: the latest global data sets; quantitative plate tectonics; plate driving forces associated with lithospheric heat transfer and subduction; the physics of the earthquake cycle; postglacial rebound; and six chapters on gravity field development and interpretation. Each chapter has a set of student exercises that make use of the higher-level mathematical and numerical methods developed in the book. Solutions to the exercises are available online for course instructors, on request.
Mathematicians have devised various chaotic systems that are modeled by integer- or fractional-order differential equations, and whose mathematical models can generate chaos or hyperchaos. The numerical methods used to simulate those integer- and fractional-order chaotic systems are quite different, and their accuracy affects the evaluation of characteristics like Lyapunov exponents, Kaplan-Yorke dimension, and entropy. One challenge is estimating the step size for running a numerical method. For self-excited attractors this can be done by analyzing the eigenvalues, while for hidden attractors it is difficult to evaluate the equilibrium points that are required to formulate the Jacobian matrices. Time simulation of fractional-order chaotic oscillators also requires estimating a memory length to achieve exact results, which is associated with memory requirements in hardware design. For these reasons, simulating chaotic/hyperchaotic oscillators of integer/fractional order and with self-excited/hidden attractors is quite important for evaluating their Lyapunov exponents, Kaplan-Yorke dimension, and entropy. Further, to improve the dynamics of the oscillators, their main characteristics can be optimized by applying metaheuristics, which basically consists of varying the values of the coefficients of a mathematical model. The optimized models can then be implemented using commercially available amplifiers, field-programmable analog arrays (FPAAs), field-programmable gate arrays (FPGAs), microcontrollers, graphic processing units, and even nanometer integrated-circuit technology. The book describes the application of different numerical methods to simulate integer/fractional-order chaotic systems. These methods are used within optimization loops to maximize positive Lyapunov exponents, Kaplan-Yorke dimension, and entropy. 
Single- and multi-objective optimization approaches applying metaheuristics are described, as well as their tuning techniques to generate feasible solutions that are suitable for electronic implementation. The book details several applications of chaotic oscillators, such as random bit/number generators, cryptography, secure communications, robotics, and the Internet of Things.
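As a hedged illustration of one characteristic the blurb names, the Lyapunov exponent, here is a sketch for the one-dimensional logistic map. This is a standard textbook example, not necessarily one of the oscillators treated in the book; the seed and iteration counts are arbitrary choices.

```python
import math

def logistic_lyapunov(r, x0=0.1, n_transient=1000, n_iter=100000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along the orbit."""
    x = x0
    for _ in range(n_transient):   # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n_iter

print(logistic_lyapunov(4.0))  # positive: chaotic regime (theory: ln 2)
print(logistic_lyapunov(2.5))  # negative: orbit settles on a fixed point
```

A positive exponent signals sensitive dependence on initial conditions, which is exactly the quantity the optimization loops described above try to maximize.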
One cannot watch or read the news these days without hearing about the models for COVID-19 or the testing that must occur to approve vaccines or treatments for the disease. The purpose of Mathematical Modeling in the Age of a Pandemic is to shed some light on the meaning and interpretations of many of the types of models that are or might be used in the presentation of analysis. Understanding the concepts presented is essential to the entire modeling process of a pandemic. From the virus itself and its infection and death rates to the process of testing a vaccine or eventually a cure, the author builds, presents, and tests models. This book is an attempt, based on available data, to add some validity to the models developed and used, showing how closely the models predict "results" from previous pandemics such as the Spanish flu in 1918 and, more recently, the Hong Kong flu. The author then applies those same models to Italy, New York City, and the United States as a whole. Modeling is a process. It is essential to understand that many assumptions go into each type of model, and those assumptions influence the interpretation of the results. Regardless of the modeling approach, the results generally point in approximately the same direction. This book reveals how these interesting results are obtained.
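The blurb does not name the specific models the author uses. As a hedged sketch, a standard SIR compartmental model of the kind commonly used in pandemic modeling might look like the following; beta and gamma are illustrative values, not fitted to any real outbreak.

```python
# Minimal SIR (susceptible-infected-recovered) sketch using Euler steps.
# dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
# with S, I, R as population proportions so S + I + R = 1.

def sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Integrate the SIR equations with forward Euler and return (S, I, R)."""
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

s, i, r = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0, days=160)
print(round(s + i + r, 6))  # total proportion is conserved at 1.0
```

Changing the assumptions (here, the contact rate beta and recovery rate gamma) changes the predicted epidemic size, which is the point the blurb makes about assumptions driving interpretation.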
Fractional Brownian Motion (FBM) is a classical continuous self-similar Gaussian field with stationary increments. In 1940, work of Kolmogorov on turbulence led him to introduce this quite natural extension of Brownian Motion, which, in contrast with the latter, has correlated increments. However, the name FBM is due to a famous article by Mandelbrot and Van Ness, published in 1968. In that article and in several subsequent works, Mandelbrot emphasized the importance of FBM as a model in several applied areas, and thus made it known to a wide community. FBM has therefore been studied by many authors and used in many applications. In spite of the fact that FBM is a very useful model, it does not always fit real data. This is the reason why, for at least two decades, there has been increasing interest in the construction of new classes of random models extending it, which offer more flexibility. A paradigmatic example is the class of Multifractional Fields. Multifractional means that fractal properties of models, typically the roughness of paths and the self-similarity of probability distributions, are locally allowed to change from place to place. In order to sharply determine the path behavior of Multifractional Fields, a wavelet strategy, which can be considered new in the probabilistic framework, has been developed since the end of the 1990s. It is somewhat inspired by rather non-standard methods related to the fine study of Brownian Motion roughness through its representation in the Faber-Schauder system. The main goal of the book is to present the motivations behind this wavelet strategy, and to explain how it can be applied to some classical examples of Multifractional Fields. The book also discusses some topics concerning these fields which are not directly related to the wavelet strategy.
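As a hedged illustration (not from the book), FBM with Hurst parameter H can be sampled exactly on a finite grid by Cholesky-factorizing its covariance Cov(B_H(t), B_H(s)) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2. The grid size and seed below are arbitrary.

```python
import numpy as np

def fbm_cholesky(n, hurst, T=1.0, seed=0):
    """Sample a fractional Brownian motion path at n grid points on (0, T]
    via Cholesky factorization of its covariance matrix."""
    t = np.linspace(T / n, T, n)           # skip t = 0, where B_H(0) = 0
    two_h = 2 * hurst
    cov = 0.5 * (t[:, None]**two_h + t[None, :]**two_h
                 - np.abs(t[:, None] - t[None, :])**two_h)
    L = np.linalg.cholesky(cov)            # cov is positive definite here
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)

t, path = fbm_cholesky(n=256, hurst=0.3, seed=42)
# hurst = 0.5 recovers ordinary Brownian motion (independent increments);
# hurst < 0.5 gives rougher paths, hurst > 0.5 smoother, persistent ones.
```

The O(n^3) Cholesky cost is why faster circulant-embedding methods exist, but this version makes the covariance structure explicit.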
Linear Algebra: An Inquiry-based Approach is written to give instructors a tool for teaching students to develop a mathematical concept from first principles. The inquiry-based approach is central to this development. The text is organized around, and offers, the standard topics expected in a first undergraduate course in linear algebra. In our approach, students begin with a problem and develop the mathematics necessary to describe, solve, and generalize it. Thus students learn a vital skill for the 21st century: the ability to create a solution to a problem. This text is offered to foster an environment that supports the creative process. The twin goals of this textbook are: *Providing opportunities to be creative, *Teaching "ways of thinking" that will make it easier for students to be creative. To motivate the development of the concepts and techniques of linear algebra, we include more than two hundred activities on a wide range of problems, from purely mathematical questions through applications in biology, computer science, cryptography, and more. Table of Contents Introduction and Features For the Student . . . and Teacher Prerequisites Suggested Sequences 1 Tuples and Vectors 2 Systems of Linear Equations 3 Transformations 4 Matrix Algebra 5 Vector Spaces 6 Determinants 7 Eigenvalues and Eigenvectors 8 Decomposition 9 Extras Bibliography Index Biography Jeff Suzuki is Associate Professor of Mathematics at Brooklyn College and holds a Ph.D. from Boston University. His research interests include mathematics education, the history of mathematics, and the application of mathematics to society and technology. He is a two-time winner of the prestigious Carl B. Allendoerfer Award for expository writing. His publications have appeared in The College Mathematics Journal, Mathematics Magazine, Mathematics Teacher, and the American Mathematical Society's blog on teaching and learning mathematics. 
His YouTube channel (http://youtube.com/jeffsuzuki1) includes videos on mathematical subjects ranging from elementary arithmetic to linear algebra, cryptography, and differential equations.
This is the first book to discuss the search for new physics in charged leptons, neutrons, and quarks in one coherent volume. The area of indirect searches for new physics is highly topical; though no new physics particles have yet been observed directly at the Large Hadron Collider at CERN, the methods described in this book will provide researchers with the necessary tools to keep searching for new physics. It describes the lines of research that attempt to identify quantum effects of new physics particles in low-energy experiments, in addition to detailing the mathematical basis and theoretical and phenomenological methods involved in the searches, whilst making a clear distinction between model-dependent and model-independent methods employed to make predictions. This book will be a valuable guide for graduate students and early-career researchers in particle and high energy physics who wish to learn about the techniques used in modern predictions of new physics effects at low energies, whilst also serving as a reference for researchers at other levels. Key features: * Takes an accessible, pedagogical approach suitable for graduate students and those seeking an overview of this new and fast-growing field * Illustrates common theoretical trends seen in different subfields of particle physics * Valuable both for researchers in the phenomenology of elementary particles and for experimentalists
An Introduction to Compressible Flow, Second Edition covers the material typical of a single-semester course in compressible flow. The book begins with a brief review of thermodynamics and control volume fluid dynamics, then proceeds to cover isentropic flow, normal shock waves, shock tubes, oblique shock waves, Prandtl-Meyer expansion fans, Fanno-line flow, Rayleigh-line flow, and conical shock waves. The book includes a chapter on linearized flow following chapters on oblique shocks and Prandtl-Meyer flows to appropriately ground students in this approximate method. It includes detailed appendices to support problem solutions and covers new oblique shock tables, which allow for quick and accurate solutions of flows with concave corners. The book is intended for senior undergraduate engineering students studying thermal-fluids and practicing engineers in the areas of aerospace or energy conversion. This book is also useful in providing supplemental coverage of compressible flow material in gas turbine and aerodynamics courses.
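The isentropic flow relations covered early in such a course can be sketched as follows. These are the standard gas-dynamics formulas; gamma = 1.4 assumes air treated as a calorically perfect gas.

```python
# Isentropic flow relations sketch for a calorically perfect gas.

def isentropic_ratios(mach, gamma=1.4):
    """Return the stagnation-to-static temperature and pressure ratios:
    T0/T = 1 + (gamma - 1)/2 * M^2 and p0/p = (T0/T)^(gamma/(gamma - 1))."""
    t_ratio = 1 + 0.5 * (gamma - 1) * mach**2
    p_ratio = t_ratio ** (gamma / (gamma - 1))
    return t_ratio, p_ratio

t_ratio, p_ratio = isentropic_ratios(2.0)
print(t_ratio)  # 1.8 for Mach 2 in air
```

Tables of exactly these ratios are what the book's appendices tabulate for quick problem solving; the function form just makes the dependence on Mach number and gamma explicit.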
This book, first published in 1981, offers a critical review of the techniques of mathematical modelling and their appropriate application to military operations research - the analysis of data (historical data, exercise and test results, and intelligence) in preparation for war. The virtues of sophistication via simplicity, and the beauty of the artful finesse, emerge as the signature of successful modelling.
Designed as an introduction to numerical methods for students, this book combines mathematical correctness with numerical performance, and concentrates on numerical methods and problem solving. It applies actual numerical solution strategies to formulated process models to help identify and solve chemical engineering problems. The second edition comes with an additional chapter on numerical integration and a section on boundary value problems in the relevant chapter. Additional material on general modelling principles and mass/energy balances, and a separate section on DAEs, is also included. The case study section has been extended with additional examples.
Written for a two-semester Master's or graduate course, this comprehensive treatise intertwines theory and experiment in an original approach that covers all aspects of modern particle physics. The author uses rigorous step-by-step derivations and provides more than 100 end-of-chapter problems for additional practice to ensure that students will not only understand the material but also be able to apply their knowledge. Featuring up-to-date experimental material, including the discovery of the Higgs boson at CERN and of neutrino oscillations, this monumental volume also serves as a one-stop reference for particle physics researchers of all levels and specialties. Richly illustrated with more than 450 figures, the text guides students through all the intricacies of quantum mechanics and quantum field theory in an intuitive manner that few books achieve.
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
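As a minimal illustration of modeling dependence with a copula (a standard construction, not a method specific to this book), one can sample from a bivariate Gaussian copula by correlating two normals and pushing each through the standard normal CDF. The correlation and sample size below are arbitrary.

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) on [0,1]^2 from a Gaussian copula with
    correlation rho: each margin is uniform, but u and v are dependent."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + sqrt(1 - rho**2) * rng.standard_normal(n)
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
    return np.vectorize(phi)(z1), np.vectorize(phi)(z2)

u, v = gaussian_copula_sample(rho=0.8, n=10000, seed=1)
# u and v are each (approximately) uniform on [0,1] yet strongly dependent
```

This separation of margins from dependence, Sklar's theorem in action, is the core idea the book develops rigorously.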
The aim of this book is to give an overview of transient chaos, based on the results of nearly three decades of intensive research. One belief that motivated us to write this book is that transient chaos may not have been fully appreciated even within the nonlinear-science community, let alone in other scientific disciplines.
Our lives and the functioning of modern societies are intimately intertwined with electricity consumption. We owe our quality of life to electricity. However, the electricity generation industry is partly responsible for some of the most pressing challenges we currently face, including climate change and the pollution of natural environments, energy inequality, and energy insecurity. Maintaining our standard of living while addressing these problems is the ultimate challenge for the future of humanity. The objective of this book is to equip engineering and science students and professionals to tackle this task. Written by an expert with over 25 years of combined academic and industrial experience in the field, this comprehensive textbook covers both fossil fuel and renewable power generation technologies. For each topic, fundamental principles, historical backgrounds, and state-of-the-art technologies are covered. Conventional power production technologies, steam power plants, gas turbines, and combined cycle power plants are presented. For steam power plants, the historical background, thermodynamic principles, steam generators, combustion systems, emission reduction technologies, steam turbines, condensate-feedwater systems, and cooling systems are covered in separate chapters. Similarly, the historical background and thermodynamic principles of gas turbines, along with comprehensive discussions on compressors, combustors, and turbines, are presented, followed by combined cycle power plants. The second half of the book deals with renewable energy sources, including solar photovoltaic systems, solar thermal power plants, wind turbines, ocean energy systems, and geothermal power plants. For each energy source, the available energy and its variations, historical background, operational principles, basic calculations, current and future technologies, and environmental impacts are presented. 
Finally, energy storage systems as required technologies to address the intermittent nature of renewable energy sources are covered. While the book has been written with the needs of undergraduate and graduate college students in mind, professionals interested in widening their understanding of the field can also benefit from it.
Mathematical Modeling using Fuzzy Logic has been a dream project for the author. Fuzzy logic provides a unique method of approximate reasoning in an imperfect world. This text is a bridge to the principles of fuzzy logic through an application-focused approach to selected topics in engineering and management. The many examples point to the richer solutions obtained through fuzzy logic and to the possibilities of much wider applications. Relatively few texts on fuzzy logic applications are currently available, and the style and content of this text is complementary to those already available. New areas of application, like the application of fuzzy logic in modeling sustainability, are presented in a graded approach in which the underlying concepts are first described. The text is broadly divided into two parts: the first treats processes, materials, and system applications related to fuzzy logic, and the second delves into the modeling of sustainability with the help of fuzzy logic. This book offers comprehensive coverage of the most essential topics, including: Treating processes, materials, and system applications related to fuzzy logic Highlighting new areas of application of fuzzy logic Identifying possibilities of much wider applications of fuzzy logic Modeling of sustainability with the help of fuzzy logic The level of the text enables selections to be made for the substance of undergraduate-, graduate-, and postgraduate-level courses, and there is sufficient volume and quality for the basis of a postgraduate course. A more restricted and judicious selection can provide the material for a professional short course or various university-level courses.
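As a hedged sketch of the approximate reasoning the text describes (illustrative only; the set boundaries below are invented, not taken from the book), a triangular fuzzy membership function assigns partial degrees of membership instead of a crisp yes/no:

```python
# Triangular fuzzy membership sketch.

def triangular(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set with
    feet at a and c and peak at b (membership 1 at b, 0 outside [a, c])."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A hypothetical "comfortable temperature" set peaking at 22 degrees C:
print(triangular(22, 18, 22, 26))  # 1.0 (full membership)
print(triangular(20, 18, 22, 26))  # 0.5 (partial membership)
```

Rule-based fuzzy systems combine such membership degrees through fuzzy AND/OR operators and then defuzzify, which is the machinery behind the engineering and sustainability applications the book covers.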
First published in 1992, AY's Neuroanatomy of C. elegans for Computation provides the neural circuitry database of the nematode Caenorhabditis elegans, both in printed form and in ASCII files on 5.25-inch diskettes (for use on IBM (R) and compatible personal computers, Macintosh (R) computers, and higher level machines). Tables of connections among neuron classes, synapses among individual neurons, gap junctions among neurons, worm cells and their embryonic origin, and synthetically derived neuromuscular connections are presented together with the references from which the data were compiled and edited. Sample data files and source codes of FORTRAN and BASIC programs are provided to illustrate the use of mathematical tools for any researcher or student interested in examining a natural neural network and discovering what makes it tick.
A First Course in Ergodic Theory provides readers with an introductory course in ergodic theory. This textbook has been developed from the authors' own notes on the subject, which they have been teaching since the 1990s. Over the years they have added topics, theorems, examples, and explanations from various sources. The result is a book that is easy to teach from and easy to learn from - designed to require only minimal prerequisites. Features Suitable for readers with only a basic knowledge of measure theory, some topology, and a very basic knowledge of functional analysis Perfect as the primary textbook for a course in ergodic theory Examples are described and studied in detail when new properties are presented.
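A central result of any first course in ergodic theory, Birkhoff's ergodic theorem, can be illustrated with the irrational rotation of the circle. This is a standard example and the sketch is not taken from the book; the choice of f, starting point, and rotation angle are arbitrary.

```python
import math

def birkhoff_average(f, x0, alpha, n):
    """Time average (1/n) * sum_{k<n} f(T^k x0) for the rotation
    T(x) = x + alpha mod 1, which is ergodic for Lebesgue measure
    whenever alpha is irrational."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# Birkhoff's theorem: the time average converges to the space average,
# here the integral of f(x) = x over [0, 1), which equals 1/2.
avg = birkhoff_average(lambda x: x, x0=0.2, alpha=math.sqrt(2) - 1, n=100000)
print(avg)  # close to 0.5
```

The same computation along orbits of a non-ergodic map would depend on the starting point, which is exactly the distinction the theorem captures.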
Game theory has revolutionised our understanding of industrial organisation and the traditional theory of the firm. Despite these advances, industrial economists have tended to rely on a restricted set of tools from game theory, focusing on static and repeated games to analyse firm structure and behaviour. Luca Lambertini, a leading expert on the application of differential game theory to economics, argues that many dynamic phenomena in industrial organisation (such as monopoly, oligopoly, advertising, R&D races) can be better understood and analysed through the use of differential games. After illustrating the basic elements of the theory, Lambertini guides the reader through the main models, spanning from optimal control problems describing the behaviour of a monopolist through to oligopoly games in which firms' strategies include prices, quantities and investments. This approach will be of great value to students and researchers in economics and those interested in advanced applications of game theory.
This book, based on published studies, takes a unique perspective on the 30-year collapse of pharmaceutical industry productivity in the search for small molecule "magic bullet" interventions. The relentless escalation of inflation-adjusted cost per approved medicine in the United States - from $200 million in 1950 to $1.2 billion in 2010 - has driven industry giants to, at best, slavish imitation in drug design, and at worst, abandonment of research and embrace of widespread fraud in consumer marketing. The book adapts formalism from a number of disciplines to the strategy for design of multilevel interventions, focusing first on molecular, cellular, and larger scale examples, and then extending the argument to the simplifications provided by the dominant role of social and cultural structures and processes in individual and population patterns of health and illness. In place of "magic bullets", we must now apply "magic strategies" that act across both scale and level of organization. This book provides an introductory roadmap to the new tools that will be needed for the design of such strategies.
This accessible text presents a detailed introduction to the use of a wide range of software tools and modeling environments for use in the biosciences, as well as the fundamental mathematical background. The practical constraints presented by each modeling technique are described in detail, enabling the researcher to determine which software package would be most useful for a particular problem. Features: introduces a basic array of techniques to formulate models of biological systems, and to solve them; discusses agent-based models, stochastic modeling techniques, differential equations, spatial simulations, and Gillespie's stochastic simulation algorithm; provides exercises; describes such useful tools as the Maxima algebra system, the PRISM model checker, and the modeling environments Repast Simphony and Smoldyn; contains appendices on rules of differentiation and integration, Maxima and PRISM notation, and some additional mathematical concepts; offers supplementary material at an associated website.
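Gillespie's stochastic simulation algorithm, mentioned above, can be sketched for the simplest possible case, a single decay reaction. The rate constant, population, and cutoff time below are illustrative values, not an example from the text.

```python
import random

def gillespie_decay(n0, k, t_max, seed=0):
    """Gillespie SSA for the single reaction A -> 0 with rate constant k:
    draw an exponential waiting time from the total propensity k*N,
    then decrement the population by one molecule per reaction event."""
    rng = random.Random(seed)
    t, n, history = 0.0, n0, [(0.0, n0)]
    while n > 0 and t < t_max:
        propensity = k * n
        t += rng.expovariate(propensity)   # time to the next reaction
        if t >= t_max:
            break
        n -= 1
        history.append((t, n))
    return history

history = gillespie_decay(n0=1000, k=1.0, t_max=1.0)
# On average the trajectory follows N(t) = n0 * exp(-k*t),
# so roughly n0/e molecules remain at t = 1/k.
```

With more reaction channels the algorithm additionally samples which reaction fires, weighted by the individual propensities; the waiting-time step stays the same.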
Many books are already available on the general topic of 'probability and statistics for engineers and scientists', so why choose this one? This textbook differs in that it has been prepared very much with students and their needs in mind. Having been classroom tested over many years, it is a true "learner's book" made for students who require a deeper understanding of probability and statistics and the process of model selection, verification and analysis. Emphasising both sound development of the principles and their engineering applications, this book offers purposely selected practical examples from many different fields. This textbook: Presents a sound treatment of the fundamentals in probability and statistics. Explains the concept of probabilistic modelling and the process of model selection, verification and analysis. Provides self-contained material with smooth and logical transition from chapter to chapter. Includes relevant and motivational applications in every chapter with numerous examples and problems. Demonstrates practical problem solving throughout the book with stimulating exercises, including answers to selected problems. Includes an accompanying online Solutions Manual for instructors with complete step-by-step solutions to all problems. (URL) "Fundamentals In Applied Probability And Statistics For Engineers" provides invaluable support for all engineering students involved in applications of probability, random variables and statistical inference. This book is also an ideal reference for lecturers, educators and newcomers to the field who wish to increase their knowledge of fundamental concepts. Engineering consulting firms will also find the explanations and examples useful.
The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control the risk of investments. This book provides a comprehensive overview of statistical inference for portfolios and their various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes, and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized for portfolio estimation, so that many important problems can be covered. This book can primarily be used as a reference by researchers from statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
With the internationalization of the Renminbi (RMB), the gradual liberalization of China's capital account, and the recent reform of the RMB pricing mechanism, the RMB exchange rate has been volatile. This book examines how we can forecast exchange rates reliably, through a new methodology for exchange rate forecasting. The book also analyzes the dynamic relationships between the exchange rate and the decomposition and integration of exchange rate data, the domestic economic situation, the international economic situation, and the public's expectations, and how these interactions affect the exchange rate. The book also explains why this comprehensive integrated approach is the best model for optimizing accuracy in exchange rate forecasting.