Modelling Transitions shows what computational, formal and data-driven approaches can and could mean for sustainability transitions research, presenting the state of the art and exploring what lies beyond. Featuring contributions from many well-known authors, this book presents the various benefits of modelling for transitions research. More than just taking stock, it also critically examines what modelling of transformative change means and could mean for transitions research and for other disciplines that study societal changes. This includes identifying a variety of approaches currently not part of the portfolios of transitions modellers. Far from only singing praise, critical methodological and philosophical introspection is a key aspect of this important book. This book speaks to modellers and non-modellers alike who value the development of robust knowledge on transitions to sustainability, including colleagues in congenial fields. Be they students, researchers or practitioners, everyone interested in transitions should find this book relevant as reference, resource and guide.
Examines classic algorithms, geometric diagrams, and mechanical principles for enhancing visualization of statistical estimation procedures and mathematical concepts in physics, engineering, and computer programming.
Modelling large-scale wave fields and their interaction with coastal and offshore structures has become much more feasible over the last two decades with increases in computer speeds. Wave modelling can be viewed as an extension of wave theory, a mature and widely published field, applied to practical engineering through the use of computer tools. Information about the various wave models which have been developed is often widely scattered in the literature, and consequently this is one of the first books devoted to wave models and their applications. At the core of the book is an introduction to various types of wave models. For each model, the theoretical assumptions, the application range, and the advantages and limitations are elaborated. The combined use of different wave models from large-scale to local-scale is highlighted with a detailed discussion of the application and matching of boundary conditions. At the same time the book provides a grounding in hydrodynamics, wave theory, and numerical methods which underlie wave modelling. It presents the theoretical background and also shows how to use these models for achieving different engineering tasks, illustrated and reinforced with case study examples.
This book focuses on a method for assessing mental changes using eye pupil reactions, namely extracting emotional changes from pupillary responses in order to evaluate a viewer's interest in visual information. The pupil of the eye reacts to both brightness and emotional state, including interest, enjoyment, and mental workload. Because pupillary change is a biological signal, various artifacts influence measurements of eye images. Technical procedures are required to extract mental activities from pupillary changes, and they are summarized here step by step, although some procedures rely on earlier techniques such as analog video processing. This study examines the possibility of estimating a viewer's interest and enjoyment when viewing movies by measuring dynamic pupillary changes, blinking, and subjective interest responses. In the evaluation, pupil size differed significantly between shots rated higher and lower in subjective interest for each kind of movie. The first part of the book presents a pupil reaction model for brightness changes used to extract mental activities. Pupil reactions were observed for various visual stimuli under brightness changes. Based on the characteristics of these pupillary changes, a model with a three-layer neural network was developed and its performance evaluated. Characteristics of pupil reactions during model development are summarized here. The second part examines the possibility of estimating a viewer's interest and enjoyment of television programs by measuring dynamic pupillary changes, blinking, and subjective interest responses. The final part describes the development of a model that estimates pupil size in the presence of blink artifacts. The model was able to estimate pupillary changes and pupil size while the viewer was blinking, and was applied to pupillary changes during the viewing of television programs.
Coherent states (CS) were originally introduced in 1926 by Schroedinger and rediscovered in the early 1960s in the context of laser physics. Since then, they have evolved into an extremely rich domain that pervades virtually every corner of physics, and have also given rise to a range of research topics in mathematics. The purpose of the 2016 CIRM conference was to bring together leading experts in the field with scientists interested in related topics, to jointly investigate their applications in physics, their various mathematical properties, and their generalizations in many directions. Instead of traditional proceedings, this book presents sixteen longer review-type contributions, which are the outcome of a collaborative effort by many conference participants, subsequently reviewed by independent experts. The book aptly illustrates the diversity of CS aspects, from purely mathematical topics to physical applications, including quantum gravity.
The high reliability required in industrial processes has created the necessity of detecting abnormal conditions, called faults, while processes are operating. The term fault generically refers to any type of process degradation, or degradation in equipment performance because of changes in the process's physical characteristics, process inputs or environmental conditions. This book is about the fundamentals of fault detection and diagnosis in a variety of nonlinear systems which are represented by ordinary differential equations. The fault detection problem is approached from a differential algebraic viewpoint, using residual generators based upon high-gain nonlinear auxiliary systems ('observers'). A prominent role is played by the type of mathematical tools that will be used, requiring knowledge of differential algebra and differential equations. Specific theorems tailored to the needs of the problem-solving procedures are developed and proved. Applications to real-world problems, both with constant and time-varying faults, are made throughout the book and include electromechanical positioning systems, the Continuous Stirred Tank Reactor (CSTR), bioreactor models and belt drive systems, to name but a few.
This book describes the computational challenges posed by the progression toward nanoscale electronic devices and increasingly short design cycles in the microelectronics industry, and proposes methods of model reduction which facilitate circuit and device simulation for specific tasks in the design cycle. The goal is to develop and compare methods for system reduction in the design of high dimensional nanoelectronic ICs, and to test these methods in the practice of semiconductor development. Six chapters describe the challenges for numerical simulation of nanoelectronic circuits and suggest model reduction methods for constituting equations. These include linear and nonlinear differential equations tailored to circuit equations and drift diffusion equations for semiconductor devices. The performance of these methods is illustrated with numerical experiments using real-world data. Readers will benefit from an up-to-date overview of the latest model reduction methods in computational nanoelectronics.
Large observational studies involving research questions that require the measurement of several features on each individual arise in many fields including the social and medical sciences. This book sets out both the general concepts and the more technical statistical issues involved in analysis and interpretation. Numerous illustrative examples are described in outline and four studies are discussed in some detail. The use of graphical representations of dependencies and independencies among the features under study is stressed, both to incorporate available knowledge at the planning stage of an analysis and to summarize aspects important for interpretation after detailed statistical analysis is complete. This book is aimed at research workers using statistical methods as well as statisticians involved in empirical research.
The prolonged boom in the US and European stock markets has led to increased interest in the mathematics of security markets, most notably in the theory of stochastic integration. This text gives a rigorous development of the theory of stochastic integration as it applies to the valuation of derivative securities. It includes all the tools necessary for readers to understand how the stochastic integral is constructed with respect to a general continuous martingale. The author develops the stochastic calculus from first principles, but at a relaxed pace, with proofs that are detailed yet streamlined toward applications in finance. The treatment requires minimal prerequisites (a basic knowledge of measure-theoretic probability and Hilbert space theory) and devotes an entire chapter to applications in finance, including the Black-Scholes market, pricing contingent claims, the general market model, pricing of random payoffs, and interest rate derivatives. Continuous Stochastic Calculus with Application to Finance is your first opportunity to explore stochastic integration at a reasonable and practical mathematical level. It offers a treatment well balanced between aesthetic appeal, degree of generality, depth, and ease of reading.
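To give a flavour of the kind of result such a treatment arrives at, here is a minimal sketch (ours, not taken from the book) of the standard Black-Scholes price of a European call option:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, r, sigma, T):
    """Standard Black-Scholes price of a European call: spot S,
    strike K, risk-free rate r, volatility sigma, time to expiry T."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# At-the-money call: S = K = 100, r = 5%, sigma = 20%, one year to expiry.
price = black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0)
print(round(price, 2))   # → 10.45
```

The formula itself is classical; the book's contribution is the rigorous construction of the stochastic integral that underlies it.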
This book collects papers presented at the International Conference on Mathematical Modelling and Computational Intelligence Techniques (ICMMCIT) 2021, held at the Department of Mathematics, The Gandhigram Rural Institute (Deemed to be University), Gandhigram, Tamil Nadu, India, from 10-12 February 2021. Significant contributions were received from renowned researchers in the fields of applied analysis, mathematical modelling and computing techniques. Chapters emphasize research of a computational nature, focusing on new algorithms, their analysis and numerical results, as well as applications in the physical, biological, social, and behavioural sciences. The accepted papers are organized in topical sections on mathematical modelling, image processing, control theory, graphs and networks, and inventory control.
This book introduces several mathematical models in assembly line balancing based on stochastic programming and develops exact and heuristic methods to solve them. An assembly line system is a manufacturing process in which parts are added in sequence from workstation to workstation until the final assembly is produced. In an assembly line balancing problem, tasks belonging to different product models are allocated to workstations according to their processing times and the precedence relationships among tasks. The book incorporates two features, uncertain task times and demand volatility, separately and simultaneously, into the conventional assembly line balancing model. A real-life case study related to mask production during the COVID-19 pandemic is presented to illustrate the application of the proposed framework and methodology. The book is intended for graduate students who are interested in combinatorial optimization in manufacturing with uncertain input.
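As a rough illustration of the underlying combinatorial problem (a deterministic greedy heuristic of our own; the book's stochastic-programming models are far more sophisticated), tasks can be packed into stations subject to a cycle time and precedence constraints:

```python
def balance_line(tasks, cycle_time):
    """Greedy station assignment for a simple deterministic assembly
    line balancing problem. tasks maps name -> (time, [predecessors]).
    Each station is filled with eligible tasks (all predecessors already
    assigned) until no further task fits within the cycle time."""
    done, stations = set(), []
    while len(done) < len(tasks):
        load, station = 0.0, []
        progressed = True
        while progressed:
            progressed = False
            for name, (t, preds) in tasks.items():
                if name not in done and all(p in done for p in preds) \
                        and load + t <= cycle_time:
                    station.append(name)
                    done.add(name)
                    load += t
                    progressed = True
        stations.append(station)
    return stations

# Toy instance: five tasks with precedence relations, cycle time 10.
tasks = {
    "a": (4, []), "b": (3, ["a"]), "c": (5, ["a"]),
    "d": (6, ["b"]), "e": (2, ["c", "d"]),
}
print(balance_line(tasks, 10))   # → [['a', 'b'], ['c'], ['d', 'e']]
```

Replacing the fixed task times with random variables is exactly where the stochastic programming formulations studied in the book come in.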
Modeling and Inverse Problems in the Presence of Uncertainty collects recent research, including the authors' own substantial projects, on uncertainty propagation and quantification. It covers two sources of uncertainty: where uncertainty is present primarily due to measurement errors, and where uncertainty is present due to the modeling formulation itself. After a useful review of relevant probability and statistical concepts, the book summarizes mathematical and statistical aspects of inverse problem methodology, including ordinary, weighted, and generalized least-squares formulations. It then discusses asymptotic theories, bootstrapping, and issues related to evaluating the correctness of the assumed form of statistical models. The authors go on to present methods for evaluating and comparing the validity and appropriateness of a collection of models for describing a given data set, including statistically based model selection and comparison techniques. They also explore recent results on the estimation of probability distributions when they are embedded in complex mathematical models and only aggregate (not individual) data are available. In addition, they briefly discuss the optimal design of experiments in support of inverse problems for given models. The book concludes with a focus on uncertainty in the model formulation itself, covering the general relationship between differential equations driven by white noise and those driven by colored noise in terms of their resulting probability density functions. It also deals with questions related to the appropriateness of discrete versus continuum models in transitions from small to large numbers of individuals. With many examples throughout addressing problems in physics, biology, and other areas, this book is intended for applied mathematicians interested in deterministic and/or stochastic models and their interactions.
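The contrast between ordinary and weighted least-squares formulations mentioned above can be sketched on a toy inverse problem (a minimal example of our own, assuming a linear model with heteroscedastic measurement error):

```python
import numpy as np

# Synthetic inverse problem: recover the slope and intercept of
# y = 2x + 1 from observations whose noise grows with x.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
sigma = 0.1 * x                          # heteroscedastic error model
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)

A = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: every observation weighted equally.
theta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Weighted least squares: scale each row by 1/sigma_i, so noisier
# observations count for less in the fit.
w = 1.0 / sigma
theta_wls, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)

print(theta_ols, theta_wls)   # both near [2, 1]; WLS exploits the error model
```

When the assumed error model matches the data-generating process, the weighted estimator is the statistically efficient one, which is the kind of question the book's asymptotic theory addresses.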
This book will serve as a reference, presenting state-of-the-art research on theoretical aspects of optimal sensor coverage problems. Readers will find it a useful tool for furthering developments in the theory and applications of optimal coverage; much of the content can serve as material for advanced topics courses at the graduate level. The book covers highly active research topics such as Lifetime of Coverage, Weighted Sensor Cover, k-Coverage, Heterogeneous Sensors, Barrier, Sweep and Partial Coverage, Mobile Sensors, Camera Sensors and Energy-Harvesting Sensors, and more. Topics are introduced in a natural order, from simple covers to connected covers to the lifetime problem. Later chapters revisit earlier problems, ranging from the introduction of weights to coverage by k sensors and partial coverage, and from sensor heterogeneity to novel problems such as the barrier coverage problem. The book ends with coverage of mobile sensors, camera sensors, energy-harvesting sensors, underwater sensors, and crowdsensing.
This book discusses the interplay of stochastics (applied probability theory) and numerical analysis in the field of quantitative finance. The stochastic models, numerical valuation techniques, computational aspects, financial products, and risk management applications presented will enable readers to progress in the challenging field of computational finance. When the behavior of financial market participants changes, the corresponding stochastic mathematical models describing the prices may also change. Financial regulation may play a role in such changes too. The book thus presents several models for stock prices, interest rates, and foreign-exchange rates, with increasing complexity across the chapters. As is said in the industry, 'do not fall in love with your favorite model.' The book covers equity models before moving to short-rate and other interest rate models. We cast these interest rate models into the Heath-Jarrow-Morton framework, show relations between the different models, and explain a few interest rate products and their pricing. The chapters are accompanied by exercises. Students can access solutions to selected exercises, while complete solutions are made available to instructors. The MATLAB and Python computer codes used for most tables and figures in the book are made available for both print and e-book users. This book will be useful for people working in the financial industry, for those aiming to work there one day, and for anyone interested in quantitative finance. The topics discussed are relevant for MSc and PhD students, academic researchers, and quants in the financial industry. Supplementary material: a Solutions Manual is available to instructors who adopt this textbook for their courses. Please contact [email protected].
Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 60s and 70s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one put all the ideas together to form what turns out to be a general approach to statistics. Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data. Statistical Modelling with Quantile Functions adds a new dimension to the practice of statistical modelling that will be of value to anyone faced with analyzing data. Not intended to replace classical approaches but to supplement them, it will make some of the traditional topics easier and clearer, and help readers build and investigate models for their own practical statistical problems.
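The "modelling kit" idea, building a new distribution by adding component quantile functions rather than manipulating data, can be sketched as follows (an illustrative example of our own; the function names are not the book's):

```python
import math
import random
from statistics import NormalDist

# Quantile functions Q(p) of two simple component distributions.
def q_exponential(p, scale=1.0):
    return -scale * math.log(1.0 - p)

def q_normal(p, mu=0.0, sd=1.0):
    return mu + sd * NormalDist().inv_cdf(p)

# The kit's key property: the sum of two non-decreasing quantile
# functions is itself the quantile function of a new distribution.
def q_sum(p):
    return q_normal(p, mu=5.0, sd=1.0) + q_exponential(p, scale=2.0)

# Inverse-transform sampling from the composite model.
random.seed(1)
sample = sorted(q_sum(random.random()) for _ in range(10_000))
median_est = sample[len(sample) // 2]
print(round(median_est, 2))   # close to q_sum(0.5)
```

The composite model's median is q_sum(0.5) by construction, which is the kind of direct, distribution-level reasoning the quantile-function approach makes routine.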
The authors of this monograph have developed a large and important class of survival analysis models that generalize most of the existing models. In a unified, systematic presentation, this monograph fully details those models and explores areas of accelerated life testing usually only touched upon in the literature. Accelerated Life Models: Modeling and Statistical Analysis presents models, methods of data collection, and statistical analysis for failure-time regression data in accelerated life testing and for degradation data with explanatory variables. In addition to the classical results, the authors devote considerable attention to models with time-varying explanatory variables and to methods of semiparametric estimation. They also examine the simultaneous analysis of degradation and failure-time data when the intensities of failure in different modes depend on the level of degradation and the values of explanatory variables. The authors avoid technical details by explaining the ideas and referring to resources where thorough analysis can be found. Whether used for teaching, research or general reference, Accelerated Life Models: Modeling and Statistical Analysis provides new and known models and modern methods of accelerated life data analysis.
During the last decade, financial models based on jump processes have acquired increasing popularity in risk management and option pricing applications. Much has been published on the subject, but the technical nature of most papers makes them difficult for nonspecialists to understand, and the mathematical tools required for applications can be intimidating. Potential end users often get the impression that jump and Lévy processes are beyond their reach.
The International Conference on Computational Fluid Dynamics (ICCFD) is the merger of the International Conference on Numerical Methods in Fluid Dynamics (ICNMFD) and the International Symposium on Computational Fluid Dynamics (ISCFD). It is held every two years and brings together physicists, mathematicians and engineers to review and share recent advances in mathematical and computational techniques for modeling fluid dynamics. The proceedings of the 2004 conference held in Toronto, Canada, contain a selection of refereed contributions and are meant to serve as a source of reference for all those interested in the state of the art in computational fluid dynamics.
Taking a novel, more appealing approach than current texts, An Integrated Introduction to Computer Graphics and Geometric Modeling focuses on graphics, modeling, and mathematical methods, including ray tracing, polygon shading, radiosity, fractals, freeform curves and surfaces, vector methods, and transformation techniques. The author begins with fractals, rather than the typical line-drawing algorithms found in many standard texts. He also brings the turtle back from obscurity to introduce several major concepts in computer graphics. Supplying the mathematical foundations, the book covers linear algebra topics, such as vector geometry and algebra, affine and projective spaces, affine maps, projective transformations, matrices, and quaternions. The main graphics areas explored include reflection and refraction, recursive ray tracing, radiosity, illumination models, polygon shading, and hidden surface procedures. The book also discusses geometric modeling, including planes, polygons, spheres, quadrics, algebraic and parametric curves and surfaces, constructive solid geometry, boundary files, octrees, interpolation, approximation, Bezier and B-spline methods, fractal algorithms, and subdivision techniques. Making the material accessible and relevant for years to come, the text avoids descriptions of current graphics hardware and special programming languages. Instead, it presents graphics algorithms based on well-established physical models of light and cogent mathematical methods.
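The turtle-and-fractals starting point can be hinted at with a tiny L-system expansion for the Koch curve (a generic example of ours, not code from the book): each F is a forward step for the turtle, and + / - are fixed-angle turns.

```python
def koch(axiom="F", depth=3):
    """Expand the Koch-curve L-system rule F -> F+F--F+F.
    'F' means move forward; '+' and '-' turn the turtle 60 degrees."""
    s = axiom
    for _ in range(depth):
        s = s.replace("F", "F+F--F+F")
    return s

path = koch(depth=2)
print(path.count("F"))   # → 16, i.e. 4**2 segments
```

Feeding the resulting string to any turtle interpreter draws the fractal; the segment count grows as 4**depth, which is why a few rewriting steps already produce a convincing curve.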
This volume is devoted to original research results and survey articles reviewing recent developments in model reduction for stochastic PDEs with multiscale structure, as well as applications to science and technology, and presents some future research directions. It includes a dozen chapters by leading experts in the area, written with a broad audience in mind, and should be accessible to graduate students, junior researchers and other professionals who are interested in the subject. We also take this opportunity to celebrate the contributions of Professor Anthony J Roberts, an internationally leading figure in the field, on the occasion of his 60th birthday in 2017.
This volume explores the complex problems that arise in the modeling and simulation of crowd dynamics in order to present the state-of-the-art of this emerging field and contribute to future research activities. Experts in various areas apply their unique perspectives to specific aspects of crowd dynamics, covering the topic from multiple angles. These include a demonstration of how virtual reality may solve dilemmas in collecting empirical data; a detailed study on pedestrian movement in smoke-filled environments; a presentation of one-dimensional conservation laws with point constraints on the flux; a collection of new ideas on the modeling of crowd dynamics at the microscopic scale; and others. Applied mathematicians interested in crowd dynamics, pedestrian movement, traffic flow modeling, urban planning, and other topics will find this volume a valuable resource. Additionally, researchers in social psychology, architecture, and engineering may find this information relevant to their work.
This book reports on advanced theories and methods in three related fields of research: applied physics, system science and computers. It is organized in two main parts, the first of which covers applied physics topics, including lasers and accelerators; condensed matter, soft matter and materials science; nanoscience and quantum engineering; atomic, molecular, optical and plasma physics; as well as nuclear and high-energy particle physics. It also addresses astrophysics, gravitation, earth and environmental science, as well as medical and biological physics. The second part focuses on advances in system science and computers, exploring automatic circuit control, power systems, computer communication, fluid mechanics, simulation and modeling, software engineering, data structures and applications of artificial intelligence among other areas. Offering a collection of contributions presented at the 1st International Conference on Applied Physics, System Science and Computers (APSAC 2016), the book bridges the gap between applied physics and electrical engineering. It not only presents new methods but also promotes collaborations between different communities working on related topics at the interface between physics and engineering, with a special focus on communication, data modeling and visualization, quantum information, applied mechanics, as well as bio- and geophysics.
This book describes a system of mathematical models and methods that can be used to analyze real economic and managerial decisions and to improve their effectiveness. Application areas include: management of development and operation budgets, assessment and management of economic systems using an energy entropy approach, equations of exchange rates and forecasting of foreign exchange operations, evaluation of innovative projects, monitoring of governmental programs, risk management of investment processes, decisions on the allocation of resources, and identification of competitive industrial clusters. The proposed methods and models were tested on the example of Kazakhstan's economy, but the generated solutions will be useful for applications at other levels and in other countries.

Regarding your book "Mathematical Methods and Models in Economics", I am impressed, because now it is time when "econometrics" is becoming more appreciated by economists and by schools that are the hosts or employers of modern economists. ... Your presented results really impressed me. John F. Nash, Jr., Princeton University, Nobel Memorial Prize in Economic Sciences

The book is within my scope of interest because of its novelty and practicality. First, there is a need for realistic modeling of complex systems, both natural and artificial, that include computer and economic systems. There has been an ongoing effort in developing models dealing with complexity and incomplete knowledge. Consequently, it is easy to recognize the contribution of Mutanov in encapsulating economic modeling with an emphasis on budgeting and innovation. Secondly, the method proposed by Mutanov has been verified by applying it to the case of the Republic of Kazakhstan, with her vibrant emerging economy. Thirdly, Chapter 5 of the book is of particular interest to the computer technology community because it deals with innovation.
In summary, Mutanov's book should become one of the outstanding recognized pragmatic guides for dealing with innovative systems. Andrzej Rucinski, University of New Hampshire

This book is unique in its theoretical findings and practical applicability. The book is an illuminating study based on an applied mathematical model which uses methods such as linear programming and input-output analysis. Moreover, this work demonstrates the author's great insight and academic brilliance in the fields of finance, technological innovation and marketing vis-a-vis the market economy. From both theoretical and practical standpoints, this work is indeed a great achievement. Yeon Cheon Oh, President of Seoul National University
Designs in nanoelectronics often lead to challenging simulation problems and include strong feedback couplings. Industry demands provisions for variability in order to guarantee quality and yield. It also requires the incorporation of higher abstraction levels to allow for system simulation in order to shorten the design cycles, while at the same time preserving accuracy. The methods developed here promote a methodology for circuit-and-system-level modelling and simulation based on best practice rules, which are used to deal with coupled electromagnetic field-circuit-heat problems, as well as coupled electro-thermal-stress problems that emerge in nanoelectronic designs. This book covers: (1) advanced monolithic/multirate/co-simulation techniques, which are combined with envelope/wavelet approaches to create efficient and robust simulation techniques for strongly coupled systems that exploit the different dynamics of sub-systems within multiphysics problems, and which allow designers to predict reliability and ageing; (2) new generalized techniques in Uncertainty Quantification (UQ) for coupled problems to include a variability capability such that robust design and optimization, worst case analysis, and yield estimation with tiny failure probabilities are possible (including large deviations like 6-sigma); (3) enhanced sparse, parametric Model Order Reduction techniques with a posteriori error estimation for coupled problems and for UQ to reduce the complexity of the sub-systems while ensuring that the operational and coupling parameters can still be varied and that the reduced models offer higher abstraction levels that can be efficiently simulated. All the new algorithms produced were implemented, transferred and tested by the EDA vendor MAGWEL. Validation was conducted on industrial designs provided by end-users from the semiconductor industry, who shared their feedback, contributed to the measurements, and supplied both material data and process data. 
In closing, a thorough comparison to measurements on real devices was made in order to demonstrate the algorithms' industrial applicability.