This book describes applications of cooperative game theory that arise from combinatorial optimization problems. It is well known that the mathematical modeling of many real-world decision-making situations gives rise to combinatorial optimization problems. For situations in which more than one decision-maker is involved, classical combinatorial optimization theory does not suffice, and it is here that cooperative game theory can make an important contribution. If a group of decision-makers decides to undertake a project together in order to increase the total revenue or decrease the total costs, they face two problems. The first is how to execute the project in an optimal way so as to increase revenue. The second is how to divide the revenue attained among the participants, and it is with this second problem that cooperative game theory can help: its solution concepts can be applied to arrive at revenue allocation schemes. This book examines problems of the type described above. Although the choice of topics is application-driven, the book also discusses theoretical questions that arise from the situations studied. For all the games described, attention is paid to the appropriateness of several game-theoretic solution concepts in the particular contexts considered, as well as to the computational complexity of those solution concepts in the situation at hand.
"Decision Systems and Non-stochastic Randomness" is the first systematic presentation and mathematical formalization (including existence theorems) of the statistical regularities of non-stochastic randomness. The results presented in this book extend the capabilities of probability theory by providing mathematical techniques that allow for the description of uncertain events that do not fit standard stochastic models. The book demonstrates how non-stochastic regularities can be incorporated into decision theory and information theory, offering an alternative to the subjective probability approach to uncertainty and the unified approach to the measurement of information. This book is intended for statisticians, mathematicians, engineers, economists or other researchers interested in non-stochastic modeling and decision theory.
When I wrote the book Quantitative Sociodynamics, it was an early attempt to make methods from statistical physics and complex systems theory fruitful for the modeling and understanding of social phenomena. Unfortunately, the first edition appeared at a quite prohibitive price. This was one reason to make these chapters available again in a new edition. The other reason is that, in the meantime, many of the methods discussed in this book are used more and more in a variety of different fields. Among the ideas worked out in this book are: a statistical theory of binary social interactions,[1] a mathematical formulation of social field theory, which is the basis of social force models,[2] a microscopic foundation of evolutionary game theory based on what is known today as the 'proportional imitation rule', a stochastic treatment of interactions in evolutionary game theory, and a model for the self-organization of behavioral conventions in a coordination game.[3] It therefore appeared reasonable to make this book available again, but at a more affordable price. To keep its original character, the translation of this book, which ...
[1] D. Helbing, Interrelations between stochastic equations for systems with pair interactions. Physica A 181, 29-52 (1992); D. Helbing, Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models. Physica A 196, 546-573 (1993).
[2] D. Helbing, Boltzmann-like and Boltzmann-Fokker-Planck equations as a foundation of behavioral models. Physica A 196, 546-573 (1993); D.
The interaction between mathematicians, statisticians and econometricians working in actuarial sciences and finance is producing numerous meaningful scientific results. This volume introduces new ideas, in the form of four-page papers, presented at the international conference Mathematical and Statistical Methods for Actuarial Sciences and Finance (MAF), held at Universidad Carlos III de Madrid (Spain), 4th-6th April 2018. The book covers a wide variety of subjects in actuarial science and financial fields, all discussed in the context of the cooperation between the three quantitative approaches. The topics include: actuarial models; analysis of high frequency financial data; behavioural finance; carbon and green finance; credit risk methods and models; dynamic optimization in finance; financial econometrics; forecasting of dynamical actuarial and financial phenomena; fund performance evaluation; insurance portfolio risk analysis; interest rate models; longevity risk; machine learning and soft-computing in finance; management in insurance business; models and methods for financial time series analysis; models for financial derivatives; multivariate techniques for financial markets analysis; optimization in insurance; pricing; probability in actuarial sciences, insurance and finance; real world finance; risk management; solvency analysis; sovereign risk; static and dynamic portfolio selection and management; trading systems. This book is a valuable resource for academics, PhD students, practitioners, professionals and researchers, and is also of interest to other readers with quantitative background knowledge.
This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real-world problems such as parameter settings for controlling processes in bioreactors and other processes, resource-constrained project scheduling, infection distribution, molecular distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, and localization of abrupt atmospheric contamination sources. It shows how to develop algorithms for these problems based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.
Coordination is extremely important in economic, political, and social life. The concept of economic equilibrium is based on the coordination of producers and consumers in buying and selling. This book reviews the topic of coordination from an economic, theoretical standpoint. The aim of this volume is twofold: first, the book contributes to the ongoing research on the economics of coordination; and second, it disseminates results and encourages interest in the topic. The volume contains original research on coordination including general game-theoretic questions, particular coordination issues within specific fields of economics (i.e. industrial organization, international trade, and macroeconomics), and experimental research.
Herbert Scarf is a highly esteemed and distinguished American economist. He is internationally famous for his early epoch-making work on optimal inventory policies and his highly influential study with Andrew Clark on optimal policies for a multi-echelon inventory problem, which initiated the important and flourishing field of supply chain management. Equally, he has gained world recognition for his classic study on the stability of the Walrasian price adjustment processes and his fundamental analysis of the relationship between the core and the set of competitive equilibria (the so-called Edgeworth conjecture). Further achievements include his remarkable sufficient condition for the existence of a core in non-transferable utility games and general exchange economies, his seminal paper with Lloyd Shapley on housing markets, and his pioneering study on increasing returns and models of production in the presence of indivisibilities. Above all, however, the name of Scarf is remembered as a synonym for the computation of economic equilibria and fixed points. In the early 1960s he invented a path-breaking technique for computing equilibrium prices. This work has generated a major research field in economics termed Applied General Equilibrium Analysis and a corresponding area in operations research known as Simplicial Fixed Point Methods. This book comprises all his research articles and consists of four volumes. This volume collects Herbert Scarf's papers in the area of Operations Research and Management.
This edited book is dedicated to Professor N. U. Ahmed, a leading scholar and renowned researcher in optimal control and optimization, on the occasion of his retirement from the Department of Electrical Engineering at the University of Ottawa in 1999. The contributions in this volume are in the areas of optimal control, nonlinear optimization and optimization applications. They are mainly improved and expanded versions of papers selected from those presented in two special sessions of two international conferences. The first special session, Optimization Methods, was organized by K. L. Teo and X. Q. Yang for the International Conference on Optimization and Variational Inequality, City University of Hong Kong, Hong Kong, 1998. The other, Optimal Control, was organized by K. L. Teo and L. Caccetta for the Dynamic Control Congress, Ottawa, 1999. This volume is divided into three parts: Optimal Control; Optimization Methods; and Applications. The Optimal Control part is concerned with computational methods, modeling and nonlinear systems. Three computational methods for solving optimal control problems are presented: (i) a regularization method for computing ill-conditioned optimal control problems, (ii) penalty function methods that appropriately handle final state equality constraints, and (iii) a multilevel optimization approach for the numerical solution of optimal control problems. In the fourth paper, the worst-case optimal regulation involving linear time-varying systems is formulated as a minimax optimal control problem.
Local search has been applied successfully to a diverse collection of optimization problems. However, results are scattered throughout the literature. This is the first book that presents a large collection of theoretical results in a consistent manner. It provides the reader with a coherent overview of the achievements obtained so far, and serves as a source of inspiration for the development of novel results in the challenging field of local search.
The editors draw on a 3-year project that analyzed a Portuguese area in detail, comparing this study with papers from other regions. Applications include the estimation of technical efficiency in agricultural grazing systems (dairy, beef and mixed) and specifically for dairy farms. The conclusions indicate that it is now necessary to help small dairy farms in order to make them more efficient. These results can be compared with the technical efficiency of a sample of Spanish dairy processing firms presented by Magdalena Kapelko and co-authors.
The aim of this book is to furnish the reader with a rigorous and detailed exposition of the concepts of control parametrization and the time scaling transformation. It presents computational solution techniques for a special class of constrained optimal control problems as well as applications to some practical examples. The book may be considered an extension of the 1991 monograph A Unified Computational Approach to Optimal Control Problems by K.L. Teo, C.J. Goh, and K.H. Wong. This publication discusses the development of new theory and computational methods for solving various optimal control problems numerically and in a unified fashion. To keep the book accessible and uniform, it includes those results developed by the authors, their students, and their past and present collaborators. A brief review of methods that are not covered in this exposition is also included. Knowledge gained from this book may inspire the advancement of new techniques to solve complex problems that arise in the future. This book is intended as a reference for researchers in mathematics, engineering, and other sciences, and for graduate students and practitioners who apply optimal control methods in their work. It may also serve as reading material for a graduate-level seminar or as a text for a course in optimal control.
The Internet is starting to permeate politics much as it has previously revolutionised education, business and the arts. Thus, there is growing interest in e-government and, more recently, e-democracy. However, most attempts in this field have simply envisioned standard political approaches facilitated by technology, such as e-voting or e-debating. Alternatively, we could devise a more transformative strategy based on deploying web-based group decision support tools and promoting their use for public policy decision making. This book delineates how this approach could be implemented. It addresses foundations, basic methodologies, potential implementation and applications, together with a thorough discussion of the many challenging issues. This innovative text will be of interest to students, researchers and practitioners in the fields of e-government, e-democracy and e-participation, as well as to researchers in decision analysis, negotiation analysis and group decision support.
Chapters in Game Theory has been written on the occasion of the 65th birthday of Stef Tijs, who can be regarded as the godfather of game theory in the Netherlands. The contributors are all indebted to Stef Tijs, as former Ph.D. students or otherwise. The book contains fourteen chapters on a wide range of subjects. Some of these can be considered surveys while other chapters present new results; most contributions can be positioned somewhere in between these categories. The topics covered include: cooperative stochastic games; noncooperative stochastic games; sequencing games; games arising from linear (semi-)infinite programming problems; network formation, costs and potential games; potentials and consistency in transferable utility games; the nucleolus and equilibrium prices; population uncertainty and equilibrium selection; cost sharing; centrality in social networks; extreme points of the core; equilibrium sets of bimatrix games; game theory and the market; and transfer procedures for nontransferable utility games. Both editors did their Ph.D. with Stef Tijs while he was affiliated with the mathematics department of the University of Nijmegen.
Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
Bioinspired computation methods such as evolutionary algorithms and ant colony optimization are being applied successfully to complex engineering problems and to problems from combinatorial optimization, and with this comes the requirement to more fully understand the computational complexity of these search heuristics. This is the first textbook covering the most important results achieved in this area. The authors study the computational complexity of bioinspired computation and show how runtime behavior can be analyzed in a rigorous way using some of the best-known combinatorial optimization problems -- minimum spanning trees, shortest paths, maximum matching, covering and scheduling problems. A feature of the book is the separate treatment of single- and multiobjective problems, the latter a domain where the development of the underlying theory seems to be lagging practical successes. This book will be very valuable for teaching courses on bioinspired computation and combinatorial optimization. Researchers will also benefit as the presentation of the theory covers the most important developments in the field over the last 10 years. Finally, with a focus on well-studied combinatorial optimization problems rather than toy problems, the book will also be very valuable for practitioners in this field.
Experimental Econophysics describes the method of controlled human experiments, which has been developed by physicists to study problems in economics and finance such as stylized facts, fluctuation phenomena, herd behavior, contrarian behavior, hedge behavior, cooperation, business cycles, partial information, risk management, and stock prediction. Experimental econophysics and empirical econophysics are the two branches of the field of econophysics. The latter has been extensively discussed in existing books, while the former has seldom been touched; in this book, the author therefore focuses on the experimental branch. Empirical econophysics is based on the analysis of data from real markets using statistical tools borrowed from traditional statistical physics. By contrast, inspired by the role of controlled experiments and system modelling (for computer simulations and/or analytical theory) in the development of modern physics, experimental econophysics relies on controlled human experiments in the laboratory (producing data for analysis) together with agent-based modelling (for computer simulations and/or analytical theory), with the aim of revealing general cause-effect relationships between specific parameters and emergent properties of real economic and financial markets. This book covers the basic concepts, experimental methods, modelling approaches, and latest progress in the field of experimental econophysics.
Finite-dimensional optimization problems occur throughout the mathematical sciences. The majority of these problems cannot be solved analytically. This introduction to optimization attempts to strike a balance between presentation of mathematical theory and development of numerical algorithms. Building on students' skills in calculus and linear algebra, the text provides a rigorous exposition without undue abstraction. Its stress on statistical applications will be especially appealing to graduate students of statistics and biostatistics. The intended audience also includes students in applied mathematics, computational biology, computer science, economics, and physics who want to see rigorous mathematics combined with real applications. In this second edition the emphasis remains on finite-dimensional optimization. New material has been added on the MM algorithm, block descent and ascent, and the calculus of variations. Convex calculus is now treated in much greater depth. Advanced topics such as the Fenchel conjugate, subdifferentials, duality, feasibility, alternating projections, projected gradient methods, exact penalty methods, and Bregman iteration will equip students with the essentials for understanding modern data mining techniques in high dimensions.
Hybrid Optimization focuses on the application of artificial intelligence and operations research techniques to constraint programming for solving combinatorial optimization problems. This book covers the most relevant topics investigated in the last ten years by leading experts in the field, and speculates about future directions for research. This book includes contributions by experts from different but related areas of research, including constraint programming, decision theory, operations research, SAT, and artificial intelligence, among others. These diverse perspectives are actively combined and contrasted in order to evaluate their relative advantages. This volume presents techniques for hybrid modeling, integrated solving strategies including global constraints, decomposition techniques, use of relaxations, and search strategies including tree search, local search and metaheuristics. Various applications of the techniques presented as well as supplementary computational tools are also discussed.
This work is a revised and enlarged edition of a book with the same title published in Romanian by the Publishing House of the Romanian Academy in 1989. It grew out of lecture notes for a graduate course given by the author at the University of Iași and was initially intended for students and readers primarily interested in applications of optimal control of ordinary differential equations. In this vision the book had to contain an elementary description of the Pontryagin maximum principle and a large number of examples and applications from various fields of science. The evolution of control science in the last decades has shown that its methods and tools are drawn from a large spectrum of mathematical results which go beyond the classical theory of ordinary differential equations and real analysis. Mathematical areas such as functional analysis, topology, partial differential equations and infinite dimensional dynamical systems, and geometry have played and will continue to play an increasing role in the development of the control sciences. On the other hand, control problems are a rich source of deep mathematical problems. Any presentation of control theory which, for the sake of accessibility, ignores these facts is incomplete and unable to attain its goals. This is the reason we considered it necessary to widen the initial perspective of the book and to include a rigorous mathematical treatment of optimal control theory of processes governed by ordinary differential equations and some typical problems from the theory of distributed parameter systems.
This book covers algorithms and discretization procedures for the solution of nonlinear programming, semi-infinite optimization and optimal control problems. Among its important features are: the theory of algorithms represented as point-to-set maps; the treatment of min-max problems with and without constraints; the theory of consistent approximation, which provides a framework for the solution of semi-infinite optimization, optimal control, and shape optimization problems with very general constraints, using simple algorithms that call standard nonlinear programming algorithms as subroutines; the completeness with which algorithms are analysed; and Chapter 5, which collects mathematical results needed in optimization from a large assortment of sources. Readers will find of particular interest the exhaustive modern treatment of optimality conditions and algorithms for min-max problems, as well as the newly developed theory of consistent approximations and the treatment of semi-infinite optimization and optimal control problems in this framework. This book presents the first treatment of optimization algorithms for optimal control problems with state-trajectory and control constraints, and fully accounts for all the approximations that one must make in their solution. It is also the first to make use of the concepts of epi-convergence and optimality functions in the construction of consistent approximations to infinite-dimensional problems.
This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic control problems and Markov decision processes. The algorithms developed represent a valuable contribution to the important field of computational network theory.
Controlled stochastic processes with discrete time form a very interesting and meaningful field of research which attracts widespread attention. At the same time these processes are used for solving many applied problems in queueing theory, in mathematical economics, in the theory of controlled technical systems, and so on. In this connection, methods of the theory of controlled processes constitute the everyday instrument of many specialists working in the areas mentioned. The present book is devoted to a rather new area, namely optimal control theory with functional constraints. This theory is close to the theory of multicriteria optimization. The compromise between mathematical rigor and a large number of meaningful examples makes the book attractive for professional mathematicians and for specialists who apply mathematical methods to different specific problems. Besides, the book contains the setting of many new interesting problems for further investigation. The book can form the basis of special courses in the theory of controlled stochastic processes for students and postgraduates specializing in applied mathematics and in the control theory of complex systems. The grounding of graduating students of a mathematics department is sufficient for a complete understanding of all the material. The book contains an extensive Appendix where the necessary knowledge in Borel spaces and in convex analysis is collected. All the meaningful examples can also be understood by readers who are not deeply grounded in mathematics.
Recently, a great deal of progress has been made in the modeling and understanding of processes with nonlinear dynamics, even when only time series data are available. Modern reconstruction theory deals with creating nonlinear dynamical models from data and is at the heart of this improved understanding. Most of the work has been done by dynamicists, but for the subject to reach maturity, statisticians and signal processing engineers need to provide input both to the theory and to the practice. The book brings together different approaches to nonlinear time series analysis in order to begin a synthesis that will lead to better theory and practice in all the related areas. This book describes the state of the art in nonlinear dynamical reconstruction theory. The chapters are based upon a workshop held at the Isaac Newton Institute, Cambridge University, UK, in late 1998, and present theory and methods by leading researchers in applied and theoretical nonlinear dynamics, statistics, probability, and systems theory. Features and topics: * disentangling uncertainty and error: the predictability of nonlinear systems * achieving good nonlinear models * delay reconstructions: dynamics vs. statistics * introduction to Monte Carlo methods for Bayesian data analysis * latest results in extracting dynamical behavior via Markov models * data compression, dynamics and stationarity. Professionals, researchers, and advanced graduates in nonlinear dynamics, probability, optimization, and systems theory will find the book a useful resource and guide to current developments in the subject.
Studies in generalized convexity and generalized monotonicity have significantly increased during the last two decades. Researchers with very diverse backgrounds, for example in mathematical programming, optimization theory, convex analysis, nonlinear analysis, nonsmooth analysis, linear algebra, probability theory, variational inequalities, game theory, economic theory, engineering, management science, and equilibrium analysis, are attracted to this fast-growing field of study. Such enormous research activity is partially due to the discovery of a rich, elegant and deep theory which provides a basis for interesting existing and potential applications in different disciplines. The handbook offers an advanced and broad overview of the current state of the field. It contains fourteen chapters written by leading experts on the respective subjects: eight on generalized convexity and the remaining six on generalized monotonicity.