Predation is an ecological factor of almost universal importance for the biologist who aims at an understanding of the habits and structures of animals. Despite its pervasive nature, opinions differ as to what predation really is. So far it has been defined only in negative terms; it is thought not to be parasitism, the other great process by which one organism harms another, nor filter-feeding, carrion-eating, or browsing. Accordingly, one could define predation as a process by which an animal spends some effort to locate a live prey and, in addition, spends another effort to mutilate or kill it. According to this usage of the word, a nudibranch, for example, that feeds on hydroids would be a predator inasmuch as it needs some time to locate colonies of its prey which, after being located, scarcely demand more than eating, which differs little from browsing. From the definition just proposed, consumption of the prey following its capture has been intentionally omitted. Indeed, an animal may be disposed of without being eaten. Hence the biological significance of predation may be more than to maintain nutritional homeostasis. In fact, predation may have something in common with the more direct forms of competition, a facet that will be only cursorily touched upon in this book.
This book was originally conceived as a continuation in theme of the collective monograph Limits of Predictability (Yu. A. Kravtsov, Ed., Springer Series in Synergetics, Vol. 60, Springer-Verlag, Heidelberg, 1993). The main thrust of that book was to examine the various effects and factors (system non-stationarity, measurement noise, predictive model accuracy, and so on) that may limit, in a fundamental fashion, our ability to mathematically predict physical and man-made phenomena and events. Particularly interesting was the diversity of fields from which the papers and examples were drawn, including climatology, physics, biophysics, cybernetics, synergetics, sociology, and ethnogenesis. Twelve prominent Russian scientists, and one American (Prof. A. J. Lichtman), discussed their philosophical and scientific standpoints on the problem of the limits of predictability in their various fields. During the preparation of that book, the editor (Yu. A. K.) had the great pleasure of interacting with world-renowned Russian scientists such as oceanologist A. S. Monin, geophysicist V. I. Keilis-Borok, sociologist I. V. Bestuzhev-Lada, and historian L. N. Gumilev, to name a few. Dr. Angela M. Lahee, managing editor of the Synergetics Series at Springer, was enormously helpful in the publishing of that book. In 1992, Prof. H. Haken along with Dr. Lahee kindly supported the idea of publishing a second volume on the theme of nonlinear system predictability, this time with a more international flavor.
The developments within the computationally and numerically oriented areas of Operations Research, Finance, Statistics, and Economics have been significant over the past few decades. Each area has been developing its own computer systems and languages that suit its needs, but there is as yet relatively little cross-fertilization among them. This volume contains a collection of papers, each highlighting a particular system, language, model, or paradigm from one of the computational disciplines, aimed at researchers and practitioners from the other fields. The 15 papers cover a number of relevant topics: models and modelling in Operations Research and Economics; novel high-level and object-oriented approaches to programming; advanced uses of Maple and MATLAB; and applications and solution of differential equations in Finance. It is hoped that the material in this volume will whet the reader's appetite for discovering and exploring new approaches to old problems, and in the longer run facilitate cross-fertilization among the fields. We would like to thank the contributing authors, the reviewers, the publisher, and, last but not least, Jesper Saxtorph, Anders Nielsen, and Thomas Stidsen for invaluable technical assistance.
The finite-dimensional nonlinear complementarity problem (NCP) is a system of finitely many nonlinear inequalities in finitely many nonnegative variables along with a special equation that expresses the complementary relationship between the variables and corresponding inequalities. This complementarity condition is the key feature distinguishing the NCP from a general inequality system, lies at the heart of all constrained optimization problems in finite dimensions, provides a powerful framework for the modeling of equilibria of many kinds, and exhibits a natural link between smooth and nonsmooth mathematics. The finite-dimensional variational inequality (VI), which is a generalization of the NCP, provides a broad unifying setting for the study of optimization and equilibrium problems and serves as the main computational framework for the practical solution of a host of continuum problems in the mathematical sciences. The systematic study of the finite-dimensional NCP and VI began in the mid-1960s; in a span of four decades, the subject has developed into a very fruitful discipline in the field of mathematical programming. The developments include a rich mathematical theory, a host of effective solution algorithms, a multitude of interesting connections to numerous disciplines, and a wide range of important applications in engineering and economics. As a result of their broad associations, the literature of the VI/CP has benefited from contributions made by mathematicians (pure, applied, and computational), computer scientists, engineers of many kinds (civil, chemical, electrical, mechanical, and systems), and economists of diverse expertise (agricultural, computational, energy, financial, and spatial).
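For readers unfamiliar with the formulation, the complementarity condition described above can be stated in standard notation (the textbook definition, not a quotation from this volume): given a mapping F from R^n to R^n, the NCP asks for a vector x satisfying

```latex
x \ge 0, \qquad F(x) \ge 0, \qquad x^{\mathsf{T}} F(x) = 0 .
```

Componentwise, x_i > 0 forces F_i(x) = 0 and F_i(x) > 0 forces x_i = 0, which is exactly the complementary relationship between variables and inequalities; the VI generalizes this to finding x in a closed convex set K such that (y - x)^T F(x) >= 0 for all y in K.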
Complementarity theory is a new domain in applied mathematics concerned with the study of complementarity problems. These problems represent a wide class of mathematical models related to optimization, game theory, economic engineering, mechanics, fluid mechanics, stochastic optimal control, etc. The book is dedicated to the study of nonlinear complementarity problems by topological methods. Audience: mathematicians, engineers, economists, specialists working in operations research, and anybody interested in applied mathematics or in mathematical modeling.
STATISTICAL PHYSICS AND ECONOMICS covers systematically and in simple language the physical foundations of evolution equations, stochastic processes, and generalized Master equations applied to complex economic systems. Strong emphasis is placed on concepts, methods, and techniques for modeling, assessing, and solving or estimating economic problems, in an attempt to understand the large variability of financial markets, trading and communication networks, barriers to and acceleration of economic growth, as well as the kinetics of product and money flows. The main focus of the book is a clear physical understanding of the self-organizing principles in social and economic systems. This modern introduction will be a useful tool for researchers and engineers, as well as graduate and post-graduate students in econophysics and related topics.
The English edition differs only slightly from the Russian original. The main structural difference is that all the material on the theory of finite noncooperative games has been collected in Chapter 2, with renumbering of the material of the remaining chapters. New sections have been added in this chapter: subsections 3.9-3.17, by N.N. Vorob'ev, Jr., devoted to general questions of equilibrium theory in nondegenerate games; section 4, by A.G. Chernyakov; and section 5, by N.N. Vorob'ev, Jr., on the computational complexity of the process of finding equilibrium points in finite games. It should also be mentioned that subsections 3.12-3.14 in Chapter 1 were written by E.B. Yanovskaya especially for the Russian edition. The author regrets that the present edition does not reflect the important game-theoretical achievements presented in the splendid monographs by E. van Damme (on the refinement of equilibrium principles for finite games), as well as those by J.C. Harsanyi and R. Selten, and by W. Güth and B. Kalkofen (on equilibrium selection). When the Russian edition was being written, these directions in game theory had not yet attained their final form, which appeared only in quite recent monographs; the present author has had to resist the temptation of attempting to produce an elementary exposition of the new theories for the English edition; readers of this edition will find only brief mention of the new material.
This handbook provides an in-depth examination of important theoretical methods and procedures in applied analysis. It details many of the most important theoretical trends in nonlinear analysis and applications to different fields. These features make the volume a valuable tool for every researcher working on nonlinear analysis.
The theory of dynamic games is very rich in nature and very much alive. If the reader does not already agree with this statement, I hope he/she will surely do so after having consulted the contents of the current volume. The activities which fall under the heading of 'dynamic games' cannot easily be put into one scientific discipline. On the theoretical side one deals with differential games and difference games (the underlying models are described by differential and difference equations, respectively) and games based on Markov chains, with deterministic and stochastic games, zero-sum and nonzero-sum games, two-player and many-player games, all under various forms of equilibria. On the practical side, one sees applications to economics (stimulated by the recent Nobel prize for economics, which went to three prominent scientists in game theory), biology, management science, and engineering. The contents of this volume are primarily based on selected presentations made at the Sixth International Symposium on Dynamic Games and Applications, held in St Jovite, Quebec, Canada, 13-15 July 1994. Every paper that appears in this volume has passed through a stringent reviewing process, as is the case with publications for archival technical journals. This conference, as well as its predecessor, which was held in Grimentz in 1992, took place under the auspices of the International Society of Dynamic Games (ISDG), established in 1990. One of the activities of the ISDG is the publication of these Annals. The contributions in this volume have been grouped around five themes.
There are problems to whose solution I would attach an infinitely greater importance than to those of mathematics, for example touching ethics, or our relation to God, or concerning our destiny and our future; but their solution lies wholly beyond us and completely outside the province of science. (J. F. C. Gauss) For all his prescience in matters physical and mathematical, the great Gauss apparently did not foresee one development peculiar to our own time. The development I have in mind is the use of mathematical reasoning, in particular the axiomatic method, to explicate alternative concepts of rationality and morality. The present bipartite collection of essays (Vol. 11, Nos. 2 and 3 of this journal) is entitled 'Game Theory, Social Choice, and Ethics'. The eight papers represent state-of-the-art research in formal moral theory. Their intended aim is to demonstrate how the methods of game theory, decision theory, and axiomatic social choice theory can help to illuminate ethical questions central not only to moral theory, but also to normative public policy analysis. Before discussion of the contents of the papers, it should prove helpful to recall a number of pioneering papers that appeared during the decade of the 1950s. These papers contained a series of mathematical and conceptual breakthroughs which laid the basis for much of today's research in formal moral theory. The papers deal with two somewhat distinct topics: the concept of individual and collective rationality, and the concept of social justice.
This book constitutes the refereed proceedings of the Third International Conference on Decision and Game Theory for Security, GameSec 2012, held in Budapest, Hungary, in November 2012. The 18 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on secret communications, identification of attackers, multi-step attacks, network security, system defense, and applications security.
Our objectives may be briefly stated. They are two. First, we have sought to provide a compact and digestible exposition of some sub-branches of mathematics which are of interest to economists but which are underplayed in mathematical texts and dispersed in the journal literature. Second, we have sought to demonstrate the usefulness of the mathematics by providing a systematic account of modern neoclassical economics, that is, of those parts of economics from which jointness in production has been excluded. The book is introductory not in the sense that it can be read by any high-school graduate but in the sense that it provides some of the mathematics needed to appreciate modern general-equilibrium economic theory. It is aimed primarily at first-year graduate students and final-year honors students in economics who have studied mathematics at the university level for two years and who, in particular, have mastered a full-year course in analysis and calculus. The book is the outcome of a long correspondence punctuated by periodic visits by Kimura to the University of New South Wales. Without those visits we would never have finished. They were made possible by generous grants from the Leverhulme Foundation, Nagoya City University, and the University of New South Wales. Equally indispensable were the expert advice and generous encouragement of our friends Martin Beckmann, Takashi Negishi, Ryuzo Sato, and Yasuo Uekawa.
This volume records the proceedings of the 22nd Annual International Conference of the International Simulation and Gaming Association (ISAGA), 15-19 July 1991, Kyoto, Japan, sponsored by the Science Council of Japan and the Japanese Association of Simulation and Gaming (JASAG). The conference theme was Global Modeling for Solving Global Problems. The first 2 days of the conference were held in the magnificent Kyoto International Conference Hall; the 3rd day was spent admiring the floats of the famous Gion Festival in the exquisite city of Kyoto and the Daibutsu (or Great Buddha) of the Todaiji Temple in Nara, and visiting one of the Sharp factories. During the last 2 days of the conference we were made most welcome in the Faculty of International Relations of Ritsumeikan University. The day after the conference, a number of delegates went to Hiroshima (the Peace Memorial Hall, Museum and Park) and also to one of Japan's "Scenic Trio," the island of Miyajima, with its breathtaking views and the Itsukushima Shrine. The conference was attended by some 400 delegates from over 30 different countries. Over 100 sessions, both theoretical and practical, were given: keynote speeches, round-table discussions, workshops, and papers. This volume reflects most of those sessions, in the form of either a full paper or a short abstract.
During the last decade I have explored the consequences of what I have chosen to call the 'consistent preferences' approach to deductive reasoning in games. To a great extent this work has been done in cooperation with my co-authors Martin Dufwenberg, Andres Perea, and Ylva Sovik, and it has led to a series of journal articles. This book presents the results of this research program. Since the present format permits a more extensive motivation for and presentation of the analysis, it is my hope that the content will be of interest to a wider audience than the corresponding journal articles can reach. In addition to active researchers in the field, it is intended for graduate students and others who wish to study epistemic conditions for equilibrium and rationalizability concepts in game theory. Structure of the book: this book consists of twelve chapters. The main interactions between the chapters are illustrated in Table 0.1. As Table 0.1 indicates, the chapters can be organized into four different parts. Chapters 1 and 2 motivate the subsequent analysis by introducing the 'consistent preferences' approach, and by presenting examples and concepts that are revisited throughout the book. Chapters 3 and 4 present the decision-theoretic framework and the belief operators that are used in later chapters. Chapters 5, 6, 10, and 11 analyze games in the strategic form, while the remaining chapters (Chapters 7, 8, 9, and 12) are concerned with games in the extensive form.
Birkhäuser Boston, Inc., will publish a series of carefully selected monographs in the area of mathematical modeling to present serious applications of mathematics for both the undergraduate and the professional audience. Some of the monographs to be selected and published will appeal more to the professional mathematician and user of mathematics, serving to familiarize the user with new models and new methods. Some, like the present monograph, will stress the educational aspect and will appeal more to a student audience, either as a textbook or as additional reading. We feel that this first volume in the series may in itself serve as a model for our program. Samuel Goldberg attaches a high priority to teaching students the art of modeling, that is, to use his words, the art of constructing useful mathematical models of real-world phenomena. We concur. It is our strong conviction as editors that the connection between the actual problems and their mathematical models must be factually plausible, if not actually real. As this first volume in the new series goes to press, we invite its readers to share with us both their criticisms and their constructive suggestions.
Econometric theory, as presented in textbooks and the econometric literature generally, is a somewhat disparate collection of findings. In essence it is a set of demonstrated results that accumulates over time, each logically based on a specific set of axioms or assumptions; yet at any given moment these results form an incomplete body of knowledge rather than a finished work. The practice of econometric theory consists of selecting from, applying, and evaluating this literature, so as to test its applicability and range. The creation, development, and use of computer software has led applied economic research into a new age. This book describes the history of econometric computation from 1950 to the present day, based upon an interactive survey involving the collaboration of the many econometricians who have designed and developed this software. It identifies each of the econometric software packages that are made available to and used by economists and econometricians worldwide.
Many boundary value problems are equivalent to

Au = 0    (1)

where A: X → Y is a mapping between two Banach spaces. When the problem is variational, there exists a differentiable functional φ and an element e ∈ X such that ‖e‖ > r and inf …
Since the first Congress in Zurich in 1897, the ICM has been an eagerly awaited event every four years. Many of these occasions are celebrated for historic developments and seminal contributions to mathematics. 2002 marks the year of the 24th ICM, the first of the new millennium. Also historic is the first ICM Satellite Conference devoted to game theory and applications. It is one of those rare occasions in which masters of the field are able to meet under congenial surroundings to talk and share their gathered wisdom. As is usually the case in ICM meetings, participants of the ICM Satellite Conference on Game Theory and Applications (Qingdao, August 2002) hailed from the four corners of the world. In addition to presentations of high-quality research, the program also included twelve invited plenary sessions with distinguished speakers. This volume, which gathers together selected papers read at the conference, is divided into four sections: (I) Foundations, Concepts, and Structure. (II) Equilibrium Properties. (III) Applications to the Natural and Social Sciences. (IV) Computational Aspects of Games.
This book constitutes the refereed proceedings of the 18th International Symposium Fundamentals of Computation Theory, FCT 2011, held in Oslo, Norway, in August 2011. The 28 revised full papers presented were carefully reviewed and selected from 78 submissions. FCT 2011 focused on algorithms, formal methods, and emerging fields, such as ad hoc, dynamic and evolving systems; algorithmic game theory; computational biology; foundations of cloud computing and ubiquitous systems; and quantum computation.
Max-Min problems are two-step allocation problems in which one side must make his move knowing that the other side will then learn what the move is and optimally counter. They are fundamental in particular to military weapons-selection problems involving large systems such as Minuteman or Polaris, where the systems in the mix are so large that they cannot be concealed from an opponent. One must then expect the opponent to determine an optimal mixture of, in the case mentioned above, anti-Minuteman and anti-submarine effort. The author's first introduction to a problem of Max-Min type occurred at The RAND Corporation about 1951. One side allocates anti-missile defenses to various cities. The other side observes this allocation and then allocates missiles to those cities. If F(x, y) denotes the total residual value of the cities after the attack, with x denoting the defender's strategy and y the attacker's, the problem is then to find Max_x Min_y F(x, y).
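As a minimal numerical sketch of this formulation (the payoff matrix below is hypothetical, not drawn from the book), the finite Max-Min problem can be solved by computing the attacker's best response to each defender strategy and then picking the defense whose worst case is best:

```python
import numpy as np

# F[i, j] = total residual value of the cities when the defender plays
# allocation i and the attacker, having observed i, plays allocation j.
# (Illustrative numbers only.)
F = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 5.0, 2.0],
    [3.0, 3.0, 1.0],
])

# The attacker minimizes over columns; the defender maximizes over rows.
row_minima = F.min(axis=1)            # attacker's best response to each defense
best_defense = int(row_minima.argmax())
max_min_value = float(row_minima.max())

print(best_defense, max_min_value)    # -> 1 2.0
```

For continuous strategy spaces the same logic becomes the max over x of the min over y of F(x, y), which is the form studied in the text.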
Groups of people perform acts that are subject to standards of rationality. A committee may sensibly award fellowships, or may irrationally award them in violation of its own policies. A theory of collective rationality defines collective acts that are evaluable for rationality and formulates principles for their evaluation. This book argues that a group's act is evaluable for rationality if it is the product of acts its members fully control. It also argues that such an act is collectively rational if the acts of the group's members are rational. Efficiency is a goal of collective rationality, but not a requirement, except in cases where conditions are ideal for joint action and agents have rationally prepared for joint action. The people engaged in a game of strategy form a group, and the combination of their acts yields a collective act. If their collective act is rational, it constitutes a solution to their game. A theory of collective rationality yields principles concerning solutions to games. One principle requires that a solution constitute an equilibrium among the incentives of the agents in the game. In a cooperative game some agents are coalitions of individuals, and it may be impossible for all agents to pursue all incentives. Because rationality is attainable, the appropriate equilibrium standard for cooperative games requires that agents pursue only incentives that provide sufficient reasons to act. The book's theory of collective rationality supports an attainable equilibrium standard for solutions to cooperative games and shows that its realization follows from individuals' rational acts. By extending the theory of rationality to groups, this book reveals the characteristics that make an act evaluable for rationality and the way rationality's evaluation of an act responds to the type of control its agent exercises over the act.
The book's theory of collective rationality contributes to philosophical projects such as contractarian ethics and to practical projects such as the design of social institutions.
Changing interest rates constitute one of the major risk sources for banks, insurance companies, and other financial institutions. Modeling the term-structure movements of interest rates is a challenging task. This volume gives an introduction to the mathematics of term-structure models in continuous time. It includes practical aspects for fixed-income markets such as day-count conventions, duration of coupon-paying bonds, and yield curve construction; arbitrage theory; short-rate models; the Heath-Jarrow-Morton methodology; consistent term-structure parametrizations; affine diffusion processes and option pricing with Fourier transform; LIBOR market models; and credit risk. The focus is on a mathematically straightforward but rigorous development of the theory. Students, researchers, and practitioners will find this volume very useful. Each chapter ends with a set of exercises that provide a source for homework and exam questions. Readers are expected to be familiar with elementary Itô calculus, basic probability theory, and real and complex analysis.
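As a small, self-contained illustration of two of the fixed-income basics listed above (a sketch under a flat continuously compounded yield; the function name and the inputs are ours, not the book's):

```python
import math

def bond_price_and_duration(coupon_rate, face, maturity, y, freq=1):
    """Price and Macaulay duration of a coupon bond under a flat
    continuously compounded yield y (coupon_rate is annual)."""
    times = [t / freq for t in range(1, int(maturity * freq) + 1)]
    cashflows = [coupon_rate / freq * face] * len(times)
    cashflows[-1] += face                         # redemption payment at maturity
    pvs = [c * math.exp(-y * t) for c, t in zip(cashflows, times)]
    price = sum(pvs)
    # Macaulay duration: present-value-weighted average time of the cashflows.
    duration = sum(t * pv for t, pv in zip(times, pvs)) / price
    return price, duration

price, dur = bond_price_and_duration(coupon_rate=0.05, face=100.0, maturity=5, y=0.04)
print(round(price, 2), round(dur, 2))
```

Real markets complicate both quantities through day-count conventions and a non-flat yield curve, which is where the book's systematic treatment begins.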
This work is an exploration of global market dynamics: their intrinsic natures, common trends, and dynamic interlinkages during the stock market crises of the last twelve years. The study isolates different phases of crisis and differentiates between crises that remain confined to a region and those that take on a global dimension. The latent structure of the global stock market and the inter-regional and intra-regional stock market dynamics around the crises are analyzed to obtain a complete picture of the structure of the global stock market. Probing further into the inherent crisis-generating nature of the global stock market, the study finds the market to be chaotic, making the system intrinsically unstable or, at best, subject to knife-edge stability. The findings have significant bearing at the theoretical level and on policy decisions.
Since the introduction of electrosurgery, the techniques of surgery on the nervous system have passed through further improvements (bipolar coagulation, the microscope), even if the procedure was not substantially modified. Today, the laser represents a new "discipline," as it offers a new way of performing all basic maneuvers (dissection, demolition, hemostasis, vessel sutures). Furthermore, the laser offers the possibility of a special maneuver, namely reduction of the volume of a tumoral mass through vaporization. Its application is not restricted to traditional neurosurgery but extends also to stereotactic and vascular neurosurgery. Laser surgery has also influenced anesthesiologic techniques. At the same time new instrumentation has been introduced: CUSA ultrasonic aspiration, echotomography, and the Doppler flowmeter. I have had the chance to utilize these new technologies all at the same time and have come to the conclusion that we are facing the dawn of a new methodology which has already shown its validity and lack of inconveniences, and whose object is to increase the precision of neurological surgery. The technological development is still going on, and some improvements are to be foreseen. The laser scalpel is splitting the initial laser surgery into NO TOUCH and TOUCH surgery with laser. As new instrumentarium is developed, a variable and tunable beam will become available. For example, in a few years the Free Electron Laser will further add to the progress in this field.