Contents:

Nevanlinna-Pick interpolation for time-varying input-output maps: the discrete case. 0. Introduction. 1. Preliminaries. 2. J-unitary operators on ℓ². 3. Time-varying Nevanlinna-Pick interpolation. 4. Solution of the time-varying tangential Nevanlinna-Pick interpolation problem. 5. An illustrative example. References.

Nevanlinna-Pick interpolation for time-varying input-output maps: the continuous time case. 0. Introduction. 1. Generalized point evaluation. 2. Bounded input-output maps. 3. Residue calculus and diagonal expansion. 4. J-unitary and J-inner operators. 5. Time-varying Nevanlinna-Pick interpolation. 6. An example. References.

Dichotomy of systems and invertibility of linear ordinary differential operators. 1. Introduction. 2. Preliminaries. 3. Invertibility of differential operators on the real line. 4. Relations between operators on the full line and half line. 5. Fredholm properties of differential operators on a half line. 6. Fredholm properties of differential operators on a full line. 7. Exponentially dichotomous operators. 8. References.

Inertia theorems for block weighted shifts and applications. 1. Introduction. 2. One-sided block weighted shifts. 3. Dichotomies for left systems and two-sided systems. 4. Two-sided block weighted shifts. 5. Asymptotic inertia. 6. References.

Interpolation for upper triangular operators. 1. Introduction. 2. Preliminaries. 3. Colligations & characteristic functions. 4. Towards interpolation. 5. Explicit formulas for ?. 6. Admissibility and more on general interpolation. 7. Nevanlinna-Pick interpolation. 8. Caratheodory-Fejer interpolation. 9. Mixed interpolation problems. 10. Examples. 11. Block Toeplitz & some implications. 12. Varying coordinate spaces. 13. References.

Minimality and realization of discrete time-varying systems. 1. Preliminaries. 2. Observability and reachability. 3. Minimality for time-varying systems. 4. Proofs of the minimality theorems. 5. Realizations of infinite lower triangular matrices. 6. The class of systems with constant state space dimension. 7. Minimality and realization for periodical systems. References.
"Et moi, ..., si j'avais su comment en revenir, je n'y serais point allé." ("And I, ..., had I known how to come back, I would never have gone.") - Jules Verne

"One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'." - Eric T. Bell

"The series is divergent; therefore we may be able to do something with it." - O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.
This book is the result of the International Symposium on Semi-Markov Processes and their Applications, held June 4-7, 1984 at the Université Libre de Bruxelles with the help of the FNRS (Fonds National de la Recherche Scientifique, Belgium), the Ministère de l'Education Nationale (Belgium) and the Bernoulli Society for Mathematical Statistics and Probability. This international meeting was planned to survey the state of the art in semi-Markov theory and its applications, to bring together researchers in this field, and to create a platform for open and thorough discussion. The main themes of the Symposium form the first ten sections of this book. The last section gives an exhaustive bibliography on semi-Markov processes for the last ten years. The papers selected for this book are all invited papers, along with some contributed papers retained after strong refereeing. The sections are:

I. Markov additive processes and regenerative systems
II. Semi-Markov decision processes
III. Algorithmic and computer-oriented approach
IV. Semi-Markov models in economy and insurance
V. Semi-Markov processes and reliability theory
VI. Simulation and statistics for semi-Markov processes
VII. Semi-Markov processes and queueing theory
VIII. Branching
IX. Applications in medicine
X. Applications in other fields
XI. A second bibliography on semi-Markov processes

It is worth noting that sections IV to X represent a good sample of the main applications of semi-Markov processes.
Psychological Statistics: The Basics walks the reader through the core logic of statistical inference and provides a solid grounding in the techniques necessary to understand modern statistical methods in the psychological and behavioral sciences. The book is designed to be a readable account of the role of statistics in the psychological sciences. Rather than providing a comprehensive reference for statistical methods, it introduces the core procedures of estimation and model comparison, which together form the cornerstone of statistical inference in psychology and related fields. Instead of relying on statistical recipes, the book gives the reader the big picture and provides a seamless transition to more advanced methods, including Bayesian model comparison. Psychological Statistics: The Basics not only serves as an excellent primer for beginners, but is also a perfect refresher for graduate students, early career psychologists, or anyone else interested in seeing the big picture of statistical inference. Concise and conversational, its highly readable tone will engage any reader who wants to learn the basics of psychological statistics.
This book is concerned with important problems of robust (stable) statistical pattern recognition when hypothetical model assumptions about experimental data are violated (disturbed). Pattern recognition theory is the field of applied mathematics in which principles and methods are constructed for classification and identification of objects, phenomena, processes, situations, and signals, i.e., of objects that can be specified by a finite set of features, or properties characterizing the objects (Mathematical Encyclopedia (1984)). Two stages in the development of the mathematical theory of pattern recognition may be observed. At the first stage, until the middle of the 1970s, pattern recognition theory was replenished mainly from adjacent mathematical disciplines: mathematical statistics, functional analysis, discrete mathematics, and information theory. This development stage is characterized by successful solution of pattern recognition problems of different physical nature, but of the simplest form in the sense of the mathematical models used. One of the main approaches to solving pattern recognition problems is the statistical approach, which uses stochastic models of feature variables. Under the statistical approach, the first stage of pattern recognition theory development is characterized by the assumption that the probability data model is known exactly or is estimated from a representative sample of large size with negligible estimation errors (Das Gupta, 1973, 1977; Rey, 1978; Vasiljev, 1983).
The first edition, released in 1996, sold close to 2,200 copies. This edition provides an up-to-date, comprehensive treatment of multidimensional scaling (MDS), a statistical technique used to analyze the structure of similarity or dissimilarity data in multidimensional space. The authors have added three chapters and exercise sets. The text is being moved from SSS to SSPP. The book is suitable for courses in statistics for the social or managerial sciences as well as for advanced courses on MDS. All the mathematics required for the more advanced topics is developed systematically in the text.
This book offers a straightforward introduction to the mathematical theory of probability. It presents the central results and techniques of the subject in a complete and self-contained account. Accordingly, the emphasis is on giving results in simple forms with clear proofs, and on eschewing more powerful forms of theorems that require technically involved proofs. Throughout, a wide variety of exercises illustrate and develop the ideas in the text.
Discrete event simulation and agent-based modeling are increasingly recognized as critical for diagnosing and solving process issues in complex systems. Introduction to Discrete Event Simulation and Agent-based Modeling covers the techniques needed for success in all phases of simulation projects. These include:

* Definition - The reader will learn how to plan a project and communicate using a charter.
* Input analysis - The reader will discover how to determine defensible sample sizes for all needed data collections, and how to fit distributions to that data.
* Simulation - The reader will understand how simulation controllers work, the Monte Carlo (MC) theory behind them, modern verification and validation, and ways to speed up simulation using variance reduction techniques and other methods.
* Output analysis - The reader will be able to establish simultaneous intervals on key responses and apply selection and ranking, design of experiments (DOE), and black box optimization to develop defensible improvement recommendations.
* Decision support - Methods to inspire creative alternatives are presented, including lean production.

Over one hundred solved problems are provided, along with two full case studies, including one on voting machines that received international attention. Introduction to Discrete Event Simulation and Agent-based Modeling demonstrates how simulation can facilitate improvements on the job and in local communities. It allows readers to competently apply technology considered key in many industries and branches of government. It is suitable for undergraduate and graduate students, as well as researchers and other professionals.
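As a minimal sketch (ours, not an excerpt from the book) of the kind of Monte Carlo discrete event simulation the blurb describes, the following Python fragment simulates an M/M/1 queue via the Lindley recursion; the function name and parameters are illustrative assumptions:

```python
import random
import statistics

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=0):
    """Discrete event simulation of an M/M/1 queue (Lindley recursion).

    Returns the average time a customer spends in the system; for a
    stable queue this estimates 1 / (service_rate - arrival_rate).
    """
    rng = random.Random(seed)
    t = 0.0               # current arrival time
    server_free = 0.0     # time at which the server next becomes free
    times_in_system = []
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)        # next Poisson arrival
        start = max(t, server_free)               # wait if server is busy
        server_free = start + rng.expovariate(service_rate)
        times_in_system.append(server_free - t)   # waiting + service time
    return statistics.fmean(times_in_system)

# With arrival rate 0.5 and service rate 1.0, queueing theory gives a
# mean time in system of 1 / (1.0 - 0.5) = 2.0; the estimate should
# be close for a long run.
mean_w = simulate_mm1(0.5, 1.0, 20000)
```

Verifying the estimate against the closed-form answer is exactly the kind of validation step the book's "Simulation" phase refers to.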
Harmonic analysis and probability have long enjoyed a mutually beneficial relationship that has been rich and fruitful. This monograph, aimed at researchers and students in these fields, explores several aspects of this relationship. The primary focus of the text is the nontangential maximal function and the area function of a harmonic function and their probabilistic analogues in martingale theory. The text first gives the requisite background material from harmonic analysis and discusses known results concerning the nontangential maximal function and area function, as well as the central and essential role these have played in the development of the field. The book next discusses further refinements of traditional results: among these are sharp good-lambda inequalities and laws of the iterated logarithm involving nontangential maximal functions and area functions. Many applications of these results are given. Throughout, the constant interplay between probability and harmonic analysis is emphasized and explained. The text contains some new and many recent results combined in a coherent presentation.
Sample data alone never suffice to draw conclusions about populations. Inference always requires assumptions about the population and sampling process. Statistical theory has revealed much about how the strength of assumptions affects the precision of point estimates, but has had much less to say about how it affects the identification of population parameters. Indeed, it has been commonplace to think of identification as a binary event - a parameter is either identified or not - and to view point identification as a precondition for inference. Yet there is enormous scope for fruitful inference using data and assumptions that partially identify population parameters. This book explains why and shows how. The book presents in a rigorous and thorough manner the main elements of Charles Manski's research on partial identification of probability distributions. One focus is prediction with missing outcome or covariate data. Another is decomposition of finite mixtures, with application to the analysis of contaminated sampling and ecological inference. A third major focus is the analysis of treatment response. Whatever the particular subject under study, the presentation follows a common path. The author first specifies the sampling process generating the available data and asks what may be learned about population parameters using the empirical evidence alone. He then asks how the (typically) set-valued identification regions for these parameters shrink if various assumptions are imposed. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. Conservative nonparametric analysis enables researchers to learn from the available data without imposing untenable assumptions. It enables establishment of a domain of consensus among researchers who may hold disparate beliefs about what assumptions are appropriate. Charles F. Manski is Board of Trustees Professor at Northwestern University.
He is author of Identification Problems in the Social Sciences and Analog Estimation Methods in Econometrics. He is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and the Econometric Society.
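As an illustration (ours, not from the book) of a worst-case identification region for a mean with missing outcome data - one of the central examples of partial identification - the following Python sketch computes the bounds using the empirical evidence alone; function and variable names are our own:

```python
import numpy as np

def manski_bounds(y_obs, n_missing, y_min, y_max):
    """Worst-case bounds on a population mean when some outcomes are
    missing and the outcome is known to lie in [y_min, y_max].

    Uses the empirical evidence alone: no assumption is made about
    why observations are missing. The missing outcomes are set to
    y_min for the lower bound and y_max for the upper bound."""
    n = len(y_obs) + n_missing
    p_obs = len(y_obs) / n          # observed share of the sample
    m = float(np.mean(y_obs))       # mean among observed outcomes
    lower = m * p_obs + y_min * (1 - p_obs)
    upper = m * p_obs + y_max * (1 - p_obs)
    return lower, upper

# 80 observed outcomes averaging 0.6 on [0, 1], 20 missing:
lo, hi = manski_bounds(np.full(80, 0.6), 20, 0.0, 1.0)
# lower = 0.6 * 0.8 + 0.0 * 0.2 = 0.48
# upper = 0.6 * 0.8 + 1.0 * 0.2 = 0.68
```

Imposing assumptions about the missing-data mechanism (e.g. missingness at random) shrinks this interval, which is the path the book's presentation follows.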
The concept of conditional specification of distributions is not new but, except in normal families, it has not been well developed in the literature. Computational difficulties undoubtedly hindered or discouraged developments in this direction. However, such roadblocks are of diminished importance today. Questions of compatibility of conditional and marginal specifications of distributions are of fundamental importance in modeling scenarios. Models with conditionals in exponential families are particularly tractable and provide useful models in a broad variety of settings.
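A small numerical illustration (our own sketch, not from the book) of the compatibility question: two candidate families of conditionals on a finite grid with full support arise from a single joint distribution exactly when their elementwise ratio factorizes as u(i)·v(j), i.e. has rank 1 - the Arnold-Press compatibility condition. The function name is illustrative:

```python
import numpy as np

def compatible(A, B, tol=1e-8):
    """Compatibility check for candidate conditionals with full support.

    A[i, j] = P(X = i | Y = j), so each column of A sums to 1.
    B[i, j] = P(Y = j | X = i), so each row of B sums to 1.
    A and B come from one joint distribution iff the elementwise
    ratio A / B has rank 1 (Arnold-Press condition)."""
    return np.linalg.matrix_rank(A / B, tol=tol) == 1

# Conditionals computed from an actual joint distribution are compatible.
P = np.array([[0.1, 0.2],
              [0.3, 0.4]])
A = P / P.sum(axis=0)                   # P(X | Y): normalize columns
B = P / P.sum(axis=1, keepdims=True)    # P(Y | X): normalize rows
```

Pairs of conditional matrices written down independently will generally fail this check, which is exactly why compatibility is a substantive modeling question.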
The past several years have seen the creation and extension of a very conclusive theory of statistics and probability. Many of the research workers who have been concerned with both probability and statistics felt the need for meetings that provide an opportunity for personal contacts among scholars whose fields of specialization cover broad spectra in both statistics and probability: to discuss major open problems and new solutions, to provide encouragement for further research through the lectures of carefully selected scholars, and moreover to introduce younger colleagues to the latest research techniques and thus stimulate their interest in research. To meet these goals, the series of Pannonian Symposia on Mathematical Statistics was organized, beginning in the year 1979: the first, second and fourth in Bad Tatzmannsdorf, Burgenland, Austria, the third and fifth in Visegrád, Hungary. The Sixth Pannonian Symposium was held in Bad Tatzmannsdorf again, between 14 and 20 September 1986, under the auspices of Dr. Heinz FISCHER, Federal Minister of Science and Research, Theodor KERY, President of the State Government of Burgenland, Dr. Franz SAUERZOPF, Vice-President of the State Government of Burgenland, and Dr. Josef SCHMIDL, President of the Austrian Statistical Central Office. The members of the Honorary Committee were Pál ERDŐS, Władysław ORLICZ, Pál RÉVÉSZ, Leopold SCHMETTERER and István VINCZE; those of the Organizing Committee were Wilfried GROSSMANN (University of Vienna), Franz KONECNY (University of Agriculture of Vienna) and, as the chairman, Wolfgang WERTZ (Technical University of Vienna).
This volume has its origin in the third Workshop on Maximum-Entropy and Bayesian Methods in Applied Statistics, held at the University of Wyoming, August 1 to 4, 1983. It was anticipated that the proceedings of this workshop could not be prepared in a timely fashion, so most of the papers were not collected until a year or so ago. Because most of the papers are in the nature of advancing theory or solving specific problems, rather than status reports, it is believed that the contents of this volume will be of lasting interest to the Bayesian community. The workshop was organized to bring together researchers from different fields to examine critically maximum-entropy and Bayesian methods in science, engineering, medicine, economics, and other disciplines. Some of the papers were chosen specifically to kindle interest in new areas that may offer new tools or insight to the reader, or to stimulate work on pressing problems that appear to be ideally suited to the maximum-entropy or Bayesian method.
This book is in two volumes, and is intended as a text for introductory courses in probability and statistics at the second or third year university level. It emphasizes applications and logical principles rather than mathematical theory. A good background in freshman calculus is sufficient for most of the material presented. Several starred sections have been included as supplementary material. Nearly 900 problems and exercises of varying difficulty are given, and Appendix A contains answers to about one-third of them. The first volume (Chapters 1-8) deals with probability models and with mathematical methods for describing and manipulating them. It is similar in content and organization to the 1979 edition. Some sections have been rewritten and expanded - for example, the discussions of independent random variables and conditional probability. Many new exercises have been added. In the second volume (Chapters 9-16), probability models are used as the basis for the analysis and interpretation of data. This material has been revised extensively. Chapters 9 and 10 describe the use of the likelihood function in estimation problems, as in the 1979 edition. Chapter 11 then discusses frequency properties of estimation procedures, and introduces coverage probability and confidence intervals. Chapter 12 describes tests of significance, with applications primarily to frequency data. The likelihood ratio statistic is used to unify the material on testing, and connect it with earlier material on estimation.
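To illustrate the likelihood-based approach the blurb describes (our own sketch, not an excerpt from the text), here is the binomial log-likelihood, its maximum, and the likelihood ratio statistic for a simple hypothesis:

```python
import math

def log_likelihood(p, successes, n):
    """Binomial log-likelihood (up to an additive constant) for
    success probability p, given `successes` out of `n` trials."""
    return successes * math.log(p) + (n - successes) * math.log(1 - p)

n, s = 20, 7
p_hat = s / n    # the maximum likelihood estimate is s/n = 0.35

# Likelihood ratio statistic for H0: p = 0.5; in large samples it is
# approximately chi-squared with 1 degree of freedom, so values above
# 3.84 would reject H0 at the 5% level.
lr = 2 * (log_likelihood(p_hat, s, n) - log_likelihood(0.5, s, n))
```

Here lr is about 1.83, well below 3.84, so 7 successes in 20 trials is consistent with a fair coin; this is the kind of unification of estimation and testing via the likelihood ratio that the text pursues.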
Handbook of Alternative Data in Finance, Volume I motivates and challenges the reader to explore and apply Alternative Data in finance. The book provides a robust and in-depth overview of Alternative Data, including its definition, characteristics, difference from conventional data, categories of Alternative Data, Alternative Data providers, and more. The book also offers a rigorous and detailed exploration of process, application and delivery that should be practically useful to researchers and practitioners alike.

Features:
* Includes cutting-edge applications in machine learning, fintech, and more
* Suitable for professional quantitative analysts, and as a resource for postgraduates and researchers in financial mathematics
* Features chapters from many leading researchers and practitioners
During the second half of the 20th century, Murray Rosenblatt was one of the most celebrated and leading figures in probability and statistics. Among his many contributions, Rosenblatt conducted seminal work on density estimation, central limit theorems under strong mixing conditions, spectral domain methodology, long memory processes and Markov processes. He has published over 130 papers and 5 books, many as relevant today as when they first appeared decades ago. Murray Rosenblatt was one of the founding members of the Department of Mathematics at the University of California at San Diego (UCSD) and served as advisor to over twenty PhD students. He maintains a close association with UCSD in his role as Professor Emeritus. This volume is a celebration of Murray Rosenblatt's stellar research career that spans over six decades, and includes some of his most interesting and influential papers. Several leading experts provide commentary and reflections on various directions of Murray's research portfolio."
This selection of reviews and papers is intended to stimulate renewed reflection on the fundamental and practical aspects of probability in physics. While putting emphasis on conceptual aspects in the foundations of statistical and quantum mechanics, the book deals with the philosophy of probability in its interrelation with mathematics and physics in general. Addressing graduate students and researchers in physics and mathematics together with philosophers of science, the contributions avoid cumbersome technicalities in order to make the book worthwhile reading for nonspecialists and specialists alike.
V-INVEX FUNCTIONS AND VECTOR OPTIMIZATION summarizes and synthesizes an aspect of research work that has been done in the area of Generalized Convexity over the past several decades. Specifically, the book focuses on V-invex functions in vector optimization that have grown out of the work of Jeyakumar and Mond in the 1990s. V-invex functions are an area in which there has been much interest because they allow researchers and practitioners to address and provide better solutions to problems that are nonlinear, multi-objective, fractional, and continuous in nature. Hence, V-invex functions have permitted work on a whole new class of vector optimization applications. There has been considerable work on vector optimization by some highly distinguished researchers including Kuhn, Tucker, Geoffrion, Mangasarian, von Neumann, Schaible, Ziemba, etc. The authors have integrated this related research into their book and demonstrate the wide context from which the area has grown and continues to grow. The result is a well-synthesized, accessible, and usable treatment for students, researchers, and practitioners in the areas of OR, optimization, applied mathematics, engineering, and their work relating to a wide range of problems which include financial institutions, logistics, transportation, traffic management, etc.
This book aims to present, in a unified approach, a series of mathematical results concerning triangular norm-based measures and a class of cooperative games with fuzzy coalitions. Our approach intends to emphasize that triangular norm-based measures are powerful tools in exploring the coalitional behaviour in such games. They not only unify and simplify some technical aspects of the already classical axiomatic theory of Aumann-Shapley values, but also provide new perspectives and insights into these results. Moreover, this machinery allows us to obtain, in the game-theoretical context, new and heuristically meaningful information, which has a significant impact on balancedness and equilibria analysis in a cooperative environment. From a formal point of view, triangular norm-based measures are valuations on subsets of a unit cube [0, 1]^X which preserve dual binary operations induced by triangular norms on the unit interval [0, 1]. Triangular norms (and their dual conorms) are algebraic operations on [0, 1] which were suggested by MENGER [1942] and which proved to be useful in the theory of probabilistic metric spaces (see also [WALD 1943]). The idea of a triangular norm-based measure was implicitly used under various names: vector integrals [DVORETZKY, WALD & WOLFOWITZ 1951], probabilities of fuzzy events [ZADEH 1968], and measures on ideal sets [AUMANN & SHAPLEY 1974, p. 152].
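As a small illustration (ours, not the book's) of triangular norms and the duality with their conorms on [0, 1]:

```python
# Three classical triangular norms on [0, 1].
def t_min(a, b):
    """Minimum (Goedel) t-norm."""
    return min(a, b)

def t_product(a, b):
    """Product t-norm."""
    return a * b

def t_lukasiewicz(a, b):
    """Lukasiewicz t-norm: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def dual_conorm(t):
    """Dual conorm of a t-norm: S(a, b) = 1 - T(1 - a, 1 - b)."""
    return lambda a, b: 1.0 - t(1.0 - a, 1.0 - b)

s_max = dual_conorm(t_min)        # maximum: max(a, b)
s_prob = dual_conorm(t_product)   # probabilistic sum: a + b - a*b
```

These are the dual pairs of operations that a triangular norm-based measure, in the book's terminology, is required to preserve.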
Kiyosi Ito, the founder of stochastic calculus, is one of the few central figures of twentieth-century mathematics who reshaped the mathematical world. Today stochastic calculus is a central research field with applications in several other disciplines, for example physics, engineering, biology, economics and finance. The Abel Symposium 2005 was organized as a tribute to the work of Kiyosi Ito on the occasion of his 90th birthday. Distinguished researchers from all over the world were invited to present the newest developments within the exciting and fast-growing field of stochastic analysis. The present volume combines both papers from the invited speakers and contributions by the presenting lecturers. A special feature is the Memoirs that Kiyosi Ito wrote for this occasion. These are valuable pages for both young and established researchers in the field.
This book presents a unique collection of contributions from some of the foremost scholars in the field of risk and reliability analysis. Combining the most advanced analysis techniques with practical applications, it is one of the most comprehensive and up-to-date books available on risk-based engineering. All the fundamental concepts needed to conduct risk and reliability assessments are covered in detail, providing readers with a sound understanding of the field and making the book a powerful tool for students and researchers alike. This book was prepared in honor of Professor Armen Der Kiureghian, one of the fathers of modern risk and reliability analysis.
This is the first book in the Selecta, the collected works of Benoit Mandelbrot. This volume incorporates his original contributions to finance. The chapters consist of much new material prepared for this volume, as well as reprints of his classic papers which are devoted to the roles that discontinuity and related forms of concentration play in finance and economics. Much of this work helps to lay a foundation for evaluating risks in trading strategies.
I once heard the book by Meyer (1993) described as a "vulgarization" of wavelets. While this is true in one sense of the word, that of making a subject popular (Meyer's book is one of the early works written with the non-specialist in mind), the implication seems to be that such an attempt somehow cheapens or coarsens the subject. I have to disagree that popularity goes hand-in-hand with debasement. While there is certainly a beautiful theory underlying wavelet analysis, there is plenty of beauty left over for the applications of wavelet methods. This book is also written for the non-specialist, and therefore its main thrust is toward wavelet applications. Enough theory is given to help the reader gain a basic understanding of how wavelets work in practice, but much of the theory can be presented using only a basic level of mathematics. Only one theorem is formally stated in this book, with only one proof. And these are only included to introduce some key concepts in a natural way.