This book is concerned with important problems of robust (stable) statistical pattern recognition when hypothetical model assumptions about experimental data are violated (disturbed). Pattern recognition theory is the field of applied mathematics in which principles and methods are constructed for classification and identification of objects, phenomena, processes, situations, and signals, i.e., of objects that can be specified by a finite set of features, or properties characterizing the objects (Mathematical Encyclopedia (1984)). Two stages in the development of the mathematical theory of pattern recognition may be observed. At the first stage, until the middle of the 1970s, pattern recognition theory was replenished mainly from adjacent mathematical disciplines: mathematical statistics, functional analysis, discrete mathematics, and information theory. This development stage is characterized by the successful solution of pattern recognition problems of different physical nature, but of the simplest form in the sense of the mathematical models used. One of the main approaches to solving pattern recognition problems is the statistical approach, which uses stochastic models of feature variables. Under the statistical approach, the first stage of pattern recognition theory development is characterized by the assumption that the probability data model is known exactly or is estimated from a representative sample of large size with negligible estimation errors (Das Gupta, 1973, 1977; Rey, 1978; Vasiljev, 1983).
This volume has been created in honor of the seventieth birthday of Ted Harris, which was celebrated on January 11th, 1989. The papers represent the wide range of subfields of probability theory in which Ted has made profound and fundamental contributions. This breadth in Ted's research complicates the task of putting together in his honor a book with a unified theme. One common thread noted was the spatial, or geometric, aspect of the phenomena Ted investigated. This volume has been organized around that theme, with papers covering four major subject areas of Ted's research: branching processes, percolation, interacting particle systems, and stochastic flows. These four topics do not exhaust his research interests; his major work on Markov chains is commemorated in the standard terminology "Harris chain" and "Harris recurrent". The editors would like to take this opportunity to thank the speakers at the symposium and the contributors to this volume. Their enthusiastic support is a tribute to Ted Harris. We would like to express our appreciation to Annette Mosley for her efforts in typing the manuscripts and to Arthur Ogawa for typesetting the volume. Finally, we gratefully acknowledge the National Science Foundation and the University of Southern California for their financial support.
V-INVEX FUNCTIONS AND VECTOR OPTIMIZATION summarizes and synthesizes an aspect of research work that has been done in the area of Generalized Convexity over the past several decades. Specifically, the book focuses on V-invex functions in vector optimization that have grown out of the work of Jeyakumar and Mond in the 1990s. V-invex functions have attracted much interest because they allow researchers and practitioners to address and provide better solutions to problems that are nonlinear, multi-objective, fractional, and continuous in nature. Hence, V-invex functions have permitted work on a whole new class of vector optimization applications. There has been considerable work on vector optimization by some highly distinguished researchers including Kuhn, Tucker, Geoffrion, Mangasarian, von Neumann, Schaible, Ziemba, and others. The authors have integrated this related research into their book and demonstrate the wide context from which the area has grown and continues to grow. The result is a well-synthesized, accessible, and usable treatment for students, researchers, and practitioners in the areas of OR, optimization, applied mathematics, and engineering, and for their work relating to a wide range of problems involving financial institutions, logistics, transportation, traffic management, and more.
Discrete event simulation and agent-based modeling are increasingly recognized as critical for diagnosing and solving process issues in complex systems. Introduction to Discrete Event Simulation and Agent-based Modeling covers the techniques needed for success in all phases of simulation projects, including:

* Definition - The reader will learn how to plan a project and communicate using a charter.
* Input analysis - The reader will discover how to determine defensible sample sizes for all needed data collections, and learn how to fit distributions to that data.
* Simulation - The reader will understand how simulation controllers work, the Monte Carlo (MC) theory behind them, modern verification and validation, and ways to speed up simulation using variance reduction techniques and other methods.
* Output analysis - The reader will be able to establish simultaneous intervals on key responses and apply selection and ranking, design of experiments (DOE), and black box optimization to develop defensible improvement recommendations.
* Decision support - Methods to inspire creative alternatives are presented, including lean production.

Also, over one hundred solved problems are provided, along with two full case studies, including one on voting machines that received international attention. Introduction to Discrete Event Simulation and Agent-based Modeling demonstrates how simulation can facilitate improvements on the job and in local communities. It allows readers to competently apply technology considered key in many industries and branches of government. It is suitable for undergraduate and graduate students, as well as researchers and other professionals.
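The variance reduction idea mentioned above can be illustrated with a minimal Python sketch (not taken from the book; the estimand E[exp(U)] and all names are the editor's own): antithetic variates pair each uniform draw u with its mirror 1 - u, and the negative correlation within each pair shrinks the estimator's variance relative to crude Monte Carlo.

```python
import math
import random
import statistics

def plain_mc(n, seed=1):
    # Crude Monte Carlo estimate of E[exp(U)] for U ~ Uniform(0, 1).
    rng = random.Random(seed)
    draws = [math.exp(rng.random()) for _ in range(n)]
    return statistics.mean(draws), statistics.variance(draws)

def antithetic_mc(n, seed=1):
    # Variance reduction: average each draw u with its mirror 1 - u.
    # exp(u) and exp(1 - u) are negatively correlated, so each paired
    # sample has much smaller variance than a single crude sample.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n // 2):
        u = rng.random()
        pairs.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))
    return statistics.mean(pairs), statistics.variance(pairs)

est_plain, var_plain = plain_mc(100_000)
est_anti, var_anti = antithetic_mc(100_000)
# Both estimates approach the exact value e - 1; the antithetic
# per-sample variance is far smaller than the crude one.
```

Both estimators converge to e - 1 (about 1.71828), but the antithetic version reaches a given precision with far fewer random draws, which is exactly the "speed up simulation" payoff the blurb refers to.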
Sample data alone never suffice to draw conclusions about populations. Inference always requires assumptions about the population and sampling process. Statistical theory has revealed much about how the strength of assumptions affects the precision of point estimates, but has had much less to say about how it affects the identification of population parameters. Indeed, it has been commonplace to think of identification as a binary event - a parameter is either identified or not - and to view point identification as a precondition for inference. Yet there is enormous scope for fruitful inference using data and assumptions that partially identify population parameters. This book explains why and shows how. The book presents in a rigorous and thorough manner the main elements of Charles Manski's research on partial identification of probability distributions. One focus is prediction with missing outcome or covariate data. Another is decomposition of finite mixtures, with application to the analysis of contaminated sampling and ecological inference. A third major focus is the analysis of treatment response. Whatever the particular subject under study, the presentation follows a common path. The author first specifies the sampling process generating the available data and asks what may be learned about population parameters using the empirical evidence alone. He then asks how the (typically) set-valued identification regions for these parameters shrink if various assumptions are imposed. The approach to inference that runs throughout the book is deliberately conservative and thoroughly nonparametric. Conservative nonparametric analysis enables researchers to learn from the available data without imposing untenable assumptions. It enables establishment of a domain of consensus among researchers who may hold disparate beliefs about what assumptions are appropriate. Charles F. Manski is Board of Trustees Professor at Northwestern University.
He is author of Identification Problems in the Social Sciences and Analog Estimation Methods in Econometrics. He is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, and the Econometric Society.
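The central idea of partial identification can be made concrete with a small sketch (illustrative code, not from the book): when some outcomes are missing and nothing is assumed about the missing values beyond a known range, the population mean is bounded rather than point-identified, and the width of the bound reflects how much is missing.

```python
def worst_case_bounds(observed, n_missing, y_min=0.0, y_max=1.0):
    # Worst-case (no-assumptions) bounds on the population mean E[y]
    # when some outcomes are unobserved and y is only known to lie
    # in [y_min, y_max].  The data alone cannot narrow this further.
    n_obs = len(observed)
    n = n_obs + n_missing
    p_obs = n_obs / n                 # fraction of outcomes observed
    m = sum(observed) / n_obs         # mean among the observed
    lower = p_obs * m + (1 - p_obs) * y_min
    upper = p_obs * m + (1 - p_obs) * y_max
    return lower, upper

# 80 observed outcomes averaging 0.5 and 20 missing: the mean is only
# partially identified, lying somewhere in [0.4, 0.6].
lo, hi = worst_case_bounds([0.5] * 80, 20)
```

Imposing assumptions (for example, that the missing outcomes resemble the observed ones) shrinks this set-valued identification region, which is exactly the progression the book's common path describes.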
This book provides advanced theoretical and applied tools for the implementation of modern micro-econometric techniques in evidence-based program evaluation for the social sciences. The author presents a comprehensive toolbox for designing rigorous and effective ex-post program evaluation using the statistical software package Stata. For each method, a statistical presentation is developed, followed by a practical estimation of the treatment effects. By using both real and simulated data, readers will become familiar with evaluation techniques, such as regression-adjustment, matching, difference-in-differences, instrumental-variables, regression-discontinuity-design, and the synthetic control method, and are given practical guidelines for selecting and applying suitable methods for specific policy contexts. The second revised and extended edition features two new chapters on recent developments in difference-in-differences. Specifically, Chapter 5 introduces advanced difference-in-differences methods for settings where many time periods are available and treatment can be either time-varying or fixed at a specific time. Chapter 6 introduces the synthetic control method, a treatment-effect estimation approach suitable when only one unit is treated. Both chapters present applications using the software Stata.
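As a language-neutral illustration of one technique listed above (the book itself works in Stata), the canonical two-group, two-period difference-in-differences estimator can be sketched in a few lines; the numbers here are invented:

```python
def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    # Canonical 2x2 difference-in-differences: the treated group's
    # before/after change minus the control group's change, which
    # nets out any common time trend (the parallel-trends assumption).
    mean = lambda xs: sum(xs) / len(xs)
    change_treated = mean(y_treat_post) - mean(y_treat_pre)
    change_control = mean(y_ctrl_post) - mean(y_ctrl_pre)
    return change_treated - change_control

# Treated outcomes rise 10 -> 15 while control rises 8 -> 11:
# the common trend accounts for +3, leaving a treatment effect of +2.
effect = did_estimate([10.0], [15.0], [8.0], [11.0])
```

The advanced methods in Chapter 5 generalize this same contrast to many time periods and staggered treatment timing; the 2x2 case above is the building block they extend.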
This book is the result of the International Symposium on Semi-Markov Processes and their Applications, held on June 4-7, 1984 at the Universite Libre de Bruxelles with the help of the FNRS (Fonds National de la Recherche Scientifique, Belgium), the Ministere de l'Education Nationale (Belgium) and the Bernoulli Society for Mathematical Statistics and Probability. This international meeting was planned to survey the state of the art in semi-Markov theory and its applications, to bring together researchers in this field, and to create a platform for open and thorough discussion. The main themes of the Symposium form the first ten sections of this book. The last section presented here gives an exhaustive bibliography on semi-Markov processes for the last ten years. Papers selected for this book are all invited papers, together with some contributed papers retained after strong refereeing. The sections are:

I. Markov additive processes and regenerative systems
II. Semi-Markov decision processes
III. Algorithmic and computer-oriented approach
IV. Semi-Markov models in economy and insurance
V. Semi-Markov processes and reliability theory
VI. Simulation and statistics for semi-Markov processes
VII. Semi-Markov processes and queueing theory
VIII. Branching
IX. Applications in medicine
X. Applications in other fields
XI. A second bibliography on semi-Markov processes

It is interesting to note that sections IV to X represent a good sample of the main applications of semi-Markov processes.
The first edition was released in 1996 and has sold close to 2200 copies. This edition provides an up-to-date, comprehensive treatment of MDS, a statistical technique used to analyze the structure of similarity or dissimilarity data in multidimensional space. The authors have added three chapters and exercise sets. The text is being moved from SSS to SSPP. The book is suitable for courses in statistics for the social or managerial sciences as well as for advanced courses on MDS. All the mathematics required for more advanced topics is developed systematically in the text.
The concept of conditional specification of distributions is not new but, except in normal families, it has not been well developed in the literature. Computational difficulties undoubtedly hindered or discouraged developments in this direction. However, such roadblocks are of diminished importance today. Questions of compatibility of conditional and marginal specifications of distributions are of fundamental importance in modeling scenarios. Models with conditionals in exponential families are particularly tractable and provide useful models in a broad variety of settings.
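The compatibility question has a concrete finite-state version that can be sketched in a few lines (illustrative code, not from the book). If a joint p(x, y) exists with conditionals A[i][j] = P(x=i | y=j) and B[i][j] = P(y=j | x=i), then p(x,y) = A·p(y) = B·p(x), so the ratio A/B must equal p(x)/p(y), a rank-one function a(i)b(j). Checking that all 2x2 cross-ratios of the ratio matrix agree is therefore a necessary condition for compatibility:

```python
def compatible(cond_x_given_y, cond_y_given_x, tol=1e-9):
    # Necessary condition for two strictly positive conditional matrices
    # A[i][j] = P(x=i | y=j) and B[i][j] = P(y=j | x=i) to come from a
    # common joint: R = A / B must factorize as a(i) * b(j), i.e. every
    # 2x2 cross-ratio of R must vanish.
    nx, ny = len(cond_x_given_y), len(cond_x_given_y[0])
    R = [[cond_x_given_y[i][j] / cond_y_given_x[i][j]
          for j in range(ny)] for i in range(nx)]
    for i in range(1, nx):
        for j in range(1, ny):
            if abs(R[i][j] * R[0][0] - R[i][0] * R[0][j]) > tol:
                return False
    return True

# Build both conditionals from one joint: they must pass the check.
joint = [[0.1, 0.2], [0.3, 0.4]]
col = [sum(joint[i][j] for i in range(2)) for j in range(2)]
row = [sum(joint[i]) for i in range(2)]
A = [[joint[i][j] / col[j] for j in range(2)] for i in range(2)]  # P(x|y)
B = [[joint[i][j] / row[i] for j in range(2)] for i in range(2)]  # P(y|x)
ok = compatible(A, B)                           # both came from one joint
bad = compatible(A, [[0.5, 0.5], [3/7, 4/7]])   # no common joint exists
```

This is the computational side of the compatibility questions the blurb mentions; in continuous exponential-family settings, the analogous factorization conditions are what make those models tractable.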
* The second book in a set of ten on quantitative finance for practitioners
* Presents the theory needed to better understand applications
* Supplements previous training in mathematics
* Built from the author's four decades of experience in industry, research, and teaching
"Et moi, ..., si j'avais su comment en revenir, je n'y serais point alle." ("And I, ..., had I known how to come back, I would never have gone.") Jules Verne

"One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled 'discarded nonsense'." Eric T. Bell

"The series is divergent; therefore we may be able to do something with it." O. Heaviside

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to Eric T. Bell's quote above, one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
Kiyosi Ito, the founder of stochastic calculus, is one of the few central figures of twentieth-century mathematics who reshaped the mathematical world. Today stochastic calculus is a central research field with applications in several other disciplines, for example physics, engineering, biology, economics and finance. The Abel Symposium 2005 was organized as a tribute to the work of Kiyosi Ito on the occasion of his 90th birthday. Distinguished researchers from all over the world were invited to present the newest developments within the exciting and fast-growing field of stochastic analysis. The present volume combines papers from the invited speakers with contributions by the presenting lecturers. A special feature is the Memoirs that Kiyosi Ito wrote for this occasion. These are valuable pages for both young and established researchers in the field.
Now available with Macmillan's new online learning tool Achieve, the ninth edition of The Basic Practice of Statistics teaches statistical thinking by guiding students through an investigative process of problem-solving, with pedagogy designed to help students of all levels. Examples and exercises from a wide variety of topic areas use current, real data to give students insight into how and why statistics are used to make decisions in the real world. Achieve for The Basic Practice of Statistics connects the trusted Four-Step problem-solving approach and real-world examples in the book to rich digital resources that foster further understanding and application of statistics. Assets in Achieve support learning before, during, and after class for students, while providing instructors with class performance analytics in an easy-to-use interface. Achieve features intuitive design, assessment, insights, and reporting built with the direct input of students, educators, and Macmillan's learning science team. Achieve for The Basic Practice of Statistics features:

* Learning Objectives tagged to all assessments within Achieve.
* In-Class Activity Guides to facilitate active learning during class time.
* Over 3,000 homework questions, each with hints, answer-specific feedback, and a fully worked solution.
* LearningCurve adaptive quizzing.
* An interactive e-book, powered by VitalSource.
* Multimedia student resources, such as interactive applets and videos.
* Data sets for common statistical software, video technology manuals, and access to Macmillan's proprietary statistical software, CrunchIt!

Content updates to the ninth edition: Examples and exercises more clearly emphasize the decision-making process. Chapter Summaries and Review Chapters have been revised to help students check their knowledge and review for exams. Summaries are in concise list form, and Skills Reviews (in Review Chapters) refer back to relevant chapter sections. Data in examples and exercises have been updated for currency, and new examples and exercises explore contemporary issues such as social media usage.
This volume contains a selection of invited and contributed papers presented at the International Conference on Linear Statistical Inference, LINSTAT '93, held in Poznan, Poland, from May 31 to June 4, 1993. Topics treated include estimation, prediction and testing in linear models, robustness of relevant statistical methods, estimation of variance components appearing in linear models, generalizations to nonlinear models, and design and analysis of experiments, including optimality and comparison of linear experiments. This text should be of interest to mathematical statisticians, applied statisticians, biometricians, biostatisticians, and econometricians.
This book offers a straightforward introduction to the mathematical theory of probability. It presents the central results and techniques of the subject in a complete and self-contained account. The emphasis is on giving results in simple forms with clear proofs, and on eschewing more powerful forms of theorems that require technically involved proofs. Throughout, a wide variety of exercises illustrate and develop the ideas in the text.
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative means of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
This book focuses on the recent development of methodologies and computation methods in mathematical and statistical modelling, computational science and applied mathematics. It emphasizes the development of theories and applications, and promotes interdisciplinary endeavour among mathematicians, statisticians, scientists, engineers and researchers from other disciplines. The book provides ideas, methods and tools in mathematical and statistical modelling that have been developed for a wide range of research fields, including medical, health sciences, biology, environmental science, engineering, physics and chemistry, finance, economics and social sciences. It presents original results addressing real-world problems. The contributions are products of a highly successful meeting held in August 2017 on the main campus of Wilfrid Laurier University, in Waterloo, Canada, the International Conference on Applied Mathematics, Modeling and Computational Science (AMMCS-2017). They make this book a valuable resource for readers interested not only in a broader overview of the methods, ideas and tools in mathematical and statistical approaches, but also in how they can attain valuable insights into problems arising in other disciplines.
* Provides a logical framework for considering and evaluating standard setting procedures
* Covers formal development of a psychometric theory for standard setting
* Develops a logical argument for evaluation procedures for standard setting processes
* Contains detailed analyses of several standard setting methods
* Includes problem sets at the ends of chapters that focus on common problems with standard setting methods
This unique book provides an overview of continuous time modeling in the behavioral and related sciences. It argues that the use of discrete time models for processes that are in fact evolving in continuous time produces problems that make their application in practice highly questionable. One main issue is the dependence of discrete time parameter estimates on the chosen time interval, which leads to incomparability of results across different observation intervals. Continuous time modeling by means of differential equations offers a powerful approach for studying dynamic phenomena, yet the use of this approach in the behavioral and related sciences such as psychology, sociology, economics and medicine, is still rare. This is unfortunate, because in these fields often only a few discrete time (sampled) observations are available for analysis (e.g., daily, weekly, yearly, etc.). However, as emphasized by Rex Bergstrom, the pioneer of continuous-time modeling in econometrics, neither human beings nor the economy cease to exist in between observations. In 16 chapters, the book addresses a vast range of topics in continuous time modeling, from approaches that closely mimic traditional linear discrete time models to highly nonlinear state space modeling techniques. Each chapter describes the type of research questions and data that the approach is most suitable for, provides detailed statistical explanations of the models, and includes one or more applied examples. To allow readers to implement the various techniques directly, accompanying computer code is made available online. The book is intended as a reference work for students and scientists working with longitudinal data who have a Master's- or early PhD-level knowledge of statistics.
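The interval-dependence problem described above can be shown in a few lines (an illustrative sketch, not from the book; the drift value is invented). For the simplest continuous-time model dx/dt = a·x, the exactly discretized autoregressive coefficient at sampling interval delta is exp(a·delta), so discrete-time estimates from daily and weekly data disagree, yet each recovers the same underlying continuous-time drift:

```python
import math

a = -0.5  # continuous-time drift in dx/dt = a * x (illustrative value)

def discrete_coefficient(delta):
    # Exact discretization of dx/dt = a * x sampled every `delta` units:
    # x(t + delta) = exp(a * delta) * x(t), so the discrete-time
    # autoregressive parameter depends on the observation interval.
    return math.exp(a * delta)

phi_day = discrete_coefficient(1.0)   # daily sampling
phi_week = discrete_coefficient(7.0)  # weekly sampling: a different number
# The discrete-time parameters are incomparable across intervals, but
# mapping each back via log(phi) / delta recovers the one true drift:
recovered = [math.log(discrete_coefficient(d)) / d for d in (1.0, 7.0, 30.0)]
```

This is the core of Bergstrom's point quoted in the blurb: the continuous-time parameter is the invariant object, while any discrete-time coefficient is an artifact of the chosen sampling interval.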
This book is in two volumes, and is intended as a text for introductory courses in probability and statistics at the second or third year university level. It emphasizes applications and logical principles rather than mathematical theory. A good background in freshman calculus is sufficient for most of the material presented. Several starred sections have been included as supplementary material. Nearly 900 problems and exercises of varying difficulty are given, and Appendix A contains answers to about one-third of them. The first volume (Chapters 1-8) deals with probability models and with mathematical methods for describing and manipulating them. It is similar in content and organization to the 1979 edition. Some sections have been rewritten and expanded - for example, the discussions of independent random variables and conditional probability. Many new exercises have been added. In the second volume (Chapters 9-16), probability models are used as the basis for the analysis and interpretation of data. This material has been revised extensively. Chapters 9 and 10 describe the use of the likelihood function in estimation problems, as in the 1979 edition. Chapter 11 then discusses frequency properties of estimation procedures, and introduces coverage probability and confidence intervals. Chapter 12 describes tests of significance, with applications primarily to frequency data. The likelihood ratio statistic is used to unify the material on testing, and connect it with earlier material on estimation.
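The likelihood ratio statistic mentioned above can be sketched for the simplest case, a binomial proportion (an illustrative example, not taken from the book; it assumes 0 < k < n so the logs are defined):

```python
import math

def log_lik(p, k, n):
    # Binomial log-likelihood, constant term dropped:
    # k successes observed in n trials.
    return k * math.log(p) + (n - k) * math.log(1 - p)

def lr_statistic(k, n, p0):
    # Likelihood ratio statistic 2 * (l(p_hat) - l(p0)).  Under the
    # null hypothesis p = p0 it is approximately chi-squared with one
    # degree of freedom, which links testing back to estimation.
    p_hat = k / n  # the maximum likelihood estimate
    return p_hat, 2.0 * (log_lik(p_hat, k, n) - log_lik(p0, k, n))

# 60 successes in 100 trials against H0: p = 0.5 gives a statistic of
# about 4.03, just above the 5% chi-squared critical value of 3.84.
p_hat, lr = lr_statistic(60, 100, 0.5)
```

The same statistic also inverts into a confidence interval: the values p0 whose statistic stays below the critical value form an approximate confidence set for p, tying together the estimation and testing threads the blurb describes.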
The past several years have seen the creation and extension of a very conclusive theory of statistics and probability. Many of the research workers who have been concerned with both probability and statistics felt the need for meetings that provide an opportunity for personal contacts among scholars whose fields of specialization cover broad spectra in both statistics and probability: to discuss major open problems and new solutions, to provide encouragement for further research through the lectures of carefully selected scholars, and moreover to introduce younger colleagues to the latest research techniques and thus to stimulate their interest in research. To meet these goals, the series of Pannonian Symposia on Mathematical Statistics was organized, beginning in the year 1979: the first, second and fourth in Bad Tatzmannsdorf, Burgenland, Austria, the third and fifth in Visegrad, Hungary. The Sixth Pannonian Symposium was held in Bad Tatzmannsdorf again, between 14 and 20 September 1986, under the auspices of Dr. Heinz FISCHER, Federal Minister of Science and Research, Theodor KERY, President of the State Government of Burgenland, Dr. Franz SAUERZOPF, Vice-President of the State Government of Burgenland, and Dr. Josef SCHMIDL, President of the Austrian Statistical Central Office. The members of the Honorary Committee were Pal ERDOS, Wladyslaw ORLICZ, Pal REVESZ, Leopold SCHMETTERER and Istvan VINCZE; those of the Organizing Committee were Wilfried GROSSMANN (University of Vienna), Franz KONECNY (University of Agriculture of Vienna) and, as the chairman, Wolfgang WERTZ (Technical University of Vienna).
This selection of reviews and papers is intended to stimulate renewed reflection on the fundamental and practical aspects of probability in physics. While putting emphasis on conceptual aspects in the foundations of statistical and quantum mechanics, the book deals with the philosophy of probability in its interrelation with mathematics and physics in general. Addressing graduate students and researchers in physics and mathematics together with philosophers of science, the contributions avoid cumbersome technicalities in order to make the book worthwhile reading for nonspecialists and specialists alike.