This book presents the proceedings of the 2nd Pacific Rim Statistical Conference for Production Engineering: Production Engineering, Big Data and Statistics, which took place at Seoul National University in Seoul, Korea, in December 2016. The included papers discuss a wide range of statistical challenges, methods, and applications for big data in production engineering, and introduce recent advances in the relevant statistical methods.
Following the recent financial crisis, risk management in financial institutions, particularly in banks, has attracted widespread attention and discussion. Novel modeling approaches and courses to educate future professionals in industry, government, and academia are therefore of timely relevance. This book introduces an innovative concept and methodology developed by the authors: active risk management. It is suitable for graduate students in mathematical finance/financial engineering, economics, and statistics, as well as for practitioners in the fields of finance and insurance. The book's website features the data sets used in the examples along with various exercises.
The idea of writing this book arose in 2000 when the first author was assigned to teach the required course STATS 240 (Statistical Methods in Finance) in the new M.S. program in financial mathematics at Stanford, which is an interdisciplinary program that aims to provide a master's-level education in applied mathematics, statistics, computing, finance, and economics. Students in the program had different backgrounds in statistics. Some had only taken a basic course in statistical inference, while others had taken a broad spectrum of M.S.- and Ph.D.-level statistics courses. On the other hand, all of them had already taken required core courses in investment theory and derivative pricing, and STATS 240 was supposed to link the theory and pricing formulas to real-world data and pricing or investment strategies. Besides students in the program, the course also attracted many students from other departments in the university, further increasing the heterogeneity of students, as many of them had a strong background in mathematical and statistical modeling from the mathematical, physical, and engineering sciences but no previous experience in finance. To address the diversity in background but common strong interest in the subject and in a potential career as a "quant" in the financial industry, the course material was carefully chosen not only to present basic statistical methods of importance to quantitative finance but also to summarize domain knowledge in finance and show how it can be combined with statistical modeling in financial analysis and decision making. The course material evolved over the years, especially after the second author helped as the head TA during the years 2004 and 2005.
Sequential Experimentation in Clinical Trials: Design and Analysis is developed from decades of work in research groups, statistical pedagogy, and workshop participation. Different parts of the book can be used for short courses on clinical trials, translational medical research, and sequential experimentation. The authors have successfully used the book to teach innovative clinical trial designs and statistical methods to Statistics Ph.D. students at Stanford University. Additional online supplements include chapter-specific exercises and information. The book covers the much broader subject of sequential experimentation, which includes group sequential and adaptive designs of Phase II and III clinical trials; these have attracted much attention in the past three decades. In particular, the broad scope of design and analysis problems in sequential experimentation requires a wide range of statistical methods and models, from nonlinear regression analysis, experimental design, dynamic programming, survival analysis, and resampling to likelihood and Bayesian inference. The background material for these building blocks is summarized in Chapters 2 and 3 and in certain sections of Chapters 6 and 7. Besides group sequential tests and adaptive designs, the book also introduces sequential change-point detection methods in Chapter 5 in connection with pharmacovigilance and public health surveillance. Together with the dynamic programming and approximate dynamic programming material in Chapter 3, the book therefore covers all basic topics for a graduate course in sequential analysis and design.
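As a concrete taste of the sequential change-point detection methods the blurb mentions, the following is a minimal sketch of a classical one-sided CUSUM detector for an upward shift in a normal mean (an illustrative toy of the general technique, not a design taken from the book; the reference value k and threshold h are arbitrary choices here):

    import numpy as np

    def cusum(xs, k=0.5, h=5.0):
        """Return the first index at which the CUSUM alarm fires, or None."""
        w = 0.0
        for t, x in enumerate(xs):
            w = max(0.0, w + x - k)  # accumulate evidence of an upward shift
            if w > h:
                return t             # alarm: a change is declared at time t
        return None

    rng = np.random.default_rng(0)
    pre = rng.normal(0.0, 1.0, size=200)       # in-control observations
    post = rng.normal(1.0, 1.0, size=200)      # mean shifts upward at t = 200
    print(cusum(np.concatenate([pre, post])))  # fires shortly after t = 200

In surveillance settings, k is typically set to half the smallest shift worth detecting, and h is tuned to give an acceptable false-alarm rate.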
This book presents a systematic and unified approach to the modern nonparametric treatment of missing and modified data, via examples of density and hazard rate estimation, nonparametric regression, signal filtering, and time series analysis. All basic types of missingness at random and not at random, biasing, truncation, censoring, and measurement error are discussed, and their treatment is explained. The book's ten chapters cover the basic cases of direct data, biased data, nondestructive and destructive missingness, survival data modified by truncation and censoring, missing survival data, stationary and nonstationary time series and processes, and ill-posed modifications. The coverage is suitable for self-study or a one-semester course for graduate students, with a standard course in introductory probability as the only prerequisite. Exercises of various levels of difficulty will be helpful for instructors and for self-study. The book is primarily about practically important small samples. It explains when consistent estimation is possible, why in some cases missing data can be ignored, and why in others they must be taken into account. If missingness or data modification makes consistent estimation impossible, the author explains what type of action is needed to restore the lost information. The book contains more than a hundred figures with simulated data that illustrate virtually every setting, claim, and development. A companion R software package allows the reader to verify, reproduce, and modify every simulation and every estimator used, making the material fully transparent and allowing one to study it interactively. Sam Efromovich is the Endowed Professor of Mathematical Sciences and the Head of the Actuarial Program at the University of Texas at Dallas. He is well known for his work on the theory and application of nonparametric curve estimation and is the author of Nonparametric Curve Estimation: Methods, Theory, and Applications. Professor Efromovich is a Fellow of the Institute of Mathematical Statistics and of the American Statistical Association.
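To give the flavor of estimation from modified data, here is a minimal sketch in Python (the book's companion package is in R; this toy setup, with biasing function b(x) = exp(x) and target density N(0, 1), is my own illustration, not the book's code). Under biased sampling the observed density g(x) is proportional to b(x)f(x), and weighting each observation by 1/b(X_i) recovers f:

    import numpy as np

    # With b(x) = exp(x) and f = N(0, 1), the biased density g(x), which is
    # proportional to b(x) f(x), works out to exactly N(1, 1), so the biased
    # sample is easy to draw directly.
    rng = np.random.default_rng(1)
    x_obs = rng.normal(loc=1.0, scale=1.0, size=2_000)  # biased sample ~ g
    w = np.exp(-x_obs)                                  # weights 1 / b(X_i)
    w /= w.sum()                                        # normalize to sum to 1

    def weighted_kde(grid, data, weights, h=0.25):
        """Gaussian kernel density estimate with per-point weights."""
        z = (grid[:, None] - data[None, :]) / h
        return (weights * np.exp(-0.5 * z**2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))

    grid = np.linspace(-4.0, 4.0, 9)
    est = weighted_kde(grid, x_obs, w)
    true = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
    print(np.c_[grid, est, true])  # the weighted estimate tracks f, not g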
Medical Product Safety Evaluation: Biological Models and Statistical Methods presents cutting-edge biological models and statistical methods that are tailored to specific objectives and data types for safety analysis and benefit-risk assessment. Some frequently encountered issues and challenges in the design and analysis of safety studies are discussed with illustrative applications and examples. The book is designed not only for biopharmaceutical professionals, such as statisticians, safety specialists, pharmacovigilance experts, and pharmacoepidemiologists, who can use it for self-study or in short courses and training programs, but also for graduate students in statistics and biomedical data science in a one-semester course. Each chapter provides supplements and problems as further reading and exercises.
The first part of this book discusses the institutions and mechanisms of algorithmic trading, market microstructure, high-frequency data and stylized facts, time and event aggregation, order book dynamics, trading strategies and algorithms, transaction costs, market impact and execution strategies, and risk analysis and management. The second part covers market impact models, network models, multi-asset trading, machine learning techniques, and nonlinear filtering. The third part discusses electronic market making, liquidity, systemic risk, and recent developments and debates on the subject.
Herbert Robbins is widely recognized as one of the most creative and original mathematical statisticians of our time. The purpose of this book is to reprint, on the occasion of his seventieth birthday, some of his most outstanding research. In making selections for reprinting we have tried to keep in mind three potential audiences: (1) the historian who would like to know Robbins' seminal role in stimulating a substantial proportion of current research in mathematical statistics; (2) the novice who would like a readable, conceptually oriented introduction to these subjects; and (3) the expert who would like to have useful reference material in a single collection. In many cases the needs of the first two groups can be met simultaneously. A distinguishing feature of Robbins' research is its daring originality, which literally creates new specialties for subsequent generations of statisticians to explore. Often these seminal papers are also models of exposition serving to introduce the reader, in the simplest possible context, to ideas that are important for contemporary research in the field. An example is the paper of Robbins and Monro which initiated the subject of stochastic approximation. We have also attempted to provide some useful guidance to the literature in various subjects by supplying additional references, particularly to books and survey articles, with some remarks about important developments in these areas.
Self-normalized processes occur commonly in probabilistic and statistical studies. A prototypical example is Student's t-statistic, introduced in 1908 by Gosset, whose portrait is on the front cover. Owing to the highly nonlinear nature of these processes, the theory experienced a long period of slow development. In recent years there have been a number of important advances in the theory and applications of self-normalized processes. Some of these developments are closely linked to the study of central limit theorems, which imply that self-normalized processes are approximate pivots for statistical inference. The present volume covers recent developments in the area, including self-normalized large and moderate deviations, and laws of the iterated logarithm for self-normalized martingales. This is the first book to treat the theory and applications of self-normalization systematically.
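To make the prototypical example concrete: with S_n = X_1 + ... + X_n and V_n^2 = X_1^2 + ... + X_n^2, Student's t-statistic is a monotone function of the self-normalized sum S_n / V_n. The short simulation below (an illustration of the general idea, not material from the book) shows that this ratio is approximately a standard normal pivot even for heavy-tailed observations:

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 50, 10_000
    x = rng.standard_t(df=3, size=(reps, n))  # heavy-tailed, mean zero
    s_n = x.sum(axis=1)                       # S_n
    v_n = np.sqrt((x**2).sum(axis=1))         # V_n
    ratio = s_n / v_n                         # self-normalized sum
    print(ratio.mean(), ratio.std())          # close to 0 and 1: ~ N(0, 1)

The random normalization by V_n is what makes the ratio an approximate pivot: its limiting distribution does not depend on the scale of the X_i, which may be unknown or even correspond to an infinite variance.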