This volume presents classical results of the theory of enlargement of filtration. The focus is on the behavior of martingales with respect to the enlarged filtration and related objects. The study is conducted in various contexts, including immersion, progressive enlargement with a random time and initial enlargement with a random variable. The aim of this book is to collect the main mathematical results (with proofs) previously spread among numerous papers, a great part of which is available only in French. Many examples and applications to finance, in particular to credit risk modelling and the study of asymmetric information, are provided to illustrate the theory. A detailed summary of further connections and applications is given in bibliographic notes, which enable deeper study of the topic. This book fills a gap in the literature and serves as a guide for graduate students and researchers interested in the role of information in financial mathematics and in econometric science. A basic knowledge of the general theory of stochastic processes is assumed as a prerequisite.
Most academic and policy commentary represents adverse selection as a severe problem in insurance, which should always be deprecated, avoided or minimised. This book gives a contrary view. It details the exaggeration of adverse selection in insurers' rhetoric and insurance economics, and presents evidence that in many insurance markets, adverse selection is weaker than most commentators suggest. A novel arithmetical argument shows that from a public policy perspective, 'weak' adverse selection can be a good thing. This is because a degree of adverse selection is needed to maximise 'loss coverage', the expected fraction of the population's losses which is compensated by insurance. This book will be valuable for those interested in public policy arguments about insurance and discrimination: academics (in economics, law and social policy), policymakers, actuaries, underwriters, disability activists, geneticists and other medical professionals.
Explosive growth in computing power has made Bayesian methods for infinite-dimensional models - Bayesian nonparametrics - a nearly universal framework for inference, finding practical use in numerous subject areas. Written by leading researchers, this authoritative text draws on theoretical advances of the past twenty years to synthesize all aspects of Bayesian nonparametrics, from prior construction to computation and large sample behavior of posteriors. Because understanding the behavior of posteriors is critical to selecting priors that work, the large sample theory is developed systematically, illustrated by various examples of model and prior combinations. Precise sufficient conditions are given, with complete proofs, that ensure desirable posterior properties and behavior. Each chapter ends with historical notes and numerous exercises to deepen and consolidate the reader's understanding, making the book valuable for both graduate students and researchers in statistics and machine learning, as well as in application areas such as econometrics and biostatistics.
Now in its fifth edition, this book offers a detailed yet concise introduction to the growing field of statistical applications in finance. The reader will learn the basic methods for evaluating option contracts, analyzing financial time series, selecting portfolios and managing risks based on realistic assumptions about market behavior. The focus is both on the fundamentals of mathematical finance and financial time series analysis, and on applications to specific problems concerning financial markets, thus making the book the ideal basis for lectures, seminars and crash courses on the topic. All numerical calculations are transparent and reproducible using quantlets. For this new edition the book has been updated and extensively revised and now includes several new aspects such as neural networks, deep learning, and crypto-currencies. Both R and Matlab code, together with the data, can be downloaded from the book's product page and the Quantlet platform. The Quantlet platform quantlet.de, quantlet.com, quantlet.org is an integrated QuantNet environment consisting of different types of statistics-related documents and program codes. Its goal is to promote reproducibility and offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents-based visualization allow readers to reproduce the tables, pictures and calculations inside this Springer book. "This book provides an excellent introduction to the tools from probability and statistics necessary to analyze financial data. Clearly written and accessible, it will be very useful to students and practitioners alike." Yacine Ait-Sahalia, Otto Hack 1903 Professor of Finance and Economics, Princeton University
"The level is appropriate for an upper-level undergraduate or graduate-level statistics major. Sampling: Design and Analysis (SDA) will also benefit a non-statistics major with a desire to understand the concepts of sampling from a finite population. A student with patience to delve into the rigor of survey statistics will gain even more from the content that SDA offers. The updates to SDA have potential to enrich traditional survey sampling classes at both the undergraduate and graduate levels. The new discussions of low response rates, non-probability surveys, and internet as a data collection mode hold particular value, as these statistical issues have become increasingly important in survey practice in recent years… I would eagerly adopt the new edition of SDA as the required textbook." (Emily Berg, Iowa State University)
One of the major problems of macroeconomic theory is the way in which people exchange goods in decentralized market economies. There are major disagreements among macroeconomists regarding the tools needed to influence desired outcomes. Since mainstream efficient market theory fails to provide an internally coherent framework, there is a need for an alternative theory. The book provides an innovative approach to the analysis of agent-based models, populated by heterogeneous and interacting agents, in the field of financial fragility. The text is divided into two parts; the first presents analytical developments in stochastic aggregation and macro-dynamic inference methods. The second part introduces macroeconomic models of financial fragility for complex systems populated by heterogeneous and interacting agents. The concepts of financial fragility and macroeconomic dynamics are explained in detail in separate chapters. The statistical physics approach is applied to explain theories of macroeconomic modelling and inference.
Social media has made charts, infographics and diagrams ubiquitous, and easier to share than ever. While such visualisations can better inform us, they can also deceive by displaying incomplete or inaccurate data, suggesting misleading patterns, or misinform simply by being poorly designed. Many of us are ill-equipped to interpret the visuals that politicians, journalists, advertisers and even employers present each day, enabling bad actors to easily manipulate visuals to promote their own agendas. Public conversations are increasingly driven by numbers, and to make sense of them we must be able to decode and use visual information. By examining contemporary examples ranging from election-result infographics to global GDP maps and box-office record charts, How Charts Lie teaches us how to do just that.
This comprehensive book is an introduction to multilevel Bayesian models in R using brms and the Stan programming language. Featuring a series of fully worked analyses of repeated-measures data, the book places its focus on active learning through the analysis of the progressively more complicated models presented throughout. The authors offer an introduction to statistics entirely focused on repeated-measures data, beginning with very simple two-group comparisons and ending with multinomial regression models with many 'random effects'. Across 13 well-structured chapters, readers are provided with all the code necessary to run the analyses and make the plots in the book, as well as useful examples of how to interpret and write up their own analyses. This book provides an accessible introduction for readers in any field, with any level of statistical background. Senior undergraduate students, graduate students, and experienced researchers looking to 'translate' their skills with more traditional models to a Bayesian framework will benefit greatly from the lessons in this text.
Statistics for Business is meant as a textbook for students in business, computer science, bioengineering, environmental technology, and mathematics. In recent years, business statistics has been used widely for decision making in business endeavours. The book emphasizes statistical applications, statistical model building, and manual solution methods. Special features: the text is prepared for "self-taught" study, and for most of the methods the required algorithm is clearly explained using a flow-charting methodology. More than 200 solved problems and more than 175 end-of-chapter exercises with answers are provided, allowing teachers ample flexibility in adapting the textbook to their individual class plans. This textbook is meant for beginners and advanced learners alike as a text in Statistics for Business or Applied Statistics for undergraduate and graduate students.
This substantial volume has two principal objectives. First, it provides an overview of the statistical foundations of simulation-based inference (SBI). This includes a summary and synthesis of the many concepts and results extant in the theoretical literature, the different classes of problems and estimators, the asymptotic properties of these estimators, as well as descriptions of the different simulators in use. Second, the volume provides empirical and operational examples of SBI methods. What is often missing, even in existing applied papers, is a treatment of operational issues: which simulator works best for which problem, and why? This volume explicitly addresses the important numerical and computational issues in SBI that are not covered comprehensively in the existing literature. Examples of such issues are: comparisons with existing tractable methods, the number of replications needed for robust results, the choice of instruments, simulation noise and bias, as well as efficiency loss in practice.
Originally published in 1931, this book was written to provide actuarial students with a guide to mathematics, with information on elementary trigonometry, finite differences, summation, differential and integral calculus, and probability. Examples are included throughout. This book will be of value to anyone with an interest in actuarial practice and its relationship with aspects of mathematics.
This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, Minitab, and SAS. Every chapter engages the reader with data on individual stocks, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Among the more popular statistical programs developed to analyze data sets are SAS, SPSS, and Minitab; of those, this textbook uses Minitab and SAS. One of the main reasons to use Minitab is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities extend to about 90 percent of statistical analysis done in the business world. We demonstrate much of our statistical analysis using Excel and double-check the analysis and outcomes using Minitab and SAS, which are also helpful for analytical methods not possible or practical to do in Excel.
Statistical Programming in SAS, Second Edition provides a foundation for programming to implement statistical solutions using SAS, a system that has been used to solve data-analytic problems for more than 40 years. The author includes motivating examples to inspire readers to generate programming solutions. Upper-level undergraduates, beginning graduate students, and professionals involved in generating programming solutions for data-analytic problems will benefit from this book. The ideal reader has some background in regression modeling and introductory experience with computer programming. The coverage of statistical programming in the second edition includes:
·Getting data into the SAS system, engineering new features, and formatting variables
·Writing readable and well-documented code
·Structuring, implementing, and debugging programs
·Creating solutions to novel problems
·Combining data sources, extracting parts of data sets, and reshaping data sets as needed for other analyses
·Generating general solutions using macros
·Customizing output
·Producing insight-inspiring data visualizations
·Parsing, processing, and analyzing text
·Programming solutions using matrices and connecting to R
The topics covered are part of both the base and certification exams.
Originally published in 1954, on behalf of the National Institute of Economic and Social Research, this book presents a general review of British economic statistics in relation to the uses made of them for policy purposes. The text begins with an examination, in general terms, of the ways in which statistics can help in guiding or assessing policy, covering housing, coal, the development areas, agricultural price-fixing, the balance of external payments and the balance of the economy. The problems of statistical application are then separately discussed under the headings of quality, presentation and availability, and organization. A full bibliography and reference table of principal British economic statistics are also included. This book will be of value to anyone with an interest in British economic history and statistics.
Originally published in 1939, this book forms the first part of a two-volume series on the mathematics required for the examinations of the Institute of Actuaries, focusing on elementary differential and integral calculus. Miscellaneous examples are included at the end of the text. This book will be of value to anyone with an interest in actuarial science and mathematics.
Originally published in 1930, this book was formed from the content of three lectures delivered at London University during March of that year. The text provides a concise discussion of the relationship between theoretical statistics and actuarial science. This book will be of value to anyone with an interest in the actuarial profession, statistics and the history of finance.
Originally published in 1932, as part of the Institute of Actuaries Students' Society's Consolidation of Reading Series, this book was written to provide actuarial students with a guide 'to bridging the gap between the strict mathematics of life contingencies and the severely practical problems of Life Office Valuations'. This book will be of value to anyone with an interest in the actuarial profession and the history of finance.
Developed from the author's course on Monte Carlo simulation at Brown University, Monte Carlo Simulation with Applications to Finance provides a self-contained introduction to Monte Carlo methods in financial engineering. It is suitable for advanced undergraduate and graduate students taking a one-semester course or for practitioners in the financial industry. The author first presents the necessary mathematical tools for simulation, arbitrage-free option pricing, and the basic implementation of Monte Carlo schemes. He then describes variance reduction techniques, including control variates, stratification, conditioning, importance sampling, and cross-entropy. The text concludes with stochastic calculus and the simulation of diffusion processes. Requiring only some familiarity with probability and statistics, the book keeps much of the mathematics at an informal level and avoids technical measure-theoretic jargon to provide a practical understanding of the basics. It includes a large number of examples as well as MATLAB coding exercises that are designed in a progressive manner so that no prior experience with MATLAB is needed.
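The control variate technique mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the book: it prices a European call under Black-Scholes dynamics by plain Monte Carlo, then reduces the variance using the discounted terminal stock price as a control variate (its risk-neutral expectation equals the spot price). All parameter values are invented for illustration.

```python
import numpy as np

# Illustrative parameters (assumed, not from the book): spot, strike, rate, vol, maturity.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 100_000

# Simulate terminal stock prices under the risk-neutral measure.
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

plain = payoff.mean()  # plain Monte Carlo estimate

# Control variate: E[exp(-rT) * ST] = S0 exactly, so subtract an estimated-optimal
# multiple of the centered control from each payoff before averaging.
control = np.exp(-r * T) * ST
b = np.cov(payoff, control)[0, 1] / np.var(control)
cv = (payoff - b * (control - S0)).mean()

print(plain, cv)
```

Both estimates should land near the Black-Scholes price (about 10.45 for these parameters), with the control-variate estimate showing noticeably smaller sampling error.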
Best-worst scaling (BWS) is an extension of the method of paired comparison to multiple choices that asks participants to choose both the most and the least attractive options or features from a set of choices. It is an increasingly popular way for academics and practitioners in social science, business, and other disciplines to study and model choice. This book provides an authoritative and systematic treatment of best-worst scaling, introducing readers to the theory and methods for three broad classes of applications. It uses a variety of case studies to illustrate simple but reliable ways to design, implement, apply, and analyze choice data in specific contexts, and showcases the wide range of potential applications across many different disciplines. Best-worst scaling avoids many rating scale problems and will appeal to those wanting to measure subjective quantities with known measurement properties that can be easily interpreted and applied.
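The simplest analysis of best-worst scaling data is a count-based score: for each item, the number of times it was chosen best minus the number of times it was chosen worst. A minimal sketch, with invented choice data (each tuple is one respondent task recording the best and worst picks from a set of product attributes):

```python
from collections import Counter

# Hypothetical choice tasks: (best_item, worst_item) per task.
choices = [("price", "brand"), ("quality", "brand"), ("price", "size")]

best = Counter(b for b, _ in choices)
worst = Counter(w for _, w in choices)
items = set(best) | set(worst)

# Count-based BWS score: times chosen best minus times chosen worst.
scores = {item: best[item] - worst[item] for item in items}
print(scores)
```

Here "price" scores +2 (chosen best twice, never worst) while "brand" scores -2; more formal analyses fit choice models to the same data, but the counts alone already order the items.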
This volume deals with two complementary topics. On the one hand, the book deals with the problem of determining the probability distribution of a positive compound random variable, a problem which appears in the banking and insurance industries, in many areas of operational research and in reliability problems in the engineering sciences. On the other hand, the methodology proposed to solve such problems, which is based on an application of the maximum entropy method to invert the Laplace transform of the distributions, can be applied to many other problems. The book contains applications to a large variety of problems, including the problem of dependence of the sample data used to estimate empirically the Laplace transform of the random variable. Contents:
·Introduction
·Frequency models
·Individual severity models
·Some detailed examples
·Some traditional approaches to the aggregation problem
·Laplace transforms and fractional moment problems
·The standard maximum entropy method
·Extensions of the method of maximum entropy
·Superresolution in maxentropic Laplace transform inversion
·Sample data dependence
·Disentangling frequencies and decompounding losses
·Computations using the maxentropic density
·Review of statistical procedures
This book treats the latest developments in the theory of order-restricted inference, with special attention to nonparametric methods and algorithmic aspects. Among the topics treated are current status and interval censoring models, competing risk models, and deconvolution. Methods of order restricted inference are used in computing maximum likelihood estimators and developing distribution theory for inverse problems of this type. The authors have been active in developing these tools and present the state of the art and the open problems in the field. The earlier chapters provide an introduction to the subject, while the later chapters are written with graduate students and researchers in mathematical statistics in mind. Each chapter ends with a set of exercises of varying difficulty. The theory is illustrated with the analysis of real-life data, which are mostly medical in nature.
This book is an introduction to regression analysis, focusing on the practicalities of doing regression analysis on real-life data. Contrary to other textbooks on regression, this book is based on the idea that you do not necessarily need to know much about statistics and mathematics to get a firm grip on regression and perform it to perfection. This non-technical point of departure is complemented by practical examples of real-life data analysis using statistics software such as Stata, R and SPSS. Parts 1 and 2 of the book cover the basics, such as simple linear regression, multiple linear regression, how to interpret the output from statistics programs, significance testing and the key regression assumptions. Part 3 deals with how to practically handle violations of the classical linear regression assumptions, regression modeling for categorical y-variables and instrumental variable (IV) regression. Part 4 puts the various purposes of, or motivations for, regression into the wider context of writing a scholarly report and points to some extensions to related statistical techniques. This book is written primarily for those who need to do regression analysis in practice, and not only to understand how this method works in theory. The book's accessible approach is recommended for students from across the social sciences.
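The workhorse of such a course, simple linear regression fitted by ordinary least squares, can be sketched briefly. This is an illustrative example rather than material from the book, using synthetic data generated with a known intercept (2) and slope (3) so that the fitted coefficients can be checked against the truth:

```python
import numpy as np

# Synthetic data: y depends linearly on x with Gaussian noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, 200)

# OLS via the design matrix with an intercept column.
X = np.column_stack([np.ones_like(x), x])      # column of ones, then x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta[0] = intercept, beta[1] = slope
residuals = y - X @ beta
print(beta)
```

With 200 observations and unit noise, the estimates should land close to the generating values of 2 and 3; statistics packages such as Stata, R and SPSS perform the same computation and additionally report standard errors and significance tests.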
Spatial Econometrics provides a modern, powerful and flexible skillset to early career researchers interested in entering this rapidly expanding discipline. It articulates the principles and current practice of modern spatial econometrics and spatial statistics, combining rigorous depth of presentation with unusual breadth of coverage. Introducing and formalizing the principles of, and the 'need' for, models which define spatial interactions, the book provides a comprehensive framework for almost every major facet of modern science. Subjects covered at length include spatial regression models, weighting matrices, estimation procedures and the complications associated with their use. The work particularly focuses on models of uncertainty and estimation under various complications relating to model specification, data problems and tests of hypotheses, along with systems and panel data extensions, which are covered in exhaustive detail. Extensions discussing pre-test procedures and Bayesian methodologies are provided at length. Throughout, direct applications of spatial models are described in detail, with copious illustrative empirical examples demonstrating how readers might implement spatial analysis in research projects. Designed as a textbook and reference companion, every chapter concludes with a set of questions for formal or self-study. Finally, the book includes extensive supplementary material on large sample theory, with supporting code in the R programming language, for early career econometricians interested in implementing the statistical procedures covered.
The most authoritative and up-to-date core econometrics textbook available. Econometrics is the quantitative language of economic theory, analysis, and empirical work, and it has become a cornerstone of graduate economics programs. Econometrics provides graduate and PhD students with an essential introduction to this foundational subject in economics and serves as an invaluable reference for researchers and practitioners. This comprehensive textbook teaches fundamental concepts, emphasizes modern, real-world applications, and gives students an intuitive understanding of econometrics.
·Covers the full breadth of econometric theory and methods with mathematical rigor while emphasizing intuitive explanations that are accessible to students of all backgrounds
·Draws on integrated, research-level datasets, provided on an accompanying website
·Discusses linear econometrics, time series, panel data, nonparametric methods, nonlinear econometric models, and modern machine learning
·Features hundreds of exercises that enable students to learn by doing
·Includes in-depth appendices on matrix algebra and useful inequalities and a wealth of real-world examples
·Can serve as a core textbook for a first-year PhD course in econometrics and as a follow-up to Bruce E. Hansen's Probability and Statistics for Economists
The Who, What, and Where of America is designed to provide a sampling of key demographic information. It covers the United States, every state, each metropolitan statistical area, and all the counties and cities with a population of 20,000 or more. Who: Age, Race and Ethnicity, and Household Structure. What: Education, Employment, and Income. Where: Migration, Housing, and Transportation. Each part is preceded by highlights and ranking tables that show how areas diverge from the national norm. These research aids are invaluable for understanding data from the ACS and for highlighting what it tells us about who we are, what we do, and where we live. Each topic is divided into four tables revealing the results of the data collected from different types of geographic areas in the United States, generally with populations greater than 20,000:
·Table A. States
·Table B. Counties
·Table C. Metropolitan Areas
·Table D. Cities
In this edition, you will find social and economic estimates on the ways American communities are changing with regard to the following:
·Age and race
·Health care coverage
·Marital history
·Educational attainment
·Income and occupation
·Commute time to work
·Employment status
·Home values and monthly costs
·Veteran status
·Size of home or rental unit
This title is the latest in the County and City Extra Series of publications from Bernan Press. Other titles include County and City Extra, County and City Extra: Special Decennial Census Edition, and Places, Towns, and Townships.