Informed decision-making rests on three important elements: intuition, trust, and analytics. Intuition is based on experiential learning, and recent research has shown that those who rely on their "gut feelings" may do better than those who don't. Analytics, in turn, are essential for informing decisions in a data-driven environment. The third element, trust, is critical for knowledge sharing to take place. Together, these three elements (intuition, analytics, and trust) make a powerful combination for decision-making. This book gathers leading researchers who explore the role of each element in the process of decision-making.
Written in a highly accessible style, A Factor Model Approach to Derivative Pricing lays a clear and structured foundation for the pricing of derivative securities, based on simple absence-of-arbitrage ideas related to factor models. This unique and unifying approach provides a broad treatment of topics and models, including equity, interest-rate, and credit derivatives, as well as hedging and tree-based computational methods, without reliance on the heavy prerequisites that often accompany such topics. Key features:
- A single fundamental absence-of-arbitrage relationship based on factor models is used to motivate all the results in the book.
- A structured three-step procedure is used to guide the derivation of absence-of-arbitrage equations and illuminate core underlying concepts.
- Brownian motion and Poisson process driven models are treated together, allowing for a broad and cohesive presentation of topics.
- The final chapter provides a new approach to risk-neutral pricing that introduces the topic as a seamless and natural extension of the factor model approach.
Whether used as a text for an intermediate-level course in derivatives, or by researchers and practitioners seeking a better understanding of the fundamental ideas that underlie derivative pricing, readers will appreciate the book's ability to unify many disparate topics and models under a single conceptual theme. James A. Primbs is an Associate Professor of Finance at the Mihaylo College of Business and Economics at California State University, Fullerton.
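The blurb's "single fundamental absence of arbitrage relationship" is not spelled out here; the following is a hedged sketch of the standard single-factor version of the idea, as general background rather than the book's exact formulation:

```latex
% Single-factor absence-of-arbitrage relation (standard form; the book's
% own notation may differ). Suppose each tradable asset $i$ is driven by
% one Brownian factor $z$:
%     dS_i / S_i = \mu_i \, dt + \sigma_i \, dz .
% A two-asset portfolio weighted to cancel the dz term is riskless, so it
% must earn the risk-free rate $r$; rearranging yields
\[
  \frac{\mu_1 - r}{\sigma_1} = \frac{\mu_2 - r}{\sigma_2} = \lambda ,
\]
% a single market price of risk $\lambda$ shared by all assets, i.e.
% $\mu_i - r = \lambda \sigma_i$, from which factor-model pricing
% equations follow.
```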
Customer and Business Analytics: Applied Data Mining for Business Decision Making Using R explains and demonstrates, via the accompanying open-source software, how advanced analytical tools can address various business problems. It also gives insight into some of the challenges faced when deploying these tools. Extensively classroom-tested, the text is ideal for students in customer and business analytics or applied data mining as well as professionals in small- to medium-sized organizations. The book offers an intuitive understanding of how different analytics algorithms work. Where necessary, the authors explain the underlying mathematics in an accessible manner. Each technique presented includes a detailed tutorial that enables hands-on experience with real data. The authors also discuss issues often encountered in applied data mining projects and present the CRISP-DM process model as a practical framework for organizing these projects. Showing how data mining can improve the performance of organizations, this book and its R-based software provide the skills and tools needed to successfully develop advanced analytics capabilities.
The composition of portfolios is one of the most fundamental and important methods in financial engineering, used to control the risk of investments. This book provides a comprehensive overview of statistical inference for portfolios and its various applications. A variety of asset processes are introduced, including non-Gaussian stationary processes, nonlinear processes, and non-stationary processes, and the book provides a framework for statistical inference using local asymptotic normality (LAN). The approach is generalized to portfolio estimation, so that many important problems can be covered. This book can primarily be used as a reference by researchers from statistics, mathematics, finance, econometrics, and genomics. It can also be used as a textbook by senior undergraduate and graduate students in these fields.
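The blurb stays at the level of theory; as a concrete reference point, the classical object of portfolio inference is the mean-variance optimal weight vector, estimated by plugging sample moments into w proportional to the inverse covariance times the excess mean. A minimal numpy sketch on simulated returns (all numbers are made up; this is illustrative background, not the book's LAN-based methodology):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily returns for 4 assets (simulated, not from the book).
returns = rng.normal(0.001, 0.02, size=(250, 4))

r_f = 0.0001                    # assumed daily risk-free rate
mu_hat = returns.mean(axis=0)   # sample mean vector
sigma_hat = np.cov(returns.T)   # sample covariance matrix

# Plug-in estimate of the tangency portfolio: w ~ Sigma^{-1}(mu - r_f),
# normalised so the weights sum to one.
raw = np.linalg.solve(sigma_hat, mu_hat - r_f)
w_hat = raw / raw.sum()
print(w_hat.round(3))
```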
'Refreshingly clear and engaging' Tim Harford. 'Delightful . . . full of unique insights' Prof Sir David Spiegelhalter. There's no getting away from statistics. We encounter them every day, and we are all users of statistics whether we like it or not. Do missed appointments really cost the NHS £1bn per year? What's the difference between the mean gender pay gap and the median gender pay gap? How can we work out if a claim that we use 42 billion single-use plastic straws per year in the UK is accurate? What did the Vote Leave campaign's £350m bus really mean? How can we tell if the headline 'Public pensions cost you £4,000 a year' is correct? Does snow really cost the UK economy £1bn per day? How do we distinguish statistical fact from fiction, and what can we do to decide whether a number, claim or news story is accurate? Without an understanding of data, we cannot truly understand what is going on in the world around us. Written by Anthony Reuben, the BBC's first head of statistics, Statistical is an accessible and empowering guide to challenging the numbers all around us.
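The mean-versus-median question the blurb raises merits a toy illustration: because pay distributions are skewed by a few very high earners, the two gaps can differ wildly. A small Python sketch with made-up numbers (not figures from the book):

```python
import statistics

# Made-up hourly pay samples; one very high earner skews the men's mean.
men = [12, 13, 14, 15, 16, 18, 95]
women = [12, 13, 14, 15, 16, 17, 18]

mean_gap = (statistics.mean(men) - statistics.mean(women)) / statistics.mean(men)
median_gap = (statistics.median(men) - statistics.median(women)) / statistics.median(men)
print(f"mean gap: {mean_gap:.0%}, median gap: {median_gap:.0%}")
```

Here the mean gap comes out near 43% while the median gap is zero, which is why the choice of summary statistic matters.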
This book is about learning from data using Generalized Additive Models for Location, Scale and Shape (GAMLSS). GAMLSS extends Generalized Linear Models (GLMs) and Generalized Additive Models (GAMs) to accommodate large, complex datasets, which are increasingly prevalent. In particular, the GAMLSS statistical framework enables flexible regression and smoothing models to be fitted to the data. The GAMLSS model allows the response variable to have any parametric (continuous, discrete or mixed) distribution, which might be heavy- or light-tailed, and positively or negatively skewed. In addition, all the parameters of the distribution (location, scale, shape) can be modelled as linear or smooth functions of explanatory variables. Key features:
- Provides a broad overview of flexible regression and smoothing techniques for learning from data, while focusing on the practical application of the methodology using the GAMLSS software in R.
- Includes a comprehensive collection of real data examples, which reflect the range of problems addressed by GAMLSS models and provide a practical illustration of the process of using flexible GAMLSS models for statistical learning.
- R code integrated into the text for ease of understanding and replication.
- Supplemented by a website with code, data and extra materials.
This book aims to help readers understand how to learn from data encountered in many fields. It will be useful for practitioners and researchers who wish to understand and use GAMLSS models to learn from data, and for students who wish to learn GAMLSS through practical examples.
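There is no canonical Python port of GAMLSS (the book's software is the gamlss package in R), but the framework's central move, modelling every distribution parameter as a function of covariates, can be sketched by direct maximum likelihood with scipy. A minimal sketch under simple assumptions (normal response, linear location, log-linear scale, simulated data):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
# Simulated response whose location AND scale both depend on x,
# the situation GAMLSS is designed for.
y = 1.0 + 2.0 * x + rng.normal(0.0, np.exp(-1.0 + 1.5 * x))

def neg_loglik(theta):
    a, b, c, d = theta
    mu = a + b * x              # location: linear in x
    sigma = np.exp(c + d * x)   # scale: log-link keeps it positive
    return -norm.logpdf(y, loc=mu, scale=sigma).sum()

fit = minimize(neg_loglik, x0=np.zeros(4))
print(fit.x.round(2))  # estimates of (a, b, c, d)
```

The real GAMLSS framework generalizes this idea to many response distributions and to smooth, rather than merely linear, parameter functions.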
Although there are numerous books on research methodology, many fail to present a complete, hands-on, practical guide that leads college classes or individuals through the research process. We are seeing more and more scientific papers, from all research fields, that fail to meet the basic criteria in terms of research methods, as well as structure, writing style and presentation of results. This book aims to address that gap by providing an authoritative, easy-to-follow guide to research methods and how to apply them. Qualitative Methods in Economics is focused not only on research methods and techniques but also on methodology. The main objective of this book is to discuss qualitative methods and their use in economics and social science research. Chapters identify several of the research approaches commonly used in social studies, from the role of science through to the techniques of data collection. Using an example research paper to examine the methods used to present the research, the second half of the book breaks down how to present and format your results successfully. This book will be of use to students and researchers who want to improve their research methods, read up on new and cutting-edge advances in research methods, and study ways to improve the research process.
Economic evaluation has become an essential component of clinical trial design, used to show that new treatments and technologies offer value to payers in various healthcare systems. Although many books address the theoretical or practical aspects of cost-effectiveness analysis, this book differentiates itself by detailing how to apply health economic evaluation techniques in a clinical trial context, from both academic and pharmaceutical/commercial perspectives. It also includes a special chapter on clinical trials in cancer. Design & Analysis of Clinical Trials for Economic Evaluation & Reimbursement is not just about performing cost-effectiveness analyses. It also emphasizes the strategic importance of economic evaluation and offers guidance and advice on the complex factors at play before, during, and after an economic evaluation. Filled with detailed examples, the book bridges the gap between applications of economic evaluation in industry (mainly pharmaceutical) and what students may learn in university courses. It provides readers with access to SAS and STATA code, and Windows-based software for sample size and value of information analysis is available free of charge, making it a valuable resource for students considering a career in this field or for those who simply wish to know more about applying economic evaluation techniques. The book covers trial design, case report form design, quality of life measures, sample sizes, submissions to regulatory authorities for reimbursement, Markov models, cohort models, and decision trees, with examples and case studies at the end of each chapter. Presenting first-hand insights into how economic evaluations are performed from a drug development perspective, the book supplies readers with the foundation required to succeed in an environment where clinical trials and the cost-effectiveness of new treatments are central. It also includes thought-provoking exercises for use in classroom and seminar discussions.
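As general background (the blurb states no formulas), the summary measure at the heart of most cost-effectiveness analyses is the incremental cost-effectiveness ratio (ICER). A tiny worked example with made-up trial arms:

```python
# Incremental cost-effectiveness ratio (ICER) with made-up trial arms.
cost_new, cost_std = 12_000.0, 9_000.0   # mean cost per patient
qaly_new, qaly_std = 1.40, 1.15          # mean QALYs per patient

icer = (cost_new - cost_std) / (qaly_new - qaly_std)
print(f"ICER: {icer:,.0f} per QALY gained")  # -> 12,000 per QALY
```

A payer then compares this ratio against a willingness-to-pay threshold to decide whether the new treatment offers value.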
Principles of Copula Theory explores the state of the art on copulas and provides you with the foundation to use copulas in a variety of applications. Throughout the book, historical remarks and further readings highlight active research in the field, including new results, streamlined presentations, and new proofs of old results. After covering the essentials of copula theory, the book addresses the issue of modeling dependence among components of a random vector using copulas. It then presents copulas from the point of view of measure theory, compares methods for the approximation of copulas, and discusses the Markov product for 2-copulas. The authors also examine selected families of copulas that possess appealing features from both theoretical and applied viewpoints. The book concludes with in-depth discussions on two generalizations of copulas: quasi- and semi-copulas. Although copulas are not the solution to all stochastic problems, they are an indispensable tool for understanding several problems about stochastic dependence. This book gives you the solid and formal mathematical background to apply copulas to a range of mathematical areas, such as probability, real analysis, measure theory, and algebraic structures.
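The foundation the blurb alludes to is Sklar's theorem, which is worth stating since everything else in copula theory builds on it:

```latex
% Sklar's theorem (stated here for two dimensions).
% For any joint distribution function $H$ with margins $F$ and $G$,
% there exists a copula $C$ such that
\[
  H(x, y) = C\bigl(F(x), G(y)\bigr) \qquad \text{for all } x, y ,
\]
% and $C$ is unique on $\operatorname{Ran} F \times \operatorname{Ran} G$
% (in particular, unique when $F$ and $G$ are continuous). Conversely,
% gluing any copula $C$ to margins $F$ and $G$ in this way yields a valid
% joint distribution, which is what lets copulas model dependence
% separately from marginal behaviour.
```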
A unique and comprehensive source of information, this book is the only international publication providing economists, planners, policymakers and business people with worldwide statistics on current performance and trends in the manufacturing sector. The Yearbook is designed to facilitate international comparisons relating to manufacturing activity and industrial development and performance. It provides data which can be used to analyse patterns of growth and related long term trends, structural change and industrial performance in individual industries. Statistics on employment patterns, wages, consumption and gross output and other key indicators are also presented.
Quants, physicists working on Wall Street as quantitative analysts, have been widely blamed for triggering financial crises with their complex mathematical models. Their formulas were meant to allow Wall Street to prosper without risk. But in this penetrating insider's look at the recent economic collapse, Emanuel Derman, former head quant at Goldman Sachs, explains the collision between mathematical modeling and economics and what makes financial models so dangerous. Though such models imitate the style of physics and employ the language of mathematics, theories in physics aim for a description of reality; in finance, models can shoot only for a very limited approximation of reality. Derman uses his firsthand experience in financial theory and practice to explain the complicated tangles that have paralyzed the economy. Models.Behaving.Badly. exposes Wall Street's love affair with models, and shows us why nobody will ever be able to write a model that can encapsulate human behavior.
The main objective of politicians is to maximise economic growth, which heavily drives political policy and decision-making. Critics of the maximisation of growth as the central aim of economic policy have argued that growth in itself is not necessarily a good thing, particularly for the environment; however, what would replace the system and how it would be measured are questions that have been rarely answered satisfactorily. First published in 1991, this book was the first to lay out an entirely new set of practical proposals for developing new economic measurement tools, with the aim of being sustainable, 'green' and human-centred. Victor Anderson proposes that a whole set of indicators, rather than a single one, should play all the roles that GNP (Gross National Product) is responsible for. With a detailed overview of the central debates between the advocates and opponents of continued economic growth and an analysis of the various proposals for modification, this title will be of particular value to students interested in the diversity of measurement tools and the notion that economies should also be evaluated by their social and environmental consequences.
Statistics for Business and Economics introduces statistics in the context of contemporary business. Emphasising statistical literacy in thinking, the text applies its concepts with real data and uses technology to develop a deeper conceptual understanding. Examples, activities, and case studies foster active learning in the classroom while emphasising intuitive concepts of probability and teaching students to make informed business decisions. The 14th Edition continues to highlight the importance of ethical behaviour in collecting, interpreting, and reporting on data, while also providing a wealth of new and updated exercises and case studies.
Economic Time Series: Modeling and Seasonality is a focused resource on the analysis of economic time series as it pertains to modeling and seasonality, presenting cutting-edge research that would otherwise be scattered throughout diverse peer-reviewed journals. This compilation of 21 chapters showcases the cross-fertilization between the fields of time series modeling and seasonal adjustment, as is reflected both in the contents of the chapters and in their authorship, with contributors coming from academia and government statistical agencies. For easier perusal and absorption, the contents have been grouped into seven topical sections:
- Section I deals with periodic modeling of time series, introducing, applying, and comparing various seasonally periodic models.
- Section II examines the estimation of time series components when models for series are misspecified in some sense, and the broader implications this has for seasonal adjustment and business cycle estimation.
- Section III examines the quantification of error in X-11 seasonal adjustments, with comparisons to error in model-based seasonal adjustments.
- Section IV discusses some practical problems that arise in seasonal adjustment: developing asymmetric trend-cycle filters, dealing with both temporal and contemporaneous benchmark constraints, detecting trading-day effects in monthly and quarterly time series, and using diagnostics in conjunction with model-based seasonal adjustment.
- Section V explores outlier detection and the modeling of time series containing extreme values, developing new procedures and extending previous work.
- Section VI examines some alternative models and inference procedures for the analysis of seasonal economic time series.
- Section VII deals with aspects of modeling, estimation, and forecasting for nonseasonal economic time series.
By presenting new methodological developments as well as pertinent empirical analyses and reviews of established methods, the book provides much that is stimulating and practically useful for the serious researcher and analyst of economic time series.
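As general background to the seasonal adjustment theme (a classical decomposition, not the book's X-11 or model-based machinery), a simulated monthly series can be decomposed in a few lines; the data below are made up and the statsmodels call is a standard library function:

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(2)
t = np.arange(120)
# Simulated monthly series: linear trend + annual cycle + noise.
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)

parts = seasonal_decompose(y, model="additive", period=12)
print(np.round(parts.seasonal[:12], 2))  # one year of the estimated seasonal pattern
```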
It is increasingly common for analysts to seek out the opinions of individuals and organizations using attitudinal scales such as degree of satisfaction or importance attached to an issue. Examples include levels of obesity, seriousness of a health condition, attitudes towards service levels, opinions on products, voting intentions, and the degree of clarity of contracts. Ordered choice models provide a relevant methodology for capturing the sources of influence that explain the choice made amongst a set of ordered alternatives. The methods have evolved to a level of sophistication that can allow for heterogeneity in the threshold parameters, in the explanatory variables (through random parameters), and in the decomposition of the residual variance. This book brings together contributions in ordered choice modeling from a number of disciplines, synthesizing developments over the last fifty years, and suggests useful extensions to account for the wide range of sources of influence on choice.
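The workhorse specification behind the blurb's "ordered alternatives" is compact enough to state; here is a standard ordered probit/logit, as general background rather than the book's own notation:

```latex
% Standard ordered choice model (probit or logit).
% A latent propensity $y^{*} = x'\beta + \varepsilon$ is mapped to the
% observed ordered category $j$ by threshold parameters
% $\mu_0 < \mu_1 < \dots < \mu_J$:
\[
  \Pr(y = j \mid x) = F(\mu_j - x'\beta) - F(\mu_{j-1} - x'\beta) ,
\]
% where $F$ is the CDF of $\varepsilon$ (standard normal for probit,
% logistic for logit). The extensions the blurb describes allow the
% thresholds $\mu_j$ and the coefficients $\beta$ to vary randomly
% across respondents, and decompose the residual variance.
```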
The chapters in this book describe various aspects of the application of statistical methods in finance. The book aims to interest and attract statisticians to this area, to illustrate some of the many ways that statistical tools are used in financial applications, and to give some indication of problems that are still outstanding. Statisticians will be stimulated to learn more about the kinds of models and techniques outlined; both the domain of finance and the science of statistics will benefit from increased awareness by statisticians of the problems, models, and techniques applied in finance. For this reason, extensive references are given. The level of technical detail varies between the chapters: some present broad non-technical overviews of an area, while others describe the mathematical niceties. This illustrates the range of possibilities the area offers statisticians, while giving a flavour of the different kinds of mathematical and statistical skills required. Whether you favour data analysis or mathematical manipulation, if you are a statistician there are problems in finance appropriate to your skills.
This book analyzes four distinct, although related, areas of social choice theory and welfare economics: non-strategic choice, Harsanyi's aggregation theorems, distributional ethics, and strategic choice. In the aggregation of individual rankings of social states, whether the persons behave strategically or non-strategically, decision-making takes place under complete certainty; in the Harsanyi framework, by contrast, uncertainty plays a significant role in the decision-making process. Another distinctive feature of the book is its discussion of ethical approaches to the evaluation of inequality arising from unequal distributions of achievements in the different dimensions of human well-being. Given its wide coverage, combined with newly added material, end-of-chapter problems, and bibliographical notes, the book will be helpful to students and researchers interested in this frontline area of research. Its lucid exposition, non-technical and graphical illustration of concepts, and use of numerical examples make it a useful text.
The first book for a popular audience on the transformative, democratising technology of 'DeFi'. After over a decade of Bitcoin, which has now moved beyond lore and hype into an increasingly robust star in the firmament of global assets, a new and more important question has arisen. What happens beyond Bitcoin? The answer is decentralised finance - 'DeFi'. Tech and finance experts Steven Boykey Sidley and Simon Dingle argue that DeFi - which enables all manner of financial transactions to take place directly, person to person, without the involvement of financial institutions - will redesign the cogs and wheels in the engines of trust, and make the remarkable rise of Bitcoin look quaint by comparison. It will disrupt and displace fine and respectable companies, if not entire industries. Sidley and Dingle explain how DeFi works, introduce the organisations and individuals that comprise the new industry, and identify the likely winners and losers in the coming revolution.
This essential reference for students and scholars in the input-output research and applications community has been fully revised and updated to reflect important developments in the field. Expanded coverage includes construction and application of multiregional and interregional models, including international models and their application to global economic issues such as climate change and international trade; structural decomposition and path analysis; linkages and key sector identification and hypothetical extraction analysis; the connection of national income and product accounts to input-output accounts; supply and use tables for commodity-by-industry accounting and models; social accounting matrices; non-survey estimation techniques; and energy and environmental applications. Input-Output Analysis is an ideal introduction to the subject for advanced undergraduate and graduate students in many scholarly fields, including economics, regional science, regional economics, city, regional and urban planning, environmental planning, public policy analysis and public management.
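The core computation of input-output analysis is the Leontief inverse: total output x must cover both intermediate use Ax and final demand f, so x = (I - A)^(-1) f. A minimal numpy sketch with a made-up two-sector technology matrix (illustrative, not an example from the book):

```python
import numpy as np

# Made-up 2-sector technical coefficients: A[i, j] is the input from
# sector i required per unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
f = np.array([100.0, 50.0])   # final demand by sector

# Total output x satisfies x = A @ x + f, so x = (I - A)^{-1} f.
x = np.linalg.solve(np.eye(2) - A, f)
print(x.round(1))  # -> [175.  133.3]
```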
Factor Analysis and Dimension Reduction in R provides coverage, with worked examples, of a large number of dimension reduction procedures, along with model performance metrics to compare them. Factor analysis in the form of principal components analysis (PCA) or principal factor analysis (PFA) is familiar to most social scientists. What is less familiar is that factor analysis is a subset of the more general statistical family of dimension reduction methods, and the social scientist's toolkit for factor analysis problems can be expanded to include the range of solutions this book presents. In addition to covering FA and PCA with orthogonal and oblique rotation, the book covers higher-order factor models, bifactor models, models based on binary and ordinal data, models based on mixed data, generalized low-rank models (GLRM), cluster analysis with GLRM, models involving supplemental variables or observations, Bayesian factor analysis, regularized factor analysis, testing for unidimensionality, and prediction with factor scores. The second half of the book deals with other procedures for dimension reduction, including kernel PCA, factor analysis with multidimensional scaling, locally linear embedding models, Laplacian eigenmaps, diffusion maps, force-directed methods, t-distributed stochastic neighbor embedding (t-SNE), independent component analysis (ICA), dimensionality reduction via regression (DRR), non-negative matrix factorization (NNMF), Isomap, autoencoders, uniform manifold approximation and projection (UMAP) models, neural network models, and longitudinal factor analysis models. In addition, a special chapter covers metrics for comparing model performance. Features of this book include:
- Numerous worked examples with replicable R code
- Explicit, comprehensive coverage of data assumptions
- Adaptation of factor methods to binary, ordinal, and categorical data
- Residual and outlier analysis
- Visualization of factor results
- Final chapters that treat integration of factor analysis with neural network and time series methods
Presented in color, with R code and an introduction to R and RStudio, this book will be suitable for graduate-level courses and optional modules for social scientists, and for courses on quantitative methods and multivariate statistics.
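The book's code is in R; as a language-neutral illustration of its most familiar technique, here is PCA computed from the singular value decomposition in numpy (simulated data, not an example from the book):

```python
import numpy as np

rng = np.random.default_rng(3)
# 100 observations of 5 variables driven by 2 latent factors (simulated).
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 5)) + rng.normal(0.0, 0.1, size=(100, 5))

Xc = X - X.mean(axis=0)            # centre each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                     # principal component scores
explained = s**2 / np.sum(s**2)    # variance share per component
print(explained.round(3))          # the first two components dominate
```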
Delivering cutting-edge coverage that includes the latest thinking and practices from the field, QUALITY AND PERFORMANCE EXCELLENCE, 8e presents the basic principles and tools associated with quality and performance excellence. Packed with relevant, real-world examples, the text thoroughly illustrates how these principles and methods have been put into effect in a variety of organizations. It also highlights the relationship between basic principles and the popular theories and models studied in management courses. The eighth edition reflects the 2015-16 Baldrige criteria and includes new boxed features, experiential exercises, and up-to-date case studies that give you practical experience working with real-world issues. Many cases focus on large and small companies in manufacturing and service industries in North and South America, Europe, and Asia-Pacific. In addition, chapters now open with a "Performance Excellence Profile" highlighting a recent Baldrige recipient.
Business Statistics narrows the gap between theory and practice by focusing on relevant statistical methods, thus empowering business students to make good, data-driven decisions. Using the latest GAISE (Guidelines for Assessment and Instruction in Statistics Education) report, which included extensive revisions to reflect both the evolution of technology and new wisdom on statistics education, this edition brings a modern edge to teaching business statistics. This includes a focus on the report's key recommendations: teaching statistical thinking, focusing on conceptual understanding, integrating real data with a context and a purpose, fostering active learning, using technology to explore concepts and analyse data, and using assessments to improve and evaluate student learning. By presenting statistics in the context of real-world businesses and by emphasising analysis and understanding over computation, this book helps students be more analytical, prepares them to make better business decisions, and shows them how to effectively communicate results.