Bayesian analysis has developed rapidly in applications in the last
two decades and research in Bayesian methods remains dynamic and
fast-growing. Dramatic advances in modelling concepts and
computational technologies now enable routine application of
Bayesian analysis using increasingly realistic stochastic models,
and this drives the adoption of Bayesian approaches in many areas
of science, technology, commerce, and industry.
This handbook focuses on the analytical dimension of research in international entrepreneurship. It offers a diverse collection of chapters on the qualitative and quantitative methods currently practised in the field, which future researchers can also draw on. The qualitative cluster covers articles, conceptual and empirical chapters, as well as literature reviews, whereas the quantitative cluster analyses international entrepreneurship through a broad range of statistical methods, such as regressions, panel data, structural equation modelling, and decision-making and optimisation models under certain and uncertain circumstances. This book is essential reading for researchers, scholars, and practitioners who want to learn and apply new methods for analysing entrepreneurial opportunities across national borders.
This book offers international perspectives that unify the themes of strategic management, decision theory, and data science. It contains thought-provoking case studies backed by careful analysis that adds significance to the discussions. Most of the decision-making models in use take due advantage of the collection and processing of relevant data, applying appropriate analytics to provide inputs for effective decision-making. The book showcases applications in diverse fields, including banking and insurance, portfolio management, inventory analysis, performance assessment of comparable economic agents, managing utilities in a health-care facility, reducing traffic snarls on highways, and monitoring the achievement of sustainable development goals in a country or state, among other areas with policy implications. It holds immense value for researchers as well as professionals responsible for organizational decisions.
This proceedings book presents state-of-the-art developments in theory, methodology, and applications of network analysis across sociology, computational science, education research, literature studies, political science, international relations, social media research, and urban studies. The papers comprising this collection were presented at the Fifth 'Networks in the Global World' conference organized by the Centre for German and European Studies of St. Petersburg University and Bielefeld University and held on July 7-9, 2020. This biennial conference series revolves around key interdisciplinary issues in the focus of network analysts, such as the multidimensional approach to social reality, translation of theories and methods across disciplines, and mixing of data and methods. The distinctive features of this book are the emphasis on in-depth linkages between theory, method, and applications, the blend of qualitative and quantitative methods, and the joint consideration of different network levels, types, and contexts. The topics covered by the papers include interrelation of social and cultural structures, constellations of power, and patterns of interaction in areas ranging from various types of communities (local, international, educational, political, and so on) to social media and literature. The book is useful for practicing researchers, graduate and postgraduate students, and educators interested in network analysis of social relations, politics, economy, and culture. Features that set the book apart from others in the field: * The book offers a unique cross-disciplinary blend of computational and ethnographic network analyses applied to a diverse spectrum of spheres, from literature and education to urban planning and policymaking. * Embracing conceptual, methodological, and empirical works, the book is among the few in network analysis to emphasize connections between theory, method, and applications.
* The book brings together authors and empirical contexts from all over the globe, with a particular emphasis on European societies.
This book presents the theory and methods of flexible and generalized uncertainty optimization. Particularly, it describes the theory of generalized uncertainty in the context of optimization modeling. The book starts with an overview of flexible and generalized uncertainty optimization. It covers uncertainties that are both associated with lack of information and are more general than stochastic theory, where well-defined distributions are assumed. Starting from families of distributions that are enclosed by upper and lower functions, the book presents construction methods for obtaining flexible and generalized uncertainty input data that can be used in a flexible and generalized uncertainty optimization model. It then describes the development of the associated optimization model in detail. Written for graduate students and professionals in the broad field of optimization and operations research, this second edition has been revised and extended to include more worked examples and a section on interval multi-objective mini-max regret theory along with its solution method.
This book provides a practical guide to the analysis of data from randomized controlled trials (RCTs). It answers the question of how to estimate the intervention effect in an appropriate way. This problem is examined for different RCT designs, such as RCTs with one follow-up measurement, RCTs with more than one follow-up measurement, cluster RCTs, cross-over trials, stepped wedge trials, and N-of-1 trials. The statistical methods are explained in a non-mathematical way and are illustrated by extensive examples. All datasets used in the book are available for download, so readers can reanalyse the examples to gain a better understanding of the methods used. Although most examples are taken from epidemiological and clinical studies, this book is also highly recommended for researchers working in other fields.
Structural Equation Modeling provides a conceptual and mathematical understanding of structural equation modelling, helping readers across disciplines understand how to test or validate theoretical models and build relationships between observed variables. In addition to providing a background understanding of the concepts, it offers step-by-step illustrative applications with the AMOS, SPSS, and R software programmes. This volume will serve as a useful reference for academic and industry researchers in the fields of engineering, management, psychology, sociology, human resources, and humanities.
This book offers an overview of the statistical methods used in clinical and observational vaccine studies. Pursuing a practical rather than theoretical approach, it presents a range of real-world examples with SAS code, making the application of the methods straightforward. This revised edition has been significantly expanded to reflect the current interest in this area. It opens with two introductory chapters on the immunology of vaccines to provide readers with the necessary background knowledge. It then continues with an in-depth exploration of the analysis of immunogenicity data. Discussed are, amongst others, maximum likelihood estimation for censored antibody titers, ANCOVA for antibody values, and the analysis of data from equivalence and non-inferiority immunogenicity studies. Other topics covered include fitting protection curves to data from vaccine efficacy studies, and the analysis of vaccine safety data. In addition, the book features four new chapters on vaccine field studies: an introductory one, one on randomized vaccine efficacy studies, one on observational vaccine effectiveness studies, and one on the meta-analysis of vaccine efficacy studies. The book offers useful insights for statisticians and epidemiologists working in the pharmaceutical industry or at vaccines institutes, as well as graduate students interested in pharmaceutical statistics.
This book takes a fresh look at the popular and well-established method of maximum likelihood for statistical estimation and inference. It begins with an intuitive introduction to the concepts and background of likelihood, and moves through to the latest developments in maximum likelihood methodology, including general latent variable models and new material for the practical implementation of integrated likelihood using the free ADMB software. Fundamental issues of statistical inference are also examined, with a presentation of some of the philosophical debates underlying the choice of statistical paradigm. Key features: * Provides an accessible introduction to pragmatic maximum likelihood modelling. * Covers more advanced topics, including general forms of latent variable models (including non-linear and non-normal mixed-effects and state-space models) and the use of maximum likelihood variants, such as estimating equations, conditional likelihood, restricted likelihood and integrated likelihood. * Adopts a practical approach, with a focus on providing the relevant tools required by researchers and practitioners who collect and analyze real data. * Presents numerous examples and case studies across a wide range of applications including medicine, biology and ecology. * Features applications from a range of disciplines, with implementation in R, SAS and/or ADMB. * Provides all program code and software extensions on a supporting website. * Confines supporting theory to the final chapters to maintain a readable and pragmatic focus of the preceding chapters. This book is not just an accessible and practical text about maximum likelihood, it is a comprehensive guide to modern maximum likelihood estimation and inference. It will be of interest to readers of all levels, from novice to expert. It will be of great benefit to researchers, and to students of statistics from senior undergraduate to graduate level. 
For use as a course text, exercises are provided at the end of each chapter.
Data-driven discovery is revolutionizing how we model, predict, and control complex systems. Now with Python and MATLAB®, this textbook trains mathematical scientists and engineers for the next generation of scientific discovery by offering a broad overview of the growing intersection of data-driven methods, machine learning, applied optimization, and classical fields of engineering mathematics and mathematical physics. With a focus on integrating dynamical systems modeling and control with modern methods in applied machine learning, this text includes methods that were chosen for their relevance, simplicity, and generality. Topics range from introductory to research-level material, making it accessible to advanced undergraduate and beginning graduate students from the engineering and physical sciences. The second edition features new chapters on reinforcement learning and physics-informed machine learning, significant new sections throughout, and chapter exercises. Online supplementary material - including lecture videos per section, homeworks, data, and code in MATLAB®, Python, Julia, and R - available on databookuw.com.
This monograph presents the mathematical theory of statistical models
described by an essentially large number of unknown parameters,
comparable with the sample size or even much larger. In this
sense, the proposed theory can be called "essentially
multiparametric." It is developed on the basis of the Kolmogorov
asymptotic approach, in which the sample size increases along with
the number of unknown parameters.
This book offers essential, systematic information on the assessment of the spatial association between two processes from a statistical standpoint. Divided into eight chapters, the book begins with preliminary concepts, mainly concerning spatial statistics. The following seven chapters focus on the methodologies needed to assess the correlation between two or more processes; from theory introduced 35 years ago, to techniques that have only recently been published. Furthermore, each chapter contains a section on R computations to explore how the methodology works with real data. References and a list of exercises are included at the end of each chapter. The assessment of the correlation between two spatial processes has been tackled from several different perspectives in a variety of application fields. In particular, the problem of testing for the existence of spatial association between two georeferenced variables is relevant for posterior modeling and inference. One evident application in this context is the quantification of the spatial correlation between two images (processes defined on a rectangular grid in a two-dimensional space). From a statistical perspective, this problem can be handled via hypothesis testing, or by using extensions of the correlation coefficient. In an image-processing framework, these extensions can also be used to define similarity indices between images.
Box and Jenkins (1970) popularized the idea of obtaining a stationary time series by differencing the given, possibly nonstationary, time series. Numerous time series in economics are found to have this property. Subsequently, Granger and Joyeux (1980) and Hosking (1981) found examples of time series whose fractional difference becomes a short memory process, in particular a white noise, while the initial series has unbounded spectral density at the origin, i.e. exhibits long memory. Further examples of data following long memory were found in hydrology and in network traffic data, while in finance the phenomenon of strong dependence was established by the dramatic empirical success of long memory processes in modeling the volatility of asset prices and power transforms of stock market returns. At present there is a need for a text from which an interested reader can methodically learn about some basic asymptotic theory and techniques found useful in the analysis of statistical inference procedures for long memory processes. This text makes an attempt in this direction. The authors provide, in a concise style, a graduate-level text summarizing theoretical developments for both short and long memory processes and their applications to statistics. The book also contains some real data applications and mentions some unsolved inference problems for interested researchers in the field.
The first edition of this classic book has become the authoritative reference for physicists desiring to master the finer points of statistical data analysis. This second edition contains all the important material of the first, much of it unavailable from any other sources. In addition, many chapters have been updated with considerable new material, especially in areas concerning the theory and practice of confidence intervals, including the important Feldman-Cousins method. Both frequentist and Bayesian methodologies are presented, with a strong emphasis on techniques useful to physicists and other scientists in the interpretation of experimental data and comparison with scientific theories. This is a valuable textbook for advanced graduate students in the physical sciences as well as a reference for active researchers.
This volume presents new methods and applications of longitudinal data estimation methodology in applied economics. Featuring selected papers from the 2020 International Conference on Applied Economics (ICOAE 2020), held virtually due to the coronavirus pandemic, this book examines interdisciplinary topics such as financial economics, international economics, agricultural economics, marketing, and management. Country-specific case studies are also featured.
This book takes the reader through real-world examples of how to characterize and measure the productivity and performance of NFPs and education institutions, that is, organisations that produce value for society which cannot be measured accurately in financial KPIs. It focuses on how best to frame non-profit performance and productivity, and provides a suite of tools for measurement and benchmarking. It further challenges the reader to consider alternative and appropriate uses of quantitative measures, which are fit for purpose in individual contexts. It is true that the risk of misusing quantitative measures is ever-present. But does that risk outweigh the benefits of forming a more precise and shared understanding of what could generate better outcomes? There will always be concerns about policy and performance management. Goodhart's Law states that once a measure becomes a target, it is no longer a good measure. This book helps to strike a meaningful balance between what can be measured, what cannot, and how best to use quantitative information in sectors that are often averse to being held up to the light and put on a scale by outsiders.
This is an introductory statistics book designed to provide scientists with the practical information needed to apply the most common statistical tests to laboratory research data. The book is designed to be practical and applicable, so only minimal space is devoted to theory or equations. Emphasis is placed on the underlying principles of effective data analysis and a survey of the statistical tests. It is of special value for scientists who have access to Minitab software. Examples are provided for all the statistical tests, and the interpretation of these results is explained using Minitab output (similar to the results from any common software package). The book is specifically designed to contribute to the AAPS series on advances in the pharmaceutical sciences. It benefits professional scientists or graduate students who have not had a formal statistics class, who had bad experiences in such classes, or who just fear or don't understand statistics. Chapter 1 focuses on terminology and the essential elements of statistical testing; statistics is often complicated by synonyms, and this chapter establishes the terms used in the book and how the rudiments interact to create statistical tests. Chapter 2 discusses descriptive statistics that are used to organize and summarize sample results. Chapter 3 discusses basic assumptions of probability, characteristics of a normal distribution, and alternative approaches for non-normal distributions, and introduces the topic of making inferences about a larger population based on a small sample from that population. Chapter 4 discusses hypothesis testing, where computer output is interpreted and decisions are made regarding statistical significance; this chapter also deals with the determination of appropriate sample sizes. The next three chapters focus on tests that make decisions about a population based on a small subset of information. Chapter 5 looks at statistical tests that evaluate whether a significant difference exists. In Chapter 6 the tests try to determine the extent and importance of relationships. In contrast to the fifth chapter, Chapter 7 presents tests that evaluate equivalence, rather than difference, between the levels being tested. The last chapter deals with potential outliers or aberrant values and how to determine statistically whether they should be removed from the sample data. Each statistical test presented includes an example problem with the resultant software output and guidance on how to interpret the results. Minimal time is spent on mathematical calculations or theory. For those interested in the associated equations, supplemental figures are presented for each test with the respective formulas. In addition, Appendix D presents the equations and proofs for every output result for the various examples. Examples and results are displayed using Minitab 18. In addition to the results, the steps required to analyze data using Minitab are presented with the examples for those having access to this software. Numerous other software packages are available, including basic data analysis with Excel.
Quantum mechanics is arguably one of the most successful scientific theories ever, and its applications to chemistry, optics, and information theory are innumerable. This book provides the reader with a rigorous treatment of the main mathematical tools from harmonic analysis which play an essential role in the modern formulation of quantum mechanics. This allows us at the same time to suggest some new ideas and methods, with a special focus on topics such as the Wigner phase space formalism and its applications to the theory of the density operator and its entanglement properties. This book can be read with profit by advanced undergraduate students in mathematics and physics, as well as by experienced researchers.
This book presents an introduction to linear univariate and multivariate time series analysis, providing brief theoretical insights into each topic, and from the beginning illustrating the theory with software examples. As such, it quickly introduces readers to the peculiarities of each subject from both theoretical and practical points of view. It also includes numerous examples and real-world applications that demonstrate how to handle different types of time series data. The associated software package, SSMMATLAB, is written in MATLAB and also runs on the free OCTAVE platform. The book focuses on linear time series models using a state space approach, with the Kalman filter and smoother as the main tools for model estimation, prediction and signal extraction. A chapter on state space models describes these tools and provides examples of their use with general state space models. Other topics discussed in the book include ARIMA, transfer function, and structural models; signal extraction using the canonical decomposition in the univariate case; and VAR, VARMA, cointegrated VARMA, VARX, VARMAX, and multivariate structural models in the multivariate case. It also addresses spectral analysis, the use of fixed filters in a model-based approach, and automatic model identification procedures for ARIMA and transfer function models in the presence of outliers, interventions, complex seasonal patterns and other effects like Easter, trading day, etc. This book is intended for both students and researchers in various fields dealing with time series. The software provides numerous automatic procedures to handle common practical situations, but at the same time, readers with programming skills can write their own programs to deal with specific problems.
Although the theoretical introduction to each topic is kept to a minimum, readers can consult the companion book 'Multivariate Time Series With Linear State Space Structure', by the same author, if they require more details.
This book shows how to decompose high-dimensional microarrays into small subspaces (Small Matryoshkas, SMs), statistically analyze them, and perform cancer gene diagnosis. The information is useful for genetic experts, anyone who analyzes genetic data, and students, who can use it as a practical textbook. Discriminant analysis is the best approach for microarrays consisting of normal and cancer classes. Microarrays are linearly separable data (LSD, Fact 3). However, because most linear discriminant functions (LDFs) cannot discriminate LSD theoretically and their error rates are high, no one had discovered Fact 3 until now. Hard-margin SVM (H-SVM) and Revised IP-OLDF (RIP) can find Fact 3 easily. LSD has the Matryoshka structure and is easily decomposed into many SMs (Fact 4). Because all SMs are small samples and LSD, statistical methods can analyze SMs easily. However, useful results cannot be obtained. On the other hand, H-SVM and RIP can discriminate the two classes in an SM entirely. RatioSV is the ratio of the SV distance to the discriminant range. The maximum RatioSV across six microarrays is over 11.67%. This fact shows that SV separates the two classes by a window width of 11.67%. Such easy discrimination has been unresolved since 1970. The reason is revealed by the facts presented here, so this book can be read and enjoyed like a mystery novel. Many studies point out that it is difficult to separate signal and noise in a high-dimensional gene space. However, the definition of the signal is not clear. Convincing evidence is presented that LSD is a signal. Statistical analysis of the genes contained in an SM cannot provide useful information, but it shows that the discriminant score (DS) produced by RIP or H-SVM is easily LSD. For example, the Alon microarray has 2,000 genes, which can be divided into 66 SMs. If the 66 DSs are used as variables, the result is 66-dimensional data. These signal data can be analyzed to find malignancy indicators by principal component analysis and cluster analysis.
This handy supplement shows students how to come to the answers
shown in the back of the text. It includes solutions to all of the
odd numbered exercises.
The spectral geometry of infinite graphs deals with three major themes and their interplay: the spectral theory of the Laplacian, the geometry of the underlying graph, and the heat flow with its probabilistic aspects. In this book, all three themes are brought together coherently under the perspective of Dirichlet forms, providing a powerful and unified approach. The book gives a complete account of key topics of infinite graphs, such as essential self-adjointness, Markov uniqueness, spectral estimates, recurrence, and stochastic completeness. A major feature of the book is the use of intrinsic metrics to capture the geometry of graphs. As for manifolds, Dirichlet forms in the graph setting offer a structural understanding of the interaction between spectral theory, geometry and probability. For graphs, however, the presentation is much more accessible and inviting thanks to the discreteness of the underlying space, laying bare the main concepts while preserving the deep insights of the manifold case. Graphs and Discrete Dirichlet Spaces offers a comprehensive treatment of the spectral geometry of graphs, from the very basics to deep and thorough explorations of advanced topics. With modest prerequisites, the book can serve as a basis for a number of topics courses, starting at the undergraduate level.
This thesis presents a revolutionary technique for modelling the dynamics of a quantum system that is strongly coupled to its immediate environment. This is a challenging but timely problem. In particular, it is relevant for modelling decoherence in devices such as quantum information processors, and how quantum information moves between spatially separated parts of a quantum system. The key feature of this work is a novel way to represent the dynamics of general open quantum systems as tensor networks, a result which has connections with the Feynman operator calculus and process tensor approaches to quantum mechanics. The tensor network methodology developed here has proven to be extremely powerful: for many situations it may be the most efficient way of calculating open quantum dynamics. This work abounds with new ideas and invention, and is likely to have a very significant impact on future generations of physicists.
This book discusses various statistical models and their implications for developing landslide susceptibility and risk zonation maps. It also presents a range of statistical techniques, i.e. bivariate and multivariate statistical models and machine learning models, as well as multi-criteria evaluation, pseudo-quantitative and probabilistic approaches. As such, it provides methods and techniques for RS & GIS-based models in spatial distribution for all those engaged in the preparation and development of projects, research, training courses and postgraduate studies. Further, the book offers a valuable resource for students using RS & GIS techniques in their studies.