Sheldon Ross's classic bestseller, "Introduction to Probability Models," has been used extensively by professionals and as the primary text for a first undergraduate course in applied probability. It introduces elementary probability theory and stochastic processes, and shows how probability theory can be applied to fields such as engineering, computer science, management science, the physical and social sciences, and operations research. The hallmark features of this renowned text remain in this eleventh edition: superior writing style; excellent exercises and examples covering the wide breadth of probability topics; and real-world applications in engineering, science, business and economics. The 65% new chapter material includes coverage of finite capacity queues, insurance risk models, and Markov chains, as well as updated data.
This book offers essential, systematic information on the assessment of the spatial association between two processes from a statistical standpoint. Divided into eight chapters, the book begins with preliminary concepts, mainly concerning spatial statistics. The following seven chapters focus on the methodologies needed to assess the correlation between two or more processes; from theory introduced 35 years ago, to techniques that have only recently been published. Furthermore, each chapter contains a section on R computations to explore how the methodology works with real data. References and a list of exercises are included at the end of each chapter. The assessment of the correlation between two spatial processes has been tackled from several different perspectives in a variety of application fields. In particular, the problem of testing for the existence of spatial association between two georeferenced variables is relevant for posterior modeling and inference. One evident application in this context is the quantification of the spatial correlation between two images (processes defined on a rectangular grid in a two-dimensional space). From a statistical perspective, this problem can be handled via hypothesis testing, or by using extensions of the correlation coefficient. In an image-processing framework, these extensions can also be used to define similarity indices between images.
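As an illustrative aside (not an example from the book, which uses R), the naive starting point for such a similarity index can be sketched in Python: the Pearson correlation of two flattened grids. The grid size, coefficients and noise level below are invented for the sketch.

```python
import numpy as np

# Hypothetical example: two processes observed on the same 20x20 grid.
# The second "image" is a noisy copy of the first, so a strong positive
# spatial association is expected.
rng = np.random.default_rng(0)
x = rng.normal(size=(20, 20))
y = 0.8 * x + 0.2 * rng.normal(size=(20, 20))

# Naive similarity index: Pearson correlation of the flattened grids.
# This ignores spatial autocorrelation, which is exactly why the modified
# tests discussed in the book are needed for valid inference.
r = np.corrcoef(x.ravel(), y.ravel())[0, 1]
print(f"correlation between the two images: {r:.3f}")
```

Because neighbouring pixels are not independent, the effective sample size is smaller than the number of pixels, and a test based on this plain coefficient would be anti-conservative; that is the motivation for the corrected tests the book develops.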
This book takes the reader through real-world examples of how to characterize and measure the productivity and performance of NFPs and education institutions, that is, organisations that produce value for society which cannot be measured accurately in financial KPIs. It focuses on how best to frame non-profit performance and productivity, and provides a suite of tools for measurement and benchmarking. It further challenges the reader to consider alternative and appropriate uses of quantitative measures, which are fit-for-purpose in individual contexts. It is true that the risk of misusing quantitative measures is ever-present. But does that risk outweigh the benefits of forming a more precise and shared understanding of what could generate better outcomes? There will always be concerns about policy and performance management. Goodhart's Law states that once a measure becomes a target, it is no longer a good measure. This book helps to strike a meaningful balance between what can be measured, what cannot, and how best to use quantitative information in sectors that are often averse to being held up to the light and put on a scale by outsiders.
This volume presents new methods and applications in longitudinal data estimation methodology in applied economics. Featuring selected papers from the 2020 International Conference on Applied Economics (ICOAE 2020), held virtually due to the coronavirus pandemic, this book examines interdisciplinary topics such as financial economics, international economics, agricultural economics, marketing and management. Country-specific case studies are also featured.
The spectral geometry of infinite graphs deals with three major themes and their interplay: the spectral theory of the Laplacian, the geometry of the underlying graph, and the heat flow with its probabilistic aspects. In this book, all three themes are brought together coherently under the perspective of Dirichlet forms, providing a powerful and unified approach. The book gives a complete account of key topics of infinite graphs, such as essential self-adjointness, Markov uniqueness, spectral estimates, recurrence, and stochastic completeness. A major feature of the book is the use of intrinsic metrics to capture the geometry of graphs. As for manifolds, Dirichlet forms in the graph setting offer a structural understanding of the interaction between spectral theory, geometry and probability. For graphs, however, the presentation is much more accessible and inviting thanks to the discreteness of the underlying space, laying bare the main concepts while preserving the deep insights of the manifold case. Graphs and Discrete Dirichlet Spaces offers a comprehensive treatment of the spectral geometry of graphs, from the very basics to deep and thorough explorations of advanced topics. With modest prerequisites, the book can serve as a basis for a number of topics courses, starting at the undergraduate level.
This book explores official statistics and their social function in modern societies. Digitisation and globalisation are creating completely new opportunities and risks, a context in which facts (can) play an enormously important part if they are produced with a quality that makes them credible and purpose-specific. In order for this to actually happen, official statistics must continue to actively pursue the modernisation of their working methods. This book is not about the technical and methodological challenges associated with digitisation and globalisation; rather, it focuses on statistical sociology, which scientifically deals with the peculiarities and pitfalls of governing-by-numbers, and assigns statistics a suitable position in the future informational ecosystem. Further, the book provides a comprehensive overview of modern issues in official statistics, embodied in a historical and conceptual framework that endows it with different and innovative perspectives. Central to this work is the quality of statistical information provided by official statistics. The implementation of the UN Sustainable Development Goals in the form of indicators is another driving force in the search for answers, and is addressed here. This book will be of interest to a broad readership. The topics of sociology, epistemology, statistical history and the management of production processes, which are important for official statistics and their role in social decision-making processes, are generally not dealt with in statistics books. The book is primarily intended for official statisticians, but researchers and advanced students in statistics, economics, sociology and the political sciences will find the book equally stimulating. Last but not least, it offers a valuable source of reflection for policymakers and stakeholders.
Quantum mechanics is arguably one of the most successful scientific theories ever, and its applications to chemistry, optics, and information theory are innumerable. This book provides the reader with a rigorous treatment of the main mathematical tools from harmonic analysis which play an essential role in the modern formulation of quantum mechanics. This allows us at the same time to suggest some new ideas and methods, with a special focus on topics such as the Wigner phase space formalism and its applications to the theory of the density operator and its entanglement properties. This book can be used profitably by advanced undergraduate students in mathematics and physics, as well as by established researchers.
This book has been written to provide the basic statistical techniques required by students of Engineering, Computer Science, Business Studies and Medicine for the statistical work in their fields, which involves probability distributions of a single random variable. The concepts are presented systematically, and exercises are given at the end of each chapter. The book also aims to provide a sound basis for students of Mathematics, Statistics, Actuarial Science, Financial Engineering, Biostatistics, Operational Research, Physical Science and Research Methodology who intend to pursue further study in probability and statistics at the graduate level.
This is an introductory statistics book designed to provide scientists with the practical information needed to apply the most common statistical tests to laboratory research data. The book is designed to be practical and applicable, so only minimal space is devoted to theory or equations. Emphasis is placed on the underlying principles for effective data analysis, and the statistical tests are surveyed. It is of special value for scientists who have access to Minitab software. Examples are provided for all the statistical tests, with the interpretation of their results presented using Minitab (similar to the results from any common software package). The book is specifically designed to contribute to the AAPS series on advances in the pharmaceutical sciences. It benefits professional scientists or graduate students who have not had a formal statistics class, who had bad experiences in such classes, or who simply fear or don't understand statistics. Chapter 1 focuses on terminology and the essential elements of statistical testing. Statistics is often complicated by synonyms, and this chapter establishes the terms used in the book and how the rudiments interact to create statistical tests. Chapter 2 discusses descriptive statistics that are used to organize and summarize sample results. Chapter 3 discusses the basic assumptions of probability, the characteristics of a normal distribution, and alternative approaches for non-normal distributions, and introduces the topic of making inferences about a larger population based on a small sample from that population. Chapter 4 discusses hypothesis testing, where computer output is interpreted and decisions are made regarding statistical significance. This chapter also deals with the determination of appropriate sample sizes. The next three chapters focus on tests that make decisions about a population based on a small subset of information. Chapter 5 looks at statistical tests that evaluate whether a significant difference exists. In Chapter 6 the tests try to determine the extent and importance of relationships. In contrast to the fifth chapter, Chapter 7 presents tests that evaluate equivalence, rather than the difference, between the levels being tested. The last chapter deals with potential outliers or aberrant values and how to determine statistically whether they should be removed from the sample data. Each statistical test presented includes an example problem with the resultant software output and an explanation of how to interpret the results. Minimal time is spent on the mathematical calculations or theory. For those interested in the associated equations, supplemental figures are presented for each test with the respective formulas. In addition, Appendix D presents the equations and proof for every output result for the various examples. Examples and results from the appropriate statistical tests are displayed using Minitab 18. In addition to the results, the required steps to analyze data using Minitab are presented with the examples for those having access to this software. Numerous other software packages are available, including data analysis with Excel.
Starting with the basic linear model where the design and covariance matrices are of full rank, this book demonstrates how the same statistical ideas can be used to explore the more general linear model with rank-deficient design and/or covariance matrices. The unified treatment presented here provides a clearer understanding of the general linear model from a statistical perspective, thus avoiding the complex matrix-algebraic arguments that are often used in the rank-deficient case. Elegant geometric arguments are used as needed. The book has a very broad coverage, from illustrative practical examples in Regression and Analysis of Variance alongside their implementation using R, to providing comprehensive theory of the general linear model with 181 worked-out examples, 227 exercises with solutions, 152 exercises without solutions (so that they may be used as assignments in a course), and 320 up-to-date references. This completely updated and new edition of Linear Models: An Integrated Approach includes the following features:
This book shows how to decompose high-dimensional microarrays into small subspaces (Small Matryoshkas, SMs), statistically analyze them, and perform cancer gene diagnosis. The information is useful for genetic experts, anyone who analyzes genetic data, and students to use as practical textbooks. Discriminant analysis is the best approach for microarrays consisting of normal and cancer classes. Microarrays are linearly separable data (LSD, Fact 3). However, because most linear discriminant functions (LDFs) cannot discriminate LSD theoretically and their error rates are high, no one had discovered Fact 3 until now. Hard-margin SVM (H-SVM) and Revised IP-OLDF (RIP) can find Fact 3 easily. LSD has the Matryoshka structure and is easily decomposed into many SMs (Fact 4). Because all SMs are small samples and LSD, statistical methods analyze SMs easily. However, useful results cannot be obtained. On the other hand, H-SVM and RIP can discriminate the two classes in each SM entirely. RatioSV is the ratio of the SV distance to the discriminant range. The maximum RatioSV across six microarrays is over 11.67%. This fact shows that SV separates the two classes by a window width of 11.67%. Such easy discrimination has been unresolved since 1970. The reason is revealed by the facts presented here, so this book can be read and enjoyed like a mystery novel. Many studies point out that it is difficult to separate signal and noise in a high-dimensional gene space. However, the definition of the signal is not clear. Convincing evidence is presented that LSD is a signal. Statistical analysis of the genes contained in an SM cannot provide useful information, but it shows that the discriminant score (DS) produced by RIP or H-SVM is easily LSD. For example, the Alon microarray has 2,000 genes, which can be divided into 66 SMs. If the 66 DSs are used as variables, the result is a 66-dimensional data set. These signal data can be analyzed to find malignancy indicators by principal component analysis and cluster analysis.
Modelling trends and cycles in economic time series has a long history, with the use of linear trends and moving averages forming the basic tool kit of economists until the 1970s. Several developments in econometrics then led to an overhaul of the techniques used to extract trends and cycles from time series. In this second edition, Terence Mills expands on the research in the area of trends and cycles over the last (almost) two decades, to highlight to students and researchers the variety of techniques and the considerations that underpin their choice for modelling trends and cycles.
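As a hedged illustration of that pre-1970s basic tool kit (this is not an example from the book), a centred moving average can be used to split a series into trend and cycle. The simulated series, window length and noise level below are assumptions of the sketch.

```python
import numpy as np

# Invented series: linear trend + 12-period cycle + noise.
rng = np.random.default_rng(1)
t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.3 * rng.normal(size=120)

window = 13                     # odd window so the average is centred
kernel = np.ones(window) / window
# "valid" convolution loses (window-1)/2 = 6 points at each end.
trend = np.convolve(series, kernel, mode="valid")
cycle = series[6:-6] - trend    # deviations from trend = the "cycle"

print(trend[:3], cycle[:3])
```

The 13-point window here spans one full 12-period cycle, so the sinusoidal component averages out of the trend; the limitations of exactly this kind of filter (end-point loss, induced spurious cycles) are what motivated the later econometric developments the book surveys.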
This thesis presents a revolutionary technique for modelling the dynamics of a quantum system that is strongly coupled to its immediate environment. This is a challenging but timely problem. In particular it is relevant for modelling decoherence in devices such as quantum information processors, and how quantum information moves between spatially separated parts of a quantum system. The key feature of this work is a novel way to represent the dynamics of general open quantum systems as tensor networks, a result which has connections with the Feynman operator calculus and process tensor approaches to quantum mechanics. The tensor network methodology developed here has proven to be extremely powerful: for many situations it may be the most efficient way of calculating open quantum dynamics. This work abounds with new ideas and invention, and is likely to have a very significant impact on future generations of physicists.
This book presents an introduction to linear univariate and multivariate time series analysis, providing brief theoretical insights into each topic, and from the beginning illustrating the theory with software examples. As such, it quickly introduces readers to the peculiarities of each subject from both the theoretical and the practical points of view. It also includes numerous examples and real-world applications that demonstrate how to handle different types of time series data. The associated software package, SSMMATLAB, is written in MATLAB and also runs on the free OCTAVE platform. The book focuses on linear time series models using a state space approach, with the Kalman filter and smoother as the main tools for model estimation, prediction and signal extraction. A chapter on state space models describes these tools and provides examples of their use with general state space models. Other topics discussed in the book include ARIMA, transfer function, and structural models, as well as signal extraction using the canonical decomposition in the univariate case, and VAR, VARMA, cointegrated VARMA, VARX, VARMAX, and multivariate structural models in the multivariate case. It also addresses spectral analysis, the use of fixed filters in a model-based approach, and automatic model identification procedures for ARIMA and transfer function models in the presence of outliers, interventions, complex seasonal patterns and other effects like Easter, trading day, etc. This book is intended for both students and researchers in various fields dealing with time series. The software provides numerous automatic procedures to handle common practical situations, but at the same time, readers with programming skills can write their own programs to deal with specific problems.
Although the theoretical introduction to each topic is kept to a minimum, readers can consult the companion book 'Multivariate Time Series With Linear State Space Structure', by the same author, if they require more details.
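The Kalman filter machinery at the heart of the state space approach can be sketched, for the simplest case, with a local level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t. This is an illustrative Python sketch under assumed variances, not code from SSMMATLAB; the function name and parameter values are invented for the example.

```python
import numpy as np

def local_level_filter(y, var_eps=1.0, var_eta=0.1, a0=0.0, p0=1e6):
    """Kalman filter for the local level model (diffuse initialisation)."""
    a, p = a0, p0                  # state estimate and its variance
    filtered = []
    for obs in y:
        p = p + var_eta            # predict: random-walk state, variance grows
        f = p + var_eps            # variance of the one-step forecast error
        k = p / f                  # Kalman gain
        a = a + k * (obs - a)      # update state with the new observation
        p = (1 - k) * p            # update state variance
        filtered.append(a)
    return np.array(filtered)

# Simulated data: a random-walk level observed with unit-variance noise.
rng = np.random.default_rng(2)
level = np.cumsum(0.3 * rng.normal(size=200))
y = level + rng.normal(size=200)
est = local_level_filter(y)
print(f"filter RMSE: {np.sqrt(np.mean((est - level) ** 2)):.3f}")
```

The filtered estimate should track the unobserved level more closely than the raw observations do; the smoother, model estimation and the multivariate extensions the book covers all build on this same predict/update recursion.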
This monograph has arisen out of a number of attempts, spanning almost five decades, to understand how one might examine the evolution of densities in systems whose dynamics are described by differential delay equations. Though the authors have no definitive solution to the problem, they offer this contribution in an attempt to define the problem as they see it, and to sketch out several obvious attempts that have been suggested to solve it and which seem to have failed. They hope that, by making this work available to the general mathematical community, they will inspire others to consider, and hopefully solve, the problem. Serious attempts have been made by all of the authors over the years, and they have made reference to these where appropriate.
This volume is a tribute to Professor Dietrich von Rosen on the occasion of his 65th birthday. It contains a collection of twenty original papers. The contents of the papers evolve around multivariate analysis and random matrices with topics such as high-dimensional analysis, goodness-of-fit measures, variable selection and information criteria, inference of covariance structures, the Wishart distribution and growth curve models.
The Super Bowl is the most watched sporting event in the United States. But what does participating in this event mean for the players, the halftime performers, and the cities that host the games? Is there an economic benefit from being a part of the Super Bowl and, if so, how much? This Palgrave Pivot examines the economic consequences for those who participate in the Super Bowl. The book fills in gaps in the literature by examining the benefits and costs of being involved in the game. Previously, the literature has largely ignored the effect the game has had on the careers of the players, particularly the stars of the game. The economic benefit of being the halftime performer has not been considered in the literature at all. While there have been past studies about the economic impact on the cities that host the game, this book will expand on previous research and update it with new data.
This book discusses various statistical models and their implications for developing landslide susceptibility and risk zonation maps. It also presents a range of statistical techniques, i.e. bivariate and multivariate statistical models and machine learning models, as well as multi-criteria evaluation, pseudo-quantitative and probabilistic approaches. As such, it provides methods and techniques for RS & GIS-based models in spatial distribution for all those engaged in the preparation and development of projects, research, training courses and postgraduate studies. Further, the book offers a valuable resource for students using RS & GIS techniques in their studies.
This book describes various mathematical models that can be used to better understand the spread of novel Coronavirus Disease 2019 (COVID-19) and help to fight against various challenges that have developed due to COVID-19. The book presents a statistical analysis of the data related to the COVID-19 outbreak, especially the infection speed, death and fatality rates in major countries and some states of India like Gujarat, Maharashtra, Madhya Pradesh and Delhi. Each chapter, with its distinctive mathematical model, also includes numerical results to support the efficacy of these models. Each model described in this book provides its unique prediction policy to reduce the spread of COVID-19. This book is beneficial for practitioners, educators, researchers and policymakers handling the crisis of the COVID-19 pandemic.
This book explores nonparametric statistical process control. It provides an up-to-date overview of nonparametric Shewhart-type univariate control charts, and reviews the recent literature on nonparametric charts, particularly multivariate schemes. Further, it discusses observations tied to the monitored population quantile, focusing on the Shewhart sign chart. The book also addresses the issues that arise when normality and independence are assumed in practice while a process is statistically monitored, and examines in detail change-point analysis-based distribution-free control charts designed for Phase I applications. Moreover, it introduces six distribution-free EWMA schemes for simultaneously monitoring the location and scale parameters of a univariate continuous process, and establishes two nonparametric Shewhart-type control charts based on order statistics with signaling runs-type rules. Lastly, the book proposes a novel and effective method for early disease detection.
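The distribution-free idea behind a Shewhart-type sign chart can be illustrated with a short sketch (this is an illustration of the general principle, not the book's exact formulation; the function name, target median and data below are invented). For each subgroup of size n, the plotted statistic is the number of observations above the in-control median; under control it is Binomial(n, 0.5) regardless of the underlying distribution, so control limits come from binomial quantiles with no normality assumption.

```python
import numpy as np
from math import comb

def sign_chart_limits(n, alpha=0.01):
    """Largest symmetric binomial cut-off c with 2*P(T <= c) <= alpha
    under T ~ Binomial(n, 0.5). Signal if T <= c or T >= n - c."""
    pmf = [comb(n, k) / 2 ** n for k in range(n + 1)]
    c = 0
    while 2 * sum(pmf[: c + 2]) <= alpha:
        c += 1
    return c, n - c

rng = np.random.default_rng(3)
theta0 = 5.0                       # assumed in-control median
n = 10
lo, hi = sign_chart_limits(n)

# In-control subgroup: exponential with median theta0 (scale = 5/ln 2),
# deliberately skewed to show the chart needs no normality.
in_control = rng.exponential(scale=5 / np.log(2), size=n)
t_in = int(np.sum(in_control > theta0))

# Out-of-control subgroup: median shifted well above theta0.
shifted = theta0 + 3 + rng.exponential(size=n)
t_out = int(np.sum(shifted > theta0))
print(f"limits: signal if T <= {lo} or T >= {hi}; T_in={t_in}, T_out={t_out}")
```

Because every value in the shifted subgroup exceeds the target median, its count hits the upper limit and signals, while the skewness of the in-control data is irrelevant to the chart's false-alarm rate.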
This textbook introduces the use of Python programming for exploring and modelling data in the field of Earth Sciences. It takes readers from their very first steps with Python, such as setting up the environment and writing the first lines of code, to proficient use in visualizing, analyzing, and modelling data in the field of Earth Science. Each chapter contains explanatory examples of code, and each script is commented in detail. The book is intended for beginners in Python programming, and it can be used in teaching courses at the master's or PhD level. Early-career and experienced researchers who would like to start learning Python programming for the solution of geological problems will also benefit from reading the book.
This book highlights interdisciplinary insights, latest research results, and technological trends in Business Intelligence and Modelling in fields such as: Business Intelligence, Business Transformation, Knowledge Dissemination & Implementation, Modeling for Logistics, Business Informatics, Business Model Innovation, Simulation Modelling, E-Business, Enterprise & Conceptual Modelling, etc. The book is divided into eight sections, grouping emerging marketing technologies together in a close examination of practices, problems and trends. The chapters have been written by researchers and practitioners that demonstrate a special orientation in Strategic Marketing and Business Intelligence. This volume shares their recent contributions to the field and showcases their exchange of insights.
This book is a step-by-step guide for instructors on how to teach a psychology research methods course at the undergraduate or graduate level. It provides various approaches for teaching the course, including lecture topics, difficult concepts for students, sample labs, test questions, syllabus guides and policies, as well as a detailed description of the requirements for the final experimental paper. This book is also supplemented with anecdotes from the author's years of experience teaching research methods classes. Chapters in this book include information on how to deliver more effective lectures, issues you may encounter with students, examples of weekly labs, tips for teaching research methods online, and much more. This book is targeted towards the undergraduate or graduate professor who has either not yet taught research methods or who wants to improve his or her course. Using step-by-step directions, any teacher will be able to follow the guidelines found in this book that will help them succeed. How to Teach a Course in Research Methods for Psychology Students is a valuable resource for anyone teaching a quantitative research methods course at the college or university level.