Box and Jenkins (1970) popularized the idea of obtaining a stationary time series by differencing a given, possibly nonstationary, time series. Numerous time series in economics are found to have this property. Subsequently, Granger and Joyeux (1980) and Hosking (1981) found examples of time series whose fractional difference becomes a short memory process, in particular a white noise, while the initial series has unbounded spectral density at the origin, i.e. exhibits long memory. Further examples of long memory data were found in hydrology and in network traffic, while in finance the phenomenon of strong dependence was established by the dramatic empirical success of long memory processes in modeling the volatility of asset prices and power transforms of stock market returns. At present there is a need for a text from which an interested reader can methodically learn the basic asymptotic theory and techniques found useful in the analysis of statistical inference procedures for long memory processes. This text makes an attempt in this direction. The authors provide, in a concise style, a graduate-level text summarizing theoretical developments for both short and long memory processes and their applications to statistics. The book also contains some real data applications and mentions some unsolved inference problems for interested researchers in the field.
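The differencing idea described above can be illustrated with a minimal sketch (not taken from the book): a random walk is nonstationary, but its first difference recovers the underlying short-memory white noise.

```python
import numpy as np

# Illustrative sketch of Box-Jenkins differencing (assumed example, not
# the book's own code): a random walk is nonstationary, but first
# differencing recovers the underlying white noise.
rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)   # short-memory white noise
walk = np.cumsum(noise)             # nonstationary random walk

diff = np.diff(walk)                # first difference of the walk

# The first difference reproduces the driving noise (up to rounding).
assert np.allclose(diff, noise[1:])
```

Fractional differencing, as in Granger-Joyeux and Hosking, generalizes this by raising the difference operator to a non-integer power, which is what allows long memory series to be reduced to short memory ones.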
This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. The book is written for a statistically informed audience, and can also easily serve as a textbook in a graduate course in departments such as statistics, psychology, or biology. In particular, the audience for the book is teachers of statistics, practicing statisticians, applied statisticians, and quantitative graduate students in fields such as medical research, epidemiology, public health, and biology.
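The defining property above, that permutation methods depend only on the data at hand, can be sketched with a minimal two-sample permutation test (an assumed illustration, not the book's own code): the null distribution is built by relabeling the observed values rather than by appealing to a theoretical distribution such as the t distribution.

```python
import numpy as np

# Minimal two-sample permutation test on the difference of means
# (hypothetical helper for illustration only).
def permutation_pvalue(x, y, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(np.mean(x) - np.mean(y))
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)  # relabel the observations
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        count += stat >= observed
    # Add-one correction keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)

x = np.array([4.1, 5.0, 6.2, 5.8, 4.9])
y = np.array([3.0, 2.8, 3.9, 3.5, 3.1])
p = permutation_pvalue(x, y)
```

With these two well-separated groups the permutation p-value is small; no normality or equal-variance assumption enters anywhere, which is exactly the contrast with classical methods that the book develops.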
This is the first book to compare eight LDFs on different types of datasets: Fisher's iris data, medical data with collinearities, Swiss banknote data that is linearly separable data (LSD), student pass/fail determination using student attributes, 18 pass/fail determinations using exam scores, Japanese automobile data, and six microarray datasets (the datasets) that are LSD. We developed the 100-fold cross-validation for the small sample method (Method 1) instead of the LOO method. We proposed a simple model selection procedure to choose the best model having minimum M2, and Revised IP-OLDF based on the MNM criterion was found to be better than the other M2s on the above datasets. We compared two statistical LDFs and six MP-based LDFs: Fisher's LDF, logistic regression, three SVMs, Revised IP-OLDF, and another two OLDFs. Only a hard-margin SVM (H-SVM) and Revised IP-OLDF could discriminate LSD theoretically (Problem 2). We solved the defect of the generalized inverse matrices (Problem 3). For more than 10 years, many researchers have struggled to analyze the microarray dataset that is LSD (Problem 5). If we call the linearly separable model "Matroska," the dataset consists of numerous smaller Matroskas within it. We developed the Matroska feature selection method (Method 2). It finds the surprising structure of the dataset: the disjoint union of several small Matroskas. Our theory and methods reveal new facts of gene analysis.
Inference in finite sampling is a new development that is essential to the field of sampling. In addition to covering the majority of well-known sampling plans and procedures, this study covers the important topics of the superpopulation approach, randomized response, non-response, and resampling techniques. The authors also provide extensive sets of problems ranging in difficulty, making this book beneficial to students.
For the first two editions of the book Probability (GTM 95), each chapter included a comprehensive and diverse set of relevant exercises. While the work on the third edition was still in progress, it was decided that it would be more appropriate to publish a separate book comprising all of the exercises from the previous editions, in addition to many new exercises. Most of the material in this book consists of exercises created by Shiryaev, collected and compiled over the course of many years while working on many interesting topics. Many of the exercises resulted from discussions that took place during special seminars for graduate and undergraduate students. Many of the exercises included in the book contain helpful hints and other relevant information. Lastly, the author has included an appendix at the end of the book that contains a summary of the main results, notation, and terminology from probability theory that are used throughout the present book. This appendix also contains additional material from combinatorics, potential theory, and Markov chains, which is not covered in the book, but is nevertheless needed for many of the exercises included here.
This book provides a generalised approach to fractal dimension theory from the standpoint of asymmetric topology by employing the concept of a fractal structure. The fractal dimension is the main invariant of a fractal set, and provides useful information regarding the irregularities it presents when examined at a suitable level of detail. New theoretical models for calculating the fractal dimension of any subset with respect to a fractal structure are posed to generalise both the Hausdorff and box-counting dimensions. Some specific results for self-similar sets are also proved. Unlike classical fractal dimensions, these new models can be used with empirical applications of fractal dimension including non-Euclidean contexts. In addition, the book applies these fractal dimensions to explore long-memory in financial markets. In particular, novel results linking both fractal dimension and the Hurst exponent are provided. As such, the book provides a number of algorithms for properly calculating the self-similarity exponent of a wide range of processes, including (fractional) Brownian motion and Levy stable processes. The algorithms also make it possible to analyse long-memory in real stocks and international indexes. This book is addressed to those researchers interested in fractal geometry, self-similarity patterns, and computational applications involving fractal dimension and Hurst exponent.
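The link between self-similarity and long memory mentioned above can be illustrated with a simple estimator (an assumed sketch, not one of the book's own algorithms): for a self-similar path, the variance of increments scales as Var[X(t+lag) - X(t)] ~ lag^(2H), so the Hurst exponent H can be read off a log-log regression.

```python
import numpy as np

# Hypothetical helper: estimate the self-similarity (Hurst) exponent of a
# path from the scaling of the variance of its increments with the lag.
def hurst_variance(path, lags=range(2, 20)):
    lags = np.asarray(list(lags))
    variances = [np.var(path[lag:] - path[:-lag]) for lag in lags]
    # Slope of log Var against log lag equals 2H for a self-similar path.
    slope, _intercept = np.polyfit(np.log(lags), np.log(variances), 1)
    return slope / 2.0

rng = np.random.default_rng(42)
bm = np.cumsum(rng.standard_normal(100_000))  # ordinary Brownian motion
H = hurst_variance(bm)
# For Brownian motion the true exponent is H = 0.5; long-memory processes
# such as fractional Brownian motion with H > 0.5 would score higher.
```

Estimates noticeably above 0.5 on real stock or index data are the kind of long-memory signal the book's algorithms are designed to detect more carefully.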
Epidemiologic Studies in Cancer Prevention and Screening is the first comprehensive overview of the evidence base for both cancer prevention and screening. This book is directed to the many professionals in government, academia, public health, and health care who need up-to-date information on the potential for reducing the impact of cancer, including physicians, nurses, epidemiologists, and research scientists. The main aim of the book is to provide a realistic appraisal of the evidence for both cancer prevention and cancer screening. In addition, the book provides an accounting of the extent to which programs based on available knowledge have impacted populations. It does this through: (1) presentation of a rigorous and realistic evaluation of the evidence for population-based interventions in prevention of and screening for cancer, with particular relevance to those believed to be applicable now or on the cusp of application; (2) evaluation of the relative contributions of prevention and screening; and (3) discussion of how, within the health systems with which the authors are familiar, prevention and screening for cancer can be enhanced. An overview of the evidence base for cancer prevention and screening, as presented in Epidemiologic Studies in Cancer Prevention and Screening, is critically important given current debates within the scientific community. Of the five components of cancer control (prevention, early detection including screening, treatment, rehabilitation, and palliative care), prevention is regarded as the most important. Yet the knowledge available to prevent many cancers is incomplete, and even if we know the main causal factors for a cancer, we often lack the understanding to put this knowledge into effect. Further, with the long natural history of most cancers, it could take many years to make an appreciable impact upon the incidence of the cancer.
Because of these facts, many have come to believe that screening has the most potential for reduction of the burden of cancer. Yet, through trying to apply the knowledge gained on screening for cancer, the scientific community has recognized that screening can have major disadvantages and achieve little at substantial cost. This reduces the resources that are potentially available both for prevention and for treatment.
Reliability and Safety of Complex Technical Systems and Processes offers a comprehensive approach to the analysis, identification, evaluation, prediction, and optimization of the operation, reliability, and safety of complex technical systems. Its main emphasis is on multistate systems with ageing components, changes to their structure, and changes to their components' reliability and safety parameters during the operation processes. Reliability and Safety of Complex Technical Systems and Processes presents integrated models for the reliability, availability, and safety of complex non-repairable and repairable multistate technical systems, with reference to their operation processes and their practical applications to real industrial systems. The authors consider variables in different operation states, reliability and safety structures, and the reliability and safety parameters of components, and they also suggest a cost analysis for complex technical systems. Researchers and industry practitioners will find information on a wide range of complex technical systems in Reliability and Safety of Complex Technical Systems and Processes. It may prove an easy-to-use guide to reliability and safety evaluations of real complex technical systems, both during their operation and at the design stage.
For courses in Probability and Random Processes. Probability, Statistics, and Random Processes for Engineers, 4e is a useful text for electrical and computer engineers. This book is a comprehensive treatment of probability and random processes that, more than any other available source, combines rigor with accessibility. Beginning with the fundamentals of probability theory and requiring only college-level calculus, the book develops all the tools needed to understand more advanced topics such as random sequences, continuous-time random processes, and statistical signal processing. The book progresses at a leisurely pace, never assuming more knowledge than contained in the material already covered. Rigor is established by developing all results from the basic axioms and by carefully defining and discussing such advanced notions as stochastic convergence, stochastic integrals, and the resolution of stochastic processes.
What are the current trends in housing? Is my planned project commercially viable? What should my marketing and advertisement strategies be? These are just some of the questions real estate agents, landlords, and developers ask researchers to answer. But to find the answers, researchers are faced with a wide variety of methods that measure housing preferences and choices. To select and evaluate a valid research method, one needs a well-structured overview of the methods used in housing preference and housing choice research. This comprehensive introduction to the field offers just such an overview. It discusses and compares numerous methods, detailing the potential limitations of each one, and it reaches beyond methodology, illustrating how thoughtful consideration of methods and techniques in research can help researchers and other professionals deliver products and services that are more in line with residents' needs.
This book presents powerful techniques for solving global optimization problems on manifolds by means of evolutionary algorithms, and shows in practice how these techniques can be applied to solve real-world problems. It describes recent findings and well-known key facts in general and differential topology, revisiting them all in the context of application to current optimization problems. Special emphasis is put on game theory problems. Here, these problems are reformulated as constrained global optimization tasks and solved with the help of Fuzzy ASA. In addition, more abstract examples, including minimizations of well-known functions, are also included. Although the Fuzzy ASA approach has been chosen as the main optimizing paradigm, the book suggests that other metaheuristic methods could be used as well. Some of them are introduced, together with their advantages and disadvantages. Readers should possess some knowledge of linear algebra, and of basic concepts of numerical analysis and probability theory. Many necessary definitions and fundamental results are provided, with the formal mathematical requirements limited to a minimum, while the focus is kept firmly on continuous problems. The book offers a valuable resource for students, researchers and practitioners. It is suitable for university courses on optimization and for self-study.
This book presents the breadth and diversity of empirical and practical work done on statistics education around the world. A wide range of methods is used to respond to the research questions that form its base: case studies of single students or teachers aimed at understanding reasoning processes, as well as large-scale experimental studies attempting to generalize trends in the teaching and learning of statistics. Various epistemological stances are described and utilized. The teaching and learning of statistics is presented in multiple contexts in the book. These include designed settings for young children, students in formal schooling, tertiary-level students, vocational schools, and teacher professional development. Diversity is also evident in the choices of what to teach (curriculum), when to teach (learning trajectory), how to teach (pedagogy), how to demonstrate evidence of learning (assessment), and what challenges teachers and students face when they solve statistical problems (reasoning and thinking).
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. The previous editions of NICSO were held in Granada, Spain (2006), Acireale, Italy (2007), Tenerife, Spain (2008), and again in Granada in 2010. NICSO has evolved into one of the most interesting and highest-profile workshops in nature-inspired computing. NICSO 2011 offered an inspiring environment for debating state-of-the-art ideas and techniques in nature-inspired cooperative strategies, and a comprehensive picture of recent applications of these ideas and techniques. The topics covered by this volume include Swarm Intelligence (such as Ant and Bee Colony Optimization), Genetic Algorithms, Multiagent Systems, Coevolution and Cooperation strategies, Adversarial Models, Synergic Building Blocks, Complex Networks, Social Impact Models, Evolutionary Design, Self Organized Criticality, Evolving Systems, Cellular Automata, Hybrid Algorithms, and Membrane Computing (P-Systems).
Strategy and Statistics in Clinical Trials deals with the research process and the role of statistics within it. The book offers real-life case studies and provides a practical, how-to guide to biomedical R&D. It describes the statistical building blocks and concepts of clinical trials and promotes effective cooperation between statisticians and the other key parties. The discussion is organized around 15 chapters. After providing an overview of clinical development and statistics, the book explores questions that arise when planning clinical trials, along with the attributes of medical products. It then explains how to set research objectives and goes on to consider statistical thinking, estimation, testing procedures, statistical significance, and explanation and prediction. The rest of the book focuses on exploratory and confirmatory clinical trials; hypothesis testing and multiplicity; elements of clinical trial design; choosing trial endpoints; and determination of sample size. This book is for all individuals engaged in clinical research who are interested in a better understanding of statistics, including professional clinical researchers, professors, physicians, and laboratory researchers. It will also be of interest to corporate and government laboratories, clinical research nurses, members of the allied health professions, and post-doctoral and graduate students.
This book introduces the ade4 package for R, which provides multivariate methods for the analysis of ecological data. It is implemented around the mathematical concept of the duality diagram, and provides a unified framework for multivariate analysis. The authors offer a detailed presentation of the theoretical framework of the duality diagram and also of its application to real-world ecological problems. These two goals may seem contradictory, as they concern two separate groups of scientists, namely statisticians and ecologists. However, statistical ecology has become a scientific discipline of its own, and the good use of multivariate data analysis methods by ecologists implies a fair knowledge of the mathematical properties of these methods. The organization of the book is based on ecological questions, but these questions correspond to particular classes of data analysis methods. The first chapters present both usual and multiway data analysis methods. Further chapters are dedicated, for example, to the analysis of spatial data, of phylogenetic structures, and of biodiversity patterns. One chapter deals with multivariate data analysis graphs. In each chapter, the basic mathematical definitions of the methods and the outputs of the R functions available in ade4 are detailed in two separate boxes. The text of the book can be read independently of these boxes. The book thus offers the opportunity to find information about the ecological situation from which a question arises, alongside the mathematical properties of the methods that can be applied to answer it, as well as the details of software outputs. Each example and all the graphs in this book come with executable R code.
The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop the underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures to Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial statistics. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models, in both univariate and multivariate settings. With the development of R packages in these areas, these procedures are now easy for readers to compute and implement. This book developed from the International Conference on Robust Rank-Based and Nonparametric Methods, held at Western Michigan University in April 2015.
This book is devoted to Professor Jürgen Lehn, who passed away on September 29, 2008, at the age of 67. It contains invited papers that were presented at the Workshop on Recent Developments in Applied Probability and Statistics Dedicated to the Memory of Professor Jürgen Lehn, Middle East Technical University (METU), Ankara, April 23-24, 2009, which was jointly organized by the Technische Universität Darmstadt (TUD) and METU. The papers present surveys on recent developments in the area of applied probability and statistics. In addition, papers from the Panel Discussion: Impact of Mathematics in Science, Technology and Economics are included. Jürgen Lehn was born on the 28th of April, 1941 in Karlsruhe. From 1961 to 1968 he studied mathematics in Freiburg and Karlsruhe, and obtained a Diploma in Mathematics from the University of Karlsruhe in 1968. He obtained his Ph.D. at the University of Regensburg in 1972, and his Habilitation at the University of Karlsruhe in 1978. Later in 1978, he became a C3 level professor of Mathematical Statistics at the University of Marburg. In 1980 he was promoted to a C4 level professorship in mathematics at the TUD, where he was a researcher until his death.
This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.
This volume is characterized by the systematic study of the existence, uniqueness, and properties of solutions to stochastic differential equations in infinite dimensions arising from practical problems. It is intended for graduate students and for pure and applied mathematicians, physicists, engineers, and professionals working with mathematical models of finance. Major methods include compactness, coercivity, and monotonicity, in a variety of set-ups. The authors emphasize the fundamental work of Gikhman and Skorokhod on the existence and uniqueness of solutions to stochastic differential equations and present its extension to infinite dimensions. They also generalize the work of Khasminskii on stability and stationary distributions of solutions. New results, applications, and examples of stochastic partial differential equations are included. This clear and detailed presentation gives the basics of the infinite dimensional versions of the classic books of Gikhman and Skorokhod and of Khasminskii in one concise volume that covers the main topics in infinite dimensional stochastic PDEs. By appropriate selection of material, the volume can be adapted for a one- or two-semester course, and can prepare the reader for research in this rapidly expanding area.
This unified volume is a collection of invited chapters presenting recent developments in the field of data analysis, with applications to reliability and inference, data mining, bioinformatics, lifetime data, and neural networks. The book is a useful reference for graduate students, researchers, and practitioners in statistics, mathematics, engineering, economics, social science, bioengineering, and bioscience.
A comprehensive reference work for graduate-level teaching and research in empirical finance. The chapters cover a wide range of statistical and probabilistic methods applied to a variety of financial problems and are written by internationally renowned experts.
This book offers hands-on statistical tools for business professionals by focusing on the practical application of a single-equation regression. The authors discuss commonly applied econometric procedures, which are useful in building regression models for economic forecasting and supporting business decisions. A significant part of the book is devoted to traps and pitfalls in implementing regression analysis in real-world scenarios. The book consists of nine chapters, the final two of which are fully devoted to case studies. Today's business environment is characterised by a huge amount of economic data. Making successful business decisions under such data-abundant conditions requires objective analytical tools, which can help to identify and quantify multiple relationships between dozens of economic variables. Single-equation regression analysis, which is discussed in this book, is one such tool. The book offers a valuable guide and is relevant in various areas of economic and business analysis, including marketing, financial and operational management.
This book provides a basic grounding in the use of probability to model random financial phenomena of uncertainty, and is targeted at an advanced undergraduate and graduate level. It should appeal to finance students looking for a firm theoretical guide to the deep end of derivatives and investments. Bankers and finance professionals in the fields of investments, derivatives, and risk management should also find the book useful in bringing probability and finance together. The book contains applications of both discrete time theory and continuous time mathematics, and is extensive in scope. Distribution theory, conditional probability, and conditional expectation are covered comprehensively, and applications to modeling state space securities under market equilibrium are made. Martingale theory is studied, leading to consideration of equivalent martingale measures, fundamental theorems of asset pricing, change of numeraire and discounting, risk-adjusted and forward-neutral measures, minimal and maximal prices of contingent claims, Markovian models, and the existence of martingale measures preserving the Markov property. Discrete stochastic calculus and multiperiod models leading to no-arbitrage pricing of contingent claims are also to be found in this book, as well as the theory of Markov chains and appropriate applications in credit modeling. Measure-theoretic probability, moments, characteristic functions, inequalities, and central limit theorems are examined. The theory of risk aversion and utility, and ideas of risk premia are considered. Other application topics include optimal consumption and investment problems and interest rate theory.