This book presents various recently developed and traditional statistical techniques, which are increasingly being applied in social science research. The social sciences cover diverse phenomena arising in society, the economy and the environment, some of which are too complex to allow concrete statements; some cannot be defined by direct observations or measurements; some are culture- (or region-) specific, while others are generic and common. Statistics, being a scientific method - as distinct from a 'science' tied to any one type of phenomenon - is used to make inductive inferences regarding various phenomena. The book addresses both qualitative and quantitative research (a combination of which is essential in social science research) and offers valuable supplementary reading at an advanced level for researchers.
In this thesis, the author develops numerical techniques for tracking and characterising the convoluted nodal lines in three-dimensional space, analysing their geometry on the small scale, as well as their global fractality and topological complexity---including knotting---on the large scale. The work is highly visual, and illustrated with many beautiful diagrams revealing this unanticipated aspect of the physics of waves. Linear superpositions of waves create interference patterns, which means in some places they strengthen one another, while in others they completely cancel each other out. This latter phenomenon occurs on 'vortex lines' in three dimensions. In general wave superpositions modelling e.g. chaotic cavity modes, these vortex lines form dense tangles that have never been visualised on the large scale before, and cannot be analysed mathematically by any known techniques.
This book focuses on the application and development of information geometric methods in the analysis, classification and retrieval of images and signals. It provides introductory chapters to help those new to information geometry and applies the theory to several applications. This area has developed rapidly over recent years, propelled by the major theoretical developments in information geometry, efficient data and image acquisition and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and in its efficient computational implementation.
The research articles in this volume cover timely quantitative psychology topics, including new methods in item response theory, computerized adaptive testing, cognitive diagnostic modeling, and psychological scaling. Topics within general quantitative methodology include structural equation modeling, factor analysis, causal modeling, mediation, missing data methods, and longitudinal data analysis. These methods will appeal, in particular, to researchers in the social sciences. The 80th annual meeting took place in Beijing, China, between the 12th and 16th of July, 2015. Previous volumes to showcase work from the Psychometric Society's Meeting are New Developments in Quantitative Psychology: Presentations from the 77th Annual Psychometric Society Meeting (Springer, 2013), Quantitative Psychology Research: The 78th Annual Meeting of the Psychometric Society (Springer, 2015), and Quantitative Psychology Research: The 79th Annual Meeting of the Psychometric Society, Wisconsin, USA, 2014 (Springer, 2015).
This book proposes the formulation of an efficient methodology that estimates energy system uncertainty and predicts Remaining Useful Life (RUL) accurately, with significantly reduced RUL prediction uncertainty. Renewable and non-renewable sources of energy are being used to supply the demands of societies worldwide. These sources are mainly thermo-chemo-electro-mechanical systems that are subject to uncertainty in future loading conditions, material properties, process noise, and other design parameters. The book informs the reader of existing and new ideas that will be implemented in RUL prediction of energy systems in the future. It provides case studies, illustrations, graphs, and charts. Its chapters consider engineering, reliability, prognostics and health management, probabilistic multibody dynamical analysis, peridynamic and finite-element modelling, computer science, and mathematics.
The purpose of this book is to present a comprehensive account of the different definitions of stochastic integration for fBm, and to give applications of the resulting theory. Particular emphasis is placed on studying the relations between the different approaches. Readers are assumed to be familiar with probability theory and stochastic analysis, although the mathematical techniques used in the book are thoroughly exposed and some of the necessary prerequisites, such as classical white noise theory and fractional calculus, are recalled in the appendices. This book will be a valuable reference for graduate students and researchers in mathematics, biology, meteorology, physics, engineering and finance.
Hereditary systems (or systems with either delay or after-effects) are widely used to model processes in physics, mechanics, control, economics and biology. An important element in their study is their stability. Stability conditions for difference equations with delay can be obtained using a Lyapunov functional.
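The kind of stability condition referred to above can be seen numerically. Below is an illustrative sketch (not taken from the book): a scalar linear difference equation with delay, x[n+1] = a*x[n] + b*x[n-k], for which |a| + |b| < 1 is a standard sufficient condition for asymptotic stability, the sort of condition that Lyapunov functionals are used to derive.

```python
def simulate(a, b, k, x0=1.0, steps=200):
    """Iterate x[n+1] = a*x[n] + b*x[n-k] from a constant initial history x0."""
    x = [x0] * (k + 1)                  # initial history x[-k], ..., x[0]
    for _ in range(steps):
        x.append(a * x[-1] + b * x[-(k + 1)])
    return x

stable = simulate(a=0.5, b=0.3, k=2)    # |a| + |b| = 0.8 < 1
unstable = simulate(a=0.9, b=0.4, k=2)  # |a| + |b| = 1.3 > 1

print(abs(stable[-1]))     # tiny: the solution decays toward 0
print(abs(unstable[-1]))   # huge: the solution grows without bound
```

The parameter values are arbitrary choices for illustration; the point is only that the trajectory decays when the sufficient condition holds and can diverge when it fails.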
This book covers applied statistics for the social sciences with upper-level undergraduate students in mind. The chapters are based on lecture notes from an introductory statistics course the author has taught for a number of years. The book integrates statistics into the research process, with early chapters covering basic philosophical issues underpinning the process of scientific research. These include the concepts of deductive reasoning and the falsifiability of hypotheses, the development of a research question and hypotheses, and the process of data collection and measurement. Probability theory is then covered extensively with a focus on its role in laying the foundation for statistical reasoning and inference. After illustrating the Central Limit Theorem, later chapters address the key, basic statistical methods used in social science research, including various z and t tests and confidence intervals, nonparametric chi square tests, one-way analysis of variance, correlation, simple regression, and multiple regression, with a discussion of the key issues involved in thinking about causal processes. Concepts and topics are illustrated using both real and simulated data. The penultimate chapter presents rules and suggestions for the successful presentation of statistics in tabular and graphic formats, and the final chapter offers suggestions for subsequent reading and study.
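The Central Limit Theorem illustration mentioned above is the kind of exercise that can be sketched with simulated data in a few lines (an illustrative Python toy, not code from the book): means of samples drawn from a decidedly non-normal (uniform) population cluster around the population mean, with spread shrinking like 1/sqrt(n).

```python
import random
import statistics

random.seed(42)

def sample_means(n, reps=2000):
    """Draw `reps` samples of size n from Uniform(0, 1) and return their means."""
    return [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

means = sample_means(n=30)
print(round(statistics.fmean(means), 3))   # close to 0.5, the population mean
print(round(statistics.stdev(means), 3))   # close to sqrt(1/12)/sqrt(30), about 0.053
```

Plotting a histogram of `means` would show the familiar bell shape emerging even though the underlying population is flat.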
The Bayesian network is one of the most important architectures for representing and reasoning with multivariate probability distributions. When used in conjunction with specialized informatics tools, it enables a wide range of real-world applications. Probabilistic Methods for BioInformatics explains the application of probability and statistics, in particular Bayesian networks, to genetics. This book provides background material on probability, statistics, and genetics, and then moves on to discuss Bayesian networks and applications to bioinformatics. Rather than getting bogged down in proofs and algorithms, probabilistic methods used for biological information and Bayesian networks are explained in an accessible way using applications and case studies. The many useful applications of Bayesian networks that have been developed in the past 10 years are discussed, forming a review of all the significant work in a field that will arguably become the most prevalent method in biological data analysis.
This book presents the R software environment as a key tool for oceanographic computations and provides a rationale for using R over the more widely-used tools of the field such as MATLAB. Kelley provides a general introduction to R before introducing the 'oce' package. This package greatly simplifies oceanographic analysis by handling the details of discipline-specific file formats, calculations, and plots. Designed for real-world application and developed with open-source protocols, oce supports a broad range of practical work. Generic functions take care of general operations such as subsetting and plotting data, while specialized functions address more specific tasks such as tidal decomposition, hydrographic analysis, and ADCP coordinate transformation. In addition, the package makes it easy to document work, because its functions automatically update processing logs stored within its data objects. Kelley teaches key R functions using classic examples from the history of oceanography, specifically the work of Alfred Redfield, Gordon Riley, J. Tuzo Wilson, and Walter Munk. Acknowledging the pervasive popularity of MATLAB, the book provides advice to users who would like to switch to R. Including a suite of real-life applications and over 100 exercises and solutions, the treatment is ideal for oceanographers, technicians, and students who want to add R to their list of tools for oceanographic analysis.
This book provides a rigorous mathematical treatment of the non-linear stochastic filtering problem using modern methods. Particular emphasis is placed on the theoretical analysis of numerical methods for the solution of the filtering problem via particle methods. The book should provide sufficient background to enable study of the recent literature. While no prior knowledge of stochastic filtering is required, readers are assumed to be familiar with measure theory, probability theory and the basics of stochastic processes. Most of the technical results that are required are stated and proved in the appendices. Exercises and solutions are included.
With an emphasis on models and techniques, this textbook introduces many of the fundamental concepts of stochastic modeling that are now a vital component of almost every scientific investigation. In particular, emphasis is placed on laying the foundation for solving problems in reliability, insurance, finance, and credit risk. The material has been carefully selected to cover the basic concepts and techniques on each topic, making this an ideal introductory gateway to more advanced learning. With exercises and solutions to selected problems accompanying each chapter, this textbook is for a wide audience including advanced undergraduate and beginning-level graduate students, researchers, and practitioners in mathematics, statistics, engineering, and economics.
This second edition sees the light three years after the first one: too short a time to feel seriously concerned to redesign the entire book, but sufficient to be challenged by the prospect of sharpening our investigation on the working of econometric dynamic models and to be inclined to change the title of the new edition by dropping the "Topics in" of the former edition. After considerable soul searching we agreed to include several results related to topics already covered, as well as additional sections devoted to new and sophisticated techniques, which hinge mostly on the latest research work on linear matrix polynomials by the second author. This explains the growth of chapter one and the deeper insight into representation theorems in the last chapter of the book. The role of the second chapter is that of providing a bridge between the mathematical techniques in the backstage and the econometric profiles in the forefront of dynamic modelling. For this purpose, we decided to add a new section where the reader can find the stochastic rationale of vector autoregressive specifications in econometrics. The third (and last) chapter improves on that of the first edition by reaping the fruits of the thorough analytic equipment previously drawn up.
A wide variety of processes occur on multiple scales, either naturally or as a consequence of measurement. This book contains methodology for the analysis of data that arise from such multiscale processes. The book brings together a number of recent developments and makes them accessible to a wider audience. Taking a Bayesian approach allows for full accounting of uncertainty, and also addresses the delicate issue of uncertainty at multiple scales. The Bayesian approach also facilitates the use of knowledge from prior experience or data, and these methods can handle different amounts of prior knowledge at different scales, as often occurs in practice. The book is aimed at statisticians, applied mathematicians, and engineers working on problems dealing with multiscale processes in time and/or space, such as in engineering, finance, and environmetrics. The book will also be of interest to those working on multiscale computation research. The main prerequisites are knowledge of Bayesian statistics and basic Markov chain Monte Carlo methods. A number of real-world examples are thoroughly analyzed in order to demonstrate the methods and to assist the readers in applying these methods to their own work. To further assist readers, the authors are making source code (for R) available for many of the basic methods discussed herein.
Statistical Methods in Food and Consumer Research continues to be the only book to focus solely on the statistical techniques used in sensory testing of foods, pharmaceuticals, cosmetics, and other consumer products.
This book offers a practical guide to Agent Based economic modeling, adopting a "learning by doing" approach to help the reader master the fundamental tools needed to create and analyze Agent Based models. After providing the reader with a basic "toolkit" for Agent Based modeling, it presents and discusses didactic models of real financial and economic systems in detail. While stressing the main features and advantages of the bottom-up perspective inherent to this approach, the book also highlights the logic and practical steps that characterize the model building procedure. A detailed description of the underlying codes, developed using R and C, is also provided. In addition, each didactic model is accompanied by exercises and applications designed to promote active learning on the part of the reader. Following the same approach, the book also presents several complementary tools required for the analysis and validation of the models, such as sensitivity experiments, calibration exercises, economic network analysis, and statistical distribution analysis. By the end of the book, the reader will have gained a deeper understanding of the Agent Based methodology and be prepared to use the fundamental techniques required to start developing their own economic models. Accordingly, "Economics with Heterogeneous Interacting Agents" will be of particular interest to graduate and postgraduate students, as well as to academic institutions and lecturers interested in including an overview of the AB approach to economic modeling in their courses.
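The bottom-up perspective described above can be sketched with a toy agent-based model (an illustrative Python example, not from the book, whose own models use R and C): agents repeatedly meet in random pairs and transfer one unit of wealth, a classic minimal exchange model in which an unequal wealth distribution emerges from homogeneous agents and a simple interaction rule.

```python
import random

random.seed(0)
N, STEPS = 200, 100_000
wealth = [10] * N                        # every agent starts with equal wealth

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)    # two agents meet at random
    if wealth[i] > 0:                    # i gives one unit to j, if able
        wealth[i] -= 1
        wealth[j] += 1

print(sum(wealth))           # total wealth is conserved: 2000
print(max(wealth) > 10)      # inequality has emerged from equal starts: True
```

This is the "learning by doing" flavour the book aims at: a few lines of bottom-up interaction rules produce an aggregate regularity that no individual agent was programmed to exhibit.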
An in-depth look at current issues, new research findings, and interdisciplinary exchange in survey methodology and processing. Survey Measurement and Process Quality extends the marriage of traditional survey issues and continuous quality improvement further than any other contemporary volume. It documents the current state of the field, reports new research findings, and promotes interdisciplinary exchange in questionnaire design, data collection, data processing, quality assessment, and the effects of errors on estimation and analysis. The book's five sections discuss a broad range of issues and topics in each of these five major areas.
Survey Measurement and Process Quality is an indispensable resource for survey practitioners and managers as well as an excellent supplemental text for undergraduate and graduate courses and special seminars.
This book describes recent trends in growth curve modelling research in various subject areas, both theoretical and applied. It explains and explores the growth curve model as a valuable tool for gaining insights into several research topics of interest to academics and practitioners alike. The book's primary goal is to disseminate applications of the growth curve model to real-world problems, and to address related theoretical issues. The book will be of interest to a broad readership: for applied statisticians, it illustrates the importance of growth curve modelling as applied to actual field data; for more theoretically inclined statisticians, it highlights a number of theoretical issues that warrant further investigation.
This book describes methods for designing and analyzing experiments that are conducted using a computer code (a "computer experiment") and, when possible, a physical experiment. Computer experiments continue to increase in popularity as surrogates for and adjuncts to physical experiments. Since the publication of the first edition, there have been many methodological advances and software developments to implement these new methodologies. The computer experiments literature has emphasized the construction of algorithms for various data analysis tasks (design construction, prediction, sensitivity analysis, and calibration, among others) and the development of web-based repositories of designs for immediate application. While written at a level accessible to readers with Masters-level training in Statistics, the book is detailed enough to be useful for practitioners and researchers. New to this revised and expanded edition:
* An expanded presentation of basic material on computer experiments and Gaussian processes, with additional simulations and examples
* A new comparison of plug-in prediction methodologies for real-valued simulator output
* An enlarged discussion of space-filling designs, including Latin hypercube designs (LHDs), near-orthogonal designs, and nonrectangular regions
* A chapter-length description of process-based designs for optimization, improving overall fit, quantile estimation, and Pareto optimization
* A new chapter describing graphical and numerical sensitivity analysis tools
* Substantial new material on calibration-based prediction and inference for calibration parameters
* Lists of software that can be used to fit the models discussed in the book, to aid practitioners
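The Latin hypercube designs mentioned above have a construction simple enough to sketch (an illustrative Python toy, not the book's own code): each of the d axes is cut into n equal strata, the strata are randomly permuted per axis, and one point is jittered inside each stratum, so every stratum on every axis contains exactly one design point.

```python
import random

random.seed(7)

def latin_hypercube(n, d):
    """Return n points in [0, 1)^d forming one random Latin hypercube design."""
    columns = []
    for _ in range(d):
        strata = list(range(n))
        random.shuffle(strata)                        # random stratum order per axis
        columns.append([(s + random.random()) / n     # jitter within each stratum
                        for s in strata])
    return list(zip(*columns))

design = latin_hypercube(n=10, d=2)
for dim in range(2):
    occupied = sorted(int(point[dim] * 10) for point in design)
    print(occupied == list(range(10)))   # True: one point per stratum on each axis
```

The space-filling property is what makes such designs attractive for computer experiments: with only n runs, every one-dimensional projection of the design is fully stratified.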
This textbook is the result of the enhancement of several courses on non-equilibrium statistics, stochastic processes, stochastic differential equations, anomalous diffusion and disorder. The target audience includes students of physics, mathematics, biology, chemistry, and engineering at undergraduate and graduate level with a grasp of the mathematics and physics typically covered by the fourth year of an undergraduate course. Less well-known physical and mathematical concepts are described in sections and specific exercises throughout the text, as well as in appendices. Physical-mathematical motivation is the main driving force for the development of this text. It presents the academic topics of probability theory and stochastic processes as well as new educational aspects in the presentation of non-equilibrium statistical theory and stochastic differential equations. In particular, it discusses the problem of irreversibility in that context and Fokker-Planck dynamics. An introduction to fluctuations around metastable and unstable points is given. It also describes relaxation theory for non-stationary Markov systems that are periodic in time. The theory of finite and infinite transport in disordered networks is introduced, with a discussion of the issue of anomalous diffusion. Further, it provides the basis for establishing the relationship between quantum aspects of the theory of linear response and the calculation of diffusion coefficients in amorphous systems.
This proceedings volume contains nine selected papers that were presented at the 2012 International Symposium in Statistics, held at Memorial University from July 16 to 18. The nine papers cover three different areas of longitudinal data analysis: four deal with longitudinal data subject to measurement errors, four with incomplete longitudinal data analysis, and one with inferences for longitudinal data subject to outliers. Unlike in the independence setup, inferences in measurement error, missing value, and/or outlier models are not adequately discussed in the longitudinal setup. The papers in the present volume provide details on successes and further challenges in these three areas of longitudinal data analysis. This volume is the first outlet for current research in these three important areas of the longitudinal setup. The nine papers, presented in three parts, clearly reveal the similarities and differences in the inference techniques used for the three longitudinal setups. Because the research problems considered in this volume are encountered in many real-life studies in the biomedical, clinical, epidemiology, socioeconomic, econometrics, and engineering fields, the volume should be useful to researchers, including graduate students, in these areas.
This is a how-and-why-to-do-it book for students and scientists in all the behavioral sciences. It presents sophisticated statistical methods for analyzing continuous-time records of behavior, and integrates many recent developments in ethology, mathematical modelling, statistics, and technology. These new methods are explicitly designed to handle sequential or simultaneous acts where neither the duration nor the sequence of the acts is predetermined, which is often the case if the time scale on which behavior is studied is relatively short. The authors show how to analyze behavioral data starting with a basic model, the continuous time Markov chain. They then indicate how and when this model can be generalized and demonstrate the suitability of their approach for detecting, for example, the effects of different experimental treatments or of gradual changes in the social or physical environment. Competitive interactions such as predator-prey or host-parasite are also good subjects for this type of analysis. There are eight chapters and many worked examples, leading the reader through the mathematical processes and their applications. Students and researchers in all fields of behavioural science will find this book incomparably useful for planning and performing data analysis.
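The basic model the blurb mentions, the continuous-time Markov chain, can be sketched in a few lines (an illustrative Python toy with hypothetical states and rates, not code from the book): an animal alternates between two behavioural states with exponentially distributed bout durations, and the exit rate of each state is recovered from the simulated record as (number of bouts) / (total time in the state), the standard maximum-likelihood estimate.

```python
import random

random.seed(1)
RATES = {"rest": 0.5, "active": 2.0}   # illustrative exit rates (per minute)

def simulate_record(t_end=10_000.0):
    """Return a list of (state, duration) bouts covering time up to t_end."""
    t, state, record = 0.0, "rest", []
    while t < t_end:
        bout = random.expovariate(RATES[state])   # exponential holding time
        record.append((state, bout))
        t += bout
        state = "active" if state == "rest" else "rest"
    return record

record = simulate_record()
for s in RATES:
    bouts = [d for st, d in record if st == s]
    print(s, round(len(bouts) / sum(bouts), 2))   # estimates near 0.5 and 2.0
```

With a long enough record the estimated rates converge to the true ones; comparing such estimates across experimental treatments is exactly the kind of analysis the book builds up from this basic model.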
The subject of the book is advanced statistical analyses for quantitative research synthesis (meta-analysis), and selected practical issues relating to research synthesis that are not covered in detail in the many existing introductory books on the subject. Complex statistical issues are arising more frequently as the primary research that is summarized in quantitative syntheses itself becomes more complex, and as researchers who are conducting meta-analyses become more ambitious in the questions they wish to address. Also, as researchers have gained more experience in conducting research syntheses, several key issues have persisted and now appear fundamental to the enterprise of summarizing research. Specifically, the book describes multivariate analyses for several indices commonly used in meta-analysis (e.g., correlations, effect sizes, proportions, and odds ratios), outlines how to do power analysis for meta-analysis (again for each of the different kinds of study outcome indices), and examines issues around research quality and research design and their roles in synthesis. For each of the statistical topics, the book examines the different possible statistical models (i.e., fixed, random, and mixed models) that a researcher could adopt. In dealing with the issues of study quality and research design it covers a number of specific topics that are of broad concern to research synthesists. In many fields a current issue is how to make sense of results when studies using several different designs appear in a research literature (e.g., Morris & Deshon, 1997, 2002). In education and other social sciences a critical aspect of this issue is how one might incorporate qualitative (e.g., case study) research within a synthesis. In medicine, related issues concern whether and how to summarize observational studies, and whether they should be combined with randomized controlled trials (or even whether they should be combined at all).
For each topic, a worked example is included (e.g., for the statistical analyses) and/or a detailed description of a published research synthesis that deals with the practical (non-statistical) issues covered.