This book is a first course in statistics for students of biology. Most of the examples have an ecological bias, but illustrate principles which have direct relevance for biologists doing laboratory work. The structured approach begins with basic concepts, and progresses towards an appreciation of the needs and use of analysis of variance and regression, and includes the use of computer statistical packages. The work is clearly explained with worked examples of real-life biological problems, and should be suitable for undergraduate students engaged in quantitative biological work. Biostatistics should give students a sound grasp of the key principles of biological statistics without overwhelming detail, and should allow students to quickly apply techniques to their own work and data.
Quantum mechanics is arguably one of the most successful scientific theories ever, and its applications to chemistry, optics, and information theory are innumerable. This book provides the reader with a rigorous treatment of the main mathematical tools from harmonic analysis which play an essential role in the modern formulation of quantum mechanics. This allows the authors, at the same time, to suggest some new ideas and methods, with a special focus on topics such as the Wigner phase space formalism and its applications to the theory of the density operator and its entanglement properties. This book can be used with profit by advanced undergraduate students in mathematics and physics, as well as by established researchers.
This book brings together carefully selected, peer-reviewed works on mathematical biology presented at the BIOMAT International Symposium on Mathematical and Computational Biology, which was held at the Institute of Numerical Mathematics, Russian Academy of Sciences, in October 2017, in Moscow. Topics covered include, but are not limited to, the evolution of spatial patterns on metapopulations, problems related to cardiovascular diseases and modeled by boundary control techniques in hemodynamics, algebraic modeling of the genetic code, and multi-step biochemical pathways. Also, new results are presented on topics like pattern recognition of probability distribution of amino acids, somitogenesis through reaction-diffusion models, mathematical modeling of infectious diseases, and many others. Experts, scientific practitioners, graduate students and professionals working in various interdisciplinary fields will find this book a rich resource for research and applications alike.
The self-avoiding walk is a classical model in statistical mechanics, probability theory and mathematical physics. It is also a simple model of polymer entropy which is useful in modelling phase behaviour in polymers. This monograph provides an authoritative examination of interacting self-avoiding walks, presenting aspects of the thermodynamic limit, phase behaviour, scaling and critical exponents for lattice polygons, lattice animals and surfaces. It also includes a comprehensive account of constructive methods in models of adsorbing, collapsing, and pulled walks, animals and networks, and of models of walks in confined geometries. Additional topics include scaling, knotting in lattice polygons, generating function methods for directed models of walks and polygons, and an introduction to the Edwards model. This essential second edition incorporates recent breakthroughs in the field while retaining the older but still relevant topics. New chapters include an expanded presentation of directed models, an exploration of methods and results for the hexagonal lattice, and a chapter devoted to Monte Carlo methods.
Molecular-Genetic and Statistical Techniques for Behavioral and Neural Research presents the most exciting molecular and recombinant DNA techniques used in the analysis of brain function and behavior, a critical piece of the puzzle for clinicians, scientists, course instructors and advanced undergraduate and graduate students. Chapters examine neuroinformatics, genetic and neurobehavioral databases and data mining, also providing an analysis of natural genetic variation and principles and applications of forward (mutagenesis) and reverse genetics (gene targeting). In addition, the book discusses gene expression and its role in brain function and behavior, along with ethical issues in the use of animals in genetics testing. Written and edited by leading international experts, this book provides a clear presentation of the frontiers of basic research as well as translationally relevant techniques that are used by neurobehavioral geneticists.
Multilevel modelling facilitates the analysis of hierarchical data where observations may be nested within higher levels of classification. In health care research, for example, a study may be undertaken to determine the variability of patient outcomes where these also vary by hospital or health care region. Inference can then be made on the efficacy of health care practices.
Health care professionals and public health researchers interested in the application of statistics will benefit greatly from this text. It will also be of interest to postgraduate students studying medical statistics.
The first edition of Theory of Rank Tests (1967) was the precursor to a unified and theoretically motivated treatise of the basic theory of tests based on ranks of the sample observations. For more than 25 years, it helped raise a generation of statisticians, cultivating their theoretical research in this fertile area as well as their use of these tools in application-oriented research. The present edition aims not only to revive this classical text by updating its findings, but also to incorporate several other important areas which were either not properly developed before 1965 or have gone through an evolutionary development during the past 30 years. This edition therefore aims to fulfill the needs of academic as well as professional statisticians who want to pursue nonparametrics in their academic projects, consultation, and applied research.
Handbook of Statistics: Disease Modelling and Public Health, Part B, Volume 37 addresses new challenges in existing and emerging diseases. As a two-part volume, this title covers an extensive range of techniques in the field, with this book including chapters on reaction-diffusion equations and their application to bacterial communication, spike-and-slab methods in disease modeling, mathematical modeling of mass screening and parameter estimation, an overview of individual-based and agent-based models for infectious disease transmission and evolution, and a section on visual clustering of static and dynamic high-dimensional data. This volume also addresses the lack of availability of complete data relating to disease symptoms and disease epidemiology, one of the biggest challenges facing vaccine developers, public health planners, epidemiologists and health sector researchers.
Recently, the use of statistical tools, methodologies, and models in human resource management (HRM) has increased because of human resources (HR) analytics and predictive HR decision making. To utilize these technological tools, HR managers and students must increase their knowledge of the resources' optimum application. Statistical Tools and Analysis in Human Resources Management is a critical scholarly resource that presents in-depth details on the application of statistics in every sphere of HR functions for optimal decision-making and analytical solutions. Featuring coverage on a broad range of topics such as leadership, industrial relations, training and development, and diversity management, this book is geared towards managers, professionals, upper-level students, administrators, and researchers seeking current information on the integration of HRM technologies.
This book focuses on recent advances, approaches, theories and applications related to mixture models. In particular, it presents recent unsupervised and semi-supervised frameworks that consider mixture models as their main tool. The chapters consider mixture models in the context of several interesting and challenging problems, such as parameter estimation, model selection, and feature selection. The goal of this book is to summarize the recent advances and modern approaches related to these problems. Each contributor presents novel research, a practical study, novel applications based on mixture models, or a survey of the literature. The book reports advances on classic problems in mixture modeling such as parameter estimation, model selection, and feature selection; presents theoretical and practical developments in mixture-based modeling and their importance in different applications; and discusses perspectives and challenging future work related to mixture modeling.
A comprehensive and accessible guide to panel data analysis using EViews software. This book explores the use of EViews software in panel data analysis with appropriate empirical models and real datasets. Guidance is given on developing alternative descriptive statistical summaries for evaluation and on providing policy analysis based on pooled panel data. Various alternative models based on panel data are explored, including univariate general linear models, fixed-effect models and causal models, and guidance on the advantages and disadvantages of each one is given. Panel Data Analysis using EViews: provides step-by-step guidance on how to apply EViews software to panel data analysis using appropriate empirical models and real datasets; examines a variety of panel data models along with the author's own empirical findings, demonstrating the advantages and limitations of each model; presents growth models, time-related effects models, and polynomial models, in addition to the models which are commonly applied for panel data; includes more than 250 examples divided into three groups of models (stacked, unstacked, and structured panel data), together with notes and comments; provides guidance on which models not to use in a given scenario, along with advice on viable alternatives; and explores recent new developments in panel data analysis. An essential tool for advanced undergraduate or graduate students and applied researchers in finance, econometrics and population studies. Statisticians and data analysts involved with data collected over long time periods will also find this book a useful resource.
This book describes computational problems related to kernel density estimation (KDE), one of the most important and widely used data smoothing techniques. A very detailed description of novel FFT-based algorithms for both KDE computations and bandwidth selection is presented. The theory of KDE appears to have matured and is now well developed and understood. However, little progress has been observed in terms of performance improvements. This book is an attempt to remedy this. The book primarily addresses researchers and advanced graduate or postgraduate students who are interested in KDE and its computational aspects. The book contains both background material and much more sophisticated material, so more experienced researchers in the KDE area may also find it interesting. The presented material is richly illustrated with many numerical examples using both artificial and real datasets. A number of practical applications related to KDE are also presented.
Statistics in Practice: a new series of practical books outlining the use of statistical techniques in a wide range of application areas.
The book focuses on system dependability modeling and calculation, considering the impact of s-dependency and uncertainty. The approaches best suited to practical system dependability modeling and calculation, (1) the minimal cut approach, (2) the Markov process approach, and (3) the Markov minimal cut approach as a combination of (1) and (2), are described in detail and applied to several examples. Boolean logic, applied stringently throughout the development of the approaches, is the key to combining them on a common basis. For large and complex systems, efficient approximation approaches, e.g. the probable Markov path approach, have been developed, which can take into account s-dependencies between components of complex system structures. A comprehensive analysis of aleatory uncertainty (due to randomness) and epistemic uncertainty (due to lack of knowledge), and their combination, developed on the basis of basic reliability indices and evaluated with the Monte Carlo simulation method, has been carried out. The uncertainty impact on system dependability is investigated and discussed using several examples with different levels of difficulty. The applications cover a wide variety of large and complex (real-world) systems. Current state-of-the-art definitions of terms from the IEC 60050-192:2015 standard, as well as the dependability indices, are used uniformly in all six chapters of the book.
This introductory book enables researchers and students of all backgrounds to compute interrater agreements for nominal data. It presents an overview of available indices, requirements, and steps to be taken in a research project with regard to reliability, preceded by agreement. The book explains the importance of computing the interrater agreement and how to calculate the corresponding indices. Furthermore, it discusses current views on chance expected agreement and problems related to different research situations, so as to help the reader consider what must be taken into account in order to achieve a proper use of the indices. The book offers a practical guide for researchers, Ph.D. and master students, including those without any previous training in statistics (such as in sociology, psychology or medicine), as well as policymakers who have to make decisions based on research outcomes in which these types of indices are used.
Major theoretical advances have been made in this area of research, and in the course of these developments order statistics has also found important applications in many diverse areas. These include life-testing and reliability, robustness studies, statistical quality control, filtering theory, signal processing, image processing, and radar target detection. Theoretical researchers working on theoretical and methodological advancements in order statistics, and applied statisticians and engineers developing new and innovative applications of order statistics, have been successfully brought together to create this handbook. For the convenience of readers, the subject matter has been divided into two volumes. This volume focuses on theory and methods, and Volume 17 deals primarily with applications. Each volume has been divided into parts, each part specializing in one aspect of order statistics, and the articles in this volume have been classified accordingly.
The fascinating world of canonical moments: a unique look at this practical, powerful statistical and probability tool.
This book presents a broad range of statistical techniques to address emerging needs in the field of repeated measures. It also provides a comprehensive overview of extensions of generalized linear models for the bivariate exponential family of distributions, which represent a new development in analysing repeated measures data. The demand for statistical models for correlated outcomes has grown rapidly in recent years, mainly due to the presence of two types of underlying associations: associations between outcomes, and associations between explanatory variables and outcomes. The book systematically addresses key problems arising in the modelling of repeated measures data, bearing in mind those factors that play a major role in estimating the underlying relationships between covariates and outcome variables for correlated outcome data. In addition, it presents new approaches to addressing current challenges in the field of repeated measures, with models based on conditional and joint probabilities. Markov models of first and higher orders are used for conditional models, in addition to conditional probabilities as a function of covariates. Similarly, joint models are developed using both marginal-conditional probabilities and joint probabilities as a function of covariates. In addition to generalized linear models for bivariate outcomes, it highlights extended semi-parametric models for continuous failure time data and their applications, in order to include models for a broader range of outcome variables that researchers encounter in various fields. The book further discusses the problem of analysing repeated measures data for failure time in the competing risk framework, which is now taking on an increasingly important role in the fields of survival analysis, reliability and actuarial science. Details on how to perform the analyses are included in each chapter and supplemented with newly developed R packages and functions, along with SAS codes and macro/IML.
It is a valuable resource for researchers, graduate students and other users of statistical techniques for analysing repeated measures data.
This book, dedicated to Winfried Stute on the occasion of his 70th birthday, presents a unique collection of contributions by leading experts in statistics, stochastic processes, mathematical finance and insurance. The individual chapters cover a wide variety of topics ranging from nonparametric estimation, regression modelling and asymptotic bounds for estimators, to shot-noise processes in finance, option pricing and volatility modelling. The book also features review articles, e.g. on survival analysis.
This book addresses mathematics in a wide variety of applications, ranging from problems in electronics, energy and the environment, to mechanics and mechatronics. Using the classification system defined in the EU Framework Programme for Research and Innovation H2020, several of the topics covered belong to the challenge "climate action, environment, resource efficiency and raw materials"; some to "health, demographic change and wellbeing"; while others belong to "Europe in a changing world - inclusive, innovative and reflective societies". The 19th European Conference on Mathematics for Industry, ECMI2016, was held in Santiago de Compostela, Spain, in June 2016. The proceedings of this conference include the plenary lectures, ECMI awards and special lectures, mini-symposia (including the description of each mini-symposium) and contributed talks. The ECMI conferences are organized by the European Consortium for Mathematics in Industry with the aim of promoting interaction between academia and industry, leading to innovation in both fields, providing unique opportunities to discuss the latest ideas, problems and methodologies, and contributing to the advancement of science and technology. They also encourage industrial sectors to propose challenging problems where mathematicians can provide insights and fresh perspectives. Lastly, the ECMI conferences are one of the main forums in which significant advances in industrial mathematics are presented, bringing together prominent figures from business, science and academia to promote the use of innovative mathematics in industry.
Existence Theory for Generalized Newtonian Fluids provides a rigorous mathematical treatment of the existence of weak solutions to generalized Navier-Stokes equations modeling non-Newtonian fluid flows. The book presents classical results, developments over the last 50 years of research, and recent results with proofs.