These proceedings report on the conference "Math Everywhere," celebrating the 60th birthday of the mathematician Vincenzo Capasso. The conference promoted the ideas Capasso has pursued and shared the open atmosphere he is known for. Topic sections include: Deterministic and Stochastic Systems; Mathematical Problems in Biology, Medicine and Ecology; and Mathematical Problems in Industry and Economics. The broad spectrum of contributions to this volume demonstrates the truth of its title: Math is Everywhere, indeed.
Introduction to Convolutional Codes with Applications is an introduction to the basic concepts of convolutional codes, their structure and classification, various error correction and decoding techniques for convolutionally encoded data, and some of the most common applications. The definition and representations, distance properties, and important classes of convolutional codes are also discussed in detail. The book provides the first comprehensive description of table-driven correction and decoding of convolutionally encoded data. Complete examples of Viterbi, sequential, and majority-logic decoding techniques are also included, allowing a quick comparison among the different decoding approaches. Introduction to Convolutional Codes with Applications summarizes the research of the last two decades on applications of convolutional codes in hybrid ARQ protocols. A new classification allows a natural way of studying the underlying concepts of hybrid schemes and accommodates all of the new research. A novel application of fast decodable invertible convolutional codes for lost packet recovery in high speed networks is described. This opens the door for using convolutional coding for error recovery in high speed networks. Practicing communications, electronics, and networking engineers who want to get a better grasp of the underlying concepts of convolutional coding and its applications will benefit greatly from the simple and concise style of explanation. An up-to-date bibliography of over 300 papers is included. The book is also suitable as a textbook or reference text in an advanced course on coding theory with emphasis on convolutional codes.
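To give a flavour of the Viterbi decoding the blurb mentions, here is a minimal sketch, not taken from the book: hard-decision Viterbi decoding of the standard rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5). The function names and the unterminated-trellis simplification are choices made for this illustration.

```python
def encode(bits, g=(0b111, 0b101)):
    """Convolutionally encode a bit list; emits two output bits per input bit."""
    state = 0  # the two most recent input bits
    out = []
    for b in bits:
        reg = (b << 2) | state                    # shift register [b, s1, s0]
        out += [bin(reg & g[0]).count("1") % 2,   # parity of the tapped bits
                bin(reg & g[1]).count("1") % 2]
        state = (reg >> 1) & 0b11
    return out

def viterbi(received, g=(0b111, 0b101)):
    """Hard-decision Viterbi decoding: minimum Hamming distance over the trellis."""
    INF = float("inf")
    metric = [0, INF, INF, INF]                   # the encoder starts in state 0
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):                      # try both branch inputs
                reg = (b << 2) | s
                expect = [bin(reg & g[0]).count("1") % 2,
                          bin(reg & g[1]).count("1") % 2]
                ns = (reg >> 1) & 0b11
                m = metric[s] + sum(x != y for x, y in zip(r, expect))
                if m < new_metric[ns]:            # keep the survivor into state ns
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]
```

Because this code has free distance 5, the decoder recovers the message even after a single channel bit is flipped.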
This book covers the latest results in the field of risk analysis. Presented topics include probabilistic models in cancer research, models and methods in longevity, epidemiology of cancer risk, engineering reliability and economical risk problems. The contributions of this volume originate from the 5th International Conference on Risk Analysis (ICRA 5). The conference brought together researchers and practitioners working in the field of risk analysis in order to present new theoretical and computational methods with applications in biology, environmental sciences, public health, economics and finance.
An easy to read survey of data analysis, linear regression models and analysis of variance. The extensive development of the linear model, including the use of the linear model approach to analysis of variance, provides a strong link to statistical software packages and is complemented by a thorough overview of theory. It is assumed that the reader has a background equivalent to an introductory book in statistical inference. The book can be read easily by those who have had brief exposure to calculus and linear algebra. It is intended for first year graduate students in business, the social sciences and the biological sciences, and provides the student with the necessary statistics background for a course in research methodology. In addition, undergraduate statistics majors will find this text useful as a survey of linear models and their applications.
Probability Theory, Theory of Random Processes and Mathematical Statistics are important areas of modern mathematics and its applications. They develop rigorous models for a proper treatment of various 'random' phenomena which we encounter in the real world. They provide us with numerous tools for the analysis, prediction and, ultimately, control of random phenomena. Statistics itself helps with the choice of a proper mathematical model (e.g., by estimation of unknown parameters) on the basis of statistical data collected by observations. This volume is intended to be a concise textbook for a graduate level course, with carefully selected topics representing the most important areas of modern Probability, Random Processes and Statistics. The first part (Ch. 1-3) can serve as a self-contained, elementary introduction to Probability, Random Processes and Statistics. It contains a number of relatively simple and typical examples of random phenomena which allow a natural introduction of general structures and methods. Only knowledge of elements of real/complex analysis, linear algebra and ordinary differential equations is required here. The second part (Ch. 4-6) provides a foundation of Stochastic Analysis, gives information on basic models of random processes and tools to study them. Here a familiarity with elements of functional analysis is necessary. Our intention to make this course fast-moving made it necessary to present important material in the form of examples.
Growth-curve models are generalized multivariate analysis-of-variance models. The basic idea of the models is to use different polynomials to fit different treatment groups involved in the longitudinal study. It is not uncommon, however, to find outliers and influential observations in growth data that heavily affect statistical inference in growth curve models. This book provides a comprehensive introduction to the theory of growth curve models with an emphasis on statistical diagnostics. A variety of issues on model fittings and model diagnostics are addressed, and many criteria for outlier detection and influential observation identification are created within likelihood and Bayesian frameworks. This book is intended for postgraduates and statisticians whose research involves longitudinal study, multivariate analysis and statistical diagnostics, and also for scientists who analyze longitudinal data and repeated measures. The authors provide theoretical details on the model fittings and also emphasize the application of growth curve models to practical data analysis, which are reflected in the analysis of practical examples given in each chapter. The book assumes a basic knowledge of matrix algebra and linear regression. Jian-Xin Pan is a lecturer in Medical Statistics at Keele University in the U.K. He has published more than twenty papers on growth curve models, statistical diagnostics and linear/non-linear mixed models. He has a long-standing research interest in longitudinal data analysis and repeated measures in medicine and agriculture. Kai-Tai Fang is a chair professor in Statistics at Hong Kong Baptist University and a fellow of the Institute of Mathematical Statistics. He has published several books with Springer-Verlag, Chapman & Hall, and Science Press and is an author or co-author of over one hundred papers. His research interest includes generalized multivariate analysis, elliptically contoured distributions and uniform design.
It appears that we live in an age of disasters: the mighty Mississippi and Missouri flood millions of acres, earthquakes hit Tokyo and California, airplanes crash due to mechanical failure and the seemingly ever increasing wind speeds make the storms more and more frightening. While all these may seem to be unexpected phenomena to the man on the street, they are actually happening according to well defined rules of science known as extreme value theory. We know that records must be broken in the future, so if a flood design is based on the worst case of the past then we are not really prepared against floods. Materials will fail due to fatigue, so if the body of an aircraft looks fine to the naked eye, it might still suddenly fail if the aircraft has been in operation over an extended period of time. Our theory has by now penetrated the social sciences, the medical profession, economics and even astronomy. We believe that our field has come of age. In order to fully utilize the great progress in the theory of extremes and its ever increasing acceptance in practice, an international conference was organized in which equal weight was given to theory and practice. This book is Volume I of the Proceedings of this conference. In selecting the papers for Volume I our guide was to have authoritative works with a large variety of coverage of both theory and practice.
The focus of this book is on bilevel programming which combines elements of hierarchical optimization and game theory. The basic model addresses the problem where two decision-makers, each with their individual objectives, act and react in a noncooperative manner. The actions of one affect the choices and payoffs available to the other but neither player can completely dominate the other in the traditional sense. Over the last 20 years there has been a steady growth in research related to theory and solution methodologies for bilevel programming. This interest stems from the inherent complexity and consequent challenge of the underlying mathematics, as well as the applicability of the bilevel model to many real-world situations. The primary aim of this book is to provide a historical perspective on algorithmic development and to highlight those implementations that have proved to be the most efficient in their class. A corollary aim is to provide a sampling of applications in order to demonstrate the versatility of the basic model and the limitations of current technology. What is unique about this book is its comprehensive and integrated treatment of theory, algorithms and implementation issues. It is the first text that offers researchers and practitioners an elementary understanding of how to solve bilevel programs and a perspective on what success has been achieved in the field. Audience: Includes management scientists, operations researchers, industrial engineers, mathematicians and economists.
This volume is intended to stimulate a change in the practice of decision support, advocating an interdisciplinary approach centred on both social and natural sciences, both theory and practice. It addresses the issue of analysis and management of uncertainty and risk in decision support corresponding to the aims of Integrated Assessment. A pluralistic method is necessary to account for legitimate plural interpretations of uncertainty and multiple risk perceptions. A wide range of methods and tools is presented to contribute to adequate and effective pluralistic uncertainty management and risk analysis in decision support endeavours. Special attention is given to the development of one such approach, the Pluralistic fRamework for Integrated uncertainty Management and risk Analysis (PRIMA), of which the practical value is explored in the context of the Environmental Outlooks produced by the Dutch Institute for Public Health and Environment (RIVM). Audience: This book will be of interest to researchers and practitioners whose work involves decision support, uncertainty management, risk analysis, environmental planning, and Integrated Assessment.
The book is a collection of essays on various issues in philosophy of science, with special emphasis on the foundations of probability and statistics, and quantum mechanics. The main topics, addressed by some of the most outstanding researchers in the field, are subjective probability, Bayesian statistics, probability kinematics, causal decision making, probability and realism in quantum mechanics.
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear programming, and nonlinear optimization are closely linked. This book offers a comprehensive introduction to the whole subject and leads the reader to the frontiers of current research. The prerequisites to use the book are very elementary. All the tools from numerical linear algebra and calculus are fully reviewed and developed. Rather than attempting to be encyclopedic, the book illustrates the important basic techniques with typical problems. The focus is on efficient algorithms with respect to practical usefulness. Algorithmic complexity theory is presented with the goal of helping the reader understand the concepts without having to become a theoretical specialist. Further theory is outlined and supplemented with pointers to the relevant literature.
Advances in Stochastic Modelling and Data Analysis presents the most recent developments in the field, together with their applications, mainly in the areas of insurance, finance, forecasting and marketing. In addition, the possible interactions between data analysis, artificial intelligence, decision support systems and multicriteria analysis are examined by top researchers. Audience: A wide readership drawn from theoretical and applied mathematicians, such as operations researchers, management scientists, statisticians, computer scientists, bankers, marketing managers, forecasters, and scientific societies such as EURO and TIMS.
This book prepares students to execute the quantitative and computational needs of the finance industry. The quantitative methods are explained in detail with examples from real financial problems like option pricing, risk management, portfolio selection, etc. Codes are provided in R programming language to execute the methods. Tables and figures, often with real data, illustrate the codes. References to related work are intended to aid the reader to pursue areas of specific interest in further detail. The comprehensive background with economic, statistical, mathematical, and computational theory strengthens the understanding. The coverage is broad, and linkages between different sections are explained. The primary audience is graduate students, while it should also be accessible to advanced undergraduates. Practitioners working in the finance industry will also benefit.
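The book's worked examples of option pricing are written in R; as a language-neutral illustration of the kind of computation involved, here is a sketch of the Black-Scholes European call price in Python. This is not the book's own code, and the parameter values in the usage note are invented.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, an at-the-money call with S = K = 100, one year to expiry, a 5% rate and 20% volatility prices at roughly 10.45.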
This book has been prepared to help psychiatrists expand their knowledge of statistical methods, fill the gaps in their application of those methods, and become acquainted with data analysis software. The book emphasizes a classification of the fundamental statistical methods in psychiatry research that is precise and simple. Professionals in the field of mental health and allied subjects without any mathematical background can easily understand all the relevant statistical methods, carry out the analyses, and interpret the results in their respective fields without consulting a statistician. The sequence of the chapters, the sections within the chapters, the subsections within the sections, and the points within the subsections have all been arranged to help professionals refine their knowledge of statistical methods and fill the gaps, if any. Emphasizing simplicity, the fundamental statistical methods are demonstrated by means of arithmetical examples that may be reworked with pencil and paper in a matter of minutes. The results of the rework can then be checked using SPSS, and in this way professionals are introduced to this psychiatrist-friendly data analysis software. Topics covered include: an overview of psychiatry research; the organization and collection of data; descriptive statistics; the basis of statistical inference; tests of significance; correlational data analysis; multivariate data analysis; meta-analysis; reporting the results; and statistical software. The language of the book is very simple and covers all aspects of statistical methods, starting from organization and collection of data to descriptive statistics, statistical inference, multivariate analysis, and meta-analysis. Two chapters on computer applications deal with the most popular data analysis software: SPSS. The book will be very valuable to professionals and post-graduate students in psychiatry and allied fields, such as psychiatric social work, clinical psychology, psychiatric nursing, and mental health education and administration.
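As an example of the pencil-and-paper arithmetic this kind of text demonstrates, a one-sample t statistic can be computed in a few lines; the function and the sample values below are invented for illustration, not drawn from the book.

```python
def t_statistic(sample, mu0):
    """One-sample t statistic: (mean - mu0) / (s / sqrt(n)),
    with s the sample standard deviation (n - 1 denominator)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return (mean - mu0) / (var / n) ** 0.5
```

For the sample [4, 5, 6, 7, 8] against a hypothesized mean of 5, the mean is 6, the sample variance is 2.5, and the statistic works out to sqrt(2), exactly as one would get by hand.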
This BASS book series publishes selected high-quality papers reflecting recent advances in the design and biostatistical analysis of biopharmaceutical experiments - particularly biopharmaceutical clinical trials. The papers were selected from invited presentations at the Biopharmaceutical Applied Statistics Symposium (BASS), which was founded by the first Editor in 1994 and has since become the premier international conference in biopharmaceutical statistics. The primary aims of the BASS are: 1) to raise funding to support graduate students in biostatistics programs, and 2) to provide an opportunity for professionals engaged in pharmaceutical drug research and development to share insights into solving the problems they encounter. The BASS book series is initially divided into three volumes addressing: 1) Design of Clinical Trials; 2) Biostatistical Analysis of Clinical Trials; and 3) Pharmaceutical Applications. This book is the third of the 3-volume book series. The topics covered include: Targeted Learning of Optimal Individualized Treatment Rules under Cost Constraints, Uses of Mixture Normal Distribution in Genomics and Otherwise, Personalized Medicine - Design Considerations, Adaptive Biomarker Subpopulation and Tumor Type Selection in Phase III Oncology Trials, High Dimensional Data in Genomics; Synergy or Additivity - The Importance of Defining the Primary Endpoint, Full Bayesian Adaptive Dose Finding Using Toxicity Probability Interval (TPI), Alpha-recycling for the Analyses of Primary and Secondary Endpoints of Clinical Trials, Expanded Interpretations of Results of Carcinogenicity Studies of Pharmaceuticals, Randomized Clinical Trials for Orphan Drug Development, Mediation Modeling in Randomized Trials with Non-normal Outcome Variables, Statistical Considerations in Using Images in Clinical Trials, Interesting Applications over 30 Years of Consulting, Uncovering Fraud, Misconduct and Other Data Quality Issues in Clinical Trials, Development and Evaluation of High Dimensional Prognostic Models, and Design and Analysis of Biosimilar Studies.
The year 2000 is the centenary year of the publication of Bachelier's thesis which - together with Harry Markowitz's Ph.D. dissertation on portfolio selection in 1952 and Fischer Black's and Myron Scholes' solution of an option pricing problem in 1973 - is considered the starting point of modern finance as a mathematical discipline. On this remarkable anniversary the workshop on mathematical finance held at the University of Konstanz brought together practitioners, economists and mathematicians to discuss the state of the art. Apart from contributions to the known discrete, Brownian, and Lévy process models, first attempts to describe a market in a reasonable way by a fractional Brownian motion model are presented, opening many new aspects for practitioners and new problems for mathematicians. As most dynamical financial problems are stochastic filtering or control problems, many talks presented adaptations of control methods and techniques to the classical financial problems of portfolio selection, irreversible investment, risk-sensitive asset allocation, capital asset pricing, hedging contingent claims, option pricing, and interest rate theory. The contributions of practitioners link the theoretical results to the steadily increasing flow of real world problems from financial institutions into mathematical laboratories. The present volume reflects this exchange of theoretical and applied results, methods and techniques that made the workshop a fruitful contribution to the interdisciplinary work in mathematical finance.
This book discusses research, methods, and recent developments in the interdisciplinary field that spans research in visualization, eye tracking, human-computer interaction, and psychology. It presents extended versions of papers from the First Workshop on Eye Tracking and Visualization (ETVIS), which was organized as a workshop of the IEEE VIS Conference 2015. Topics include visualization and visual analytics of eye-tracking data, metrics and cognitive models, eye-tracking experiments in the context of visualization interfaces, and eye tracking in 3D and immersive environments. The extended ETVIS papers are complemented by a chapter offering an overview of visualization approaches for analyzing eye-tracking data and a chapter that discusses electrooculography (EOG) as an alternative of acquiring information about eye movements. Covering scientific visualization, information visualization, and visual analytics, this book is a valuable resource for eye-tracking researchers within the visualization community.
This volume is devoted to the development of an algebraic model of databases. The first chapter presents a general introduction. The following sixteen chapters are divided into three main parts. Part I deals with various aspects of universal algebra. The chapters of Part I discuss topics such as sets, algebras and models, fundamental structures, categories, the category of sets, topoi, fuzzy sets, varieties of algebras, axiomatic classes, category algebra and algebraic theories. Part II deals with different approaches to the algebraization of predicate calculus. This material is intended to be applied chiefly to databases, although some discussion of pure algebraic applications is also given. Discussed here are topics such as Boolean algebras and propositional calculus, Halmos algebras and predicate calculus, connections with model theory, and the categorial approach to algebraic logic. Part III is concerned specifically with the algebraic model of databases, which considers the database as an algebraic structure. Topics dealt with in this part are the algebraic aspects of databases, their equivalence and restructuring, symmetries and the Galois theory of databases, and constructions in database theory. The volume closes with a discussion and conclusions, and an extensive bibliography. For mathematicians, computer scientists and database engineers, with an interest in applications of algebra and logic.
A timely collection of advanced, original material in the area of statistical methodology motivated by geometric problems, dedicated to the influential work of Kanti V. Mardia. This volume celebrates Kanti V. Mardia's long and influential career in statistics. A common theme unifying much of Mardia's work is the importance of geometry in statistics, and to highlight the areas emphasized in his research this book brings together 16 contributions from high-profile researchers in the field. Geometry Driven Statistics covers a wide range of application areas including directional data, shape analysis, spatial data, climate science, fingerprints, image analysis, computer vision and bioinformatics. The book will appeal to statisticians and others with an interest in data motivated by geometric considerations. Summarizing the state of the art, examining some new developments and presenting a vision for the future, Geometry Driven Statistics will enable the reader to broaden knowledge of important research areas in statistics and gain a new appreciation of the work and influence of Kanti V. Mardia.
On various examples ranging from geosciences to environmental sciences, this book explains how to generate an adequate description of uncertainty, how to justify semiheuristic algorithms for processing uncertainty, and how to make these algorithms more computationally efficient. It explains in what sense the existing approach to uncertainty as a combination of random and systematic components is only an approximation, presents a more adequate three-component model with an additional periodic error component, and explains how uncertainty propagation techniques can be extended to this model. The book provides a justification for a practically efficient heuristic technique (based on fuzzy decision-making). It explains how the computational complexity of uncertainty processing can be reduced. The book also shows how to take into account that in real life, the information about uncertainty is often only partially known, and, on several practical examples, explains how to extract the missing information about uncertainty from the available data.
This book covers the statistical models and methods that are used to understand human genetics, following the historical and recent developments of human genetics. Starting with Mendel's first experiments and continuing through genome-wide association studies, the book describes how genetic information can be incorporated into statistical models to discover disease genes. All commonly used approaches in statistical genetics (e.g., aggregation analysis, segregation, linkage analysis, etc.) are covered, but the focus of the book is modern approaches to association analysis. Numerous examples illustrate key points throughout the text, for both Mendelian and complex genetic disorders. The intended audience is statisticians, biostatisticians, epidemiologists and quantitatively oriented geneticists and health scientists wanting to learn about statistical methods for genetic analysis, whether to better analyze genetic data or to pursue research in methodology. A background in intermediate level statistical methods is required. The authors include few mathematical derivations, and the exercises provide problems for students with a broad range of skill levels. No background in genetics is assumed.
This unique book explains how to fashion useful regression models from commonly available data to erect models essential for evidence-based road safety management and research. Composed from techniques and best practices presented over many years of lectures and workshops, The Art of Regression Modeling in Road Safety illustrates that fruitful modeling cannot be done without substantive knowledge about the modeled phenomenon. Class-tested in courses and workshops across North America, the book is ideal for professionals, researchers, university professors, and graduate students with an interest in, or responsibilities related to, road safety. This book also: * Presents for the first time a powerful analytical tool for road safety researchers and practitioners * Includes problems and solutions in each chapter as well as data and spreadsheets for running models and PowerPoint presentation slides * Features pedagogy well-suited for graduate courses and workshops including problems, solutions, and PowerPoint presentations * Equips readers to perform all analyses on a spreadsheet without requiring mastery of complex and costly software * Emphasizes understanding without esoteric mathematics * Makes assumptions visible and explains their role and consequences
Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. This book is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. This book serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to statistical methods and a theoretical linear models course. This book emphasizes the concepts and the analysis of data sets. It provides a review of the key concepts in simple linear regression, matrix operations, and multiple regression. Methods and criteria for selecting regression variables and geometric interpretations are discussed. Polynomial, trigonometric, analysis of variance, nonlinear, time series, logistic, random effects, and mixed effects models are also discussed. Detailed case studies and exercises based on real data sets are used to reinforce the concepts. John O. Rawlings, Professor Emeritus in the Department of Statistics at North Carolina State University, retired after 34 years of teaching, consulting, and research in statistical methods. He was instrumental in developing, and for many years taught, the course on which this text is based. He is a Fellow of the American Statistical Association and the Crop Science Society of America. Sastry G. Pantula is Professor and Director of Graduate Programs in the Department of Statistics at North Carolina State University. He is a member of the Academy of Outstanding Teachers at North Carolina State University. David A. Dickey is Professor of Statistics at North Carolina State University. He is a member of the Academy of Outstanding Teachers at North Carolina State University.
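The starting point of such a course, simple linear regression by ordinary least squares, fits in a few lines of code; the following is an illustrative sketch with made-up data, not material from the book.

```python
def ols_fit(x, y):
    """Return (intercept, slope) minimizing the residual sum of squares
    for the simple linear regression y = intercept + slope * x."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)               # sum of squares of x
    sxy = sum((xi - xbar) * (yi - ybar)
              for xi, yi in zip(x, y))                    # cross products
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope
```

On exactly linear data such as x = [0, 1, 2, 3], y = [1, 3, 5, 7], the fit recovers intercept 1 and slope 2, which makes the closed-form normal-equation solution easy to verify by hand.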
A treatment of estimating unknown parameters, testing hypotheses and estimating confidence intervals in linear models. Readers will find here presentations of the Gauss-Markoff model, the analysis of variance, the multivariate model, the model with unknown variance and covariance components and the regression model as well as the mixed model for estimating random parameters. A chapter on the robust estimation of parameters and several examples have been added to this second edition. The necessary theorems of vector and matrix algebra and the probability distributions of test statistics are derived so as to make this book self-contained. Geodesy students as well as those in the natural sciences and engineering will find the emphasis on the geodetic application of statistical models extremely useful.