This book examines information processing performed by bio-systems at all scales: from genomes, cells and proteins to cognitive and even social systems. It introduces a theoretical/conceptual principle based on quantum information and non-Kolmogorov probability theory to explain information processing phenomena in biology as a whole. The book begins with an introduction followed by two chapters devoted to fundamentals: one covering classical and quantum probability, including a brief introduction to the quantum formalism, and another on an information approach to molecular biology, genetics and epigenetics. It then goes on to examine adaptive dynamics, including applications to biology, and non-Kolmogorov probability theory. Next, the book discusses the possibility of applying the quantum formalism to model biological evolution, especially at the cellular level: genetic and epigenetic evolution. It also presents a model of epigenetic cellular evolution based on the mathematical formalism of open quantum systems. The last two chapters of the book explore foundational problems of quantum mechanics and demonstrate the power of positive operator valued measures (POVMs) in biological science. This book will appeal to a diverse group of readers, including experts in biology, cognitive science, decision making, sociology, psychology, and physics; mathematicians working on problems of quantum probability and information; and researchers in quantum foundations.
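The POVM notion mentioned above has a simple defining property: a collection of positive operators that sum to the identity. Below is a generic textbook construction (the "trine" POVM on a qubit, not an example taken from the book) sketched in Python to make that property concrete:

```python
import math

# Trine POVM on a qubit: three elements E_k = (2/3)|psi_k><psi_k| built from
# equally spaced real states at angles 0, 120 and 240 degrees. Each element
# is positive, and the three must sum to the 2x2 identity matrix.
def projector(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [s * c, s * s]]

povm = [[[2 / 3 * x for x in row] for row in projector(2 * math.pi * k / 3)]
        for k in range(3)]

# Completeness check: the elementwise sum of the three operators.
total = [[sum(povm[k][i][j] for k in range(3)) for j in range(2)]
         for i in range(2)]
print(total)  # approximately [[1, 0], [0, 1]]
```

Unlike projective measurements, the three elements here are not orthogonal, which is exactly what makes POVMs the more flexible measurement model.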
The papers contained in this volume are an indication of the topics discussed and the interests of the participants of the 9th International Conference on Probability in Banach Spaces, held at Sandbjerg, Denmark, August 16-21, 1993. A glance at the table of contents indicates the broad range of topics covered at this conference. What defines research in this field is not so much the topics considered but the generality of the questions that are asked. The goal is to examine the behavior of large classes of stochastic processes and to describe it in terms of a few simple properties that the processes share. The reward of research like this is that occasionally one can gain deep insight, even about familiar processes, by stripping away details that in hindsight turn out to be extraneous. A good understanding of the disciplines involved in this field can be obtained from the recent book, Probability in Banach Spaces, Springer-Verlag, by M. Ledoux and M. Talagrand. On page 5 of that book there is a list of previous conferences in probability in Banach spaces, including the other eight international conferences. One can see that research in this field over the last twenty years has contributed significantly to knowledge in probability and has had important applications in many other branches of mathematics, most notably in statistics and functional analysis.
A comprehensive look at how probability and statistics are applied to the investment process. Finance has become increasingly quantitative, drawing on techniques in probability and statistics that many finance practitioners have not encountered before. In order to keep up, you need a firm understanding of this discipline. "Probability and Statistics for Finance" addresses this issue by showing you how to apply quantitative methods to portfolios, and to all aspects of your practice, in a clear, concise manner. Informative and accessible, this guide starts off with the basics and builds to an intermediate level of mastery. The book:
- Outlines an array of topics in probability and statistics and how to apply them in the world of finance
- Includes detailed discussions of descriptive statistics, basic probability theory, inductive statistics, and multivariate analysis
- Offers real-world illustrations of the issues addressed throughout the text
The authors cover a wide range of topics in this book, which can be used by all finance professionals as well as students aspiring to enter the field of finance.
Mathematical Statistics for Economics and Business, Second Edition, provides a comprehensive introduction to the principles of mathematical statistics which underpin statistical analyses in the fields of economics, business, and econometrics. The selection of topics in this textbook is designed to provide students with a conceptual foundation that will facilitate a substantial understanding of statistical applications in these subjects. This new edition has been updated throughout and now also includes a downloadable Student Answer Manual containing detailed solutions to half of the over 300 end-of-chapter problems. After introducing the concepts of probability, random variables, and probability density functions, the author develops the key concepts of mathematical statistics, most notably: expectation, sampling, asymptotics, and the main families of distributions. The latter half of the book is then devoted to the theories of estimation and hypothesis testing with associated examples and problems that indicate their wide applicability in economics and business. Features of the new edition include: a reorganization of topic flow and presentation to facilitate reading and understanding; inclusion of additional topics of relevance to statistics and econometric applications; a more streamlined and simple-to-understand notation for multiple integration and multiple summation over general sets or vector arguments; updated examples; new end-of-chapter problems; a solution manual for students; a comprehensive answer manual for instructors; and a theorem and definition map. This book has evolved from numerous graduate courses in mathematical statistics and econometrics taught by the author, and will be ideal for students beginning graduate study as well as for advanced undergraduates.
This book examines advanced Bayesian computational methods. It presents methods for sampling from posterior distributions and discusses how to compute posterior quantities of interest using Markov chain Monte Carlo (MCMC) samples. This book examines each of these issues in detail and heavily focuses on computing various posterior quantities of interest from a given MCMC sample. Several topics are addressed, including techniques for MCMC sampling, Monte Carlo methods for estimation of posterior quantities, improving simulation accuracy, marginal posterior density estimation, estimation of normalizing constants, constrained parameter problems, highest posterior density interval calculations, computation of posterior modes, and posterior computations for proportional hazards models and Dirichlet process models. The authors also discuss computations involving model comparisons, including both nested and non-nested models, marginal likelihood methods, ratios of normalizing constants, Bayes factors, the Savage-Dickey density ratio, stochastic search variable selection, Bayesian model averaging, the reversible jump algorithm, and model adequacy using predictive and latent residual approaches. The book presents an equal mixture of theory and applications involving real data. The book is intended as a graduate textbook or a reference book for a one-semester course at the advanced masters or Ph.D. level. It would also serve as a useful reference book for applied or theoretical researchers as well as practitioners. Ming-Hui Chen is Associate Professor of Mathematical Sciences at Worcester Polytechnic Institute. Qi-Man Shao is Assistant Professor of Mathematics at the University of Oregon. Joseph G. Ibrahim is Associate Professor of Biostatistics at the Harvard School of Public Health and Dana-Farber Cancer Institute.
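The MCMC sampling that the book builds on can be illustrated with a minimal random-walk Metropolis sampler. The target below is just a standard normal used as a stand-in "posterior" (an illustration, not an example from the book):

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def log_target(x):
    # Unnormalized log-density of the stand-in posterior, N(0, 1).
    return -0.5 * x * x

# Random-walk Metropolis: propose a Gaussian step, accept with
# probability min(1, target(proposal) / target(current)).
x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

kept = samples[10_000:]  # discard burn-in
posterior_mean = sum(kept) / len(kept)
print(round(posterior_mean, 2))  # a Monte Carlo estimate, close to the true mean 0
```

Estimating a posterior quantity (here the mean) by averaging over the retained MCMC draws is exactly the pattern the book's more advanced methods refine.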
This volume provides an overview of the field of Astrostatistics understood as the sub-discipline dedicated to the statistical analysis of astronomical data. It presents examples of the application of the various methodologies now available to current open issues in astronomical research. The technical aspects related to the scientific analysis of the upcoming petabyte-scale databases are emphasized given the importance that scalable Knowledge Discovery techniques will have for the full exploitation of these databases. Based on the 2011 Astrostatistics and Data Mining in Large Astronomical Databases conference and school, this volume gathers examples of the work by leading authors in the areas of Astrophysics and Statistics, including a significant contribution from the various teams that prepared for the processing and analysis of the Gaia data.
Researchers in a number of disciplines deal with large text sets requiring both text management and text analysis. Faced with a large amount of textual data collected in marketing surveys, literary investigations, historical archives and documentary databases, these researchers require assistance with organizing, describing and comparing texts. Exploring Textual Data demonstrates how exploratory multivariate statistical methods such as correspondence analysis and cluster analysis can be used to help investigate, assimilate and evaluate textual data. The main text does not contain any strictly mathematical demonstrations, making it accessible to a large audience; the book is very user-friendly, with proofs relegated to the appendices. Full definitions of concepts, implementations of procedures and rules for reading and interpreting results are fully explored. A succession of examples is intended to allow the reader to appreciate the variety of actual and potential applications and the complementary processing methods. A glossary of terms is provided.
The analysis of the characteristics of walks on ordinals is a powerful new technique for building mathematical structures, developed by the author over the last twenty years. This is the first book-length exposition of this method. Particular emphasis is placed on applications, which are presented in a unified and comprehensive manner and which stretch across several areas of mathematics such as set theory, combinatorics, general topology, functional analysis, and general algebra. The intended audience for this book is graduate students and researchers working in these areas who are interested in mastering and applying these methods.
This book was written for those who need to know how to collect,
analyze and present data. It is meant to be a first course for
practitioners, a book for private study or brush-up on statistics,
and supplementary reading for general statistics classes.
Bayesian analyses have made important inroads in modern clinical research due, in part, to the incorporation of the traditional tools of noninformative priors as well as the modern innovations of adaptive randomization and predictive power. Presenting an introductory perspective to modern Bayesian procedures, Elementary Bayesian Biostatistics explores Bayesian principles and illustrates their application to healthcare research. Building on the basics of classic biostatistics and algebra, this easy-to-read book provides a clear overview of the subject. It focuses on the history and mathematical foundation of Bayesian procedures, before discussing their implementation in healthcare research from first principles. The author also elaborates on the current controversies between Bayesian and frequentist biostatisticians. The book concludes with recommendations for Bayesians to improve their standing in the clinical trials community. Calculus derivations are relegated to the appendices so as not to overly complicate the main text. As Bayesian methods gain more acceptance in healthcare, it is necessary for clinical scientists to understand Bayesian principles. Applying Bayesian analyses to modern healthcare research issues, this lucid introduction helps readers make the correct choices in the development of clinical research programs.
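The kind of elementary Bayesian calculation such an introduction starts from is the conjugate beta-binomial update: a Beta prior on a response rate combined with binomial trial data via Bayes' theorem. The prior and the trial counts below are hypothetical, chosen only to show the mechanics:

```python
# Beta-binomial conjugate update (hypothetical numbers for illustration).
a_prior, b_prior = 1, 1          # uniform Beta(1, 1) prior on the response rate
successes, failures = 12, 8      # hypothetical clinical trial outcomes

# Conjugacy: the posterior is again a Beta, with counts simply added in.
a_post = a_prior + successes
b_post = b_prior + failures
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 13 / 22, roughly 0.59
```

No numerical integration is needed here; it is exactly this closed-form simplicity that makes conjugate priors the standard entry point before adaptive and simulation-based Bayesian methods.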
A carefully written text, suitable as an introductory course for second or third year students. The main scope of the text guides students towards a critical understanding and handling of data sets together with the ensuing testing of hypotheses. This approach distinguishes it from many other texts using statistical decision theory as their underlying philosophy. This volume covers concepts from probability theory, backed by numerous problems with selected answers.
Combinatorial (or discrete) optimization is one of the most active fields in the interface of operations research, computer science, and applied mathematics. Combinatorial optimization problems arise in various applications, including communications network design, VLSI design, machine vision, airline crew scheduling, corporate planning, computer-aided design and manufacturing, database query design, cellular telephone frequency assignment, constraint directed reasoning, and computational biology. Furthermore, combinatorial optimization problems occur in many diverse areas such as linear and integer programming, graph theory, artificial intelligence, and number theory. All these problems, when formulated mathematically as the minimization or maximization of a certain function defined on some domain, have a commonality of discreteness. Historically, combinatorial optimization starts with linear programming. Linear programming has an entire range of important applications including production planning and distribution, personnel assignment, finance, allocation of economic resources, circuit simulation, and control systems. Leonid Kantorovich and Tjalling Koopmans received the Nobel Prize (1975) for their work on the optimal allocation of resources. Two important discoveries, the ellipsoid method (1979) and interior point approaches (1984), both provide polynomial time algorithms for linear programming. These algorithms have had a profound effect in combinatorial optimization. Many polynomial-time solvable combinatorial optimization problems are special cases of linear programming (e.g. matching and maximum flow). In addition, linear programming relaxations are often the basis for many approximation algorithms for solving NP-hard problems (e.g. dual heuristics).
Markov decision process (MDP) models are widely used for modeling
sequential decision-making problems that arise in engineering,
economics, computer science, and the social sciences. Many
real-world problems modeled by MDPs have huge state and/or action
spaces, giving rise to the curse of dimensionality and thus
making practical solution of the resulting models intractable. In
other cases, the system of interest is too complex to allow
explicit specification of some of the MDP model parameters, but
simulation samples are readily available (e.g., for random
transitions and costs). For these settings, various sampling and
population-based algorithms have been developed to overcome the
difficulties of computing an optimal solution in terms of a policy
and/or value function. Specific approaches include adaptive
sampling, evolutionary policy iteration, evolutionary random policy
search, and model reference adaptive search.
This text is based on a set of notes produced for courses given for graduate students in mathematics, computer science and biochemistry during the academic year 1998-1999 at the University of Turku in Turku and at the Royal Institute of Technology (KTH) in Stockholm. The course in Turku was organized by Professor Mats Gyllenberg's group and was also included within the postgraduate program ComBi, a Graduate School in Computational Biology, Bioinformatics, and Biometry, directed by Professor Esko Ukkonen at the University of Helsinki. The purpose of the courses was to give a thorough and systematic introduction to probabilistic modelling in bioinformatics for advanced undergraduate and graduate students who had a fairly limited background in probability theory, but were otherwise well trained in mathematics and were already familiar with at least some of the techniques of algorithmic sequence analysis. Portions of the material have also been lectured at shorter graduate courses and seminars both in Finland and in Sweden. The initial set of notes circulated also for a time outside those two countries via the World Wide Web. The intermediate course in probability theory and techniques of discrete mathematics held by the author at the University College of Södertörn (Huddinge, Sweden) during the academic year 1997-1998 has also influenced the presentation. The opportunity to give this course is hereby gratefully acknowledged.
The standard approach of most introductory books for practical statistics is that readers first learn the minimum mathematical basics of statistics and rudimentary concepts of statistical methodology. They then are given examples of analyses of data obtained from natural and social phenomena so that they can grasp practical definitions of statistical methods. Finally they go on to acquaint themselves with statistical software for the PC and analyze similar data to expand and deepen their understanding of statistical methods. This book, however, takes a slightly different approach, using simulation data instead of actual data to illustrate the functions of statistical methods. Also, R programs listed in the book help readers see clearly how these methods work to bring the intrinsic values of data to the surface. R is free software enabling users to handle vectors, matrices, data frames, and so on. For example, when a statistical theory indicates that an event happens with a 5% probability, readers can confirm using R programs that this event actually occurs with roughly that probability, by handling data generated by pseudo-random numbers. Simulation gives readers populations with known backgrounds, and the nature of the population can be adjusted easily. This feature of the simulation data helps provide a clear picture of statistical methods painlessly. Most readers of introductory books of statistics for practical purposes do not like complex mathematical formulae, but they do not mind using a PC to produce various numbers and graphs by handling a huge variety of numbers. If they know the characteristics of these numbers beforehand, they treat them with ease. Struggling with actual data should come later. Conventional books on this topic frighten readers by presenting unidentified data to them indiscriminately. This book provides a new path to statistical concepts and practical skills in a readily accessible manner.
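The book uses R for its simulations; the same idea can be sketched in a few lines of Python: check that an event with theoretical probability 5% actually occurs with roughly that relative frequency in pseudo-random data.

```python
import random

random.seed(42)  # fixed seed so the experiment is reproducible

# Simulate 100,000 trials; the event "a uniform draw falls below 0.05"
# has theoretical probability exactly 5%.
n = 100_000
hits = sum(1 for _ in range(n) if random.random() < 0.05)

frequency = hits / n
print(frequency)  # empirical frequency, close to 0.05
```

Because the population (a uniform distribution) is fully known, the gap between the empirical frequency and the theoretical 5% is pure sampling noise, which is precisely the point the simulation-first approach tries to make vivid.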
In this volume prominent workers in the field discuss various time series methods in the time domain. The topics included are autoregressive-moving average models, control, estimation, identification, model selection, non-linear time series, non-stationary time series, prediction, robustness, sampling designs, signal attenuation, and speech recognition. This volume complements Handbook of Statistics 3: Time Series in the Frequency Domain.
The present textbook contains the records of a two-semester course on queueing theory, including an introduction to matrix-analytic methods. This course comprises four hours of lectures and two hours of exercises per week and has been taught at the University of Trier, Germany, for about ten years in sequence. The course is directed to last-year undergraduate and first-year graduate students of applied probability and computer science, who have already completed an introduction to probability theory. Its purpose is to present material that is close enough to concrete queueing models and their applications, while providing a sound mathematical foundation for the analysis of these. Thus the goal of the present book is two-fold. On the one hand, students who are mainly interested in applications easily feel bored by elaborate mathematical questions in the theory of stochastic processes. The presentation of the mathematical foundations in our courses is chosen to cover only the necessary results, which are needed for a solid foundation of the methods of queueing analysis. Further, students oriented towards applications expect to have a justification for their mathematical efforts in terms of immediate use in queueing analysis. This is the main reason why we have decided to introduce new mathematical concepts only when they will be used in the immediate sequel. On the other hand, students of applied probability do not want any heuristic derivations just for the sake of yielding fast results for the model at hand.
Pure and applied stochastic analysis and random fields form the subject of this book. The collection of articles on these topics represent the state of the art of the research in the field, with particular attention being devoted to stochastic models in finance. Some are review articles, others are original papers; taken together, they will apprise the reader of much of the current activity in the area.
Scientific visualization may be defined as the transformation of numerical scientific data into informative graphical displays. The text introduces a nonverbal model to subdisciplines that until now have mostly employed mathematical or verbal-conceptual models. The focus is on how scientific visualization can help revolutionize the manner in which the tendencies for (dis)similar numerical values to cluster together in location on a map are explored and analyzed. In doing so, the concept known as spatial autocorrelation - which characterizes these tendencies - is further demystified.
This book is the outcome of the CIMPA School on Statistical Methods and Applications in Insurance and Finance, held in Marrakech and Kelaat M'gouna (Morocco) in April 2013. It presents two lectures and seven refereed papers from the school, offering the reader important insights into key topics. The first of the lectures, by Frederic Viens, addresses risk management via hedging in discrete and continuous time, while the second, by Boualem Djehiche, reviews statistical estimation methods applied to life and disability insurance. The refereed papers offer diverse perspectives and extensive discussions on subjects including optimal control, financial modeling using stochastic differential equations, pricing and hedging of financial derivatives, and sensitivity analysis. Each chapter of the volume includes a comprehensive bibliography to promote further research.
This text, intended for a first course in performance evaluation, provides a self-contained treatment of all aspects of queueing theory. It starts by introducing readers to the terminology and usefulness of queueing theory and continues by considering Markovian queues in equilibrium, Little's law, reversibility, transient analysis and computation, and the M/G/1 queueing system. Subsequent chapters treat the theory of networks of queues and computational algorithms for networks of queues. Stochastic Petri networks, including those whose solutions can be given in product form, are covered in detail. A chapter on discrete-time queueing systems, which are of recent interest, discusses arrival processes, Geom/Geom/m queueing models, and case studies of discrete-time queueing networks arising in industrial applications. This third edition includes a new chapter on current models of network traffic as well as sixteen new homework problems on discrete-time models and a revised and updated set of references. The discussion of network traffic models includes a survey of continuous and discrete time models, a detailed discussion of burstiness, a complete introduction to self-similar traffic and a presentation of solution techniques. Solutions for all of the homework problems in this text are available in a separate volume.
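For the Markovian queues in equilibrium and Little's law mentioned above, the simplest worked case is the M/M/1 queue, whose equilibrium quantities have closed forms. The rates below are illustrative, not taken from the text:

```python
# M/M/1 queue in equilibrium (illustrative rates; lam < mu for stability).
lam, mu = 4.0, 5.0            # arrival rate and service rate

rho = lam / mu                # server utilization (0.8 here)
L = rho / (1 - rho)           # mean number of customers in the system
W = 1 / (mu - lam)            # mean time a customer spends in the system

# Little's law ties the two together: L = lam * W.
print(L, lam * W)  # both equal 4.0 for these rates
```

Note how sharply L grows as rho approaches 1: at 80% utilization the mean queue length is already 4, and at 95% it would be 19, which is the basic capacity-planning insight equilibrium queueing analysis delivers.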
Queueing is an aspect of modern life that we encounter at every step in our daily activities. Whether it happens at the checkout counter in the supermarket or in accessing the Internet, the basic phenomenon of queueing arises whenever a shared facility needs to be accessed for service by a large number of jobs or customers. The study of queueing is important as it provides both a theoretical background to the kind of service that we may expect from such a facility and the way in which the facility itself may be designed to provide some specified grade of service to its customers. Our study of queueing was basically motivated by its use in the study of communication systems and computer networks. The various computers, routers and switches in such a network may be modelled as individual queues. The whole system may itself be modelled as a queueing network providing the required service to the messages, packets or cells that need to be carried. Application of queueing theory provides the theoretical framework for the design and study of such networks. The purpose of this book is to support a course on queueing systems at the senior undergraduate or graduate levels. Such a course would then provide the theoretical background on which a subsequent course on the performance modelling and analysis of computer networks may be based.
In this book, we synthesize a rich and vast literature on econometric challenges associated with accounting choices and their causal effects. Identification and estimation of endogenous causal effects is particularly challenging as observable data are rarely directly linked to the causal effect of interest. A common strategy is to employ logically consistent probability assessment via Bayes' theorem to connect observable data to the causal effect of interest. For example, the implications of earnings management as equilibrium reporting behavior is a centerpiece of our explorations. Rather than offering recipes or algorithms, the book surveys our experiences with accounting and econometrics. That is, we focus on why rather than how. The book can be utilized in a variety of venues. On the surface it is geared toward graduate studies and surely this is where its roots lie. If we're serious about our studies, that is, if we tackle interesting and challenging problems, then there is a natural progression. Our research addresses problems that are not well understood, then incorporates them throughout our curricula as our understanding improves and to improve our understanding (in other words, learning and curriculum development are endogenous). For accounting to be a vibrant academic discipline, we believe it is essential these issues be confronted in the undergraduate classroom as well as graduate studies. We hope we've made some progress with examples which will encourage these developments.
Entropy optimization is a useful combination of classical engineering theory (entropy) with mathematical optimization. The resulting entropy optimization models have proved their usefulness with successful applications in areas such as image reconstruction, pattern recognition, statistical inference, queuing theory, spectral analysis, statistical mechanics, transportation planning, urban and regional planning, input-output analysis, portfolio investment, information analysis, and linear and nonlinear programming. While entropy optimization has been used in different fields, a good number of applicable solution methods have been loosely constructed without sufficient mathematical treatment. A systematic presentation with proper mathematical treatment of this material is needed by practitioners and researchers alike in all application areas. The purpose of this book is to meet this need. Entropy Optimization and Mathematical Programming offers perspectives that meet the needs of diverse user communities so that the users can apply entropy optimization techniques with complete comfort and ease. With this consideration, the authors focus on the entropy optimization problems in finite dimensional Euclidean space such that only some basic familiarity with optimization is required of the reader.
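The objective these models maximize is the Shannon entropy of a discrete distribution; on n outcomes the uniform distribution is the unique maximizer, with entropy log n. A small numerical check (the distributions are illustrative, not from the book):

```python
import math

def entropy(p):
    # Shannon entropy of a discrete distribution, in nats;
    # terms with zero probability contribute nothing.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform))  # log 4, about 1.3863 - the maximum on four outcomes
print(entropy(skewed))   # strictly smaller, about 0.9404
```

Entropy optimization models typically maximize this objective subject to linear moment constraints, which is what places them squarely in the mathematical-programming framework the book develops.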