Images are all around us. The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something--an artery, a road, a DNA marker, an oil spill--from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However, this book does not so much focus on images per se, but rather on spatial data sets, with one or more measurements taken over a two- or higher-dimensional space, to which standard image-processing algorithms may not apply. This text develops many important data analysis methods for such statistical image problems. Examples abound throughout remote sensing (satellite data mapping, data assimilation, climate-change studies, land use), medical imaging (organ segmentation, anomaly detection), computer vision (image classification, segmentation), and other 2D/3D problems (biological imaging, porous media). The goal, then, of this text is to address methods for solving multidimensional statistical problems. The text strikes a balance between mathematics and theory on the one hand and applications and algorithms on the other, by deliberately developing the basic theory (Part I), the mathematical modeling (Part II), and the algorithmic and numerical methods (Part III) of solving a given problem. The particular emphases of the book include inverse problems, multidimensional modeling, random fields, and hierarchical methods.
Collecting together contributed lectures and mini-courses, this book details the research presented during a special semester titled "Geometric mechanics - variational and stochastic methods," held in the first half of 2015 at the Centre Interfacultaire Bernoulli (CIB) of the École Polytechnique Fédérale de Lausanne. The aim of the semester was to develop a common language needed to handle the wide variety of problems and phenomena occurring in stochastic geometric mechanics. It gathered mathematicians and scientists from several different areas of mathematics (from analysis, probability, numerical analysis, and statistics to algebra, geometry, topology, representation theory, and dynamical systems theory), as well as from mathematical physics, control theory, robotics, and the life sciences, with the aim of developing the new research area in a concentrated joint effort, from both the theoretical and applied points of view. The lectures were given by leading specialists in different areas of mathematics and its applications, building bridges among the various communities involved and working jointly on developing the envisaged new interdisciplinary subject of stochastic geometric mechanics.
This handbook presents a systematic overview of the approaches to, diversity of, and problems involved in interdisciplinary rating methodologies. Historically, the purpose of ratings has been to achieve information transparency regarding a given body's activities, whether in finance, banking, or sports, for example. This book focuses on commonly used rating methods in three important fields: finance, sports, and the social sector. In the world of finance, investment decisions are largely shaped by how positively or negatively economies or financial instruments are rated. Ratings have thus become a basis of trust for investors. Similarly, sports evaluation and funding are largely based on core ratings. From local communities to groups of nations, public investment and funding are also dependent on how these bodies are continuously rated against expected performance targets. As such, ratings need to reflect the consensus of all stakeholders on selected aspects of the work and on how to evaluate their success. The public should also have the opportunity to participate in this process. The authors examine current rating approaches, selecting from a variety of proposals those closest to the public consensus, analyzing the rating models, and summarizing the methods of their construction. This handbook offers a valuable reference guide for managers, analysts, economists, business informatics specialists, and researchers alike.
This book collects several contributions, written both by statisticians and medical doctors, which focus on the identification of new diagnostic, therapeutic, and organizational strategies to improve clinical outcomes for Acute Coronary Syndromes (ACS) patients. The work is structured in two parts: the first focuses on a cooperative project centered mainly on the statistical analysis of large clinical and administrative databases; the second addresses the development of innovative diagnostic techniques, with specific reference to genetics and proteomics, and the evolution of new imaging techniques for the early identification of patients at major risk of thrombotic or arrhythmic complications and of poor revascularization.
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book's structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in online software pages.
Written for corporate strategic planners and market researchers as well as students of management, this book offers the most complete introduction to the methodology and applications of ordinal time series analysis available in book form. Particularly useful for managers who seek a reliable and accessible means of analyzing the strategic performance of firms, products, industries, or political entities, the ordinal time series approach uses simple data, longitudinal analysis, and rank positions to produce results that more accurately reflect the dynamics of competitive position and corporate performance than those generated by more traditional methods that rely on absolute numbers and complicated analyses. The level of mathematical sophistication required is that of college introductory mathematics for business, making the methodology widely accessible. The contributors explain how to use the methodology and how to collect the appropriate data, review the statistical procedures involved, and examine numerous real-world applications of ordinal time series analysis. The book begins by introducing the notion of formalizing managerial intuition about strategic situations by employing rankings over time to describe the performance of products, firms, and departments, for example. Having established the advantages of using ordinal data, the contributors illustrate the use of rank statistics and show how to incorporate uncertain aspects of strategic situations in an ordinal context. A separate chapter covers information statistics that describe the aggregate behavior of a group of organizations over time. The contributors then present a series of examples demonstrating the wide applicability of ordinal time series analysis to various types of situations. Included are an analysis of the transportation industry over a 30-year period, an ordinal analysis of corporate performance, the application of ordinal analysis to the problem of product strategy, a look at world export activity, and an examination of international competition in the microelectronics industry. Throughout, particular attention is given to providing the reader with the background and information necessary to successfully employ ordinal time series methodology in his or her own environment.
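To make the ordinal idea concrete, here is a minimal sketch (not from the book; the firm names and revenue figures are hypothetical) that replaces absolute performance measures with rank positions in each period, after which the analysis operates on the ranks alone:

```python
# Minimal illustration of the ordinal time series idea: absolute performance
# figures are converted to rank positions per period (1 = best). Ties are
# ignored for simplicity. All data below are hypothetical.
import numpy as np

firms = ["A", "B", "C", "D"]
# Rows: periods (years); columns: firms.
revenue = np.array([
    [120,  95, 300, 210],
    [130, 150, 280, 205],
    [128, 180, 260, 240],
])

# Double argsort gives ascending rank positions 0..n-1; subtract from n so
# that the largest revenue in each period receives rank 1.
ranks = revenue.shape[1] - revenue.argsort(axis=1).argsort(axis=1)
for t, row in enumerate(ranks):
    print(f"period {t}: " + ", ".join(f"{f}={r}" for f, r in zip(firms, row)))
```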
This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable to the sample size but possibly also much larger. In this sense, the proposed theory can be called "essentially multiparametric." It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters.
This book provides an up-to-date review of the general principles of and techniques for confirmatory adaptive designs. Confirmatory adaptive designs are a generalization of group sequential designs. With these designs, interim analyses are performed in order to stop the trial prematurely under control of the Type I error rate. In adaptive designs, it is also permissible to perform a data-driven change of relevant aspects of the study design at interim stages. This includes, for example, a sample-size reassessment, a treatment-arm selection or a selection of a pre-specified sub-population. Essentially, this adaptive methodology was introduced in the 1990s. Since then, it has become popular and the object of intense discussion and still represents a rapidly growing field of statistical research. This book describes adaptive design methodology at an elementary level, while also considering designing and planning issues as well as methods for analyzing an adaptively planned trial. This includes estimation methods and methods for the determination of an overall p-value. Part I of the book provides the group sequential methods that are necessary for understanding and applying the adaptive design methodology supplied in Parts II and III of the book. The book contains many examples that illustrate use of the methods for practical application. The book is primarily written for applied statisticians from academia and industry who are interested in confirmatory adaptive designs. It is assumed that readers are familiar with the basic principles of descriptive statistics, parameter estimation and statistical testing. This book will also be suitable for an advanced statistical course for applied statisticians or clinicians with a sound statistical background.
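As a hedged illustration of the combination-testing idea behind such designs (the book's own notation and methods may differ), the following sketch implements the standard inverse-normal combination of two stage-wise one-sided p-values with prespecified weights; the p-values and weights shown are invented for the example:

```python
# Inverse-normal combination test: stage-wise p-values are combined with
# weights fixed in advance (squares summing to one), so that data-driven
# design changes at the interim do not inflate the Type I error rate.
from statistics import NormalDist

N = NormalDist()  # standard normal

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combined p-value from two stage-wise one-sided p-values (w1 + w2 = 1)."""
    z = (w1 ** 0.5) * N.inv_cdf(1 - p1) + (w2 ** 0.5) * N.inv_cdf(1 - p2)
    return 1 - N.cdf(z)

# Stage 1 is promising but not conclusive; stage 2 (possibly run with a
# re-estimated sample size) confirms. Reject if the combined p-value < 0.025.
print(inverse_normal_combination(0.08, 0.01))  # ~0.004 < 0.025: reject
```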
"Stochastic Tools in Mathematics and Science" covers basic stochastic tools used in physics, chemistry, engineering and the life sciences. The topics covered include conditional expectations, stochastic processes, Brownian motion and its relation to partial differential equations, Langevin equations, the Liouville and Fokker-Planck equations, as well as Markov chain Monte Carlo algorithms, renormalization, basic statistical mechanics, and generalized Langevin equations and the Mori-Zwanzig formalism. The applications include sampling algorithms, data assimilation, prediction from partial data, spectral analysis, and turbulence. The book is based on lecture notes from a class that has attracted graduate and advanced undergraduate students from mathematics and from many other science departments at the University of California, Berkeley. Each chapter is followed by exercises. The book will be useful for scientists and engineers working in a wide range of fields and applications. For this new edition the material has been thoroughly reorganized and updated, and new sections on scaling, sampling, filtering and data assimilation, based on recent research, have been added. There are additional figures and exercises. Review of earlier edition: "This is an excellent concise textbook which can be used for self-study by graduate and advanced undergraduate students and as a recommended textbook for an introductory course on probabilistic tools in science." Mathematical Reviews, 2006
This book describes methods for designing and analyzing experiments that are conducted using a computer code (a computer experiment) and, when possible, a physical experiment. Computer experiments continue to increase in popularity as surrogates for and adjuncts to physical experiments. Since the publication of the first edition, there have been many methodological advances and software developments to implement these new methodologies. The computer experiments literature has emphasized the construction of algorithms for various data analysis tasks (design construction, prediction, sensitivity analysis, and calibration, among others) and the development of web-based repositories of designs for immediate application. While it is written at a level accessible to readers with Master's-level training in statistics, the book contains sufficient detail to be useful for practitioners and researchers. New to this revised and expanded edition:
* An expanded presentation of basic material on computer experiments and Gaussian processes, with additional simulations and examples
* A new comparison of plug-in prediction methodologies for real-valued simulator output
* An enlarged discussion of space-filling designs, including Latin hypercube designs (LHDs), near-orthogonal designs, and nonrectangular regions
* A chapter-length description of process-based designs for optimization, improving overall fit, quantile estimation, and Pareto optimization
* A new chapter describing graphical and numerical sensitivity analysis tools
* Substantial new material on calibration-based prediction and inference for calibration parameters
* Lists of software that can be used to fit the models discussed in the book, to aid practitioners
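As a small illustration of one design concept named in the list above, here is a minimal sketch (not code from the book or its software lists) of a basic Latin hypercube design: each input dimension is divided into n equal strata, and each stratum receives exactly one point:

```python
# Basic Latin hypercube sample: a simple space-filling design for computer
# experiments. Each of the d dimensions is cut into n strata of width 1/n,
# and a random permutation assigns one point to each stratum per dimension.
import numpy as np

def latin_hypercube(n, d, rng):
    """Return n design points in [0, 1]^d, one per stratum per dimension."""
    jitter = rng.random((n, d))                              # position within stratum
    strata = np.array([rng.permutation(n) for _ in range(d)]).T
    return (strata + jitter) / n

rng = np.random.default_rng(0)
print(latin_hypercube(5, 2, rng))
```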
Contributions in this volume focus on computationally efficient algorithms and rigorous mathematical theories for analyzing large-scale networks. Researchers and students in mathematics, economics, statistics, computer science, and engineering will find this collection a valuable resource filled with the latest research in network analysis. Computational aspects and applications of large-scale networks in market models, neural networks, social networks, power transmission grids, the maximum clique problem, telecommunication networks, and complexity graphs are included, along with new tools for efficient analysis of large-scale networks. This proceedings volume is the result of the 7th International Conference in Network Analysis, held at the Higher School of Economics, Nizhny Novgorod, in June 2017. The conference brought together scientists, engineers, and researchers from academia, industry, and government.
The application of statistical methods to physics is essential. This unique book on statistical physics offers an advanced approach with numerous applications to the modern problems students are confronted with. The text therefore contains more concepts and methods in statistics than the student would need for statistical mechanics alone. Methods from mathematical statistics and stochastics for the analysis of data are discussed as well. The book is divided into two parts, focusing first on the modeling of statistical systems and then on the analysis of these systems. Problems with hints for solution help students to deepen their knowledge. The third edition has been updated and enlarged with new sections deepening the knowledge about data analysis. Moreover, a customized set of problems with solutions is accessible on the Web at extras.springer.com.
This book introduces advanced undergraduates, graduate students, and practitioners to statistical methods for ranking data. An important branch of nonparametric statistics is oriented toward the use of ranking data. Rank correlation is defined through the notion of distance functions, and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, the EM algorithm, and factor analysis. The book deals with the statistical methods used for analyzing such data and provides a novel and unifying approach to hypothesis testing. The techniques described in the book are illustrated with examples, and the statistical software is provided on the authors' website.
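To illustrate the distance-function view of rank correlation described above, here is a brief sketch (not the book's code; the example rankings are invented) computing Kendall's distance, the number of discordantly ordered item pairs, and the rank correlation obtained by rescaling it:

```python
# Rank correlation via a distance function: Kendall's distance counts the
# item pairs that two rankings order differently, and Kendall's tau is a
# rescaling of that distance to [-1, 1].
from itertools import combinations

def kendall_distance(r1, r2):
    """Number of item pairs ordered differently by the two rankings."""
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (r1.index(a) - r1.index(b)) * (r2.index(a) - r2.index(b)) < 0
    )

def kendall_tau(r1, r2):
    """Rank correlation obtained by rescaling the distance."""
    n, d = len(r1), kendall_distance(r1, r2)
    return 1 - 4 * d / (n * (n - 1))

judge1 = ["w", "x", "y", "z"]   # ranking by judge 1, best to worst
judge2 = ["x", "w", "z", "y"]   # ranking by judge 2
print(kendall_distance(judge1, judge2))  # 2 discordant pairs
print(kendall_tau(judge1, judge2))       # 1 - 4*2/(4*3) = 1/3
```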
The Bible's grand narrative about Israel's Exodus from Egypt is central to Biblical religion; Jewish, Christian, and Muslim identity; and the formation of the academic disciplines studying the ancient Near East. It has also been a pervasive theme in artistic and popular imagination. Israel's Exodus in Transdisciplinary Perspective is a pioneering work surveying this tradition in unprecedented breadth, combining archaeological discovery, quantitative methodology, and close literary reading. Archaeologists, Egyptologists, Biblical scholars, computer scientists, geoscientists, and other experts contribute their diverse approaches in a novel, transdisciplinary consideration of ancient topography, Egyptian and Near Eastern parallels to the Exodus story, the historicity of the Exodus, the interface of the Exodus question with archaeological fieldwork on emergent Israel, the formation of biblical literature, and the cultural memory of the Exodus in ancient Israel and beyond. This edited volume contains research presented at the groundbreaking symposium "Out of Egypt: Israel's Exodus Between Text and Memory, History and Imagination," held in 2013 at the Qualcomm Institute of the University of California, San Diego. The combination of 44 contributions by an international group of scholars from diverse disciplines makes this the first such transdisciplinary study of ancient text and history. In the original conference and in this new volume, revolutionary media, such as a 3D immersive virtual reality environment, impart innovative, Exodus-based research to a wider audience. Out of archaeology, ancient texts, science, and technology emerge an up-to-date picture of the Exodus for the 21st century and a new standard for collaborative research.
Recent years have seen significant advances in the use of risk analysis in many government agencies and private corporations. These advances are reflected both in the state of practice of risk analysis and in the status of governmental requirements and industry standards. Because current risk and reliability models are often used to support regulatory decisions, it is critical that the inference methods used in these models be robust and technically sound. The goal of Bayesian Inference for Probabilistic Risk Assessment is to provide a Bayesian foundation for framing probabilistic problems and performing inference on these problems. It is aimed at scientists and engineers who perform or review risk analyses, and it provides an analytical structure for combining data and information from various sources to generate estimates of the parameters of uncertainty distributions used in risk and reliability models. Inference in the book employs a modern computational approach known as Markov chain Monte Carlo (MCMC). MCMC methods were described in the early 1950s in research into Monte Carlo sampling at Los Alamos. Recently, with the advance of computing power and improved analysis algorithms, MCMC is increasingly being used for a wide range of Bayesian inference problems in a variety of disciplines. MCMC is effectively (although not literally) numerical (Monte Carlo) integration by way of Markov chains. Inference is performed by sampling from a specially constructed Markov chain, based upon the inference problem, whose stationary distribution is the posterior, until convergence to that distribution is achieved. The MCMC approach may be implemented using custom-written routines or existing general-purpose commercial or open-source software. This book uses an open-source program called OpenBUGS (the open-source successor to WinBUGS) to solve the inference problems that are described. A powerful feature of OpenBUGS is its automatic selection of an appropriate MCMC sampling scheme for a given problem. The approach taken in this book is to provide analysis "building blocks" that can be modified, combined, or used as-is to solve a variety of challenging problems. The MCMC approach used is implemented via textual scripts similar to a macro-type programming language. Accompanying each script is a graphical Bayesian network illustrating the elements of the script and the overall inference problem being solved. The book also covers the important topic of MCMC convergence.
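The following is a generic random-walk Metropolis sketch, not one of the book's OpenBUGS scripts, illustrating the sampling idea just described on a toy reliability problem: 3 failures in 20 demands with a flat Beta(1, 1) prior, so the exact posterior is Beta(4, 18):

```python
# Random-walk Metropolis: a Markov chain is constructed whose stationary
# distribution is the posterior, and samples are drawn until convergence.
# Toy data and prior are chosen so the answer can be checked analytically.
import math
import random

failures, demands = 3, 20

def log_posterior(p):
    """Binomial likelihood times a flat prior, up to an additive constant."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return failures * math.log(p) + (demands - failures) * math.log(1 - p)

random.seed(1)
p, samples = 0.5, []
for i in range(20000):
    proposal = p + random.gauss(0.0, 0.1)        # symmetric random-walk proposal
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(p):
        p = proposal                              # accept the move
    if i >= 5000:                                 # discard burn-in
        samples.append(p)

print(sum(samples) / len(samples))  # close to the posterior mean 4/22 = 0.18
```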
This book discusses the psychological traits associated with drug consumption through the statistical analysis of a new database with information on 1885 respondents and use of 18 drugs. After reviewing published works on the psychological profiles of drug users and describing the data mining and machine learning methods used, it demonstrates that the personality traits (five factor model, impulsivity, and sensation seeking) together with simple demographic data make it possible to predict the risk of consumption of individual drugs with a sensitivity and specificity above 70% for most drugs. It also analyzes the correlations of use of different substances and describes the groups of drugs with correlated use, identifying significant differences in personality profiles for users of different drugs. The book is intended for advanced undergraduates and first-year PhD students, as well as researchers and practitioners. Although no previous knowledge of machine learning, advanced data mining concepts or modern psychology of personality is assumed, familiarity with basic statistics and some experience in the use of probabilities would be helpful. For a more detailed introduction to statistical methods, the book provides recommendations for undergraduate textbooks.
This book offers a comprehensive and accessible exposition of Euclidean Distance Matrices (EDMs) and the rigidity theory of bar-and-joint frameworks. It is based on the one-to-one correspondence between EDMs and projected Gram matrices. Accordingly, the machinery of semidefinite programming is a common thread that runs throughout the book. As a result, two parallel approaches to rigidity theory are presented. The first is a traditional, more intuitive approach based on a vector representation of a point configuration. The second is based on a Gram matrix representation of a point configuration. Euclidean Distance Matrices and Their Applications in Rigidity Theory begins by establishing the necessary background needed for the rest of the book. The focus of Chapter 1 is on pertinent results from matrix theory, graph theory, and convexity theory, while Chapter 2 is devoted to positive semidefinite (PSD) matrices due to the key role these matrices play in the approach. Chapters 3 to 7 provide detailed studies of EDMs, and in particular their various characterizations, classes, eigenvalues, and geometry. Chapter 8 serves as a transitional chapter between EDMs and rigidity theory. Chapters 9 and 10 cover local and universal rigidities of bar-and-joint frameworks. This book is self-contained and should be accessible to a wide audience including students and researchers in statistics, operations research, computational biochemistry, engineering, computer science, and mathematics.
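As a numerical illustration of the EDM-Gram correspondence the book builds on, here is a minimal sketch (the point configuration is arbitrary, not an example from the book): the squared-distance matrix of a configuration is double-centered to recover the Gram matrix of the centered points:

```python
# For a point configuration X, the EDM has entries D_ij = ||x_i - x_j||^2.
# Double centering recovers the Gram matrix of the centered configuration:
# G = -1/2 * J D J, where J = I - (1/n) * 1 1^T is the centering projector.
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])  # 4 points in R^2
n = len(X)

sq_norms = (X ** 2).sum(axis=1)
D = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T   # squared-distance EDM

J = np.eye(n) - np.ones((n, n)) / n                        # centering projector
G = -0.5 * J @ D @ J                                       # projected Gram matrix

Xc = X - X.mean(axis=0)                                    # centered configuration
print(np.allclose(G, Xc @ Xc.T))                           # True: G = Xc Xc^T
```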
A ground-breaking and practical treatment of probability and stochastic processes "A Modern Theory of Random Variation" is a new and radical re-formulation of the mathematical underpinnings of subjects as diverse as investment, communication engineering, and quantum mechanics. Setting aside the classical theory of probability measure spaces, the book utilizes a mathematically rigorous version of the theory of random variation that bases itself exclusively on finitely additive probability distribution functions. In place of twentieth century Lebesgue integration and measure theory, the author uses the simpler concept of Riemann sums, and the non-absolute Riemann-type integration of Henstock. Readers are supplied with an accessible approach to standard elements of probability theory such as the central limit theorem and Brownian motion as well as remarkable, new results on Feynman diagrams and stochastic integrals. Throughout the book, detailed numerical demonstrations accompany the discussions of abstract mathematical theory, from the simplest elements of the subject to the most complex. In addition, an array of numerical examples and vivid illustrations showcase how the presented methods and applications can be undertaken at various levels of complexity. "A Modern Theory of Random Variation" is a suitable book for courses on mathematical analysis, probability theory, and mathematical finance at the upper-undergraduate and graduate levels. The book is also an indispensable resource for researchers and practitioners who are seeking new concepts, techniques and methodologies in data analysis, numerical calculation, and financial asset valuation. Patrick Muldowney, PhD, served as lecturer at the Magee Business School of the University of Ulster for over twenty years. Dr. Muldowney has published extensively in his areas of research, including integration theory, financial mathematics, and random variation.
This volume is composed of peer-reviewed papers that have developed from the First Conference of the International Society for NonParametric Statistics (ISNPS). This inaugural conference took place in Chalkidiki, Greece, June 15-19, 2012. It was organized with the co-sponsorship of the IMS, the ISI, and other organizations. M.G. Akritas, S.N. Lahiri, and D.N. Politis are the first executive committee members of ISNPS, and the editors of this volume. ISNPS has a distinguished Advisory Committee that includes Professors R. Beran, P. Bickel, R. Carroll, D. Cook, P. Hall, R. Johnson, B. Lindsay, E. Parzen, P. Robinson, M. Rosenblatt, G. Roussas, T. Subba Rao, and G. Wahba. The Charting Committee of ISNPS consists of more than 50 prominent researchers from all over the world. The chapters in this volume bring forth recent advances and trends in several areas of nonparametric statistics. In this way, the volume facilitates the exchange of research ideas, promotes collaboration among researchers from all over the world, and contributes to the further development of the field. The conference program included over 250 talks, including special invited talks, plenary talks, and contributed talks on all areas of nonparametric statistics. Out of these talks, some of the most pertinent ones have been refereed and developed into chapters that share both research and developments in the field.
This book presents state-of-the-art solution methods and applications of stochastic optimal control. It is a collection of extended papers discussed at the traditional Liverpool workshop on controlled stochastic processes, with participants from both the east and the west. New problems are formulated, and the progress of ongoing research is reported. Topics covered in this book include theoretical results and numerical methods for Markov and semi-Markov decision processes, optimal stopping of Markov processes, stochastic games, problems with partial information, optimal filtering, robust control, Q-learning, and self-organizing algorithms. Real-life case studies and applications, e.g., queueing systems, forest management, control of water resources, marketing science, and healthcare, are presented. Scientific researchers and postgraduate students interested in stochastic optimal control, as well as practitioners, will find this book appealing and a valuable reference.
Since the early eighties, Ali Suleyman Ustunel has been one of the main contributors to the field of Malliavin calculus. In a workshop held in Paris in June 2010, several prominent researchers gave exciting talks in honor of his 60th birthday. The present volume includes scientific contributions from this workshop.
This book compiles and presents new developments in statistical causal inference. The accompanying data and computer programs are publicly available so readers may replicate the model development and data analysis presented in each chapter. In this way, methodology is taught so that readers may implement it directly. The book brings together experts engaged in causal inference research to present and discuss recent issues in causal inference methodological development. This is also a timely look at causal inference applied to scenarios that range from clinical trials to mediation and public health research more broadly. In an academic setting, this book will serve as a reference and guide to a course in causal inference at the graduate level (Master's or Doctorate). It is particularly relevant for students pursuing degrees in statistics, biostatistics, and computational biology. Researchers and data analysts in public health and biomedical research will also find this book to be an important reference.
Presents a unique study of Integrative Problem-Solving (IPS). The consideration of 'Decadence' is essential in the scientific study of environmental and other problems and their rigorous solution, because the broad context within which the problems emerge can affect their solution. Stochastic reasoning underlies the conceptual and methodological framework of IPS, and its formulation has a mathematical life of its own that accounts for the multidisciplinarity of real-world problems, the multisourced uncertainties characterizing their solution, and the different thinking modes of the people involved. Only by interpolating between the full range of disciplines (including stochastic mathematics, physical science, neuropsychology, philosophy, and sociology) and the associated thinking modes can scientists arrive at a satisfactory account of problem-solving and be able to distinguish between a technically complete problem-solution and a solution that has social impact.
This book focuses on the application and development of information geometric methods in the analysis, classification, and retrieval of images and signals. It provides introductory chapters to help those new to information geometry, and applies the theory to several applications. This area has developed rapidly over recent years, propelled by major theoretical developments in information geometry, efficient data and image acquisition, and the desire to process and interpret large databases of digital information. The book addresses both the transfer of methodology to practitioners involved in database analysis and its efficient computational implementation.