I became interested in Random Vibration during the preparation of my PhD dissertation, which was concerned with the seismic response of nuclear reactor cores. I was initiated into this field through the classical books by Y. K. Lin, S. H. Crandall and a few others. After the completion of my PhD, in 1981, my supervisor M. Geradin encouraged me to prepare a course in Random Vibration for fourth and fifth year students in Aeronautics at the University of Liege. There was at the time very little material available in French on that subject. A first draft was produced during 1983 and 1984 and revised in 1986. These notes were published by the Presses Polytechniques et Universitaires Romandes (Lausanne, Switzerland) in 1990. When Kluwer decided to publish an English translation of the book in 1992, I had to choose between letting Kluwer translate the French text in extenso or doing it myself, which would allow me to carry out a substantial revision of the book. I took the second option and decided to rewrite or delete some of the original text and include new material, based on my personal experience or reflecting recent technical advances. Chapter 6, devoted to the response of multi-degree-of-freedom structures, has been completely rewritten, and Chapter 11 on random fatigue is entirely new. The computer programs which have been developed in parallel with these chapters have been incorporated in the general-purpose finite element software SAMCEF, developed at the University of Liege.
Providing researchers in economics, finance, and statistics with an up-to-date introduction to applying Bayesian techniques to empirical studies, this book covers the full range of the new numerical techniques which have been developed over the last thirty years, notably Monte Carlo sampling, antithetic replication, importance sampling, and Gibbs sampling. The author covers both advances in theory and modern approaches to numerical and applied problems, and includes applications drawn from a variety of different fields within economics, while also providing a quick overview of the underlying statistical ideas of Bayesian thought. The result is a book which presents a roadmap of applied economic questions that can now be addressed empirically with Bayesian methods. Many researchers will find this a readable survey of a growing topic.
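As a concrete flavor of one technique named above, here is a minimal sketch of Gibbs sampling, illustrative only and not drawn from the book: for a bivariate normal target with correlation rho, each coordinate is drawn in turn from its exact conditional distribution. All function names and parameter values here are our own.

```python
# Illustrative Gibbs sampler for a bivariate normal target with
# correlation rho: the full conditionals are x | y ~ N(rho*y, 1 - rho^2)
# and y | x ~ N(rho*x, 1 - rho^2), so we alternate exact draws.
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    x = y = 0.0                          # arbitrary starting point
    sd = np.sqrt(1.0 - rho ** 2)
    draws = np.empty((n_samples, 2))
    for i in range(n_samples):
        x = rng.normal(rho * y, sd)      # draw x given the current y
        y = rng.normal(rho * x, sd)      # draw y given the new x
        draws[i] = (x, y)
    return draws

draws = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
print(np.corrcoef(draws[2500:].T))       # after burn-in, correlation is close to 0.8
```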
First published in 2002. Routledge is an imprint of Taylor & Francis, an informa company.
'And I, ..., had I known how to come back from it, I would never have gone.' (Jules Verne) 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell) 'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside) Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series. This series, Mathematics and Its Applications, started in 1977. Now that over one hundred volumes have appeared it seems opportune to reexamine its scope. At the time I wrote: "Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the 'tree' of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely...
Aggregation plays a central role in many of the technological tasks we are faced with. The importance of this process will become even greater as we move more and more toward becoming an information-centered society, as is happening with the rapid growth of the Internet and the World Wide Web. Here we shall be faced with many issues related to the fusion of information. One very pressing issue is the development of mechanisms to help search for information, a problem that clearly has a strong aggregation-related component. More generally, in order to model the sophisticated ways in which human beings process information, as well as to go beyond human capabilities, we need to provide a basket of aggregation tools. The centrality of aggregation in human thought can be very clearly seen by looking at neural networks, a technology motivated by modeling the human brain: the basic operations involved in these networks are learning and aggregation. The Ordered Weighted Averaging (OWA) operators provide a parameterized family of aggregation operators which includes many of the well-known operators such as the maximum, minimum and the simple average.
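Since the blurb states exactly what an OWA operator does, a short sketch may help; this is our own illustration, not code from the book. The inputs are reordered from largest to smallest and combined with a fixed weight vector, and particular weight vectors recover the maximum, the minimum and the simple average.

```python
# Illustrative OWA operator: sort the arguments in descending order,
# then take the dot product with a weight vector w (non-negative,
# summing to 1). Special weight vectors recover max, min and mean.
import numpy as np

def owa(values, weights):
    b = np.sort(values)[::-1]                # reorder inputs, largest first
    return float(np.dot(weights, b))

a = [0.3, 0.9, 0.5, 0.1]
print(owa(a, [1, 0, 0, 0]))                  # w = (1,0,...,0)   -> max  = 0.9
print(owa(a, [0, 0, 0, 1]))                  # w = (0,...,0,1)   -> min  = 0.1
print(owa(a, [0.25, 0.25, 0.25, 0.25]))      # equal weights     -> mean = 0.45
```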
Cybersecurity Analytics is for the cybersecurity student and professional who wants to learn the data science techniques critical for tackling cybersecurity challenges, and for the data science student and professional who wants to learn about cybersecurity adaptations. Trying to build a malware detector or a phishing email detector, or just interested in finding patterns in your datasets? This book lets you do it on your own. Numerous examples and dataset links are included so that the reader can "learn by doing." Anyone with a basic college-level calculus course and some probability knowledge can easily understand most of the material. The book includes chapters covering unsupervised learning, semi-supervised learning, supervised learning, text mining, natural language processing, and more. It also includes background on security, statistics, and linear algebra. The website for the book contains a listing of datasets, updates, and other resources for serious practitioners.
"Examines classic algorithms, geometric diagrams, and mechanical principles for enhances visualization of statistical estimation procedures and mathematical concepts in physics, engineering, and computer programming."
Identifying the sources and measuring the impact of haphazard variations are important in any number of research applications, from clinical trials and genetics to industrial design and psychometric testing. Only in very simple situations can such variations be represented effectively by independent, identically distributed random variables or by random sampling from a hypothetical infinite population.
"Configural Frequency Analysis" (CFA) provides an up-to-the-minute
comprehensive introduction to its techniques, models, and
applications. Written in a formal yet accessible style, actual
empirical data examples are used to illustrate key concepts.
Step-by-step program sequences are used to show readers how to
employ CFA methods using commercial software packages, such as SAS,
SPSS, SYSTAT, S-Plus, or those written specifically to perform CFA.
"Configural Frequency Analysis" (CFA) provides an up-to-the-minute
comprehensive introduction to its techniques, models, and
applications. Written in a formal yet accessible style, actual
empirical data examples are used to illustrate key concepts.
Step-by-step program sequences are used to show readers how to
employ CFA methods using commercial software packages, such as SAS,
SPSS, SYSTAT, S-Plus, or those written specifically to perform CFA.
For almost fifty years, Richard M. Dudley has been extremely influential in the development of several areas of Probability. His work on Gaussian processes led to the understanding of the basic fact that their sample boundedness and continuity should be characterized in terms of proper measures of complexity of their parameter spaces equipped with the intrinsic covariance metric. His sufficient condition for sample continuity in terms of metric entropy is widely used and was proved by X. Fernique to be necessary for stationary Gaussian processes, whereas its more subtle versions (majorizing measures) were proved by M. Talagrand to be necessary in general. Together with V. N. Vapnik and A. Y. Cervonenkis, R. M. Dudley is a founder of the modern theory of empirical processes in general spaces. His work on uniform central limit theorems (under bracketing entropy conditions and for Vapnik-Cervonenkis classes) greatly extends classical results that go back to A. N. Kolmogorov and M. D. Donsker, and became the starting point of a new line of research, continued in the work of Dudley and others, that developed empirical processes into one of the major tools in mathematical statistics and statistical learning theory. As a consequence of Dudley's early work on weak convergence of probability measures on non-separable metric spaces, the Skorohod topology on the space of regulated right-continuous functions can be replaced, in the study of weak convergence of the empirical distribution function, by the supremum norm. In a further recent step Dudley replaces this norm by the stronger p-variation norms, which then allows replacing compact differentiability of many statistical functionals by Fréchet differentiability in the delta method. Richard M. Dudley has also made important contributions to mathematical statistics, the theory of weak convergence, relativistic Markov processes, differentiability of nonlinear operators and several other areas of mathematics. Professor Dudley has been the adviser to thirty PhDs and is a Professor of Mathematics at the Massachusetts Institute of Technology.
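The metric-entropy sufficient condition mentioned above has a compact standard form (stated here from the general literature, not quoted from this volume): if (X_t), t in T, is a centered Gaussian process, d is the intrinsic covariance metric, and N(T, d, eps) denotes the minimal number of eps-balls needed to cover T, then for a universal constant C,

```latex
\mathbb{E}\Big[\sup_{t \in T} X_t\Big]
  \;\le\; C \int_0^{\infty} \sqrt{\log N(T, d, \varepsilon)}\,\mathrm{d}\varepsilon,
\qquad
d(s,t) = \big(\mathbb{E}\,(X_s - X_t)^2\big)^{1/2},
```

and finiteness of the entropy integral guarantees sample boundedness, with a closely related integral condition giving sample continuity.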
The main purpose of this handbook is to summarize and to put in order the ideas, methods, results and literature on the theory of random evolutions and their applications to evolutionary stochastic systems in random media, and also to present some new trends in the theory of random evolutions and their applications. In physical language, a random evolution (RE) is a model for a dynamical system whose state of evolution is subject to random variations. Such systems arise in all branches of science: for example, random Hamiltonian and Schrödinger equations with random potential in quantum mechanics, Maxwell's equation with a random refractive index in electrodynamics, transport equations associated with the trajectory of a particle whose speed and direction change at random, etc. These are all examples of a single abstract situation in which an evolving system changes its "mode of evolution" or "law of motion" because of random changes of the "environment" or "medium." So, in mathematical language, a RE is a solution of stochastic operator integral equations in a Banach space. The operator coefficients of such equations depend on random parameters. Of course, in such generality, our equation includes any homogeneous linear evolving system. Particular examples of such equations were studied in physical applications many years ago. A general mathematical theory of such equations has been developed since 1969, the Theory of Random Evolutions.
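The transport example mentioned above is easy to simulate; the following minimal sketch (ours, with illustrative parameter names, not code from the handbook) follows a particle moving at speed v whose direction flips at the event times of a Poisson process with rate lam, the simplest case of a system whose "law of motion" changes with a random environment.

```python
# Illustrative telegraph-type random evolution: deterministic motion
# at speed v, with the direction reversed at Poisson(lam) event times.
import numpy as np

def telegraph_path(v=1.0, lam=2.0, t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x, direction = 0.0, 0.0, 1
    times, positions = [0.0], [0.0]
    while t < t_end:
        wait = rng.exponential(1.0 / lam)     # time until the next flip
        step = min(wait, t_end - t)
        x += direction * v * step             # deterministic motion between flips
        t += step
        times.append(t)
        positions.append(x)
        direction = -direction                # random change of the "law of motion"
    return np.array(times), np.array(positions)

times, positions = telegraph_path()
print(positions[-1])                          # particle position at t_end
```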
This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergences; estimation theory and regression, possibly under shape or other constraints or under long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailedness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems, including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.
Probabilistic modeling is a subject arising in many branches of mathematics, economics, and computer science; it connects pure mathematics with the applied sciences. Data analysis and statistics are likewise situated on the border between pure mathematics and applied sciences, so the meeting of probabilistic modeling and statistics has attracted much recent research. As these technologies spread through life and work, planning, timetabling, scheduling, decision making, optimization, simulation, data analysis, risk analysis, and process modeling have become essential in the workplace. However, many difficulties and challenges still arise in these sectors during planning and decision making, and there continues to be a need for more research on how such probabilistic modeling interacts with other approaches. Analyzing Data Through Probabilistic Modeling in Statistics is an essential reference source that builds on the available literature in the field of probabilistic modeling, statistics, operational research, planning and scheduling, data extrapolation in decision making, probabilistic interpolation and extrapolation in simulation, stochastic processes, and decision analysis. It provides the resources necessary for economics and management sciences and for mathematics and computer sciences. This book is ideal for technology developers, decision makers, mathematicians, statisticians, practitioners, stakeholders, researchers, academicians, and students looking to further their research exposure to pertinent topics in operations research and probabilistic modeling.
This book presents recent developments in multivariate and robust statistical methods. Featuring contributions by leading experts in the field, it covers various topics, including multivariate and high-dimensional methods, time series, graphical models, robust estimation, supervised learning and normal extremes. It will appeal to statistics and data science researchers, PhD students and practitioners who are interested in modern multivariate and robust statistics. The book is dedicated to David E. Tyler on the occasion of his pending retirement and also includes a review contribution on the popular Tyler's shape matrix.
This book aims to present the impact of Artificial Intelligence (AI) and Big Data in healthcare for medical decision making and data analysis in myriad fields including Radiology, Radiomics, Radiogenomics, Oncology, Pharmacology, COVID-19 prognosis, Cardiac imaging, Neuroradiology, Psychiatry and others. Topics include the Artificial Intelligence of Things (AIoT), Explainable Artificial Intelligence (XAI), distributed learning, the Blockchain of Internet of Things (BIoT), cybersecurity, and the Internet of (Medical) Things (IoT). Healthcare providers will learn how to leverage Big Data analytics and AI as a methodology for accurate analysis based on their clinical data repositories and clinical decision support. The capacity to recognize patterns and transform large amounts of data into usable information for precision medicine assists healthcare professionals in achieving these objectives. Intelligent Health has the potential to monitor patients at risk with underlying conditions and track their progress during therapy. Some of the greatest challenges in using these technologies stem from legal and ethical concerns about using medical data and about adequately representing and serving disparate patient populations. One major potential benefit of this technology is to make health systems more sustainable and standardized. Privacy and data security, establishing protocols, appropriate governance, and improving technologies will be among the crucial priorities for Digital Transformation in Healthcare.
Analysis of Failure and Survival Data is an essential textbook for graduate-level students of survival analysis and reliability and a valuable reference for practitioners. It focuses on the many techniques that appear in popular software packages, including plotting product-limit survival curves, hazard plots, and probability plots in the context of censored data. The author integrates S-Plus and Minitab output throughout the text, along with a variety of real data sets so readers can see how the theory and methods are applied. He also incorporates exercises in each chapter that provide valuable problem-solving experience.
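As a small illustration of the product-limit curves mentioned above (our own sketch in Python, rather than the S-Plus and Minitab output used in the book), the Kaplan-Meier estimate multiplies the survival probability by (1 - d/n) at each distinct event time, where d is the number of failures and n the number still at risk:

```python
# Illustrative product-limit (Kaplan-Meier) estimator for
# right-censored data.
import numpy as np

def kaplan_meier(times, event):
    """times: observed times; event: 1 if failure observed, 0 if censored."""
    order = np.argsort(times)
    times, event = np.asarray(times)[order], np.asarray(event)[order]
    n_at_risk = len(times)
    s, curve = 1.0, []
    for t in np.unique(times):
        at_t = times == t
        d = int(event[at_t].sum())           # failures at time t
        if d > 0:
            s *= 1.0 - d / n_at_risk         # product-limit update
            curve.append((float(t), s))
        n_at_risk -= int(at_t.sum())         # remove failures and censorings
    return curve

# Example with censored observations (event = 0):
print(kaplan_meier([2, 3, 3, 5, 7, 8], [1, 1, 0, 1, 0, 1]))
```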
This book deals with estimating and testing the probability of an event. It aims to provide practitioners with refined and easy-to-use techniques, as well as to initiate a new field of research in theoretical statistics. Practical, comprehensive tables for the data analysis of experimental investigations are included, as well as an accompanying CD-ROM with extensive tables of measurement intervals and prediction regions for testing. Statisticians and practitioners will find this book an essential reference.
The Structural Theory of Probability addresses the interpretation of probability, often debated in the scientific community. This problem has been examined for centuries; perhaps no other mathematical calculation suffuses mankind's efforts at survival as amply as probability. In the dawn of the 20th century David Hilbert included the foundations of the probability calculus within the most vital mathematical problems; Dr. Rocchi's topical and ever-timely volume proposes a novel, exhaustive solution to this vibrant issue. Paolo Rocchi, a versatile IBM scientist, outlines a new philosophical and mathematical approach inspired by well-tested software techniques. Through the prism of computer technology he provides an innovative view on the theory of probability. Dr. Rocchi discusses in detail the mathematical tools used to clarify the meaning of probability, integrating with care numerous examples and case studies. The comprehensiveness and originality of its mathematical development make this volume an inspiring read for researchers and students alike. From a review by the Mathematical Association of America Online: "[The author's] basic thesis is this: Probability theory from Pascal to Kolmogorov and onwards has focused on events as sets of outcomes or results, and probability as a measure attached to these sets. But this ignores the structure of the processes which lead to the outcomes, and the author explores how taking into account the details of the processes would lead to a more fundamental understanding of the nature of probability. This is an interesting idea, and the author makes it clear that at present this is a work in process and not yet a finished product, for he says that he has tried to give "an impulse in the right direction" with his theory. ... One hopes that in due course the author will develop his theories further and present overwhelmingly persuasive examples of the advantages of his approach." - Ramachandran Bharath
Written by one of the masters of the foundation of measurement, Louis Narens' new book thoroughly examines the basis for the measurement-theoretic concept of meaningfulness and presents a new theory about the role of numbers and invariance in science. The book associates with each portion of mathematical science a subject matter that the portion of science is intended to investigate or describe. It considers those quantitative or empirical assertions and relationships that belong to the subject matter to be meaningful (for that portion of science) and those that do not belong to be meaningless.
This book focuses on the meaning of statistical inference and estimation. Statistical inference is concerned with the problems of estimation of population parameters and testing hypotheses. Primarily aimed at undergraduate and postgraduate students of statistics, the book is also useful to professionals and researchers in statistical, medical, social and other disciplines. It discusses current methodological techniques used in statistics and related interdisciplinary areas. Every concept is supported with relevant research examples to help readers to find the most suitable application. Statistical tools have been presented by using real-life examples, removing the "fear factor" usually associated with this complex subject. The book will help readers to discover diverse perspectives of statistical theory followed by relevant worked-out examples. Keeping in mind the needs of readers, as well as constantly changing scenarios, the material is presented in an easy-to-understand form.
Classical statistical techniques fail to cope well with deviations from a standard distribution. Robust statistical methods take these deviations into account while estimating the parameters of parametric models, thus increasing the accuracy of the inference. Research into robust methods is flourishing, with new methods being developed and different applications considered. "Robust Statistics" sets out to explain the use of robust methods and their theoretical justification. It provides an up-to-date overview of the theory and practical application of robust statistical methods in regression, multivariate analysis, generalized linear models and time series. This unique book:
- Enables the reader to select and use the most appropriate robust method for their particular statistical model.
- Features computational algorithms for the core methods.
- Covers regression methods for data mining applications.
- Includes examples with real data and applications using the S-Plus robust statistics library.
- Describes the theoretical and operational aspects of robust methods separately, so the reader can choose to focus on one or the other.
- Is supported by a supplementary website featuring a time-limited S-Plus download, along with datasets and S-Plus code to allow the reader to reproduce the examples given in the book.
"Robust Statistics" aims to stimulate the use of robust methods as a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. It is ideal for researchers, practitioners and graduate students of statistics, electrical, chemical and biochemical engineering, and computer vision. There is also much to benefit researchers from other sciences, such as biotechnology, who need to use robust statistical methods in their work.
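For a flavor of the estimators such books treat, here is a minimal illustrative sketch (ours, not the authors' code, and using numpy rather than the S-Plus library mentioned above) of an M-estimate of location with Huber's weight function, computed by iteratively reweighted averaging; the tuning constant c = 1.345 is the usual choice giving roughly 95% efficiency at the normal model.

```python
# Illustrative Huber M-estimate of location via iteratively
# reweighted averaging, with scale estimated by the MAD.
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                              # robust starting point
    scale = np.median(np.abs(x - mu)) / 0.6745     # MAD estimate of sigma
    for _ in range(max_iter):
        r = np.abs(x - mu) / scale                 # standardized residuals
        w = np.minimum(1.0, c / np.maximum(r, 1e-12))  # Huber weights: 1 near mu, downweight outliers
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = np.concatenate([np.random.default_rng(0).normal(0, 1, 95), [50.0] * 5])
print(np.mean(data), huber_location(data))  # the mean is dragged by outliers; the M-estimate is not
```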
This book deals with the almost sure asymptotic behaviour of linearly transformed sequences of independent random variables, vectors and elements of topological vector spaces. The main subjects treated here, dealing with series of independent random elements in topological vector spaces (in particular, in sequence spaces) as well as with generalized summability methods, are: strong limit theorems for operator-normed (matrix-normed) sums of independent finite-dimensional random vectors and their applications; almost sure asymptotic behaviour of realizations of one-dimensional and multi-dimensional Gaussian Markov sequences; various conditions providing almost sure continuity of sample paths of Gaussian Markov processes; and almost sure asymptotic behaviour of solutions of one-dimensional and multi-dimensional stochastic recurrence equations of special interest. Many topics, especially those related to strong limit theorems for operator-normed sums of independent random vectors, appear in the monographic literature for the first time. Audience: The book is aimed at experts in probability theory, the theory of random processes and mathematical statistics who are interested in almost sure asymptotic behaviour in summability schemes, such as operator-normed sums and weighted sums. Numerous sections will be of use to those who work in Gaussian processes, stochastic recurrence equations, and probability theory in topological vector spaces. As the exposition of the material is consistent and self-contained, it can also be recommended as a textbook for university courses.
This volume of the Handbook is concerned particularly with the frequency-side, or spectrum, approach to time series analysis. This approach involves essential use of sinusoids and bands of (angular) frequency, with Fourier transforms playing an important role. A principal activity is thinking of systems, their inputs, outputs, and behavior in sinusoidal terms. In many cases, the frequency-side approach turns out to be simpler with respect to computational, mathematical, and statistical aspects. In the frequency approach, an assumption of stationarity is commonly made. However, the essential roles played by the techniques of complex demodulation and seasonal adjustment show that stationarity is far from being a necessary condition. Assumptions of Gaussianity and linearity are also commonly made and yet, as a variety of the papers illustrate, these assumptions are not necessary. This volume complements Handbook of Statistics 5: Time Series in the Time Domain.
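As a minimal illustration of the frequency-side viewpoint (our own sketch, not from the Handbook), the following computes the periodogram of a noisy sinusoid with the FFT; the spectral peak recovers the hidden frequency.

```python
# Illustrative periodogram of a noisy sinusoid: the frequency-side
# view locates the periodicity as a peak in the estimated spectrum.
import numpy as np

rng = np.random.default_rng(0)
n, f0 = 1024, 0.1                        # series length, true frequency (cycles/sample)
t = np.arange(n)
x = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1, n)

X = np.fft.rfft(x - x.mean())            # Fourier transform of the demeaned series
freqs = np.fft.rfftfreq(n)               # frequencies in cycles per sample
periodogram = (np.abs(X) ** 2) / n

print(freqs[np.argmax(periodogram)])     # ~0.1, the hidden periodicity
```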
You may like...
- Earthquake Hazard and Seismic Risk… — Serguei Balassanian, Armando Cisternas, … (Hardcover, R4,086)
- Life and Narrative - The Risks and… — Brian Schiff, A. Elizabeth McKim, … (Hardcover, R1,932)
- Showdown At The Red Lion - The Life And… — Charles Van Onselen (Paperback)
- Historic Columbus Crimes - Mama's in the… — David Meyers, Elise Meyers Walker (Paperback)