Change-point problems arise in a variety of experimental and mathematical sciences, as well as in engineering and health sciences. This rigorously researched text provides a comprehensive review of recent probabilistic methods for detecting various types of possible changes in the distribution of chronologically ordered observations. Further developing the already well-established theory of weighted approximations and weak convergence, the authors provide a thorough survey of parametric and non-parametric methods, and of regression and time series models, together with sequential methods. All but the most basic models are carefully developed with detailed proofs and illustrated using a number of data sets.
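The kind of change-point detection this blurb describes can be illustrated with a minimal sketch (not taken from the book): a CUSUM-style scan for a single shift in mean over chronologically ordered observations. The data and the absence of a significance threshold are illustrative simplifications.

```python
# A minimal CUSUM-style change-point sketch: find the index k maximizing
# |S_k|, where S_k = sum_{i<=k} (x_i - mean(xs)). A large |S_k| suggests
# the mean differs before and after position k.

def cusum_changepoint(xs):
    """Return (k, |S_k|) for the index k with the largest absolute
    cumulative sum of deviations from the overall mean."""
    mean = sum(xs) / len(xs)
    s, best_k, best_val = 0.0, 0, 0.0
    for k, x in enumerate(xs, start=1):
        s += x - mean
        if abs(s) > best_val:
            best_val, best_k = abs(s), k
    return best_k, best_val

# Mean shifts from about 0 to about 3 after the 5th observation.
data = [0.1, -0.2, 0.0, 0.3, -0.1, 3.2, 2.9, 3.1, 3.0, 2.8]
k, stat = cusum_changepoint(data)
print(k)  # 5: the last index before the shift
```

A real analysis would compare the statistic against a threshold derived from its limiting distribution, which is exactly the kind of theory the book develops.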
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point alle.' Jules Verne 'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' Eric T. Bell 'The series is divergent; therefore we may be able to do something with it.' O. Heaviside Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and non-linearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quote on the right above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'etre of this series.
Gives a holistic approach to machine learning and data science applications, from design to deployment and quality assurance, as an overarching cyclical process; bridges machine learning and software engineering to build a shared set of best practices useful to both academia and industry; discusses deployment options for different types of models and data to help practitioners reason and make informed choices; and emphasizes the role of coding standards and software architecture alongside statistical rigour in implementing reproducible and scalable machine learning models. Key Features: * A complete guide to software engineering for machine learning and data science applications, from choosing the right hardware to analysing algorithms and designing scalable architectures. * Surveys the state of the art of the software and frameworks used to build and run machine learning applications, comparing and contrasting their trade-offs. * Comes with a complete case study in natural language understanding which illustrates the principles and the tools covered in the book; code is available from GitHub. * Provides a multi-disciplinary view of how traditional software engineering practices can be integrated with the workflows of domain experts and the unique characteristics of software in which data play a central role.
The use of statistics is fundamental to many endeavors in biology and geology. For students and professionals in these fields, there is no better way to build a statistical background than to present the concepts and techniques in a context relevant to their interests. Statistics with Applications in Biology and Geology provides a practical introduction to using fundamental parametric statistical models frequently applied to data analysis in biology and geology.
Contains a compact disc with nearly 200 microcomputer programs illustrating a wide range of reliability and statistical analyses. Mechanical Reliability Improvement presents probability and statistical concepts developed using pseudorandom numbers; enumeration-, simulation-, and randomization-based statistical analyses for comparing the test performance of alternative designs; and simulation- and randomization-based tests for examining the credibility of statistical presumptions. It also discusses centroid and moment-of-inertia analogies for the mean and variance, and the organizational structure of completely randomized, randomized complete block, and split-plot experiment test programs.
This book presents up-to-date mathematical results in asymptotic theory on nonlinear regression on the basis of various asymptotic expansions of least squares, its characteristics, and its distribution functions of functionals of Least Squares Estimator. It is divided into four chapters. In Chapter 1 assertions on the probability of large deviation of normal Least Squares Estimator of regression function parameters are made. Chapter 2 indicates conditions for Least Moduli Estimator asymptotic normality. An asymptotic expansion of Least Squares Estimator as well as its distribution function are obtained and two initial terms of these asymptotic expansions are calculated. Separately, the Berry-Esseen inequality for Least Squares Estimator distribution is deduced. In the third chapter asymptotic expansions related to functionals of Least Squares Estimator are dealt with. Lastly, Chapter 4 offers a comparison of the powers of statistical tests based on Least Squares Estimators. The Appendix gives an overview of subsidiary facts and a list of principal notations. Additional background information, grouped per chapter, is presented in the Commentary section. The volume concludes with an extensive Bibliography. Audience: This book will be of interest to mathematicians and statisticians whose work involves stochastic analysis, probability theory, mathematics of engineering, mathematical modelling, systems theory or cybernetics.
* How do preprocessing steps such as tokenization, stemming, and removing stop words affect predictive models? * Build beginning-to-end workflows for predictive modeling using text as features. * Compare traditional machine learning methods and deep learning methods for text data.
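The preprocessing steps named in the first question can be sketched in a few lines. The stop-word list and the suffix-stripping "stemmer" below are toy assumptions for illustration, not the behaviour of any particular library.

```python
# Toy text-preprocessing pipeline: tokenization, stop-word removal,
# and crude suffix stripping. Real pipelines would use a proper
# tokenizer and stemmer (e.g. Porter), but the stages are the same.

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in"}  # illustrative

def tokenize(text):
    """Lowercase, split on whitespace, and strip edge punctuation."""
    return [w.strip(".,;:!?") for w in text.lower().split()]

def stem(token):
    """Toy stemmer: strip a few common English suffixes."""
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text):
    return [stem(t) for t in tokenize(text) if t and t not in STOP_WORDS]

print(preprocess("The models were predicting outcomes of the trials."))
# -> ['model', 'were', 'predict', 'outcome', 'trial']
```

Because each stage changes which features reach the model, swapping any of them out (or reordering them) can measurably change predictive performance, which is the point of the question above.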
This second edition of "A Beginner's Guide to Finite Mathematics" takes a distinctly applied approach to finite mathematics at the freshman and sophomore level. Topics are presented sequentially: the book opens with a brief review of sets and numbers, followed by an introduction to data sets, histograms, means and medians. Counting techniques and the Binomial Theorem are covered, which provides the foundation for elementary probability theory; this, in turn, leads to basic statistics. This new edition includes chapters on game theory and financial mathematics. Requiring little mathematical background beyond high school algebra, the text will be especially useful for business and liberal arts majors.
This book illustrates the current work of leading multilevel modeling (MLM) researchers from around the world. The book's goal is to critically examine the real problems that occur when trying to use MLMs in applied research, such as power, experimental design, and model violations. This presentation of cutting-edge work and statistical innovations in multilevel modeling includes topics such as growth modeling, repeated measures analysis, nonlinear modeling, outlier detection, and meta analysis. This volume will be beneficial for researchers with advanced statistical training and extensive experience in applying multilevel models, especially in the areas of education; clinical intervention; social, developmental and health psychology, and other behavioral sciences; or as a supplement for an introductory graduate-level course.
First published in 2002. Routledge is an imprint of Taylor & Francis, an informa company.
The series is devoted to the publication of monographs and high-level textbooks in mathematics, mathematical methods and their applications. Apart from covering important areas of current interest, a major aim is to make topics of an interdisciplinary nature accessible to the non-specialist. The works in this series are addressed to advanced students and researchers in mathematics and theoretical physics. In addition, the series can serve as a guide for lectures and seminars on a graduate level. The series de Gruyter Studies in Mathematics was founded some 35 years ago by the late Professor Heinz Bauer and Professor Peter Gabriel with the aim of establishing a series of monographs and textbooks of high standard, written by scholars with an international reputation and presenting current fields of research in pure and applied mathematics. While the editorial board of the Studies has changed over the years, the aspirations of the Studies are unchanged. In times of rapid growth of mathematical knowledge, carefully written monographs and textbooks by experts are needed more than ever, not least to pave the way for the next generation of mathematicians. In this sense the editorial board and the publisher of the Studies are committed to continuing the Studies as a service to the mathematical community. Please submit any book proposals to Niels Jacob. Titles in planning include: Flavia Smarazzo and Alberto Tesei, Measure Theory: Radon Measures, Young Measures, and Applications to Parabolic Problems (2019); Elena Cordero and Luigi Rodino, Time-Frequency Analysis of Operators (2019); Mark M. Meerschaert, Alla Sikorskii, and Mohsen Zayernouri, Stochastic and Computational Models for Fractional Calculus, second edition (2020); Mariusz Lemanczyk, Ergodic Theory: Spectral Theory, Joinings, and Their Applications (2020); Marco Abate, Holomorphic Dynamics on Hyperbolic Complex Manifolds (2021); Miroslava Antic, Joeri Van der Veken, and Luc Vrancken, Differential Geometry of Submanifolds: Submanifolds of Almost Complex Spaces and Almost Product Spaces (2021); Kai Liu, Ilpo Laine, and Lianzhong Yang, Complex Differential-Difference Equations (2021); Rajendra Vasant Gurjar, Kayo Masuda, and Masayoshi Miyanishi, Affine Space Fibrations (2022).
Identifying the sources and measuring the impact of haphazard variations are important in any number of research applications, from clinical trials and genetics to industrial design and psychometric testing. Only in very simple situations can such variations be represented effectively by independent, identically distributed random variables or by random sampling from a hypothetical infinite population.
"Examines classic algorithms, geometric diagrams, and mechanical principles to enhance visualization of statistical estimation procedures and mathematical concepts in physics, engineering, and computer programming."
The 1952 Nobel physics laureate Felix Bloch (1905-83) was one of the titans of twentieth-century physics. He laid the fundamentals for the theory of solids and has been called the "father of solid-state physics." His numerous, valuable contributions include the theory of magnetism, measurement of the magnetic moment of the neutron, nuclear magnetic resonance, and the infrared problem in quantum electrodynamics. Statistical mechanics is a crucial subject which explores the understanding of the physical behaviour of the many-body systems that create the world around us. Bloch's first-year graduate course at Stanford University was the highlight for several generations of students. Upon his retirement, he worked on a book based on the course. Unfortunately, at the time of his death, the writing was incomplete. This book has been prepared by Professor John Dirk Walecka from Bloch's unfinished masterpiece. It also includes three sets of Bloch's handwritten lecture notes (dating from 1949, 1969 and 1976), and details of lecture notes taken in 1976 by Brian Serot, who gave an invaluable opinion of the course from a student's perspective. All of Bloch's problem sets, some dating back to 1933, have been included. The book is accessible to anyone in the physical sciences at the advanced undergraduate level or the first-year graduate level.
"Configural Frequency Analysis" (CFA) provides an up-to-the-minute, comprehensive introduction to its techniques, models, and applications. Written in a formal yet accessible style, it uses actual empirical data examples to illustrate key concepts. Step-by-step program sequences show readers how to employ CFA methods using commercial software packages, such as SAS, SPSS, SYSTAT, S-Plus, or those written specifically to perform CFA.
* 16 accompanying datasets across a wide range of contexts (e.g. academic, corporate, sports, marketing) * Clear step-by-step instructions on executing the analyses. * Clear guidance on how to interpret results. * Primary instruction in R but added sections for Python coders. * Discussion exercises and data exercises for each of the main chapters. * Final chapter of practice material and datasets ideal for class homework or project work.
I became interested in Random Vibration during the preparation of my PhD dissertation, which was concerned with the seismic response of nuclear reactor cores. I was initiated into this field through the classical books by Y.K. Lin, S.H. Crandall and a few others. After the completion of my PhD, in 1981, my supervisor M. Geradin encouraged me to prepare a course in Random Vibration for fourth and fifth year students in Aeronautics, at the University of Liege. There was at the time very little material available in French on that subject. A first draft was produced during 1983 and 1984 and revised in 1986. These notes were published by the Presses Polytechniques et Universitaires Romandes (Lausanne, Suisse) in 1990. When Kluwer decided to publish an English translation of the book in 1992, I had to choose between letting Kluwer translate the French text in extenso or doing it myself, which would allow me to carry out a substantial revision of the book. I took the second option and decided to rewrite or delete some of the original text and include new material, based on my personal experience, or reflecting recent technical advances. Chapter 6, devoted to the response of multi-degree-of-freedom structures, has been completely rewritten, and Chapter 11 on random fatigue is entirely new. The computer programs which have been developed in parallel with these chapters have been incorporated in the general purpose finite element software SAMCEF, developed at the University of Liege.
Most applications generate large datasets, such as social networking and social influence programs, smart city applications, smart house environments, Cloud applications, public web sites, scientific experiments and simulations, data warehouses, monitoring platforms, and e-government services. Data grows rapidly, since applications produce continuously increasing volumes of both unstructured and structured data. Large-scale interconnected systems aim to aggregate and efficiently exploit the power of widely distributed resources. In this context, major solutions for scalability, mobility, reliability, fault tolerance and security are required to achieve high performance and to create a smart environment. For data processing, transfer and storage, this means that existing approaches and solutions must be re-evaluated to better answer user needs. A variety of solutions for specific applications and platforms exist, so a thorough and systematic analysis of existing solutions for data science, data analytics, and the methods and algorithms used in Big Data processing and storage environments is significant in designing and implementing a smart environment. Fundamental issues pertaining to smart environments (smart cities, ambient assisted living, smart houses, greenhouses, cyber-physical systems, etc.) are reviewed. Most of the current efforts still do not adequately address the heterogeneity of different distributed systems, the interoperability between them, and the systems' resilience. This book primarily encompasses practical approaches that promote research in all aspects of data processing and data analytics in different types of systems: Cluster Computing, Grid Computing, Peer-to-Peer, and Cloud/Edge/Fog Computing, all involving elements of heterogeneity and a large variety of tools and software to manage them.
The main role of resource management techniques in this domain is to create the suitable frameworks for development of applications and deployment in smart environments, with respect to high performance. The book focuses on topics covering algorithms, architectures, management models, high performance computing techniques and large-scale distributed systems.
Focused on practical matters: this book will not cover Shiny concepts, but practical tools and methodologies to use for production. Based on experience: this book will be a formalization of several years of experience building Shiny applications. Original content: this book will present new methodology and tooling, not just do a review of what already exists.
Aggregation plays a central role in many of the technological tasks we are faced with. The importance of this process will become even greater as we move more and more toward becoming an information-centered society, as is happening with the rapid growth of the Internet and the World Wide Web. Here we shall be faced with many issues related to the fusion of information. One very pressing issue is the development of mechanisms to help search for information, a problem that clearly has a strong aggregation-related component. More generally, in order to model the sophisticated ways in which human beings process information, as well as to go beyond human capabilities, we need to provide a basket of aggregation tools. The centrality of aggregation in human thought can be very clearly seen by looking at neural networks, a technology motivated by modeling the human brain. One can see that the basic operations involved in these networks are learning and aggregation. The Ordered Weighted Averaging (OWA) operators provide a parameterized family of aggregation operators which include many of the well-known operators such as the maximum, the minimum and the simple average.
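The OWA family described above can be sketched in a few lines. The weight vectors below are the standard special cases the blurb mentions (maximum, minimum, simple average); the function itself is a generic sketch, not code from the book.

```python
# An Ordered Weighted Averaging (OWA) operator: the weights are applied
# to the *sorted* arguments, not to particular inputs. Concentrating the
# weight on the first (largest) position recovers max; on the last, min;
# uniform weights give the simple average.

def owa(values, weights):
    """Sort values in descending order, then take the weighted sum."""
    assert len(values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

data = [3.0, 9.0, 6.0]
print(owa(data, [1.0, 0.0, 0.0]))   # maximum -> 9.0
print(owa(data, [0.0, 0.0, 1.0]))   # minimum -> 3.0
print(owa(data, [1/3, 1/3, 1/3]))   # simple average -> 6.0
```

Intermediate weight vectors interpolate between these extremes, which is what makes the family "parameterized": a single weight vector encodes the aggregation attitude, from optimistic (max-like) to pessimistic (min-like).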
Analysis of Failure and Survival Data is an essential textbook for graduate-level students of survival analysis and reliability and a valuable reference for practitioners. It focuses on the many techniques that appear in popular software packages, including plotting product-limit survival curves, hazard plots, and probability plots in the context of censored data. The author integrates S-Plus and Minitab output throughout the text, along with a variety of real data sets so readers can see how the theory and methods are applied. He also incorporates exercises in each chapter that provide valuable problem-solving experience.
Interpreting Basic Statistics gives students valuable practice in interpreting statistical reporting as it actually appears in peer-reviewed journals. Features of the ninth edition: * Covers a broad array of basic statistical concepts, including topics drawn from the New Statistics * Up-to-date journal excerpts reflecting contemporary styles in statistical reporting * Strong emphasis on data visualization * Ancillary materials include data sets with almost two hours of accompanying tutorial videos, which will help students and instructors apply lessons from the book to real-life scenarios About this book Each of the 63 exercises in the book contains three central components: 1) an introduction to a statistical concept, 2) a brief excerpt from a published research article that uses the statistical concept, and 3) a set of questions (with answers) that guides students into deeper learning about the concept. The questions on the journal excerpts promote learning by helping students * interpret information in tables and figures, * perform simple calculations to further their interpretations, * critique data-reporting techniques, and * evaluate procedures used to collect data. The questions in each exercise are divided into two parts: (1) Factual Questions and (2) Questions for Discussion. The Factual Questions require careful reading for details, while the discussion questions show that interpreting statistics is more than a mathematical exercise. These questions require students to apply good judgment as well as statistical reasoning in arriving at appropriate interpretations. Each exercise covers a limited number of topics, making it easy to coordinate the exercises with lectures or a traditional statistics textbook.
Statisticians of the Centuries aims to demonstrate the achievements of statistics to a broad audience, and to commemorate the work of celebrated statisticians. This is done through short biographies that put the statistical work in its historical and sociological context, emphasizing contributions to science and society in the broadest terms rather than narrow technical achievement. The discipline is treated from its earliest times and only individuals born prior to the 20th Century are included. The volume arose through the initiative of the International Statistical Institute (ISI), the principal representative association for international statistics (founded in 1885). Extensive consultations within the statistical community, and with prominent members of ISI in particular, led to the names of the 104 individuals who are included in the volume. The biographies were contributed by 73 authors from across the world. The editors are the well-known statisticians Chris Heyde and Eugene Seneta. Chris Heyde is Professor of Statistics at both Columbia University in New York and the Australian National University in Canberra. He is also Director of the Center for Applied Probability at Columbia. He has twice served as Vice President of the ISI, and also as President of the ISI's Bernoulli Society. Eugene Seneta is Professor of Mathematical Statistics at the University of Sydney and a Member of the ISI. His historical writings focus on 19th Century France and the Russian Empire. He has taught courses on the history of probability-based statistics in U.S. universities. Both editors are Fellows of the Australian Academy of Science and have, at various times, been awarded the Pitman Medal of the Statistical Society of Australia for their distinguished research contributions.
This book presents new research in probability theory using ideas from mathematical logic. It is a general study of stochastic processes on adapted probability spaces, employing the concept of similarity of stochastic processes based on the notion of adapted distribution. The authors use ideas from model theory and methods from nonstandard analysis. The construction of spaces with certain richness properties, defined by insights from model theory, becomes easy using nonstandard methods, but remains difficult or impossible without them.