The main body of this book is devoted to statistical physics, whereas much less emphasis is given to thermodynamics. In particular, the idea is to present the most important outcomes of thermodynamics - most notably, the laws of thermodynamics - as conclusions from derivations in statistical physics. Special emphasis is placed on subjects that are vital to engineering education. These include, first of all, quantum statistics, such as the Fermi-Dirac distribution, as well as diffusion processes, both of which are fundamental to a sound understanding of semiconductor devices. Another important issue for electrical engineering students is an understanding of the mechanisms of noise generation and stochastic dynamics in physical systems, most notably in electric circuitry. Accordingly, the fluctuation-dissipation theorem of statistical mechanics, which is the theoretical basis for understanding thermal noise processes in systems, is presented from a signals-and-systems point of view, in a way that is readily accessible to engineering students and that connects with other courses in the electrical engineering curriculum, such as courses on random processes.
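For reference, the Fermi-Dirac distribution mentioned above has the standard textbook form (generic notation, not necessarily the book's):

$$ f(E) = \frac{1}{e^{(E-\mu)/k_B T} + 1}, $$

where $E$ is the single-particle energy, $\mu$ the chemical potential, $k_B$ Boltzmann's constant, and $T$ the absolute temperature. The occupation of electron states in a semiconductor, for example, follows this distribution.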
New Perspectives in Partial Least Squares and Related Methods shares original, peer-reviewed research from presentations during the 2012 partial least squares methods meeting (PLS 2012). This was the 7th meeting in the series of PLS conferences and the first to take place in the USA. PLS is an abbreviation for Partial Least Squares and is also sometimes expanded as projection to latent structures. It is an approach for modeling relations between data matrices of different types of variables measured on the same set of objects. The twenty-two papers in this volume, which include three invited contributions from our keynote speakers, provide a comprehensive overview of the current state of the most advanced research related to PLS and related methods. Prominent scientists from around the world took part in PLS 2012, and their contributions covered the multiple dimensions of partial least squares-based methods. These exciting theoretical developments ranged from partial least squares regression and correlation and component-based path modeling to regularized regression and subspace visualization. Following the tradition of the six previous PLS meetings, the contributions also included a large variety of PLS approaches, such as PLS metamodels, variable selection, sparse PLS regression, distance-based PLS, significance vs. reliability, and non-linear PLS. Finally, these contributions applied PLS methods to data ranging from traditional econometric/economic data to genomics data, brain images, information systems, epidemiology, and chemical spectroscopy. Such a broad and comprehensive volume will also encourage new uses of PLS models in work by researchers and students in many fields.
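For readers new to the method, the following minimal sketch shows a basic PLS regression between two data blocks measured on the same objects. It uses scikit-learn rather than any software from the volume, and the data are simulated for illustration only.

# Minimal PLS regression sketch (illustrative; not code from the volume).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))           # predictor block: 50 objects, 10 variables
Y = X[:, :2] @ rng.normal(size=(2, 3))  # response block driven by two latent directions
Y += 0.1 * rng.normal(size=Y.shape)

pls = PLSRegression(n_components=2)     # project both blocks onto 2 latent structures
pls.fit(X, Y)
print(pls.score(X, Y))                  # R^2 of the fitted relation between the blocks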
The book considers foundational thinking in quantum theory, focusing on the role of fundamental principles and principle thinking there, including thinking that leads to the invention of new principles, which, the book contends, is one of the ultimate achievements of theoretical thinking in physics and beyond. The focus on principles, prominent during the rise and in the immediate aftermath of quantum theory, has been uncommon in more recent discussions and debates concerning it. The book argues, however, that exploring fundamental principles and principle thinking is exceptionally helpful in addressing the key issues at stake in quantum foundations and the seemingly interminable debates concerning them. Principle thinking led to major breakthroughs throughout the history of quantum theory, beginning with the old quantum theory and quantum mechanics, the first definitive quantum theory, which it remains within its proper (nonrelativistic) scope. It has, the book also argues, been equally important in quantum field theory, which has been the frontier of quantum theory for quite a while now, and, more recently, in quantum information theory, where principle thinking was given new prominence. The approach allows the book to develop a new understanding of both the history and the philosophy of quantum theory, from Planck's quantum to the Higgs boson and beyond, and of the thinking of the key founding figures, such as Einstein, Bohr, Heisenberg, Schroedinger, and Dirac, as well as some more recent theorists. The book also extensively considers the nature of quantum probability, and contains a new interpretation of quantum mechanics, "the statistical Copenhagen interpretation." Overall, the book's argument is guided by what Heisenberg called "the spirit of Copenhagen," which is defined by three great divorces from the preceding foundational thinking in physics - reality from realism, probability from causality, and locality from relativity - and which defined the fundamental principles of quantum theory accordingly.
This text presents the two complementary aspects of thermal physics as an integrated theory of the properties of matter. Conceptual understanding is promoted by thorough development of basic concepts. In contrast to many texts, statistical mechanics, including discussion of the required probability theory, is presented first. This provides a statistical foundation for the concept of entropy, which is central to thermal physics. A unique feature of the book is the development of entropy based on Boltzmann's 1877 definition; this avoids contradictions or ad hoc corrections found in other texts. Detailed fundamentals provide a natural grounding for advanced topics, such as black-body radiation and quantum gases. An extensive set of problems (solutions are available for lecturers through the OUP website), many including explicit computations, advances the core content by probing essential concepts. The text is designed for a two-semester undergraduate course but can be adapted for one-semester courses emphasizing either aspect of thermal physics. It is also suitable for graduate study.
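Boltzmann's 1877 definition referred to above is, in standard modern notation (not necessarily the book's),

$$ S = k_B \ln W, $$

where $W$ is the number of microstates compatible with a given macrostate and $k_B$ is Boltzmann's constant.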
The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop the underlying theory for these methods, obtain small-sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures to Bayesian and big-data rank-based analyses. Areas of application include biostatistics and spatial statistics. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models, in both univariate and multivariate settings. With the development of R packages in these areas, the computation of these procedures is easily implemented and shared with readers. This book is developed from the International Conference on Robust Rank-Based and Nonparametric Methods, held at Western Michigan University in April 2015.
Stochastic analysis has a variety of applications to biological systems as well as physical and engineering problems, and its applications to finance and insurance have grown rapidly in recent years. The goal of this book is to present a broad overview of the range of applications of stochastic analysis and some of its recent theoretical developments. This includes numerical simulation, error analysis, and parameter estimation, as well as control and robustness properties for stochastic equations. The book also covers backward stochastic differential equations via the (non-linear) G-Brownian motion and the case of jump processes. Concerning the applications to finance, many of the articles deal with the valuation and hedging of credit risk in various forms, and include recent results on markets with transaction costs.
Box and Jenkins (1970) popularized the idea of obtaining a stationary time series by differencing a given, possibly nonstationary, time series. Numerous time series in economics are found to have this property. Subsequently, Granger and Joyeux (1980) and Hosking (1981) found examples of time series whose fractional difference becomes a short memory process, in particular a white noise, while the initial series has unbounded spectral density at the origin, i.e. exhibits long memory. Further examples of data exhibiting long memory were found in hydrology and in network traffic data, while in finance the phenomenon of strong dependence was established by the dramatic empirical success of long memory processes in modeling the volatility of asset prices and power transforms of stock market returns. At present there is a need for a text from which an interested reader can methodically learn the basic asymptotic theory and techniques found useful in the analysis of statistical inference procedures for long memory processes. This text makes an attempt in this direction. The authors provide, in a concise style, a graduate-level text summarizing theoretical developments for both short and long memory processes and their applications to statistics. The book also contains some real data applications and mentions some unsolved inference problems for interested researchers in the field.
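As a minimal illustration of the fractional differencing mentioned above (a generic numpy sketch, not code from the book), the weights of the operator (1 - B)^d can be generated recursively and applied to a series:

# Fractional differencing sketch: apply (1 - B)^d to a series (illustrative only).
import numpy as np

def frac_diff(x, d, n_weights=100):
    # Weights of (1 - B)^d: w[0] = 1, w[k] = w[k-1] * (k - 1 - d) / k
    w = np.ones(n_weights)
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # y[t] = sum_k w[k] * x[t - k], truncating the weight sequence at n_weights
    return np.array([w[:t + 1][::-1] @ x[max(0, t + 1 - n_weights):t + 1]
                     for t in range(len(x))])

x = np.cumsum(np.random.default_rng(1).normal(size=500))  # a nonstationary series, for illustration
y = frac_diff(x, d=0.4)  # choosing d to match the memory of the data yields a short memory process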
This is the first book to compare eight LDFs on different types of datasets, such as Fisher's iris data, medical data with collinearities, Swiss banknote data, which is linearly separable data (LSD), student pass/fail determination using student attributes, 18 pass/fail determinations using exam scores, Japanese automobile data, and six microarray datasets (the datasets) that are LSD. We developed the 100-fold cross-validation for the small sample method (Method 1) instead of the LOO method. We proposed a simple model selection procedure to choose the best model having the minimum M2, and Revised IP-OLDF based on the MNM criterion was found to be better than the other M2s on the above datasets. We compared two statistical LDFs and six MP-based LDFs: Fisher's LDF, logistic regression, three SVMs, Revised IP-OLDF, and two other OLDFs. Only a hard-margin SVM (H-SVM) and Revised IP-OLDF could discriminate LSD theoretically (Problem 2). We solved the defect of the generalized inverse matrices (Problem 3). For more than 10 years, many researchers have struggled to analyze the microarray dataset that is LSD (Problem 5). If we call the linearly separable model a "Matroska," the dataset consists of numerous smaller Matroskas nested within it. We developed the Matroska feature selection method (Method 2). It reveals the surprising structure of the dataset as the disjoint union of several small Matroskas. Our theory and methods reveal new facts of gene analysis.
How can large bonuses sometimes make CEOs less productive? Why is revenge so important to us? How can confusing directions actually help us? Why is there a difference between what we think will make us happy and what really makes us happy? In his groundbreaking book, Predictably Irrational, social scientist Dan Ariely revealed the multiple biases that lead us to make unwise decisions. Now, in The Upside of Irrationality, he exposes the surprising negative and positive effects irrationality can have on our lives. Focusing on our behaviors at work and in relationships, he offers new insights and eye-opening truths about what really motivates us on the job, how one unwise action can become a long-term bad habit, how we learn to love the ones we're with, and more. The Upside of Irrationality will change the way we see ourselves at work and at home--and cast our irrational behaviors in a more nuanced light.
Machine learning is concerned with the analysis of large data and multiple variables. However, it is also often more sensitive than traditional statistical methods for analyzing small data. The first volume reviewed subjects like optimal scaling, neural networks, factor analysis, partial least squares, discriminant analysis, canonical analysis, and fuzzy modeling. This second volume includes various clustering models, support vector machines, Bayesian networks, discrete wavelet analysis, genetic programming, association rule learning, anomaly detection, correspondence analysis, and other subjects. Both the theoretical bases and the step-by-step analyses are described for the benefit of non-mathematical readers. Each chapter can be studied without the need to consult other chapters. Traditional statistical tests are sometimes used as precursors to machine learning methods, and sometimes as contrast tests. To those wishing to obtain more knowledge of them, we recommend additionally studying (1) Statistics Applied to Clinical Studies 5th Edition 2012, (2) SPSS for Starters Part One and Two 2012, and (3) Statistical Analysis of Clinical Data on a Pocket Calculator Part One and Two 2012, written by the same authors and published by Springer, New York.
Inference in finite population sampling is a new development that is essential for the field of sampling. In addition to covering the majority of well-known sampling plans and procedures, this study covers the important topics of the superpopulation approach, randomized response, non-response, and resampling techniques. The authors also provide extensive sets of problems ranging in difficulty, making this book beneficial to students.
Epidemiologic Studies in Cancer Prevention and Screening is the first comprehensive overview of the evidence base for both cancer prevention and screening. This book is directed to the many professionals in government, academia, public health and health care who need up-to-date information on the potential for reducing the impact of cancer, including physicians, nurses, epidemiologists, and research scientists. The main aim of the book is to provide a realistic appraisal of the evidence for both cancer prevention and cancer screening. In addition, the book provides an accounting of the extent to which programs based on available knowledge have impacted populations. It does this through: 1. presentation of a rigorous and realistic evaluation of the evidence for population-based interventions in prevention of and screening for cancer, with particular relevance to those believed to be applicable now, or on the cusp of application; 2. evaluation of the relative contributions of prevention and screening; 3. discussion of how, within the health systems with which the authors are familiar, prevention and screening for cancer can be enhanced. An overview of the evidence base for cancer prevention and screening, as demonstrated in Epidemiologic Studies in Cancer Prevention and Screening, is critically important given current debates within the scientific community. Of the five components of cancer control - prevention, early detection (including screening), treatment, rehabilitation and palliative care - prevention is regarded as the most important. Yet the knowledge available to prevent many cancers is incomplete, and even if we know the main causal factors for a cancer, we often lack the understanding to put this knowledge into effect. Further, with the long natural history of most cancers, it could take many years to make an appreciable impact upon the incidence of the cancer. Because of these facts, many have come to believe that screening has the most potential for reduction of the burden of cancer. Yet, through trying to apply the knowledge gained on screening for cancer, the scientific community has recognized that screening can have major disadvantages and achieve little at substantial cost. This reduces the resources that are potentially available both for prevention and for treatment.
In this, his most famous work, Pierre-Simon, Marquis de Laplace lays out a system for reasoning based on probability. The single most famous piece introduced in this work is the rule of succession, which calculates the probability that a trial will be a success based on the number of times it has succeeded in the past. Students of mathematics will find A Philosophical Essay on Probabilities an essential read for understanding this complex field of study and applying its truths to their lives. French mathematician PIERRE-SIMON, MARQUIS DE LAPLACE (1749-1827) was essential in the formation of mathematical physics. He spent much of his life working on mathematical astronomy and even suggested the existence of black holes. Laplace is also known for his work on probability.
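For reference, the rule of succession described above states that if a trial has succeeded s times in n independent attempts, the probability that the next trial succeeds is

$$ P(\text{success on trial } n+1) = \frac{s+1}{n+2}. $$

Laplace's famous illustration is the sunrise problem: after the sun has risen on n consecutive days, the rule assigns probability (n+1)/(n+2) to its rising tomorrow.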
What are the current trends in housing? Is my planned project commercially viable? What should be my marketing and advertisement strategies? These are just some of the questions real estate agents, landlords and developers ask researchers to answer. But to find the answers, researchers are faced with a wide variety of methods that measure housing preferences and choices. To select and evaluate a valid research method, one needs a well-structured overview of the methods that are used in housing preference and housing choice research. This comprehensive introduction to the field offers just such an overview. It discusses and compares numerous methods, detailing the potential limitations of each one, and it reaches beyond methodology, illustrating how thoughtful consideration of methods and techniques in research can help researchers and other professionals to deliver products and services that are more in line with residents' needs.
This book treats the notion of morphisms in spatial analysis, paralleling these concepts in spatial statistics (Part I) and spatial econometrics (Part II). The principal concept is morphism (e.g., isomorphisms, homomorphisms, and allomorphisms), defined here as a structure-preserving functional linkage between mathematical properties or operations in spatial statistics and spatial econometrics, among other disciplines. The purpose of this book is to present selected conceptions in both domains that are structurally the same, even though their labelling and the notation for their elements may differ. As the approaches presented here are applied to empirical materials in geography and economics, the book will also be of interest to scholars of regional science, quantitative geography and the geospatial sciences. It is a follow-up to the book "Non-standard Spatial Statistics and Spatial Econometrics" by the same authors, which was published by Springer in 2011.
This book presents powerful techniques for solving global optimization problems on manifolds by means of evolutionary algorithms, and shows in practice how these techniques can be applied to solve real-world problems. It describes recent findings and well-known key facts in general and differential topology, revisiting them all in the context of application to current optimization problems. Special emphasis is put on game theory problems. Here, these problems are reformulated as constrained global optimization tasks and solved with the help of Fuzzy ASA. In addition, more abstract examples, including minimizations of well-known functions, are also included. Although the Fuzzy ASA approach has been chosen as the main optimizing paradigm, the book suggests that other metaheuristic methods could be used as well. Some of them are introduced, together with their advantages and disadvantages. Readers should possess some knowledge of linear algebra, and of basic concepts of numerical analysis and probability theory. Many necessary definitions and fundamental results are provided, with the formal mathematical requirements limited to a minimum, while the focus is kept firmly on continuous problems. The book offers a valuable resource for students, researchers and practitioners. It is suitable for university courses on optimization and for self-study.
This book offers hands-on statistical tools for business professionals by focusing on the practical application of a single-equation regression. The authors discuss commonly applied econometric procedures, which are useful in building regression models for economic forecasting and supporting business decisions. A significant part of the book is devoted to traps and pitfalls in implementing regression analysis in real-world scenarios. The book consists of nine chapters, the final two of which are fully devoted to case studies. Today's business environment is characterised by a huge amount of economic data. Making successful business decisions under such data-abundant conditions requires objective analytical tools, which can help to identify and quantify multiple relationships between dozens of economic variables. Single-equation regression analysis, which is discussed in this book, is one such tool. The book offers a valuable guide and is relevant in various areas of economic and business analysis, including marketing, financial and operational management.
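As a minimal illustration of the kind of single-equation regression discussed above, the sketch below fits a linear equation for a business outcome using statsmodels; the data are simulated and the variable names (price, adspend, sales) are hypothetical, not taken from the book's case studies.

# Single-equation regression sketch: sales explained by price and advertising spend.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
price = rng.uniform(10, 20, size=120)
adspend = rng.uniform(0, 5, size=120)
sales = 100 - 3 * price + 8 * adspend + rng.normal(scale=5, size=120)

X = sm.add_constant(np.column_stack([price, adspend]))  # intercept + two regressors
model = sm.OLS(sales, X).fit()
print(model.summary())  # coefficients, t-statistics, and R^2 for the single equation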
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem-solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. The previous editions of NICSO were held in Granada, Spain (2006), Acireale, Italy (2007), Tenerife, Spain (2008), and again in Granada in 2010. NICSO has evolved into one of the most interesting and distinctive workshops in nature-inspired computing. NICSO 2011 offered an inspiring environment for debating state-of-the-art ideas and techniques in nature-inspired cooperative strategies, and a comprehensive picture of recent applications of these ideas and techniques. The topics covered by this volume include Swarm Intelligence (such as Ant and Bee Colony Optimization), Genetic Algorithms, Multiagent Systems, Coevolution and Cooperation strategies, Adversarial Models, Synergic Building Blocks, Complex Networks, Social Impact Models, Evolutionary Design, Self Organized Criticality, Evolving Systems, Cellular Automata, Hybrid Algorithms, and Membrane Computing (P-Systems).
This book presents the breadth and diversity of empirical and practical work done on statistics education around the world. A wide range of methods is used to respond to the research questions that form its base. Both case studies of single students or teachers aimed at understanding reasoning processes and large-scale experimental studies attempting to generalize trends in the teaching and learning of statistics are employed. Various epistemological stances are described and utilized. The teaching and learning of statistics is presented in multiple contexts in the book. These include designed settings for young children, students in formal schooling, tertiary-level students, vocational schools, and teacher professional development. Diversity is also evident in the choices of what to teach (curriculum), when to teach (learning trajectory), how to teach (pedagogy), how to demonstrate evidence of learning (assessment), and what challenges teachers and students face when they solve statistical problems (reasoning and thinking).
This book collects some recent developments in stochastic control theory with applications to financial mathematics. We first address standard stochastic control problems from the viewpoint of the recently developed weak dynamic programming principle. A special emphasis is put on regularity issues and, in particular, on the behavior of the value function near the boundary. We then provide a quick review of the main tools from viscosity solutions which allow one to overcome all regularity problems. We next address the class of stochastic target problems, which extends standard stochastic control problems in a nontrivial way. Here the theory of viscosity solutions plays a crucial role in the derivation of the dynamic programming equation as the infinitesimal counterpart of the corresponding geometric dynamic programming equation. The various developments of this theory have been stimulated by applications in finance and by relevant connections with geometric flows. Namely, the second order extension was motivated by illiquidity modeling, and the controlled loss version was introduced following the problem of quantile hedging. The third part specializes to an overview of backward stochastic differential equations and their extensions to the quadratic case.
For the first two editions of the book Probability (GTM 95), each chapter included a comprehensive and diverse set of relevant exercises. While the work on the third edition was still in progress, it was decided that it would be more appropriate to publish a separate book that would comprise all of the exercises from previous editions, in addition to many new exercises. Most of the material in this book consists of exercises created by Shiryaev, collected and compiled over the course of many years while working on many interesting topics. Many of the exercises resulted from discussions that took place during special seminars for graduate and undergraduate students. Many of the exercises included in the book contain helpful hints and other relevant information. Lastly, the author has included an appendix at the end of the book that contains a summary of the main results, notation and terminology from Probability Theory that are used throughout the present book. This appendix also contains additional material from Combinatorics, Potential Theory and Markov Chains, which is not covered in the book, but is nevertheless needed for many of the exercises included here.
Reliability and Safety of Complex Technical Systems and Processes offers a comprehensive approach to the analysis, identification, evaluation, prediction and optimization of the operation, reliability and safety of complex technical systems. Its main emphasis is on multistate systems with ageing components, changes to their structure, and their components' reliability and safety parameters during the operation processes. The book presents integrated models for the reliability, availability and safety of complex non-repairable and repairable multistate technical systems, with reference to their operation processes and their practical applications to real industrial systems. The authors consider variables in different operation states, reliability and safety structures, and the reliability and safety parameters of components, as well as suggesting a cost analysis for complex technical systems. Researchers and industry practitioners will find information on a wide range of complex technical systems in this book. It may prove an easy-to-use guide to reliability and safety evaluations of real complex technical systems, both during their operation and at the design stage.
This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations. The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity, population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models. The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.
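A toy example of the kind of discrete-time, discrete-state formulation described above (a generic sketch with assumed rates, not the book's models or data): projecting abundance across three age classes over successive annual cycles.

# Toy discrete-time projection over three age classes (illustrative rates only).
import numpy as np

# First row: expected recruits per individual in each class;
# remaining entries: survival between / within classes.
A = np.array([[0.0, 1.2, 1.5],
              [0.5, 0.0, 0.0],
              [0.0, 0.7, 0.8]])

n = np.array([100.0, 50.0, 30.0])  # current abundance by age class
for _ in range(10):                # project ten annual cycles
    n = A @ n
print(n, n.sum())                  # abundance by class and total after ten years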
A well-balanced and accessible introduction to the elementary quantitative methods and Microsoft(R) Office Excel(R) applications used to guide business decision making. Featuring quantitative techniques essential for modeling modern business situations, Introduction to Quantitative Methods in Business: With Applications Using Microsoft(R) Office Excel(R) provides guidance on assessing real-world data sets using Excel. The book presents a balanced approach to the mathematical tools and techniques with applications used in the areas of business, finance, economics, marketing, and operations. The authors begin by establishing a solid foundation of basic mathematics and statistics before moving on to more advanced concepts. The first part of the book starts by developing basic quantitative techniques such as arithmetic operations, functions and graphs, elementary differentiation (rates of change), and integration. After a review of these techniques, the second part details both linear and nonlinear models of business activity. Extensively classroom-tested, Introduction to Quantitative Methods in Business: With Applications Using Microsoft(R) Office Excel(R) also includes:
* Numerous examples and practice problems that emphasize real-world business quantitative techniques and applications
* Excel-based computer software routines that explore calculations for an assortment of tasks, including graphing, formula usage, solving equations, and data analysis
* End-of-chapter sections detailing the Excel applications and techniques used to address data and solutions using large data sets
* A companion website that includes chapter summaries, Excel data sets, sample exams and quizzes, lecture slides, and an Instructor's Solutions Manual
Introduction to Quantitative Methods in Business: With Applications Using Microsoft(R) Office Excel(R) is an excellent textbook for undergraduate-level courses on quantitative methods in business, economics, finance, marketing, operations, and statistics. The book is also an ideal reference for readers with little or no quantitative background who require a better understanding of basic mathematical and statistical concepts used in economics and business. Bharat Kolluri, Ph.D., is Professor of Economics in the Department of Economics, Finance, and Insurance at the University of Hartford. A member of the American Economics Association, his research interests include econometrics, business statistics, quantitative decision making, applied macroeconomics, applied microeconomics, and corporate finance. Michael J. Panik, Ph.D., is Professor Emeritus in the Department of Economics, Finance, and Insurance at the University of Hartford. He has served as a consultant to the Connecticut Department of Motor Vehicles as well as to a variety of health care organizations. In addition, Dr. Panik is the author of numerous books, including Growth Curve Modeling: Theory and Applications and Statistical Inference: A Short Course, both published by Wiley. Rao N. Singamsetti, Ph.D., is Associate Professor in the Department of Economics, Finance, and Insurance at the University of Hartford. A member of the American Economics Association, his research interests include the status of the war on poverty in the United States since the 1960s and forecasting foreign exchange rates using econometric methods.