Probability & Statistics
This monograph is intended for scientists and TCAD engineers who are interested in physics-based simulation of Si and SiGe devices. The common theoretical background of the drift-diffusion, hydrodynamic, and Monte-Carlo models and their synergy are discussed and it is shown how these models form a consistent hierarchy of simulation tools. The basis of this hierarchy is the full-band Monte-Carlo device model which is discussed in detail, including its numerical and stochastic properties. The drift-diffusion and hydrodynamic models for large-signal, small-signal, and noise analysis are derived from the Boltzmann transport equation in such a way that all transport and noise parameters can be obtained by Monte-Carlo simulations. With this hierarchy of simulation tools the device characteristics of strained Si MOSFETs and SiGe HBTs are analysed and the accuracy of the momentum-based models is assessed by comparison with the Monte-Carlo device simulator.
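For readers new to the lowest rung of this hierarchy: the drift-diffusion model expresses the carrier current as a drift term plus a diffusion term. A standard textbook statement (our addition, not a formula quoted from the monograph) is:

```latex
% Drift-diffusion electron current density, with the Einstein relation
% linking the diffusivity D_n to the mobility mu_n:
\[
  J_n = q\,\mu_n\, n\, E \;+\; q\, D_n \,\frac{\partial n}{\partial x},
  \qquad
  D_n = \frac{k_B T}{q}\,\mu_n .
\]
```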
Spatial data analysis is a fast growing area and Voronoi diagrams provide a means of naturally partitioning space into subregions to facilitate spatial data manipulation, modelling of spatial structures, pattern recognition and locational optimization. With such versatility, the Voronoi diagram and its relative, the Delaunay triangulation, provide valuable tools for the analysis of spatial data. In this fully updated second edition the authors provide an up-to-date and comprehensive unification of all the previous literature on the subject of Voronoi diagrams. Features: expands on the highly acclaimed first edition; provides an up-to-date and comprehensive survey of the existing literature on Voronoi diagrams; includes a useful compendium of applications; contains an extensive bibliography. The authors guide the reader through all the necessary mathematical background, before introducing a number of generalizations of Voronoi diagrams in Chapter 3. The subsequent chapters cover algorithms, random Voronoi diagrams, spatial interpolation, multivariate data manipulation, spatial process models, point pattern analysis and locational optimization. Emphasis of a particular perspective is deliberately avoided in order to provide a comprehensive and balanced treatment of the topic. A wide range of applications is discussed, enabling this book to serve as an important reference volume on the topic. The text will appeal to students and researchers studying spatial data in a number of areas, in particular applied probability, computational geometry and Geographic Information Science (GIS). This book will appeal equally to those whose interests in Voronoi diagrams are theoretical, practical or both.
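As a minimal illustration of the two structures the book is built around (our sketch, assuming SciPy is available; not code from the book):

```python
# Compute the Voronoi diagram and Delaunay triangulation of a small point set.
import numpy as np
from scipy.spatial import Voronoi, Delaunay

points = np.random.default_rng(0).random((10, 2))  # 10 random sites in the unit square
vor = Voronoi(points)
tri = Delaunay(points)

print(vor.vertices)    # coordinates of the Voronoi vertices
print(vor.regions)     # each cell as a list of vertex indices (-1 marks an unbounded cell)
print(tri.simplices)   # Delaunay triangles as index triples into `points`
```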
The present text introduces the student to the basic ideas of estimation and hypothesis testing early in the course after a rather brief introduction to data organization and some simple ideas about probability. Estimation and hypothesis testing are discussed in terms of the two-sample problem. The book exploits nonparametric ideas that rely on nothing more complicated than sample differences Y-X, referred to as elementary estimates, to define the Wilcoxon-Mann-Whitney test statistics and the related point and interval estimates. The ideas behind elementary estimates are then applied to the one-sample problem and to linear regression and rank correlation. Discussion of the Kruskal-Wallis and Friedman procedures for the k-sample problem rounds out the nonparametric coverage. The concluding chapters provide a discussion of Chi-square tests for the analysis of categorical data and introduce the student to the analysis of binomial data including the computation of power and sample size. Most chapters in the book have an appendix discussing relevant Minitab commands.
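The elementary-estimates idea is easy to state in code. A minimal sketch (illustrative data and variable names are ours, not the book's):

```python
# All m*n pairwise differences Y - X ("elementary estimates") yield both the
# Mann-Whitney U statistic and the Hodges-Lehmann point estimate of the shift.
import numpy as np

x = np.array([1.8, 2.1, 2.4, 2.9, 3.1])    # sample X
y = np.array([2.6, 3.0, 3.4, 3.7, 4.2])    # sample Y

diffs = y[:, None] - x[None, :]             # the elementary estimates Y - X
u_stat = np.sum(diffs > 0)                  # Mann-Whitney U (ties ignored for brevity)
shift_hat = np.median(diffs)                # Hodges-Lehmann estimate of the shift
print(u_stat, shift_hat)
```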
Scan statistics is currently one of the most active and important areas of research in applied probability and statistics, having applications to a wide variety of fields: archaeology, astronomy, bioinformatics, biosurveillance, molecular biology, genetics, computer science, electrical engineering, geography, material sciences, physics, reconnaissance, reliability and quality control, telecommunication, and epidemiology. Filling a gap in the literature, this self-contained volume brings together a collection of selected chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.
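For readers meeting the term for the first time, the basic object is simple: a scan statistic is the maximum event count over all windows of a fixed length. A minimal sketch (our illustration, not from the book):

```python
# Fixed-window scan statistic: the maximum number of events in any
# window of length w. Large values suggest clustering.
import numpy as np

rng = np.random.default_rng(1)
events = np.sort(rng.uniform(0.0, 100.0, size=50))  # 50 event times on [0, 100]
w = 5.0                                             # window length

# It suffices to check windows starting at each event time:
counts = np.searchsorted(events, events + w) - np.arange(len(events))
print(counts.max())   # compare against its null distribution to detect clustering
```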
Knowledge acquisition is one of the most important aspects influencing the quality of methods used in artificial intelligence and the reliability of expert systems. The various issues dealt with in this volume concern many different approaches to the handling of partial knowledge and to the ensuing methods for reasoning and decision making under uncertainty, as applied to problems in artificial intelligence. The volume is composed of the invited and contributed papers presented at the Workshop on Mathematical Models for Handling Partial Knowledge in Artificial Intelligence, held at the Ettore Majorana Center for Scientific Culture of Erice (Sicily, Italy) on June 19-25, 1994, in the framework of the International School of Mathematics "G. Stampacchia". It also includes a transcription of the roundtable held during the workshop to promote discussion of fundamental issues, since in the choice of invited speakers we have tried to maintain a balance between the various schools of knowledge and uncertainty modeling. Choquet expected utility models are discussed in the paper by Alain Chateauneuf: they allow the separation of the perception of uncertainty or risk from the valuation of outcomes, and can be of help in decision making. Petr Hajek shows that reasoning in fuzzy logic may be put on a strict logical (formal) basis, so contributing to our understanding of what fuzzy logic is and what one is doing when applying fuzzy reasoning.
This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. It is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting, while Part II will present multivariate statistical methods and more advanced forecasting techniques. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national government, and education. A small subset of the package's users includes the major clearing banks, the BBC, British Gas, British Airways, British Telecom, the Consumer Association, Eurotunnel, GSK, TfL, the NHS, Shell, Unilever, and W.H.S. Although the emphasis in this guide is on applications of IBM SPSS Statistics, users need to be aware of the statistical assumptions and rationales underpinning correct and meaningful application of the techniques available in the package; therefore, such assumptions are discussed, and methods of assessing their validity are described. Also presented is the logic underlying the computation of the more commonly used test statistics in the area of hypothesis testing. Mathematical background is kept to a minimum.
During the past decade interest in quality management has greatly increased. One of the central elements of Total Quality Management is Statistical Process Control, more commonly known as SPC. This book describes the pitfalls and traps which businesses encounter when implementing and assuring SPC. Illustrations are given from practical experience in various companies. The following subjects are discussed: implementation of SPC, activity plan for achieving statistically controlled processes, statistical tools, and lastly, consolidation and improvement of the results. Also, an extensive checklist is provided with which a business can determine to what extent it has succeeded in the actual application of SPC. Audience: This volume is written for companies which are going to implement SPC, or which need a new impetus in order to get SPC properly off the ground. It will be of interest in particular to researchers whose work involves statistics and probability, production, operation and manufacturing management, industrial organisation and mathematical and quantitative methods. It will also appeal to specialists in engineering and management, for example in the electronic industry, discrete parts industry, process industry, automotive and aircraft industry and food industry.
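As a taste of the statistical tools involved, the most basic SPC device is the Shewhart control chart, which flags points outside three standard deviations of an in-control run. A minimal individuals-chart sketch (our illustration, not from the book; textbook charts usually estimate sigma from subgroup ranges):

```python
# Shewhart control limits at mean +/- 3 sigma, estimated from in-control data.
import numpy as np

rng = np.random.default_rng(2)
measurements = rng.normal(loc=10.0, scale=0.2, size=100)  # in-control process data

center = measurements.mean()
sigma = measurements.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma          # control limits

signals = (measurements > ucl) | (measurements < lcl)      # out-of-control points
print(f"UCL={ucl:.3f}, LCL={lcl:.3f}, signals={signals.sum()}")
```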
This book is the third revised and updated English edition of the German textbook "Versuchsplanung und Modellwahl" by Helge Toutenburg, which was based on more than 15 years of experience of lectures on the course "Design of Experiments" at the University of Munich and interactions with statisticians from industry and other areas of applied science and engineering. It is the type of resource/reference book that contains statistical methods used by researchers in applied areas. Because of the diverse examples combined with software demonstrations it is also useful as a textbook in more advanced courses. The applications of design of experiments have seen significant growth in the last few decades in different areas such as industry, the pharmaceutical sciences, the medical sciences and engineering. The second edition of this book received appreciation from academicians, teachers, students and applied statisticians. As a consequence, Springer-Verlag invited Helge Toutenburg to revise it, and he invited Shalabh to join him for the third edition of the book. In our experience with students, statisticians from industry and researchers from other fields of experimental science, we realized the importance of several topics in the design of experiments which would increase the utility of this book. Moreover, we found that these topics are mostly explained only theoretically in most of the available books.
This book discusses recent developments and the latest research in statistics and its applications, primarily in agriculture and industry, survey sampling and biostatistics, gathering articles on a wide variety of topics. Written by leading academics, scientists, researchers and scholars from around the globe to mark the platinum jubilee of the Department of Statistics, University of Calcutta in 2016, the book is a valuable resource for statisticians, aspiring researchers and professionals across educational levels and disciplines.
A balanced presentation of both theoretical and applied material, with numerous problem sets to illustrate important concepts. The book demonstrates the use of computers and calculators to facilitate problem solving and includes numerous applications to illustrate the basic theory.
Asymptotic methods belong to the, perhaps, most romantic area of modern mathematics. They are widely known and have been used in mechanics, physics and other exact sciences for many, many decades. But more than this, asymptotic ideas are found in all branches of human knowledge, indeed in all areas of life. In this broader context they have not and perhaps cannot be fully formalized. However, they are marvelous, they leave room for fantasy, guesses and intuition; they bring us very near to the border of the realm of art. Many books have been written and published about asymptotic methods. Most of them presume a mathematically sophisticated reader. The authors here attempt to describe asymptotic methods on a more accessible level, hoping to address a wider range of readers. They have avoided the extreme of banishing formulae entirely, as done in some popular science books that attempt to describe mathematical methods with no mathematics. This is impossible (and not wise). Rather, the authors have tried to keep the mathematics at a moderate level. At the same time, using simple examples, they think they have been able to illustrate all the key ideas of asymptotic methods and approaches, to depict in detail the results of their application to various branches of knowledge, from astronomy, mechanics, and physics to biology, psychology and art. The book is supplemented by several appendices, one of which contains the profound ideas of R. G.
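A one-line taste of the subject (our example, not necessarily one of the book's): Stirling's formula approximates the factorial, and is already accurate to about 1% at n = 10:

```latex
% Stirling's asymptotic formula for the factorial:
\[
  n! \;\sim\; \sqrt{2\pi n}\,\left(\frac{n}{e}\right)^{n} \qquad (n \to \infty).
\]
% For n = 10 it gives about 3598696 against the exact 10! = 3628800,
% a relative error of roughly 0.8%.
```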
The editors draw on a 3-year project that analyzed a Portuguese area in detail, comparing this study with papers from other regions. Applications include the estimation of technical efficiency in agricultural grazing systems (dairy, beef and mixed) and specifically for dairy farms. The conclusions indicate that it is now necessary to help small dairy farms in order to make them more efficient. These results can be compared with the technical efficiency of a sample of Spanish dairy processing firms presented by Magdalena Kapelko and co-authors.
The term singular spectrum comes from the spectral (eigenvalue) decomposition of a matrix A into its set (spectrum) of eigenvalues. These eigenvalues, λ, are the numbers that make the matrix A − λI singular. The term singular spectrum analysis is unfortunate, since the traditional eigenvalue decomposition involving multivariate data is also an analysis of the singular spectrum. More properly, singular spectrum analysis (SSA) should be called the analysis of time series using the singular spectrum. (Singular spectrum analysis is also sometimes called singular systems analysis or the singular spectrum approach.) Spectral decomposition of matrices is fundamental to much theory of linear algebra and it has many applications to problems in the natural and related sciences. Its widespread use as a tool for time series analysis is fairly recent, however, emerging to a large extent from applications of dynamical systems theory (sometimes called chaos theory). SSA was introduced into chaos theory by Fraedrich (1986) and Broomhead and King (1986a). Prior to this, SSA was used in biological oceanography by Colebrook (1978). In the digital signal processing community, the approach is also known as the Karhunen-Loeve (K-L) expansion (Pike et al., 1984). Like other techniques based on spectral decomposition, SSA is attractive in that it holds a promise for a reduction in the dimensionality, and this reduction in dimensionality is often accompanied by a simpler explanation of the underlying physics.
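A minimal SSA computation makes the point concrete (our sketch, not code from the book): embed the series in a trajectory matrix, take its singular value decomposition, and reconstruct the dominant components by diagonal averaging.

```python
# Basic SSA: trajectory matrix -> SVD -> low-rank reconstruction of the series.
import numpy as np

t = np.arange(200)
series = np.sin(2 * np.pi * t / 25) + 0.3 * np.random.default_rng(3).normal(size=200)

L = 50                                                     # window (embedding) length
K = len(series) - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])   # L x K trajectory matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)           # s is the "singular spectrum"
X2 = (U[:, :2] * s[:2]) @ Vt[:2]                           # a sine needs a pair of components

# Diagonal (Hankel) averaging maps the low-rank matrix back to a time series.
recon = np.array([X2[::-1].diagonal(k).mean() for k in range(-(L - 1), K)])
print(recon[:5])   # a denoised estimate of the dominant oscillation
```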
The book develops the capabilities arising from the cooperation between mathematicians and statisticians working in the insurance and finance fields. It gathers some of the papers presented at the conference MAF2010, held in Ravello (on the Amalfi coast), which were subsequently revised, after a reviewing process, for this volume.
'Approach your problems from the right end and begin with the answers. Then one day, perhaps you will find the final question.' ('The Hermit Clad in Crane Feathers' in R. van Gulik's The Chinese Maze Murders.)
'It isn't that they can't see the solution. It is that they can't see the problem.' (G. K. Chesterton, The Scandal of Father Brown, 'The Point of a Pin'.)
Growing specialization and diversification have brought a host of monographs and textbooks on increasingly specialized topics. However, the "tree" of knowledge of mathematics and related fields does not grow only by putting forth new branches. It also happens, quite often in fact, that branches which were thought to be completely disparate are suddenly seen to be related. Further, the kind and level of sophistication of mathematics applied in various sciences has changed drastically in recent years: measure theory is used (non-trivially) in regional and theoretical economics; algebraic geometry interacts with physics; the Minkowski lemma, coding theory and the structure of water meet one another in packing and covering theory; quantum fields, crystal defects and mathematical programming profit from homotopy theory; Lie algebras are relevant to filtering; and prediction and electrical engineering can use Stein spaces. And in addition to this there are such new emerging subdisciplines as "experimental mathematics", "CFD", "completely integrable systems", "chaos, synergetics and large-scale order", which are almost impossible to fit into the existing classification schemes. They draw upon widely different sections of mathematics.
This book covers a range of statistical methods useful in the analysis of medical data, from the simple to the sophisticated, and shows how they may be applied using recent versions of S-PLUS, including S-PLUS 6. In each chapter several sets of medical data are explored and analysed using a mixture of graphical and model-fitting approaches. At the end of each chapter the S-PLUS script files are listed, enabling readers to reproduce all the analyses and graphics in the chapter. These script files can be downloaded from a web site. The aim of the book is to show how to use S-PLUS as a powerful environment for undertaking a variety of statistical analyses from simple inference to complex model fitting, and for providing informative graphics. All such methods are of increasing importance in handling data from a variety of medical investigations including epidemiological studies and clinical trials. The mix of real data examples and background theory makes this book useful for students and researchers alike. For the former, exercises are provided at the end of each chapter to increase their fluency in using the command line language of the S-PLUS software. Professor Brian Everitt is Head of the Department of Biostatistics and Computing at the Institute of Psychiatry in London and Sophia Rabe-Hesketh is a senior lecturer in the same department. Professor Everitt is the author of over 30 books on statistics, including two previously co-authored with Dr. Rabe-Hesketh.
This book presents the state of the art of biostatistical methods and their applications in clinical oncology. Many methodologies established today in biostatistics have been brought about through its applications to the design and analysis of oncology clinical studies. This field of oncology, now in the midst of evolution owing to rapid advances in biotechnologies and cancer genomics, is becoming one of the most promising disease fields in the shift toward personalized medicine. Modern developments of diagnosis and therapeutics of cancer have also been continuously fueled by recent progress in establishing the infrastructure for conducting more complex, large-scale clinical trials and observational studies. The field of cancer clinical studies therefore will continue to provide many new statistical challenges that warrant further progress in the methodology and practice of biostatistics. This book provides a systematic coverage of various stages of cancer clinical studies. Topics from modern cancer clinical trials include phase I clinical trials for combination therapies, exploratory phase II trials with multiple endpoints/treatments, and confirmative biomarker-based phase III trials with interim monitoring and adaptation. It also covers important areas of cancer screening, prognostic analysis, and the analysis of large-scale molecular data in the era of big data.
Classic biostatistics, a branch of statistical science, has as its main focus the applications of statistics in public health, the life sciences, and the pharmaceutical industry. Modern biostatistics, beyond just a simple application of statistics, is a confluence of statistics and knowledge of multiple intertwined fields. The application demands, the advancements in computer technology, and the rapid growth of life science data (e.g., genomics data) have promoted the formation of modern biostatistics. There are at least three characteristics of modern biostatistics: (1) in-depth engagement in the application fields, requiring penetration of knowledge across several fields; (2) high-level complexity of data, which may be longitudinal, incomplete, or latent, heterogeneous due to a mixture of data or experiment types, high-dimensional in ways that can make meaningful reduction impossible, or of extremely small or large size; and (3) dynamics: the speed of development in methodology and analysis has to match the fast growth of data with a constantly changing face. This book is written for researchers, biostatisticians/statisticians, and scientists who are interested in quantitative analyses. The goal is to introduce modern methods in biostatistics and help researchers and students quickly grasp key concepts and methods. Many methods can solve the same problem and many problems can be solved by the same method, which becomes apparent when those topics are discussed in this single volume.
This book moves systematically through the topic of applied probability from an introductory chapter to such topics as random variables and vectors, stochastic processes, estimation, testing and regression. The topics are well chosen and the presentation is enriched by many examples from real life. Each chapter concludes with many original, solved and unsolved problems and hundreds of multiple choice questions, enabling those unfamiliar with the topics to master them. Additionally appealing are historical notes on the mathematicians mentioned throughout, and a useful bibliography. A distinguishing feature of the book is its thorough and succinct handling of the varied topics.
Statistical Inference for Ergodic Diffusion Processes encompasses a wealth of results from over ten years of mathematical literature. It provides a comprehensive overview of existing techniques, and presents - for the first time in book form - many new techniques and approaches. An elementary introduction to the field at the start of the book introduces a class of examples - both non-standard and classical - that reappear as the investigation progresses to illustrate the merits and demerits of the procedures. The statements of the problems are in the spirit of classical mathematical statistics, and special attention is paid to asymptotically efficient procedures. Today, diffusion processes are widely used in applied problems in fields such as physics, mechanics and, in particular, financial mathematics. This book provides a state-of-the-art reference that will prove invaluable to researchers, and graduate and postgraduate students, in areas such as financial mathematics, economics, physics, mechanics and the biomedical sciences.
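As a hedged illustration of the setting (our sketch; the book's actual procedures are more refined): simulate an ergodic Ornstein-Uhlenbeck diffusion and recover its drift parameter from discrete observations.

```python
# Euler-Maruyama simulation of dX = -theta*X dt + sigma dW, followed by a
# simple drift estimate from the AR(1) regression of X_{t+dt} on X_t.
import numpy as np

theta, sigma, dt, n = 2.0, 1.0, 0.01, 200_000
rng = np.random.default_rng(4)

x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.normal()

# E[X_{t+dt} | X_t] = exp(-theta*dt) * X_t, so the regression slope recovers theta.
slope = np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)
print(-np.log(slope) / dt)   # close to 2.0 over a long ergodic sample
```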
This book provides a thorough development of the powerful methods of heavy traffic analysis and approximations with applications to a wide variety of stochastic (e.g. queueing and communication) networks, for both controlled and uncontrolled systems. The approximating models are reflected stochastic differential equations. The analytical and numerical methods yield considerable simplifications and insights and good approximations to both path properties and optimal controls under broad conditions on the data and structure. The general theory is developed, with possibly state dependent parameters, and specialized to many different cases of practical interest. Control problems in telecommunications and applications to scheduling, admissions control, polling, and elsewhere are treated. The necessary probability background is reviewed, including a detailed survey of reflected stochastic differential equations, weak convergence theory, methods for characterizing limit processes, and ergodic problems.
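A reflected diffusion is easy to simulate, which gives a feel for these approximating models. A minimal sketch (our illustration, not from the book) of one-dimensional reflected Brownian motion, the canonical heavy-traffic limit of a single-server queue:

```python
# Reflected Brownian motion on [0, inf) via the discrete Skorokhod reflection map.
import numpy as np

mu, sigma, dt, n = -0.5, 1.0, 0.001, 100_000      # negative drift: a stable queue
rng = np.random.default_rng(5)
increments = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)

x, path = 0.0, np.empty(n)
for i, dz in enumerate(increments):
    x = max(x + dz, 0.0)                           # reflect at the zero boundary
    path[i] = x

# The stationary mean of RBM is sigma**2 / (2*|mu|), i.e. 1.0 here.
print(path[n // 2:].mean())
```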
As information technologies become increasingly distributed and accessible to larger numbers of people, and as commercial and government organizations are challenged to scale their applications and services to larger market shares while reducing costs, there is demand for software methodologies and applications to provide the following features: richer application end-to-end functionality; reduction of human involvement in the design and deployment of the software; flexibility of software behaviour; and reuse and composition of existing software applications and systems in novel or adaptive ways. When designing new distributed software systems, the above broad requirements and their translation into implementations are typically addressed by partially complementary and overlapping technologies, and this situation gives rise to significant software engineering challenges. Some of the challenges that may arise are: determining the components that the distributed applications should contain, organizing the application components, and determining the assumptions that one needs to make in order to implement distributed, scalable and flexible applications, etc.
This is the first book on the subject since its introduction more than fifty years ago, and it can be used as a graduate text or as a reference work. It features all of the key results, many very useful tables, and a large number of research problems. The book will appeal to readers drawn to one of the most fascinating areas of discrete mathematics, connected to statistics and coding theory, with applications to computer science and cryptography. It will be useful for anyone who is running experiments, whether in a chemistry lab or a manufacturing plant (trying to make those alloys stronger), or in agricultural or medical research. Sam Hedayat is Professor of Statistics and Senior Scholar in the Department of Mathematics, Statistics, and Computer Science, University of Illinois, Chicago. Neil J.A. Sloane is with AT&T Bell Labs (now AT&T Labs). John Stufken is Professor of Statistics at Iowa State University.
Intended for advanced undergraduates and graduate students, this book is a practical guide to the use of probability and statistics in experimental physics. The emphasis is on applications and understanding, on theorems and techniques actually used in research. It is not a comprehensive text in probability and statistics; proofs are sometimes omitted if they do not contribute to intuition in understanding the theorem. The problems, some with worked solutions, introduce the student to the use of computers; occasional reference is made to routines available in the CERN library, but other systems, such as Maple, can also be used. Topics covered include: basic concepts; definitions; some simple results independent of specific distributions; discrete distributions; the normal and other continuous distributions; generating and characteristic functions; the Monte Carlo method and computer simulations; multi-dimensional distributions; the central limit theorem; inverse probability and confidence belts; estimation methods; curve fitting and likelihood ratios; interpolating functions; fitting data with constraints; robust estimation methods. This second edition introduces a new method for dealing with small samples, such as may arise in search experiments, when the data are of low probability. It also includes a new chapter on queuing problems (including a simple, but useful buffer length example). In addition, new sections discuss over- and under-coverage using confidence belts, the extended maximum-likelihood method, the use of confidence belts for discrete distributions, estimation of correlation coefficients, and the effective variance method for fitting y = f(x) when both x and y have measurement errors. A complete Solutions Manual is available.
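In the spirit of the book's emphasis on Monte Carlo methods and confidence belts, here is a toy check (our example, not an exercise from the book) that the usual 95% interval for a Gaussian mean covers the true value at the advertised rate:

```python
# Monte Carlo coverage check for the known-sigma 95% confidence interval.
import numpy as np

rng = np.random.default_rng(6)
true_mu, sigma, n, trials = 5.0, 2.0, 20, 10_000

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mu, sigma, size=n)
    half_width = 1.96 * sigma / np.sqrt(n)        # known-sigma interval for simplicity
    covered += abs(sample.mean() - true_mu) < half_width

print(covered / trials)   # should be close to 0.95
```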