Global simultaneous development is becoming more necessary as the cost of developing medical products continues to grow. The strategy of using multiregional clinical trials (MRCTs) has become the preferred method for developing new medicines. By implementing the same protocol to enrol subjects from many geographical regions around the world, MRCTs can speed up patient enrolment, resulting in quicker drug development and faster global approval. Since the publication of the editors' first volume on this topic, there have been new developments on MRCTs. The International Council for Harmonisation (ICH) issued ICH E17, a guideline document laying out principles for MRCTs, in November 2017. Beyond E17, new methodologies have been developed as well. Simultaneous Global New Drug Development: Multi-Regional Clinical Trials after ICH E17 collects chapters providing interpretations of the principles in ICH E17 and new ideas for implementing MRCTs. The authors come from different regions, and from both academia and industry. In addition, in contrast to the first book, new perspectives are brought to MRCTs from regulatory agencies. This book will be of particular interest to biostatisticians working in late-stage clinical development of medical products. It will also be especially helpful for statisticians in regulatory agencies and medical research institutes. The book is comprehensive across the MRCT topic spectrum, including:
- Issues regarding ICH E17 implementation
- MRCT design and analysis methodologies
- Perspectives from authorities in regulatory agencies, as well as statisticians practicing in the medical product industry
- Many examples of real-life applications based on actual MRCTs
This book covers the statistical consequences of breaches of research integrity, such as fabrication and falsification of data, and of researcher lapses summarized as questionable research practices. It is unique in discussing how unwarranted data manipulation harms research results, and how questionable research practices are often caused by researchers' inadequate mastery of the statistical methods and procedures they use for their data analysis. The author's solution for preventing problems with the trustworthiness of research results, however they originated, is to publish data in publicly available repositories and to encourage researchers not trained as statisticians not to overestimate their statistical skills but instead to seek professional support from statisticians or methodologists. The author discusses some of his experiences concerning mutual trust, fear of repercussions, and the bystander effect as conditions limiting the revelation of colleagues' possible integrity breaches. He explains why people are unable to mimic real data and why data fabrication using statistical models still falls short of credibility. Confirmatory and exploratory research, the usefulness of preregistration, and the counter-intuitive nature of statistics are also discussed. The author questions the usefulness of statistical advice concerning frequentist hypothesis testing, Bayes-factor use, alternative statistics education, and reduction of situational disturbances like performance pressure as stand-alone means of reducing questionable research practices when researchers lack experience with statistics.
1) Focuses on the concepts and implementation strategies of various deep learning algorithms through carefully curated examples. 2) The subject area should remain current for the next decade or so, as deep learning theory, algorithms, and their applications will not become outdated quickly; hence there will be continuing demand for such a book. 3) In comparison to other titles, this book rigorously covers the mathematical and conceptual details of the relevant topics.
Features:
- First book on uncertainty quantification in variational inequalities emerging from various network, economic, and engineering models.
- Completely self-contained and lucid in style.
- Aimed at a diverse audience including applied mathematicians, engineers, economists, and professionals from academia.
- Includes the most recent developments on the subject, which so far have been available only in the research literature.
Features:
- Collects and discusses the ideas underpinning decision making through optimization tools in a simple and straightforward manner.
- Suitable for an undergraduate course in optimization-based decision making, or as a supplementary resource for courses in operations research and management science.
- Self-contained coverage of traditional and more modern optimization models, while not requiring a previous background in decision theory.
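A minimal sketch of the kind of optimization model such a course builds on: a two-product product-mix decision solved as a linear program with scipy. All numbers here are hypothetical and purely illustrative.

```python
# Choose quantities x1, x2 of two products to maximize profit subject to
# resource limits. All figures are made up for illustration.
from scipy.optimize import linprog

# linprog minimizes, so negate the per-unit profits (5 and 4).
c = [-5.0, -4.0]
# Constraints: 6*x1 + 4*x2 <= 24 (labor hours), x1 + 2*x2 <= 6 (material).
A_ub = [[6.0, 4.0], [1.0, 2.0]]
b_ub = [24.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal plan: [3.  1.5]
print(-res.fun)  # maximum profit: 21.0
```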
- Covers deep learning fundamentals
- Focuses on applications
- Covers human emotion analysis and deep learning
- Explains how to use web-based techniques for deep learning applications
- Includes coverage of autonomous vehicles and deep learning
Interviewer Effects from a Total Survey Error Perspective presents a comprehensive collection of state-of-the-art research on interviewer-administered survey data collection. Interviewers play an essential role in the collection of the high-quality survey data used to learn about our society and improve the human condition. Although many surveys are conducted using self-administered modes, interviewer-administered modes continue to be optimal for surveys that require high levels of participation, include difficult-to-survey populations, and collect biophysical data. Survey interviewing is complex, multifaceted, and challenging. Interviewers are responsible for locating sampled units, contacting sampled individuals and convincing them to cooperate, asking questions on a variety of topics, collecting other kinds of data, and providing data about respondents and the interview environment. Careful attention to the methodology that underlies survey interviewing is essential for interviewer-administered data collections to succeed. In 2019, survey methodologists, survey practitioners, and survey operations specialists participated in an international workshop at the University of Nebraska-Lincoln to identify best practices for surveys employing interviewers and outline an agenda for future methodological research. This book features 23 chapters on survey interviewing by these worldwide leaders in the theory and practice of survey interviewing. Chapters include:
- The legacy of Dr. Charles F. Cannell's groundbreaking research on training survey interviewers and the theory of survey interviewing
- Best practices for training survey interviewers
- Interviewer management and monitoring during data collection
- The complex effects of interviewers on survey nonresponse
- Collecting survey measures and survey paradata in different modes
- Designing studies to estimate and evaluate interviewer effects
- Best practices for analyzing interviewer effects
- Key gaps in the research literature, including an agenda for future methodological research
Chapter appendices are available to download from https://digitalcommons.unl.edu/sociw/. Written for managers of survey interviewers, survey methodologists, and students interested in the survey data collection process, this unique reference uses the Total Survey Error framework to examine optimal approaches to survey interviewing, presenting state-of-the-art methodological research on all stages of the survey process involving interviewers. Acknowledging the important history of survey interviewing while looking to the future, this one-of-a-kind reference provides researchers and practitioners with a roadmap for maximizing data quality in interviewer-administered surveys.
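As a flavor of what analyzing interviewer effects can look like in practice, here is a minimal sketch, not taken from the book, of one common approach: fit a multilevel model with a random intercept per interviewer and report the intraclass correlation, the share of response variance attributable to interviewers. The data are simulated and all variable names are hypothetical.

```python
# Quantify interviewer effects via a random-intercept model and the
# intraclass correlation (ICC). Simulated data; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_int, n_resp = 30, 20                            # 30 interviewers x 20 respondents
iv = np.repeat(np.arange(n_int), n_resp)          # interviewer ID per case
u = rng.normal(0, 0.5, n_int)[iv]                 # interviewer random effects
y = 3.0 + u + rng.normal(0, 1.0, n_int * n_resp)  # survey responses
df = pd.DataFrame({"y": y, "interviewer": iv})

m = smf.mixedlm("y ~ 1", df, groups=df["interviewer"]).fit()
var_between = float(m.cov_re.iloc[0, 0])          # between-interviewer variance
icc = var_between / (var_between + m.scale)       # m.scale = residual variance
print(f"interviewer ICC: {icc:.3f}")
```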
Complex Survey Data Analysis with SAS (R) is an invaluable resource for applied researchers analyzing data generated from a sample design involving any combination of stratification, clustering, unequal weights, or finite population correction factors. After clearly explaining how the presence of these features can invalidate the assumptions underlying most traditional statistical techniques, this book equips readers with the knowledge to confidently account for them during the estimation and inference process by employing the SURVEY family of SAS/STAT (R) procedures. The book offers comprehensive coverage of the most essential topics, including:
- Drawing random samples
- Descriptive statistics for continuous and categorical variables
- Fitting and interpreting linear and logistic regression models
- Survival analysis
- Domain estimation
- Replication variance estimation methods
- Weight adjustment and imputation methods for handling missing data
The easy-to-follow examples are drawn from real-world survey data sets spanning multiple disciplines, all of which can be downloaded for free along with syntax files from the author's website: http://mason.gmu.edu/~tlewis18/. While other books may touch on some of the same issues and nuances of complex survey data analysis, none features SAS as exclusively and exhaustively. Another unique aspect of this book is its abundance of handy workarounds for certain techniques not yet supported as of SAS Version 9.4, such as the ratio estimator for a total and the bootstrap for variance estimation. Taylor H. Lewis is a PhD graduate of the Joint Program in Survey Methodology at the University of Maryland, College Park, and an adjunct professor in the George Mason University Department of Statistics. An avid SAS user for 15 years, he is a SAS Certified Advanced programmer and a nationally recognized SAS educator who has produced dozens of papers and workshops illustrating how to efficiently and effectively conduct statistical analyses using SAS.
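The common thread behind the SURVEY procedures is design-based estimation: observations are weighted by the inverse of their selection probabilities so that estimates generalize to the population. A minimal sketch of the idea, in Python rather than SAS and with made-up numbers:

```python
# Why survey weights matter: the weighted mean corrects for unequal
# selection probabilities that bias the naive sample mean.
# Made-up data; in SAS this is the job of the SURVEY procedures.
import numpy as np

y = np.array([10.0, 12.0, 30.0, 35.0])    # observed values
w = np.array([100.0, 100.0, 10.0, 10.0])  # weights = 1 / selection probability

naive_mean = y.mean()                     # ignores the design: 21.75
weighted_mean = (w * y).sum() / w.sum()   # design-based estimate: ~12.95
print(naive_mean, weighted_mean)
```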
This book defines and investigates the concept of a random object. To accomplish this task in a natural way, it brings together three major areas: statistical inference, measure-theoretic probability theory, and stochastic processes. This point of view has not been explored by existing textbooks; one would need material on real analysis, measure and probability theory, and stochastic processes, in addition to at least one text on statistics, to capture the detail and depth of material that has gone into this volume. The book:
- Presents and illustrates 'random objects' in different contexts, under a unified framework, starting with rudimentary results on random variables and random sequences, all the way up to stochastic partial differential equations.
- Reviews rudimentary probability and introduces statistical inference, from basic to advanced, thus making the transition from basic statistical modeling and estimation to advanced topics more natural and concrete.
- Offers a compact and comprehensive presentation of the material that will be useful to readers from the mathematical and statistical sciences at any stage of their career, whether graduate student, instructor, or academician conducting research and requiring quick references and examples on classic topics.
- Includes 378 exercises, with the solutions manual available on the book's website, and 121 illustrative examples of the concepts presented in the text (many including multiple items in a single example).
The book is targeted towards students at the master's and Ph.D. levels, as well as academicians in mathematics, statistics, and related disciplines. Basic knowledge of calculus and matrix algebra is required. Prior knowledge of probability or measure theory is welcome but not necessary.
This book concentrates on mining networks, a subfield within data science. Data science uses scientific and computational tools to extract valuable knowledge from large data sets. Once data is processed and cleaned, it is analyzed and presented to support decision-making processes, and data science and machine learning tools have become widely used in companies of all sizes. Networks are often large-scale, decentralized, and evolving dynamically over time. Mining complex networks aims to understand the principles governing the organization and behavior of such networks, an understanding that is crucial for a broad range of fields of study. A few selected typical applications of mining networks:
- Community detection (which users on some social media platform are close friends; a minimal sketch follows this description).
- Link prediction (who is likely to connect to whom on such platforms).
- Node attribute prediction (what advertisement should be shown to a given user of a particular platform to match their interests).
- Influential node detection (which social media users would be the best ambassadors of a specific product).
This textbook is suitable for an upper-year undergraduate course or a graduate course in programs such as data science, mathematics, computer science, business, engineering, physics, statistics, and social science. It can also be used by enthusiasts of data science at various levels of sophistication to expand their knowledge or consider a change of career path. Jupyter notebooks (in Python and Julia) accompany the book and can be accessed at https://www.ryerson.ca/mining-complex-networks/. These not only contain all the experiments presented in the book, but also include additional material. Bogumil Kaminski is the Chairman of the Scientific Council for the Discipline of Economics and Finance at SGH Warsaw School of Economics. He is also an Adjunct Professor at the Data Science Laboratory at Ryerson University. Bogumil is an expert in applications of mathematical modeling to solving complex real-life problems, and a substantial open-source contributor to the development of the Julia language and its package ecosystem. Pawel Pralat is a Professor of Mathematics at Ryerson University, whose main research interests are in random graph theory, especially in modeling and mining complex networks. He is the Director of the Fields-CQAM Lab on Computational Methods in Industrial Mathematics at The Fields Institute for Research in Mathematical Sciences and has pursued collaborations with various industry partners as well as the Government of Canada. He has written over 170 papers and three books with more than 130 collaborators. Francois Theberge holds a B.Sc. degree in applied mathematics from the University of Ottawa, an M.Sc. in telecommunications from INRS, and a PhD in electrical engineering from McGill University. He has been employed by the Government of Canada since 1996, where he was involved in the creation of the data science team as well as the research group now known as the Tutte Institute for Mathematics and Computing. He also holds an adjunct professorial position in the Department of Mathematics and Statistics at the University of Ottawa. His current interests include relational-data mining and deep learning.
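As promised above, a minimal community-detection sketch using networkx on a classic benchmark graph. It is not taken from the book's notebooks, which go considerably further.

```python
# Community detection: partition a network so that links are dense within
# groups and sparse between them, here via greedy modularity maximization.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()  # Zachary's karate club, a standard test network
communities = community.greedy_modularity_communities(G)

for i, nodes in enumerate(communities):
    print(f"community {i}: {sorted(nodes)}")
```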
Statistical Models in Toxicology presents an up-to-date and comprehensive account of statistical theory topics that occur in toxicology. The attention given by statisticians to the problem of health risk estimation for environmental and occupational exposures in the last few decades has created excitement and optimism among both statisticians and toxicologists. The development of modern statistical techniques with solid mathematical foundations in the twentieth century, and the advent of modern computers in the latter part of the century, gave rise to many statistical models and methods for describing toxicological processes and attempting to solve the associated problems. Not only do the models enjoy a high level of mathematical elegance and sophistication, but they are also widely used by industry and government regulatory agencies. Features:
- Focuses on describing the statistical models in environmental toxicology that facilitate the assessment of risk, mainly in humans. The properties and shortfalls of each model are discussed, and its impact on the process of risk assessment is examined.
- Discusses models that assess the risk of mixtures of chemicals.
- Presents statistical models developed for risk estimation in different aspects of environmental toxicology, including cancer and carcinogenic substances.
- Includes models for developmental and reproductive toxicity risk assessment, risk assessment with continuous outcomes, and developmental neurotoxicity.
- Contains numerous examples and exercises.
Statistical Models in Toxicology introduces a wide variety of statistical models that are currently utilized for dose-response modeling and risk analysis. These models are often developed based on the design and regulatory guidelines of toxicological experiments. The book is suitable for practitioners, or it can be used as a textbook for advanced undergraduate or graduate students of mathematics and statistics.
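As a small taste of the dose-response modeling the book formalizes, here is a minimal sketch, with hypothetical bioassay data rather than an example from the book, of fitting a two-parameter logistic dose-response curve:

```python
# Fit P(response at dose d) = 1 / (1 + exp(-(a + b*log(d)))) to
# hypothetical bioassay data, then derive the ED50. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(d, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * np.log(d))))

dose = np.array([0.1, 0.5, 1.0, 5.0, 10.0])      # administered doses
resp = np.array([0.02, 0.10, 0.30, 0.80, 0.95])  # observed response fractions

(a, b), _ = curve_fit(logistic, dose, resp, p0=(0.0, 1.0))
ed50 = np.exp(-a / b)  # dose giving a 50% response: a + b*log(d) = 0
print(f"a={a:.2f}, b={b:.2f}, ED50 ~ {ed50:.2f}")
```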
This is the sixth volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with econometric exploration and diagnosis.
This is the seventh volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with the probability approach to simultaneous equations.
This is the tenth volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with discrete and continuous systems.
This is the fifth volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with the statistical theory that underlies the science of econometrics.
This is the second volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with paradox and ambiguity.
This is the fourth volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with the dialogues and beliefs that underpin probability concepts.
This is the third volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with economic games and the functions of bargaining and solutions.
This is the ninth volume in a ten-volume set designed for publication in 1997. It reprints in book form a selection of the most important and influential articles on probability, econometrics and economic games which cumulatively have had a major impact on the development of modern economics. There are 242 articles, dating from 1936 to 1996. Many of them were originally published in relatively inaccessible journals and may not, therefore, be available in the archives of many university libraries. The volumes are available separately and also as a complete ten-volume set. The contributors include D. Ellsberg, R.M. Hogarth, J.B. Kadane, B.O. Koopman, E.L. Lehmann, D.F. Nicholls, H. Rubin, T.J. Sargent, L.H. Summers and C.R. Wymer. This particular volume deals with a reappraisal of econometrics.
Demonstrates how to use SAS for the examples and exercises in the textbook.
Contains information for using R software with the examples in the textbook Sampling: Design and Analysis, 3rd edition by Sharon L. Lohr.
Provides insight for safety managers into analyzing adverse events and the ways to deal with them. Covers randomness, uncertainty, and predictability in detail. Explains concepts including reverse stress testing, real-time monitoring, and predictive maintenance in a comprehensive manner. Presents mathematical analysis of incidents and accidents using statistics and probability theory.
Surrogates is a graduate textbook, or professional handbook, on topics at the interface between machine learning, spatial statistics, computer simulation, meta-modeling (i.e., emulation), design of experiments, and optimization. Experimentation through simulation, "human out-of-the-loop" statistical support (focusing on the science), management of dynamic processes, online and real-time analysis, automation, and practical application are at the forefront. Topics include:
- Gaussian process (GP) regression for flexible nonparametric and nonlinear modeling.
- Applications to uncertainty quantification, sensitivity analysis, calibration of computer models to field data, sequential design/active learning, and (blackbox/Bayesian) optimization under uncertainty.
- Advanced topics including treed partitioning, local GP approximation, and modeling of simulation experiments (e.g., agent-based models) with coupled nonlinear mean and variance (heteroskedastic) models.
The treatment appreciates historical response surface methodology (RSM) and canonical examples, but emphasizes contemporary methods and implementation in R at modern scale. Rmarkdown facilitates a fully reproducible tour, complete with motivation from, application to, and illustration with compelling real-data examples. The presentation targets numerically competent practitioners in the engineering, physical, and biological sciences. The writing is statistical in form, but the subjects are not about statistics. Rather, they are about prediction and synthesis under uncertainty; about visualization and information, design and decision making, computing and clean code.
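The book implements GP regression in R; as a minimal Python analogue (not the book's code), here is a GP surrogate fit to a handful of runs of a toy "simulator", with predictions and uncertainty at new inputs:

```python
# Gaussian process surrogate: fit a GP to a few expensive simulator runs,
# then predict, with uncertainty, anywhere in the input space.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                        # stand-in for a costly computer model
    return np.sin(2 * np.pi * x).ravel()

X = np.linspace(0, 1, 8).reshape(-1, 1)  # a small design of 8 runs
y = simulator(X)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
gp.fit(X, y)

Xnew = np.linspace(0, 1, 5).reshape(-1, 1)
mean, sd = gp.predict(Xnew, return_std=True)  # predictive mean and std. dev.
print(np.round(mean, 3), np.round(sd, 3))
```

The predictive standard deviation shrinks near the design points and grows between them, which is what makes GP surrogates useful for sequential design and Bayesian optimization.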
Cyberspace is changing the face of crime. For criminals it has become a place for rich collaboration and learning that extends beyond any one country, a place where new kinds of crime can be carried out, and a vehicle for committing conventional crimes with unprecedented range, scale, and speed. Law enforcement faces a challenge in keeping up with, and dealing with, this new environment. The news is not all bad: collecting and analyzing data about criminals and their activities can provide new levels of insight into what they are doing and how they are doing it. However, using data analytics requires a change of process and new skills that, so far, many law enforcement organizations have had difficulty acquiring. Cyberspace, Data Analytics, and Policing surveys the changes that cyberspace has brought to criminality and to policing, with enough technical content to expose the issues and suggest ways in which law enforcement organizations can adapt. Key Features:
- Provides a non-technical but robust overview of how cyberspace enables new kinds of crime and changes existing crimes.
- Describes how criminals exploit the ability to communicate globally to learn, form groups, and acquire cybertools.
- Describes how law enforcement can use the ability to collect data and apply analytics to better protect society and to discover and prosecute criminals.
- Provides examples from open-source data of how hot spot and intelligence-led policing can benefit law enforcement.
- Describes how law enforcement can exploit the ability to communicate globally to collaborate in dealing with trans-national crime.