- The first practical introduction to second-order and growth mixture models using Mplus 8.4
- Introduces simple and complex models through incremental steps of increasing complexity
- Each model is presented with figures and associated syntax that highlight what the statistics mean, Mplus applications, and an interpretation of results, to maximize understanding
- Second-order and growth mixture modeling is increasingly used across disciplines to analyze changes in individual attributes, such as personal behaviors and relationships, over time
Towards a Digital Health Ecology: NHS Digital Adoption through the COVID-19 Looking Glass is about technology adoption in the UK's National Health Service (NHS) as told from the inflection point of a disaster. In 2020 the world lived through a disaster of epic proportions, devastating humanity around the globe. It took a microscopic virus to wreak havoc on our healthcare system and force the adoption of technology in a way that had never been seen before. This book tells the story of digital technology take-up in the NHS through the lens of that first pandemic shock. Our healthcare system, paid for by general taxation and free at the point of demand, was conceived and developed in a firmly analogue world. Created in 1948, the NHS predates the invention of the World Wide Web by some forty years. This is not simply a book about technology; it is a study of the painful process of reengineering a mammoth and byzantine system that was built for a different era. The digital health sector is a microcosm of the wider healthcare system, through which grand themes of social inequality, public trust, private versus commercial interests, and values and beliefs are played out. The sector is a clash of competing discourses: the civic, doing good for society; the market, creating wealth; the industrial, building more efficient and effective systems; the project, expressed as innovation and experimentation; and lastly vitality, leading a happier, healthier life. Each of these discourses exists in a state of flux and tension with the others. This book is offered as a critique of the role of digital technologies within healthcare. It is an examination of competing interests, approaches, and ideologies. It is a story of system complexity told through analysis and personal stories.
This book is suitable as a textbook for students at all levels in medical school. It is also useful as a reference for students interested in the application of biostatistics in medicine. Material from the Introduction through Chapter 6 is similar to that of an elementary statistics textbook. This book is more modern than current textbooks in medical statistics: biostatistics and epidemiologic concepts are nicely blended. In contrast to the fallacy of the p-value, it introduces the Bayes factor as a measure of the evidence hidden in the sample data. It illustrates the application of regression to the mean in medicine. Many epidemiologic concepts, such as the sensitivity and specificity of a diagnostic test, classification and discrimination, and types of bias, are discussed in the book. Chapter 7, 'Correlation and Regression', includes the concept of regression to the mean, generalized linear (Poisson and logistic) regression models, and discrimination of which sample data set a new observation belongs to. Chapter 8 covers nonparametric inference, including the Kolmogorov-Smirnov test. Sample sizes are determined via estimation and hypothesis testing in Chapter 9. Chapter 10 discusses the design of studies for collecting sample data, including cohort, cross-sectional, case-control, and clinical trial designs; types of bias are expounded in its final section. Chapter 11 covers inference on contingency tables in detail, including 2 x 2, two-way, and three-way tables. Five tests (Pearson, log-odds-ratio, Fisher-Irwin, McNemar, and Ejigou-McHugh) are listed in Section 11.1. Six tests (Pearson, first-order interaction, Yates' linear trend, Stuart's marginal homogeneity, Kendall, and Wilcoxon-Mann-Whitney) are described in Section 11.2. Three tests (Pearson, log-odds-ratio on first-order interaction, and Bartlett's on second-order interaction) and Simpson's paradox are covered in Section 11.3. Chapter 12 covers the analysis of survival data. Two methods (life-table and Kaplan-Meier) are introduced for estimating the survivor function in Section 12.2. Four methods (maximum likelihood, Armitage's preference, Wald's sequential sign, and Armitage's restricted sequential) for comparing two survival curves are covered in Section 12.3. The proportional hazards model and the log-rank test are discussed in Sections 12.4 and 12.5, respectively. These advanced techniques for comparing two survival curves, and the detailed treatment of inference on contingency tables, go beyond what comparable books offer.
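The Kaplan-Meier estimator of the survivor function mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the book, and the tiny data set is invented purely for demonstration.

```python
# Minimal sketch of the Kaplan-Meier product-limit estimator.
# times/events are invented example data, not taken from the book.
def kaplan_meier(times, events):
    """Return (time, S(t)) pairs; events[i] is 1 for a death, 0 for censoring."""
    # Sort subjects by time; Python's stable sort keeps deaths before
    # censorings at tied times as long as they are listed in that order.
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i] == 1:  # a death multiplies S(t) by (n_at_risk - 1) / n_at_risk
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1        # censored subjects simply leave the risk set
    return curve

# Five subjects: deaths at t = 2, 3, 5; censorings at t = 3, 8.
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

The estimate drops only at observed death times, and censored subjects contribute to the risk set up to their censoring time, which is the defining feature of the product-limit construction.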
Coherent treatment of a variety of approaches to multiple comparisons Broad coverage of topics, with contributions by internationally leading experts Detailed treatment of applications in medicine and life sciences Suitable for researchers, lecturers / students, and practitioners
Measurement error arises ubiquitously in applications and has been of long-standing concern in a variety of fields, including medical research, epidemiological studies, economics, environmental studies, and survey research. While several research monographs are available to summarize methods and strategies of handling different measurement error problems, research in this area continues to attract extensive attention. The Handbook of Measurement Error Models provides overviews of various topics on measurement error problems. It collects carefully edited chapters concerning issues of measurement error and evolving statistical methods, with a good balance of methodology and applications. It is prepared for readers who wish to start research and gain insights into challenges, methods, and applications related to error-prone data. It also serves as a reference text on statistical methods and applications pertinent to measurement error models, for researchers and data analysts alike. Features: Provides an account of past development and modern advancement concerning measurement error problems Highlights the challenges induced by error-contaminated data Introduces off-the-shelf methods for mitigating deleterious impacts of measurement error Describes state-of-the-art strategies for conducting in-depth research
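One of the deleterious impacts of error-contaminated data that such handbooks describe, attenuation of a regression slope under classical measurement error in a covariate, can be seen in a short simulation. The variances and slope below are arbitrary choices for illustration, not values from the book; the expected attenuation factor is var(x) / (var(x) + var(u)).

```python
# Simulation sketch (not from the book): classical measurement error in a
# covariate attenuates the OLS slope by lambda = var(x) / (var(x) + var(u)).
import random

random.seed(0)
n = 20000
true_slope = 2.0
x = [random.gauss(0, 1) for _ in range(n)]            # true covariate, var(x) = 1
y = [true_slope * xi + random.gauss(0, 0.5) for xi in x]
w = [xi + random.gauss(0, 1) for xi in x]             # observed with error, var(u) = 1

def ols_slope(pred, resp):
    """Simple-regression OLS slope of resp on pred."""
    mp = sum(pred) / len(pred)
    mr = sum(resp) / len(resp)
    cov = sum((p - mp) * (r - mr) for p, r in zip(pred, resp))
    var = sum((p - mp) ** 2 for p in pred)
    return cov / var

naive = ols_slope(w, y)  # biased toward 0; roughly true_slope * 0.5 here
```

With var(x) = var(u) = 1, lambda = 0.5, so the naive slope lands near 1.0 rather than the true 2.0, which is exactly the kind of bias the correction methods surveyed in such handbooks aim to remove.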
The integrated nested Laplace approximation (INLA) is a recent computational method that can fit Bayesian models in a fraction of the time required by typical Markov chain Monte Carlo (MCMC) methods. INLA focuses on marginal inference for the parameters of latent Gaussian Markov random field models and exploits conditional independence properties in the model for computational speed. Bayesian Inference with INLA provides a description of INLA and its associated R package for model fitting. This book describes the underlying methodology as well as how to fit a wide range of models with R. Topics covered include generalized linear mixed-effects models, multilevel models, spatial and spatio-temporal models, smoothing methods, survival analysis, imputation of missing values, and mixture models. Advanced features of the INLA package, and how to extend the number of priors and latent models available in the package, are discussed. All examples in the book are fully reproducible, and datasets and R code are available from the book website. This book will be helpful to researchers from different areas with some background in Bayesian inference who want to apply the INLA method in their work. The examples cover topics in biostatistics, econometrics, education, environmental science, epidemiology, public health, and the social sciences.
The cost of bringing a new medicine from discovery to market has nearly doubled in the last decade and has now reached $2.6 billion. There is an urgent need to make drug development less time-consuming and less costly. Innovative trial designs and analyses, such as the Bayesian approach, are essential to meet this need. This book is the first to provide comprehensive coverage of Bayesian applications across the span of drug development, from discovery to clinical trial to manufacturing, with practical examples. It will have wide appeal to statisticians, scientists, and physicians working in drug development who are motivated to accelerate and streamline the drug development process, as well as to students who aspire to work in this field. The advantages of this book are: Provides motivating, worked, practical case examples with easy-to-grasp models, technical details, and computational code to run the analyses Balances practical examples with best practices on trial simulation and reporting, as well as regulatory perspectives Chapters written by authors who are individual contributors in their respective topics Dr. Mani Lakshminarayanan is a researcher and statistical consultant with more than 30 years of experience in the pharmaceutical industry. He has published over 50 articles, technical reports, and book chapters, besides serving as a referee for several journals. He has a PhD in Statistics from Southern Methodist University, Dallas, Texas, and is a Fellow of the American Statistical Association. Dr. Fanni Natanegara has over 15 years of pharmaceutical experience and is currently Principal Research Scientist and Group Leader for the Early Phase Neuroscience Statistics team at Eli Lilly and Company. She played a key role in the Advanced Analytics team to provide Bayesian education and statistical consultation at Eli Lilly. Dr. Natanegara is the chair of the cross-industry-regulatory-academic DIA BSWG, which works to ensure that Bayesian methods are appropriately utilized for design and analysis throughout the drug-development process.
Published in 1986: This book tells the story of how various persons and groups have successfully dealt with a type of problem which may threaten the lives and health of every group of humans - every community. The problem is that of a polluted environment.
"Accurate and fully explicit mathematical models and derivations make the proposed method truly universal irrespective of the geographical location and the kind of virus epidemic." Minvydas Ragulskis, Kaunas University of Technology, Lithuania The effects of a pandemic on public, personal and freight transport can be sudden and massive, and yet transport is vital to the functioning of an advanced economy and society. On the other hand, transport, due to social mobility, has a decisive influence on the speed and scope of epidemic spread. This book presents a complete methodology for assessing the hazards, and probability and risks of viral transmission on transport services, using as a detailed example the SARS-CoV-2 coronavirus pandemic. It gives proposals and recommendations for estimating human deaths caused by virus infection in transport. Significantly, it considers not only passenger transport but also freight transport, such as delivery or parcel services. The tools include a matrix of hazard assessment in various transportation services, with a methodology for estimating the probability of virus transmission through both droplets and surface contact. These allow estimation of the effects of infections and consequent epidemic risk in all kinds of transport services, including freight, and provide methods for forecasting and risk management which determine transport safety. Rafal Burdzik is a professor in the Faculty of Transport and Aviation Engineering at Silesian University of Technology, Poland, with more than 20 years of transport research experience.
This book is about building platforms for pandemic prediction. It provides an overview of probabilistic prediction for pandemic modeling based on a data-driven approach. It also provides guidance on building platforms with currently available technology using tools such as R, Shiny, and interactive plotting programs. The focus is on the integration of statistics and computing tools rather than on an in-depth analysis of all possibilities on each side. Readers can follow different reading paths through the book, depending on their needs. The book is meant as a basis for further investigation of statistical modelling, implementation tools, monitoring aspects, and software functionalities. Features: A general but parsimonious class of models to perform statistical prediction for epidemics, using a Bayesian approach Implementation of automated routines to obtain daily prediction results How to interactively visualize the model results Strategies for monitoring the performance of the predictions and identifying potential issues in the results Discusses the many decisions required to develop and publish online platforms Supplemented by an R package and its specific functionalities to model epidemic outbreaks The book is geared towards practitioners with an interest in the development and presentation of results in an online platform for statistical analysis of epidemiological data. The primary audience includes applied statisticians, biostatisticians, computer scientists, epidemiologists, and professionals interested in learning more about epidemic modelling in general, including the COVID-19 pandemic, and platform building. The authors are professors in the Statistics Department at Universidade Federal de Minas Gerais. Their research records include contributions to a number of areas of science, including epidemiology. Their research activities include books published with Chapman and Hall/CRC and papers in high-quality journals. They have also been involved with academic management of graduate programs in Statistics, and one of them is currently the President of the Brazilian Statistical Association.
The vector-borne Zika virus joins avian influenza, Ebola, and yellow fever as recent public health crises threatening pandemicity. Through a combination of stochastic modeling and economic geography, this book proposes that two key causes together explain the explosive spread of the worst of the vector-borne outbreaks. Ecosystems in which such pathogens are largely controlled by environmental stochasticity are being drastically streamlined by both agribusiness-led deforestation and deficits in public health and environmental sanitation. Consequently, a subset of infections that once burned out relatively quickly in local forests are now propagating across susceptible human populations whose vulnerability to infection is often exacerbated in structurally adjusted cities. The resulting outbreaks are characterized by greater global extent, duration, and momentum. As infectious diseases in an age of nation states and global health programs cannot, as much of the present modeling literature presumes, be described by interacting populations of host, vector, and pathogen alone, a series of control theory models is also introduced here. These models, useful to researchers and health officials alike, explicitly address interactions between government ministries and the pathogens they aim to control.
Statistical concepts provide the scientific framework for experimental studies, including randomized controlled trials. In order to design, monitor, analyze, and draw conclusions scientifically from such clinical trials, clinical investigators and statisticians should have a firm grasp of the requisite statistical concepts. The Handbook of Statistical Methods for Randomized Controlled Trials presents these statistical concepts in a logical sequence from beginning to end and can be used as a textbook in a course or as a reference on statistical methods for randomized controlled trials. Part I provides a brief historical background on modern randomized controlled trials and introduces statistical concepts central to planning, monitoring, and analysis of randomized controlled trials. Part II describes statistical methods for analysis of different types of outcomes and the associated statistical distributions used in testing the statistical hypotheses regarding the clinical questions. Part III describes some of the most widely used experimental designs for randomized controlled trials, including the sample size estimation necessary in planning. Part IV describes statistical methods used in interim analysis for monitoring of efficacy and safety data. Part V describes important issues in statistical analyses such as multiple testing, subgroup analysis, competing risks, and joint models for longitudinal markers and clinical outcomes. Part VI addresses selected miscellaneous topics in design and analysis, including multiple assignment randomization trials, analysis of safety outcomes, non-inferiority trials, incorporating historical data, and validation of surrogate outcomes.
The book offers comprehensive coverage of the most essential topics, including: A general overview of pandemics and their outbreak behavior. A detailed overview of CI techniques. Intelligent modeling, prediction and diagnostic measures for pandemics. Prognostic models. Post-pandemic socio-economic structure.
This title includes a number of Open Access chapters. The book provides a comprehensive perspective on obesity epidemiology, pathophysiology, and management. The chapters provide a better understanding of obesity and obesity-related diseases and offer an integrative framework for individualized dietary and exercise programs, behavior modification, pharmaceutical approaches, surgery, and population interventions to reduce the growing epidemic of obesity.
This new volume provides exhaustive knowledge on a wide range of natural products and holistic concepts that have proved promising in the treatment of leishmaniasis. Covering the major natural therapies as well as traditional formulations, over 300 medicinal plants and 150 isolated compounds that are reported to have beneficial results in the treatment of the disease are explored in this comprehensive work. The book also serves as an important resource on anti-inflammatory plants used to treat the various inflammatory conditions of the disease.
Single-Arm Phase II Survival Trial Design provides a comprehensive summary of the most commonly used methods for single-arm phase II trial design with time-to-event endpoints. Single-arm phase II trials are a key component of successfully developing advanced cancer drugs and treatments, particularly for targeted therapy and immunotherapy, in which time-to-event endpoints are often the primary endpoints. Most test statistics for single-arm phase II trial design with time-to-event endpoints are not available in commercial software. Key Features: Covers the most frequently used methods for single-arm phase II trial design with time-to-event endpoints in a comprehensive fashion. Provides new material on phase II immunotherapy trial design and phase II trial design with a TTP ratio endpoint. Illustrates trial designs with real clinical trial examples. Includes R code for all methods proposed in the book, enabling straightforward sample size calculation.
This book is a third-party evaluation of H1N1 prevention and control effects in China. Based on the characteristics of the H1N1 pandemic around the world and the current public health management system in China, this book evaluates the comprehensive effects by considering the countermeasures, the joint prevention and control mechanism operated by central and local government, the cost and benefit effects, and also the social influence during the whole process. Using interviews and questionnaires, it investigates central and local government, disease control and prevention centers, hospitals, communities, schools, and enterprises in Beijing, Fujian, Henan, Guangdong, and Sichuan provinces, and also presents responses from the public, patients, and close contacts to evaluate the overall effects from the perspective of different stakeholders. Assessment findings and policy suggestions on ways to improve the efficiency of the public health emergency system in China are included in the book. This book provides a good reference for researchers and officials in public management, crisis management, and public health studies.
The Multiplayer Classroom: Game Plans is a companion to The Multiplayer Classroom: Designing Coursework as a Game, now in its second edition from CRC Press. This book covers four multiplayer classroom projects played in the real world in real time to teach and entertain. They were funded by grants or institutions, collaborations between Lee Sheldon, as writer/designer, and subject matter experts in various fields. They are written to be accessible to anyone--designer, educator, or layperson--interested in game-based learning. The subjects are increasingly relevant in this day and age: physical fitness, Mandarin, cybersecurity, and especially an online class exploring culture and identity on the internet that is unlike any online class you have ever seen. Read the annotated, often-suspenseful stories of how each game, with its unique challenges, thrills, and spills, was built. Lee Sheldon began his writing career in television as a writer-producer, eventually writing more than 200 shows ranging from Charlie's Angels (writer) to Edge of Night (head writer) to Star Trek: The Next Generation (writer-producer). Having written and designed more than forty commercial and applied video games, Lee spearheaded the first full writing for games concentration in North America at Rensselaer Polytechnic Institute and the second writing concentration at Worcester Polytechnic Institute. He is a regular lecturer and consultant on game design and writing in the United States and abroad. His most recent commercial game, the award-winning The Lion's Song, is currently on Steam. For the past two years he consulted on an "escape room in a box," funded by NASA, that gives visitors to hundreds of science museums and planetariums the opportunity to play colonizers on the moon. He is currently writing his second mystery novel.
A thorough treatment of the statistical methods used to analyze doubly truncated data In The Statistical Analysis of Doubly Truncated Data, an expert team of statisticians delivers an up-to-date review of existing methods used to deal with randomly truncated data, with a focus on the challenging problem of random double truncation. The authors comprehensively introduce doubly truncated data before moving on to discussions of the latest developments in the field. The book offers readers examples with R code along with real data from astronomy, engineering, and the biomedical sciences to illustrate and highlight the methods described within. Linear regression models for doubly truncated responses are provided and the influence of the bandwidth in the performance of kernel-type estimators, as well as guidelines for the selection of the smoothing parameter, are explored. Fully nonparametric and semiparametric estimators are explored and illustrated with real data. R code for reproducing the data examples is also provided. The book also offers: A thorough introduction to the existing methods that deal with randomly truncated data Comprehensive explorations of linear regression models for doubly truncated responses Practical discussions of the influence of bandwidth in the performance of kernel-type estimators and guidelines for the selection of the smoothing parameter In-depth examinations of nonparametric and semiparametric estimators Perfect for statistical professionals with some background in mathematical statistics, biostatisticians, and mathematicians with an interest in survival analysis and epidemiology, The Statistical Analysis of Doubly Truncated Data is also an invaluable addition to the libraries of biomedical scientists and practitioners, as well as postgraduate students studying survival analysis.
"This is truly an outstanding book. [It] brings together all of the latest research in clinical trials methodology and how it can be applied to drug development.... Chang et al provide applications to industry-supported trials. This will allow statisticians in the industry community to take these methods seriously." Jay Herson, Johns Hopkins University The pharmaceutical industry's approach to drug discovery and development has rapidly transformed in the last decade from the more traditional Research and Development (R & D) approach to a more innovative approach in which strategies are employed to compress and optimize the clinical development plan and associated timelines. However, these strategies are generally being considered on an individual trial basis and not as part of a fully integrated overall development program. Such optimization at the trial level is somewhat near-sighted and does not ensure cost, time, or development efficiency of the overall program. This book seeks to address this imbalance by establishing a statistical framework for overall/global clinical development optimization and providing tactics and techniques to support such optimization, including clinical trial simulations. Provides a statistical framework for achieve global optimization in each phase of the drug development process. Describes specific techniques to support optimization including adaptive designs, precision medicine, survival-endpoints, dose finding and multiple testing. Gives practical approaches to handling missing data in clinical trials using SAS. Looks at key controversial issues from both a clinical and statistical perspective. Presents a generous number of case studies from multiple therapeutic areas that help motivate and illustrate the statistical methods introduced in the book. Puts great emphasis on software implementation of the statistical methods with multiple examples of software code (both SAS and R). 
It is important for statisticians to possess a deep knowledge of the drug development process beyond statistical considerations. For these reasons, this book incorporates both statistical and "clinical/medical" perspectives.
The world continues to lose more than a million lives each year to the HIV epidemic, and nearly two million individuals were infected with HIV in 2017 alone. The new Sustainable Development Goals, adopted by countries of the United Nations in September 2015, include a commitment to end the AIDS epidemic by 2030. Considerable emphasis on prevention of new infections and treatment of those living with HIV will be needed to make this goal achievable. With nearly 37 million people now living with HIV, it is a communicable disease that behaves like a noncommunicable disease. Nutritional management is integral to comprehensive HIV care and treatment. Improved nutritional status and weight gain can increase recovery and strength of individuals living with HIV/AIDS, improve dietary diversity and caloric intake, and improve quality of life. This book highlights evidence-based research linking nutrition and HIV and identifies research gaps to inform the development of guidelines and policies for the United Nations' Sustainable Development Goals. A comprehensive approach that includes nutritional interventions is likely to maximize the benefit of antiretroviral therapy in preventing HIV disease progression and other adverse outcomes in HIV-infected men and women. Modification of nutritional status has been shown to enhance the quality of life of those suffering HIV/AIDS, both physically in terms of improved body mass index and immunological markers, and psychologically, by improving symptoms of depression. While the primary focus for those infected should remain on antiretroviral treatment and increasing its availability and coverage, improvement of nutritional status plays a complementary role in the management of HIV infection.
Self-Controlled Case Series Studies: A Modelling Guide with R provides the first comprehensive account of the self-controlled case series (SCCS) method, a statistical technique for investigating associations between outcome events and time-varying exposures. The method only requires information from individuals who have experienced the event of interest, and automatically controls for multiplicative time-invariant confounders, even when these are unmeasured or unknown. It is increasingly being used in epidemiology, most frequently to study the safety of vaccines and pharmaceutical drugs. Key features of the book include: A thorough yet accessible description of the SCCS method, with mathematical details provided in separate starred sections. Comprehensive discussion of assumptions and how they may be verified. A detailed account of different SCCS models, extensions of the SCCS method, and the design of SCCS studies. Extensive practical illustrations and worked examples from epidemiology. Full computer code from the associated R package SCCS, which includes all the data sets used in the book. The book is aimed at a broad range of readers, including epidemiologists and medical statisticians who wish to use the SCCS method, and also researchers with an interest in statistical methodology. The three authors have been closely involved with the inception, development, popularisation and programming of the SCCS method.