As the analysis of big datasets in sports performance becomes a more entrenched part of the sporting landscape, so the value of sport scientists and analysts with formal training in data analytics grows. Sports Analytics: Analysis, Visualisation and Decision Making in Sports Performance provides the most authoritative and comprehensive guide available to the use of analytics in sport and its application in sports performance, coaching, talent identification and sports medicine. Employing an approach-based structure and integrating problem-based learning throughout the text, the book clearly defines the difference between analytics and analysis and goes on to explain and illustrate methods including: interactive visualisation; simulation and modelling; geospatial data analysis; spatiotemporal analysis; machine learning; genomic data analysis; and social network analysis. Offering a mixed-methods case study chapter, no other book offers the same level of scientific grounding or practical application in sports data analytics. Sports Analytics is essential reading for all students of sports analytics, and useful supplementary reading for students and professionals in talent identification and development, sports performance analysis, sports medicine and applied computer science.
Critical Theory and Qualitative Data Analysis in Education offers a path-breaking explanation of how critical theories can be used within the analysis of qualitative data to inform research processes, such as data collection, analysis, and interpretation. This contributed volume offers examples of qualitative data analysis techniques and exemplars of empirical studies that employ critical theory concepts in data analysis. By creating a clear and accessible bridge between data analysis and critical social theories, this book helps scholars and researchers effectively translate their research designs and findings to multiple audiences for more equitable outcomes and disruption of historical and contemporary inequality.
Introduces new and advanced methods of model discovery for time-series data using artificial intelligence. Implements topological approaches to distill "machine-intuitive" models from complex dynamics data. Introduces a new paradigm for a parsimonious model of a dynamical system without resorting to differential equations. Heralds a new era in data-driven science and engineering based on the operational concept of "computational intuition".
"Using Web and Paper Questionnaires for Data-Based Decision Making maintains the same strengths as Thomas?s previous book: it is clearly written, easy to understand, and has plenty of examples and guides for those implementing these ideas. Designed as a cookbook, it superbly enables educators to write, administer, and analyze a survey." Sandra L. Stein, Professor of Education Rider University Learn to use questionnaires and data-based decision making to support school improvement! How effectively are teachers implementing the new literacy program? What do parents think of the proposed homework policy? Is bullying a growing problem? Understanding how to create appropriate questionnaires is essential in making data-based decisions that improve school policies, processes, and procedures. Using Web and Paper Questionnaires for Data-Based Decision Making is a practical handbook for creating exceptional questionnaires for a variety of purposes, including data-based decision making. Author Susan J. Thomas provides authoritative guidance for planning a survey project, creating a questionnaire, gathering data, and analyzing and communicating the results to a variety of audiences. Features of this reader-friendly guidebook include
Offering suggestions for successfully using both Web-based and paper-based questionnaires, this practitioner-focused manual summarizes the key steps of successful survey projects and identifies critical success factors for each step. Designed primarily for principals, district-level administrators, and teachers, this invaluable resource is also suitable for policymakers, state-level administrators, and graduate students in education and social sciences.
As a result of the COVID-19 pandemic, medical statistics and public health data have become staples of newsfeeds worldwide, with infection rates, deaths, case fatality and the mysterious R figure featuring regularly. However, we don't all have the statistical background needed to translate this information into knowledge. In this lively account, Stephen Senn explains these statistical phenomena and demonstrates how statistics is essential to making rational decisions about medical care. The second edition has been thoroughly updated to cover developments of the last two decades and includes a new chapter on medical statistical challenges of COVID-19, along with additional material on infectious disease modelling and representation of women in clinical trials. Senn entertains with anecdotes, puzzles and paradoxes, while tackling big themes including: clinical trials and the development of medicines, life tables, vaccines and their risks or lack of them, smoking and lung cancer, and even the power of prayer.
"Auerbach and Silverstein write at a level that can be understood by beginners but is sophisticated enough for scholars...informative and interesting." Qualitative Data is meant for the novice researcher who needs guidance on what specifically to do when faced with a sea of information. It takes readers through the qualitative research process, beginning with an examination of the basic philosophy of qualitative research, and ending with planning and carrying out a qualitative research study. It provides an explicit, step-by-step procedure that will take the researcher from the raw text of interview data through data analysis and theory construction to the creation of a publishable work. The volume provides actual examples based on the authors' own work, including two published pieces in the appendix, so that readers can follow examples for each step of the process, from the project's inception to its finished product. The volume also includes an appendix explaining how to implement these data analysis procedures using NVIVO, a qualitative data analysis program.
This book is concerned with data in which the observations are independent and in which the response is multivariate. Anthony Atkinson has been Professor of Statistics at the London School of Economics since 1989. Before that he was a Professor at Imperial College, London. He is the author of Plots, Transformations, and Regression, co-author of Optimum Experimental Designs, and joint editor of The Fascination of Statistics, a volume celebrating the centenary of the International Statistical Institute. Professor Atkinson has served as editor of The Journal of the Royal Statistical Society, Series B and as associate editor of Biometrika and Technometrics. He has published well over 100 articles in these and other journals including The Annals of Statistics, Biometrics, The Journal of the American Statistical Association, and Statistics and Computing. Marco Riani, after receiving his Ph.D. in Statistics in 1995 from the University of Florence, joined the Faculty of Economics at Parma University as a postdoctoral fellow. In 1997 he won the prize for the best Italian Ph.D. thesis in Statistics. He is currently Associate Professor of Statistics at the University of Parma. He has published in Technometrics, The Journal of Computational and Graphical Statistics, The Journal of Business and Economic Statistics, The Journal of Forecasting, Environmetrics, Computational Statistics and Data Analysis, Metron, and other journals. From the reviews: "The book requires knowledge of multivariate statistical methods, because it provides only basic background information on the methods considered (although with excellent references for further reading at the end of each chapter). Each chapter also includes exercises with solutions...This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended."
Technometrics, November 2004 "This book is full of interest for anyone undertaking multivariate analyses, clearly emphasizing that uncritical use of standard methods can be misleading." Short Book Reviews of the International Statistical Institute, December 2004 "This book is an interesting complement to various textbooks on multivariate statistics." Biometrics, December 2005 "This book discusses multivariate data from a different perspective. ... it is an excellent book for researchers with interests in multivariate data and cluster analysis. It may also be a good reference for students of advanced statistics and practitioners working with large volumes of data ..." (Kassim S. Mwitondi, Journal of Applied Statistics, Vol. 32 (4), 2005) "This is a companion to an earlier book ... both of which feature many informative graphs. Here, the forward search has been applied in detail to classical multivariate approaches used with Gaussian data. ... One valuable feature of the book is the way that the illustrations concentrate on a relatively small number ... . This makes it easy to concentrate on the application ... . The implications of this book also strengthen the importance of data visualization, as well as providing a valuable approach to visualization." (Paul Hewson, Journal of the Royal Statistical Society Series A, Vol. 168 (2), 2005) "This book is a companion to Atkinson ... . The objective is to identify outliers, appreciate their influence ... which would result in an overall improvement. ... Graphical tools are widely used, resulting in three hundred and ninety figures. Each chapter is followed by extensive exercises and their solutions, and the book could be used as an advanced textbook for multivariate analysis courses. Web-sites provide the relevant software ... . This book is full of interest for anyone undertaking multivariate analyses ..." (B.J.T. Morgan, Short Book Reviews International Statistical Institute, Vol. 24 (3), 2004) "This book discusses forward search (FS), a method using graphs to explore and model continuous multivariate data ... . Its viewpoint is toward applications, and it demonstrates the merits of FS using a variety of examples, with a thorough discussion of statistical issues and interpretation of results. ... This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended." (Tena Ipsilantis Katsaounis, Technometrics, Vol. 46 (4), November, 2004) "The theoretical exercises with detailed solutions at the end of each chapter are extremely useful. I would recommend this book to practitioners who analyze moderately sized multivariate data. Of course, anyone associated with the application of statistics should find the book interesting to read." (Tathagata Banerjee, Journal of the American Statistical Association, March 2006)
Classification and regression trees (CART) is one of several contemporary statistical techniques with good promise for research in many academic fields. There are very few books on CART, especially on applied CART. This book, as a good practical primer with a focus on applications, introduces the relatively new statistical technique of CART as a powerful analytical tool. The easy-to-understand (non-technical) language and illustrative graphs and tables, as well as the use of the popular statistical software program SPSS, appeal to readers without a strong statistical background. This book helps readers understand the foundation, the operation, and the interpretation of CART analysis, thus becoming knowledgeable consumers and skillful users of CART. The chapter on advanced CART procedures not yet well discussed in the literature allows readers to effectively seek further empowerment of their research designs by extending the analytical power of CART to a whole new level. This highly practical book is specifically written for academic researchers, data analysts, and graduate students in many disciplines such as economics, social sciences, medical sciences, and sport sciences who do not have a strong statistical background but still strive to take full advantage of CART as a powerful analytical tool for research in their fields.
For many organizations data is a by-product, but for the smarter ones it is the heartbeat of their business. Most businesses have a wealth of data buried in their systems which, if used effectively, could increase revenue, reduce costs and risk and improve customer satisfaction and employee experience. Beginning with how to choose projects which reflect your organization's goals and how to make the business case for investing in data, this book then takes the reader through the five 'waves' of organizational data maturity. It takes the reader from getting started on the data journey with some quick wins, to how data can help your business become a leading innovator which systematically outperforms competitors. Data and Analytics Strategy for Business outlines how to build consistent, high-quality sources of data which will create business value and explores how automation, AI and machine learning can improve performance and decision making. Filled with real-world examples and case studies, this book is a stage-by-stage guide to designing and implementing a results-driven data strategy.
Visual displays play a crucial role in knowledge generation and communication. The purpose of the volume is to provide researchers with a framework that helps them use visual displays to organize and interpret data, and to communicate their findings in a comprehensible way within different research (e.g., quantitative, mixed methods) and testing traditions that improves the presentation and understanding of findings. Further, this book includes contributions from leading scholars in testing and quantitative, qualitative, and mixed methods research, and results reporting. The volume's focal question is: What are the best principles and practices for the use of visual displays in the research and testing process, which broadly includes the analysis, organization, interpretation, and communication of data? The volume is organized into four sections. Section I provides a rationale for this volume; namely, that including visual displays in research and testing can enhance comprehension and processing efficiency. Section II addresses theoretical frameworks and universal design principles for visual displays. Section III examines the use of visual displays in quantitative, qualitative, and mixed methods research. Section IV focuses on using visual displays to report testing and assessment data.
Praise for Envisioning the Survey Interview of the Future "This book is an excellent introduction to some brave new technologies . . . and their possible impacts on the way surveys might be conducted. Anyone interested in the future of survey methodology should read this book." Norman M. Bradburn, PhD, National Opinion Research Center, University of Chicago "Envisioning the Survey Interview of the Future gathers some of the brightest minds in alternative methods of gathering self-report data, with an eye toward the future self-report sample survey. Conrad and Schober, by assembling a group of talented survey researchers and creative inventors of new software-based tools to gather information from human subjects, have created a volume of importance to all interested in imagining future ways of interviewing." Robert M. Groves, PhD, Survey Research Center, University of Michigan This collaboration provides extensive insight into the impact of communication technology on survey research. As previously unimaginable communication technologies rapidly become commonplace, survey researchers are presented with both opportunities and obstacles when collecting and interpreting data based on human response. Envisioning the Survey Interview of the Future explores the increasing influence of emerging technologies on the data collection process and, in particular, self-report data collection in interviews, providing the key principles for using these new modes of communication. With contributions written by leading researchers in the fields of survey methodology and communication technology, this compilation integrates the use of modern technological developments with established social science theory.
The book familiarizes readers with these new modes of communication by discussing the challenges to accuracy, legitimacy, and confidentiality that researchers must anticipate while collecting data, and it also provides tools for adopting new technologies in order to obtain high-quality results with minimal error or bias. Envisioning the Survey Interview of the Future addresses questions that researchers in survey methodology and communication technology must consider, such as: How and when should new communication technology be adopted in the interview process? What are the principles that extend beyond particular technologies? Why do respondents answer questions from a computer differently than questions from a human interviewer? How can systems adapt to respondents' thinking and feeling? What new ethical concerns about privacy and confidentiality are raised from using new communication technologies? With its multidisciplinary approach, extensive discussion of existing and future technologies, and practical guidelines for adopting new technology, Envisioning the Survey Interview of the Future is an essential resource for survey methodologists, questionnaire designers, and communication technologists in any field that conducts survey research. It also serves as an excellent supplement for courses in research methods at the upper-undergraduate or graduate level.
Making sense of sports performance data can be a challenging task but is nevertheless an essential part of performance analysis investigations. Focusing on techniques used in the analysis of sport performance, this book introduces the fundamental principles of data analysis, explores the most important tools used in data analysis, and offers guidance on the presentation of results. The book covers key topics such as:
The book includes worked examples from real sport, offering clear guidance to the reader and bringing the subject to life. This book is invaluable reading for any student, researcher or analyst working in sport performance or undertaking a sport-related research project or methods course.
This dissertation comprises five studies analyzing daily stock returns of listed firms. Studies one and two shed light on corporate diversification through M&A and how related risk dynamics affect shareholder wealth. Carrying over the risk analysis methodology 'GARCH' to external events in studies three and four, the author individually scrutinizes the adverse implications of bank failures and bailouts in the 2007-2009 financial crisis. Finding opposing return shocks, he identifies the limits of the 'symmetric' GARCH. As observed in stock return data, volatility reacts asymmetrically to positive and negative return shocks. The advanced EGARCH incorporates this so-called 'leverage effect'. Applying the EGARCH in his final study, the author can simultaneously scrutinize the adverse bank events with an appropriate econometric foundation.
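The asymmetry the blurb describes can be sketched in a few lines of Python. This is a minimal illustration of the EGARCH(1,1) log-variance recursion with made-up parameter values, not the dissertation's estimated model:

```python
import math

# EGARCH(1,1) log-variance recursion (illustrative parameters, not estimated):
#   log(sigma2_t) = omega + alpha*(|z| - E|z|) + gamma*z + beta*log(sigma2_prev)
# With gamma < 0, a negative standardized shock z raises next-period
# volatility more than a positive shock of equal size -- the 'leverage
# effect' that symmetric GARCH cannot capture.

E_ABS_Z = math.sqrt(2 / math.pi)  # E|z| for a standard-normal shock

def egarch_next_logvar(z, logvar_prev, omega=-0.1, alpha=0.1,
                       gamma=-0.08, beta=0.95):
    """One step of the EGARCH(1,1) log-variance recursion."""
    return omega + alpha * (abs(z) - E_ABS_Z) + gamma * z + beta * logvar_prev

# A negative 2-sigma shock raises volatility more than a positive one:
bad_news = egarch_next_logvar(z=-2.0, logvar_prev=0.0)
good_news = egarch_next_logvar(z=+2.0, logvar_prev=0.0)
print(bad_news > good_news)  # True: the leverage effect
```

Because the recursion models the log of variance, no parameter restrictions are needed to keep variance positive, which is one practical advantage of EGARCH over plain GARCH.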
This textbook presents an innovative new perspective on the economics of development, including insights from a broad range of disciplines. It starts with the current state of affairs, a discussion of data availability, reliability, and analysis, and an historic overview of the deep influence of fundamental factors on human prosperity. Next, it focuses on the role of human interaction in terms of trade, capital, and knowledge flows, as well as the associated implications for institutions, contracts, and finance. The book also highlights differences in the development paths of emerging countries in order to provide a better understanding of the concepts of development and the Millennium Development Goals. Insights from other disciplines are used to help understand human development with regard to other issues, such as inequalities, health, demography, education, and poverty. The book concludes by emphasizing the importance of connections, location, and human interaction in determining future prosperity.
Throughout the world, voters lack access to information about politicians, government performance, and public services. Efforts to remedy these informational deficits are numerous. Yet do informational campaigns influence voter behavior and increase democratic accountability? Through the first project of the Metaketa Initiative, sponsored by the Evidence in Governance and Politics (EGAP) research network, this book aims to address this substantive question and at the same time introduce a new model for cumulative learning that increases coordination among otherwise independent researcher teams. It presents the overall results (using meta-analysis) from six independently conducted but coordinated field experimental studies, the results from each individual study, and the findings from a related evaluation of whether practitioners utilize this information as expected. It also discusses lessons learned from EGAP's efforts to coordinate field experiments, increase replication of theoretically important studies across contexts, and increase the external validity of field experimental research.
" The Data Quality Assessment Framework "shows you how to measure and monitor data quality, ensuring quality over time. You ll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one time activities will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides practical guidance on how to apply the DQAF within any organization enabling you to prioritize measurements and effectively report on results. Strategies for using data measurement to govern and improve the quality of data and guidelines for applying the framework within a data asset are included. You ll come away able to prioritize which measurement types to implement, knowing where to place them in a data flow and how frequently to measure. Common conceptual models for defining and storing of data quality results for purposes of trend analysis are also included as well as generic business requirements for ongoing measuring and monitoring including calculations and comparisons that make the measurements meaningful and help understand trends and detect anomalies.
An interdisciplinary look at interaction in the standardized survey interview This volume presents a theoretical and empirical inquiry into the interaction between interviewers and respondents in standardized research interviews. The editors include a range of articles that showcase the perspectives of conversation analysts, ethnomethodologists, and survey methodologists, to gain a more complete picture of interaction in the standardized survey interview than was previously available. This book is the first to focus solely on the interactional substrate or conversational architecture of interviewing. It offers a range of insights into standardized interviewing as interaction and forms a bridge between survey methodology and the study of interaction and tacit practices. The articles are arranged into four subject groups: theoretical orientations, survey recruitment, interaction during the substantive interview, and interaction and survey data quality. Articles include:
Standardization and Tacit Knowledge serves as a one-of-a-kind reference for survey methodologists, linguists, and researchers and also as a postgraduate coursebook in survey interviewing.
Public Policy Analytics: Code & Context for Data Science in Government teaches readers how to address complex public policy problems with data and analytics using reproducible methods in R. Each of the eight chapters provides a detailed case study, showing readers: how to develop exploratory indicators; understand 'spatial process' and develop spatial analytics; how to develop 'useful' predictive analytics; how to convey these outputs to non-technical decision-makers through the medium of data visualization; and why, ultimately, data science and 'Planning' are one and the same. A graduate-level introduction to data science, this book will appeal to researchers and data scientists at the intersection of data analytics and public policy, as well as readers who wish to understand how algorithms will affect the future of government.
Achieve successful digital transformation with this authoritative guide designed specifically for established organizations. At a time where even the most recognized business models are under threat, organizations risk devastation if they do not transition successfully to the new digital reality. Yet what works for digital natives does not always work for established organizations. Recognized as one of the world's top global executives leading innovative transformation, Neetan Chopra's deep experience of steering organizations through digital disruption drives the practical approach of Accelerated Digital Transformation. Having designed transformation journeys, overcome setbacks and driven outcomes within multiple leading companies, Neetan Chopra tackles key factors for established organizations including inertia, impetus, outcomes, digital capabilities and culture. The book is underpinned by a tried and tested framework that will guide readers step by step through the entire digital transformation journey. This will be an essential resource for leaders, managers and practitioners leading and executing digital transformation.
Distribution-free resampling methods (permutation tests, decision trees, and the bootstrap) are used today in virtually every research area. A Practitioner's Guide to Resampling for Data Analysis, Data Mining, and Modeling explains how to use the bootstrap to estimate the precision of sample-based estimates and to determine sample size, data permutations to test hypotheses, and the readily interpreted decision tree to replace arcane regression methods. Highlights
Statistics practitioners will find the methods described in the text easy to learn and to apply in a broad range of subject areas, from A for Accounting, Agriculture, Anthropology, Aquatic science, Archaeology, Astronomy, and Atmospheric science to V for Virology and Vocational Guidance, and Z for Zoology. Practitioners and research workers in the biomedical, engineering and social sciences, as well as advanced students in biology, business, dentistry, medicine, psychology, public health, sociology, and statistics will find an easily grasped guide to estimation, testing hypotheses and model building.
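The bootstrap idea mentioned above, estimating the precision of a sample-based estimate by resampling, fits in a short sketch. This is a generic illustration with made-up data, not the book's own code:

```python
import random

# Bootstrap standard error of the sample mean: resample the data with
# replacement many times, compute the mean of each resample, and take the
# standard deviation of those resampled means.

def bootstrap_se(data, n_resamples=2000, seed=42):
    """Bootstrap estimate of the standard error of the mean of `data`."""
    rng = random.Random(seed)  # seeded for reproducibility
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    grand = sum(means) / len(means)
    var = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    return var ** 0.5

sample = [2.1, 3.4, 2.9, 4.0, 3.3, 2.7, 3.8, 3.1]
print(round(bootstrap_se(sample), 3))
```

No distributional assumption is made anywhere, which is exactly the distribution-free property the blurb highlights.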
This is the first comprehensive overview of the 'science of science,' an emerging interdisciplinary field that relies on big data to unveil the reproducible patterns that govern individual scientific careers and the workings of science. It explores the roots of scientific impact, the role of productivity and creativity, when and what kind of collaborations are effective, the impact of failure and success in a scientific career, and what metrics can tell us about the fundamental workings of science. The book relies on data to draw actionable insights, which can be applied by individuals to further their career or decision makers to enhance the role of science in society. With anecdotes and detailed, easy-to-follow explanations of the research, this book is accessible to all scientists and graduate students, policymakers, and administrators with an interest in the wider scientific enterprise.
This book has won the CHOICE Outstanding Academic Title award 2014. A century of education and education reform along with the last three decades of high-stakes testing and accountability reveals a disturbing paradox: Education has a steadfast commitment to testing and grading despite decades of research, theory, and philosophy that reveal the corrosive consequences of both testing and grading within an education system designed to support human agency and democratic principles. This edited volume brings together a collection of essays that confronts the failure of testing and grading and then offers practical and detailed examinations of implementing at the macro and micro levels of education teaching and learning free of the weight of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of testing and grading on social justice, race, class, and gender; and the role of testing and grading in perpetuating a deficit perspective of children, learning, race, and class. The chapters fall under two broad sections: Part I: "Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education" includes essays on the historical, theoretical, and philosophical arguments against testing and grading; Part II: "De-Grading and De-Testing in a Time of High-Stakes Education Reform" presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
Provides thorough coverage and comparison of a wide array of time series models and methods: exponential smoothing, Holt-Winters, ARMA and ARIMA, deep learning models including RNNs, LSTMs, and GRUs, and ensemble models composed of combinations of these models. Introduces the factor table representation of ARMA and ARIMA models. This representation is not available in any other book at this level and is extremely useful in both practice and pedagogy. Uses real-world examples that can be readily found via web links from sources such as the US Bureau of Statistics, Department of Transportation and the World Bank. There is an accompanying R package that is easy to use and requires little or no previous R experience. The package implements the wide variety of models and methods presented in the book and has tremendous pedagogical use.
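The simplest of the methods listed, exponential smoothing, can be sketched in a few lines. This is a generic Python illustration (the book itself ships an R package); the series and smoothing constant are made up:

```python
# Simple exponential smoothing: each smoothed value is a weighted blend of
# the newest observation and the previous smoothed value,
#   s_t = alpha * x_t + (1 - alpha) * s_{t-1},
# so older observations decay geometrically in influence.

def exp_smooth(series, alpha=0.5):
    """Return the exponentially smoothed version of `series`."""
    smoothed = [series[0]]  # initialise with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

print(exp_smooth([10.0, 12.0, 11.0, 13.0]))  # [10.0, 11.0, 11.0, 12.0]
```

Larger alpha tracks the data more closely; smaller alpha smooths more aggressively, the basic trade-off that Holt-Winters extends with trend and seasonal terms.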
Leverage the power of Talent Intelligence (TI) to make evidence-informed decisions that drive business performance by using data about people, skills, jobs, business functions and geographies. Improved access to people and business data has created huge opportunities for the HR function. However, simply having access to this data is not enough. HR professionals need to know how to analyse the data, know what questions to ask of it and where and how the insights from the data can add the most value. Talent Intelligence is a practical guide that explains everything HR professionals need to know to achieve this. It outlines what Talent Intelligence (TI) is, why it's important, and how to use it to improve business results, and includes guidance on how HR professionals can build the business case for it. This book also explains how and why talent intelligence is different from workforce planning, sourcing research and standard predictive HR analytics, and shows how to assess where in the organization talent intelligence can have the biggest impact and how to demonstrate the results to all stakeholders. Most importantly, this book covers KPIs and metrics for success, short-term and long-term TI goals, an outline of what success looks like and the skills needed for effective Talent Intelligence. It also features case studies from organizations including Philips, Barclays and Kimberly-Clark.