As the analysis of big datasets in sports performance becomes a more entrenched part of the sporting landscape, so the value of sport scientists and analysts with formal training in data analytics grows. Sports Analytics: Analysis, Visualisation and Decision Making in Sports Performance provides the most authoritative and comprehensive guide available to the use of analytics in sport and its application in sports performance, coaching, talent identification and sports medicine. Employing an approach-based structure and integrating problem-based learning throughout the text, the book clearly defines the difference between analytics and analysis and goes on to explain and illustrate methods including: interactive visualisation; simulation and modelling; geospatial data analysis; spatiotemporal analysis; machine learning; genomic data analysis; and social network analysis. It also offers a mixed-methods case study chapter, and no other book provides the same level of scientific grounding or practical application in sports data analytics. Sports Analytics is essential reading for all students of sports analytics, and useful supplementary reading for students and professionals in talent identification and development, sports performance analysis, sports medicine and applied computer science.
In this fascinating follow-up to the bestselling Information is Beautiful and Knowledge is Beautiful, the king of infographics David McCandless uses spectacular visuals to give us all a bit of good news. We are living in the Information Age, in which we are constantly bombarded with data - on television, in print and online. How can we relate to this mind-numbing overload? Enter David McCandless and his amazing infographics: simple, elegant ways to understand information too complex or abstract to grasp any way but visually. In his unique signature style, he creates dazzling displays that blend facts with their connections, contexts and relationships, making information meaningful, entertaining - and beautiful. In his highly anticipated third book, McCandless illustrates positive news from around the world, for an informative, engaging and uplifting collection of new infographic art.
A comprehensive compilation of new developments in data linkage methodology. The increasing availability of large administrative databases has led to a dramatic rise in the use of data linkage, yet the standard texts on linkage are still those which describe the seminal work from the 1950s-60s, with some updates. Linkage and analysis of data across sources remains problematic due to lack of discriminatory and accurate identifiers, missing data and regulatory issues. Recent developments in data linkage methodology have concentrated on bias and analysis of linked data, novel approaches to organising relationships between databases and privacy-preserving linkage. Methodological Developments in Data Linkage brings together a collection of contributions from members of the international data linkage community, covering cutting-edge methodology in this field. It presents opportunities and challenges provided by linkage of large and often complex datasets, including analysis problems, legal and security aspects, models for data access and the development of novel research areas. New methods for handling uncertainty in analysis of linked data, solutions for anonymised linkage and alternative models for data collection are also discussed. Key features: presents cutting-edge methods for a topic of increasing importance to a wide range of research areas, with applications to data linkage systems internationally; covers the essential issues associated with data linkage today; includes examples based on real data linkage systems, highlighting the opportunities, successes and challenges that the increasing availability of linked data provides; and takes a novel approach incorporating technical aspects of linkage, management and analysis of linked data. This book will be of core interest to academics, government employees, data holders, data managers, analysts and statisticians who use administrative data.
It will also appeal to researchers in a variety of areas, including epidemiology, biostatistics, social statistics, informatics, policy and public health.
This book is concerned with data in which the observations are independent and in which the response is multivariate. Anthony Atkinson has been Professor of Statistics at the London School of Economics since 1989. Before that he was a Professor at Imperial College, London. He is the author of Plots, Transformations, and Regression, co-author of Optimum Experimental Designs, and joint editor of The Fascination of Statistics, a volume celebrating the centenary of the International Statistical Institute. Professor Atkinson has served as editor of The Journal of the Royal Statistical Society, Series B and as associate editor of Biometrika and Technometrics. He has published well over 100 articles in these and other journals including The Annals of Statistics, Biometrics, The Journal of the American Statistical Association, and Statistics and Computing. Marco Riani, after receiving his Ph.D. in Statistics in 1995 from the University of Florence, joined the Faculty of Economics at Parma University as postdoctoral fellow. In 1997 he won the prize for the best Italian Ph.D. thesis in Statistics. He is currently Associate Professor of Statistics in the University of Parma. He has published in Technometrics, The Journal of Computational and Graphical Statistics, The Journal of Business and Economic Statistics, The Journal of Forecasting, Environmetrics, Computational Statistics and Data Analysis, Metron, and other journals. From the reviews: "The book requires knowledge of multivariate statistical methods, because it provides only basic background information on the methods considered (although with excellent references for further reading at the end of each chapter). Each chapter also includes exercises with solutions... This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended."
Technometrics, November 2004 "This book is full of interest for anyone undertaking multivariate analyses, clearly emphasizing that uncritical use of standard methods can be misleading." Short Book Reviews of the International Statistical Institute, December 2004 "This book is an interesting complement to various textbooks on multivariate statistics." Biometrics, December 2005 "This book discusses multivariate data from a different perspective. ... it is an excellent book for researchers with interests in multivariate data and cluster analysis. It may also be a good reference for students of advanced statistics and practitioners working with large volumes of data ..." (Kassim S. Mwitondi, Journal of Applied Statistics, Vol. 32 (4), 2005) "This is a companion to an earlier book ... both of which feature many informative graphs. Here, the forward search has been applied in detail to classical multivariate approaches used with Gaussian data. ... One valuable feature of the book is the way that the illustrations concentrate on a relatively small number ... This makes it easy to concentrate on the application ... The implications of this book also strengthen the importance of data visualization, as well as providing a valuable approach to visualization." (Paul Hewson, Journal of the Royal Statistical Society Series A, Vol. 168 (2), 2005) "This book is a companion to Atkinson ... The objective is to identify outliers, appreciate their influence ... which would result in an overall improvement. ... Graphical tools are widely used, resulting in three hundred and ninety figures. Each chapter is followed by extensive exercises and their solutions, and the book could be used as an advanced textbook for multivariate analysis courses. Websites provide the relevant software ... This book is full of interest for anyone undertaking multivariate analyses ..." (B.J.T. Morgan, Short Book Reviews International Statistical Institute, Vol. 24 (3), 2004) "This book discusses forward search (FS), a method using graphs to explore and model continuous multivariate data ... Its viewpoint is toward applications, and it demonstrates the merits of FS using a variety of examples, with a thorough discussion of statistical issues and interpretation of results. ... This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended." (Tena Ipsilantis Katsaounis, Technometrics, Vol. 46 (4), November, 2004) "The theoretical exercises with detailed solutions at the end of each chapter are extremely useful. I would recommend this book to practitioners who analyze moderately sized multivariate data. Of course, anyone associated with the application of statistics should find the book interesting to read." (Tathgata Banerjee, Journal of the American Statistical Association, March 2006)
A century of education and education reform, along with more than three decades of high-stakes testing and accountability, reveals a disturbing paradox: education has a steadfast commitment to testing and grading. This commitment persists despite ample research, theory, and philosophy revealing the corrosive consequences of both testing and grading in an education system designed to support human agency and democratic principles. This revised edited volume brings together a collection of updated and new essays that confronts the failure of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of tests and grades on social justice, race, class, and gender; and the role that they play in perpetuating a deficit perspective of children. The chapters fall under two broad sections. Part I, Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education, includes essays on the historical, theoretical, and philosophical arguments against testing and grading. Part II, De-Grading and De-Testing in a Time of High-Stakes Education Reform, presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
Optimization techniques are at the core of data science, including data analysis and machine learning. An understanding of basic optimization techniques and their fundamental properties provides important grounding for students, researchers, and practitioners in these areas. This text covers the fundamentals of optimization algorithms in a compact, self-contained way, focusing on the techniques most relevant to data science. An introductory chapter demonstrates that many standard problems in data science can be formulated as optimization problems. Next, many fundamental methods in optimization are described and analyzed, including: gradient and accelerated gradient methods for unconstrained optimization of smooth (especially convex) functions; the stochastic gradient method, a workhorse algorithm in machine learning; the coordinate descent approach; several key algorithms for constrained optimization problems; algorithms for minimizing nonsmooth functions arising in data science; foundations of the analysis of nonsmooth functions and optimization duality; and the back-propagation approach, relevant to neural networks.
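The unconstrained smooth case described above can be sketched in a few lines. The following is an illustrative example of my own, not taken from the book: gradient descent with a 1/L step size applied to a least-squares problem, one of the standard data-science formulations such a text opens with.

```python
import numpy as np

# f(x) = 0.5 * ||A x - b||^2 is smooth and convex;
# its gradient is A.T @ (A x - b), with Lipschitz constant
# L equal to the squared spectral norm of A.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)

x = np.zeros(5)
L = np.linalg.norm(A, 2) ** 2              # spectral norm squared
for _ in range(500):
    x -= (1.0 / L) * (A.T @ (A @ x - b))   # gradient step, size 1/L

x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # closed-form solution
print(np.allclose(x, x_star, atol=1e-6))       # True
```

The constant step 1/L is the textbook safe choice for a gradient whose Lipschitz constant is known; accelerated or stochastic variants trade this guarantee for speed or scalability.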
Petty trade helped vast numbers of people to survive the crisis faced by post-Soviet Russia. The book analyses how this survival technique was carried out in practice. On the basis of his fieldwork research, the author shows how people coped with rapid social change and places their activities within a context of government policies, migration flows and entrepreneurial strategies. "This is an original work based on extensive fieldwork research. Wielecki skillfully intertwined "ethnographic meat" with "the bones of theory", which has resulted in a "flesh-and-blood" anthropology." Michal Buchowski "This is an immensely insightful exploration of petty trade in post-Soviet Russia. The author laces his genuine ethnographic work in a coherent account of the concepts of uncertainty, embeddedness, and informal economy." Violetta Zentai
Since the early days of performance assessment, human ratings have been subject to various forms of error and bias. Expert raters often come up with different ratings for the very same performance and it seems that assessment outcomes largely depend upon which raters happen to assign the rating. This book provides an introduction to many-facet Rasch measurement (MFRM), a psychometric approach that establishes a coherent framework for drawing reliable, valid, and fair inferences from rater-mediated assessments, thus answering the problem of fallible human ratings. Revised and updated throughout, the Second Edition includes a stronger focus on the Facets computer program, emphasizing the pivotal role that MFRM plays for validating the interpretations and uses of assessment outcomes.
Praise for Envisioning the Survey Interview of the Future "This book is an excellent introduction to some brave new technologies . . . and their possible impacts on the way surveys might be conducted. Anyone interested in the future of survey methodology should read this book." Norman M. Bradburn, PhD, National Opinion Research Center, University of Chicago "Envisioning the Survey Interview of the Future gathers some of the brightest minds in alternative methods of gathering self-report data, with an eye toward the future self-report sample survey. Conrad and Schober, by assembling a group of talented survey researchers and creative inventors of new software-based tools to gather information from human subjects, have created a volume of importance to all interested in imagining future ways of interviewing." Robert M. Groves, PhD, Survey Research Center, University of Michigan This collaboration provides extensive insight into the impact of communication technology on survey research. As previously unimaginable communication technologies rapidly become commonplace, survey researchers are presented with both opportunities and obstacles when collecting and interpreting data based on human response. Envisioning the Survey Interview of the Future explores the increasing influence of emerging technologies on the data collection process and, in particular, self-report data collection in interviews, providing the key principles for using these new modes of communication. With contributions written by leading researchers in the fields of survey methodology and communication technology, this compilation integrates the use of modern technological developments with established social science theory.
The book familiarizes readers with these new modes of communication by discussing the challenges to accuracy, legitimacy, and confidentiality that researchers must anticipate while collecting data, and it also provides tools for adopting new technologies in order to obtain high-quality results with minimal error or bias. Envisioning the Survey Interview of the Future addresses questions that researchers in survey methodology and communication technology must consider, such as: How and when should new communication technology be adopted in the interview process? What are the principles that extend beyond particular technologies? Why do respondents answer questions from a computer differently than questions from a human interviewer? How can systems adapt to respondents' thinking and feeling? What new ethical concerns about privacy and confidentiality are raised from using new communication technologies? With its multidisciplinary approach, extensive discussion of existing and future technologies, and practical guidelines for adopting new technology, Envisioning the Survey Interview of the Future is an essential resource for survey methodologists, questionnaire designers, and communication technologists in any field that conducts survey research. It also serves as an excellent supplement for courses in research methods at the upper-undergraduate or graduate level.
What happens to risk as the economic horizon goes to zero and risk is seen as an exposure to a change in state that may occur instantaneously at any time? All activities that have been undertaken statically at a fixed finite horizon can now be reconsidered dynamically at a zero time horizon, with arrival rates at the core of the modeling. This book, aimed at practitioners and researchers in financial risk, delivers the theoretical framework and various applications of the newly established dynamic conic finance theory. The result is a nonlinear non-Gaussian valuation framework for risk management in finance. Risk-free assets disappear and low risk portfolios must pay for their risk reduction with negative expected returns. Hedges may be constructed to enhance value by exploiting risk interactions. Dynamic trading mechanisms are synthesized by machine learning algorithms. Optimal exposures are designed for option positioning simultaneously across all strikes and maturities.
Introduces new and advanced methods of model discovery for time-series data using artificial intelligence. Implements topological approaches to distill "machine-intuitive" models from complex dynamics data. Introduces a new paradigm for a parsimonious model of a dynamical system without resorting to differential equations. Heralds a new era in data-driven science and engineering based on the operational concept of "computational intuition".
Making sense of sports performance data can be a challenging task but is nevertheless an essential part of performance analysis investigations. Focusing on techniques used in the analysis of sport performance, this book introduces the fundamental principles of data analysis, explores the most important tools used in data analysis, and offers guidance on the presentation of results. The book covers key topics such as:
The book includes worked examples from real sport, offering clear guidance to the reader and bringing the subject to life. This book is invaluable reading for any student, researcher or analyst working in sport performance or undertaking a sport-related research project or methods course.
This textbook bypasses the need for advanced mathematics by providing in-text computer code, allowing students to explore Bayesian data analysis without the calculus background normally considered a prerequisite for this material. Now, students can use the best methods without needing advanced mathematical techniques. This approach goes beyond "frequentist" concepts of p-values and null hypothesis testing, using the full power of modern probability theory to solve real-world problems. The book offers a fully self-contained course, which demonstrates analysis techniques throughout with worked examples crafted specifically for students in the behavioral and neural sciences. The book presents two general algorithms that help students solve the measurement and model selection (also called "hypothesis testing") problems most frequently encountered in real-world applications.
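As a flavour of the calculus-free, code-driven approach the blurb describes, here is a minimal sketch of my own (not an example from the book): the posterior for a coin's bias computed by evaluating Bayes' rule on a grid of candidate values rather than by analytic integration.

```python
import numpy as np

# Grid approximation: no calculus needed, just arithmetic on arrays.
theta = np.linspace(0, 1, 1001)          # candidate bias values
prior = np.ones_like(theta)              # flat prior over [0, 1]
heads, flips = 7, 10
likelihood = theta**heads * (1 - theta)**(flips - heads)
posterior = prior * likelihood
posterior /= posterior.sum()             # normalise numerically

mean = (theta * posterior).sum()         # posterior mean of the bias
print(round(mean, 3))                    # 0.667, matching Beta(8, 4)'s mean 2/3
```

The same pattern (evaluate, multiply, normalise) extends to any one- or two-parameter model, which is precisely the kind of shortcut that lets students work with posteriors before they have the calculus for conjugate derivations.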
This book has won the CHOICE Outstanding Academic Title award 2014. A century of education and education reform along with the last three decades of high-stakes testing and accountability reveals a disturbing paradox: Education has a steadfast commitment to testing and grading despite decades of research, theory, and philosophy that reveal the corrosive consequences of both testing and grading within an education system designed to support human agency and democratic principles. This edited volume brings together a collection of essays that confronts the failure of testing and grading and then offers practical and detailed examinations of implementing at the macro and micro levels of education teaching and learning free of the weight of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of testing and grading on social justice, race, class, and gender; and the role of testing and grading in perpetuating a deficit perspective of children, learning, race, and class. The chapters fall under two broad sections: Part I: "Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education" includes essays on the historical, theoretical, and philosophical arguments against testing and grading; Part II: "De-Grading and De-Testing in a Time of High-Stakes Education Reform" presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
An interdisciplinary look at interaction in the standardized survey interview This volume presents a theoretical and empirical inquiry into the interaction between interviewers and respondents in standardized research interviews. The editors include a range of articles that showcase the perspectives of conversation analysts, ethnomethodologists, and survey methodologists, to gain a more complete picture of interaction in the standardized survey interview than was previously available. This book is the first to focus solely on the interactional substrate or conversational architecture of interviewing. It offers a range of insights into standardized interviewing as interaction and forms a bridge between survey methodology and the study of interaction and tacit practices. The articles are arranged into four subject groups: theoretical orientations, survey recruitment, interaction during the substantive interview, and interaction and survey data quality. Articles include:
Standardization and Tacit Knowledge serves as a one-of-a-kind reference for survey methodologists, linguists, and researchers and also as a postgraduate coursebook in survey interviewing.
High-dimensional probability offers insight into the behavior of random vectors, random matrices, random subspaces, and objects used to quantify uncertainty in high dimensions. Drawing on ideas from probability, analysis, and geometry, it lends itself to applications in mathematics, statistics, theoretical computer science, signal processing, optimization, and more. This book is the first to integrate theory, key tools, and modern applications of high-dimensional probability. Concentration inequalities form the core, and the book covers both classical results, such as Hoeffding's and Chernoff's inequalities, and modern developments, such as the matrix Bernstein inequality. It then introduces the powerful methods based on stochastic processes, including such tools as Slepian's, Sudakov's, and Dudley's inequalities, as well as generic chaining and bounds based on VC dimension. A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.
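Hoeffding's inequality, one of the classical concentration results mentioned above, is easy to check numerically. The following sketch (an illustration of my own, not from the book) compares the empirical tail probability of the mean of n bounded variables with the Hoeffding bound 2·exp(-2·n·t²).

```python
import numpy as np

# For i.i.d. X_i in [0, 1] with mean mu and sample mean M_n:
# P(|M_n - mu| >= t) <= 2 * exp(-2 * n * t**2)   (Hoeffding)
rng = np.random.default_rng(1)
n, t, trials = 100, 0.1, 20000
samples = rng.uniform(0, 1, size=(trials, n))    # uniform on [0,1], mu = 0.5
deviations = np.abs(samples.mean(axis=1) - 0.5)
empirical = (deviations >= t).mean()             # observed tail frequency
bound = 2 * np.exp(-2 * n * t**2)                # Hoeffding's upper bound
print(empirical <= bound)                        # True: the bound holds
```

The gap between the empirical frequency and the bound is typical: Hoeffding is distribution-free over all bounded variables, so for any particular distribution it is conservative.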
This book comprises three studies on minority shareholder monitoring in Germany. Mandatory disclosure requirements have increased transparency, and the first essay presents an analysis of the information that is publicly available, regardless of the size of the target corporation. The second essay, in the form of an event study, pays special attention to the German supervisory board and its appointment for a fixed term. Capital markets perceive an activist effort as being more credible under certain circumstances. Taken together, the studies provide empirical evidence for increased minority shareholder activity in Germany. The evidence presented supports the strong shareholder rights perspective and conflicts with the weak shareholder rights view brought forward in the international literature.
The Data Quality Assessment Framework shows you how to measure and monitor data quality, ensuring quality over time. You'll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one-time activities, will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides practical guidance on how to apply the DQAF within any organization, enabling you to prioritize measurements and effectively report on results. Strategies for using data measurement to govern and improve the quality of data, and guidelines for applying the framework within a data asset, are included. You'll come away able to prioritize which measurement types to implement, knowing where to place them in a data flow and how frequently to measure. Common conceptual models for defining and storing data quality results for purposes of trend analysis are also included, as well as generic business requirements for ongoing measuring and monitoring, including calculations and comparisons that make the measurements meaningful and help understand trends and detect anomalies.
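As a rough illustration of one such measurement type, a completeness check over a batch of records can be sketched as follows; the function name and sample data are my own invention, not the DQAF's terminology.

```python
# Completeness: the share of rows where a given column is populated.
# Measured per batch, this yields a time series you can trend and alert on.

def completeness(rows, column):
    """Fraction of rows where `column` is present and non-empty."""
    if not rows:
        return 0.0
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows)

batch = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},        # present but empty -> counts as missing
    {"id": 3},                     # column absent entirely
]
print(completeness(batch, "email"))   # 1 of 3 rows populated
```

Storing one such number per batch and per column is the kind of ongoing, comparable measurement the framework advocates, as opposed to a one-time profiling exercise.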
Survey data are used in many disciplines, including the social sciences, economics and psychology. Interviewers' behaviour might affect the quality of such data. This book presents the results of new research on interviewers' motivation and behaviour. A substantial number of contributions address deviant behaviour, methods for assessing the impact of such behaviour on data quality and tools for detecting faked interviews. Further chapters discuss methods for preventing undesirable interviewer effects. Apart from specific methodological contributions, the chapters of the book also provide a unique collection of examples of deviant behaviour and its detection - a topic not overly present in the literature despite its substantial prevalence in survey field work. The volume includes 13 peer-reviewed papers presented at an international workshop in Rauischholzhausen in October 2011.
Identifying factors which stimulate regional growth and international competitiveness and using them for forecasting are the aims of this book. Departing from the theory of comparative advantages and their impact, the author demonstrates that such an approach has to be based on a sound theoretical foundation and on appropriate, advanced econometric methods. He proposes the use of heuristic optimization techniques, Monte Carlo simulation experiments and Lasso-type estimators to avoid bias or misleading findings, which might be the result of applying standard regression methods when key assumptions are not satisfied. In addition, the author demonstrates how some heuristic optimization-based methods can be used to obtain forecasts of industrial production in Russia and Germany founded on past observations and some leading indicators.
This dissertation comprises five studies analyzing daily stock returns of listed firms. Studies one and two shed light on corporate diversification through M&A and how related risk dynamics affect shareholder wealth. Carrying over the GARCH risk analysis methodology to external events in studies three and four, the author individually scrutinizes the adverse implications of bank failures and bailouts in the 2007-2009 financial crisis. Finding opposing return shocks, he identifies the limits of the symmetric GARCH. As observed in stock return data, volatility reacts asymmetrically to positive and negative return shocks. The advanced EGARCH incorporates this so-called 'leverage effect'. Applying the EGARCH in his final study, the author can simultaneously scrutinize the adverse bank events with an appropriate econometric foundation.
Distribution-free resampling methods (permutation tests, decision trees, and the bootstrap) are used today in virtually every research area. A Practitioner's Guide to Resampling for Data Analysis, Data Mining, and Modeling explains how to use the bootstrap to estimate the precision of sample-based estimates and to determine sample size, data permutations to test hypotheses, and the readily interpreted decision tree to replace arcane regression methods. Highlights
Statistics practitioners will find the methods described in the text easy to learn and to apply in a broad range of subject areas, from A for Accounting, Agriculture, Anthropology, Aquatic science, Archaeology, Astronomy, and Atmospheric science to V for Virology and Vocational Guidance, and Z for Zoology. Practitioners and research workers in the biomedical, engineering and social sciences, as well as advanced students in biology, business, dentistry, medicine, psychology, public health, sociology, and statistics, will find an easily grasped guide to estimation, testing hypotheses and model building.
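A bootstrap estimate of precision, the first technique the blurb names, can be sketched briefly; the data and parameter choices below are illustrative, not taken from the book.

```python
import numpy as np

# Bootstrap: resample the observed data with replacement many times and
# use the spread of the recomputed statistic as its standard error.
rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=60)   # a skewed sample

boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(5000)                     # 5000 bootstrap resamples
])
se = boot_medians.std(ddof=1)                # bootstrap standard error
print(f"median={np.median(data):.2f}, bootstrap SE={se:.2f}")
```

The appeal for practitioners is exactly what the text claims: the same three-line recipe works for the median, a correlation, or any statistic with no closed-form variance formula.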
Public Policy Analytics: Code & Context for Data Science in Government teaches readers how to address complex public policy problems with data and analytics using reproducible methods in R. Each of the eight chapters provides a detailed case study, showing readers: how to develop exploratory indicators; understand 'spatial process' and develop spatial analytics; how to develop 'useful' predictive analytics; how to convey these outputs to non-technical decision-makers through the medium of data visualization; and why, ultimately, data science and 'Planning' are one and the same. A graduate-level introduction to data science, this book will appeal to researchers and data scientists at the intersection of data analytics and public policy, as well as readers who wish to understand how algorithms will affect the future of government.
This book introduces readers to the methods, types of data, and scale of analysis used in the context of health. The challenges of working with big data are explored throughout the book, while the benefits are also emphasized through the discoveries made possible by linking large datasets. Methods include thorough case studies from statistics, as well as the newest facets of data analytics: data visualization, modeling and simulation, and machine learning. The diversity of datasets is illustrated through chapters on networked data, image processing, and text, in addition to typical structured numerical datasets. While the methods, types of data, and scale have been individually covered elsewhere, by bringing them all together under one "umbrella" the book highlights synergies, while also helping scholars fluidly switch between tools as needed. New challenges and emerging frontiers are also discussed, helping scholars grasp how methods will need to change in response to the latest challenges in health.
Human error is implicated in nearly all aviation accidents, yet most investigation and prevention programs are not designed around any theoretical framework of human error. Appropriate for all levels of expertise, the book provides the knowledge and tools required to conduct a human error analysis of accidents, regardless of operational setting (i.e. military, commercial, or general aviation). The book contains a complete description of the Human Factors Analysis and Classification System (HFACS), which incorporates James Reason's model of latent and active failures as a foundation. Widely disseminated among military and civilian organizations, HFACS encompasses all aspects of human error, including the conditions of operators and elements of supervisory and organizational failure. It attracts a very broad readership. Specifically, the book serves as the main textbook for a course in aviation accident investigation taught by one of the authors at the University of Illinois. This book will also be used in courses designed for military safety officers and flight surgeons in the U.S. Navy, Army and the Canadian Defense Force, who currently utilize the HFACS system during aviation accident investigations. Additionally, the book has been incorporated into the popular workshop on accident analysis and prevention provided by the authors at several professional conferences worldwide. The book is also targeted for students attending Embry-Riddle Aeronautical University, which has satellite campuses throughout the world and offers a course in human factors accident investigation for many of its majors. In addition, the book will be incorporated into courses offered by Transportation Safety International and the Southern California Safety Institute. Finally, this book serves as an excellent reference guide for many safety professionals and investigators already in the field.