"Data Analysis and Visualization in Genomics and Proteomics" is the
first book addressing integrative data analysis and visualization
in this field. It addresses important techniques for the
interpretation of data originating from multiple sources, encoded
in different formats or protocols, and processed by multiple
systems. One of the first systematic overviews of the problem of
biological data integration using computational approachesThis book
provides scientists and students with the basis for the development
and application of integrative computational methods to analyse
biological data on a systemic scalePlaces emphasis on the
processing of multiple data and knowledge resources, and the
combination of different models and systems
Critical Theory and Qualitative Data Analysis in Education offers a path-breaking explanation of how critical theories can be used within the analysis of qualitative data to inform research processes, such as data collection, analysis, and interpretation. This contributed volume offers examples of qualitative data analysis techniques and exemplars of empirical studies that employ critical theory concepts in data analysis. By creating a clear and accessible bridge between data analysis and critical social theories, this book helps scholars and researchers effectively translate their research designs and findings to multiple audiences for more equitable outcomes and disruption of historical and contemporary inequality.
As the analysis of big datasets in sports performance becomes a more entrenched part of the sporting landscape, so the value of sport scientists and analysts with formal training in data analytics grows. Sports Analytics: Analysis, Visualisation and Decision Making in Sports Performance provides the most authoritative and comprehensive guide to the use of analytics in sport and its application in sports performance, coaching, talent identification and sports medicine available. Employing an approach-based structure and integrating problem-based learning throughout the text, the book clearly defines the difference between analytics and analysis and goes on to explain and illustrate methods including: interactive visualisation; simulation and modelling; geospatial data analysis; spatiotemporal analysis; machine learning; genomic data analysis; and social network analysis. Offering a mixed-methods case study chapter, no other book offers the same level of scientific grounding or practical application in sports data analytics. Sports Analytics is essential reading for all students of sports analytics, and useful supplementary reading for students and professionals in talent identification and development, sports performance analysis, sports medicine and applied computer science.
Classification and regression trees (CART) is one of the several contemporary statistical techniques with good promise for research in many academic fields. There are very few books on CART, especially on applied CART. This book, as a good practical primer with a focus on applications, introduces the relatively new statistical technique of CART as a powerful analytical tool. The easy-to-understand (non-technical) language and illustrative graphs (tables) as well as the use of the popular statistical software program (SPSS) appeal to readers without strong statistical background. This book helps readers understand the foundation, the operation, and the interpretation of CART analysis, thus becoming knowledgeable consumers and skillful users of CART. The chapter on advanced CART procedures not yet well-discussed in the literature allows readers to effectively seek further empowerment of their research designs by extending the analytical power of CART to a whole new level. This highly practical book is specifically written for academic researchers, data analysts, and graduate students in many disciplines such as economics, social sciences, medical sciences, and sport sciences who do not have strong statistical background but still strive to take full advantage of CART as a powerful analytical tool for research in their fields.
This book is concerned with data in which the observations are independent and in which the response is multivariate. Anthony Atkinson has been Professor of Statistics at the London School of Economics since 1989. Before that he was a Professor at Imperial College, London. He is the author of Plots, Transformations, and Regression, co-author of Optimum Experimental Designs, and joint editor of The Fascination of Statistics, a volume celebrating the centenary of the International Statistical Institute. Professor Atkinson has served as editor of The Journal of the Royal Statistical Society, Series B and as associate editor of Biometrika and Technometrics. He has published well over 100 articles in these and other journals including The Annals of Statistics, Biometrics, The Journal of the American Statistical Association, and Statistics and Computing. Marco Riani, after receiving his Ph.D. in Statistics in 1995 from the University of Florence, joined the Faculty of Economics at Parma University as postdoctoral fellow. In 1997 he won the prize for the best Italian Ph.D. thesis in Statistics. He is currently Associate Professor of Statistics in the University of Parma. He has published in Technometrics, The Journal of Computational and Graphical Statistics, The Journal of Business and Economic Statistics, The Journal of Forecasting, Environmetrics, Computational Statistics and Data Analysis, Metron, and other journals. From the reviews: "The book requires knowledge of multivariate statistical methods, because it provides only basic background information on the methods considered (although with excellent references for further reading at the end of each chapter). Each chapter also includes exercises with solutions... This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended."
Technometrics, November 2004 "This book is full of interest for anyone undertaking multivariate analyses, clearly emphasizing that uncritical use of standard methods can be misleading." Short Book Reviews of the International Statistical Institute, December 2004 "This book is an interesting complement to various textbooks on multivariate statistics." Biometrics, December 2005 "This book discusses multivariate data from a different perspective. ... it is an excellent book for researchers with interests in multivariate data and cluster analysis. It may also be a good reference for students of advanced statistics and practitioners working with large volumes of data ..." (Kassim S. Mwitondi, Journal of Applied Statistics, Vol. 32 (4), 2005) "This is a companion to an earlier book ... both of which feature many informative graphs. Here, the forward search has been applied in detail to classical multivariate approaches used with Gaussian data. ... One valuable feature of the book is the way that the illustrations concentrate on a relatively small number ... . This makes it easy to concentrate on the application ... . The implications of this book also strengthen the importance of data visualization, as well as providing a valuable approach to visualization." (Paul Hewson, Journal of the Royal Statistical Society Series A, Vol. 168 (2), 2005) "This book is a companion to Atkinson ... . The objective is to identify outliers, appreciate their influence ... which would result in an overall improvement. ... Graphical tools are widely used, resulting in three hundred and ninety figures. Each chapter is followed by extensive exercises and their solutions, and the book could be used as an advanced textbook for multivariate analysis courses. Websites provide the relevant software ... . This book is full of interest for anyone undertaking multivariate analyses ..." (B.J.T. Morgan, Short Book Reviews International Statistical Institute, Vol. 24 (3), 2004) "This book discusses forward search (FS), a method using graphs to explore and model continuous multivariate data ... . Its viewpoint is toward applications, and it demonstrates the merits of FS using a variety of examples, with a thorough discussion of statistical issues and interpretation of results. ... This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended." (Tena Ipsilantis Katsaounis, Technometrics, Vol. 46 (4), November, 2004) "The theoretical exercises with detailed solutions at the end of each chapter are extremely useful. I would recommend this book to practitioners who analyze moderately sized multivariate data. Of course, anyone associated with the application of statistics should find the book interesting to read." (Tathagata Banerjee, Journal of the American Statistical Association, March 2006)
Introduces new and advanced methods of model discovery for time-series data using artificial intelligence. Implements topological approaches to distill "machine-intuitive" models from complex dynamics data. Introduces a new paradigm for a parsimonious model of a dynamical system without resorting to differential equations. Heralds a new era in data-driven science and engineering based on the operational concept of "computational intuition".
A century of education and education reform, along with more than three decades of high-stakes testing and accountability, reveals a disturbing paradox: education has a steadfast commitment to testing and grading. This commitment persists despite ample research, theory, and philosophy revealing the corrosive consequences of both testing and grading in an education system designed to support human agency and democratic principles. This revised edited volume brings together a collection of updated and new essays that confronts the failure of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of tests and grades on social justice, race, class, and gender; and the role that they play in perpetuating a deficit perspective of children. The chapters fall under two broad sections. Part I, Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education, includes essays on the historical, theoretical, and philosophical arguments against testing and grading. Part II, De-Grading and De-Testing in a Time of High-Stakes Education Reform, presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
Petty trade helped vast numbers of people to survive the crisis faced by post-Soviet Russia. The book analyses how this survival technique was carried out in practice. On the basis of his fieldwork research, the author shows how people coped with rapid social change and places their activities within a context of government policies, migration flows and entrepreneurial strategies. "This is an original work based on extensive fieldwork research. Wielecki skillfully intertwined "ethnographic meat" with "the bones of theory", which has resulted in a "flesh-and-blood" anthropology." Michal Buchowski "This is an immensely insightful exploration of petty trade in post-Soviet Russia. The author laces his genuine ethnographic work in a coherent account of the concepts of uncertainty, embeddedness, and informal economy." Violetta Zentai
Since the early days of performance assessment, human ratings have been subject to various forms of error and bias. Expert raters often come up with different ratings for the very same performance and it seems that assessment outcomes largely depend upon which raters happen to assign the rating. This book provides an introduction to many-facet Rasch measurement (MFRM), a psychometric approach that establishes a coherent framework for drawing reliable, valid, and fair inferences from rater-mediated assessments, thus answering the problem of fallible human ratings. Revised and updated throughout, the Second Edition includes a stronger focus on the Facets computer program, emphasizing the pivotal role that MFRM plays for validating the interpretations and uses of assessment outcomes.
Praise for Envisioning the Survey Interview of the Future "This book is an excellent introduction to some brave new technologies . . . and their possible impacts on the way surveys might be conducted. Anyone interested in the future of survey methodology should read this book." Norman M. Bradburn, PhD, National Opinion Research Center, University of Chicago "Envisioning the Survey Interview of the Future gathers some of the brightest minds in alternative methods of gathering self-report data, with an eye toward the future self-report sample survey. Conrad and Schober, by assembling a group of talented survey researchers and creative inventors of new software-based tools to gather information from human subjects, have created a volume of importance to all interested in imagining future ways of interviewing." Robert M. Groves, PhD, Survey Research Center, University of Michigan This collaboration provides extensive insight into the impact of communication technology on survey research. As previously unimaginable communication technologies rapidly become commonplace, survey researchers are presented with both opportunities and obstacles when collecting and interpreting data based on human response. Envisioning the Survey Interview of the Future explores the increasing influence of emerging technologies on the data collection process and, in particular, self-report data collection in interviews, providing the key principles for using these new modes of communication. With contributions written by leading researchers in the fields of survey methodology and communication technology, this compilation integrates the use of modern technological developments with established social science theory.
The book familiarizes readers with these new modes of communication by discussing the challenges to accuracy, legitimacy, and confidentiality that researchers must anticipate while collecting data, and it also provides tools for adopting new technologies in order to obtain high-quality results with minimal error or bias. Envisioning the Survey Interview of the Future addresses questions that researchers in survey methodology and communication technology must consider, such as: How and when should new communication technology be adopted in the interview process? What are the principles that extend beyond particular technologies? Why do respondents answer questions from a computer differently than questions from a human interviewer? How can systems adapt to respondents' thinking and feeling? What new ethical concerns about privacy and confidentiality are raised from using new communication technologies? With its multidisciplinary approach, extensive discussion of existing and future technologies, and practical guidelines for adopting new technology, Envisioning the Survey Interview of the Future is an essential resource for survey methodologists, questionnaire designers, and communication technologists in any field that conducts survey research. It also serves as an excellent supplement for courses in research methods at the upper-undergraduate or graduate level.
Don't simply show your data: tell a story with it! Storytelling with Data teaches you the fundamentals of data visualization and how to communicate effectively with data. You'll discover the power of storytelling and the way to make data a pivotal point in your story. The lessons in this illuminative text are grounded in theory, but made accessible through numerous real-world examples ready for immediate application to your next graph or presentation. Storytelling is not an inherent skill, especially when it comes to data visualization, and the tools at our disposal don't make it any easier. This book demonstrates how to go beyond conventional tools to reach the root of your data, and how to use your data to create an engaging, informative, compelling story. Specifically, you'll learn how to:
* Understand the importance of context and audience
* Determine the appropriate type of graph for your situation
* Recognize and eliminate the clutter clouding your information
* Direct your audience's attention to the most important parts of your data
* Think like a designer and utilize concepts of design in data visualization
* Leverage the power of storytelling to help your message resonate with your audience
Together, the lessons in this book will help you turn your data into high impact visual stories that stick with your audience. Rid your world of ineffective graphs, one exploding 3D pie chart at a time. There is a story in your data. Storytelling with Data will give you the skills and power to tell it!
Making sense of sports performance data can be a challenging task but is nevertheless an essential part of performance analysis investigations. Focusing on techniques used in the analysis of sport performance, this book introduces the fundamental principles of data analysis, explores the most important tools used in data analysis, and offers guidance on the presentation of results. The book covers key topics such as:
The book includes worked examples from real sport, offering clear guidance to the reader and bringing the subject to life. This book is invaluable reading for any student, researcher or analyst working in sport performance or undertaking a sport-related research project or methods course.
Throughout the world, voters lack access to information about politicians, government performance, and public services. Efforts to remedy these informational deficits are numerous. Yet do informational campaigns influence voter behavior and increase democratic accountability? Through the first project of the Metaketa Initiative, sponsored by the Evidence in Governance and Politics (EGAP) research network, this book aims to address this substantive question and at the same time introduce a new model for cumulative learning that increases coordination among otherwise independent researcher teams. It presents the overall results (using meta-analysis) from six independently conducted but coordinated field experimental studies, the results from each individual study, and the findings from a related evaluation of whether practitioners utilize this information as expected. It also discusses lessons learned from EGAP's efforts to coordinate field experiments, increase replication of theoretically important studies across contexts, and increase the external validity of field experimental research.
This book has won the CHOICE Outstanding Academic Title award 2014. A century of education and education reform along with the last three decades of high-stakes testing and accountability reveals a disturbing paradox: Education has a steadfast commitment to testing and grading despite decades of research, theory, and philosophy that reveal the corrosive consequences of both testing and grading within an education system designed to support human agency and democratic principles. This edited volume brings together a collection of essays that confronts the failure of testing and grading and then offers practical and detailed examinations of implementing at the macro and micro levels of education teaching and learning free of the weight of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of testing and grading on social justice, race, class, and gender; and the role of testing and grading in perpetuating a deficit perspective of children, learning, race, and class. The chapters fall under two broad sections: Part I: "Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education" includes essays on the historical, theoretical, and philosophical arguments against testing and grading; Part II: "De-Grading and De-Testing in a Time of High-Stakes Education Reform" presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
This book comprises three studies on minority shareholder monitoring in Germany. Mandatory disclosure requirements have increased transparency. An analysis of the information that is publicly available is presented, regardless of the size of the target corporation. The second essay in the form of an event study pays special attention to the German supervisory board and its appointment for a fixed term. Capital markets perceive an activist effort as being more credible under certain circumstances. The study as a whole is empirical evidence for increased minority shareholder activity in Germany. The evidence presented supports the strong shareholder rights perspective. It conflicts with the weak shareholder rights view brought forward in the international literature.
" The Data Quality Assessment Framework "shows you how to measure and monitor data quality, ensuring quality over time. You ll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one time activities will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides practical guidance on how to apply the DQAF within any organization enabling you to prioritize measurements and effectively report on results. Strategies for using data measurement to govern and improve the quality of data and guidelines for applying the framework within a data asset are included. You ll come away able to prioritize which measurement types to implement, knowing where to place them in a data flow and how frequently to measure. Common conceptual models for defining and storing of data quality results for purposes of trend analysis are also included as well as generic business requirements for ongoing measuring and monitoring including calculations and comparisons that make the measurements meaningful and help understand trends and detect anomalies.
An interdisciplinary look at interaction in the standardized survey interview This volume presents a theoretical and empirical inquiry into the interaction between interviewers and respondents in standardized research interviews. The editors include a range of articles that showcase the perspectives of conversation analysts, ethnomethodologists, and survey methodologists, to gain a more complete picture of interaction in the standardized survey interview than was previously available. This book is the first to focus solely on the interactional substrate or conversational architecture of interviewing. It offers a range of insights into standardized interviewing as interaction and forms a bridge between survey methodology and the study of interaction and tacit practices. The articles are arranged into four subject groups: theoretical orientations, survey recruitment, interaction during the substantive interview, and interaction and survey data quality. Articles include:
Standardization and Tacit Knowledge serves as a one-of-a-kind reference for survey methodologists, linguists, and researchers and also as a postgraduate coursebook in survey interviewing.
Survey data are used in many disciplines including Social Sciences, Economics and Psychology. Interviewers' behaviour might affect the quality of such data. This book presents the results of new research on interviewers' motivation and behaviour. A substantial number of contributions address deviant behaviour, methods for assessing the impact of such behaviour on data quality and tools for detecting faked interviews. Further chapters discuss methods for preventing undesirable interviewer effects. Apart from specific methodological contributions, the chapters of the book also provide a unique collection of examples of deviant behaviour and its detection - a topic not overly present in the literature despite its substantial prevalence in survey field work. The volume includes 13 peer-reviewed papers presented at an international workshop in Rauischholzhausen in October 2011.
Identifying factors which stimulate regional growth and international competitiveness and using them for forecasting are the aims of this book. Departing from the theory of comparative advantages and their impact, the author demonstrates that such an approach has to be based on a sound theoretical foundation and on appropriate, advanced econometric methods. He proposes the use of heuristic optimization techniques, Monte Carlo simulation experiments and Lasso-type estimators to avoid bias or misleading findings, which might be the result of applying standard regression methods when key assumptions are not satisfied. In addition, the author demonstrates how some heuristic optimization-based methods can be used to obtain forecasts of industrial production in Russia and Germany founded on past observations and some leading indicators.
This dissertation comprises five studies analyzing daily stock returns of listed firms. Studies one and two shed light on corporate diversification through M&A and how related risk dynamics affect shareholder wealth. Carrying over the risk analysis methodology 'GARCH' to external events in studies three and four, the author individually scrutinizes the adverse implications of bank failures and bailouts in the 2007-2009 financial crisis. Finding opposing return shocks, he identifies the limits of the 'symmetric' GARCH. As observed in stock return data, volatility reacts asymmetrically to positive and negative return shocks. The advanced EGARCH incorporates this so-called 'leverage effect'. Applying the EGARCH in his final study, the author can simultaneously scrutinize the adverse bank events with an appropriate econometric foundation.
Jump-start your career as a data scientist: learn to develop datasets for exploration, analysis, and machine learning. SQL for Data Scientists: A Beginner's Guide for Building Datasets for Analysis is a resource that's dedicated to the Structured Query Language (SQL) and dataset design skills that data scientists use most. Aspiring data scientists will learn how to construct datasets for exploration, analysis, and machine learning. You can also discover how to approach query design and develop SQL code to extract data insights while avoiding common pitfalls. You may be one of many people who are entering the field of Data Science from a range of professions and educational backgrounds, such as business analytics, social science, physics, economics, and computer science. Like many of them, you may have conducted analyses using spreadsheets as data sources, but never retrieved and engineered datasets from a relational database using SQL, which is a programming language designed for managing databases and extracting data. This guide for data scientists differs from other instructional guides on the subject. It doesn't cover SQL broadly. Instead, you'll learn the subset of SQL skills that data analysts and data scientists use frequently. You'll also gain practical advice and direction on "how to think about constructing your dataset."
* Gain an understanding of relational database structure, query design, and SQL syntax
* Develop queries to construct datasets for use in applications like interactive reports and machine learning algorithms
* Review strategies and approaches so you can design analytical datasets
* Practice your techniques with the provided database and SQL code
In this book, author Renee Teate shares knowledge gained during a 15-year career working with data, in roles ranging from database developer to data analyst to data scientist. She guides you through SQL code and dataset design concepts from an industry practitioner's perspective, moving your data scientist career forward!
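The kind of dataset construction the blurb describes can be illustrated with a minimal sketch using Python's built-in sqlite3 module. The table and column names here are invented for illustration and are not from the book; the point is the typical pattern of aggregating raw transactional rows into one analysis-ready row per entity:

```python
import sqlite3

# Hypothetical example: an in-memory database with a small "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, 10.0), (1, 25.0), (2, 40.0), (3, 5.0), (3, 5.0)],
)

# Aggregate raw rows into one row per customer -- a dataset shaped for
# analysis or modelling rather than a broad SQL exercise.
rows = conn.execute(
    """
    SELECT customer_id,
           COUNT(*)    AS n_orders,
           SUM(amount) AS total_spent
    FROM orders
    GROUP BY customer_id
    ORDER BY customer_id
    """
).fetchall()
print(rows)  # [(1, 2, 35.0), (2, 1, 40.0), (3, 2, 10.0)]
```

The same GROUP BY pattern applies unchanged against a production relational database; only the connection line would differ.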
Distribution-free resampling methods, including permutation tests, decision trees, and the bootstrap, are used today in virtually every research area. A Practitioner's Guide to Resampling for Data Analysis, Data Mining, and Modeling explains how to use the bootstrap to estimate the precision of sample-based estimates and to determine sample size, data permutations to test hypotheses, and the readily-interpreted decision tree to replace arcane regression methods. Highlights
Statistics practitioners will find the methods described in the text easy to learn and to apply in a broad range of subject areas, from A for Accounting, Agriculture, Anthropology, Aquatic science, Archaeology, Astronomy, and Atmospheric science to V for Virology and Vocational Guidance, and Z for Zoology. Practitioners and research workers in the biomedical, engineering and social sciences, as well as advanced students in biology, business, dentistry, medicine, psychology, public health, sociology, and statistics will find an easily-grasped guide to estimation, testing hypotheses and model building.
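The bootstrap idea described above — estimating the precision of a sample-based estimate by resampling with replacement — can be sketched in a few lines of standard-library Python. This is a generic percentile-bootstrap sketch, not code from the book; the function name and data are made up for illustration:

```python
import random
import statistics

def bootstrap_ci(sample, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the sample mean.

    Resample with replacement, compute the mean of each resample,
    and take the alpha/2 and 1-alpha/2 percentiles of those means.
    """
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(sample, k=len(sample)))
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Made-up measurements; the sample mean is about 4.89.
data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4]
low, high = bootstrap_ci(data)
print(low, high)  # an interval bracketing the sample mean
```

The same resampling loop works for any statistic (median, correlation, regression coefficient) by swapping the function applied to each resample.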
THE NEW YORK TIMES BESTSELLER AN ECONOMIST BOOK OF THE YEAR 2017 Insightful, surprising and with ground-breaking revelations about our society, Everybody Lies exposes the secrets embedded in our internet searches, with a foreword by bestselling author Steven Pinker Everybody lies, to friends, lovers, doctors, pollsters - and to themselves. In Internet searches, however, people confess their secrets - about sexless marriages, mental health problems, even racist views. Seth Stephens-Davidowitz, an economist and former Google data scientist, shows that this could just be the most important dataset ever collected. This huge database of secrets - unprecedented in human history - offers astonishing, even revolutionary, insights into humankind. Anxiety, for instance, does not increase after a terrorist attack. Crime levels drop when a violent film is released. And racist searches are no higher in Republican areas than in Democrat ones. Stephens-Davidowitz reveals information we can use to change our culture, and the questions we're afraid to ask that might be essential to our health - both emotional and physical. Insightful, funny, and always surprising, Everybody Lies exposes the biases and secrets embedded deeply within us, at a time when things are harder to predict than ever.
This is a book about how ecologists can integrate remote sensing and GIS in their research. It will allow readers to get started with the application of remote sensing and to understand its potential and limitations. Using practical examples, the book covers all necessary steps from planning field campaigns to deriving ecologically relevant information through remote sensing and modelling of species distributions. An Introduction to Spatial Data Analysis introduces spatial data handling using the open source software Quantum GIS (QGIS). In addition, readers will be guided through their first steps in the R programming language. The authors explain the fundamentals of spatial data handling and analysis, empowering the reader to turn data acquired in the field into actual spatial data. Readers will learn to process and analyse spatial data of different types and interpret the data and results. After finishing this book, readers will be able to address questions such as "What is the distance to the border of the protected area?", "Which points are located close to a road?", "Which fraction of land cover types exist in my study area?" using different software and techniques. This book is for novice spatial data users and does not assume any prior knowledge of spatial data itself or practical experience working with such data sets. Readers will likely include student and professional ecologists, geographers and any environmental scientists or practitioners who need to collect, visualize and analyse spatial data. The software used is the widely applied open source scientific programs QGIS and R. All scripts and data sets used in the book will be provided online at book.ecosens.org. 
This book covers specific methods including: what to consider before collecting in situ data how to work with spatial data collected in situ the difference between raster and vector data how to acquire further vector and raster data how to create relevant environmental information how to combine and analyse in situ and remote sensing data how to create useful maps for field work and presentations how to use QGIS and R for spatial analysis how to develop analysis scripts
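A question like "Which points are located close to a road?" from the blurb above reduces, in the simplest planar case, to computing point-to-segment distances and filtering by a threshold. This sketch uses only the standard library and invented coordinates; real workflows in QGIS or R would use projected coordinate systems and vector layers instead:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# A straight "road" and some hypothetical survey points.
road = ((0.0, 0.0), (10.0, 0.0))
points = {"p1": (2.0, 1.0), "p2": (5.0, 6.0), "p3": (12.0, 0.0)}

# Keep points within 2 distance units of the road.
near = {name: pt for name, pt in points.items()
        if point_segment_distance(pt, *road) <= 2.0}
print(sorted(near))  # ['p1', 'p3']
```

GIS software generalises this with spatial indexes and buffer operations, but the underlying distance-and-threshold logic is the same.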
Public Policy Analytics: Code & Context for Data Science in Government teaches readers how to address complex public policy problems with data and analytics using reproducible methods in R. Each of the eight chapters provides a detailed case study, showing readers: how to develop exploratory indicators; understand 'spatial process' and develop spatial analytics; how to develop 'useful' predictive analytics; how to convey these outputs to non-technical decision-makers through the medium of data visualization; and why, ultimately, data science and 'Planning' are one and the same. A graduate-level introduction to data science, this book will appeal to researchers and data scientists at the intersection of data analytics and public policy, as well as readers who wish to understand how algorithms will affect the future of government.