Today's students create and are confronted with many kinds of data in multiple formats. Data literacy enables students and researchers to access, interpret, critically assess, manage, handle, and ethically use data. The Data Literacy Cookbook includes a variety of approaches to and lesson plans for teaching data literacy, from simple activities to self-paced learning modules to for-credit and discipline-specific courses. Sixty-five recipes are organized into nine sections based on learning outcomes: Interpreting Polls and Surveys; Finding and Evaluating Data; Data Manipulation and Transformation; Data Visualization; Data Management and Sharing; Geospatial Data; Data in the Disciplines; Data Literacy Outreach and Engagement; and Data Literacy Programs and Curricula. Many sections have overlapping learning outcomes, so you can combine recipes from multiple sections to whip up a scaffolded curriculum. The Data Literacy Cookbook provides librarians with lesson plans, strategies, and activities to help guide students as both consumers and producers in the data life cycle.
Big Data and methods for analyzing large data sets, such as machine learning, have in recent times deeply transformed scientific practice in many fields. However, an epistemological study of these novel tools is still largely lacking. After a conceptual analysis of the notion of data and a brief introduction to the methodological dichotomy between inductivism and hypothetico-deductivism, several controversial theses regarding big data approaches are discussed. These include whether correlation replaces causation, whether the end of theory is in sight, and whether big data approaches constitute an entirely novel scientific methodology. In this Element, I defend an inductivist view of big data research and argue that the type of induction employed by the most successful big data algorithms is variational induction in the tradition of Mill's methods. Based on this insight, the aforementioned epistemological issues can be systematically addressed.
In the fast-moving world of the fourth industrial revolution, not everyone needs to be a data scientist, but everyone should be data literate, with the ability to read, analyze and communicate with data. It is not enough for a business to have the best data if those using it don't understand the right questions to ask or how to use the information generated to make decisions. Be Data Literate is the essential guide to developing the curiosity, creativity and critical thinking necessary to make anyone data literate, without retraining as a data scientist or statistician. With lessons that chart skill development and real-world examples from industries implementing data literacy skills, this book explains how to confidently read and speak the 'language of data' in the modern business environment and everyday life. Be Data Literate is a practical guide to understanding the four levels of analytics, how to analyze data and the key steps to making smarter, data-informed decisions. Written by a founding pioneer and worldwide leading expert on data literacy, this book empowers professionals with the skills they need to succeed in the digital world.
The massive volume of data generated in modern applications can overwhelm our ability to conveniently transmit, store, and index it. For many scenarios, building a compact summary of a dataset that is vastly smaller enables flexibility and efficiency in a range of queries over the data, in exchange for some approximation. This comprehensive introduction to data summarization, aimed at practitioners and students, showcases the algorithms, their behavior, and the mathematical underpinnings of their operation. The coverage starts with simple sums and approximate counts, building to more advanced probabilistic structures such as the Bloom Filter, distinct value summaries, sketches, and quantile summaries. Summaries are described for specific types of data, such as geometric data, graphs, and vectors and matrices. The authors offer detailed descriptions of and pseudocode for key algorithms that have been incorporated in systems from companies such as Google, Apple, Microsoft, Netflix and Twitter.
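The Bloom filter mentioned in the blurb above can be sketched in a few lines. This is an illustrative toy, not code from the book: the bit-array size, hash count, and example keys are all invented for demonstration.

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: a compact set summary that can report
    false positives but never false negatives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means "definitely absent"; True means "probably present".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))   # True
print(bf.might_contain("mallory")) # almost certainly False
```

The trade-off is exactly the one the book studies: the filter uses a fixed, small amount of memory regardless of how many items are added, at the cost of an adjustable false-positive rate.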
In the global race to reach the end of AIDS, why is the world slipping off track? The answer has to do with stigma, money, and data. Global funding for AIDS response is declining. Tough choices must be made: some people will win and some will lose. Global aid agencies and governments use health data to make these choices. While aid agencies prioritize a shrinking list of countries, many governments deny that sex workers, men who have sex with men, drug users, and transgender people exist. Since no data is gathered about their needs, life-saving services are not funded, and the lack of data reinforces the denial. The Uncounted cracks open this and other data paradoxes through interviews with global health leaders and activists, ethnographic research, analysis of gaps in mathematical models, and the author's experience as an activist and senior official. It shows what is counted, what is not, and why empowering communities to gather their own data could be key to ending AIDS.
This comprehensive book, rich with applications, offers a quantitative framework for the analysis of the various capture-recapture models for open animal populations, while also addressing associated computational methods. The state of our wildlife populations provides a litmus test for the state of our environment, especially in light of global warming and the increasing pollution of our land, seas, and air. In addition to monitoring our food resources such as fisheries, we need to protect endangered species from the effects of human activities (e.g. rhinos, whales, or encroachments on the habitat of orangutans). Pests must be controlled, whether insects or viruses, and we need to cope with growing feral populations such as opossums, rabbits, and pigs. Accordingly, we need to obtain information about a given population's dynamics, concerning e.g. mortality, birth, growth, breeding, sex, and migration, and determine whether the respective population is increasing, static, or declining. There are many methods for obtaining population information, but the most useful (and most work-intensive) is generically known as "capture-recapture," where we mark or tag a representative sample of individuals from the population and follow that sample over time using recaptures, resightings, or dead recoveries. Marks can be natural, such as stripes, fin profiles, and even DNA; or artificial, such as spots on insects. Attached tags can, for example, be simple bands or streamers, or more sophisticated variants such as radio and sonic transmitters. To estimate population parameters, sophisticated and complex mathematical models have been devised on the basis of recapture information and computer packages. This book addresses the analysis of such models. It is primarily intended for ecologists and wildlife managers who wish to apply the methods to the types of problems discussed above, though it will also benefit researchers and graduate students in ecology. Familiarity with basic statistical concepts is essential.
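To give a flavour of the models such books build on, the simplest capture-recapture estimator of population size is the Lincoln-Petersen estimate, shown here in Chapman's bias-corrected form. The fish-tagging numbers below are invented for illustration; real analyses use far richer models.

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population
    size: tag a first sample, then see what fraction of a second sample
    carries tags."""
    return ((marked_first + 1) * (caught_second + 1)) / (recaptured + 1) - 1

# Example: tag 200 fish; a later sample of 150 contains 30 tagged fish.
estimate = lincoln_petersen(200, 150, 30)
print(round(estimate))  # 978
```

The intuition: if 30 of 150 recaptured fish are tagged, tags make up about a fifth of the population, so the 200 tagged fish imply roughly a thousand fish in total.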
Focus on the most important and most often overlooked factor in a successful Tableau project: data. Without a reliable data source, you will not achieve the results you hope for in Tableau. This book does more than teach the mechanics of data preparation. It teaches you how to look at data in a new way, how to recognize the most common issues that hinder analytics, and how to mitigate those factors one by one. Tableau can change the course of business, but the old adage of "garbage in, garbage out" is the hard truth that hides behind every Tableau sales pitch. That amazing sales demo does not work as well with bad data. The unfortunate reality is that almost all data starts out in a less-than-perfect state. Data prep is hard. Traditionally, we were forced into the world of the database, where complex ETL (Extract, Transform, Load) operations created by the data team did all the heavy lifting for us. Fortunately, we have moved past those days. With the introduction of the Tableau Data Prep tool, you can now handle most of the common data prep and cleanup tasks on your own, at your desk, and without the help of the data team. This essential book will guide you through:
- The layout and important parts of the Tableau Data Prep tool
- Connecting to data
- Data quality and consistency
- The shape of the data: Is the data oriented in columns or rows? How do you decide? Why does it matter?
- The level of detail in the source data, and why it is important
- Combining source data to bring in more fields and rows
- Saving the data flow and the results of your data prep work
- Common cleanup and setup tasks in Tableau Desktop
What You Will Learn:
- Recognize data sources that are good candidates for analytics in Tableau
- Connect to local, server, and cloud-based data sources
- Profile data to better understand its content and structure
- Rename fields, adjust data types, group data points, and aggregate numeric data
- Pivot data
- Join data from local, server, and cloud-based sources for unified analytics
- Review the steps and results of each phase of the data prep process
- Output new data sources that can be reviewed in Tableau or any other analytics tool
Who This Book Is For: Tableau Desktop users who want to connect to data, profile the data to identify common issues, clean up those issues, join to additional data sources, and save the newly cleaned, joined data so that it can be used more effectively in Tableau
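The "pivot" step the blurb mentions, reshaping wide, column-oriented data into tall, row-oriented data, can be illustrated outside Tableau. This pure-Python sketch uses invented field names ("region", "sales") purely for demonstration.

```python
# Wide table: one row per region, one column per year.
wide = [
    {"region": "North", "2021": 100, "2022": 120},
    {"region": "South", "2021": 80,  "2022": 95},
]

def pivot_columns_to_rows(rows, id_field, value_name):
    """Reshape wide data to tall: every (id, column) pair becomes its
    own row, which is the orientation most analytics tools prefer."""
    tall = []
    for row in rows:
        for key, value in row.items():
            if key != id_field:
                # "year" is a hard-coded label for this toy example.
                tall.append({id_field: row[id_field],
                             "year": key,
                             value_name: value})
    return tall

tall = pivot_columns_to_rows(wide, "region", "sales")
print(tall[0])  # {'region': 'North', 'year': '2021', 'sales': 100}
```

Two wide rows become four tall rows; the tall shape is what makes "sales by year" a simple aggregation rather than an awkward cross-column computation.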
At first glance, the skills required to work in the data science field appear to be self-explanatory. Do not be fooled. Impactful data science demands an interdisciplinary knowledge of business philosophy, project management, salesmanship, presentation, and more. In Managing Your Data Science Projects, author Robert de Graaf explores important concepts that are frequently overlooked in much of the instructional literature that is available to data scientists new to the field. If your completed models are to be used and maintained most effectively, you must be able to present and sell them within your organization in a compelling way. The value of data science within an organization cannot be overstated. Thus, it is vital that strategies and communication between teams are dexterously managed. The three main ways data science strategy is used in a company are customer research, risk analytics, and the logging of operational measurements. These all require different managerial instincts, backgrounds, and experiences, and de Graaf cogently breaks down the unique reasons behind each. They must align seamlessly to eventually be adopted as dynamic models. Data science is a relatively new discipline, and as such, internal processes for it are not as well-developed within an operational business as others. With Managing Your Data Science Projects, you will learn how to create products that solve important problems for your customers and ensure that the initial success is sustained throughout the product's intended life. Your users will trust you and your models, and most importantly, you will be a more well-rounded and effectual data scientist throughout your career. Who This Book Is For: Early-career data scientists, managers of data scientists, and those interested in entering the field of data science
Data literacy is one of the key skills that companies are looking for, but it is currently a specialist skill. This book is your comprehensive guide to becoming data literate: understand data analytics, use data insights effectively in your organisation, and talk about data confidently with experts and non-experts alike.
More students study management and organization studies than ever, the number of business schools worldwide continues to rise, and more management research is being published in a greater number of journals than could have been imagined twenty years ago. Dennis Tourish looks beneath the surface of this progress to expose a field in crisis and in need of radical reform. He identifies the ways in which management research has lost its way, including a remoteness from the practical problems that managers and employees face, a failure to replicate key research findings, poor writing, endless obscure theorizing, and an increasing number of research papers being retracted for fraud and other forms of malpractice. Tourish suggests fundamental changes to remedy these issues, enabling management research to become more robust, more interesting and more valuable to society. A must-read for academics, practising managers, university administrators and policy makers within higher education.
Statistics for Social Work with SPSS provides readers with a user-friendly, evidence-based, and practical resource to help them make sense of, organize, analyze, and interpret data in contemporary contexts. It incorporates one of the most well-known statistics software applications, the Statistical Package for the Social Sciences (SPSS), within each chapter to help readers integrate their knowledge either manually or with the assistance of technology. The book begins with a brief introduction to statistics and research, followed by chapters that address variables, frequency distributions, measures of central tendency, and measures of variability. Additional chapters cover probability and hypothesis testing; the normal distribution and Z scores; correlation; simple linear regression; one-way ANOVA; and more. Each chapter features concise, simple explanations of key terms, formulas, and calculations; study questions and answers; specific SPSS instructions on computerized computations; and evidence-based, practical examples to support the learning experience. Presenting students with highly accessible and universally understandable statistical concepts, Statistics for Social Work with SPSS is an ideal textbook for undergraduate and graduate-level courses in social work statistics, as well as research-based courses within the social and behavioral sciences.
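As a flavour of the manual computations such textbooks pair with SPSS output, the mean, sample standard deviation, and Z scores can be computed directly. The exam scores below are made up for illustration.

```python
import statistics

scores = [62, 70, 75, 68, 80, 74, 66, 72]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample (n-1) standard deviation

# A Z score expresses each observation in standard-deviation units
# from the mean, which is how SPSS standardizes variables.
z_scores = [(x - mean) / sd for x in scores]

print(mean)       # 70.875
print(z_scores[4])  # the highest score, 80, is ~1.6 SDs above the mean
```

By construction the Z scores sum to zero, a useful sanity check when computing them by hand.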
This is the ideal book to get you up and running with the basics of qualitative data analysis. It breaks everything down into a series of simple steps and introduces the practical tools and techniques you need to turn your transcripts into meaningful research. Using multidisciplinary data from interviews and focus groups, Jamie Harding provides clear guidance on how to apply key research skills such as making summaries, identifying similarities, drawing comparisons and using codes. The book sets out real-world applicable advice, provides easy-to-follow best practice and helps you to:
* Manage and sort your data
* Find your argument and define your conclusions
* Answer your research question
* Write up your research for assessment and dissemination
Clear, pragmatic and honest, this book will give you the perfect framework to start understanding your qualitative data and to finish your research project.
'I couldn't imagine a better guidebook for making sense of a tragic and momentous time in our lives. Covid by Numbers is comprehensive yet concise, impeccably clear and always humane' Tim Harford. How many people have died because of COVID-19? Which countries have been hit hardest by the virus? What are the benefits and harms of different vaccines? How does COVID-19 compare to the Spanish flu? How have the lockdown measures affected the economy, mental health and crime? This year we have been bombarded by statistics - seven-day rolling averages, rates of infection, excess deaths. Never have numbers been more central to our national conversation, and never has it been more important that we think about them clearly. In the media and in their Observer column, Professor Sir David Spiegelhalter and RSS Statistical Ambassador Anthony Masters have interpreted these statistics, offering a vital public service by giving us the tools we need to make sense of the virus for ourselves and holding the government to account. In Covid by Numbers, they crunch the data on a year like no other, exposing the leading misconceptions about the virus and the vaccine, and answering our essential questions. This timely, concise and approachable book offers a rare depth of insight into one of the greatest upheavals in history, and a trustworthy guide to these most uncertain of times.
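The seven-day rolling average mentioned above smooths out weekday reporting artefacts in daily counts. This sketch uses invented daily case numbers; the trailing window is one common convention among several.

```python
def seven_day_average(daily_counts):
    """Trailing seven-day rolling mean: each value averages the current
    day and the six days before it, so the series starts on day 7."""
    return [
        sum(daily_counts[i - 6 : i + 1]) / 7
        for i in range(6, len(daily_counts))
    ]

# Eight days of (invented) case counts, with a weekend dip and a spike.
cases = [10, 12, 9, 40, 11, 13, 10, 50]
print(seven_day_average(cases))  # [15.0, 20.714285714285715]
```

Note how the spike of 50 on the last day moves the average only modestly, which is precisely why rolling averages are preferred to raw daily figures when judging trends.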
A comprehensive compilation of new developments in data linkage methodology. The increasing availability of large administrative databases has led to a dramatic rise in the use of data linkage, yet the standard texts on linkage are still those that describe the seminal work of the 1950s and 1960s, with some updates. Linkage and analysis of data across sources remain problematic due to the lack of discriminatory and accurate identifiers, missing data, and regulatory issues. Recent developments in data linkage methodology have concentrated on bias and analysis of linked data, novel approaches to organising relationships between databases, and privacy-preserving linkage. Methodological Developments in Data Linkage brings together a collection of contributions from members of the international data linkage community, covering cutting-edge methodology in this field. It presents opportunities and challenges provided by linkage of large and often complex datasets, including analysis problems, legal and security aspects, models for data access and the development of novel research areas. New methods for handling uncertainty in analysis of linked data, solutions for anonymised linkage and alternative models for data collection are also discussed. Key Features:
- Presents cutting-edge methods for a topic of increasing importance to a wide range of research areas, with applications to data linkage systems internationally
- Covers the essential issues associated with data linkage today
- Includes examples based on real data linkage systems, highlighting the opportunities, successes and challenges that the increasing availability of linkage data provides
- Incorporates technical aspects of linkage, management, and analysis of linked data
This book will be of core interest to academics, government employees, data holders, data managers, analysts and statisticians who use administrative data.
It will also appeal to researchers in a variety of areas, including epidemiology, biostatistics, social statistics, informatics, policy and public health.
Geometric and topological inference deals with the retrieval of information about a geometric object using only a finite set of possibly noisy sample points. It has connections to manifold learning and provides the mathematical and algorithmic foundations of the rapidly evolving field of topological data analysis. Building on a rigorous treatment of simplicial complexes and distance functions, this self-contained book covers key aspects of the field, from data representation and combinatorial questions to manifold reconstruction and persistent homology. It can serve as a textbook for graduate students or researchers in mathematics, computer science and engineering interested in a geometric approach to data science.
Faculty members, scholars, and researchers often ask where they should publish their work; which outlets are most suitable to showcase their research? Which journals should they publish in to ensure their work is read and cited? How can the impact of their scholarly output be maximized? The answers to these and related questions affect not only individual scholars, but also academic and research institution stakeholders who are under constant pressure to create and implement organizational policies, evaluation measures and reward systems that encourage quality, high impact research from their members. The explosion of academic research in recent years, along with advances in information technology, has given rise to omnipresent and increasingly important scholarly metrics. These measures need to be assessed and used carefully, however, as their widespread availability often tempts users to jump to improper conclusions without considering several caveats. While various quantitative tools enable the ranking, evaluating, categorizing, and comparing of journals and articles, metrics such as author or article citation counts, journal impact factors, and related measures of institutional research output are somewhat inconsistent with traditional goals and objectives of higher education research and scholarly academic endeavors. This book provides guidance to individual researchers, research organizations, and academic institutions as they grapple with rapidly developing issues surrounding scholarly metrics and their potential value to both policy-makers, as evaluation and measurement tools, and individual scholars, as a way to identify colleagues for potential collaboration, promote their position as public intellectuals, and support intellectual community engagement.
Is college worth the cost? Should I worry about arsenic in my rice? Can we recycle pollution? Real questions of personal finance, public health, and social policy require sober, data-driven analyses. This unique text provides students with the tools of quantitative reasoning to answer such questions. The text models how to clarify the question, recognize and avoid bias, isolate relevant factors, gather data, and construct numerical analyses for interpretation. Themes and techniques are repeated across chapters, with a progression in mathematical sophistication over the course of the book, which helps the student get comfortable with the process of thinking in numbers. This textbook includes references to source materials and suggested further reading, making it user-friendly for motivated undergraduate students. The many detailed problems and worked solutions in the text and extensive appendices help the reader learn mathematical areas such as algebra, functions, graphs, and probability. End-of-chapter problem material provides practice for students, and suggested projects are provided with each chapter. A solutions manual is available online for instructors.
An extensively revised and expanded third edition of the successful textbook on analysis and visualization of social networks, integrating theory, applications, and professional software for performing network analysis (Pajek). The main structural concepts and their applications in social research are introduced with exercises. Pajek software and datasets are available, so readers can learn network analysis through application and case studies. In the end, readers will have the knowledge, skills, and tools to apply social network analysis across different disciplines. A fundamental redesign of the menu structure and the capability to analyze much larger networks required a new edition. This edition presents several new operations, including community detection, generalized main paths searches, new network indices, advanced visualization approaches, and instructions for installing Pajek under Mac OS X. This third edition is up to date with Pajek version 5, and it introduces PajekXXL for very large networks and Pajek3XL for huge networks.
This book provides thorough and comprehensive coverage of most of the new and important quantitative methods of data analysis for graduate students and practitioners. In recent years, data analysis methods have exploded alongside advanced computing power, and it is critical to understand such methods to get the most out of data and to extract signal from noise. The book excels in explaining difficult concepts through simple explanations and detailed explanatory illustrations. Most distinctive is the focus on confidence limits for power spectra and their proper interpretation, something rare or completely missing in other books. Likewise, there is a thorough discussion of how to assess uncertainty via the use of expectancy, and of the easy-to-apply and easy-to-understand bootstrap method. The book is written so that descriptions of each method are as self-contained as possible. Many examples are presented to clarify interpretations, as are user tips in highlighted boxes.
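The percentile bootstrap highlighted in the blurb above can be sketched briefly. This toy uses invented measurements and an arbitrary replication count; it illustrates the general idea, not the book's own implementation.

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, reps=10000,
                 alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval: resample the data with
    replacement many times, compute the statistic each time, and read
    the interval off the sorted resampled statistics."""
    rng = random.Random(seed)
    n = len(sample)
    stats = sorted(
        stat([rng.choice(sample) for _ in range(n)]) for _ in range(reps)
    )
    lo = stats[int((alpha / 2) * reps)]
    hi = stats[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

# Eight invented measurements; 95% CI for their mean.
data = [4.1, 5.6, 3.8, 6.2, 5.0, 4.7, 5.9, 4.4]
low, high = bootstrap_ci(data)
print(low, high)  # roughly an interval around the sample mean of 4.96
```

The appeal the book points to is exactly this: no distributional formula is needed, only resampling, so the same recipe works for medians, correlations, or spectral estimates.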
A new and important contribution to the re-emergent field of comparative anthropology, this book argues that comparative ethnographic methods are essential for more contextually sophisticated accounts of a number of pressing human concerns today. The book includes expert accounts from an international team of scholars, showing how these methods can be used to illuminate important theoretical and practical projects. Illustrated with examples of successful inter-disciplinary projects, it highlights the challenges, benefits, and innovative strategies involved in working collaboratively across disciplines. Through its focus on practical methodological and logistical accounts, it will be of value to both seasoned researchers who seek practical models for conducting their own cutting-edge comparative research, and to teachers and students who are looking for first-person accounts of comparative ethnographic research.
Data is humanity's most important new resource. It has the capacity to provide insight into every aspect of our lives, the planet and the universe at large; it changes not only what we know but also how we know it. Exploiting the value of data could improve our existence as much as - if not more than - previous technological revolutions. Yet data without empathy is useless. There is a tendency in data science to forget about the human needs and feelings of the people who make up the data, the people who work with the data, and those expected to understand the results. Without empathy, this precious resource is at best underused, at worst misused. Data: A Guide to Humans will help you understand how to properly exploit data, why this is so important, and how companies and governments are currently using data. It makes a compelling case for empathy as the crucial factor in elevating our understanding of data to something which can make a lasting and essential contribution to your business, your life and maybe even the world.
Data Presentation with SPSS Explained provides students with all the information they need to conduct small scale analysis of research projects using SPSS and present their results appropriately in their reports. Quantitative data can be collected in the form of a questionnaire, survey or experimental study. This book focuses on presenting this data clearly, in the form of tables and graphs, along with creating basic summary statistics. Data Presentation with SPSS Explained uses an example survey that is clearly explained step-by-step throughout the book. This allows readers to follow the procedures, and easily apply each step in the process to their own research and findings. No prior knowledge of statistics or SPSS is assumed, and everything in the book is carefully explained in a helpful and user-friendly way using worked examples. This book is the perfect companion for students from a range of disciplines including psychology, business, communication, education, health, humanities, marketing and nursing - many of whom are unaware that this extremely helpful program is available at their institution for their use.
This book presents the field of sports statistics to two very distinct target audiences: academicians, in order to raise their interest in this growing field, and sports fans, who, even without advanced mathematical knowledge, will be able to understand the data analysis and gain new insights into their favourite sports. The book thus offers a unique perspective on this attractive topic by combining sports analytics, data visualisation and advanced statistical procedures to extract new findings from sports data, such as improved rankings or prediction methods. Bringing together insights from football, tennis, basketball, track and field, and baseball, the book will appeal to aficionados of any sport, and, thanks to its cutting-edge data analysis tools, will provide the reader with completely new insights into their favourite sport in an engaging and user-friendly way.