Although many businesses are built on creating desirable experiences, products and services for consumers, many still fail to consider the end user in their planning and development processes. This book is here to change that. User experience research, also known as UX research, focuses on understanding user behaviours, needs and motivations through a range of observational techniques, task analysis and other methodologies. User Research is a practical guide that shows readers how to use the vast array of user research methods available. Written by one of the UK's leading UX research professionals, readers can benefit from in-depth knowledge that explores the fundamentals of user research. Covering all the key research methods including face-to-face user testing, card sorting, surveys, A/B testing and many more, the book gives expert insight into the nuances, advantages and disadvantages of each, while also providing guidance on how to interpret, analyze and share the data once it has been obtained. Now in its second edition, User Research provides a new chapter on research operations and infrastructure as well as new material on combining user research methodologies.
Meaningful use of advanced Bayesian methods requires a good understanding of the fundamentals. This engaging book explains the ideas that underpin the construction and analysis of Bayesian models, with particular focus on computational methods and schemes. The unique features of the text are the extensive discussion of available software packages combined with a brief but complete and mathematically rigorous introduction to Bayesian inference. The text introduces Monte Carlo methods, Markov chain Monte Carlo methods, and Bayesian software, with additional material on model validation and comparison, transdimensional MCMC, and conditionally Gaussian models. The inclusion of problems makes the book suitable as a textbook for a first graduate-level course in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software - R/R-INLA, OpenBUGS, JAGS, STAN, and BayesX - makes it useful also for researchers and graduate students from beyond statistics.
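The Markov chain Monte Carlo methods this blurb refers to can be illustrated with a random-walk Metropolis sampler in a few lines of standard-library Python. This is a toy sketch, not material from the book: the standard-normal target, step size, and burn-in length are all arbitrary choices made here for the example.

```python
import math
import random

def metropolis(log_target, n_samples, x0=0.0, step=1.0, seed=42):
    """Random-walk Metropolis sampler for a one-dimensional target density,
    given as a log-density (up to an additive constant)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x))
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log-density known up to a constant
log_normal = lambda x: -0.5 * x * x

draws = metropolis(log_normal, n_samples=20000)
burned = draws[5000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((d - mean) ** 2 for d in burned) / len(burned)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```

Packages such as Stan or JAGS, discussed in the book, automate exactly this kind of sampling (with far more sophisticated proposals) so the user only specifies the model.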
Human error is implicated in nearly all aviation accidents, yet most investigation and prevention programs are not designed around any theoretical framework of human error. Appropriate for all levels of expertise, the book provides the knowledge and tools required to conduct a human error analysis of accidents, regardless of operational setting (i.e. military, commercial, or general aviation). The book contains a complete description of the Human Factors Analysis and Classification System (HFACS), which incorporates James Reason's model of latent and active failures as a foundation. Widely disseminated among military and civilian organizations, HFACS encompasses all aspects of human error, including the conditions of operators and elements of supervisory and organizational failure. It attracts a very broad readership. Specifically, the book serves as the main textbook for a course in aviation accident investigation taught by one of the authors at the University of Illinois. This book will also be used in courses designed for military safety officers and flight surgeons in the U.S. Navy, Army and the Canadian Defense Force, who currently utilize the HFACS system during aviation accident investigations. Additionally, the book has been incorporated into the popular workshop on accident analysis and prevention provided by the authors at several professional conferences world-wide. The book is also targeted for students attending Embry-Riddle Aeronautical University which has satellite campuses throughout the world and offers a course in human factors accident investigation for many of its majors. In addition, the book will be incorporated into courses offered by Transportation Safety International and the Southern California Safety Institute. Finally, this book serves as an excellent reference guide for many safety professionals and investigators already in the field.
A valuable new edition of a standard reference The use of statistical methods for categorical data has increased dramatically, particularly for applications in the biomedical and social sciences. An Introduction to Categorical Data Analysis, Third Edition summarizes these methods and shows readers how to use them using software. Readers will find a unified generalized linear models approach that connects logistic regression and loglinear models for discrete data with normal regression for continuous data. Adding to the value in the new edition is: - Illustrations of the use of R software to perform all the analyses in the book - A new chapter on alternative methods for categorical data, including smoothing and regularization methods (such as the lasso), classification methods such as linear discriminant analysis and classification trees, and cluster analysis - New sections in many chapters introducing the Bayesian approach for the methods of that chapter - More than 70 analyses of data sets to illustrate application of the methods, and about 200 exercises, many containing other data sets - An appendix showing how to use SAS, Stata, and SPSS, and an appendix with short solutions to most odd-numbered exercises Written in an applied, nontechnical style, this book illustrates the methods using a wide variety of real data, including medical clinical trials, environmental questions, drug use by teenagers, horseshoe crab mating, basketball shooting, correlates of happiness, and much more. An Introduction to Categorical Data Analysis, Third Edition is an invaluable tool for statisticians and biostatisticians as well as methodologists in the social and behavioral sciences, medicine and public health, marketing, education, and the biological and agricultural sciences.
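The generalized linear models approach the blurb describes treats logistic regression as a GLM with a logit link. A minimal from-scratch sketch (standard-library Python only, with a made-up toy data set; fitting by plain gradient ascent rather than the iteratively reweighted least squares a real package would use):

```python
import math

def fit_logistic(xs, ys, lr=0.1, n_iter=20000):
    """Fit P(y=1 | x) = 1 / (1 + exp(-(b0 + b1*x))) by gradient ascent
    on the log-likelihood -- a GLM with a logit link."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)       # score contribution for the intercept
            g1 += (y - p) * x   # score contribution for the slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical binary outcome that becomes more likely as x grows
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
print("fitted slope is positive:", b1 > 0)
```

In the book's software, the same fit is one call (e.g. `glm(y ~ x, family = binomial)` in R); the sketch just makes the underlying likelihood maximization visible.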
Data Presentation with SPSS Explained provides students with all the information they need to conduct small scale analysis of research projects using SPSS and present their results appropriately in their reports. Quantitative data can be collected in the form of a questionnaire, survey or experimental study. This book focuses on presenting this data clearly, in the form of tables and graphs, along with creating basic summary statistics. Data Presentation with SPSS Explained uses an example survey that is clearly explained step-by-step throughout the book. This allows readers to follow the procedures, and easily apply each step in the process to their own research and findings. No prior knowledge of statistics or SPSS is assumed, and everything in the book is carefully explained in a helpful and user-friendly way using worked examples. This book is the perfect companion for students from a range of disciplines including psychology, business, communication, education, health, humanities, marketing and nursing - many of whom are unaware that this extremely helpful program is available at their institution for their use.
The social sciences are becoming datafied. The questions once considered the domain of sociologists are now answered by data scientists operating on large datasets and breaking with methodological tradition, for better or worse. The traditional social sciences, such as sociology or anthropology, are under the double threat of becoming marginalized or even irrelevant, both from new methods of research which require more computational skills and from increasing competition from the corporate world which gains an additional advantage based on data access. However, unlike data scientists, sociologists and anthropologists have a long history of doing qualitative research. The more quantified datasets we have, the more difficult it is to interpret them without adding layers of qualitative interpretation. Big Data therefore needs Thick Data. This book presents the available arsenal of new methods and tools for studying society both quantitatively and qualitatively, opening ground for the social sciences to take the lead in analysing digital behaviour. It shows that Big Data can and should be supplemented and interpreted through thick data as well as cultural analysis. Thick Big Data is critically important for students and researchers in the social sciences to understand the possibilities of digital analysis, both in the quantitative and qualitative area, and to successfully build mixed-methods approaches.
Since long before computers were even thought of, data has been collected and organized by diverse cultures across the world. Once access to the Internet became a reality for large swathes of the world's population, the amount of data generated each day became huge, and continues to grow exponentially. It includes all our uploaded documents, video, and photos, all our social media traffic, our online shopping, even the GPS data from our cars. 'Big Data' represents a qualitative change, not simply a quantitative one. The term refers both to the new technologies involved, and to the way it can be used by business and government. Dawn E. Holmes uses a variety of case studies to explain how data is stored, analysed, and exploited by a variety of bodies from big companies to organizations concerned with disease control. Big data is transforming the way businesses operate, and the way medical research can be carried out. At the same time, it raises important ethical issues; Holmes discusses cases such as the Snowden affair, data security, and domestic smart devices which can be hijacked by hackers. ABOUT THE SERIES: The Very Short Introductions series from Oxford University Press contains hundreds of titles in almost every subject area. These pocket-sized books are the perfect way to get ahead in a new subject quickly. Our expert authors combine facts, analysis, perspective, new ideas, and enthusiasm to make interesting and challenging topics highly readable.
This book showcases the different ways in which contemporary forms of data analysis are being used in urban planning and management. It highlights the emerging possibilities that city-regional governance, technology and data have for better planning and urban management - and discusses how you can apply them to your research. Including perspectives from across the globe, it's packed with examples of good practice and helps to demystify the process of using big and open data. Learn about different kinds of emergent data sources and how they are processed, visualised and presented. Understand how spatial analysis and GIS are used in city planning. See examples of how contemporary data analytics methods are being applied in a variety of contexts, such as 'smart' city management and megacities. Aimed at upper undergraduate and postgraduate students studying spatial analysis and planning, this timely text is the perfect companion to enable you to apply data analytics approaches in your research.
Throughout the world, voters lack access to information about politicians, government performance, and public services. Efforts to remedy these informational deficits are numerous. Yet do informational campaigns influence voter behavior and increase democratic accountability? Through the first project of the Metaketa Initiative, sponsored by the Evidence in Governance and Politics (EGAP) research network, this book aims to address this substantive question and at the same time introduce a new model for cumulative learning that increases coordination among otherwise independent researcher teams. It presents the overall results (using meta-analysis) from six independently conducted but coordinated field experimental studies, the results from each individual study, and the findings from a related evaluation of whether practitioners utilize this information as expected. It also discusses lessons learned from EGAP's efforts to coordinate field experiments, increase replication of theoretically important studies across contexts, and increase the external validity of field experimental research.
A clear, easy-to-read guide to presenting your message using statistical data. Poor presentation of data is everywhere; basic principles are forgotten or ignored. As a result, audiences are presented with confusing tables and charts that do not make immediate sense. This book is intended to be read by all who present data in any form. The author, a chartered statistician who has run many courses on the subject of data presentation, presents numerous examples alongside an explanation of how improvements can be made and basic principles to adopt. He advocates following four key C words in all messages: Clear, Concise, Correct and Consistent. Following the principles in the book will lead to clearer, simpler and easier-to-understand messages which can then be assimilated faster. Anyone from student to researcher, journalist to policy adviser, charity worker to government statistician, will benefit from reading this book. More importantly, it will also benefit the recipients of the presented data. "Ed Swires-Hennessy, a recognised expert in the presentation of statistics, explains and clearly describes a set of principles of clear and objective statistical communication. This book should be required reading for all those who present statistics." (Richard Laux, UK Statistics Authority) "I think this is a fantastic book and hope everyone who presents data or statistics makes time to read it first." (David Marder, Chief Media Adviser, Office for National Statistics, UK) "Ed's book makes his tried-and-tested material widely available to anyone concerned with understanding and presenting data. It is full of interesting insights, is highly practical and packed with sensible suggestions and nice ideas that you immediately want to try out." (Dr Shirley Coleman, Principal Statistician, Industrial Statistics Research Unit, School of Mathematics and Statistics, Newcastle University, UK)
A human disaster is defined as a hazardous event that overwhelms the capacity of the local community to respond to the needs of the affected population. Medical and public health responses aim to provide care efficiently and promptly, but all too often responses are hampered by recurring mistakes. Analysing the factors at play, such as the scale and frequency of disasters and the variety of challenges they present, is central to developing more effective response plans. However, the complexity of disasters often precludes reliable data collection, hampering the accuracy of the results, conclusions and recommendations required to improve responses. Disaster Evaluation Research: A Field Guide presents a new approach to the study of disaster by incorporating a mixed-methods research approach. This practical manual provides a range of reliable methods, robust approaches and proven techniques for the gathering and analysing of data. Written by leading evaluation scientists with a wealth of experience, the authors present their 'EIGHT Step Model' for disaster evaluation studies. This framework applies evaluation science to disaster responses, helping scientists to select key stakeholders effectively, write evaluation questions, use logic models and mixed-methods research design, prepare sampling plans, collect and analyse data, and prepare a final report. This guide also features useful tools for carrying out evaluations, including evaluation questions, indicators and data sources, resources, and questionnaires used in past evaluation studies. Using a clear, accessible, step-by-step style, this practical manual is easy to use in the field and essential reading for medical and public health professionals involved in disaster preparedness and response, humanitarian relief workers, policy analysts, evaluation scientists and epidemiologists.
Focus on the most important and most often overlooked factor in a successful Tableau project: data. Without a reliable data source, you will not achieve the results you hope for in Tableau. This book does more than teach the mechanics of data preparation. It teaches you how to look at data in a new way, how to recognize the most common issues that hinder analytics, and how to mitigate those factors one by one. Tableau can change the course of business, but the old adage of "garbage in, garbage out" is the hard truth that hides behind every Tableau sales pitch. That amazing sales demo does not work as well with bad data. The unfortunate reality is that almost all data starts out in a less-than-perfect state. Data prep is hard. Traditionally, we were forced into the world of the database, where complex ETL (Extract, Transform, Load) operations created by the data team did all the heavy lifting for us. Fortunately, we have moved past those days. With the introduction of the Tableau Data Prep tool you can now handle most of the common data prep and cleanup tasks on your own, at your desk, and without the help of the data team. This essential book will guide you through:
- The layout and important parts of the Tableau Data Prep tool
- Connecting to data
- Data quality and consistency
- The shape of the data: is the data oriented in columns or rows? How to decide? Why does it matter?
- The level of detail in the source data, and why it is important
- Combining source data to bring in more fields and rows
- Saving the data flow and the results of your data prep work
- Common cleanup and setup tasks in Tableau Desktop
What You Will Learn:
- Recognize data sources that are good candidates for analytics in Tableau
- Connect to local, server, and cloud-based data sources
- Profile data to better understand its content and structure
- Rename fields, adjust data types, group data points, and aggregate numeric data
- Pivot data
- Join data from local, server, and cloud-based sources for unified analytics
- Review the steps and results of each phase of the Data Prep process
- Output new data sources that can be reviewed in Tableau or any other analytics tool
Who This Book Is For: Tableau Desktop users who want to connect to data, profile the data to identify common issues, clean up those issues, join to additional data sources, and save the newly cleaned, joined data so that it can be used more effectively in Tableau.
Every country, every subnational government, and every district has a designated population, and this has a bearing on politics in ways most citizens and policymakers are barely aware of. Population and Politics provides a comprehensive evaluation of the political implications stemming from the size of a political unit - on social cohesion, the number of representatives, overall representativeness, particularism ('pork'), citizen engagement and participation, political trust, electoral contestation, leadership succession, professionalism in government, power concentration in the central apparatus of the state, government intervention, civil conflict, and overall political power. A multimethod approach combines field research in small states and islands with cross-country and within-country data analysis. Population and Politics will be of interest to academics, policymakers, and anyone concerned with decentralization and multilevel governance.
Network thinking and network analysis are rapidly expanding features of ecological research. Network analysis of ecological systems includes representations and modelling of the interactions in an ecosystem, in which species or factors are joined by pairwise connections. This book provides an overview of ecological network analysis including generating processes, the relationship between structure and dynamic function, and statistics and models for these networks. Starting with a general introduction to the composition of networks and their characteristics, it includes details on such topics as measures of network complexity, applications of spectral graph theory, how best to include indirect species interactions, and multilayer, multiplex and multilevel networks. Graduate students and researchers who want to develop and understand ecological networks in their research will find this volume inspiring and helpful. Detailed guidance to those already working in network ecology but looking for advice is also included.
Networks are everywhere: networks of friends, transportation networks and the Web. Neurons in our brains and proteins within our bodies form networks that determine our intelligence and survival. This modern, accessible textbook introduces the basics of network science for a wide range of job sectors from management to marketing, from biology to engineering, and from neuroscience to the social sciences. Students will develop important, practical skills and learn to write code for using networks in their areas of interest - even as they are just learning to program with Python. Extensive sets of tutorials and homework problems provide plenty of hands-on practice and longer programming tutorials online further enhance students' programming skills. This intuitive and direct approach makes the book ideal for a first course, aimed at a wide audience without a strong background in mathematics or computing but with a desire to learn the fundamentals and applications of network science.
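The kind of beginner Python exercise this textbook describes can be sketched with the standard library alone: a friendship network stored as an adjacency list, with node degrees and a breadth-first shortest path. The network and names below are invented for the example.

```python
from collections import defaultdict, deque

def build_graph(edges):
    """Store an undirected network as an adjacency list (node -> neighbours)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def shortest_path_length(adj, source, target):
    """Breadth-first search: number of hops between two nodes."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return dist[node]
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return None  # target unreachable from source

# A small hypothetical friendship network
edges = [("Ann", "Bob"), ("Bob", "Cat"), ("Cat", "Dan"), ("Ann", "Cat")]
g = build_graph(edges)
degrees = {node: len(nbrs) for node, nbrs in g.items()}
print(degrees["Cat"])                         # 3
print(shortest_path_length(g, "Ann", "Dan"))  # 2
```

Courses built on books like this typically move from such hand-rolled structures to a library such as NetworkX, where `degree` and `shortest_path_length` are built in.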
Spatial data analysis has seen explosive growth in recent years. Both in mainstream statistics and econometrics as well as in many applied fields, the attention to space, location, and interaction has become an important feature of scholarly work. The methods developed to deal with problems of spatial pattern recognition, spatial autocorrelation, and spatial heterogeneity have seen greatly increased adoption, in part due to the availability of user-friendly desktop software. Through his theoretical and applied work, Arthur Getis has been a major contributing figure in this development. In this volume, we take both a retrospective and a prospective view of the field. We use the occasion of the retirement and move to emeritus status of Arthur Getis to highlight the contributions of his work. In addition, we aim to place it into perspective in light of the current state of the art and future directions in spatial data analysis. To this end, we elected to combine reprints of selected classic contributions by Getis with chapters written by key spatial scientists. These scholars were specifically invited to react to the earlier work by Getis with an eye toward assessing its impact, tracing out the evolution of related research, and reflecting on the future broadening of spatial analysis. The organization of the book follows four main themes in Getis' contributions:
- Spatial analysis
- Pattern analysis
- Local statistics
- Applications
For each of these themes, the chapters provide a historical perspective on early methodological developments and theoretical insights, assessments of these contributions in light of the current state of the art, as well as descriptions of new techniques and applications.
Questioning Numbers: How to Read and Critique Research is a critical companion for students in research methods courses in any of the social sciences. This book helps teach students how to read and critique research that employs numbers in the course of empirical argument. Author Karin Gwinn Wilkins provides a list of guidelines for reading research and also presents a critical approach to judging and using numbers in navigating and changing social worlds.
This book integrates philosophy of science, data acquisition methods, and statistical modeling techniques to present readers with a forward-thinking perspective on clinical science. It reviews modern research practices in clinical psychology that support the goals of psychological science, study designs that promote good research, and quantitative methods that can test specific scientific questions. It covers new themes in research including intensive longitudinal designs, neurobiology, developmental psychopathology, and advanced computational methods such as machine learning. Core chapters examine significant statistical topics, for example missing data, causality, meta-analysis, latent variable analysis, and dyadic data analysis. A balanced overview of observational and experimental designs is also supplied, including preclinical research and intervention science. This is a foundational resource that supports the methodological training of the current and future generations of clinical psychological scientists.
The increased and widespread availability of large network data resources in recent years has resulted in a growing need for effective methods for their analysis. The challenge is to detect patterns that provide a better understanding of the data. However, this is not a straightforward task because of the size of the data sets and the computer power required for the analysis. The solution is to devise methods for approximately answering the questions posed, and these methods will vary depending on the data sets under scrutiny. This cutting-edge text introduces biological concepts and biotechnologies producing the data, graph and network theory, cluster analysis and machine learning, before discussing the thought processes and creativity involved in the analysis of large-scale biological and medical data sets, using a wide range of real-life examples. Bringing together leading experts, this text provides an ideal introduction to and insight into the interdisciplinary field of network data analysis in biomedicine.
Python is one of the most popular programming languages, widely used for data analysis and modelling, and is fast becoming the leading choice for scientists and engineers. Unlike other textbooks introducing Python, typically organised by language syntax, this book uses many examples from across Biology, Chemistry, Physics, Earth science, and Engineering to teach and motivate students in science and engineering. The text is organised by the tasks and workflows students undertake day-to-day, helping them see the connections between programming tools and their disciplines. The pace of study is carefully developed for complete beginners, and a spiral pedagogy is used so concepts are introduced across multiple chapters, allowing readers to engage with topics more than once. "Try This!" exercises and online Jupyter notebooks encourage students to test their new knowledge, and further develop their programming skills. Online solutions are available for instructors, alongside discipline-specific homework problems across the sciences and engineering.
Whilst a great deal of progress has been made in recent decades, concerns persist about the course of the social sciences. Progress in these disciplines is hard to assess and core scientific goals such as discovery, transparency, reproducibility, and cumulation remain frustratingly out of reach. Despite having technical acumen and an array of tools at their disposal, today's social scientists may be only slightly better equipped to vanquish error and construct an edifice of truth than their forebears - who conducted analyses with slide rules and wrote up results with typewriters. This volume considers the challenges facing the social sciences, as well as possible solutions. In doing so, we adopt a systemic view of the subject matter. What are the rules and norms governing behavior in the social sciences? What kinds of research, and which sorts of researcher, succeed and fail under the current system? In what ways does this incentive structure serve, or subvert, the goal of scientific progress?
Learn how to make the right decisions for your business with the help of Python recipes and the expertise of data leaders.
Key Features:
- Learn and practice various clustering techniques to gather market insights
- Explore real-life use cases from the business world to contextualize your learning
- Work your way through practical recipes that will reinforce what you have learned
Book Description: One of the most valuable contributions of data science is toward helping businesses make the right decisions. Understanding this complicated confluence of two disparate worlds, as well as a fiercely competitive market, calls for all the guidance you can get. The Art of Data-Driven Business is your invaluable guide to gaining a business-driven perspective, as well as leveraging the power of machine learning (ML) to guide decision-making in your business. This book provides a common ground of discussion for several profiles within a company. You'll begin by looking at how to use Python and its many libraries for machine learning. Experienced data scientists may want to skip this short introduction, but you'll soon get to the meat of the book and explore the many and varied ways ML with Python can be applied to the domain of business decisions through real-world business problems that you can tackle by yourself. As you advance, you'll gain practical insights into the value that ML can provide to your business, as well as the technical ability to apply a wide variety of tried-and-tested ML methods. By the end of this Python book, you'll have learned the value of basing your business decisions on data-driven methodologies and have developed the Python skills needed to apply what you've learned in the real world.
What You Will Learn:
- Create effective dashboards with the seaborn library
- Predict whether a customer will cancel their subscription to a service
- Analyze key pricing metrics with pandas
- Recommend the right products to your customers
- Determine the costs and benefits of promotions
- Segment your customers using clustering algorithms
Who This Book Is For: This book is for data scientists, machine learning engineers and developers, data engineers, and business decision makers who want to apply data science for business process optimization and develop the skills needed to implement data science projects in marketing, sales, pricing, customer success, ad tech, and more from a business perspective. Other professionals looking to explore how data science can be used to improve business operations, as well as individuals with technical skills who want to back their technical proposal with a strong business case, will also find this book useful.
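The customer segmentation via clustering mentioned above is typically done with a library such as scikit-learn, but the idea fits in a short standard-library sketch: k-means on two invented features (monthly spend, visits per month), with hypothetical customer data chosen so two segments are obvious.

```python
import random

def kmeans(points, k, n_iter=50, seed=0):
    """Plain k-means on 2-D points: assign each point to the nearest
    centre, then move each centre to the mean of its members."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centres[c][0]) ** 2
                                + (p[1] - centres[c][1]) ** 2)
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep a centre in place if it lost all points
                centres[i] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centres, clusters

# Hypothetical customers: (monthly spend, visits per month)
customers = [(10, 1), (12, 2), (11, 1), (90, 8), (95, 9), (88, 7)]
centres, clusters = kmeans(customers, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: low- and high-spend segments
```

Real customer data would need scaling and a choice of k, which is where the book's recipe-driven treatment comes in; this sketch only shows the mechanics.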
Even though many data analytics tools have been developed in recent years, their usage in the field of cyber twin warrants new approaches that consider various aspects including unified data representation, zero-day attack detection, data sharing across threat detection systems, real-time analysis, sampling, dimensionality reduction, resource-constrained data processing, and time series analysis for anomaly detection. Further study is required to fully understand the opportunities, benefits, and difficulties of data analytics and the internet of things in today's modern world. New Approaches to Data Analytics and Internet of Things Through Digital Twin considers how data analytics and the internet of things can be used successfully within the field of digital twin as well as the potential future directions of these technologies. Covering key topics such as edge networks, deep learning, intelligent data analytics, and knowledge discovery, this reference work is ideal for computer scientists, industry professionals, researchers, scholars, practitioners, academicians, instructors, and students.
Elementary Statistics: A Guide to Data Analysis Using R provides students with an introduction to both the field of statistics and R, one of the most widely used languages for statistical computing, analysis, and graphing in a variety of fields, including the sciences, finance, banking, health care, e-commerce, and marketing. Part I provides an overview of both statistics and R. Part II focuses on descriptive statistics and probability. In Part III, students learn about discrete and continuous probability distributions with chapters addressing probability distributions, binomial probability distributions, and normal probability distributions. Part IV speaks to statistical inference with content covering confidence intervals, hypothesis testing, chi-square tests and F-distributions. The final part explores additional statistical inference and assumptions, including correlation, regression, and nonparametric statistics. Helpful appendices provide students with an index of terminology, an index of applications, a glossary of symbols, and a guide to the most common R commands. Elementary Statistics is an ideal resource for introductory courses in undergraduate statistics, graduate statistics, and data analysis across the disciplines.
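The confidence intervals covered in Part IV can be illustrated briefly. The book works in R; the sketch below uses Python's standard library instead (for self-containment here), computing a large-sample normal-approximation interval for a mean on an invented set of test scores.

```python
import math
from statistics import NormalDist, mean, stdev

def mean_confidence_interval(data, confidence=0.95):
    """Normal-approximation (z) confidence interval for a mean:
    sample mean +/- z * (sample sd / sqrt(n))."""
    m = mean(data)
    se = stdev(data) / math.sqrt(len(data))
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 for 95%
    return m - z * se, m + z * se

# Hypothetical exam scores
scores = [52, 61, 58, 49, 66, 55, 60, 63, 57, 54]
lo, hi = mean_confidence_interval(scores)
print(round(lo, 1), round(hi, 1))
```

With a sample this small a t interval (as an introductory text would use) is slightly wider; the z version is shown only because the standard library has no t quantile function.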