Books > Reference & Interdisciplinary > Communication studies > Data analysis
This book provides thorough and comprehensive coverage of most of the new and important quantitative methods of data analysis for graduate students and practitioners. In recent years, data analysis methods have exploded alongside advanced computing power, and it is critical to understand such methods to get the most out of data, and to extract signal from noise. The book excels in explaining difficult concepts through simple explanations and detailed explanatory illustrations. Most distinctive is the focus on confidence limits for power spectra and their proper interpretation, something rare or completely missing in other books. Likewise, there is a thorough discussion of how to assess uncertainty via the use of expectancy and the easy-to-apply, easy-to-understand bootstrap method. The book is written so that descriptions of each method are as self-contained as possible. Many examples are presented to clarify interpretations, as are user tips in highlighted boxes.
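The bootstrap method highlighted in this blurb really is simple to apply. As a minimal illustrative sketch (this is not code from the book, and the sample values are invented), a percentile-bootstrap confidence interval for a mean can be written in a few lines of Python:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    # Resample the data with replacement and recompute the statistic each time.
    estimates = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_resamples)
    )
    # Take the empirical (alpha/2, 1 - alpha/2) percentiles of the estimates.
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3, 2.5, 1.7, 2.0]
low, high = bootstrap_ci(sample)
```

The appeal of the method is that the same recipe works for medians, correlations, or any other statistic, with no distributional assumptions beyond the sample itself.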
In this fascinating follow-up to the bestselling Information is Beautiful and Knowledge is Beautiful, the king of infographics David McCandless uses spectacular visuals to give us all a bit of good news. We are living in the Information Age, in which we are constantly bombarded with data - on television, in print and online. How can we relate to this mind-numbing overload? Enter David McCandless and his amazing infographics: simple, elegant ways to understand information too complex or abstract to grasp any way but visually. In his unique signature style, he creates dazzling displays that blend facts with their connections, contexts and relationships, making information meaningful, entertaining - and beautiful. In his highly anticipated third book, McCandless illustrates positive news from around the world, for an informative, engaging and uplifting collection of new infographic art.
Experts in data analytics and power engineering present techniques addressing the needs of modern power systems, covering theory and applications related to power system reliability, efficiency, and security. With topics spanning large-scale and distributed optimization, statistical learning, big data analytics, graph theory, and game theory, this is an essential resource for graduate students and researchers in academia and industry with backgrounds in power systems engineering, applied mathematics, and computer science.
This book integrates philosophy of science, data acquisition methods, and statistical modeling techniques to present readers with a forward-thinking perspective on clinical science. It reviews modern research practices in clinical psychology that support the goals of psychological science, study designs that promote good research, and quantitative methods that can test specific scientific questions. It covers new themes in research including intensive longitudinal designs, neurobiology, developmental psychopathology, and advanced computational methods such as machine learning. Core chapters examine significant statistical topics, for example missing data, causality, meta-analysis, latent variable analysis, and dyadic data analysis. A balanced overview of observational and experimental designs is also supplied, including preclinical research and intervention science. This is a foundational resource that supports the methodological training of the current and future generations of clinical psychological scientists.
The vast volume of financial data that exists and the globalisation of financial markets create new challenges for researchers and practitioners in economics and finance. Computational data analysis techniques can contribute significantly within this context, by providing a rigorous analytic framework for decision-making and support, in areas such as financial time series analysis and forecasting, risk assessment, trading, asset management, and pricing. The aim of this edited volume is to present, in a unified context, some recent advances in the field, covering the theory, the methodologies, and the applications of computational data analysis methods in economics and finance. The volume consists of papers published in the fifth volume of the journal "Computational Optimization in Economics & Finance" (published by Nova Science Publishers). The contents of this volume cover a wide range of topics, including among others stock market applications, corporate finance, corporate performance, as well as macroeconomic issues.
The burgeoning field of data analysis is expanding at an incredible pace due to the proliferation of data collection in almost every area of science. The enormous data sets now routinely encountered in the sciences provide an incentive to develop mathematical techniques and computational algorithms that help synthesize, interpret and give meaning to the data in the context of its scientific setting. A specific aim of this book is to integrate standard scientific computing methods with data analysis. By doing so, it brings together, in a self-consistent fashion, the key ideas from statistics, time-frequency analysis, and low-dimensional reductions. The blend of these ideas provides meaningful insight into the data sets one is faced with in every scientific subject today, including those generated from complex dynamical systems. This is a particularly exciting field and much of the final part of the book is driven by intuitive examples from it, showing how the three areas can be used in combination to give critical insight into the fundamental workings of various problems. Data-Driven Modeling and Scientific Computation is a survey of practical numerical solution techniques for ordinary and partial differential equations as well as algorithms for data manipulation and analysis. Emphasis is on the implementation of numerical schemes to practical problems in the engineering, biological and physical sciences. An accessible introductory-to-advanced text, this book fully integrates MATLAB and its versatile and high-level programming functionality, while bringing together computational and data skills for both undergraduate and graduate students in scientific computing.
A comprehensive guide to data analysis techniques for physical scientists, providing a valuable resource for advanced undergraduate and graduate students, as well as seasoned researchers. The book begins with an extensive discussion of the foundational concepts and methods of probability and statistics under both the frequentist and Bayesian interpretations of probability. It next presents basic concepts and techniques used for measurements of particle production cross-sections, correlation functions, and particle identification. Much attention is devoted to notions of statistical and systematic errors, beginning with intuitive discussions and progressively introducing the more formal concepts of confidence intervals, credible range, and hypothesis testing. The book also includes an in-depth discussion of the methods used to unfold or correct data for instrumental effects associated with measurement and process noise as well as particle and event losses, before ending with a presentation of elementary Monte Carlo techniques.
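The elementary Monte Carlo techniques mentioned at the end of this blurb can be illustrated with the classic textbook example (a sketch for orientation only, not material from the book): estimating pi from the fraction of uniform random points in the unit square that land inside the quarter unit circle.

```python
import random

def mc_pi(n_samples=100_000, seed=1):
    """Estimate pi by the hit fraction of random points in the unit square
    that fall inside the quarter unit circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(n_samples)
    )
    # Area of quarter circle / area of square = pi/4, so scale by 4.
    return 4 * inside / n_samples

estimate = mc_pi()
```

The statistical error of such an estimate shrinks only as 1/sqrt(n), which is exactly why the careful treatment of statistical uncertainty emphasized throughout this book matters for Monte Carlo work.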
In order to assist a hospital in managing its resources and patients, modelling the length of stay is highly important. Recent health scholarship and practice have largely remained empirical, dwelling on primary data. This is critically important, first, because health planners generally rely on data to establish trends and patterns of disease burden at national or regional level. Secondly, epidemiologists depend on data to investigate possible risk factors of disease. Yet the use of routine or secondary data has, in recent years, proved increasingly significant in such endeavours. Various units within health systems collect such data primarily as part of surveillance, monitoring and evaluation. Such data is sometimes periodically supplemented by population-based sample survey datasets. Thirdly, coupled with statistical tools, public health professionals are able to analyze health data and breathe life into what might otherwise be meaningless numbers. The main focus of this book is to present and showcase advanced modelling of routine or secondary survey data. Studies demonstrate that statistical literacy and knowledge are needed to understand health research outputs. The advent of user-friendly statistical packages, combined with computing power and the widespread availability of public health data, has resulted in more epidemiological studies being reported in the literature. However, the analysis of secondary data poses some unique challenges, which most of the health literature has so far failed to recognize, resulting in inappropriate analyses and erroneous conclusions. This book presents the application of advanced statistical techniques to real examples emanating from routine or secondary survey data. These are essentially datasets in which the two editors have been involved, demonstrating how to tackle these challenges.
Some of these challenges are: the complex sampling design of the surveys, the hierarchical nature of the data, the dependence of data within sampled clusters, and missing data, among many others. Using data from the Health Management Information System (HMIS) and the Demographic and Health Survey (DHS), we provide various approaches and techniques for dealing with data complexity and for handling correlated or clustered data. Each chapter presents example code, which can be used to analyze similar data in R, Stata or SPSS. To make the book more concise, we have provided the codes on the book's website. The book considers four main topics in the field of health sciences research: (i) structural equation modeling; (ii) spatial and spatio-temporal modeling; (iii) correlated or clustered copula modeling; and (iv) survival analysis. The book will be of value to methodologists, including students undertaking Master's or Doctoral level programmes as well as other researchers seeking a related reference on quantitative analysis in public health or health sciences, or in other areas where data of a similar nature are applicable. Further, the book can be a resource to public health professionals interested in quantitative approaches to answering questions of an epidemiological nature. Each chapter starts with a motivating background, then reviews the statistical methods, analysis and results, and ends with a discussion and possible recommendations.
The approximation and the estimation of nonparametric functions by projections on an orthonormal basis of functions are useful in data analysis. This book presents series estimators defined by projections on bases of functions; they extend the estimators of densities to mixture models, deconvolution and inverse problems, to semi-parametric and nonparametric models for regressions, hazard functions and diffusions. They are estimated in the Hilbert spaces with respect to the distribution function of the regressors and their optimal rates of convergence are proved. Their mean square errors depend on the size of the basis, which is consistently estimated by cross-validation. Wavelet estimators are defined and studied in the same models. The choice of the basis, with suitable parametrizations, and their estimation improve the existing methods and lead to applications to a wide class of models. The rates of convergence of the series estimators are the best among all nonparametric estimators, with a great improvement in multidimensional models. Original methods are developed for the estimation in deconvolution and inverse problems. The asymptotic properties of test statistics based on the estimators are also established.
Written for anyone beginning a research project, this introductory book takes you through the process of analysing your data from start to finish. The author sets out an easy-to-use model for coding data in order to break it down into parts, and then to reassemble it to create a meaningful picture of the phenomenon under study. Full of useful advice, the book guides the reader through the final, difficult integrating phase of qualitative analysis, including diagramming, memoing, thinking aloud, and using one's feelings, and how to incorporate the use of software where appropriate. Ideal for third-year undergraduate students, master's students, postgraduates and anybody beginning a research project, the book includes examples covering a wide range of subjects, making it useful for students across the social science disciplines. Hennie Boeije is currently an Associate Professor with the Department of Methodology and Statistics of the Faculty of Social and Behavioural Sciences at Utrecht University, The Netherlands.
From the quality of the air we breathe to the national leaders we choose, data and statistics are a pervasive feature of daily life and daily news. But how do news, numbers and public opinion interact with each other - and with what impacts on society at large? Featuring an international roster of established and emerging scholars, this book is the first comprehensive collection of research into the little-understood processes underpinning the uses/misuses of statistical information in journalism and their socio-psychological and political effects. Moving beyond the hype around "data journalism," News, Numbers and Public Opinion delves into a range of more latent, fundamental questions such as: * Is it true that most citizens and journalists do not have the necessary skills and resources to critically process and assess numbers? * How do/should journalists make sense of the increasingly data-driven world? * What strategies, formats and frames do journalists use to gather and represent different types of statistical data in their stories? * What are the socio-psychological and political effects of such data gathering and representation routines, formats and frames on the way people acquire knowledge and form attitudes? * What skills and resources do journalists and publics need to deal effectively with the influx of numbers into daily work and life - and how can newsrooms and journalism schools meet that need? The book is a must-read for not only journalists, journalism and media scholars, statisticians and data scientists but also anybody interested in the interplay between journalism, statistics and society.
This book is aimed primarily at microbiologists who are undertaking research, and who require a basic knowledge of statistics to analyse their experimental data. Computer software employing a wide range of data analysis methods is widely available to experimental scientists. The availability of this software, however, makes it even more essential that microbiologists understand the basic principles of statistics. Statistical analysis of data can be complex with many different methods of approach, each of which applies in a particular experimental circumstance. In addition, most statistical software commercially available is complex and difficult to use. Hence, it is easy to apply an incorrect statistical method to data and to draw the wrong conclusions from an experiment. This book attempts to present the basic logic of statistics as clearly as possible and thereby to dispel some of the myths that often surround the subject. The book is presented as a series of 'Statnotes', many of which were originally published in the 'Microbiologist' by the Society for Applied Microbiology, each of which deals with various topics including the nature of variables, comparing the means of two or more groups, non-parametric statistics, analysis of variance, correlating variables, and more complex methods such as multiple linear regression and factor analysis. In each case, the relevant statistical methods are illustrated with scenarios and real experimental data drawn from experiments in microbiology. The text incorporates a glossary of the most commonly used statistical terms and a section to aid the investigator in selecting the most appropriate test.
Human error is implicated in nearly all aviation accidents, yet most investigation and prevention programs are not designed around any theoretical framework of human error. Appropriate for all levels of expertise, the book provides the knowledge and tools required to conduct a human error analysis of accidents, regardless of operational setting (i.e. military, commercial, or general aviation). The book contains a complete description of the Human Factors Analysis and Classification System (HFACS), which incorporates James Reason's model of latent and active failures as a foundation. Widely disseminated among military and civilian organizations, HFACS encompasses all aspects of human error, including the conditions of operators and elements of supervisory and organizational failure. It attracts a very broad readership. Specifically, the book serves as the main textbook for a course in aviation accident investigation taught by one of the authors at the University of Illinois. This book will also be used in courses designed for military safety officers and flight surgeons in the U.S. Navy, Army and the Canadian Defense Force, who currently utilize the HFACS system during aviation accident investigations. Additionally, the book has been incorporated into the popular workshop on accident analysis and prevention provided by the authors at several professional conferences world-wide. The book is also targeted for students attending Embry-Riddle Aeronautical University which has satellite campuses throughout the world and offers a course in human factors accident investigation for many of its majors. In addition, the book will be incorporated into courses offered by Transportation Safety International and the Southern California Safety Institute. Finally, this book serves as an excellent reference guide for many safety professionals and investigators already in the field.
The last decade has witnessed various technological advances in life sciences, especially high throughput technologies. These technologies provide a way to perform parallel scientific studies in a very short period of time with low cost. High throughput techniques, mainly next generation sequencing, microarray and mass spectrometry, have strengthened the omics vision (the study of complete systems) in recent decades and have now resulted in well-developed branches of omics, i.e., genomics, transcriptomics, proteomics and metabolomics, which deal with almost every level of the central dogma of life. The worldwide practice of high throughput techniques, with different aims and objectives, has generated voluminous data, which require computational applications, i.e., databases, algorithms and software, to store, process and extract biological interpretation from primary raw data. Researchers from different fields are looking to analyze these raw data for different purposes, but the lack of properly documented information and knowledge creates various hurdles and challenges. This book contains thirteen chapters that deal with different computational biology/bioinformatics resources and concepts which are already in practice by the scientific community or can be utilized to handle various aspects of different classes of omics data. It includes different computational concepts, algorithms, resources and recent trends belonging to the four major branches of omics (i.e., genomics, transcriptomics, proteomics and metabolomics), including integrative omics. It will help all scholars working in any branch of computational omics and bioinformatics, as well as those who would like to perform systems-biology research through computational approaches.
Edited by Terri D. Pigott, Ann Marie Ryan, and Charles Tocci, the purpose of this volume is to present high-quality reviews that examine change to teaching practice from a variety of perspectives and a range of disciplines with an eye toward the enormous scope of the field. Taken as a whole, this volume presents a compelling profile of the core challenges and opportunities facing those engaged in the work of changing teaching practice and those who research these efforts. Divided into four sections, the first section of this volume delves into the history and policy of changing teaching practice, the second set of chapters considers the capacity of teachers to make changes, the third set of chapters reviews literature examining how to change practice in numerous settings in various ways, and the final section of the volume centers on emerging issues for practice. This volume considers some of the most critical problems facing educators and scholars today: how our history shapes our present-day possibilities, how we develop the capacity of educators to change and improve practice, which of the innumerable dimensions of teaching we should prioritize, and which emerging issues will shape this work in the coming years.
This exciting new textbook offers an accessible, business-focused overview of the key theoretical concepts underpinning modern data analytics. It provides engaging and practical advice on using the key software tools, including SAS Visual Analytics, R and DataRobot, that are used in organisations to help make effective data-driven decisions. Combining theory with hands-on practical examples, this essential text includes cutting edge coverage of new areas of interest including social media analytics, design thinking and the ethical implications of using big data. A wealth of learning features including exercises, cases, online resources and data sets help students to develop analytic problem-solving skills. With its management perspective on analytics and its coverage of a range of popular software tools, this is an ideal essential text for upper-level undergraduate, postgraduate and MBA students. It is also ideal for practitioners wanting to understand the broader organisational context of big data analysis and to engage critically with the tools and techniques of business analytics. Accompanying online resources for this title can be found at bloomsburyonlineresources.com/business-analytics. These resources are designed to support teaching and learning when using this textbook and are available at no extra cost.
The general theme of this book is to encourage the use of relevant methodology in data mining which is or could be applied to the interplay of education, statistics and computer science to solve psychometric issues and challenges in the new generation of assessments. In addition to item response data, other data collected in the process of assessment and learning will be utilized to help solve psychometric challenges and facilitate learning and other educational applications. Process data include those collected or available for collection during the assessment and instructional phases, such as response sequence data, log files, the use of help features, the content of web searches, etc. Some book chapters present the general exploration of process data in large-scale assessment. Further, other chapters address how to integrate psychometrics and learning analytics in assessments and surveys, how to use data mining techniques for security and cheating detection, and how to use assessment results to facilitate students' learning and guide teachers' instructional efforts. The book includes both theoretical and methodological presentations that might guide the future in this area, as well as illustrations of efforts to implement big data analytics that might be instructive to those in the field of learning and psychometrics. The context of the effort is diverse, including K-12, higher education, financial planning, and survey utilization. It is hoped that readers, especially those specialized in assessment, can learn from different disciplines and expand their ideas of what can be done with data analytics to inform assessment practices.
Data Analytics and Data-based Decision-making are hot topics now. Big Data has entered common parlance. Many kinds of data are generated by business, social media, machines, and more. Organizations have a choice: they can be buried under the avalanche of data, or they can do something with it to increase competitive advantage. A new field of Data Science is born, and Data Scientist has been called the sexiest job of the decade. Students across a variety of academic departments, including business, computer science, statistics, and engineering are attracted to the idea of discovering new insights and ideas from data. This short and lucid book is designed to provide students with the intuition behind this evolving area, along with a solid toolset of the major data mining techniques and platforms, all within a single semester- or quarter-long course.
A complete and comprehensive collaboration providing insight on future approaches to telephone survey methodology. Over the past fifteen years, advances in technology have transformed the field of survey methodology, from how interviews are conducted to the management and analysis of compiled data. Advances in Telephone Survey Methodology is an all-encompassing and authoritative resource that presents a theoretical, methodological, and statistical treatment of current practices while also establishing a discussion on how state-of-the-art developments in telecommunications have and will continue to revolutionize the telephone survey process. Seventy-five prominent international researchers and practitioners from government, academic, and private sectors have collaborated on this pioneering volume to discuss basic survey techniques and introduce the future directions of the telephone survey. Concepts and findings are organized in four parts (sampling and estimation, data collection, operations, and nonresponse), equipping the reader with the needed practical applications to approach issues such as choice of target population, sample design, questionnaire construction, interviewer training, and measurement error.
The book also introduces important topics that have been overlooked in previous literature, including: * The impact of mobile telephones on telephone surveys and the rising presence of mobile-only households worldwide * The design and construction of questionnaires using Computer Assisted Telephone Interviewing (CATI) software * The emerging use of wireless communication and Voice over Internet Protocol (VoIP) versus the telephone * Methods for measuring and improving interviewer performance and productivity * Privacy, confidentiality, and respondent burden as main factors in telephone survey nonresponse * Procedures for the adjustment of nonresponse in telephone surveys * In-depth reviews of the literature along with a full bibliography, assembled from references throughout the world. Advances in Telephone Survey Methodology is an indispensable reference for survey researchers and practitioners in almost any discipline involving research methods such as sociology, social psychology, survey methodology, and statistics. This book also serves as an excellent text for courses and seminars on survey methods at the undergraduate and graduate levels.
"Using Web and Paper Questionnaires for Data-Based Decision Making maintains the same strengths as Thomas's previous book: it is clearly written, easy to understand, and has plenty of examples and guides for those implementing these ideas. Designed as a cookbook, it superbly enables educators to write, administer, and analyze a survey." Sandra L. Stein, Professor of Education, Rider University. Learn to use questionnaires and data-based decision making to support school improvement! How effectively are teachers implementing the new literacy program? What do parents think of the proposed homework policy? Is bullying a growing problem? Understanding how to create appropriate questionnaires is essential in making data-based decisions that improve school policies, processes, and procedures. Using Web and Paper Questionnaires for Data-Based Decision Making is a practical handbook for creating exceptional questionnaires for a variety of purposes, including data-based decision making. Author Susan J. Thomas provides authoritative guidance for planning a survey project, creating a questionnaire, gathering data, and analyzing and communicating the results to a variety of audiences.
Offering suggestions for successfully using both Web-based and paper-based questionnaires, this practitioner-focused manual summarizes the key steps of successful survey projects and identifies critical success factors for each step. Designed primarily for principals, district-level administrators, and teachers, this invaluable resource is also suitable for policymakers, state-level administrators, and graduate students in education and social sciences.
"The authors discuss self-administered questionnaires, the content and format of the questionnaire, "user-friendly" questionnaires and response categories, and survey implementation. They offer excellent checklists for deciding whether or not to use a mail questionnaire, for constructing questions and response categories, for minimizing bias, for writing questionnaire specifications, for formatting and finalizing questionnaires, and for motivating respondents and writing cover letters."
How do you decide whether a self-administered questionnaire is appropriate for your research question? This book provides readers with an answer to this question while giving them all the basic tools needed for conducting a self-administered or mail survey. Updated to include data from the 2000 Census, the authors show how to develop questions and format a user-friendly questionnaire; pretest, pilot test, and revise questionnaires; and write advance and cover letters that help motivate and increase response rates. They describe how to track and time follow-ups to non-respondents; estimate personnel requirements; and determine the costs of a self-administered or mailed survey. They also demonstrate how to process, edit, and code questionnaires; keep records; fully document how the questionnaire was developed and administered; and show how the data collected relate to the questionnaire. New to this edition is expanded coverage of Web-based questionnaires, and of literacy and language issues.
This accessible introduction to the theory and practice of longitudinal research takes the reader through the strengths and weaknesses of this kind of research, making clear: how to design a longitudinal study; how to collect data most effectively; how to make the best use of statistical techniques; and how to interpret results. Although the book provides a broad overview of the field, the focus is always on the practical issues arising out of longitudinal research. This book supplies the student with all that they need to get started and acts as a manual for dealing with opportunities and pitfalls. It is the ideal primer for this growing area of social research.
You may like...
International Symposium on Mathematics… by Tsuyoshi Takagi, Masato Wakayama, … (Hardcover, R1,671)
Evolutionary Multi-Agent Systems - From… by Aleksander Byrski, Marek Kisiel-Dorohinicki (Hardcover, R4,556)
Evolutionary Algorithms, Swarm Dynamics… by Ivan Zelinka, Guanrong Chen (Hardcover)
Research Software Engineering with… by Damien Irving, Kate Hertweck, … (Paperback, R1,874)
New Approaches to Circle Packing in a… by Peter Gabor Szabo, Mihaly Csaba Markot, … (Hardcover, R3,018)
Quantum Random Number Generation… by Christian Kollmitzer, Stefan Schauer, … (Hardcover, R3,890)
Geometry, Algebra and Applications: From… by Marco Castrillon Lopez, Luis Hernandez Encinas, … (Hardcover)
Statistical Applications from Clinical… by Jianchang Lin, Bushi Wang, … (Hardcover, R6,400)