Data analysis
Open government data (OGD) has developed rapidly in recent years thanks to the benefits that transparency and public access can deliver. However, researchers emphasize lack of use, rather than lack of disclosure, as the key problem in OGD's present development. Previous studies have approached this issue either from the supply side, focusing on data quantity and quality, or from the demand side, focusing on factors that affect users' acceptance of OGD, but they seldom consider both sides at the same time. This unique study compares the supply and demand sides of OGD and, based on the mismatches discovered between the two, explores possible directions for the future development of OGD portals. The authors aim to improve OGD utilization by balancing the supply side against citizens' demands through OGD portals. Based on the concept of an OGD ecosystem, four connected studies are presented. The first builds an evaluation framework for understanding the development of the OGD supply side. The second reports a survey analyzing awareness and utilization of OGD portals among citizens, the primary users and major beneficiaries of OGD on the demand side. The third compares the supply and demand sides using Diffusion of Innovation theory. The final study tests the proposed usability criteria for building an OGD portal through a between-subjects experiment involving a virtual agent. Each case study examines a unique aspect of OGD in China and offers reflections on future directions for developing OGD. Providing an enhanced theoretical and practical understanding of OGD and its usage, and proposing directions for the future development of OGD portals that encourage citizens' OGD utilization, this is a must-read for researchers and policymakers examining the impact and possibilities of OGD.
Mathematicians have skills that, if deepened in the right ways, would enable them to use data to answer questions important to them and others, and to report those answers in compelling ways. Data science combines parts of mathematics, statistics, and computer science. Gaining such power, and the ability to teach it, has reinvigorated the careers of many mathematicians. This handbook will help mathematicians better understand the opportunities presented by data science, a fast-growing field, as it applies to the curriculum, research, and careers. Contributors from both academia and industry present their views on these opportunities and how to take advantage of them.
Data is constantly increasing and data analysts are in higher demand than ever. This book is an essential guide to the role of data analyst. Aspiring data analysts will discover what data analysts do all day, what skills they will need for the role, and what regulations they will be required to adhere to. Practising data analysts can explore useful data analysis tools, methods and techniques, brush up on best practices and look at how they can advance their career.
"Data Analysis and Visualization in Genomics and Proteomics" is the
first book addressing integrative data analysis and visualization
in this field. It addresses important techniques for the
interpretation of data originating from multiple sources, encoded
in different formats or protocols, and processed by multiple
systems. One of the first systematic overviews of the problem of
biological data integration using computational approachesThis book
provides scientists and students with the basis for the development
and application of integrative computational methods to analyse
biological data on a systemic scalePlaces emphasis on the
processing of multiple data and knowledge resources, and the
combination of different models and systems
Critical Theory and Qualitative Data Analysis in Education offers a path-breaking explanation of how critical theories can be used within the analysis of qualitative data to inform research processes, such as data collection, analysis, and interpretation. This contributed volume offers examples of qualitative data analysis techniques and exemplars of empirical studies that employ critical theory concepts in data analysis. By creating a clear and accessible bridge between data analysis and critical social theories, this book helps scholars and researchers effectively translate their research designs and findings to multiple audiences for more equitable outcomes and disruption of historical and contemporary inequality.
As the analysis of big datasets in sports performance becomes a more entrenched part of the sporting landscape, so the value of sport scientists and analysts with formal training in data analytics grows. Sports Analytics: Analysis, Visualisation and Decision Making in Sports Performance provides the most authoritative and comprehensive guide available to the use of analytics in sport and its application in sports performance, coaching, talent identification and sports medicine. Employing an approach-based structure and integrating problem-based learning throughout the text, the book clearly defines the difference between analytics and analysis and goes on to explain and illustrate methods including: interactive visualisation; simulation and modelling; geospatial data analysis; spatiotemporal analysis; machine learning; genomic data analysis; and social network analysis. With a mixed-methods case study chapter, no other book offers the same level of scientific grounding or practical application in sports data analytics. Sports Analytics is essential reading for all students of sports analytics, and useful supplementary reading for students and professionals in talent identification and development, sports performance analysis, sports medicine and applied computer science.
A comprehensive compilation of new developments in data linkage methodology. The increasing availability of large administrative databases has led to a dramatic rise in the use of data linkage, yet the standard texts on linkage are still those which describe the seminal work of the 1950s and 1960s, with some updates. Linkage and analysis of data across sources remain problematic due to the lack of discriminatory and accurate identifiers, missing data and regulatory issues. Recent developments in data linkage methodology have concentrated on bias and analysis of linked data, novel approaches to organising relationships between databases, and privacy-preserving linkage. Methodological Developments in Data Linkage brings together a collection of contributions from members of the international data linkage community, covering cutting-edge methodology in this field. It presents the opportunities and challenges provided by linkage of large and often complex datasets, including analysis problems, legal and security aspects, models for data access and the development of novel research areas. New methods for handling uncertainty in analysis of linked data, solutions for anonymised linkage and alternative models for data collection are also discussed. Key features: presents cutting-edge methods for a topic of increasing importance to a wide range of research areas, with applications to data linkage systems internationally; covers the essential issues associated with data linkage today; includes examples based on real data linkage systems, highlighting the opportunities, successes and challenges that the increasing availability of linkage data provides; and takes a novel approach incorporating technical aspects of linkage, management and analysis of linked data. This book will be of core interest to academics, government employees, data holders, data managers, analysts and statisticians who use administrative data. It will also appeal to researchers in a variety of areas, including epidemiology, biostatistics, social statistics, informatics, policy and public health.
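For readers new to the area, the classical Fellegi-Sunter model that underlies much probabilistic record linkage (a standard result, not a formula quoted from this volume) scores each candidate record pair by summing field-level agreement weights:

\[ w_i = \begin{cases} \log_2 \dfrac{m_i}{u_i} & \text{if field } i \text{ agrees,} \\[1ex] \log_2 \dfrac{1 - m_i}{1 - u_i} & \text{if field } i \text{ disagrees,} \end{cases} \qquad W = \sum_i w_i, \]

where \(m_i\) is the probability that field \(i\) agrees given the pair is a true match and \(u_i\) the probability it agrees given a non-match. Pairs whose total weight \(W\) exceeds an upper threshold are declared links, those below a lower threshold non-links, with the region in between typically sent to clerical review.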
Classification and regression trees (CART) is one of several contemporary statistical techniques with good promise for research in many academic fields. There are very few books on CART, especially on applied CART. This book, a practical primer with a focus on applications, introduces the relatively new statistical technique of CART as a powerful analytical tool. The easy-to-understand, non-technical language, the illustrative graphs and tables, and the use of the popular statistical software program SPSS will appeal to readers without a strong statistical background. The book helps readers understand the foundation, the operation, and the interpretation of CART analysis, making them knowledgeable consumers and skillful users of CART. A chapter on advanced CART procedures not yet well discussed in the literature allows readers to extend the analytical power of CART to a whole new level in their research designs. This highly practical book is specifically written for academic researchers, data analysts, and graduate students in disciplines such as economics, the social sciences, medical sciences, and sport sciences who do not have a strong statistical background but still strive to take full advantage of CART as a powerful analytical tool for research in their fields.
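The book itself works in SPSS; as a rough sketch of the same recursive-partitioning idea in open-source tooling (the dataset and parameter choices below are illustrative assumptions, not drawn from the book), a CART classifier can be fitted in a few lines of Python with scikit-learn:

```python
# Minimal CART sketch using scikit-learn (illustrative; the book uses SPSS).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# CART grows a binary tree by recursive partitioning; max_depth and
# min_samples_leaf are typical controls against overfitting.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, min_samples_leaf=5)
tree.fit(X_train, y_train)

print(export_text(tree))                        # human-readable splitting rules
print("test accuracy:", tree.score(X_test, y_test))
```

The printed rules are the main practical appeal of CART: each leaf corresponds to a plain-language if-then path that non-statisticians can interpret directly.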
This book is concerned with data in which the observations are independent and in which the response is multivariate. Anthony Atkinson has been Professor of Statistics at the London School of Economics since 1989. Before that he was a Professor at Imperial College, London. He is the author of Plots, Transformations, and Regression, co-author of Optimum Experimental Designs, and joint editor of The Fascination of Statistics, a volume celebrating the centenary of the International Statistical Institute. Professor Atkinson has served as editor of The Journal of the Royal Statistical Society, Series B and as associate editor of Biometrika and Technometrics. He has published well over 100 articles in these and other journals, including The Annals of Statistics, Biometrics, The Journal of the American Statistical Association, and Statistics and Computing. Marco Riani, after receiving his Ph.D. in Statistics in 1995 from the University of Florence, joined the Faculty of Economics at Parma University as a postdoctoral fellow. In 1997 he won the prize for the best Italian Ph.D. thesis in Statistics. He is currently Associate Professor of Statistics at the University of Parma. He has published in Technometrics, The Journal of Computational and Graphical Statistics, The Journal of Business and Economic Statistics, The Journal of Forecasting, Environmetrics, Computational Statistics and Data Analysis, Metron, and other journals.
From the reviews:
"This book discusses forward search (FS), a method using graphs to explore and model continuous multivariate data ... Its viewpoint is toward applications, and it demonstrates the merits of FS using a variety of examples, with a thorough discussion of statistical issues and interpretation of results ... The book requires knowledge of multivariate statistical methods, because it provides only basic background information on the methods considered (although with excellent references for further reading at the end of each chapter). Each chapter also includes exercises with solutions ... This book could serve as an excellent text for an advanced course on modern multivariate statistics, as it is intended." (Tena Ipsilantis Katsaounis, Technometrics, Vol. 46 (4), November 2004)
"This book is an interesting complement to various textbooks on multivariate statistics." (Biometrics, December 2005)
"This book discusses multivariate data from a different perspective ... it is an excellent book for researchers with interests in multivariate data and cluster analysis. It may also be a good reference for students of advanced statistics and practitioners working with large volumes of data ..." (Kassim S. Mwitondi, Journal of Applied Statistics, Vol. 32 (4), 2005)
"This is a companion to an earlier book ... both of which feature many informative graphs. Here, the forward search has been applied in detail to classical multivariate approaches used with Gaussian data ... One valuable feature of the book is the way that the illustrations concentrate on a relatively small number ... This makes it easy to concentrate on the application ... The implications of this book also strengthen the importance of data visualization, as well as providing a valuable approach to visualization." (Paul Hewson, Journal of the Royal Statistical Society Series A, Vol. 168 (2), 2005)
"This book is a companion to Atkinson ... The objective is to identify outliers, appreciate their influence ... which would result in an overall improvement ... Graphical tools are widely used, resulting in three hundred and ninety figures. Each chapter is followed by extensive exercises and their solutions, and the book could be used as an advanced textbook for multivariate analysis courses. Websites provide the relevant software ... This book is full of interest for anyone undertaking multivariate analyses, clearly emphasizing that uncritical use of standard methods can be misleading." (B.J.T. Morgan, Short Book Reviews of the International Statistical Institute, Vol. 24 (3), December 2004)
"The theoretical exercises with detailed solutions at the end of each chapter are extremely useful. I would recommend this book to practitioners who analyze moderately sized multivariate data. Of course, anyone associated with the application of statistics should find the book interesting to read." (Tathagata Banerjee, Journal of the American Statistical Association, March 2006)
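The forward search described in these reviews can be sketched in a few lines. The toy Python implementation below is an illustration under simplifying assumptions, not the authors' software: it starts from a crude initial subset, refits a mean and covariance at each step, and monitors the minimum Mahalanobis distance of units outside the subset, which jumps when outliers are about to join.

```python
import numpy as np

def forward_search(X, m0=10):
    """Toy forward search: monitor the minimum Mahalanobis distance of the
    units outside the fitting subset as the subset grows from m0 to n."""
    n, p = X.shape
    # Crude initial subset: the m0 units closest to the coordinate-wise median
    # (the book uses properly robust starting subsets; this is a stand-in).
    d0 = np.linalg.norm(X - np.median(X, axis=0), axis=1)
    subset = np.argsort(d0)[:m0]
    trace = []
    for m in range(m0, n):
        mu = X[subset].mean(axis=0)
        S_inv = np.linalg.pinv(np.cov(X[subset], rowvar=False))
        diff = X - mu
        d2 = np.einsum("ij,jk,ik->i", diff, S_inv, diff)  # squared Mahalanobis
        order = np.argsort(d2)
        in_subset = set(subset)
        d_min = min(np.sqrt(d2[i]) for i in order if i not in in_subset)
        trace.append((m, d_min))   # a jump in this trace signals outliers joining
        subset = order[:m + 1]     # forward step: refit on the m+1 closest units
    return trace

# Example: 95 Gaussian points plus 5 shifted outliers
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 3)), rng.normal(6, 1, (5, 3))])
for m, d in forward_search(X)[-8:]:
    print(m, round(d, 2))          # the distance jumps as the outliers enter
```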
Optimization techniques are at the core of data science, including data analysis and machine learning. An understanding of basic optimization techniques and their fundamental properties provides important grounding for students, researchers, and practitioners in these areas. This text covers the fundamentals of optimization algorithms in a compact, self-contained way, focusing on the techniques most relevant to data science. An introductory chapter demonstrates that many standard problems in data science can be formulated as optimization problems. Next, many fundamental methods in optimization are described and analyzed, including: gradient and accelerated gradient methods for unconstrained optimization of smooth (especially convex) functions; the stochastic gradient method, a workhorse algorithm in machine learning; the coordinate descent approach; several key algorithms for constrained optimization problems; algorithms for minimizing nonsmooth functions arising in data science; foundations of the analysis of nonsmooth functions and optimization duality; and the back-propagation approach, relevant to neural networks.
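To make the opening claim concrete, here is a minimal, hedged illustration (the data and step-size choice are assumptions for the example, not taken from the text): least-squares regression posed as a smooth convex optimization problem and solved with plain gradient descent.

```python
import numpy as np

# Least squares, f(w) = ||Xw - y||^2 / (2n), is a canonical smooth convex
# data-science objective; gradient descent iterates w <- w - alpha * grad f(w).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

n = len(y)
alpha = n / np.linalg.norm(X, 2) ** 2   # step 1/L, with L = ||X||_2^2 / n
w = np.zeros(5)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n        # grad f(w) = X^T (Xw - y) / n
    w -= alpha * grad

print(np.round(w, 2))                   # recovers w_true up to noise
```

The stochastic gradient method mentioned in the blurb replaces the full gradient above with the gradient on a random mini-batch, which is why it scales to the large datasets typical of machine learning.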
A century of education and education reform, along with more than three decades of high-stakes testing and accountability, reveals a disturbing paradox: education has a steadfast commitment to testing and grading. This commitment persists despite ample research, theory, and philosophy revealing the corrosive consequences of both testing and grading in an education system designed to support human agency and democratic principles. This revised edited volume brings together a collection of updated and new essays that confronts the failure of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of tests and grades on social justice, race, class, and gender; and the role that they play in perpetuating a deficit perspective of children. The chapters fall under two broad sections. Part I, Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education, includes essays on the historical, theoretical, and philosophical arguments against testing and grading. Part II, De-Grading and De-Testing in a Time of High-Stakes Education Reform, presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
What happens to risk as the economic horizon goes to zero and risk is seen as an exposure to a change in state that may occur instantaneously at any time? All activities that have been undertaken statically at a fixed finite horizon can now be reconsidered dynamically at a zero time horizon, with arrival rates at the core of the modeling. This book, aimed at practitioners and researchers in financial risk, delivers the theoretical framework and various applications of the newly established dynamic conic finance theory. The result is a nonlinear non-Gaussian valuation framework for risk management in finance. Risk-free assets disappear and low risk portfolios must pay for their risk reduction with negative expected returns. Hedges may be constructed to enhance value by exploiting risk interactions. Dynamic trading mechanisms are synthesized by machine learning algorithms. Optimal exposures are designed for option positioning simultaneously across all strikes and maturities.
Petty trade helped vast numbers of people to survive the crisis faced by post-Soviet Russia. The book analyses how this survival technique was carried out in practice. On the basis of his fieldwork research, the author shows how people coped with rapid social change and places their activities within a context of government policies, migration flows and entrepreneurial strategies. "This is an original work based on extensive fieldwork research. Wielecki skillfully intertwined "ethnographic meat" with "the bones of theory", which has resulted in a "flesh-and-blood" anthropology." Michal Buchowski "This is an immensely insightful exploration of petty trade in post-Soviet Russia. The author laces his genuine ethnographic work in a coherent account of the concepts of uncertainty, embeddedness, and informal economy." Violetta Zentai
Since the early days of performance assessment, human ratings have been subject to various forms of error and bias. Expert raters often come up with different ratings for the very same performance and it seems that assessment outcomes largely depend upon which raters happen to assign the rating. This book provides an introduction to many-facet Rasch measurement (MFRM), a psychometric approach that establishes a coherent framework for drawing reliable, valid, and fair inferences from rater-mediated assessments, thus answering the problem of fallible human ratings. Revised and updated throughout, the Second Edition includes a stronger focus on the Facets computer program, emphasizing the pivotal role that MFRM plays for validating the interpretations and uses of assessment outcomes.
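For orientation, the many-facet Rasch model in its common rating-scale form (standard MFRM notation, not a formula quoted from this book) decomposes the log-odds of adjacent rating categories additively across facets:

\[ \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \beta_i - \alpha_j - \tau_k, \]

where \(\theta_n\) is the ability of examinee \(n\), \(\beta_i\) the difficulty of task \(i\), \(\alpha_j\) the severity of rater \(j\), and \(\tau_k\) the threshold of category \(k\) relative to category \(k-1\). Estimating \(\alpha_j\) on the same scale as abilities is what lets MFRM adjust scores for the fallible human ratings the blurb describes.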
Praise for Envisioning the Survey Interview of the Future
"This book is an excellent introduction to some brave new technologies . . . and their possible impacts on the way surveys might be conducted. Anyone interested in the future of survey methodology should read this book." (Norman M. Bradburn, PhD, National Opinion Research Center, University of Chicago)
"Envisioning the Survey Interview of the Future gathers some of the brightest minds in alternative methods of gathering self-report data, with an eye toward the future self-report sample survey. Conrad and Schober, by assembling a group of talented survey researchers and creative inventors of new software-based tools to gather information from human subjects, have created a volume of importance to all interested in imagining future ways of interviewing." (Robert M. Groves, PhD, Survey Research Center, University of Michigan)
This collaboration provides extensive insight into the impact of communication technology on survey research. As previously unimaginable communication technologies rapidly become commonplace, survey researchers are presented with both opportunities and obstacles when collecting and interpreting data based on human response. Envisioning the Survey Interview of the Future explores the increasing influence of emerging technologies on the data collection process and, in particular, on self-report data collection in interviews, providing the key principles for using these new modes of communication. With contributions written by leading researchers in the fields of survey methodology and communication technology, this compilation integrates the use of modern technological developments with established social science theory. The book familiarizes readers with these new modes of communication by discussing the challenges to accuracy, legitimacy, and confidentiality that researchers must anticipate while collecting data, and it also provides tools for adopting new technologies in order to obtain high-quality results with minimal error or bias. Envisioning the Survey Interview of the Future addresses questions that researchers in survey methodology and communication technology must consider, such as: How and when should new communication technology be adopted in the interview process? What are the principles that extend beyond particular technologies? Why do respondents answer questions from a computer differently than questions from a human interviewer? How can systems adapt to respondents' thinking and feeling? What new ethical concerns about privacy and confidentiality are raised by using new communication technologies? With its multidisciplinary approach, extensive discussion of existing and future technologies, and practical guidelines for adopting new technology, Envisioning the Survey Interview of the Future is an essential resource for survey methodologists, questionnaire designers, and communication technologists in any field that conducts survey research. It also serves as an excellent supplement for courses in research methods at the upper-undergraduate or graduate level.
This book introduces new and advanced methods of model discovery for time-series data using artificial intelligence. It implements topological approaches to distill "machine-intuitive" models from complex dynamics data, introduces a new paradigm for a parsimonious model of a dynamical system without resorting to differential equations, and heralds a new era in data-driven science and engineering based on the operational concept of "computational intuition".
Making sense of sports performance data can be a challenging task but is nevertheless an essential part of performance analysis investigations. Focusing on techniques used in the analysis of sport performance, this book introduces the fundamental principles of data analysis, explores the most important tools used in data analysis, and offers guidance on the presentation of results. The book covers key topics such as:
The book includes worked examples from real sport, offering clear guidance to the reader and bringing the subject to life. This book is invaluable reading for any student, researcher or analyst working in sport performance or undertaking a sport-related research project or methods course.
This textbook bypasses the need for advanced mathematics by providing in-text computer code, allowing students to explore Bayesian data analysis without the calculus background normally considered a prerequisite for this material. Now, students can use the best methods without needing advanced mathematical techniques. This approach goes beyond "frequentist" concepts of p-values and null hypothesis testing, using the full power of modern probability theory to solve real-world problems. The book offers a fully self-contained course, which demonstrates analysis techniques throughout with worked examples crafted specifically for students in the behavioral and neural sciences. The book presents two general algorithms that help students solve the measurement and model selection (also called "hypothesis testing") problems most frequently encountered in real-world applications.
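In the same computational spirit, a posterior can be obtained on a grid with no calculus at all. The sketch below is illustrative and not the book's own code; the data values are invented for the example.

```python
import numpy as np

# Posterior for a hit rate p from k hits in n trials, computed on a grid:
# posterior ∝ likelihood × prior, normalized numerically instead of analytically.
k, n = 14, 20                       # illustrative data: 14 correct out of 20
p = np.linspace(0, 1, 1001)         # grid over the parameter
prior = np.ones_like(p)             # flat prior
like = p ** k * (1 - p) ** (n - k)  # binomial likelihood (constants cancel)
post = prior * like
post /= post.sum() * (p[1] - p[0])  # normalize to a density

mean = np.sum(p * post) * (p[1] - p[0])
print("posterior mean:", round(mean, 3))  # ≈ (k+1)/(n+2) = 0.682 under the flat prior
```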
This book has won the CHOICE Outstanding Academic Title award 2014. A century of education and education reform along with the last three decades of high-stakes testing and accountability reveals a disturbing paradox: Education has a steadfast commitment to testing and grading despite decades of research, theory, and philosophy that reveal the corrosive consequences of both testing and grading within an education system designed to support human agency and democratic principles. This edited volume brings together a collection of essays that confronts the failure of testing and grading and then offers practical and detailed examinations of implementing at the macro and micro levels of education teaching and learning free of the weight of testing and grading. The book explores the historical failure of testing and grading; the theoretical and philosophical arguments against testing and grading; the negative influence of testing and grading on social justice, race, class, and gender; and the role of testing and grading in perpetuating a deficit perspective of children, learning, race, and class. The chapters fall under two broad sections: Part I: "Degrading Learning, Detesting Education: The Failure of High-Stake Accountability in Education" includes essays on the historical, theoretical, and philosophical arguments against testing and grading; Part II: "De-Grading and De-Testing in a Time of High-Stakes Education Reform" presents practical experiments in de-testing and de-grading classrooms for authentic learning experiences.
High-dimensional probability offers insight into the behavior of random vectors, random matrices, random subspaces, and objects used to quantify uncertainty in high dimensions. Drawing on ideas from probability, analysis, and geometry, it lends itself to applications in mathematics, statistics, theoretical computer science, signal processing, optimization, and more. This book is the first to integrate theory, key tools, and modern applications of high-dimensional probability. Concentration inequalities form its core, and it covers both classical results, such as Hoeffding's and Chernoff's inequalities, and modern developments, such as the matrix Bernstein inequality. It then introduces powerful methods based on stochastic processes, including tools such as Slepian's, Sudakov's, and Dudley's inequalities, as well as generic chaining and bounds based on VC dimension. A broad range of illustrations is embedded throughout, including classical and modern results for covariance estimation, clustering, networks, semidefinite programming, coding, dimension reduction, matrix completion, machine learning, compressed sensing, and sparse regression.
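As a concrete instance of the classical results mentioned, Hoeffding's inequality (quoted here in its standard form, not from the book's own statement): for independent random variables \(X_1, \dots, X_N\) with \(X_i \in [a_i, b_i]\) and any \(t > 0\),

\[ \mathbb{P}\!\left( \left| \sum_{i=1}^{N} \big(X_i - \mathbb{E}X_i\big) \right| \ge t \right) \le 2 \exp\!\left( - \frac{2t^2}{\sum_{i=1}^{N} (b_i - a_i)^2} \right). \]

The sum of bounded variables concentrates around its mean with Gaussian-like tails, and this dimension-free flavor of bound is the template for the matrix and process-level results the book develops.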
This book comprises three studies on minority shareholder monitoring in Germany, where mandatory disclosure requirements have increased transparency. The first essay presents an analysis of the publicly available information, regardless of the size of the target corporation. The second essay, an event study, pays special attention to the German supervisory board and its appointment for a fixed term, finding that capital markets perceive an activist effort as more credible under certain circumstances. Taken as a whole, the study provides empirical evidence of increased minority shareholder activity in Germany. The evidence presented supports the strong shareholder rights perspective and conflicts with the weak shareholder rights view brought forward in the international literature.
An interdisciplinary look at interaction in the standardized survey interview This volume presents a theoretical and empirical inquiry into the interaction between interviewers and respondents in standardized research interviews. The editors include a range of articles that showcase the perspectives of conversation analysts, ethnomethodologists, and survey methodologists, to gain a more complete picture of interaction in the standardized survey interview than was previously available. This book is the first to focus solely on the interactional substrate or conversational architecture of interviewing. It offers a range of insights into standardized interviewing as interaction and forms a bridge between survey methodology and the study of interaction and tacit practices. The articles are arranged into four subject groups: theoretical orientations, survey recruitment, interaction during the substantive interview, and interaction and survey data quality. Articles include:
Standardization and Tacit Knowledge serves as a one-of-a-kind reference for survey methodologists, linguists, and researchers and also as a postgraduate coursebook in survey interviewing.
Survey data are used in many disciplines, including the social sciences, economics and psychology. Interviewers' behaviour can affect the quality of such data. This book presents the results of new research on interviewers' motivation and behaviour. A substantial number of contributions address deviant behaviour, methods for assessing the impact of such behaviour on data quality, and tools for detecting faked interviews. Further chapters discuss methods for preventing undesirable interviewer effects. Beyond their specific methodological contributions, the chapters of the book also provide a unique collection of examples of deviant behaviour and its detection - a topic not overly present in the literature despite its substantial prevalence in survey fieldwork. The volume includes 13 peer-reviewed papers presented at an international workshop in Rauischholzhausen in October 2011.
Identifying factors which stimulate regional growth and international competitiveness and using them for forecasting are the aims of this book. Departing from the theory of comparative advantages and their impact, the author demonstrates that such an approach has to be based on a sound theoretical foundation and on appropriate, advanced econometric methods. He proposes the use of heuristic optimization techniques, Monte Carlo simulation experiments and Lasso-type estimators to avoid bias or misleading findings, which might be the result of applying standard regression methods when key assumptions are not satisfied. In addition, the author demonstrates how some heuristic optimization-based methods can be used to obtain forecasts of industrial production in Russia and Germany founded on past observations and some leading indicators.
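As a minimal illustration of a Lasso-type estimator of the kind the author advocates (the data, penalty level, and variable names below are invented for the example, not drawn from the author's Russian and German applications):

```python
import numpy as np
from sklearn.linear_model import Lasso

# The Lasso adds an L1 penalty to least squares, shrinking weak regressors to
# exactly zero -- useful when many candidate leading indicators are available.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 30))                     # 30 candidate indicators
beta = np.zeros(30); beta[:3] = [2.0, -1.5, 1.0]   # only 3 truly matter
y = X @ beta + 0.5 * rng.normal(size=120)

model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("indicators kept:", selected)                # typically the first three
```

This sparsity is the point of the approach: when the number of candidate indicators rivals the number of observations, standard regression overfits, while the L1 penalty performs variable selection automatically.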
" The Data Quality Assessment Framework "shows you how to measure and monitor data quality, ensuring quality over time. You ll start with general concepts of measurement and work your way through a detailed framework of more than three dozen measurement types related to five objective dimensions of quality: completeness, timeliness, consistency, validity, and integrity. Ongoing measurement, rather than one time activities will help your organization reach a new level of data quality. This plain-language approach to measuring data can be understood by both business and IT and provides practical guidance on how to apply the DQAF within any organization enabling you to prioritize measurements and effectively report on results. Strategies for using data measurement to govern and improve the quality of data and guidelines for applying the framework within a data asset are included. You ll come away able to prioritize which measurement types to implement, knowing where to place them in a data flow and how frequently to measure. Common conceptual models for defining and storing of data quality results for purposes of trend analysis are also included as well as generic business requirements for ongoing measuring and monitoring including calculations and comparisons that make the measurements meaningful and help understand trends and detect anomalies.