Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.

The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factor" are reigning buzzwords. Governments and research administrators want to evaluate everything, from teachers and professors to training programs and universities, using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics, the aggregate data on publications and citations, has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review, which has been used since scientific papers were first published in the seventeenth century.

In this book, Yves Gingras offers a spirited argument against unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity and rarely measure what they purport to measure. Although the study of publication and citation patterns, at the proper scales, can yield insights into the global dynamics of science over time, ill-defined quantitative indicators often have perverse and unintended effects on the direction of research. Moreover, bibliometrics is abused when data are manipulated to boost rankings. Gingras examines the politics of evaluation and argues that relying on numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras asks why universities are so eager to let invalid indicators shape their research strategy.
How the increasing reliance on metrics to evaluate scholarly publications has produced new forms of academic fraud and misconduct.

The traditional academic imperative to "publish or perish" is increasingly coupled with the newer necessity of "impact or perish": the requirement that a publication have "impact," as measured by a variety of metrics, including citations, views, and downloads. Gaming the Metrics examines how the increasing reliance on metrics to evaluate scholarly publications has produced radically new forms of academic fraud and misconduct. The contributors show that the metrics-based "audit culture" has changed the ecology of research, fostering the gaming and manipulation of quantitative indicators and leading to such novel forms of misconduct as citation rings and variously rigged peer reviews. The chapters, written both by scholars and by those in the trenches of academic publication, provide a map of academic fraud and misconduct today. They consider such topics as the shortcomings of metrics, the gaming of impact factors, the emergence of so-called predatory journals, the "salami slicing" of scientific findings, the rigging of global university rankings, and the creation of new watchdogs and forensic practices.