Published research resubmitted under fictitious, less prestigious authors is rejected

I wasn’t aware of this 1982 article[1. Douglas P. Peters and Stephen J. Ceci (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5, pp. 187–195. doi:10.1017/S0140525X00011183] that demonstrates the bias against authors from less prestigious institutions in the academic publishing process. The abstract below says it all, but some notes:

  • the more prestigious an author’s institution, the more leniently you should assume the review process treated the paper
  • don’t discount unpublished working papers from faculty outside the US: they may be very good and remain unpublished largely because of these biases
  • nowadays the peer-review process is never truly “blind”: the first thing a referee does on receiving a paper is to Google its title and find the authors (who typically post it online early to establish precedence)

Abstract:

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

