Thursday, April 30, 2015

Peer-Review Problems, add sexism to the list

Fiona Ingleby recently submitted a manuscript to a PLoS journal and received a single terrible, sexist review rejecting it (interestingly, the manuscript investigated the progression of PhD students to postdocs in biology and found evidence of gender bias in the data). She and her co-author appealed three weeks ago; after receiving no response from the journal, she posted excerpts from the review to Twitter, unleashing a storm of outrage at the reviewer and PLoS. I'll let everyone else comment on the problems of sexism this raises (here, here, here, and here) and focus on the problems with the current peer-review system that this situation highlights.

Objectivity
One of the first problems was that the reviewer was not blind to the authors, which allowed them to bring their sexist attitudes to the review. Even more inappropriately, the reviewer took the time to research the authors and made other comments about them, including that they were too young and inexperienced. Introducing double-blind review may help prevent, or at least slow down, such easy efforts to undermine particular researchers. In practice, however, especially when publishing in specialized journals and when you are firmly established in your research line, it is fairly easy to figure out who the senior author on a manuscript is, blinded or not. Generally, more than one person serves as a gatekeeper to publication: at least two reviewers, sometimes three, and occasionally more. Having multiple reviewers brings different perspectives to bear and can improve a paper's reach and impact. It can also help editors, since consensus among reviewers can signal a potentially high-impact publication. More pragmatically, multiple reviewers help the editor screen out bad reviews. Two reviewers with completely opposite opinions may indicate that the paper needs clarification, that one reviewer is trying to tank the paper (oh no, scientists are people with emotions and egos too!), or that one reviewer is not qualified to review it (oh no, sometimes reviews are passed off to under-qualified trainees). This issue of objectivity and double-blind review leads into the next point: pre-registration.

Pre-registration
The review contained no critiques that could be used to improve the manuscript. The closest it came to actually commenting on the science was calling the manuscript “methodologically weak,” with “fundamental flaws and weaknesses that cannot be adequately addressed by mere revision of the manuscript, however extensive." Any time a review goes after the methodology, it irks me a little. If we think back to the origin of any particular study, it has to pass at least one review, and often more, before the research can even be carried out. Every study gets a once-over from an IRB or similar body to make sure it clears some bar of ethics, which usually includes checking that it is being carried out with good science. Next, many if not most studies are carried out with funding. Funding success rates at the federal level hover around 10%, and are only slightly better in other realms, meaning that any funded research has already beaten roughly nine other solid proposals on the promise of its implications and the soundness of its plan for achieving them. So it always seems odd that it's not until after the study is complete that apparent methodological flaws make it unworthy of publishing, as if every published paper were perfect in its formulation and execution. Pre-registration, in which a study's methodology is submitted for review and, if accepted, the study is published regardless of the results, lets us avoid these issues and provides early feedback in the publishing process.

Slow
Three weeks! Sitting for three weeks on an appeal that justifiably points out that the review did not actually comment on the science of the article and was sexist seems like a poor move by PLoS, and the outrage from the science community is now reiterating that sentiment. The circumstances of this review raise so many questions: what was the editor thinking sending that review out? Why was there only one reviewer? Why would you appeal after a terrible review process rather than go somewhere else? Post-hoc review is slow, with quick decisions taking on the order of a month and slow decisions taking more than a year (not including shopping a manuscript to multiple journals or going down the prestige ladder). Before even submitting a manuscript for review, researchers face questions that often pit time to publication against ease of publication. Some researchers always start at the top of the prestige ladder and work down, because the turnaround (i.e., rejection) can be fast, and if something hits, it's worth the time. Others would rather publish fast and choose specialized journals with reputations for turning manuscripts around quickly.

Could we improve the quality and speed of peer review by giving more credit or reward for doing it (some people think this option removes objectivity)? Perhaps peer review could improve if we shifted how we approach it: instead of thinking of ourselves as gatekeepers of the sanctity of science, we could see ourselves as collaborators trying to improve and disseminate science, messy though it may be.
