The Peer Review Process for Scientific Publications: Trouble in Paradise?

In February 2014, the journal Nature published a report stating that more than 120 peer-reviewed journal articles published between 2008 and 2013 had been retracted after scientists discovered they were actually computer-generated. Amazingly, this was not the first such instance: Nature reported in 2005 that a major computer science conference had been fooled into accepting a computer-generated paper. These articles supposedly went through the peer review process, so what happened?

Here’s how the process is supposed to work:

  1. When a researcher has finished a series of experiments and feels he has enough data to report to the general scientific community, he prepares a manuscript.
  2. He then submits the manuscript to the scientific journal in which he would like it published.
  3. The journal's editor typically reviews the article and, if he deems it worthy of consideration, sends it to selected scientists in the field for review and comment.
  4. In effect, the article is judged by a jury of the author's peers, and the reviewers almost always remain anonymous to the author and to one another. The reviewers may come back with suggestions or corrections, or recommend outright that the paper be published or rejected.
  5. Once this process is complete, the editor informs the author whether the article has been accepted or rejected for publication.

Ideally, this process helps the author see flaws in his work and offers him suggestions and corrections. It also gives editors the scrutiny they need: a fresh pair of expert eyes reviewing and commenting on each submitted manuscript.

How did these fraudulent papers skirt the peer review process?

At first I was indignant reading about these fraudulent papers, wondering how they got through the peer review process. On reflection, I realized that fabricating a manuscript, even without the help of a computer, is relatively easy. Any seasoned scientist can invent experiments and fake believable results without ever setting foot in the lab. The peer review process is not designed to catch this type of fraud and cannot do so: it would be impossible for any editor or reviewer to expose it without seeing the raw data and observing the actual lab environment.

I have had the opportunity to review papers submitted for peer review, and time and time again I've noticed that reviewers fail to pick up errors in experimental design and data analysis. I've pointed out many troubling errors: too few replicate experiments, small sample sizes, the use of uncalibrated equipment to analyze samples, and inadequate or faulty statistical analysis. Yet far too often, few if any of the other scientists tasked with reviewing these manuscripts have flagged them.
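To make the "small sample size" complaint concrete, here is a minimal sketch, using Python's statsmodels package, of the kind of sanity check a reviewer could run: a power analysis asking whether a reported sample size could plausibly detect the claimed effect. The numbers (n = 5 per group, Cohen's d = 0.5) are illustrative assumptions of mine, not figures from any particular paper.

```python
# A minimal sketch of a reviewer's power-analysis sanity check.
# The effect size and sample size below are illustrative assumptions,
# not values from any actual manuscript.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()  # power analysis for a two-sample t-test

# Suppose a paper reports n = 5 subjects per group and a moderate
# effect (Cohen's d = 0.5). What power did the study actually have?
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=5, alpha=0.05)
print(f"Power with n = 5 per group: {achieved_power:.2f}")  # far below the 0.80 convention

# How many subjects per group would be needed to reach 80% power?
required_n = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"Required n per group: {required_n:.0f}")  # roughly 64
```

A check like this takes only a few minutes, yet it speaks directly to the "too few replicates" problem described above.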

Being a good scientist these days requires focusing on a very narrow area of expertise. As a consequence, few scientists are good statisticians, and many lack strong skills in experimental design.

What can be done to improve peer review?

Editors should send manuscripts to statisticians for review and comment on the data analysis. Journals should also expand the materials and methods section, the part of a manuscript where the author tells us what he used and how he did his experiments. An expanded section would make room for more basic information about reagents, test subjects (whether animal or plant), and the equipment and calibration procedures used in the experiments. This would be a small step toward getting more information into the hands of reviewers and readers as they assess the quality of the science and its results.
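As one illustration of what an expanded section could capture, here is a sketch of a structured, machine-readable methods record. This is my own invention, not a format any journal currently mandates, and all the names and values in it are hypothetical.

```python
# Illustrative only: a hypothetical structured record for an expanded
# materials and methods section. All field names, vendors, and values
# are invented for the example.
import json

methods_record = {
    "reagents": [
        {"name": "Trypsin", "vendor": "ExampleBio", "lot_number": "TB-2041"},
    ],
    "test_subjects": {"species": "Mus musculus", "strain": "C57BL/6", "n_per_group": 12},
    "equipment": [
        {
            "instrument": "spectrophotometer",
            "model": "ExampleCorp UV-1900",
            "last_calibrated": "2014-01-15",
            "calibration_procedure": "three-point standard curve",
        }
    ],
    "replicates": {"biological": 3, "technical": 3},
}

# Reviewers and readers could search or validate a record like this
# far more easily than free-form prose.
print(json.dumps(methods_record, indent=2))
```

The point is not the particular format but the discipline: once calibration dates and replicate counts must be stated explicitly, their absence becomes obvious.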

As in any endeavor, if someone wants to cheat and fabricate data, the fraud will be difficult to detect. Science largely depends on the trustworthiness and ethical standards of the scientist doing the work. There are no accountants running around auditing lab notebooks, and it would not help much if there were: once an experiment is finished, the raw materials are discarded. But the broader scientific community can be more rigorous, insisting on a higher level of standardization and data reporting.