A recent example of the shortcomings of Peer Review
Alright, I have posted before about the issues of peer review, namely that it is not very good at detecting fraud or wrong data, because peer reviewers simply lack the resources to reproduce and confirm the data through experiments. Now we have another example. Over at Gramstain, Adam Ratner blogs about a high-profile retraction: a paper published in Nature has been retracted because the authors (including a Nobel prizewinner!) and other groups were unable to reproduce the data. An interesting read if, like me, you are interested in scholarly communication, but then he says:
There is often a great deal of controversy following a retraction like this, but it seems that this is one good example that the system of peer-review, publication, and independent replication works well as a road to scientific truth.
Sadly, in this case, I would have to say that peer review deserves no credit for uncovering the truth. Rather, this is a good example of peer review struggling to detect errors in such papers for lack of resources. And while it is most assuredly the best tool science has to validate new research before communicating it to larger audiences, it is far from perfect, and should not be promoted as such.

That said, I do agree with him that this shortcoming is a good argument for ‘making published research reports and even primary data as widely available as possible’. Open Access makes searching through preprints and eprints (digital versions of the scientific literature) easier, so it should theoretically be easier to find flaws, which discourages authors from doing sloppy or dishonest work, peer reviewed or not. And that is indeed what makes scholarly communication so effective: the more significant an article seems, the more eyes there are to catch the bugs. And if an article was never that significant to begin with, nobody will work with its data, so even if the data is wrong, it will have little to no effect.
However, one problem continues to plague the system: the citation count does not capture the context in which a citation is placed (Opthof, 1996). Scholars may continue to cite a retracted paper, whether as valid or invalid work, raising its “importance level”. In some rather absurd cases, retracted papers keep being cited as valid sources. The paper by Dong et al. (2005) reports on this issue:
Invalid articles may pose a considerable bias on the journal IF. Retracted articles may continue to be cited by others as valid work. Pfeifer and Snodgrass identified 82 completely retracted articles, analyzed their subsequent use in the scientific literature, and found that these retractions were still cited hundreds of times to support scientific concepts. Kochan and Budd showed that retracted papers by John Darsee based on fabricated data were still positively cited in the cardiology literature although years had passed since retraction. Budd et al. obtained all retractions from MEDLINE between 1966 and August 1997 and found that many papers still cited retracted papers as valid research long after the retraction notice.
This weakness, and the misuse of the citation count it invites, is truly a disservice to the quality of science!