A Proposal To Improve Peer Review: A Unified Peer-to-Peer Review Platform (part 3)

December 8, 2010

This is part 3 of my blog series where I present our proposal/working paper to improve scholarly communication. You can find part 1 here and part 2 here.

Towards Scholarly Communication 2.0: Peer-to-Peer Review & Ranking in Open Access Preprint Repositories is the title of our working paper and can be downloaded for free over at SSRN.

A quick recap of the previous two parts:
In part 1 I talked about the potential of a unified peer review platform to improve the certification function of scholarly communication. The bottleneck, however, is that journal publishers and their editors have neither the time nor the incentive to contribute to such a platform systematically, continuously and publicly. Our proposal then shifted from a “unified peer review platform” to a “unified peer-to-peer review platform”. In other words, we focused on designing a peer review model that can work independently of journal publishers (and their editors). This led to our choice of Open Access preprint repositories as the technical foundation for our model: they provide access to manuscripts eligible for peer review and a platform for scholars to gather and share their work.

In part 2 I provided an overview of the functions of the peer-to-peer review model. Specifically, it covered seven important activities that journal editors carry out that have to be compensated for in a peer-to-peer review environment. It also explained how participants can remain anonymous while still being rewarded proportionally to their contributions through an impact metric for reviews.

In part 3 of the blog series I’ll focus on the actual peer-to-peer review process of our model: how we think our proposal can be realized functionally and technically. There will be a lot of quoting from our working paper, since much has already been succinctly stated there. I’ll also add an original flow chart to make this information easier to digest.

Focusing on the Peer-to-Peer Review Process
This section describes the functions that allow scholars to proficiently peer review manuscripts and to evaluate those peer reviews. These activities generate Reviewer Credits, which are added to the scholars’ Reviewer Impact. The peer-to-peer review model borrows heavily from traditional journal peer review. Since traditional journal peer review is the most established peer review system, providing similar functionality is a practical step to increase the model’s feasibility.

An Activity Diagram of the Peer-to-Peer Review Process.

This activity diagram is divided into what the registered peer reviewer sees and inputs, and the “internal processes” that take place behind the scenes, invisible to users.

The peer-to-peer review process starts when registered scholars make themselves available for (blind) peer review. A manuscript selection system, automated as much as possible, attempts to find and present scholars with suitable manuscripts to choose from. At this stage, scholars are free to turn down any peer review offer without penalty (to their Reviewer Impact). There will, however, be a limit to how many times a scholar can reject offers before being asked to “replenish” their quota with either peer reviews or other important tasks. This measure improves the probability that scholars end up with manuscripts they can peer review without conflicts of interest.
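
The rejection-limit rule above could be sketched roughly as follows. This is a hypothetical illustration: the class name, the limit of five rejections, and the replenishment mechanics are my assumptions, not details from the working paper.

```python
class ReviewerQuota:
    """Tracks how many review offers a scholar may decline penalty-free."""

    def __init__(self, max_rejections: int = 5):
        self.max_rejections = max_rejections
        self.rejections = 0

    def can_reject(self) -> bool:
        return self.rejections < self.max_rejections

    def reject_offer(self) -> None:
        # Rejecting an offer carries no Reviewer Impact penalty,
        # but it consumes one slot of the rejection quota.
        if not self.can_reject():
            raise RuntimeError("Rejection limit reached: complete a review "
                               "or another task to replenish your quota.")
        self.rejections += 1

    def replenish(self, completed_tasks: int = 1) -> None:
        # Completing peer reviews or other important tasks restores the quota.
        self.rejections = max(0, self.rejections - completed_tasks)
```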

Two peer reviewers are required for each manuscript. Each peer reviewer has two primary tasks per peer review session:

  1. peer review the manuscript;
  2. peer review the other peer review.

Quality assessment instruments, designed around the relevant characteristics of sound manuscripts, are available for these peer review processes. These activities result in two reports per peer reviewer:

  1. the peer review report of the manuscript;
  2. the “peer review report” of the peer review of the other peer reviewer.

After the reviews have been exchanged with the authors and the other peer reviewer, the involved parties get the opportunity to discuss the reports. Discussions are held on a private discussion board, accessible only to the respective participants, to iron out difficulties. Peer reviewers can consider the feedback and optionally revise their reports and scoring. Like traditional journal peer review, this setup requires two scholars for the actual reviewing, which helps maintain a comparable level of efficiency and workload.

To achieve a high degree of objectivity and efficiency, the structure of the peer reviews has to encourage peer reviewers to comment and score on all the significant characteristics of sound research papers [Brown 2004; Davison, Vreede and Briggs 2005]. Similarly, the model has to provide proper accommodations for peer reviewers to systematically assess the quality of the peer reviews. To improve its feasibility, the design of such accommodations will be based on existing peer review quality assessment instruments [Jefferson, Wager and Davidoff 2002; Landkroon et al. 2006; Van Rooyen, Black and Godlee 1999].

One way to utilize these instruments is a form with room for a rating and comments for each significant characteristic. Each rating for a manuscript can carry a different weight factor, depending on the characteristic, and the weighted scores accumulate into a total score. The assessments of the peer reviews themselves could be based on similar ratings plus a “highlight” tool to support each rating: each highlighted part of the other report marks something that report lacks, or includes, compared to the reviewer’s own report.
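
The weighted scoring could look something like this minimal sketch. The characteristics and weight values are invented examples for illustration; the actual instruments would derive them from the cited assessment literature.

```python
def manuscript_score(ratings, weights):
    """Accumulate per-characteristic ratings into a weighted total score.

    Characteristics missing from `weights` default to a weight of 1.0.
    """
    return sum(ratings[c] * weights.get(c, 1.0) for c in ratings)

# Hypothetical characteristics and weights, not taken from the paper:
ratings = {"originality": 4.0, "methodology": 3.5, "clarity": 4.5}
weights = {"originality": 2.0, "methodology": 2.0, "clarity": 1.0}
total = manuscript_score(ratings, weights)  # 4*2 + 3.5*2 + 4.5*1 = 19.5
```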

Example Peer Review Form for Manuscripts. Questions are based on a working paper by Brown 2004.

Example Peer Review Form for Peer Reviews. Questions are based on a paper by Landkroon et al. 2006.

Finalizing Peer Review Processes
A peer review process is not finalized until all peer review reports have been signed off by their respective peer reviewers (within the predetermined time limits). Until that point, peer reviews are open for revision as many times as considered appropriate, allowing peer reviewers to engage in a multi-tiered peer review process. The parties involved can decide how many times they are willing to discuss and revise their reports. They can also agree to skip discussion and revision entirely: each reviewer simply writes a review and submits it, without seeing the other report until it has been finalized. When a disagreement over the scoring remains, the notification function can be used to ask other peers to review how the evaluations were performed. Using the notification function, however, will likely take longer than for the peer reviewers of those sessions to discuss and reach a compromise on the scoring themselves.

Each peer review assignment is constrained by predetermined time limits. The default time limit for an entire process is, for example, one month after two peer reviewers have accepted the assignment. Authors and peer reviewers can consent to change the default time limit during the acceptance phase. Any reviewer who has not “signed off” by the deadline will have Reviewer Credits deducted until they sign off or the peer review session is terminated. This measure prevents a process from dragging on far longer than agreed beforehand, which is not desirable for any party.

An example of how termination can work: a session is terminated when no new deadline, agreed on by the authors and peer reviewers in question, has been set within two weeks after the original deadline has passed. Upon termination, one or more new peer reviewers have to be assigned to the session to reach the minimum of two peer reviews per manuscript. The involved parties can unanimously extend the session as often as they wish, but by a maximum of two weeks per extension. This also gives the authors the option to cancel a peer review session every two weeks. No Reviewer Credits are assigned on cancellation.
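
The deadline, grace period, and extension rules above could be combined into a simple status check. This is a sketch under my own assumptions about how a platform might implement the dates; only the one-month default and the two-week periods come from the text.

```python
from datetime import date, timedelta

DEFAULT_LIMIT = timedelta(days=30)   # ~one month after both reviewers accept
GRACE_PERIOD = timedelta(days=14)    # two weeks to agree on a new deadline
MAX_EXTENSION = timedelta(days=14)   # each extension is at most two weeks

def session_status(accepted_on: date, today: date, extensions: int = 0) -> str:
    """Classify a peer review session as active, overdue, or terminated."""
    deadline = accepted_on + DEFAULT_LIMIT + extensions * MAX_EXTENSION
    if today <= deadline:
        return "active"
    if today <= deadline + GRACE_PERIOD:
        return "overdue"        # Reviewer Credits are being deducted
    return "terminated"         # new reviewer(s) must be assigned
```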

When authors are not content after having gone through a peer review process, they can leave manuscripts “open” for other peer reviewers to start a new peer review session. The newer peer reviewers will have access to the peer review reports of previous sessions, creating an additional layer of accountability. As for the consequences of multiple peer review sessions for the same manuscript: in the traditional system, the latest peer reviews before a manuscript is accepted for publication are the ones that count. In our peer-to-peer review model, the manuscript score is likewise the score assigned by the peer reviewers of the newest session, regardless of whether it is higher or lower than the previous manuscript scores.

A possible alternative is to let the authors decide which results to attach to the manuscript. A disadvantage of letting authors select which set of grades to use is that it would likely weaken the importance of earlier peer review sessions. To improve accountability and efficiency, previous reviews are never hidden from future peer reviewers; they still count, and the peer reviewers who submitted them keep the Reviewer Credits awarded to them. Regardless of how and which sets of grades are utilized, the chosen grades are reflected in the rankings and returned search results.
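
The “newest session wins” rule can be sketched as follows: all sessions remain visible and credited, but only the latest finalized session determines the manuscript’s current score. The data shapes here are my assumptions, not the paper’s.

```python
def current_score(sessions):
    """Return the score of the most recent finalized session, or None.

    `sessions` is a list of dicts with keys: session_no, finalized, score.
    Earlier sessions stay in the list (and their reviewers keep their
    Reviewer Credits); they simply no longer set the headline score.
    """
    finalized = [s for s in sessions if s["finalized"]]
    if not finalized:
        return None
    latest = max(finalized, key=lambda s: s["session_no"])
    return latest["score"]

sessions = [
    {"session_no": 1, "finalized": True, "score": 7.2},
    {"session_no": 2, "finalized": True, "score": 6.1},  # lower, but newest
]
```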

Consequences of Multiple Peer Review Sessions
By allowing more than one peer review session, rules are needed concerning the impact on the previous raters. Naturally, the newest peer reviewers cannot simply change the previous grades of others to “correct” their statistics. A couple of options are available when the newest peer reviewers agree with the authors about the need for additional peer reviews. If the newest peer reviewers agree that the grades do not accurately reflect the peer review quality of the previous peer reviewers, they can put those reports on notification and add arguments to support that decision.

If any peer reviews are put on notification, the involved parties are first offered another opportunity to settle things between themselves. If that does not result in the notified reports being adjusted, or in the authors agreeing with the assessments, the notified report can be made openly accessible to registered peers. If the reported evaluators stand by their reports, the other parties can make the reports in question public so that a larger community of peers can decide whether the notification is justified.

If, after a certain amount of time and responses, the majority agrees that the reporting side is right, the notified case has its grading adjusted or even removed entirely; at worst, the registered scholar in question is banned. If the majority of peers does not agree, the reporting scholar is the one reprimanded by this same group of peers. E-mail notifications can be sent to peers in the same field to reduce the time it takes for these reports to be found and evaluated. During this period of “public scrutiny” any Reviewer Credits that were awarded are temporarily withheld until there is a conclusion.
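
The majority rule above could be resolved along these lines. The response threshold and outcome labels are placeholders I invented; the text only specifies that a majority decides and that insufficient attention leaves the case unresolved.

```python
def resolve_notification(votes_for_notifier: int, votes_against: int,
                         min_responses: int = 10) -> str:
    """Decide a notified case by community vote (hypothetical thresholds)."""
    total = votes_for_notifier + votes_against
    if total < min_responses:
        return "pending"                  # too little attention so far
    if votes_for_notifier > votes_against:
        return "adjust_or_remove_report"  # notifier upheld; grades adjusted
    return "reprimand_notifier"           # majority sides with the notified
```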

Suppose the parties have decided to make their notified case available to a larger community of qualified peers, but little to no attention is given to it; this is not an uncommon scenario in an “open” environment with neither encouragement nor enforcement. If at least one of the peer reviewers insists on a more satisfying conclusion, one option to ensure sessions are finalized is to let the model automatically select one new peer reviewer to “audit” the notified case. This “external peer” can be given the authority to adjust the ratings of the reported assessment or to invalidate the “notified” status. To improve the accountability of these external peers, their actions must themselves be recorded with the particular manuscript and remain open to revision by any future peer reviewers of that manuscript.

Another option is to leave the notified works publicly visible as they are, but attach the real identities of the “notifiers” and the “notified” to that particular peer review process. These options could become available after a case has been made public but has received no result at all for two weeks, making it six weeks after the two peer reviewers accepted the assignment under the default time limit.

Summary
And that’s it for part 3. This peer-to-peer review process pretty much lays the groundwork for the rest of the activities and procedures. In part 4 of this blog series I’ll talk more about how Reviewer Credits are credited and about a special ranking system for manuscripts based on citation counts. The latter is an idea to simulate the journal impact factor, but with more flexibility to improve the accuracy.

In part 5 I focus on answerability: how can this model encourage/enforce objective and professional peer reviewing from scholars? It’s a very important topic, but one that I can best tackle after presenting how Reviewer Credits are credited.

P.S. yEd – Graph Editor is the tool I use for the graphs in this blog post. It’s pretty handy.

yEd is a powerful diagram editor that can be used to quickly and effectively generate high-quality drawings of diagrams.
yEd is freely available and runs on all major platforms: Windows, Unix/Linux, and Mac OS.

 
References

  • Brown, T. 2004, ‘Peer Review and the acceptance of new scientific ideas’, Sense about Science. Available at: http://www.senseaboutscience.org.uk/pdf/PeerReview.pdf [Last accessed 24 July 2009]
  • Davison, R.M., de Vreede, G.J., Briggs, R.O. 2005, ‘On Peer Review Standards For the Information Systems Literature’, Communications of the Association for Information Systems, vol. 16, no. 49, pp. 967-980.
  • Jefferson, T., Wager, E., Davidoff, F. 2002, ‘Measuring the quality of editorial peer review’, Journal of the American Medical Association, vol. 287, no. 21, pp. 2786-2790.
  • Landkroon, A.P., Euser, A.M., Veeken, H., Hart, W., Overbeke, A.J.P.M. 2006, ‘Quality assessment of reviewers’ reports using a simple instrument’, Obstetrics And Gynecology, vol. 108, no. 4, pp. 979-985.
  • Van Rooyen, S., Black, N., Godlee, F. 1999, ‘Development of the Review Quality Instrument (RQI) for assessing peer reviews of manuscripts’, Journal of Clinical Epidemiology, vol. 52, no. 7, pp. 625-629.