A Proposal To Improve Peer Review: A Unified Peer-to-Peer Review Platform (part 2)
This is part 2 of my blog series, where I present our proposal/working paper to improve scholarly communication through its peer review element. You can find part 1 here.
Towards Scholarly Communication 2.0: Peer-to-Peer Review & Ranking in Open Access Preprint Repositories is the title of our working paper and can be downloaded for free over at SSRN.
A quick recap: in part 1 I presented the opportunities a unified peer review platform offers to improve the certification function of scholarly communication. The bottleneck, however, is that journal publishers have no incentive to let their journal editors initiate peer reviews on such a platform and share the resulting reviews afterward. Journal editors themselves have neither the time nor the motivation to contribute systematically, continuously and publicly. Our proposal therefore shifted from a “unified peer review platform” to a “unified peer-to-peer review platform”. In other words, we focused on designing a peer review model that can work independently of journal publishers (and their editors). This led to our choice of Open Access preprint repositories as the technical foundation for our model: they provide access to manuscripts eligible for peer review and a platform for scholars to share their work. In this part of the blog series we’ll start exploring the consequences of that shift with…
A General Overview of the Model: Simulating the Journal Editor
Since the journal editor plays a pivotal role in the (journal) peer review process, the peer-to-peer review process needs to compensate for the lack of journal editors to be feasible. In the working paper, we identify seven activities that journal editors carry out with regard to certifying manuscripts. By providing alternatives to execute these activities in a peer-to-peer environment, we essentially lay down the functional foundation of our peer-to-peer review model. After that, we can focus on the actual peer-to-peer review process, but that’s for another blog post. Because this is only a brief overview of the model, I’ll occasionally quote directly from our working paper, where these points are already succinctly phrased.
The first activity is to screen the manuscripts submitted by authors and determine whether they are worth sending out to peers for review. Manuscripts that the editor feels are unlikely to be suitable for the journal are rejected. For our environment, we can provide scholars with instruments to submit comments and ratings for either the abstract or the entire manuscript. As a screening process this is far less thorough, but still valuable. By allowing peers to “trust” each other’s ratings, ordinary scholars can become “personal editors” with little additional effort, further improving the screening for interesting/significant papers.
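To make the “trust” mechanism concrete, here is a minimal sketch of how trust-weighted screening ratings could be aggregated. All names and the weighting scheme (`Rating`, `trusted_raters`, doubling trusted scores) are my own illustrative assumptions, not specified in the paper.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    rater: str   # pseudonym of the screening scholar
    score: int   # e.g. a 1-5 assessment of the abstract or full manuscript

def screening_score(ratings, trusted_raters, trust_weight=2.0):
    """Aggregate screening ratings, weighting trusted peers more heavily.

    A scholar who 'trusts' certain raters effectively turns them into
    personal editors: their scores count more in the weighted average.
    """
    total, weight_sum = 0.0, 0.0
    for r in ratings:
        w = trust_weight if r.rater in trusted_raters else 1.0
        total += w * r.score
        weight_sum += w
    return total / weight_sum if weight_sum else None

ratings = [Rating("rev_a", 4), Rating("rev_b", 2), Rating("rev_c", 5)]
# "rev_c" is one of my trusted raters, so their 5 counts double
print(screening_score(ratings, trusted_raters={"rev_c"}))  # → 4.0
```

Each scholar supplies their own `trusted_raters` set, so the same pool of ratings yields a personalized screening score per reader.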
The second activity is to select suitable peer reviewers for the (screened) manuscripts.
There are two elements in this activity. The first is locating scholars qualified to peer review. The second is properly matching those qualified scholars with the manuscripts that are eligible for peer review. We propose to tackle the first point as follows (page 5):
For the peer-to-peer review environment, qualified scholars can be located in various ways. One way is to integrate the registration database with the author databases of the repositories themselves. Another approach is to grant special authorization to scholarly institutions to manually register their scholars.
The second point, matching qualified scholars with the right manuscripts for peer review, is a lot more difficult. In fact, it is probably the hardest part of the process to carry out with the same effectiveness and efficiency as a journal editor. The feasibility of the entire model largely depends on how well this particular process can be achieved. In the paper we go into greater detail on how we think this can be done, but for now this quote sets the direction of the solution we’re proposing (page 5):
There are several potential approaches to match manuscripts with suitable scholars for peer review. One obvious approach is to let registered scholars choose which manuscripts they wish to peer review. That approach will avoid any issues concerning incompatible expertise between the manuscripts and the peer reviewers. Ensuring a satisfactory level of objectivity will be complicated when scholars can pick whom and what they get to evaluate, however. A more reliable approach is necessary. One such approach is to automate this process by utilizing specialized recommender systems [Adomavicius and Tuzhilin 2005]. Specifically, an automated manuscript/peer reviewer selection system [Basu et al. 2001; Dumais and Nielsen 1992; Dutta 1992; Rodriguez and Bollen 2006; Rodriguez, Bollen and Van de Sompel 2006; Yarowsky and Florian 1999] that matches the peer reviewers’ preferences and expertise with the manuscripts.
“Second” obligatory reminder: this is not the complete solution we propose for selecting suitable peer reviewers without conflicts of interest in our peer-to-peer environment. We do not think this process can currently (or even in the near future) be fully automated. It is, however, an important element, and it will go a long way toward making the entire process less labor intensive. We will cover this topic more thoroughly later.
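As a toy illustration of the matching direction (not the recommender systems cited above), manuscripts and reviewer expertise can be represented as term sets and candidates ranked by overlap. The profile contents and the conflict-of-interest `exclude` parameter are hypothetical placeholders.

```python
def jaccard(a, b):
    """Similarity between two term sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_reviewers(manuscript_terms, reviewer_profiles, exclude=()):
    """Rank candidate reviewers by expertise overlap with a manuscript.

    reviewer_profiles: {reviewer_id: iterable of expertise terms}
    exclude: reviewers with conflicts of interest (e.g. co-authors).
    """
    scored = [
        (jaccard(manuscript_terms, terms), rid)
        for rid, terms in reviewer_profiles.items()
        if rid not in exclude
    ]
    return [rid for score, rid in sorted(scored, reverse=True) if score > 0]

profiles = {
    "alice": {"peer review", "bibliometrics", "open access"},
    "bob": {"recommender systems", "machine learning"},
    "carol": {"open access", "repositories", "preprints"},
}
manuscript = {"open access", "preprints", "peer review"}
print(rank_reviewers(manuscript, profiles, exclude={"alice"}))  # → ['carol']
```

A production recommender would use richer signals (publication history, citation graphs, stated preferences), but the core operation, scoring and ranking reviewer–manuscript pairs while filtering out conflicts, stays the same.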
The third activity is to act as an intermediary between authors and peer reviewers: editors serve as an indirect communication channel. An important benefit of these selection and intermediation functions is that peer reviewers can remain anonymous. Anonymity allows them to be more honest in their assessments, since they need not fear “retaliation” by the authors for criticizing their manuscripts. Of course, anonymity also makes it easier for peer reviewers to criticize a manuscript harshly, justified or not.
Providing scholars with the instruments to peer review anonymously while still being properly credited for it is not complicated to implement digitally. Being completely anonymous to all human parties yet still receiving proper credit for every contribution is a unique and significant benefit of a digital scholarly communication system.
To achieve that benefit, the model lets scholars register with their real identities and then carry out activities anonymously under generic “nicknames”. Those activities are scrutinized and quantified, and the quantified contributions form a scholar’s Reviewer Impact. After every valuable contribution, the scholar’s Reviewer Impact is adjusted accordingly and attached to their real identity. Through this approach, every scholar is accountable for what they do, while their Reviewer Impact represents the quality of their actions.
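The pseudonym-to-identity bookkeeping can be sketched as a small registry that only the system itself can read. The class and method names (`Registry`, `credit`, `impact_of`) are illustrative assumptions; the paper specifies the mechanism, not a data model.

```python
from dataclasses import dataclass, field

@dataclass
class Scholar:
    real_identity: str           # verified registration identity
    pseudonym: str               # generic nickname shown to other parties
    reviewer_impact: float = 0.0 # quantified contribution score

@dataclass
class Registry:
    """Maps public pseudonyms back to real identities; visible only to the system."""
    scholars: dict = field(default_factory=dict)

    def register(self, real_identity, pseudonym):
        self.scholars[pseudonym] = Scholar(real_identity, pseudonym)

    def credit(self, pseudonym, delta):
        """Adjust Reviewer Impact after a contribution is scrutinized and scored."""
        self.scholars[pseudonym].reviewer_impact += delta

    def impact_of(self, real_identity):
        """Credit accrues to the real identity, even for work done anonymously."""
        return sum(s.reviewer_impact for s in self.scholars.values()
                   if s.real_identity == real_identity)

reg = Registry()
reg.register("Dr. Jane Doe", "reviewer_1874")
reg.credit("reviewer_1874", 2.5)      # a valuable anonymous peer review
print(reg.impact_of("Dr. Jane Doe"))  # → 2.5
```

Other scholars only ever see `reviewer_1874`; the accumulated Reviewer Impact, however, is queryable against the real identity, which is what makes anonymous work creditable.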
The fourth activity is to verify the quality of the manuscripts, where the journal editor essentially acts as a peer reviewer. For this, the model includes specific accommodations for proper peer reviewing: forms containing the relevant manuscript and peer review criteria, which support peer reviewers both in reviewing manuscripts and in evaluating each other’s peer reviews.
The fifth activity is to verify the quality of the peer reviews: acting as a peer reviewer of the peer reviewers. With or without editors (but especially without), the model provides instruments for peer reviewers to assess the quality of their peers’ review reports. As mentioned under the previous activity, these instruments are forms listing the characteristics of a proper peer review, against which scholars provide their feedback.
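Such a form could boil down to a fixed set of criteria and a score per criterion. The specific criteria below are hypothetical examples; the paper only says the forms contain the relevant characteristics of proper peer reviews.

```python
# Hypothetical criteria for evaluating a peer review report.
REVIEW_CRITERIA = ("thoroughness", "constructiveness", "clarity", "objectivity")

def score_review(assessments):
    """Average a meta-reviewer's 1-5 marks over the fixed criteria.

    assessments: {criterion: mark}; every criterion must be covered,
    so a form cannot be submitted half-filled.
    """
    missing = set(REVIEW_CRITERIA) - set(assessments)
    if missing:
        raise ValueError(f"unassessed criteria: {sorted(missing)}")
    return sum(assessments[c] for c in REVIEW_CRITERIA) / len(REVIEW_CRITERIA)

print(score_review({"thoroughness": 4, "constructiveness": 5,
                    "clarity": 3, "objectivity": 4}))  # → 4.0
```

The resulting score is exactly the kind of scrutinized, quantified judgment that could feed into the reviewer’s Reviewer Impact.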
The sixth activity is to decide whether a peer reviewed manuscript is approved for publication, approved after revision, or rejected outright. The peer-to-peer review model focuses on quality control for the sake of “grading” a manuscript and properly crediting peer reviewers for their work. Whether a manuscript is then picked up for publication, grant funding or academic ranking, or simply remains in its repository as a peer reviewed research article, is not a focus point. For feasibility purposes, the important thing is to accommodate these different options. Examples of such accommodations are publicly accessible rankings of papers based on their citation counts and on the grades given by peer reviewers.
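The two public rankings mentioned above amount to sorting the same manuscript records by different keys. A minimal sketch, with made-up manuscript ids and fields:

```python
def rank_manuscripts(papers, by="grade"):
    """Return manuscript ids sorted by peer review grade or citation count.

    papers: {manuscript_id: {"grade": float, "citations": int}}
    The model exposes both orderings as separate public rankings.
    """
    return sorted(papers, key=lambda m: papers[m][by], reverse=True)

papers = {
    "paper_a": {"grade": 4.2, "citations": 57},
    "paper_b": {"grade": 4.8, "citations": 12},
}
print(rank_manuscripts(papers, by="grade"))      # → ['paper_b', 'paper_a']
print(rank_manuscripts(papers, by="citations"))  # → ['paper_a', 'paper_b']
```

Keeping the rankings as views over one dataset, rather than separate systems, is what lets journals, funders and repositories each consume whichever ordering suits their purpose.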
The seventh activity is to determine the visibility of publications, including the layout of the journal and the time of publication. A key objective of this model is to offer scholars practical rewards that encourage regular, valuable contributions. With the output and proficiency of peer reviewers quantified as Reviewer Impact, they can be ranked systematically by their peer review performance. The direct approach is a publicly visible peer reviewer ranking listing names and Reviewer Impact. Indirectly, the system can also rank their preprints higher in returned search queries, improving their visibility.
Since I feel bad about quoting so much verbatim from our paper for this blog post, here is an original image to somewhat reflect what is stated here (and in the paper) for additional clarity.
Summary Part 2
Our peer-to-peer review model allows scholars to peer review anonymously while still crediting them for their work. Established quality assessment instruments are included to improve the effectiveness and efficiency of these assessments. In the next part (of the paper), we present the workings of the actual peer-to-peer review process in greater detail, and we focus on measures to achieve accountability, efficiency and effectiveness in that process.
- Adomavicius, G., Tuzhilin, A. 2005, ‘Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions’, IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 6, pp. 734-749.
- Basu, C., Hirsh, H., Cohen, W.W., Nevill-Manning, C. 2001, ‘Technical Paper Recommendation: A Study in Combining Multiple Information Sources’, Journal of Artificial Intelligence Research, vol. 14, pp. 231-252.
- Dumais, S.T., Nielsen, J. 1992, ‘Automating the Assignment of Submitted Manuscripts to Reviewers’, in Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Copenhagen, Denmark, pp. 233-244.
- Dutta, A. 1992, ‘A deductive database for automatic referee selection’, Information & Management, vol. 22, no. 6, pp. 371-381.
- Rodriguez, M.A., Bollen, J. 2006, ‘An Algorithm to Determine Peer-Reviewers’, Working Paper. Retrieved June 25, 2008, from http://arxiv.org/abs/cs/0605112v1.
- Rodriguez, M.A., Bollen, J., Van de Sompel, H. 2006, ‘The convergence of digital libraries and the peer-review process’, Journal of Information Science, vol. 32, no. 2, pp. 149-159.
- Yarowsky, D., Florian, R. 1999, ‘Taking the load off the conference chairs: towards a digital paper-routing assistant’, in Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in NLP and Very-Large Corpora, University of Maryland, Maryland.