
Feasibility of PLoS ONE’s Peer Review Model?

There is a growing urge to deal with the significant issues that plague both the peer review process and the scientific paper format. The peer review issues concern the inefficiency and ineffectiveness of validating the quality of a scientific paper. The scientific paper issues concern the inflexibility of communicating additional scientific knowledge: for example, content beyond a journal's page limit, complementary (raw) data sets, or even corrections to the paper. Thanks to the advent of digital communication, these issues are now more apparent than ever. However, that does not mean that the solutions conceptualized, or even practically realized, so far are the be-all and end-all answers to our problems.

In this blog post, I express my concerns with PLoS ONE’s peer review and publication model as a model that can stand on its own as the new way of doing peer review. I also suggest some features, based on existing concepts, that could complement their peer review system and perhaps make the transition to their vision of how peer review should work more feasible.

What Scholars Want
Today's topic is whether and how PLoS ONE’s peer review model stacks up against the traditional peer review models. To make that comparison possible, we need to understand the added value of the traditional journal peer review model. More specifically, we need to understand how effectively traditional peer review meets the demands of scholarly readers, and how well PLoS ONE’s peer review model currently does the same.

So what do scholars/readers really want when it comes to scholarly communication? For one, they want access to scientific knowledge. It does not matter how good the scientific knowledge is; it has to be accessible for scholars to utilize it. Equally important is the scientific soundness of that knowledge, for a simple reason: access to faulty knowledge is useless. The third key element is that the scientific knowledge is relevant/significant to them. After all, having access to scientifically sound knowledge is not very productive if it is not all that relevant to their work. Scholars need to be efficient with their time and read the things that can contribute to their research.

So journals that provide accessibility and guarantee scientific soundness? Valuable. Journals that manage to provide accessibility, guarantee scientific soundness, and filter scientific knowledge on significance? Gold. Almost literally, as they are high up Mount Scholarly Significance and still climbing. And the higher they are on Mount Scholarly Significance, the more funding they will receive and the better their continuity will be. Covering all three bases is what journals traditionally do now, both commercial and Open Access ones. This is not likely to change even under universal Open Access. After all, it is common sense to financially support the journals with the higher impact. A better bang for the buck, as they say.



  1. September 29, 2008 at 5:17 PM

    Man I am way behind on commenting and blog reading…just a quick one at this stage. I really like the idea of embedding an extra assessment of quality from the referees at the beginning. Two reasons for this – firstly it does add that initial rating as you suggest and secondly it stands a good chance of provoking a conversation – which will also push people to rate things.

    I think this was kind of intended when they were printing the referees reports alongside the comments – but I think that wasn’t so good because they were the comments on the original manuscript not the final version, and I think it was also a lot of work for the editorial staff to clean them up for publishing. In many ways just giving the referees the option of pinging a ‘significance’ button solves many of the problems they were having – without compromising the principle of ‘everything that is publishable gets published’

  2. September 29, 2008 at 5:30 PM

    Great ideas here! Clearly, the “Web 2.0” thing to do is to have a multitude of various “impact” measures: initial significance by reviewers/editors, downloads, citations, comments, ratings, etc. And all of these are criteria by which any search can be sorted.
    And of course, the validity of any comment, rating etc, by any user on the site is assessed by having a competent reputation system implemented.
    The technicalities have all been solved, they are around. They just need to be implemented.
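To make this comment's idea concrete, here is a minimal, purely hypothetical Python sketch of what "multiple impact measures plus a reputation system" could look like: user ratings are weighted by each rater's reputation, and search results are sorted by a combined score. All names, weights, and numbers are invented for illustration; this is not how PLoS ONE's actual system works.

```python
# Hypothetical sketch: combine several "impact" measures into one
# sortable score, with user ratings weighted by rater reputation.

def weighted_rating(ratings, reputation):
    """Average user ratings, weighting each by the rater's reputation."""
    total_weight = sum(reputation.get(user, 0.0) for user, _ in ratings)
    if total_weight == 0:
        return 0.0
    weighted_sum = sum(reputation.get(user, 0.0) * score
                       for user, score in ratings)
    return weighted_sum / total_weight

def impact_score(paper, weights):
    """Combine several impact measures into a single sortable score."""
    return sum(weights[k] * paper.get(k, 0.0) for k in weights)

# Invented reputation values: a well-established rater counts more
# than an anonymous one.
reputation = {"alice": 1.0, "bob": 0.5, "anon": 0.1}

papers = [
    {"id": "A", "downloads": 120, "citations": 3,
     "rating": weighted_rating([("alice", 4), ("anon", 1)], reputation)},
    {"id": "B", "downloads": 40, "citations": 10,
     "rating": weighted_rating([("bob", 5)], reputation)},
]

# Any search result can then be sorted by whichever combination of
# measures the reader prefers.
weights = {"downloads": 0.01, "citations": 1.0, "rating": 2.0}
ranked = sorted(papers, key=lambda p: impact_score(p, weights),
                reverse=True)
```

The point of the sketch is the comment's claim that the technicalities are solved: weighting, aggregating, and sorting are trivial; the hard part is agreeing on the weights and building a reputation system people trust.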

  3. September 29, 2008 at 7:39 PM

    I really, really, really hate the “initial significance” idea. The whole point of the P.ONE model, to me, is that reviewers are NOT asked to guess (and no one can do more than guess) about “significance” — whatever that even means. The point is to publish all methodologically and ethically sound research and let the community, over time, sort out what is or is not “significant”.

    I hope P.ONE will not get caught in the “prestige” tarpit in which pretty much every other journal in the world is mired.

  4. September 29, 2008 at 10:50 PM

    bill

    I agree with you that the most accurate and “fair” assessment of a paper’s significance is not an “initial assessment” by the journal it is published in, but e.g. its citation count. However, nobody can deny that the journal impact factor, as much as we know that it is not nearly as accurate as the citation count, can and usually does “kick start” the readership of a particular paper. Which in turn increases its likelihood of being cited. Is that “fair”? Maybe or maybe not.

    But here is the thing: it is UNREALISTIC to expect scholars to treat all “published” papers equally and read them all before objectively “assessing” their impact. That is a dream and it is not going to happen anytime soon, if ever, because scholars do not read and thoroughly assess scholarly papers full-time. There are only so many qualified scholars with only so much time versus the disproportionately growing amount of available papers that are ‘methodologically and ethically sound’ and need to have their significance verified. The proportions are off and it simply will not add up.

    PLoS ONE is only ONE journal, and even that journal, while riding on quite a bit of name recognition from the other PLoS journals, still lacks qualitative comments that can replace the quality of the assessments made by the 2 peer reviewers of each paper. Do you honestly expect the scientific community to achieve this for all the papers out there? If not, how is that more fair, or even as fair, to all the scholars who have no clue as to which papers to read? Whether we like it or not, and if used with the right “care”, that “initial assessment” remains a valuable service to scholars: having someone else give them some pointers on the quality of papers they have not read before.

    But as I have said before: do you not see the problem with allowing random “scholars” (with no actual accountability controls in place) to rate the significance of a published paper while withholding the ratings of the actual peer reviewers of that paper? Please give me ONE reason why you think they are apparently not qualified to make that assessment while “everyone else” is. It is rather disingenuous to state such a thing, not to mention highly inefficient for the entire significance assessment process that PLoS ONE tries to achieve.

  5. September 30, 2008 at 2:18 AM

    The proportions are off and it simply will not add up.

    Eh? If you read widely outside your own field, maybe. But I work in cancer biology, by any lights a pretty productive field, and I have no trouble reading titles and abstracts of all the papers that are relevant to me. Then I do my own triage regarding which ones to read in detail, and make my own decisions about their quality or lack thereof.

    how is that more fair, or even as fair to all the scholars who have no clue as to what papers to read?

    If they have no clue what papers to read, they will be relying on reviews and overviews, the kinds of things I mentioned as appropriate value-add for journals/publishers. What you’re pushing for is the status quo in a thin disguise: peer review pushed to do something it cannot do (predict impact/assess quality in a wide context) and produce inadequate metrics with which to report those predictions.

    (Oh, and I didn’t mention the rating system, or suggest that reviewers be excluded from it, or from anything else. I think your own issues are showing in that last paragraph of froth.)

  6. September 30, 2008 at 10:37 AM

    ‘I have no trouble reading titles and abstracts of all the papers that are relevant to me. Then I do my own triage regarding which ones to read in detail, and make my own decisions about their quality or lack thereof.’

    By ‘all the papers’, do you mean all journal publications including working papers/preprints, or “just” journal publications?

    Because I think it is pretty safe to assume that once ‘methodologically and ethically sound research’ is the only condition that gets papers published, the output of papers will increase even more than it does right now. Add to that Open Access in the (hopefully nearby) future, and we are talking about some serious output of papers for scholars to read. I would REALLY find it hard to believe scholars are able to keep up with reading and assessing the quality of the available papers for themselves. And I believe the consensus right now is that peer reviewers are overworked, and already a scarce pool of resources to draw from.

    ‘Oh, and I didn’t mention the rating system, or suggest that reviewers be excluded from it, or from anything else.’

    The PLoS ONE paper rating system is designed to get an indication of the significance of the published paper. I am suggesting that the peer reviewers of the papers should provide feedback using that same rating system.

    I am unsure how to make sense of your thoughts without including the rating system in it, since that is what I have been talking about all along…

  7. September 30, 2008 at 8:20 PM

    you mean all the journal publications including working papers/preprints or “just” journal publications

    There really aren’t preprints/working papers in my field (Nature Precedings has a few, but that’s it). Biomed researchers are firm believers, by and large, in Secret Science. So, just journal publications — which means, of course, that I’m filtering too, primarily by “does PubMed index this/can Google Scholar find it?”.

    I am unsure how to make sense of your thoughts without including the rating system in it, since that is what I have been talking about all along…

    You’re right, I have gone off on something of a tangent from your original point. Mea culpa. (Turns out it’s my issues showing. Sorry.)

    In re: significance generally (my tangent): I don’t think you can predict it, so I don’t think you can filter for it. I most certainly don’t think you should include it in criteria for deciding whether or not to publish.

    In re: rating systems: insofar as they are all bad, let’s have as many as we can think of (so long as they are Open, not proprietary). I can certainly imagine that, by the time all research is OA and online, and all researchers are as web-savvy as, say, the FriendFeed crowd is, some kind of Slashdot-style ratings/review system might actually be useful. So, as others have commented on the FF thread, let a thousand filters bloom and if any of them prove useful, great — so long as they are not proprietary and no one is locked into them.

    My caveat, though, is that it will be a long time before such systems are useful, and even then they will be limited (like the /. system — it’s really only good for setting to >4 to keep the noise out or 1 if it’s a topic of direct interest, so it’s a pretty coarse filter).

    In the meantime, this pernicious idea of “significance” has got the whole community locked into it. One predictive metric (journal “prestige”) and one retrospective metric (Impact Factor) have the whole filter “market” sewn up between them. Both are proprietary, both are profit-driven not science-driven, both are deeply flawed — and yet, between them they account for (I would estimate) a strong majority of publication, hiring, salary, tenure, granting and similar decision-making in science.

    So I have never really liked PLoS ONE’s rating system — I would much rather people left detailed comments or questions, since ratings are meaningless without context (e.g. aggregated over a community, with some indication of how ‘savvy’ that community is). But on the other hand, I can see how it might evolve into something useful, given time — and it’s not proprietary, and no one is forcing me to use it. So I couldn’t object too strongly if reviewers were asked to give an initial rating — just please don’t call it “significance” or anything like that!

    You say: “I think it is pretty safe to assume that once ‘methodologically and ethically sound research’ is the only condition that gets papers published, the output of papers will increase”. I disagree: if authors are required to show ALL data — no more “data not shown”, no more “representative results are shown” — I think the volume of publication will significantly decrease. Moreover, your statement seems to assume that there is sound research that no journal will publish because it is deemed too insignificant — I don’t think that’s right, either. There are plenty of small, specialized journals (and anyway, now there’s PLoS ONE and BMC Res Notes).

  8. September 30, 2008 at 11:16 PM

    ‘In re: significance generally (my tangent): I don’t think you can predict it, so I don’t think you can filter for it. I most certainly don’t think you should include it in criteria for deciding whether or not to publish.’

    Well, that is a fair point. I would agree completely with it too, except that, as you say, that is not really a good move in terms of profit (or financial sustainability, particularly in the case of OA journals) because it is more difficult to attract readership. And without that, journals cannot do their jobs.

    On that note, and as you have noticed, I was suggesting that the peer reviewers could input their own ratings/comments for it after its PLoS ONE publication. It does not affect the publication chances itself, but more what happens after that. Which is what PLoS ONE is doing right now, but they are leaving out two perfectly qualified people to rate/comment on the significance of the papers: the peer reviewers.

    ‘My caveat, though, is that it will be a long time before such systems are useful, and even then they will be limited (like the /. system — it’s really only good for setting to >4 to keep the noise out or 1 if it’s a topic of direct interest, so it’s a pretty coarse filter).’

    True, but you have got to start somewhere with ratings/comments. PLoS ONE provides another good opportunity (assuming they change it up a bit, with a bit more accountability and context) to increase the value of post-publication assessments.

    ‘Both are proprietary, both are profit-driven not science-driven, both are deeply flawed’

    Flawed, yes, but still useful and frankly, until there is something better, the only way to go given the limited time scholars have.

    ‘So I have never really liked PLoS ONE’s rating system — I would much rather people left detailed comments or questions, since ratings are meaningless without context (e.g. aggregated over a community, with some indication of how ’savvy’ that community is).’

    I agree completely, which is why I suggest enforcing those who rate to also comment. As PLoS ONE’s statistics show: there are more comments than ratings (a bit surprising, not really sure if this scales well). Yet, that means that enforcing raters to also provide a (short) comment is certainly doable.

    ‘if authors are required to show ALL data — no more “data not shown”, no more “representative results are shown” — I think the volume of publication will significantly decrease.’

    These are two different concepts. You can have “methodologically and ethically sound research” without providing all the data for readers. Otherwise, that means all the research that has been published right now without having all the data available is somehow not sound research, which is definitely a faulty statement. In addition, I can think of a number of valid reasons why scholars would not want to share all their data, e.g. they want to continue with it and they want to “monopolize” their chances of more discoveries/papers. Which is a valid reason, especially with the “publish or perish” mentality we still have.

    ‘Moreover, your statement seems to assume that there is sound research that no journal will publish because it is deemed too insignificant’

    That is what I think, or at least published in journals that few people read/cite out of. We do not have the “publish or perish” mentality for nothing. And the significance of the journal impact factor seems to indicate that as well.

    ‘I don’t think that’s right, either. There are plenty of small, specialized journals (and anyway, now there’s PLoS ONE and BMC Res Notes).’

    PLoS ONE is just one journal and they are not full of miracle workers. And we have yet to see if PLoS ONE and similar “approaches” towards publication standards can actually hold their own (in readership/financially), let alone change the “publish or perish” mentality.

  9. October 1, 2008 at 1:26 AM

    I suggest enforcing those who rate to also comment

    Aren’t we back with the “more work for readers” problem? If ratings or moderation systems are to be useful, they need to provide a shortcut past the need to read a bunch of comments. If I have to look through the comments to decide whether to pay much heed to the rating, I might as well read the paper. Ratings systems seem to me to become useful — to the limited extent that they are useful — with scale, like /. or amazon.com. As you say, we have got to start somewhere, but perhaps it would be more profitable to work on increasing uptake/volume (forcing comments runs the risk of reducing volume, I should think).

    I think we are pretty much on the same page in re: filters and prediction, though I am perhaps less sanguine than you about how much “better than nothing” is the current system (not much, according to me).

    These are two different concepts.

    Yes, very much so, another tangent, mea culpa again. I take your point about not all data being appropriate for “supplementary section” release, but I think there are mechanisms (like GenBank or GEO + embargoes) to deal with those kinds of datasets.

    or at least published in journals that few people read/cite out of

    This may be a crucial point, and I may be ascribing to others my own habits. If it’s in PubMed, I read it — if it comes up in a search! — and cite it if it’s relevant. I don’t give a rat’s patootie what journal it’s in. If I need to narrow search results, I use more keywords, filter by date, read a couple recent reviews and then redesign the search — I still don’t pay any attention to journal.

    My feeling is that most (all?) researchers work on problems which are so hyper-specific that they are better off with comprehensive search algorithms than some vague idea about which journals are somehow better than others. When it comes to your own little field, information overload ceases to be a problem — rather, it’s hard to find information, or else you wouldn’t be researching it.

    (Tangent: Similar considerations apply to the idea that “show me the data” somehow means more work for someone. I don’t think that’s true: authors should already have the data behind their claims (“expt was repeated three times with similar results”, etc), reviewers are only better off with more data to review, readers do not need to look at the supplementary section if they are content to trust the authors and reviewers. If it’s not my own hyper-specific field, I will probably just go with the trust model — but I’d feel better about that if I knew the data were available anyway, plus if it IS my field I’m going to want to see everything.)

    PLoS ONE is just one journal

    I named another; there’s also Biology Direct, and all three are indexed by PubMed. I should think between them they could mop up all the “not flashy enough for us” papers rejected by other journals.

    we have yet to see if PLoS ONE and similar “approaches” towards publication standards can actually hold their own

    A fair point. BMC is apparently profitable, though, as are Hindawi and Medknow. So Open Access publishers can thrive, which is at least reason to hope that they will continue to be able to support “experimental” models like PLoS ONE, BMC Res Notes and Biol Direct.

  10. October 1, 2008 at 3:10 AM

    ‘Aren’t we back with the “more work for readers” problem? If ratings or moderation systems are to be useful, they need to provide a shortcut past the need to read a bunch of comments. If I have to look through the comments to decide whether to pay much heed to the rating, I might as well read the paper.’

    That is a fair point. Although if implemented well, with a degree of accountability/answerability that all readers are aware of, I think it should be possible to take advantage of it without being forced to read all the comments if readers do not want to. Like in a “I know PLoS can trace the comments to real people, so the ratings are more likely to carry some intelligent thought, and thus the ratings are likely more trustworthy” kind of way. (Reading further through your post, I think this is somewhat comparable to your ‘I will probably just go with the trust model — but I’d feel better about that if I knew the data were available anyway’ line of thinking.)

    Secondly, call me a pessimist, but I highly doubt many publications will receive so many ratings + comments that readers will go “that is way too much information on the perceived significance of the paper, no way I can read all of that”. I mean, we would not be having this conversation in the first place if that were the (projected) case. In addition, the “traditional” amount of such feedback/ratings per publication is 2, which is the average number of peer reviewers in the peer review process (I think/hope). So with “only” 2 we already match the “status quo”. And that is reachable if the peer reviewers can submit theirs.

    ‘(forcing comments runs the risk of reducing volume, I should think)’

    Probably. Although the PLoS ONE statistics so far have suggested otherwise (i.e. more comments than ratings). Despite those statistics, I seriously doubt it will scale well. Still, if people do bother to rate, perhaps providing some additional lines of text to support that rating is not all that farfetched. Especially when you let them do it anonymously (e.g. register with their real identity, but offer the option to rate and comment anonymously). I know Cameron is striving for open peer review, but there is no denying scholars prefer blind peer review, so perhaps that same approach can boost the amount of ratings and comments in this rating system context. And the thing is, the “only ratings” concept is not that hot, either. I prefer 1 rating plus a comment showing that this person has at least thought about it (which also gives “us” an “index” to gauge the accuracy of this person’s rating) to 2 ratings with absolutely no context. But this point is highly debatable, I guess.

    ‘I think we are pretty much on the same page in re: filters and prediction, though I am perhaps less sanguine than you about how much “better than nothing” is the current system (not much, according to me).’

    Actually, we probably are on the same page concerning the effectiveness of the “current system” (of PLoS ONE). Which is why I wrote this blog post to throw out a few cents worth of suggestions to perhaps make it better 🙂

    ‘This may be a crucial point, and I may be ascribing to others my own habits. If it’s in PubMed, I read it — if it comes up in a search! — and cite it if it’s relevant. I don’t give a rat’s patootie what journal it’s in. If I need to narrow search results, I use more keywords, filter by date, read a couple recent reviews and then redesign the search — I still don’t pay any attention to journal.’

    Again, I totally agree that there really is no substitute for reading and assessing all the papers yourself (on soundness, significance, and what not) if you really want to be sure that the papers are sound and significant (for you/your peers, and assuming you are good at what you do, of course). However, that is not the way the publishing/impact game is played for most of the scientific community, and there has to be a reason why scholars/universities still find it worth playing the impact game this way. I cannot help but think that the limited amount of time and “energy” (or maybe “usability” is a better term?) are two significant reasons for this. Until we can solve these issues, we should provide the scientific community with the services most of them have long accepted to save time and discomfort.

    ‘My feeling is that most (all?) researchers work on problems which are so hyper-specific that they are better off with comprehensive search algorithms than some vague idea about which journals are somehow better than others. When it comes to your own little field, information overload ceases to be a problem — rather, it’s hard to find information, or else you wouldn’t be researching it.’

    Ditto for this part. And I have to get back to my (hopefully reasonable) assumption that by removing (perceived) “significance” as another condition for publication, more papers will be published/available (assuming the peer reviewers can keep up with it, which is another issue that I question). And that goes double with the addition of (universal) Open Access. So I am still not very convinced of this point of yours.

    ‘I named another; there’s also Biology Direct, and all three are indexed by PubMed. I should think between them they could mop up all the “not flashy enough for us” papers rejected by other journals.’

    Assuming you are correct, you are still talking about specific research fields, while I meant all of them. From what I have read, people in the fields of astrophysics and cosmology manage to write a lot of papers: an average of 13 papers per 2 years, based on a survey I have read (and blogged about). Anyway, (OA) journals thrive mostly because they have high journal impact factors. PLoS ONE’s “publish all research that is sound and let readers determine the significance” approach renders its journal impact factor completely useless. So your point assumes that this model is (by itself) sustainable, which, unless you change the current “publish or perish” mentality, it is not. Which means there are not, and will not be, enough journals to “mop up” papers of this (perceived) type.

  11. October 3, 2008 at 9:38 PM

    you are still talking about specific research fields

    You have got me there. I always default to thinking about biomed research, because it’s the only thing I know. In my “defense”, perhaps a field-by-field approach will prove feasible. After all, physics/maths had arXiv long before there was PLoS.

  12. June 6, 2010 at 7:58 AM

    PLoS is a step in the right direction, but the cost of publication is still a limiting factor in getting information published. A new website, http://www.researchvaultonline.com, attempts to answer this problem by allowing research to be posted by the author free of charge, searched by the user free of charge, and contains no formal peer review process. Research Vault will open the door for research to become universally accessible to everyone regardless of level of income, prestige or experience.

