Archive

Archive for March, 2008

Does journal bundling compromise journal peer review quality?

March 29, 2008

Motivation: Exploring the influence of “journal bundling” on the quality of the journal peer review system.
Problem statement: While information on this topic mainly focuses on journal price barriers (see here, here, here, here, here, here, here, here, here and here), I would like to talk about an equally important factor: peer review quality and journals’ answerability to their readers. (Sorry, I do not feel like referencing them properly).
Findings: If we are to believe that competition increases the efficiency and overall quality of the products on offer, which in this context have to pass peer review, then journal bundling weakens that competitive pressure and thus the quality of the peer reviews.

“Well, glad to see someone knows how to use Google!”

So let us get to the point: Do journal publishers compromise the certification function of the most established scholarly communication model known to man by bundling their journals for subscription?

“Way to be dramatic!”

Well, it is a dramatic issue; it involves everyone in the scientific community! But first, let us define what this post means by “journal bundling”. I will take the description by the Information Access Alliance:

Under the kinds of bundling arrangements IAA believes are anticompetitive, libraries enter long-term, often confidential agreements with large publishers for an electronic subscription to many journals. Usually the bundle is sold with the requirement that a library maintain its historic “spend-level” for hard-copy subscriptions with the publisher. These arrangements have come to be known as the “Big Deal.”

The credibility of journal publishers as the only established executor of peer review (for publication) comes from the golden rule of competition: either you do a good job and get your deserved pay (recognition and paying subscribers) or you make way for others that will. It is not simply a competitive environment of objective third-party quality filters, but a competitive environment for survival, i.e. financial sustainability. Consider the following: whether a research paper is worth sharing “publicly” is first decided by whether it fulfills the (minimum) quality requirements of the journal(s) the manuscript has been submitted to. The responsibility of that verification rests on the journal editors by way of the journal peer review system. The scientific community generally trusts that the journal editors do it to the best of their abilities, because it is their jobs on the line and their results are “open” for all to see and judge for themselves. If they do a bad job and publish research papers of lesser quality, few (if any at all) will read/subscribe to the journal and existing subscribers will eventually leave. At that point the journal will cease to matter, as a journal’s significance is determined by whether it is read and used, i.e. cited, i.e. the journal impact factor.

However, journal bundling weakens this answerability: journals that one might not normally subscribe to, because they do not meet one’s requirements, are still being financed/subscribed to. Bundling lets low(er)-profile journals stay financially afloat when they might not have otherwise, had subscribers assessed them qualitatively on their own merit, and thus the readers/subscribers might be getting shafted on these deals. Bundling removes some of the pressure that journals should normally face when they do not attract subscribers on their own. So I am afraid this may be greed taking on a form that, instead of luring paying subscribers by striving for the most appealing scientific works (which usually correlates with scientific significance), uses sales tricks that bypass the quality element. And that can never be good in the long run, neither for the journal publishers nor the readers.

The importance of the journal (peer review) system

March 14, 2008

Problem Statement: An economics professor blogs on the Open Access concept and its supposed role in the death of the journal (peer review) system(!).
Motivation: As someone who has been researching the concept of scholarly communication (including the concepts of Open Access and the (journal) peer review system), I feel it is necessary to provide a much more realistic and supported view of the expected (and already measured) advantages of Open Access and the importance and sturdiness of the journal (peer review) system.
Findings: Even those well versed in Economics can be utterly confusing and nonsensical in other fields.

OK, so I cannot remember exactly what I typed in Google to get me to this post by a Mr. Henry Farrell on Crooked Timber…

“Lemme guess: incentives for reviewing?”

But it indirectly led me to this post by a Mr. Tyler Cowen on Marginal Revolution.

“And…?”

And I must say that is one bizarre post he makes, very bizarre. And I feel it is my duty to give a better share of the context than he did while explaining his rather radical statements.

First of all, I feel there is a very significant lack of understanding concerning the Open Access concept. I think people really need to read at least two resources before they should even think of criticizing Open Access (especially when the criticism lacks any kind of arguments). The first is Peter Suber’s “Open Access Overview” and then there is Stevan Harnad’s “Primer on Peer Review, Payment and Publishing” (mirror link)

Now, I could show some good links on the importance of the journal (peer review) system, too. But frankly, I think its importance is so significant/obvious that I do not actually need links to make that point crystal clear, I should hope. Though this blogger’s statements make me think otherwise, so let us dive in.

I don’t envision the free access system as the status quo but free.

I agree, but while I am thinking about more access to (peer reviewed) scientific literature = more qualified people getting more knowledge = better papers and such, what he means is:

Ultimately there wouldn’t be journals… I suspect refereeing might die

Well, that is rather radical. But let us move back a step and see how he explains this statement:

Papers would be ranked directly in terms of status and popularity rather than ranked through the journals they are published in.

“Hey hey (assuming that I cared), I would tell you to go back further man: it’s missing the reasoning behind this statement.”

I am afraid that is it: there is nothing between the first quote and this one. He immediately jumps to this conclusion. Personally, I have no idea how he came to it, and he is not making any attempt to explain it, either. In fact, I do not get the idea he gets it himself, because a little later he says:

Ultimately there wouldn’t be journals and this would make a big difference as journals are the current carrier of selective incentives and status rewards.

OK, so he recognizes that journals are the (current) carrier of selective incentives and status rewards. Indeed, there is a strong link between (the existence of) journals and a paper’s status and popularity. Why that would change because the financing of the journal publishers differs is not explained. Somehow this will magically disappear once journals stop charging their readers (for profit)? How does that happen? I do not get this at all, but I would LOVE to hear that reasoning. Because if that can be proven, well, that would make a very strong argument against OA, rather than the drivel that is being spread around now by various parties.

It would be easy to refuse to referee, since you wouldn’t fear being shut out of publication of that journal; I suspect refereeing might die.

I do not get this statement, either. Is he implying that there is no fear of being shut out of publication because journals cease to exist? In that case there is no point in offering to peer review at all, so that cannot be it. Then is it because they magically stop caring and/or mattering? How does that happen? And as he claims that people will refuse to peer review if they no longer fear being shut out of publication of the journal in question, is he saying he refuses to peer review for journals he does not see himself submitting a manuscript to? I can see how there are cases like these occasionally, but talking about it like it is the only or most significant reason to accept/refuse a peer review? Seriously?

And if status were attached to the individual paper rather than the journal, who would bother to become an editor?

First of all, status is already attached to the individual paper: it is called the citation count. And that happens to be one of the most significant indicators of an individual paper’s quality. Not incidentally, the citation count is a significant element in determining the status (i.e. quality) of journals: the journal impact factor. More info on that at ISI – Thomson Scientific.

The JCR (Journal Citation Reports) provides quantitative tools for ranking, evaluating, categorizing, and comparing journals. The impact factor is one of these; it is a measure of the frequency with which the “average article” in a journal has been cited in a particular year or period.
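To make that arithmetic concrete, here is a minimal sketch of the impact factor computation; the numbers are hypothetical, purely for illustration:

```python
def impact_factor(citations_received, citable_items):
    # Impact factor for year Y: citations received in Y to items
    # published in Y-1 and Y-2, divided by the number of citable
    # items published in Y-1 and Y-2 (per the JCR definition above).
    return citations_received / citable_items

# Hypothetical journal: 450 citations in 2008 to its 2006-2007
# papers, of which there were 300 citable items.
print(impact_factor(450, 300))  # 1.5
```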

And for his information: one has been able to search based on citation counts for a long time now. And believe it or not, the significance of the journal (peer review) system did not go downhill since then. And the fact that journal subscription prices continue to rise even in the face of technological advances does not exactly give me the feeling that I am witnessing a dying breed here. In fact, it seems to be growing stronger and bolder instead! And that does not sound economically comforting to me. Thus, I have yet to read an explanation of why Open Access changes any of this. So why would qualified and interested people not bother to become editors? OK, so now we have arrived at his “conclusion” (he has been making them all along, so why stop now, eh?).

In other words, the partial monopolization of for-fee journals makes it possible to produce status returns to motivate both editors and referees.

Yes, by some also known as the status quo. Thanks for the reminder.

Returning to the free setting, refereeing will survive insofar as writing detailed referee comments on other people’s work helps with your own research; it is interesting to ponder in which fields this might hold.

Points for consistency! Now the only things I am missing are the arguments that lead to these repeatedly stated conclusions.

Here is the deal: scholars like rigorous, independent (objective) scrutiny of their work, because passing it instantly gives that work a more credible feel (it may still not be “perfect”, but it has passed the first scrutiny, so more often than not others will not have to repeat it). That is why the journal (peer review) system that was established roughly 200 years ago still stands: it works, and it is considered the best that we have. There is no consensus for a replacement of this system: preprint platforms are not considered replacements of journals by most, if not all, of the scientific community. However, one consensus on a shortcoming of the journal (peer review) system is that it is, to some degree, restricting the scientific community’s access to valuable scientific knowledge.

With the growing feasibility of electronic communication, opening up this access is likely a matter of time. That is just the way it is; it will most likely happen, the only question is when. Unrefereed manuscripts (preprints) are a good example of that. Their rise was not triggered by the Open Access movement; it happened because of technological advances and a growing number of researchers/authors, despite the existence of the journal peer review system. If anything, their growing number is evidence that the journal peer review system cannot handle the increasing load. And it is not going anywhere, even if we pretend it is not there or denounce the potential of Open Access. It is one big gorilla that is not only here to stay, but here to grow. So the question really is: when it happens, do we want to be caught with our pants down or not? I do not, so let us work towards preparing for the most likely event, instead of resisting it and going drama queen with the pants near the ankles when it does happen.

Surely, one key element to “handle” this load of newly available information is the quality filter. The journal peer review system becomes even more important for the scientific community, as it is widely established as the first necessary quality filter. Yet the journal peer review system by itself is not enough qualitatively (and in terms of speed), something that was already established with the proposal of the citation count and the journal impact factor, roughly 50 years ago, by Mr. Eugene Garfield as supplements to the journal (peer review) system. Were people yelling about the death of the journal (peer review) system then? I do not know, but I do know that if they were, they would have been clearly wrong, given the significance of the citation count and journal impact factor since then.

Indeed, peer review by itself was not enough then, it is not enough now and it will not be enough later, with Open Access becoming even more of a reality. This is additionally evident from another new and increasingly significant quality metric: J.E. Hirsch’s H-Index. By then, additional quality filters will become that much more important. But as indicated, there is nothing wrong with that; it is simply scholarly communication evolving to the next phase to handle its next biggest challenge.
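For those unfamiliar with the H-Index: a scholar has an index of h when h of their papers have each been cited at least h times. A minimal sketch of that computation (the citation counts are made up for illustration):

```python
def h_index(citation_counts):
    # A scholar has index h if h of their papers have at least
    # h citations each (Hirsch, 2005).
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```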

Towards a standardized scientific blogging format

March 13, 2008

While still uncertain of the added value of blogging to scholarly communication, I have decided to offer an idea to make it more efficient in terms of providing the right information to scholars.

“You shameless and weak minded hypocrite…”

Nah, not exactly: thinking this one through has made me realize a few more things about the value of scientific blogging. Thus I can say that this has had some value to me, I believe. Anyhow, I will discuss this in more detail in a later blog post, so I can keep this one, regarding my idea, as clear and concise as possible. My proposal concerns adding a standardized format/style to scientific blogging, taking elements of the journal paper format and style: the scientific blog abstract. Observe:

Motivation
With the (scientific) digital communities’ increasing interest in a concept known as “scientific blogging”, there is a need to optimize this new communication channel for the sake of professional appeal and the overall effectiveness and efficiency of the communication. As academics are generally busy ladies and gentlemen in pursuit of knowledge, providers of said knowledge have to be as clear and as concise as inhumanly possible to pinpoint that relevancy. Assuredly, academics need to know what is being done, for what reason it is being done and why it utilizes a particular approach. That is how academics can decide whether they see the significance in committing their time to it or simply leave it alone.

Problem statement
Unlike journal papers, blogs are loose cannons: they lack a standardized structure. This runs the risk of reducing the readability and therefore the appeal of the blog posts in question. As journals and scholarly communication in general have demonstrated, scholars prefer standardized templates, as they have a record of improving the readability and writability of journal papers [Anderson, 2004]. A reasonable stance, surely, as nobody wants to guess the order of chapters with each new book they read, or start a technical study book without knowing beforehand what skills it is supposed to teach. It is even more important in a professional working environment.

Approach
The approach I took to address this problem was very straightforward: I analyzed the journal paper’s style and format to verify which elements would match a blog post. Of course, there can be confusion in determining which elements are relevant and which are not. One visually obvious characteristic of blog posts is that they are generally significantly shorter than journal papers. Accordingly, the suggestions I make here regarding style and format requirements for scientific blog posting will start out “low”: I define the proper use of style and format as proportionate to the “volume” of blog content. Given the theoretical nature of this project, there will be no immediate measurements and validations carried out to confirm the results. The results are derived from the scholarly community’s standard format and style for the primary communication channel of (new) scientific literature, adjusted to this new communication channel.

“Uh, doesn’t that mean you are throwing out a conclusion based on something similarly related, but not quite? With a complete lack of even experimental confirmation? That’s not credible!”

Hmm, what to do, what to do. I suppose I could visit some of the bigger science blogging scenes around the net and see what they think. Ah well, for now let me finish writing this piece first.

Results
By reviewing the elements of the journal paper style and format, I can say that there are elements suitable for every serious scientific blog post. At least two elements of the journal paper format and style are always applicable to whatever type of blog post a scholar makes: Motivation and Problem Statement. Every blog post should have a point: What is the added value of this blog post? Why should this post be interesting to me, the reader? And that is closely related to the problem statement: what issue are we trying to address/solve? As for the relevancy of the Approach and Results sections: if the blog post truly contains added value, such as original (research) work by the blog author, then addressing these two elements is relevant.

Conclusion
Scientific blog posts should have a standardized format to stay consistent with the efficiency and professionalism of academics communicating with each other (scholarly communication). This blog post proposes that scientific blogs adopt the format and style of a journal paper to more efficiently and effectively support academics in finding what they want to read, and at the same time to encourage (scientific) blog authors to think their own story through and work on adding original value, by asking them to first “preface” the main blog post content with a description of said added value. Indeed, it is what I would like to call the scientific blog abstract.

“By the way, an abstract is normally only one paragraph, not a freaking page, you poser!”

Settle down, I am just throwing out more food for thought to enhance this “self-proclaimed” relatively original and potentially significant contribution to the scientific blogging movement.

References:

  • Anderson, G. 2004, “How to Write A Paper in Scientific Journal Style and Format”. Website. Bates College. Retrieved March 11, 2008: Link
  • Koopman, P. 1997, “How to Write an Abstract”. Website. Carnegie Mellon University. Retrieved March 11, 2008: Link

Anglophone versus Asian Peer Reviewers

March 11, 2008

For those active in the field of publishing and scholarly communication in general, you might have heard/read about this survey by Mark Ware & Mike Monkman of Mark Ware Consulting: Peer Review in Scholarly Journals – perspective of the scholarly community: an international study. I found it while going through the American Scientist Open Access Forum. I downloaded and glanced over the summary first because I am a busy man and all…

“Right…”

And I have to admit, I was not too excited about it at first, because the summarized results mostly conformed with other literature on the peer review concept. So I simply put it away in my peer review sources folder and told myself to look at it some other time. Well, I guess that time is now and, on closer inspection (and with that I mean I downloaded the whole thing), I found the details in the full report to be much more interesting. One particularly interesting aspect to me is the difference in responses between Anglophone and Asian respondents. More precisely, check out the following excerpts:

1. Peer review is widely supported. The overwhelming majority (93%) disagree that peer review is unnecessary.
2. Peer review improves the quality of the published paper. Researchers overwhelmingly (90%) said the main area of effectiveness of peer review was in improving the quality of the published paper. In their own experience as authors, 89% said that peer review had improved their last published paper, both in terms of the language or presentation but also in terms of correcting scientific errors.
3. There is a desire for improvement. While the majority (64%) of academics declared themselves satisfied with the current system of peer review used by journals (and just 12% dissatisfied), they were divided on whether the current system is the best that can be achieved, with 36% disagreeing and 32% agreeing. There was a very similar division on whether peer review needs a complete overhaul.

Nothing particularly surprising about that, I should think, but then there is the following excerpt (page 18):

Given the generally low level of overall dissatisfaction with peer review, though, it is perhaps surprising that a strong statement like “peer review in journals needs a complete overhaul” did not receive more disagreement – in fact respondents were divided, with 35% disagreeing versus 32% agreeing. There were clear regional differences on these questions, with Anglophones expressing net disagreement (43% opposed versus 27% supporting), while Asian respondents expressed net agreement (47% supporting versus 23% opposed), with Europe/Middle East/RoW lying between these extremes.

I must say I am a bit surprised by this result. Why are Asian researchers more open to a complete overhaul of the journal peer review system than their Anglophone counterparts? Is the Asian journal peer review system in general noticeably different? Different enough to warrant this view of a complete overhaul? All of my sources on scholarly communication/peer review so far have been in English, and I am pretty sure I have never read anything about this distinction before (which makes these results that much more interesting to me).

And it becomes even weirder: on page 21 there is a section called ‘Regional differences on attitudes to peer review’. It lists the following results:

• in terms of overall satisfaction (Q3), Asians were slightly more satisfied than Anglophone respondents;
• but looking at Q4, Asian respondents were more likely to support critical statements about peer review, such as their net agreement for “peer review needs a complete overhaul” or “peer review is holding back scientific communication” compared to net opposition in other regions.
• On the question of the effectiveness, though, Asian respondents were more likely to agree that peer review was effective, especially regarding the detection of academic fraud and plagiarism.

“What the…?”

In its defence, it was also prefaced with this piece:

There are substantial regional differences expressed, primarily between the Anglophone and Asian regions, on the questions of overall satisfaction (Q3), statements about the need for reform etc. (Q4) and the effectiveness of peer review (Q5). These differences are somewhat hard to understand, as they appear contradictory:

I say! What is going on? I do not get it. I suppose they do not have to be contradictory per se, but I agree they are odd stances to have. And since it was also stated that these results were somewhat hard to understand, I do not understand why they have not been mentioned in the summary. But let us focus on another important difference here: Why do Asian peer reviewers experience better results in detecting academic fraud and plagiarism? Are they doing something special that allows them to detect these things at a higher success rate? Do they have more advanced tools/techniques at their disposal to aid them with this? Are Asian institutions tougher on standards, making successful frauds harder to achieve and innocent mistakes harder to lose track of? Do Asian editors/peer reviewers take a more forceful “this could very well be a fraud/mistake” starting stance, as opposed to “peer review is about trusting that the authors are honest with their reports”? “Maybe Asian researchers do more of this unethical stuff and are therefore more likely to be found out proportionally (but not necessarily at a higher rate)?” Regardless, these are things we need to find out for the sake of potentially improving scholarly communication.

Ah well, let us save this for later and continue with this:

15. The average review takes 5 hours and is completed in 3-4 weeks. Reviewers say that they took about 24 days (elapsed time) to complete their last review, with 85% reporting that they took 30 days or less. They spent a median 5 hours (mean 9 hours) per review.

Good to have a number of hours specifically mentioned; it gives a better view of the workload of peer reviews. Another interesting tidbit concerning regional differences (page 42):

Asian and Rest of world respondents reported times over twice as long (13.4 and 12.5 hours respectively) as Anglophone reviewers (5.6 hours).

“I can see why you would find this stuff interesting, a lil bit controversial?”

Now, I personally would be very intrigued to know why there is such a difference between Anglophone and Asian respondents. I am assuming they are talking about peer reviewing scientific works in the same language, so it cannot be a case of a language barrier. Why do Asian peer reviewers spend more time peer reviewing? Do Asian editors have a harder time finding the right/available peer reviewers for these manuscripts? Do these scientific works generally have more content? Are Asian peer reviewers more dedicated to getting the best (or worst) out of a manuscript? And is that related to their experience that peer review can detect fraud and plagiarism better? “Maybe the manuscripts are generally of lower quality, which means more work for the peer reviewers? Or Asian researchers have more time to spend on peer reviewing?”

Moving on, here is an excerpt from the same page (42):

Comparing responses by the impact factor of the journal for which this review was completed showed that reviewers for high impact factor journals spent much less time on the review, 7 hours, than did reviewers for low impact factor journals (12 hours).

“Aha, my theory is becoming more likely now!”

Not necessarily. Truth is, there is a geographical bias in peer review/journal impact factors. And the reason is simple: most of the world reads scientific literature in English. So when you write your work in anything other than English, it will be more difficult, if not impossible, to reach a large(r) audience. Impact factors of non-English journals are therefore generally lower, while the quality does not have to be nearly as much lower as the impact factors seem to indicate. Extreme case in point: Einstein was German and wrote his most famous papers in German; nobody would consider them of lesser quality for it. “But only with all other things being equal, you mean. Like (not) being able to communicate his stuff effectively!” I guess there is that, but that still does not make the content itself of lesser quality. Writing “De Aarde is bolvormig” is no less accurate than writing “The Earth is spherical”. Yet the first sentence would be understood by fewer people (without translation) than the latter.

However, this no longer holds when we are talking about the same language. In that case, one explanation is that the higher the journal impact factor, the better the editors/peer reviewers. That means time is saved on two fronts: editors screen submissions and only let the better papers through for peer review (and my theory is that the better the paper, the easier it is to peer review, because there is less room for errors/improvements to spend the peer reviewers’ time on), and then there are the better peer reviewers, who can more effectively scan for errors/improvements and point them out.

“Hmm, not that controversial, moving on!”

Page 44:

Asian respondents were more likely than Anglophones to agree to the self-interested reasons (e.g. 33% of Asians supported “increase the chance of being offered a role in the journal’s editorial team” compared to 16% of Anglophones, and 53% agreed that “to enhance your reputation or further your career” was a reason for reviewing, compared to 42% of Anglophones).

“Aha! Asians are more honest? Or more greedy? Both?”

It could be a cultural thing. Maybe Asians are more competitive by nature? Or because there are fewer seats to fill? More financial pressure to get a high-profile job? “The power of the Asian Parent Syndrome, er, sincere encouragement by their loved ones?” Well, it could be anything; not very interesting to me, I guess. Moving on…

As with reviewers, editors based in Anglophone regions handled more papers than those in other regions, especially Asia and Australasia.

Hmm, more hours per review, fewer reviews handled. Somewhat balanced, but I am not sure which to prefer.

Page 52:

The most common form of feedback to reviewers (used by 58% of editors) is to let them know the publication outcome (Q48).

Regionally, Anglophone and Australasian editors were more likely to give publication outcomes than those from Asia or Rest of world. Asia/Rest of world editors were more likely to give feedback on quality of report than average, and European editors less likely.

Again, why this difference? Do Asian editors have more time to give this feedback? Or do they feel it is necessary to help the peer reviewers improve, regardless of time? I do not have any experience with this, but I think receiving feedback on the perceived quality of one’s peer review is great: peer reviewers get to know how other professionals look at their evaluation skills, which should be important to them.

Well, I guess I will end this with a nod to Open Access (or not, depending on how you look at it):
Page 8:

There was a fairly predictable distribution of responses by geographic region, with USA/Canada, Anglophone and Australasia groups reporting about 85% good or excellent access, dropping to 66% for Europe/M.East, 56% for Asia, and 53% for Rest of world (see graph on following page).

Could this be linked to all the questions I have addressed in this post? Who knows. It is reasonable to suggest that the more access people have to scientific literature, the more productive they can be in terms of carrying out scientific activities, including doing peer reviews. On the other hand, it seems inconsistent with the Asian experience that ‘peer review was effective, especially regarding the detection of academic fraud and plagiarism’. It remains a mind-boggling experience…

Scholarly Electronic Publishing Bibliography

March 10, 2008

Wow, I just found a great resource on topics related to scholarly communication: the Scholarly Electronic Publishing Bibliography. The newest version is 71, dated 3/3/2008.

I am going to have a fun time plowing through all those resources! Course, on closer inspection, it seems as if I have already gone through most of the “reforming peer review” and “newer scholarly communication models” articles through other channels over the past year. I guess I did a good job after all in terms of finding the most important information for my research. Nonetheless, a lot more information on the many important elements of scholarly communication is referenced there. It is most certainly worth a link! Oh and on that note, checking out their main page at Digital Scholarship cannot hurt, either!

A recent example of the shortcomings of Peer Review

March 9, 2008

Alright, I made a post on the issues of peer review before, namely that it is not very good at detecting fraud/wrong data, because peer reviewers simply lack the resources to reproduce and confirm the data through experiments. Now we can see another example of that. Over at Gramstain, Adam Ratner blogs about a high-profile retraction: a paper published in Nature was retracted because the authors (including a Nobel prizewinner, actually!) and other groups have been unable to reproduce the data. An interesting read when you are interested in scholarly communication like me, but then he says:

There is often a great deal of controversy following a retraction like this, but it seems that this is one good example that the system of peer-review, publication, and independent replication works well as a road to scientific truth.

Sadly, in this case, I would have to say that peer review does not deserve any credit for this truthful revelation. I would say this is a good example of peer review having a hard time detecting errors in such papers due to lack of resources. And while it is most assuredly the best thing science has to validate new research before communicating it to the larger audience, it is far from perfect and should not be promoted as such. Well, having identified this shortcoming of peer review, I do agree with him that this is a good argument for ‘making published research reports and even primary data as widely available as possible’. Open Access makes searching/going through (pr)eprints (digital versions of scientific literature) easier, so it should theoretically be easier to find flaws and discourage authors from doing sloppy/dishonest work, peer reviewed or not. And that is indeed what makes scholarly communication so effective: the more significant an article seems, the more eyes there are to catch the bugs. And if it was never that significant to begin with, nobody will work with the data, so even if the data is wrong, it will have little to no effect.

However, one problem continues to plague the system: the citation count does not contain the context in which a citation is placed (Opthof, 1996). Scholars might continue to cite this work either as valid or invalid, raising its “importance level”. In some rather absurd cases, retracted papers keep being cited as valid sources. The paper by Dong et al. (2005) reports on this issue:

Invalid articles may pose a considerable bias on the journal IF. Retracted articles may continue to be cited by others as valid work. Pfeifer and Snodgrass [33] identified 82 completely retracted articles, analyzed their subsequent use in the scientific literature, and found that these retractions were still cited hundreds of times to support scientific concepts. Kochan and Budd [34] showed that retracted papers by John Darsee based on fabricated data were still positively cited in the cardiology literature although years had passed since retraction. Budd et al. [35] obtained all retractions from MEDLINE between 1966 and August 1997 and found that many papers still cited retracted papers as valid research long after the retraction notice.

This weakness and/or misuse of the citation count is truly a disservice to the quality of science!

Digital Scholarly Communication & Bottlenecks

March 8, 2008

Yay, I have found a bunch of interesting articles while screening Nature’s blogs. Or to be more specific, this post by Noah Gray highlights some very interesting issues that I have been working on as well, namely that academic environments specifically made to receive commentary from scholars do not actually receive enough commentary (or any at all). Anna Kushnir of The Journal of Visualized Experiments made a blog post here concerning this lack of love for scientific blogs/articles on the world wide web. She addresses some good points which, by the way, have also been discussed before in this corner of Nature’s blogs: Peer-to-Peer. In addition, this article by M. Mitchell Waldrop at Scientific American generated quite a bit of commentary on this topic as well. Anyway, rather than being just a cheap advertisement for other digital places, here are my 2 cents…

From what I have intensively(!) read the last year on peer review and scholarly communication…

“This is a good time to put up a disclaimer: this blogger has neither published nor peer reviewed a journal paper before. In this situation, he is an “intensive” dreamer and not a doer!”

…lack of time and lack of “real” incentives are indeed the biggest bottlenecks here. Apart from that, there is just a lack of structure, and therefore a lack of efficiency, in doing this the blog way. Blogs are all over the place, and there are no standards and no minimum quality screeners. If a journal asks you to peer review, it is normally a paper that has at least been screened by an experienced editor with an “OK, this sounds at least worth a review” rubber stamp on it. That helps, but blogs do not come with that guarantee.

Besides, who knows what they do with those comments? OK, they are blogs written by scholars, but how serious are those pieces? Have they been thought through well, or is it just an “OK, this is on my mind right now, and I just want to generate some discussion” kind of article?

“Like yours?”

Are they truly worth your time then?

“Definitely not!”

Even if their intention is to generate some significant thoughts and make a real paper out of it, how long will that take, and what will be its success rate? There are also only a few places that gather scientific blogs in one place, rather than scattered over obscure URLs that few people will find/visit regularly. Is it still worth your commitment then to seek them out and contribute to them? There are so many “what if” factors associated with blogs that I am pretty sure the whole idea of serious commentary for scientific blogs will not take off anytime soon.

This does not necessarily apply to Open Access organizations such as PLoS and (Open Access) (pr)eprint repositories such as arXiv and Nature Precedings, which have minimum screening and even published articles available. Add to that the interoperability of the various (pr)eprint platforms by way of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), and it is possible to have search engines such as OAIster act as a centralized contact point for these resources. Do not underestimate these protocols and the concept of “harvesters” (read up on this concept here, by Swan et al., 2005), as they are dynamic enough to do more than simply aid in interoperability between digital (pr)eprint platforms. Rodriguez et al. (2006) designed a peer review system that can use OAI-PMH to find suitable peer reviewers and thereby enable a peer review session, complete with “tagging” of the articles that have been peer reviewed.
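To give a feel for how lightweight this protocol is: an OAI-PMH harvester is essentially an HTTP client that issues requests such as verb=ListRecords against a repository’s base URL and parses the XML response. A minimal sketch (arXiv’s public OAI-PMH endpoint is used purely for illustration; any compliant repository would do):

```python
# Minimal OAI-PMH harvesting sketch: fetch Dublin Core records
# from a repository and print their titles.
from urllib.request import urlopen
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

BASE_URL = "http://export.arxiv.org/oai2"  # arXiv's OAI-PMH endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

with urlopen(BASE_URL + "?" + urlencode(params)) as response:
    tree = ET.parse(response)

# Titles sit in the Dublin Core namespace of each harvested record.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)
```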

Thus, it is much more likely that the digital open peer commentary movement will be kicked off by way of (pr)eprint platforms rather than scientific blogs, because it is technically already possible and also scientifically preferable. And with that I mean that the articles are already written in the scientifically preferable format, i.e. journal paper style. If you ask me, unless blogs replace journal papers as the default format for scholarly communication, there is no real advantage to having qualified scholars focus on scientific blog posts as opposed to preprints. Course, proof is stronger than argument, so the people at Research Blogging are focusing on making this scientific blog concept more efficient and user-friendly, to view and filter these blogs on quality.

Then there is the issue of “why spend my time on peer reviewing when nobody is specifically asking me for it, while I am busy with my own research and with peer reviews that journal editors(!) have personally requested (and will remember if I say no)?” Peer reviewing is far from a selfish act, but it is not a completely selfless act, either. For one, peer reviewing puts you on the good side of journal editors. And if they believe you are good, they will assign better papers to you, or you can be more lenient with accepting peer reviewing tasks because you can show them what you have already done and/or are still doing. Posting anonymously on preprints will not do this for you, and I doubt “I have already peer reviewed way too many preprints in repository x and y” is a good excuse to them (even if it is true).

Following that line of thinking, I do not see the big advantage in the way PLoS works: they are inviting commentary on papers that have already been peer reviewed and published (by their journals). Blogs are largely guilty of the same: they are not the standard for communicating new scientific research; that is the task of scientific papers. Therefore, I consider scientific blogs to be, generally, more neatly visualized comments on (published) scientific papers. Asking people to review those scientific blogs is akin to asking people to review already peer reviewed and published papers. If we consider the whole concept of “lack of time”, does the scientific community really want potential peer reviewers focusing on (blogs based on) peer reviewed and published journal papers? Is it not more efficient for them to focus on unrefereed/unpublished scientific literature instead? Those published papers will get their “commentary” in the form of other papers carrying their research forward, if they are significant enough. That is the point of (search engines that filter papers based on) citation counts and journal impact factors. Is that not enough? I just do not see how this will make scholarly communication more productive, having qualified scholars doing more of the same on already scrutinized works. (In PLoS’ defence, I have read in an interview with Chris Surridge, the UK-based managing editor of PLoS ONE, that this is their way of replacing the journal impact factor as a quality indicator for (individual) journal papers. Considering that the journal impact factor is generally not considered an accurate quality indicator for individual journal papers, I can see sense in their objective. I still think using potential peer reviewers for this is highly questionable, though.) Then again, I suppose there are some clear-cut cases where discussions (through blogs) are productive: when they are directly related to the (current) research of the posters/reviewers.

“And, thus?”

OK, putting that aside for later and focusing on the lack of commitment thing. I think that to kick-start something like this, there is a need for a widely recognized (and with that I especially mean by journals and universities/scholars) unified digital environment (involving preprints, such as Nature Precedings, arXiv etc.) where qualified people can “peer review” and be recognized for what they are doing, while still staying anonymous at the same time.

“Huh?”

Depending on how you go at it from a technological perspective, they are not mutually exclusive. I do not want to go into the details of that (yet).

“Regardless, isn’t that just doing it the old way, but then over the Internet?”

Not exactly. One obvious improvement is that you have taken the “leash” off potential reviewers: rather than peer reviewing only when asked, they can peer review whenever there are suitable preprints for them (on top of being asked, if they feel there is room for that). Also, once that is possible, we have a kind of environment that is friendlier towards more interesting incentives to experiment with, to further encourage those qualified to participate. After all, there is no point in doing something for “the open community” when “the open community” is not there to begin with. But once they are there, they might find it more productive and fulfilling to do something.

“Ah, the chicken and the egg problem, how depressing.”

Gotta start somewhere…

“Speaking of which, another problem with scientific blogging is that nobody is really around to finalize these posts. For example, this blog post has been edited a number of times after it was posted: minor errors were fixed and more content/links were added.”

Course, “live” articles can be a good thing, because they have a likelihood of being (more) accurate over an extended period (assuming there are committed people working on improving them, which I doubt). However, if other scholars were to reference these articles to make their point, and the content has been changed, well, that would create confusion and ruin the trustworthiness of it all. So I am sticking with my theory that digital open scientific commentary will work best on (pr)eprint platforms before it finds root in the scientific blog concept, if ever.