Jane: Suggesting Journals, Finding Experts
Source: Peter Suber’s Open Access blog
A paper by Martijn J. Schuemie and Jan A. Kors titled “Jane: Suggesting Journals, Finding Experts” published in Bioinformatics (Oxford Journals).
Summary: With an exponentially growing number of articles being published every year, scientists can use some help in determining which journal is most appropriate for publishing their results, and which other scientists can be called upon to review their work.
Jane (Journal/Author Name Estimator) is a freely available web-based application that, on the basis of a sample text (e.g., the title and abstract of a manuscript), can suggest journals and experts who have published similar articles.
I’m not sure whether this is really a suitable way of finding relevant journals to submit papers to, but the concept is definitely interesting. If it works, it could remove a serious bottleneck: rejected manuscripts circulating from journal to journal because they were never sent to the right one for peer review in the first place. That would speed up publishing overall while reducing the workload of peer reviewers, which would be fantastic.
At the time of writing, the tool is limited to the field of biomedicine, but that is not its biggest issue. You see, when you run the title and abstract of this very paper through the tool, the journal that published it (Bioinformatics) does not even appear in the top 15 results…
To be fair, just like a business intelligence system, the more information it has, the more accurate it can and should become, assuming the underlying logic is sound. So both in theory and in practice, it should improve as more data is added and more people use it. These kinds of systems also tend to be less accurate for original, outside-the-box ideas, and this one certainly is that within biomedicine: it is not really about biomedicine at all, but an IT application currently specialized for that field. The same approach could easily work in any other field, since the basics (comparing texts and finding similarities) only require text to work on.

Come to think of it, another way of using such a system is to verify the originality or significance of a paper. By running a manuscript through a tool like this, one can easily pick out the related papers and see where they differ, and then judge whether the paper is original or significant. At the same time, the tool could double as a plagiarism checker, since it already matches papers by their wording. Very nifty, actually.
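To make the "comparing texts and finding similarities" idea concrete, here is a minimal sketch of how such a journal suggester could work in principle: score a query abstract against already-indexed abstracts with TF-IDF weighting and cosine similarity, then report the journal of the closest match. The corpus, journal names, and abstracts below are made up for illustration, and I have no idea whether Jane actually uses this particular weighting scheme.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (term -> weight) for each document."""
    tokenized = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()  # document frequency of each term
    for counts in tokenized:
        df.update(counts.keys())
    return [{term: tf * math.log(n / df[term]) for term, tf in counts.items()}
            for counts in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical indexed corpus: (journal, abstract) pairs
corpus = [
    ("Bioinformatics", "gene expression analysis with machine learning"),
    ("Journal of Proteome Research", "mass spectrometry of protein samples"),
    ("Bioinformatics", "text mining of biomedical literature abstracts"),
]
query = "mining biomedical text for literature discovery"

vecs = tfidf_vectors([query] + [abstract for _, abstract in corpus])
scores = [(cosine(vecs[0], v), journal)
          for v, (journal, _) in zip(vecs[1:], corpus)]
best_score, best_journal = max(scores)
```

A real system would of course need stop-word handling and a far larger index, but the core is just this: text in, ranked journals out.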
One way to find out whether the tool really works, or just how accurately it works, is to add a private channel where authors using the tool can report whether they were actually accepted by the recommended journal, or simply report the rank of the recommended journal alongside (the rank of) the journal that actually accepted the paper. That would make for an interesting follow-up study in itself. It would also be interesting to check whether the recommended journals generally have a higher or lower Journal Impact Factor than the journals that actually published those papers. Assuming that academics care about the Journal Impact Factor when choosing where to submit their manuscripts (and believe me, they do!), that would be very interesting to know, too.
I haven’t actually read the paper beyond the title and abstract (which, incidentally, should have been enough for this tool to work). So I’m not sure what the authors’ conclusions and future recommendations are, but these are definitely decent suggestions, I think! Either way, I’ll check back on this tool in a couple of weeks and see whether it has improved 🙂