Archive for April, 2008

Automated Download Scripts vs RapidShare: Cats!

April 25, 2008

RapidShare wins the “Most Annoying Anti-Download-Script” award!

Ugh, for real? Yes, it is real.

Something tells me RapidShare hates automated download scripts more than any other free file-sharing service does! I gotta say, this is either the best or the worst attempt any service has ever made to get me to go premium. It is so confusing that even I have not figured it out yet.

Anyway, since the verification code is 4 characters, I think the easiest way to handle this is actually not to search for the characters with that particular cat graphic, but for the ones that do not have it. That way, you only have to spot the ones that are out of the ordinary, usually just 1 or 2, and simply skip them. Something like the sketch below.
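
A minimal sketch of the idea in Python, assuming the captcha image has already been segmented into per-character grayscale pixel grids (a big assumption in itself) and that the cat graphic can be detected by some score; the cat_score heuristic below is purely hypothetical and just illustrates the "drop the odd ones out" approach:

    def cat_score(glyph):
        # Hypothetical heuristic: density of dark pixels in the lower
        # third of the glyph, where the cat graphic supposedly sits.
        # A glyph is a list of rows of grayscale values (0-255).
        h = len(glyph)
        lower = glyph[2 * h // 3:]
        dark = sum(px < 128 for row in lower for px in row)
        total = sum(len(row) for row in lower)
        return dark / total if total else 0.0

    def pick_code_glyphs(glyphs, code_len=4):
        # Rank the glyphs by cat-likeness, drop the 1-2 outliers that
        # lack the cat, and keep the code_len most cat-like glyphs in
        # their original left-to-right order.
        ranked = sorted(range(len(glyphs)), key=lambda i: cat_score(glyphs[i]))
        keep = sorted(ranked[-code_len:])
        return [glyphs[i] for i in keep]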

The value of (Scientific) Blogs to Scholarly Communication?

April 20, 2008

Motivation: Exploring the role of (scientific) blogs in contrast to the traditional ways of scholarly communication.

Problem statement: The role of blogs in supporting scholarly communication is regularly questioned. Given the strictness of the scientific method, how does the free-for-all “shotgun” culture of blogs fit in, if at all?

Findings:
  • Blogs to highlight “interesting” scientific findings: suitable, although there are more efficient ways to do so (such as RSS feeds of big scientific news/journal sites and initiatives like Connotea). Then again, as a “new” medium for these kinds of things, it is highly suitable. After all, blogs are hot now, and you gotta go with the flow!
  • Blogs to generate scientific discussion: suitable, but other initiatives such as forums generally offer better features to support discussions.
  • Blogs to contribute original scientific knowledge: they lack accountability, structure and, seemingly, effort. Vastly inferior to preprints, i.e. the scientific paper format, so good luck with that.

Conclusion: We may need something that combines the convenient and efficient services of Connotea with the ease of blogging/forums to efficiently update the scientific folks with the relevant findings and generate discussions.

As I have addressed in an earlier post: the scientific paper format has been designed very efficiently. Journal paper abstracts inform the readers of the topic, the problems, the methodologies, the results, the conclusions and ultimately the added value of the paper in a single paragraph. It is much like a very short summary of the paper, and it is usually free to read regardless of whether the paper is Open Access or not. This gives readers a short version of the paper so they can decide whether it covers aspects relevant enough for them to read on in more detail. Afterwards, all these elements, and in particular the methodology, results and conclusion, appear in a more detailed description to feed the need for information.

This is traditionally missing in blogs because they are not an outlet for original research work but somewhat of an alternative communication channel for already published original works. And even when bloggers blog about peer reviewed/published articles, there is rarely a mention of what is theirs and what is simply the article's content reworded in their own voice. That being the case, the added value is then a somewhat personal quality filter with no original scientific added value other than the reference to the original article. And in that case, there are other initiatives that can point scholars to relevant papers much more efficiently, such as Connotea, Nature's free online reference management service, which makes sharing and finding papers rather simple.

Anyway, the fact that scientific blogs traditionally lack (1) a standard format/structure and (2) original scientific information raises rather significant questions about their usefulness. Improving the original added value of scientific blogs, while still emphasizing their speedy and easy accessibility, seems difficult to realize, enforce and sustain. I blogged about a standard scientific blog format before, in response to the first issue. However, I am unsure of how appealing such an idea is: scientific bloggers might not find the idea of adding an extra paragraph (a blog abstract, if you will) describing their original value (or lack thereof) very interesting. Let us see how some of the research blogging community try to solve the second issue.

Research Blogging by BPR3

BPR3 allows readers to easily find blog posts about serious peer-reviewed research, instead of just news reports and press releases. We provide bloggers with an icon they can use to show when they’re talking about a peer-reviewed work that they’ve read and analyzed closely.

Great idea; I am all for more (optional) quality filters. Assuming it is accurate, it could significantly contribute to the advancement of scholarly communication and science in general. However, going through their guidelines, it seems like they envision doing more than just being a scientific quality filter for blog posts. There is nothing wrong with trying to add more value to the scientific community by way of blogs, but I think there are some very complex issues at hand here. Let me address the guidelines that focus on the original value: #4, #5 and #7.

4. The post author should have read and understood the entire work cited.
5. The blog post should report accurately and thoughtfully on the research it presents.
7. The post should contain original work by the post author — while some quoting of others is acceptable, the majority of the post should be the author’s own work.

In theory, I think these are significant guidelines. In practice, I wonder whether they are actually enforceable and sustainable. For one, I have a hard time imagining that their reviewers can actually read every paper/article that a blog references, and then use that knowledge to verify whether the blog authors themselves understand the scientific works they are linking to. Even if they can right now, it is difficult to keep that up if and when the scientific blog community grows. Additionally, I would have to ask how accountable this process is. Who is responsible for making sure bloggers adhere to these guidelines? What is their experience/background? How do we know they did what they said they would do? How do we measure this?

Assuming it can be enforced and sustained, we could be talking about some pretty significant sacrifices for these activities. I mean, they are solid guidelines. In fact, they are so solid that they read much like the requirements for scholars participating in journal peer review. So if the reviewers do have the skills to perform these quality assurance activities, is it efficient to use their time evaluating already peer reviewed papers to certify a blog post that may or may not have original added value? Come to think of it, what is considered original value for blog posts anyway? For Research Blogging by BPR3, it includes the rewriting and summarizing of the article in the blogger's own words. So added value: yes. New knowledge: no. To be fair, if the blog posts concern errors or other causes for concern in published papers, then that would be very meaningful. But just to validate a blog post that highlights how interesting the research is? Or to generate a discussion, which may be more suitable for forums? Or a summary/rewrite of the original article? Is that not a lot of effort for little gain as opposed to, say, peer reviewing unrefereed manuscripts (i.e. preprints) for journal publication, which might truly have some real original knowledge to share with the scientific community?

In fact, if they have to blog & review scientific literature, why not blog & review preprints? That way, they can actually provide original value by contributing to the validation of so far unrefereed scientific knowledge. It might even support the journal peer review process, either directly, by submitting these blog peer reviews, or after the author has improved the scrutinized manuscript. Much more productive and efficient if you ask me. Of course, reviewing preprints is a bit more challenging (and risky) than just covering peer reviewed publications (i.e. postprints), but if you are going to blog about and scrutinize scientific knowledge, might as well do it right and focus on the stuff that has yet to be validated?

“Well, that is only assuming that the people verifying peer reviewed publications have the right expertise and the time to do the same to unrefereed manuscripts.”

Well, I suppose there is a difference between understanding a scientific article and being able to scrutinize it. I wonder if that is the real issue at hand here?

“It ought to be for you; you’ve never done a formal journal peer review before!”

Well, aside from the fact that I was still not sure what I thought of this initiative on a more serious level, that is why I never bothered to apply for “membership”. However, most of the bloggers that are getting these BPR3 tags do not strike me as people who are unable to perform proper peer reviews. But it is true that it takes significantly less time and effort to cover peer reviewed publications than to peer review preprints. So on a less serious note, blogs do provide those quick and dirty highlights of scientific literature. And since blogs are so popular right now, and probably will stay that way for quite some time, I guess there is some advantage in going with the flow to reach out to others? It is hard to make up my mind about this; I guess it requires some more thinking.

More on the original value of blogs
Over at RealClimate, a blog post concerning the value of blogs and peer review received quite a few comments. A lot of those comments concern climate physics, which I will not go into because they confuse the hell out of a non-climate guy like me, but I found this particular comment by Myles Allen rather interesting:

I personally would never comment critically in public on a peer-reviewed paper even to point out “obvious problems” (who is the judge of what is obvious here?) without at least exchanging e-mails with the authors to make sure I had understood it correctly (I’m more than happy to criticize non-peer-reviewed material on Channel 4).

I appreciate that publish-first-and-ask-questions-later is “traditional” practice in blogging, but perhaps, as scientists, we should be challenging that practice.

As far as I am concerned, anything that is published and made publicly available is free to be criticized. In fact, if there are indeed flaws in it, they should be pointed out for the sake of the other readers and scientific progress in general. However, I also agree that, in the case of scientific papers, that should only be done when you are sure of your case. We would not want it to become standard practice for mudslinging, reputation-smearing, eye-gouging dirty fights, after all. And indeed, one way of keeping it civilized while trying to provide value is to contact the authors. Additionally, this could also prevent public embarrassment for both parties. One issue with this measure is that it would significantly slow down or even discourage the practice of criticizing peer reviewed/published research papers. I mean, what if the authors wait a long time before responding or simply do not respond at all? The whole idea of blogs is that they are a fast (and easy) communication medium, and removing that element would remove the key reason for blogs' popularity, I think.

Myles Allen continues this over at Nature’s Climate Feedback blog post on Web 2.0:

Just to be clear, I don’t have a problem with blogging per se, if bloggers were to comply with the old-fashioned courtesy of checking with the authors that they have understood a paper correctly before criticizing it in public (as opposed to over coffee or the conference bar).

If bloggers on high-profile sites like RealClimate were to adopt a simple policy of fact-checking comments on papers with the papers’ authors before posting them, and if necessary posting a response from the authors together with their post, it would certainly be a vast improvement on current practice. The argument that the authors can always respond on the blog doesn’t work, because the responsibility for fact-checking should surely be with the blogger, not his or her unsuspecting targets.

As I agree that prevention is better than cure, I think this is a strong point as well. However, going back to the self-corrective nature of scholarly communication: one can also reason that if the blog is sufficiently popular/significant, the truth will come out one way or the other, either through other blogs responding to it or in the comments of that blog post. And if the blog is not popular/significant, then nobody will take notice of it anyhow. So while risky, it is not an impossible situation to correct. Of course, and this is particularly true for blogs, in the time between sharing faulty information and the correction, it could have traveled quite far already. Hmm, dilemma.

“What? No closing paragraph to give a sense of closure to this piece?”

I guess I should, but I cannot think of any. Then again, a lack of closure kind of fits this topic, considering its young and dynamic nature. So I guess I will write something extra in the “blog abstract” and forgo writing a “that’s all, folks!” paragraph.

Getting This Blogging Thing Down Pat!

April 20, 2008

Motivation: As I am becoming a more and more active blogger, I think it is only productive to explore what makes blogs successful, in order to improve the quality of my blog.
Problem statement: Unsure how to reach out to the right communities to add value. So time to see what I am doing right/wrong!
Findings: Overall, I score pretty OKish on most of those points, except that I have not been very involved with the rest of the scientific blogging community, and, likewise, I do not plug my own stuff enough on other blogs.
Conclusion: In order for this blog to have more “success”, I need to mingle some more with my fellow bloggers and shamelessly advertise myself some more. I am unsure whether I feel like doing the latter, but mingling with the rest of the community sounds very productive to me.

Over at Nature’s Blog forum, I found an interesting reference to an article by Paul Boutin, senior editor of Wired magazine, who writes about ‘what a number of successful bloggers with successful nonblogging careers say are the ways to think about getting into the business of blogging’ in “So You Want to Be a Blogging Star?”

So let us sum them up and see how I am doing.

“You mean we, of course!”

Don’t expect to get rich.

Well, I got that one down pat, for sure!

“I am already happy if we don’t lose money over this!”

Write about what you want to write about, in your own voice.

Definitely check this one, too. And I feel I can truly write what I want to write about, given that I am somewhat anonymous. That helps with the unrestricted thing, but perhaps less in the credibility department. In addition, I can do more than express myself in my own voice: I have a special web-based alter ego to assist me with the writing and talking thing! Thus I have in fact two voices! Points for extra effort!

“I certainly deserve credit for the added value! In fact, without me, none of this would have been possible!”

Fit blogging into the holes in your schedule.

“Well, you’re a bum. So this one is pathetically easy. Which is pathetic.”

I cannot argue with that…

Just post it already! The hurdle that stops many would-be bloggers is fear of clicking the “Publish” button.

Hmm, I can say I have experienced this as well. I still have a bunch of drafts I have not posted because I feel they lack something. But it is also true that I regularly go back to published blog posts to modify them without any advance warning. Hmm, what a dilemma.

“The signs of a low quality blog for sure.”

Keep a regular rhythm.

Check. I try to think up new relevant stuff to write about, and I write about it as much as I think I can.

Join the community, such as it is. There’s an unwritten rule — actually, it’s written about a lot on blogs — that you should always link back to bloggers whose ideas you repeat, or from whom you get a cool link to another site.

Check again. I believe properly crediting others is important, so I also reference the “intermediate” sources that led me to the main source, such as Nature’s blog forum thread in this case. And I have bookmarked a whole bunch of interesting blogs that I like to visit and occasionally comment on, just to join the fray.

Plug yourself. That’s what all the name-brand bloggers do. It’s not bad form to send a short note to a prominent blogger drawing his or her attention to a really good blog you wrote.

Have not mastered this one yet. I am not into directly advertising my blog posts; I find that somewhat sleazy. On the other hand, there are plenty of sites made exactly for this purpose that would be happy to allow such self-plugging. I just have not made up my mind to join them yet.

So there we have it. All in all, I think I score pretty well. And I am of course content with my blog and the posts in it, even though the number of visitors may be low. I guess I am not doing it for the success anyhow.

“I should think so, because you’ve never had any. Which would make that goal a little bit odd, to say the least!”

Categories: Web (2.0)

Scholarly Communication 101

April 19, 2008

Motivation: Considering the (current!) focus of this blog on scholarly communication, I wish to give my own take on what it is. This post will probably be expanded over time, to make sure the information is sufficiently comprehensive (not to mention topical and accurate).
Problem statement: No clear overview of a key concept of this blog’s main focus just yet.

I wish to give a little bit of information on the concept of scholarly communication in general, focused particularly on the scientific paper and the journal publisher, as they are the primary container and distribution channel for scientific knowledge respectively.

See, in a nutshell, this is how it works:

  • Scientists do research;
  • Scientists record their activities and findings in a document/manuscript;
  • Scientists submit the document/manuscript to an objective (third party) institution (e.g. a journal publisher);
  • The journal publisher fulfills the 4 fundamental functions of scholarly communication: registration, awareness, certification and archiving [Roosendaal and Geurts, 1997];
  • If it passes the certification requirement, which is standardly a process called “peer review”, the knowledge is published in the journal;
  • From this point on the information is further communicated in various forms: television, websites, blogs, radio, forums and so forth;
  • It also reaches perhaps the most important receivers of that communication, the scientists' fellow peers, to be utilized for research purposes;
  • Rinse and repeat, arriving at a full circle.

The Scholarly Communication Model
The functions of scholarly communication, which essentially define the purpose and requirements of a scholarly communication model, are so important that I will point to a summary of them (I am not entirely sure why [Van de Sompel, 2006] speaks of 5 functions while the original authors mentioned 4, but the addition seems pretty valid to me):

  • Registration, which allows claims of precedence for a scholarly finding.
  • Certification, which establishes the validity of a registered scholarly claim.
  • Awareness, which allows participants in the scholarly system to remain aware of new claims and findings.
  • Archiving, which preserves the scholarly record over time.
  • Rewarding, which rewards participants for their performance in the communication system based on metrics derived from that system.

As the journal publisher is the first (and the most established) party to fulfill all of these functions, everything else serves as a reinforcement of this model and these functions. Most other channels reinforce the awareness and the rewarding functions. Of course they archive content too, in their own way, but archiving something that has already been “formally” archived is not very significant from the perspective of preservation. The same logic applies to why they do not generally carry out the registration function.

Their role in supporting the certification function is a different matter, though. The full circle concept shows why science is a “self-corrective” process: the knowledge is used, and whoever uses it will confirm its accuracy by obtaining results that conform with what is described in the paper. Thus, while the certification function of the journal model is very significant, the real test of validity comes when peers apply that knowledge and confirm the expected/described results. In terms of the speed of finding and correcting errors: the more significant/sensational the science, the quicker and better it is verified by other qualified scholars. Likewise, the less significant/sensational the scientific knowledge, the lower the odds of it being put into practice and the longer it takes to get verified. However, as its utility rate is low, it largely does not matter in the odd chance that the information is wrong: it is not being used anyhow.

Therefore, even if the certification function of journal publishers fails, which is sadly not a rare occurrence, science has a way to check and correct itself for accuracy eventually. A beautiful system indeed, as long as people are open to modifications (and to the expectation that published knowledge might not be correct). On a somewhat related note, this is how Paul Ginsparg, the founder of the world’s largest Open Access e-print archive, puts it on the subject of fraudulent work, in Ars Technica’s “Plagiarism and falsified data slip into the scientific literature: a report” by John Timmer:

“There’s little effect on science,” Dr. Ginsparg said, “since the people who produce high quality work don’t need to plagiarize, and the people who do need to plagiarize don’t produce high enough quality work to affect anything.”

Perhaps a bit rough around the edges, but I think this makes sense, too. And how do you find out whether a paper/journal is any good? Well, there are plenty of established indicators of quality, aside from the certification function. But since the certification done by the journal publisher is the first quality filter, let us go with that.

Peer Review, Citation Count and the Impact Factor
The certification function of the journal publisher model is called peer review. Traditionally speaking, peer review is the process of peers scrutinizing manuscripts to determine whether a manuscript is of sufficient quality for the institution’s standards. There are two main established types of (third party) organizations that intermediate and oversee this peer review process: the journal publishers and the grant institutions, for publication and funding purposes respectively.

Instead of blogging my own take on the ins and outs of peer review, there is already so much information on it that I guess that is not necessary. So here are a couple of links I recommend on the topic of scholarly communication and peer review:

Peer Review: the challenges for the humanities and social sciences
Peer Review, a postnote of The Parliamentary Office of Science and Technology

So that is it concerning peer review. But peer review in itself is not a very objective measurement of the quality of a paper, as it depends on the journal’s standards, the peer reviewers, the authors and of course the journal editors. So clearly, there is a need for objective indicators of manuscript and journal quality. There are a number of established quality indicators, but the two most established are the citation count and the journal impact factor.

The (manuscript) citation count is quite simply the number of times a paper has been cited by other (published) articles. So a paper with a citation count of 50 has been cited by 50 papers. The (journal) impact factor is the number of current citations to articles published in a specific journal in a two-year period, divided by the total number of articles published in the same journal in that two-year period [3]. For example: the journal Cell has an impact factor of 39.191, i.e. every article published in issues of Cell in 1992 and 1993 was quoted in 1994 an average of just over 39 times. Every article in Nature for the same period was quoted in 1994 just over 25 times [4].
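
In code, the arithmetic is trivial; here is a quick sketch, with hypothetical numbers chosen only to reproduce the Cell figure above (the real citation and article counts are not given in my sources):

    def impact_factor(citations, articles):
        # Citations received this year by the articles a journal
        # published in the previous two years, divided by the number
        # of articles it published in those two years.
        return citations / articles

    # Hypothetical numbers: 39,191 citations in 1994 to 1,000 articles
    # published in 1992-1993 would yield Cell's impact factor.
    print(impact_factor(39191, 1000))  # 39.191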

More on this at the following links:
3. The Thomson Scientific Impact Factor
4. Impact Factor, Immediacy Index, Cited Half-life
5. Glossary of Thomson Scientific terminology

Financial Sustainability: Traditional vs Open Access
Traditional = Access to literature is charged (subscription, exclusive access and so forth).
Open Access = Access to literature is free of charge.

Open-access (OA) literature is digital, online, free of charge, and free of most copyright and licensing restrictions [6]. OA is compatible with copyright, peer review (and all the major OA initiatives for scientific and scholarly literature insist on its importance), revenue (even profit), print, preservation, prestige, career-advancement, indexing, and other features and supportive services associated with conventional scholarly literature. The legal basis of OA is either the consent of the copyright holder or the public domain, usually the former. The campaign for OA focuses on literature that authors give to the world without expectation of payment.

On scholarly communication and Open Access in general, I found the following good reads on the topic:
6. Open Access Overview. Focusing on open access to peer-reviewed research articles and their preprints.
And two more by the House of Commons: Science and Technology Committee:
7. Scientific Publications: Free for all?
8. Responses to the Committee’s Tenth Report, Session 2003-04, Scientific Publications: Free for all?

Well, that should cover the more important things concerning scholarly communication. Will update as I see fit.

Space: Garbage and Garbage men!

April 15, 2008

Motivation: Reading about this space debris story reminds me of another story. Are we ready to create a new generation of state-of-the-art garbage men?
Problem statement: None, apart from the garbage in the air.
Findings: The Japanese have a rich imagination. Oh, and there is garbage in space that is ours.
Conclusion: In order to improve your imagination, mingle with Japanese culture. Oh, and watch out for garbage in space.

The European Space Agency reports on and visualizes our unhealthy lifestyle in space:

Space debris: evolution in pictures

Between the launch of Sputnik on 4 October 1957 and 1 January 2008, approximately 4600 launches have placed some 6000 satellites into orbit, of which about 400 are travelling beyond geostationary orbit or on interplanetary trajectories.


Today, it is estimated that only 800 satellites are operational – roughly 45 percent of these are both in LEO and GEO. Space debris comprise the ever-increasing amount of inactive space hardware in orbit around the Earth as well as fragments of spacecraft that have broken up, exploded or otherwise become abandoned. About 50 percent of all trackable objects are due to in-orbit explosion events (about 200) or collision events (less than 10).

“Looks like a new burger design of the Mac, complete with small wobbling guys feasting on it.”

It certainly reminds me of fast food restaurants, in more ways than one.

Anyway, there is a Japanese animation series (anime) called Planetes:

Planetes

Plot Summary: In the year 2075, mankind has reached a point where journeying between Earth, the moon and the space stations is part of daily life. However, the progression of technology in space has also resulted in the problem of the space debris, which can cause excessive and even catastrophic damage to spacecrafts and equipment. This is the story of Technora’s Debris Collecting section, its EVA worker, Hachirota “Hachimaki” Hoshino, and the newcomer to the group, Ai Tanabe.

Space garbage men! Awesome. Who would not dream of being a garbage man, now?

“Excellent career choice. Sounds like a blast!”

More to the point, who would say anime is just cartoons for kids? Lots of interesting stuff in Japanese (animation) culture, as expected. There is a reason why these guys are so technologically advanced: their vast imagination and their drive to fulfill their goals.

Journal Peer Review for Preprint Repositories: The Overlay Journal & the RIOJA project

April 14, 2008

Motivation: My interest in scholarly communication has brought me to the concept of “Overlay Journals” and their role in certifying unrefereed research papers (preprints) in preprint repositories. I cover their function and their (to me visible) added value to scholarly communication.
Problem statement: Hmm, well, if I have to define a problem, I would have to say perhaps my current lack of knowledge and understanding of this concept? Fortunately, I intend to solve this problem with this blog post (and the digging it took to write it).
Findings: Overlay journals are a cost effective way to run a journal. However, there could be difficulties with raising the awareness of your prestige, if you use repositories as your sole base for peer reviewed literature.
Conclusion: Preprint repositories, combined with new initiatives such as the overlay journals, can contribute more and more value to the scientific communities.

Scholars like the concept of having unrefereed research papers (preprints) available to them. The growing use of, and content in, preprint repositories is an indication of that. This is particularly true in the fields of physics, economics and math, but other fields are growing into this “custom” of sharing preprints as well.

* A preprint is any version prior to peer review and publication, usually the version submitted to a journal.

From “Open Access Overview” by Peter Suber

One significant problem with preprints and their repositories is that, by definition, the preprints are either unrefereed, or refereed but not (yet) qualified for publication in some journal. OK, assuming that most of them are largely correct anyhow, they provide value by communicating their contents much earlier than the journal (peer review) system does. And a lot of them will eventually be published by a journal, albeit often with their contents partly modified/improved. However, whether through the formal journal peer review system or through individual scrutiny by peers without publication, certification of research papers is always done to guarantee a degree of quality and validity, if only for that extra sense of security that the material being read is valid. And the task of providing that extra sense of security rests on the shoulders of the journal publishers. Conveniently, there is actually a type of journal that solely focuses on treating preprints in repositories as submissions and putting them through the journal peer review process: the overlay journal.

Overlay Journal.
An open-access journal that takes submissions from the preprints deposited at an archive (perhaps at the author’s initiative), and subjects them to peer review. If approved (perhaps after revision), the postprints are also deposited in an archive with some indication that they have been approved. One such indication would be a new citation that included the name of the journal. Another could be a link from the journal’s online table of contents. A third could be new metadata associated with the file. An overlay journal might be associated with just one archive or with many. Because an overlay journal doesn’t have its own apparatus for disseminating accepted papers, but uses the pre-existing system of interoperable archives, it is a minimalist journal that only performs peer review. It is important to Free Online Scholarship (FOS) as an especially low-investment, easily-launched form of open-access journal.

From Guide to the Open Access Movement by Peter Suber

OK, so the differences are not actually that huge when you think about it, because authors who have preprints deposited in preprint repositories commonly submit them to journals for peer review & publication as well. However, for the overlay journals that focus solely on providing journal peer review without actually having their “own place” for “publications”, it certainly is a pretty interesting difference. If popular, this overlay journal/preprint repository combination will really put a foot down in the whole “library as publisher” thing (something mentioned in the blog post before this one). And it actually seems to be a lot like what Green/Gold Open Access is essentially striving for: peer review by journal publishers but archiving in OA repositories.

I first heard about this overlay journal concept from Peter Suber in an e-mail exchange some time ago, and he told me about the Repository Interface for Overlaid Journal Archives (RIOJA) project. The RIOJA project investigates technical, social and economic aspects of the overlay of quality assurance onto papers deposited to and stored in eprints repositories. And they have been making some pretty interesting progress, which is going to be a significant step in the right direction not only for Open Access but for scholarly communication in general as well!

The latest update is the complete report of the results of a survey they have carried out, titled “Repository Interface for Overlaid Journal Archives: results from an online questionnaire survey”:

The RIOJA project will create an interoperability toolkit to enable the overlay of certification onto papers housed in subject repositories. To inform and shape the project, a survey of Astrophysics and Cosmology researchers has been conducted. The findings from that survey form the basis of this report.

Going through the survey, first a small comment on something written on page 3:

It is clear that arXiv provides three of the four “first order” functions of a journal, which have been identified[1] as follows

It then references a paper by Prosser, David C. (2005), “Fulfilling the promise of scholarly communication – a comparison between old and new access models”. Now, I recall reading and liking this paper myself. However, the original reference concerning the functions of a journal (a scholarly communication model) is Roosendaal, Hans E. and Peter A. Th. M. Geurts (1997), “Forces and functions in scientific communication: an analysis of their interplay”. And referencing the original source is important, as it contributes to maintaining the accuracy, and thus the significance, of the citation count as a quality filter for research papers.

“Maybe they figured that by referring to this paper readers will automatically find the other paper as well, killing two birds with one stone (reference)!”

I wonder about that. It could work in practice I guess, but it sounds rather risky. I think it is still better to go with the direct approach. Anyhow, the complete report is 105 pages long (at least the PDF is), so I will start with the ‘Summary of significant observations’ and see how far I can go through the entire survey myself.

“Drama queen. The actual report is only 47 pages, the rest are references and appendices.”

Hehe, well maybe the actual questions in the survey also reveal interesting things. Anyway, moving to 2.2 Publishing your research, page 3:

The average number of papers produced by a scientist in astrophysics and cosmology over a period of 2 years is 13.

On average, that is roughly one paper every two months! Very nice. I wonder if these are the fastest fields in terms of paper production? Also interesting to know would be their average publication rate; depending on that, this result could prove to be even more or less impressive. On the other hand, the focus of the story is on overlay journals and the use of eprint repositories, so I guess this point is not all that relevant to it.

Concerning the acceptance of new models, page 7:

Concerns were expressed about new and untested models of publishing, the overlay model included. However, the respondents were comfortable with the idea of trying new models and means for publishing of scientific research – provided that it could be ensured that the published research outcomes would be eligible for helping to establish an academic record, for attracting funding and ensure tenure. The following issues received particular mention:

  • Impact, readership, sustainability.
  • The peer review process, with particular emphasis on ensuring quality.
  • Open access, repositories and long term archiving,
  • Clarity and proof of viability of the proposed model.

Without a doubt: accreditation is an important incentive for scholars to do their scholarly things. On closer inspection, I wonder if this displayed order is a coincidence or a reflection of the answers of the survey, like, going by order of importance? I mean, I can see why (financial) sustainability is very important, but accreditation above quality assurance? If indeed ordered by importance, then that would have been very interesting 🙂

“Not to mention highly controversial.”

After digging through their ‘3.6 Other comments’ section, I have no reason to believe that the order was based on importance or anything like that. Of course, this may prove to be true after all if I start digging through the entire comments section in the appendices and try to classify each comment into these groups, but I do not see any compelling reason to do that 🙂

“Lazy bum.”

OK, I glanced over the open comments on question ‘26. If you would like to add something to this survey or have any further comments, please let us know’ (the comments start on page 93, by the way). I have no reason to believe that scholars value accreditation more than quality assurance, as quality/peer review is mentioned in most of the comments. Anyway, more on the openness of scholars to newer models on page 8, concerning miscellaneous comments:

An area of concern that was repeatedly mentioned in the respondents’ comments was whether there is a need or even a market for a new journal – irrespective of publishing model – in astrophysics and cosmology. On the other hand, the reportedly substantial use of arXiv, and the fact that the vast majority of the respondents use arXiv to get the full text of a paper, suggests that there may be grounds for further exploration of whether a more efficient and speedy way of publishing quality-assured scientific research might be introduced.

Overall I think this openness to new models to improve scholarly communication is a good thing, particularly when it is related to (pr)eprint repositories, given their growing significance (to Open Access) and all. I wonder what arXiv thinks of this, though. Will they experiment (more) with features to support the certification of unrefereed research papers in their archive? Hmm, I hope they get to read it. It is a lot of food for thought.

Multiple Gmail Accounts: Where is the Privacy?

April 13, 2008

Problem statement: A very annoying “feature” in Gmail’s multiple account management may cause some serious privacy issues if you do not know about it and act accordingly.
Motivation: As a fervent user of Gmail and a supporter of Google, I still need to spread the word about this privacy “loophole”.
Findings: Multiple account management in Gmail does not quite protect your privacy.
Conclusion: Make sure you manually switch to your other Gmail accounts if you wish to keep your main Gmail account hidden.

Now, Gmail is a great (e-mail) service. Good user interface, lots of space, everything sounds good. Heck, Google even allows you to have multiple accounts. That is good, right?

“Right.”

Right, as with multiple accounts, you can e-mail in different environments accordingly. For instance, you could have a Gmail account solely for:

  • Online Non Serious Chatting using a random addy like onlinebuddychat (at) gmail.com
  • Online Adult Chatting using a random addy like 20F_USA_blonde (at) gmail.com
  • Professional business using your real name addy like FirstnameLastname (at) gmail.com

* I am not related to these accounts, but I guess there is no need to have them tracked by spambots just to make a point, so I replaced @ with (at) to slow that down.

Indeed, these three different groups of contacts never have to meet. That is the power of different Gmail accounts. Even better, Gmail lets you forward all incoming mail to a different e-mail address. In this case, you could have all of your e-mail forwarded to your real addy so you do not have to check each account individually and manually. Great, right?

“Right.”

Right, it is user friendly and it has a high degree of manageability from one point of access.

Even better: you can assign a main account and give it permission to send e-mails from the other accounts. That means you can have multiple accounts, have all those accounts forward mail to one account, and send e-mails from that one account using the addresses of the other accounts. Fantastic, right?

“Right.”

Wrong. You see, when you use your main account to send e-mails through your other accounts, people can see both Gmail addresses!

Note: when you’re sending with a different ‘From:’ address, your Gmail address will still be included in your email header’s sender field, to help prevent your mail from being marked as spam. Most email clients don’t display the sender field, though some versions of Microsoft Outlook may display “From yourusername@gmail.com on behalf of customaddress@mydomain.com.”
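
To make the leak concrete: reusing the example addresses from above ((at) again standing in for @), the headers of a message sent this way would contain roughly the following, with the exact rendering depending on the recipient’s mail client:

    From: 20F_USA_blonde (at) gmail.com
    Sender: FirstnameLastname (at) gmail.com

So even though most clients only show the From: line, the Sender: line with your main address travels along with every message, and clients like Outlook will happily display it.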

This “feature” utterly destroys any privacy you might have had with multiple Gmail accounts. To avoid that, you need to manually log into the other accounts. In this Google Groups thread, you can see a Google employee named Sarah (title: Gmail Guide Yellow) explain this:

Certain spam filters look at the authorized IP addresses for a given domain when deciding whether to accept mail. If mail comes from a Gmail IP address, but the headers indicate it was sent by a non-Gmail address, some domains may refuse to accept it. Using the Sender field helps to ensure that we can deliver legitimate messages to domains using a variety of spam-prevention measures.

However, that thread also offers suggestions for avoiding this spam problem, so this is not impossible for Google to solve. I would refuse to believe it was even if I had not seen the suggestions proposed in that thread and elsewhere. And even so, letting users decide for themselves whether to take the risk of their e-mails being rejected is still better than having your main address revealed every time you use it to send e-mails under the other addresses.

Until then: only use this feature if you do not mind having both accounts revealed to the receiver of your e-mails. If you do mind, sign into your secondary accounts individually/manually to send those e-mails. If you use something like Outlook to send e-mails through Gmail, you can switch accounts there without this problem.