I’ve just finished writing yet another referee report. It’s not fun. It’s duty. Which got me wondering: am I doing my part, or am I a parasite? I get many more referee requests than I have time for, and I always feel a bit guilty when I decline one. So the question has a practical implication: can I decline with a clear conscience, or should I grit my teeth and try to get more refereeing done?
To answer that, first I have to find out how many papers I have refereed. That’s impossible: I’m not German, and my records are spotty and chaotic. After a couple of hours of searching, I managed to find 77 papers. That’s certainly not all of them, but I can’t be missing many, so let’s stick with 77.
Now I need to compute the refereeing burden I have generated. I have submitted 33 papers for publication, and each paper usually gets 2 or 3 referees; let’s call it 2.5. Then the burden is $33 \times 2.5 = 82.5$, right? Well, not so fast, because my coauthors share the responsibility for generating this refereeing burden. Should I divide by the average number of coauthors, then? Again, not so fast, because I can’t put this responsibility on the shoulders of coauthors who are not yet experienced enough to referee. By the same token, I should exclude from my own burden the papers I published before I was experienced enough to referee, which removes 3 papers. From the remaining 30, I count 130 experienced coauthors, making my burden $30 \times 2.5/(130/30) \approx 17.3$.
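Spelled out (the notation here is mine), the bookkeeping is
$$\text{burden} = \frac{N \times r}{\bar{c}} = \frac{30 \times 2.5}{130/30} \approx 17.3,$$
where $N$ is the number of papers, $r$ the average number of referee reports per paper, and $\bar{c}$ the average number of experienced coauthors per paper.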
Wow, that’s quite a discrepancy. I feel like a fool: I’m doing more than 4 times my fair share. Now I’m curious: am I the only one with such an imbalance, or does the physics community consist of 20% suckers and 80% parasites?
More importantly, is there anything that can be done about it? This was one of the questions discussed in a session about publishing at the last Benasque conference, but we couldn’t find a practicable solution. Even from the point of view of a journal it’s very hard to know who the parasites are, because people usually publish in several different journals, and the number of papers in any given journal is too small for proper statistics.
For example, let’s say you published 3 papers in Quantum, with 4 (experienced) coauthors on average, and each paper got 2 referee reports. This makes your refereeing burden 1.5. Now let’s imagine that during this time the editors of Quantum asked you to referee 2 papers. You declined them both, claiming once that you were too busy, and the other time that it was out of your area of expertise. Does this make you a parasite? Only you know.
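If you want to play with these numbers yourself, here is a minimal sketch of the bookkeeping; the function and variable names are mine, not anything official.

```python
# Minimal sketch of the refereeing-burden calculation described above.

def burden(papers: int, reports_per_paper: float, avg_coauthors: float) -> float:
    """Referee reports generated, split among the experienced coauthors."""
    return papers * reports_per_paper / avg_coauthors

print(burden(30, 2.5, 130 / 30))  # my own case: ~17.3
print(burden(3, 2.0, 4))          # the hypothetical Quantum author: 1.5
```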
Let’s imagine then an egregious case: someone who published 10 papers with Quantum, got 20 requests for refereeing from them, and declined every single one. That’s a $5\sigma$ parasite. What do you do about it? Desk reject their next submission, on the grounds of parasitism? But what about their coauthors? Maybe they are doing their duty; why should they be punished as well? Perhaps one should compute a global parasitism score from the entire set of authors, and desk reject the paper if it is above a certain threshold? It sounds like a lot of work for something that would rarely happen.
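For what it’s worth, such a scheme is easy to sketch. Everything below, from the decline-rate score to the threshold of 0.9, is made up for illustration. (And declining 20 requests out of 20 is indeed extreme: even assuming each request is declined independently with probability 1/2, the chance is $0.5^{20} \approx 10^{-6}$, roughly $5\sigma$ territory.)

```python
# Purely hypothetical sketch of a "global parasitism score";
# the score definition and the threshold are invented for illustration.

def decline_rate(requests: int, declined: int) -> float:
    """Per-author score: fraction of referee requests declined."""
    return declined / requests if requests > 0 else 0.0

def desk_reject(authors: list[tuple[int, int]], threshold: float = 0.9) -> bool:
    """Reject if the authors' average decline rate exceeds the threshold."""
    scores = [decline_rate(requests, declined) for requests, declined in authors]
    return sum(scores) / len(scores) > threshold

print(desk_reject([(20, 20)]))                   # the lone parasite: True
print(desk_reject([(20, 20), (10, 1), (8, 0)]))  # with two dutiful coauthors: False
```

Averaging over the whole set of authors captures the intuition that dutiful coauthors should shield a paper from being desk rejected because of one parasite.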
The real parasites are the publishers. Mandatory refereeing is a flawed concept and should be abandoned. A better model could look similar to open source software projects on github: everyone can submit an issue visible to the authors. On github people use stars to endorse a project, but for scientific works we probably need another endorsement system. I think it should be closer to the github sponsor feature, but instead of financial help, respected scientists could leave their recommendations and reviews of the work.
I don’t think the github model can work for scientific research. Anybody can star a project for any reason; the only thing that measures is popularity. And if we relied on random people submitting issues, the vast majority of papers would never receive any.
With the peer-review system, we know that at least one editor and one or two referees looked at the paper and thought it was ok. Often the referees even go through the paper in detail and correct mistakes. I know: I’ve done that as a referee, and several referees have done it for me as an author.
This gives confidence that a paper published in a reputable journal counts for something. It allows scientists from different fields, who lack the knowledge to check correctness themselves, to have a reasonable amount of trust that the result is correct. And it allows lay people to trust those results, use them for public policy decisions, or write Wikipedia articles.
And no, I’m not saying that peer-reviewed papers are perfect. Only that they’re orders of magnitude better than anything else.