There’s a great paper in Neuroimage that I’ve just heard about, “Ten ironic rules for non-statistical reviewers” by Karl Friston (who is apparently a pretty classy guy). It’s nice because it’s funny in the first place, setting out a series of ‘rules’ a reviewer can use to reject a paper out of hand, but it’s great because it also sets out the non-ironic (and well-founded) rebuttals in the appendix of the paper, with an excellent set of references.
Rule number three: submit your comments as late as possible
It is advisable to delay submitting your reviewer comments for as long as possible — preferably after the second reminder from the editorial office. This has three advantages. First, it delays the editorial process and creates an air of frustration, which you might be able to exploit later. Second, it creates the impression that you are extremely busy (providing expert reviews for other papers) and indicates that you have given this paper due consideration, after thinking about it carefully for several months. A related policy, that enhances your reputation with editors, is to submit large numbers of papers to their journal but politely decline invitations to review other people’s papers. This shows that you are focused on your science and are committed to producing high quality scientific reports, without the distraction of peer-review or other inappropriate demands on your time.
Well, I’m probably guilty of this to some degree: I’ve submitted my reviews pretty close to the deadline provided by the journal a few times (sorry!), but never with the intent of being an ass about it, and certainly never after several months! Having said that, karma always gets you back; we’ve waited several months for fairly underwhelming comments (underwhelming in content, not intent).
Rule number five: the over-sampled study
If the number of subjects reported exceeds 32, you can now try a less common, but potentially potent argument of the following sort:
“Reviewer: I would like to commend the authors for studying such a large number of subjects; however, I suspect they have not heard of the fallacy of classical inference. Put simply, when a study is overpowered (with too many subjects), even the smallest treatment effect will appear significant. In this case, although I am sure the population effects reported by the authors are significant; they are probably trivial in quantitative terms. It would have been much more compelling had the authors been able to show a significant effect without resorting to large sample sizes. However, this was not the case and I cannot recommend publication.”
You could even drive your point home with:
“Reviewer: In fact, the neurological model would only consider a finding useful if it could be reproduced three times in three patients. If I have to analyse 100 patients before finding a discernible effect, one has to ask whether this effect has any diagnostic or predictive value.”
Most authors (and editors) will not have heard of this criticism but, after a bit of background reading, will probably try to talk their way out of it by referring to effect sizes (see Appendix 2).
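To see why the effect-size rebuttal works, here’s a quick numerical sketch (my own illustration, not from the paper). For a one-sample z-test, the test statistic scales as the standardized effect size times the square root of the sample size, so a fixed, trivially small effect (say, Cohen’s d = 0.05) drifts from “nowhere near significant” to “highly significant” purely by adding subjects, while the effect size itself never changes:

```python
import math

def p_value_one_sample(effect_size, n):
    """Two-sided p-value for a one-sample z-test when the observed
    standardized mean difference (Cohen's d) equals effect_size.

    z = d * sqrt(n);  p = erfc(|z| / sqrt(2))
    """
    z = effect_size * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

d = 0.05  # a quantitatively trivial effect, held fixed
for n in (16, 100, 10_000):
    print(f"n = {n:>6}: p = {p_value_one_sample(d, n):.2g}")
```

With n = 16 the p-value is around 0.84; with n = 10,000 it drops below 0.001. The significance is entirely a function of sample size, which is exactly why reporting the effect size, and not just the p-value, answers the “fallacy of classical inference” objection.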
This is great, and there’s a thorough discussion of it in the linked appendix. The corollary, of course, is the under-sampled study, which is rule number four.
Admittedly, there is an emphasis on neuroimaging (imagine!), but many of these issues can be put in the context of most ecological research. The paper itself provides an interesting insight into the editorial process and reviewer hijinks, and is a well-written look at the issues we all face as reviewers from time to time.