van Dijk and others have just published an interesting paper in Current Biology, “Publication metrics and success on the academic job market”. Their main point is that it’s important to publish, it’s important to publish a lot, and that having a highly cited paper can overcome the disadvantage of not publishing in high-impact journals.
The last sentence really caught my eye:
Our results suggest that currently, journal impact factor and academic pedigree are rewarded over the quality of publications, which may dis-incentivize rapid communication of findings, collaboration and interdisciplinary science.
This tone echoes what we said in Goring et al. (2014), where we pointed out that early career researchers may be disadvantaged in interdisciplinary research, both by the Matthew effect and because interdisciplinary research often lags in publication while disciplinary bridges are built. With more support for this argument it becomes clear that either (1) committees need to take the costs of interdisciplinary research into account when evaluating candidates for hiring or tenure, or (2) they need to specify interdisciplinarity as a key criterion in hiring and reward it explicitly. Our metrics help balance the costs of interdisciplinarity against a number of research outcomes, but if these metrics aren’t evaluated then early career researchers are effectively penalized, as van Dijk et al. point out.
van Dijk et al. don’t cite Petersen et al. (2012), but it’s worth pointing out that people have considered what it takes to make it in academia, which makes the statement “This is the first study that quantifies what is predictive of an academic career in terms of becoming a principal investigator” a bit dodgy in my opinion. Petersen et al. only study Assistant Professors and Professors, but their paper is similar enough in intent and results that it deserves a citation.
Finally, I want to point out a couple of peculiarities about the data set and analysis used in this paper.
- The paper assumes that the last author is a PI, and so “success” is measured once you get three last-author publications. Weltzin and others (2006) have taken this issue on and made some important contributions. Tscharntke and others (2007) make the point that the last author is not always a PI in some disciplines, and so the blanket application of this method may be problematic. Indeed, it is my understanding that all of the papers in the (open access) Macrosystems Ecology special issue of Frontiers in Ecology and the Environment are ordered by contribution. So maybe this assumption is slowly but surely breaking down over time (and with good reason).
- PubMed is not an exhaustive database. I have 19 publications on Google Scholar and only 2 on PubMed. I suspect that this gap is tied largely to whether disciplinary journals are indexed by PubMed, but even Shultz (2007) found that Google Scholar often returns a greater number of journal search results than equivalent searches on PubMed. If no effort was made to constrain results to a particular discipline (and it’s not clear to me that one was) then the results van Dijk et al. present might be compromised.
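To make the first bullet concrete, the paper’s operational definition of “success” can be sketched as a small function. This is my reading of their rule (three or more last-author publications flags someone as a PI), not code from the paper; the function name and data shapes here are purely illustrative.

```python
# Sketch of the "three last-author papers = PI" heuristic, as I read it.
# Each publication is represented as an ordered list of author names,
# with the last author assumed (per the paper) to be the PI.

def looks_like_pi(author, publications, threshold=3):
    """Return True if `author` is last author on >= `threshold` papers."""
    last_author_count = sum(
        1 for authors in publications
        if authors and authors[-1] == author
    )
    return last_author_count >= threshold

papers = [
    ["A. Student", "C. Lab"],    # C. Lab in the last-author slot
    ["B. Postdoc", "C. Lab"],
    ["C. Lab"],                  # single-author paper: trivially "last"
    ["C. Lab", "D. Other"],      # C. Lab first here, so it doesn't count
]

print(looks_like_pi("C. Lab", papers))    # True  (three last-author papers)
print(looks_like_pi("D. Other", papers))  # False (only one)
```

The single-author and contribution-ordered cases above are exactly where the heuristic wobbles: as Tscharntke et al. note, the last slot doesn’t mean “PI” in every discipline.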
Part of the reason that van Dijk et al.’s results seem to resonate (check out the paper’s AltMetrics) is that they tell us a lot about what we already know intuitively. Getting papers in good journals matters. Good journals help increase visibility, but even if you can’t get into a good journal, you can still score with a highly cited article. Then, publish. Publish or perish. Finally, and disappointingly, it also doesn’t hurt to be a man (although gender had a surprisingly low correlation with success, if I’m reading the supplemental material correctly).
So what do you think?