The advantages of taking a chance with a new journal – Open Quaternary

Full disclosure: I’m on the editorial board of Open Quaternary and also manage the blog, but I am not an Editor in Chief and have attempted to ensure that my role as an author and my role as an editor did not conflict.

Figure 1. Neotoma and R together at last!

We (myself, Andria Dawson, Gavin L. Simpson, Eric Grimm, Karthik Ram, Russ Graham and Jack Williams) have a paper in press at a new journal called Open Quaternary.  The paper documents an R package that we developed in collaboration with rOpenSci to access and manipulate data from the Neotoma Paleoecological Database.  In part the project started because of the needs of the PalEON project: we needed a dynamic way to access pollen data from Neotoma, so that analysis products could be updated as new data entered the database.  We also wanted to exploit the new API developed by Brian Bills and Michael Anderson at Penn State’s Center for Environmental Informatics.
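The same data the package exposes can also be pulled straight from the underlying web API. Here is a rough Python sketch of that workflow; the endpoint path, parameter names and response shape are assumptions on my part (check the Neotoma API documentation, and note that the R package wraps these calls for you), and the canned JSON stands in for a live call so the example runs offline.

```python
import json
from urllib.parse import urlencode

# Assumed base URL pattern for the Neotoma web API; consult the API docs
# for the real endpoints and parameters.
BASE = "https://api.neotomadb.org/v2.0/data"

def dataset_query_url(sitename, datasettype="pollen"):
    """Build a query URL for datasets matching a site name."""
    return f"{BASE}/datasets?{urlencode({'sitename': sitename, 'datasettype': datasettype})}"

def extract_site_names(payload):
    """Pull site names out of a JSON envelope shaped like the (assumed) API response."""
    return [rec["site"]["sitename"] for rec in payload.get("data", [])]

# Canned response standing in for a live request.
canned = json.loads('{"data": [{"site": {"sitename": "Devils Lake"}}]}')
print(dataset_query_url("Devils Lake"))
print(extract_site_names(canned))
```

Because the query is just a URL, an analysis script can re-run it on a schedule and pick up new records as they enter the database, which is the dynamic-access behaviour described above.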

There are lots of thoughts about where to submit journal articles.  Nature’s Research Highlights has a nice summary of a new article in PLoS One (Salinas and Munch, 2015) that aims to identify optimal journals for submission, and Dynamic Ecology discussed the point back in 2013 in a post that drew considerable attention (here, here, and here, among others).  When we thought about where to submit, I made the conscious choice to choose an Open Access journal. I chose Open Quaternary partly because I’m on the editorial board, but also because I believe that domain-specific journals are still a critical part of the publishing landscape, and because I believe in Open Access publishing.

The downside of this decision was that (1) the journal is new, so there’s a risk that people don’t know about it, and it’s less ‘discoverable’; (2) even though it’s supported by an established publishing house (Ubiquity Press) it will not obtain an impact factor until it’s relatively well established.  Although it’s important to argue that impact factors should not make a difference, it’s hard not to believe that they do make a difference.

Figure 2. When code looks crummy it’s not usable. This has since been fixed.

That said, I’m willing to invest in my future and the future of the discipline (hopefully!), and we’ve already seen a clear advantage of investing in Open Quaternary.  During the revision of our proofs we noticed that the journal’s two-column format wasn’t well suited to the blocks of code that we presented to illustrate examples in our paper.  We also lost the nice color syntax highlighting that pandoc offers when it renders RMarkdown documents (see examples in our paper’s markdown file).  With the help of the journal’s Publishing Assistant Paige MacKay, Editor in Chief Victoria Herridge and my co-authors, we were able to get the journal to publish the article in a single-column format, with syntax highlighting supported using highlight.js.

I may not have a paper in Nature, Science or Cell (the other obvious option for this paper /s), but by contributing to the early stages of a new open access publishing platform I was able to change the journal’s standards, making future contributions more readable and ensuring that my own paper is accessible, that it is readable, and that the technical solution we present is easily implemented.

I think that’s a win.  The first issue of Open Quaternary should be out in March; until then you can check out our GitHub repository or the PDF as submitted (compleate with typoes).

It wasn’t hard to achieve gender balance.

If you aren’t aware of this figure by now you should be. Credit: Moss-Racusin et al. 2012.

A couple of weeks ago my colleagues and I submitted a session proposal to ESA (Paleoecological patterns, ecological processes, modeled scenarios: Crossing temporal scales to understand an uncertain future) for the 100th anniversary meeting in Baltimore. I’m very proud of our session proposal.  Along with a great topic (and one dear to my heart) we had a long list of potential speakers, but we had to whittle it down to eight for the actual submission.

The speaker list consists of four male and four female researchers, a mix of early career and established researchers from three continents. It wasn’t hard. We were aware of the problem of gender bias, and we thought of people whose work we respected, who have new and exciting viewpoints, and who we would like to see at ESA.  We didn’t try to shoehorn anybody in with false quotas, and we didn’t pick people to force a balance.  We simply picked the best people.

Out of the people we invited only two turned us down.  While much has been said about higher rejection rates from female researchers (here, and here for the counterpoint), both of the people who turned us down were male, so, maybe we’re past that now?

This is the first time I’ve tried to organize a session and I’m very happy with the results (although I may have jinxed myself!).  I think the session will be excellent because we have a great speaker list and a strong narrative thread through the session, but my point is: it was so easy that there ought to be very little excuse for a skewed gender balance.

PS.  Having now been self-congratulatory about gender, I want to note that this speaker list does not address diversity in toto, which has been and continues to be an issue in ecology and the sciences in general.  Recognizing there’s a problem is the first step to overcoming our unconscious biases.

What do citations tell us about the climate divide?

UPDATE:  An interesting turn of events has led me to write a follow-up to this post.

I came across an interesting article in Geoforum this past week:

Jankó, F., Móricz, N., & Papp Vancsó, J. (2014). Reviewing the climate change reviewers: Exploring controversy through report references and citations. Geoforum, 56, 17-34.

The authors are in the Faculty of Economics and the Faculty of Forestry at the University of West-Hungary, in Sopron, about an hour directly south of Vienna, Austria. They take an interesting quantitative and human geographic perspective of the use of citations in understanding the physical science basis of climate change from both scientific and skeptical perspectives. A number of bloggers have taken on the science in the NIPCC (Richard Telford has several posts on his blog), but this paper provides interesting insight into the human aspects of scientific report writing. As such the paper falls much more easily into human geography than it does the physical sciences it seeks to understand.

Figure 1. Heartland’s funders did not particularly like the comparison between climate science and the Unabomber. (image source: wikipedia)

The issue of climate change is as much part of the domain of human geography as it is physical geography. In particular the dynamic of ‘skeptical’ backlash against the consensus of anthropogenic climate change is well worth studying.  Understanding resistance to scientific knowledge around climate change will be key to eventually moving forward with adaptation policies that can find broad acceptance.  The public self-reports as being less knowledgeable about climate change than it was in 2007 (Stoutenborough et al., 2014), and multiple, competing narratives are likely to play a role in that dynamic.

Lahsen (2013) points out that without examining the differences in perception between climate groups we risk making the science behind our current understanding of anthropogenic climate change more vulnerable to public backlash, and we frequently see interaction between place and social change within the organizations (Jankó et al. mention the impact of the grossly unpopular Unabomber billboard in Chicago on the Heartland Institute’s network of funders and climate change affiliates).

To study characteristics of resistance and acceptance of the science surrounding climate change, the authors review the citation lists of both the IPCC (AR4 – WG1, the Physical Science Basis) and the NIPCC’s Climate Change Reconsidered.  By examining similarities and differences in citations and the use of citations we can understand how the rhetoric around climate change science changes the interpretation of the published literature. Jankó et al. use a great quote from Bruno Latour to help guide the discussion:

“Whatever the tactics, the general strategy is easy to grasp: do whatever you need to the former literature to render it as helpful as possible for the claims you are going to make. The rules are simple enough: weaken your enemies, paralyse those you cannot weaken […], help your allies if they are attacked, ensure safe communications with those who supply you with indisputable instruments […], oblige your enemies to fight one another […]; if you are not sure of winning, be humble and understated” (Latour, 1987, pp. 37–38).

Figure 2. Citations are important, but they’re rarely used in an impartial manner. (image source: wikipedia)

I feel like this overstates the case for the IPCC a little bit (though I may be biased). The IPCC is not set up to directly combat ‘skeptical’ literature in the way that the NIPCC is: the NIPCC is explicitly structured to mirror and refute the IPCC.  Regardless, we often think that as researchers we use citations in a neutral manner, but I would argue that that’s rarely the case. Citations in the literature are selected to help bolster arguments, they’re selected because we know people, and they’re occasionally massaged to change the point of an argument in an effort to support our own.

So the question becomes, how is the literature used and modified in these summaries to help develop an agenda?

Interestingly, Jankó et al. show that only 4.4% of total citations (IPCC + NIPCC) were used in both reports. This was surprising to me; I had expected that many of the primary sources explaining climate systems and their modern behaviour would make up a much larger proportion of both reports. Jankó et al. include a table analyzing many of the overlapping citations (Appendix B), and we see that most duplicate citations are treated with a similar tone in both reports. Differences do exist, however, and where there is extensive overlap in citations Jankó et al. have some very insightful points to make.  One surprising point was that both reports use particular language around references they like (‘find’, ‘indicate’, ‘report’, ‘show’, ‘conclude’) and don’t like (‘claim’ and ‘contend’), although how the language is applied to individual citations varies between reports (the discussion of tropical cyclones is well worth a read).  The other main difference in these overlapping citations is that key NIPCC citations challenging climate change are often used in the IPCC to support the understanding of uncertainties.  Thus, what the IPCC sees as an uncertainty, the NIPCC sees as evidence against anthropogenic climate change.
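The overlap statistic itself is just set arithmetic over the two bibliographies. A minimal sketch, using hypothetical reference keys rather than the real IPCC and NIPCC citation lists:

```python
# Hypothetical reference keys; the real analysis used the full bibliographies
# of IPCC AR4 WG1 and the NIPCC's Climate Change Reconsidered.
ipcc_refs = {"Jansen2007", "Mann1999", "Briffa2001", "Solomina2008"}
nipcc_refs = {"Idso2009", "Soon2003", "Mann1999", "Loehle2007"}

shared = ipcc_refs & nipcc_refs    # cited in both reports
all_refs = ipcc_refs | nipcc_refs  # total distinct citations
proportion = len(shared) / len(all_refs)

print(f"{len(shared)} of {len(all_refs)} citations shared ({proportion:.1%})")
```

On the real reference lists this kind of calculation produces the 4.4% figure Jankó et al. report; the genuinely hard part in practice, matching references across two differently formatted bibliographies, is omitted here.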

The real issue that piqued my interest, however, was the much higher proportion of paleo-journals in the NIPCC literature.  The Holocene is cited 12 times more frequently in the NIPCC than in the IPCC; Geology and Quaternary Research are both cited 10 times more often.  Why would skeptics cite paleoecological literature at higher rates than the IPCC?  In large part this is due to a key motivation for the NIPCC, and a particular focus in its paleoclimate sections.

The analytical goal of the NIPCC is to increase the perception of uncertainty, attempting to add more ‘non-supportive’ and ‘uncertain’ literature to the argument, and to use that increased uncertainty to take apart the arguments for anthropogenic climate change.  In this way the paleo-literature becomes a tool with which skeptics attack our understanding of climate change science.  Indeed, of the 18 references from the Holocene in the NIPCC, only one could be considered ‘Neutral’ while the other 17 were considered ‘Not Supporting’ of climate change science.  For Quaternary Research, 2 citations were ‘Neutral’ and 12 were ‘Not Supporting’.  Again, what might be considered uncertainty in the IPCC is considered evidence against in the NIPCC.

Figure 3. Does showing climate has changed in the past prove that climate change is natural?

Jankó et al. explain this trend by showing that the NIPCC uses the past to explain the present in such a way as to downplay the unprecedented nature of modern climate change, while the IPCC uses the past to search for analogues of modern climate change.  Effectively, the NIPCC view stops at the present: the past was warmer, therefore change is not unprecedented.  The IPCC is searching for ways to explain the future: the past had warmer periods; what caused those changes, what happened during those periods, and how can we use the past to constrain models for the future?

This, to my mind, is the difference between the camps.  The science marshaled in the IPCC is focused toward improving hypotheses and theoretical (and mechanistic) models.  It is prescriptive science in that uncertainties are identified, and used to improve our understanding of modern and future change.  In the ‘skeptical’ camp, science is marshaled to disprove anthropogenic causes, and when it does, the avenue of research is closed.  It is effectively a descriptive model without an overarching theoretical framework.  This allows it to attach the label ‘skeptical’ to disparate threads of knowledge across the literature, without having to concern itself with how those pieces join together.  Jankó et al. point out that the narrative style of the NIPCC report is structured around an anecdotal style, summarizing each paper individually and often adding textual quotes, while the IPCC synthesizes knowledge from multiple sources and provides block references for statements.  In one we see a descriptive format that highlights any contrary (or uncertain) position, in the other we see an effort to synthesize knowledge into a theoretical framework.

The scientific basis for anthropogenic climate change is strongly grounded in a fairly simple physical model that finds broad-based theoretical support across a range of physical sciences.  The scientific community has shown over time (since at least the 1970s) that counter-examples and uncertainties found in the literature highlight weaknesses in our understanding, but, rather than collapse the structure, these weaknesses have been marshaled to improve the science and to develop a much more robust scientific understanding of climate change.

Literature Cited

Idso, C. D., & Singer, S. F. (2009). Climate change reconsidered: 2009 report of the Nongovernmental International Panel on Climate Change (NIPCC). Nongovernmental International Panel on Climate Change.

Jankó, F., Móricz, N., & Papp Vancsó, J. (2014). Reviewing the climate change reviewers: Exploring controversy through report references and citations. Geoforum, 56, 17-34.

Jansen, E., J. Overpeck, K.R. Briffa, J.-C. Duplessy, F. Joos, V. Masson-Delmotte, D. Olago, B. Otto-Bliesner, W.R. Peltier, S. Rahmstorf, R. Ramesh, D. Raynaud, D. Rind, O. Solomina, R. Villalba and D. Zhang, 2007: Palaeoclimate. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.

Lahsen, M. (2013). Climategate: the role of the social sciences. Climatic Change, 119(3-4), 547-558.

Stoutenborough, J. W., Liu, X., & Vedlitz, A. (2014). Trends in public attitudes toward climate change: The influence of the economy and Climategate on risk, information, and public policy. Risk, Hazards & Crisis in Public Policy, 5(1), 22-37.

Publication metrics and interdisciplinary research.

van Dijk and others have just published an interesting paper in Current Biology “Publication metrics and success on the academic job market”. The main point in their paper is that it’s important to publish, it’s important to publish lots, and that having a highly cited paper can overcome the disadvantage of not publishing in high impact journals.

The last sentence really caught my eye:

Our results suggest that currently, journal impact factor and academic pedigree are rewarded over the quality of publications, which may dis-incentivize rapid communication of findings, collaboration and interdisciplinary science.

Figure 1. If you want to be a PI you’d best spend more time writing, and less time watching birds. “A Girl Writing; The Pet Goldfinch” Henriette Browne (1870)

This tone echoes what we said in Goring et al. (2014), where we pointed out that early career researchers may be disadvantaged in interdisciplinary research, both by the Matthew effect and because interdisciplinary research often results in lags to publication as disciplinary bridges need to be built. With more support for this argument it becomes clear that either (1) committees need to take the costs of interdisciplinary research into account when evaluating candidates for hiring or tenure, or (2) they need to specify interdisciplinarity as a key criterion in hiring and reward it explicitly. Our metrics help balance the costs of interdisciplinarity against a number of research outcomes, but if these metrics aren’t evaluated then early career researchers are effectively penalized, as van Dijk et al. point out.

van Dijk et al. don’t cite Petersen et al. (2012), but it’s worth pointing out that people have considered what it takes to make it in academia, which makes the statement “This is the first study that quantifies what is predictive of an academic career in terms of becoming a principal investigator” a bit dodgy in my opinion. Petersen et al. only study Assistant Professors and Professors, but the work is similar enough in intent and results that a reference should be included.

Finally, I want to point out a couple of peculiarities about the data set and analysis used in this paper.

  1. The paper assumes that the last author is a PI, and so “success” is measured once you get three last author publications. Weltzin and others (2006) have taken this issue on, and made some important contributions. Tscharntke and others (2007) make the point that the last author is not always a PI in some disciplines, and so the blanket application of this method may be problematic. Indeed, it is my understanding that all of the papers in the (open access) Macrosystems Ecology special edition of Frontiers in Ecology and the Environment are ordered by contribution. So maybe this is an assumption that is slowly but surely breaking down over time (and with good reason).
  2. PubMed is not an exhaustive database. I have 19 publications on Google Scholar and only 2 on PubMed. I suspect that this is an issue tied largely to whether disciplinary journals are archived by PubMed, but even Shultz (2007) found that Google Scholar often returns a greater number of journal search results than equivalent searches on PubMed. If no effort is made to constrain results to a particular discipline (and it’s not clear to me that that is the case) then it is possible that the results van Dijk present might be compromised.

Part of the reason that van Dijk et al.’s results seem to resonate (check out the paper’s Altmetrics) is that they tell us a lot about what we already know intuitively. Getting papers into good journals matters. Good journals help increase visibility, but even if you can’t get into a good journal, you can still score with a highly cited article. Then, publish. Publish or perish. Finally, and disappointingly, it also doesn’t hurt to be a man (although gender had a surprisingly low correlation with success, if I’m reading the supplemental material correctly).

So what do you think?

Macrosystems Ecology: The more we know the less we know.

Dynamic Ecology had a post recently asking why there wasn’t an Ecology Blogosphere. One of the answers was simply that as ecologists we often recognize the depth of knowledge of our peers and as such, are unlikely (or are unwilling) to comment in an area that we have little expertise. This is an important point. I often feel like the longer I stay in academia the more I am surprised when I can explain a concept outside my (fairly broad) subject area clearly and concisely.  It surprises me that I have depth of knowledge in a subject that I don’t directly study.

Of course, it makes sense.  We are constantly exposed to ideas outside our disciplines in seminars, papers, on blogs & twitter, and in general discussions, but at the same time we are also exposed to people with years of intense disciplinary knowledge, who understand the subtleties and implications of their arguments.  This is exciting and frightening.  The more we know about a subject, the more we know what we don’t know.  Plus, we’re trained to listen to other people.  We ‘grew up’ academically under the guidance of others, who often had to correct us, so when we get corrected out of our disciplines we are often likely to defer, rather than fight.

This speaks to a broader issue though, and one that is addressed in the latest issue of Frontiers in Ecology and the Environment.  The challenges of global change require us to come out of our disciplinary shells and to address challenges with a new approach, defined here as Macrosystems Ecology.  At large spatial and temporal scales – the kinds of scales at which we experience life – ecosystems cease being disciplinary.  Jim Heffernan and Pat Soranno, in the lead paper (Heffernan et al., 2014) detail three ecological systems that can’t be understood without cross-scale synthesis using multi-disciplinary teams.

Figure 1. From Heffernan et al. (2014), multiple scales and disciplines interact to explain patterns of change in the Amazon basin.

The Amazonian rain forest is a perfect example of a region that is imperiled by global change, and one that can benefit from a Macrosystems approach.  Climate change and anthropogenic land use drive vegetation change, but vegetation change also drives climate (and, ultimately, land use decisions). This is further compounded by teleconnections related to societal demand for agricultural products around the world and the regional political climate.  To understand and address ecological problems in this region, then, we need to understand cross-scale phenomena in ecology, climatology, physical geography, human geography, economics and political science.

Macrosystems proposes a cross-scale effort, linking disciplines through common questions to examine how systems operate at regional to continental scales, and at multiple temporal scales.  These problems are necessarily complex, but by bringing together researchers in multiple disciplines we can begin to develop a more complete understanding of broad-scale ecological systems.

Interdisciplinary research is not something that many of us have trained for as ecologists (or biogeographers, or paleoecologists, or physical geographers. . . but that’s another post).  It is a complex, inter-personal interaction that requires understanding of the cultural norms within other disciplines.  Cheruvelil et al. (2014) do a great job of describing how to achieve and maintain high-functioning teams in large interdisciplinary projects, and Kendra also discusses this further in a post on her own academic blog.

Figure 2. From Goring et al. (2014). Interdisciplinary research requires effort in a number of different areas, and these efforts are not recognized under traditional reward structures.

In Goring et al. (2014) we discuss a peculiar issue that is posed by interdisciplinary research.  The reward system in academia is largely structured to favor disciplinary research.  We refer to this in our paper as a disciplinary silo.  You are in a department of X, you publish in the Journal of X, you go to the International Congress of X and you submit grant requests to the X Program of your funding agency.  All of these pathways are rewarded, and even though we often claim that teaching and broader outreach are important, they are important inasmuch as you need to not screw them up completely (a generalization, but one I’ve heard often enough).

As we move towards greater interdisciplinarity we begin to recognize that simply superimposing the traditional rewards structure onto interdisciplinary projects (Figure 2) leaves a lot to be desired.  This is particularly critical for early-career researchers.  We are asking these researchers (people like me) to collaborate broadly with researchers around the globe, to tackle complex issues in global change ecology, but, when it comes time to assess their research productivity we don’t account for the added burden that interdisciplinary research can require of a researcher.

Now, I admit, this is self-serving.  As an early career researcher, and a member of a large interdisciplinary team (PalEON), much of what we propose in Goring et al. (2014) strongly reflects my own personal experience.  Outreach activities, the complexities of dealing with multiple data sources, large multi-authored papers, posters and talks, and the coordination of researchers across disciplines are all realities for me, and for others in the project, but ultimately, we get evaluated on grants and papers.  The interdisciplinary model of research requires effort that never gets valued by hiring or tenure committees.

That’s not to say that hiring committees don’t consider this complexity, and I know they’re not just looking for Nature and Science papers, but at the same time, there is a new landscape for researchers out there, and we’re trying to evaluate them with an old map.

In Goring et al. (2014) we propose a broader set of metrics against which to evaluate members of large interdisciplinary teams (or small teams; there’s no reason to be picky).  This list of new metrics (here) includes traditional metrics (numbers of papers, size of grants), but it also expands the value of co-authorship, recognizing that only one person can be first in the authorship list even when others make critical contributions; provides support for non-disciplinary outputs, like policy reports, dataset generation, non-disciplinary research products (white papers, books) and the creation of tools and teaching materials; and adds value to qualitative contributions, such as facilitation roles that help people communicate or interact across disciplinary divides.

This was an exciting set of papers to be involved with, all arising from two meetings associated with the NSF Macrosystems Biology program (part of NSF BIO’s Emerging Frontiers program).  I was lucky enough to attend both meetings, the first in Boulder CO, the second in Washington DC.  As a post-doctoral researcher these are the kinds of meetings that are formative for early-career researchers, and clearly, I got a lot out of it.  The Macrosystems Biology program is funding some very exciting programs, and this Frontiers issue attempts to get to the heart of the Macrosystems approach.  It is the result of many hours and days of discussion, and many of the projects are already coming to fruition.  It is an exciting time to be an early-career researcher, hopefully you agree!

Who sees your review?

There have been many calls for reform to the peer review process, and lots of blog posts about problems and bad experiences with peer review (Simply Statistics, SVPow, and this COPE report).  There is plenty of evidence that peer review suffers from deficiencies related to author seniority and gender (although see Marsh et al., 2011), and from variability related to the choice of reviewers (see Peters & Ceci, 1982, although the age of this paper should be noted). Indeed, recent work by Thurner and Hanel (2011) and Squazzoni and Gandelli (2012) shows how sensitive publication can be to the structure of the discipline (whether homogeneous or fragmented) and the intentions of the reviewers (whether they are competitive or collegial).

To my mind, one of the best established models of peer review comes from the Copernicus journals of the European Geosciences Union, and I’m actually surprised that these journals are rarely referenced in debates about reviewing practice.  The journals use a two-stage process: papers first undergo open review in a discussions journal (I’ve published with co-authors in Climate of the Past Discussions: here, here and here), where reviewers may or may not remain anonymous (their choice), and then the response and revised paper go to ‘print’ in the main journal, Climate of the Past (still in review, here and here respectively).

This is the kind of open peer review that people have pushed for by posting reviews on their blogs (I saw a post on twitter a couple weeks ago, but can’t find the blog; if anyone has a reference please let me know).  The question is, why not publish in journals that support the kind of open review you want?  There are a number of models out there now, and I believe there is increasing acceptance of these models, so we have a choice; let’s use it.

What inspired me to write this post, though, was my own recent experience as a reviewer.  I just finished reviewing a fairly good paper that ultimately got rejected.  When I received the editor’s notice I went to see what the other reviewer had said, only to find that the journal does not release the other reviews.  This was the first time this had happened to me and I was surprised.

I review for a number of reasons.  It helps me give back to my disciplinary community, it keeps me up to date on new papers, it gives me an opportunity to deeply read and communicate science in a way that we don’t ordinarily undertake, and it helps me improve my own skills.  The last point comes not only from my own activity, but from reading the reviews of others.  If you want a stronger peer review process, having peers see one another’s reviews is helpful.

So how do we fix the Ph.D/Postdoc glut?

I think that at this point everyone in academia (except funding agencies?) is aware that there is a glut of Ph.Ds and postdoctoral researchers while, at the same time, budgets are being cut back and departments are hunkering down.  Nature published an editorial in 2011 pointing out the issue, with some contentious points made in the comments.  I’ve seen posts across the science blog-o-sphere about the issue: Mike the Mad Biologist posted recently, and there was post-doc-alypse-gate (I got in on the initial twitter hash job (still ongoing), Ethan Perlstein wrote a post and then Prof-like Substance weighed in), but as of yet I haven’t seen a post discussing how individuals can help counter this problem; it’s all institutional.