Macrosystems Ecology: The more we know, the less we know.

Dynamic Ecology had a post recently asking why there wasn’t an Ecology Blogosphere. One of the answers was simply that, as ecologists, we often recognize the depth of knowledge of our peers and so are unlikely (or unwilling) to comment on areas in which we have little expertise. This is an important point. The longer I stay in academia, the more surprised I am when I can explain a concept outside my (fairly broad) subject area clearly and concisely. It surprises me that I have depth of knowledge in a subject I don’t directly study.

Of course, it makes sense. We are constantly exposed to ideas outside our disciplines in seminars, papers, blogs and Twitter, and in general discussions, but at the same time we are also exposed to people with years of intense disciplinary knowledge, who understand the subtleties and implications of their arguments. This is exciting and frightening. The more we know about a subject, the more we know what we don’t know. Plus, we’re trained to listen to other people. We ‘grew up’ academically under the guidance of others who often had to correct us, so when we get corrected outside our disciplines we are likely to defer rather than fight.

This speaks to a broader issue, though, and one that is addressed in the latest issue of Frontiers in Ecology and the Environment. The challenges of global change require us to come out of our disciplinary shells and to address them with a new approach, defined here as Macrosystems Ecology. At large spatial and temporal scales, the kinds of scales at which we experience life, ecosystems cease being disciplinary. Jim Heffernan and Pat Soranno, in the lead paper (Heffernan et al., 2014), detail three ecological systems that can’t be understood without cross-scale synthesis by multi-disciplinary teams.

Figure 1. From Heffernan et al. (2014), multiple scales and disciplines interact to explain patterns of change in the Amazon basin.

The Amazonian rain forest is a perfect example of a region that is imperiled by global change and can benefit from a Macrosystems approach. Climate change and anthropogenic land use drive vegetation change, but vegetation change also drives climate (and, ultimately, land use decisions). This is further compounded by teleconnections related to worldwide demand for agricultural products and the regional political climate. To understand and address ecological problems in this region, then, we need to understand cross-scale phenomena in ecology, climatology, physical geography, human geography, economics and political science.

Macrosystems Ecology proposes a cross-scale effort, linking disciplines through common questions to examine how systems operate at regional to continental scales, and at multiple temporal scales. These problems are necessarily complex, but by bringing together researchers from multiple disciplines we can begin to develop a more complete understanding of broad-scale ecological systems.

Interdisciplinary research is not something most of us trained for as ecologists (or biogeographers, or paleoecologists, or physical geographers. . . but that’s another post). It is a complex interpersonal interaction that requires an understanding of the cultural norms of other disciplines. Cheruvelil et al. (2014) do a great job of describing how to achieve and maintain high-functioning teams in large interdisciplinary projects, and Kendra discusses this further in a post on her own academic blog.

Figure 2. From Goring et al. (2014). Interdisciplinary research requires effort in a number of different areas, and these efforts are not recognized under traditional reward structures.

In Goring et al. (2014) we discuss a peculiar issue posed by interdisciplinary research. The reward system in academia is largely structured to favor disciplinary research; we refer to this in our paper as a disciplinary silo. You are in a department of X, you publish in the Journal of X, you go to the International Congress of X, and you submit grant requests to the X Program of your funding agency. All of these pathways are rewarded, and even though we often claim that teaching and broader outreach are important, they are important only inasmuch as you need to not screw them up completely (a generalization, but one I’ve heard often enough).

As we move towards greater interdisciplinarity we begin to recognize that simply superimposing the traditional reward structure onto interdisciplinary projects (Figure 2) leaves a lot to be desired. This is particularly critical for early-career researchers. We are asking these researchers (people like me) to collaborate broadly with colleagues around the globe to tackle complex issues in global change ecology but, when it comes time to assess their research productivity, we don’t account for the added burden that interdisciplinary research places on them.

Now, I admit, this is self-serving. As an early-career researcher and a member of a large interdisciplinary team (PalEON), much of what we propose in Goring et al. (2014) strongly reflects my own experience. Outreach activities, the complexities of dealing with multiple data sources, large multi-authored papers, posters and talks, and the coordination of researchers across disciplines are all realities for me and for others in the project, but ultimately we get evaluated on grants and papers. The interdisciplinary model of research requires effort that is never valued by hiring or tenure committees.

That’s not to say that hiring committees don’t consider this complexity, and I know they’re not just looking for Nature and Science papers, but at the same time, there is a new landscape for researchers out there, and we’re trying to evaluate them with an old map.

In Goring et al. (2014) we propose a broader set of metrics against which to evaluate members of large interdisciplinary teams (or small teams; there’s no reason to be picky). This list of new metrics (here) includes traditional measures (numbers of papers, sizes of grants), but it also expands the value of co-authorship, recognizing that only one person can be first author even when others make critical contributions; provides support for non-disciplinary outputs, such as policy reports, dataset generation, white papers, books, and the creation of tools and teaching materials; and adds value to qualitative contributions, such as facilitation roles that help people communicate and interact across disciplinary divides.

This was an exciting set of papers to be involved with, all arising from two meetings associated with the NSF Macrosystems Biology program (part of NSF BIO’s Emerging Frontiers program). I was lucky enough to attend both meetings, the first in Boulder, CO, and the second in Washington, DC. These are the kinds of meetings that are formative for early-career researchers, and as a post-doctoral researcher I clearly got a lot out of them. The Macrosystems Biology program is funding some very exciting projects, and this Frontiers issue attempts to get to the heart of the Macrosystems approach. It is the result of many hours and days of discussion, and many of the projects are already coming to fruition. It is an exciting time to be an early-career researcher; hopefully you agree!


Sometimes saving time in a bottle isn’t the best idea.

As an academic you have the advantage of meeting people who do some really amazing research. You also have the advantage of doing really interesting work yourself, but you tend to spend a lot of time thinking about very obscure things. Things that few other people are thinking about, and those few people tend to be spread out across the globe. Earlier this month I had the opportunity to join researchers from around the world at Queen’s University in Belfast, Northern Ireland, for a meeting about age-depth models: a meeting about how we think about time, and how we use it in our research.

Time is something that paleoecologists think about a lot. In the Neotoma paleoecological database, time is a critical component: it is how we arrange all the paleoecological data. From the Neotoma Explorer you can search for and plot mammal fossils at any time in the recent past (the last 100,000 years or so), but what if our fundamental concept of time changes?

Figure 1. The particle accelerator in Belfast uses massive magnets to accelerate carbon particles to 5 million km/h.

Most of the ages in Neotoma are relative. They are derived from radiocarbon data, either directly or within a chronology built from several radiocarbon dates (Margo Saher has a post about 14C dating here), which means that there is uncertainty around the ages we assign to each pollen sample, mammoth bone or plant fossil. To actually get a radiocarbon date you first need to send a sample of organic material to a lab (such as the Queen’s University, Belfast Radiocarbon Lab). There the samples are processed and put into an accelerator mass spectrometer (Figure 1), where carbon ions reach speeds of millions of miles an hour, hurtle through a massive magnet, and are then counted, one at a time.

These counts provide an estimate of age in radiocarbon years. We then use the IntCal curve to relate radiocarbon ages to calendar ages. This calibration curve relates absolutely dated material (such as tree rings) to its radiocarbon age. We need the IntCal curve because the production of radiocarbon (14C) in the atmosphere varies over time, so there isn’t a 1:1 relationship between radiocarbon ages and calendar ages. Radiocarbon year 0 (“present”) is defined as AD 1950, in part because atmospheric atomic bomb testing altered 14C levels after that date, and by the time you get back to 10,000 14C years ago the calendar date is about 1,700 years ahead of the radiocarbon age (i.e., 10,000 14C years is equivalent to roughly 11,700 calendar years before present).
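To make the calibration step concrete, here is a minimal Python sketch that linearly interpolates a calendar age from a 14C age. The calibration table is invented for illustration, anchored only to the 10,000 to 11,700 equivalence above; real calibration uses the full IntCal curve (via OxCal, Clam or Bacon) and carries the dating uncertainties through.

```python
# Minimal sketch of radiocarbon calibration by interpolation.
# The calibration table below is a toy, invented for illustration;
# real work uses the full IntCal curve and propagates the
# uncertainty around each date.

def calibrate(c14_age, table):
    """Linearly interpolate a calendar age (cal yr BP) from a 14C age."""
    pts = sorted(table)  # (14C age, calendar age) pairs
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= c14_age <= x1:
            frac = (c14_age - x0) / (x1 - x0)
            return y0 + frac * (y1 - y0)
    raise ValueError("age outside the calibration table")

# Toy anchors: 0 14C BP -> 0 cal BP; 10,000 14C BP -> 11,700 cal BP
toy_table = [(0, 0), (5000, 5700), (10000, 11700)]
print(calibrate(7500, toy_table))  # prints 8700.0
```

A real calibration also intersects the date’s error distribution with the curve (as in Figure 2), yielding a probability density rather than a single number.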

Figure 2. A radiocarbon age estimate (in 14C years; the pink, normally distributed curve) intercepts the IntCal curve (blue ribbon). The probability density associated with this intercept builds the age estimate for that sample, in calendar years.
To build a model of age and depth within a pollen core, we link radiocarbon dates to the IntCal curve (calibration) and then link the age estimates together, with their uncertainties, using specialized software such as OxCal, Clam or Bacon. This then allows us to examine changes in the paleoecological record through time; basically, it allows us to do paleoecology.
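As a sketch of what the simplest of those models does, the snippet below assigns ages to arbitrary depths by interpolating linearly between dated control points. The depths and ages are made up for illustration; Clam and Bacon of course do far more (calibration, uncertainty propagation, Bayesian accumulation modelling), but this is the core idea of a classical interpolated chronology.

```python
# Toy linear-interpolation age-depth model. Control points (depth in cm,
# calibrated age in cal yr BP) are invented for illustration; real models
# also carry the uncertainty of each dated level.

controls = [(0.0, -60.0), (120.0, 3200.0), (310.0, 9800.0)]

def age_at(depth, controls):
    """Interpolate an age for a sample depth between dated control points."""
    for (d0, a0), (d1, a1) in zip(controls, controls[1:]):
        if d0 <= depth <= d1:
            return a0 + (depth - d0) / (d1 - d0) * (a1 - a0)
    raise ValueError("depth outside the dated section")

print(age_at(60.0, controls))  # prints 1570.0
```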

A case for updating chronologies

The challenge for a database like Neotoma is that the IntCal curve changes over time (IntCal98, IntCal04, IntCal09, and now IntCal13) and our idea of what makes an acceptable age model (and what constitutes acceptable material for dating) also changes.

If we’re serving up data to allow for broad-scale synthesis work, which age models do we provide? If we provide only the original published models, those models can cause significant problems for researchers working today. As I mentioned before, by the time we get back 10,000 14C years the old models (built using only 14C ages, not calibrated ages) will be out of sync with newer data in the database, and our ability to discern patterns in the early Holocene will be affected. Indeed, identical cores with age models built using different methods and different versions of the IntCal curve could tell us very different things about the timing of species expansions following glaciation, or about changes in climate during the mid-Holocene due to shifts in the Intertropical Convergence Zone (for example).
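A toy illustration of that drift: recalibrating the same raw 14C control dates against an “old” and an “updated” version of a calibration table shifts every control point, and so shifts the whole chronology. Both tables here are invented, not real IntCal98 or IntCal13 values; the point is only the workflow of storing raw dates and recalibrating on demand.

```python
# Recalibrate the same raw 14C control dates with two toy versions of a
# calibration table and compare; all numbers are invented for illustration.

def calibrate(c14_age, table):
    pts = sorted(table)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= c14_age <= x1:
            return y0 + (c14_age - x0) / (x1 - x0) * (y1 - y0)
    raise ValueError("age outside the calibration table")

raw_dates = [2100, 6400, 9900]        # 14C yr BP control points (stored raw)
curve_old = [(0, 0), (10000, 11400)]  # toy "old" calibration table
curve_new = [(0, 0), (10000, 11700)]  # toy "updated" calibration table

# Each control point moves when the curve is updated, dragging the
# age-depth model (and every inference built on it) along with it.
shifts = [calibrate(d, curve_new) - calibrate(d, curve_old) for d in raw_dates]
print(shifts)
```

Note that older dates shift more, which is why early-Holocene comparisons across databases are the most sensitive to curve version.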

So, if we’re going to archive these published records then we ought to keep the original age models; they’re what’s published, after all, and we want to keep them as a snapshot (reproducible science and all that). However, if we’re going to provide this data to researchers around the world, and across disciplines, for novel research purposes, then we need to provide support for synthesis work. This support requires updating the calibration curves and, potentially, the age-depth models.

So we get (finally) to the point of the meeting: how do we update age models in a reliable and reproducible manner? Interestingly, while the meeting didn’t provide a final solution, we’re much closer to one. Scripted age-depth modelling software like Clam and Bacon makes the task easier, since it allows output to be reproduced numerically, directly in R. The continued development of the Neotoma API also helps, since it would allow us to pull data directly from the database and reproduce age-model construction from a common set of data.

Figure 3. This chronology depends on only three control points and assumes constant sedimentation from 7,000 years before present to the modern day. No matter how you rebuild this age model it’s going to underestimate uncertainty in this region.

One thing we did identify, however, is the current limitations of this task. Quite simply, there’s no point in updating some age-depth models: the lack of reliable dates (or of any dates at all) means that new models would be effectively useless. The lack of metadata in published material is also a critical concern. While some journals maintain standards for the publication of 14C dates, these are only enforced when editors or reviewers are aware of them, and are difficult to enforce after publication.

The issue of making data open and available continues to be an exciting opportunity, but it really does reveal the importance of disciplinary knowledge when exploiting data sources. Simply put, if you’re going to use a large disciplinary database without finding someone who knows the data well, you need to hope that the signal is not lost in the noise (and that the signal you find is not an artifact of some other process!).

No one reads your blog: Reflections on the middling bottom.

Two weeks ago Terry McGlynn posted reflections about blogging on Small Pond Science, an excellent blog that combines research, teaching reflections and other assorted topics. Two weeks ago I didn’t post anything. Three weeks ago I didn’t post anything. The week before that I posted a comment of Alwynne Beaudoin‘s that is great, but it wasn’t really mine (although she gave me permission to post it). The last thing I posted myself was a long primer on using GitHub, six weeks ago.

Who sees your review?

There have been a lot of calls for reform of the peer review process, and plenty of blog posts about problems and bad experiences with it (Simply Statistics, SVPow, and this COPE report). There is plenty of evidence that peer review suffers from deficiencies related to author seniority and gender (although see Marsh et al., 2011), and from variability related to the choice of reviewers (see Peters & Ceci, 1982, though the age of this paper should be noted). Indeed, recent work by Thurner and Hanell (2011) and Squazzoni and Gandelli (2012) shows how sensitive publication can be to the structure of the discipline (whether homogeneous or fragmented) and the intentions of reviewers (whether they are competitive or collegial).

To my mind, one of the best established models of peer review comes from the Copernicus journals of the European Geosciences Union, and I’m actually surprised that these journals are rarely referenced in debates about reviewing practice. Each journal offers two outlets: papers first undergo open review in a discussions journal, by reviewers who may or may not remain anonymous (their choice), and then the responses and revised paper go to ‘print’ in the main journal. I’ve published with co-authors in Climate of the Past Discussions (here, here and here); two of those papers are still in review for Climate of the Past (here and here respectively).

This is the kind of open peer review that people have pushed for by posting reviews on their blogs (I saw a post on Twitter a couple of weeks ago, but can’t find the blog; if anyone has a reference please let me know). The question is: why not publish in journals that support the kind of open review you want? There are a number of models out there now, and I believe there is increasing acceptance of them, so we have a choice; let’s use it.

What inspired me to write this post, though, was my own recent experience as a reviewer. I just finished reviewing a fairly good paper that ultimately got rejected. When I received the editor’s notice I went to see what the other reviewer had said, only to find that the journal does not release the other reviews. This was the first time this had happened to me, and I was surprised.

I review for a number of reasons.  It helps me give back to my disciplinary community, it keeps me up to date on new papers, it gives me an opportunity to deeply read and communicate science in a way that we don’t ordinarily undertake, and it helps me improve my own skills.  The last point comes not only from my own activity, but from reading the reviews of others.  If you want a stronger peer review process, having peers see one another’s reviews is helpful.

Three new papers in various stages of publication.

I’ve just gone through and put some new papers into my Research page.  I’ve been busy over the past little while and it seems to be paying off.  Here are some of my latest papers, with brief summaries for your enjoyment:

Figure 5. From Goring et al. (2014): the relationships between plant richness and smoothed pollen richness (and vice versa) both show slightly negative relationships (accounting for very little variability), meaning higher plant richness is associated with lower pollen richness.

Goring S, Lacourse T, Pellatt MG, Mathewes RW. Pollen richness is not correlated to plant species richness in British Columbia, Canada. Journal of Ecology. Accepted. [Link][Supplement]

  • Although pollen richness has acted as a proxy for vegetation richness in the literature, our paper shows that this may not be the case.  Taphonomic processes, from release of the pollen to deposition and preservation in lake sediments, appear to degrade the signal of plant richness to the point that there is no significant relationship between plant species richness and pollen taxonomic richness.  The supplementary material includes all the R code and a sample of the raw data (we could not freely share some of the data) used to perform the analysis.

Combourieu-Nebout N, Peyron O, Bout-Roumazeilles V, Goring S, Dormoy I, Joannin S, Sadori L, Siani G, and Magny M. 2013. Holocene vegetation and climate changes in central Mediterranean inferred from a high-resolution marine pollen record (Adriatic Sea). Climate of the Past Discussions, 9:1969-2014. [Link]

  • Another great paper on Holocene and late-Glacial change in the Mediterranean, part of a Special Series in Climate of the Past.  This paper uses multiple proxies, including the use of clay mineral fractions to match climate signals from pollen to sediment transport into the Adriatic from the Po River watershed, sediment blown from the Sahara and sediment transported down the Apennines.  This paper further examines shifts in seasonal precipitation in the central Mediterranean associated with changes in insolation during the Holocene and broader scale shifts in the relative influences of major climate systems in the region.

Gill JL, McLauchlan KK, Skibbe AM, Goring S, Williams JW. Linking abundances of the dung fungus Sporormiella to the density of Plains bison: Implications for assessing grazing by megaherbivores in the paleorecord. Journal of Ecology. Early view: [Link]

  • Three great papers in a row! This paper uses modern pollen traps in the Konza Prairie LTER to examine the relationship between Sporormiella spores and bison grazing. This is an important link to make because Sporormiella has been used to indicate the presence of megafauna such as mammoths and mastodons in the late-glacial. The declining signal of Sporormiella at Appleman Lake, IN, was a key feature in the onset of non-analogue vegetation at the site in the late-glacial (Gill et al., 2009). This paper provides an explicit link between the theoretical potential of the spore as an indicator of megafaunal presence and the degree of grazing at sites.

The phone interview!

Figure 1. This is not what you want to have happen in a phone interview. Remove all sharp objects & rodents from the room when you take the call. Credit: Dr. Seuss, One Fish, Two Fish, Red Fish, Blue Fish.

The job market season is right around the corner and many of us are preparing our research statements and CVs, all the while trying to keep our productivity up.

A common feature in the current job market is the phone or Skype interview. As exciting as this interview may be, it is fundamentally different from the on-campus interview: the committee wants to whittle the long list down before bringing people to campus for a broader look at their qualifications. This means there’s less time for you, the candidate, to find out what you need to know about the faculty; it’s all about telling the committee what they need to know about you in less than an hour. The details that will be helpful in your later negotiations need to wait, except inasmuch as they show you have thought deeply about who you have been as a researcher, who you would be as a peer and mentor, and where you want to be at tenure.

On blogging and collaboration

We’ve submitted a paper to Frontiers in Ecology and the Environment that deals with the art of collaboration in large-scale ecological research. It’s in review at the moment, so I’m not going to talk too much about it, except in setting up my discussion here.

Jacquelyn Gill has a new post up about the roles of writing, blogging, getting papers out and submitting grant proposals. One comment she includes is that she has received advice indicating that, when push comes to shove, blog posts don’t count toward tenure. It’s an interesting comment, and one that I suspect comes from someone who doesn’t blog. While I agree that blogging isn’t going to matter much as a direct benefit, I think it plays a strong role in fostering collaboration.