How the Earth’s atmosphere shows its face

This is an article I wrote for the European Geosciences Union’s newsletter, GeoQ, issue 9. The issue’s theme is “The Face of the Earth”, and so my article is based on “The Atmosphere of the Earth”. It’s aimed at members of EGU (i.e. a variety of different geoscientists), and I’ve tried to make it understandable to an interested non-scientist audience. I’d love to know whether any non-scientists out there think I’ve pitched it right or not!

Looking from outer space, the Earth’s atmosphere appears as an encapsulating fluid that flows in patterns caused by the rotation of the planet and the heating from the Sun. Up close, however, the atmosphere shows its face in much more detail, helping researchers understand the complex interactions in the Earth system.

Temperature of the atmosphere

The temperature of the Earth is much like the temperature of a person: it is a symptom of everything else that is going on in that person’s body. It may seem like a basic property of the atmosphere, but it is a product of many other aspects of the Earth system, including land and oceans.

Recently, there has been much discussion of the so-called ‘temperature hiatus’, the weakening of the trend in global mean surface air temperature since the late 1990s. Observations, such as those from the HadCRUT4 dataset, appear to show temperatures in the past decade rising more slowly than in the preceding two decades (see figure).

The HadCRUT4 dataset is a combination of ground station and sea surface temperature measurements, which together represent about 85% of the Earth’s surface. Recent analysis by Cowtan and Way has tested whether this dataset contains a bias due to its incomplete coverage of the globe, and they conclude that it has led to an underestimate of recent global warming. The authors point out that satellite data, models and isolated weather station data show that regions not covered by the dataset, especially the Arctic, have warmed faster than other parts of the world. Accounting for this gives a temperature trend since 1997 that is two and a half times greater than that from HadCRUT4.

So even establishing the magnitude of the temperature hiatus is an ongoing area of research. The range of studies investigating its causes is indicative of just how many different factors affect the air temperature.

Work by Estrada, Perron and Martínez-López explores global temperature data sets and radiative forcing variables (greenhouse gases in the atmosphere, natural changes in composition and land use, and solar irradiance) using statistical techniques. Their method interrogates the data without the use of models, and the authors conclude that the temperature record and the radiative forcing (which describes whether the Earth system has a net warming or cooling) can be described by linear trends punctuated by breaks. In this picture, the hiatus is simply a period with a different trend following a break. But what caused this break to occur?

The results suggest that the predominant cause was an unintended consequence of the 1987 Montreal Protocol, the international treaty to stop the destruction of stratospheric ozone by chlorofluorocarbons (CFCs). CFCs are also greenhouse gases, so reducing them to protect the ozone layer also led to a relative cooling of the atmosphere. Pretis and Allen tested this finding in an energy balance model and found that global mean temperatures are 0.1 °C cooler because of the Montreal Protocol.

Estrada and colleagues also attributed some cooling to the reduction in the methane growth rate in recent years. Methane is a potent but short-lived (about a decade) greenhouse gas, with major natural and anthropogenic sources. The amount of methane in the atmosphere had been growing in the latter half of the 20th century, until it levelled off in the period around 2000 to 2006. The cause of this stagnation is itself an active research area, with changes to agricultural practices, variability of wetlands, and changing fossil fuel emissions being likely factors.

Others have looked to the oceans to find a cause for the temperature hiatus. Modelling work by Kosaka and Xie shows that it can be explained by recent La Niña events, which are characterised by cooler tropical Pacific sea surface temperatures and cooler surface air temperatures. By putting observed tropical Pacific sea surface temperatures into an atmospheric model (which also contained the observed greenhouse gas concentrations), the authors were able to reproduce the hiatus.

This is not necessarily in contradiction to the Estrada study, as Kosaka and Xie do not specify what is causing the sea surface temperatures to be La Niña-like, so the cause could be linked to greenhouse gas warming. A trend towards more La Niña-like conditions since 1950, coinciding with increases in global mean surface temperature, has been identified by L’Heureux et al.

These studies illustrate some of the complex interactions between atmospheric temperature, composition and climate. If temperature is the symptom, then we have seen that the make-up of the atmosphere is one of the many causes. To complicate things further, the symptom can also feed back into the cause. For example, wetland emissions of methane depend on temperature, so a warming Arctic may cause increased methane emissions and therefore even more warming.

The dome of the Jungfraujoch atmospheric observatory in Switzerland is seen in the distance in this photo.

Composition of the atmosphere

We are finding ever more sophisticated ways of measuring the atmosphere’s composition: continuous ground-station measurements, sensors attached to weather balloons, aircraft- and ship-based instruments, drones, and satellites are all used to analyse the components of the atmosphere. This array of measurements at different scales is used in combination with models to paint the clearest picture of the atmosphere possible, within current understanding.

The MACC (Monitoring Atmospheric Composition and Climate) project has done just this, by assimilating satellite data into a global model of the atmosphere to produce an 8-year data set of atmospheric composition. The data for carbon monoxide, ozone, nitrogen dioxide and formaldehyde are evaluated against independent satellite, weather balloon, ground station and aircraft observations in Inness et al., which goes on to highlight where the discrepancies lie and also indicates the direction for future work. With so much varied data to consider, this kind of large modelling study is a good way of bringing together the current knowledge of atmospheric composition.

These are just a few facets of the atmosphere: weather patterns, climate modes, aerosols, boundary layer flows, and interactions with the surface are some of the other parts of the atmospheric system that we study in just as much depth. It is thanks to the multitude of ways we now have of observing and describing this encapsulating fluid that we get the atmosphere to show its face.

You’ll find me over at the MAMM Arctic Methane blog…

I have been quiet here for a while, but I’ve been busy elsewhere! I am going to fly out to Kiruna, Sweden again next week, to do some field work to find out more about methane emissions in the Arctic.

Find out what I’m currently up to at http://arcticmethane.wordpress.com/. You can see the welcome post by Prof John Pyle (my boss and head of the project), and my first post that talks about why it’s so interesting in the first place.

Tag – you’re it! Chasing atmospheric tracers

Beijing, a megacity of ~20 million inhabitants, photographed when I was there last September. How do you keep track of the pollution being released?

This is a blog post I originally wrote for GeoLog (the most excellent official blog of the EGU), which appeared on 22 March 2013 (in a slightly edited form). 

I’ve been ruminating over the idea for this post for some time now; since last October in fact, when the EGU Twitter Journal Club discussed a paper about tagging (You can find the Storify for the discussion here). Not tagging as in the playground favourite, but the idea of keeping track of certain molecules in your chemical transport model, so you can follow them as they move through the atmosphere and undergo chemical transformations.

After deliberating, cogitating and digesting, I’ve decided to offer my opinion on tagging, which I’ve come to through reading, listening and discussing the topic with colleagues. I thought this might be of general interest, as the concept of tagging frequently sparks debates (in my experience) and seems to arouse stronger and more varied views than I would usually expect from a modelling technique. So with this post, I want to answer the question “what is tagging and what is it used for?” and at least attempt to answer “why does it generate such mixed feelings?”.

I think that the answer to the second may lie in the answer to the first, so let’s start with that. One way of describing tagging is as an accounting method. Doesn’t sound very geophysical? Well, we’re talking the model world here, and we can keep track of – or account for – every little thing we do in our model world. To try and understand the composition of the Earth’s atmosphere, people have constructed computer models to describe both the physical and chemical processes in the atmosphere. I’m particularly interested in trace gases like ozone, which is found in parts-per-billion quantities in the troposphere (roughly the lowest 10-15 km of the atmosphere). Tropospheric ozone is a popular species to study, as it is a greenhouse gas, it’s bad for human and plant health, and it’s one of the key oxidants in the atmosphere.

Another interesting thing about ozone is that it’s not emitted directly. Many other pollutants and greenhouse gases, like methane or nitrogen oxides, are emitted by both natural and anthropogenic sources, but ozone is formed through photochemical reactions, which occur in the presence of both sunlight and nitrogen dioxide. So, to study the ozone in the atmosphere using a model, it needs to include emissions of relevant gases, motions in the atmosphere that move these gases around, chemical reactions that transform one gas into another, as well as other processes like deposition of certain gases to surfaces or removal in rain.

With all this going on, it can be hard to disentangle one process from another, particularly since all the different processes are interlinked. This is where tagging can come in handy as an accounting tool. Say we are interested in how much ozone is added to the atmosphere as a result of activity in a particular city – let’s call it Mega-City One. How could you find this out? Well, one quite common way is to run the model as normal, and then to repeat the simulation but with Mega-City One removed. So you take out any emissions coming from Mega-City One and see what difference it makes. This tells you what would happen if you simply removed that city.
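
To make that concrete, here is a toy sketch of the annihilation approach in Python. This is emphatically not any real chemical transport model: the `ozone_produced` function, its nonlinear form, and the emission numbers are all invented purely to illustrate the "run it twice and take the difference" idea.

```python
# A toy, zero-dimensional stand-in for a chemistry model (invented for
# illustration; real chemical transport models are vastly more complex).

def ozone_produced(nox):
    """Toy nonlinear ozone production as a function of NOx (arbitrary units)."""
    return 10.0 * nox / (1.0 + nox)

background_nox = 2.0   # hypothetical emissions from everything else
city_nox = 1.0         # hypothetical emissions from Mega-City One

normal_run = ozone_produced(background_nox + city_nox)   # the "normal" run
annihilation_run = ozone_produced(background_nox)        # Mega-City One removed

print("Ozone change if Mega-City One vanished:",
      normal_run - annihilation_run)
```

The difference between the two runs is what the annihilation method reports: the ozone you would lose if the city disappeared overnight.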

However, you may not want to know about this unrealistic “annihilation” situation, as a city won’t simply disappear overnight. You may be more interested in your normal run (which you think best represents the real world) and what is contributing to the ozone in that run. The non-linearly minded amongst you will see the subtle difference between the two questions. If we were talking about an inert tracer, there would be no difference, as what was emitted by Mega-City One would just stay in the atmosphere unchanged, and would be directly attributable to Mega-City One. The difference with ozone is that it’s not emitted, and the chemistry that creates and destroys it can and does behave nonlinearly.

So to answer the attribution question (in my model simulation, how much ozone is a result of Mega-City One emissions?), tagging is one method that is used. The idea is that you ‘tag’ tracers that you are interested in, and follow them through the model. So you can ‘tag’ all the emissions from Mega-City One, and when one of the nitrogen dioxide molecules from this city undergoes photolysis to make an oxygen atom, which then reacts with molecular oxygen (O2) to form ozone (O3), you know that this ozone molecule can be attributed to Mega-City One. This is done by solving a set of equations for the tags alongside the usual chemistry scheme. This does not disturb the normal running of the model, and you will find that the total amount of ozone attributed to Mega-City One is different from the difference in ozone between your normal and your annihilation runs.
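
Continuing the toy example above, here is a rough sketch of the tagging bookkeeping. Within the single normal run, the ozone that is actually produced is shared out between the tagged sources; the simple proportional split used here is only an illustration of the accounting idea, not the scheme used in any published tagging study, and all the numbers remain invented.

```python
# Tagging as bookkeeping within one normal run (toy example; the proportional
# attribution rule and all numbers are invented for illustration).

def ozone_produced(nox):
    """Toy nonlinear ozone production as a function of NOx (arbitrary units)."""
    return 10.0 * nox / (1.0 + nox)

sources = {"Mega-City One": 1.0, "everything else": 2.0}  # hypothetical NOx
total_nox = sum(sources.values())
total_ozone = ozone_produced(total_nox)                   # one normal run only

# Attribute the ozone that was actually formed to each tagged source.
attribution = {name: total_ozone * nox / total_nox
               for name, nox in sources.items()}
print("Tagged attribution:", attribution)

# Compare with the annihilation estimate for Mega-City One.
annihilation = total_ozone - ozone_produced(sources["everything else"])
print("Annihilation estimate:", annihilation)
```

With these made-up numbers the tagged attribution for Mega-City One (2.5 units) is much larger than the annihilation estimate (about 0.8 units), even though both are answers to “how much ozone is down to Mega-City One?”. That gap only exists because the toy chemistry is nonlinear, which is exactly the point.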

Why would you want to do either of these things anyway? Well, you might want to know what effect different cities or other sources are actually having, or would have on the atmosphere if they were built. So you’d use the annihilation method. Or, you might want to know what the biggest culprit was for particularly bad air quality in a particular place. So you’d use tagging to find out if it was Mega-City One or Mega-City Two, or if it was the road transport or the power stations that was the biggest source.

I hope I’ve achieved my first aim of answering my first question, about what tagging is and what it’s used for. The second question is not quite so straightforward, but I’ll give my thoughts on the matter: I think the key to why, to put it bluntly, some people aren’t so keen on tagging and some people aren’t so keen on the annihilation method, stems from different notions of what tagging is really being used for. Ultimately, I think this boils down to finding ways of communicating complex ideas and their applications, without being misconstrued. Existing preconceptions probably also play a part, as these will (rightly or wrongly) fill the gaps when someone else’s explanation is incomplete.

Successful communication of complex ideas isn’t easy! It takes time and effort, but it really pays off in the end. Just think, how often have you heard (or taken part in) a heated discussion in which you find everyone was actually in agreement all along, they just didn’t know it yet?

Paper that initially inspired the discussion:

Grewe, V., Dahlmann, K., Matthes, S. and Steinbrecht, W. (2012) Attributing ozone to NOx emissions: Implications for climate mitigation measures. Atmospheric Environment, 59, 102–107.

Update on weather station

Weather station MkII

A brief post on the weather station saga. A few weeks ago (at least), the anemometer (which measures wind speed) on my weather station stopped working. So yesterday, Dr Turnip (who is himself going viral in the blogosphere, e.g. here, here and here) and I took the weather station back to the shop (again) and this time, we just got a whole new weather station!

So, although I have a brand new weather station, which will hopefully last a bit longer than the last one did, I have lost almost 6 months’ worth of (admittedly shoddy) data. If I’d thought that they’d replace the unit entirely and take the old one, I’d have downloaded the last ~2 months’ data that was on it. But I didn’t predict that, so now it’s gone.

But, today’s a new day, and the weather station is up and running, so hopefully I’ll be able to collect some slightly dodgy met data once more!

Anti-spider defences?

Although this photo doesn’t have much to do with my post, they were awesome clouds (possibly mammatus, if I squint a bit?) that I observed with my own eyes from the location of the weather station.

How can I keep spiders out of my weather station?

I hadn’t thought about this problem until spiders, and specifically their webs, stopped my rain gauge from working. This happened a few months ago, and it continues to happen regularly. The spiders attach webs to the moving parts of the rain gauge so it never tips, and if it never tips, it never registers any rainfall!

I suspect that regular cleaning is the only way to sort this out, but I am lazy and I dislike spiders, so this is an unlikely solution… The other week I removed the louvred casing to inspect the batteries (it had stopped transmitting data) and three huge spiders flew out as I slid the louvres off. It was pretty gross, as only one of the spiders remained alive past this point. The last one standing didn’t stand for much longer either. I’m pretty sure it was these spiders that sabotaged the data transmission in some arachnid conspiracy to try and stop me collecting sub-optimal, highly non-standard observations. They aren’t doing a bad job, as we had to replace the part entirely, and in the process of trying to fix it we wiped all the observations for the last 3 or 4 months. OOPS. I was rather looking forward to continuing the graph of the oddball rainfall we’ve been having this year.

So if anyone has ideas on how I can make my weather station spider-proof, PLEASE let me know!

Photo diary from flights around the Arctic

Here are a few snaps from the MAMM field campaign (July 2012). We went flying on the FAAM research aircraft, kitted out to measure many gases, aerosols, and other meteorological parameters. The main aim was to search out Arctic sources …