Sunday, April 30, 2017

Energy budget constraints on climate sensitivity in light of inconstant climate feedbacks

The recent NCC paper of that name prompted a comment. Plausible? Yes, certainly. This idea has been doing the rounds for some time. The basic idea is that the response to radiative forcing is not quite what would be predicted from a simple energy balance model with a fixed sensitivity, in which the radiative feedback depends solely on the global mean temperature anomaly; rather, the pattern of warming also affects the radiative feedback. In particular, during a warming scenario the feedback is a bit stronger (and therefore the effective sensitivity a bit lower) during the transient phase than it is at the ultimate warmer equilibrium. This happens in most (almost all) climate models, and it's certainly plausible that it applies to the real climate system too. It is the major weakness of the regression-based "Gregory method" for estimating the equilibrium sensitivity of models, in which extrapolation of a warming segment tends to lead to an underestimate of the equilibrium result. Here's a typical example of that from a paper by Maria Rugenstein:


The regression line based on the first 150 years predicts an equilibrium response of 5.4C (for 4xCO2) but the warming actually continues past 6.2C (and who knows how much further it may continue). There are numerous pics of this sort of thing floating around with multiple models showing qualitatively similar results under a range of scenarios, so this is not just an artefact of the specific experiment shown here.
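This behaviour is easy to reproduce with a toy model. Here's a minimal sketch (all numbers are invented for illustration, not taken from the paper or any real GCM): a two-timescale response to an abrupt forcing, with the slowly-responding mode having a weaker feedback. Regressing the TOA imbalance N against temperature T over the first 150 years and extrapolating to N = 0, Gregory-style, underestimates the true equilibrium warming.

```python
import numpy as np

# Toy two-timescale response to an abrupt forcing step.
# All numbers are made up for illustration.
F = 8.0                  # forcing (W/m^2), roughly 4xCO2-sized
a1, tau1 = 5.0, 4.0      # fast mode: equilibrium amplitude (K), timescale (yr)
a2, tau2 = 1.2, 400.0    # slow mode: small amplitude, centuries-long timescale
f1, f2 = 0.9, 0.1        # fraction of the forcing balanced by each mode

t = np.arange(1, 151)    # first 150 years, as in the Gregory method
T = a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))
N = F * (f1 * np.exp(-t / tau1) + f2 * np.exp(-t / tau2))  # TOA imbalance

# Gregory method: fit N = intercept + slope * T, extrapolate to N = 0
slope, intercept = np.polyfit(T, N, 1)
ecs_regression = -intercept / slope   # where the fitted line crosses N = 0
ecs_true = a1 + a2                    # the toy model's actual equilibrium warming

print(f"regression estimate: {ecs_regression:.2f} K, "
      f"true equilibrium: {ecs_true:.2f} K")
```

The fitted slope is dominated by the steep fast-mode segment, so the line crosses N = 0 well short of the true equilibrium, just as in the figure above.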

Kyle Armour has also done a fair amount of research looking at the way regional effects and warming patterns combine with regional feedbacks to generate this sort of behaviour (eg here). This new work tries to quantify this effect in order to more precisely interpret the observed 20th century temperature changes in terms of the equilibrium response. For some reason I can't get through the paywall right now to check the details but in principle it seems entirely reasonable.

Sunday, April 09, 2017

[jules' pics] The morning after the night before

If you let them keep your contact details then, as well as sending you news on their latest fundraising scheme, your Oxbridge college will invite you back for din dins every decade or so. These days it isn't entirely free, but still a good deal considering the liver damage they try to cause you. Maybe the college is hoping the experience will make us all feel more mortal and rush home and make generous legacies in our wills. Although I matriculated at two Oxbridge colleges, I've managed to miss all these college din dins, due to being in foreign, and so last night's was my first. I only really went because I hadn't been to one before, but it exceeded expectations. Twenty-five years on and it was very interesting to find out about the paths people had taken. Everyone I spoke to seems to be making good use of their talents, although ... how can a country need that many corporate lawyers? 

The dinner was dark and drunk so no decent pics, and we start this morning after the night before with breakfast at Corpus. The real sign that quarter of a century has passed is that several of the dead people on the walls are people we knew in life!
Cambers-6

Then off for a little walk to try and sober up some more.
Cambers-8 
 
On the first corner is Corpus's special toy, Clocky McClock Face
 
 
King's Parade is just next door to Corpus.
Cambers-2

Here's King's Chapel
Cambers-9

Then to the river where the early punter catches the tourists
Cambers-13

Later in the day, people were lazing in the warmth 
Cambers-14

And the blossom was in full bloom
Cambers-16

But back home, James was left holding the baby...
Fluffy


--
Posted By Blogger to jules' pics at 4/09/2017 09:27:00 PM

Monday, April 03, 2017

2:51:46

Marathon time has come round again. Jules and I decided against a trip to the EGU this year, having just recently gone to the AGU instead. So instead of Vienna Marathon, it was back to Manchester, which had kindly offered an extra discount to make up for the disasters of last year (which didn't actually affect us much as it happened). Due to a clash with some men kicking a pig's bladder around a muddy field, all hotels were getting booked up and expensive quite early on, but I still found a room for a tolerable price, this time at the Horrible Inn Media City, which actually failed to live up to its name by being rather comfortable and coping well with an influx of runners (surely not their usual clientele).


We suddenly realised as the event approached that it didn't really make much sense for jules to waste a weekend and come too: we'd already done The Lowry, and it's not that exciting to stand around for 3 hours watching hordes of unknown joggers in Manchester suburbs for the second time. So when I realised that William was also doing it I suggested sharing the room with him, which actually worked out very well as we had a good chat over dinner about the dismal state of climate science while setting up the highly scientific pasta v meat experiment (meat proved to be the winner):


We had a relaxed approach to race day: a decent lie-in, getting up for a leisurely breakfast before strolling the mile down the quayside to the start just in time to hop into the runners' pens. Bumped into Settle Harriers club-mate Fraser who was aiming to cruise round in a comfortable sub-3 (he's much faster than me overall, but wasn't trying too hard for his first road marathon) but otherwise didn't know anyone near the front.


The bag drop (finish area) was actually a fair bit further away, so we just left our stuff at the hotel to return to later. I sacrificed an old unloved t-shirt to the start line gods, don't think WMC even bothered with that as it wasn't cold. As it turned out the organisation was very good this year, with none of the problems of previous events.

I didn't have that much to go on speed-wise: a pitiful 1 sec PB at 10k (39:21) on Boxing Day, a tolerable 1:23:01 half marathon at Blackpool in Feb (though that's not really much better than last year's 1:25:40 at hilly Haweswater). But training seemed to be going ok with no illnesses or setbacks so I thought I should aim a bit higher than last year and plucked 2:50 out of thin air as a possible albeit optimistic target, at least as a starting pace.

Didn't have any arranged running company this time but got chatting to someone on the start line also aiming for 2:50. "Last year I did 2:51, went off far too fast and did the first half in 1:21 then collapsed, won't do that again". Sure enough the second we got over the start line he shot off into the distance. I went past him again at about 18 miles :-) At that point I was still going pretty well and was perfectly on course for my target, but then the 21st mile marker took a long time to arrive and my legs started to hurt, and there was still quite a long way to go from there. So I lowered my sights to just getting round the rest as comfortably as possible, and didn't worry too much about the seconds that were slipping away, at first a trickle, but a flood by the end. Can't be too disappointed with the final result of 2:51:46 which is a three minute improvement on last year, though actually a random sampling of strava results suggests that part of this is due to a shorter (but still legal) course. In a way it's a relief that I'm not close enough to 2:45 for this (which would gain entry to the Championship at the London Marathon) to be a serious target, at least for the time being. There are things I could have done a little better for sure (like a few slightly longer runs), but I don't really see where 7 mins improvement could come from.

Food-wise, I had a chunk of home-made Kendal mint cake and a couple of jelly babies every 2nd water station, at least until near the end at which point I couldn't really face any more. Took the water each time, as much to splash on my hat as to sip. It was a touch warmer than most of my training, but otherwise perfect conditions. Alcohol-free beer at the end was very gratefully received though it took a little while to sip. Hung around in the finish area long enough to pick up a second pint for the stroll/hobble back to hotel. Club-mate came in at 2:56 looking very relaxed. Don't think I'll beat him again!

Saturday, April 01, 2017

BlueSkiesResearch.org.uk: Independence day

We all know by now that Brexit means brexit. However, it is not so clear whether independence means independence or perhaps something else entirely. This has been an interesting and important question in climate science for at least 15 years and probably longer. The basic issue is, how do we interpret the fact that all the major climate models, which have been built at various research centres around the world, generally agree on the major issues of climate change? I.e., that the current increase in CO2 will generate warming at around the rate of 0.2C/decade globally, with the warming rate being higher at high latitudes, over land, at night, and in winter. And that this will be associated with increases in rainfall (though not uniformly everywhere – in fact the increases being focussed on the wettest areas, with many dry areas becoming drier). Etc etc at various levels of precision. Models disagree on the fine details but agree on the broad picture. But are these forecasts robust, or have we instead merely created multiple copies of the same fundamentally wrong model? We know for sure that some models in the IPCC/CMIP collection are basically copies of other models with very minor changes. Others appear to differ more substantially, but many common concepts and methods are widely shared. This has led some to argue that we can’t really read much into the CMIP model-based consensus as these models are all basically near-replicates and their agreement just means they are all making the same mistakes.

While people have been talking about this issue for a number of years, it seems to us that little real progress has been made in addressing it. In fact, there have been few attempts to even define what "independence" in this context should mean, let alone how it could be measured or how some degree of dependence could be accounted for correctly. Many papers present an argument that runs roughly like this:
  • We want models to be independent but aren’t defining independence rigorously
  • (Some analysis of a semi-quantitative nature)
  • Look, our analysis shows that the models are not independent!
Perhaps I’m not being entirely fair, but there really isn’t a lot to get your teeth into.
 
We’ve been pondering this for some time, and have given a number of presentations of varying levels of coherence over the last few years. Last August we finally managed to write something down in a manner that we thought tolerable for publication, as I wrote about at the time. During our trip to the USA, there was a small workshop on this topic which we found very interesting and useful, and that together with reviewer comments helped us to improve the paper in various ways. The final version was accepted recently and has now appeared in ESD. Our basic premise is that independence can and indeed must be defined in a mathematically rigorous manner in order to make any progress on this question. Further, we present one possible definition, show how it can be applied in practice, and what conclusions flow from this.

Our basic idea is to use the standard probabilistic definition of independence: A and B are independent if and only if P(A and B) = P(A) x P(B). In order to make sense of this approach, it has to be applied in a fundamentally Bayesian manner. That is to say, the probabilities (and therefore the independence or otherwise of the models) are not truly properties of the models themselves, but rather properties of a researcher’s knowledge (belief) about the models. So the issue is fundamentally subjective and depends on the background knowledge of the researcher: A and B are conditionally independent given X if and only if P(A and B given X) = P(A given X) x P(B given X). Depending on what one is conditioning on, this approach seems to be flexible and powerful enough to encapsulate in quantitative terms some of the ideas that have been discussed somewhat vaguely. For example, if X is the truth, then we arrive at the truth-centred hypothesis that the errors of an ensemble of models will generally cancel out and the ensemble mean will converge to the truth. It’s not true (or even remotely plausible), but we can see why it was appealing. More realistically, if X is the current distribution of model outputs, and A and B are two additional models, then this corresponds quite closely to the concept of model duplication or near-duplication. If you want more details then have a look at the paper.
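The distinction between marginal and conditional independence is easy to see in a toy numerical example (invented numbers, nothing to do with the actual paper's calculations): let two "models" A and B each be noisy copies of a shared background state X, standing in for common assumptions or shared code. Marginally the models look dependent, but conditional on X the factorisation holds exactly.

```python
from itertools import product

# Shared state X, and two models A, B that each copy X correctly
# with probability 0.8 (toy numbers only).
p_x = {0: 0.5, 1: 0.5}                                     # prior over X
p_a_given_x = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}   # P(A | X)
p_b_given_x = p_a_given_x                                  # B: same structure

# Joint P(X, A, B), built assuming A and B are independent given X
joint = {(x, a, b): p_x[x] * p_a_given_x[x][a] * p_b_given_x[x][b]
         for x, a, b in product([0, 1], repeat=3)}

# Marginally, A and B are NOT independent:
p_a1 = sum(v for (x, a, b), v in joint.items() if a == 1)        # 0.5
p_b1 = sum(v for (x, a, b), v in joint.items() if b == 1)        # 0.5
p_a1b1 = sum(v for (x, a, b), v in joint.items() if a == b == 1) # 0.34
print(f"P(A,B) = {p_a1b1:.2f}  vs  P(A)P(B) = {p_a1 * p_b1:.2f}")

# But conditioned on X = 1, P(A and B | X) = P(A | X) x P(B | X):
p_x1 = sum(v for (x, _, _), v in joint.items() if x == 1)
p_a1b1_x1 = joint[(1, 1, 1)] / p_x1
p_a1_x1 = sum(joint[(1, 1, b)] for b in (0, 1)) / p_x1
p_b1_x1 = sum(joint[(1, a, 1)] for a in (0, 1)) / p_x1
assert abs(p_a1b1_x1 - p_a1_x1 * p_b1_x1) < 1e-12
```

Here the "dependence" between the models is entirely a reflection of the researcher's uncertainty about the shared state X; once X is known, agreement between A and B carries no extra information, which is the intuition behind conditioning on the current distribution of model outputs.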

Anyway, we don’t expect this to be the last word on the subject, though it may be the last we say about it for some while as we are planning on heading off in a different direction with our climate research for the coming months and perhaps years.