Ahhh, it’s the most magical time of year, with Christmas carols, and presents, and Love, Actually and It’s a Wonderful Life! That’s right, Christmas! Well, it is in CGD Europe, where our pandemic-delayed Christmas party has been reorganised for today, and frankly, the ambience is much, much more my kind of thing. It’s a very sensible 31 degrees Celsius (that’s about 88 for the Americans in the audience), there’s no risk of snow, and nary a caroller in sight. It reminds me of my very Christmassy routine when I lived in East Africa: drive somewhere remote and inaccessible on the 22nd and spend a week birding. My only concession to modernity, in alternate years, was finding a place with an internet connection to follow the Boxing Day Ashes test. Low chance of birding in the City of London (though you’d be surprised), but I’m happy to have a warm Christmas again. And another side-effect: a slightly earlier than usual links round-up.
I’m not going to give you any warm-up this week. We start right in the weeds, with this very good interview of Andrew Gelman by the economist Noah Smith, largely about what constitutes good (and bad) statistical practice. There are two things I really like about this interview. First, it’s not framed as ‘use this method and not that method’; virtually all of Gelman’s advice is about how we think about the data and models we are using, and how we think critically about the information we feed in and the conclusions we extract after our investigation of the data. And secondly, this line here, early on: “I don’t know if I want researchers to be more careful! It’s good for people to try all sorts of ideas with data collection and analysis without fear… What’s important is not to try to avoid error but rather to be open to criticism and to learn from our mistakes.” I really like this. A lot of researchers have a vague fear that we’re doing the wrong thing, or not using the most up-to-date methods. But really the best way of finding the right thing, and the right method, is to give it your best shot, listen carefully to criticism and then improve it. It’s that last step that is so often missing. Sometimes, after a point, a researcher decides that their paper is right, and their question is settled, and after that the remaining task is to defend it like River Tam fighting the Reavers. That’s not what science should look like.
A number of related links this week: Noah himself had a very good substack on the dodgy statistics used in advocacy, focusing on a few run by Oxfam recently. What’s very good about this is that it applies a lot of what Gelman talks about in the previous link. It’s not about better methods, just thinking critically about whether things make sense: the data, the assumptions behind their transformation, the method and so on. Good statistics is, in large part, about curiosity. An FT piece by Stephen Cutts mounts a similar attack on ODA statistics, something I and my ex-colleague Euan Ritchie have written a lot about. And while we’re thinking about what to measure to tell us what, here’s Jayati Ghosh with four alternatives to GDP. And—Michael Woolcock paper alert!—a wonderful new paper by Kate Bridges and Woolcock on measuring what matters. It is, unsurprisingly, excellent. Also: a timeline of how the most important statistical ideas of the last half-century or so have evolved, paper-by-paper.
The new Bridge academies RCT is out (with a blockbuster set of authors), and finds some very impressive learning gains, which are getting a fair amount of attention on Twitter. In the spirit of making sure we are measuring (all of) what matters, I recommend reading it alongside Susannah Hares’ thread about some of the non-learning outcomes observed.
I very much enjoyed this Tim Harford article about the economics of authenticity. It’s full of stories that are both absurd and fully believable. I tend to think that in most domains authenticity is an overrated quality. I don’t care, for example, whether my food is ‘authentic’ (to whom? In what way?), but whether it is tasty; the premium we are willing to pay for authenticity (that restaurant where we know the back story of the chef, and their deep roots in that cuisine, for example) is, I think, better understood as an emotional connection to what we’re buying, rooted in storytelling and narrative. Authenticity isn’t the source of economic value, but it’s a way of creating something akin to sentimental value. And if that sentimental value is widely shared, it increases the worth of that good to many other people, as well as yourself.
Really excellent long-read from NPR on the racial wealth gap in the United States, drawing on the research of (and conversations with) Ellora Derenoncourt. Highly recommended.
Berk Ozler dives deep into the new Blattman et al. paper on the long-run effects of cognitive behavioural therapy for high-risk men in Liberia, which I linked to recently. It’s very much worth reading after you’ve looked at the paper.
Lastly, for whatever reason it saw fit, The Ringer ran a deep dive into the greatest ten-minute sequence in cinema history: the opening of Up. Even reading about it is an emotional experience. And because it’s been a gruelling links round-up, starting with a host of statistics and ending with the most reliably devastating sequence in cinema history, here’s something to cheer you up to finish with: Vladimir Nabokov’s incredibly catty assessments of other authors. While he is completely correct about Dostoyevsky (“a cheap sensationalist, clumsy and vulgar”), I was quite surprised that he so roundly missed the point of William Faulkner and so loved Salinger (whom I love, particularly for the short stories, but assumed Nabokov would look down upon). He also absolutely eviscerates Camus (“A nonentity, means absolutely nothing to me.”) This makes economists seem positively gentle…
Have a great weekend, everyone!
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.