Tell a story. No, not that one.

14 12 2010

I discovered Sheena Matheiken’s Uniform Project while reading the colour supplements this weekend, and was hooked. I love Matheiken’s inventive sense of fashion, I won’t lie, but I also love how she took a difficult, complex agenda and hooked it into such a simple narrative.

Sheena Matheiken, The Uniform Project


It’s not easy being green

5 07 2010

Here’s an article I wrote for Cambridge University’s science magazine, BlueSci, about why it’s hard to use science to change behaviour.

Imagine you’re a doctor, trying to influence your patients to change their diet and lifestyle, or get them to come to see you more quickly when they feel ill. As a medical professional, you have a great deal of knowledge drawn from research about how to protect and improve people’s health, and your job is to decide how to communicate it to the benefit of your patients. Should you try to scare them with the potential consequences of not changing their ways? Should you give them a balanced view of the pros and cons? Should you tell them it’s harder than they think, or easier? You might be surprised by the answer.

Scientific research doesn’t happen in isolation. It takes place, usually, because we want to find something out and do something with the result. The sum total of research filters slowly into our culture over time, and new developments often change our laws, our culture, and our behaviour. Sometimes these changes are beneficial; sometimes not – and sometimes public reactions are strong enough to halt or delay research in entire fields, such as the genetic modification of food or stem cell research. Whenever research shows that a change in behaviour, or in the standards set by society, is needed, the change is almost always slow and painful, no matter how good the evidence for it is.

Let’s take a historical example first. As the science of automobiles developed, so did the science of safety. It was Robert Strange McNamara, working at the Ford Motor Company in the 1950s, who first suggested that the safety of people in cars could be greatly improved if they wore a simple restraining belt, and who ran the crash test research to prove it. The seatbelt is one of the simplest and most universally beneficial ideas possible – cheap and easy to fit, it reduces the risk of death in a crash by up to 70%, with no real downside to wearing one. Seatbelts, though, were not an easy sell, with “You think I’m a dangerous driver?” a common response. It took nearly forty years, and legal enforcement in most countries, for seatbelt use to become the norm. The scientific evidence for the safety and benefits of seatbelts was simple and compelling; but it had to overcome the barriers of habit in the public’s mind before it created real and lasting change in behaviour.

The most obvious modern example of research intended to influence the behaviour of the public is global warming. As data are compiled on the warming or cooling of the earth, they rapidly make their way into the headlines, and into the work of action groups that want us to change our behaviour and cut our energy consumption. The typical reaction is to present the data as a scare story: “The Earth is warming more rapidly – we must respond faster”. This tactic has the advantage of being simple and intuitive – presenting the scientific data about impact gives us a straightforward reason to change our actions before we have to deal with the anticipated consequences. Unfortunately, what this strategy doesn’t take into account is the emotional response to the science. Most people are scared by the prospect of global warming, and on top of the natural barriers to behaviour change, fear interferes with the absorption of information.

In the 1970s, college students participated in an experiment designed to persuade them to go and receive a TB vaccination. Some students received scary information about the possible consequences of TB; others received much blander information about the benefits of the vaccine. The students who received the scary information turned out to be less likely to get vaccinated than those who received the bland version – probably because when something scares us, we are just as likely to deny and minimise the risk as to do something about it. The factor that did increase the students’ likelihood of vaccination was whether or not the information they received contained a map of the campus health centre and its opening hours. This was not new information – they had been living on campus for at least two years. But a simple guide to how they might act to minimise their risk changed their behaviour in a way that trying to scare them did not. People’s reactions to information, and how they shape their behaviour based on it, are not as simple as we might think.

A standout example of how the representation of science can go seriously wrong is the MMR vaccine scare. When Dr. Andrew Wakefield called a press conference in 1998, linking MMR to autism and suggesting single vaccines instead, perhaps even he did not expect the size of the response. Although there is not, and never truly has been, any scientific evidence reliably linking the MMR vaccine to autism, the story was a compelling one: an apparent rapid rise in a mysterious neurological condition, linked to something given to children to protect them. Even though the NHS, the government, the Royal College of General Practitioners, and others came out with information designed to reassure, the association between the two gathered strength in the public’s mind, and vaccination rates began to drop. More than ten years later, there has been some recovery, but vaccination is still not at pre-1998 levels, and may take some time to get back there. An association, once created, is not easily broken, whatever the most up-to-date evidence might actually say, and even the huge weight of evidence in support of MMR was not enough to overcome the powerful idea that there might be a link.

So how does a doctor or a global warming researcher influence people to change their behaviour on the basis of evidence? A multitude of experiments have shown that the most effective strategy combines two elements: it gives a simple road map of what to do (like the students’ map of the campus health centre), and it creates the impression that the action suggested is the only socially acceptable one. It turns out people are far more likely to recycle their rubbish if they think most people on their street are doing the same, rather than because of any personal commitment to the environment. It’s never easy to change your behaviour – even if you know all the evidence – so, next time you can’t stop yourself eating too much chocolate, don’t feel too bad about it. Then try making friends with some salad-eaters.

Better off with CVs?

17 02 2010

Just time for a brief post today, but this is a topic that fascinates me. Have you ever made a really big hiring mistake? I know I have, and it’s given me a strong interest in how we actually know that our selection methods are any good.

The common-or-garden job interview? It’s not very good – in fact, some studies show better results from judging strictly by CVs, with no face-to-face contact whatsoever. But there are things that can improve the interview – check out this 1994 paper to find out more.

The Scientist’s Toolkit: Know your trend

10 02 2010

“Let me introduce you to a radical and highly complex, story-wrecking mathematical insight. Ready? Numbers go up and down.”

Another very educational piece on why stats can go wonky, from the BBC’s Go Figure series. Michael Blastland looks at the fluctuations in teenage pregnancy rates on the Scottish island of Orkney, which, like the Hawthorne effect, show some of the dangers of making a story out of what we know.

Looking at the annual figures for teenage pregnancy in Orkney, we can see one of the problems with our tendency to make stories from our data. The long-term annual view shows the figures fluctuating constantly, but with no real overall trend one way or the other. If you only look at the figures from, say, 2002 onwards, though, you see a peak followed by a clear decline. Obviously, this must be due to the heroic actions of health workers on Orkney, whose interventions halved the teen pregnancy rate between 1994 and 2006.

All this is great, of course, until you review the figures again at the end of 2007, and discover that they’ve cycled right back to their 1994 peak. (Incidentally, the first graph Blastland shows is one of the most beautifully misleading pieces I’ve ever seen – an excellent example of how you can torture your data until it confesses to anything.)
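This cherry-picking effect is easy to reproduce for yourself. In a run of pure noise with no trend at all, a window that starts at the highest year on record will, by construction, show a “decline”. Here’s a minimal sketch in Python – the numbers are invented for illustration, not Orkney’s actual figures:

```python
import random

random.seed(42)

# Twenty years of a purely random "rate" -- no trend, no intervention.
rates = [random.gauss(mu=30, sigma=8) for _ in range(20)]

# Cherry-pick: start the window at the highest year on record.
peak_year = rates.index(max(rates))
window = rates[peak_year:]

# The "decline" is guaranteed: everything after the peak is lower,
# so the window's average must sit at or below its starting value.
print(f"Peak value: {window[0]:.1f}")
print(f"Average over the window: {sum(window) / len(window):.1f}")
```

Run it with any seed you like: whichever year happens to be the peak, the figures that follow it look like an improvement, purely because of regression to the mean.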

Here’s the thing: data are always “noisy”. There are hundreds, if not thousands, of factors you simply can’t control or account for at any given time, and they make the data fluctuate randomly up and down. Teenage pregnancy, for instance, shows a seasonal variation: teenagers are most likely to get pregnant at the end of the school year, probably because they’re having more sex on account of the warm weather and lack of schoolwork. If you only look at a short period of time, it’s easy to be convinced that the data show an overall upward or downward trend… but you’ve really got to take the long view to make sure this isn’t simply random variation, or “noise”. The more data you have, the less vulnerable they are to random fluctuations – take a look at the line representing the whole of Scotland, for instance, which shows some minor variations but is much flatter overall. (We call this the law of large numbers.)
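You can see the law of large numbers in a quick simulation. Below is a rough sketch (the rate and population sizes are invented for illustration, not real pregnancy statistics): two communities share exactly the same underlying rate, but the small one’s observed yearly rate swings far more wildly than the large one’s.

```python
import random
import statistics

random.seed(1)

TRUE_RATE = 0.05   # the same underlying probability in both communities
YEARS = 200

def observed_rates(population: int) -> list[float]:
    """Yearly observed rate: each person independently 'counts' with
    probability TRUE_RATE. Smaller populations fluctuate more."""
    return [
        sum(random.random() < TRUE_RATE for _ in range(population)) / population
        for _ in range(YEARS)
    ]

small = observed_rates(100)      # an Orkney-sized community
large = observed_rates(10_000)   # more like the Scotland line

print(f"Small community, year-to-year std dev: {statistics.stdev(small):.4f}")
print(f"Large community, year-to-year std dev: {statistics.stdev(large):.4f}")
```

The large community’s line hugs the true rate; the small community’s bounces all over the place – exactly the difference between the Orkney and Scotland lines on Blastland’s graph.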

If you really think your data (teenage pregnancies, sales, salaries) are showing an overall trend… make sure you’re taking a long view. Are there seasonal fluctuations you haven’t taken into account? Anomalous weather? What was happening in the economy at the time – are you comparing it to the right things? These things matter.

Is it worth it (financially) to go to uni?

6 02 2010

As someone who employs new graduates, I wonder about this question a lot. It’s a popular one for blog posts and media stories; Penelope Trunk is firmly on the “con” side of education for education’s sake. The BBC offers some rather more unconventional reasons (you might meet a spouse with good middle-class earning potential!), but with costs and graduate unemployment rising in both the US and the UK, and the number of places likely to face a crunch, I think it’s worth considering soberly just what you’re likely to get out of it.

I was prompted to make this post by this Times Online article, which argues that so-called “Mickey Mouse degrees” like golf course management and brewing are in fact a smart bet, leading to profitable careers because they prepare you directly for a particular sector. The article also trots out the usual “a degree is worth X thousand pounds over a lifetime” figures. As ever, the top-earning degrees are comfortably medicine and law, with engineering and modern languages trailing some way behind.

But I can’t help wondering whether this actually proves anything. The single greatest predictor of success – both at university and in a career – is intelligence. Medicine and law courses are highly sought-after, and they lead to immensely challenging careers, intellectually. Universities have enough competition for places to set the entry bar extremely high and pick and choose whom they accept, which makes the degree, in some respects, just a “surrogate marker” for intelligence. If these courses were effectively open entry, I wonder whether the graduate earning potential would be somewhat diluted. It certainly helps, though, that vocational courses directly give you the skills you’ll need on graduation, rather than instilling “soft skills”. While I want to hire, and work with, people who can think intelligently and critically, I have yet to come across a university whose courses do much to instil the kinds of soft skills that are crucial when first starting work; one of my key jobs as a graduate manager is to give new hires a crash course in those skills, and quickly. The social and interpersonal skills tested by any job are very different from the skills imparted by your average undergraduate course.

I suspect that the success of some of these “Mickey Mouse courses” (and I don’t agree with that name) comes not only from their vocational nature, but also from the fact that the students on them are determined and focused enough to dedicate themselves to something very specific – and I further suspect that this determination and focus starts before the degree does. In the absence of a study that controls for intelligence, it’s hard to tell how much is really added by the degree itself. But my advice for seventeen- and eighteen-year-olds, based on my experience at work, would be this: if you aren’t sure what you want to do after university, and you aren’t passionate about any particular subject, and you aren’t sure you’ll get into a top university – strongly consider working for a year or two, and thinking about it.

The Scientist’s Toolkit: Check your prejudices.

2 02 2010

Some things make me sad. Some things make me angry. This particular story makes me both, though in all fairness, Aaron Sell’s anger is both more justified and more righteous than mine.

For those of you who missed the blog kerfuffle: Aaron Sell, a psychologist at the Center for Evolutionary Psychology, recently published a study of aggression suggesting that individuals who perceive themselves to be stronger, or more attractive, are more likely to behave aggressively. This research was picked up by the Sunday Times and published as an article titled “Blonde women born to be warrior princesses“.

It’s hard to know where to start with all the things that are wrong with this. Sell’s research did not refer to blondes at all. In his subsequent angry letter to the Times, Sell details how the journalist, John Harlow, told him he was writing a piece about blondes and asked whether blondes exhibited more anger. Sell pointed out that his work didn’t look at hair colour at all, but agreed to re-analyse the data on that basis. He found no link between hair colour, entitlement and aggressive behaviour, and told Harlow so. Harlow’s article subsequently appeared, not only claiming that “blondes are more aggressive and more determined to get their own way”, but attributing some completely outrageous and utterly fabricated quotes directly to Sell. “This is southern California – the natural habitat of the privileged blonde”?

I’d really like to believe that this was a one-off, but it’s hard to. It’s clear that Harlow had the story already written in his mind, and chose not to let the lack of actual facts get in his way. There’s been some online coverage of this egregious example of reporting (try here and here), and some discussion of the role of a responsible press in not fabricating stories and quotes from whole cloth in defiance of the evidence (can you tell this bothers me?). But I actually think the real lesson is slightly different.

Newspapers, on the whole, find it far more convenient to tell us what we already believe – changing people’s minds is time-consuming and difficult, and they don’t like it much. We’re all disposed to seek out and overvalue information that confirms the beliefs we already have (confirmation bias) – some nifty studies have been done on the phenomenon. Harlow’s article panders shamelessly to our prejudices and our stereotypes. It’s a bit controversial, but not so much so that we can’t secretly, lazily, accept it as true, because it ties in with some of our other social shortcuts. This is why we do science: because we can’t fully trust our brains to evaluate evidence effectively when we already hold beliefs on a topic. We will always be inclined to seek out and accept the information that confirms what we already believe – it’s so much easier than re-evaluating those beliefs.

I don’t know about all of you, but when I’m reading the paper from now on, I’m going to evaluate very carefully how any story reporting a study plays to my prejudices. Because if it does, I need to be extra, extra careful before I accept any part of it. And since the Times has refused to print Aaron Sell’s letter, or to alter or remove the original article, please help make it up to him by reading his excellent original research.

Mind Over Matter

27 01 2010

My article on why it’s so hard to learn science and maths is featured in the 17th issue of BlueSci, Cambridge University’s science magazine.

Download the PDF here.

