It’s not easy being green

5 07 2010

Here’s an article I wrote for Cambridge University’s science magazine, BlueSci, about why it’s hard to use science to change behaviour.

Imagine you’re a doctor, trying to influence your patients to change their diet and lifestyle, or get them to come to see you more quickly when they feel ill. As a medical professional, you have a great deal of knowledge drawn from research about how to protect and improve people’s health, and your job is to decide how to communicate it to the benefit of your patients. Should you try to scare them with the potential consequences of not changing their ways? Should you give them a balanced view of the pros and cons? Should you tell them it’s harder than they think, or easier? You might be surprised by the answer.

Scientific research doesn’t happen in isolation. It takes place, usually, because we want to find something out and do something with the result. The sum total of research filters slowly into our culture over time, and often, new developments change our laws, our culture, and our behaviour. Sometimes these are beneficial changes; sometimes not – and sometimes, public reactions are strong enough to halt or delay research in certain fields, like genetic modification of food or stem cell research. Whenever research shows that a change in behaviour, or in the standards set by society, is needed, the change is almost always slow and painful, no matter how good the evidence is for the change.

Let’s take a historical example first. As the science of automobiles developed, the science of safety developed alongside it. It was Robert Strange McNamara, working at the Ford Motor Company in the 1950s, who first suggested that the safety of people in cars could be greatly increased if they wore a simple restraining belt, and did the crash test research to prove it. The seatbelt is one of the simplest and most universally beneficial ideas possible – cheap and easy to fit, it reduces the risk of death in a crash by up to 70%, with no downside to wearing one. Seatbelts, though, were not an easy sell, with “You think I’m a dangerous driver?” a common response. It took nearly forty years, and legal enforcement in most countries, for seatbelt use to become the norm. The scientific evidence for the safety and benefits of seatbelts was simple and compelling; but they had to overcome the barriers of habit in the public’s mind to create real and lasting change in behaviour.

The most obvious modern example of research intended to influence the behaviour of the public is global warming; as data are compiled on the warming or cooling of the earth, they rapidly make their way into the headlines, and into the work of action groups that want us to change our behaviour and cut our energy consumption. The typical reaction to these data is to present them as a scare story: “The Earth is warming more rapidly – we must respond faster”. This tactic has the advantage of being simple and seeming intuitive – being presented with the scientific data about impact gives us a simple reason to change our actions before we have to deal with the anticipated consequences. Unfortunately, what this strategy doesn’t take into account is the emotional response to the science. Most people are scared by the prospect of global warming, and as well as the natural barriers to behaviour change, fear interferes with the absorption of information.

In the 1970s, college students participated in an experiment designed to persuade them to go and receive a TB vaccination. Some students received scary information about the possible consequences of TB; others received much blander information about the benefits of the vaccine. But the students who received the scary information were less likely to get vaccinated than those who received the blander material, probably because when something scares us, we are just as likely to deny and minimise the risk as we are to try and do something about it. The factor that did increase the students’ likelihood of vaccination was whether or not the information they received contained a map of the campus health centre and its opening hours. This was not new information to the students – they had been living on campus for at least two years. But giving them a simple guide to how they might take action to minimise their risk changed their behaviour in a way that trying to scare them did not. People’s reactions to information, and how they shape their behaviour based on it, are not as simple as we might think.

A standout example of how the representation of science can go seriously wrong is the MMR vaccine scare. When Dr. Andrew Wakefield called a press conference in 1998, linking MMR to autism and suggesting single vaccinations instead, perhaps even he did not expect the size of the response. Although there is not, and never truly has been, any scientific evidence reliably linking the MMR vaccine to autism, the story was a compelling one – an apparent rapid rise in a mysterious neurological condition, linked to something given to children to protect them. Even though the NHS, the government, the Royal College of General Practitioners, and others came out with information designed to reassure, the association between the two gathered strength in the public’s mind, and vaccination rates began to drop. Ten years later, there has been some recovery, but vaccination is still not at pre-1998 levels, and may take some time to get back there. An association, once created, is not easily broken, whatever the most up-to-date evidence might actually say, and even the huge weight of evidence in support of MMR was not enough to overcome the powerful idea that there might be a link.

So how does a doctor or a climate researcher influence people to change their behaviour on the basis of evidence? A multitude of experiments have shown that the most effective strategy combines two elements: it gives a simple road map of what to do (like the students’ map of the campus health centre), and it creates the impression that the suggested action is the only socially acceptable one. It turns out people are far more likely to recycle their rubbish if they think most people on their street are doing the same, rather than because of any personal commitment to the environment. It’s never easy to change your behaviour – even if you know all the evidence – so, next time you can’t stop yourself eating too much chocolate, don’t feel too bad about it. Then try making friends with some salad-eaters.





The Scientist’s Toolkit: Know your trend

10 02 2010

“Let me introduce you to a radical and highly complex, story-wrecking mathematical insight. Ready? Numbers go up and down.”

Another very educational piece about why stats can go wonky, from the BBC’s Go Figure series. Michael Blastland looks at the fluctuations of teen pregnancy on the Scottish island of Orkney, which, like the Hawthorne effect, shows some of the dangers of making a story out of what we know.

Looking at the annual figures for teenage pregnancy in Orkney, we can see one of the problems with our tendency to make stories from our data: the long-term annual view shows the data fluctuating constantly, but there’s not much of an overall trend one way or the other. But if you only look at the figures from, say, 2002 onwards, you see a peak followed by a clear decline. Obviously, this is due to the heroic actions of health workers on Orkney, taking action to halve the teen pregnancy rate overall between 1994 and 2006.

All this is great, of course, until you review the figures again at the end of 2007, and discover that they’ve cycled right back to their 1994 peak. (Incidentally, the first graph Blastland shows is one of the most beautifully misleading pieces I’ve ever seen. An excellent example of how you can torture your data until it confesses to anything).

Here’s the thing: data are always “noisy”. There are hundreds, if not thousands, of factors you simply can’t control or account for at any given time, and they will make the data fluctuate randomly up and down. Teenage pregnancy, for instance, shows a seasonal variation: teenagers are most likely to get pregnant at the end of the school year, probably because they’re having sex more on account of the warm weather and lack of schoolwork. If you only look at a short period of time, it’s easy to be convinced that the data show an overall upward or downward trend… but you’ve really got to take the long view to make sure this isn’t simply random variation, or “noise”. The more data you have, the less vulnerable they are to random fluctuations – take a look at the line representing Scotland, for instance, which shows some minor variations but is much flatter overall. (We call this the law of large numbers.)
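The short-window trap is easy to demonstrate for yourself. Here’s a minimal sketch – my own illustration, not anything from Blastland’s piece – using only Python’s standard library: a series that is pure noise around a constant rate, with no real trend at all, where a five-year window can still show a convincing slope.

```python
import random
import statistics

def slope(values):
    """Least-squares slope of values against their index (a simple trend estimate)."""
    n = len(values)
    xs = range(n)
    x_mean = statistics.mean(xs)
    y_mean = statistics.mean(values)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

random.seed(42)
# 20 "years" of pure noise around a constant rate -- no underlying trend at all.
series = [50 + random.gauss(0, 10) for _ in range(20)]

full_trend = slope(series)          # the long view: close to zero
window_trend = slope(series[5:10])  # a 5-year window: can look like a steep rise or fall

print(f"trend over all 20 years: {full_trend:+.2f} per year")
print(f"trend over years 5-9:    {window_trend:+.2f} per year")
```

Run it a few times with different seeds and you’ll find some five-year windows that “clearly” rise and others that “clearly” fall – all from the same trendless noise.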

If you really think your data (teenage pregnancies, sales, salaries) are showing an overall trend… make sure you’re taking a long view. Are there seasonal fluctuations you haven’t taken into account? Anomalous weather? What was happening in the economy at the time – are you comparing it to the right things? These things matter.
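One rough way to check a suspected trend against seasonality – again a sketch of mine, with made-up monthly figures rather than anything from the article – is to smooth the series with a moving average spanning a whole year, so that within-year cycles cancel out before you judge the direction of travel:

```python
def moving_average(values, window=12):
    """Smooth out within-year seasonality by averaging over a full-year window."""
    if window > len(values):
        raise ValueError("need at least one full window of data")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

# Hypothetical monthly figures: flat overall, but with a summer bump every year.
monthly = [10, 10, 11, 12, 14, 16, 18, 17, 14, 12, 11, 10] * 3

smoothed = moving_average(monthly)
# Each smoothed point covers a full year, so the seasonal bump cancels out;
# with these exactly repeating figures, every smoothed value equals the annual mean.
print(min(smoothed), max(smoothed))
```

If the smoothed line is flat while the raw figures swing up and down, you’re looking at seasonality, not a trend.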





The Scientist’s Toolkit: Check your prejudices

2 02 2010

Some things make me sad. Some things make me angry. This particular article makes me both, but in all fairness, Aaron Sell’s anger is both more justified and more righteous.

For those of you who have missed the blog kerfuffle, Aaron Sell, a psychologist at the Center for Evolutionary Psychology, recently published an article studying aggression and suggesting that individuals who perceive themselves to be stronger, or more attractive, are more likely to behave aggressively. This research was picked up and published by the Sunday Times as an article titled “Blonde women born to be warrior princesses”.

It’s hard to know where to start with all the things that are wrong with this. Sell’s research did not refer to blondes at all. Sell details, in his subsequent angry letter to the Times, how the journalist, John Harlow, told him he was writing a piece about blondes, and asked him whether blondes exhibited more anger. Sell pointed out that his work didn’t look at hair colour at all, but agreed to re-analyse the data on this basis. He found no link between hair colour, entitlement, and aggressive behaviour, and told Harlow so. Harlow’s article subsequently appeared, not only claiming that “blondes are more aggressive and more determined to get their own way”, but attributing some completely outrageous and utterly fabricated quotes directly to Sell. “This is southern California – the natural habitat of the privileged blonde”?

I’d really like to believe that this was a one-off, but it’s hard to. It’s clear that Harlow had the story already written in his mind, and chose not to let the lack of actual facts get in his way. There’s been some online coverage of this egregious example of reporting (try here and here) and some discussion of the role of a responsible press in not fabricating stories and quotes from whole cloth in defiance of the evidence (can you tell this bothers me?). But I actually think the real lesson is slightly different.

Newspapers, on the whole, find it far more convenient to tell us what we already believe – changing people’s minds is time-consuming and difficult, and they don’t like it much. We’re all disposed to seek out and overvalue information that confirms the beliefs we already have (confirmation bias) – some nifty studies have been done on the phenomenon. Harlow’s article panders shamelessly to our prejudices and our stereotypes. It’s a bit controversial, but not so much so that we can’t secretly, lazily, accept it as true because it ties in with some of our other social shortcuts. This is why we do science: because we can’t fully trust our brains to evaluate evidence effectively when we already have beliefs on a topic. We will always be inclined to seek out and accept the information that confirms what we already believe – it’s so much easier than re-evaluating those beliefs.

I don’t know about all of you, but when I’m reading the paper from now on, I’m going to very carefully evaluate any story reporting a study on how it plays to my prejudices. Because if it does, I need to be extra, extra careful before I accept any part of it. And since the Times has refused to print Aaron Sell’s letter, or alter or remove the original article, please help make it up to him by reading his excellent original research.





Mind Over Matter

27 01 2010

My article on why it’s so hard to learn science and maths is featured in the 17th issue of BlueSci, Cambridge University’s science magazine.

Download the PDF here.





Gender bias is dead. Long live gender bias.

21 12 2009

Women’s lib is dead. Positive discrimination is right out. We’ve won all of our battles for equality. Right? If women aren’t in the boardroom, it’s because they’re choosing not to be – not to work the hours, not to take the stress. Or it’s something inherent to women’s work behaviour. They don’t push. They say “I’m grateful to have a job”, when they should be saying, “I am the linchpin of this organization. Up the offer or I walk”.

No, the one thing I think it’s not OK to say is that women might not get to the top of organizations because we are still subconsciously far harder on them than we are on men. All of us. I’ve often wondered if a man who walked and talked and acted the exact same way as I did would ever get told he was “abrupt”, or “not a team player”. I’ve often wondered if the same assumptions would be made about this hypothetical him. I have, needless to say, suspected that they would not.

In the spirit of my scientific credentials, obviously, I can’t make a statement like that without testing it. And the only way to test something like this is in a controlled trial. And there is a way to run a controlled trial remotely – say, online. What would happen if two people presented themselves, produced work, and were judged over a period of time, all exactly the same, except that one was a man and one was a woman?

James Chartrand knows. The story of how a female writer came to work primarily under a male pseudonym, because the same work got more bids, better pay, and more respect, is fascinating and depressing. I wish I could believe that this was unusual. I really do. The people who paid more for “James’s” work than that of a female writer, and praised it more highly, almost certainly had no idea that gender was a factor in how they responded. How can there be equality in the workplace when we still understand our own brains, the filters through which we see and judge people, so poorly?





Yes, your team really do need to concentrate

8 12 2009

A personal vindication for me, this one: Pop-ups and email alerts significantly slow down work by breaking our concentration.

(Source: Wales Online; original study from Cardiff University.)

I’ve often wondered why it isn’t more acceptable to simply turn off email and the Blackberry when you need to concentrate on something. You’ll get it done faster, and your ideas will probably be better. You’ll certainly enjoy it more. Yet, every time I’ve done this, I’ve felt the need to hide it (and usually to work somewhere away from my normal desk, so people don’t come to find me to ask why I’m not answering emails). In fact, why can’t we turn the alerts off altogether? Why not batch-process all emails every couple of hours, maximum?

Every study of cognitive psychology (i.e. of the ways we perceive and process the world) has to deal with the fact that we only have a limited amount of attention, and it is quite literally not possible for us to focus it on two things at once. Multitasking, as Henry Ford might have said had he lived this long, is bunk.

I’m going to keep trying to change attitudes slowly on this one, by taking time to concentrate when I need it and telling people that’s what I’m doing. It’s part of a broader issue, though, I think – the fact that the work environment is often simply not conducive to the kinds of work that need to be done. What are the barriers to us all admitting that we can’t do everything at once? Is it just attitude?





Should the media educate?

1 12 2009

On Thursday I attended a talk run by BlueSci magazine and featuring Michael Claessens, of the European Commission’s Research Directorate, to discuss why it continues to be so hard to communicate science (including psychology) effectively through the media. It has to be said that, in my view, Claessens was pretty pessimistic about the results of thirty years (!) of science actively engaging with the popular media, even though regular checks of the population’s baseline scientific literacy have shown some improvements.

Claessens did point out that the media have no duty to educate anyone – they’re businesses, and they exist to sell themselves. The BBC is really the only notable exception, and it should be pointed out that the BBC produces some fantastic science and psychology programming. (Download some of All in the Mind if you have spare time on your way to work.) Journalists also have to deal with editorial policy, space constraints, sub-editors, the need for eye-grabbing and usually misleading headlines, and, often, a lack of time to find out what the facts actually are before going to press.

That’s why I love blogs. Claessens wasn’t quite so keen, but blogs don’t suffer from space constraints, publishing deadlines, or sub-editors. Blogs are (normally) free. Blogs can build a community around their readers. Blogs can specialise in any area they choose, and have proved that they can build huge readerships. Blogs don’t have a responsibility to educate either, but many of them do, or try to, and in most cases they do it for love. They also have the chance to build something over time, which is how you do education: slowly, in pieces, over time.

Claessens talked about the need to communicate a simple message; something I struggle with – don’t we all? He concluded that sometimes science just isn’t simple. I don’t agree. To communicate something, anything, takes a story. Sometimes stories aren’t particularly simple, but if we can’t break something down into simpler components in order to tell someone else about it, isn’t that a deficiency in us? If science is “un-simple” and therefore can’t be communicated, how did we enlightened types ever manage to learn it in the first place?

That’s what gets me about this whole discussion, I think. Somewhere buried in it all is the assumption that there are the special clever people who understand science, and the other people, who don’t or can’t. I don’t buy it.