It’s not easy being green

5 07 2010

Here’s an article I wrote for Cambridge University’s science magazine, BlueSci, about why it’s hard to use science to change behaviour.

Imagine you’re a doctor, trying to influence your patients to change their diet and lifestyle, or get them to come to see you more quickly when they feel ill. As a medical professional, you have a great deal of knowledge drawn from research about how to protect and improve people’s health, and your job is to decide how to communicate it to the benefit of your patients. Should you try to scare them with the potential consequences of not changing their ways? Should you give them a balanced view of the pros and cons? Should you tell them it’s harder than they think, or easier? You might be surprised by the answer.

Scientific research doesn’t happen in isolation. It usually takes place because we want to find something out and do something with the result. The sum total of research filters slowly into our culture over time, and new developments often change our laws, our culture, and our behaviour. Sometimes these changes are beneficial; sometimes not – and sometimes public reaction is strong enough to halt or delay research in certain fields, such as genetic modification of food or stem cell research. Whenever research shows that a change in behaviour, or in the standards set by society, is needed, that change is almost always slow and painful, no matter how good the evidence for it is.

Let’s take a historical example first. As the science of automobiles developed, so did the science of safety. It was Robert Strange McNamara, working at the Ford Motor Company in the 1950s, who first proposed that the safety of people in cars could be greatly increased if they wore a simple restraining belt, and who did the crash-test research to prove it. The seatbelt is one of the simplest and most universally beneficial ideas possible – cheap and easy to fit, it reduces the risk of death in a crash by up to 70%, with no real downside to wearing one. Seatbelts, though, were not an easy sell, with “You think I’m a dangerous driver?” a common response. It took nearly forty years, and legal enforcement in most countries, for seatbelt use to become the norm. The scientific evidence for their safety and benefits was simple and compelling, but seatbelts had to overcome the barriers of habit in the public’s mind to create real and lasting change in behaviour.

The most obvious modern example of research intended to influence the behaviour of the public is global warming: as data on the warming or cooling of the earth are compiled, they rapidly make their way into the headlines, and into the work of action groups that want us to change our behaviour and cut our energy consumption. The typical reaction is to present the data as a scare story: “The Earth is warming more rapidly – we must respond faster”. This tactic has the advantage of being simple and intuitive – presenting the scientific data about impact gives us a clear reason to change our actions before we have to deal with the anticipated consequences. Unfortunately, what this strategy doesn’t take into account is the emotional response to the science. Most people are scared by the prospect of global warming, and on top of the natural barriers to behaviour change, fear interferes with the absorption of information.

In the 1970s, college students took part in an experiment designed to persuade them to go and receive a TB vaccination. Some students received frightening information about the possible consequences of TB; others received much blander information about the benefits of the vaccine. The students who received the frightening information were actually less likely to get vaccinated than those who received the blander version, probably because when something scares us we are just as likely to deny and minimise the risk as we are to try to do something about it. The factor that did increase the students’ likelihood of vaccination was whether the information they received included a map of the campus health centre and its opening hours. This was not new information – the students had been living on campus for at least two years – but a simple guide to how they might act to reduce their risk changed their behaviour in a way that trying to scare them did not. People’s reactions to information, and how they shape their behaviour on the basis of it, are not as simple as we might think.

A standout example of how the representation of science can go seriously wrong is the MMR vaccine scare. When Dr. Andrew Wakefield called a press conference in 1998, linking MMR to autism and suggesting single vaccinations instead, perhaps even he did not expect the size of the response. Although there is not, and never truly has been, any reliable scientific evidence linking the MMR vaccine to autism, the story was a compelling one: an apparent rapid rise in a mysterious neurological condition, linked to something given to children to protect them. Even though the NHS, the government, the Royal College of General Practitioners, and others put out information designed to reassure, the association between the two gathered strength in the public’s mind, and vaccination rates began to drop. More than ten years later, there has been some recovery, but vaccination is still not back at pre-1998 levels, and may take some time to get there. An association, once created, is not easily broken, whatever the most up-to-date evidence might actually say, and even the huge weight of evidence in support of MMR was not enough to overcome the powerful idea that there might be a link.

So how does a doctor or a global warming researcher influence people to change their behaviour on the basis of evidence? A multitude of experiments have shown that the most effective strategy combines two elements: it gives a simple road map of what to do (like the students’ map of the campus health centre), and it creates the impression that the suggested action is the only socially acceptable one. It turns out people are far more likely to recycle their rubbish if they think most people on their street are doing the same than because of any personal commitment to the environment. It’s never easy to change your behaviour – even if you know all the evidence – so next time you can’t stop yourself eating too much chocolate, don’t feel too bad about it. Then try making friends with some salad-eaters.

What does it mean to be a scientist?

5 11 2009

I write this blog not just because I want to be a scientist of organisations. I write it because I’d like you to be one as well.

It doesn’t involve a white coat or a microscope (although I borrow the imagery liberally, as you might have noticed). What it does involve means different things to different people, but I think it comes down to a mindset.

It means being curious about why things happen and why they don’t happen, and setting out to find out more about both. It means pushing forward the frontiers of knowledge, one tiny piece of data at a time. It means not believing anything that can’t be sufficiently proved AND replicated, and being prepared to challenge and revise your beliefs when new information shows that they may be mistaken. It means, as both Isaac Newton and Google Scholar like to say, standing on the shoulders of giants. It means never taking anything for granted. And it means never being really, absolutely sure of anything. It’s scary.

It doesn’t, to me, mean having a PhD, or an MSc, or even an A-Level. It doesn’t mean ever darkening the door of a lab. It does mean being aware that, while the human brain is a phenomenal information-processing machine, it has a number of inbuilt bugs that mean we can’t always rely on experience and what we know instinctively. The first and single most important step you can take, as a scientist of organisations, is to care how well things are done – to care enough to try to find out what’s known about the best way to do things. If you have ever searched for research or reviews on hiring or organisational change, you are an organisational scientist.

But it’s not enough just to care, and to look, because the volume of information we’re now faced with, in every sector, is overwhelming, and sadly some of it is of far higher quality than the rest. (Here’s a hint: don’t take health information from the Daily Mail.) If you have the mindset – if you care – then the next most important thing is to refine your skills of evaluation: to know where to go, and how to evaluate the information you find. It’s my goal in this blog to give you the tools to evaluate what is known.

If you’ve never studied science, you could do far worse than to start by reading Ben Goldacre’s Bad Science book and blog. You’ll find them funny, practical, and informative on how to evaluate research and put what you know to more effective use. I’ll be building up a toolkit for the aspiring and existing scientist as this blog goes along, so watch this space.

If you’re still reading, you’re probably a scientist already. Good luck and have fun.