It’s not easy being green

5 07 2010

Here’s an article I wrote for Cambridge University’s science magazine, BlueSci, about why it’s hard to use science to change behaviour.

Imagine you’re a doctor, trying to persuade your patients to change their diet and lifestyle, or to come and see you sooner when they feel ill. As a medical professional, you have a great deal of research-based knowledge about how to protect and improve people’s health, and your job is to decide how to communicate it for the benefit of your patients. Should you try to scare them with the potential consequences of not changing their ways? Should you give them a balanced view of the pros and cons? Should you tell them it’s harder than they think, or easier? You might be surprised by the answer.

Scientific research doesn’t happen in isolation. It usually takes place because we want to find something out and do something with the result. Research filters slowly into society over time, and new developments often change our laws, our culture, and our behaviour. Sometimes these changes are beneficial; sometimes not – and sometimes, public reactions are strong enough to halt or delay research in whole fields, such as the genetic modification of food or stem cell research. Whenever research shows that a change in behaviour, or in the standards set by society, is needed, the change is almost always slow and painful, no matter how good the evidence for it is.

Let’s take a historical example first. As the science of automobiles developed, the science of safety developed alongside it. It was Robert Strange McNamara, working at the Ford Motor Company in the 1950s, who championed the idea that the safety of people in cars could be greatly increased if they wore a simple restraining belt, and who backed crash-test research to prove it. The seatbelt is one of the simplest and most universally beneficial ideas possible: cheap and easy to fit, it reduces the risk of death in a crash by up to 70%, with no real downside to wearing one. Seatbelts, though, were not an easy sell – “You think I’m a dangerous driver?” was a common response. It took nearly forty years, and legal enforcement in most countries, for seatbelt use to become the norm. The scientific evidence for their benefits was simple and compelling, but seatbelts had to overcome the barriers of habit in the public’s mind before they could create real and lasting change in behaviour.

The most obvious modern example of research intended to influence public behaviour is global warming: as data on the Earth’s changing temperature are compiled, they rapidly make their way into the headlines, and into the work of action groups that want us to change our behaviour and cut our energy consumption. The typical reaction to these data is to present them as a scare story: “The Earth is warming more rapidly – we must respond faster.” This tactic has the advantage of being simple and intuitive – presenting the scientific data about impact gives us a straightforward reason to change our actions before we have to deal with the anticipated consequences. Unfortunately, what this strategy doesn’t take into account is the emotional response to the science. Most people are scared by the prospect of global warming, and on top of the natural barriers to behaviour change, fear interferes with the absorption of information.

In the 1970s, college students took part in an experiment designed to persuade them to get a TB vaccination. Some students received frightening information about the possible consequences of TB; others received much blander information about the benefits of the vaccine. The students who received the frightening information were actually less likely to get vaccinated than those who received the blander version – probably because when something scares us, we are just as likely to deny and minimise the risk as we are to try to do something about it. The factor that did increase the students’ likelihood of vaccination was whether the information they received included a map of the campus health centre and its opening hours. This was not new information – the students had been living on campus for at least two years. But a simple guide to how they might act to reduce their risk changed their behaviour in a way that trying to scare them did not. People’s reactions to information, and the ways they shape their behaviour around it, are not as simple as we might think.

A standout example of how the representation of science can go seriously wrong is the MMR vaccine scare. When Dr. Andrew Wakefield called a press conference in 1998, linking the MMR vaccine to autism and suggesting single vaccinations instead, perhaps even he did not expect the size of the response. Although there has never been any reliable scientific evidence linking the MMR vaccine to autism, the story was a compelling one: an apparent rapid rise in a mysterious neurological condition, linked to something given to children to protect them. Even though the NHS, the government, the Royal College of General Practitioners, and others came out with information designed to reassure, the association between the two gathered strength in the public’s mind, and vaccination rates began to drop. More than ten years later there has been some recovery, but vaccination is still not at pre-1998 levels, and may take years to return to them. An association, once created, is not easily broken, whatever the most up-to-date evidence says, and even the huge weight of evidence in support of MMR was not enough to overcome the powerful idea that there might be a link.

So how does a doctor or a climate researcher influence people to change their behaviour on the basis of evidence? A multitude of experiments have shown that the most effective strategy combines two elements: it gives a simple road map of what to do (like the students’ map of the campus health centre), and it creates the impression that the suggested action is the only socially acceptable one. It turns out people are far more likely to recycle their rubbish if they think most people on their street are doing the same, rather than because of any personal commitment to the environment. It’s never easy to change your behaviour – even if you know all the evidence – so, next time you can’t stop yourself eating too much chocolate, don’t feel too bad about it. Then try making friends with some salad-eaters.

The Hawthorne Effect, or, a lesson in the power of a story

31 10 2009

The Hawthorne Effect is one of the most familiar stories in the history of organizational psychology. Like most familiar stories, it’s also a little bit wrong.

The most famous of the experiments carried out at Western Electric’s Hawthorne Works in the 1920s and 1930s, to determine the best ways to increase productivity, involved the lighting provided in workrooms. The researchers thought, not unreasonably, that increasing the level of lighting might increase the productivity of the workers, whether by allowing them to see better, by keeping them more alert, or through factors not otherwise accounted for. And productivity – easily measured on a production line – did indeed increase. What got everybody’s attention was what happened in the other experimental conditions. Where lighting intensity was not changed, productivity increased. Where lighting intensity was decreased… productivity increased. The researchers concluded, reasonably enough, that they were neglecting an important element of the psychology of the participants: merely making people aware that they were taking part in an experiment stimulated them to work harder. This was a plausible explanation, particularly given what we now know about the profound power of people’s expectations in an experimental setting. Most trials of medical drugs, for instance, are now “double-blind” – neither doctor nor patient knows whether the patient is receiving the drug being tested or a placebo – so that neither’s expectations can cloud the measurement of the drug’s actual effect.

The Hawthorne effect has enjoyed a prominent place in psychology textbooks and experimental methodology ever since. The reality, of course, is not quite as simple as the story. While productivity did increase briefly in response to numerous small tweaks in working conditions, the effects were small, and later researchers have disputed most of the claimed increases. One enduring idea is that the workers appreciated being asked for their ideas and worked harder out of increased motivation – and while that is by no means a bad moral, there is no particular reason to believe it was the key factor at work. The workers could equally have wanted to “please” the experimenters by showing a change, or simply worked harder in response to being observed more closely.

For me, the real moral of the Hawthorne effect lies in the seductive power of the story. Many, perhaps most, of those who repeat it have never read any of the academic writing on the subject, and most textbook accounts omit the dozens of other experiments, beyond lighting, that took place. The thing about the mythical version is that it’s a great story. Change, outcome, surprise, attributed cause, attributed effect – simple and dynamic. The human mind is hard-wired to tell stories, and if the data don’t fit our preferred version, we have a strong tendency to change – or simply forget – the inconvenient ones.

Are the Hawthorne studies the story of workers being motivated simply by being involved? Or of workers being motivated by the fear of losing their jobs? Or of over-eager researchers over-interpreting their data? It could be one, several, or all of the above. As usual in life, the reality is a little more complicated than we like our stories to be.