The workplace or the workers?

18 11 2009

Interesting article in last Wednesday’s Guardian – William Tate considers how the press responds to failures in an organizational system, and whether it makes more sense to criticise the staff or the system.

Sometimes it’s the workplace that’s stupid, not the staff

While cases like Baby P’s are tragic, I’m inclined to agree with Tate and with Eileen Munro, to whom he’s responding; the people involved in a system are, usually, motivated and caring people doing the best they can within a set of constraints.

What interests me is Tate’s division between the “system” and the staff. Where do we draw that distinction, and is it ever really meaningful? We all like to talk about “culture” and “the system”, imagining it as something that controls and shapes us, something that washes around us like a fog. But it doesn’t exist outside of our heads. Inasmuch as there is something called a “culture” and something called a “system”, we create it. All of us, every day, in the way we tilt our heads when that guy we don’t really like says something friendly, in the way we structure our workdays, in the way we respond in a crisis. It’s like money. It exists only because we all agree to pretend it does.

“The system” doesn’t control people. They control it. They created it because, I think, it’s easier to offload some of the responsibility for your actions onto an amorphous and impersonal idea that you never have to face. Ever bewailed your company’s “culture” or processes? What did you do to change them when you did?

You change it when you refuse to accept it. You change it by the way you think. If you don’t like a culture, change it.




Science is interesting – and if you don’t agree?

14 11 2009

Check out the clip below, in which Richard Dawkins is rebuked for his famously acerbic rhetorical style by Neil deGrasse Tyson, an astrophysicist and US TV presenter. Dawkins responds in rather pithy form. (NB: Not safe for work.)

Tyson raises some very important points about the role of a Professor for the Public Understanding of Science, and their responsibility not to dismiss those who aren’t already inclined to hear their message. Dawkins is disinclined to have much truck with anyone who doesn’t accept that science is both interesting and valuable, but on the whole I have to say that my sympathies in this debate are with Tyson, who is an eminent scientist and communicator of science himself.

Dawkins’ occupation of the Simonyi Professorship for the Public Understanding of Science has certainly done a great deal to raise the profile of the professorship and of himself, and Marcus du Sautoy is undoubtedly already finding him a hard act to follow. But I find it hard to think of what Dawkins himself has done to increase the public understanding of science, other than to very publicly endorse atheism and criticise religion. And in his defence, the atheism issue is contentious enough that it becomes the one and only issue he is asked about in many contexts. Dawkins is a brilliant biologist and ethologist, and a brilliant communicator – his “The Selfish Gene” and “The Blind Watchmaker” are readable, lucid, and extremely funny. (No, really. Try them if you don’t believe me – they’re readily understandable even to those who don’t have a science background.)

But I don’t believe that there is, or should be, anything in the world whose existence as a “good” thing we should accept without question – even science. And to take on the role of a communicator of science is to accept that there are people out there who are disinclined to look favourably on it. What is the point of communicating only to those who already agree with you? What is the use of writing people off completely?

My goal is to communicate to you that psychology is a science – a relevant, applied science that has improved education, justice, and work. If I can’t do that, then the failure is mine. If I didn’t believe that was possible, I wouldn’t try. And if Dawkins really believed what he says in the above clip, he should not have accepted the Professorship.





How normal are you?

12 11 2009

You’ll have run across psychometric tests – the Stanford-Binet IQ test, the Myers-Briggs Type Indicator. Increasingly, they’re used in job selection and assessment, career guidance, and university entrance. Maybe you’ve taken one, or had to review the results. Were you confident of what they were testing?

Psychometrics have had something of a bad reputation, due in part to their use in classifying recruits during the First and Second World Wars, and to the inevitable proliferation of poorly constructed “pop” psychology tests in the media and elsewhere. Properly constructed psychometrics aren’t, of course, perfect, but they have been thoroughly and repeatedly studied and validated, and can’t be dismissed as uninformative.

What are psychometrics, really? Simply, they are ways of measuring constructs, like “intelligence” or “preference”. Crucially, they don’t measure against some abstract standard of perfection. Essentially, all psychometrics are based on the bell-shaped normal distribution, where the majority of the population cluster in the middle, with progressively fewer results towards the top and bottom of the curve. The majority of people, therefore, will always fall somewhere in the middle of the distribution in any population. What a psychometric can do is tell you where – according, of course, to the result you get on any particular day.

(A note on the word “normal”. In this, its most common use in psychology, it doesn’t mean “good” – it just describes the pattern seen across a population. It is an endless fascination to me how almost all terms meaning “common” or “usual” come to mean “good”, and “unusual” bad.)

A psychometric might measure intelligence, but it doesn’t measure what we all understand, abstractly, as the quality “intelligence”. It measures a precisely defined, constructed idea of “intelligence”, and as mentioned above, the crucial aspect is that it only measures your results on any given day. Anything you can define, you can measure – but when you define something you necessarily limit it. No psychometric definition of “intelligence” can capture all the facets that most people understand in that term. And most qualities aren’t fixed – nobody has just one level of “intelligence”. Your performance on a test depends on how motivated you are at the time, not to mention your existing knowledge, and maybe even how tired you are on the day. All tests are also filtered through the language they’re applied in, the culture they’re written for, and the assumptions they depend on, which is one of the reasons tests should always be taken in the participant’s native language.

Psychometrics can’t tell you how intelligent you are. But they can tell you where, on a given day, you rank in the population according to a necessarily imperfect definition that misses some things out. Maybe.
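To make that “where you rank” idea concrete, here’s a minimal sketch in Python. It assumes, purely for illustration, an IQ-style scale with a population mean of 100 and a standard deviation of 15; the function name and example scores are mine, not taken from any real test.

```python
# A sketch of what a percentile rank is: the proportion of the population
# expected to score below you on a normally distributed measure.
# Assumes an IQ-style scale (mean 100, SD 15) purely for illustration.
from statistics import NormalDist

def percentile_rank(score, mean=100.0, sd=15.0):
    """Percentage of the population expected to score below `score`."""
    return NormalDist(mu=mean, sigma=sd).cdf(score) * 100

for score in (85, 100, 115, 130):
    print(f"A score of {score} beats roughly "
          f"{percentile_rank(score):.0f}% of the population")
```

All the sketch shows is the point above: the number a test hands you is a position in a distribution on a particular day, not a fixed quantity of “intelligence”.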





What does it mean to be a scientist?

5 11 2009

I write this blog not just because I want to be a scientist of organisations. I write it because I’d like you to be one as well.

It doesn’t involve a white coat or a microscope (although I borrow the imagery liberally, as you might have noticed). What it involves means different things to different people, but I think it comes down to a mindset.

It means being curious about why things happen and why they don’t happen, and setting out to find out more about both. It means pushing forward the frontiers of knowledge, one tiny piece of data at a time. It means not believing anything that can’t be sufficiently proved AND replicated, and being prepared to challenge and revise your beliefs when new information shows that they may be mistaken. It means, as both Isaac Newton and Google Scholar like to say, standing on the shoulders of giants. It means never taking anything for granted. And it means never being really, absolutely sure of anything. It’s scary.

It doesn’t, to me, mean having a PhD, or an MSc, or even an A-Level. It doesn’t mean ever darkening the door of a lab. It does mean being aware that, while the human brain is a phenomenal information-processing machine, it has a number of inbuilt bugs that mean we can’t always rely on experience and what we know instinctively. The first and single most important step you can take, as a scientist of organizations, is to care how well things are done – to care enough to try to find out what’s known about the best way to do things. If you have ever searched for research or reviews on hiring or organizational change, you are an organizational scientist.

But it’s not enough just to care, and to look, because the volume of information we’re now faced with, in every sector, is overwhelming – and sadly, some of it is of far higher quality than the rest. (Here’s a hint: don’t take health information from the Daily Mail.) If you have the mindset – if you care – then the next most important thing is to refine your skills of evaluation: to know where to go, and how to evaluate the information that you find. It’s my goal in this blog to give you the tools to evaluate what is known.

If you’ve never studied science, you could do far worse than to start by reading Ben Goldacre’s Bad Science book and blog. You’ll find them funny, practical, and full of guidance on how to evaluate research and make better use of what you do know. I’ll be building up a toolkit for the aspiring and existing scientist as this blog goes along, so watch this space.

If you’re still reading, you’re probably a scientist already. Good luck and have fun.






Learning about learning

4 11 2009

We tend to think of learning as just something we do: a general skill that we can apply to anything, and that lets us generalise things we learn in one context to another context. Let’s take an example: if you learn, say, how to conduct a successful coaching session in a training room environment, it should be easy to transfer that skill to the real-life environments you will be faced with. This assumption, in fact, underlies basically every training and development programme in existence.

You can probably see where this is going; like so many assumptions about the brain, we’ve discovered on investigation that it’s a little more complicated than that. Memory turns out to be a very context-dependent process; it’s much easier to remember what you’ve learned when you’re in the same environment as when you learned it, hearing the same sounds, looking at the same people, because when that information got encoded into your brain, it was encoded along with all the other data passing through at the time. If you’ve ever had a memory rush back vividly when you heard part of a song, or caught a whiff of a scent, you’ve experienced this phenomenon.

The classic study on how learning is affected by context was done by Godden and Baddeley in 1975; rather brilliantly, they persuaded scuba divers to memorise lists of words both on land and some metres underwater. Godden and Baddeley found that the divers remembered the words much better in the context they’d learnt them in, whether underwater or on land, because that environment provided the “cues” they needed to remember effectively. We see the same phenomenon in babies: tie a ribbon round a baby’s ankle and attach it to a mobile above his cot, and he will learn relatively quickly that by kicking his leg, he can make the mobile jiggle. But if one small thing about the scene is changed – the colour of the mobile, the wallpaper in the room – he has to learn the process all over again. We are brilliant at learning specific things, but what we learn IS specific – we learn it in a context, and in a particular way, and it’s not always easy to take it somewhere else.

Think about it. Do you train your staff in a conference room or training suite – somewhere they never need to use the skills you’re trying to teach them? Are they getting to practise what they need to in the environment where they’ll actually use it, or are you assuming that they will be able to generalise from the training room to the environment they actually work in?





The Hawthorne Effect, or, a lesson in the power of a story

31 10 2009

The Hawthorne Effect is one of the most familiar stories in the history of organizational psychology. Like most familiar stories, it’s also a little bit wrong.

The most famous of the experiments carried out at Western Electric’s Hawthorne plant in the 1920s and 1930s, to determine the best ways to increase productivity, involved the lighting provided in workrooms. The researchers thought, not unreasonably, that increasing the level of lighting in the workrooms might increase the productivity of the workers, whether by allowing them to see better, keeping them more alert, or factors not otherwise accounted for. And productivity – easily measured on a production line – indeed increased.

The factor that got everybody’s attention was what happened in the other experimental conditions. Where lighting intensity was not changed, productivity increased. Where lighting intensity was decreased… productivity increased. The researchers not unnaturally concluded that they were neglecting an important element of the psychology of the participants, and that merely making them aware that they were participating in an experiment stimulated them to work harder. This wasn’t an unreasonable explanation, particularly given what we know now about the profound power of people’s expectations in an experimental setting. All trials of medical drugs, for instance, are now “double-blind” (neither doctor nor patient knows whether the patient is receiving the drug being tested or a placebo), so that neither’s expectations can cloud the actual influence of the drug.

The Hawthorne effect has enjoyed a prominent place in psychology textbooks and experimental methodology ever since. The reality, of course, is not quite as simple as the story. While productivity did increase briefly in response to numerous small tweaks in working conditions, the effect is not particularly significant, and researchers working since have disputed most of the claimed increases in productivity. One enduring idea is that the workers appreciated being asked for their ideas, and worked harder due to this increased motivation – and while this is by no means a bad moral, there’s no particular reason to believe that this was the key factor at work. The workers could also have felt a desire to “please” the experimenters by showing a change, or simply worked harder in response to being observed more closely.

For me, the real moral of the Hawthorne effect is the seductive power of the story. Many, perhaps most, of those who repeat it have never read any of the academic writing on the subject, and most textbook accounts omit the dozens of other experiments, beyond the lighting studies, that took place. The thing about the mythical version is that it’s a great story. Change, outcome, surprise, attributed cause, attributed effect – simple and dynamic. The human mind is hard-wired to tell stories, and if the data don’t particularly fit our preferred version, we have a strong tendency to change – or just forget – the inconvenient parts.

Are the Hawthorne studies the story of workers being motivated simply by being involved? Or of workers being motivated by the fear of losing their jobs? Or of over-eager researchers over-interpreting their data? It could be one, several, or all of the above. As usual in life, the reality is a little more complicated than we like our stories to be.





Common sense is often wrong

27 10 2009

Yeah, you heard me.

On growing up and entering paid employment, one of the biggest disappointments I faced was discovering how little expertise there truly was in most of the working world. As a child, I blithely assumed that adults doing their jobs Really Knew all about their jobs. It turns out that most people use a combination of common sense and trial and error.

There’s just one problem. Common sense is often wrong.

The human brain is an absolutely incredible processing device. At some things, like reading other people, it’s so fast that you can form an accurate impression of someone’s intelligence in less than 60 seconds, not to mention subconsciously process a host of other signals that the person is giving out. (Some correctly, some wrongly, but hey, nobody’s perfect.) At other things, unfortunately, we’re very bad. We’re inclined, for instance, to seek out only the information that supports the view we already have (confirmation bias), and to see what has happened in the past as far more predictable than it really was (hindsight bias). Add these up, and our intuitive psychology about how other people will react is, unfortunately, often way off base. For instance, you might think that taking young offenders to adult prisons to try and “scare them straight” is a good idea, right? Wrong. It increases reoffending rates rather than decreasing them.

For an example that affects every one of us in organizations: it’s often assumed that, if we pay people more to do particular things, they’ll do more of those things. In reality, some research has shown that financial incentives erode people’s intrinsic motivation for tasks: the more you pay someone to do something, the more they assume that the task isn’t intrinsically worth doing – otherwise, why would you have to pay them? Studies have found a link between increased pay and increased performance quantity, but not performance quality – in other words, you can get people to do more of certain things, but not necessarily to do them better.

Beware of making decisions just because they’re “common sense”. It’s often not quite that simple.