Better off with CVs?

17 02 2010

Just time for a brief post today, but this is a topic that fascinates me. Have you ever made a really big hiring mistake? I know I have, and it’s given me a strong interest in how we actually know that our selection methods are any good.

The common-or-garden job interview? It’s not very good – in fact, some studies show better results from judging strictly by CVs, with no face-to-face contact whatsoever. But there are ways to make the interview better – check out this 1994 paper to find out more.

“There aren’t any stupid people out there”

26 01 2010

Thus spake Ben Goldacre, at his lecture on Risk and the Media at Darwin College, Cambridge last Friday. Check out the talk when it becomes available on iTunes shortly – it’s provocative, informative, and hilarious.

Ben’s argument was that, as a practising NHS doctor, he has seen hundreds, if not thousands, of people confronted with weighing complex evidence and making decisions with enormous consequences – and they manage it, with admirable comprehension, because they are extremely motivated to do so.

Psychology, for the record, backs him up. Motivated adults (and children) do better in IQ tests, in interviews, in jobs, in college.  Sometimes they do better than people who are, objectively speaking, smarter. You’ve probably heard the stories of people, in moments of great stress, displaying superhuman strength, because they’re highly motivated to save themselves or the ones they love.

All this, I can’t help thinking, gives the lie to the idea that we, as practitioners of best practice (which, as I like to say, isn’t always obvious), should simplify things – should dumb down – because we don’t think the people we need to convince can understand them. All that shows is a failing in us: we haven’t sufficiently convinced them that it’s worth understanding.

When people need to chew through complex medical studies that sometimes point in different directions, and potentially conflicting medical advice, all while dealing with the stresses and strains of a health crisis, they manage it. They’re not stupid. Perhaps our job is less to manage the information flow for the people we secretly think are not-so-bright, and more to convince them it’s worth their while to show how bright they are.

Gender bias is dead. Long live gender bias.

21 12 2009

Women’s lib is dead. Positive discrimination is right out. We’ve won all of our battles for equality. Right? If women aren’t in the boardroom, it’s because they’re choosing not to be – not to work the hours, not to take the stress. Or it’s something inherent to women’s work behaviour. They don’t push. They say “I’m grateful to have a job”, when they should be saying, “I am the linchpin of this organization. Up the offer or I walk”.

No, the one thing I think it’s not OK to say is that women might not get to the top of organizations because we are still subconsciously far harder on them than we are on men. All of us. I’ve often wondered whether a man who walked and talked and acted exactly as I do would ever be told he was “abrupt”, or “not a team player”. I’ve often wondered whether the same assumptions would be made about this hypothetical him. I have, needless to say, suspected that they would not.

In the spirit of my scientific credentials, obviously, I can’t make a statement like that without testing it. And the only way to test something like this is in a controlled trial – which could be done remotely, say, online. What would happen if two supposed people presented themselves, produced work, and were judged over a period of time, identical in every respect except that one was a man and one was a woman?

James Chartrand knows. The story of how a female writer came to work primarily under a male pseudonym, because the same work got more bids, better pay, and more respect, is fascinating and depressing. I wish I could believe that this was unusual. I really do. The people who paid more for “James’s” work than that of a female writer, and praised it more highly, almost certainly had no idea that gender was a factor in how they responded. How can there be equality in the workplace when we still understand our own brains, the filters through which we see and judge people, so poorly?

Yes, your team really do need to concentrate

8 12 2009

A personal vindication for me, this one: Pop-ups and email alerts significantly slow down work by breaking our concentration.

(Source: Wales Online. Original study Cardiff University.)

I’ve often wondered why it isn’t more acceptable to simply turn off email and the BlackBerry when you need to concentrate on something. You’ll get it done faster, and your ideas will probably be better. You’ll certainly enjoy it more. Yet every time I’ve done this, I’ve felt the need to hide it (and usually to work somewhere away from my normal desk, so people don’t come to find me to ask why I’m not answering emails). In fact, why not turn the alerts off altogether? Why not batch-process email every couple of hours, maximum?

Every study in cognitive psychology (the science of how we perceive and process the world) has to deal with the fact that we have only a limited amount of attention, and it is quite literally impossible for us to focus it on two things at once. Multitasking, as Henry Ford might have said had he lived this long, is bunk.

I’m going to keep trying to change attitudes slowly on this one, by taking time to concentrate when I need it and telling people that’s what I’m doing. It’s part of a broader issue, though, I think – the fact that the work environment is often simply not conducive to the kinds of work that need to be done in it. What are the barriers that stop us all from giving up the pretence that we can do everything at once? Is it just attitude?

What does it mean to be a scientist?

5 11 2009

I write this blog not just because I want to be a scientist of organisations. I write it because I’d like you to be one as well.

It doesn’t involve a white coat or a microscope (although I borrow the imagery liberally, as you might have noticed). What it involves means different things to different people, but I think it comes down to a mindset.

It means being curious about why things happen and why they don’t happen, and setting out to find out more about both. It means pushing forward the frontiers of knowledge, one tiny piece of data at a time. It means not believing anything that can’t be sufficiently proved AND replicated, and being prepared to challenge and revise your beliefs when new information shows that they may be mistaken. It means, as both Isaac Newton and Google Scholar like to say,  standing on the shoulders of giants. It means never taking anything for granted. And it means never being really, absolutely sure of anything. It’s scary.

It doesn’t, to me, mean having a PhD, or an MSc, or even an A-Level. It doesn’t mean ever darkening the door of a lab. It does mean being aware that, while the human brain is a phenomenal information-processing machine, it has a number of inbuilt bugs that mean we can’t always rely on experience and what we know instinctively. The first and single most important step you can take, as a scientist of organizations, is to care how well things are done – to care enough to try to find out what’s known about the best way to do things. If you have ever searched for research or reviews on hiring or organizational change, you are an organizational scientist.

But it’s not enough just to care, and to look, because the volume of information we’re now faced with in every sector is overwhelming, and sadly, some of it is of far higher quality than the rest. (Here’s a hint: don’t take health information from the Daily Mail.) If you have the mindset – if you care – then the next most important thing is to refine your skills of evaluation: to know where to go, and how to judge the information that you find. It’s my goal in this blog to give you the tools to evaluate what is known.

If you’ve never studied science, you could do far worse than to start by reading Ben Goldacre’s Bad Science book and blog. You’ll find them funny, practical, and informative on how to evaluate research and put what you do know to more effective use. I’ll be building up a toolkit for the aspiring and existing scientist as this blog goes along, so watch this space.

If you’re still reading, you’re probably a scientist already. Good luck and have fun.


Learning about learning

4 11 2009

We tend to think of learning as just something we do: a general skill that we can apply to anything, and that lets us transfer what we learn in one context to another. Let’s take an example: if you learn how to conduct a successful coaching session in a training-room environment, it should be easy to transfer that skill to the real-life environments you will face. This assumption, in fact, underlies basically every training and development programme in existence.

You can probably see where this is going; like so many assumptions about the brain, we’ve discovered on investigation that it’s a little more complicated than that. Memory turns out to be a very context-dependent process; it’s much easier to remember what you’ve learned when you’re in the same environment as when you learned it, hearing the same sounds, looking at the same people, because when that information got encoded into your brain, it was encoded along with all the other data passing through at the time. If you’ve ever had a memory rush back vividly when you heard part of a song, or caught a whiff of a scent, you’ve experienced this phenomenon.

The classic study on how learning is affected by context was done by Godden and Baddeley in 1975; rather brilliantly, they persuaded scuba divers to memorise lists of words both on land and some metres underwater. Godden and Baddeley found that the divers remembered the words much better in the same context in which they’d learnt them, whether underwater or on land, because that environment provided the “cues” they needed to remember effectively. We see the same phenomenon in babies: if you tie a ribbon round a baby’s ankle and attach it to a mobile above his cot, he will learn relatively quickly that by kicking his leg he can make the mobile jiggle. But if one small thing about the scene is changed – the colour of the mobile, the wallpaper in the room – he has to learn the process all over again. We are brilliant at learning specific things, but what we learn IS specific – we learn it in a context, and in a particular way, and it’s not always easy to take it somewhere else.

Think about it. Do you train your staff in a conference room or training suite, somewhere they never need to use the skills you’re trying to teach them? Are they getting to practise what they need in the environment in which they’ll actually use it, or are you assuming that they’ll be able to generalise from their training environment to the one they actually work in?

The Hawthorne Effect, or, a lesson in the power of a story

31 10 2009

The Hawthorne Effect is one of the most familiar stories in the history of organizational psychology. Like most familiar stories, it’s also a little bit wrong.

The most famous of the experiments carried out at Western Electric’s Hawthorne plant in the 1920s and 1930s to determine the best ways to increase productivity involved the lighting provided in workrooms. The researchers thought, not unreasonably, that increasing the level of lighting might increase the productivity of the workers, whether by allowing them to see better, keeping them more alert, or through factors not otherwise accounted for. And productivity – easily measured on a production line – did indeed increase.

The factor that got everybody’s attention was what happened in the other experimental conditions. Where lighting intensity was not changed, productivity increased. Where lighting intensity was decreased… productivity increased. The researchers, not unnaturally, concluded that they were neglecting an important element of the psychology of the participants: merely making people aware that they were taking part in an experiment stimulated them to work harder. This wasn’t an unreasonable explanation, particularly given what we now know about the profound power of people’s expectations in an experimental setting. All trials of medical drugs, for instance, are now “double-blind” (neither doctor nor patient knows whether the patient is receiving the drug being tested or a placebo), so that neither’s expectations can cloud the actual influence of the drug.

The Hawthorne effect has enjoyed a prominent place in psychology textbooks and experimental methodology ever since. The reality, of course, is not quite as simple as the story. While productivity did increase briefly in response to numerous small tweaks in working conditions, the effect is not particularly significant, and researchers working since have disputed most of the claimed increases in productivity. One enduring idea is that the workers appreciated being asked for their ideas, and worked harder due to this increased motivation – and while this is by no means a bad moral, there’s no particular reason to believe that this was the key factor at work. The workers could also have felt a desire to “please” the experimenters by showing a change, or simply worked harder in response to being observed more closely.

For me, the real moral of the Hawthorne effect is in the seductive power of the story. Many, perhaps most, of those who repeat it have never read any of the academic writing on the subject, and most textbook accounts ignore the dozens of other experiments, beyond lighting, that took place. The thing about the mythical version is that it’s a great story. Change, outcome, surprise, attributed cause, attributed effect – simple and dynamic. The human mind is hard-wired to tell stories, and if the data don’t particularly fit our preferred version, we have a strong tendency to change – or just forget – the inconvenient ones.

Are the Hawthorne studies the story of workers being motivated simply by being involved? Or of workers being motivated by the fear of losing their jobs? Or of over-eager researchers over-interpreting their data? It could be one, several, or all of the above. As usual in life, the reality is a little more complicated than we like our stories to be.