Better off with CVs?

17 02 2010

Just time for a brief post today, but this is a topic that fascinates me. Have you ever made a really big hiring mistake? I know I have, and it’s given me a strong interest in how we actually know that our selection methods are any good.

The common-or-garden job interview? It’s not very good – in fact, some studies show better results by judging strictly from CVs, with no face-to-face contact whatsoever. But there are aspects which can improve the interview – check out this 1994 paper to find out more.





The Scientist’s Toolkit: Check your prejudices.

2 02 2010

Some things make me sad. Some things make me angry. This particular article makes me both, but in all fairness, Aaron Sell’s anger is both more justified and more righteous.

For those of you who have missed the blog kerfuffle, Aaron Sell, a psychologist at the Centre for Evolutionary Psychology, recently published an article on aggression, suggesting that individuals who perceive themselves to be stronger, or more attractive, are more likely to behave aggressively. This research was picked up by the Sunday Times and published as an article titled, “Blonde women born to be warrior princesses“.

It’s hard to know where to start with all the things that are wrong with this.  Sell’s research did not refer to blondes at all. Sell details, in his subsequent angry letter to the Times, how the journalist, John Harlow, told him he was writing a piece about blondes, and asked him whether blondes exhibited more anger. Sell pointed out that his work didn’t look at hair colour at all, but agreed to re-analyse the data on this basis. He found no link between hair colour, entitlement and aggressive behaviour, and told Harlow so. Harlow’s article subsequently appeared, not only claiming that “blondes are more aggressive and more determined to get their own way”, but attributing some completely outrageous and utterly fabricated quotes directly to Sell. “This is southern California – the natural habitat of the privileged blonde”?

I’d really like to believe that this was a one-off, but it’s hard to. It’s clear that Harlow had the story already written in his mind, and chose not to let the lack of actual facts get in his way. There’s been some online coverage of this egregious example of reporting (try here and here) and some discussion of the role of a responsible press in not totally fabricating stories and quotes from whole cloth in defiance of evidence (can you tell this bothers me?).  But I actually think the real lesson is slightly different.

Newspapers, on the whole, find it far more convenient to tell us what we already believe – changing people’s minds is time-consuming and difficult, and they don’t like it much. We’re all disposed to seek out and overvalue information that confirms the beliefs we already have (confirmation bias) – some nifty studies have been done on the phenomenon. Harlow’s story panders shamelessly to our prejudices and our stereotypes. It’s a bit controversial, but not so much so that we can’t secretly, lazily, accept it as true because it ties in with some of our other social shortcuts. This is why we do science: because we can’t fully trust our brains to evaluate evidence effectively when we already have beliefs on a topic. We will always be inclined to seek out and accept the information that confirms what we already believe – it’s so much easier than re-evaluating those beliefs.

I don’t know about all of you, but when I’m reading the paper from now on, I’m going to very carefully evaluate any story reporting a study on how it plays to my prejudices. Because if it does, I need to be extra, extra careful before I accept any part of it. And since the Times has refused to print Aaron Sell’s letter, or alter or remove the original article, please help make it up to him by reading his excellent original research.





Mind Over Matter

27 01 2010

My article on why it’s so hard to learn science and maths is featured in the 17th issue of BlueSci, Cambridge University’s science magazine.

Download the PDF here.





“There aren’t any stupid people out there”

26 01 2010

Thus spake Ben Goldacre, at his lecture on Risk and the Media at Darwin College, Cambridge last Friday. Check out the talk when it becomes available on iTunes shortly – it’s provocative, informative, and hilarious.

Ben’s argument was that, as a practising NHS doctor, he has seen hundreds, if not thousands, of people confronted with the task of weighing complex evidence and making decisions with enormous consequences – and that they manage it, with admirable comprehension, because they are extremely motivated to do so.

Psychology, for the record, backs him up. Motivated adults (and children) do better in IQ tests, in interviews, in jobs, in college.  Sometimes they do better than people who are, objectively speaking, smarter. You’ve probably heard the stories of people, in moments of great stress, displaying superhuman strength, because they’re highly motivated to save themselves or the ones they love.

All this, I can’t help thinking, gives the lie to the idea that we, as practitioners of best practice (which, as I like to say, isn’t always obvious) should simplify things, should dumb down, because we don’t think the people we need to convince can understand it. All that shows is a failing in us – we haven’t sufficiently convinced them that it’s worth understanding.

When people need to chew through complex medical studies which sometimes indicate differing things, potentially-conflicting medical advice, all while dealing with the stresses and strains of a health crisis, they manage it. They’re not stupid. Perhaps our job is less to manage the information flow for the people we secretly think are not-so-bright, and more to convince them it’s worth their while to show how bright they are.





Gender bias is dead. Long live gender bias.

21 12 2009

Women’s lib is dead. Positive discrimination is right out. We’ve won all of our battles for equality. Right? If women aren’t in the boardroom, it’s because they’re choosing not to be – not to work the hours, not to take the stress. Or it’s something inherent to women’s work behaviour. They don’t push. They say “I’m grateful to have a job”, when they should be saying, “I am the linchpin of this organization. Up the offer or I walk”.

No, the one thing I think it’s not OK to say is that women might not get to the top of organizations because we are still subconsciously far harder on them than we are on men. All of us. I’ve often wondered if a man who walked and talked and acted the exact same way as I did would ever get told he was “abrupt”, or “not a team player”. I’ve often wondered if the same assumptions would be made about this hypothetical him. I have, needless to say, suspected that they would not.

In the spirit of my scientific credentials, obviously, I can’t make a statement like that without testing it. And the only way to test something like this is in a controlled trial. There is a way to do such a trial – remotely, say, online. What would happen if two supposed people presented themselves, produced work, and were judged over a period of time, identical in every respect except that one was a man and one was a woman?

James Chartrand knows. The story of how a female writer came to work primarily under a male pseudonym, because the same work got more bids, better pay, and more respect, is fascinating and depressing. I wish I could believe that this was unusual. I really do. The people who paid more for “James’s” work than that of a female writer, and praised it more highly, almost certainly had no idea that gender was a factor in how they responded. How can there be equality in the workplace when we still understand our own brains, the filters through which we see and judge people, so poorly?





Should the media educate?

1 12 2009

On Thursday I attended a talk run by BlueSci magazine and featuring Michel Claessens, of the European Commission’s Research Directorate, to discuss why it remains so hard to communicate science (including psychology) effectively through the media. It has to be said that, in my view, Claessens was pretty pessimistic about the results of thirty years (!) of active engagement with the popular media by science, even though regular checks of the baseline scientific literacy of the population have shown some improvements.

Claessens did point out that the media have no duty to educate anyone – they’re a business, and they exist to sell themselves. The BBC are really the only notable exception, and it should be pointed out that the BBC produces some fantastic science and psychology programming. (Download some of All in the Mind if you have some spare time on your way to work.) Journalists also have to deal with editorial policy, space constraints, sub-editors, the need for eye-grabbing and usually misleading headlines, and, often, a lack of time to find out what the facts actually are before going to press.

That’s why I love blogs. Claessens wasn’t quite so keen, but blogs don’t suffer from space constraints, or publishing deadlines, or subeditors. Blogs are free (normally). Blogs can build a community in and around their readers. Blogs can specialise in any area they choose, and have proved that they can build up huge readerships. Blogs don’t have a responsibility to educate either, but many of them do, or try to, and in most cases they do it for love. They also have the chance to build something over time, which is how you do education. Slowly, in pieces, over time.

Claessens talked about the need to communicate a simple message; something I struggle with – don’t we all? He concluded that sometimes science just isn’t simple. I don’t agree. To communicate something, anything, takes a story. Sometimes stories aren’t particularly simple, but if we can’t break something down into simpler components in order to tell someone else about it, isn’t that a deficiency in us? If science is “un-simple” and therefore can’t be communicated, how did we enlightened types ever manage to learn it in the first place?

That’s what gets me about this whole discussion, I think. Somewhere buried in it all is the assumption that there are the special clever people who understand science, and the other people, who don’t or can’t. I don’t buy it.





A very human disaster

25 11 2009

Virtually every big disaster comes down to human error – Chernobyl, Bhopal, plane crashes. To be more precise, they come down to small mechanical failures, created or compounded by human panic, or confusion, or failure to communicate. In an unusual work environment, like a nuclear power station or a plane cockpit, the way people relate to each other can make the difference between life and death. How’s that for psychology at work?

I recently read and enjoyed Malcolm Gladwell’s book “Outliers”, which has a significant chapter on the human role in air disasters, and in particular the poor safety record once suffered by Korean Air. It’s shocking and fascinating to read some of his transcripts of terrible, avoidable disasters that came down to people feeling too inhibited, confused, and distressed to speak up at crucial times. Gladwell reports on two South American pilots who simply felt too inhibited by the assertive and vocal air traffic controllers in New York to communicate that their plane was almost totally out of fuel and about to crash. Both pilots, along with many of the passengers, lost their lives.

Korean Air’s poor safety record was for some time a source of mystery, as their equipment was as up-to-date and in as good condition as anyone’s. If it weren’t for the “black box” recordings of everything that’s said in the cockpit, it might have remained a mystery. Korean culture is highly deferential to authority, and authority in a Korean plane cockpit was vested and embodied in the captain. Before each flight, the co-pilots and crew would be expected to bring his meals and attend to his every need. The net result was that in any problem or difficulty, no-one would voice an opinion that ran contrary to the captain’s, despite the desperate need. It’s painful to read the transcripts of the co-pilots trying desperately to communicate to their captain that the plane is in trouble. “Captain, the weather radar has helped us a lot.”

I always think of those co-pilots when I hear it said that people are the same everywhere, or that culture and environment do not affect how people respond. The pilots were unable to overcome the cultural inhibitions binding them, even though they knew well the outcome could be their deaths. Korean Air, by the way, now has an excellent safety record, but getting it involved intensively training their air crews to put deference aside and communicate honestly and robustly in the air. The patterns of culture and communication we live in affect the results of all our lives profoundly on a daily basis – the question is, is it in the way we’d like?





The Scientist’s Toolkit: Understanding the numbers

21 11 2009

Let’s say you’re reading a newspaper over the weekend. Let’s say you spot a front-page headline in this newspaper, all direly big, that says something along the lines of, “EATING YOGHURT DOUBLES YOUR RISK OF BRAIN CANCER!”

Assuming you pay attention, and go on to read the article, should you immediately stop eating yoghurt? After all, “doubling” is an awful lot. But when you read the small print in this kind of article, you’re likely to find out that 1) the baseline risk (i.e. the number of people, out of 1,000, who will get this illness in their lifetime) is extremely low; and 2) the correlation between eating yoghurt and brain cancer adds up to a slightly higher, but still extremely low risk. Let’s say that the number of people who will typically get brain cancer is something like 0.25 per thousand, or one person per four thousand. In the yoghurt-eating contingent, it is found that 0.5 people per thousand will go on to develop brain cancer, or one in two thousand – basically, one extra person per four thousand yoghurt eaters. The newspapers are perfectly entitled to report this as “RISK DOUBLES!”, and usually do.
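The arithmetic above can be made concrete in a few lines. This is a minimal sketch using the post’s own illustrative figures (the yoghurt study, and the 0.25-per-thousand baseline, are hypothetical numbers, not real data):

```python
# Illustrative figures from the yoghurt example in this post (hypothetical).
baseline = 0.25 / 1000        # lifetime risk without yoghurt: 1 in 4,000
with_yoghurt = 0.5 / 1000     # lifetime risk with yoghurt: 1 in 2,000

relative_increase = with_yoghurt / baseline   # what the headline reports
absolute_increase = with_yoghurt - baseline   # what actually matters to you

print(f"Relative risk: {relative_increase:.0f}x ('RISK DOUBLES!')")
print(f"Absolute increase: {absolute_increase * 1000:.2f} per 1,000 people")
print(f"Extra cases: about 1 per {round(1 / absolute_increase):,} yoghurt eaters")
```

The point the snippet makes is that the same pair of numbers supports both the scary headline (a 2x relative risk) and the reassuring small print (one extra case per four thousand people) – always look for the absolute figure.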

Now, chances are that you didn’t make a vow to stay away from yoghurt when you read this article, because you’ve read too many like it, and possibly even muttered something about damned lies and statistics before you turned the page. That’s a shame, because we need statistics. Yes, they can be presented in all kinds of ways, and some of those ways are more informative and useful than others, but it is statistics we turn to when we need to know whether a study or a programme worked, whether crime rates really have changed, or whether we should start excluding yoghurt from our diets. You need the toolkit to get up close and understand what the numbers are telling you.

There are some excellent online resources; for starters, try the Open University’s Statistics and the Media. And I recommend it so frequently I feel like a broken record, but pick up Ben Goldacre’s Bad Science too – hilarious, fun to read, and the best simple primer on how to read, understand and criticise a science study that I’ve ever come across.





Science is interesting – and if you don’t agree?

14 11 2009

Check out the clip below, in which Richard Dawkins is rebuked for his famously acerbic rhetorical style by Neil deGrasse Tyson, an astrophysicist and US TV presenter. Dawkins responds in rather pithy form. (NB: Not safe for work.)

Tyson raises some very important points about the role of a Professor for the Public Understanding of Science, and their responsibility not to dismiss those who are already inclined to hear their message. Dawkins is disinclined to have much truck with anyone who doesn’t accept that science is both interesting and valuable, but on the whole I have to say that my sympathies in this debate are with Tyson, who is an eminent scientist and communicator of science himself.

Dawkins’ occupation of the Simonyi Professorship for the Public Understanding of Science has certainly done a great deal to raise the profile of the professorship and of himself, and Marcus du Sautoy is undoubtedly already finding him a hard act to follow. But I find it hard to think of what Dawkins himself has done to increase the public understanding of science, other than to very publicly endorse atheism and criticise religion. And in his defence, the atheism issue is contentious enough that it becomes the one and only issue he is asked about in many contexts. Dawkins is a brilliant biologist and ethologist, and a brilliant communicator – his “The Selfish Gene” and “The Blind Watchmaker” are readable, lucid, and extremely funny. (No, really. Try them if you don’t believe me – they’re readily understandable even to those who don’t have a science background.)

But I don’t believe that there is, or should be, anything in the world whose existence as a “good” thing we should accept without question, even science. And to take on the role of a communicator of science is to accept that there are those out there who are disinclined to look favourably upon it. What is the point of communicating only to those who already agree with you? What is the use of writing people off completely?

My goal is to communicate to you that psychology is a science, a relevant and applied science, that has improved education, justice, work. If I can’t do that, then the failure is mine. If I didn’t believe that was possible, I wouldn’t try. If Dawkins really believed what he says in the above clip, then he should not have accepted the Professorship.





How normal are you?

12 11 2009

You’ll have run across psychometric tests – the Stanford-Binet IQ test, the Myers-Briggs Type Indicator. Increasingly, they’re used in job selection and assessment, career guidance, and university entrance. Maybe you’ve taken one, or had to review the results. Were you confident of what they were testing?

Psychometrics have had something of a bad reputation, due in part to the interest taken in them during World Wars I and II for classifying recruits, and the inevitable existence of numerous poorly-constructed “pop” psychology tests in the media and elsewhere. Properly constructed and validated psychometrics aren’t, of course, perfect, but they have been thoroughly and repeatedly studied and validated, and can’t be dismissed as uninformative.

What are psychometrics, really? Simply, they are ways of measuring constructs, like “intelligence”, or “preference”. Crucially, they don’t measure against some abstract standard of perfection. Essentially, all psychometrics are based on the bell-shaped normal distribution, in which the majority of the population are clustered in the middle, with increasingly fewer results towards the top and bottom of the curve. The majority of people, therefore, will always fall somewhere in the middle of the distribution in any population. What a psychometric can do is tell you where – according, of course, to the result you get on any particular day.
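The “tell you where” step is just a percentile read off the normal curve. Here’s a minimal sketch of the idea – the mean-of-100, standard-deviation-of-15 scaling is the common IQ convention, used purely for illustration, not a claim about any particular test:

```python
import math

def percentile(score, mean=100.0, sd=15.0):
    """Percentage of the population scoring below `score` on a normally
    distributed measure (mean 100 / SD 15 is the common IQ scaling,
    used here only as an illustration)."""
    z = (score - mean) / sd
    # Cumulative normal distribution via the error function.
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

# The mean sits at the 50th percentile; one standard deviation above
# the mean puts you above roughly 84% of the population.
print(f"Score 100 -> {percentile(100):.1f}th percentile")
print(f"Score 115 -> {percentile(115):.1f}th percentile")
```

Notice that the function says nothing about what “intelligence” is – it only locates one day’s result relative to everyone else’s, which is exactly the limit described above.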

(A note on the word “normal”. In this, its most common use in psychology, it doesn’t mean “good” – it just describes the pattern seen across a population. It is an endless fascination to me how almost all terms meaning “common” or “usual” come to mean “good”, and almost all meaning “unusual” come to mean “bad”.)

A psychometric might measure intelligence, but it doesn’t measure what we all understand, abstractly, as the quality “intelligence”. It measures a precisely defined, constructed idea of “intelligence”, and as mentioned above, the crucial aspect is that it only measures your results on any given day. Anything you can define, you can measure – but when you define something you necessarily limit it. No psychometric definition of “intelligence” can capture all the facets that most people understand in that term. And most qualities aren’t fixed – nobody has just one level of “intelligence”. Your performance on a test depends on how motivated you are at the time, not to mention your existing knowledge, and maybe even how tired you are on the day. All tests are also filtered through the language they’re applied in, the culture they’re written for, and the assumptions they depend on, which is one of the reasons tests should always be taken in the participant’s native language.

Psychometrics can’t tell you how intelligent you are. But they can tell you where, on a given day, you rank in the population according to a necessarily imperfect definition that misses some things out. Maybe.