Saturday, January 21, 2006

Neuroeconomics: The pleasure of other people’s pain

A paper recently published online in Nature describes how our empathy for other people, and our responses to seeing them in pain, can be modulated by prior interactions in which we deem them to have treated us unfairly (Singer, T. et al. 2006). Even better, they’ve identified brain areas underlying this modulation, illuminating how empathy functions and how it is controlled. Perhaps more controversially, they revealed an intriguing sex difference in how empathy is modulated in light of experience.

This study ties in with recent work on the neural basis of ‘altruistic punishment’, a notion used to explain how human cooperation is maintained in the face of selfish temptations, and with evolutionary theorising about the nature of human sociality. It is also, as I read it, informative about how moral intuitions and more explicit knowledge – reasoning, even – interact in guiding our emotional, or affective, responses, and the moral judgements we come to about people.

The human capacity for empathy is at the core of our social nature. It enables us to transcend our own egoistic stance, stand in the shoes of others, and see the world from their perspective – a first step towards a genuinely moral stance. In fact, when we imagine ourselves to be in pain, for instance, areas of the brain that are active when people really are in pain become activated, suggesting that the pain we feel when considering the distress of others is not just a metaphorical pain. But does a certain signal of distress or pain always generate the same empathic response, or is it muted or amplified depending on what you know and think about the person on the receiving end of pain? That’s what the current research looked at, but before we get to that I just want to sketch the context in which this work was done.

The pleasure and pain of games
Economists have devised a number of ‘games’ (not particularly fun ones) to explore the human tendency to cooperate, and the associated notions of fairness, revenge and punishment. One of the most famous of these games is the Prisoner’s Dilemma.

In this game, two players imagine themselves to be criminals caught for some crime, but with only enough evidence to convict both of a lesser charge than the real crime entails. If both plead ‘not guilty’, both will get 2 years. If one implicates the other, however, the ‘cheat’ (who ‘defected’ on his partner in crime) will get off scot-free, whereas the schmuck who kept quiet (and ‘cooperated’ with his partner) will get 30 years (if you both cheat on each other you both get 10 years). What should you do? If your partner cooperates with you and says nothing, you’re better off talking (you get clean away with it) than not (2 years); and if your partner rats you out, you’re still better off talking (10 years) than not (30 years). Whatever your partner does, you do better by defecting and implicating your partner. So the rational thing to do is defect, or talk.

But this leads to a sub-optimal outcome for both parties: 10 years each, instead of the possible 2 if both had kept quiet. The temptation to get an even better deal for yourself drives you to a course of action with a far worse outcome than if you’d both cooperated! This is why it is a dilemma. To the extent that people deviate from this behaviour they might be said to be irrational (but, I would say, only if you consider rationality in a fairly narrow way) – and humans do deviate from it. They are much more likely to cooperate than the logic above would dictate (because the logic of cooperation is more complicated than the straightforward self-interest assumed above).
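To make the dominance logic concrete, here is a minimal sketch in Python of the payoff structure described above. The sentence lengths are the ones from the example; the dictionary and function names are my own, purely for illustration.

```python
# Sentences (in years of prison) for the Prisoner's Dilemma described above.
# Lower is better. Keys are (my_move, partner_move).
SENTENCES = {
    ("quiet", "quiet"): (2, 2),     # both cooperate: convicted on the lesser charge
    ("quiet", "talk"):  (30, 0),    # I keep quiet, my partner defects
    ("talk",  "quiet"): (0, 30),    # I defect, my partner keeps quiet
    ("talk",  "talk"):  (10, 10),   # both defect
}

def best_response(partner_move):
    """Return the move that minimises my own sentence, given my partner's move."""
    return min(("quiet", "talk"),
               key=lambda my_move: SENTENCES[(my_move, partner_move)][0])

# Whatever the partner does, talking (defecting) is the better reply...
assert best_response("quiet") == "talk"
assert best_response("talk") == "talk"

# ...yet mutual defection is worse for both than mutual silence.
print(SENTENCES[("talk", "talk")], "vs", SENTENCES[("quiet", "quiet")])  # (10, 10) vs (2, 2)
```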

There are possible ways out of the Prisoner’s Dilemma, though I won’t go into them here. The important point is that studies with humans in which cooperation and defection lead to monetary rewards rather than prison sentences (but with the same logical structure) have revealed that we’re much more cooperative than the cold logic above would suggest (the most recent research on the routes to cooperation, and the sorts of behaviours and faculties this entails, is a topic for a future post). The flip side to our propensity to cooperate is a tendency to get angry when we’re treated unfairly (hardly front-page news, but it’s an important aspect to incorporate into economic and evolutionary models). Humans are at times driven by revenge, spite, and a desire to punish people or see people punished. And economists have devised a game to explore these motives as well.

The Ultimatum Game has largely replaced the Prisoner’s Dilemma as the poster child of human irrationality. In the Ultimatum Game, two players are each assigned a role that will determine how they will split some money (say £100) provided by a researcher. One player is designated the proposer, and is given the money and told to decide how to split it (say, keep £70 and give the other player £30) – but with a proviso: the other player can accept this offer, in which case the money is split as proposed and the game is over, or they can reject the offer, in which case no one gets anything.

Economics traditionally analysed the problem like this. Some money is better than no money, all things being equal, and so people should accept whatever they’re offered, if their goal is to maximise their monetary gain. Now, economics would also traditionally assume that both players, in this context called agents, are perfectly rational and therefore realise this simple fact – something is better than nothing. The proposer would therefore know that whatever offer is made will be accepted, and so will offer the smallest amount possible (say £1) – which will be accepted (in a perfectly rational world, or rational as construed by classical economics).

Except this isn’t how people play. People generally offer around half the money, though there is quite a bit of cross-cultural variation. And people don’t play it with cold logic; they feel hurt and offended when they are offered a small amount of money, and spite drives them to reject the offer so the tightwad doesn’t benefit either. Researchers working in the relatively young field of neuroeconomics, which under one reading is the study of the neural basis of economic decision-making (another interpretation is the application of economic models to decision-making processes in the brain), have explored the neural basis of decision-making in the Ultimatum Game (Sanfey, A. G. et al. 2003). When players receive unfair offers (around 20% of the pot), areas of the brain linked to emotion and anxiety (anterior insula) and cognition (dorsolateral prefrontal cortex) become active, suggesting perhaps a tug-of-war between an immediate emotional response (a negative one) and rational thinking about the situation (possible benefit). Most tellingly, the rejection of low offers is associated with increased activity in the anterior insula, suggesting that in these cases emotion wins out (of course, the interaction between reason and emotion will be more subtle and intricate than the mere balancing act I’m implying here).
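The contrast between the classical analysis and actual behaviour can be captured in a few lines of Python. This is only an illustrative sketch, not a model from any of the cited papers: the responder names and the rejection threshold are my own stand-ins, the latter loosely echoing the ‘around 20% of the pot’ figure mentioned above.

```python
POT = 100  # e.g. £100 provided by the researcher

def payoff(offer, accept):
    """Return (proposer, responder) payoffs for a given offer and decision."""
    return (POT - offer, offer) if accept else (0, 0)

def rational_responder(offer):
    # Classical economics: something is better than nothing, so accept any positive offer.
    return offer > 0

def spiteful_responder(offer, threshold=0.2):
    # What people tend to do: reject offers below some fraction of the pot,
    # even though that costs them money too.
    return offer >= threshold * POT

for offer in (1, 10, 30, 50):
    print(offer,
          payoff(offer, rational_responder(offer)),
          payoff(offer, spiteful_responder(offer)))
```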

Altruistic punishment has also been studied in the brain (de Quervain, D. J.-F. et al. 2004). In some games players are given the choice to punish defectors and cheats, or people who otherwise don’t play fair, at some expense to themselves. It has also been shown that being on the receiving end of punishment decreases the likelihood of future defections. Games can be set up so that someone who incurs a cost to punish a cheat is unable to benefit from any increased cooperation and fairness induced in the punishee, so you might expect punishment to disappear (what’s the point if you don’t benefit?) – and yet people still punish. Because the benefits of this punishing accrue to someone else (in the form of increased cooperation in interactions with other people), it is called altruistic punishment. The idea is that you’re not driven by a rational calculation of mere self-interest – you feel emotionally bothered at the transgression of a moral norm, and you are motivated to seek to punish the offender (again, the evolution of altruistic punishment, and the closely related idea of ‘strong reciprocity’, are beyond the scope of this post). One study on altruistic punishment suggested that people are motivated to punish those who violate the norms of fairness they’re accustomed to in economic games, and derive satisfaction from meting out such punishment, much like achieving any other goal (de Quervain, D. J.-F. et al. 2004).
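A toy illustration of why this counts as altruistic: the punisher pays a fee, the cheat loses more, and any benefit from the cheat’s improved behaviour falls to whoever interacts with them next. The numbers below are entirely made up, not taken from the cited studies; the sketch just makes the asymmetry explicit.

```python
PUNISH_COST = 1        # what the punisher gives up to impose the punishment
PUNISH_FINE = 4        # what the punished cheat loses
COOPERATION_GAIN = 3   # extra payoff enjoyed by the cheat's *next* partner

def punish(punisher_payoff, cheat_payoff, next_partner_payoff):
    """Apply one act of costly punishment and return the updated payoffs."""
    return (punisher_payoff - PUNISH_COST,
            cheat_payoff - PUNISH_FINE,
            next_partner_payoff + COOPERATION_GAIN)

# The punisher ends up strictly worse off; the gain lands on a third party.
print(punish(10, 10, 10))   # -> (9, 6, 13)
```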

The empathic brain
We’ve gone from empathy to punishment, and it’s time to come full circle, back to empathy. The present study looks at how empathy is affected by previous experience in the Prisoner’s Dilemma. First, an experimental subject played the Prisoner’s Dilemma with a confederate of the experimenter, who played either fairly or unfairly. Then, each subject was brain scanned while seeing footage of the person they played with receiving an electric shock, similar in intensity to a bee sting, in order to measure their empathic response (activity in certain brain areas). This brain imaging data was then correlated with whether they were treated fairly or unfairly by the person they played with to see if there were any associations.

When players had experienced a fair game, viewing the electric shock treatment caused empathy-related activation of pain-related brain areas (the fronto-insular and anterior cingulate cortices). This was true of both sexes (remember, there’s a fascinating sex difference to reveal yet). Things were different when males and females watched cheats getting their comeuppance. Females showed a slight reduction in empathy-related activity when watching an unfair player, but men showed a markedly reduced level of empathic activity in the same situation. What’s more, this reduction in empathy in men was associated with increased activity in reward-related areas, and this activity correlated with an expressed desire for revenge. Because they felt a greater desire for revenge, they felt less for the person when that person got what they presumably saw as their just deserts. At least for men, learning from experience modulates how much empathy we muster for people in distress depending on whether they’ve treated us well or badly in the past. This might seem obvious (although the neural underpinnings are far from obvious, and need empirical fleshing out), but what isn’t obvious is why there should be sex differences. I have some highly speculative ideas, but I’ll save those for a later post.

These conclusions, at least as far as men are concerned, fit in with other work on the ‘moral sentiments’ and economic models in which social preferences are shaped by learning from experience. And they also tie in with recent work on the neurology of more explicitly moral judgement formation.

Josh Greene has done some excellent work in this area, and the basic gist of his research is that emotion and reason interact in much the same way as empathy and the desire for punishment do in the experiment above (see my article in New Scientist for more). Philosophers have devised a range of moral dilemmas designed to illuminate the nature, or limits, of moral reasoning, as well as our moral intuitions. Oddly, people often say they would behave differently in moral dilemmas that have the same logical form but are framed in different ways.

A classic example is the runaway train dilemma. In this scenario, a train is hurtling down a track, ahead of which are five people tied to the track. You’re on the side of the tracks, and there’s a switch you can flick to divert the train down a track on which there is just one person. What do you do? Most people say they would flick the switch, on the utilitarian principle that it is worse for five people to die than one. But consider a variant of this dilemma. The train is heading down the track, and again there are five people stuck on the lines. This time you’re on a footbridge in between the train and the imminent fatalities. Your only option this time is to throw the huge guy standing next to you over the bridge and in front of the train. What do you do now? Nothing – it’s just wrong? Push him – it’s the same problem in the abstract? Even if you say you’d push the guy, on the same utilitarian grounds as before, you might hesitate a bit before deciding that (people who decide to push usually take longer to come to their decision). In fact, some people just wouldn’t push the man – perhaps because ‘authoring’ events is worse than merely ‘editing’ events.

Brain imaging studies reveal that the footbridge scenario activates brain areas associated with emotion that are not triggered to the same degree by the switch-flicking case, and that emotion and reason in a sense compete in coming to a decision about what it is right to do (Greene, J. D. et al. 2001). Much like the increased activity in reward-related brain areas in the empathy game, leaning towards deciding to push the man from the footbridge is associated with increased activity in cognition-related areas, such as the dorsolateral prefrontal cortex. It’s as if it takes great cognitive effort to be able to overcome the emotional revulsion of pushing someone to their death, even if this is to save five others. It takes longer to come to a decision that involves overcoming the emotional response generated by considering up-close-and-personal violence, pain or distress.

So both emotion and reason are deeply implicated in guiding our moral decision-making, and it’s clearly not just one or the other in the driving seat. And of course, the two interact, as this new research shows. Being treated unfairly – perhaps even just knowing that someone acts unfairly – can reduce our empathy for that person, which plausibly would affect our moral attitude towards seeing that person suffer certain forms of punishment or degradation.

And this possibility seems to have a wider implication – the emotional forces that tango with reason aren’t just affected at the time of making a moral judgement; their activation depends on prior experience, learning and knowledge. In this way, we can see how it is possible, by degrees, to begin to regard certain groups, or types of individual, as not of the same moral status as ourselves: if we either know, or at least believe (and this could be for specious reasons), that they are in some way deserving of punishment, then less empathy, and therefore sympathy, will be evoked by seeing these people suffer degradation, humiliation, or, perhaps at the extreme, death. We won’t be as morally engaged with their plight as we otherwise would be, and this can set the stage for the acceptance of more beliefs that further justify their lowly moral status. This is a scary prospect, and unfortunately has historical precedent. Various groups – Jews, Christians and Muslims, people of varying ethnic origins, and a variety of social, cultural and political groups – have been stigmatised, harmed and mistreated, and been subject to systematic abuse. This has often been possible through a reduction in empathy, or a complete absence of empathy, for the victims of the perpetrated prejudice, abuse or genocide – a dreadful under-use of the human moral resources. Such short-circuiting of these resources, empathy in particular, has surely played a part in such atrocities as the genocide in Rwanda and the conflicts between the Serbs and Croats.

Understanding how our moral psychology works is surely an important goal, but it also has dangers. If we know how moral psychology works, and how moral concern can be manipulated and re-focused, then we have a potentially powerful tool for helping to manipulate views about all sorts of groups in society, perhaps terrorists being the most resonant example today. Of course terrorism can only be condemned, but it’s perhaps all too easy to slide from moral judgements about terrorists to the groups we perceive them to belong to, and before we know it we may be sanctioning all sorts of unwanted actions against these groups. If we are aware of our potential biases and prejudices, then perhaps we have a chance to combat them.

References
de Quervain, D. J.-F., Fischbacher, U., Treyer, V., Schellhammer, M., Schnyder, U., Buck, A. & Fehr, E. The neural basis of altruistic punishment. Science 305, 1254–1258 (2004).

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. & Cohen, J. D. An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108 (2001).

Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E. & Cohen, J. D. The neural basis of economic decision-making in the Ultimatum Game. Science 300, 1755–1758 (2003).

Singer, T., Seymour, B., O’Doherty, J. P., Stephan, K. E., Dolan, R. J. & Frith, C. D. Empathic neural responses are modulated by the perceived fairness of others. Nature doi:10.1038/nature04271 (2006).

8 Comments:

Blogger Steve said...

'Humanity' by Jonathan Glover is an interesting book for exploring what factors allow wars and atrocities to happen. I remember Glover talking about empathy as a resource which can be eroded in various systematic ways – for example, through unflattering portrayals of the enemy during wartime.

You've probably read it already, or at least bought it, but just thought I'd mention it anyway.

10:32 pm GMT  
Blogger Dan Jones said...

Yep Steve, you're right, I have Humanity; in fact, I'm in the middle of reading it, and the stuff he says about empathy and its erosion was an inspiration for the post - I should've sourced it, I guess. I didn't realise you'd read it, but I'll second your recommendation of it, although it is quite a harrowing read.

11:11 pm GMT  
Blogger potentilla said...

Great blog - I just found it.

Do you know if anyone has tested The Ultimatum Game with large lot sizes? By which I mean, share (say) £1,000, but with £100 the smallest amount that could be offered. Because my guess is that people who punish "irrationally" are also taking account of the (opportunity) cost to them, which at £1 is very small; and if you scaled it all up, you would get a significantly larger proportion of people being "rational" and accepting the offer.

A bit expensive for an experiment, perhaps. Maybe if you used students you could have the smallest lot as £10 and still find the effect.

11:47 am GMT  
Blogger Dan Jones said...

Glad you like the blog, Potentilla. As for using more money in the UG, it has been done by taking the experiment to poor countries, where $100 might be equal to a month or more's pay. I can't remember off the top of my head the exact details, but I think people are still likely to sacrifice an offer in the UG if they see it as unfair, even if that deprives them of two weeks' worth of wages. Anthropologist Joe Henrich has done good work in this area, along with economists Herb Gintis, Colin Camerer, Ernst Fehr and Simon Gächter - most of these researchers have a selection of papers on their websites, which should be easy enough to find.

12:07 pm GMT  
Anonymous Anonymous said...

I would like to think that strong research like this would show those who claim that moral behaviour can only come from irrational beliefs in supernatural entities that can punish or reward just how wrong they are. Unfortunately it will make no difference to religious fundamentalists. It is only by accepting this kind of explanation for moral behaviour that we have any hope of "improvement"; there is no room for humans to improve themselves in any religion (except maybe Buddhism), as far as I know.

Thanks for a great post.

4:30 am GMT  
Blogger Dan Jones said...

I agree with Canuckrob that it is nonsense to suggest that we can only be moral if we subscribe to belief in a God (I think human and cultural evolution has given us the moral resources we need to get by). But I think we also need to recognize that we similarly don’t need religion to do evil, which means we have to be careful what we mean when we endorse certain statements, such as the one quoted by Dawkins from Nobel-prize-winning physicist Steven Weinberg (whose writings I much admire – including, until fairly recently, this quote):

“With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion.”

I get the point that is being made, and perhaps it can be salvaged with a reformulation, but as it stands I think it is just wrong. We humans have a range of moral resources, and these can be eroded in various ways (military training, self-determination, the degradations of war and fear for your life) and lead what were previously ‘good’ people to commit atrocities. The soldiers who participated in the My Lai massacre in Vietnam, for instance, were not a specially selected, or statistically unlikely, group of psychos, but normal Americans who had had their moral resources and identities so reduced that they became capable of evil. And it didn’t take religion to effect this.

2:57 pm GMT  
Anonymous Anonymous said...

Dan Jones, I agree with your comment. However I do think religion holds a special place in its ability to erode and corrupt a moral stance. Firstly, religion discourages critical thinking amongst its adherents, so they are more vulnerable to manipulation from religious leaders or those political leaders who use the religion for their own ends. The military (less now than in the past) also discourages critical thinking, at least among the rank and file, which makes it easy to erode moral values through demonization of the enemy (who is, after all, trying to kill you).

However I think religion is the biggest culprit because of the way it is spread, i.e. by inculcating children into the religion long before they are capable of critical thinking. We would be up in arms if the state tried to program our children in this way, but no one dares to tell parents that they cannot expose their children to the particular belief system that their particular religious sect promotes, no matter how stupid it may be. (The only exception I can think of is that governments occasionally intervene to save children who are at risk because the parents' religion forbids appropriate medical care.)

Thanks for this great blog.

6:20 pm GMT  
Anonymous Anonymous said...

Wasn't there a game show with the same basic premise as the Prisoner's Dilemma? I believe it was called "Friend or Foe". The team with the lowest amount of money at the end of each round had to decide how the money was split. If both members picked "Friend" they would split the winnings. If one had decided "Friend" and the other claimed "Foe", the player who picked "Foe" would get everything. If both had picked "Foe" neither got any prize. It was an interesting show, and I'd like to see it come back.

7:49 pm GMT  
