Thursday, August 24, 2006

Beware of the Others?

New research published in Nature shows how biases towards members of our social group, and against those outside it, shape how generous we are to others and how we punish those who transgress social norms.

Humans are socially sticky: we bond into cohesive groups that typically share a common identity and, often, similar values. This applies to social circles and local communities as much as to nationality and global religious and political affiliation. Such unity can encourage people within the group to pull together, to help one another when in need – in short, to get along.

But there’s a downside to human ‘groupishness’: a mental division between members of the ingroup, to whom social and even moral obligations apply, and various outgroups, to whom they do not. People who live in different groups — geographical, social or ethnic — often treat outgroup members as ‘others’ (something viewers of Lost will be familiar with), frequently arousing enmity and stoking conflict. Note how groups really come into their own and pull together when pitted against other groups in the human speciality of war.

The ingroup–outgroup distinction has the power to distort and bias our attitudes towards outgroup members in pernicious ways. These prejudices are played out locally and globally on a daily basis. When supporters of our football team brawl with the other team’s, we can easily blame our opponents for starting the trouble (“They always cause a ruckus, don’t they?”). When our country is at war with another, we’re justifiably retaliating against military aggression (“We’re merely defending ourselves against those lunatics across the border”). Behaviour of ‘our people’ that is deemed tolerable can be judged intolerable, immoral or worthy of punishment when people of other groups do the same thing.

The nature of human altruism (helping others), and the role of altruistic punishment (paying a cost to punish those who don’t help others) in establishing cooperation and a basis for sociality, are currently two of the most active areas of research in the behavioural sciences. Nearly every aspect of altruism and cooperation you can think of is being explored: how altruistic behaviour and willingness to punish non-altruists vary across societies with differing social systems (and also what universal trends underlie human altruism); how this variation relates to economic and demographic factors; how people respond to punishment for not cooperating, in both laboratory and real-world situations; the role of institutions that embody social norms of behaviour in maintaining cooperation; and what’s going on in the brain when we cooperate and defect in games of altruism with other human players.

One of the most crucial findings of this research is the extent to which people will incur a cost to punish non-cooperators, and how powerful a force this is in eliciting cooperation from those tempted to defect. Studies with economic games across the world have revealed that the degree to which people will take a monetary hit to punish the unequal division of a sum of money (provided by the experimenter) increases as the split becomes more unequal.

But this isn’t the whole story. Behind the general trend lurks much variation. Perhaps the most important way in which punishing behaviour varies is in the threshold of selfishness that elicits punishment from others. Players in some societies won’t punish until the outcome of dividing the money is grossly unequal, whereas others are much quicker to lay down the law. Some societies even have norms that lead to the punishment of unequal but hyper-fair splits of the money stash (that is, splits in which the person controlling the division gives away more than 50% to the other player), which is something of a puzzle.

The new study addresses a different question, one about altruism, altruistic punishment and groupishness. Do we respond to transgressions of social norms by our ingroup differently from violations of those same norms by members of an outgroup? Are we more forgiving of the former and harsher on the latter by virtue of their group allegiance? The answer looks like a qualified ‘yes’.

Economists Helen Bernhard, Urs Fischbacher and Ernst Fehr took an experimental game called the third-party punishment game (3PPG) to Papua New Guinea to play with indigenous people there. In this version of the 3PPG, games involved three players, A, B and C. At the start, the experimenter gave player A the sum of 10 kina (a good day’s wages in Papua New Guinea). Player A, also known as the Dictator, then decided how much to give to player B, the Recipient (who simply received whatever the Dictator offered). Player C, the Third Party, then had the choice of imposing a punishment on the Dictator in light of what they had given to the Recipient. The punishment worked like this: when the Dictator received their initial 10-kina stash, the Third Party was given 5 kina. The Third Party could then spend 0–2 kina to punish the Dictator, such that for every kina spent by the Third Party, 3 were lost by the Dictator.
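To make the payoff structure concrete, here’s a minimal sketch of how the earnings work out; the function name and structure are mine, not the authors’, and the numbers simply follow the description above.

```python
def three_ppg_payoffs(offer, punishment_spent):
    """Final earnings (dictator, recipient, third_party) in kina.

    offer: kina the Dictator (A) gives to the Recipient (B), 0-10.
    punishment_spent: kina the Third Party (C) spends to punish, 0-2.
    Each kina C spends removes 3 kina from the Dictator's earnings.
    (Whether earnings are floored at zero isn't specified above,
    so that detail isn't modelled here.)
    """
    assert 0 <= offer <= 10 and 0 <= punishment_spent <= 2
    dictator = 10 - offer - 3 * punishment_spent
    recipient = offer
    third_party = 5 - punishment_spent
    return dictator, recipient, third_party

# A selfish Dictator keeps 8 kina; a fully punishing Third Party
# knocks that down to 2, at a cost of 2 kina to themselves.
print(three_ppg_payoffs(offer=2, punishment_spent=2))  # (2, 2, 3)
```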

Players for the 3PPG were recruited from two distinct and non-hostile indigenous groups, who were then mixed together in four experimental conditions. In one set of games, players A, B and C were all from the same group (ABC); in the others, only A and B (AB), A and C (AC), or B and C (BC) were co-members of the same group.

What might we expect to see happening in these groups? Well, the ABC case is like many experiments that have revealed a willingness to punish those that violate norms of sharing — that is, Dictators that try to hog most of the cash for themselves are penalised by Third Parties, which leads many Dictators to revise their strategy (fairness pays more than greed when avarice is fined). So in the ABC game we’d expect to see high punishment by Third Party players for unequal splits and, anticipating this, Dictators would be more likely to offer more equal shares (punishing ‘sharing norm’ violators in this way helps maintain cooperation within groups in which people obey the norm of punishing the selfish). This prediction was fulfilled.

But what of the other conditions? Well, if social norms apply to our own group, and serve to promote cooperation within the group, then there’s little need to extend them to outgroup members. We should neither extend the obligations we feel towards our ingroup brothers and sisters to the outgroup, nor expect outgroup members to show us the consideration they show each other. In the AB condition, the Third Party (C), being from a different group from the Dictator (A) and Recipient (B), has no obligation to punish a violation of the sharing norm if the Dictator’s offer is unfair (the punishing norm applies to other members of their ingroup, to promote cooperation within their own group), and no interest in doing so either (if the punishment induces the Dictator to be more generous in future, that benefit accrues to the Third Party’s outgroup). A similar logic applies to the AC and BC conditions: in both cases, the presence of the outgroup would seem to reduce the propensity to punish (in the AC condition, the Third Party (C), seeing that the Recipient (B) is an outgroup member, does not expect the Dictator (A) to apply the sharing norm to the Recipient, and so would not punish; in the BC condition, the Third Party does not expect the Dictator to obey the sharing norm for a similar reason).

The findings of the Bernhard et al. study don’t quite fit these predictions. First, the general pattern of punishment in response to unfair behaviour was the same across all conditions: people were more likely to punish as the split became increasingly unequal (to the Recipient’s disadvantage), and paid more to effect greater punishment the stingier the Dictator’s offer (see figure). However, punishment was much stronger in the ABC condition than in either the AB or AC conditions.

The real anomaly lies with the punishing behaviour in BC, which was, surprisingly and contrary to the expectation stated earlier, higher than in ABC. The difference between BC and ABC is that in the latter the Third Party is a member of the same group as the Dictator, and this fact leads to lower punishment of (or greater lenience towards) the Dictator. The level of punishment seen in ABC is perhaps a sort of baseline for within-group tolerance of norm violation, a tolerance that is withdrawn from outgroup members. In the AB condition, which shows weak punishment compared with ABC and BC, it’s perhaps not so much that the Third Party is cutting the Dictator more slack as that they’re simply not interested in incurring a cost to punish the violation of a norm among people of a different group.

So Third Parties are more lenient when the Dictator who violates a sharing norm is a member of their own group. The results also show that Third Parties are more willing to punish Dictators who violate the sharing norm when the Recipient is an ingroup member (irrespective of whether the Dictator is an in- or outgroup member). By asking the Dictators what level of punishment they subjectively expected for making low offers to Recipients, the researchers found that players expected just what happened. Another way of putting it is that people are much more protective of ‘victims’ of norm violations (in this case, being offered an unfair amount) who belong to the same group as the punisher (that is, they will be more likely to punish the interloping Dictator). Bernhard et al. call this differential treatment of ingroup and outgroup members, this narrowing of altruistic and protective tendencies, ‘parochial altruism’.

The flip side of the punishing behaviour in these games is how generous people were with other players in the first place. In general, transfers were higher when Dictator and Recipient were members of the same group, as the idea of parochial altruism would suggest. And Dictators were also less likely to offer a truly egalitarian split when the Third Party was a member of their own group but the Recipient was not: they correctly anticipated that the same Third Party would punish a stingy offer to an outgroup Recipient less severely than the same offer made to an ingroup Recipient.

The results of this study are somewhat subtle, and take a bit of getting your head around to see what they might mean. The authors make a number of points in this regard. The first is to note that in all conditions, whatever the mix of ingroup and outgroup members, there was at least some sharing (altruism) and some punishment (altruistic punishment). In other words, players extended egalitarian norms, even if sometimes in diluted form, to outgroup members. This is perhaps a problem for theories that see altruism arising through the selective extinction of groups that are less cooperative, and therefore less successful in the long run. Such a process treats differing groups as competing entities, and so they would not be expected to include outgroups within the sphere of their social norms.

The second relates to the unexpected finding of high levels of punishment in the BC condition (even higher than in ABC). Without taking into account the factors that influence the balance of cooperation and conflict between groups, this finding is puzzling. One suggestion is that punishing an outgroup member who harms an ingroup member might enhance the security of the ingroup by sending out the message “You mess with one of us, you mess with us all”. Just as in gang culture, groups that are known to protect their own with a swift and aggressive response confer a degree of protection on each individual member, as no would-be outgroup aggravator wants to bring trouble on their own head.

Although the differing standards to which ingroup and outgroup members were held in this study had no harmful real-world consequences, they are a reminder of how, in one another’s eyes, we are not all equal: some — our ingroup — are more equal than others. This bias, stoked by religious, political or territorial disputes, can easily lead to a moral distancing of ‘them’, and justify whatever actions are perpetrated in ‘our’ name — the consequences of which we all too frequently read about. Perhaps, by being aware of this potentially dangerous proclivity for parochialism in the social and moral realms, we can take steps to resist the urge and develop an expanded, more encompassing social and moral framework.

Tuesday, August 15, 2006

Punishment: A Global Tour

A while back I wrote about a paper that explored the role of punishment in maintaining cooperation among unrelated people — currently one of the hottest topics in the human behavioural sciences. But like most (but certainly not all) studies on punishment and cooperation, this research was done with Western subjects in a laboratory setting. Now anthropologist Joe Henrich and colleagues have published a study that looked at punishment and cooperation in diverse societies around the planet.

Henrich and other economists and anthropologists have previously studied how people play economic games in these same societies, and the results suggest that a propensity to punish those who don’t cooperate with us, and instead try to rip us off, is part of human psychology. But how willing people are to punish the greedy, and the costs they’ll incur to do so, differ from society to society. The new study probes this propensity a bit further.

The researchers collected data from more than 1,700 adults in 15 societies, which the authors claim span the full range of human production systems. Using two favourites of research on human cooperation and altruism – the Ultimatum Game (UG) and the Third-Party Punishment Game (3PPG) – the globe-trotting research team collected results that need to be explained by any theory of human altruism, whether based solely on genetic evolution or on gene-culture co-evolution.

In the UG, two players are allotted a sum of money, say $100 (or the local equivalent). Player 1, the Proposer, is told that they will decide how the money is split between the two players, and can make an offer anywhere between zero and $100, in $10 increments. Player 2, the Chooser, has to decide, for each potential offer ($0, $10, $20 and so on), whether they’d accept an offer of that size. The Proposer’s actual offer is then revealed, and if the Chooser has agreed to accept an offer at that level, the money is split as agreed and the game is over. But there’s a catch: if the Chooser has rejected offers at that level, both players walk away with nothing.
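To make this ‘decide in advance for every offer’ format concrete, here’s a minimal sketch of how a round resolves; the function and the example acceptance rule are illustrative assumptions, not details from the paper.

```python
def resolve_ultimatum(offer, acceptance_table, stake=100):
    """Return (proposer, chooser) payoffs for one UG round.

    offer: amount the Proposer transfers, in $10 increments.
    acceptance_table: maps each possible offer (0, 10, ..., 100)
    to the Chooser's pre-committed accept (True) / reject (False).
    """
    if acceptance_table[offer]:      # Chooser accepts at this level
        return stake - offer, offer
    return 0, 0                      # rejection: both get nothing

# A Chooser who pre-commits to reject any offer below 20% of the stake:
table = {o: o >= 20 for o in range(0, 101, 10)}
print(resolve_ultimatum(10, table))  # (0, 0)  - costly punishment
print(resolve_ultimatum(30, table))  # (70, 30)
```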

For a Chooser driven to maximise monetary gain (that is, to act in a materially self-regarding way), any offer should be accepted, as some money is better than no money. Seeing the logic of this, Proposers should offer the smallest amount possible, safe in the knowledge that this minimal offer will be snapped up.

So a theory based on pure rationality and self-regard would say. But this isn’t how people generally act. It is well known that Proposers routinely offer up to 50% of the cash pie, and Choosers tend to reject offers below 20%. In other words, people will forgo money (which, economically, is equivalent to incurring a cost) in order to make sure that others don’t unfairly benefit (that is, in order to punish people who behave unfairly and snatch themselves an unequal slice of the pie). This needs explaining, and the underlying reasons for this behaviour need to be incorporated into a larger theory of human cooperation, altruism and self-regard.

So behaviour in the UG is a measure of whether people are willing to engage in costly punishment, and because there are only two players in the game Henrich et al. call this second-party punishment. In all of the populations studied, the degree to which people were willing to impose second-party punishment on another player increased as the proposed offer became more unequal – or more unfair, in common parlance. As the offers decreased from 50% to 0%, so too did the likelihood of accepting the offer. Overall, 56% of players rejected offers of 10% or less.

But this general trend masks much variation. In five populations — the Amazonian Tsimane, the Shuar of the Andes, Isanga village in Tanzania, the Yasawa in Fiji and the Samburu in Kenya — just 15% of people rejected these low offers. At the other end of the spectrum, in four societies — the Maragoli and the Gusii in Kenya, rural Missouri (USA) and the Sursurunga in New Ireland — more than 60% rejected the same offers. Surprisingly, and in contrast to the behaviour of Western students (one of the study populations), 6 of the 14 non-student populations rejected unbalanced offers that were biased in their favour (offers above 50%). This is, prima facie, puzzling, and the rejection of such hyper-fair offers also needs to be explained by an adequate theory of human altruism.

Statistical analyses carried out by the authors revealed that the variation in behaviour in the UG across populations could not be explained by demographic or economic differences between them – we’ll get to what might in a moment. But before doing so, we should look at what was found with the other game, the Third-Party Punishment Game.

The 3PPG is basically the UG with someone watching, but with some important differences. A pot of money is provided, half of which goes to a third player, the Watcher. As before, the Proposer decides what to offer the Chooser. While the Chooser decides (in private) the level at which they’ll accept or reject, the Watcher is asked to decide, for the full range of possible offers, whether to pay 10% of the total stake (20% of their own pile) to punish the Proposer — so the Watcher may say, “Anything less than 40% ($40) and I’ll punish”. This is how the punishment works. Let’s say the Proposer offered $30 (and the Chooser accepts). The Watcher would then punish the Proposer, paying $10 (10% of the stake) to do so. This $10 buys the Watcher a ‘30% cost’ on the Proposer, so if their offer was accepted by the Chooser, the Proposer would walk away with $70 minus 30%, or $49. The Watcher would walk away with $50 (their original stake) minus the $10 ‘punishment fee’, or $40, and the Chooser would get their $30.
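A quick sketch of that arithmetic, reading the ‘30% cost’ as applying to the Proposer’s retained share (as the worked example implies); the function name and labels are mine:

```python
def watched_game_payoffs(offer, punish, stake=100):
    """(proposer, chooser, watcher) payoffs, assuming the Chooser accepts."""
    proposer = stake - offer
    watcher = stake // 2          # the Watcher starts with half the pot
    if punish:
        proposer *= 0.7           # 30% cost on what the Proposer keeps
        watcher -= stake // 10    # punishment fee: 10% of the stake
    return proposer, offer, watcher

# Reproduces the worked example: the Proposer offers $30 and is punished.
print(watched_game_payoffs(30, punish=True))  # (49.0, 30, 40)
```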

What should a rational, self-regarding Watcher do in the 3PPG? Well, it would never make sense to punish a Proposer who made an unfair offer, as you’d always lose money this way and never gain any. But again, this isn’t how people act: they seem prepared to engage in what Henrich et al. call third-party punishment. Overall, 60% of Watchers were willing to pay 20% of their endowment (which in these games represented half a day’s wages) to punish Proposers who offered nothing at all. And yet again there was variation: this figure dropped to ~28% among the Tsimane and Hadza (Tanzania) populations, and rose to more than 90% among the Gusii and Maragoli populations. Another statistical analysis suggested that these differences, like those in the UG, were not attributable to demographic/economic factors.

The authors also looked at behaviour in one final game, one that measured a propensity towards altruism and fairness, rather than punishing behaviour: the Dictator Game. This is the same as the UG except that Choosers in fact have no choice; they’re mandatory Accepters. Knowing this, and being therefore relieved of the threat of spiteful rejections by genuine Choosers, ‘offers’ by Proposers tend to be lower.

Taking the results together, what do they tell us about altruism, cooperation and punishment? Well, one theory of the origins of human altruism is that it’s the product of a co-evolutionary process involving genes (and the minds they help build) and culture (and the social rules it prescribes). Social norms for punishing cheats can co-evolve with a psychological propensity to engage in punishing behaviour, and as a consequence stabilise cooperation (because people want to avoid the costs of being punished, and so play ball).

If this were true, then we’d expect to see that altruistic behaviour would correlate with punishing behaviour – and the results collected by Henrich et al. support this. In general, societies with high degrees of punishment also tend to harbour greater altruism.

Whether or not the gene-culture co-evolutionary model turns out to be correct, these sorts of real-world studies, and the results they produce, constrain and inform all theories of human altruism, and therein lies perhaps their greatest value.

It’s been a long time….

I’ve had a bit of a break from blogging while I concentrated on a few other things, but I’m back on board again, and plan to make regular updates as before, although I’m sure you all managed just fine without PSOM for a while!