Punishment: A Global Tour
A while back I wrote about a paper that explored the role of punishment in maintaining cooperation among unrelated people — currently one of the hottest topics in the human behavioural sciences. But like most (but certainly not all) studies on punishment and cooperation, this research was done with Western subjects in a laboratory setting. Now anthropologist Joe Henrich and colleagues have published a study that looked at punishment and cooperation in diverse societies around the planet.
Henrich and other economists and anthropologists have previously studied how people play economic games in these same societies, and the results suggest that a propensity to punish those who don’t cooperate with us, and instead try to rip us off, is part of human psychology. But how willing people are to punish the greedy, and the costs they’ll incur to do so, differ from society to society. The new study probes this propensity a bit further.
The researchers collected data from more than 1,700 adults in 15 societies, which the authors claim span the full range of human production systems. Using two favourite tools of research on human cooperation and altruism – the Ultimatum Game (UG) and the Third-Party Punishment Game (3PPG) – the globe-trotting research team collected results that need to be explained by any theory of human altruism, whether based solely on genetic evolution or on gene-culture co-evolution.
In the UG, two players are allotted a sum of money, say $100 (or the local equivalent). Player 1, the Proposer, is told that they will decide how the money will be split between the two players, and can make an offer anywhere between zero and $100, in $10 increments. Player 2, the Chooser, has to decide, for each potential offer ($0, $10, $20 and so on) whether they’d accept the offer. The Proposer’s actual offer is then revealed, and if the Chooser has agreed to accept an offer of that level, the money is split as agreed and the game is over. But there’s a catch: if the Chooser rejects the offer at that level, both players walk away with nothing.
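To make the payoff structure concrete, here's a minimal sketch of the UG in Python. The names (`ug_payoffs`, `STAKE`) are my own illustration, not anything from the study:

```python
STAKE = 100  # total pot, in dollars (or the local equivalent)

def ug_payoffs(offer: int, min_acceptable: int) -> tuple[int, int]:
    """Return (Proposer, Chooser) payoffs for one round.

    `offer` is the amount the Proposer gives away; `min_acceptable` is
    the lowest offer the Chooser pre-committed to accept. An offer
    below that threshold is rejected, and both players get nothing.
    """
    if offer >= min_acceptable:
        return STAKE - offer, offer   # the deal goes through
    return 0, 0                       # rejection: both walk away empty-handed

# A $10 offer against a $20 threshold is rejected; a fair split is not.
assert ug_payoffs(10, 20) == (0, 0)
assert ug_payoffs(50, 20) == (50, 50)
```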
For a Chooser driven to maximise monetary gain (that is, to act in a materially self-regarding way), any offer should be accepted, as some money is better than no money. Seeing the logic of this, Proposers should offer the smallest amount possible, safe in the knowledge that this minimal offer will be snapped up.
So a theory based on pure rationality and self-regard would say. But this isn’t how people generally act. It is well known that Proposers routinely offer up to 50% of the cash pie, and Choosers tend to reject offers below 20%. In other words, people will forgo money (which, economically, is equivalent to incurring a cost) in order to make sure that others don’t unfairly benefit (that is, in order to punish people who behave unfairly and snatch themselves an unequal slice of the pie). This needs explaining, and the underlying reasons for this behaviour need to be incorporated into a larger theory of human cooperation, altruism and self-regard.
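Continuing the toy sketch above, the rational-actor prediction falls out in a couple of lines (a check of the reasoning, not anything from the study): against a Chooser who accepts any positive offer, the Proposer's best move is the smallest positive increment.

```python
# With a Chooser who accepts any positive offer (threshold $10, the
# smallest increment), the Proposer's payoff-maximising offer is minimal.
best_offer = max(range(0, STAKE + 1, 10),
                 key=lambda offer: ug_payoffs(offer, 10)[0])
print(best_offer)  # 10 — the lowball offer keeps the most for the Proposer
```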
So behaviour in the UG is a measure of whether people are willing to engage in costly punishment, and because only the two players are involved, Henrich et al. call this second-party punishment. In all of the populations studied, the degree to which people were willing to impose second-party punishment on another player increased as the proposed offer became more unequal – or more unfair, in common parlance. As offers decreased from 50% to 0%, so too did the likelihood of acceptance. Overall, 56% of players rejected offers of 10% or less.
But this general trend masks much variation. In five populations — the Amazonian Tsimane, the Shuar of the Andes, Isanga village in Tanzania, the Yasawa in Fiji and the Samburu in Kenya — just 15% of people rejected these low offers. At the other end of the spectrum, in four societies — the Maragoli in Kenya, the Gusii (also in Kenya), rural Missouri (USA) and the Sursurunga in New Ireland — more than 60% rejected the same offers. Surprisingly, and in contrast to the behaviour of Western students (one of the study populations), 6 of the 14 non-student populations rejected unbalanced offers that were biased in their favour (offers above 50%). This is, prima facie, puzzling, and the rejection of such hyper-fair offers also needs to be explained by an adequate theory of human altruism.
Statistical analyses carried out by the authors revealed that the variation in behaviour in the UG across populations could not be explained by demographic or economic differences between them – we’ll get to what might explain it in a moment. But before doing so, we should look at what was found with the other game, the Third-Party Punishment Game.
The 3PPG is basically the UG with someone watching, but with some important differences. A pot of money is provided, half of which goes to the Watcher. As before, the Proposer decides what to offer the Chooser. While the Chooser decides (in private) on the level at which they’ll accept or reject, the Watcher decides, for the full range of possible offers, whether to pay 10% of the total stake (20% of their own pile) to punish the Proposer — so the Watcher may say, “Anything less than 40% ($40) and I’ll punish”. Here’s how the punishment works. Let’s say the Proposer offered $30 (and the Chooser accepts). Then the Watcher would punish the Proposer, paying 10% ($10) to do so. This $10 buys the Watcher a ‘30% cost’ on the Proposer, so if their offer was accepted by the Chooser, they’d walk away with $70 minus 30%, or $49. The Watcher would walk away with $50 (their original stake) minus the $10 ‘punishment fee’, or $40, and the Chooser would get their $30.
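That worked example can be pinned down in a few lines, extending the earlier sketch. Again the names are illustrative, and the arithmetic follows the post's description (a fee of 10% of the stake buys a 30% reduction of whatever the Proposer kept):

```python
WATCHER_POT = 50   # the Watcher's separate endowment: half the stake
FEE = 10           # 10% of the stake, i.e. 20% of the Watcher's pot

def tppg_payoffs(offer: int, punish_below: int, accepted: bool = True):
    """Return (Proposer, Chooser, Watcher) payoffs for one 3PPG round."""
    proposer = STAKE - offer if accepted else 0
    chooser = offer if accepted else 0
    watcher = WATCHER_POT
    if offer < punish_below:              # the Watcher's pre-committed rule fires
        watcher -= FEE                    # punishing is costly to the Watcher...
        proposer -= round(proposer * 0.3) # ...and knocks 30% off the Proposer's take
    return proposer, chooser, watcher

# The post's example: a $30 offer against an "anything less than $40" rule.
assert tppg_payoffs(30, punish_below=40) == (49, 30, 40)
```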
What should a rational, self-regarding Watcher do in the 3PPG? Well, it would never make sense to punish a Proposer who made an unfair offer, as you’d always lose money this way and never recoup it. But again, this isn’t how people act: they seem prepared to engage in what Henrich et al. call third-party punishment. Overall, 60% of Watchers were willing to pay 20% of their endowment (which in these games represented half a day’s wages) to punish Proposers who offered nothing at all. And yet again there was variation: this figure dropped to ~28% among the Tsimane and Hadza (Tanzania) populations, and rose to more than 90% among the Gusii and Maragoli populations. Another statistical analysis suggested that these differences, like those in the UG, were not attributable to demographic or economic factors.
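The Watcher's dilemma is easy to verify with the toy functions above: whatever punishment rule they adopt, it only ever subtracts the fee from their own payoff.

```python
# For every possible offer, a Watcher who punishes it is exactly FEE worse
# off than one who never punishes — punishment is pure cost to the Watcher.
for offer in range(0, STAKE + 1, 10):
    never = tppg_payoffs(offer, punish_below=0)[2]          # never punish
    always = tppg_payoffs(offer, punish_below=STAKE + 1)[2] # always punish
    assert always == never - FEE
```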
The authors also looked at behaviour in one final game, one that measured a propensity towards altruism and fairness, rather than punishing behaviour: the Dictator Game. This is the same as the UG except that Choosers in fact have no choice; they’re mandatory Accepters. Knowing this, Proposers are relieved of the threat of spiteful rejections by genuine Choosers, and their ‘offers’ tend to be lower.
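In the sketch's terms, the Dictator Game is just the UG with the rejection threshold pinned at zero, which is why the transfer directly measures generosity (again purely illustrative):

```python
# With no credible threat of rejection, even a lowball 'offer' stands.
assert ug_payoffs(10, 0) == (90, 10)
assert ug_payoffs(0, 0) == (100, 0)   # the Dictator may keep everything
```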
Taking the results together, what do they tell us about altruism, cooperation and punishment? Well, one theory of the origins of human altruism is that it’s the product of a co-evolutionary process involving genes (and the minds they help build) and culture (and the social rules it prescribes). Social norms for punishing cheats can co-evolve with a psychological propensity to engage in punishing behaviour, and as a consequence stabilise cooperation (because people want to avoid the costs of being punished, and so play ball).
If this were true, then we’d expect to see that altruistic behaviour would correlate with punishing behaviour – and the results collected by Henrich et al. support this. In general, societies with high degrees of punishment also tend to harbour greater altruism.
Whether or not the gene-culture co-evolutionary model turns out to be correct, these sorts of real-world studies, and the results they produce, constrain and inform all theories of human altruism, and therein lies perhaps their greatest value.