Sunday, April 23, 2006

To cooperate or free-ride: picking the right pond

Cooperation can get off the ground when people can punish cheats, and a new study shows that people choose environments that allow punishment over ones in which cheats go free.

Why do people get together and cooperate? Why do people not ruthlessly pursue their own selfish ends in a battle ‘red in tooth and claw’ in which only the fittest survive? One obvious answer is that cooperation – working as a team, contributing your fair share to a group project – can produce results unattainable through solo effort. Group living has many potential benefits. But cooperative groups are constantly under threat from cheats that want to exploit the system for their own ends – and if enough people do this, the benefits of cooperation come crashing down. A new study in Science by Özgür Gürerk and colleagues, along with an excellent commentary from Joseph Henrich, adds another piece to the puzzle of why and how people come together to form cooperative groups.

The problem of altruism and cooperation has long been a puzzle for evolutionary biology, and has given rise to a number of competing and complementary theories. Two of the best known – William Hamilton’s theory of kin selection (directing help to kin), and Robert Trivers’s theory of reciprocal altruism (scratching the back of those who scratch yours) – explain much of the cooperation we see in the animal world. Kin-directed altruism is the most ubiquitous type, whereas surprisingly few solid examples of reciprocal altruism have been found among animals [1]. (Reciprocity does, however, seem to be an important feature of human cooperation and altruism, even if not in the form of direct reciprocal altruism.)

But when it comes to explaining human altruism these theories fall short of the mark. Humans direct help towards unrelated individuals on a scale unparalleled by any other species, and cooperate in large groups of unrelated individuals, so kin selection is not really of much relevance here. And theoretical studies suggest that reciprocal altruism cannot stabilise cooperation in large groups. The scale and diversity of human cooperation requires something beyond these two explanations [2].

Public spirit
One type of cooperative endeavour that has been explored in great detail is the ‘public-goods game’. These games are designed to reflect public-goods dilemmas in the real world. A public good is anything from which everyone can benefit equally, such as clean air and rivers, or the National Health Service in the UK (which is at least in principle a public good!). The provision of public goods, such as generating sufficient funding for public broadcast stations, is often the product of collective action, and yields a benefit for everyone regardless of whether or not they contributed. Public-goods dilemmas arise because of an inherent tension in providing public goods that results from the logic of collective action. When you make an effort to recycle your waste, you’re contributing to a public good (a cleaner planet) that your neighbours benefit from as much as you do, even if they don’t recycle. Everyone wants this public good, and so has a motivation to contribute to its provision. But there is also a strong temptation not to bother going to the effort of recycling. So long as enough other people are contributing to produce the desired public good, you can direct your energy elsewhere, in pursuits that benefit just yourself (or, in the case of donating to public radio/TV, spend your money on something else). This temptation to cheat, or free-ride, however, threatens to unravel the whole cooperative endeavour. If everyone adopts this logic, no one will contribute and you can kiss goodbye to the public good. So that’s the catch.

In spite of the problem of free-riding, groups of people do in fact cooperate - people do recycle, contribute to public broadcast stations, donate blood, pay taxes and so on. Public-goods dilemmas have been brought into the lab and studied as games, and have confirmed common-sense observations that humans do have a propensity to cooperate - but only under certain conditions.

Public-goods games usually take the following form. A group of, say, 20 players are each given 20 monetary units (MU) by a benevolent experimenter. The players are then given the choice to contribute as many or as few MU as they wish to a common pot, while keeping the rest in their private account. The MU in the pot are then counted up, and the experimenter, playing the role of banker, multiplies the total in the common pot by some factor (perhaps doubling it). The common pot is then split equally among the players, regardless of how much or how little each player put in. So the maximum ‘profit’ is made if everyone puts in all their chips, which in my example is doubled and then split (20 x 20 = 400; 400 x 2 = 800 = 40 MU per person after dividing). Such behaviour is a form of cooperation because it enables the group to achieve the best outcome possible (the highest profit for the group as a whole), and this benefit is shared among everyone. However, for a profit-maximising individual the best outcome would be for everyone else to contribute all of their MU while contributing nothing themselves (19 x 20 = 380; 380 x 2 = 760 = 38 MU per person, plus the 20 MU the free-rider kept by not contributing, totalling 58 MU).
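To make the arithmetic concrete, here is a minimal sketch of the payoff calculation in Python (the function and variable names are mine; the numbers follow the example above):

def payoffs(contributions, endowment=20, multiplier=2.0):
    # Return each player's earnings for one round of a public-goods game.
    pot = sum(contributions) * multiplier      # the banker multiplies the common pot
    share = pot / len(contributions)           # split equally, whatever each player put in
    return [endowment - c + share for c in contributions]

players = 20
print(payoffs([20] * players)[0])              # everyone contributes fully: 40.0 MU each

mixed = payoffs([0] + [20] * (players - 1))    # one free-rider among 19 contributors
print(mixed[0], mixed[1])                      # 58.0 MU for the free-rider, 38.0 MU for the rest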

This game can be played round after round to see whether cooperation reigns, evaporates or never emerges at all. And it can also be tweaked in interesting ways to reveal wrinkles on the face of cooperation. One crucial feature that can be added is the ability of players to punish free-riders. As in the game above, players get a stash of MU and contribute (or not) to a pot that is multiplied by some fixed factor and then split evenly among all players. Then the crucial extra step is added. Each player receives information about the behaviour of the other players in the round – what they contributed and what they earned (this can be done anonymously, to explore or eliminate the role of reputation in public-goods games). Players then have the opportunity to punish others by imposing fines on them, but at a cost to themselves. For instance, a player might be able to impose a fine of 3 MU on another player at a cost to himself of 1 MU from his private account.
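A hedged sketch of this punishment stage, again in Python (the 3:1 fine-to-cost ratio follows the example above; the bookkeeping is my own illustration, not the exact protocol of any particular experiment):

FINE = 3    # MU deducted from the punished player
COST = 1    # MU the punisher pays from their own account

def punish(accounts, punisher, target):
    # One costly punishment act; modifies the accounts in place.
    accounts[punisher] -= COST
    accounts[target] -= FINE

accounts = {'A': 58, 'B': 38, 'C': 38}   # say A free-rode in the last round
punish(accounts, 'B', 'A')               # B pays 1 MU to fine A 3 MU
print(accounts)                          # {'A': 55, 'B': 37, 'C': 38}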

It turns out that when people have the opportunity to punish, they grab hold of it with both hands [3]. People don’t punish indiscriminately; they tend to punish free-riders or cheats – and take pleasure from it (this has been assessed both psychologically and neurologically). This has the effect of making it costly to free-ride and more attractive to cooperate, and public-goods games with punishment options can stabilise high levels of cooperation.

The power of punishment
Punishment in public-goods games raises further questions though. Although it makes sense to cooperate when there are punishers about, why bother to punish free-riders in the first place? Exercising the option to punish does not come for free. The punisher incurs a cost that non-punishing cooperators do not pay, yet those non-punishers nonetheless benefit from the higher levels of cooperation that punishing acts promote. This kind of punishment has therefore been called altruistic punishment (altruistic to the group, not to the punished player, obviously!). Altruistic punishment seems to be a feature of human cooperation, but why do people do it?

In recent years, the idea of ‘strong reciprocity’ has gained increasing theoretical and empirical support as an explanation of the human tendency to cooperate with cooperators and to punish cheats. A strong reciprocator is an individual that “responds kindly to actions that are perceived to be kind and hostilely toward actions that are perceived to be hostile” [4]. Modelling studies have shown that under certain conditions strong reciprocity can evolve and do well in competition with other more self-regarding strategies (that is, those that aim to provide the most individual benefit). Indeed, strong reciprocity is what evolutionary game theorists call an ‘evolutionarily stable strategy’ (essentially a strategy that can’t be beaten when common). But the evolution of strong reciprocity is based on different mechanisms from those underlying kin selection and reciprocal altruism. Whereas kin selection and reciprocal altruism can be explained by natural selection among ‘selfish genes’ that contribute to altruistic behaviour (and which are therefore examples of genetic evolution), the evolution of strong reciprocity is couched in terms of gene-culture co-evolution and cultural group selection (not biological gene selection).
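For a feel for how such modelling studies work, here is a toy replicator-dynamics sketch in Python. The payoff structure and every parameter are illustrative assumptions of mine, not any of the cited models; it simply shows that once punishers are common, defection does badly:

m, cost = 0.5, 1.0      # per-unit return on the public good; cost of contributing
fine, fee = 3.0, 1.0    # fine a defector suffers per punisher; punisher's fee per defector
base = 10.0             # baseline payoff, keeps fitnesses positive

def step(x_c, x_d, x_p):
    # One generation of replicator dynamics over cooperators, defectors, punishers.
    good = m * (x_c + x_p)                  # everyone's share of the public good
    w_c = base + good - cost                # cooperator: contributes, never punishes
    w_d = base + good - fine * x_p          # defector: fined by every punisher
    w_p = base + good - cost - fee * x_d    # punisher: contributes and pays to punish
    avg = x_c * w_c + x_d * w_d + x_p * w_p
    return x_c * w_c / avg, x_d * w_d / avg, x_p * w_p / avg

x = (0.1, 0.3, 0.6)                         # punishers start common
for _ in range(200):
    x = step(*x)
print(tuple(round(v, 3) for v in x))        # defectors are driven out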

This definition turns on perceptions of kindness and hostility, and such perceptions depend on norms – rules of social conduct that can differ from cultural group to cultural group. Different cultural groups can differ in their social norms on a wide range of issues, such as appropriate dress, rules of conduct with peers and acquaintances, and food rituals, as well as notions of fairness, justice, and right and wrong. (In public-goods games, people are punished for violating the fairness norm “contribute to public goods from which you’ll benefit”.) A society’s norms are not only stored in the minds of its people; they are also embodied in the institutions of the society, such as religious systems of belief, educational policies and practices, and government. The role of institutions, and of the norms they sustain, is therefore likely to be an important part of the puzzle of human cooperation.

Institutionalised cooperation
The elegant new study by Özgür Gürerk, Bernd Irlenbusch and Bettina Rockenbach illuminates the effects of different institutions on cooperative behaviour, and more specifically how enabling people to choose the type of institution they are part of aids the evolution of cooperation. Gürerk and colleagues used the tried-and-tested public-goods game, but added a twist. A pool of 84 players was recruited for the study, in which they played 30 ‘rounds’ of the public-goods game, with three stages to each round. The novel aspect of this study came in the first stage, in which players could choose whether to play in a setting in which free-riding (defined in this study as contributing 5 MU or less in a round) went unpunished, or in one in which free-riders could be penalised by fellow players (that is, a condition in which players could exercise altruistic punishment). These different sets of rules can be thought of as basic ‘institutions’ (obviously of a narrow kind); Gürerk and colleagues call the condition in which punishing is possible the sanctioning institution (SI) and the punishment-free condition, quite reasonably, the sanction-free institution (SFI).

After the players had chosen between SI and SFI, the game proceeded as usual: the players contributed or not, then the common pot was multiplied by a fixed factor and the MU divided out among the players (there was one common pot for the SI group and another for the SFI group – the MU were pooled and divided only within groups, not between them, so one group could do better than the other on a collective and per capita basis). In the SI condition, but not in SFI, players then had the opportunity to punish. At the end of each round – after the MU had been counted up, multiplied and doled out, and players had received anonymous information about the behaviour of the other players – each player could impose a sanction on anyone else in the group. These sanctions could be either positive or negative: a positive sanction cost 1 MU to ‘award’ 1 MU to another player, and a negative sanction cost 1 MU to impose a 3 MU fine on another player. (In SFI, after the money had been divided out, players simply carried on to the next round.)
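Here is a minimal sketch of one SI round as I read this design (the data structures and function names are my own, not the paper's):

def si_round(endowment, contributions, multiplier, sanctions):
    # contributions: MU each player put in; sanctions: (sanctioner, target, kind)
    # tuples, where kind is '+' (1 MU buys a 1 MU reward) or '-' (1 MU buys a 3 MU fine).
    share = sum(contributions) * multiplier / len(contributions)
    earnings = [endowment - c + share for c in contributions]
    for sanctioner, target, kind in sanctions:
        earnings[sanctioner] -= 1            # sanctioning always costs the sanctioner 1 MU
        earnings[target] += 1 if kind == '+' else -3
    return earnings

# Player 0 free-rides; players 1 and 2 each fine them.
print(si_round(20, [0, 20, 20], 2.0, [(1, 0, '-'), (2, 0, '-')]))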

[Table and accompanying figures: first-round contributions, rates of free-riding and payoffs in SI and SFI]
I’ve summarised some of the key results in the table above (other trends and data are shown in the accompanying figures). The results at the beginning were pretty straightforward: roughly one-third of players picked SI and the remaining two-thirds picked SFI. This might be taken as an indication that most people have a propensity towards selfishness, and want at least to keep the option of free-riding open. In this study, the choice of institution was also related to how players behaved in the first round (that is, how much they contributed, or whether they free-rode). In SI, the average contribution in the initial round was 12.7 MU, but only 7.3 MU in SFI; and whereas nearly half of the players in SI contributed 15 MU or more (‘high contributors’), just over 10% were so inclined in SFI (see figure). The incidence of free-riding tells the same story: whereas almost half of the players in SFI hitched a free ride (43.4%), less than one-fifth did so in SI (16.1%). So the majority of players initially opted for an institution in which punishment of free-riding was not a possibility, and then contributed little more than half as much as the minority who opted for the punishing institution (who perhaps chose it because they planned to contribute highly, and therefore expected to escape punishment).

Cooperation does not seem to be the order of the day, and it seems unlikely that it would get off the ground given this inauspicious start. What’s worse, selfish free-riders initially do really well in SFI (averaging 49.7 MU in the first round). Perhaps even more depressingly, the higher average contribution of 12.7 MU made by players in SI (compared with 7.3 MU in SFI) does not yield a higher average payoff in the first round (38.1 MU in SI compared with 44.4 MU in SFI; see table). However, free-riding in SI is significantly less attractive than in SFI, because many players in SI impose fines on free-riders. And this has important consequences.

As in previous studies, without the threat of punishment hanging over their heads many people succumb to the temptation to free-ride in SFI. More people free-riding means fewer people contributing, which means there’s even less reason to cooperate and contribute because other people are not doing likewise – a vicious cycle that leads to the unravelling of cooperation and plummeting contributions. By contrast, contributions in SI gradually increase, and free-riding drops (because of the cost of being punished). But remember the twist in this study: at the beginning of each round players choose whether to play in SI or SFI. So what happened after the initial split of one-third of players into SI and the other two-thirds into SFI?

Despite players’ initial wariness of leaving SFI to join SI, by the end of the experiment nearly everyone had switched to SI (92.9% in the final round) and was cooperating fully. At the same time, contributions in SFI steadily decreased until they hit rock bottom. The average contribution in round 30, the final round, brings home the difference in behaviours cultivated by SI and SFI: 19.4 MU in SI, compared with nothing in SFI.

The different ‘life histories’ of SI and SFI provide some clues about why people migrate from SFI to SI (despite their initial aversion). One potential factor is imitation of successful players – those who gain the greatest payoffs. Overall, players in SI do best (the average over all rounds was 18.3 MU in SI and 2.9 MU in SFI), so a policy of copying the most successful could explain why players eventually migrate from SFI to SI.

At the beginning of the experiment, however, free-riders in SFI are the most successful players (they reap the greatest rewards), and so imitation should lead to an increase in free-riders in the next round. In fact, this is just what was seen in round 2. But as time passes and SFI sees a decline in cooperative behaviour (because of the prevalence of free-riders), things change and selfishness starts to become self-defeating. From round 5 onwards, high contributors in SI earned more than free-riders in SFI. So imitation of successful players would then promote greater migration from SFI to SI – and again, this is what was seen. What’s more, players moving from SFI to SI tended to switch from free-riding to cooperation (as if they were maximising their payoff). Institutions, in other words, affect behaviour.
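A toy sketch of this ‘imitate the most successful’ rule (the payoff numbers come from the text, but the update rule itself is an illustrative assumption of mine, not the paper’s model):

def imitate_best(profiles, payoffs):
    # Every player copies the (institution, contribution) profile of
    # whoever earned the most in the previous round.
    best = max(range(len(payoffs)), key=lambda i: payoffs[i])
    return [profiles[best]] * len(profiles)

profiles = [('SFI', 0), ('SI', 20), ('SFI', 7)]
print(imitate_best(profiles, [44.4, 38.1, 30.0]))  # round 1: the SFI free-rider earns most
print(imitate_best(profiles, [2.9, 18.3, 1.0]))    # later rounds: the SI cooperator wins out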

This is seen even more clearly when players’ behaviour on moving institutions is examined in more detail. On migrating from SFI to SI, 80.3% of players increased their contribution between consecutive rounds, and 27.1% had something of a ‘St Paul moment’ on the Damascene road to SI, switching from free-riding to full cooperation! Conversely, 70% of players reduced their contribution on leaving SI for SFI, and 20% switched from full cooperation to free-riding. As they say, when in Rome…

The wisdom of crowds
Imitation can explain some of the migratory behaviour of players from SFI to SI. So too might rational-choice approaches – players might be working out which strategy is best, and then following it. These explanations face a problem, however: they don’t account for why players switching to SI adopt the strategy of strong reciprocators and punish free-riders and low contributors. The most successful strategy, from a selfish, self-regarding perspective, would be to contribute at a high level (and therefore avoid damaging punishment) but to avoid incurring the costs of punishing others. What actually happened in the experiment is that 62.9% of players adopted the punishment norm immediately after switching from SFI to SI. If contributing in the first place is a public good (because everyone benefits from it), then carrying the cost of punishing free-riders is a ‘second-order’ public good: everyone else benefits from the higher level of contributions that the punisher induces, while the punisher shoulders the cost of punishing. That is why it is called altruistic punishment.

There is yet another potential mechanism that could explain these results, one that features prominently in theories of cultural evolution and gene-culture co-evolution: conformist transmission. Cultural information can be passed on in a number of ways – people can imitate those of high prestige or status, in the hope of picking up the skills, behaviour or knowledge that led to their elevated position. Alternatively, individuals can simply adopt or copy the most prevalent forms of behaviour or knowledge – conform to the norms of society, in other words. And humans certainly do conform. In a famous experiment published in 1951, Solomon Asch showed how people will often override their own opinions and express a belief more in tune with a group consensus. Theoretical studies have since shown how conformist transmission of cultural norms can be a powerful force in cultural evolution.
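A minimal sketch of conformist transmission, assuming a simple majority rule (my own illustration, not a model from the cited literature):

from collections import Counter

def conform(behaviours):
    # A conformist newcomer adopts whatever behaviour is most common in the group.
    return Counter(behaviours).most_common(1)[0][0]

si_group = ['contribute and punish'] * 7 + ['contribute only'] * 2
print(conform(si_group))    # 'contribute and punish' - the local norm wins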

In this study, as players switched from SFI to SI, so too did their behaviour – and this isn’t explicable through simple imitation or payoff maximisation. A propensity to conform to the prevailing norms of the institution that you find yourself in, however, can explain it. In the head-to-head competition that this study created, between an institution that maintains norms of punishing free-riders and one that doesn’t, not only do individuals end up doing better in the SI group, but the whole group does better than SFI. In any case, the cost of following the punishment norm steadily decreases, because the threat of punishment means that there is not much free-riding, and therefore not much need (or cost) to punish. So following prosocial norms such as punishing cheats carries only a marginal cost compared with self-centred norms.

The demonstration that institutions govern the way cooperation and punishment are regulated, and that freedom to choose between institutions favours regimes more conducive to cooperation, sets the stage for a number of further questions. Joe Henrich mentions two in his commentary on this research: “What happens if switching institutions is costly, or if information about the payoffs in the other institution is poor? Or, what happens if individuals cannot migrate between institutions, but instead can vote on adopting alternative institutional modifications?” Answering such questions might help in the design of institutions that foster cooperation on scales from the local to the global, and provide clues about what determines whether certain norms and institutions spread.

Notes
1. Hammerstein, P. Why is reciprocity so rare in social animals? A Protestant appeal. In Genetic and Cultural Evolution of Cooperation (ed. Hammerstein, P.) 83-93 (MIT Press, 2003).

2. See, for example, Genetic and Cultural Evolution of Cooperation (ed. Hammerstein, P.) (MIT Press, 2003) and Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (eds Gintis, H., Bowles, S., Boyd, R. & Fehr, E.) (MIT Press, 2005).

3. Fehr, E. & Gächter, S. Altruistic punishment in humans. Nature 415, 137-140 (2002).

4. Fehr, E. & Fischbacher, U. The economics of strong reciprocity. In Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (eds Gintis, H., Bowles, S., Boyd, R. & Fehr, E.) 151-191 (MIT Press, 2005).
