Sunday, April 23, 2006

To cooperate or free-ride: picking the right pond

Cooperation can get off the ground when people can punish cheats, and a new study shows that people choose environments that allow punishment over ones in which cheats go free.

Why do people get together and cooperate? Why do people not ruthlessly pursue their own selfish ends in a battle ‘red in tooth and claw’ in which only the fittest survive? One obvious answer is that cooperation – working as a team, contributing your fair share to a group project – can produce results unattainable through solo effort. Group living has many potential benefits. But cooperative groups are constantly under threat from cheats who want to exploit the system for their own ends – and if enough people do this, the benefits of cooperation come crashing down. A new study in Science by Özgür Gürerk and colleagues, along with an excellent commentary from Joseph Henrich, adds another piece to the puzzle of why and how people come together to form cooperative groups.

The problem of altruism and cooperation has long been a puzzle for evolutionary biology, and has given rise to a number of competing and complementary theories. Two of the best known – William Hamilton’s theory of kin selection (directing help towards kin) and Robert Trivers’s theory of reciprocal altruism (scratching the backs of those who scratch yours) – explain much of the cooperation we see in the animal world. Kin-directed altruism is the most ubiquitous type of altruism, although surprisingly few solid examples of reciprocal altruism have been found among animals [1]. (Reciprocity, though perhaps not in the form of direct reciprocal altruism, does nonetheless seem to be an important feature of human cooperation and altruism.)

But when it comes to explaining human altruism these theories fall short of the mark. Humans direct help towards unrelated individuals on a scale unparalleled by any other species, and cooperate in large groups of unrelated individuals, so kin selection is not of much relevance here. And theoretical studies suggest that reciprocal altruism cannot stabilise cooperation in large groups. The scale and diversity of human cooperation require something beyond these two explanations [2].

Public spirit
One type of cooperative endeavour that has been explored in great detail is the ‘public-goods game’. These games are designed to reflect public-goods dilemmas in the real world. A public good is anything from which everyone can benefit equally, such as clean air and rivers, or the National Health Service in the UK (which is at least in principle a public good!). The provision of public goods, such as raising sufficient funding for public broadcast stations, is often the product of collective action, and yields a benefit for everyone regardless of whether or not they contributed. Public-goods dilemmas arise because of an inherent tension that results from the logic of collective action. When you make an effort to recycle your waste, you’re contributing to a public good (a cleaner planet) that your neighbours benefit from as much as you do, even if they don’t recycle. Everyone wants this public good, and so has a motivation to contribute to its provision. But there is also a strong temptation not to bother going to the effort of recycling. So long as enough other people are contributing to produce the desired public good, you can direct your energy elsewhere, in pursuits that benefit just yourself (or, in the case of donating to public radio/TV, spend your money on something else). This temptation to cheat, or free-ride, however, threatens to unravel the whole cooperative endeavour. If everyone adopts this logic, no one will contribute and you can kiss goodbye to the public good. So that’s the catch.

In spite of the problem of free-riding, groups of people do in fact cooperate – people recycle, contribute to public broadcast stations, donate blood, pay taxes and so on. Public-goods dilemmas have been brought into the lab and studied as games, and these experiments have confirmed the common-sense observation that humans do have a propensity to cooperate – but only under certain conditions.

Public-goods games usually take the following form. A group of, say, 20 players are each given 20 monetary units (MU) by a benevolent experimenter. The players are then given the choice to contribute as many or as few MU as they wish to a common pot, while keeping the rest in their private account. The MU in the pot are then counted up, and the experimenter, playing the role of banker, adds a proportion of the total in the common pot (perhaps doubling it). The common pot is then split equally among the players, regardless of how much or little each player put in. So the maximum ‘profit’ is made if everyone puts in all their chips, which in my example is doubled and then split (20 x 20 = 400; 400 x 2 = 800; 800 ÷ 20 = 40 MU per person). Such behaviour is a form of cooperation because it enables the group to achieve the best outcome possible (the highest profit for the group as a whole), and this benefit is shared among everyone. However, for a profit-maximising individual the best outcome would be for everyone else to contribute all of their MU while contributing nothing themselves (19 x 20 = 380; 380 x 2 = 760; 760 ÷ 20 = 38 MU per person, plus the 20 MU the free-rider kept by not contributing, totalling 58 MU).
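To make the arithmetic concrete, here is a minimal sketch of a single round in Python; the function and variable names are mine, and the parameters (20 players, a 20 MU endowment, a doubled pot) simply follow the example above.

```python
def payoffs(contributions, endowment=20, multiplier=2):
    """Each player's payoff: what they kept plus an equal share of the multiplied pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone contributes everything: 400 MU in the pot, doubled to 800, 40 MU each.
print(payoffs([20] * 20)[0])        # 40.0

# One free-rider among 19 full contributors: 380 doubled to 760, 38 MU each
# from the pot; the free-rider also keeps their 20 MU, for 58 MU in total.
mixed = payoffs([0] + [20] * 19)
print(mixed[0], mixed[1])           # 58.0 38.0
```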

This game can be played round after round to see whether cooperation reigns, evaporates or never emerges at all. And it can also be tweaked in interesting ways to reveal wrinkles on the face of cooperation. One crucial feature that can be added is the ability of players to punish free-riders. As in the game above, players get a stash of MU and contribute (or not) to a pot that is multiplied by some fixed percentage and then split evenly among all players. Then the crucial extra step is added. Each player receives information about the other players’ behaviour in the round – what they contributed and what they earned (this can be done anonymously, to explore or eliminate the role of reputation in public-goods games). Players then have the opportunity to punish others by imposing fines on them, but at a cost to themselves. For instance, a player might be able to impose a fine of 3 MU on another player at a cost of 1 MU from his own private account.
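Continuing the sketch above, a punishment stage might look like the following; the 1 MU cost and 3 MU fine are just the illustrative ratio mentioned in the text, and the data structure is an assumption of mine.

```python
def apply_punishment(payoffs, punishments, cost=1, fine=3):
    """punishments maps a punisher's index to a list of target indices."""
    payoffs = list(payoffs)
    for punisher, targets in punishments.items():
        for target in targets:
            payoffs[punisher] -= cost   # punishing is costly to the punisher
            payoffs[target] -= fine     # the target is fined
    return payoffs

# The free-rider from the earlier example (58 MU) is fined by three cooperators:
after = apply_punishment([58] + [38] * 19, {1: [0], 2: [0], 3: [0]})
print(after[0], after[1])   # 49 37: free-riding suddenly looks less clever
```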

It turns out that when people have the opportunity to punish, they grab hold of it with both hands [3]. People don’t punish indiscriminately; they tend to punish free-riders or cheats – and take pleasure from it (this has been assessed both psychologically and neurologically). This has the effect of making it costly to free-ride and more attractive to cooperate, and public-goods games with punishment options can stabilise high levels of cooperation.

The power of punishment
Punishment in public-goods games raises further questions, though. Although it makes sense to cooperate when there are punishers about, why bother to punish free-riders in the first place? Exercising the option to punish does not come for free. The punisher incurs a cost that non-punishing cooperators do not pay, yet those non-punishers nonetheless benefit from the higher levels of cooperation that punishment promotes. This kind of punishment has therefore been called altruistic punishment (altruistic to the group, not to the punished player, obviously!). Altruistic punishment seems to be a feature of human cooperation, but why do people do it?

In recent years, the idea of ‘strong reciprocity’ has gained increasing theoretical and empirical support as an explanation of the human tendency to cooperate with cooperators and to punish cheats. A strong reciprocator is an individual who “responds kindly to actions that are perceived to be kind and hostilely toward actions that are perceived to be hostile” [4]. Modelling studies have shown that under certain conditions strong reciprocity can evolve and do well in competition with other, more self-regarding strategies (that is, those that aim to provide the most individual benefit). Indeed, strong reciprocity is what evolutionary game theorists call an ‘evolutionarily stable strategy’ (essentially, a strategy that can’t be beaten once it is common). But the evolution of strong reciprocity is based on different mechanisms from those underlying kin selection and reciprocal altruism. Whereas kin selection and reciprocal altruism can be explained by natural selection among ‘selfish genes’ that contribute to altruistic behaviour (and which are therefore examples of genetic evolution), the evolution of strong reciprocity is couched in terms of gene–culture co-evolution and cultural group selection (not biological gene selection).

This is where norms come in: what the definition above treats as ‘kind’ or ‘hostile’ is judged against norms – rules of social conduct that can differ from cultural group to cultural group. Different cultural groups can differ in their social norms on a wide range of issues, such as appropriate dress, rules of conduct with peers and acquaintances, and food rituals, as well as notions of fairness, justice, and right and wrong. (In public-goods games, people are punished for violating the fairness norm “contribute to public goods from which you’ll benefit”.) A society’s norms are not only stored in the minds of its people; they are also embodied in the institutions of the society, such as religious systems of belief, educational policies and practices, and government. Institutions, and the norms they sustain, are therefore likely to be an important part of the puzzle of human cooperation.

Institutionalised cooperation
The elegant new study by Özgür Gürerk, Bernd Irlenbusch and Bettina Rockenbach illuminates the effects of different institutions on cooperative behaviour, and more specifically how enabling people to choose the type of institution they are part of aids the evolution of cooperation. Gürerk and colleagues used the tried-and-tested public-goods game, but added a twist. A pool of 84 players was recruited for the study, in which they played 30 ‘rounds’ of the public-goods game, with three stages to each round. The novel aspect of this study came in the first stage, in which players could choose whether to play in a setting in which free-riding (defined in this study as contributing 5 MU or less in a round) went unpunished, or in one in which free-riders could be penalised by fellow players (that is, a condition in which players could exercise altruistic punishment). These different sets of rules can be thought of as basic ‘institutions’ (obviously of a narrow kind); Gürerk and colleagues call the condition in which punishing is possible the sanctioning institution (SI) and the punishment-free condition, quite reasonably, the sanction-free institution (SFI).

After choosing whether to play in SI or SFI, the game went as usual: the players contributed or not, the common pot was multiplied by a fixed percentage, and the MU were divided out among the players (there was one common pot for the SI group and another for the SFI group – the MU were pooled and divided only within groups, not between groups, so one group could do better than the other both collectively and per capita). In SI, but not in SFI, players had the opportunity to punish. At the end of each round – after the MU had been counted up, multiplied and doled out, and players had received anonymous information about the behaviour of the other players – each player could impose a sanction on anyone else in the group. These sanctions could be either positive or negative. A positive sanction cost 1 MU to ‘award’ 1 MU to another player, and a negative sanction cost 1 MU to impose a 3 MU fine on another player. (In SFI, after the money had been divided out, players simply carried on to the next round.)
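Here is a simplified sketch of one round under assumptions of my own (the multiplier and the toy contribution numbers are illustrative, not the study’s actual parameters). The point is that each institution pools and divides its own pot, and only SI runs a sanctioning stage with the 1-for-1 rewards and 1-for-3 fines described above.

```python
def play_round(contributions, endowment=20, multiplier=1.5):
    """Pool, multiply and divide contributions within one institution."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

def sanction(payoffs, sanctions):
    """sanctions: (sanctioner, target, kind) triples, with kind '+' or '-'."""
    payoffs = list(payoffs)
    for who, target, kind in sanctions:
        payoffs[who] -= 1                           # both kinds cost the sanctioner 1 MU
        payoffs[target] += 1 if kind == '+' else -3 # award 1 MU or fine 3 MU
    return payoffs

si_contributions = [15, 20, 18, 2]   # toy SI group with one low contributor
sfi_contributions = [5, 0, 10, 3]    # toy SFI group

si_payoffs = sanction(play_round(si_contributions), [(0, 3, '-'), (1, 3, '-')])
sfi_payoffs = play_round(sfi_contributions)   # no sanctioning stage in SFI
print(si_payoffs, sfi_payoffs)
```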



I’ve summarised some of the key results in the table above (other trends and data are shown in the figures to the left and below). The results at the beginning were pretty straightforward: roughly one-third of players picked SI and the remaining two-thirds picked SFI. This might be taken as an indication that most people have a propensity towards selfishness, and want to at least keep the option of free-riding open. In this study, the choice of institution was also related to how players behaved in the first round (that is, how much they contributed, or whether they free-rode). In SI, the average contribution in the initial round was 12.7 MU, but only 7.3 MU in SFI; and whereas nearly half of the players in SI contributed 15 MU or more (‘high contributors’), just over 10% were so inclined in SFI (see figure to the left). The incidence of free-riding tells the same story: whereas almost half of the players in SFI hitched a free ride (43.4%), less than one-fifth did so in SI (16.1%). So the majority of players initially opted for an institution in which punishment of free-riding was not a possibility, and then contributed little more than half as much as the minority who opted for the punishing institution (who perhaps chose it because they planned to contribute highly and therefore expected to avoid punishment).

Cooperation does not seem to be the order of the day, and it seems unlikely that it would get off the ground given this inauspicious start. What’s worse, selfish free-riders initially do really well in SFI (averaging 49.7 MU in the first round). Perhaps even more depressingly, the higher average contribution of 12.7 MU made by players in SI (compared with 7.3 MU in SFI) does not yield a higher average payoff in the first round (38.1 MU in SI compared with 44.4 MU in SFI; see table). However, free-riding in SI is significantly less attractive than in SFI, because many players in SI impose fines on free-riders. And this has important consequences.

As in previous studies, without the threat of punishment hanging over their heads many people succumb to the temptation to free-ride in SFI. More people free-riding means fewer people contributing, which means there is even less reason to cooperate and contribute because others are not doing likewise – a vicious cycle that leads to the unravelling of cooperation and a plummeting of contributions. By contrast, contributions in SI gradually increase, and free-riding drops (because of the cost of being punished). But remember the twist in this study: at the beginning of each round players choose whether to play in SI or SFI. So what happened after the initial split of one-third of players into SI and the other two-thirds into SFI?

Despite being initially wary of leaving SFI to join SI, by the end of the experiment nearly every player had switched to SI (92.9% by the end of the game) and was cooperating fully. At the same time, contributions in SFI steadily decreased until they hit rock bottom. The average contribution in round 30 of the experiment, the final round, brings home the difference in behaviours cultivated in SI and SFI: 19.4 MU in SI, compared with nothing in SFI.

The different ‘life histories’ of SI and SFI provide some clues about why people migrate from SFI to SI (despite initial aversion). One potential factor is imitation of successful players – those who gain the greatest payoff. Overall, players in SI do best (average over all rounds in SI = 18.3, and 2.9 in SFI), and so the policy of copying the most successful could explain why players eventually migrate from SFI to SI.
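A toy illustration of this imitation mechanism, under assumptions of my own (each lower-earning player copies, with some probability, the institution and contribution level of the round’s top earner); it is only meant to show why payoff-copying pulls players towards SI once SI cooperators out-earn SFI free-riders.

```python
import random

def imitate_most_successful(players, switch_prob=0.5):
    """players: list of dicts with 'institution', 'contribution' and 'payoff'."""
    best = max(players, key=lambda p: p['payoff'])
    for p in players:
        if p['payoff'] < best['payoff'] and random.random() < switch_prob:
            p['institution'] = best['institution']    # migrate to the better pond
            p['contribution'] = best['contribution']  # and roughly copy its norm
    return players

# Once high contributors in SI are the top earners, imitation drains SFI:
players = [
    {'institution': 'SI',  'contribution': 18, 'payoff': 30},
    {'institution': 'SFI', 'contribution': 0,  'payoff': 22},
    {'institution': 'SFI', 'contribution': 5,  'payoff': 18},
]
print(imitate_most_successful(players))
```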

At the beginning of the experiment, however, free-riders in SFI are the most successful players (they reap the greatest rewards), and so imitation should lead to an increase in free-riders in the next round. In fact, this is just what was seen in round 2. But as time passes and SFI sees a decline in cooperative behaviour (because of the prevalence of free-riders), things change and selfishness starts to become self-defeating. From round 5 onwards, high contributors in SI earned more than free-riders in SFI. So imitation of successful players would then promote greater migration from SFI to SI – and again, this is what was seen. What’s more, players moving from SFI to SI tended to switch from free-riding to cooperation (as if they were maximising their payoff). Institutions, in other words, affect behaviour.

This is seen even more clearly when players’ behaviour on moving between institutions is examined in more detail. On migrating from SFI to SI, 80.3% of players increased their contribution in two consecutive rounds, and 27.1% had something of a ‘St Paul moment’ on the Damascene road to SI, switching from free-riding to full cooperation! Conversely, 70% of players reduced their contribution when leaving SI for SFI, and 20% switched from full cooperation to free-riding. As they say, when in Rome…

The wisdom of crowds
Imitation can explain some of the migratory behaviour of players from SFI to SI. Indeed, so too might rational-choice approaches – players might be working out which is the best strategy, and then following it. These explanations face a problem, however: they don’t account for why players who switch to SI adopt the strategy of strong reciprocators and punish free-riders and low contributors. The most successful strategy, from a selfish, self-regarding perspective, would be to contribute at a high level (and therefore avoid damaging punishment) but to avoid incurring the costs of punishing others. What actually happened in the experiment is that 62.9% of players adopted the punishment norm immediately after switching from SFI to SI. If contributing in the first place is a public good (because everyone benefits from it), then carrying the cost of punishing free-riders is a ‘second-order’ public good: everyone else benefits from the higher level of contributions that the punisher induces, while the punisher shoulders the cost of punishing. That is why it is called altruistic punishment.

There is yet another potential mechanism that could explain these results, one that features prominently in theories of cultural evolution, and gene-culture co-evolution: conformist transmission. Cultural information can be passed on in a number of ways – people can imitate those of high prestige or status, in the hope of picking up the skills, behaviour or knowledge that led to their elevated position. Alternatively, individuals can simply adopt or copy the most prevalent forms of behaviour or knowledge – conform to the norms of society, in other words. And humans certainly do conform. In a famous experiment published in 1951, Solomon Asch showed how people will often over-ride their own opinions and express a belief more in tune with a group consensus. Theoretical studies have since shown how conformist transmission of cultural norms can be a powerful force in cultural evolution.

In this study, as players switched from SFI to SI, so too did their behaviour. This isn’t explicable through simple imitation or payoff maximisation, but a propensity to conform to the prevailing norms of whatever institution you find yourself in can explain it. In a head-to-head competition between an institution that maintains norms of punishing free-riders and one that doesn’t (which this study created), not only do individual players in SI end up doing better, but the SI group as a whole outperforms SFI. In any case, the cost of following the punishment norm steadily decreases, because the threat of punishment means there is not much free-riding, and therefore not much need (or cost) to punish. So following such prosocial norms as punishing cheats carries only a marginal cost compared with self-centred norms.

The demonstration that institutions shape how cooperation and punishment are regulated, and that the freedom to choose between institutions favours those regimes more conducive to cooperation, sets the stage for a number of further questions. Joe Henrich mentions two in his commentary on this research: “What happens if switching institutions is costly, or if information about the payoffs in the other institution is poor? Or, what happens if individuals cannot migrate between institutions, but instead can vote on adopting alternative institutional modifications?”. Answering such questions might help in the design of institutions that foster cooperation on scales from the local to the global, and provide clues about what determines whether certain norms and institutions spread.

Notes
1. Hammerstein, P. Why is reciprocity so rare in social animals? A Protestant appeal. In Genetic and Cultural Evolution of Cooperation (ed. Hammerstein, P.) 83-93 (MIT Press, 2003).

2. See, for example, Genetic and Cultural Evolution of Cooperation (MIT Press, 2003) and Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (MIT Press, 2005).

3. Fehr, E. & Gächter, S. Altruistic punishment in humans. Nature 415, 137-140 (2002).

4. Fehr, E. & Fischbacher, U. The economics of strong reciprocity. In Moral Sentiments and Material Interests: The Foundations of Cooperation in Economic Life (eds Gintis, H., Bowles, S., Boyd, R. & Fehr, E.) 151-191 (MIT Press, 2005).

Monday, April 03, 2006

The Path To Intelligence?

New research suggests that it is not brain size that determines ‘braininess’, but the way that the brain develops.

According to the adage, sometimes the journey taken is more important than the destination reached. And now this seems to be true of brain development and intelligence. A paper in the current issue of Nature suggests that differences in intelligence, as measured by IQ tests, are related to how the brain grows to its final state (which, in this case, probably affects the destination too). The results show that the growth trajectory of a brain region called the cortex – in particular, the rate at which its thickness changes at different points in development – varies with IQ. It is not the overall thickness of the cortex that relates to intelligence, as might have been supposed, but how the cortex develops.

Research on intelligence, and the comparison of IQ across individuals and groups, is a controversial field. Concerns have been raised about the validity of IQ tests as measures of intelligence, and critics have also questioned whether there is a unitary cognitive faculty that can be identified as ‘intelligence’. What’s more, theories of intelligence and IQ testing have historically been misused by a variety of eugenicists and racists pushing particular ideological agendas. Studies on the genetic basis of intelligence are also seen by some as dangerous because they could, if misused (so the argument goes), undermine our general notions of equality and the drive to provide equal educational access regardless of supposed talent (the fear being that some might say “If some people are born stupid, why waste educational resources on them?” – there are good reasons why such a conclusion does not follow from research on intelligence). The fear remains that modern researchers, if not explicitly abusing the concept of intelligence in this way, at least provide a basis for a supposedly scientific approach to discrimination and prejudice.

Although we should certainly take heed of historical precedents in intelligence research, some of the modern fears about such studies are surely misplaced (see Steven Pinker’s The Blank Slate for a robust explanation of why these fears are largely unfounded). IQ tests that assess the three ‘Rs’ – reading, writing and arithmetic – do, however, seem to measure some aspect of cognition that is predictive of later skills and achievements widely considered, at least in Western societies, to bear the hallmark of ‘braininess’, or high intelligence. And so research has continued on this tricky aspect of the mind from a variety of different directions.

Behavioural geneticists have used twin studies to work out the degree of heritability of intelligence (IQ scores), which usually comes out at between 50% and 70%. This figure is often misunderstood, so a quick clarification is in order. Heritability measures the amount of variation in a trait that can be explained by genetic variation. So between 50% and 70% of the variation seen in IQ scores is attributable to genetic variation. It does not mean that for any given individual 50–70% of their IQ came from their genes, with the remainder coming from their environment; that doesn’t make any sense. Heritability doesn’t even measure the ‘strength’ of the genetic contribution (whatever that might mean) – a trait can have low heritability and yet still have an important genetic underpinning. Take the trait ‘leg number’ in humans. Nearly everyone has identical genes for building legs, and so there is no genetic variation to explain any differences in leg number that we observe – such differences will be attributable to environmental differences (accidents, disease or the presence of teratogenic drugs during development). But that doesn’t mean genetic effects are unimportant in leg development.
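For the record, the standard quantitative-genetics formulation (textbook material, not something spelled out in the post itself) expresses heritability as a variance ratio, and twin studies estimate it roughly as follows:

```latex
% Broad-sense heritability: the fraction of phenotypic variance (V_P)
% attributable to genetic variance (V_G); V_E is environmental variance.
H^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E}

% Falconer's classic twin-study estimate of (narrow-sense) heritability:
% twice the difference between the trait correlations of identical (MZ)
% and fraternal (DZ) twin pairs.
h^2 \approx 2\,(r_{MZ} - r_{DZ})
```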

Assuming that IQ is related in some way to brain functioning, it is likely to have a genetic component: building a functional brain requires the orchestrated actions of thousands of genes and appropriate environmental inputs, and so building a brain of whatever intelligence will require genetic inputs. Heritability underscores this point by showing that some variation in IQ is attributable to genetic differences, which wouldn’t be the case if genes had no relevance to IQ (though, as noted, a low heritability wouldn’t rule out a role for genes).

Given the underpinning assumption of much cognitive neuroscience, that the mind is what the brain does, we should expect to find brain correlates of IQ. Total brain volume is one potential candidate, but it correlates only modestly with IQ, at about 0.3. But the brain is anatomically, and, to a degree, functionally, specialized, so perhaps it is more promising to look at specific areas of the brain suspected of being linked to IQ.

So what areas of the brain might be the best bet to look at? The cortex is promising, as it is associated with higher cognitive functions – the sorts we usually identify as components of intelligence – and has also expanded disproportionately in human evolution (perhaps giving us the edge in cognitive prowess over our primate cousins). The cortex is a sheet of tissue that forms the outer layer of the brain, and its thickness can be assessed by brain-imaging techniques.

In the new study, Shaw and colleagues used brain scans to look at cortical development, and how cortical thickness changed, in 307 people studied from childhood to adulthood, and who were also tested for IQ (which is relatively stable during development). Overall, cortical thickness showed a weak correlation with IQ (0-0.1), when all ages were considered together. However, when the brain-imaging data and IQ were studied according to age group, a number of age-related correlations emerged. Perhaps surprisingly, cortical thickness was negatively correlated with IQ in early childhood – among young children, a thicker cortex correlates with a lower IQ. By contrast, in late childhood cortical thickness was positively correlated with IQ – thicker cortex correlating with higher IQ.

This is interesting enough, but an even more intriguing pattern was found. The study subjects were divided, on the basis of IQ, into ‘average-’, ‘high-’ and ‘superior-intelligence’ groups. The typical pattern of cortical growth for each group was determined, and these patterns were then compared. The results are shown in the figure to the left. What they show is that, perhaps paradoxically, children with ‘superior’ IQs typically have a thinner cortex at age 7 than children of average and high intelligence. But this relative lack of cortex is then cancelled out among ‘superiors’ (with no moral connotation!) as they undergo a period of cortical growth that is much more rapid than that typically seen in people with average and high IQ. Having peaked at around 12 years, the cortex then begins to thin out more rapidly in ‘superiors’, and eventually converges in thickness with that seen in ‘averages’ and ‘highs’. (Those of high intelligence show a trajectory intermediate between that of the superiors and the averages.)

Another way of looking at the relationship between cortical thickness and IQ is to plot the rate of change of cortical thickness against age, again stratified by IQ into average-, high- and superior-intelligence groups. These results are shown to the left. In the superior-intelligence group, the rate of change in the thickness of the cortex is much more rapid than in the other two groups; this corresponds to the steep part of the blue curve in the first figure. The rate of change then levels out to that seen in averages and highs. Again, the averages show the smallest variation in the rate of change of cortical thickness, with highs intermediate between averages and superiors. What both depictions show is that it is not how thick the cortex is at any given point that correlates with, and probably underpins, performance on IQ tests; rather, it is the developmental trajectory the cortex is on that relates to IQ.

It is tricky to explain these patterns. It seems counter-intuitive that people with the highest IQs should start with the least amount of cortex. However, this study does not show what brain development has gone before. Both growth and thinning of brain areas are important for brain development, and ‘pruning’ of excess neurons underlies the emergence of many functional brain areas – like a sculptor chipping away from a block of marble the excess material that deviates from the desired design (and a smaller sculpture is not necessarily worse, or less complex and intricate, because of its size). So perhaps previous bouts of growth and pruning create a cortex that, around age 7, is better ‘optimised’ in superiors for the tasks measured by IQ tests than in others. In superiors, the cortex then undergoes very rapid growth, followed by the fastest rate of thinning, leaving a cortex roughly the same thickness as in other people, but better at certain cognitive tasks.

But does this study say anything about the genetic determination of IQ? I think we should be cautious in drawing conclusions about this from these results. There is no reason to automatically invoke genetic factors over environmental factors in explaining the different cortical trajectories. It is likely, if not certain, that there are genes crucially involved in the development of the cortex; it is hard to see how it could be otherwise. And there may well be genetic variation that explains this variation in cortical development. But there could just as plausibly be environmental differences that explain the variation.

Let’s say IQ is determined by the cortical growth trajectory; then what determines the trajectory? It could be that people from a given environmental background – whether that be classified according to diet, educational stimulation, amount of play in childhood or whatever – tend to respond by moving towards one or another developmental pathway. Indeed, the authors of this study did find a correlation of 0.35 between IQ and socio-economic status.

And of course, there could, and are likely to be, interactions between genes and environment (and not in a simple linear, additive way). For instance, genetic differences might cause the initial set-up and growth of the cortex (or other brain areas) to differ in individuals. These differences could in turn make scholastic achievement more enjoyable (perhaps some people find it easier to remember things or can assimilate information more quickly), and so tend to lead towards the pursuit of those skills measured by IQ tests.

One other possible moral that could be drawn from this research is that it illustrates the importance of plasticity in development, rather than the execution of a genetic program for development. This is reasonable, and perhaps is an example of how the brain can be prepared to respond in different ways in different environments (if environmental differences switch development onto different trajectories). Of course, genetic differences could be found to be the major factor in explaining cortical-growth differences. If so, plasticity shouldn’t really be contrasted with a genetic program for development – one set of genes might be correlated with a lower level of plasticity (the averages), and another with higher plasticity (the superiors). But in both cases genes can underlie the very possibility of plasticity, with one set affording more than the other. If we take genes and plasticity to be placeholders for nature and nurture, we can see why the nature/nurture dichotomy is more imagined than real – each depends on and utilises the other. As Matt Ridley says, nature acts via nurture. Genes can be selected for their capacity to contribute to the development of plastic developmental systems.

Future research might address these issues in more detail, but for the moment it seems safe to say that the metaphor of the ‘big’ brain for cleverness is misguided – the secret of intelligence probably lies in the dynamics of cortical development. Like life, the journey is at least as important as the destination.

Sunday, April 02, 2006

Zombies Revisited: Correction and Clarification

A while back I wrote about philosophers’ zombies, and I’ve had a bit of feedback to the effect that I misunderstood some of the positions I outlined, and drew the wrong conclusions from what was said. I realise now the mistakes I made, and want to briefly clear them up.

The most significant errors are in my discussion of David Chalmers, and his views on ‘functional’ and ‘biological’ zombies (see my earlier post for what I meant by these). Richard Chappell, in a comment on the previous post, says “You've got Chalmers completely wrong”. I think this is a little strong, but I certainly was wrong on some points. I claimed that Chalmers believes in the logical possibility of biological zombies and their nomological impossibility, which is correct (David Chalmers, personal communication), but I mistakenly suggested that he takes functional zombies to be nomologically possible, when in fact he doesn’t (although Chalmers argues that functional zombies, like biological zombies, are logically possible). I don’t think my discussion of Chalmers was very clear, and I misinterpreted what he was getting at in some of his responses to Sue Blackmore’s question in Conversations On Consciousness.

This was partly a failure to identify the proper focus of the discussion (which, to be fair, is not made explicit in the interviews with Blackmore – the distinction between logical and nomological possibility is not made, for instance). I was suspicious of the power of mere logical possibility to tell us anything about the actual world we live in, so I was focusing on what was nomologically possible. But, according to Chalmers, the most interesting question for philosophers with regard to zombies is what is logically possible. I’m still unclear on why logical possibility is so interesting. It seems that all sorts of possibilities are logically coherent, but their mere conceivability doesn’t seem to explain their presence or absence in our world, which is the one we’re interested in explaining. But I’m open to being corrected on this.

Philosopher Gualtiero Piccinini has also suggested to me that Block’s reading of Chalmers is correct, and that I was therefore in error to suggest that Block has misread Chalmers, when I said “[Chalmers] does not seem to believe in the nomological possibility of what we’ve called a biological zombie, and so Block is wrong to say that this sort of zombie is what Chalmers does in fact believe in” (although I was correct that Block accepts the nomological possibility of functional zombies). Here’s what Block said:
“The second sort of zombie is a creature that’s physically exactly like us. This is [David] Chalmers’s zombie, so when Chalmers says he believes in the conceivability and therefore the possibility of zombies, he’s talking about that kind of a zombie. My view is that no one who takes the biological basis of consciousness seriously should really believe in that kind of a zombie. I don’t believe in the possibility of that zombie; I believe that the physiology of the human brain determines our phenomenology and so there couldn’t be a creature like that, physically exactly like us, down to every molecule of the brain, just the same but nobody home, no phenomenology. That zombie I don’t believe in, but the functional zombie I do believe in.”
I took Block to be saying that he rejects the nomological (and logical?) possibility of biological zombies, and further that Block thinks that Chalmers accepts the nomological possibility of biological zombies (which he doesn’t). Block is definitely saying that Chalmers accepts a conception of zombies that Block thinks should properly be rejected, and at the very least that must mean the nomological possibility of a biological zombie, and it seems to be this he has in mind (“the physiology of the human brain determines our phenomenology and so there couldn’t be a creature like that, physically exactly like us, down to every molecule of the brain, just the same but nobody home, no phenomenology”). But because Chalmers rejects the nomological possibility of a biological zombie, I suggested that Block was wrong to say that this is the sort of zombie Chalmers in fact believes in. So I’m still a little bit lost by this response (though perhaps this isn’t the best characterisation of his position – Block can also be read here as arguing against the logical, not just nomological, possibility of a biological zombie, in which case the mistake he thinks Chalmers makes is to suppose that this is conceivable – which Chalmers does claim).

From the feedback I’ve had, and re-looking at what the philosophers said, my tally up should have been the following. Chalmers believes in the logical possibility of both functional and biological zombies, but rejects the nomological possibility of both (Chalmers says “I think that even a computer [in this world] which has really complex intelligent behaviour and functioning would probably be conscious” – in other words, a functional zombie is not nomologically possible). Block, contra Chalmers, accepts the nomological (and also logical) possibility of functional zombies, but not biological zombies (not quite sure what he thinks about the logical possibility of biological zombies). Searle accepts the nomological possibility of functional zombies, and the logical possibility of biological zombies, but rejects the nomological possibility of biological zombies.

So that’s what I should’ve said.