Altruism is one of the last (and most deeply embedded) thorns in the side of evolutionary theory, but these recent developments in game theory have finally given us something to grab onto. And we need to do that, because in principle things are very simple: if we humans evolved, then so did our minds, and if our minds evolved, then so did their behaviours, including our altruistic tendencies.

There are problems, though. Altruism isn’t just a case of ‘tit for tat’. We aren’t nice just to family members, previous co-operators or possible future allies. We’re also nice to people we don’t know and people we’ll never meet. We donate money to international charities, we volunteer our time to help society’s less fortunate, and we help old ladies who drop their shopping in the street. It strains plausibility to say that we always do these things in the hope of a return favour, a kind of ‘just in case’ strategy whose principle would be ‘always help everyone in case you need to pull in a favour in return’. That’s a decidedly non-optimal strategy: the net expenditure of effort (tit) far exceeds the net profit on the rare occasions it pays off (tat). So that kind of behaviour can’t be explained away as indirect selfish rationality, conscious or subconscious. The Prisoner’s Dilemma findings are helpful as far as they go, but what a game-theoretic explanation glosses over is that much of our altruistic behaviour flows from that apparently mysterious phenomenon, the conscience.
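To put rough numbers on that claim, here is a minimal back-of-the-envelope sketch in Python; the cost, benefit and repayment-probability figures are purely illustrative assumptions, chosen only to show the shape of the calculation.

```python
# A back-of-the-envelope sketch of the 'just in case' strategy. All the
# numbers here are illustrative assumptions, not measurements.

cost_of_helping = 1.0     # effort spent on each act of help (the 'tit')
benefit_of_favour = 3.0   # value of a favour returned to you (the 'tat')
p_repay = 0.05            # chance a random stranger ever repays you

# Expected payoff of one indiscriminate act of help:
expected = -cost_of_helping + p_repay * benefit_of_favour
print(f"expected payoff per act: {expected:+.2f}")  # -0.85
```

Unless strangers repay favours remarkably often, indiscriminate helping runs at a steady loss, which is the sense in which the strategy is non-optimal.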

Still, mysterious as it may be, our consciences are just another aspect of our mental behaviour, so there must be some evolutionary explanation for their existence. One recent suggestion, proposed most eloquently by Daniel Dennett, grew out of the problem of so-called ‘free riders’ in the tragedy of the commons, a larger-scale version of the Prisoner’s Dilemma. In game theory terms, a free rider is an agent who draws benefits from a co-operative society without contributing. In a one-to-one situation, free riding can easily be discouraged by a tit-for-tat strategy, as we saw earlier. But in a larger-scale society, where contributions and benefits are pooled, free riders can be incredibly difficult to shake off.
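To see why pooling changes things, consider a toy public-goods game, a standard textbook rendering of the commons; the group size, endowment and multiplier below are illustrative assumptions, not parameters from any real society.

```python
# A minimal public-goods game: contributions are pooled, grown, and
# shared equally, so benefits cannot be withheld from non-contributors.

n_agents = 10        # size of the society
endowment = 10.0     # resources each agent starts with
multiplier = 3.0     # pooled contributions grow by this factor (< n_agents)

def payoffs(contributions):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)      # benefits are shared equally
    return [endowment - c + share for c in contributions]

# Nine co-operators contribute everything; one free rider contributes nothing.
contributions = [endowment] * 9 + [0.0]
result = payoffs(contributions)

print(f"co-operator payoff: {result[0]:.1f}")   # 27.0
print(f"free rider payoff:  {result[-1]:.1f}")  # 37.0
```

The free rider pockets the same share of the pot without paying in, and because the benefits are pooled, no individual partner is in a position to withhold them from him.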

Imagine a situation where a society evolves as Robert Axelrod describes. Co-operative agents interact with each other, all contributing resources and drawing on the common good. Now imagine a rogue free rider, an agent who draws a favour (you scratch my back) and later refuses to return it. The problem is that free riding is always going to benefit the individual at a cost to society. How can well-behaved co-operative agents avoid being shafted?
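A quick round-robin sketch, using the standard Prisoner’s Dilemma payoffs (temptation 5, reward 3, punishment 1, sucker’s payoff 0), shows just how badly trusting co-operators fare against one such rogue; the population size and round count are arbitrary choices.

```python
# Axelrod-style pairwise play, with one unconditional defector loose
# among trusting co-operators.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def round_robin(strategies, rounds=10):
    scores = [0] * len(strategies)
    for i in range(len(strategies)):
        for j in range(i + 1, len(strategies)):
            for _ in range(rounds):
                a, b = strategies[i](), strategies[j]()
                pa, pb = PAYOFF[(a, b)]
                scores[i] += pa
                scores[j] += pb
    return scores

always_cooperate = lambda: 'C'   # the trusting majority
always_defect = lambda: 'D'      # the rogue free rider

population = [always_cooperate] * 9 + [always_defect]
scores = round_robin(population)
print(f"typical co-operator: {scores[0]}")   # 240
print(f"free rider:          {scores[-1]}")  # 450
```

The free rider nearly doubles the score of any honest agent, which is exactly the pressure that makes the next move in the story necessary.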

Over many generations, the obvious solution is for co-operators to evolve the ability to spot potential free riders in advance and refuse to enter into reciprocal arrangements with them. The canonical free-rider response, of course, is to evolve a more convincing disguise, fooling co-operators into co-operating after all. Before you know it, you have one of those all-too-common evolutionary arms races, with ever-more-sophisticated disguises chasing ever-more-sophisticated detectors. This may be how some societies have evolved, but it seems a far cry from the genuinely altruistic conscience we feel we have.
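If it helps to see the ratchet, here is a deliberately crude caricature of that escalation; the ‘quality’ scores and step size are stand-ins with no empirical meaning.

```python
# A toy model of the detector/disguise arms race: each generation, each
# side improves just enough to beat the other's current best.

disguise_quality = 0.0
detector_acuity = 0.0
step = 1.0  # improvement each side manages per generation

for generation in range(1, 6):
    disguise_quality = detector_acuity + step  # just good enough to fool current detectors
    detector_acuity = disguise_quality + step  # just sharp enough to see through it again
    print(f"gen {generation}: disguise={disguise_quality:.0f}, "
          f"detection={detector_acuity:.0f}")

# Neither side ever holds a lasting edge; sophistication simply ratchets up.
```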

Now here’s the clever part. In this evolutionary arms race, how best might an agent convince his comrades that he really is a genuine co-operator, not a free rider in disguise? Answer: by actually making himself a real-life, genuine co-operator, by erecting psychological barriers to breaking his own promises, and by advertising this fact to everyone else. In other words, a good solution is for organisms to evolve mechanisms that everyone knows will force them to be co-operators, and to make it obvious that they’ve evolved them. And we ought to expect evolution to find good solutions. So evolution will produce organisms who are sincerely moral and who wear their hearts on their sleeves; in short, evolution will give rise to the phenomenon of conscience.
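Here is one last sketch of why the commitment pays. The crucial assumption, taken straight from the argument above, is that partners will only deal with agents whose commitment is visible; the payoff numbers themselves are illustrative.

```python
# Why a visible commitment can out-earn sly flexibility.

BENEFIT = 3.0                 # value of a completed exchange
COST = 1.0                    # cost of honouring your side of it
TEMPTATION = BENEFIT + COST   # what a cheat would pocket per deal, if trusted

def lifetime_payoff(committed, n_offers=100):
    total = 0.0
    for _ in range(n_offers):
        if committed:
            # Commitment is visible, so the partner agrees to deal, and
            # the psychological barrier forces the favour to be returned.
            total += BENEFIT - COST
        # If not committed: the cheat would love to pocket TEMPTATION,
        # but partners can see his flexibility and refuse to deal at all.
    return total

print(f"flexible cheat:  {lifetime_payoff(False):.0f}")  # 0
print(f"committed agent: {lifetime_payoff(True):.0f}")   # 200
```

The cheat’s flexibility costs him every deal he might have made; the agent who visibly cannot cheat gets all of them. That is the sense in which an honest conscience is the winning move.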

This theory, combined with Axelrod’s, seems to cover all the angles. It explains how a blind and fundamentally selfish process can come up with the genuinely non-cynical form of altruism that we observe in our consciences.

And here’s something to think on. If all this is true, and altruism (read: morality) has simply evolved as an optimal solution to a game-theoretic problem, what then for ethics? Are right and wrong just illusions fobbed off on us by our genes so that they can survive and reproduce in a society of self-interested agents? This is a meta-ethical question that straddles the boundary between biology and philosophy.