Yesterday’s NYT had a very interesting story about biologists discovering evidence that humans are naturally helpful to each other — very young humans, before our parents presumably whap it into us. Of course if we didn’t cooperate to some degree we’d have croaked long ago… and if we weren’t kind of warlike, we wouldn’t have survived either.
It all puts me in mind of a “game theory” strategy that I once read was the most useful: Begin by cooperating, but as soon as your opponent does not cooperate, retaliate. This applies to a lot of areas, obviously, but it grew out of one very specific problem, long used in philosophy and now played out by computers in programming tournaments: the prisoner’s dilemma.
Basically, in its various versions, the dilemma is this: You and another person are in a situation where you each must choose to cooperate or to betray the other, without knowing what the other will do. If you both cooperate, you each get a good reward; if you both betray, you each get a meager one. But if one cooperates and one betrays, the betrayer walks away with the biggest payoff of all – bigger than the reward for mutual cooperation – while the cooperator gets nothing. That lopsided temptation is what makes it a dilemma.
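For concreteness, here is that payoff structure as a small Python sketch. The point values – 3 each for mutual cooperation, 1 each for mutual betrayal, 5 for a lone betrayer, 0 for the betrayed – are the conventional ones from Axelrod’s tournaments; the exact numbers matter less than their ordering.

```python
# Prisoner's dilemma payoffs: (my_points, their_points), keyed by
# (my_move, their_move). 'C' = cooperate, 'D' = betray/defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),   # both cooperate: a good reward for each
    ('D', 'D'): (1, 1),   # both betray: a meager reward for each
    ('D', 'C'): (5, 0),   # lone betrayer collects the biggest payoff
    ('C', 'D'): (0, 5),   # the betrayed cooperator gets nothing
}

# The trap: whatever the other player does, betraying pays more.
for their_move in ('C', 'D'):
    cooperate = PAYOFFS[('C', their_move)][0]
    betray = PAYOFFS[('D', their_move)][0]
    print(f"They play {their_move}: I get {cooperate} cooperating, {betray} betraying")
```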
Any pair of siblings, roommates or lovers will recognize the scenario. When one does all the taking and one does all the giving, misery ensues – though only for the giver. The taker does just fine. Mutually assured destruction comes to mind also: We hoped the Russians loved their children too.
Douglas Hofstadter, in whose “Metamagical Themas” I first read about the dilemma, introduced me to the idea that the game is different if you know you’re only going to play once than if you expect to keep dealing with the same opponent in the future. If you only play once, you are better off betraying. But if you both expect to deal with each other again, the game changes.
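A little arithmetic with the same conventional numbers shows how much repetition changes things. Assume a hypothetical opponent who starts out cooperating and thereafter answers whatever you did last round in kind. Over ten rounds, constant betrayal collects one windfall and then grinds along at the bottom:

```python
ROUNDS = 10
# Same conventional numbers: mutual cooperation pays 3 each,
# mutual betrayal 1 each, a lone betrayer 5, the betrayed 0.
always_betray = 5 + (ROUNDS - 1) * 1    # one windfall, then mutual betrayal
always_cooperate = ROUNDS * 3           # steady mutual cooperation
print(always_betray, always_cooperate)  # 14 vs. 30 over ten rounds
```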
Hofstadter describes the first programming tournament on these lines, and says it was won by an incredibly simple program called Tit For Tat, which began by cooperating and from then on simply did whatever its opponent had done on the previous move. If its opponent always cooperated, then so did Tit For Tat. If its opponent defected (betrayed), Tit For Tat retaliated – once (whereas some programs, once “trust” was broken, would keep defecting forever). But if Tit For Tat’s opponent went back to cooperating, it would cooperate again.
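The whole strategy fits in a few lines. Here is a minimal sketch in Python (the function name and the ‘C’/‘D’ move encoding are mine, not anything from the actual tournament code):

```python
def tit_for_tat(their_history):
    """Cooperate on the first move; after that, mirror whatever
    the opponent did last. 'C' = cooperate, 'D' = defect."""
    return their_history[-1] if their_history else 'C'

# Against an opponent who betrays once and then goes back to
# cooperating, Tit For Tat retaliates exactly once, then forgives.
opponent_moves = ['C', 'D', 'C', 'C']
seen = []
for move in opponent_moves:
    print(tit_for_tat(seen), end=' ')   # prints: C C D C
    seen.append(move)
```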
Hofstadter writes, “(Tournament architect Robert) Axelrod’s technical term for a strategy that never defects before its opponent does is nice. … Note that ‘nice’ does not mean that a strategy never defects! Tit For Tat defects when provoked, but that is still considered being ‘nice.’ ”
The chapter goes on to explain how Axelrod identified several characteristics behind Tit For Tat’s success. TFT was “nice,” but also “provocable” – it would retaliate if provoked, so being nice didn’t mean being a pushover – and “forgiving.” Axelrod also prized a fourth quality, “clarity”: Tit For Tat was so simple that opponents could quickly figure out what it was doing and learn that cooperation paid. And some strategies did even better by adding one more trick: recognizing when an opponent’s behavior wasn’t making any sense – when it wasn’t responding to anything – and switching to constant defection against it.
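To make the tournament idea concrete, here is a toy round-robin in the same spirit. The field of strategies, the round count, and the payoffs are illustrative – Axelrod’s real entrants were far more numerous and varied – but the mechanics are the same: every strategy plays every other, and total points decide the winner.

```python
import random

random.seed(0)  # so the toy results are repeatable

# Conventional payoffs: (my_points, their_points) for (my_move, their_move)
PAYOFFS = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
           ('D', 'C'): (5, 0), ('C', 'D'): (0, 5)}

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else 'C'    # cooperate first, then mirror

def always_defect(mine, theirs):
    return 'D'                              # never cooperates at all

def grudger(mine, theirs):
    return 'D' if 'D' in theirs else 'C'    # defects forever once crossed

def random_player(mine, theirs):
    return random.choice('CD')              # the "senseless" opponent

def play(strat_a, strat_b, rounds=200):
    a_hist, b_hist = [], []
    a_score = b_score = 0
    for _ in range(rounds):
        a = strat_a(a_hist, b_hist)
        b = strat_b(b_hist, a_hist)
        pa, pb = PAYOFFS[(a, b)]
        a_score, b_score = a_score + pa, b_score + pb
        a_hist.append(a)
        b_hist.append(b)
    return a_score, b_score

strategies = [tit_for_tat, always_defect, grudger, random_player]
totals = {s.__name__: 0 for s in strategies}
for s1 in strategies:            # round robin: everyone meets everyone,
    for s2 in strategies:        # including a copy of themselves
        totals[s1.__name__] += play(s1, s2)[0]

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:>13}: {total}")
```

Tellingly, in a field this tiny, the unforgiving Grudger can actually edge out Tit For Tat, precisely because it settles into constant defection against the random player while Tit For Tat keeps chasing its noise – the “senseless opponent” point above in miniature.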
Axelrod, and after him Hofstadter, are careful to warn against drawing overly broad conclusions. I note that the New York Times article says the researcher claims “inductive parenting” – explaining to kids why they logically stand to gain from cooperating – is best. (Good luck with that.) He doesn’t mention that a well-timed retaliation, followed by forgiveness – and backed up by the wisdom to realize when you’re mired in a losing game – can be really useful too.