In economics and game theory, a participant is considered to have superrationality (or renormalized rationality) if they have perfect rationality (and thus maximize their utility) but assume that all other players are superrational too and that a superrational individual will always come up with the same strategy as any other superrational thinker when facing the same problem. Applying this definition, a superrational player playing against a superrational opponent in a prisoner's dilemma will cooperate while a rationally self-interested player would defect.
This decision rule is not a mainstream model within game theory; it was suggested by Douglas Hofstadter in his article series and book Metamagical Themas[1] as an alternative type of rational decision making, distinct from the widely accepted game-theoretic one. Hofstadter provided this definition: "Superrational thinkers, by recursive definition, include in their calculations the fact that they are in a group of superrational thinkers."[1] This is equivalent to reasoning as if everyone in the group obeys Kant's categorical imperative: "one should take those actions and only those actions that one would advocate all others take as well."[2]
Unlike the supposed "reciprocating human", the superrational thinker will not always play the equilibrium that maximizes the total social utility and is thus not a philanthropist.
Prisoner's dilemma
The idea of superrationality is that two logical thinkers analyzing the same problem will think of the same correct answer. For example, if two people are both good at math and both have been given the same complicated problem to do, both will get the same right answer. In math, knowing that the two answers are going to be the same doesn't change the value of the problem, but in game theory, knowing that the answer will be the same might change the answer itself.
The prisoner's dilemma is usually framed in terms of jail sentences for criminals, but it can be stated equally well with cash prizes instead. Two players are each given the choice to cooperate (C) or to defect (D). The players choose without knowing what the other is going to do. If both cooperate, each will get $100. If they both defect, they each get $1. If one cooperates and the other defects, then the defecting player gets $200, while the cooperating player gets nothing.
The four outcomes and the payoff to each player are listed below.
| | Player B cooperates | Player B defects |
|---|---|---|
| **Player A cooperates** | Both get $100 | Player A: $0; Player B: $200 |
| **Player A defects** | Player A: $200; Player B: $0 | Both get $1 |
One valid way for the players to reason is as follows:
- Assuming the other player defects, if I cooperate I get nothing and if I defect I get a dollar.
- Assuming the other player cooperates, I get $100 if I cooperate and $200 if I defect.
- So whatever the other player does, my payoff is increased by defecting, if only by one dollar.
The conclusion is that the rational thing to do is to defect. This type of reasoning defines game-theoretic rationality, and two game-theoretically rational players playing this game will both defect and receive a dollar each.
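The dominance argument above can be checked mechanically. The following is a minimal sketch (the dictionary encoding and names are illustrative, not from any standard library):

```python
# Payoff to the row player in the cash-prize dilemma above:
# payoff[(my_move, their_move)] -> my payoff, where "C" = cooperate, "D" = defect.
payoff = {
    ("C", "C"): 100, ("C", "D"): 0,
    ("D", "C"): 200, ("D", "D"): 1,
}

# Defection dominates: against every opponent move, "D" pays strictly
# more than "C" (by $100 against a cooperator, by $1 against a defector).
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]

print("defect dominates cooperate against both C and D")
```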
Superrationality is an alternative method of reasoning. First, it is assumed that the answer to a symmetric problem will be the same for all the superrational players. Thus the sameness is taken into account before knowing what the strategy will be. The strategy is found by maximizing the payoff to each player, assuming that they all use the same strategy. Since the superrational player knows that the other superrational player will do the same thing, whatever that might be, there are only two choices for two superrational players. Both will cooperate or both will defect depending on the value of the superrational answer. Thus the two superrational players will both cooperate since this answer maximizes their payoff. Two superrational players playing this game will each walk away with $100.
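The superrational calculation restricts attention to symmetric outcomes before maximizing. A short sketch of that step, using the same illustrative encoding of the payoffs:

```python
# payoff[(my_move, their_move)] -> my payoff, where "C" = cooperate, "D" = defect.
payoff = {("C", "C"): 100, ("C", "D"): 0, ("D", "C"): 200, ("D", "D"): 1}

# A superrational player assumes both players make the SAME choice, so only
# the symmetric outcomes (C, C) and (D, D) are candidates; maximize over those.
superrational_choice = max(("C", "D"), key=lambda m: payoff[(m, m)])
print(superrational_choice, payoff[(superrational_choice, superrational_choice)])
# -> C 100: both cooperate, and each walks away with $100
```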
Note that a superrational player playing against a game-theoretic rational player will defect, since the strategy only assumes that the superrational players will agree. A superrational player playing against a player of uncertain superrationality will sometimes defect and sometimes cooperate, based on the probability of the other player being superrational.
Although standard game theory assumes common knowledge of rationality, it does so in a different way. The game-theoretic analysis maximizes payoffs by allowing each player to change strategies independently of the others, even though in the end, it assumes that the answer in a symmetric game will be the same for all. This is the definition of a game-theoretic Nash equilibrium, which defines a stable strategy as one where no player can improve the payoffs by unilaterally changing course. The superrational equilibrium in a symmetric game is one where all the players' strategies are forced to be the same before the maximization step. (There is no agreed-upon extension of the concept of superrationality to asymmetric games.)
Some argue that superrationality implies a kind of magical thinking in which each player supposes that their decision to cooperate will cause the other player to cooperate, even though there is no communication. Hofstadter points out that the concept of "choice" doesn't apply when the player's goal is to figure something out, and that the decision does not cause the other player to cooperate; rather, the same logic leads to the same answer, independent of communication or cause and effect. This debate is over whether it is reasonable for human beings to act in a superrational manner, not over what superrationality means, and is similar to arguments about whether it is reasonable for humans to act in a "rational" manner, as described by game theory (wherein they can figure out what other players will do, or have done, by asking themselves what they would do in the others' place and applying backward induction and iterated elimination of dominated strategies).
Probabilistic strategies
For simplicity, the foregoing account of superrationality ignored mixed strategies: the possibility that the best choice could be to flip a coin, or more generally to choose different outcomes with some probability. In the prisoner's dilemma, it is superrational to cooperate with probability 1 even when mixed strategies are admitted, because the average payoff when one player cooperates and the other defects is the same as when both cooperate, so defecting only increases the risk of both defecting, which decreases the expected payout. But in some cases, the superrational strategy is mixed.
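This claim can be checked by writing out each player's expected payoff as a function of a shared defection probability p. A sketch, using the conventional R/S/T/P payoff labels (reward, sucker's payoff, temptation, punishment) as assumed notation:

```python
# With both players defecting with the same probability p, each player's
# expected payoff in the original game is
#   f(p) = R(1-p)^2 + (S+T)p(1-p) + P p^2
# With R=100, S=0, T=200, P=1 this simplifies to 100 - 99p^2, so p = 0
# (always cooperate) is optimal: the average of the CD outcomes, $100,
# equals the CC payoff, and defecting only risks the $1 DD outcome.
def expected_payoff(p, R=100, S=0, T=200, P=1):
    """Each player's expected payoff when both defect with probability p."""
    return R * (1 - p) ** 2 + (S + T) * p * (1 - p) + P * p ** 2

best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p)  # -> 0.0: cooperating with certainty is optimal here
```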
For example, if the payoffs are as follows:
- CC – $100/$100
- CD – $0/$1,000,000
- DC – $1,000,000/$0
- DD – $1/$1
so that defecting carries a huge reward, the superrational strategy is to defect with probability 499,900/999,899, or a little over 49.995%. As the reward increases to infinity, the probability approaches, but never reaches, 1/2, and the losses from adopting the simpler strategy of exactly 1/2 (which are already minimal) approach 0. In a less extreme example, if the payoffs for one defector and one cooperator were $400 and $0, respectively, the superrational mixed strategy would be to defect with probability 100/299, or about 1/3.
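These probabilities follow from setting the derivative of the expected payoff to zero. A sketch of that derivation (our own working, consistent with the figures above; the R/S/T/P labels are assumed notation):

```python
from fractions import Fraction

# With symmetric defection probability p and payoffs R (both cooperate),
# S (lone cooperator), T (lone defector), P (both defect), each player's
# expected payoff is
#   f(p) = R(1-p)^2 + (S+T)p(1-p) + P p^2.
# Setting f'(p) = 0 gives the interior optimum
#   p* = (S + T - 2R) / (2(S + T - R - P)),
# a maximum whenever S + T > R + P (so that f''(p) < 0).
def optimal_defect_prob(R, S, T, P):
    return Fraction(S + T - 2 * R, 2 * (S + T - R - P))

print(optimal_defect_prob(100, 0, 1_000_000, 1))  # -> 499900/999899
print(optimal_defect_prob(100, 0, 400, 1))        # -> 100/299
```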
In similar situations with more players, using a randomising device can be essential. One example discussed by Hofstadter is the Platonia dilemma: an eccentric trillionaire contacts 20 people and tells them that if one and only one of them sends him or her a telegram (assumed to cost nothing) by noon the next day, that person will receive a billion dollars. If the trillionaire receives more than one telegram, or none at all, no one will get any money, and communication between players is forbidden. In this situation, the superrational thing to do (if it is known that all 20 are superrational) is to send a telegram with probability p = 1/20—that is, each recipient essentially rolls a 20-sided die and only sends a telegram if it comes up "1". This maximizes the probability that exactly one telegram is received.
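The maximization behind that choice is elementary: with n independent senders, the probability that exactly one telegram is sent is n·p·(1−p)^(n−1), which peaks at p = 1/n. A small sketch verifying this numerically:

```python
# Platonia dilemma: n players each send a telegram independently with
# probability p. The chance that exactly one telegram is sent is
#   n * p * (1 - p)**(n - 1),
# which is maximized at p = 1/n (here 1/20 = 0.05).
n = 20

def prob_exactly_one(p):
    return n * p * (1 - p) ** (n - 1)

# Scan a fine grid of probabilities; the maximizer lands at p = 0.05.
best = max((i / 10000 for i in range(10001)), key=prob_exactly_one)
print(best, round(prob_exactly_one(best), 3))  # -> 0.05 0.377
```

Note that even at the optimum the group succeeds only about 38% of the time; no strategy does better when the players cannot communicate.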
Notice, though, that this is not the solution in a conventional game-theoretical analysis. Twenty game-theoretically rational players would each send a telegram and therefore receive nothing. This is because sending a telegram is the dominant strategy: a player who sends a telegram has a chance of receiving money, while a player who sends none cannot get anything. (If all telegrams were guaranteed to arrive, each player would send only one, and no one would expect to get any money.)
Formalizations and related concepts
Superrationality is a form of Immanuel Kant's categorical imperative,[3][4][5] and is closely related to the concept of Kantian equilibrium proposed by the economist and analytic Marxist John Roemer.[2]
The question of whether to cooperate in a one-shot Prisoner's Dilemma in some circumstances has also come up in the decision theory literature sparked by Newcomb's problem. Causal decision theory suggests that superrationality is irrational, while evidential decision theory endorses lines of reasoning similar to superrationality and recommends cooperation in a Prisoner's Dilemma against a similar opponent.[6][7]
Program equilibrium has been proposed as a mechanistic model of superrationality.[8][9][10]
References
1. Hofstadter, Douglas (June 1983). "Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery". Scientific American. 248 (6). Reprinted in: Hofstadter, Douglas (1985). Metamagical Themas. Basic Books. pp. 737–755. ISBN 0-465-04566-9.
2. Roemer, John E. (2010). "Kantian Equilibrium". The Scandinavian Journal of Economics. 112 (1): 1–24. doi:10.1111/j.1467-9442.2009.01592.x. ISSN 1467-9442. S2CID 13381456.
3. Campbell, Paul J. (January 1984). "Reviews". Mathematics Magazine. 57 (1): 51–55. doi:10.2307/2690298. JSTOR 2690298.
4. Diekmann, Andreas (December 1985). "Volunteer's Dilemma". The Journal of Conflict Resolution. 29 (4): 605–610. doi:10.1177/0022002785029004003. JSTOR 174243. S2CID 143954605.
5. Drescher, Gary L. (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics. MIT Press. Chapter 7. ISBN 9780262042338.
6. Lewis, David (1979). "Prisoners' Dilemma is a Newcomb Problem". Philosophy and Public Affairs. 8 (3): 235–240. doi:10.1093/0195036468.003.0011. JSTOR 2265034.
7. Brams, Steven J. (1975). "Newcomb's Problem and Prisoners' Dilemma". The Journal of Conflict Resolution. 19 (4): 596–612.
8. Howard, J.V. (May 1988). "Cooperation in the Prisoner's Dilemma". Theory and Decision. 24 (3): 203–213. doi:10.1007/BF00148954.
9. Barasz, M.; Christiano, P.; Fallenstein, B.; Herreshoff, M.; LaVictoire, P.; Yudkowsky, E. (2014). "Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic". arXiv:1401.5577 [cs.GT].
10. Oesterheld, Caspar; Treutlein, Johannes; Grosse, Roger; Conitzer, Vincent; Foerster, Jakob (2023). "Similarity-based Cooperative Equilibrium". Advances in Neural Information Processing Systems (NeurIPS).