One game from game theory with applications to economics is the prisoner’s dilemma. The game involves two imaginary members of a criminal gang who have been arrested for a crime and are being held separately. The prosecutor offers each prisoner a choice: they can testify that the other person committed the crime, or they can remain silent. Their penalties will then be as follows: if only one prisoner defects and betrays the other, that prisoner goes free and the other gets the full five years. If both prisoners remain silent, then they each get two years on a lesser charge. And if both prisoners betray the other, they each get four years.
The payoffs in the table below represent the number of years saved from the maximum penalty in each case, with the strategies denoted C for cooperate and D for defect: if both cooperate each saves 3 years, if both defect each saves 1, and if one defects while the other cooperates the defector saves 5 and the cooperator saves 0. The strategy pair where both defect is a Nash equilibrium, because if either player changes their strategy unilaterally they will be worse off.
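As a quick check on that claim, here is a minimal Python sketch (the payoff numbers are the years-saved values above; the helper names are my own) that searches the classical strategy pairs for Nash equilibria by testing unilateral deviations.

```python
# Classical prisoner's dilemma: payoffs are years saved from the five-year maximum.
# payoffs[(a, b)] = (payoff to A, payoff to B) for strategies C (cooperate) or D (defect).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(a, b):
    """A strategy pair is a Nash equilibrium if neither player gains by deviating alone."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= pa for a2 in "CD")
    best_b = all(payoffs[(a, b2)][1] <= pb for b2 in "CD")
    return best_a and best_b

print([s for s in payoffs if is_nash(*s)])  # -> [('D', 'D')]
```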
Game theory was invented in large part by the mathematician John von Neumann, along with the theory of expected utility. He served as an advisor to President Eisenhower on the use of the bomb, and used results from game theory to argue in favor of the first-strike doctrine. After the Soviets developed their own weapons, this morphed into another kind of strategy called mutually assured destruction, or MAD.
Game theory does seem to give a rather bleak view of human nature, and later experiments showed that in fact people cooperate much more often than rational choice would suggest. In one typical experiment, some 37 percent of people chose to cooperate in the game, and this fell to 10 percent if they knew the other person’s strategy. So there is something missing from the classical version.
In the quantum version, each player’s strategy is now going to be encoded by a qubit, which we can denote $|C\rangle$ or $|D\rangle$. Each qubit lives in a two-dimensional Hilbert space spanned by these basis vectors, and the joint strategy lives in the tensor product of the two spaces. For example, if the strategy for A is $|C\rangle$ and that of B is $|D\rangle$, then the joint strategy can be denoted $|CD\rangle = |C\rangle \otimes |D\rangle$. The strategies for the two players will be given by unitary operators $U_A$ and $U_B$ that act only on the player’s own qubit. Our circuit will then be the one shown in the diagram below.
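As a small illustration of the encoding (a sketch only, assuming the basis choice $|C\rangle = (1,0)^T$ and $|D\rangle = (0,1)^T$), the joint state $|CD\rangle$ is just the Kronecker product of the two single-qubit vectors:

```python
import numpy as np

# Assumed basis choice: cooperate |C> = [1, 0], defect |D> = [0, 1].
C = np.array([1, 0], dtype=complex)
D = np.array([0, 1], dtype=complex)

# The joint strategy |CD> lives in the 4-dimensional tensor-product space.
CD = np.kron(C, D)
print(CD)  # [0, 1, 0, 0] in the basis order |CC>, |CD>, |DC>, |DD>
```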
The two qubits, initialized in the state $|CC\rangle$, are going to be acted on by a unitary matrix $J$ whose role is to entangle the two qubits. Each of the qubits is then acted on by the individual strategies. Then we apply the inverse $J^{\dagger}$, which ensures that if you put a classical strategy in then you get a classical strategy out at the end. And finally we measure the output state and calculate the payoffs in the same way as in the classical version.
As an example we can choose the entanglement matrix $J$ to be the matrix with 1’s on the diagonal and $i$’s on the anti-diagonal, all divided by $\sqrt{2}$. After operating with $J$ we get the state $\left( |CC\rangle + i|DD\rangle \right)/\sqrt{2}$, which is now entangled. We then calculate the final state, which depends on the individual strategies, and the expected payoff, which weights the payoff for each of the possible outcomes by its probability.
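To make this concrete, here is a minimal numpy sketch of the circuit, with the basis ordered $|CC\rangle, |CD\rangle, |DC\rangle, |DD\rangle$ and the years-saved payoffs from the table above (the helper names are my own). It builds the entangler $J$, applies the two single-qubit strategies, undoes the entanglement with $J^{\dagger}$, and returns the expected payoffs; with only the classical moves (identity for cooperate, NOT for defect) it reproduces the classical game.

```python
import numpy as np

# Basis order for two qubits: |CC>, |CD>, |DC>, |DD>.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)      # NOT gate: cooperate -> defect

# Entangler J = (I⊗I + i X⊗X)/sqrt(2): 1's on the diagonal, i's on the anti-diagonal.
J = (np.kron(I2, I2) + 1j * np.kron(X, X)) / np.sqrt(2)

# Years saved from the five-year maximum for each measured outcome.
PAYOFF_A = np.array([3, 0, 5, 1])
PAYOFF_B = np.array([3, 5, 0, 1])

def payoffs(U_A, U_B):
    """Expected (A, B) payoffs for single-qubit strategies U_A, U_B in the entangled circuit."""
    psi0 = np.array([1, 0, 0, 0], dtype=complex)   # start in |CC>
    psi = J.conj().T @ np.kron(U_A, U_B) @ J @ psi0
    probs = np.abs(psi) ** 2
    return float(probs @ PAYOFF_A), float(probs @ PAYOFF_B)

print(np.round(J @ np.array([1, 0, 0, 0]), 3))     # (|CC> + i|DD>)/sqrt(2), now entangled
print(payoffs(I2, I2))   # both cooperate -> payoff (3, 3)
print(payoffs(X, X))     # both defect    -> payoff (1, 1)
print(payoffs(X, I2))    # only A defects -> payoff (5, 0)
```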
The different possibilities are shown in the table in the video screenshot below. In the classical version (top four rows) the moves are only the identity or the NOT gate which flips from cooperate to defect. But in the quantum version we have extra quantum moves such as the Hadamard transformation. This leads to a new Nash equilibrium which is better than the old one.
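The exact move set in that table isn’t reproduced here, so as an illustration the sketch below (self-contained, same conventions as above) adds two quantum moves to the classical identity and NOT: the Hadamard gate H, and the move Q = iσ_z identified by Eisert et al. (1999) as the new equilibrium strategy for the maximally entangled game. Checking unilateral deviations within this assumed set shows that (Q, Q) is the only Nash equilibrium, with payoff (3, 3), which is better than the classical (1, 1).

```python
import numpy as np

# Same circuit as above, now with two extra quantum moves in the strategy set.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)                 # defect (NOT)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard move
Q = np.array([[1j, 0], [0, -1j]], dtype=complex)              # Eisert et al.'s quantum move

J = (np.kron(I2, I2) + 1j * np.kron(X, X)) / np.sqrt(2)
PAYOFF_A = np.array([3, 0, 5, 1])   # outcomes |CC>, |CD>, |DC>, |DD>
PAYOFF_B = np.array([3, 5, 0, 1])

def payoffs(U_A, U_B):
    psi = J.conj().T @ np.kron(U_A, U_B) @ J @ np.array([1, 0, 0, 0], dtype=complex)
    probs = np.abs(psi) ** 2
    return float(probs @ PAYOFF_A), float(probs @ PAYOFF_B)

moves = {"I": I2, "X": X, "H": H, "Q": Q}

# Print the payoff table for every pair of moves.
for a in moves:
    for b in moves:
        pa, pb = payoffs(moves[a], moves[b])
        print(a, b, round(pa, 3), round(pb, 3))

def is_nash(a, b):
    """Nash equilibrium within this move set: no profitable unilateral deviation."""
    pa, pb = payoffs(moves[a], moves[b])
    return (all(payoffs(moves[a2], moves[b])[0] <= pa + 1e-9 for a2 in moves) and
            all(payoffs(moves[a], moves[b2])[1] <= pb + 1e-9 for b2 in moves))

print([(a, b) for a in moves for b in moves if is_nash(a, b)])  # -> [('Q', 'Q')], payoff (3, 3)
```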
So what does this entanglement mean? Well, in quantum game theory it is usually thought of as representing some kind of social contract, so it can be used to model things like societal norms or altruism. Another interpretation is to identify player B with the person’s objective outlook, as they think about their own strategy, and player A with their subjective beliefs about what the other person is going to do. That context, represented by the top qubit, is going to act as a control. If we choose the entanglement matrix to be the C-NOT gate, then it has no effect on the initialized input and we can omit it from the beginning of the circuit. We are therefore left with the same two-qubit entanglement circuit that we have been using for projection sequences or decisions in quantum cognition, where subjective factors act on the top qubit and objective factors act on the lower qubit.
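As a quick check of that claim (a sketch only, assuming the top qubit is the control and the basis ordering $|CC\rangle, |CD\rangle, |DC\rangle, |DD\rangle$), the C-NOT gate leaves the initialized state $|CC\rangle$ unchanged, which is why it can be dropped from the start of the circuit:

```python
import numpy as np

# C-NOT with the top qubit as control: flips the lower qubit only when the control is |D>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

psi0 = np.array([1, 0, 0, 0], dtype=complex)   # initial state |CC>
print(np.allclose(CNOT @ psi0, psi0))          # True: the entangler has no effect here
```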
It follows that we can apply the quarter law to this as before. The uncertainty about the other person’s strategy takes the base rate of cooperation, which is 10 percent when you know what the other person is going to do, and adds about a quarter to it, bringing you to around 35 percent expected to cooperate, which is in good agreement with experiment.
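Written out as a rough worked estimate (the symbol $P_{\text{base}}$ is mine; the quarter law is the interference correction used in the earlier posts on quantum cognition):

$$P(\text{cooperate}) \approx P_{\text{base}} + \tfrac{1}{4} = 0.10 + 0.25 = 0.35,$$

which is close to the 37 percent cooperation rate quoted above.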
Further reading:
Eisert J, Wilkens M and Lewenstein M (1999) Quantum Games and Quantum Strategies. Physical Review Letters 83, 3077–3080.
Previous: QEF06 – The Penny Flip Game
Next: QEF08 – Quantizing Propensity
Playlist: Quantum Economics and Finance