        | Complete Info                                 | Incomplete Info
--------+-----------------------------------------------+----------------------------------------
Static  | Simultaneous-move; all players' payoffs known | Simultaneous-move; some payoffs unknown
Dynamic | Sequential-move; all players' payoffs known   | Sequential-move; some payoffs unknown
N.B. There also exists the concept of imperfect information, where a player does not know the actions taken by other players (at least not in time for this information to be incorporated into his next decision). All four combinations (i.e. complete/perfect, complete/imperfect, incomplete/perfect, incomplete/imperfect) are possible (see the Wikipedia example).
As an equilibrium concept that holds in a stronger class of games must also hold when applied to a weaker class (but not the other way around), one might conceivably just learn the procedures behind Dynamic Games of Incomplete Information and use them to solve all four classes of problems. However, as might be imagined, such procedures can be unwieldy, and are quite unnecessary if a problem is known to belong to a simpler class; moreover, the intuition behind game theory is probably best built from the ground up. Thus, the progressive approach.
First off, game theory is not simply some namby-pamby academic theory: likely everyone has used it in daily life (most probably without realising it), whenever actions and advantages are in play. A child applies it when deciding whether or not to lie about snarfing the last cookie. A housewife applies it when deciding how hard a bargain to drive while haggling. True, they probably don't assign numerical utilities or draw matrices in their minds, but use it they do.
Now, if only one person is involved in a situation with known payoffs, there is not much to analyse - take a guy who spots a $1 bill, a $10 bill and a $100 bill fluttering in the street. Obviously, he picks up all $111. More generally, he will just do what is best for himself (which, it has to be noted, implies no moral judgement on what is "best", though in economics pure monetary profit is generally assumed for simplicity).
1. Static + Complete
The simplest form introduced. Given that each player has access to a certain number of strategies, and the payoffs to every player for each combination of strategies are fully known by all players ("common knowledge"), what should each player do? Interestingly, game theory may not always be able to resolve this question. What game theory tries to do is discover the Nash equilibrium: a situation in which no player, given the strategies adopted by all the other players, can do better by deviating from his own strategy.
Nash (1951) proved that at least one such equilibrium (perhaps in mixed strategies, which involve probabilistic selection among strategies) must exist for any game with a finite number of strategies and participants (though this famous observation was dismissed offhandedly as trivial by von Neumann). If there is a unique equilibrium, then rational participants should (but often don't in reality) automatically gravitate towards it, and the "what should I do" question is answered. Sometimes multiple Nash equilibria exist, though, and in those cases game theory may not be able to serve as a rational guide to which strategy to adopt. (Personal note: is it rational to presuppose that players with no foreknowledge of each other would then take the guaranteed mixed-equilibrium strategy as their solution? For an example, see the "battle of the sexes" game.)
A rather elementary first step for a player would then be to discard any strategy that offers lower payoffs in every case ("strictly dominated") than another of his possible strategies, and happily know his next move if only one strategy remains after iterating this process. This may not happen, in which case he can use a more sophisticated method: determine the (possibly multiple) best strategy responses for each player against each combination of the other players' strategies. Any combination that is a best response for all players simultaneously is a Nash equilibrium. In practice, this is often a matter of constructing a (possibly huge) matrix and underlining the qualifying payoffs to my little Economics undergraduate's heart's content.
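The underlining procedure is mechanical enough to sketch in code. A minimal sketch, with a hypothetical two-player payoff matrix (the row/column convention is an assumption for illustration):

```python
# Find pure-strategy Nash equilibria by the "underlining" method:
# mark each player's best responses, keep cells underlined by both.

def pure_nash(payoffs):
    """payoffs[i][j] = (row player's payoff, column player's payoff)."""
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            u1, u2 = payoffs[i][j]
            # Row player cannot gain by switching rows, given column j
            best_row = all(payoffs[k][j][0] <= u1 for k in range(rows))
            # Column player cannot gain by switching columns, given row i
            best_col = all(payoffs[i][k][1] <= u2 for k in range(cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect
pd = [[(3, 3), (0, 5)],
      [(5, 0), (1, 1)]]
print(pure_nash(pd))  # [(1, 1)] — mutual defection
```

Note that a game like matching pennies returns an empty list here, which is exactly the case where only a mixed-strategy equilibrium exists.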
An infinite number of strategies is of course possible - take for example the Cournot duopoly game, where two firms simultaneously decide on the quantity to produce (even if the maximum quantity is effectively capped by market demand, any fractional quantity may be chosen, and as we know the number of real numbers between two distinct real numbers is infinite). In this case we reason by constructing a best-response function for each firm (knowing the market demand and cost functions), equating them, and solving them simultaneously. The method generalizes to any number of firms (as in a tutorial). Of course, writing about the procedure, actually doing the math, and doing the math under examination time constraints are different things altogether.
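The simultaneous solution can be illustrated numerically. A sketch under the standard textbook assumptions of linear inverse demand P = a - b(q1 + q2) and a common constant marginal cost c (the parameter values below are illustrative):

```python
# Firm i maximizes (P - c) * qi, giving best response qi = (a - c - b*qj) / (2b).
# Solving the two best responses simultaneously gives q1 = q2 = (a - c) / (3b);
# here we find that fixed point by iterating the best responses.

def cournot_best_response(a, b, c, q_other):
    return max(0.0, (a - c - b * q_other) / (2 * b))

def cournot_equilibrium(a, b, c, iterations=100):
    q1 = q2 = 0.0
    for _ in range(iterations):
        # Each firm best-responds to the other's previous quantity
        q1, q2 = (cournot_best_response(a, b, c, q2),
                  cournot_best_response(a, b, c, q1))
    return q1, q2

q1, q2 = cournot_equilibrium(a=120, b=1, c=30)
print(round(q1, 4), round(q2, 4))  # both converge to (a - c) / (3b) = 30
```

Iterating best responses converges here because each response moves only halfway towards the other firm's quantity; solving the two linear equations directly gives the same answer.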
Another application is the Bertrand game, where firms simultaneously decide on price (as opposed to quantity in Cournot), each being able to supply any required quantity. Now if the products are identical, clearly all consumers will flock to the firm with the lower price, resulting in price = marginal cost at equilibrium if the competing firms share the same cost function (and a price just below the marginal cost of the higher-cost firm otherwise). Still, what was an obvious conclusion dismissed in a line or two at lower levels comes to two pages of detail in strict game theory. For differentiated products, Bertrand firms will not lose all their business if their price is higher, and in this case the treatment (and math) is much the same as for Cournot.
N.B. The graphical representation of a mixed-strategy solution is only easily presented for two players with two strategies each (such that two variables suffice to specify the best responses fully). Within the scope of examinations, iterated elimination of strictly dominated strategies may thus come into play, if only to strip a scary-looking multi-strategy matrix down to a more manageable 2x2 one.
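The stripping-down is mechanical too. A minimal sketch of iterated elimination of strictly dominated strategies (pure-strategy domination only; the 2x3 game below is illustrative and collapses to a single cell):

```python
def iesds(payoffs):
    """payoffs[i][j] = (row payoff, column payoff); returns surviving indices."""
    rows = list(range(len(payoffs)))
    cols = list(range(len(payoffs[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            # r is strictly dominated if some other row s does better in EVERY column
            if any(all(payoffs[s][c][0] > payoffs[r][c][0] for c in cols)
                   for s in rows if s != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            if any(all(payoffs[r][d][1] > payoffs[r][c][1] for r in rows)
                   for d in cols if d != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Row chooses Up/Down, Column chooses Left/Middle/Right (illustrative payoffs):
g = [[(1, 0), (1, 2), (0, 1)],   # Up
     [(0, 3), (0, 1), (2, 0)]]   # Down
print(iesds(g))  # ([0], [1]) — only (Up, Middle) survives
```

Here Right is dominated by Middle, after which Down is dominated by Up, after which Left is dominated by Middle - the same cascade one performs by hand.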
2. Dynamic + Complete
Dynamic essentially means that the players now move sequentially, i.e. some player(s) decide on their strategies after others (and possibly after observing the strategies already taken). This is the case in chess and other board games, and indeed some chess endgames would translate into a neat game tree for demonstration purposes. The general idea in solving these sorts of games is backwards induction: beginning at the end and determining, at each stage, what a player would rationally do. N.B. There is theoretically nothing to stop us from solving most chess-like games through this method alone, other than the sheer scale of the required computations.
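Backwards induction is easy to state as a recursion. A minimal sketch on a hypothetical two-player game tree (the tree encoding and payoffs are my own illustration):

```python
# Internal nodes are [player, children]; leaves are payoff tuples (p0, p1).

def backwards_induction(node):
    """Return (payoff_vector, chosen_path) assuming rational play at each node."""
    if isinstance(node, tuple):          # leaf: nothing left to decide
        return node, []
    player, children = node
    best_payoff, best_path, best_move = None, None, None
    for move, child in enumerate(children):
        payoff, path = backwards_induction(child)
        # The mover keeps whichever branch maximizes his own payoff
        if best_payoff is None or payoff[player] > best_payoff[player]:
            best_payoff, best_path, best_move = payoff, path, move
    return best_payoff, [best_move] + best_path

# Player 0 moves first, then player 1 responds (payoffs are illustrative):
tree = [0, [
    [1, [(3, 1), (0, 0)]],   # after player 0's move 0
    [1, [(1, 2), (2, 1)]],   # after player 0's move 1
]]
print(backwards_induction(tree))  # ((3, 1), [0, 0])
```

The recursion visits every node once, which is exactly why the method is conceptually trivial yet computationally hopeless for full chess.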
The Stackelberg model considers a dominant firm choosing a quantity to produce first, with a following firm then observing that quantity and choosing its own (in contrast to Cournot, where quantity is also chosen, but simultaneously). The outcome, perhaps non-intuitively, is that the following firm (despite having more information) is actually better off in the Cournot game! The reason is that the dominant firm can use the following firm's rationality and knowledge against it, by producing more to begin with. The following firm is then left with the scraps at best. A little knowledge can be a dangerous thing in multiplayer games.
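The comparison can be made concrete with a sketch under the same illustrative linear setup as before (inverse demand P = a - (q1 + q2), common marginal cost c; all numbers hypothetical):

```python
def profits(a, c, q1, q2):
    price = a - q1 - q2
    return (price - c) * q1, (price - c) * q2

a, c = 120, 30  # illustrative demand intercept and marginal cost

# Cournot (simultaneous): q1 = q2 = (a - c) / 3
cournot = profits(a, c, (a - c) / 3, (a - c) / 3)

# Stackelberg: the leader anticipates the follower's best response
# q2 = (a - c - q1) / 2, so the leader maximizes (a - c - q1) * q1 / 2,
# giving q1 = (a - c) / 2 and hence q2 = (a - c) / 4.
stackelberg = profits(a, c, (a - c) / 2, (a - c) / 4)

print("Cournot:    ", cournot)      # (900.0, 900.0)
print("Stackelberg:", stackelberg)  # (1012.5, 506.25) — follower worse off
```

The follower's profit drops from 900 to 506.25: its very rationality lets the leader commit to flooding the market first.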
N.B. Moving second is not always worse. When competing on price, being second allows one to marginally undercut the price-setter, for instance.
Sequential games are more naturally presented in tree form (i.e. extensive form) than in matrix form (i.e. normal form), though initial equilibrium elimination is often more easily done in normal form, before the few remaining equilibria are checked for subgame perfection (since subgame-perfect equilibria must be Nash equilibria). Note that an equilibrium consists of a complete strategy for each player - even at decision nodes that the prior strategies never lead to - while an outcome is just the single strategy (action) actually taken by each player.
With imperfect information, some players may not know, when their turn comes, some prior strategy taken - this might be the case with simultaneous games embedded within sequential ones, or due to the existence of an information set with multiple decision nodes (implying that the strategies available at all nodes within the information set must be identical). Note that static games of complete information are trivially transformable into dynamic games with maximal information sets for each player.
Sequential games can be infinitely long - the solution for an infinite bargaining game, where two players take turns to offer a division of some payout (diminishing over time), is derived from the three-period game under backwards induction, with the common-sense stipulation that a player will accept a given share only if he cannot get anything better. The (finite) three-period game is simple to analyse this way, as the payout at the end of the third period is known.
Then, for the infinite game, the proof involves observing that the game beginning at the third period is essentially the same as the game at the first (though if play actually reaches the third period, the total payout available is lower), and equating the payout for the first player with some arbitrary payoff variable; this variable may then be expressed in terms of just the discount factor per period (which may differ between the players). Unsurprisingly, players who value later payoffs less end up with a smaller share.
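Carrying the argument through gives the standard alternating-offers result (Rubinstein's): if player 1 proposes first and the players discount by d1 and d2 per period, player 1's share x solves x = 1 - d2*(1 - d1*x). A quick sketch (discount values illustrative):

```python
# Solving x = 1 - d2 * (1 - d1 * x) for the first mover's equilibrium share:
#   x = (1 - d2) / (1 - d1 * d2)

def first_mover_share(d1, d2):
    """Player 1's share when proposing first, given per-period discount factors."""
    return (1 - d2) / (1 - d1 * d2)

print(first_mover_share(0.9, 0.9))  # ~0.526 — slight first-mover advantage
print(first_mover_share(0.9, 0.5))  # ~0.909 — the impatient player 2 gets little
```

As the text says, the player who values later payoffs less (lower discount factor) walks away with the smaller share.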
Note that for a finitely repeated game with a unique Nash equilibrium, the strategy taken at each stage is the same as in the one-off game - not cooperation for communal gain, as some might think possible. The logic is that at the last stage, both players will surely play the selfish strategy to maximize their personal gains; but since both know that, there is no incentive to cooperate at the second-to-last stage either, and so on back to the first stage. Note however that if multiple Nash equilibria exist, a cooperative subgame-perfect equilibrium - possibly involving strategies that are not Nash equilibria of the one-off game - is possible! An intriguing variation is when the number of stages is not fixed, but the game ends randomly.
If the game is infinitely repeated, cooperation is possible if the one-time benefit of being selfish (and triggering a similar response from the other player for the rest of eternity) is outweighed by the discounted infinite value stream from cooperating (which has a finite value due to limits; if no discounting takes place, deviating is never worthwhile). This may be applied to collusion between duo-/oligopolists in reality. Personal note: a slightly modified version, Tit for Tat, mirrors the opponent's last move, so the punishment does not last forever but cooperation may resume if the opponent shows contrition (and some recompense) by playing the cooperative strategy again after the initial "betrayal". If strategies can be played in error, a version of Tit for Tat with some small chance of forgiveness is likely worthwhile. Important: remember 1 + x + x^2 + x^3 + ... = 1/(1-x) for |x| < 1.
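Plugging the geometric series into the trigger-strategy comparison gives a clean threshold. A sketch with illustrative prisoner's-dilemma labels (cooperating forever pays C per period, a one-off deviation pays D once, mutual punishment pays P forever after):

```python
# Cooperation under a grim trigger is sustainable iff
#   C / (1 - d) >= D + d * P / (1 - d),   i.e.   d >= (D - C) / (D - P).

def min_discount_factor(C, D, P):
    """Smallest per-period discount factor that sustains cooperation."""
    return (D - C) / (D - P)

# Standard prisoner's dilemma numbers: cooperate = 3, temptation = 5, punishment = 1
print(min_discount_factor(C=3, D=5, P=1))  # 0.5
```

So with these numbers, any player who weights next period at more than half of this one would rather keep colluding.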
3. Static + Incomplete
Incomplete information implies that at least one player is uncertain about another player's payoff (directly cribbed from the lecture notes). While conceptually not hard to get around, the introduction of multiple possibilities (and probabilities) further lengthens the necessary calculations. For instance, in the Cournot model with one firm having private information about its own type (e.g. whether it has a high- or low-cost production structure), that firm now has a distinct optimal strategy for each type it might be, while the other firm has to hedge its strategy knowing only the probabilities of the first firm's types.
Note that as the probabilities are known, the obvious logical approach is to weight each outcome by the probability of it happening and go for the strategy with the highest expected payoff (agents are assumed to be risk-neutral here); that, indeed, is the Bayesian Nash equilibrium. (The text seems to have additional examples based on normal distributions requiring more involved statistics, but these weren't covered in the lectures.) Note also that a mixed-strategy equilibrium of a static game of complete information can be approximated by pure strategies in a properly designed game of incomplete information (by introducing uncertainty in the payoffs and letting that uncertainty approach zero), but I seriously doubt the examination will require a proof of this from scratch.
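The hedging in the asymmetric-cost Cournot case can be made concrete. A numerical sketch in the standard textbook setup (linear demand P = a - q1 - q2; firm 1's cost c is common knowledge, firm 2's cost is cH with probability theta and cL otherwise, privately known; all parameter values are illustrative):

```python
# Each type of firm 2 best-responds to q1; firm 1 best-responds to
# firm 2's EXPECTED quantity. Iterating these best responses converges
# to the Bayesian Nash equilibrium.

def bayesian_cournot(a, c, cH, cL, theta, iterations=200):
    q1 = q2H = q2L = 0.0
    for _ in range(iterations):
        exp_q2 = theta * q2H + (1 - theta) * q2L
        q1, q2H, q2L = ((a - c - exp_q2) / 2,   # firm 1 hedges over types
                        (a - cH - q1) / 2,      # high-cost type's response
                        (a - cL - q1) / 2)      # low-cost type's response
    return q1, q2H, q2L

q1, q2H, q2L = bayesian_cournot(a=120, c=30, cH=40, cL=20, theta=0.5)
print(round(q1, 3), round(q2H, 3), round(q2L, 3))  # 30.0 25.0 35.0
```

As expected, the high-cost type produces less than the low-cost type, and firm 1's quantity sits at its best response to the probability-weighted average of the two.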
A second-price sealed-bid auction (where the winning bidder[s] pay only the highest bid among the losing bidder[s], as with the COE) has the convenient property that every bidder's best strategy is simply to bid his own valuation, without the second-guessing required in the older first-price auction (where winning bidder[s] pay their own bid, and thus have an incentive to save money by underbidding if they think others' valuations are quite a bit lower). N.B. The first-price result for two bidders, with valuations each independently and uniformly distributed on [0,1] after normalization, is known to be a bid of half one's own valuation. A similarly (relatively) complex analysis applies to double auctions, where a buyer and a seller simultaneously name a bid and a reserve price and the transaction goes through at the average of the two whenever the bid exceeds the price, since the parties have an incentive to second-guess each other.
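The truthful-bidding property can be checked by brute force. A sketch against a single rival, over a grid of illustrative integer bids (tie-breaking in the bidder's favour is an assumption for simplicity):

```python
# In a second-price auction, the winner pays the rival's (losing) bid.
# Claim: bidding one's true valuation is never worse than any other bid,
# whatever the rival does — i.e. truthful bidding is weakly dominant.

def payoff(my_bid, my_value, other_bid):
    return my_value - other_bid if my_bid >= other_bid else 0

my_value = 50
for other_bid in range(101):
    for my_bid in range(101):
        assert payoff(my_bid, my_value, other_bid) <= payoff(my_value, my_value, other_bid)
print("bidding my_value is weakly dominant over the whole grid")
```

Overbidding only ever wins auctions that cost more than they are worth; underbidding only ever loses auctions that would have been profitable.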
4. Dynamic + Incomplete
The most comprehensive class, and one which can encompass all the prior classes. Here the Perfect Bayesian equilibrium is defined, which requires players' beliefs within information sets containing more than one possible decision node to be specified, and to be consistent with the equilibrium strategy whenever on the equilibrium path.
As with complete information, for pure strategies the normal-form representation is useful for quickly eliminating impossible strategy profiles; the survivors can then be individually examined to see whether they satisfy perfect Bayesian equilibrium.
A common example is signalling games and pooling/separating equilibria, coincidentally also covered in both my other economics modules this semester (Health and Labour), and also in passing here. For such questions, it is useful to enumerate the candidate equilibrium combinations (often just four, for the standard Sender/Receiver roles with two types each) and check whether some of them are strictly dominated or offer better expected payoffs under the given beliefs.
Went through the 04/05 and 06/07 papers, managing to do all the questions (I think). They (and the midterm) have so far been capped at a maximum difficulty quite a bit lower than the hardest questions appearing in tutorials (free-form derivations weren't seen in the examinations, which I suppose is fairer, as one can't expect to be judged overly on [lucky] creativity); I hope that continues, since some tutorial questions certainly took me more than a couple of hours to complete. All e best! ... over fed(FAT) hamster don escape HAHA