changelog v1.07c
---------------
* FeedBurner FeedCount added somewhere.
* RSS Feed now autodetectable by IE Toolbar.
* Couple of blog links added/updated.
* Minor bug with special characters in Javascript functions (partially) fixed.

Taking a breather from Financial Economics, I figured that this was as good a time as any to unearth my planned post on game theory. It's not going to be anywhere near as comprehensive as I envisaged, but perhaps that's a good thing.

So, what is The Game? Nope, it's neither wrestler nor rapper... The short answer is: it's everything. Everything is a game. How can this be so? Let us look at the mathematical definition. While the very first sentence states that there must be a goal "to win", I would suggest that it might more generally and accurately be specified that there must simply be a "desired outcome", to avoid being misled by narrower concepts of "victory". An adult might deliberately throw a board game to please a child, and on some level both achieve what they desire - games are often not zero-sum.

So we now have a working idea of what a game is. Economics (applied math) teaches us how to analyze stripped-down versions of real-life games, apparently under the assumption (as usual) that these approximations are close enough to reality to be useful. This field is called game theory, and it teaches us how to make decisions when the strategies and their outcomes are laid out before us.

A simple example: consider the game where you are deciding whether to do a pair assignment, and the choices are either to do it (D) or not to do it (N). Suppose you then inform your partner of your choice, and he then has the same choice of whether to do it or not (your choice is fixed once you inform him, perhaps because you will be away for a trip). This game may be fully specified as follows:

You: D, Partner: D - Both get credit for completing the assignment; utility is, say, 10 for both.
You: D, Partner: N - Both get credit, but your partner is happier because he got the same result without expending effort, while you had to. Utility 8 for you but 12 for him.
You: N, Partner: D - Both get credit, but you are happier because you got the same result without expending effort, while your partner had to. Utility 12 for you but 8 for him.
You: N, Partner: N - Assignment not done; both flunk the module and are thrown out of school. Utility 0 for both.

So, what does game theory suggest you do? You could do the assignment first, but then according to this framework your partner would just free-ride. Alternatively, you could just ignore it and throw it onto your partner's lap. Within the logic of this framework, he would rationally realise that sucking his thumb and completing it would be better than having both of you expelled, and so you would surely get to free-ride instead. You would thus choose not to do it, and expect your partner to (working backwards through the game, as sketched below).
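To make that backward-induction step concrete, here is a minimal Python sketch using the utility numbers above; the dictionary layout and function names are purely illustrative assumptions, not any standard formulation.

# (your_choice, partner_choice) -> (your_utility, partner_utility),
# using the numbers from the four outcomes listed above
PAYOFFS = {
    ("D", "D"): (10, 10),
    ("D", "N"): (8, 12),
    ("N", "D"): (12, 8),
    ("N", "N"): (0, 0),
}

def partner_best_reply(your_choice):
    # The partner moves second and simply maximises his own utility
    return max(["D", "N"], key=lambda p: PAYOFFS[(your_choice, p)][1])

def your_best_opening():
    # You anticipate the partner's reply and maximise your own utility
    return max(["D", "N"], key=lambda y: PAYOFFS[(y, partner_best_reply(y))][0])

you = your_best_opening()
partner = partner_best_reply(you)
print(you, partner, PAYOFFS[(you, partner)])  # N D (12, 8) - you free-ride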
Problem is, it doesn't really work this way in practice. Or even close.

First off, it assumes a lot about you and your partner. Do neither of you really enjoy doing assignments? Would the learning process confer some utility? How are the utility figures set anyway - out of thin air? This might be expected to be less problematic when something quantifiable is used as a measure (such as money), but even then utility is often not a linear function of cash. Outcomes are also seldom fully predictable (what if the partner falls sick, or...?).

Secondly, it assumes a one-time interaction, which is very rarely the case in reality. Be a cock for long enough, and soon your reputation will precede you - you might have a few short-run "wins", but end up a big loser in the long run (not nearly long enough that everyone is dead anyway, to paraphrase Keynes, though).

Thirdly, there is bona humana, the irrational and unpredictable in all of us. This might be expected to be especially pronounced when one perceives that the outcome of the game doesn't really matter much - have you ever grabbed a can of mushrooms even though it was ten cents more expensive than an identical-looking can next to it? I know I have. Cue today's case of a pilot getting canned for pranking his Canadian colleague with unauthorised orders for Canadian pizza. Seems slightly harsh to suspend a person for what might reasonably be laughed off, but then he's a pilot, so...

Game theory does attempt to handle multiple interactions by modelling them as repeated games, but for a known, finite number of repetitions this only comes up with the finding that the one-shot outcome holds in every round.

It may be time to introduce the classic prisoner's dilemma, which unlike the previous assignment example is simultaneous (both players decide effectively at the same time) and rewards cooperation (if both players keep mum, they would be better off than if both rat the other guy out). Of course, the thing is that neither will know if the other will betray him, so game theory predicts that both ratting each other out is the equilibrium outcome (the Nash equilibrium), even though it leaves both worse off. Again, this is often not reflected in real life. One explanation might be that the criminal underworld generally doesn't take kindly to snitches, and thus one might rationally choose to keep silent if the alternative is almost certainly to sleep with the fishes in concrete shoes. In this case, the payoff matrix would be inaccurate to begin with.

Even in the absence of such guaranteed retribution, the players might still "trust" each other enough to gain the cooperative rewards. This is reflected by Hofstadter's superrationality. An example where superrationality features prominently is the Platonia dilemma, where an eccentric rich guy gathers a number of people together, then separates them and informs them that they can each decide whether to try for a billion dollars. If exactly one person tries and the rest do not, that one guy gets a billion dollars; otherwise, if none try, or more than one tries, no one gets anything. The superrational solution is supposed to be for each person to simulate the roll of a die with a number of faces equal to the number of people involved, and try only if he gets a one. Surprisingly, if everyone does subscribe to this methodology, the chance of the billion dollars being awarded to someone is respectably high - it is ((n-1)/n)^(n-1), where n is the total number of people, and it approaches a limit of 1/e, about 36.8%.
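As a quick sanity check on that figure (just a few throwaway lines of Python, nothing more): if each of n people "tries" with probability 1/n, the chance that exactly one tries is n * (1/n) * ((n-1)/n)^(n-1) = ((n-1)/n)^(n-1), which indeed tends to 1/e.

import math

def payout_probability(n):
    # Probability that exactly one of n people tries, if each tries with chance 1/n
    return ((n - 1) / n) ** (n - 1)

for n in (2, 10, 100, 1_000_000):
    print(n, round(payout_probability(n), 4))
# 2 0.5 / 10 0.3874 / 100 0.3697 / 1000000 0.3679
print(round(1 / math.e, 4))  # 0.3679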
In reality, when Scientific American tried something like that, they discovered that almost nobody adhered to it, even taking into consideration that their readers should as a whole be of a certain intellectual calibre. My current opinion on superrationality, however, is that it is unnecessary - one can just budget for it within the "lower" framework of classical game theory, by having it affect the utility values, and attribute all inaccuracies in classical game theory to improper formulation of the game.

In the adult-child board game example, the unexpected outcome (the adult losing) can be adequately explained by incorrect utilities (the adult doesn't actually derive pleasure from beating a kid), or by an insufficient game level (one might imagine the "game" of the board game itself being subsumed within a larger "game" of the adult-child relationship). Building from the ground up, we can then imagine that there is The Game, a game that is the superset of all the smaller games that make up one's life. We may not know exactly what it is, or have it as immutable, but we probably have some sort of idea of it at any given point. Then, that must necessarily be the game that a person rationally tries to "win", and all other games must be adjusted based on their effect on that single, all-consuming game.

For instance, some time ago, I attended an experimental study which was composed of a collection of bargaining games - like the prisoner's dilemma, one would get more points by cooperating, and fewer by not. The thing is that in such games not cooperating is, as has been demonstrated, rationally the only no-lose option - if both don't cooperate, you at least break even on the deal, but if the other guy is silly enough to cooperate, you get to beat him on it. Confident in this analysis, I played strictly according to it. Trouble was, that wasn't really the game. The game was to accumulate as many points as possible, which would be proportionally translated into a (small) monetary reward at the end of the session. In the end, I finished with only an average score. Likely there were enough other people who hadn't studied economics, were plain nice, or knew each other to make the non-cooperative strategy a poor one. Of course, this may not be the point, since if everyone did behave like me, I would still have ended up with an average score (like everyone else, in this case).

However, now let us say that only the highest-scoring participant gets a big prize. Suddenly, with enough participants and high enough points for cooperation, it may be logical to cooperate at least sometimes, since an average score does nothing towards winning the big prize - the odds are that some participants would happen to cooperate with each other. The exact decision would depend on the specifics of the point structure, and mixed strategies would probably come into play.

The same applies to lotteries, for instance - it has often been stated that they are a tax on the mathematically illiterate, but the joke may be on the mathematicians here, since the expected utility may be positive despite the expected return not being so. Say there are a million tickets priced at two dollars each for a million-dollar prize: the expected return is negative one dollar per ticket, as each ticket is "worth" a dollar (a one-millionth chance to win a million) but sold for two. However, the almost certain loss of two bucks might not mean anything to a person, while getting a million dollars (admittedly very unlikely) does, without even going into the entertainment value of listening to the lottery results.
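For concreteness, here is that lottery arithmetic in a few lines of Python, with an entirely made-up pair of utility numbers thrown in only to illustrate how the expected utility can come out positive even when the expected return is negative:

TICKETS = 1_000_000
PRICE = 2.0
PRIZE = 1_000_000.0
p_win = 1.0 / TICKETS

# Expected monetary return per ticket: a one-in-a-million shot at a million,
# minus the two-dollar price
expected_return = p_win * PRIZE - PRICE
print(expected_return)  # -1.0

# Hypothetical utilities (arbitrary numbers, assumed for illustration only):
# losing two bucks is barely felt, winning a million is life-changing
u_lose, u_win = -0.001, 5000.0
expected_utility = p_win * u_win + (1.0 - p_win) * u_lose
print(expected_utility)  # about +0.004, positive despite the -$1 return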
Yesterday, I attended another experiment, this time on the ultimatum game (it was billed as a study on negotiation, though, which wasn't that accurate since no negotiation actually takes place). Two players take part (their identities are not known to each other), and one player is allowed to propose a division of $10. The second player can accept or reject the proposal. If the proposal is accepted, both players get what the proposal specifies. If it is rejected, both get nothing.

Then, according to game theory, the first player could offer the second player one cent, and the second player would accept it because hey, one cent is better than nothing! I was the second player, and must have been up against just such a rational player, since he offered me fifty cents. Obviously, I rejected it - as I wrote in the survey afterwards, "the satisfaction gained from rejecting a paltry amount far outweighs the possible monetary gain." If the offer were five hundred thousand against $9.5 million, though, I would certainly think twice. The Wikipedia article does describe cases where the equivalent of two weeks' wages (say, $1000 in the local context) was rejected when the experiment was run with real money in Indonesia, just because the split was perceived as unfair, so it does seem that the possible value of the satisfaction gained can be very high indeed.

So, the moral of this story? Know the Games and Play them. Don't do something just because it seems correct at a lower game level - recognize how to "win" each subgame and where each subgame stands in relation to higher subgames, and try to recenter on The Game once in a while.

Next: I'm Spinning Around...