Professor Ben Polak: Okay, so last time we came
across a new idea, although it wasn't very new for
a lot of you, and that was the idea of Nash
Equilibrium. What I want to do today is
discuss Nash Equilibrium and see how we find that
equilibrium in rather simple examples.
And then in the second half of the day I want to look at an
application where we actually have some fun and play a game.
At least I hope it's fun. But let's start by putting down
a formal definition. We only used a rather informal
one last week, so here's a formal one.
A strategy profile--remember a profile is one strategy for each
player, so it's going to be S_1*,
S_2*, all the way up to
S_N* if there are N players playing the game--so
this profile is a Nash Equilibrium (and I'm just going
to write NE in this class for Nash Equilibrium from now on)
if, for each player i, her choice--so her choice
here is S_i*, her part of that
profile--is a best response to the other
players' choices. Of course, the other players'
choices here are S_-i*, so everyone is
playing a best response to everyone else.
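For those who like to see a definition in code, here's a minimal sketch in Python of that "best response to everyone else" test. The representation (a list of strategy sets plus a dictionary from profiles to payoff tuples) and all the names are my own, not anything from the course:

```python
def is_nash_equilibrium(strategies, payoffs, profile):
    """True if `profile` (one strategy per player) is a Nash
    Equilibrium: holding everyone else fixed, no player i can do
    strictly better than profile[i] by deviating unilaterally.
    `strategies[i]` lists player i's options; `payoffs` maps each
    full profile tuple to a tuple of payoffs, one per player."""
    for i in range(len(profile)):
        current = payoffs[profile][i]
        for alt in strategies[i]:
            deviation = profile[:i] + (alt,) + profile[i + 1:]
            if payoffs[deviation][i] > current:  # a strict improvement exists
                return False
    return True

# A toy 2x2 coordination game, just to exercise the definition
strategies = [["a", "b"], ["a", "b"]]
payoffs = {("a", "a"): (1, 1), ("a", "b"): (0, 0),
           ("b", "a"): (0, 0), ("b", "b"): (1, 1)}
```

In this toy game both matched profiles pass the test and the mismatched ones fail it, which previews something we'll see later: a game can have more than one Nash Equilibrium.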
Now, this is by far the most commonly used solution concept
in Game Theory. So those of you who are
interviewing for McKinsey or something, you're going to find
that they're going to expect you to know what this is.
So one reason for knowing what it is, is because it's in the
textbooks, it's going to be used in lots of applications,
it's going to be used in your McKinsey interview.
That's not a very good reason and I certainly don't want you
to jump to the conclusion that now that we've gotten to Nash
Equilibrium, everything we've done up to now is in some sense
irrelevant. That's not the case.
It's not always going to be the case that people always play a
Nash Equilibrium. For example,
when we played the numbers game, the game when you chose a
number, we've already discussed last
week or last time, that the equilibrium in that
game is for everyone to choose one,
but when we actually played the game, the average was much
higher than that: the average was about 13.
It is true that when we played it repeatedly,
it seemed to converge towards 1,
but the play of the game when we played it just one shot first
time, wasn't a Nash Equilibrium. So we shouldn't make the
mistake of thinking people always play Nash Equilibrium,
or that people, "if they're rational," play Nash Equilibrium.
Neither of those statements is true.
Nevertheless, there are some good reasons for
thinking about Nash Equilibrium other than the fact it's used by
other people, and let's talk about those a
bit. So I want to put down some
motivations here--the first motivation we already discussed
last time. In fact, somebody in the
audience mentioned it, and it's the idea of "no
regrets." So what is this idea?
It says, suppose we're looking at a Nash Equilibrium.
If we hold the strategies of everyone else fixed,
no individual i has an incentive to deviate,
to move away. Alright, I'll say it again.
Holding everyone else's actions fixed, no individual has any
incentive to move away. Let me be a little more careful
here; no individual has any
strict incentive to move away.
We'll see if that actually matters.
So no individual can do strictly better by moving away.
No individual can do strictly better by deviating,
holding everyone else's actions fixed.
So why do I call that "no regrets"? It means, having played the
game, suppose you did in fact play a Nash Equilibrium and then
you looked back at what you had done,
and now you know what everyone else has done and you say,
"Do I regret my actions?" And the answer is,
"No, I don't regret my actions because I did the best I could
given what they did." So that seems like a fairly
important sort of central idea for why we should care about
Nash Equilibrium. Here's a second idea,
and we'll see others arise in the course of today.
A second idea is that a Nash Equilibrium can be thought of as
self-fulfilling beliefs. So, in the last week or so
we've talked a fair amount about beliefs.
If I believe the goal keeper's going to dive this way I should
shoot that way and so on. But of course we didn't talk
about any beliefs in particular. These beliefs,
if I believe that--if everyone in the game believes that
everyone else is going to play their part of a particular Nash
Equilibrium, then everyone will, in fact,
play their part of that Nash Equilibrium.
Now, why? Why is it the case if everyone
believes that everyone else is playing their part of this
particular Nash Equilibrium that it's self-fulfilling and people
actually will play that way? Why is that the case?
Anybody? Can we get this guy in red?
Student: Because your Nash Equilibrium matches the
best response against both. Professor Ben Polak:
Exactly, so it's really--it's almost a repeat of the first
thing. If I think everyone else is
going to play their particular--if I think players 2
through N are going to play S_2* through
S_N*--then by definition my best response is
to play S_1* so I will in fact play my part in the Nash
Equilibrium. Good, so as part of the
definition, we can see these are self-fulfilling beliefs.
Let's just remind ourselves how that arose in the example we
looked at at the end of last time. I'm not going to go back and
re-analyze it, but I just want to sort of make
sure that we followed it. So, we had this picture last
time in the partnership game in which people were choosing
effort levels. And this line was the best
response for Player 1 as a function of Player 2's choice.
And this line was the best response of Player 2 as a
function of Player 1's choice. This is the picture we saw last
time. And let's just look at how
those--it's no secret here what the Nash Equilibrium is:
the Nash Equilibrium is where the lines cross--but let's just
see how it maps out to those two motivations we just said.
So, how about self-fulfilling beliefs?
Well, if Player--sorry, I put 1, that should be 2--if
Player 1 believes that Player 2 is going to choose this
strategy, then Player 1 should choose
this strategy. If Player 1 thinks Player 2
should take this strategy, then Player 1 should choose
this strategy. If Player 1 thinks Player 2 is
choosing this strategy, then Player 1 should choose
this strategy and so on; that's what it means to be best
response. But if Player 1 thinks that
Player 2 is playing exactly her Nash strategy then Player 1's
best response is to respond by playing his Nash strategy.
And conversely, if Player 2 thinks Player 1 is
playing his Nash strategy, then Player 2's best response
indeed is to play her Nash strategy.
So, you can see that's a self-fulfilling belief.
If both people think that's what's going to happen,
that is indeed what's going to happen.
How about the idea of no regrets?
So here's Player 1; she wakes up the next
morning--oh I'm sorry it was a he wasn't it?
He wakes up the next morning and he says, "I chose
S_1*, do I regret this?"
Well, now he knows what Player 2 chose;
Player 2 chose S_2* and he says, "no that's the best
I could have done. Given that Player 2 did in fact
choose S_2*, I have no regrets about
choosing S_1*; that in fact was my best
response." Notice that wouldn't be true at
the other outcome. So, for example,
if Player 1 had chosen S_1* but Player 2 had
chosen some other strategy, let's say S_2 prime,
then Player 1 would have regrets.
Player 1 would wake up the next morning and say,
"oh I thought Player 2 was going to play S_2*;
in fact, she chose S_2 prime.
I regret having chosen S_1*;
I would have rather chosen S_1 prime.
So, only at the Nash Equilibrium are there no
regrets. Everyone okay with that?
This is just revisiting really what we did last time and
underlining these points. So, I want to spend quite a lot
of time today just getting used to the idea of Nash Equilibrium
and trying to find Nash Equilibrium.
(I got to turn off that projector that's in the way
there. Is that going to upset the
lights a lot?) So okay, so what I want to do is I want
to look at some very simple games with a small number of
players to start with, and a small number of
strategies, and I want us to get used to how we would find the
Nash Equilibria in those simple games.
We'll start slowly and then we'll get a little faster.
So, let's start with this game, very simple game with two
players. Each player has three
strategies and I'm not going to motivate this game.
It's just some random game. Player 1 can choose up,
middle, or down, and Player 2 can choose left,
center, and right and the payoffs are
as follows: (0,4) (4,0) (5,3) (4,0) (0,4) (5,3) again (3,5)
(3,5) and (6,6). So, we could discuss--if we had
more time, we could actually play this game--but it isn't a very
exciting game, so let's leave our playing for
later, and instead, let's try and figure out what
are the Nash Equilibria in this game and how we're going to go
about finding it. The way we're going to go about
finding them is going to mimic what we did last time.
Last time, we had a more complicated game which was two
players with a continuum of strategies,
and what we did was we figured out Player 1's best responses
and Player 2's best responses: Player 1's best response to
what Player 2's doing and Player 2's best response to what Player
1 is doing; and then we looked where they
coincided and that was the Nash Equilibrium.
We're going to do exactly the same in this simple game here,
so we're going to start off by figuring out what Player 1's
best response looks like. So, in particular,
what would be Player 1's best response if Player 2 chooses
left? Let's get the mikes up so I can
sort of challenge people. Anybody?
Not such a hard question. Do you want to try the woman in
the middle? Who's closest to the woman in
the middle? Yeah.
Student: The middle. Professor Ben Polak:
Okay, so Player 1's best response in this case is middle,
because 4 is bigger than 0, and it's bigger than 3.
So, to mark that fact what we can do is--let's put a
circle--let's make it green--let's put a circle around
that 4. Not hard, so let's do it again.
What is Player 1's best response if Player 2 chooses
center? Let's get somebody on the other
side. (Somebody should feel free to
cold call somebody here.) How about cold calling the guy with
the Yale football shirt on? There we go.
Student: Up. Professor Ben Polak: All
right, so up is the correct answer.
(So, one triumph for Yale football).
So 4 is bigger than 3, bigger than 0,
so good, thank you. Ale why don't you do the same?
Why don't you cold call somebody who's going to tell me
what's the best response for Player 1 to right.
Shout it out. Student: Down.
Professor Ben Polak: Down, okay so best response is
down because 6 is bigger than 5, which is bigger than 5;
I mean 6 is bigger than 5. So what we've done here is
we've found Player 1's best response to left,
best response to center, and best response to right and
in passing notice that each of Player 1's strategies is a best
response to something, so nothing would be knocked out
by our domination arguments here, and nothing would be
knocked out by our never a best response arguments here for
Player 1. Let's do the same for Player 2;
so why don't I keep the mikes doing some cold calling,
so why don't we--you want to cold call somebody at the back
to tell me what is Player 2's best response against up?
Student: Left. Professor Ben Polak: So,
the gentleman in blue says left because 4 is bigger than 3,
and 3 is bigger than 0. Let's switch colors and switch
polygons and put squares around these.
I don't insist on the circles and the squares.
If you have a thing for hexagons you can do that
too--whatever you want. What's Player 2's--let's write
it in here--the answer, the answer was that the best
response was left, and Player 2's best response to
middle--so do you want to grab somebody over there?
Shout it out! Student: Center.
Professor Ben Polak: Because 4 is bigger than 3,
and 3 is bigger than 0, so that's center.
So, in my color coding that puts me at--that gives me a
square here. And finally,
Player 2's best response to down?
And your turn Ale, let's grab somebody.
Student: Left--right. Professor Ben Polak:
Right, okay good. Because 6 is bigger than 5
again, so here we go. What we've done now is found
Player 1's best response function that was just the
analog of finding the best response line we did here.
Here we drew it as a straight line in white,
but we could have drawn it as a sequence of little circles in
green; it would have looked like this.
The same idea, I'm not going to do the whole
thing but there would be a whole bunch of circles looking like
this. And then we found Player 2's
best response for each choice of Player 1 and we used pink
rectangles and that's the same idea as we did over here.
Here we did it in continuous strategies in calculus,
but it's the same idea. So, imagine I'm drawing lots of
pink rectangles. And again, I'm not going to do
it, but you can do it in your notes;
same basic idea. Just as the Nash Equilibrium
must be where these best response lines coincided here
because that's the point at which each player is playing a
best response to each other, so the Nash Equilibrium over
here is--do you want to cold call somebody?
Do you want to just grab somebody in the row behind you
or something, wherever?
Anybody? Yeah, I think you in the Italia
soccer shirt; I think you're going to get
picked on, so okay what's the Nash Equilibrium here?
Student: The down right. Professor Ben Polak: So
the down-right square. So, the Nash Equilibrium here
is--no prizes--no World Cup for that one--is down right.
So, why is that the Nash Equilibrium?
Because at that point Player 1 is playing a best response to
Player 2 and Player 2 is playing a best response to Player 1.
Now, I want to try and convince you, particularly those of you
who are worried by the homework assignments, that that was not
rocket science. It's not hard to find Nash
Equilibria in games. It didn't take us very long,
and we went pretty slow actually.
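In fact, the circle-and-square procedure is mechanical enough to sketch in a few lines of Python: circle the row player's best payoff in each column, square the column player's best payoff in each row, and keep the cells that get both marks. The payoffs below are the ones from the board; the function name is my own:

```python
def best_responses(payoffs, rows, cols):
    """Return (circles, squares): the cells where the row player is
    best-responding (one or more per column) and where the column
    player is best-responding (one or more per row)."""
    circles = set()
    for c in cols:
        best = max(payoffs[(r, c)][0] for r in rows)
        circles |= {(r, c) for r in rows if payoffs[(r, c)][0] == best}
    squares = set()
    for r in rows:
        best = max(payoffs[(r, c)][1] for c in cols)
        squares |= {(r, c) for c in cols if payoffs[(r, c)][1] == best}
    return circles, squares

rows = ["up", "middle", "down"]
cols = ["left", "center", "right"]
payoffs = {("up", "left"): (0, 4), ("up", "center"): (4, 0), ("up", "right"): (5, 3),
           ("middle", "left"): (4, 0), ("middle", "center"): (0, 4), ("middle", "right"): (5, 3),
           ("down", "left"): (3, 5), ("down", "center"): (3, 5), ("down", "right"): (6, 6)}

circles, squares = best_responses(payoffs, rows, cols)
nash = circles & squares  # cells with both a circle and a square
```

The intersection comes out as the single cell (down, right), matching what we found by hand.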
So let's look at another example.
Oh, before I do, notice this--before I leave
this game I'm just going to make one other remark.
Notice that in this game that each strategy of Player 1 is a
best response to something, and each strategy of Player 2
is a best response to something. So, had we used the methods
from earlier in the class, that's to say,
deleting dominated strategies or deleting strategies that are
never a best response, we'd have gotten nowhere in
this game. So, Nash Equilibrium goes a
little further in narrowing down our predictions.
But we also learned something from that.
We argued last time, in the last few weeks,
that rationality, or even mutual knowledge of
rationality or even common knowledge of rationality
couldn't really get us much further than deleting dominated
strategies, or if you like,
deleting strategies that are never best responses.
So, here we've concluded that the Nash Equilibrium of this
game is down right, a very particular prediction,
but notice a perfectly rational player here
could choose middle. The reason Player 1 could
choose middle could be that they
say (rationally) they should choose middle because they think
Player 2 is choosing left. Then you say,
"well, why would you think Player 2 is going to choose
left?" Player 1 could say,
"I think Player 2 is going to choose left because Player 2
thinks I'm going to play up." And then you say,
"but how could Player 2 possibly think that you're going
to play up?" And then Player 1 would say,
"I think Player 2 thinks I'm going to play up because Player
2 thinks that I think that he's going to play center."
And then you could say, "how could Player 2 play
…" etc., etc., etc.
And you could see you could work around a little cycle
there. Nobody would be irrational
about anything: Everything would be perfectly
well justified in terms of beliefs.
It's just we wouldn't have a nice fixed point.
We wouldn't have a nice point where it's a simple argument.
In the case of down right, Player 1 thinks it's okay to
play down because he thinks 2 is going to play right,
and Player 2 thinks they're going to play right because he
thinks Player 1's going to play down.
So, just to underline what I'm saying rather messily there,
rationality, and those kinds of arguments
should not lead us to the conclusion that people
necessarily play Nash Equilibrium.
We need a little bit more; we need a little bit more than
that. Nevertheless,
we are going to focus on Nash Equilibrium in the course.
Let's have a look at another example.
Again, we'll keep it simple for now and we'll keep to a
two-player, three-strategy game; up, middle, down,
left, center, right and this time the payoffs
are as follows: (0,2) (2,3) (4,3) (11,1) (3,2)
(0,0) (0,3) (1,0) and (8,0). So, this is a slightly messier
game. The numbers are really just
whatever came into my head when I was writing it down.
Once again, we want to find out what the Nash Equilibrium is in
this game and our method is exactly the same.
We're going to, for each player,
figure out their best responses for each possible choice of the
other player. So, rather than write it out on
the--let me just go to my green circles and red squares and let
me get my mikes up again, so we can cold call people a
bit. Can we get the--where's the
mike? Thank you.
So, Ale do you want to just grab somebody and ask them
what's the best response against left?
Grab anybody. What's the best one against
left? Student: Middle.
Professor Ben Polak: Middle, okay.
Good, so middle is the best response against left,
because 11 is bigger than 0. Myto, do you want to grab
somebody? What's the best response
against center? Student: Middle.
Professor Ben Polak: Middle again because 3 is bigger
than 2 is bigger than 1. And grab somebody at the back
and tell me what the best response is against right?
Anybody just dive in, yep. Student: Down.
Professor Ben Polak: Okay down, good.
Let's do the same for the column player,
so watch the column player's--Ale,
find somebody; take the guy who is two rows
behind you who is about to fall asleep and ask him what's the
best response to up? Student: Best response
is--for which player? Professor Ben Polak: For
Player 2. What's Player 2's best response
for up? Sorry that's unfair,
pick on somebody else. That's all right;
get the guy right behind you. What's the best response for up?
Student: So it's [inaudible]
Professor Ben Polak: So, this is a slightly more tricky
one. The best response to up is
either center or right. This is why we got you to sign
those legal forms. The best response to up is either
center or right. The best response to middle?
Student: Center. Professor Ben Polak: Is
center, thank you. And the best response to down?
Student: Left. Professor Ben Polak: So,
the best response to down is left.
Here again, it didn't take us very long.
We're getting quicker at this. The Nash Equilibrium in this
case is--let's go back to the guy who was asleep and give him
an easier question this time. So the Nash Equilibrium in this
case is? Student: Nash
Equilibrium would be the two [inaudible]
Professor Ben Polak: Exactly, so middle and
center. Notice that this is a Nash
Equilibrium. Why is it the Nash Equilibrium?
Because each player is playing a best response to each other.
If I'm Player 1 and I think that Player 2 is playing center,
I should play middle. If I'm Player 2 and
I think Player 1 is playing middle, I should play center.
If, in fact, we play this way neither person
has any regrets. Notice they didn't do
particularly well in this game. In particular,
Player 1 would have liked to get this 11 and this 8,
but they don't regret their action because,
given that Player 2 played center, the best they could have
got was 3, which they got by playing middle.
Notice in passing in this game, that best responses needn't be
unique. Sometimes there can be a tie in
your best response. It could happen.
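The same mechanical check handles this game too, ties included: if we collect every payoff-maximizing cell rather than a single one, Player 2's best response to up comes out as both center and right, and the unique equilibrium is still (middle, center). A sketch, with the payoffs as written on the board:

```python
rows = ["up", "middle", "down"]
cols = ["left", "center", "right"]
p = {("up", "left"): (0, 2), ("up", "center"): (2, 3), ("up", "right"): (4, 3),
     ("middle", "left"): (11, 1), ("middle", "center"): (3, 2), ("middle", "right"): (0, 0),
     ("down", "left"): (0, 3), ("down", "center"): (1, 0), ("down", "right"): (8, 0)}

# Player 2's best responses to "up": keep every tie, so there can be two
best = max(p[("up", c)][1] for c in cols)
br_to_up = [c for c in cols if p[("up", c)][1] == best]

# A cell is a Nash Equilibrium when both payoffs are best responses
nash = [(r, c) for r in rows for c in cols
        if p[(r, c)][0] == max(p[(rr, c)][0] for rr in rows)
        and p[(r, c)][1] == max(p[(r, cc)][1] for cc in cols)]
```

Even though Player 1 never gets near the 11 or the 8, (middle, center) is the only cell where both players are best-responding at once.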
So, what have we seen? We've seen how to find Nash
Equilibrium. And clearly it's very closely
related to the idea of best response, which is an idea we've
been developing over the last week or so.
But let's go back earlier in the course, and ask how does it
relate to the idea of dominance? This will be our third concept
in this course. We had a concept about
dominance, we had a concept about best response,
and now we're at Nash Equilibrium.
It's, I think, obvious how it relates to best
response: it's when best responses coincide.
How about how it relates to dominance?
Well, to do that let's go back. What we're going to do is we're
going to relate the Nash Equilibrium to our idea of
dominance, or of domination,
and to do that an obvious place to start is to go back to a game
that we've seen before. So here's a game where Player 1
and Player 2 are choosing α and β and the payoffs are
(0,0) (3,-1) (-1,3) and (1,1). So, this is the game we saw the
first time. This is the Prisoner's Dilemma.
We know in this game--I'm not going to bother to cold call
people for it--we know in this game that β
is dominated--is strictly dominated by α.
It's something that we learned the very first time.
Just to check: so against α,
choosing α gets you 0, β
gets you -1. Against β,
choosing α gets you 3, β
gets you 1, and in either case α is strictly better,
so α strictly dominates β, and of course it's the
same for the column player since the game's completely symmetric.
So now, let's find the Nash Equilibrium in this game.
I think we know what it's going to be, but let's just do it in
this sort of slow way. So, the best response to α
must be α. The best response to β
must be α, and for the column player the
best response to α must be α,
and the best response to β must be α.
Everyone okay with that? I'm just kind of--I'm rushing
it a bit because it's kind of obvious, is that right?
So, the Nash Equilibrium in
this game is (α, α). In other words,
it's what we would have arrived at had we just deleted strictly
dominated strategies. So, here's a case where α
strictly dominates β, and sure enough Nash
Equilibrium coincides, it gives us the same answer:
both people choose α in this game.
So, there's nothing--there's no news here, I'm just checking
that we're not getting anything weird going on.
So, let's just be a bit more careful.
How do we know it's the case--I'm going to claim it's
the case, that no strictly dominated strategy--in this case
β--no strictly dominated strategy could ever be played in
a Nash Equilibrium. I claim--and that's a good
thing because we want these ideas to coincide--I claim that
no strictly dominated strategy could ever be played in the Nash
Equilibrium. Why is that the case?
There's a guy up here, yeah. Student: The strictly
dominated strategy isn't the best response of anything.
Professor Ben Polak: Good.
Student: It has to be a best response to be in the
Nash Equilibrium. Professor Ben Polak:
Good, very good. So a strictly dominated
strategy is never a best response to anything.
In particular, the thing that dominates it
always does better. So, in particular,
it can't be a best response to the thing being played in the
Nash Equilibrium. So that's actually a very good
proof that no strictly dominated strategy
could ever be played in a Nash Equilibrium.
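For this particular game the claim is easy to check directly: α beats β against every column choice, so β is never even weakly a best response. A quick sketch with the Prisoner's Dilemma payoffs from the board (strategy names spelled out, otherwise nothing added):

```python
# Row player's payoffs listed first: (alpha, alpha) -> (0, 0), etc.
pd = {("alpha", "alpha"): (0, 0), ("alpha", "beta"): (3, -1),
      ("beta", "alpha"): (-1, 3), ("beta", "beta"): (1, 1)}
moves = ["alpha", "beta"]

# alpha strictly dominates beta for the row player: better against everything
alpha_dominates = all(pd[("alpha", c)][0] > pd[("beta", c)][0] for c in moves)

# ...so beta is never a best response to any column choice,
# and hence can never be played in a Nash Equilibrium
beta_ever_best = any(pd[("beta", c)][0] >= max(pd[(r, c)][0] for r in moves)
                     for c in moves)
```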
But now we have, unfortunately,
a little wrinkle, and the wrinkle,
unfortunately, is weakly dominated--is weak
domination. So we've argued,
it was fairly easy to argue, that no strictly dominated
strategy is ever going to reappear annoyingly in a Nash
Equilibrium. But unfortunately,
the same is not true about weakly dominated strategies.
And in some sense, you had a foreshadowing of this
problem a little bit on the homework assignment,
where you saw on one of the problems (I think it was the
second problem on the homework assignment) that deleting weakly
dominated strategies can lead you to do things that might make
you feel a little bit uneasy. So weak domination is really
not as secure a notion as strict domination,
and we'll see a trivial example of that here.
And again, so here's the trivial example,
not an interesting example, but it just makes the point.
So here's a 2 x 2 game. Player 1 can choose up or down
and Player 2 can choose left or right, and the payoffs are
really trivial: (1,1) (0,0) (0,0) (0,0).
So, let's figure out what the Nash Equilibrium is in this game
and I'm not going to bother cold calling because it's too easy.
So the best response for Player 1, if Player 2 plays left is
clear to choose up. And the best response of Player
1 if Player 2 chooses right is--well, either up or down will
do, because either way he gets 0.
So these are both best responses.
Is that correct? They're both best responses;
they both did equally well. Conversely, Player 2's best
response if Player 1 chooses up is, sure enough,
to choose left, and that's kind of the answer
we'd like to have, but unfortunately,
when Player 1 plays down, Player 2's best response is
either left or right. It makes no difference.
They get 0 either way. So, what are the Nash
Equilibria in this game? So, notice the first
observation is that there's more than one Nash Equilibrium in
this game. We haven't seen that before.
There's a Nash Equilibrium which everybody in the room I think,
I hope, thinks is kind of the sensible prediction in this
game. In this game,
I think all of you, I hope, all of you if you're
Player 1 would choose up and all of you for Player 2 would choose
left. Is that right?
Is that correct? It's hard to imagine coming up
with an argument that wouldn't lead you to choose up and left
in this game. However--and here's the unfortunate
part--up left is a Nash
Equilibrium but so is down right.
If Player 2 is choosing right, your best response weakly is to
choose down. If Player 1 is choosing down,
Player 2's best response weakly is to choose right.
Here, this is arising because of the definition of Nash;
it's a very precise definition. I erased it now.
When we looked at the definition we said,
something is a Nash Equilibrium when each person is playing a
best response to each other; another way of saying that is
no player has a strict incentive to deviate.
No player can do strictly better by moving away.
So here at down right Player 1 doesn't do strictly better;
it's just a tie if she moves away.
And Player 2 doesn't do strictly better if he moves
away. It's a tie.
He gets 0 either way. So, here we have something
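Running the same both-payoffs-maximal check on this little game turns up both equilibria, the sensible one and the silly one, precisely because the definition only rules out strict improvements and deviating from (down, right) merely ties. A sketch with the payoffs above:

```python
g = {("up", "left"): (1, 1), ("up", "right"): (0, 0),
     ("down", "left"): (0, 0), ("down", "right"): (0, 0)}
rows, cols = ["up", "down"], ["left", "right"]

# Keep every cell where both players are (weakly) best-responding
nash = [(r, c) for r in rows for c in cols
        if g[(r, c)][0] == max(g[(rr, c)][0] for rr in rows)
        and g[(r, c)][1] == max(g[(r, cc)][1] for cc in cols)]
# (down, right) survives: moving away only ties at 0, never strictly gains
```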
which is going to worry us going on in the course.
Not only are we getting many Nash
Equilibria--that by itself shouldn't worry us,
it's a fact of life.
Nash Equilibria seems silly. If you went and tried to
explain to your roommates and said, "I predict both of these
outcomes in this game," they'd laugh at you.
It's obvious in some sense that this has to be the sensible
prediction. So, just a sort of worrying
note before we move on. So, this was pretty formal and
kind of not very exciting so far, so let's try and move on to
something a little bit more fun. So, what I want to do now is I
want to look at a different game.
Again, we're going to try and find Nash Equilibrium in this
game but we're going to do more than that,
we're going to talk about the game a bit, and a feature of
this game which is--to distinguish it from what we've
done so far--is the game we're about to look at involves many
players, although each player only has a
few strategies. So, what I want to do is I want
to convince you how to find--to discuss how to find Nash
Equilibria in the game which, unlike these games,
doesn't just have two players--it has many
players--but fortunately, not many strategies per player.
So let me use this board. So this is going to be called
The Investment Game and we're going to play this game,
although not for real. So, the players in this game,
as I've just suggested, the players are going to be
you. So, everyone who is getting
sleepy just looking at this kind of analysis should wake up now,
you have to play. The strategies in this game are
very simple, the strategy sets, or the strategy alternatives.
Each of you can choose between investing nothing in a class
project, $0, or invest $10. So, I'm sometimes going to
refer to investing nothing as not investing,
is that okay? That seems like a natural thing to do.
You're either going to invest $10 or nothing,
you're not going to invest. So that's the players and those
are the strategies, so as usual we're missing
something. What we're missing are the
payoffs. So here are the payoffs;
so the payoffs are as follows: if you don't invest,
if you do not invest, you invest nothing,
then your payoff is $0. So nothing ventured nothing
gained: natural thing. But if you do invest $10,
remember each of you is going to invest $10 then your
individual payoffs are as follows.
Here's the good news. You're going to get a profit of
$5. The way this is going to work
is you're going to invest $10 so you'll make a gross profit of
$15 minus the $10 you originally invested for a net of $5.
So a net profit--so $5 net profit, that's the good news.
But that requires at least 90% of the class to invest:
greater than or equal to 90% of the class.
If at least 90% of the class invests, you're going to make
essentially 50% profit. Unfortunately,
the bad news is you're going to lose your $10,
get nothing back so this is a net loss, if fewer than 90% of
the class invest. One key rule here:
you're not allowed to talk to each other; no communication in
the class. No hand signals,
no secret winks, nothing else.
So, everyone understand the game?
Including up in the balcony, everyone understand the game?
So, what I want you to do--I should say first of all,
we can't play this game for real because there's something
like 250 of you and I don't have that kind of cash lying around.
So we're not--pretend we're playing this for real.
So, without communicating I want each of you to write on the
corner of your notepad whether you're going to invest or not.
You can write Y if you're going to and N if you're not going to
invest. Don't discuss it with each
other; just write down on the corner
of your notepad Y if you're going to invest and N if you're
not going to invest. Don't discuss it guys.
Now, show your neighbor what you did, just so you can--your
neighbor can make you honest. Now, let's have a show of
hands, so what I want to do is I want to have a show of hands,
everybody who invested. Don't look around;
just raise your hands, everyone who invested?
Everyone who didn't invest! Oh that's more than 10%.
Let's do that again. So everyone who invested raised
their hands … and everyone who didn't invest
raise their hands. So I don't know what that is,
maybe that's about half. So now I'm thinking we should
have played this game for real. I want to get some discussion
going about this. I'm going to discuss this for a
while; there's a lot to be said about
this game. Let me borrow that,
can I borrow this? So this guy;
so what did you do? Student: I invested.
Professor Ben Polak: Why did you invest?
Student: Because I think I can afford to lose $10.
Professor Ben Polak: All right, so he's rich,
that's okay. You can pay for lunch.
Who didn't invest, non-investors?
Yeah, here's one, so why didn't you invest?
Shout it out so everyone can hear.
Student: I didn't invest because to make a profit there
needs to be at least a 2:1 chance that everyone else--that
90% invest. Professor Ben Polak: You
mean to make an expected profit? Student: Yeah,
you only get half back but you would lose the whole thing.
Professor Ben Polak: I see, okay so you're doing some
kind of expected calculation. Other reasons out there?
What did you do? Student: I invested.
Professor Ben Polak: I'm finding the suckers in the room.
Okay, so you invested and why? Student: You usually
stand to gain something. I don't see why anybody would
just not invest because they would never gain anything.
Professor Ben Polak: Your name is?
Student: Clayton. Professor Ben Polak: So
Clayton's saying only by investing can I gain something
here, not investing seems kind of--it
doesn't seem brave in some sense.
Anyone else? Student: It's basically
the same game as the (1,1) (0,0) game in that they both have two Nash
Equilibria, but the payoffs aren't on the same
scale and you have to be really risk averse not to invest,
so I thought it would be-- Professor Ben Polak: So
you invested? Student: I invested,
yeah. Professor Ben Polak: All
right, so let me give this to Ale before I screw up the sound
system here. Okay, so we got different
answers out there. Could people hear each other's
answer a bit? Yeah.
So, we have lots of different views out here.
We have half the people investing and half the people
not investing roughly, and I think you can make
arguments either way. So we'll come back to whether
it's exactly the same as the game we just saw.
So the argument that-- I'm sorry your name--that Patrick
made is it looks a lot like the game we saw before.
We'll see it is related, clearly.
It's related in one sense which we'll discuss now.
So, what are the Nash Equilibria in this game?
Let me get the woman up here? Student: No one invests
and everyone's happy they didn't lose anything;
or everyone invests and nobody's happy.
Professor Ben Polak: Good, I've forgotten your name?
Student: Kate. Professor Ben Polak: So
Kate's saying that there are two Nash Equilibria and (with
apologies to Jude) I'm going to put this on the board.
There are two Nash Equilibria here, one is all invest and
another one is no one invest. These are both Nash Equilibria
in this game. And let's just check that they
are exactly the argument that Kate said.
That if everyone invests then no one would have any regrets,
everyone's best response would be to invest.
If nobody invests, then everyone would be happy
not to have invested, that would be a best response.
It's just--in terms of what Patrick said,
it is a game with two equilibria like the previous
game and there were other similarities to the previous
game, but it's a little bit--the
equilibria in this case are not quite the same as the other
game. In the other game that other
equilibrium, the (0,0) equilibrium seemed like a silly
equilibrium. It isn't like we'd ever predict
it happening. Whereas here, the equilibrium with nobody
investing actually is quite a plausible equilibrium.
If I think no one else is investing then I strictly prefer
not to invest, is that right?
So, two remarks about this. Do you want to do more before I
do two remarks? Well, let's have one remark at
least. How did we find those Nash
Equilibria? What was our sophisticated
mathematical technique for finding the Nash Equilibria on
this game? We should - can you get the
mike on - is it Kate, is that right?
So how do you find the Nash Equilibria?
Student: [Inaudible] Professor Ben Polak: All
right, so in principle you could--I mean that works but in
principle, you could have looked at every
possible combination and there's lots of possible combinations.
It could have been a combination where 1% invested
and 99% didn't, and 2% invested and 98% didn't
and so on and so forth. We could have checked
rigorously what everyone's best response was in each case,
but what did we actually end up doing in this game?
What's the method? The guy in--yeah.
Student: [Inaudible] Professor Ben Polak:
That's true, so that certainly makes it easier.
I claim, however--I mean you're being--both of you are being
more rigorous and more mathematical than I'm tempted to
be here. I think the easy method
here--the easy sophisticated math here is "to guess."
My guess is the easy thing to do is to guess what might
be the equilibrium here and then what?
Then check; so a good method here in these
games is to guess and check. And guess and check is really
not a bad way to try and find Nash Equilibria.
The reason it's not a bad way is because checking is fairly
easy. Guessing is hard;
you might miss something. There might be a Nash
Equilibrium hidden under a stone somewhere.
In America it's a "rock." It may be hidden under a rock
somewhere. But checking with something
that you--some putative equilibrium, some candidate
equilibrium, checking whether it is an
equilibrium is easy because all you have to do is check that
nobody wants to move, nobody wants to deviate.
So again, in practice that's what you're going to end up
doing in this game, and it turns out to be very
easy to guess and check, which is why you're able to
find it. So guess and check is a very
useful method in these games where there are lots and lots of
players, but not many strategies per
player, and it works pretty well.
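To make "guess and check" concrete, here is a minimal Python sketch. The payoffs are assumptions consistent with the classroom game (invest $10; net +$5 if at least 90% invest, lose $10 otherwise; not investing pays 0): we guess a profile and check that no single player gains by deviating.

```python
# Guess-and-check for the classroom investment game.
# Assumed payoffs (not stated in full in the transcript): investing nets
# +5 if at least 90% of players invest, -10 otherwise; not investing pays 0.

def payoff(invests, frac_investing):
    if not invests:
        return 0
    return 5 if frac_investing >= 0.9 else -10

def is_nash(profile):
    """Check a guessed profile: no single player should gain by deviating."""
    n = len(profile)
    frac = sum(profile) / n
    for i, s in enumerate(profile):
        deviated = list(profile)
        deviated[i] = 1 - s
        frac_dev = sum(deviated) / n
        if payoff(1 - s, frac_dev) > payoff(s, frac):
            return False
    return True

n = 100
print(is_nash([1] * n))              # all invest: True
print(is_nash([0] * n))              # no one invests: True
print(is_nash([1] * 50 + [0] * 50))  # a 50/50 split: False
```

Guessing supplies the candidate profiles; the check itself is just a loop over unilateral deviations, which is why checking is the easy part.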
Okay, now we've got this game up on the board,
I want to spend a while discussing it because it's kind
of important. So, what I want to do now is I
want to remind us what happened just now.
So, what happened just now? Can we raise the yeses again,
the invest again. Raise the not invested,
not invest. And I want to remind you guys
you all owe me $10. What I want to do is I want to
play it again. No communication,
write it down again on the corner of your notepad what
you're going to do. Don't communicate you guys;
show your neighbor. And now we're going to poll
again, so ready. Without cheating,
without looking around you, if you invested--let Jude get a
good view of you--if you invested raise your hand now.
If you didn't invest--okay. All right, can I look at the
investors again? Raise your hands honestly;
we've got a few investors still, so these guys really owe
me money now, that's good.
Let's do it again, third time, hang on a second.
So third time, write it down,
and pretend this is real cash. Now, if you invested the third
time raise your hand. There's a few suckers born
every day, but basically. So, where are we heading here?
Where are we heading pretty rapidly?
We're heading towards an equilibrium;
let's just make sure we confirm that.
So everyone who didn't invest that third time raise your
hands. That's pretty close;
that show of hands is pretty close to a Nash Equilibrium
strategy, is that right? So, here's an example of a
third reason from what we already mentioned last time,
but a third reason why we might be interested in Nash
Equilibria. There are certain circumstances
in which play converges in the natural sense--not in a formal
sense but in a natural sense--to an equilibrium.
With the exception of a few dogged people who want to pay
for my lunch, almost everyone else was
converging to an equilibrium. So play converged fairly
rapidly to the Nash Equilibrium. But we discussed there were two
Nash Equilibria in this game. Is one of these Nash
Equilibria, ignoring me for a second, is one of these Nash
Equilibria better than the other?
Yeah, clearly the "everyone investing" Nash Equilibrium is
the better one, is that right?
Everyone agree? Everyone investing is a better
Nash Equilibrium for everyone in the class, than everyone not
investing, is that correct? Nevertheless,
where we were converging in this case was what?
Where we're converging was the bad equilibrium.
We were converging rapidly to a very bad equilibrium,
an equilibrium in which no one gets anything,
in which all that money is left on the table.
So how can that be? How did we converge to this bad
equilibrium? To be a bit more formal,
the bad equilibrium, the no-invest equilibrium here,
is Pareto dominated by the good equilibrium.
Everybody is strictly better off at the good equilibrium than
the bad equilibrium. It Pareto dominates,
to use an expression you probably learned in 150 or 115.
Nevertheless, we're going to the bad one;
we're heading to the bad equilibrium.
Why did we end up going to the bad equilibrium rather than the
good equilibrium? Can we get the guy in the gray?
Student: Well, it was a bit of [inaudible]
Professor Ben Polak: Just now you mean?
Say what you mean a bit more, yeah that's good.
Just say a bit more. Student: So it seemed
like people didn't have a lot of confidence that other people
were going to invest. Professor Ben Polak: So
one way of saying that was when we started out we were roughly
even, roughly half/half but that was
already bad for the people who invested and then--so we started
out at half/half which was below that critical threshold of 90%,
is that right? From then on in,
we just tumbled down. So one way to say this--one way
to think about that is it may matter, in this case,
where we started. Suppose the first time we
played the game in this class this morning,
suppose that 93% of the class had invested.
In which case, those 93% would all have made
money. Now I'm--my guess is--I can't
prove this, my guess is, we might have converged the
other way and converged up to the good equilibrium.
Does that make sense? So people figured out that
they--people who didn't invest the first time--actually they
played the best response to the class, so they stayed put.
And those of you who did invest, a lot of you started not
investing as you caught up in this spiral downward and we
ended up not investing. But had we started off above
the critical threshold, the threshold here is 90%,
and had you made money the first time around,
then it would have been the non-investors that would have
regretted that choice and they might have switched into
investing, and we might have--I'm not
saying necessarily, but we might have gone the
other way. Yeah, can we get a mike on?
Student: What if it had been like a 30/70 thing or
[inaudible] Professor Ben Polak:
Yes, that's a good question. Suppose we had been close to
the threshold but below, so I don't know is the answer.
We didn't do the experiment but it seems likely that the
higher--the closer we got to the threshold the better chance
there would have been in going up.
My guess is--and this is a guess--my guess is if we started
below the threshold we would probably have come down,
and if we had started above the threshold, we would probably
have gone up, but that's not a theorem.
I'm just speculating on what might have happened;
speculating on your speculations.
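That speculation about thresholds can be illustrated with a toy best-response dynamic. This is my construction, not anything from the lecture: each round, every player switches to the best response to last round's play with some probability and otherwise stays put, using the same assumed payoffs as before (investing pays off only if at least 90% invested).

```python
import random

# Toy dynamic: each round, a player switches to the best response to last
# round's play with probability (1 - inertia); otherwise she keeps her action.
# Assumed payoffs: investing is the best response only if at least 90% of
# players invested last round; otherwise not investing is the best response.

def simulate(start_frac, n=100, rounds=5, inertia=0.5, seed=1):
    rng = random.Random(seed)
    k = int(round(start_frac * n))
    players = [1] * k + [0] * (n - k)
    history = [sum(players) / n]
    for _ in range(rounds):
        frac = sum(players) / n
        best = 1 if frac >= 0.9 else 0
        players = [s if rng.random() < inertia else best for s in players]
        history.append(sum(players) / n)
    return history

print(simulate(0.50))  # starts below the 90% threshold: tumbles toward 0
print(simulate(0.93))  # starts above the threshold: climbs toward 1
```

Starting below the threshold, the investing fraction can only fall; starting above it, it can only rise. That matches the guess in the lecture, but it is a stylized model, not a theorem.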
So, here we have a game with two equilibria,
one is good, one is bad;
one is really quite bad, it's Pareto dominated.
Notice that what happened here, the way we spiraled down
coincides with something we've talked about Nash Equilibrium
already, it coincides with this idea of
a self-fulfilling prediction. Provided you think other people
are not going to invest, you're not going to invest.
So, it's a self-fulfilling prediction to take you down to
not investing. Conversely, provided everyone
thinks everyone else is going to invest, then you're going to go
up to the good equilibrium. I think that corresponds to
what the gentleman said in the middle about a bear market
versus a bull market. If it was a bear market,
it looked like everyone else didn't have confidence in
everyone else investing, and then that was a
self-fulfilling prophecy and we ended up with no investment.
Now, we've seen bad outcomes in the class before.
For example, the very first day we saw a
Prisoner's Dilemma. But I claim that though we're
getting a bad outcome here in the class, this is not a
Prisoner's Dilemma. Why is this not a Prisoner's
Dilemma? What's different between--I
mean both games have an equilibrium which is bad.
Prisoner's Dilemma has the bad equilibrium when nobody tidies
their room or both guys go to jail, but I claim this is not a
Prisoner's Dilemma. Get the guy behind you.
Student: No one suffered but no one gets any payoffs.
Professor Ben Polak: Okay, so maybe the outcome isn't
quite so bad. That's fair enough,
I could have made it worse, I could have made it sort of--I
could have made that--I could have lowered the payoffs until
they were pretty bad. Why else is this not a
Prisoner's Dilemma? The woman who's behind you.
Student: Because there's no strictly dominated strategy.
Professor Ben Polak: Good;
so in Prisoner's Dilemma playing α
was always a best response. What led us to the bad outcome
in Prisoner's Dilemma was that α, the defect strategy,
the non-cooperative strategy, the not helping tidy your room
strategy, was always the best thing to do.
Here, in some sense the "good thing," the "moral thing" in
some sense, is to invest but it's not the case that not
investing dominates investing. In fact, if all the people
invest in the room you ought to invest, is that right?
So this is a social problem, but it's not a social problem
of the form of a Prisoner's Dilemma.
So what kind of problem is this? What kind of social problem is
this? The guy in front of you.
Student: Perhaps it's one of cooperation.
Professor Ben Polak: It's one of--okay.
So it's--it would help if people cooperated here but I'm
looking for a different term. The term that came up the first
day--coordination, this is a coordination game.
For those people that read the New Yorker,
I'll put an umlaut over the second "o."
Why is it a coordination game? Because you'd like everyone in
the class to coordinate their responses on invest.
In fact, if they did that, they all would be happy and no
one would have an incentive to defect and there would in fact
be an equilibrium. But unfortunately,
quite often, we fail to have coordination.
Either everyone plays non-invest or,
as happened the first time in class, we get some split with
people losing money. So, I claim that actually this
is not a rare thing in society at all.
There are lots of coordination problems in society.
There are lots of things that look like coordination games.
And often, in coordination games bad outcomes result and I
want to spend most of the rest of today talking about that,
because I think it's important, whether you're an economist or
whatever, so let's talk about it a bit.
What else has the structure of a coordination game,
and therefore can have the outcome that people can be
uncoordinated or can coordinate in the wrong place,
and you end up with a bad equilibrium?
What else looks like that? Let's collect some ideas.
I got a hand way at the back. Can you get the guy who is just
way, way, way at the back, right up against the--yes,
there you go, great thank you.
Wait until the mike gets to you and then yell.
Student: A party on campus is a coordination game.
Professor Ben Polak: Yeah, good okay.
So a party on campus is a coordination game because
what--because you have to coordinate being at the same
place, is that right? That's the idea you had?
Go ahead. Student: [inaudible]
Professor Ben Polak: Good, okay that's good.
So that's another way--there are two ways in which
a party can be a coordination problem.
One is the problem that if people don't show up it's a
lousy party, so you don't want to show up.
Conversely, if everyone is showing up, it's great,
it's a lot of fun, it's the place to be and
everyone wants to show up. Second--so there's two
equilibria there, showing up and--everyone not
showing up or everyone showing up.
Similar idea, the location of parties which
is what I thought you were driving at, but this similar
idea can occur. So it used to be the case in New
Haven that there were different--actually there aren't
many anymore--but there used to be different bars around campus
(none of which you were allowed to go to,
so you don't know about) but anyway, lots of different bars
around campus. And there's a coordination game
where people coordinate on Friday night--or to be more
honest, the graduate students typically Thursday night.
So it used to be the case that one of those bars downtown was
where the drama school people coordinated,
and another one was where the economists coordinated,
and it was a really good equilibrium that they didn't
coordinate at the same place. So one of the things you have
to learn when you go to a new town is where is the meeting
point for the kind of party I want to go to.
Again, you're going to have a failure of coordination,
everyone's wandering around the town getting mugged.
What other things look like coordination problems like that?
Again, way back in the corner there, right behind you,
there you go. Student: Maybe warring
parties in the Civil War signing a treaty.
Professor Ben Polak: Okay, that could be a--that
could be a coordination problem. It has a feeling of being a bit
Prisoner's Dilemmerish and in some sense, it's my disarming,
my putting down my arms before you putting down your arms,
that kind of thing. So it could be a coordination
problem, but it has a little bit of a flavor of both.
Go ahead, go on. Okay, other examples?
There's a guy there. While you're there why don't
you get the guy who is behind you right there by the door,
great. Student: Big sports
arena, people deciding whether or not they want to start a
chant or [inaudible] Professor Ben Polak:
Yeah, okay. I'm not quite sure which is the
good outcome or the bad outcome. So there's this thing called,
The Wave. There's this thing called The
Wave that Americans insist on doing, and I guess they think
that's the good outcome. Other things?
Student: Battle of the sexes.
Professor Ben Polak: Let's come back to that one.
Hold that thought and we'll come back to it next time.
You're right that's a good--but let's come back to it next time.
Anything else? Let me try and expand on some
of these meeting place ideas. We talked about pubs to meet
at, or parties to meet at, but think about online sites.
Online sites which are chat sites or dating sites,
or whatever. Those are--those online sites
clearly have the same feature of a party.
You want people to coordinate on the same site.
What about some other economic ideas?
What about some other ideas from economics?
What else has that kind of externality like a meeting
place? Student: I mean it's
excluding some people, but things like newspapers or
something, like we might want only one like global news site.
Professor Ben Polak: So, that's an interesting thought,
because it can go both ways. It could be a bad thing to have
one newspaper for obvious reasons, but in terms of having
a good conversation about what was in the newspaper yesterday
around the--over lunch, it helps that everyone's read
the same thing. Certainly, there's a bandwagon
effect to this with TV shows. If everybody in America watches
the same TV show, which apparently is American
Idol these days, then you can also talk about it
over lunch. Notice that's a really horrible
place to coordinate. There are similar examples to
the American Idol example. When I was growing up in
England, again I'm going to reveal my age;
everybody decided that to be an "in" person in England,
you had to wear flared trousers.
This was--and so to be in you had to wear flared trousers.
This is a horrible coordination problem, right?
So for people who don't believe that could ever have happened in
England, that you could ever have these sorts of fashion goods where
you end up at a horrible equilibrium at the wrong place,
if you don't believe that could ever happen think about the
entirety of the Midwest. I didn't say that,
we're going to cut that out of the film later.
What else is like this? I'm going to get a few ideas
now, this gentleman here. Student: The
establishment of monopolies, because a lot of people use
Microsoft, say then everything is
compatible with Microsoft so more people would use it.
Professor Ben Polak: Good, I've forgotten your name.
Student: Steven. Professor Ben Polak: So,
Steven's pointing out that certain software can act this
way by being a network good. The more people who use
Microsoft and Microsoft programs, the bigger the
advantage to me using Microsoft, and therefore--because I can
exchange programs, I can exchange files if I'm
working with my co-authors and so on,
and so you can have different equilibria coordinating on
different software, and again, you could end up at
the wrong one. I think a lot of people would
argue--but I'm going to stay neutral on this--a lot of people
would argue that Microsoft wasn't necessarily the best
place for the world to end up. There are other technological
network goods like this. These are called network
externalities. An example here would be high
definition television. You want to have one
technological standard that everyone agrees on for things
like high definition televisions because then everyone can
produce TVs to that standard and goods that go along with that
standard, and of course each company
that's producing a TV and has its own standard would like theirs
to be chosen as the standard. Again, you could end up at the
wrong place. You could end up with a bad
equilibrium. How about political bandwagons?
In politics, particularly in primaries,
there may be an advantage on the Democratic side or on the
Republican side, in having you all vote for the
same candidate in the primary, so they get this big boost and
it doesn't seem like your party's split and so on.
And that could end up--and again, I'm going to remain
neutral on this--it just could end up with the wrong candidate
winning. There's a political bandwagon
effect, the person who wins New Hampshire and Iowa tends then to
win everything, so that's another example.
Any other economic examples? Can we get this guy in here?
Student: Stock exchange. Professor Ben Polak:
Yeah, okay, so in particular which stock exchange to list on.
So there's a huge advantage having lots of stocks in the
same stock exchange. There are shared fixed costs;
there's also liquidity issues and lots of issues of that form
but mostly the form of fixed costs.
So there's a tendency to end up with one stock exchange.
We're not there yet but we do seem to be going that way quite
rapidly. The problem of course being,
that might not be the best stock exchange or it might give
that stock exchange monopoly power.
Let me give one--let me try one more example.
What about bank runs? What's a bank run?
Somebody--what's a bank run? Student: It's when the
public loses confidence in the security of
their money in banks, and they rush to withdraw
their deposits. Professor Ben Polak:
Good. So you can imagine a bank as a
natural case where there's two equilibria.
There's a good equilibrium, everyone has confidence in the
bank, everyone leaves their deposits in the bank.
The bank is then able to lend some of that money out at a
higher rate of interest. The bank doesn't want to keep
all that money locked up in the vault.
It wants to lend it out to borrowers who can pay interest.
That's a good equilibrium for everybody.
However, if people lose confidence in the bank and start
drawing their deposits out then the bank hasn't got enough cash
in its vaults to cover those deposits and the bank goes
under. Now, I used to say at this
point in the class, none of you will have ever seen
a bank run because they stopped happening in America more or
less in the mid 30s. There were lots and lots of
bank runs in America before the 1930s, but since federal deposit
insurance came in, there have been far fewer.
However, I really can't say that today because there's a
bank run going on right now. There's a bank run going on
actually in England with a company called Northern
Security--no, Northern Rock--it's called
Northern Rock, as we speak,
and it really is a bank run. I mean, if you looked at the
newspaper yesterday on The New York Times,
you'll see a line of depositors lined up, outside the offices in
London of this bank, trying to get their deposits
out. And you see the Bank of England
trying to intervene to restore confidence.
Just be clear, this isn't simple--this isn't a
simple case of being about the mortgage crisis.
This bank does do mortgages but it doesn't seem to be
particularly involved in the kind of mortgages that have been
attracting all the publicity. It really seems to be a shift
in confidence. A move from the good
equilibrium of everyone remaining invested,
to the bad equilibrium of everyone taking their money out.
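The two-equilibrium structure of a bank run can be captured in a stylized two-depositor game. The payoff numbers here are purely illustrative (nothing in the lecture fixes them): if both stay, the bank survives and each earns interest; a lone withdrawer salvages her deposit while the lone stayer is wiped out; if both run, each gets only a partial payout.

```python
# A stylized 2-depositor bank-run game with hypothetical payoffs:
# both Stay -> (2, 2): the bank survives, both earn interest;
# both Withdraw -> (1, 1): the bank fails, each salvages part;
# a lone stayer gets 0 while the lone withdrawer salvages 1.
payoffs = {
    ("stay", "stay"): (2, 2),
    ("stay", "withdraw"): (0, 1),
    ("withdraw", "stay"): (1, 0),
    ("withdraw", "withdraw"): (1, 1),
}

def is_nash(s1, s2):
    """Neither depositor can gain by unilaterally switching action."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(d, s2)][0] <= u1 for d in ("stay", "withdraw"))
            and all(payoffs[(s1, d)][1] <= u2 for d in ("stay", "withdraw")))

print(is_nash("stay", "stay"))          # True: the good equilibrium
print(is_nash("withdraw", "withdraw"))  # True: the bank-run equilibrium
print(is_nash("stay", "withdraw"))      # False: not an equilibrium
```

Both "everyone stays" and "everyone runs" survive the check, with the run Pareto dominated by staying, which is exactly the coordination structure described above.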
Now, there are famous bank runs in American culture,
in the movies anyway. What is the famous bank run in
the movies, in American movies? Student: It's a
Wonderful Life. Professor Ben Polak:
It's a Wonderful Life; there's actually--there's one
in Mary Poppins as well, but we'll do It's a
Wonderful Life. How many of you have seen
It's A Wonderful Life? How many have you not seen
It's a Wonderful Life? Let me get a poll here.
How many people have not seen--keep your hands up a
second, keep your hands up. You need to know that if you're
on a green card, you can lose your green card if
you haven't seen It's a Wonderful Life.
So, in It's a Wonderful Life there's a run on the
bank--actually it's a savings and loan, but never mind.
We'll think of it as a bank. Luckily, it doesn't end up with
the savings and loan, or bank, going under.
Why doesn't the bank go under in It's a Wonderful Life?
Why doesn't it go under? Yeah, the guy in green?
Can we get the guy in green? Student: Everyone agrees
to only take as much as they need [inaudible]
Professor Ben Polak: For the people who didn't hear it,
everyone agrees only to take out a small amount,
perhaps even nothing and therefore the bank run ends.
Everyone realizes the bank isn't going to collapse,
and they're happy to leave their money in the bank.
Now, it's true everyone agrees but what do they agree?
What makes them agree? What makes them agree is Jimmy
Stewart, right? Everyone remember the movie?
So Jimmy Stewart gets up there and he says--he gives a speech,
and he says, look,
more or less,--I mean he doesn't say it in these words
but he would have done it if he had taken the class--he says,
look there are two equilibria in this game.
(I can't do the--whatever it is, the West Pennsylvania
accent, is that what it is, or whatever it is.) But anyway;
there are two equilibria in this game;
there's a bad one where we all draw our money out and we all
lose our homes eventually, and there's a good one where we
all leave our money in and that's better for everyone.
So let's all leave our money in. But he gives this--he's a bit
more motivationally stimulating than me--but he leads people
to leave their money in. So, what I want to do is,
I want to pick on somebody in the class now--everyone
understands this game, everyone understands there's
two equilibria, everyone understands that one
equilibrium is better. Let's play the game again,
but before I do I'm going to
give the mike to Patrick here and Patrick is going to have
exactly five seconds to persuade you.
Stand up. Patrick's going to have five
seconds to persuade you to tell you whatever he likes starting
now. Student: Okay,
so clearly if we all invest we're always better off,
so everybody should invest. Professor Ben Polak: All
right, now let's see if this is going to work.
Okay, so let's see what happens now.
Everybody who is going to invest raise their hands and
everyone who's not investing raise their hands.
Oh we almost made it. We must have almost made it.
So notice what happened here. Give another round of applause
for Patrick. I think he did a good job there.
But there's a lesson here; the lesson here is the game
didn't change. It's the same game that we've
played three times already. This was the fourth time we've
played it. We had a totally different
response this time. Almost all of you,
the vast majority of you, perhaps even 90% of you
invested this time and you did so in response to Patrick.
But Patrick wasn't putting any money down, he wasn't bribing
you, he wasn't writing a contract with you,
he wasn't threatening to break your legs, he just was pointing
out it's a good idea. Now, remember the Prisoner's
Dilemma, in the Prisoner's Dilemma, if Patrick--Patrick
could have got up in the Prisoner's Dilemma and given the
same speech and said look guys, we're all better off if we
choose β in the Prisoner's Dilemma than
if we choose α; roughly the same speech.
What would you have done in the Prisoner's Dilemma?
You would have all chosen α anyway.
So Patrick tried to persuade you, or Patrick communicating to
you that you do better by choosing β
in the Prisoner's Dilemma doesn't work but here--don't go
yet. Here it does work.
Why does Patrick--why is Patrick persuasive in this game
but he isn't persuasive in the Prisoner's Dilemma?
Can we get the mike on Katie again?
Why is Patrick persuasive in this game and not before?
Student: He's not trying to get you to play a strictly
dominated strategy. Professor Ben Polak:
He's not trying to get you to play a strictly dominated strategy
and more than that, he's trying to persuade you to
play what? To play a Nash Equilibrium.
So, there's a lesson here, in coordination problems,
unlike Prisoner's Dilemma, communication--just
communication, no contracts--communication can
help. And in particular,
what we can persuade each other to do is to play the other Nash
Equilibrium. Now, this gives us one more
motivation for a Nash Equilibrium.
In a Prisoner's Dilemma, to get out of it we needed a
contract, we needed side payments, we needed to change
the payoffs of the game. But a Nash Equilibrium can be a
self-enforcing agreement. We can agree that we're going
to play invest in this game, and indeed we will play invest
without any side payments, without anybody threatening to
break your leg, without any contracts,
without any regulation or the law.
I'm assuming Patrick isn't that violent.
We're going to end up doing the right thing here because it's in
our own interest to do so. So coordination problems which
we've agreed are all over society, whether it comes to
bank runs or bubbles in the market,
or fashion in the Midwest, they're all over society.
Communication can make a difference and we'll pick that
theme up on Monday.