
  • Today I thought we would talk a bit about the Logical Induction paper out of the Machine Intelligence Research Institute.

  • It's a very technical paper, very mathematical, and it can be a bit hard to get your head around, and we're not going to get too far into it for Computerphile.

  • I just want to explain why it's cool and why it's interesting, so that people who are into it can read the paper themselves and find out about it.

  • One of the things about Logical Induction as a paper is that it's not immediately obvious why it's an AI safety paper.

  • But my perception of the way that MIRI, the Machine Intelligence Research Institute, is thinking about this is: they're thinking we're going to be producing, at some point, artificial general intelligence.

  • And as we talked about in previous videos, there are hundreds of really weird, subtle, difficult-to-predict ways that this kind of thing could go very badly wrong, and so we want to be confident about the behavior of our systems before we turn them on.

  • And that means, ideally, we want to be making systems that we could, in the best case, actually prove theorems about: things that are well specified enough that we could actually write down and formally prove that they will have certain characteristics.

  • And if you look at current machine learning stuff, like deep neural networks and that kind of thing, they're just very opaque.

  • What do you mean by that?

  • Because we don't necessarily know exactly why it's doing what it's doing.

  • Yeah, and also just that the system itself is very complex and very contingent on a lot of specifics about the training data and the architecture.

  • Effectively, we don't understand it well enough; it's not formally specified enough, I guess.

  • And so they're trying to come up with the sort of mathematical foundations that we would need to be able to prove important things about powerful AI systems.

  • Before, we were talking about hypothetical future AI systems, like one that wants to print more stamps and so maybe hijacks the world's stamp-printing factories.

  • And when we were talking about the stamp collector, it was really useful to have this framework of an agent, and to say: this is what an agent is.

  • It's a formally specified thing.

  • We can reason about the behavior of agents in general.

  • And then all we need to do is make this fairly straightforward assumption about our AI system: that it will behave like an agent.

  • So we have idealized forms of reasoning, like we have probability theory, which tells you the rules that you need to follow to have good beliefs about the state of the world.

  • Right?

  • And we have, like, rational choice theory, about what rules you need to follow to make good decisions.

  • It may not actually be possible to follow those rules in practice, but we can at least formally specify: if the thing has these properties, it will do this well.

  • And the thinking is: if we're building very powerful AI systems, the one thing we can expect from them is that they're going to be able to do thinking well.

  • And so if we can come up with some formalized specification of exactly what we mean by doing thinking well, then reasoning about that should give us some insight into how these systems will behave.

  • That's the idea.

  • So let's talk about probability theory.

  • Then this is where we should have a demo.

  • A demo is always nice.

  • Yeah, well, this is a fun one.

  • I don't know how to play backgammon anyway.

  • Here's the die.

  • So basic probability theory, right?

  • I roll the die.

  • Now it's under the cup.

  • And I can ask you: what's the probability that that's a five? One in six, right?

  • One in six, because you don't know; it could be anything.

  • There was a time when people reasoned about the probabilities of things happening in a very sort of fuzzy way.

  • You'd be playing cards and you'd be like, Oh, you know, that card has shown up here.

  • So I guess it's less likely that he has this hand, and people would intuit it.

  • Yeah, and like there was some sense of, like, clearly we are doing something.

  • What we're doing here is not random.

  • It's meaningful to say that this outcome or that outcome is more or less likely, by a bit or by a lot or whatever, but people didn't have a really good explanation of how that all worked until we had probability theory, right?

  • We didn't... it wasn't a system, right?

  • Right.

  • And like practically speaking, you often can't do the full probability theory stuff in your head for a game of cards, especially when you're trying to read people's faces and like stuff that's very hard to quantify.

  • But now we have this understanding that, even when we can't actually do it, what's going on underneath is probability theory.

  • So that's one thing, right?

  • Straightforward probability.

  • Now let's do a different thing.

  • And now I'm gonna need a pen.

  • Now suppose I give you a sentence like: "The 10th digit after the decimal point of the square root of 17 is a five."

  • What's the probability that that's true?

  • Well, that's a much more difficult problem.

  • One in 10, right?

  • One in 10 seems like a totally reasonable thing to say, right.

  • It's gonna be one of the digits.

  • We don't know which one, but you've got paper here; you know you could do, like, division.

  • You could do this if you had long enough, like if I gave you, say, an hour with the paper, and you could sit here and figure it out.

  • Let's say you do it, and the answer you come up with seems to be, like, a three.

  • I don't know what it actually is; I haven't done this calculation.

  • Let's do the calculation, then.

  • Drum roll...

  • 1, 2, 3, 4, 5, 6, 7, 8, 9... the tenth is a six.

  • It's a six.

  • OK, I picked that out of nowhere.
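As an aside, the digit is easy to check mechanically. A quick sketch in Python using arbitrary-precision decimals (the precision setting is just a comfortable margin):

```python
from decimal import Decimal, getcontext

# Use far more precision than we need, so the 10th digit is reliable.
getcontext().prec = 30

root = Decimal(17).sqrt()              # 4.1231056256...
after_point = str(root).split(".")[1]  # the decimal digits as a string
tenth = int(after_point[9])            # 10th digit after the point

print(tenth)  # prints 6, so "it's a five" turns out to be false
```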

  • So suppose you didn't have a calculator, you were doing it on paper, and I gave you, you know, half an hour or an hour to do the long division or whatever it is you would have to do to figure this out.

  • What's the probability?

  • That that statement is true now? What do you say?

  • I'm gonna say zero, right?

  • But you've just done this in a big hurry on paper, right?

  • You might have screwed up.

  • So what's the probability that you made a mistake, that you forgot to, like, carry the one or something?

  • So it's not zero, then; there's, like, some smaller probability.

  • Um, and then if I left you in the room again, still with a piece of paper, for 100 years, 1,000 years, infinite time...

  • ...then eventually, assuming that calculation was correct, you'd say zero.

  • Right?

  • Um, you come up with some formal proof that this is false.

  • Whereas imagine if I leave you for infinite time with the cup and you can't look under the cup: it's one in six.

  • And it's gonna stay one in six forever, right?

  • When you're playing cards, you've got these probabilities for which cards you think people have, depending on the game.

  • And as you observe new things, you're updating your probabilities for these different things as you observe new evidence, and then maybe eventually you'll actually see the thing, and then you can go to 100% or 0.

  • Um, whereas in this case, all that's changing is your thinking.

  • But you're doing a similar sort of thing.

  • You are still updating your probabilities for things, but probability theory kind of doesn't have anything to say about this scenario.

  • Like, probability theory is about how you update when you see new evidence, and here you're not seeing any new evidence.

  • So in principle, whatever the probability of this is, it's one or zero, just as a direct logical consequence of things that you already know, right?

  • So your uncertainty comes from the fact that you are not logically omniscient.

  • In order for you to figure things out, it takes you time.

  • And this turns out to be really important, because most of the time what you have is actually kind of a mixture of these types of uncertainty.

  • Right?

  • So let's imagine a third scenario, right?

  • Suppose you're, like, an AI system, and your eyes are a camera.

  • I do the same thing again.

  • And now I ask you, what's the probability that it's a five?

  • You would say one in six, because you don't know.

  • But on the other hand, you have recorded video footage of it going under the thing.

  • The video is your memory of what happened, right?

  • So you're not observing new information.

  • You're still observing the same thing you observed the first time, but you can look at that and say, you know: with the amount of energy it had, and the way that it was rotating, and which numbers were where, it looked like it probably wasn't gonna land on a five.

  • If I asked you on a millisecond deadline:

  • You know what?

  • What was it?

  • What's the probability that it was a five?

  • You're going to give the same number that you gave at the beginning here, right? One in six.

  • But you can look at the data.

  • You already have the information, the observations that you've already made.

  • And you can do logical reasoning about them; you can think:

  • Okay.

  • Based on the speed it was going, the angle it was turning at, which faces were where, and so on.

  • It seems like... you know, maybe you can run some simulations internally, something like that.

  • And say: it seems like it's actually less likely than one in six.

  • Right?

  • If before I thought it was, like, 0.16 recurring, now I think it's like 0.15, because I ran a million simulations.

  • And it seems like the longer you think about it, the more precise you can be.

  • Maybe you keep thinking about it, and you get it down to, like... you think it's actually 0.13, right?

  • Something like that.

  • But you can't be sure; you don't actually know, right, because there are still things you don't know.

  • Like, you don't know exactly the physical properties of the paper, or what exactly the inside of the cup is like, or the weighting of the die; you still have uncertainty left, because you haven't seen which way it landed.

  • But by doing some thinking, you're able to, like, reduce your logical uncertainty.

  • The point is that, in order to do probability theory properly, to modify your beliefs according to the evidence you observe in a way that satisfies the laws of probability, you have to be logically omniscient.

  • You have to be able to immediately see all of the logical consequences of everything you observe and propagate them throughout your beliefs.

  • Um, and this is like, not really practical in the real world.

  • Like in the real world, observations do not form an orderly queue and come at you one at a time and give you enough time after each one to fully integrate the consequences of each observation.

  • Um, because human beings are bounded, right?

  • We have limited processing ability and limited speed.

  • With this kind of logical uncertainty, it feels very intuitive that we can do probabilities based on our logical uncertainty, that we can think about it in this way.

  • It makes perfect sense to say that one in 10 is the probability.

  • Until you've done your thinking.

  • One in 10 seems like a perfectly reasonable number.

  • But, like, why is one in 10 a good answer, and 50% not a good answer?

  • Because you might look at it and say: well, this is a logical statement.

  • Um, it's either true or false.

  • That's two possibilities, you know: 50 percent.

  • So why is one in 10 more sensible?

  • And in fact, you had to do a bit of thinking to get there, right?

  • You had to say, Oh, yeah, it's going to be some digit.

  • There are 10 digits.

  • I don't have any reason to prefer one digit over the other.

  • So 10%.

  • So you are like, always doing reasoning to come up with these probabilities.

  • But the thing is that, according to standard probability theory, this is kind of a nonsense question, because, from the perspective of probability theory, this kind of statement is equivalent to saying: what's the probability that one equals one, or what's the probability that true equals false?

  • It's just a logical statement.

  • Um, it doesn't have a probability.

  • It just is true.

  • But if you can't think infinitely fast, then you need to have answers.

  • You need to have estimates: estimates of your answers before you've thought for infinite time about them.

  • This is like an important thing for actually getting by in the world because, as I say, like observations don't just line up and wait for you to reason all of their consequences through.

  • So when it comes to empirical uncertainty, we have probability theory, which has these axioms.

  • These are sort of the rules that you need to follow in order to do probability well.

  • And they're things like:

  • Probabilities can't be negative.

  • They have to be between zero and one.

  • And if you have a set of mutually exclusive possibilities that constitutes everything that can happen, then they all have to add up to one.

  • And then, if you're doing probability about, like, logical sentences, there are all these extra rules that make perfect sense.

  • Like, if you have a statement A and a statement B, you also have the statement "A and B".

  • Then the statement "A and B" has to be less likely than A or than B, or it could be the same.

  • But, like, if A has a 20% chance of happening, then "A and B" couldn't possibly have more than a 20% chance of happening, right? Even if B is guaranteed to happen, it can't.

  • So that's like a rule.

  • If you find, in doing your probabilities, that you have an "A and B" with a higher probability than either A or B, then you've made a mistake; that kind of thing.

  • You have these rules.
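Those rules are simple enough to state as executable checks. A minimal sketch (the probability values are invented for illustration):

```python
def check_beliefs(p_a, p_b, p_a_and_b):
    """Check a few of the coherence rules a belief assignment must obey."""
    for p in (p_a, p_b, p_a_and_b):
        if not 0.0 <= p <= 1.0:
            return "probabilities must be between zero and one"
    # "A and B" can never be more likely than A alone or B alone.
    if p_a_and_b > min(p_a, p_b):
        return "mistake: 'A and B' is more likely than A or B alone"
    return "coherent"

print(check_beliefs(0.2, 0.9, 0.15))  # coherent: 0.15 <= min(0.2, 0.9)
print(check_beliefs(0.2, 0.9, 0.25))  # flagged: 0.25 > min(0.2, 0.9)
```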

  • It would be nice if we could find some similar set of rules for dealing with logical uncertainty that is useful.

  • And related to that, we have these things called Dutch book arguments. I don't know why they're called Dutch.

  • I feel like the English language is just so harsh to the Dutch for no reason; they're unjustly maligned.

  • But anyway: basically, there are theorems which you can prove that say, if your beliefs don't obey these rules, then there will exist some set of bets by which you're guaranteed to lose money, bets that you will happily take by looking at your probabilities, but that you can't possibly win.

  • Whatever happens, you lose money, and that seems stupid; you don't want that in your beliefs.

  • So that's an argument that says your beliefs should obey these rules, because if they do, you're at least never gonna end up in a situation where you're guaranteed to lose by betting.

  • And that's one of the things we want, at the very least.
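To make the Dutch book idea concrete, here's a toy sketch: someone incoherently believes both an event and its negation are 60% likely, so they'll happily pay 60p per £1 ticket on each side; exactly one ticket pays out, and they lose the difference whatever happens. (All the numbers are invented for illustration.)

```python
def believer_loss(p_event, p_not_event, stake=1.0):
    """Guaranteed loss for someone whose beliefs give P(A) + P(not A) > 1.

    They buy one ticket on A and one on not-A, each priced at their
    stated probability times the stake; exactly one ticket pays out.
    """
    paid = (p_event + p_not_event) * stake
    payout = stake  # one side wins, whichever way the event goes
    return paid - payout  # positive: a loss no matter the outcome

print(round(believer_loss(0.6, 0.6), 2))  # 0.2 lost, whatever happens
```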

  • We want some kind of equivalent thing for logical uncertainty, and that's what this paper is trying to do.

  • It's trying to come up with a rule.

  • So you could say: if you are doing your probability theory stuff in such a way that there are no Dutch books that can be made against you, then that's good and you're doing well.

  • And so when we're talking about advanced AI systems, it's a good assumption to make that they're at least gonna try not to have any Dutch books against them in the way they do their probability.

  • So they will probably be obeying probability theory.

  • Um, and it would be nice if we had some equivalent thing for logical uncertainty.

  • That said: if you satisfy this criterion in the way that you do this, then you're not going to be doing obviously stupid things.

  • And that's what this paper is trying to do.

  • When you're talking about probability, you can kind of think about what properties you would want your system of deciding probabilities to have.

  • People have written down various things they would want from this system, right?

  • That would be a system for, like, assigning probabilities to logical statements, and then being able to update those over time.

  • So one thing you would want is you would want it to converge, which just means if you're thinking about a logical statement, you might think of something that makes it seem more likely.

  • And then think of something else, like: oh no, actually, it should be less likely.

  • And you could imagine some systems, for some statements, would just think forever and constantly be changing what they think and never make their mind up.

  • That's no good, right?

  • We want a system that, if you give it infinite time to think, will eventually actually decide what it thinks.

  • So that's one thing: you want it to converge.

  • Secondly, you obviously want it to converge to good values.

  • So if something turns out to be provable within the system that it's using, then you would want it to eventually converge to one.

  • If something turns out to be disprovable, you want it to eventually converge to zero, and then the rest of the time you want it to be, like, well calibrated.

  • Which means: if you take all of the things that it ever thought were 80% likely to be true, of those, about 80% should actually end up being true, right?

  • It should: when it gives you an estimate of the probability, that estimate should be right, in some sense.
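Checking calibration after the fact is mechanical: bucket predictions by their stated probability and compare against how often those statements turned out true. A sketch, with made-up prediction data:

```python
from collections import defaultdict

# (stated probability, whether the statement turned out true) -- invented data
history = [(0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
           (0.3, False), (0.3, False), (0.3, True), (0.3, False), (0.3, False)]

# Group outcomes by the probability that was stated for them.
buckets = defaultdict(list)
for stated, outcome in history:
    buckets[stated].append(outcome)

# A well-calibrated predictor's stated and observed frequencies match.
for stated in sorted(buckets):
    outcomes = buckets[stated]
    freq = sum(outcomes) / len(outcomes)
    print(f"said {stated:.0%}, true {freq:.0%} of the time")
```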

  • Another one, and this one's a bit controversial, but it seems reasonable to me, is that it should never assign probability one to something which is not proved or that can't be proved.

  • And it should never assign probability zero to something that can't be disproved.

  • They call that non-dogmatism.

  • And at the end of all of this, at infinity, after thinking forever, the probabilities that it gives to things should follow the rules of probability, right? The ones we talked about before, those rules.

  • At the end of it, you should end up with, like, a valid probability distribution over all of your statements.

  • The criterion that they've come up with, which is the equivalent of "there are no Dutch books", is based on this algorithm that they've made, which satisfies the criterion and therefore has a whole bunch of these nice properties that you would want a logical inductor to have.

  • So the way the algorithm works is it's one of the zaniest algorithms I've ever seen.

  • It is weird and wonderful.

  • And if you're a person who is interested in interesting algorithms, I would really recommend checking out the paper.

  • It's very technical, and we don't have time in this video, but I may go into it deeper on my channel at some point, if I ever come to actually understand it, which, to be honest, I don't fully right now.

  • But it's based around the idea of a prediction market, which is a super cool thing that's worth explaining.

  • In financial markets as they currently exist, you can have futures, right?

  • And a futures contract is a contract which says: I promise to sell you this amount of a particular thing at this price, right?

  • So you take a date a year from now, and you say: I'm gonna sell you this many gallons of jet fuel for this price.

  • At least, that's the way it used to be done; in practice, the actual jet fuel doesn't move.

  • You just go by the price of jet fuel and pay the equivalent as if it had, and then they can go and buy the jet fuel themselves.

  • But this is a really useful financial instrument to have, because let's say you run an airline.

  • Your main cost is jet fuel, and jet fuel prices are very volatile.

  • If prices go up, you could just go under completely.

  • So you say OK, I'm going to buy a whole bunch of these jet fuel futures.

  • And then if the price goes up, I'm gonna have to pay loads more for jet fuel, but I'll make a load of money on these contracts, and if you do it right, it exactly balances out.

  • The cost of that is: if the price of jet fuel falls through the floor, then you don't get to save money, even though you're saving money on jet fuel itself.

  • But then you're losing all this money on the contract, so it just sort of, like, balances out.

  • It balances out your risk; it lets you lock in the price that you're gonna pay a year in advance.

  • And that's just super useful for all sorts of businesses; it's really, really good in agriculture as well, because you don't know what your yield is gonna be.
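The balancing-out is just arithmetic. A toy sketch with invented prices: whichever way the market moves, the airline's effective cost per gallon stays at the locked-in price.

```python
def effective_cost(locked_price, market_price, gallons):
    """Airline's total cost when holding a futures contract.

    It buys fuel at the market price, and the contract settles for
    (market - locked) per gallon, which can be negative.
    """
    fuel_bill = market_price * gallons
    contract_settlement = (market_price - locked_price) * gallons
    return fuel_bill - contract_settlement  # = locked_price * gallons

# Fuel doubles or halves -- the effective cost is the same either way.
print(effective_cost(2.00, 4.00, 1_000_000))  # 2000000.0
print(effective_cost(2.00, 1.00, 1_000_000))  # 2000000.0
```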

  • So the point is that the price that you can buy these contracts for has to be a really good prediction of what the price of jet fuel is going to be at the time when the contract ends.

  • And you can kind of treat the price of that as a combination of everybody's best estimate.

  • Because if you can predict the price of jet fuel a year in advance better than anybody else, you can make almost arbitrary amounts of money doing that.

  • Um, and in doing so, you bring the price to be a more accurate representation, right?

  • If you think the price is too low and it's gonna be more, then you're gonna buy a bunch, and when you buy it, that raises the price, because, you know, supply and demand.

  • So these prices end up representing humanity's best estimate of the price of jet fuel a year from now.

  • But the thing that's cool is: this is just a piece of paper with stuff written on it, right?

  • And these days, it's not even a piece of paper.

  • In principle, you can write anything on there, right?

  • So what if you wrote a contract that said: I promise to pay £100 if such-and-such horse wins the Grand National, and zero otherwise? Then that's on the market, and people can buy and sell it, right?

  • So if this horse is like guaranteed to win the Grand National, then this contract is effectively £100 note, right?

  • It's a sure thing; it's money in the bank.

  • Why is that a phrase? Banks aren't that reliable.

  • Anyway, if the horse is, like, guaranteed to lose, then it's worth zero.

  • And if the horse has a 50% chance of winning, that thing is going to trade for £50.

  • And so, by making these contracts and allowing people to trade them, you can get these good, unbiased and, hopefully, as accurate as you can get, predictions of future events.

  • And that's what prediction markets are for: you can make money on them by being good at predicting stuff.

  • And anybody else can just look at the prices things are trading at and directly convert them into probabilities.

  • And that's in a spherical-chickens-in-a-vacuum kind of way.

  • Obviously, in practice there's various things that could go wrong.
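In the idealised case, the conversion between price and probability is just division. A sketch for the £100 horse-racing contract described above (ignoring fees, interest, and risk appetite):

```python
def implied_probability(price, payout=100.0):
    """Probability implied by the market price of a contract that
    pays `payout` if the event happens and nothing otherwise."""
    return price / payout

def fair_price(probability, payout=100.0):
    """The other direction: the contract's expected value."""
    return probability * payout

print(implied_probability(50.0))  # 0.5 -- the 50-50 horse trades at £50
print(fair_price(0.2))            # 20.0 -- a 20% chance is worth £20
```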

  • But, like, in principle, this is a really beautiful way of effectively doing distributed computing, where everybody is doing their own computation and then you're aggregating them, using the price mechanism as communication between the different nodes, and you get something super cool.

  • So that's what logical induction does: it simulates a prediction market where all of the contracts are logical statements.

  • In this case it's not $100, it's $1.

  • So if it's trading at $1, then it's for sure; it's certain.

  • A trader can say: you know, this is currently trading at 60 cents, and I think it's more than 60% likely to be true, so I'll buy some; that seems like a good investment.

  • And so all of the traders in the algorithm are programs, computer programs.

  • And it turns out that this is computable.

  • It's not, like, tractable in a practical sense, because you're running vast numbers of arbitrary programs, but it is, in principle, computable.

  • The market as a whole ends up satisfying all of these really nice properties.
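The paper's actual construction is far more involved than anything that fits here, but the flavour of the mechanism can be sketched: sentence prices live between $0 and $1, and each trade nudges the price toward the trader's belief. The update rule below is invented for illustration; it is not MIRI's algorithm.

```python
# Toy sketch of a prediction market over logical sentences.
# Prices live in [0, 1]; a price of 1 means "certainly true".
prices = {"A": 0.60}

def trade(sentence, trader_belief, weight=0.1):
    """A trader who disagrees with a price buys or sells, and the
    price moves toward their belief (supply and demand)."""
    prices[sentence] += weight * (trader_belief - prices[sentence])

# A trader thinks A is 75% likely, so it keeps buying at 60 cents,
# pushing the price up toward its belief.
for _ in range(50):
    trade("A", 0.75)

print(round(prices["A"], 2))  # 0.75 -- converged very close to the belief
```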

  • Is that because it comes into balance?

  • Yeah, as the traders trade with each other, the ones who are good at predicting end up making more money, and the ones who are bad at predicting effectively go bankrupt.

  • You might imagine some trader in this system, which is a program that just goes around looking for situations in which you have A, and B, and also "A and B" in the system, and the prices don't work.

  • And that's, like, arbitrage, right?

  • And there are people who do this currently on the stock market: any time the price of something is different between two markets, you can make money by buying in one and selling in the other, and stuff like that; you can do arbitrage.

  • And it's the same kind of thing here.

  • So some of these programs will get rich by just saying: oh, I notice that this statement "A and B" has a probability that's actually higher than the probability of B, so I'm gonna sell that, and, you know, maybe buy some B to raise it; I'm gonna respond to that in a way that makes me money.

  • And because there are all of these traders, they will eventually cause everything to line up, right?

  • And so the thing will converge.
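That kind of arbitrage trader is easy to sketch: scan the market for a conjunction priced above either of its conjuncts, which the coherence rules forbid. (Sentence names and prices here are invented.)

```python
def find_arbitrage(prices):
    """Return conjunctions priced above the cheaper of their conjuncts;
    each one is a risk-free trade: sell the conjunction, buy the part."""
    hits = []
    for sentence, price in prices.items():
        if " and " in sentence:
            a, b = sentence.split(" and ")
            bound = min(prices[a], prices[b])
            if price > bound:
                hits.append((sentence, price, bound))
    return hits

market = {"A": 0.20, "B": 0.90, "A and B": 0.25}
print(find_arbitrage(market))  # [('A and B', 0.25, 0.2)]
```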

  • This is the equivalent of saying there's no Dutch book for your probabilities.

  • The logical induction criterion is: the market satisfies it if there's no efficiently computable trader that can exploit the market, which means no trader that can find a reliable strategy for making loads of money without risk.

  • As long as you have that, then it satisfies the logical induction criterion, which then means you get all of these other properties for free.

  • Some of the properties that it has are kind of crazy: it's okay with self-reference; it, like, doesn't care about paradoxes.

  • There's all kinds of really cool things that don't trip it up, bringing us back to what we started talking about.

  • How does the paper relate to the AI safety thing?

  • Right, Right.

  • So we're trying to do reasoning, and possibly prove theorems, about very powerful AI systems.

  • And that means that we want to be able to think of them just in terms of being good at thinking.

  • And we've got a lot of good theory that pins down what it means to be good at handling empirical uncertainty.

  • We have all of probability theory, and likewise statistics, and we can say: these are the things you need to do to be good at probability.

  • And we have, like, rational choice theory.

  • And we can say: this is what it means to be good at making decisions.

  • And so then, when we're reasoning about A I systems, we can think, well, it's probably gonna be good at reasoning about uncertainty and making decisions.

  • But we have to also assume that a very powerful hypothetical future AI system would be good at reasoning about logical uncertainty, because it's going to be a physical, bounded system.

  • It is going to need time to think about things, and it probably is going to need to make decisions based on things that it hasn't logically thought through every possible consequence of yet.

  • So it's probably gonna be good at this, too.

  • And we need some formal framework with which we can think about what it means to be good at reasoning about logical uncertainty.

  • That's like what this paper is trying to do.

  • There we are; reasonable?

  • Yeah, okay. I do also have a sheet of paper.

  • I do have Sharpies.

  • Yeah, it's the squeak on the paper people don't like.

  • Okay, fair enough.

  • In fact, felt pens do that too, but these are slightly better, right?



AI & Logical Induction - Computerphile

  • Published by 林宜悉 on January 14, 2021