
  • I'm going to ask a question, but you can only answer by saying either 'yes,' 'no,' or 'it's

  • complicated.'

  • Alright?

  • So, let's start over here.

  • Is some form of superintelligence possible, Jaan?

  • 'Yes,' 'no,' or 'it's complicated.'

  • Yes.

  • Yes.

  • Yes.

  • Yes.

  • Yes.

  • Yes.

  • Definitely.

  • No.

  • Well, this was disappointing; we didn't find any disagreement.

  • Let's try harder.

  • Just because it's possible doesn't mean that it's actually going to happen.

  • So, before, I asked if superintelligence was possible at all according to the laws of physics.

  • Now, I'm asking, will it actually happen?

  • A little bit complicated, but yes.

  • Yes, and if it doesn't then something terrible has happened to prevent it.

  • Yes.

  • Probably.

  • Yes.

  • Yes.

  • Yes.

  • Yes.

  • No.

  • Shucks, still haven't found any interesting disagreements.

  • We need to try harder still.

  • OK.

  • So, you think it is going to happen, but would you actually like it to happen at some point?

  • Yes, no, or it's complicated?

  • Complicated, leaning towards yes.

  • It's complicated.

  • Yes.

  • Yes.

  • It's really complicated.

  • Yes.

  • It's complicated.

  • Very complicated.

  • Well, heck, I don't know.

  • It depends on which kind.

  • Alright, so it's getting a little bit more interesting.

  • When is it going to happen?

  • Well, we had a really fascinating discussion already in this morning's panel about when

  • we might get to human level AI.

  • So, that would sort of put a lower bound.

  • In the interest of time, I think we don't need to rehash the question of when going

  • beyond it might start.

  • But, let's ask a very related question to the one that just came up here.

  • Namely, the question of, well, if something starts to happen, if you get some sort of

  • recursive self improvement or some other process whereby intelligence and machines start to

  • take off very very rapidly, there is always a timescale associated with this.

  • And there I hope we can finally find some real serious disagreements to argue about

  • here.

  • Some people have been envisioning this scenario where it goes FOOM and things happen in days or

  • hours or less.

  • Whereas, others envision that it will happen but it might take decades or even thousands of years.

  • So, if you think of some sort of doubling time, some sort of rough timescale on which

  • things get dramatically better, what time scale would you guess at, Jaan?

  • Starting now or starting at human level?

  • No, no, so once we get human level AI or whatever point beyond there or a little bit before

  • there where things actually start taking off, what is the sort of time scale?

  • Any explosion, as a nerdy physicist, has some sort of time scale, right, on which it happens.

  • Are we talking about seconds, or years, or millennia?

  • I'm thinking of years, but it is important to act as if this timeline was shorter.

  • Yeah, I actually don't really trust my intuitions here.

  • I have intuitions that we are thinking of years, but I also think human level AI is

  • a mirage.

  • It is suddenly going to be better than human, but whether that is going to be a full intelligence

  • explosion quickly, I don't know.

  • I think it partly depends on the architecture that ends up delivering human level AI.

  • So, this kind of neuroscience inspired AI that we seem to be building at the moment

  • that needs to be trained and have experience in order for it to gain knowledge that may

  • be, you know, on the order of a few years, so possibly even a decade.

  • Some numbers of years, but it could also be much less.

  • But, I wouldn't be surprised if it was much more.

  • Potentially days or shorter, especially if it's AI researchers designing AI researchers.

  • Every time there is an advance in AI, we dismiss it as 'oh, well, that's not really AI': chess,

  • go, self-driving cars.

  • AI, as you know, is the field of things we haven't done yet.

  • That will continue when we actually reach AGI.

  • There will be lots of controversy.

  • By the time the controversy settles down, we will realize that it's been around for

  • a few years.

  • Yeah, so I think we will go beyond human level capabilities in many different areas, but

  • not in all at the same time.

  • So, it will be an uneven process where some areas will be far advanced very soon, already

  • to some extent, and others might take much longer.

  • What Bart said.

  • But, I think if it reaches a threshold where it's as smart as the smartest, most inventive

  • human, then, I mean, it really could be only a matter of days before it's smarter than

  • the sum of humanity.

  • So, here we saw quite an interesting range of answers.

  • And this, I find is a very interesting question because for reasons that people here have

  • published a lot of interesting papers about, the time scale makes a huge difference.

  • Right, if it's something that's happening on the time scale of the industrial revolution,

  • for example, that's a lot longer than the time scale on which society can adapt and

  • take measures to steer development, borrowing your nice rocket metaphor, Jaan.

  • Whereas, if things happen much quicker than society can respond, it's much harder to steer

  • and you kind of have to hope that you've built in good steering in advance.

  • So, for example, in nuclear reactors, we nerdy physicists like to insert graphite rods as

  • moderators to slow things down and maybe prevent it from going critical.

  • I'm curious if anyone of you feels that it would be nice if this growth of intelligence

  • which you are generally excited about, with some caveats, if any of you would like to

  • have it happen a bit slower so that it becomes easier for society to keep shaping it the

  • way we want it.

  • And, if so, and here's a tough question, is there anything we can do now or later on when it

  • gets closer that might make this intelligence explosion or rapid growth of intelligence

  • simply proceed slower, so we can have more influence over it?

  • Does anyone want to take a swing at this?

  • It's not for the whole panel, but anyone who...

  • I'm reminded of the conversation we had with Rich Sutton in Puerto Rico.

  • Like, we had a lot of disagreements, but we could definitely agree that slower paths are better

  • than faster ones.

  • Any thoughts on how one could make it a little bit slower?

  • I mean, the strategy I suggested in my talk was somewhat tongue-in-cheek.

  • But, it was also serious.

  • I think this conference is great and as technologists we should do everything we can to keep the

  • technology safe and beneficial.

  • Certainly, as we do each specific application, like self-driving cars, there's a whole host

  • of ethical issues to address.

  • But, I don't think we can solve the problem just technologically.

  • Imagine that we've done our job perfectly and we've created the most safe, beneficial

  • AI possible, but we've let the political system become totalitarian and evil, either an evil

  • world government or even just a portion of the globe that is that way, it's not going

  • to work out well.

  • And so, part of the struggle is in the area of politics and social policy to have the

  • world reflect the values we want to achieve, because we are talking about human AI.

  • Human AI is by definition at human levels and therefore is human.

  • And so, the issue of how we make humans ethical is the same issue as how we make AIs that

  • are human level ethical.

  • So, what I'm hearing you say is that before we reach the point of getting close to human

  • level AI, a very good way to prepare for that is for us humans in our human societies to

  • try and get our act together as much as possible and have the world run with more reason than

  • it, perhaps, is today.

  • Is that fair?

  • That's exactly what I'm saying.

  • Nick? Also, I just want to clarify again that when I asked about what you would do to slow things

  • down, I'm not talking at all about slowing down AI research now.

  • We're simply talking about if we get to the point where we are getting very near human

  • level AI and think we might get some very fast development, how could one slow that

  • part down?

  • So, one method would be to make faster progress now, so you get to that point sooner, when

  • hardware is less developed, and you get less hardware overhang.

  • However, the current speed of AI progress is a fairly hard variable to change very much

  • because there are very big forces pushing on it, so perhaps the higher elasticity option

  • is what I suggested in the talk: to ensure that whoever gets there first has enough of

  • a lead that they are able to slow down for a few months, let us say, to go slow during

  • the transition.

  • So, I think one thing you can do, I mean this is almost in the verification area, is to

  • make systems that provably will not recruit additional hardware or redesign their hardware,

  • so that their resources remain fixed.

  • And I'm quite happy to sit there for several years thinking hard about what the next step

  • would be to take.

  • But, it's trivial to copy software.

  • Software is self replicating and always has been and I don't see how you can possibly

  • stop that.

  • I mean, I think it would be great if it went slow, but it's hard to see how it does go

  • slow given the huge first-mover advantages in getting to superintelligence.

  • The only scenario that I see where it might go slow is where there is only one potential

  • first mover that can then stop and think.

  • So, maybe that speaks to creating a society where, you know, AI development is restricted and unified, without

  • multiple movers.

  • Yeah, Demis, so your colleague Shane Legg mentioned that the one thing that could help a lot here

  • is if there are things like this industry partnership and a sense of trust and openness between

  • the leaders, so that if there is a point where one wants to...

  • Yeah, I do worry about, you know, that sort of scenario. You know, I think I've

  • got quite a high belief in human ingenuity to solve these problems given enough time: the

  • control problem and other issues.

  • They're very difficult, but I think we can solve them.

  • The problem is, you know, the coordination problem of making sure there is enough time

  • to slow down at the end and, you know, let Stuart think about this for five years.

  • He may do that, but what about all the other teams that are reading

  • the papers and are not going to do that while you're thinking?

  • Yeah, this is what I worry about a lot.

  • It seems like that coordination problem is quite difficult.

  • But, I think as the first step, maybe coordinating things like the Partnership on AI, you know,

  • the most capable teams together to at least agree on a set of protocols or safety

  • procedures, or, you know, agree that, maybe, you should verify these systems

  • and that is going to take a few years and you should think about that.

  • I think that would be a good thing.

  • I just want to caveat one thing about slow versus fast progress: you know, it could

  • be that, imagine there was a moratorium on AI research for 50 years, but hardware continued

  • to accelerate as it does now.

  • You know, this is sort of what Nick's point was: there could be a massive

  • hardware overhang or something, where many, many different approaches

  • to AI, including seed AI, self-improving AI, all these things, could be possible.

  • And, you know, maybe one person in their garage could do it.

  • And I think it would be a lot more difficult to coordinate that kind of situation.

  • So, I think there is some argument to be made that you want to make fast progress while

  • we are at the very hard part of the 's' curve.

  • Where actually, you know, you need quite a large team, you have to be quite visible,

  • you know who the other people are, and, you know, in a sense society can keep tabs on

  • who the major players are and what they're up to.

  • As opposed to a scenario where, in say 50 or 100 years' time, you know, someone,

  • a kid in their garage, could create a seed AI or something like that.

  • Yeah, Bart, one last comment on this topic.

  • Yeah, I think that this process will be a very irregular process, and sometimes we will

  • be far advanced and other times we will be going quite slow.

  • Yeah, I'm sort of hoping that when society sees something like fake video creation, where

  • you create a video in which you have somebody say made-up things, society will actually

  • realize that there are these new capabilities in the machines, and we should start taking

  • the problem, as a society, more seriously before we have full and general AI.

  • We'll use AI to detect that.

  • So, you mentioned the word 'worry' there, and you, Nick, went a bit farther: you had the word

  • 'doom' written on your slides three times.

  • No wonder there was that one-star rating on Amazon, and it was even in red.

  • I think it's just as important to talk about existential hope and the fantastic upside

  • as the downside, and I want to do a lot of that here.

  • So, let's just get some of those worries out of the way now and then return to the positive

  • things.

  • I just want to go through quickly and give each one of you a chance to just pick one thing

  • that you feel is a challenge that we should overcome and then say something about what you feel

  • is the best thing we can do, right now, to try to mitigate it.

  • Do you want to start Jaan?

  • To mitigate what?

  • Mention one thing that you're worried could go wrong and tell us about something constructive

  • that we can do now that will reduce that risk.

  • I do think that AI arms races are a worry.

  • I'm really heartened to see, like, great contacts between OpenAI and DeepMind, but

  • I think this is just like a sort of toy model of what the world at large might come up with

  • in terms of arms races.

  • And for myself, I have been spending an increasing amount of time in Asia recently, just to

  • try to pull in more people into what has so far been mostly an Anglo-American

  • discussion.

  • So, like, this is, I think, one thing that should be done, and I'm going to do it.

  • Well, as someone who is