I'm going to ask a question, but you can only answer by saying either 'yes,' 'no,' or 'it's
complicated.'
Alright?
So, let's start over here.
Is some form of superintelligence possible, Jaan?
'Yes,' 'no,' or 'it's complicated.'
Yes.
Yes.
Yes.
Yes.
Yes.
Yes.
Definitely.
No.
Well, this was disappointing; we didn't find any disagreement.
Let's try harder.
Just because it's possible doesn't mean that it's actually going to happen.
So, before, I asked if superintelligence was possible at all according to the laws of physics.
Now, I'm asking: will it actually happen?
A little bit complicated, but yes.
Yes, and if it doesn't then something terrible has happened to prevent it.
Yes.
Probably.
Yes.
Yes.
Yes.
Yes.
No.
Shucks, still haven't found any interesting disagreements.
We need to try harder still.
OK.
So, you think it is going to happen, but would you actually like it to happen at some point?
Yes, no, or it's complicated?
Complicated, leaning towards yes.
It's complicated.
Yes.
Yes.
It's really complicated.
Yes.
It's complicated.
Very complicated.
Well, heck, I don't know.
It depends on which kind.
Alright, so it's getting a little bit more interesting.
When is it going to happen?
Well, we had a really fascinating discussion already in this morning's panel about when
we might get to human level AI.
So, that would sort of put a lower bound.
In the interest of time, I think we don't need to rehash the question of when going
beyond it might start.
But, let's ask a very related question to the one that just came up here.
Namely: if something starts to happen, if you get some sort of recursive self-improvement or some other process whereby intelligence in machines starts to take off very, very rapidly, there is always a timescale associated with this.
And there I hope we can finally find some real serious disagreements to argue about
here.
Some people have been envisioning this scenario where it goes FOOM and things happen in days or hours or less.
Whereas others envision that it will happen, but that it might take decades or even thousands of years.
So, if you think of some sort of doubling time, some sort of rough timescale on which
things get dramatically better, what time scale would you guess at, Jaan?
Start now or starting at human level?
No, no, so once we get human level AI or whatever point beyond there or a little bit before
there where things actually start taking off, what is the sort of time scale?
Any explosion, as a nerdy physicist, has some sort of time scale, right, on which it happens.
Are we talking about seconds, or years, or millennia?
I'm thinking of years, but it is important to act as if this timeline were shorter.
Yeah, I actually don't really trust my intuitions here.
I have intuitions that we are thinking of years, but I also think human level AI is
a mirage.
It is suddenly going to be better than human, but whether that is going to be a full intelligence
explosion quickly, I don't know.
I think it partly depends on the architecture that ends up delivering human level AI.
So, this kind of neuroscience-inspired AI that we seem to be building at the moment, which needs to be trained and have experience in order to gain knowledge, that may be, you know, on the order of a few years, so possibly even a decade.
Some numbers of years, but it could also be much less.
But, I wouldn't be surprised if it was much more.
Potentially days or shorter, especially if it's AI researchers designing AI researchers.
Every time there is an advance in AI, we dismiss it as 'oh, well, that's not really AI': chess, Go, self-driving cars.
AI, as you know, is the field of things we haven't done yet.
That will continue when we actually reach AGI.
There will be lots of controversy.
By the time the controversy settles down, we will realize that it's been around for
a few years.
Yeah, so I think we will go beyond human level capabilities in many different areas, but
not in all at the same time.
So, it will be an uneven process where some areas will be far advanced very soon, already to some extent, and others might take much longer.
What Bart said.
But, I think if it reaches a threshold where it's as smart as the smartest, most inventive human, then, I mean, it really could be only a matter of days before it's smarter than the sum of humanity.
So, here we saw quite an interesting range of answers.
And this, I find, is a very interesting question because, for reasons that people here have published a lot of interesting papers about, the timescale makes a huge difference.
Right, if it's something that's happening on the timescale of the Industrial Revolution,
for example, that's a lot longer than the time scale on which society can adapt and
take measures to steer development, borrowing your nice rocket metaphor, Jaan.
Whereas, if things happen much quicker than society can respond, it's much harder to steer, and you kind of have to hope that you've built in good steering in advance.
So, for example, in nuclear reactors, we nerdy physicists like to stick in graphite rods as moderators to slow things down and maybe prevent it from going critical.
I'm curious if any of you feels that it would be nice if this growth of intelligence, which you are generally excited about, with some caveats, happened a bit slower, so that it becomes easier for society to keep shaping it the way we want.
And, if so, and here's a tough question, is there anything we can do now, or later on when it gets closer, that might make this intelligence explosion or rapid growth of intelligence simply proceed slower, so we can have more influence over it?
Does anyone want to take a swing at this?
It's not for the whole panel, but anyone who...
I'm reminded of the conversation we had with Rich Sutton in Puerto Rico.
Like, we had a lot of disagreements, but we could definitely agree that slower paths are better than faster ones.
Any thoughts on how one could make it a little bit slower?
I mean, the strategy I suggested in my talk was somewhat tongue-in-cheek.
But, it was also serious.
I think this conference is great and as technologists we should do everything we can to keep the
technology safe and beneficial.
Certainly, as we do each specific application, like self-driving cars, there's a whole host
of ethical issues to address.
But, I don't think we can solve the problem just technologically.
Imagine that we've done our job perfectly and we've created the safest, most beneficial AI possible, but we've let the political system become totalitarian and evil, either an evil world government or even just a portion of the globe that is that way; it's not going to work out well.
And so, part of the struggle is in the area of politics and social policy, to have the world reflect the values we want to achieve, because we are talking about human AI.
Human AI is by definition at human levels and therefore is human.
And so, the issue of how we make humans ethical is the same issue as how we make AIs that
are human level ethical.
So, what I'm hearing you say is that, before we reach the point of getting close to human-level AI, a very good way to prepare is for us humans, in our human societies, to try and get our act together as much as possible and have the world run with more reason than it perhaps does today.
Is that fair?
That's exactly what I'm saying.
Nick? Also, I just want to clarify again that when I asked about what you would do to slow things down, I'm not talking at all about slowing down AI research now.
We're simply talking about if we get to the point where we are getting very near human
level AI and think we might get some very fast development, how could one slow that
part down?
So, one method would be to make faster progress now, so that you get to that point sooner, when hardware is less developed, and you have less hardware overhang.
However, the current speed of AI progress is a fairly hard variable to change very much, because there are very big forces pushing on it, so perhaps the higher-elasticity option is what I suggested in the talk: to ensure that whoever gets there first has enough of a lead that they are able to slow down for a few months, let us say, to go slow during the transition.
So, I think one thing you can do, I mean this is almost in the verification area, is to make systems that provably will not recruit additional hardware or redesign their hardware, so that their resources remain fixed.
And I'm quite happy to sit there for several years thinking hard about what the next step to take would be.
But, it's trivial to copy software.
Software is self-replicating and always has been, and I don't see how you can possibly
stop that.
I mean, I think it would be great if it went slow, but it's hard to see how it does go slow given the huge first-mover advantages in getting to superintelligence.
The only scenario that I see where it might go slow is where there is only one potential
first mover that can then stop and think.
So, maybe that speaks to creating a society where, you know, AI is restricted and unified, without multiple movers.
Yeah, Demis, so your colleague Shane Legg mentioned that the one thing that could help a lot here is if there are things like this industry partnership and a sense of trust and openness between