I want to thank the organizers for inviting me to this.
This is way outside my usual area.
I'm a mathematician, and the closest I get would be SIGCOMM,
but this has been a lot of fun.
And I have this incredibly pretentious title.
And so I'm going to try to explain
to you what I mean by this.
Online I have a bunch of videos that go
into this in a lot more detail.
So sort of think of this as a quick preview of the videos.
And I have a lot of people to thank,
not enough time to give them all the credit they deserve.
So what I'm interested in is these sort of major transitions
in evolution.
But they're also changes in architecture,
and you see increases in complexity and elaboration
of networks.
Unfortunately, those are the four most confused subjects
in all of science.
And engineers know a lot about these things,
but they keep that to themselves.
So I'm going to focus on two parts of this.
Of course, you're interested in this stuff at the top.
But I'm going to kind of use bipedalism, that transition,
as an example.
If we hadn't done that, none of the rest would have happened,
so that's a really crucial--
they're all crucial, but that one's particularly crucial.
And how do I explain universals?
Well, my main way of doing it is with math.
But we'll not do that today.
We'll focus on trying to look at some diverse domains, so not
just networking, but like I said, bipedalism and our brains
and how our brains work.
Currently, unfortunately, we have
kind of a fragmented theory behind this.
And so one of the objectives of my research is to try to get this to be not nine separate subjects, but really one.
And the framework for trying to do this is to create a theory which can then be used to understand these transitions.
And again, lots of details in the videos.
So now, I'm very different from this community.
Maybe only one letter different, but that
makes a big difference.
But I think there are also a lot of interests that we have in common.
We want to have all these features.
I may be more theoretical.
Maybe you're more practical.
But I think we also, again, maybe
have different priorities but the same interests.
And also dynamic and deterministic.
And by deterministic I just mean in the way
I think about the problems today,
I focus on not average behavior, but kind of what
goes on worst case.
And so in bipedalism, one of the most important things
is a trade-off between robustness and efficiency.
Now of course, we'd like to be both.
We'd like to be in the lower left hand corner.
That's the ideal case.
And if you compare us with chimps, for example,
at distance running we're about four times
as efficient as they are, and that's really substantial.
And if you've got a bicycle, you get another factor of two or so, roughly.
But much more fragile.
And the bike makes the crashes worse,
and so that's the trade-off we see in adopting bipedalism.
And so what I want to do is think about these kinds
of trade-offs.
We'd like to be cheap, we'd like to be robust.
But it's hard to be both.
Now the cardiovascular physiology part of it
is very interesting as well.
We have a very upgraded cardiovascular system
compared to chimps.
If you want to read about that, that's a recent paper.
And I have some, again, videos online on this physiology.
So we'll not talk about physiology.
We're going to worry about the balance part of it,
and not worry about efficiency, but really robustness.
And ideally, again, we'd be cheap, fast, flexible,
and accurate.
We'd have all these things.
Again, I'm going to ignore the cheap dimension.
PowerPoint only lets you really draw things in two dimensions,
so we're going to keep projecting things
into two dimensions.
So again, we'd like to be fast, flexible, and accurate,
but it's hard to be all of those things.
So what I want to talk about is the trade-off
in layered architectures, and focus
on a very simplified view of what our brains do,
which is planning and reflexes.
And as an example, this task.
This is not me.
I'm more of an uphill kind of guy.
So if this is me, we'd be watching a crash.
But what we can see here is this higher level
planning using vision is slow but very accurate.
And then you have a lower level at the same time,
a reflex layer, which is fast dealing with the bumps.
So you've got the trail you're following and the bumps.
And so we can think about this planning layer.
It's slow, but it gives us a lot of accuracy, flexibility,
it's centralized.
It's conscious, deliberate.
And it deals with stable virtual dynamics.
But just the opposite of the reflex layer,
which deals with the bumps.
It's fast, but it's inaccurate, rigid.
It's very localized and distributed,
and it's all unconscious and automatic.
And it deals with the unstable real dynamics to create those stable virtual dynamics.
So these are really opposite, completely
opposite functions that the same nervous system multiplexes
very effectively.
And so we put those two things together.
We're not ideal in the corner, but we
behave almost as if we are.
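To make that concrete, here is a minimal sketch of the idea, with a made-up model and made-up numbers rather than anything from the talk: a slow but accurate planning layer follows the trail, a fast but coarse reflex layer absorbs the bumps, and the combination does far better than either layer alone.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
trail = np.cumsum(0.01 * rng.standard_normal(T))   # slowly drifting reference (the trail)
bumps = 0.5 * rng.standard_normal(T)                # fast disturbances (the bumps)

DELAY = 20     # the planning layer only sees the trail after a long delay
STEP = 0.25    # the reflex layer reacts immediately, but its corrections are coarse

def rms_error(use_plan, use_reflex):
    err = np.zeros(T)
    for t in range(T):
        target = trail[t] + bumps[t]
        plan = trail[t - DELAY] if (use_plan and t >= DELAY) else 0.0     # accurate but stale
        reflex = STEP * np.round(bumps[t] / STEP) if use_reflex else 0.0  # immediate but coarse
        err[t] = target - plan - reflex
    return float(np.sqrt(np.mean(err ** 2)))

print("planning layer only:", rms_error(True, False))
print("reflex layer only  :", rms_error(False, True))
print("both layers        :", rms_error(True, True))
```

Neither layer alone tracks well, but layered together they come close to the ideal corner, which is the point of the cartoon.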
And so of course we'd like to be better, faster, cheaper.
You can usually choose two or one at best.
And again, we're going to focus on this trade-off between fast,
accurate, and flexible.
And again, we're projecting very high-dimensional trade-offs into these two dimensions.
And we're going to focus on just these aspects right now.
And again, how do we talk about that?
Well, again, we have a math framework for that,
but I'm going to talk about how this cuts across many domains.
So I claim that this feature is universal: laws and architectures.
And again what I mean by law is a law says,
this is what's possible.
Now in this context, this is what we can build out
of spiking neuron hardware.
But what is an architecture?
Architecture is being able to do whatever is lawful.
So a good architecture lets you do what's possible.
And that's what I mean by universal laws
and architectures.
What I claim is, in this sort of space of smart systems,
we see convergence in both the laws and the architectures.
And so, again, I want to try to talk
about this kind of picture, but in some diverse domains.
So what are some of the other architectures
that look like this?
Well, one that you're obviously familiar with
is this one from computing, where
we have apps sitting on hardware mediated by an operating
system.
We don't yet really understand what the operating system
is in the case of the brain.
We know it's got to be there, and we
know it's got to be really important,
but we're a little murky on what it is and exactly how it works.
So one of the things that I'm interested in
is kind of reverse engineering that system.
So you're very familiar with the universal trade-offs
you have here.
So for example, if you need absolutely the fastest functionality, then you need special-purpose hardware.
But you get the greatest flexibility
by having diverse application software.
But that would tend to be slower,
and you've got that trade-off.
And then Moore's Law, of course, shifts this whole curve down
as you make progress.
Unfortunately, there's currently no Moore's Law
for spiking neurons.
We're kind of stuck with the hardware we have.
But the operating system is crucial in tying these two
things together.
So now we have a computer science theory
that more or less formalizes some aspects of this in a sense
that if you want to be really fast,
you have to have a very constrained problem,
say, in Class P. But if you want to be
very general in the kind of problems you solve, then
unfortunately your algorithms are necessarily
going to run slower.
It turns out, at the cell level we have
the same sort of trade-offs.
If you want to be fast, you better
have the proteins up and running,
but your greatest flexibility is in gene regulation
and also gene swapping.
So what we have here is these convergent architectures,
the fundamental trade-off being that you'd
like to have low latency, you'd like to be fast,
you'd like to be extremely accurate.
But the hardware that we have available to us
doesn't let us do those simultaneously,
and there's a trade-off.
And then we exploit that trade-off
to do the best we can with good architectures.
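As a toy illustration of that trade-off (my own example, not from the talk), a precomputed lookup table plays the role of special-purpose hardware, fast but able to answer exactly one question, while a general expression evaluator plays the role of flexible application software, able to answer anything you can write down but paying for it on every call:

```python
import timeit

TABLE = {x: 3 * x * x + 1 for x in range(1000)}   # "special-purpose": one fixed, precomputed function

def special(x):
    return TABLE[x]                                # fast, but only answers this one question

def general(expr, x):
    return eval(expr, {"x": x})                    # answers anything you can write down, but slower

print("special-purpose:", timeit.timeit(lambda: special(417), number=100_000), "s")
print("general-purpose:", timeit.timeit(lambda: general("3*x*x + 1", 417), number=100_000), "s")
```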
So I want to talk a little bit more
about this in a little more detail.
And I want to kind of go through and use this example of balance
as a way of seeing a little bit more detail about how
these systems work.
And I want to sort of connect the performance
with the underlying physiology a little bit.
So what we're going to do is we're
going to do a little experiment using your brains.
And so one thing I want to do is use vision.
And I want you to try to read these texts as they move.
So it turns out, up to two Hertz you don't have
too much trouble doing this.
But between two and three Hertz, it gets pretty blurry.
And that's because that's as fast as your eye can move
to track these moving letters.
Now I want you to do a second experiment, which
is to shake your head no as fast as you can
while you're looking at this.
Now I don't mean big, I mean really fast, OK?
And it turns out no matter how fast you shake your head,
you can still read, certainly, the upper left.
So it turns out your ability to deal with head motion is much
faster than for object motion.
So why is that?
So first of all, evolutionarily why,
and then mechanistically why.
So there's a trade-off.
Object motion is flexible and very accurate, but slow,
whereas head motion is fast but relatively inflexible.
We'll see why that is.
So why is that?
Well, when you do object motion tracking, you're using vision.
Shouldn't be surprised by that.
So vision is very high bandwidth, but it's slow,
several hundred milliseconds of delay.
That's why you get this two to three Hertz bandwidth.
So slow but very flexible.
So your visual system did not evolve
to look at PowerPoint slides, yet you're
sitting here doing that.
And it's also very accurate, and we'll see in a minute
why the accuracy is there.
For head motion, you have a completely separate system that
doesn't use vision directly.
It has these sort of rate gyros in your ear. And it's very fast, but as we'll see, inflexible and inaccurate. And this is the vestibulo-ocular reflex.
So that's a very low delay system.
In fact, this is the lowest delay system in your body.
It's about 10 milliseconds.
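A back-of-the-envelope sketch, with assumed delay values and a crude 90-degree rule of thumb, shows why those two delays put the break points roughly where they are:

```python
def rough_tracking_bandwidth_hz(delay_s):
    # a pure delay tau adds phase lag of 360 * f * tau degrees at frequency f;
    # feedback roughly stops helping once that lag nears a quarter cycle,
    # i.e. around f ~ 1 / (4 * tau)
    return 1.0 / (4.0 * delay_s)

for name, delay in [("vision (~200 ms delay)", 0.200), ("VOR (~10 ms delay)", 0.010)]:
    print(f"{name}: usable out to roughly {rough_tracking_bandwidth_hz(delay):.1f} Hz")
```

The numbers are only illustrative, but the order-of-magnitude gap between the two pathways is the point.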
So this is sort of a minimal cartoon that shows you
that this is the mechanism, both acting
through the same muscles of the eye, but two completely
separate pathways.
And then there's this trade-off between these two.
And that's why it's so much faster with your head
than with the vision.
And again, evolutionary, why is that?
Well, if you're a hunter, object motion is much slower than your own motion.
And so you need to be able to have a much better head
motion control system.
Turns out this is another situation
where we have a much more enhanced system than chimps.
Chimps don't need this as much as we do.
And so their VOR system is not as fast as ours is.
Now any top predator needs this ability
to be able to stalk at a distance,
but also be fast once you get there.
So it's not just us that has to have this capability.
And I don't want to make it all about violence.
You have to do this in all sorts of things.
It's not just vision, it's olfaction and other things
as well.
So this object motion is kind of part of this planning
and vision side of the picture.
And this head motion is sort of this reflex example
because I can't have you all get on a bicycle
and ride it down a mountain and crash and stuff like that.
So we have to do a simpler example here.
So again, what I want to repeat here is that a law says what's possible: what components can be built from neural hardware.
Ideally we'd have hardware that's
fast and flexible and accurate.
But we can't build that out of the hardware that--
we're sort of built out of fish parts, right,
and so there's only so much you can build with it.
So that's what I mean by law.
But what good architecture allows you to build
is we're building this vision system and this reflex system
out of the same hardware.
A good architecture allows you to do that.
And we're going to see a little more detail
about how that works.
So what a good architecture, though, allows you to do
is tune these to give you sort of the illusion
of fast and flexible, even though none of the parts
have that.
And so is that a contradiction?
Well, not really.
What happens is you're tuned to a specific environment,
but then there are tasks that you don't do very well.
So you're not very good at tracking
some fast-moving objects.
But you don't need to.
And you're familiar with this, right?
Because here's an example.
We'd like our memory to be fast, large, and cheap.
But there aren't any parts that are fast, large, and cheap.
There's all these different parts.
But of course, what you do is you build a virtual memory,
and it creates sort of the illusion of fast, large,
and cheap.
And of course, it's still fragile in some worst case.
But most of the time that doesn't hurt us.
So we're familiar with this idea of using virtualization
in architecture to take components that
have these trade-offs and create virtualization
that appear to beat this, at least for some cases.
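The usual back-of-the-envelope version of that, with illustrative timings, looks like this:

```python
def average_access_ns(hit_rate, fast_ns=1.0, slow_ns=100.0):
    # average access time of a small fast memory backed by a big slow one;
    # the worst case is still slow_ns, but it rarely shows up
    return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns

for hit_rate in (0.90, 0.99, 0.999):
    print(f"hit rate {hit_rate:.1%}: ~{average_access_ns(hit_rate):.2f} ns on average")
```

At high hit rates the system looks almost as fast as the fast part, even though the worst-case access is still slow.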
So this is sort of the summary so far
about how your brain works with respect to vision.
But I want to go a little bit deeper into this system.
And first of all, these signals have to match.
Why is that?
Because they're using the same actuators,
and your head motion and object motion must be aligned.
And so that can't be encoded genetically.
So what you need is a gain to tune, and it sits there. So there's that little gain, and it's tuned by this system called the accessory optic system that uses part of the cerebellum.
So this is another little piece in that.
And it actually is a whole different visual pathway
that doesn't go through cortex.
And so this is basically the essence of vision.
And the vestibular nuclei play this really crucial role
in tuning for this system.
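A minimal sketch of what such a tuning loop could look like, using a generic textbook-style adaptation rule rather than anything specific from the talk: the eye counter-rotates with gain g, the leftover image slip is the error signal, and the gain is nudged whenever slip correlates with head motion.

```python
import numpy as np

rng = np.random.default_rng(1)
g, eta = 0.3, 0.05                      # start badly mistuned; small learning rate

for step in range(2000):
    head = rng.standard_normal()        # head velocity at this instant
    eye = -g * head                     # reflexive counter-rotation
    slip = head + eye                   # residual image motion = (1 - g) * head
    g += eta * slip * head              # slip correlated with head motion -> adjust the gain

print(f"tuned gain ~ {g:.3f} (ideal is 1.0)")
```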
So one thing that's interesting about this
is that this is connected up to your balance system.
So your balance system and your muscles
are directly connected to this.
And you have this trade-off, again, between vision and VOR,
which is fast.
But you now have a third thing, proprioception.
So proprioception is the feeling that you
have of where your body is.
So what I want to do is demo this.
And if you're really brave, you'll do it with me.
So you can try this.
So if you're interested, it's a good time to take a break
and stand up.
So everybody stand up.
You don't have to, but if you want to, stand up.
OK.
So what I want you to do first: everybody can stand up and balance. No problem, right?
So it turns out if you close your eyes,
you can still stand up and balance and kind of move
around a little bit.
So open your eyes and watch me. So I can stand here and I can move around, no problem. Now what's going to happen without vision is I'm going to run into things as I move around, right, but it doesn't hurt my balance so much.
Now OK, open your eyes.
So we can't turn off proprioception.
If we did, you'd all fall on the floor.
But we can degrade it.
We can degrade that by standing on one leg.
And so if you stand on one leg, you're a little wobbly,
but you can still do it OK, right?
However, if we stand on one leg and close our eyes,
we're stuck with this lousy VOR system.
It's fast, but it's not very accurate.
So if you stand on one leg and then close your eyes,
all of a sudden you're going to start teetering.
And depending on how old and feeble you are,
you're going to fall over.
So I'm old and feeble and had a lot of concussions
when I was a kid.
I could've been smart.
Anyway, but when I stand on one leg and close my eyes, notice I
start wobbling and eventually I fall over.
Why is that?
That's because this VOR system in my head
is fast, but not very accurate.
So when I lose the accurate visual system,
the errors accumulate and I fall over.
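You can see that mechanism in a tiny simulation; the bias, noise, and time step are made-up numbers: a fast rate sensor is fine moment to moment, but integrate it to hold a posture with no accurate reference and the error keeps growing until you run out of sway margin.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, bias, noise_std = 0.01, 0.02, 0.5        # 10 ms steps; small bias and noise in deg/s
tilt_estimate_error = 0.0
for k in range(1, 1001):                     # 10 seconds of standing there
    tilt_estimate_error += (bias + noise_std * rng.standard_normal()) * dt
    if k % 200 == 0:
        print(f"t = {k*dt:4.1f} s, accumulated error ~ {abs(tilt_estimate_error):.2f} deg")
```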
So you can sit down now.
Now if you lose the VOR system, you're toast.
You can't stand up or do anything at all.
And there are chemical means by which you can do this.
Not that I recommend that.
So now where does motion sickness come from?
It turns out motion sickness looks like it's a bug, because these vestibular nuclei are connected to all sorts of things.
So you need it to be connected to your muscles
and your proprioception.
This whole system has to be integrated together.
But boy does that look like a bug.
This system is connected to your GI tract and stuff like that.
And so motion sickness, which you can get with VR, for example, or in cars and in a lot of other situations, is what happens when the vestibular nuclei detect an error.
And what do you do then?
Well, you throw up, which seems like a crazy bug, right?
Well, maybe not.
This is the most sensitive toxin sensor in your body.
So before you notice any other effects of ingesting a poison,
you know it here.
So what you want to do is you want
to connect that right to the GI tract
and throw up to get rid of the poison.
So that was an adaptive evolutionary step.
But now you put it in a modern environment
with cars and boats and virtual reality.
And it's now a side effect that is less pleasant.
The other thing you do is you need this VOR
sensor to be in your head.
And why is that?
So now we can't move that sensor around, it's stuck here.
So we have to do another experiment, which
is we're going to do a stick balancing experiment.
And what I want to show you-- and afterwards you can come up
and try it yourself--
is that as I make the stick shorter, it gets harder.
Why does it get harder to balance a shorter stick?
It's more unstable.
And I've got this big delay in my visual system.
And as I get a more unstable system with a delay,
it gets harder and eventually impossible.
But much more interesting is that if I take a long stick
and look down, put the sensor in the middle,
it's even harder still.
And so I'm going to do a little demo now with that.
So if I have the stick long, it's easy.
And notice that it's very slowly unstable.
But if I make it shorter, it gets harder and it's faster.
Watch.
Right?
So it's faster, and I haven't gotten any faster.
And so it gets harder.
And so I get it down really short here
and I can't do it at all.
And notice that I oscillate before I fail.
Another interesting thing we won't go into today,
but the theory predicts that.
But here's the interesting thing.
If I make it long, very, very stable, easy, no problem.
But if I look down it and occlude my peripheral vision,
completely impossible.
So you can, again, try this.
You can just get any kind of pointer and try this experiment.
If I have to look down, it's completely impossible, whereas if I look at the top, it's easy.
So you can't have the sensor in the middle.
The system won't work.
We can't move the sensor down here.
Fortunately it was here in the fish in the first place.
So when we got built out of fish parts, that part was OK.
We have some other wiring that I could
go into that is really bad.
So we need sensors that are high and fast.
Why?
Well, this theory-- we have a theory that says exactly why.
And again, I'm not going to have time to go into that. But there are videos on that online.
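One piece of that story is easy to sketch, with the caveat that the delay value and the particular bound are my illustrative choices, not the speaker's slides: the inverted stick has an unstable rate of roughly p = sqrt(g/L), and a classical robust-control bound says that with a pure feedback delay tau the achievable worst-case amplification is at least e^(p*tau). Shorter sticks mean bigger p and the bound blows up; looking down the stick is bad for a further reason (watching a point partway up adds its own limit), which this sketch doesn't cover.

```python
import math

g, tau = 9.81, 0.2                     # gravity; ~200 ms visual feedback delay (assumed)
for L in (2.0, 1.0, 0.5, 0.25, 0.1):   # stick length in metres
    p = math.sqrt(g / L)               # unstable "falling" rate in rad/s
    print(f"L = {L:>4} m: p = {p:4.1f} rad/s, p*tau = {p * tau:.2f}, "
          f"worst-case amplification >= {math.exp(p * tau):.1f}")
```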
And again, I'm arguing that there's these universal laws
and architectures.
And how does this theory connect all these things?
Well, one of the things you need in order to have a theory is to understand how the hardware influences the system-level performance.
So let me say a little bit about the hardware
because this is a hardware meeting, sort of.
So this is sort of the optical fiber of the brain.
It's spiking neurons.
So what a nerve is is a bunch of axons, and the axons are what carry the signals between individual neurons.
So I'm going to look at a couple of cranial nerves.
They happen to be about the same overall diameter, but with a huge amount of diversity inside.
So what I'm plotting here, it's a log plot.
I'm plotting the mean axon diameter on the x-axis.
And I'm sort of ignoring--
there's some variability.
And then the axons per nerve.
So these are all about the same cross-sectional area.
So a fixed cross-sectional area on this plot
would look like a slope one line here.
And the optic nerve has about a million axons,
and they're about a micron.
That vestibular nerve has 50 times fewer axons,
but they're bigger, faster.
Auditory sits in there somewhere.
Olfactory's off the curve.
There's some spinal nerves here, and so on.
We're going to focus on this optic and vestibular.
But there's four orders of magnitude in just this picture.
So enormous heterogeneity in the composition of these otherwise
similar nerves.
But we're interested in speed versus accuracy,
not axons per nerve.
And it turns out, again--
I'm not going into the details here,
but this is work with some colleagues in neuroscience,
which is to say, given these are spiking neurons, what
is the speed accuracy trade-off, that is,
the latency and bandwidth trade-off here?
And it's basically you have a fast but inaccurate vestibular
nerve, a slow but accurate optic nerve.
So there is this speed accuracy trade-off
at the level of spiking neurons.
And these are extremely diverse and heterogeneous.
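To connect those numbers, here is a deliberately crude toy model, my own simplification rather than the published analysis: spend a roughly fixed nerve cross-section on many thin axons or a few thick ones, let conduction speed scale with axon diameter, and let resolution scale with the number of axons. The diameters, the path length, and the speed rule of thumb are order-of-magnitude assumptions.

```python
def nerve(name, n_axons, diameter_um, path_cm=5.0):
    speed_m_per_s = 6.0 * diameter_um            # crude myelinated-axon rule of thumb
    delay_ms = (path_cm / 100.0) / speed_m_per_s * 1000.0
    print(f"{name:>10}: {n_axons:>9,} axons of ~{diameter_um} um -> "
          f"~{delay_ms:.1f} ms conduction delay, 'resolution' ~ {n_axons:,} channels")

nerve("optic", 1_000_000, 1.0)       # many thin axons: slow but high resolution
nerve("vestibular", 20_000, 7.0)     # few thick axons: fast but coarse
```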
If we look at the rest of the body,
it's not just four orders of magnitude.
It's six, seven orders of magnitude,
of diversity just within the same nervous system.
Now I think that networking is also
going to get more diverse and heterogeneous, not less.
And so I think this theory is going to help there.
And we're also going to want to solve these kinds of problems
in automation in our systems.
So we see this extreme diversity in the hardware,
but we also see this extreme diversity in the performance
of the system.
And what the theory does is connect those.
And again, I'm not going to go into the details,
but you can sort of imagine what that has to look like.
There is a formula for sort of the simplest possible case, and a little block diagram to go with it.
And so here's where we get this optimal,
robust, efficient, intelligent, all these different things that
are kind of trying to come out of the theory.
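To give the flavor of that simplest-case formula, here is a hedged reconstruction of the kind of result meant, not the slide itself: take a scalar state x[t+1] = a*x[t] + u[t] + w[t] with disturbances bounded by 1, where the controller only learns each disturbance T steps after it hits. The last T disturbances are still unanswered, so the best achievable worst-case error is 1 + a + ... + a^(T-1), and delay hurts far more when the dynamics are unstable (a > 1).

```python
def worst_case_error(a, T):
    # 1 + a + ... + a^(T-1): the last T disturbances are still unanswered
    return sum(a ** k for k in range(T))

for a in (1.0, 1.2, 1.5):
    for T in (1, 5, 10):
        print(f"a = {a}, delay T = {T:2d}: worst-case error >= {worst_case_error(a, T):7.2f}")
```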
But again, dynamic and deterministic, worst case,
because if you ride a bike down a mountain, for example,
you don't want to crash.
And it's not OK to land our airplanes on average.
We want them to land all the time.
And so you see this extreme diversity and heterogeneity.
And if you look across the whole animal kingdom, again,
built out of the same hardware, you
see even greater diversity and heterogeneity.
Now there have been theory fragments before,
as I mentioned, but not this integrated theory
that puts this all together.
That's what's new and just happened in the last few years.
But it lets us connect these hardware-level limits with the performance we see at the system level.
We think this theory is going to be
very relevant for networking, and particularly
cyber physical systems.
So what experiments do we do here?
Well, because this is all very new,
it turns out this is a relatively untouched part
of neuroscience.
And so we can't do this experiment very easily.
We can't get IRB approval to have
people crash mountain bikes.
So we actually, in the upper left corner
I just did a little cartoon version.
We actually have a nice little virtual reality
game where you have a force feedback steering
wheel for the bumps and a trail you have to follow.
And so we talked about this balancing stick, these motion--
we have a lot of different theories and experiments.
But let me talk a little bit more about these universals
as I'm running out of time.
We've talked about the sensory motor system.
We talked about sort of this layered architecture
in our computers.
It turns out bacterial cells have the same sort of trade-off
between hardware and software.
And so these are all layered architectures,
and these all talk about where function is controlled.
So is the control function in the hardware, or is it done in the applications?
Is it done in cortex or is it done in reflexes?
I'm going to skip over this.
This is the thing I'm currently working on,
which is how does the microbiome affect the brain.
It's a fascinating problem, and yet
another layered architecture going from cells to brains.
So one of the things these architectures let you do
is massively accelerate evolvability.
So in this system, how do you get evolvability?
You swap apps.
Of course, some of us actually do write software once
in a while.
But mostly we just download it from somewhere else.
We also can swap hardware.
What we can't swap is the operating system. There aren't very many options; we've been stuck with TCP/IP for decades.
We're starting to play around with changing it.
But it's very hard to change this stuff in the middle.
It turns out horizontal transfer is also
how most systems do things.
So for example, genes and bacteria,
they do massive transfer of genes.
They swap genes like crazy.
That's why they get antibiotic resistant so easily.
So again, this is an example.
If you sequence just E. coli around the world, there's--
now this has gone way up.
This is out of date.
There's about 4,000 genes in each cell.
But there is now 40,000 or 50,000 different genes
across all the different cells because they're massively
swapping these things.
It's just like if you looked at all the apps on all the cell phones in this room: there'd be a huge number, much higher than the number on any individual phone.
So this is how you accelerate evolvability, by swapping.
So the idea is that we also do that.
What we're doing right now is swapping memes.
We swap ideas.
Of course, some of us have occasionally an idea
that's new.
But mostly we get ideas from other people.
And we also swap hardware.
We have an enormous amount of diverse hardware.
But there's sort of some subcortical regions in which
we're kind of all the same.
Huge diversity in all these different systems, but not much difference there.
There's not much diversity in this room in the operating
systems we're running.
It turns out the transcription and translational machinery
that is the core operating system in the cell
is universal across all cells.
There are about 10 to the 32 cells on the planet, and they all run exactly the same one. Easily evolvable at the top and bottom, not evolvable in the middle.
And so what could go wrong with this?
Well, if it's easy to swap genes that we care about,
it's easy to swap viral genes, which we do all the time.
Now, predators don't care about architecture; they just want the meat.
But a virus wants to keep most of the things intact
and just hijack these internals.
And there's some really amazing--
I mentioned zombies-- there's these things called
zombie parasites.
I'm out of time, so look up zombie parasites.
It's unbelievable the variety of them.
And what they do is they hijack the whole architecture.
Rabies is a good example of a system that is a parasite
but also uses the predatory nature of its host
to transmit itself.
So in us, our biggest problem is bad meme transfer, right?
The biggest problem humans have is
we have beliefs that we're moving around that are false,
unhealthy, and dangerous.
And so this is not good news.
Biology does not offer us an encouraging story
about network security.
So I'm going to end here.
I haven't told you much about this,
but I've suggested that there are
these kind of universal laws and architectures.
We finally have an integrated theory
that's starting to do that.
We have to deal with going beyond centralized control to this situation, because we need to do control over systems with communication delays.
And of course, we need layered control.
We might have some centralized planning,
but distributed reflexes.
Again, I think I'll stop here.
But the idea is that we have this theory with these names
that you're all familiar with that
are historical artifacts to some extent.
And instead of having all these different theories,
we need to sort of cut ourselves down to just a few.
And so one of the things we're working on
is getting a more unified picture
of how this all fits together.
And you probably expected me to talk about machine learning.
We won't understand learning until we understand
how these systems work.
And so that's sort of the next big challenge.
[APPLAUSE]