MALE SPEAKER: Hello, hi.
Welcome to Tech Talk by Kevin Kelly.
This talk is going to be available on Google Video.
And so we ask that you hold any questions that are at all proprietary to Google until the end of the talk.
So Kevin Kelly is well known around the world as the
editor-in-chief of the Whole Earth Review, and is the
author of the book Out of Control: The New Biology of
Machines, Social Systems, and the Economic World.
Today Mr. Kelly shares his ideas on the future of science
and the scientific method.
He's come to the right place because Google has an
important role to play in the future of science.
Join me with a warm welcome for Mr. Kevin Kelly.
[APPLAUSE]
KEVIN KELLY: Thank you.
It's a real pleasure being here.
In addition to being editor and publisher of an obscure
magazine called the Whole Earth Review, I was also one
of the co-founders of Wired Magazine.
And for the five or ten years that Wired was the center of
the universe, it really felt great.
And being here at Google feels like that again.
It feels like I'm at the center of the universe again.
And I use that in a kind of metaphorical way, but also in
a very real sense.
I define the center of the universe as that place where
there's the least resistance to new ideas.
And I feel that, in some ways, this is the
center of the universe.
And it's a real privilege to be talking at a place where
there's probably the least resistance to new great ideas.
And what I want to spend a few minutes this afternoon talking
about is something that I think is very important but
gets very little attention.
And that is the nature of the scientific method.
And I'll start there and kind of end up at some cosmic level, and I hope you'll go along for the journey. I think it may have some bearing on what you're doing in the long term.
So in addition to co-founding Wired Magazine and being its editor for many years, I'm also one of a small group of people who have been pioneering long-term thinking.
We have a foundation called the Long Now Foundation.
Among other things we're trying to build a clock that
ticks for 10,000 years.
And the idea of that clock is to serve as an icon and an encouragement to think long-term.
So what I'm trying to do a bit in this talk is also to take a very long-term view of things.
Rather than just thinking of the last five years or the next five years, I'm trying to stretch our perspective and go back 1,000 years and maybe look forward 50 years.
That's kind of asymmetrical, but it's the best I can do in
terms of the future.
So what's interesting about science is that science is the
only thing that generates news.
Science is really the only news.
If you pick up a paper, most of the stuff that's happening
is not really news.
It's just kind of repetition, a little recycling.
But science is the thing that generates real news.
What's interesting about the news that it generates is that
it's often novel.
And when we tend to think of the long-term history of say
the scientific method or science, we tend to think of
the inventions that science generates.
And as an exercise, we sent a questionnaire around to a bunch of scientists around the world asking, what are the most important scientific inventions of the last 1,000 or 2,000 years? And you come up with a list. And you can make your own list. But one of the nominations for the most important scientific inventions, or the most important inventions, over the last couple thousand years was hay.
Hay was actually nominated as the most important. This was Freeman Dyson's idea, that hay encouraged the cultivation of domesticated animals, which led to protein, which led to a longer life span, et cetera, to civilization.
So that's one.
Another one is antibiotics.
That was a really interesting, great, and important invention
in the history of the world.
Paper, who can argue against paper?
And, of course, this auxiliary technology of printing, that
was obviously a very important thing.
The rudder allowed navigation of the world, gave us a sense of the whole world, and encouraged trade. So that was also very important. Electricity, obviously, was another very interesting thing.
Once we harnessed the electrons, all kinds of other
great and wonderful things happened.
What's important about that is that you can make your own
list, and that these are different inventions.
But I'm actually not really concerned about that because
science makes news, but we very rarely look at science
itself, the method by which these things come about.
And that's what I'm really trying to do.
So if you suggest to someone that the method we know of as science today is still changing, and that the scientific method may, in fact, be very different 50 years from now than it is today, this is the kind of reaction you normally get.
Because when we talk about the long-term trend of the
scientific method, I find that people are very puzzled.
Science changing? Science will be different in 50 years? The scientific method?
And I think it will.
And so what I want to do is kind of take a quick tour
through the evolution of the scientific method in the past.
And interestingly, when I went to look at this, I found that basically there is no literature on this. There is no book about the history of the scientific method. And for something so fundamental, you would expect there to be one. But there isn't.
So I sort of cobbled these together and I'm going to go
through very quickly just to give you a sense of some of
the things I think are important.
And one of the patterns that you'll see, I hope, is that
this is a lot about information.
We have the idea of indexing and cataloguing.
Books have been around for a long time.
And then you have this idea of indexing or cataloguing the
information.
Or you have a book that is the index to the other books.
And so some of the great inventions in the history of the scientific method were the idea of an alphabetical index, the idea of having an index in a book, the idea of having a catalog of all the books in your library.
Then there was the collaborative encyclopedia
where you had more than one author, more than one expert
coming together, sharing information, and in such a way
that it was all catalogued and indexed.
Later on, several hundred years later, there were the first laboratories, where they began to do experiments, actually trying to observe, and measure, and record those observations.
We had the invention of observational tools like the telescope and the microscope, which greatly enlarged the scientific method. I, along with some other people, believe that the tools of science actually move science forward more than the discoveries of science do; in fact, the way in which you have progress is by the creation and progressive improvement of tools.
Then we had the controlled experiment, which was Francis Bacon's idea, that to do an experiment one needed to have controls. There were societies of experts that began to share the information and the observations that were being made. So they began to actually have a kind of peerage.
And then there was--
oops.
They got cut off.
I had a problem here.
This is [? Bolton's ?] idea that real science required that you be able to repeat an experiment, that someone else should also be able to do it at the same time that you are.
Peer review where they began to actually send out their
observations and have them peer reviewed by people who
knew about it and could either give them credibility or
verification.
And then there's Newton's hypothesis/prediction, where you make a hypothesis about something and make a prediction that you should find certain data at some point.
Falsifiable testability, Popper's idea that what really counts in terms of a prediction is whether it can be falsified, that that's the sort of criterion by which you judge a hypothesis.
Then Fisher and the randomized design, he was involved with
statistical analysis, bringing statistics into the design of
an experiment.
Placebo, somebody actually had to invent that.
That was not an obvious idea.
It took many years.
It was actually fairly late in coming, 1937, before the first
placebo experiment was done.
And computer simulations-- I'll talk a bit more about that-- began very early. Almost as soon as computers were invented, they began to do simulations.
And then the double blind refinement, where you actually
have both the observer and the patient not know about the
experiment.
And finally, I think, a very landmark moment was where
science began to study itself, where the scientific method
became a subject worthy of the scientific approach.
So one of the things that's interesting about this is that
I began to think about, what about China?
Now the thing about China is that it did
not discover science.
But if you take a list of all the technological inventions that China made, it's phenomenal and actually very little known in this culture.
Most of the technologies that China discovered, they
discovered not just a few years before the West, but
often, on average, maybe 500 years to 1,000 years before
the West discovered it.
So it was independent, and yet they did
not originate science.
And what was missing?
Here are some examples of the things that were actually discovered in China: paper, printing, gunpowder, the compass, the rudder, the stirrup, row crops, the iron plow, the wheelbarrow, cast iron, vaccination, the chain drive, the suspension bridge. They were using petroleum and natural gas as fuel, again, hundreds, if not thousands, of years before the West came to this.
And actually, going back to Francis Bacon: Francis Bacon, when he began the scientific method, believed that four inventions had more impact on science in his time-- this is around the 1600s-- than anything else. He said these four inventions, which were paper, printing, gunpowder, and the compass, transformed our world far more than any religious belief, far more than any political power or warring, conquering nation. Those four inventions, whose origins, he said, were unknown, were the transforming powers in his world. And he died before he discovered that all four of those inventions came from China.
Now, why didn't the Chinese invent science? There is actually no easy answer to this. A man named Joseph Needham devoted his entire life to trying to answer this question, and it's now called the Needham question: how could science and technology have been so far advanced in China for so long, and yet China did not continue going that way and actually come up with the scientific method?
And the answers, as I said, are very complicated. There are a number of strands to it. Part of it has to do with an inability to separate the political from the inquiry, the religious from the inquiry, and a sense of maybe valuing other things besides a quest for truth as a proper form of investigation; all those kinds of things wrap together.
What happened was that there was one discovery that China
didn't make.
And that discovery was the
scientific method of discovery.
You can go a long way with creating novelties, actually
having progress by developing new ideas without the
scientific method.
But the scientific method is the best way to do discovery.
And once you make that discovery, you're off on a
different course.
I tend to think of science as a structure of information
that allows discovery, in fact, as a
structure of discovery.
And this is just a kind of mapping of what they used to call in library science citation indexing, which we would now think of almost as a type of page ranking of scientific discovery.
So they would look at the whole of all the citations that were referenced in the footnotes of papers. And if you extract out a map of those backward links, you get something like a clustering of those journals, or those individuals, who were cited the most, basically the ones that were linked to the most. And this work was done by Eugene Garfield in the '50s in Philadelphia, and it was somewhat the inspiration that Larry and Sergey looked at for their PageRank.
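To make that idea concrete, here is a minimal sketch of ranking by backward links. This is not Garfield's actual method or Google's PageRank implementation; the damping factor, iteration count, and toy citation graph are illustrative assumptions of mine.

```python
# A toy ranking of papers by backward links, in the spirit of citation
# indexing and PageRank. Hypothetical toy graph; not a production algorithm.

def rank_citations(cites, iterations=50, damping=0.85):
    """cites maps each paper to the list of papers it cites."""
    papers = set(cites) | {p for refs in cites.values() for p in refs}
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(papers) for p in papers}
        for citing, refs in cites.items():
            if refs:
                share = damping * rank[citing] / len(refs)
                for cited in refs:
                    new_rank[cited] += share
        rank = new_rank  # papers that cite nothing simply leak rank; fine for a toy
    return rank

# Hypothetical citation graph: paper -> papers it cites.
graph = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
for paper, score in sorted(rank_citations(graph).items(), key=lambda kv: -kv[1]):
    print(paper, round(score, 3))
```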
And if you take this structure of scientific discovery just
as an example, and think about it as a way of structuring
knowledge and information as a way of knowing, that's the
view that I want to bring into what science is.
It's a way of knowing.
More importantly, it's actually a way that we change
how we know.
And I hope to return to that idea that, in fact, it's not
just the process of discovery, it's the process of how we
change how we discover.
Because there's a second order of delta in science.
It's not just studying things and learning new things, it's changing how we learn new things. It's learning how we learn new things. It has that recursive sense in it.
And I think that's really crucial because it's that
recursiveness that actually gives us generative power.
Almost all the things that we find interesting in the world have a sort of paradoxically recursive nature at the bottom. They are self-organizing. They're self-referential.
When you come right down to it, they make this sort of
paradoxical pointing back to themselves.
So the question is what's the next 150 years of science, I
mean what is it really?
What is it really?
What people really want to know is, when are we going to have flying cars? They're not interested in the science and the information. They want to know when we're going to have flying cars.
When are we going to have robots that talk back to us,
that can tell us they're smarter than we are?
And when are we going to have virtual reality and things
where we can just interface by thinking or by gesturing?
Well, I'm not going to tell you about those because I
don't know.
My point is that those are just individual novelties.
Those are just examples, like the Chinese, of just making
new things, like hay or penicillin.
They aren't really about the structure of
how we discover things.
And so what I want to try to talk about is some
speculations on the ways in which the scientific method
itself may change, and not so much the particular ideas or
inventions that it might throw up.
The first set, well first of all, I'm suggesting that
science will actually change more in the next 50 years than
it has in the past 400 years.
If we know anything about the kinds of trends and curves
that we see, then the actual structure of the scientific
method will change more in the next 50 years than
the last 400 years.
And if you believe that, then you have to go along with me
in trying to think about ways in which that might happen.
And I think-- and basically this will be the context of what I'm starting to suggest-- most of this will be in the realm in which you all work, which is the structuring of information and knowledge. That's what this is going to be about. So I might say that Google is about changing the structure of the scientific method in the next 50 years.
So, two: one thing that we know is that it's going to be a bio century. And here let me explain what I mean by that.
Right now, biology this year, has the most funding, the most
scientists, is generating the most results that are
published, and has the most economic value.
It's the most ethically important or relevant in terms
of the kinds of questions that it generates, and it has the
most to learn.
Let me explain that last one a little bit.
The living world, the biological world, basically
has had four billion years of learning.
Every day there's information being generated by biological systems that's being recorded in the genomes of organisms throughout the world, not just in the species, of course, but even in the individual ones. It's being embedded into the very ecological systems that we have. It's a vast amount of knowledge and information.
If you compare that to physics, physics is the same everywhere. It's pretty universal. There are deep mysteries in physics, in understanding what's underneath the hood in our universe. But the amount of information in physics is actually very small.
The amount of information that's embedded
in biology is huge.
And that's one of the reasons why biology is at the point right now where it's become the biggest science that we have, and will continue to be for at least the next 50 years.
There is so much to learn.
It's so deep.
It's so structured.
It's so complex.
And so the biggest mother lode of information and data in science in the next 50 years, in terms of understanding it, is in biology.
Third, computers are leading the third way of science. And I'll take a few minutes to tell you what I mean by the third way.
The traditional understanding of science has two parts. There were hypothesis and theory on one side, and measurement and observation on the other.
Those two went hand in hand, and you kind of tried to keep
them in balance.
You might have a weird idea, and you go out looking for data, and that was OK; or you might start with the data and try to come up with a theory.
But the two of them together were kind of like the two feet
of science.
And this is how the normal understanding of science
worked, that you needed to have theories that predict,
make hypotheses that would predict data that you would
find, and you had to have theories that were based in
some ways that would explain the data that you had.
And you try to let neither one outrun the other. So science had a kind of two-sided aspect that required both.
So on the measurement side, what's happened is that nothing, really nothing that I can think of, nothing that anyone else has been able to think of, is growing faster on this planet than information. It is the fastest growing entity on this planet.
Hal Varian, whom you're familiar with, did a study of how much information there is in the world. And he and Peter calculated that it was growing at 66% a year. In physical production there may be spikes; iPods may grow at a rate of 200% for a couple of years. But if we talk about anything on the scale of a decade or longer, there is nothing that we're making that is anywhere near the growth rate of information. So information, right now, is the fastest growing thing on this planet.
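As a back-of-envelope check on what that growth rate implies, here is a small sketch; the 66% figure is the one quoted above, and the derivation is mine.

```python
# What 66% annual growth implies for doubling time and a decade of growth.
import math

annual_growth = 1.66                              # information grows 66% per year
doubling_time = math.log(2) / math.log(annual_growth)
decade_factor = annual_growth ** 10

print(f"doubling time: {doubling_time:.2f} years")           # about 1.4 years
print(f"growth over a decade: about {decade_factor:.0f}x")   # about 160x
```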
And if you take a chart of data volume over time, in the beginning most of the advances were coming through increased precision. And then after a while we had an increase in the spectrum in which we could measure things.
In the beginning we had just what we could see. Then there were things that we could see through a microscope or telescope. Then we increased the sources. We could measure things with a thermometer. We could measure other things that we couldn't see. And then, over time, we had durations in which we could measure them all the time. For instance, we could measure them for longer periods, constantly, day and night, for years.
And what's happening now, I think, is of course that we're adding these technological sensors around the world. So we're basically generating huge, huge volumes of information all the time, in real time, constantly, everywhere around the earth.
So I call this zillionics.
Zillionics is the field in which you're dealing with
zillions of things.
Once you get into zillions, a lot of the tools that we have break down, and we're trying to invent ways in which to deal with [? zillion ?] things.
And science, of course, is generating
zillions of bits of data.
These are, actually, at this point, kind of old numbers. But terabytes, petabytes, exabytes, we're headed there very fast, zillionicbytes at some point or other.
On the other hand, on the hypothesis side of how we learn things, one of the things that's happening right now-- right now this is done kind of with human minds, very, very handcrafted-- is that we now have ways to do a type of science that's called multiple competing hypotheses.
Instead of going through, "I have a hypothesis about what may happen; I'll try that out, try some things; if it doesn't work, OK, I'll make another one," you actually make multiple simultaneous hypotheses to which you try to apply data. So hypotheses become kind of a mass phenomenon rather than just a single little thing you handcraft.
And there are ways in which you can actually try and
manage multiple hypotheses as a way to deal
with scientific discovery.
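Here is a minimal sketch of what managing multiple competing hypotheses can look like: keep a whole population of candidate hypotheses alive and let the data re-weight them. The coin-bias setup is purely hypothetical, just to make the mechanics concrete.

```python
# Competing hypotheses: the coin's probability of heads is one of these values.
hypotheses = [0.1 * k for k in range(1, 10)]
weights = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior

observations = [1, 1, 0, 1, 1, 1, 0, 1]   # 1 = heads, 0 = tails

for obs in observations:
    # Re-weight every hypothesis by how well it predicts this observation.
    for h in hypotheses:
        weights[h] *= h if obs == 1 else (1 - h)
    total = sum(weights.values())
    weights = {h: w / total for h, w in weights.items()}   # renormalize

best = max(weights, key=weights.get)
print(f"best-supported hypothesis: p(heads) = {best:.1f}")
```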
Another way is the combinatorial sweep. And this is what Stephen Wolfram did. OK, the way you explore something is: we'll generate every possible variation of it, and we'll explore through that space. And by exploring that space of possibility we'll know something about what that thing is. So he takes the CA, the cellular automaton, and makes every possible CA, makes that whole space of CAs, and then goes through it, trying to discern the characteristics and the behavior of the CA by exploring that possibility space.
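Here is a small sketch of a combinatorial sweep in that spirit: enumerate every elementary cellular automaton rule, all 256 of them, run each one, and inspect the whole possibility space. The grid size and step count are arbitrary choices of mine.

```python
# Sweep all 256 elementary cellular-automaton rules and run each one.

def step(cells, rule):
    """Apply an elementary CA rule number (0-255) to one row of cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        out.append((rule >> index) & 1)
    return out

def run(rule, width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1            # start from a single live cell
    for _ in range(steps):
        row = step(row, rule)
    return row

# Sweep the entire rule space and see which rules still show activity.
active = [r for r in range(256) if any(run(r))]
print(f"{len(active)} of 256 rules still have live cells after 15 steps")
```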
Well, we can do the same thing with other things, like combinatorial libraries and exhaustive search. They're using it in biology right now. They'll make every possible variation of a protein. They'll make every possible variation of a ceramic or a chemical compound. You just explore the possibility space. And again, using robots to make them and to test them, this is something that was not possible before.
But we're basically exploring this possibility space of
hypothesis by doing it exhaustively in combinatorial
exponential expansion.
So chemistry compounds, synthesis methods: you make something new, and you make it in every possible way. And you also generate hypotheses in the same way, every possible hypothesis that could apply to the situation.
Those are the two standard pillars of science: observation and measurement on one side, hypothesis and theory on the other.
But there's a third one, and that's what I
want to talk about.
The third one is a simulation or synthesis of those two.
You synthesize something.
I call it "nerd culture," and I did a little piece in
Science Magazine talking about the usual division of cultures
that C. P. Snow talked about, the two cultures, humanists
and humanities on one side and the scientists
on the other side.
Humanists would explore the human condition by creating,
by kind of probing an expression, by, I guess you
might say, examining the human condition.
And the scientists would do the same thing by
measuring or probing.
The third way of knowledge is the nerd way.
And the nerd way is that you make something.
So the way you study democracy is not by getting in and
reading a lot of books or by trying to measure it, the way
you study democracies is you make an artificial one.
You make a virtual democracy.
The way you study a mind is not to contemplate it, to
examine it from all sides, or to try and measure the little
data points in the brain cell.
The way you study a mind is you make an artificial mind.
That is the nerd way.
It's a way of doing things, studying things, and
discovering things by creating them.
Well, simulation has a little bit of that in it. It generates a lot of data. Here's a gamma ray simulation. This is spinning off huge amounts of data.
And so what this suggests to us is that, in terms of data volume, most of the data in the coming years is going to be generated by our simulations of things. Actually, simulating things that we make is going to generate more data than the natural world. The efforts of science, science itself, will be generating more data than measurements of the natural world do.
And so, with these three, I think science now has a triad. It's now a tripod of three things: data and measurement, hypothesis, and simulation. And those three things together are becoming the kind of core of all the sciences.
So you have data, which is being fed into the simulation, and the simulation is creating new data itself. And then you have the simulation, which is also kind of a theory. Basically a simulation is a theory that's active. It's an interactive theory. So theory and simulation have become things that feed upon each other. And, of course, hypothesis and data measurement do the same.
And I think that the intersection of those three is what I call deep science. This is this new science that has increased data. We can see this happening in three different-- [UNINTELLIGIBLE] complex adaptive sciences, where you have continuous real-time measurement, which is also being simulated in real time, feeding back to the real-time measurements. The simulation itself is creating a continuous search in real time for our hypotheses, which is ongoing, and the hypothesis, of course, is guiding the creation and measurement of data.
You can imagine the same thing happening in, say, the health sciences, where you have a person. You have sensors in the person, and they're generating in real time all kinds of information, which is being simulated in real time. And that simulation, of course, is itself feeding into the hypothesis. And so the three of them working together become this new science of simulation, hypothesis, and measurement.
And, of course, we can imagine it happening say in the
ecological natural world where we're looking at the real data
from nature.
We're simulating it at the same time in real time.
We're generating real time hypotheses which are, again,
guiding our measurements and our simulations.
This is deep science.
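Here is a toy sketch of that loop: measurements feed a simulation, the simulation is compared with the data, and the comparison updates the hypothesis that guides the next step. The cooling model, the fake sensor, and the learning rate are all hypothetical illustrations, not anything from the talk.

```python
# A toy measurement -> simulation -> hypothesis loop.
import math
import random

TRUE_RATE = 0.30  # the value the "real world" uses; unknown to the loop

def sensor(t):
    """Stand-in for a real-time instrument: noisy exponential decay."""
    return 100.0 * math.exp(-TRUE_RATE * t) + random.gauss(0.0, 1.0)

def simulate(rate, t):
    """The theory made active: a model we can run forward in time."""
    return 100.0 * math.exp(-rate * t)

hypothesis = 0.10          # initial guess at the decay rate
learning_rate = 1e-5

for t in range(1, 11):
    measured = sensor(t)                    # measurement
    predicted = simulate(hypothesis, t)     # simulation of the current theory
    error = measured - predicted
    # Gradient step on squared error with respect to the rate parameter.
    gradient = 2.0 * error * (100.0 * t * math.exp(-hypothesis * t))
    hypothesis -= learning_rate * gradient  # hypothesis revised by the data

print(f"estimated rate: {hypothesis:.2f} (true rate {TRUE_RATE})")
```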
So this is the third way of science where the computer
becomes very involved in our method of discovery.
It's the third way of science, or not.
It's scary because it has computers at the center, and
we are always careful about that.
So the last thing I want to talk about with science is that it will actually be creating a new way of knowing. WikiScience is one possibility. The editors of Nature told me that they're expecting to get their first 1,000-author paper this summer.
So that's in traditional science.
We can imagine WikiScience where there is an ongoing
document and an ongoing scientific journal article
that's never done.
It's constantly being updated.
It has thousands of contributors around the world.
It's open-ended.
And it's constantly in flux.
That's the nature of adaptive knowledge.
Another suggestion is compiled negative results.
Right now negative results are thrown away for the most part.
There is one obscure journal that actually has just started
to try and report negative results.
But negative results can be far more informative than the
positive results.
Normally they're thrown away.
They're not revealed.
And there's actually a move right now in medical studies to require that negative results be reported. And the way they're doing that is by saying that the journals will not publish the article with your final results unless you registered the study with them, indicating the early negative results you've found. So they're hinging the publication of your final results on the fact that you actually are going to catalogue and report your negative results.
AI computer proofs are another example. This is the packing hypothesis; I think it was Kepler who first suggested it. It was not proved until recently, and it required AI help to actually make the mathematical proof. And we're going to see more and more of that as well.
The triple-blind emergent trials: this is the idea that you can actually do scientific studies ad hoc by taking real-time data. Imagine you have a large population of people. You have 24-hour sensory information from their bodies, temperature, all kinds of things. And you basically extract results out of that large amount of data afterwards. You sort out the controls afterwards, so that neither the observer, nor the researcher, nor the scientist is actually aware that there's an experiment going on. The experiment happens after the fact, by taking large numbers of variables and extracting out the ones that you want to use and consider as controls.
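Here is a minimal sketch of that kind of after-the-fact study: no experiment is set up in advance, and the exposed and control groups are extracted from passively collected records afterwards. The field names and the synthetic dataset are hypothetical.

```python
# Post-hoc "emergent trial" over passively collected records.
import random

random.seed(0)
# Pretend this is a warehouse of 24-hour sensor records, one per person.
records = [
    {
        "drank_coffee": random.random() < 0.5,
        "resting_heart_rate": random.gauss(70, 5),
    }
    for _ in range(10_000)
]
# Inject a small effect so the post-hoc comparison has something to find.
for r in records:
    if r["drank_coffee"]:
        r["resting_heart_rate"] += 2.0

# After the fact, sort the population into exposed and control groups.
exposed = [r["resting_heart_rate"] for r in records if r["drank_coffee"]]
control = [r["resting_heart_rate"] for r in records if not r["drank_coffee"]]

mean = lambda xs: sum(xs) / len(xs)
print(f"exposed mean: {mean(exposed):.1f}, control mean: {mean(control):.1f}")
```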
So finally, distributed experiments are another example of [UNINTELLIGIBLE], where we're taking very small amounts of information and measurements distributed over time, not centralized, and using those measurements to actually correlate and aggregate results. And that's another way.
And then the final way is the return of the subjective. This has to do with science's long-term trend of becoming objective and refusing to consider the subjective. When we get down to fundamental questions like the origin of the universe, and things like quantum mechanics, we begin to find that, in fact, you cannot easily remove the observer, and that you actually have to account for the observer as part of the experiment. And we don't have very good tools for dealing with that right now.
And so what we're going to do in the next 50 years is also
learn how to tolerate and manage the
subjective in science.
So those are some speculations on the
next 50 years of science.
But I'd like to end by talking about just one last thing,
which is what I think science means.
I think science actually creates a new level of meaning. And one of the interesting things about emergent systems is that what's happening is a new level of meaning comes out of the small parts that were at the lower level.
And so, if you go back to my diagram of the scientific clustering of information and citations, we can also imagine this web of the Internet, which is one of the more current versions of the traffic on the Internet, where the different countries are different colors. I think North America is blue.
This is a recursive map.
This is a map where things are touching back to itself.
This is a structure of information.
We can imagine each of those nodes on there as being different species.
We can imagine them being different technologies.
We can imagine them being different methods of
investigation, different styles of discovery, and we
can also imagine this as one machine.
And so what I'm suggesting is that we have science right now, and the way we're hooking up stuff is that it's actually one very large machine. And I decided to actually treat this as one machine. So I said, if this were really one machine, what would it look like?
I'm sorry for this bad formatting.
But it has a billion PC chips.
This machine is right now sending one million emails per
second, one million IM messages per second, 8
terabytes per second of traffic, 65
billion phone calls.
It's a huge machine.
And, in fact, if you were to spec this machine out and try
to sell it on Amazon, the processor has got a billion
chips, has got one megahertz email, one megahertz web
search, ten kilohertz instant messaging, and one kilohertz
SMS. And that's a very big machine.
That's a very, very powerful machine.
And this is, of course, the aggregate of all the chips and
all the machines in all the world.
We've created one large machine.
Bus speed of ten terabytes, RAM of 200 terabytes, storage in exabytes.
Of course these are all going off the edge just as we speak.
But the idea is that that is the machine that we're now programming for. We're not programming for your laptop. We're not programming for your cell phone. We're programming for this machine. That is the thing that we're doing.
And the other thing about this is that, if you take a biological view, it's very close to the complexity of a human mind, a human brain I should say. It's got a quintillion transistors. If you take all the transistors in all the PCs in the world, there are somewhere around a quintillion of them operating right now. They're all live. It's got one trillion links on the web. The web has one trillion synapses, one trillion links, 20 petahertz of synapse firings.
That's the speed. It has 100 billion clicks per day.
So it's a very large brain.
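As a quick sanity check on how those spec-sheet numbers relate, here is a small unit conversion using the round figures quoted above; the inputs are the talk's numbers, not measurements of mine.

```python
# Convert a couple of the quoted totals into the "clock rate" style of spec.
SECONDS_PER_DAY = 86_400

emails_per_second = 1_000_000           # "one million emails per second"
clicks_per_day = 100_000_000_000        # "100 billion clicks per day"

print(f"email 'clock rate': {emails_per_second / 1e6:.0f} MHz")
print(f"clicks per second: {clicks_per_day / SECONDS_PER_DAY:,.0f}")
```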
So we are this machine. Because if we then add to that web machine our own brains, as we sit behind it, as we guide the clicks, and as we interface-- if you add the collective intelligence of the one billion people online right now, it's a very large machine and it's a very smart machine.
One of the questions that people ask is, well, are we going to take off?
Is there a singularity about to happen?
I think there is, but I think it's not in the way that Ray
Kurzweil and other people would believe, in the sense
that there's going to be a rapture or people left behind.
I really like this. This is one of my favorite New Yorker cartoons. It says, "Sir, the following paradigm shifts occurred while you were out." I don't think it happens that way, where we would notice it.
In fact, I think what's happening is that we'll go through this without really noticing it, in the same way. "Hey, did anyone notice we are using language?" There was never that conversation. People never sat around the campfire and said, hey, actually we're talking. Talking? Yeah. Language obviously was a singularity. But we passed through it without really noticing.
And I think the same thing is going to
happen with this process.
What technology gives us is possibilities. People often say technology creates as many problems as it solves. Actually, I think that's true, but each of those problems is actually a possibility. And what it gives us is possibilities.
So differences, diversity, options, these are the things
that technology creates, and it's a great bargain.
All of us will always go for that.
So each of these nodes in this network is actually a possibility. This is a possibility space. And that's what technology is creating. It's an infinite game. And the idea is to keep the game going.
So imagine, for a moment, van Gogh being born before the technology of oil paints was invented, or Mozart being born before the piano was invented, or maybe Hitchcock being born before the technology of film was invented. There are people alive today whose technology has not yet been invented, whose perfect means for their great genius has not yet been invented. And I think we have a moral obligation to actually create those possibilities, so that everybody in the world has the potential to really, really use what they've been born with.
Those possibilities, that quest for the possibilities is
really what technology is about.
And I think that's why we're here and why what you're doing
at Google is so great, because we're trying to play the
infinite game.
Thank you.
[APPLAUSE]
KEVIN KELLY: So I'll take questions if people have them. Way in the back.
AUDIENCE: Through a lot of your talk you talked about science, and then
at the end you started using the word technology.
Now that doesn't bother me particularly, but if I think
about a lot of discussions on the scientific method and so on, and I look at curricula in colleges, science and
technology tend to be kept very separate.
Science is a much more analytic technology and
[UNINTELLIGIBLE PHRASE].
Do you tend to distinguish the two, or to you are they all
lumped together?
KEVIN KELLY: I don't distinguish them as much as I probably should. But as I did suggest earlier, I think science has been changed much more by its tools than by anything else.
So the tool part of science is as important as anything, and
is definitely part of the scientific method.
So we can change science by creating tools, either tools that measure or, as science becomes more and more information-driven, tools of information, which become as important as anything else. And so that's why I would put the search engine on the list of some of the most important scientific-method inventions. One of the most important inventions in the scientific method is the search engine, because it's a tool.
And it is a technology, but it's actually not as divorced as other strategies. So I think there is a difference, but not as much as some people think.
Yeah?
AUDIENCE: [INAUDIBLE PHRASE]?
KEVIN KELLY: So let me repeat the question for
those in the back.
The question was that science is very important, but scientists themselves are usually not rewarded as much as, say, other people who may be involved in technology, for instance. Yeah, I think that's bad. I mean, I think it would be better if we rewarded them more. And I think some of the moves in universities to have scientists partake in their inventions through patent arrangements are one way in which society has been trying to reward scientists. But I think in general they aren't rewarded as much.
My wife is a scientist. She works at Genentech. She's a scientist there. So I know that scientists, like teachers, don't get the same rewards they should, and, I should also say, are often really badly portrayed in movies. I mean, it just infuriates me to see someone wearing a white lab coat and being crazy, like [UNINTELLIGIBLE] in Back to the Future. So yes, scientists don't get the respect they should. And I don't know if there's a mechanism to remedy that, other than: hug a scientist tonight.
Right here.
AUDIENCE: Is there an asymptote to the amount of
knowledge that we can reach, either as humans or whatever
[UNINTELLIGIBLE]?
KEVIN KELLY: Is there an asymptote to the amount of
information humans can reach?
Actually this is another little talk.
But I believe that what science really generates is
not knowledge, but ignorance.
A really good question, a really good question is a
question that would generate more
questions than it answers.
Every time we have a really good answer, it will generate
two or three more questions, and I think of those questions
as possibilities.
So, in a certain sense, while the amount of information is
being increased, our ignorance is actually increasing faster.
We have far more ignorance now than we did 400 years ago.
We have far more questions.
We have far more things we want to know about, and we
realize we don't know about, than we did.
So in a certain sense, what science is actually trying to
do is expand that frontier space.
I often think of it like friction.
The new economy was supposed to decrease friction so we
have a frictionless economy.
It did that, but at the same time at the frontier, at the
edges, it was increasing friction.
It was increasing the unknown.
And so we actually are expanding the unknown faster
than we are answering it.
And I think that's because those unknowns are all
possibilities.
Those are things that are close enough. The adjacent possible is what Stuart Kauffman calls it. We now are close enough to see what we don't know. And that actually is expanding faster than what we know. So I don't think there's an asymptote, because each one of those unknowns is more information.
So it's infinite.
AUDIENCE: [INAUDIBLE PHRASE].
KEVIN KELLY: Oh, individually, yes. But we're beyond that. The time when any individual knew everything there was to know is long gone. This big ball, this big machine, is collectively understanding things. And as that happens, we as individuals will know, percentage-wise, relatively less every year, less of all that is known.
Yeah?
AUDIENCE: [INAUDIBLE PHRASE]?
KEVIN KELLY: So the question is, if opportunities are expanding, could we not say that the opportunity costs also expand, and maybe even faster?
Yes.
I think they do.
And here's what I think one of the remedies is.
It's that we basically want AI.
We want many, many minds.
We want every possible type of intelligence.
We want more things seizing those opportunities.
So the costs will increase, but part of what the economy does is it always drives things down at the center. So you have the core, where the opportunity costs are less, where friction is less, and the edges, which are expanding, where there's always friction, and ignorance, and profit.
If you haven't figured that out, profit is related to friction. There's no profit at the frictionless center. There's only profit in ignorance, because you have information that no one else has. At the center, where there's no friction, where everything is known, there's no profit, because everybody knows everything.
So that expansion is always happening at the edges where
there is high opportunity cost, high friction, high
ignorance, and high profit.
Well I guess I've answered everybody's questions.
That's great.
Thank you for having me.
It's really been a real pleasure.