MARTHA MINOW: I'm Martha Minow.
And it is with gratitude to the Berkman Klein team
and to Urs Gasser, particularly, that I
say, welcome to a discussion that I
promise will have answers--
at least suggestions-- as well as, oh my gosh!
What do we do?
So "fake news" is a phrase that, now, no one
is quite sure what it means.
But we're all worried about it.
And we will spend a little bit of time talking about,
what do we mean by it?
What has it come to mean?
But we're going to spend most of the time together talking
about, what are tools that are available or could be made
available to help people sort through the floods
of information and the democratization of access
to information that makes it very hard to know what's true
and what's not true.
And then, of course, there's not anything new at all
about propaganda and lies.
We've always had them.
Now we just have more access to them.
So one of my favorite cartoons shows
in the antique world of Xerox machines, someone
going to the Xerox machine and making a copy of something
and saying, send it to the world!
Now, at the time, that was a funny cartoon.
But now, with the internet and digital possibilities,
anybody can send anything, basically, to the world.
And I think that's the context that we're
going to be addressing.
I will say something briefly about each person
when I introduce them.
And I'm immediately turning to J. Nathan Matias, who
is very importantly involved in the Berkman Klein Center
for Internet and Society here.
And he's also involved in the MIT Center for Civic Media.
And he's a PhD candidate at MIT.
And he's going to kick us off.
What do we mean when we say fake news?
What do you mean?
J. NATHAN MATIAS: So when people think about fake news,
we often look back to that moment
when Craig Silverman at BuzzFeed did this amazing report
about Macedonian teenagers who were creating fake articles
and earning thousands of dollars a month.
In fact, one of my favorite fake news headlines is, quote,
"After election, Obama passes executive order
banning all fake news outlets."
Which, of course, was itself fake news.
But the reality is much more complex.
It's much more common to see something
like a recent Breitbart article entitled
"California's recipe for voter fraud on a massive scale."
There's recent work by Yochai Benkler and the folks
at the Media Cloud team here at Harvard
that shows that often what we get
are powerful political entities creating information
that has, maybe, a kernel of truth,
but it's really disinformation.
They mix truths with familiar falsehoods and logics
of the paranoid to make something that is not
just believable but something that, maybe,
when you go to Google, because they're
the only people writing about it,
you might feel like you're fact-checking it.
Because you see 10 other links from
similarly-connected organizations saying
the same thing, even though it's something closer
to disinformation.
It goes beyond what can actually be claimed.
Because in the case, for example, of California,
their motor voter laws are things
that are similar to what other states have already
implemented.
And there's not really been evidence
that those kinds of things lead to voter fraud.
So there's this problem where we have a wide variety
of disinformation.
And people are concerned about how that information spreads
on social media.
There are fears about filter bubbles.
There are fears about the use of algorithms,
whether it's Google Search or whether it's
Facebook's news feed, that might influence
how these things spread.
And in my research, I've done work
to help understand what we as citizens can do
and what the public can do to better understand
those algorithms and influence how they work for the better.
MARTHA MINOW: Great.
We're going to hear more about that soon.
So An Xiao Mina is an expert on memes.
And she's a writer who looks at global internet and network
creativity.
And here, as a fellow at the Berkman Klein Center,
she's studying language barriers in the technology stack,
because an interest in diverse communities
is a big thread of her work.
She leads the product team at Meedan,
which is building digital tools for journalists
and translators.
And she co-founded Civic Beat, a research
collective focused on the creative side
of civic technology.
What do you mean by fake news?
AN XIAO MINA: So I think, when we
think about fake news, often--
this is my perspective as a product manager
working with journalists.
Often, in these communities, we always
use air-quotes, "fake news, fake news."
And in many ways, this is an implicit acknowledgement
that this phrase has come to mean so many things
as to become almost meaningless.
It's an umbrella term for so many other words,
other phenomena.
So the problem of fake news starts to seem intractable,
because it has such a diffuse meaning.
And I really appreciate Claire Wardle's breakdown
of fake news.
She looks at different types of fake news, anything from satire
and parody to misleading content to really manipulated content
and, then, fully-fabricated content,
and then also breaks down different motivations,
everything from parody to the goal of punking to actually
spreading propaganda.
And when we look at these different techniques,
when we really break down fake news,
we can start to think about different strategies
and different techniques for addressing
the wide variety of problems under this umbrella.
So I think there's a different range
of strategies for when an Onion article becomes cited as fact--
which is a frequent phenomenon, especially in global contexts
where a global newspaper, a newspaper outside of the US,
might misunderstand the context of The Onion
and then cite that as news--
versus our strategies for dealing
with state-sponsored propaganda botnets.
So as we break down these different motivations
and techniques, it also helps us think about breaking down
our strategies.
The other thing about these frameworks around fake news
is the very phrase "fake news" itself.
It orients us towards truth and falsehood.
Yet often, the reason that things spread
is not about truth or falsehood but about affirmation.
We talk about the internet as an information superhighway.
It's one of the early metaphors for the internet.
In many ways, it's like an affirmation superhighway.
People are looking for validation of perspectives
and deeper cultural logic.
So I tend to agree with researcher Whitney Phillips.
Her framework around this is suggesting
that we think about it as folkloric news or folk news.
Because it orients us less towards truth and falsehood,
which is still important, but more towards motivations
for sharing and participation and how
that reinforces deeper cultural logics.
And I guess that's-- my third point here is,
as we think about solutions for this fake news problem,
it's also thinking about short-term and long-term
solutions.
In product management, we often think about,
what is the immediately addressable problem versus what
is the long-term issue here?
And this issue around cultural logics, I think,
is an important one that I think about frequently.
Because thinking about fake news as symptoms,
as mirrors for deeper thinking in society,
for ways that people orient themselves and their values,
helps to think about other civic institutions
that we may want to engage to address those deeper logics.
And I'll be interested in talking
more about that as well.
MARTHA MINOW: That's fantastic.
And when you say cultural logics,
identity is, of course, such an important factor in that.
AN XIAO MINA: Absolutely.
MARTHA MINOW: And affiliation.
Larry Bartels has a recent book looking
at American politics, electoral politics,
over the last 70 years and finds that it is not policy,
it is not party that explains how people vote.
It's membership and identity.
And so I think we need to include that,
particularly when we think about civic media.
So Sandra Cortesi is our expert in youth and media.
And she coordinates the youth and media policy
research at Berkman Klein with UNICEF,
an important initiative.
I was recently struck when Common Sense Media
issued its recent study, which finds
that a lot of young people
do not know what's news and what's not,
what's fake news and what's not.
And yet, they really want to get news.
So let's hear from you.
SANDRA CORTESI: Thank you.
So I would like to particularly highlight two points that,
again, focus on young people.
When I say young people, I mean, usually, middle and high school
students here in the US.
And in that context, together with Urs Gasser
and other people on the youth and media team,
we have engaged in a large, qualitative study
talking to young people.
And so, two points in that context.
The first one is, I think we should look at fake news
in the context of not just what is true
and what is false, but more in the context of information
quality.
So we have done a large study on information quality,
very big report.
But acknowledging that information quality, first,
is highly subjective-- what you value as high-quality
information is highly subjective--
and that it's very contextual.
And also important, in that context,
is that we acknowledge we should take into account
all the different steps in this information process:
not just how one evaluates information
once confronted with it, but also how you search for it,
where you get it, and how you share it, create it,
remix it, and so forth.
And the second point, in the context of young people,
again-- we heard it a little bit before-- but what news actually
means to young people is quite different,
actually, than what it means to adults.
Through those focus groups, we have
learned that young people often have a more social view
on what news means, also a little bit broader
perception of news.
And I think that's then also important when we think about,
what are the different quality criteria they
employ when looking at news?
I think that's key to keep in mind in this conversation.
MARTHA MINOW: So social view means that it's affiliation
with the news producer?
Or what does the social mean?
SANDRA CORTESI: Well, social also in the sense
of what is relevant to them in their context, for their needs,
their communities, and so forth.
But also looking more at what kind of information/news
they get from friends and people in their networks
online, for instance on social media and those things.
MARTHA MINOW: That's great.
That's great.
Thank you.
So Jonathan Zittrain would be here
all day if I said all the things that he does and he knows.
But as faculty director of Berkman Klein,
as vice dean for library resources,
as professor of computer science here and at the Kennedy School,
one of the many things that you've done
is you've actually been attentive to the issues
and the risks of filtering.
You've done that from the beginning.
And you've conducted the first large-scale studies
of internet filtering in China and Saudi Arabia.
So as we think about fake news, how do you
relate that to these concerns?
JONATHAN ZITTRAIN: Great question.
And those studies of filtering, 15 years ago, now 20 years ago,
presumed that there was a source of consistent information
that, if not true, was respected and that people
around the world would want to get to it
and that regimes that didn't trust their populace
and maybe wanted to mislead them would
try to block the populace from getting to that good stuff.
And that model, I think, is greatly
scrambled now, in part, as regimes have learned--
I think it's Zeynep Tufekci who calls it
a sort of informational distributed denial of service
attack, where if you can just pump enough stuff out there,
it's really hard to tell the reliable stuff
from the non-reliable stuff.
So when I think about defining fake news,
even there, I hope it's an oxymoron to use the phrase.
Because innate in the original concept of news
was that it would bear some relationship to reality.
I hope I'm setting the bar at the right level for news.
That's not actually saying it's true.
It just bears some relationship to reality,
both in having some truth value to it
and in being presented in some appropriate context.
It's really easy to create a mosaic
of completely true things that paints a totally false picture.
And traditionally, we have relied on, or wanted to rely on,
and asked of our news-generating organizations
that they offer that relationship to reality, in that
context, when they exercised their privileged ability--
originally through the megaphone,
which people with a photocopier couldn't replicate--
and with their funding and their privileged access,
to speak truth to us.
And there was a time, 20 years ago,
at the time we were also doing the censorship work,
when I think there was a lot of celebration
in the neoliberal mode of new sources of information.
Citizen journalism was a phrase.
I think it has since receded, maybe because it's
now just part of the fabric.
And I remember, actually, a conference about 10 years ago
where somebody who was skeptical was
like, citizen journalist, huh?
Well, next time I need an operation,
I guess I'll go to a citizen surgeon
and see how that works out.
And I was trying to figure out, why wouldn't I
want citizen journalism?
But I'm celebrating citizen journalism-- the citizen surgeon
is on the other side.
And I do think they're distinguishable.
But I would not go so far as to dismiss what maybe
previously was thought of as a monopoly of sources--
"mainstream media," often spoken of in a disparaging way.
There was a sense of a craft and profession to journalism,
including ethics.
I spent a memorable summer in high school
at the National High School Institute at Northwestern.
They had something called the Cherub program.
And you could pick your topic.
And one of them was at Medill, the school of journalism.
And they really tried to inculcate
us recalcitrant 10th graders into what it really
means to be a journalist.
And there's a lot that has to do with the ethics and the trust
placed upon journalists.
And whether or not it makes sense
to have only a handful of reporting organizations
or whether that's even possible now
seems to me less the question than how much we should value
retaining a sense of those ethics and that balance,
aspire to it, even in the breach.
And that just briefly gets to the fake part of fake news.
I would define fake, also with a fairly generous standard,
as being that which is willfully false.
Take aside that which is careless,
negligent, got it wrong, gee, I'm sorry.
But rather look at the profusion of stuff
that the person saying it or repeating it
does not believe it to be true or is
absolutely indifferent as to whether it is true or false.
And to look at our ecosystem now,
we can see reasons of statecraft and ideology
to want to propagate false things,
because the ends justify the means.
And I have a better goal at stake, perhaps.
Or just under the idea of, it makes me money.
The microeconomics of this space should not be underestimated.
And it's something that Nate made reference to
in his opening remarks, that it can just
be extremely profitable to pump out
things that are knowingly false and to see
them achieve a foothold.
And that's part of the problem I think
we need to take on if our goal is to have a news
ecosystem that bears some relationship to reality
and comes from a source that has some sense of ethics
or responsibility.
MARTHA MINOW: Thank you.
Well, we're going to turn now to tools.
And we've heard people identify the material economic reasons
for the problems that we're now encountering
and the democratization of access and generation
of information, but also on the demand side.
That the demand is not necessarily for truth
and not necessarily for verified facts, but maybe
for affiliation, for membership, or--
dare I say it?-- rubbernecking.
Can you believe this?
Right?
So Facebook has done an internal study
to find out, looking at their algorithms, what
gets escalated to the top.
And it turns out, the vast majority
of what we would label fake news that gets escalated to the top
was never opened.
So it's just the tagline.
It's just the headline.
So that seems to me-- I want to call that
the rubbernecking idea.
Can you believe this?
And we don't even know whether the people forwarding it agree
or don't agree or it's affiliative.
Look, see, I was right.
Or can you imagine that people are saying this?
So as we think about tools, who wants to go first?
Nate, do you want to go first?
J. NATHAN MATIAS: Sure, I'm happy to kick it off.
So I appreciate you mentioning these--
we might think of them as feedback loops.
They could be social feedback loops,
where people connect more to people who they affiliate
with through misinformation.
And that leads them to spread it.
So there's a social feedback loop.
There are also these algorithmic feedback loops
that we might worry about, where then
these things become popular.
They then become popular by virtue of being popular.
And the algorithms amplify them further.
The first thing I want to say about that,
though, is that we need to be able to put
the role of social technologies in context.
If you look at the percentage of people who actually rely
on social media as their primary source of news,
it's actually quite small.
Pew did a study where they found that, while 7% of Trump voters
and 8% of Clinton voters got their primary news
from Facebook, 40% of Trump voters
got their primary news from Fox News and television, and 18%
of Clinton voters from CNN.
So there's still a question about the role
that these platforms play compared to television, radio,
and other media.
But companies are doing something about it.
We have Google and Facebook both working
with fact-checking organizations,
trying to find ways to notify people that something
may be unreliable, that it's been
fact-checked by a third party.
And most recently, Facebook has started
testing a system that lets people
know if the thing they're about to share has been fact-checked
and what the response has been from one
of their fact-checking partners.
I just recently finished a study where
I worked with online communities, actual internet
users, rather than a platform, to test
the effect of their own fact-checking efforts.
Because even as companies try to introduce design
changes to improve things, they're
not usually very transparent about the effects
of those systems.
It's very rare for them to publish peer-reviewed research
on how those design changes turn out.
And it's something they sometimes get backlash for as well.
And so what I've been doing in my PhD
has been to support internet users to organize independently
to test their collective efforts on platforms and on issues
like unreliable news.
So I worked with a 15-million-subscriber community
on the platform Reddit, who discuss world news.
And they had similar problems.
People were submitting unreliable news
to their community.
And rather than outright remove this,
they wanted to foster discourse.
They automatically posted messages
encouraging their community to fact-check that information.
And we were able to see that not only did that encouragement
increase the rate at which people were chipping
in to fact-check things but, also, we
were able to see that simply encouraging people
to fact-check caused articles to be less promoted
by Reddit's algorithms.
That if you looked at the popularity
ranking of those articles over time, simply encouraging people
to fact-check an unreliable news article
demoted that article by four items, on average,
in the top 100 over time.
So it's a great example of how you can actually
test the widespread effects of these fact-checking efforts
and how you may not even need to rely on platforms to do that.
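To make the mechanics concrete, here is a minimal sketch of the kind of automated encouragement message Matias describes, written against the real praw Reddit library. The subreddit name, credentials, domain watchlist, and message text are illustrative stand-ins, not details from the study.

```python
# Sketch: post a sticky comment encouraging fact-checking whenever a
# submission links to a watchlisted domain. Randomizing which posts get
# the message is what lets a community measure its effect.
import random
import praw  # pip install praw

reddit = praw.Reddit(
    client_id="YOUR_ID", client_secret="YOUR_SECRET",  # placeholder credentials
    username="modbot", password="...",                 # hypothetical mod account
    user_agent="factcheck-encouragement-sketch/0.1",
)

MESSAGE = (
    "This domain has a history of unreliable reporting. If you can, "
    "please fact-check the article and post links to your sources."
)
WATCHLIST = {"unreliable-example.com"}  # hypothetical domain list

for submission in reddit.subreddit("worldnews").stream.submissions():
    if submission.domain in WATCHLIST and random.random() < 0.5:
        comment = submission.reply(MESSAGE)              # post the encouragement
        comment.mod.distinguish(how="yes", sticky=True)  # requires mod rights
```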
MARTHA MINOW: It's also about nudges, right?
J. NATHAN MATIAS: Exactly.
MARTHA MINOW: Yes.
So An, tools--
AN XIAO MINA: Yes.
I can talk a little bit about Check,
which is the platform we've been building at Meedan, that's
been used in a variety of contexts, both global
and, recently, in Western contexts,
that are looking at misinformation ecosystems.
And it's a tool that provides journalists
a very structured way to gather, for instance, a digital media
object, say, a tweet, that seems to have disputed content
and then to show, in very clear steps, what
steps they took to research that tweet,
the content of that tweet--
is it original?
Who's behind it?
What's the motivation for that person?--
and then cite the sources for those findings.
And this is kind of a shift in how journalists often
work, where their notes are often behind the scenes.
And so one of the principles of Check is to show the work
and show the process behind it.
This has a few effects.
One is for other journalists.
Check is being rolled out in France right now
with 34 newsrooms and the First Draft News Network
in a product called CrossCheck, which
is helping those newsrooms collaborate together to look
at misinformation ecosystems.
And newsrooms frequently do not collaborate.
And the reason that this tool can help with collaboration
is because of its transparency.
The notion of CrossCheck allows for newsrooms to say,
OK, I can cross-check that, that, and that,
because I can see the steps that this other newsroom has taken.
And so, we're hoping, with this tool,
that this helps strengthen some of the news ecosystem
in certain contexts, where newsrooms typically do not
work together, and to encourage collaboration
through these kind of open workflows.
And through structured workflows as well, to ensure consistency.
Within the platform itself, there's
a variety of different questions--
those core questions that every story
needs to have answered before it actually can become a story--
to ensure that that is consistently done.
Because especially working in a busy newsroom environment where
there's breaking news all the time,
it can be easy to forget key steps.
And so the platform tries to systematize that
for people working together.
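As a rough illustration, here is a minimal sketch of what such a structured verification record might look like in code. The field names and core questions are hypothetical stand-ins, not Meedan's actual schema.

```python
# Sketch: a structured verification task that cannot be marked complete
# until every core question has a finding backed by cited sources.
from dataclasses import dataclass, field
from typing import List

CORE_QUESTIONS = [
    "Is this the original media?",
    "Who is behind it?",
    "What is their motivation?",
]

@dataclass
class Step:
    question: str
    finding: str = ""                                  # what the journalist concluded
    sources: List[str] = field(default_factory=list)   # URLs cited for the finding

@dataclass
class VerificationTask:
    media_url: str                                     # e.g. the disputed tweet
    steps: List[Step] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)      # metadata for later research

    def is_complete(self) -> bool:
        answered = {s.question for s in self.steps if s.finding and s.sources}
        return all(q in answered for q in CORE_QUESTIONS)
```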
The reason for this is there's that immediate-term use.
But there's also some long-term thinking here as well,
because the tool is also being used in Hong Kong.
There's a chief executive election
coming up this weekend.
And the University of Hong Kong is
using the tool in their classrooms, in their J-schools,
with both journalists and non-journalists,
and journalism students and non-journalism students.
And the goal there, in addition to all the other principles
I described, is not just to help with journalism
but also to help build better journalists.
This notion of creating ethics and trust around journalism
is to train journalists in the process,
in these new processes of research that are required
in the digital environment.
And so that process is going on right now
where we're still looking at the results.
But I was speaking with Professor Ann Kruger
out at the University of Hong Kong.
And she's been noting that, through the process of having
to go through all these steps and these checks,
people start to develop these habits of mind around what
they're sharing and then how they
are articulating what they're sharing
online with their community.
Even if it's fake, they might be debunking it or offering
rationales for why they're sharing it,
which offers greater context to help with these misinformation
networks.
And then the longer-term goal with the platform
is then, also, collecting data.
All of the information is collected within Check.
People are adding metadata around this, tags,
and information about where the source is, what are
the motivations behind this?
Ultimately, because many of these techniques
are so new and different within a networked environment,
our hope is that this also provides
a set of data for longer-term research as well.
MARTHA MINOW: And transparency.
That's great.
Sandra, tools.
SANDRA CORTESI: Yes, tools.
So we heard a little bit about social tools
and technical tools.
But as we are the youth and media team, and as I work on youth issues,
education is one of the tools in this toolbox.
And there, I think, three points could be relevant.
One is, how do we actually conceptualize something
like news literacy or information literacy
or digital citizenship?
And how do we co-design projects,
together with young people, so they actually
are relevant to them, they work for them,
and that the programs bring together
the adult perspective as well as the youth perspective?
The second one that is important when
we think about learning and education more broadly
is, where do young people actually learn?
So it's not just in the formal context.
But we also have to include informal contexts.
So after-school learning-- how is that happening?
And how can you connect the informal with the formal one?
And the third one that I think is relevant in the youth
context is to also consider what are, actually,
the limits or limitations of media
literacy or digital literacy?
Because if we think that fake news is also an ecosystem
challenge, then it might be difficult, at some point,
to put so much emphasis on the individual--
so again, education is not the only tool in the toolbox.
And particularly, in my context,
the individuals in question are young people-- minors,
who are often considered a vulnerable population.
So why do we place so much emphasis on their own abilities
to make sense of all this?
MARTHA MINOW: One of the forms of literacy education
that I've been intrigued by is involving youth
in creating news, so that they understand the steps
and then, maybe, can participate in some
of the social processes.
JZ, tools.
JONATHAN ZITTRAIN: Tools.
Well, you opened this round of questions
with the observation or the qualification
about if news is what people are in the market for.
And there's some curiosity about contemporary social media
that we don't know what we're in the market for.
We're just on Twitter and, like, let's see what comes up next!
And it's a very weird form of one-armed bandit, where
it's like, news!
Excitement!
Dog.
And--
MARTHA MINOW: Cat, usually.
JONATHAN ZITTRAIN: Yeah, right.
More cats, please.
And in that environment, you could see,
people could be online for all sorts
of totally justifiable or understandable reasons.
And if you're there for entertainment,
being reminded, as you're about to share something,
that, like, now, let's be careful.
The truth value of--
it's like, enough already.
I want to share the dog.
And it calls to mind, in the United States,
in the World Series or the Super Bowl,
we wouldn't be like, why do we have
to go through this strange contest of physical strength
and skill to figure out, possibly injuriously,
which is the better team?
Can't we have the Patriots and the Broncos
just sit down and negotiate?
Like, let's just figure out some way of splitting the pie.
And as good as our negotiation program
is here at HLS, even that might be beyond them.
The contest is the point.
And I think, for a lot of people online,
some people are like, I am trying to learn something.
And somebody else is like, I'm trying to crush you,
because you're on the other team.
It's a mismatch of expectation that
is not going to end well for at least one of the parties.
So already, we're--
I think-- thinking about people who are online
for information gathering.
Or even if they're there for entertainment,
they may come away thinking they have accidentally learned
something that is not true.
And that could be, socially, a problem.
So tools-- it's interesting to hear of wholesale tools,
tools that could be used within newsrooms as they
are on deadline, trying to throw things together.
I've also been thinking about retail tools.
What are tools that could make the experience of typical media
consumption and social media usage
more amenable to learning?
If that's what you're there to be doing.
And for that, I think there's a couple of things.
Your point that there are a lot of headlines that are never
opened means that it's really easy, for instance,
to see a story from the Denver Guardian that turns out
to be false, about an FBI agent who
died under suspicious circumstances
and had the goods on Hillary.
And wholly apart from the facts of the story,
whether you want to contest them,
there's the fact that the Denver Guardian doesn't exist.
Like, it is not a newspaper.
OK, at least let's agree, there is no Denver Guardian.
If you were in Denver, no one would be guarding you.
And you could see wholesale tools that would at least have,
without reference to the content, something
about the source, the way that, on eBay, for the longest time,
somebody selling something on eBay who just joined
had shades next to the eBay account.
Which for years, I thought just meant they were really cool.
Like, when do I earn my shades?
It's like, no.
You've lost your chance.
You had to be new.
But it's supposed to mean they're a little shady.
Weird that eBay did that.
Before buying a plasma screen from a shady person,
you might want to not do it.
So that's the kind of thing you could see,
a tool being content neutral but looking at the shape of things.
Which inevitably, for those who want to spread untruths
or lies, then they'll have to game it.
But at least put them to the effort of gaming it.
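A content-neutral source signal of the sort Zittrain sketches could be as simple as flagging very new domains. Here is a minimal sketch using the python-whois package; the one-year threshold is an arbitrary illustration, not a recommendation.

```python
# Sketch: "shades" for a news source based on domain age alone,
# without judging the content of any particular article.
from datetime import datetime, timedelta
import whois  # pip install python-whois

def looks_shady(domain: str, min_age_days: int = 365) -> bool:
    created = whois.whois(domain).creation_date
    if isinstance(created, list):   # some registrars return several dates
        created = min(created)
    if created is None:             # age unknown: treat as unproven, flag it
        return True
    return datetime.now() - created < timedelta(days=min_age_days)
```

A check like this puts fly-by-night "papers" to the effort of gaming it, per the point above, since a domain registered last week cannot fake a long history.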
Some of the other tools I think would be interesting.
Nate already mentioned the interstitial,
the fact that Facebook might throw up a thing that's
like, I don't know about this.
Continue?
Like, yes.
There's a famous quote that journalism
is the first draft of history.
It's inevitably going to get a lot of stuff wrong,
even before you get into the willful part.
And I think it's really too high a burden to think
of Facebook or other distributors, intermediaries,
somehow being able to fact-check in real-time.
People ought to have a chance to say, gosh!
This makes me really upset, what I just read and shared.
And I feel intensely about it.
It would be neat to know, a week later, two weeks later,
if it turned out that thing wasn't true,
after the smoke has cleared.
And often, when the second draft of history is around,
things are a lot clearer.
So allowing easy tools to learn more about what you heard--
I think of it in tort terms.
We have product recalls.
It's like, this seemed like a perfectly good lawnmower
with no screen on it when we sold it to you.
But on second thought, we think maybe it should have a mesh.
So we're recalling it.
Maybe we should have a concept of a voluntary knowledge
recall.
Which is not Orwellian-- you must forget this,
but rather, here's some new stuff
we learned since you first heard about this.
Which, in turn, could have people
learn to be a little more appropriately skeptical, as one
after another of the stories that all of us
have sometimes done--
I can't believe that!
Really?
Geez, that's terrible!
You learn the facts later.
And you feel a little differently about it.
And I also think, let's look to the future.
It's not going to be Facebook and Twitter forever.
We are inviting into our homes and our other environments,
including our headpieces, the concierge services like Google
Home and Siri and Alexa.
And we're treating them as oracular sources
of activities and news.
Here is somebody asking the Google Home--
which is like the 2001 monolith, in small version, on your desk,
ready to tell you anything.
Here's somebody asking if Republicans are fascists.
[AUDIO PLAYBACK]
- Are Republicans fascists?
- According to Debate.org, yes, Republicans equals Nazis.
Yes.
[END PLAYBACK]
[LAUGHTER]
JONATHAN ZITTRAIN: Now, all right,
I'm going to go out on a limb.
I think that's fake news.
And here's, in the same level tones of authority
that we've learned from the original Star Trek--
just-- it's the implacable march of fact.
And it would be quite helpful, if it's
going to do that, for these objects maybe
to glow a certain color if they don't think they know
what they're talking about.
They could still give the answer.
But it could be like, this isn't as vetted.
So that next time, too, when you say, my stomach feels funny.
I might have appendicitis.
What causes appendicitis?
You don't have it be like-- because of organic web
search hiccuping-- it's an imbalance of the four
bodily humors.
And you need an immediate leeching.
If you're going to say it, at least say, I think.
And there might be ways to signal that.
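One way to read "at least say, I think" as a design rule is to surface the system's own retrieval confidence in the phrasing of the answer. A toy sketch, with entirely hypothetical confidence scores and thresholds:

```python
# Sketch: phrase a voice assistant's answer according to how well
# vetted the underlying source is, instead of always sounding oracular.
def render_answer(answer: str, confidence: float) -> str:
    if confidence >= 0.9:
        return answer                                   # well vetted: just say it
    if confidence >= 0.5:
        return f"I think {answer}, but I'm not certain."
    return f"One source claims {answer}; this isn't well vetted."
```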
MARTHA MINOW: [LAUGHS]
I asked you all to identify further questions
that you have.
And we can talk about that.
But it's also my moment to invite all of you
here, what questions do you have?
And what tools do you have to offer?
So before we turn to everyone, let me just identify quickly--
libraries, Jonathan, you identified as one.
You wonder about the role of libraries.
And I think that would be a great thing to talk about.
And An, you said, also, other physical spaces,
like museums and schools.
And Sandra, you wondered about US versus global.
And I think that's a hugely important subject.
And in a variation on that question
and what An said before, I'm learning that, in countries
that have more censorship, having a global network
may be important for getting the information, even about
a country, by being in touch with people outside the country
and so forth.
And Nate, you talked about Karl Popper and social engineering.
Can you say some more about that?
J. NATHAN MATIAS: Sure.
So many of the things that we've talked about here as tools
are interventions that we hope will
influence the behavior of, potentially,
millions of people.
And there are forms of research that we
have at our fingertips for doing A/B tests
and studying what those outcomes are.
And they're important forms of research.
But at the same time, we're at this moment
where we're confronting what it means to use that research
power responsibly.
As we hope for platforms to change things,
as we expect the Google Home device to say, I'm not sure,
it's responsible to follow up, to see,
does that actually make a difference?
But it's also responsible to ask,
who has the power to ask that question?
And how can we ensure that that kind
of large-scale social engineering
is carried out in a responsible way?
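For readers curious what following up to see "does that actually make a difference" looks like, here is a toy analysis of a randomized test like the ones Matias describes. The counts are fabricated purely for illustration.

```python
# Sketch: did the encouragement message change how often a discussion
# attracted at least one fact-checking comment? Chi-square test on
# made-up counts for treatment (message shown) versus control.
from scipy.stats import chi2_contingency  # pip install scipy

#                [fact-checked, not fact-checked]
treatment = [150, 350]   # hypothetical: 500 discussions with the message
control   = [80, 420]    # hypothetical: 500 discussions without it

chi2, p, dof, expected = chi2_contingency([treatment, control])
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # a small p suggests a real effect
```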
MARTHA MINOW: That's great.
And I do worry about the infinite regress
of the person who is shady, putting shades by the entries
by others that aren't shady, and so forth and so on.
So OK, now your turn.
And in the spirit of what we've just
been talking about, please, not only identify yourself
and your source code, but also whether what you are saying
is an opinion or is a fact or something like that.
So please-- and wait for the microphone.
And just say who you are.
RON NEWMAN: Hi.
My name is Ron Newman.
And I'm somebody who has dabbled in journalism occasionally
and helps run a community blog in my neighborhood.
And this is a question.
Say you have a reputation management system that says,
oh!
The Denver Guardian doesn't exist.
Therefore, you shouldn't believe it.
How is it going to tell the difference between that
and the hypothetical Denver Times-Herald,
which nobody in Denver has also heard of,
because it just started last week,
and it's just getting under way, and this
is its first big news story?
JONATHAN ZITTRAIN: Maybe, since I brought up the Denver
Guardian, that was for me.
It's also reminiscent of federal evidence law,
the Daubert case, where
judges are asked to assess the truth of something
before they decide whether a jury is allowed to see it.
One of the objections that
led to that particular formulation was like,
what about Galileo?
Everybody else disagreed with him.
But darn it!
He was right.
And I think there are, for any system,
going to be false negatives, false positives.
It's why you'd want to approach it with complete humility.
I appreciate Martha's warning about,
do we really want to put one of the handful of gatekeepers--
the point about Pew and sources notwithstanding--
in the seat of deciding what's true and what's false?
The thing is, we do need some defense against a very
well-resourced and implacable set of lies
from some state actors and other sources.
We ought to have humility about any tools we build.
But I don't think it should mean we shouldn't do anything.
Because there is still going to be a shaping and structuring.
Whether you've heard from that Denver paper
is going to already be structured.
So I would at least say, put the shades next to it.
It's the first week of the paper.
But maybe it's its big scoop.
It could be!
Feel free to read the article.
But take it with that grain of salt.
And then wait to see if the grudging and failing New York
Times is like, I should have had that story.
But I guess I'm now going to have
to-- must credit Denver Daily Telegram, because they
broke it.
And then you'll hear it from the Times as well.
And you'll hear it from Fox News.
MARTHA MINOW: Yes--
AN XIAO MINA: Also just to add--
oh, sorry.
MARTHA MINOW: Oh, go ahead.
AN XIAO MINA: Sorry.
Just to add to that, I think, also,
a certain transparency around the logics of the system
can also help.
I think, the shade icon--
part of what works is if you trust that eBay has
a logic for that shade icon.
And the logic seems to be that it's a completely new account.
So making a clear logic for the design language
that we are communicating about XYZ source,
so that people can dig in, say, OK.
What is the logic behind this?
Why did this happen?
What are the reasons for this?
And it makes a clear distinction between a site that
has knowingly produced false information for years
versus a site that just popped up a week ago.
And therefore, we think you just might
want to double-check this.
So I think, showing that can help along this path as well.
MARTHA MINOW: So the great thing is that there
can be more information.
And as you said before, it can be over time as well.
So you can click on that icon and get some background.
Why is that here?
And then somebody can adjust it over time
and have more feedback.
Please.
BENJAMIN: Hi, I'm Benjamin.
I'm from the Kennedy School.
So I have a question for An.
You mentioned a lot of transparency.
And I'm wondering-- good journalism, seems to me,
exists with some level of opacity, right?
You trust an unnamed source.
And you trust the journalists to have this ethics around it
that JZ talked about.
So I'm wondering, how do you balance
this need for extreme transparency with the idea
that the media is the fourth estate because there's
a privileged space for it, because you trust journalists?
And does that actually dilute some of the prestige
that we accord to journalists?
AN XIAO MINA: Yeah, absolutely.
And I think there's a broader conversation here,
of course, about declining trust in media institutions.
And when we talk about transparency,
there's transparency of all the notes, of all the sources.
I think a lot of journalists are concerned
about maintaining some kind of privileges, especially
with their sources, especially sensitive sources or sources
from underprivileged communities.
But there's also a transparency around process and checks.
And so, I think, when we talk about transparency,
we have to talk about both of these things.
That sometimes, there's a reason a journalist
has to protect a source.
But frequently, when someone says,
an anonymous source said XYZ, they
don't indicate why that source has to be anonymous.
And so showing a transparency of process--
here's a decision making process we
went through to decide whether or not to declare
the name of this source--
can hopefully help build trust.
Because a lot of the process of this kind
of sausage-making behind a story is often hidden.
Because there's this assumption that people might understand
what that looks like.
But often, the general public doesn't really
get to see the goings-on behind the newsroom.
And so I think that's one way to break down transparency.
MARTHA MINOW: Let me put the library question.
What role could libraries play?
JONATHAN ZITTRAIN: Well, libraries, at the moment,
seem to be one of the handful of institutions in the United
States still retaining a lot of trust and good feeling
from a huge swath of the population.
It's hard to think of any other major institution that
comes close.
Now, you might say, that's a bunch of capital.
Let's not spend it down by getting
into the fake news wars.
And there's some sensibility to that.
But I actually think that, to the extent
that libraries represent a profession and a set of values
that overlap greatly with the ideals
of professional journalism, whether or not
they're met in day-to-day practice, the idea
of a librarian, whose job it is to help you find what you're
looking for without judging your own question,
guarding fiercely the privacy of your having asked that
question-- because curiosity is to be treasured
and the pursuit of knowledge should not be something
that you feel shame about and that could be made public.
Very different from a typical search
engine which, too often, has replaced the reference desk
but may not embrace those values as much.
And I could see ways in which librarians
are ready to offer human effort, not
looking for the technological shortcut all the time.
And that could complement the technological shortcuts
that Google and Facebook and others
are racing to introduce, to help us gradate that which we see.
So my vision would be, you see something online.
And there's a button you can press that says,
I'm really curious about this.
And I'd like to know more about it.
And if a threshold-- enough people
who aren't habitual button pushers press the button about
that--
it appears in the inbox of one of the librarians
around the world who volunteered to have a look at it
and, maybe, even to have a discussion
group with some other librarians to actually get
to the bottom of it.
It's a distributed Snopes.
And Snopes has its own fans and detractors now.
They're in it.
They are part of the news story.
But having a distributed set of folks
who stand behind their own research, who
are citing to sources rather than speaking ex
cathedra or out of their own purported substantive
expertise-- their expertise is meta.
It's about finding the source, weighing it.
What a great opportunity to involve the treasure
that is our world's professional librarians in this process.
Get them interacting with people who are curious.
And it's the kind of thing I would hope, say, Sandra would
discover that kids would be interested in participating
in as well.
I think it's a totally reasonable
eighth or ninth grade assignment to spend
a day in the trenches of validating stuff
and share it with your teacher and your class.
What a great thing to do with our kids.
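A minimal sketch of the routing logic behind the curiosity button Zittrain describes, including the discount for habitual button pushers. The thresholds and in-memory storage are hypothetical illustrations.

```python
# Sketch: route an item to volunteer librarians once enough distinct,
# non-habitual users have pressed "I'm curious about this."
from collections import defaultdict

flags = defaultdict(set)        # item_id -> user_ids who pressed
lifetime = defaultdict(int)     # user_id -> total presses ever

HABITUAL_CUTOFF = 50            # heavy pressers count for less signal
ROUTE_THRESHOLD = 25            # distinct careful pressers needed

def press(user_id: str, item_id: str) -> bool:
    """Record a press; return True when librarians should be notified."""
    lifetime[user_id] += 1
    if lifetime[user_id] <= HABITUAL_CUTOFF:
        flags[item_id].add(user_id)
    return len(flags[item_id]) >= ROUTE_THRESHOLD
```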
SANDRA CORTESI: And additionally, what I
think is great about libraries is that they bring communities
together.
So it's not just the space you would
go if you want to have a trusted piece of information
or a trusted librarian.
It's actually where people come together
and debate and discuss.
So I think that's equally important about libraries.
Particularly in Colombia, where I grew up,
there are many great examples of libraries playing
a key role in fostering dialogue among people
with different views.
MARTHA MINOW: Great.
It's kind of a neutral place.
PAUL LIPPE: Hi, I'm Paul Lippe.
This is a question for Jonathan in his role as a taxonomist.
So it seems to me that, potentially,
the scale of this problem is not that big.
So if you just did an 80-20 of how many stories
a day represent 80% of all stories read,
what do you think the number of stories is?
And is there a classification system
that exists to define and control the text of the stories
so you could say, this is X story?
Because if you have those, then you
can probably come up with some kind of system
that's not controlled by the walled gardens.
But the walled gardens kind of, implicitly, have--
JONATHAN ZITTRAIN: Yeah.
PAUL LIPPE: --a taxonomy of those stories.
JONATHAN ZITTRAIN: Well, I can certainly
see where you're trying to go with that
and how carefully you're trying to thread the needle of giving
what are our de facto gatekeepers--
and increasingly so-- like Facebook, some role to play,
without giving them such arbitrary discretion
that they are kingmakers of fact.
It points out how helpful it would be to be able to collect
fulsome data about the spread of memes--
it's the kind of things we're hearing about from Media Cloud,
our project here--
being able to watch how stuff happens.
So it might fit your instinct of a power law, where
20% of the fake news in the world
is what's hitting 80% of the people.
And there might even be ways to track it.
Kind of like, in the stock market--
you apply the brakes when the bottom seems to be falling out.
Without waiting for people to click the button to say,
I'm curious, you could say, gosh.
This thing is taking off like wildfire.
Send the librarians after it to catch up!
And if that's the case-- what is the Mark Twain quote?
Lies are halfway around the block by the time
the truth is finished putting on its shoes.
This is helping the truth put on its shoes a little bit
or, at least, again, offering the context, if people
are hungry to have it.
And there too, again, I think about,
like, in 1984, the Two Minutes Hate, that
was part of the instrument of that state to get people
riled up about something for two minutes.
And then, on to the next thing.
And there's something that may appeal to us about it,
not to our better instincts perhaps.
But there has to be a hunger among people
to want a little bit extra beat of the rest of the story that
makes the original story less interesting, less
shareable, less exciting.
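Lippe's 80-20 instinct is easy to play with numerically. Under a Zipf-like (power law) distribution of attention across 1,000 synthetic stories, the top 200 do capture most reads; all numbers here are made up for illustration.

```python
# Sketch: what share of total reads do the top-k stories get under a
# Zipf-like attention distribution? The data is synthetic.
def top_share(reads, k):
    reads = sorted(reads, reverse=True)
    return sum(reads[:k]) / sum(reads)

reads = [1000 / rank for rank in range(1, 1001)]   # Zipf-like long tail
print(f"top 200 of 1,000 stories: {top_share(reads, 200):.0%} of reads")
# prints roughly 79%, close to the 80-20 intuition
```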
MARTHA MINOW: It does seem that working out this demand side
is absolutely critical.
And we may be hardwired to be intrigued by and fearful about
and excited about things that are horrifying
and not so hardwired to be interested in good news
or complexity or something like that.
And so, how to cultivate a desire
that may be counter to our biology seems pretty important.
JONATHAN ZITTRAIN: Yeah.
MARTHA MINOW: Paul, do you--
you had a--
PAUL LIPPE: I'm-- I'm--
MARTHA MINOW: You don't have your mic anymore though.
PAUL LIPPE: That's OK.
I'm not--
MARTHA MINOW: Yeah, it's not OK.
So hold on a minute.
Thank you.
PAUL LIPPE: I'm not super sanguine
about the cognitive problem of what people are hungry for.
But it does seem to me, if the world is mostly,
on any given day, 200 stories that are broadly not fake
and 20 that are fake, and that's the total population--
if the problem is of that scale, the nature of the solution
may be easier than it feels like.
JONATHAN ZITTRAIN: It might be.
But again, this is not a snapshot.
This is a moving picture.
And for instance, among my desires
for the ecosystem of information,
I actually want to see fewer gatekeepers
or a broader range of them.
And for a Facebook, I want to take Mark Zuckerberg
at his word that they're a technology and platform
company, not a content and media company, and say, great.
Why then only have "the Facebook feed"
tailored to you in a secret way with no transparency?
--as An points out.
But rather, let anybody develop a recipe for a feed.
And then I can choose to have my feed driven by the National
Rifle Association and Ralph Nader, which
would be an interesting feed.
[LAUGHTER]
But if that aspiration is met, it
may mean a longer tail of stuff.
And suddenly, it's harder to find those top 20 stories.
In fact, there's a greater dispersal of stuff.
But with the snapshot of today, you might be right.
It may be easier than we think.
MARTHA MINOW: We're getting, I think,
a question about, how much do we know?
And there's a real need for more research--
JONATHAN ZITTRAIN: Yes.
MARTHA MINOW: --to even know what the scale is and, also,
what it would take to make it possible for people
to express what they want.
JONATHAN ZITTRAIN: Yes.
MARTHA MINOW: That maybe more people want to be
able to customize their feed.
And if that were clear and Facebook
hasn't asked that question, then that
would be relevant to Facebook and others.
JONATHAN ZITTRAIN: Yes.
MARTHA MINOW: An, do you want to say something?
AN XIAO MINA: Yeah, I think, just
to build on this notion of looking at the feed,
it's also to remember that a lot of what we call fake news
spreads on non-feed-based platforms, including
private platforms and email and then, also,
just through in-person networks.
And so our research methodologies
also need to adapt for this.
Searching an API for WhatsApp
is not possible, because there is no WhatsApp API.
And yet, WhatsApp is a critical platform
for a lot of rumors and misinformation
in a context like India.
And then, you have KakaoTalk, a private network in Korea.
And then you have WeChat in China.
And so the methodologies need to adapt,
and, also, our comfort with knowing that some of it is
unknowable, because of how these things spread
on private networks as well.
And so our focus on the feed is important.
But at the same time, understanding the other ways
that these things spread is also very critical.
MARTHA MINOW: The History Department here, now,
is encouraging people to do work on the history of rumors.
There's something we can learn historically too.
JONATHAN ZITTRAIN: I heard that too.
MARTHA MINOW: Hahaha!
[LAUGHTER]
Way in the back.
PETER METROS: So let's for a moment--
MARTHA MINOW: Say who you are, please.
PETER METROS: I'm Peter Metros.
MARTHA MINOW: Thank you.
PETER METROS: I want to grant the benefit of the doubt
to technology.
Let's say we have a perfect technology that can perfectly
identify fake news.
Why would anybody actually use it?
You have people in the general population who
go to entertainment media, be that television or Facebook,
basically to be entertained.
They seek out places that re-affirm them.
News sources incentivize journalists to put out articles
as quickly as possible.
Saying, I was wrong--
the New York Times publishes an article,
and if, a week later, a person gets informed
that they read something wrong in the New York Times, that actually
undermines its credibility.
MARTHA MINOW: So just come to the point.
What's the question?
PETER METROS: How do you actually get tools like these
to be adopted?
Why would people use them, given market dynamics?
MARTHA MINOW: Thank you.
Thoughts?
JONATHAN ZITTRAIN: Well, that sounds
like the demand-side question again.
And there, it does get back to, what are people
doing when they're online?
And I think it's often a sort of vague
hanging out a lot of the time.
There are elements of gamification to it.
If Twitter tells you how many followers you have,
people pursue more followers.
If it's a blue check, people want the blue check.
And when we think about the tools for dialogue--
which remain, to me, surprisingly crude
across multiple platforms.
And to use Facebook again as an example,
it's, how many likes or shares did you get?
But it would be interesting.
If there were a button that said,
I want you to know I disagree with what you said.
But I'm really glad you said it.
Like, respect but not agreement.
I think we could figure--
I don't know what the icon would be.
Like, I'm confused?
But you're not confused.
You're just having a subtle thought.
It's that you like that it was said
but you don't agree with it.
MARTHA MINOW: I defend to the death your right to say it.
JONATHAN ZITTRAIN: Right, the Voltaire button.
MARTHA MINOW: Yes.
So it's a Voltaire button.
JONATHAN ZITTRAIN: And if that button existed,
people would actually start to post
stuff that accrued Voltaires.
And that would be pretty cool.
And you could see people then wanting to reach people
beyond those who agree with them,
and having some crude measurement of success at that.
And you then might find that that demand
creates a certain supply.
I think it's quite protean.
And this does not seem to me to be like Orwellian mind control.
It's like, there already are the buttons that are there.
You have to have buttons.
It's worth thinking about, what are
the buttons going to look like?
J. NATHAN MATIAS: So can I add something to that?
MARTHA MINOW: Yes, please.
J. NATHAN MATIAS: I think there's a temptation to enter
into this almost, like--
I don't know if any of you know the classic Atari video
game Missile Command.
MARTHA MINOW: Yes.
J. NATHAN MATIAS: Where there are these aliens coming down.
And you're panicked.
And you're shooting all these missiles
to stop the alien invasion.
And I think there's a tendency sometimes
to treat complex social issues as if they're
spam or other well-defined technical problems with a very
specific, high-impact, technical answer.
And Jonathan, when you were speaking earlier
and you talked about libraries, you
were drawing attention to the deeper currents at play.
There's been declining trust in institutions.
The news industry itself has had challenges staying funded.
And on top of that, there are a variety
of actors who are investing resources
into making that mistrust grow even further.
I think about a study that was just published yesterday
that asks the question, are trolls
to blame for the problems that we've been seeing?
And Biella Coleman and Whitney Phillips and others
wrote this masterful article pointing out
that, whatever trolling culture may be doing at a given time,
there are these wider social trends that we
need to be thinking about.
Even as we think about the Voltaire button,
how do we actually support the social institutions,
like libraries, that we need to build that trust over time?
And certainly, platforms will play a role in that.
But they won't be the only answer.
JONATHAN ZITTRAIN: And share the joy of the dialogue,
the discovery, the correction, the re-discovery--
I think, to many academics-- it's kind of core to academia--
to be found out that you were wrong about something
is a great day.
You're just like, that's so interesting!
I thought X. It turns out, not X.
Now, if you were not that careful in saying X,
it might also be a little embarrassing.
But if you'd dotted your Is and crossed your Ts
and you still got it wrong, it should be, how interesting!
And to be able to share that joy with people early and often
and have that be part of the fun of being online--
not for everybody, not all the time.
But there's surely more of that, rather
than aiming towards an oracular fact-generating machine.
Now, it's true.
There are times when, if you ask Google Home, like, for pi,
it will tell you the value of pi until you
tell it to stop talking.
And you probably don't want it to be like, OK.
Get out a piece of paper and draw a circle.
You're like, no, no.
I don't want to derive pi.
Just tell me what pi--
MARTHA MINOW: I actually just wanted a slice.
JONATHAN ZITTRAIN: A slice of pie--
that's true.
That could be as well, which would
be a terrible misunderstanding by Google Home.
But many more of the topics that we are concerned about
and that factor into for whom we would vote, how we would think,
how we look at our neighbors, those
are things that really are great candidates
for the dialogic dwelling upon, rather than
just the oracular answer.
Is this good or bad?
MARTHA MINOW: Urs Gasser.
URS GASSER: Thank you so much.
Great panel.
A quick observation-- it's so interesting
talking about the different tools we have in the toolbox.
We talked about technology.
We talked about social norms approaches.
We talked about the importance of market forces.
And yet, in the background, is law, right?
The fourth modality of regulation.
And we didn't talk about it at all.
JONATHAN ZITTRAIN: Yeah, it's right behind us.
URS GASSER: Exactly.
So the question might be-- even to the moderator, if I may--
does law play a role in all of this?
And not only in the restrictive version, in the constraining
version, as we recently see in Germany with the Network
Enforcement Act, requesting social media
platforms to take down content that is clearly illegal--
actually, interesting merger between hate
speech and fake news there--
within 24 hours.
But maybe, also, enabling law clearly
shapes where we stand today--
these are legal decisions that have enabled these platforms.
So is there a role for law to play,
as the fourth mode of regulation in the Lessig framework?
MARTHA MINOW: So this is the beginning of our next event,
I'm sure.
And I'm sure that it's just important for us
to shine a light on how law is underneath all
of these structures.
And what's permitted, what's enabled, what's encouraged,
what's reinforced--
there's a legal framework for the social engineering.
There's a legal framework for the cultural and civic
institutions.
And so I think it would be a very worthwhile exercise
to spotlight, to highlight, what law
is enabling, what it's preventing,
and what it could do to promote the development of more
tools along the many dimensions that we've talked about here.
Please join me in thanking this wonderful panel.
[APPLAUSE]