Okay, so full disclosure,
I don't love the NFL
and my ten-year-old son is more into Ed Sheeran than Taylor Swift,
so she hasn't yet flooded our household.
However, when one of the most famous people in the world
is caught in a deepfake porn attack
driven by a right-wing conspiracy theory,
forcing one of the largest platforms in the world
to shut down all Taylor Swift related content,
well, now you have my attention.
But what are we to make of all this?
The first thing I think it shows is how crazy
this US election cycle is going to be.
The combination of new AI capabilities,
unregulated platforms, a flood of opaque super PAC money,
and a candidate who's perfectly willing to
fuel conspiracy theories
means the information ecosystem this year is going to be a mess.
Second, however, I think we're starting to see some
of the policy levers that could be pulled to address this problem.
The DEFIANCE Act, introduced in the Senate last week,
gives victims of deepfakes the right to sue the people who created them.
The Preventing Deepfakes of Intimate Images Act,
stuck in the House currently,
goes a step further and puts criminal liability
on the people who create deepfakes.
Third, though, I think this shows how we need to regulate platforms,
not just the AI that creates the deepfakes,
because the main problem with these deepfakes
is not the ability to create them,
we've had that for a long time,
it's the ability to disseminate them broadly to a large number of people.
That's where the real harm lies.
For example, one of these Taylor Swift videos
was viewed 45 million times
and stayed up for 17 hours
before it was removed by Twitter.
And the hashtag #TaylorSwiftAI
was boosted as a trending topic by Twitter,
meaning it was algorithmically amplified,
not just posted and disseminated by users.
So what I think we might start seeing here
is a slightly more nuanced conversation
about the liability protection that we give to platforms.
This might mean that they are now liable
for content that is either algorithmically amplified
or potentially content that is created by AI.
All that said, I would not hold my breath for the US to do anything here.
And probably, for the content regulations we may need,
we're going to need to look to Europe, to the UK, to Australia,
and this year to Canada.
So what should we actually be watching for?
Well, one thing I would look for is how the platforms themselves
are going to respond to what is now both
an unavoidable problem
and one that has certainly gotten the attention of advertisers.
When Elon Musk took over Twitter,
he decimated their content moderation team.
But Twitter's now announced that they're going to start rebuilding one.
And you better believe they're doing this
not because of the threat of the US Senate,
but because of the threat of their biggest advertisers.
Advertisers do not want their content
placed beside politically motivated deepfake pornography of incredibly popular people.
So that's what I'd be watching for here.
How are the platforms themselves
going to respond to what is a very clear problem,
in part as a function of how they've designed their platforms and their companies?
I'm Taylor Owen, and thanks for watching.