Why It's Good for COVID-19 Models to Be Wrong

  • [♪ INTRO]

  • This video was filmed on April 28th.

  • For our most recent episodes on COVID-19, check out the playlist in the description.

  • The COVID-19 pandemic has been dominated by numbers,

  • especially the numbers of new cases and mortalities every day.

  • But what's just as important as those numbers is how they're changing.

  • Predictions on how they'll change over time depend on epidemiological modeling,

  • which aims to mathematically describe how diseases spread.

  • And this is what leaders are using to figure out how drastic

  • the effects of the pandemic could be and what measures we need to take.

  • Unfortunately, these models aren't simple and they don't always give

  • clear-cut answers, which has led to some confusion about how to interpret them.

  • On top of that, even when a model does give a clear estimate,

  • it often ends up being wrong weeks or months later,

  • because our response to the model can change the course we're on.

  • So if you see a model that overestimated something,

  • it doesn't necessarily mean we overreacted:

  • It often means we did exactly the right thing.

  • One study that got a lot of attention recently came out of Imperial College London in March,

  • and it predicted that, without drastic measures,

  • there could be up to half a million fatalities in the U.K.

  • It had a huge impact, and even shaped the U.K.'s response to the pandemic,

  • but a few days later, the lead author told the British parliament

  • that he now only expected 20,000 fatalities.

  • Now, any number bigger than zero is bad news,

  • and honestly, talking about fatalities at all these days can be really difficult,

  • since we're still right in the middle of this, and these are real people.

  • So forgive us if we sound a bit clinical here.

  • But as bad as 20,000 deaths is, that's one twenty-fifth of half a million.

  • That made some people think the model was unreliable or wrong,

  • but in reality, both numbers came out of the same model.

  • It's just that the half-a-million figure assumed things continued as they were,

  • while the 20,000 figure factored in the effects of new protective measures,

  • and the predictions reflected that.

  • In fact, those circumstances, and the predictions, are still changing,

  • because models are constantly being updated to reflect new information.

  • So, to quote the statistician George E. P. Box:

  • "All models are wrong, but some are useful."

  • And that's because of how they're designed.

  • The simplest epidemiological models have three groups of people:

  • the Susceptible, who haven't caught the disease at all,

  • the Infected, who have it, and the Removed, who already had it

  • and either recovered or didn't survive.

  • For that last group, the model assumes that once someone has stopped being Infected,

  • they can't catch the virus again or give it to anyone else,

  • although technically, we don't know for sure that that's true for COVID-19.

  • Together, these form what's called an SIR model.

  • SIR models start with some number of people in each group,

  • based on real-world data, and use computer simulations to step ahead in time.

  • At each step, the computer simulates

  • some number of Susceptibles catching the disease from the Infected.

  • Meanwhile, some of the Infected become Removed

  • if they've had the disease for long enough, either by recovering or by dying.

  • As the number of people in each group changes,

  • that tells you how many people catch the disease over time.
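
To make that step-by-step picture concrete, here is a minimal sketch of a discrete-time SIR simulation in Python. The population size, infection rate, and recovery rate below are illustrative placeholders, not the values any real COVID-19 model used.

```python
# Minimal discrete-time SIR sketch (illustrative numbers only, not a real COVID-19 model).
def run_sir(population=1_000_000, initial_infected=100,
            beta=0.3,    # new infections caused per Infected person per day, early on
            gamma=0.1,   # fraction of the Infected who become Removed each day
            days=200):
    s = population - initial_infected   # Susceptible
    i = initial_infected                # Infected
    r = 0                               # Removed (recovered or did not survive)
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * i * s / population  # Susceptibles catching it from the Infected
        new_removals = gamma * i                    # Infected who have had the disease long enough
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    peak_infected = max(i for _, i, _ in run_sir())
    print(f"Peak simultaneous infections: {peak_infected:,.0f}")
```

In this sketch, R0 corresponds to beta / gamma: roughly how many people one Infected person passes the disease to before moving into the Removed group.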

  • These models can be really helpful for making predictions,

  • but they also come with challenges.

  • For instance, epidemiologists have to determine, on average,

  • how many susceptible people can catch the disease from one infected person.

  • You might have heard of this number; it's called R0.

  • When Imperial College ran that first study that predicted half a million fatalities,

  • researchers estimated that R0 was 2.4, based on data from cases in Wuhan, China.

  • That's lower than the latest estimates, though,

  • and it's just one reason modeling is so hard.

  • Models depend on inputs like this, and those numbers aren't perfectly known.

  • Even where scientists have data, that data more often looks like

  • a range of possible answers instead of one precise value.

  • And since the numbers that go into the model are uncertain,

  • the predictions are also uncertain, but that doesn't mean they can't be useful.

  • For instance, you can run the model with a few different values,

  • and come up with a range of possible outcomes, from best case to worst.
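
As a rough illustration of that idea, the run_sir sketch above can be swept over a few candidate R0 values (example numbers only; 2.4 matches the early Wuhan-based estimate mentioned above) to bracket the outcomes:

```python
# Illustrative only: sweep a few candidate R0 values and compare total infections.
gamma = 0.1                          # same recovery rate as the sketch above
for r0 in (2.0, 2.4, 3.0):           # example values, not real estimates
    final_s, final_i, final_r = run_sir(beta=r0 * gamma, gamma=gamma)[-1]
    print(f"R0 = {r0}: roughly {final_r + final_i:,.0f} people infected in total")
```

Each value of R0 gives a different curve, and together they sketch out a best-case-to-worst-case range rather than a single prediction.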

  • But narrowing down values like R0 isn't all there is to it.

  • To make a model that predicts the complexity of the real world,

  • you also have to add even more inputs that take into account things

  • like how different groups interact and how the people in them vary by age and location.

  • So, basically, models get messy fast.

  • Over time, as we gather more data, we can compare the actual number

  • of new infections with the predicted numbers to pin down those inputs,

  • and that gives us even better predictions of what's to come,

  • even if they don't agree with the old ones.
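
One very rough way to picture that calibration step, again reusing run_sir from the sketch above, is to grid-search for the infection rate whose simulated counts best match observed ones. The "observed" numbers here are made up purely for illustration:

```python
# Illustrative calibration: pick the beta whose simulated Infected counts best match observations.
observed = [100, 130, 171, 224, 292]   # made-up daily Infected counts for days 0-4

def fit_error(beta):
    sim = run_sir(beta=beta, gamma=0.1, days=len(observed) - 1)
    return sum((i - obs) ** 2 for (_, i, _), obs in zip(sim, observed))

best_beta = min((b / 100 for b in range(10, 60)), key=fit_error)  # try beta = 0.10 ... 0.59
print(f"Best-fitting infection rate: {best_beta:.2f}")
```

Real models fit their inputs with far more sophisticated methods, but the principle is the same: new data narrows down the uncertain inputs, which in turn shifts the predictions.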

  • The models can also take into account our current behavior,

  • like how well we're staying at home and social distancing.

  • That, of course, lowers the effective reproduction number, since social distancing leads to fewer infections,

  • and that can change the outcome of the model.

  • For instance, the Imperial College model predicted

  • two million fatalities in the U.S. if no drastic action was taken.

  • But at that point, lockdowns hadn't begun,

  • and social distancing hadn't been factored into that number.

  • So now, at the time of filming, a new model from the University of Washington

  • projects a figure of around 74,000 fatalities.

  • Which again, is a big and bad number, but it is at least a smaller one.

  • That difference isn't because the Imperial model was totally wrong:

  • It's largely because the reaction to the early predictions

  • led us to change our behaviors, and the latest predictions reflect that.

  • In a way, this is great.

  • We can actually see ourselves changing our future for the better.

  • As we change the ways we act and go about our lives,

  • scientists change their models to better reflect the new, safer path that we are on.

  • So if a model's predictions end up being wrong,

  • that could mean it has done exactly the job it was supposed to.

  • Thanks for watching this episode of SciShow News!

  • And thank you especially to our patrons who make it possible for us

  • to cover topics like this each week, especially in a world that is changing so quickly.

  • We couldn't do it without your support.

  • And if you're not a patron but you'd like to support what we do,

  • you can find out more at patreon.com/SciShow.

  • [♪ OUTRO]
