Why Anecdotes Trump Data

  • Some critics of the TV show Mythbusters claim

  • that the show misrepresents the scientific process.

  • For example, experiments are sometimes conducted only once

  • and without adequate controls,

  • but then these results are

  • generalized to make definitive claims

  • rather than repeating the experiment

  • and using statistical analysis as a scientist would

  • to figure out what is really true.

  • So, ironically a show

  • that is meant to educate people about science

  • may instead be giving them the opposite impression

  • of how science works.

  • But you know, similar criticisms have been made of Veritasium.

  • For example, when Destin and I performed

  • our experiments to show that the water swirls

  • the opposite way in the northern and southern hemispheres,

  • we only showed it once

  • even though we each did it three or four times in our own hemisphere.

  • And I guess that raises the question:

  • should we change what we're doing --

  • I mean should Mythbusters and Veritasium

  • really show the repetitive nature of science

  • and use statistical results as evidence for our claims?

  • Well my answer is no, but to understand why

  • we first have to dig into something called

  • the helping experiment.

  • And this was performed in New York

  • in the 1960s. And the way it went was --

  • individual participants were placed in isolated booths

  • where they could speak to five other participants through an intercom

  • but only one mic was live at a time.

  • And these participants were meant to speak in turns

  • for two minutes each about their lives --

  • any problems they were having --

  • and it would just go in rounds.

  • Now what the participants didn't know was that

  • one of them was actually an actor

  • who was reading a script

  • prepared for him by the experimenters.

  • And he went first in the first round.

  • He talked about the problems he was having adjusting to life in New York City

  • and in particular that he gets seizures, especially when stressed.

  • And so everyone else had their turn and then it came back 'round

  • to this actor again. Now this time

  • when he was speaking he became more and more incoherent.

  • He said that he could feel a seizure coming on and he made choking noises,

  • he asked for help from the other participants --

  • he said he felt like he was dying --

  • and, uh, then he continued to get more and more distressed

  • until his mic went off.

  • And the point of the experiment was to see how many of the participants would help.

  • I mean, if you were one of the other participants, do you think you would've left your booth

  • and gone to see if he was okay?

  • In total, 13 participants took part in this experiment --

  • and the number that helped before his mic was turned off --

  • was just four.
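As a quick aside, the arithmetic behind that figure is simply 4/13, about 31 percent, and with a sample that small the uncertainty is wide. Below is a minimal Python sketch of that calculation (standard library only; the counts of 4 and 13 come from the transcript above, while the choice of a Wilson score interval is mine, purely for illustration):

```python
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

helped, total = 4, 13  # counts as stated in the transcript
print(f"helping rate: {helped / total:.0%}")          # ~31%
lo, hi = wilson_interval(helped, total)
print(f"95% interval: roughly {lo:.0%} to {hi:.0%}")  # wide, ~13% to ~58%
```

With only 13 participants, the plausible range for the true helping rate runs from roughly one in eight to over half, which is exactly why real science demands repetition before making definitive claims.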

  • Now, while this might sound a little bit disappointing

  • about the state of human helpfulness,

  • you gotta keep in mind that there were other people listening to the same distress call

  • and that may have diffused the responsibility that individuals would feel;

  • this is something known as the "bystander effect."
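One way to see why diffusion of responsibility is the interesting part is a toy probability model (my own illustration, not part of the original study): if each of n bystanders decided to help independently with a fixed probability, adding bystanders would make help more likely, not less. A short Python sketch:

```python
# Toy model, purely illustrative: each of n bystanders independently
# decides to help with the same fixed probability q.
def p_anyone_helps(q: float, n: int) -> float:
    return 1 - (1 - q) ** n

for n in (1, 2, 5):
    print(f"n={n}: P(someone helps) = {p_anyone_helps(0.5, n):.3f}")
# n=1: 0.500
# n=2: 0.750
# n=5: 0.969
```

The original study found the opposite pattern: participants were far more likely to act when they believed they were alone with the victim. So each individual's willingness to act must drop sharply as the group grows, and that drop is the diffusion of responsibility.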

  • Now, what's interesting about this experiment from my point of view

  • is not how it confirms the bystander effect,

  • but how people view the results.

  • Specifically, people fail to change their opinions

  • of themselves or others after learning about this experiment.

  • For example, have you changed your opinion

  • about how likely you would be to help in this situation --

  • now that you know that only 30 percent of people did?

  • Well, there was a follow-up study conducted

  • where students were shown two videos

  • of individual participants who were purported to be from the original study.

  • And they had already learned about the study,

  • and at the end of watching those two videos --

  • which were pretty uninformative, just showed that these people were

  • good, decent, ordinary people --

  • these students were asked,

  • "How likely do you think it was

  • that those two particular participants helped?"

  • And overwhelmingly students felt

  • that those two participants would have helped --

  • even though they knew that, statistically, only 30 percent did, so,

  • in fact, it would've been a better guess to say that they probably didn't.

  • They didn't seem to really internalize

  • those general results as pertaining to the particular;

  • they kind of assumed the results excluded

  • ordinary, good, decent people.

  • Now, is there a way to get people to really

  • understand how the world works?

  • Well, they did another follow-up study

  • where they described the experiment,

  • but they didn't give the results.

  • And then they showed those two participant videos --

  • again, with the videos not mentioning anything about the experiment,

  • just showing that these are two decent, ordinary people

  • and then they told the students that those two people

  • did not help in the experiment.

  • And they asked the students to, uh, guess

  • what proportion of people did help.

  • And now, in this case, when they were going

  • from those particular examples of ordinary, nice people

  • who didn't help,

  • they were much better at generalizing

  • to the overall result,

  • to the statistical result.

  • In fact, they got it basically right.

  • And I think this highlights for us that

  • our brains are much better at

  • working with individual stories

  • and things in detail

  • than they are with statistical results.

  • And that is why I think

  • if you're Mythbusters or Veritasium

  • it's better to communicate science --

  • to tell the story,

  • to show the experiment, really, once in a dramatic way --

  • rather than three or four times

  • where each new iteration --

  • well, each repetition --

  • just confirms the original result that you were talking about.

  • But if you're actually doing the science --

  • if you're actually trying to establish scientific fact --

  • then of course

  • you need the repetition and the statistical analysis.

  • So I think it really does come down to what your objectives are.
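To make the "repetition and statistical analysis" point concrete, here is a hedged back-of-the-envelope sketch in Python, using the hemisphere swirl experiment from earlier as the example. Under a null hypothesis that the hemisphere has no effect, each trial matches the prediction with probability 0.5, so the chance that every trial matches purely by luck falls off quickly with repetition (the trial counts are illustrative, based on the "three or four times" mentioned above):

```python
# Probability that all trials match the predicted swirl direction
# purely by chance, if the hemisphere actually had no effect (p = 0.5).
null_p = 0.5
for trials_per_hemisphere in (1, 3, 4):
    n = 2 * trials_per_hemisphere      # trials across both hemispheres
    p_by_chance = null_p ** n
    print(f"{n} trials all matching by chance: {p_by_chance:.1%}")
# 2 trials:  25.0%  -> showing it once per hemisphere proves little
# 6 trials:   1.6%
# 8 trials:   0.4%  -> repetition is what makes the claim credible
```

This is the quantitative gap between showing an experiment once for the camera and actually establishing the result.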

  • But I think this conclusion

  • opens up

  • two big potential pitfalls.

  • One is that people without scientific evidence

  • can craft compelling stories

  • that catch on

  • and quickly become what people feel is the truth.

  • And the other pitfall is scientists

  • who have strong scientific evidence --

  • who have clear statistical results --

  • and yet they can't communicate them to people

  • because they don't have a great story.

  • So, an example of the first pitfall

  • is the recent spread of this rumor

  • that the outbreak of a birth defect, microcephaly,

  • in South America was actually caused

  • by a larvicide made by Monsanto.

  • That story caught on like wildfire.

  • And you can see why, because it's got this clear villain --

  • that everyone loves to hate -- in Monsanto.

  • And it's got a simple causal story --

  • that someone is doing something bad

  • to the water -- and it's this poison

  • that we're poisoning ourselves with, and

  • it's a very emotive -- clear -- story.

  • While the other story is, uh,

  • well, it's a little bit more statistical -- that there is

  • some kind of connection -- which is the scientific consensus

  • that the Zika virus carried by

  • these, uh, mosquitoes

  • is causing the microcephaly.

  • And there are strong indications that that really is what's happening.

  • And if you look at the claims about the larvicide,

  • they really don't carry much weight.

  • I mean, the larvicide is so weak

  • that you could drink a thousand litres a day

  • of the water treated with it and have no adverse effects.

  • Or, uh, this larvicide has been used in flea collars for dogs and cats.

  • Um, so really, you know, there isn't strong evidence for the larvicide connection. In fact,

  • there is no connection between the larvicide and Monsanto at all.

  • But I think the story took hold because it had such a strong narrative.

  • On the other hand, you have things like climate change,

  • which have very strong statistical evidence to back them up,

  • a large-scale result spanning the globe.

  • And yet, one cold snowy winter is so much more, uh,

  • visceral and meaningful to individual people

  • than this thing which feels, you know,

  • completely data-based.

  • And, it just depends on how much you trust data, I guess.
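This signal-versus-noise tension is easy to see in a toy simulation. The sketch below (Python standard library, 3.10+ for statistics.linear_regression; all numbers are synthetic and illustrative, not real climate data) plants a genuine warming trend under yearly weather noise: a regression recovers the trend, yet individual cold years keep occurring, which is exactly what makes the anecdote feel like a counterexample:

```python
import random
import statistics

random.seed(1)

# Purely synthetic, illustrative numbers: a 0.02 degC/year warming
# trend buried under +/-0.5 degC of year-to-year weather noise.
years = list(range(1970, 2021))
temps = [0.02 * (y - 1970) + random.gauss(0.0, 0.5) for y in years]

slope, _intercept = statistics.linear_regression(years, temps)
print(f"recovered trend: {slope:.3f} degC/year")  # close to the true 0.02

baseline = statistics.mean(temps[:10])  # average of the first decade
cold_years = sum(t < baseline for t in temps[-20:])
print(f"recent years colder than the 1970s average: {cold_years} of 20")
# A few cold years typically remain even though the trend is real.
```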

  • As scientists, we love data.

  • And we feel like, if we're trying to communicate to someone,

  • we're trying to convince someone of something,

  • all we have to do is show more data.

  • But what experiments demonstrate to us with statistical certainty

  • is that stories work much better.

  • Normally I do these walk-and-talk style videos on my second channel, 2Veritasium, but

  • I imagine that some of you might not know that that exists.

  • So, I thought I'd put one of these here on 1Veritasium.

  • Plus, this one has a fair amount of data and, you know, experimental stuff in it,

  • so I figured that could be interesting for you as well.

  • So if you want to check out the check...

  • [CHUCKLES]

  • So if you want to check out the second channel, then

  • go to 2Veritasium. I'll put a card or something for it up here.
