
  • positive outcomes.


  • But there are certainly some risks.


  • Certainly, we've heard from folks like Nick Bostrom, concerned that AI's potential could outpace our ability to understand it.


  • What about those concerns?


  • And how do we think about that moving forward, to protect not only ourselves but humanity at scale?


  • So let me start with what I think is the more immediate concern.


  • And that is this category of specialized AI.


  • That's a solvable problem, but we have to be mindful of it.


  • If you've got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that simply says, "Maximize profits on the New York Stock Exchange," is probably within sight.


  • And if one person or one organization got there first, they could bring down the stock market pretty quickly.


  • Or at least they could, you know, raise questions about the integrity of the financial markets.


  • An algorithm that says: go penetrate the nuclear codes in a country and figure out how to launch some missiles.


  • That's their only job.


  • It's very narrow.


  • It doesn't require a super intelligence.


  • It just requires a really effective algorithm that then teaches itself.


  • Then you've got problems.


  • So part of, I think, my directive to my national security team is: don't worry as much yet about machines taking over the world; do worry about the capacity of either non-state actors or hostile actors to penetrate systems.


  • And in that sense, it's not conceptually different (though it may be different in a legal sense) from a lot of the cybersecurity work that we're doing.


  • It just means that we're gonna have to be better, because those who might deploy these systems are going to be a lot better.


  • Now, I think, as a precaution, all of us have spoken to folks like Elon Musk who are concerned about the superintelligent machine.

  • There's some prudence in thinking about benchmarks that would indicate some general intelligence developing on the horizon.


  • And if we can see that coming over the course of three decades, five decades, you know, whatever the latest estimates are, if ever, because there are also arguments that this thing is a lot more complicated than people make it out to be.


  • Then future generations, our kids or grandkids, are going to be able to see it coming and figure it out.


  • But I do worry right now about specialized AI.


  • I was on the West Coast.


  • Some kid looked like he was 25.


  • He shows me, not a laptop, but an iPad, and says, "This is the future of radiology," right?


  • And he's got an algorithm that is teaching itself sufficient pattern recognition that, over time, it's going to be a better identifier of disease than a radiologist would be.


  • And if that's already happening today on an iPad, invented by some kid at MIT...

  • Then you know the vulnerability of a lot of our systems is going to come around pretty quick.

  • We're going to have to have some preparation for that.


  • But Joi may have worse nightmares. I generally agree.


  • I think the only caveat is, I would say, there are a few people, smart people, who believe that general AI will happen with some fairly high percentage chance in the next 10 years.


  • So I do think it's worth keeping aware of. But the way I look at it is that there are a dozen or two different breakthroughs that need to happen for each of the pieces, so you can kind of monitor it.


  • And you don't know exactly when they're going to happen, because, by definition, they're breakthroughs.

  • And I think it's kind of, when you think these breakthroughs will happen, you just have to have somebody close to the power cord. But I'm completely with the President that, in the short term, it's going to be bad people using AI for bad things; it will be an extension of us.

  • And then there's this other meta thing which happens, which is a group of people.


  • So if you look at all of the hate on the Internet: one person doesn't control it, but it's a thing.


  • It is pointed; it points at things.


  • It's definitely fueling some political activity right now, but it's kind of taken on a life of its own; it's not even code, it's a culture. And you see that also in the Middle East, right?


  • That's why it's so hard to prevent: it actually gets stronger when you attack it. And to me, what's curious and interesting is going to be the relationship between an AI, say, a service that runs like that.


  • And then you throw in Bitcoin, which is the ability to move money around by machine anonymously, and so to me, it will be this weird thing.


  • And again, this is where I think it could get amplified if you gave this sort of mob more tools, because they are actually fairly coordinated in their own peculiar way. And then there's the good side, which you can imagine.


  • You know, I was talking to some politicians like Michael Johnson in Colorado.


  • He's trying to figure out how we can harness these things to inform and engage citizens. So to me, the trick is: if you suppress it out of fear, the bad guys will still use it. What's important is to get the people who want to use it for good, the communities and leaders, and figure out how to get them to use it, so that that's where we start to lean.

  • Yeah, this may not be a precise analogy.


  • Traditionally, when we think about security and protecting ourselves, we think in terms of we need armor or walls from swords, blunt instruments, etcetera.


  • And increasingly, I find myself looking to medicine and thinking about viruses.

  • Antibodies, right?


  • How do you create healthy systems that can ward off destructive elements in a distributed way?


  • And that requires more imagination.


  • And we're not very good at that yet.


  • It's part of the reason why cybersecurity continues to be so hard: the threat is not a bunch of tanks rolling at you, but a whole bunch of systems that may be vulnerable to a worm getting in there.


  • It means that we've got to think differently about our security, make different investments that may not be as sexy, but actually may end up being as important as anything.


  • And part of the reason I think this is a good analogy is because what I spend a lot of time worrying about are things like pandemics.


  • You can't build walls in order to prevent, you know, the next airborne lethal flu from landing on our shores.

  • Instead, what we have to do is be able to set up systems: create public health systems in all parts of the world, with quick triggers that tell us when we see something emerging.


  • Make sure we've got quick protocols.


  • Systems that allow us to make vaccines a lot smarter.


  • So if you take that model, a public health model, and you think about how we can deal with the problems of cybersecurity, a lot of that may end up being really helpful in thinking about the AI threats.


  • And just one thing that I think is interesting is when we start to look at the microbiome and the microbes everywhere.


  • There's a lot of evidence to show that introducing good bacteria to fight the bad bacteria is the strategy, not sterilizing.


  • And I think that's true of Sunny and Bo; when I walk them on the South Lawn, sometimes I see them that way.


  • There's research I was just reading showing that actually opening windows in hospitals, rather than sterilizing everything, may actually limit infection. So we have to rethink what clean means, and it's similar whether you're talking about cybersecurity or national security.


  • I think that the notion that you can make strict borders or that you could eliminate every possible pathogen is difficult.


  • And I think that, in that sense, you're in a position to be able to see both medicine and cyber, and so these are distributed threats.


  • But is there also the risk that this creates a new kind of arms race?


  • Look, I think there's no doubt that developing international norms, rules, protocols, and verification mechanisms around cybersecurity generally, and AI in particular, is in its infancy.


  • And part of the reason for that is, as Joi identified, you've got a lot of non-state actors who are the biggest players.


  • Part of the problem is that identifying who's doing what is much more difficult. If you're building a bunch of ICBMs, we see them; if somebody's sitting at a keyboard, we don't. And so we've begun this conversation.

  • A lot of the conversation right now is not at the level of dealing with real, sophisticated AI, but has more to do with states essentially establishing norms about how they use their cyber capabilities.


  • Part of what makes this an interesting problem is that the line between offense and defense is pretty blurred.


  • You know, the truth of the matter is, and part of the reason why, for example, there's a debate here about cybersecurity: who are you more afraid of, Big Brother and the state, or the guy who's trying to empty out your bank account?


  • Part of the reason that's so difficult is that if we're going to police this Wild West, whether it's the Internet or AI or any of these other areas, then by definition the government's got to have capabilities; and if it's got capabilities, then they're subject to abuse.

  • And at a time when there's been a lot of mistrust built up about government, that makes it difficult.


  • And when you have countries around the world who see America as the preeminent cyber power, now is the time for us to say we're willing to restrain ourselves if you are willing to restrain yourselves.


  • The challenge is the most sophisticated state actors.


  • Russia, China, and Iran don't always embody the same norms or values that we do, but we're going to have to surface this as an international issue in order for us to be effective, because it's effectively a borderless problem, and ultimately all states are going to have to worry about this.


  • It is very short-sighted if a state thinks that it can develop super capacities in this area without some 25-year-old kid in a basement somewhere figuring that out pretty quick.



What Does National Security Look Like in the AI Era? | Barack Obama × Joi Ito | Ep3
