Democratizing Ultrasound via ML (TF Dev Summit '19)

[MUSIC PLAYING]

NATHAN SILBERMAN: Imagine you're in a traumatic accident and end up in the hospital. One of the first things an emergency room doctor may do is take an ultrasound device and use it to inspect your heart, your lungs, your kidney. Ultrasound has become an indispensable window into the human body for clinicians across areas of medicine.

As powerful as ultrasound is, access to ultrasound is still limited by a very high price; by form factor (it's usually a large cart-based system they have to push around); and, finally, by the years of education and training required to use it effectively. As a consequence, two-thirds of the world has no access to medical imaging of any kind, 5,000 children die every day from pediatric pneumonia, and over 800 women die every single day from totally preventable complications relating to maternal health. We need to do better.

To address this, Butterfly has developed a handheld, pocket-sized ultrasound device that connects right to your smartphone. At $2,000, a hundredth of the price of a conventional ultrasound system, the Butterfly iQ is a personal ultrasound device, a true visual stethoscope. Butterfly's ambition is to democratize ultrasound.

The price and size of the device have solved the cost and form factor problems. But to truly make ultrasound universally accessible, we need to solve two additional problems. The first is guidance: knowing where exactly to place the ultrasound probe, a task typically performed by a sonographer. The second is interpretation: understanding the content of the image, which is typically done by a radiologist. Now, in order to solve the access problem, we can't just scale up education; it doesn't scale fast enough. We need to use machine learning.

What you're about to see is our machine learning solution to guidance, called acquisition assistance. After the user indicates in the app what they want to image, the app shows a split screen. On the bottom is the ultrasound image, and on the top is an augmented reality interface that gives the user turn-by-turn visual directions indicating exactly how to move the ultrasound device in order to acquire a diagnostic image.

After acquiring a diagnostic image, we need to interpret it. One interpretation model that we've developed is for ejection fraction, an essential measurement of cardiac health. Ejection fraction captures the ratio of blood volume entering and exiting the left ventricle each time your heart beats. This is currently measured by clinicians hand-tracing the edges of the left ventricle and then evaluating how that tracing changes over time.
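For reference, the standard clinical definition (textbook cardiology, not a detail stated in the talk) expresses ejection fraction in terms of the left ventricle's end-diastolic volume (EDV) and end-systolic volume (ESV):

    EF = (EDV - ESV) / EDV * 100%

So, for example, if the full ventricle holds 120 ml and 48 ml remain after contraction, EF = (120 - 48) / 120 = 60%.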

Unfortunately, there's great disagreement, even among expert clinicians, with regard to exactly where to place those tracings, or even about which frames are of sufficient quality for reliable evaluation. This chart shows a handful of frames and the responses we obtained from six different expert sonographers regarding whether a frame has sufficient quality. This kind of disagreement introduces a real problem: how do you train and, more importantly, evaluate a model when you don't have access to a single, unambiguous ground truth?

In our approach to evaluation, rather than comparing to a single ground truth, we seek statistical indistinguishability. Intuitively, this means that if I were to show you a bunch of different estimates, some from a machine and some from humans, you wouldn't be able to tell the difference. So, for example, on the left, what you're seeing is our model in red, which is distinguishable: it has a very clear upward bias. Whereas on the right, if I were to remove the colors, you wouldn't be able to tell which estimates come from the algorithm and which from the clinician.

So how do we actually train a model? We use a conventional encoder-decoder architecture. For each frame, we predict whether the frame quality is sufficiently clear for reliable assessment, and we also produce a per-pixel segmentation of the cardiac chambers. Now that each frame in the video has been segmented, we select a single heart cycle that is of the highest quality. Finally, we estimate the ejection fraction using the largest and smallest areas produced by our model in that heart cycle.
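As a rough sketch of that kind of pipeline (the layer sizes, names, and the area-based ejection-fraction proxy below are illustrative assumptions, not Butterfly's published architecture), a per-frame model with a quality head and a segmentation head could be written in TensorFlow/Keras like this:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    def build_frame_model(height=128, width=128, n_classes=2):
        """Encoder-decoder with two heads: per-frame quality and per-pixel segmentation."""
        inputs = tf.keras.Input(shape=(height, width, 1))

        # Encoder: progressively downsample the ultrasound frame.
        x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
        x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
        bottleneck = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)

        # Head 1: is this frame clear enough for reliable assessment?
        quality = layers.GlobalAveragePooling2D()(bottleneck)
        quality = layers.Dense(1, activation="sigmoid", name="quality")(quality)

        # Head 2 (decoder): per-pixel segmentation of the cardiac chambers.
        y = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(bottleneck)
        y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
        y = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(y)
        seg = layers.Conv2D(n_classes, 1, activation="softmax", name="segmentation")(y)

        return tf.keras.Model(inputs, [quality, seg])

    def estimate_ef_from_areas(lv_areas):
        """Crude EF proxy: largest vs. smallest left-ventricle area within the
        selected heart cycle, with area standing in for volume."""
        largest, smallest = np.max(lv_areas), np.min(lv_areas)
        return 100.0 * (largest - smallest) / largest

In the pipeline described in the talk, the quality head would first be used to pick the clearest heart cycle, and the area-based estimate would then be computed only within that cycle.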

Quantitatively, our model is statistically indistinguishable from human experts. More specifically, it produces estimates that are closer to the average over all the clinicians than any of the individual experts are to that average.
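A minimal sketch of that specific check (assuming one ejection-fraction estimate per study from the model and from each of several clinicians; this is just one reading of the criterion, not Butterfly's evaluation code):

    import numpy as np

    def closer_to_consensus_than_experts(model_est, expert_ests):
        """model_est: array of shape (n_studies,) with the model's EF estimates.
        expert_ests: array of shape (n_studies, n_experts) with per-clinician estimates.
        Returns True if the model's mean absolute distance to the clinician average
        is smaller than that of every individual expert."""
        consensus = expert_ests.mean(axis=1)  # average over clinicians, per study
        model_err = np.abs(model_est - consensus).mean()
        expert_errs = np.abs(expert_ests - consensus[:, None]).mean(axis=0)
        return bool(model_err < expert_errs.min())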

Underlying all this infrastructure is TensorFlow. Everything that we do runs in real time on the device; this needs to work whether you're in a sub-basement of a hospital or in a remote jungle. We train everything with TensorFlow and compile TensorFlow right into the app, using a bunch of custom operations. Finally, we also use TF Serving to improve our labeling and monitoring pipelines.
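The talk describes compiling TensorFlow directly into the app with custom ops; as a generic illustration of getting a trained Keras model onto a phone (not necessarily the route Butterfly took), a TensorFlow Lite conversion would look roughly like this:

    import tensorflow as tf

    # 'model' is a trained tf.keras.Model, e.g. the sketch shown earlier.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default post-training optimizations
    tflite_bytes = converter.convert()

    with open("ef_model.tflite", "wb") as f:
        f.write(tflite_bytes)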

To summarize, Butterfly has developed a handheld ultrasound device that puts a high-end ultrasound cart, a sonographer, and a radiologist into your pocket. This is already being used by expert clinicians. And by solving the access problem, our use of real-time machine learning is making the democratization of ultrasound a reality around the world. Thank you.

[APPLAUSE]

[MUSIC PLAYING]

