Lifesaving AI and JavaScript | Jaeman An | JSConf Korea 2019

  • Can you hear me?

  • Let's get started.

  • My name is Ahn Jae Man, and in this session I am going to give a presentation about life-saving AI and JavaScript.

  • Nice to meet you.

  • I work at AITRICS, a medical AI company, where I use AI to predict acute and critical diseases and emergencies in inpatients,

  • and report the results to the medical team to help them prepare for such incidents.

  • We provide solutions for these events.

  • Today I am here to walk you through the process of developing such solutions and the issues that came up along the way.

  • The topic itself is very broad.

  • Medical technology is a very broad term, and the same applies to AI.

  • Likewise, there is a lot to talk about when it comes to JavaScript.

  • Our topics are extensive, and since I only have 30 minutes,

  • I might only touch on some subjects at a surface level.

  • On some topics my explanation may lack depth and might be hard to follow.

  • I apologize in advance for that.

  • If you want further information

  • or if you would like to discuss some issues in depth, you can ask me,

  • or come talk to me in person after the presentation.

  • I'll gladly discuss it with you then.

  • So, here is what I would like to talk about in this session.

  • The field of medical AI is not familiar to most people.

  • A lot of you might have heard of artificial intelligence, so you may already be familiar with that,

  • but medical AI is probably something most of you haven't heard of, so I will introduce it first,

  • and then I will go over how to develop AI solutions,

  • the process of such development.

  • And related to this topic, since we are using JavaScript, I will talk about JavaScript-related techniques.

  • We use several languages, such as JavaScript, Python, and Go,

  • but today I would like to focus on the JavaScript-related issues.

  • And because AI is different from regular software,

  • I will discuss several issues related to that as well.

  • That is the agenda for today, so now I will start my presentation.

  • First, let me introduce medical AI.

  • What if your code could save someone's life?

  • If the code you created

  • could contribute to raising the survival rate by even just 1%,

  • your code would be able to save the life of one person out of a hundred.

  • In the U.S., 250,000 people lose their lives annually because of medical malpractice.

  • Now, if your code could contribute to reducing such malpractice

  • and increase the survival rate by even one percent,

  • you would be able to save 2,500 of those 250,000 people every year.

  • Isn't it fascinating?

  • Now, I will talk about how software engineering and AI can save people's lives.

  • One of the solutions we are working on right now is this:

  • This is a solution that predicts acute diseases in advance

  • and reports the results to the doctors.

  • So what are these acute diseases and emergencies?

  • They include death itself,

  • unexpected cardiac arrest,

  • and sepsis, or blood poisoning, which is not that widely known,

  • among many other conditions.

  • It may be hard to picture what sepsis is;

  • sepsis refers to the same thing as blood poisoning.

  • So what exactly is it?

  • Here is a common example:

  • someone is bitten by a dog and taken to the hospital, but passes away within a day.

  • Or someone receives an injection, like a glucose solution, and suddenly passes away.

  • These are all cases caused by sepsis.

  • Among the many acute diseases, sepsis causes the most deaths and incurs the highest cost.

  • What makes it so serious is that with each hour of delayed treatment,

  • the death rate rises by about 8%.

  • More than 50% of in-hospital deaths

  • are related to sepsis, or blood poisoning.

  • So if we can predict sepsis and report it in advance,

  • it is intuitively clear that many people could be saved.

  • That is why we are now working on

  • a solution for predicting the onset of sepsis.

  • Now I will introduce the development process for such a solution.

  • The overall process is shown in this figure.

  • We collect the patients' data, such as heart rate, pulse, blood pressure, and body temperature,

  • along with blood test results, X-ray and CT images, and prescriptions.

  • We feed all of this into a machine learning model,

  • and the model calculates the probability that the patient will die or develop sepsis within the next 4 to 24 hours.

  • The results are displayed on a dashboard.

  • It's a very simple solution.
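  • The flow described above could be sketched roughly like this (a minimal sketch with made-up field names and a stand-in for the real model call, not our actual code):

```javascript
// Hypothetical shape of the incoming patient data described above.
const vitals = {
  heartRate: 92,          // beats per minute
  systolicBP: 118,        // mmHg
  temperature: 38.4,      // °C
  labs: { lactate: 2.1, wbc: 13.2 },
};

// predictRisk() stands in for the actual model call; it would return a
// probability (0..1) of death or sepsis within the next 4 to 24 hours.
async function updateDashboard(patientId, predictRisk) {
  const risk = await predictRisk(vitals);
  console.log(`Patient ${patientId}: predicted risk ${(risk * 100).toFixed(1)}%`);
}
```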

  • This is the solution we are deploying right now.

  • For mortality, the solution determines whether the patient is at risk of death within six hours,

  • notifies the medical team of that risk along with the supporting evidence,

  • and offers a view of that information, such as the current state of the patient.

  • I summarized the process for AI solution development into five phases.

  • These are not official phases;

  • I just thought it would be easier to explain if I divided the process into five.

  • I will walk you through each phase.

  • First is data refining and model building,

  • which is an essential phase in building an AI.

  • As you collect a massive amount of data,

  • you encounter strange, useless, or irrelevant data,

  • so we do data cleansing and pre-processing.

  • Then, for the outcome,

  • we have to define what we are going to predict.

  • When we predict death, you might think death is obvious,

  • that death is just death,

  • and jump to the conclusion that defining the outcome is very simple.

  • However, it is actually complicated, since we are predicting acute death,

  • and we have to define what acute death is.

  • So the outcome can be defined in many different ways.

  • Let's say you want to predict whether a patient will die within 24 hours.

  • What about the patient who dies after 25 hours?

  • What about 23 hours? These are all issues we have to keep in mind.

  • We think through these various circumstances,

  • wrestle with many different kinds of data,

  • and come up with the model that fits the data best.

  • This is what machine learning researchers usually do, and in fact

  • data refining and related work take up most of the time.

  • I would like to talk more about this process,

  • but I will skip this part for now.

  • Now that we have developed the model,

  • we have to deploy it,

  • and this is how the deployment works.

  • Most commonly, we deploy models as microservices,

  • for example behind a web API or REST API.

  • Now we also have TensorFlow.js, which lets us

  • load the model in the web browser,

  • so with TensorFlow.js we can deploy models to browsers as well.

  • Of course, steps like optimization for faster inference and model compression are necessary.

  • But once you actually deploy a model, it is immediately reflected in the service,

  • so you have to be careful about this.
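  • As a rough illustration of the browser-side option, loading a converted model with TensorFlow.js might look like this (the model URL is hypothetical, not our actual endpoint):

```javascript
// A minimal sketch of browser-side inference with TensorFlow.js.
import * as tf from '@tensorflow/tfjs';

async function loadAndPredict(features) {
  // The URL is a placeholder for a model exported with the tensorflowjs converter.
  const model = await tf.loadLayersModel('https://example.com/models/sepsis/model.json');
  const input = tf.tensor2d([features]);   // shape: [1, numFeatures]
  const output = model.predict(input);
  const [risk] = await output.data();      // probability in [0, 1]
  input.dispose();
  output.dispose();
  return risk;
}
```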

  • My slide says this part is about the safe use of AI models,

  • because AI models do not always give the right output.

  • Even if a model has 99% accuracy,

  • that means it is right for 99 people out of 100, but wrong for one,

  • and being wrong for that one person is critical in many cases.

  • So what if the model produces a wrong output?

  • That is one issue.

  • And when we get an output from the model,

  • why did it produce that output,

  • and how can we trust it? That is another issue.

  • You have to defend against these problems, or at least consider them,

  • to use an AI model safely.

  • A drawback of deep learning models is that

  • they are very good at predicting on data similar to what they were trained on,

  • but they cannot predict well on data they have never seen.

  • For example, take a deep learning model that distinguishes between dogs and cats.

  • If you feed it a picture of a tiger,

  • it should say "this is neither a dog nor a cat,"

  • but instead it will say "it is a cat."

  • So when we develop machine learning models,

  • we test against as many inputs as possible,

  • and even then, in the solution phase we need a lot of code

  • to defend against the cases where the model misbehaves.
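  • For instance, the guarding code might look roughly like this (a hedged sketch; the ranges and thresholds are made up for illustration, not our clinical values):

```javascript
// Reject inputs the model was never trained on and outputs that make no sense.
function guardPrediction(vitals, rawScore) {
  if (vitals.heartRate < 20 || vitals.heartRate > 300) {
    return { ok: false, reason: 'heart rate outside the training range' };
  }
  if (!Number.isFinite(rawScore) || rawScore < 0 || rawScore > 1) {
    return { ok: false, reason: 'model returned an invalid probability' };
  }
  return { ok: true, score: rawScore };
}
```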

  • When we provide machine learning models as a service,

  • these are the kinds of things we are usually concerned about,

  • so we need to test the models against random input values.

  • This is usually called property-based testing:

  • you test the model with random inputs

  • and observe which kinds of inputs make the test fail.

  • We do property-based testing in JavaScript,

  • and libraries like JSVerify and fast-check support it.

  • Additionally, when the model produces an output,

  • there is the issue of how we can trust it,

  • that is, how accurate the result is.

  • So we added an interpretable module,

  • which allowed us to analyze and debug the model.

  • If you look at the images on the right,

  • the one on the bottom right is more intuitive.

  • Here is a chest X-ray.

  • Let's say I want to know whether this person is in danger by looking at this X-ray.

  • If you give these two images to the model

  • and it says, "the X-ray shows that he is in danger,"

  • then I need to know the exact reason why,

  • to see whether the model's answer is correct.

  • By adding an interpretable module, as we did for the image on the right,

  • the model tells us the reason behind its answer, and we can see whether it is working well.

  • So if we add an interpretable module,

  • we can use the AI model much more reliably.

  • Thus, when using interpretable modules,

  • it is important to visualize the results well.

  • Let me add some more explanation about property-based testing.

  • This is part of our solution code.

  • It tests whether people with sepsis actually

  • get a higher sepsis risk score.

  • When we look at what indicates sepsis,

  • the best feature for diagnosing it is the lab blood culture feature.

  • Here we constrain the blood culture value to be greater than 0.3,

  • while the rest of the features are filled with random values,

  • and the test runs automatically.

  • The resulting risk score must be greater than 0.2,

  • which shows that blood culture does have an effect.

  • By testing it like this,

  • we can test the machine learning model against random inputs.
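  • In fast-check, that kind of property could be written roughly like this (a sketch of the idea, not our actual code; scoreSepsisRisk and the feature names are hypothetical stand-ins):

```javascript
const fc = require('fast-check');

// Stand-in for the real model: a trivial scorer, here only to make the sketch runnable.
const scoreSepsisRisk = (patient) => Math.min(1, patient.labBloodCulture * 0.9);

// Property: for any patient whose blood culture value exceeds 0.3,
// the predicted sepsis risk score should exceed 0.2.
fc.assert(
  fc.property(
    fc.record({
      labBloodCulture: fc.double({ min: 0.3, max: 1, noNaN: true }),
      heartRate: fc.integer({ min: 40, max: 180 }),
      temperature: fc.double({ min: 35, max: 42, noNaN: true }),
    }),
    (patient) => scoreSepsisRisk(patient) > 0.2
  )
);
```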

  • So that was an introduction to how we deploy the model safely.

  • Next, now that we have a model,

  • we have to find a way to use it with real data.

  • When data is coming in live,

  • we have to compute the score for it,

  • and for this we use a data pipeline,

  • which can be implemented very easily with Node.js.

  • Node.js supports asynchronous events:

  • when data flows in,

  • it triggers a certain task,

  • so we were able to build a data pipeline easily with Node.js.
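  • As a rough sketch of that event-driven style (event names and the predictRisk stand-in are hypothetical, not our actual pipeline):

```javascript
const { EventEmitter } = require('events');

const pipeline = new EventEmitter();

// Stand-in for the real model call.
const predictRisk = async (record) => Math.random();

// When new patient data arrives, score it and pass the result downstream.
pipeline.on('patient-data', async (record) => {
  const score = await predictRisk(record);
  pipeline.emit('prediction', { patientId: record.patientId, score });
});

// Downstream step: alert when the score crosses an illustrative threshold.
pipeline.on('prediction', ({ patientId, score }) => {
  if (score > 0.8) pipeline.emit('alert', { patientId, score });
});

pipeline.emit('patient-data', { patientId: 'p-001', heartRate: 92 });
```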

  • During this process, we can monitor the data flow

  • or periodically check its accuracy.

  • This is part of a presentation we gave at PyCon.

  • So a data pipeline works something like this: when data comes in from the hospital,

  • the patient's information flows into the acute disease prediction solution.

  • Using the machine learning model, we make a prediction and show it on the dashboard,

  • and it sends a notification in some way, such as a text message or a phone call.

  • Now, you might think we have a simple data pipeline,

  • but it is actually quite complicated.

  • In the top left corner, you can see the patient data coming from the hospital.

  • Our synchronizer, which is a microservice, writes the data into our database,

  • and on the right side

  • you can see the prediction, medical score,

  • and alerting services.

  • At the top, you can see the jobs that must run regularly,

  • like backing up data

  • and a scheduler that retrains the model,

  • and below that there are the monitor and controller.

  • I don't think we will have enough time to go over everything,

  • so I'll start with this part first

  • and come back to the others later:

  • the monitor and controller.

  • We use Redis for this. Redis takes in the metrics as strings,

  • and if we send them on through StatsD,

  • you can see something like the right side of the screen:

  • the medical score or the EMR data.

  • By monitoring these,

  • we can check whether the data pipeline is working well.
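  • Pushing such metrics from Node.js might look roughly like this (a sketch using the hot-shots StatsD client as one possible choice; the metric names are made up):

```javascript
const StatsD = require('hot-shots');

// Assumes a StatsD daemon listening on the default port.
const statsd = new StatsD({ host: 'localhost', port: 8125 });

function reportPrediction(score) {
  statsd.gauge('pipeline.medical_score', score);    // latest score, for the dashboard
  statsd.increment('pipeline.predictions_total');   // count of processed records
}

reportPrediction(0.42);
```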

  • As for the controller,

  • if we type a command in Slack,

  • we can check the status of the data pipeline

  • or rerun it,

  • things like that.

  • It's easy to build this kind of thing

  • with Node.js.

  • We could also use other languages like Python,

  • but those aren't event-based,

  • so you have to poll continuously with threads, which is a bit of a problem.

  • The next step is

  • the most important part of building the service:

  • the frontend.

  • You might think that the frontend

  • has nothing to do with machine learning,

  • but actually,

  • drawing from my experience,

  • in developing a product based on an AI model,

  • one of the most important parts is developing the frontend.

  • You might be wondering why.

  • When you get the output from the model,

  • it is crucial to interpret it and display it well.

  • The following will help you understand.

  • The output of the machine learning model

  • is just a number.

  • If you feed in a patient's data, it will say 0.015,

  • or 0.2, or 0.03, hundreds of these numbers.

  • Doctors cannot make decisions from these alone; the numbers are not the sole decision-maker.

  • If a patient scores 0.015, is she at risk or not?

  • Was she already in danger,

  • or has the risk developed recently? These factors all have to be taken into consideration.

  • What the medical team actually wants to see

  • is the screen below.

  • Producing that screen from the number 0.015 above is what the frontend does.

  • It takes a lot of work and thought,

  • including working out how to interpret the model.
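  • To make that concrete, turning a raw score into something displayable might look roughly like this (the thresholds and labels are illustrative only, not our clinical cut-offs):

```javascript
// Map a raw model score plus its recent trend to a display state.
function describeRisk(currentScore, previousScore) {
  const level =
    currentScore >= 0.5 ? 'high' :
    currentScore >= 0.1 ? 'watch' : 'low';
  const trend = currentScore > previousScore ? 'rising' : 'stable or falling';
  return { level, trend, label: `Risk: ${level} (${trend})` };
}

describeRisk(0.015, 0.010);  // → { level: 'low', trend: 'rising', label: 'Risk: low (rising)' }
```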

  • For this process,

  • we draw a lot of charts like these

  • to check whether our interpretation is actually accurate.

  • This is not the actual graph we use;

  • it is just the kind of graph we draw, and I took it from the ECharts docs.

  • For example, we plot the predicted score against the actual outcome, like death,

  • and the relationship between the outcome and the inputs,

  • as graphs,

  • to verify our intuition and results.
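  • A minimal ECharts sketch of that kind of check could look like this (the data points are made up; only the charting calls are standard ECharts API):

```javascript
import * as echarts from 'echarts';

// Scatter plot of predicted score vs. observed outcome rate.
const chart = echarts.init(document.getElementById('score-vs-outcome'));
chart.setOption({
  xAxis: { type: 'value', name: 'predicted score' },
  yAxis: { type: 'value', name: 'observed outcome rate' },
  series: [{
    type: 'scatter',
    data: [[0.05, 0.02], [0.2, 0.18], [0.6, 0.55]],  // illustrative points only
  }],
});
```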

  • So when the warning is delivered, it should convey that the situation is very risky,

  • like the figure holding his heart,

  • and we think hard about how to deliver it more intuitively.

  • At this point, you might think we have finished building the AI product,

  • but in fact, AI models

  • are not always accurate when we actually put them into service.

  • They are always wrong somewhere.

  • If we dig into why a model that worked during development goes wrong in production,

  • the first reason is that the data is a little different;

  • its characteristics shift over time.

  • For example, patients in 2015 and patients in 2019 are, if not completely different,

  • still quite different, and that lowers the accuracy of the prediction.

  • People's behavior is a little different as well.

  • For example, when we predict sepsis

  • and report it 4 hours ahead of its predicted onset,

  • the medical team reacts in advance.

  • With that reaction, the pattern of sepsis changes,

  • and our AI model loses accuracy because of it.

  • Because we intervene in people's behavior and reactions, the predictions become less accurate.

  • So the issue here is how we make the AI

  • keep learning from data

  • that keeps changing.

  • We have to consider that as well.

  • You've seen this data pipeline before, right?

  • The trainer here does that work,

  • and the trainer microservice is not that complicated.

  • We have the original data,

  • then data from September 1st, September 2nd, and so on, and the data keeps accumulating.

  • We get the version 1 model from the original data.

  • If we include the data from September 1st, we get version 2,

  • and if the accuracy drops with it,

  • we decide to discard it.

  • Then more data arrives on September 2nd,

  • and we train the model with it,

  • and if the accuracy improves, we decide to keep that version.

  • So we keep retraining the model regularly;

  • this is the microservice that updates the model automatically.
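  • The decision rule amounts to something like this (a hedged sketch; trainModel, evaluate, and validationSet are hypothetical stand-ins for the real training code):

```javascript
// Retrain on a new batch of data and keep the candidate only if it improves accuracy.
async function retrain(current, newBatch, { trainModel, evaluate, validationSet }) {
  const candidate = await trainModel(current.model, newBatch);
  const accuracy = await evaluate(candidate, validationSet);
  return accuracy > current.accuracy
    ? { model: candidate, accuracy }   // keep the improved version
    : current;                         // discard the candidate
}
```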

  • During this process, we validate the model with data from other institutions,

  • continually verifying that it is applicable to many other hospitals and institutions.

  • That was a rough description

  • of how to develop an AI product.

  • And we cannot skip over TensorFlow.js.

  • You might have heard a lot about TensorFlow.js.

  • As you may know, it is

  • a library that enables machine learning in JavaScript.

  • The code on the right is ours.

  • If you have used the Python TensorFlow, you'll recognize

  • that the structure is very intuitive.

  • The code is not that different from the Python TensorFlow,

  • so you can use existing pre-trained models,

  • convert and reuse models built with Python,

  • and even train models in the browser or in Node.js.
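  • To give a feel for the API (not the speaker's actual model), a tiny model defined and trained with TensorFlow.js, in the browser or Node.js, might look like this, with random placeholder data:

```javascript
import * as tf from '@tensorflow/tfjs';

async function demoTraining() {
  // A tiny binary classifier, just to show the Keras-like API.
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [4], units: 8, activation: 'relu' }));
  model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));
  model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy'] });

  // Random placeholder data standing in for real features and labels.
  const xs = tf.randomNormal([32, 4]);
  const ys = tf.randomUniform([32, 1]).round();

  await model.fit(xs, ys, { epochs: 5 });
  return model;
}
```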

  • The TensorFlow website has a tutorial page,

  • and it is really good for beginners;

  • even if you know nothing about it, you can try it and build a cool product.

  • There you can learn about the existing models,

  • retrain the existing models,

  • or use JavaScript

  • to develop machine learning from scratch; those are some of the things you can do.

  • If you are interested, it would be fun to try it out.

  • There are many interesting examples.

  • You can play games or play a piano; there are a lot of fun demos.

  • So it is worth trying them out.

  • As for why we would want to run machine learning in the web browser:

  • on the right, a patient and a doctor are having a real-time conversation,

  • and the doctor is constantly checking the patient's condition.

  • When you need to predict something interactively in real time like this,

  • a round trip to the back-end is slow

  • and does not feel interactive.

  • But if you put the model in the browser with TensorFlow.js,

  • you can use the machine learning model

  • very interactively.

  • And of course, since it runs in the web browser, the server load is reduced.

  • Another benefit is that the machine learning model can be visualized.

  • Those are the advantages,

  • so we have experimented with it a lot.

  • We have been thinking about how to use TensorFlow.js.

  • It does not support everything yet,

  • and only a few people are using it in the community, so

  • we built simple models

  • and tried running them, and made simple products and reimplemented well-known examples.

  • But we have not tried to use it for real production or anything like that.

  • When the community grows, we will be able to try many other things.

  • So far it is very good, but not quite good enough to adopt right away.

  • This is a library called TensorSpace.js.

  • When you run a machine learning model in the web browser like this,

  • you can easily see how the model

  • actually behaves and what structure it has.

  • It also makes the machine learning model very easy to debug.

  • We have gone through this quickly; now let's step back.

  • So, the products we are making right now:

  • what kind of innovation can they actually bring?

  • You might be wondering whether they are actually saving people's lives.

  • In Korea, some hospitals are running pilot tests.

  • When I checked the feedback from the medical teams, the doctors and nurses,

  • there was some very good feedback.

  • For example, it helped distribute medical resources efficiently.

  • They could attend to patients they otherwise couldn't. Their workload was reduced, so they could go home earlier.

  • Their sleeping hours increased by an hour. There was good feedback like that.

  • It is hard to say here whether it has improved the survival rate,

  • because that has to go through rigorous scientific and statistical verification,

  • so I cannot quote an exact percentage.

  • But based on the feedback, I can safely say that it is contributing to the survival rate.

  • So next, we are going to

  • show that an AI solution can actually save human lives,

  • by verifying it through time-consuming experiments and observations.

  • And we should certainly go on to save a lot of lives, right?

  • It is getting a bit cheesy, so let's skip this part.

  • To explain the technical details,

  • we are increasing the number of diseases we can predict,

  • so that we can predict every disease that occurs in the hospital.

  • We are trying to build that platform.

  • We also want to predict how a patient will react

  • when a medication is administered to treat a disease.

  • Medically, these are the kinds of challenges we are facing.

  • Looking at the machine learning side, in terms of software,

  • since we are making many predictions,

  • it is hard to run all the machine learning models in the back-end;

  • we use machine learning even for the smallest parts.

  • So we use TensorFlow.js to put models in the web browser, or,

  • using the TensorFlow APIs,

  • put them on mobile. Those are the approaches we are using.

  • The screen on the right shows something we presented at NVIDIA.

  • It is a platform that can accelerate machine learning research.

  • You can look up the details of that presentation.

  • We are also trying to

  • create a platform that speeds up the training process for machine learning.

  • We are working on that, too.

  • Additionally, beyond the medical field, there are many other areas where machine learning can be applied.

  • All of you, not just people from our company, should use machine learning in various fields

  • and try different things.

  • That could bring innovations that have never been done before.

  • This is the end of my presentation, and the time's almost up.

  • Thank you for being such a wonderful audience.
