Build Real Time Face Detection With JavaScript

  • Do you like robots?

  • Because I sure do.

  • In today's video, I'm going to be using artificial intelligence and face recognition to determine my emotions through my webcam in real time.

  • Also, if you enjoy this video, make sure to let me know down below so that I can create a part two to this video where I use face recognition in order to determine who is in a picture and display their name next to their face.

  • Let's get started now.

  • In order to accomplish this face recognition, we're going to be using a library called face-api.js, which you can see over here on the right, and this is a wrapper around TensorFlow.

  • TensorFlow is one of the most popular machine learning libraries out there; this wrapper is going to allow us to do real-time face detection in the browser, and it's really easy to get set up with.

  • The first thing that you need to do is download this library.

  • There's going to be a link in the description below where you can download this library from. You're also going to need to download the models that you'll be using for your face detection.

  • When we get to the model section, I'm going to list off all the models that we use so that you can download them.

  • Also, all of the models are going to be available on my GitHub in the source code for this video, so you can get them there as well.

  • To get started, I just want to create a blank HTML page, and inside of it we want to put a video element, because this is going to be where we render our webcam feed and do our real-time face detection.

  • And this is just going to have an id of video so we can easily access it in the JavaScript.

  • Then we're going to give it a width; you can use whatever width and height you want, but in our case I'm going to use a width of 720.

  • I'm going to use a height of 560, and you want to make sure you set this to be autoplay and muted, because we don't actually want any sound.

  • And you have to make sure you specify a width and a height on here.

  • Otherwise, the face detection will not be able to draw properly onto your webcam.
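
A minimal sketch of the markup described above (an index.html file; the id, width, and height values are the ones mentioned in the video):

```html
<!-- index.html: the webcam stream is rendered into this element -->
<video id="video" width="720" height="560" autoplay muted></video>
```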

  • Now, once we have that done, I'm just going to add a little bit of basic styling, so we can come in here and add a style tag.

  • And really, all I'm going to do is just style the body of this.

  • So I'm going to give it no margin, making that zero, and the same thing with the padding here; we're going to do zero.

  • And essentially, the reason I'm doing this styling is just to get the webcam centered on the screen.

  • So I'm going to set the width to 100 viewport widths (100vw).

  • The height is going to be 100 viewport heights (100vh), and I'm going to change the display to flex, justifying the content in the center and aligning the items in the center.
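
As a rough sketch, the style block being described would look something like this (the canvas rule is for the overlay canvas that gets added later in the video):

```html
<style>
  body {
    margin: 0;
    padding: 0;
    width: 100vw;
    height: 100vh;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  canvas {
    position: absolute; /* sits directly on top of the video element */
  }
</style>
```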

  • And this is just going to put the webcam in the very center of the screen, so we can just open this up with Live Server to see what we have to work with.

  • And here we go.

  • It's loading up, and obviously nothing's going to render for our video because we haven't actually hooked up our webcam into our video yet.

  • So let's go into our script here and work on doing that.

  • First, we need to get that video element.

  • We're going to create a variable here called video, and we can say document.getElementById.

  • We gave it an id of video, and this is going to be our video tag.

  • Then we can create a function called startVideo.

  • We're going to use this to hook up our webcam to our video element.

  • So in order to get the webcam, we need to use navigator.getUserMedia, and this is going to take an object as the first parameter.

  • And this just says what we want to get, and we want to get the video.

  • We're going to say video is the key, with an empty object as the value.

  • Then we're going to have a method here which receives the stream; this is essentially what's coming from our webcam.

  • What's coming from our webcam, we want to set as the source of our video, so we'll set video.srcObject to be equal to that stream.

  • And then, lastly, we have an error function that we can call here.

  • So if we do get an error, we just want to log it, so we'll say console.log, and there goes the error, just like that; let's make sure it's an error log (console.error) instead of a normal log.
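
Pieced together, the script so far looks roughly like this (it uses the older callback form of navigator.getUserMedia shown in the video; newer code would typically use navigator.mediaDevices.getUserMedia instead):

```js
// script.js: grab the video tag and pipe the webcam stream into it
const video = document.getElementById('video')

function startVideo() {
  navigator.getUserMedia(
    { video: {} },                        // we only want video, no audio
    stream => (video.srcObject = stream), // show the webcam stream in the video tag
    err => console.error(err)             // log any permission or device errors
  )
}

startVideo() // called directly for now; later this moves into Promise.all(...).then(startVideo)
```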

  • And now we can call that function; we can just say startVideo(), and then go into the HTML to make sure we include the script.

  • So here, we're going to include our script tag.

  • Make sure we set it to defer, and we want the source of that to be our script.js.

  • Now we save that.

  • You should see that it's going to access our video.

  • It'll load up right here, and this is just a live preview of my webcam; it may be slightly delayed, but that's just because it's going through the browser as opposed to going directly into my recording software.

  • So now, once we have that done, let's also include our face-api library while we're at it, so we can add another script tag here.

  • We want defer on this one as well, and we want this one to be the face-api script, and we want to make sure that it's defined above our normal script so that it gets loaded before we actually run our script.
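
In the HTML that might look like the following (the exact file name face-api.min.js is an assumption based on the library's standard build; use whatever file you downloaded):

```html
<!-- face-api must be listed first so it loads before our own script runs -->
<script defer src="face-api.min.js"></script>
<script defer src="script.js"></script>
```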

  • And now we can work on actually setting up our script to detect my face, as opposed to just rendering my video.

  • In order to do that, we need to load all of the different models.

  • Let's do that at the very top.

  • This is all done asynchronously.

  • So we want to use Promise.all, which is going to run all of these asynchronous calls in parallel, which will make it much quicker to execute.

  • And then here we just pass an array of all of our promises, and what you do is call faceapi.nets.

  • And then this is going to be where you call all the different models you want.

  • In our case, we're using the tiny face detector.

  • This is just like any normal face detector, but it's going to be smaller and quicker, so it'll run in real time in the browser as opposed to being very slow.

  • And we want to say loadFromUri (let me spell that properly), and in here we have a models folder with all of our different models.

  • So we're just going to pass that, and we'll just say '/models'.

  • We want to do this a couple of times for all of our different models.

  • Let's copy that down so we have four in total, and the first is our tiny face detector (tinyFaceDetector).

  • The next one that we have is going to be our face landmark net, faceLandmark68Net, and this is going to be able to register the different parts of my face, such as my mouth, my eyes, my nose, et cetera.

  • The next thing that we're going to have here is face recognition, faceRecognitionNet.

  • And this is just going to allow the API to recognize where my face is, the box around it.

  • And then, lastly, we're going to have faceExpressionNet; whoops, expression, just like that.

  • And what this is going to allow it to do is recognize when I'm smiling, frowning, happy, sad, et cetera.

  • So what we want to do now is this: Promise.all is going to take a .then.

  • And after we're done with this, we want to call our startVideo, so we can move that call in here.

  • Once we're done loading all of our models, it's going to start the video over here on the side.
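
Put together, the model-loading portion being described is roughly:

```js
// Load all of the face-api models in parallel, then start the webcam.
// '/models' is the folder of downloaded model files served next to index.html.
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),   // small, fast detector for real time
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),   // mouth, eyes, nose, face outline
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'),  // locates the face / bounding box
  faceapi.nets.faceExpressionNet.loadFromUri('/models')    // happy, sad, angry, surprised, ...
]).then(startVideo)
```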

  • And it may take a little bit longer because loading these models does take a little bit of time.

  • But it's fairly quick.

  • Then we can actually set up an event listener.

  • So we can say video.addEventListener; we want to add an event listener for when the video starts playing.

  • So then, when the video starts playing, we're going to have our code down here, which is what's going to recognize the face.

  • For now, let's just do a simple console.log with whatever text.

  • Then come over here and inspect, and as soon as the video starts, you see we get that log down here at the bottom, which means we know that everything's working so far.
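
That listener sketch looks like this (the log text is just a placeholder):

```js
// Run our detection code once the webcam video actually starts playing
video.addEventListener('play', () => {
  console.log('video is playing') // placeholder; the detection loop goes here next
})
```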

  • Now we can work on actually setting up the face detection, and this is actually incredibly straightforward to do. What we want to do is a setInterval, so that we can run the code inside of it multiple times in a row.

  • We want to make sure it's an async function, because this is an asynchronous library, and all we need to do inside of here is get our detections.

  • So we're just going to say detections is going to be equal to awaiting faceapi.detectAllFaces.

  • And this is going to get all the faces inside of the webcam image every single time that this gets called.

  • And we're going to do this, for example, every 100 milliseconds, and what we want to do inside of here is pass the element, which in our case is the video element, as well as what type of detector we're going to use to detect the faces.

  • In our case, we're using the tiny face detector, so we'll say faceapi.TinyFaceDetectorOptions.

  • This is just going to be empty, because we don't want to pass any custom options; the defaults are going to work perfectly fine for our scenario.

  • Then we also say what we want to detect these faces with, so we're going to say .withFaceLandmarks, and this is going to be for when we actually draw the face on the screen.

  • We're going to be able to see these different sections.

  • So the face landmarks are going to be the different dots that you see on my face, and then we can say .withFaceExpressions, and this is going to be able to determine whether I'm happy, sad, angry, upset, whatever, based on just the image that it gets of my face.

  • Let's make sure I spell that properly.

  • There we go.

  • Now we can just log out these detections, so we're going to say console.log(detections) just so we can see if this is working, and we can go inspect over here; you can see that we're getting an error immediately saying TinyFaceDetectorOptions is not a constructor, and that's super easy to fix.

  • This just needs to be a capital T over here.

  • We can save this, and now we should be able to actually get our detections showing up over here, so let's make sure; you can see we have a bunch of objects in here.

  • And we just have one element in the array, because there's only one face currently, and this is all the different detection information, expressions, et cetera.
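
The detection loop being described, inside that 'play' listener, looks roughly like this:

```js
video.addEventListener('play', () => {
  setInterval(async () => {
    // Detect every face in the current webcam frame, every 100 ms
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions()) // note the capital T
      .withFaceLandmarks()    // dot positions for eyes, nose, mouth, face outline
      .withFaceExpressions()  // scores for happy, sad, angry, surprised, ...
    console.log(detections)
  }, 100)
})
```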

  • And what we want to do is actually display this on the screen.

  • And to do that, we're going to be using a canvas element.

  • So inside of our index.html here, we just want to style this canvas; it has to be positioned absolute so that it sits directly over top of our video element.

  • And we don't actually have to put the canvas element inside of our HTML, because we can do that in our JavaScript.

  • So let's do that now.

  • We can just say canvas is going to be equal to faceapi.createCanvasFromMedia, and we want to create it from our video element.

  • Then what we want to do is just add that canvas to the screen.

  • So we're going to say document.body.append, and this is just going to put it at the very end of our page; since it's positioned absolutely, it doesn't really matter where it goes. Then what we want to do is get the display size of our current video, and this will be so that our canvas can be sized perfectly over our video.

  • So this is just going to be an object with a width, which is just video.width, and it's going to have a height property, which is video.height.
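
Sketched out, that canvas setup (placed inside the 'play' listener, before the interval) looks like:

```js
// Create a canvas matching the video and overlay it on the page
const canvas = faceapi.createCanvasFromMedia(video)
document.body.append(canvas) // position: absolute in the CSS keeps it on top of the video

// The size our overlay should be drawn at: the on-screen size of the video element
const displaySize = { width: video.width, height: video.height }
```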

  • Now that we have that out of the way, we can actually work on displaying our elements inside the canvas.

  • So we want to take our detections, which we have right here, and create a new variable, which is going to be called resizedDetections.

  • We just want to set this to faceapi.resizeResults, and we want to pass in here the detections that we have, as well as the display size.

  • And this is just going to make it so that the boxes that show up around my face are properly sized for the video element that we're using over here, as well as for our canvas.

  • Let's make this a little wider, so it's easy to see.

  • And now all we need to do is actually just draw this.

  • We can say faceapi.draw.drawDetections.

  • And what we do is pass in the canvas we want to draw onto, as well as our resized detections, and make sure we spelled faceapi correctly over here.
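
Those two steps would look something like this inside the interval:

```js
// Scale the raw detections to the on-screen size of the video/canvas
const resizedDetections = faceapi.resizeResults(detections, displaySize)

// Draw the bounding boxes (with confidence scores) onto the overlay canvas
faceapi.draw.drawDetections(canvas, resizedDetections)
```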

  • Let's save that and see how this works.

  • It's going to load in my face over here.

  • And as you can see, it's already got a problem, because our canvas isn't being cleared, and it's being shown over top of our video element, which is definitely not what we want.

  • The first thing we can do to fix this is to actually clear the canvas before we draw on it.

  • So let's remove this console.log, and in here, right after we get our detections, right after we resize everything and right before we draw, we want to take our canvas.

  • We want to get the context, the 2D context; this is just a two-dimensional canvas.

  • We just want to clear it, so we'll say clearRect, and we want to clear it from 0, 0, with the width being just canvas.width and canvas.height for the height, which is going to clear the entire canvas.

  • Also, we want to make sure we match our canvas to this display size, so we can say faceapi.matchDimensions, and we pass in our canvas as well as the display size.

  • Make sure you spell dimensions properly.
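
A sketch of those two additions:

```js
// Keep the overlay canvas dimensions in sync with the video's display size
faceapi.matchDimensions(canvas, displaySize)

// Inside the interval: wipe the previous frame's drawings before drawing new ones
canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
```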

  • And now, if we save that, we should get much better detection over on the side, and if we just give it a second, you can see that it's detecting my face.

  • And as I move, it's following me around.

  • It also has a number, which is how sure it is, as a percentage.

  • So it's about 90% sure that this is a face which is perfect.

  • So now we can actually start drawing even more details.

  • We can go into faceapi.draw again, and this time we can draw the landmarks, so we can say drawFaceLandmarks.

  • This is going to take the canvas as well as the resized results.

  • And if we save that... whoops, resizedResults; save that again.

  • Let it refresh over here, and now you'll see that it'll actually draw some lines and dots on my face based on where the landmarks of my face are.

  • I notice that's actually not working, and that's because this should be called resizedDetections, not resizedResults.

  • So let's save that, refresh it again, and let it do its work.

  • And if we wait a second, you'll see that it now has all the different face detection.

  • It knows where my eyes are, my eyebrows, my actual face shape, my mouth, nose, et cetera.

  • And lastly, we want to determine if I'm happy, sad, or whatnot, based on this image.

  • We can go to faceapi.draw here again.

  • And this time we want to draw the face expressions (drawFaceExpressions).

  • This is going to take the canvas element, as well as our resized detections again.
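
Both extra draw calls follow the same pattern as drawDetections:

```js
// Draw the 68-point landmark dots/lines (eyes, eyebrows, nose, mouth, jawline)
faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)

// Draw each face's strongest expression (neutral, happy, surprised, angry, ...)
faceapi.draw.drawFaceExpressions(canvas, resizedDetections)
```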

  • Now, if we save that, it'll be able to determine my emotion based on just my image alone, so you can see that it's about 100% sure that I'm neutral.

  • But if I make a surprised face, for example, it says it's 100% sure I'm surprised, and if I look angry, it'll say I'm angry, and so on, which is really impressive.

  • And that's all it takes to create this simple face detection algorithm.

  • If you want more artificial intelligence and face detection, let me know down in the comments below, and I'll definitely make more videos like that.

  • Also check out some of my other project based videos linked over here.

  • Also, subscribe to the channel for more videos just like this.

  • Thank you very much for watching and have a good day.
