Why is Linear Algebra Useful?

  • Why is linear algebra actually useful? There are very many applications of linear algebra.

  • In data science, in particular, several are of high importance.

  • Some are easy to grasp, others not just yet. In this lesson, we will explore 3 of them:

  • vectorized code, also known as ‘array programming’,

  • image recognition,

  • and dimensionality reduction.

  • Okay. Let’s start with the simplest and probably the most commonly used one – vectorized code.

  • We can certainly claim that the price of a house depends on its size. Suppose you know

  • that the exact relationship for some neighborhood is given by the equation:

  • Price equals 10,190 + 223 times size. Moreover, you know the sizes of 5 houses: 693,

  • 656, 1060, 487, and 1275 square feet. What you want to do is plug each size into

  • the equation and find the price of each house, right?
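
  • Written as a formula, the relationship is:

```latex
\text{price} = 10{,}190 + 223 \times \text{size}
```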

  • Well, for the first one we get: 10,190 + 223 times 693 equals 164,729.

  • Then we can find the next one, and so on, until we find all prices.

  • Now, if we have 100 houses, doing that by hand would be quite tedious, wouldn’t it?

  • One way to deal with that problem is by creating a loop. You can iterate over the sizes, multiplying

  • each of them by 223, and adding 10,190. However, we are smarter than that, aren’t

  • we? We know some linear algebra already. Let’s explore these two objects:

  • A 5 by 2 matrix and a vector of length 2. The matrix contains a column of 1s and another

  • with the sizes of the houses. The vector contains 10,190 and 223 – the numbers from

  • the equation. If we go about multiplying them, we will get

  • a vector of length 5. The first element will be equal to:

  • 1 times 10,190 plus 693 times 223. The second to:

  • 1 times 10,190 plus 656 times 223. And so on.

  • By inspecting these expressions, we quickly realize that the resulting vector contains

  • all the manual calculations we made earlier to find the prices.
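
  • Concretely, the product works out to the following (only the first value is spelled out in the lesson; the rest are computed with the same equation):

```latex
\begin{bmatrix}
1 & 693 \\
1 & 656 \\
1 & 1060 \\
1 & 487 \\
1 & 1275
\end{bmatrix}
\begin{bmatrix}
10{,}190 \\
223
\end{bmatrix}
=
\begin{bmatrix}
164{,}729 \\
156{,}478 \\
246{,}570 \\
118{,}791 \\
294{,}515
\end{bmatrix}
```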

  • In machine learning, and linear regression in particular, this is exactly how algorithms

  • work. We’ve got an input matrix; a weights, or coefficients, matrix; and an output matrix.

  • Without diving too deep into the mechanics of it here, let’s note something.

  • If we have 10,000 inputs, the initial matrix would be 10,000 by 2, right? The weights matrix

  • would still be 2 by 1. When we multiply them, the resulting output matrix would be 10,000 by 1.

  • This shows us that no matter the number of inputs, we will get just as many outputs.

  • Moreover, the equation doesn’t change, as it only contains the two coefficients – 10,190

  • and 223. Alright.

  • So, whenever we are using linear algebra to compute many values simultaneously, we call

  • this ‘array programming’ or ‘vectorized code’. It is important to stress that array

  • programming is much, much faster. There are libraries such as NumPy that are optimized

  • for performing this kind of operation, which greatly increases the computational efficiency

  • of our code. Okay.
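
  • Here is a minimal sketch of that idea in NumPy (the sizes and coefficients are the ones from this lesson; the variable names are just illustrative):

```python
import numpy as np

sizes = np.array([693, 656, 1060, 487, 1275])       # house sizes in square feet

# Loop version: compute one price at a time
prices_loop = [10_190 + 223 * s for s in sizes]

# Vectorized version: a 5-by-2 inputs matrix times a length-2 weights vector
X = np.column_stack([np.ones(len(sizes)), sizes])   # column of 1s, column of sizes
w = np.array([10_190, 223])                         # the coefficients from the equation

prices = X @ w    # one matrix-vector product yields all 5 prices at once
print(prices)     # [164729. 156478. 246570. 118791. 294515.]
```

  • The vectorized line computes exactly what the loop does, but NumPy runs it in optimized compiled routines – that is where the speed-up comes from.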

  • What about image recognition? In the last few years, deep learning, and

  • deep neural networks in particular, conquered image recognition. At the forefront are convolutional

  • neural networks, or CNNs for short. What’s the basic idea? You can take a photo, feed it

  • to the algorithm, and classify it. Famous examples are:

  • the MNIST dataset, where the task is to classify handwritten digits,

  • CIFAR-10, where the task is to classify animals and vehicles,

  • and CIFAR-100, where you have 100 different classes of images.

  • The main problem is that we cannot just take

  • a photo and give it to the computer. We must design a way to turn that photo into numbers

  • in order to communicate the image to the computer. Here’s where linear algebra comes in.

  • Each photo has some dimensions, right? Say, this photo is 400 by 400 pixels. Each pixel

  • in a photo is basically a colored square. Given enough pixels and a big enough zoom-out,

  • our brain perceives this as an image, rather than as a collection of squares.

  • Let’s dig into that. Here’s a simple greyscale photo. The greyscale contains 256 shades of

  • grey, where 0 is totally white and 255 is totally black, or vice versa.

  • We can actually express this photo as a matrix. If the photo is 400 by 400, then that’s

  • a 400 by 400 matrix. Each element of that matrix is a number from 0 to 255. It shows

  • the intensity of the color grey in that pixel. That’s how the computer ‘sees’ a photo.
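
  • As a small sketch (a made-up 400 by 400 array stands in for a real photo; reading an actual image file is omitted):

```python
import numpy as np

# A greyscale "photo": a 400-by-400 matrix of intensities from 0 to 255
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(400, 400), dtype=np.uint8)

print(photo.shape)   # (400, 400)
print(photo[0, 0])   # intensity of the top-left pixel (some value from 0 to 255)
```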

  • But greyscale is boring, isn’t it? What about colored photos?

  • Well, so far, we had two dimensions – width and height – while the number inside corresponded

  • to the intensity of color. What if we want more colors?

  • Well, one solution mankind has come up with is the RGB scale, where RGB stands for red,

  • green, and blue. The idea is that any color perceivable by the human eye can be decomposed

  • into some combination of red, green, and blue, where the intensity of each color is from

  • 0 to 255 – a total of 256 shades. In order to represent a colored photo in some

  • linear algebraic form, we must take the example from before and add another dimension – color.

  • So instead of a 400 by 400 matrix, we get a 3-by-400-by-400 tensor!

  • This tensor contains three 400 by 400 matrices. One for each color – red, green, and blue.

  • And that’s how deep neural networks work with photos!
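
  • Continuing the sketch from above, the colored version just adds one more axis (pixel values again made up):

```python
import numpy as np

# A colored "photo": a 3-by-400-by-400 tensor, one 400-by-400 matrix per channel
rng = np.random.default_rng(0)
photo_rgb = rng.integers(0, 256, size=(3, 400, 400), dtype=np.uint8)

red, green, blue = photo_rgb   # unpack the three color channels
print(photo_rgb.shape)         # (3, 400, 400)
print(red.shape)               # (400, 400) - one matrix per color
```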

  • Great! Finally, dimensionality reduction.

  • Since we haven’t seen eigenvalues and eigenvectors yet, there is not much to say here, except

  • for developing some intuition. Imagine we have a dataset with 3 variables.

  • Visually, our data may look like this. In order to represent each of those points, we have

  • used 3 values – one for each variable: x, y, and z. Therefore, we are dealing with an

  • m-by-3 matrix. So, the point i corresponds to the vector (x_i, y_i, z_i).

  • Note that those three variables – x, y, and z – are the three axes of this space.

  • Here’s where it becomes interesting. In some cases, we can find a plane very close

  • to the data. Something like this. This plane is two-dimensional, so it is defined by two

  • variables, say u and v. Not all points lie on this plane, but we can approximately say

  • that they do. Linear algebra provides us with fast and efficient

  • ways to transform our initial matrix from m-by-3, where the three variables are x, y,

  • and z, into a new matrix, which is m-by-2, where the two variables are u and v.

  • In this way, instead of having 3 variables, we reduce the problem to 2 variables.

  • In fact, if you have 50 variables, you can reduce them to 40, or 20, or even 10.
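
  • The mechanics rely on eigenvalues and eigenvectors, which come later; still, as a preview, here is a minimal sketch using PCA from scikit-learn – one common dimensionality-reduction technique (the lesson doesn’t name a specific one, and the data here is synthetic):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic m-by-3 data that lies close to a 2D plane inside 3D space
m = 200
u = rng.normal(size=m)
v = rng.normal(size=m)
X = np.column_stack([u + v, u - v, 2 * u + 0.05 * rng.normal(size=m)])

# Transform the m-by-3 matrix into an m-by-2 matrix
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

print(X.shape, "->", X_2d.shape)            # (200, 3) -> (200, 2)
print(pca.explained_variance_ratio_.sum())  # close to 1: little information lost
```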

  • How does that relate to the real world? Why does it make sense to do that?

  • Well, imagine a survey where there is a total of 50 questions. Three of them are the following:

  • Please rate from 1 to 5:

  • 1) I feel comfortable around people,

  • 2) I easily make friends, and

  • 3) I like going out.

  • Now, these questions may seem different, but

  • in the general case, they aren’t. They all measure your level of extroversion. So, it

  • makes sense to combine them, right? That’s where dimensionality reduction techniques

  • and linear algebra come in! Very, very often we have too many variables that are not so

  • different, so we want to reduce the complexity of the problem by reducing the number

  • of variables. Thanks for watching!
