[MUSIC PLAYING]

SPEAKER: So today we're going to have our first of a few discussions about cybersecurity, and later on we're going to talk a little bit about cybersecurity in the context of the internet and some of the challenges that it brings up there. But today we're going to focus mostly on cybersecurity issues related to your machine, your computer, without necessarily being connected to the internet.

Before we do, we need to understand a little bit more about our machine's infrastructure, its hardware. And the biggest question to ask at the outset is, when we talk about the system's memory, what do we mean by that? That term gets thrown around and it can mean a couple of different things. It might mean your system's RAM, or random access memory, which is the machine's fast working memory, and the amount of it is a rough proxy for how many things the machine can do at once. And we can also talk about hard drive space as another example of system memory. Hard drive space is basically just long-term storage: how much room do we have to literally store files on our machine?

How much memory does your computer have? Maybe you do or maybe you don't know. If you take a look at your system information or look up the computer that you bought on the internet, you might find that if we're quoting memory in terms of RAM, your device might have as little as 512 megabytes of RAM, which is about half of a gigabyte. And that's not very much; most machines have much more than that now, unless you have a low-powered Chromebook, for example, that you use for travel. Memory on the RAM scale might go as high as 32 gigabytes, which is quite a bit more than that. That's generally for really high-end computers, in particular computers that process a lot of graphics. So sometimes computers that are specifically dedicated to gaming might have that much RAM. But typically the range is somewhere between four and 16 gigabytes nowadays.
When we're talking about hard drive space, that number is quite a bit bigger. The typical hard drive nowadays might be as small as 128 gigabytes, if the drive is a solid state drive versus a hard disk drive. We won't go into too much detail about the distinction between those two things, other than to say right now that they are just two different ways to store data long term. So that might be the low end. The high end is probably somewhere around two terabytes. One terabyte is 1,000 gigabytes, give or take, so two terabytes would be about 2,000 gigabytes. So quite a bit. Maybe even as high as four terabytes. That's quite a bit of storage; it's enough to store several hundred HD, high-quality films.

But there's much more to memory than just RAM and hard disk space. There's actually a hierarchy of memory that exists within your machine, though most of these numbers aren't usually quoted in the specs of a device. There's RAM, random access memory, and then there's a series of caches, each of which gets successively smaller. So they're going to be quite a bit smaller than the four gigs, say, of RAM that your device has. But they're also faster, and the reason these caches get faster is that they are getting closer and closer to the computer's processor, which is really the only part of the device that is able to manipulate information. It's the only part that can process information. So the memory that we're feeding to that processor needs to get faster and faster, such that it can continue to swap things in and out. So we have the RAM, maybe an L3 cache, a Level 3 cache, then Level 2, Level 1, and finally CPU memory, the processor's own memory, plus some small bits of memory called registers, which serve as the final pass of information from RAM, through this hierarchy of memory, into the CPU.
But again, every file on your machine lives somewhere permanently on a disk drive. And there are, again, two different kinds of disk drives: solid state drives and hard disk drives. We can treat them as effectively identical for purposes of our discussion today. Solid state drives tend to behave a bit differently than hard disk drives; they tend to be more secure against some of the vulnerabilities that hard disk drives present, which we're going to talk about a little bit later in today's lecture. But in general, when we talk about hard disks or storage space for the rest of today's lecture, we're going to be mostly focusing on hard disk drives. They're also just much more prevalent still. Solid state drives are coming into their own and appearing in more and more devices, but hard disk drives are still far and away the more prevalent kind in devices that exist now.

They are just storage space, though; we can't do anything with data while it is stored on disk. We have to first move it to RAM and then have it go up and down that chain, from RAM through the different caches to the CPU, in order to actually manipulate the data. Once we're done manipulating it, and maybe we're turning our computer off for the evening, all of the data that was in RAM is stored back onto the hard disk so that we're able to access it at another time.

One thing to keep in mind as we begin this discussion of memory, though, is that memory is really just an array. And we've talked about arrays already: each cell of that array is basically one byte wide, and recall that one byte is eight bits. We may have anywhere between 512 megabytes of memory, so about 512 million of those little one-byte-sized cells, and maybe as much as 4, 8, 16, or more gigabytes. So we have quite a few of those items in our array. But it really is just an array, which means we can jump to different addresses.
It has the same properties as any other random access array that we've already discussed. Different types of data take up different amounts of memory on our systems. So think about a very low-level programming language like C; this is just an example, and different programming languages may store different types of data using different amounts of space. But if we look at just the most basic types of data, the smallest individual pieces into which we can break it, we may be able to store an integer, for example, in four bytes. That means we have exactly 32 bits' worth of space to store an integer. Characters take up one byte, so we have only eight bits' worth of memory to store a single character: capital or lowercase letters, digits, punctuation marks, and so on. Not a huge variety of options there. Floats, you may recall, are real numbers, numbers that have decimal points in them. Doubles are as well; they're double-precision floating point values. Floats take up four bytes and doubles take up eight.

So basically the idea here is that different types of data will take up different amounts of space, and we can eventually construct these things into pixels, and images, and films, each of which will also take up different amounts of space in memory if we are manipulating or working with that data.

So again, let's think of memory as a big array of individual byte-sized cells. Because it is an array, that means we have random access. We can say, I want to go to memory address x and see what is there; I want to go to memory address y and change what is there. We have the ability to do that. We don't have to iterate through step by step by step in order to make changes. If we did, the processor would be quite a bit slower, having to perform what we might term a linear search as it iterates through memory to find the one byte we're looking for. It's very helpful to be able to jump to a particular byte.
And that means that every location in memory must have an address. We must have a way to refer to that individual byte in order to randomly access it. We can't just look at this grid of cells and say, I want to go to this one, and sort of gesture at a particular spot. We need to say, I want to go to exactly this memory address. OK? So the fact that memory cells have addresses is what comes into play when you think about this idea of a 32-bit system or a 64-bit system, and this may be a term that you've heard before. It refers to the length of the memory addresses the system can process. So for example, a 32-bit computer, a 32-bit system, can process memory addresses up to 32 bits in length, which means it understands memory address zero up through a little over four billion. But it doesn't understand memory past that.

Now interestingly, this doesn't mean that a 32-bit system is limited to four gigabytes of RAM. There are some software tricks that we can pull, using something called virtual memory, which we're not going to get into in any more depth today than to refer to it as virtual memory, that allow you to use more than four gigabytes of RAM on a 32-bit system by, in effect, pretending that things live somewhere where they don't. But when you talk about a 64-bit system, that means we have many more memory cells that we can refer to without running into that sort of artificial limit on how high we can count. Now granted, there are no memory banks out there that have all of the memory addresses from zero to 64 bits' worth of memory. That's somewhere in the quintillions. It's a very, very large number and we don't yet have the storage capacity to store that much data on our machines.
But theoretically, it is possible that with a 64-bit system we could have very, very large amounts of RAM. And again, the more RAM we have, generally the more quickly our computer is going to operate, because there's more space for it to store information. It doesn't have to keep sending stuff back to the hard drive when the RAM is full because there's so much information being processed at once; more of it is available in that quicker, more accessible kind of memory.

So recall that each bit can only take on one of two states: zero or one, off or on. Or you can think about it in terms of electricity, which is how RAM actually works, as being unpowered or powered. That again means that we have two to the 32nd power possible memory addresses on a 32-bit system, so about four billion memory addresses.

Now it is sometimes the case that programmers, and subsequently those who may need to read their code, need a way to refer to specific memory addresses. But a raw memory address like this one is just a long string of zeros and ones. That is exactly how the machine refers to an address in memory, but it's rather cumbersome. No programmer wants to talk to another programmer, and no programmer wants to talk to an advisor, by saying "the code that lives at 00101" and so on. That's just not how we would talk, and it would take forever just to say the name of the memory address before you even get to the point of what is in that memory. And so rather than using binary notation to refer to a memory address, computer scientists will oftentimes use something called hexadecimal notation. Hexadecimal means 16: hexa, six, plus decimal, ten. And so this is the base-16 number system. It's a different number system than the decimal system, base 10, that we have used since childhood to count and understand place values of numbers and so on.
What's convenient about hexadecimal being base 16, versus binary being base two, is that four binary digits, or four bits, can be represented using a single hexadecimal digit, often called a hex digit. So for every group of four binary digits that we have, we can represent it more succinctly using just one hexadecimal digit. And because there are four bits, that means we have two to the fourth, or 16, different combinations. So we can account for every single possible on-off combination of the four bits in that cluster using a single hex digit. So we might instead refer to this memory address looking like this. And there are some letter characters in there, and that's because hexadecimal needs 16 distinct digits to represent all 16 possible values at any given place value. In decimal we're confined to the ten digits zero through nine; to represent the number 10, we need two of them, a one in the tens place and a zero in the ones place. Hexadecimal instead borrows the letters A through F to stand for the values 10 through 15 with a single digit. So here's an example of something that a programmer might see.