[MUSIC PLAYING] DAVID J. MALAN: All right. This is CS50, Yale University's introduction to the intellectual enterprises of computer science and the arts of programming. All of that and so much more will make great sense before long. And I thought I'd mention that Benedict and I actually go way back. He and I were classmates at Harvard some 20-plus years ago. And we went through our archives, even though this was well before digital cameras, and found this one. I used to go over to Benedict's room. And I was friendly with some of his roommates as well. And if you can see here, this is a stack of some very cheap pizza in Cambridge, Massachusetts, from Pizza Ring. It was so amazing that they would give you two pies at once in the same box for like $8. And if we enhance here, this is our own Professor Brown. So he and I go way back. And it's only fortuitous that we've now come full circle and are together now, teaching this course here at Yale. So if you are leaning toward computer science, you might already have some comfort with the idea of being in a class like this. But odds are, you don't. And in fact, if you're anything like me back in the day, I had no interest in or really appreciation of what computer science was back in my first year of college. And in fact, when I got to college, I really gravitated toward things I was more familiar with. At the time, I'd come off the heels of high school. And I really liked a constitutional law class in high school. And I liked history and that sort of domain. And so I just kind of naturally gravitated when I got to Cambridge toward government, which was the closest concentration or major there in Cambridge. And I was still, nonetheless, kind of a geek as a kid. I certainly played with computers or computer games. But I never really took a genuine interest in it.
And even my friends, ironically in retrospect, who did take computer science in high school, I thought they were actually the geeks and didn't really see the appeal of heads down at a computer monitor, typing away, door closed, fluorescent lights. It just didn't really make any sense to me or resonate. But finally, sophomore year, I got up the nerve to shop a course that was and still is now called CS50. It was being offered only at Harvard at that time. And even then, I only got up the nerve to go into that lecture hall because the professor at the time let me sign up pass-fail, since I really rather feared failure of some sort because it was just so new and unfamiliar to me. But long story short, I ended up falling in love with it once I actually came to realize what it was. And no joke, on Friday evenings, when back then the homework assignments or problem sets would be released, I would legitimately look forward to going back to my dorm room-- 7:00 PM, the p set would be released-- and diving into that week's programming challenge. But why? So at the end of the day, computer science really isn't about programming. It's probably not about what you perceived your own friends as doing back in high school. It really is about problem solving more generally. So if you're feeling at all uncomfortable with the idea of shopping or taking a class like this, realize that most of the people around you feel that same way. 2/3 of CS50 students each year have no prior CS experience. So even though on occasion it might sound like it, perhaps based on answers that others seem to be giving or facial expressions they might be having, it's really not actually the case. 2/3 of you are feeling as uncomfortable as, or even less comfortable than, I was back in the day. But the course itself, as you'll see in the syllabus, really focuses on students individually.
There is no course-wide curve per se, but rather what's going to matter ultimately is not so much where you end up relative to your classmates at the end of this course, but where you end up relative to yourself as of today, and focusing on that delta and that sense of progression yourselves. So what then is computer science? I daresay we can simplify it as this. So problem solving. What does it mean to solve a problem? And that domain itself doesn't have to be engineering, doesn't have to be science per se. Really, I daresay we can generalize problem solving to be a picture like this. There's some kind of input, the problem that you want to solve. And there's an output, hopefully the solution to that problem. And in between is this proverbial black box, this sort of secret sauce that somehow takes that input and produces that output. And it's not yet apparent to us how it does that. But that's the goal, ultimately, of computer science and solving problems more generally. But to get to that point, I claim that we need to talk a little bit about what computer scientists would call representation, like how you actually represent those inputs and those outputs. And odds are, even if you really aren't a computer person yourself, you probably know that computers only speak a certain language of sorts, an alphabet called binary. So binary. But probably fewer of you have an appreciation of what that actually means and how you get from 0s and 1s to Google documents and Facebook and Instagram and applications and all of the complexities that we use and carry around with us every day. But let's start with the simplest form of representation of information. Like if I were to start taking attendance in this room, I might, old school-style, just take a piece of chalk or a pencil and just say, OK, 1, 2, 3, 4. And then I might get a little efficient and say 5, just to make obvious that it's a 5. But these are just hash marks on a screen.
And of course, I don't have to do it in chalk. I can just do 1, 2, 3, 4, 5 and then count even higher if I use other digits as well. And digits, it's kind of a nice coincidence there. And we'll come back to that in just a moment. This system of using hash marks on the board or using your own human fingers is what's called unary, uno implying one, where the hash mark is either there or it's not. And so, 1, 2, 3, 4, 5, it's like using sticks or fingers or hash marks. That's unary notation. There's only one letter in your alphabet, so to speak. And it happens to be a 1 or a hash mark or a finger, whatever the unit of measure happens to be. So you can of course count even higher in that unary notation by using more hash marks or more fingers. But from there we get to the more familiar system that we all use, which is called the decimal system, dec meaning 10. And the only reason for that is because you've got 10 letters in your alphabet so to speak, 0 through 9. 10 total digits. And these are the kinds of numbers that you and I use and sort of take for granted every day. But a bunch of you already know and realize that computers somehow only speak 0s and 1s. And yet, my god, a computer, my phone, any number of electronic devices these days, are so much more powerful, it would seem, than us humans. So how is it that using just 0s and 1s, they can achieve that kind of complexity? Well, turns out it's pretty much the same way we humans think. It's just we haven't thought this way explicitly for quite some time. For instance, on the screen here is what? Shout out the obvious. 123. All right. But why is that? 123. Really, well, all that's on the screen is three symbols. And frankly, if we rewound quite a few years in your development, it would look like just three cryptic symbols. But you and I ascribe meaning to these symbols now. But why? And from where does this meaning come?
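[As an aside, not shown in the lecture itself: the idea that unary has just one symbol in its alphabet can be sketched in a few lines of Python. The function name `unary` and the choice of a hash mark as the symbol are this editor's own.]

```python
# Hypothetical sketch (not from the lecture): in unary notation there is only
# one symbol, so representing a count n just means repeating that symbol n times.

def unary(n):
    """Represent n in unary, using a hash mark as the single symbol."""
    return "#" * n

# Taking attendance, old-school style:
for n in range(1, 6):
    print(n, "in decimal is", unary(n), "in unary")
```

Counting higher in unary means nothing more than adding another mark, which is exactly why the notation doesn't scale the way positional systems do.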
Well, if you roll back far enough in time, odds are you'll recall that when thinking about a pattern like 123, we sort of instantly, intuitively ascribe meaning these days. So this is the so-called 1s column or 1s place, this is the 10s column or 10s place, and this is the 100s column or 100s place. And so all of us rather instantaneously these days do the math. 100 times 1 plus 10 times 2 plus 1 times 3, which is, of course, 100 plus 20 plus 3, which gets us back, to be fair, to the same symbols. But now they have meaning because we have imposed meaning based on the positions of those values and which symbols they are. So all of us do this every day any time we stare at or use a number. Well, it turns out that computers are actually doing fundamentally the same thing. They just tend to do it using fewer letters in their alphabet. They don't have 2s and 3s, let alone 8s and 9s. They only have 0s and 1s. But it still works, because instead of using these values, for instance, I can go ahead and use, not 0 through 9, but just 0 and 1 as follows. Let me give myself sort of three placeholders again, three columns if you will. But rather than call this 1, 10, 100, which if you think about it are powers of 10-- 10 to the 0 is 1; 10 to the 1 is 10; 10 to the 2 is 100-- we'll use powers of 2. So 2 to the 0 is going to give us 1 again. 2 to the 1 is going to give us 2 this time. And 2 to the 2, or 2 squared, is going to give us 4. And if we kept going, it'd be 8, 16, 32, 64, instead of 1,000, 10,000, 100,000, 1,000,000, and so forth. So same idea. Just a different base system, so to speak. Indeed, we're now using binary, because we have two letters in our alphabet, 0 and 1, hence the bi prefix there. So in binary, suppose that you use, for instance, these digits here, these symbols, 000. What number is the computer storing if it's somehow thinking of 000, but in binary, not decimal? Yeah, it's just 0. The decimal number you and I know as 0. Why?
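[Again as an aside, not code shown in the lecture: the place-value arithmetic just described can be sketched in Python. Each digit is multiplied by its column's power of the base, and the products are summed; the same procedure works whether the base is 10 or 2. The function name `place_value` is this editor's own.]

```python
# Illustrative sketch of place-value arithmetic: multiply each digit by its
# column's power of the base, then sum the products.

def place_value(digits, base):
    """Interpret a string of digits in the given base."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        # In decimal, base**2 is the 100s place; in binary, the 4s place.
        total += int(digit) * base ** position
    return total

print(place_value("123", 10))  # 100*1 + 10*2 + 1*3 = 123
print(place_value("000", 2))   # 4*0 + 2*0 + 1*0 = 0
print(place_value("111", 2))   # 4*1 + 2*1 + 1*1 = 7
```

Python's built-in `int("111", 2)` performs the same conversion, which is one easy way to check the sketch against itself.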
Well, the super-quick math is, well, this is just 4 times 0 is 0, plus 2 times 0 is 0, plus 1 times 0 is 0. So that gets us back, of course, to 0. But in the computer, if it were instead storing a pattern of symbols that isn't 000, but maybe 001, you can probably imagine that now-- just do the quick math-- this is now the decimal number we know as 1, because it's 4 times 0, 2 times 0, but 1 times 1. Skipping ahead, you might be inclined now to represent the next number here as what? It's obviously not 002, because we don't have access to a digit 2. We only have 0s and 1s. So you might be inclined to do something like this, 11, right? Because if I'm counting to 2 on my fingers, it's 1, 2. So two hash marks on the board would seem to give me 2. But not in binary. This is unary. What do I get in binary with the pattern 011? AUDIENCE: 3. DAVID J. MALAN: Yeah, so it's actually 3. So the correct way of counting up from 0 on up in binary would be to start as we did, 000. Then we go up to 001. Then we go up to 010. Now we go up to 011. And anyone, how do you count to 4? What's the pattern of symbols then? Yeah, 100. Now skipping ahead, if we wanted to represent not 4, but rather this value, how high up can we go with just three columns? Seven, right? So it's 4 plus 2 plus 1 gives you 7. So if you wanted to represent 8, what happens next? Yeah, so you sort of carry the 1. So just like in our human world, when you go from 9 to 10, you typically carry the 1 to a new column over to the left. Exact same idea. So the number 8 in binary, so to speak, would now be this. But you need a fourth column, just like you would in our human decimal world. So what then is the mapping between these low-level ideas to actual physical computers? Well, inside of computers today, whether it's your Mac or PC or your iPhone or your Android phone, are millions, millions of things called transistors, which are tiny little switches that just turn