>> PIKE: I'm going to talk about the Go Programming Language, which we hope is going open source on November 10th, which is in just a couple of weeks. This talk is a little bit before then, so there's some future tense for things like links and so on, but everything should be as described by the time we actually go public with this stuff. You can see we have a website we're setting up, which will be the publicly viewable domain for that. So, what is it? Well, I can talk about all these pieces, but it is a new, experimental, concurrent, garbage-collected systems language. Experimental because we don't believe this is the answer to everything, but we're playing and we think it's gotten to the point where it's time we should probably tell the world about it. It's concurrent because concurrency is really important in the modern world. It's garbage-collected because we have an opinion on that stuff, which I'll talk about quite a bit later. And it's a systems language in the sense that we intend it to be used to write things like web servers and other system pieces like that. You have more control over some things than you have in other languages like, say, Java or Python, and it works like a systems language when you use it. On the other hand, it's very good at things like front ends or just general programming. So, although it's designed as a systems language, it has much broader use than just that. I can't give a talk without showing you "Hello, world." There's "Hello, world" with the "world" being Japanese, which I won't attempt to pronounce. But you can see that it looks vaguely C-like, although there are some weirdnesses about it. I won't talk about what those weirdnesses are. Who's involved? Well, I'm giving the talk, but it's by no means my project. There's a bunch of people involved and I want to make sure everyone's credited.
Some of the names are really important here. Robert Griesemer, Ken Thompson, and I started talking about doing something to solve the problems we were seeing with other environments, some time around two years ago in late 2007. We played mostly at the whiteboard for a few months to figure some things out, but by the middle of 2008 we had a toy compiler, things were starting to look interesting, and we started working on the stuff for real. And around that time, Ian Taylor joined and Russ Cox joined, and the five of us did most of the work, but we've had a lot of help from a lot of people that this slide is too small to name. So I want to make it clear this is a collaborative effort, and the collaboration is growing. Why? Well, the reason is things were just taking too long: too long to compile, too long to build, too long to think about. We want to program fast. We want programming to be fun. And to make programming fun, you have to find ways to speed things up and make things work better; not just the code itself but the process of writing the code needs to be efficient. So what happened? Well, you know, programming used to be fun. What went wrong? Well, it's interesting that there's really been no new major systems language in at least 10 years, could be longer. But in that decade, a lot has changed. The libraries have gotten bigger. Professional programming has, to a large extent, become a layering process, and so we layer more and more stuff on and we also broaden out the base. The libraries got really big. There are lots of them. They have dependencies on one another, and so the forest has become a thicket, and sometimes it's kind of hard to cut your way through it. Networking has pretty much taken over the world; that's the way to think about computing. The old Sun saying that the network is the computer is pretty much true today, but the languages don't really reflect that.
There's a lot of client/server kind of stuff, especially in the systems space. Languages don't really help you there too directly. Of course, we're talking about massive compute clusters now and, again, the languages were designed for single processors. They don't really work very well in that model. And on a related note, multi-core CPUs are taking over, too. It's getting harder to buy a high-end computer with only one processor. It tends to be multi-core, sometimes multi-chip multi-core. And again, the languages that we're using weren't really designed with that kind of stuff in mind. And it's also gotten--I think this is the thing that finally pushed us to try to do something--it's just gotten too slow to build software. The tools are slow. They have a hard problem to solve, but still, they tend to be slow and they're getting slower. The computers have sped up enormously, but the software development process, if anything, has gotten slower over the last ten years. The dependencies are uncontrolled and the languages don't help you control them, so you tend to spend a lot of time building things you don't even need, but you don't know it, can't prove it. The machines have also stopped getting faster. There are more processors, but the actual clock speed has hardly changed in the last few years, and Moore's law, to some extent, is petering out, and yet the software gets bigger and bigger and bigger. And so, somehow, it feels like if we don't do something, then software construction is just going to become unbearably slow, and we need to think about the process of making it fast again. Robert Griesemer observes that a lot of the interesting work in languages in the last few years has been because a lot of the people using the standard systems languages like C, C++, even Java are finding the type systems very clumsy or hard to work with.
And as they go to the dynamic languages--the Pythons, the Rubys, even the JavaScripts of the world--they have a lot more fun, because the type systems don't get in the way so much, at least as long as the program keeps running. And so the challenge here is to try to deal with this. You want typing. You want good typing because it makes programs robust, but it can be done badly. And we believe that, if not done badly, it should at least be done a different way. Sometimes good ideas in principle make bad practice. A good example of that is the "const" keyword in C and C++, which was very well intentioned and seemed to address a real need in making programs safer, but it tends to make programming awkward in a lot of ways, so it isn't worth the benefit you get back from using it. Also, there is this notion of everything being a type hierarchy in object-oriented programming. And yet the types in large programs don't really naturally fall into hierarchies. We find ways to make them fit, but it's a bit of a struggle. In programming, we spend so much time refactoring code, juggling type trees around, and that has very little to do with the implementation and a lot to do with the way the language is forcing you to think. We'd like to step back from that model. In short, with the way type systems work today, you can be productive or you can be safe, but you can't really be both. And that seems like a shame; we should be able to fix that. So, why a new language? Why don't we just, you know, fix what we've got? Well, some of the problems are just fundamental to the way the languages think: the way the type systems work, the way they're compiled, the way dependencies are managed. The languages themselves have these problems endemic in them, and to fix them, you have to rethink the language itself.
For instance, the libraries can't help you, because adding anything to try to fix some of these problems is going in the wrong direction. There's too much already. We need to trim down, cut the fat, and make things cleaner. Fixing it by adding software isn't going to do it. You have to step aside and do something separate. And so Robert, Ken, and I decided a couple of years ago that we needed to start over: just take a step away, think about the way programs are written and constructed, and try to make a language that makes that process a lot easier. So, it's a new language. What are the goals of this language? The short version of the story is we want the efficiency of a statically compiled language--that means a truly compiled language--but the ease of programming of a dynamic language. And I think we have come pretty close to achieving that. You can decide for yourself as we go on. Safety is critical. It's critical that the language be type-safe and memory-safe. It's important that a program not be able to derive a bad address and just use it; a program that compiles is type-safe and memory-safe. That's a critical part of making robust software; that's just fundamental. We want good support for concurrency. I'll talk a little bit about that. Communication, I'll talk about that, too. Those are tools that can help you build software that runs on the network or on multi-core CPUs. We'd also like it to be garbage-collected, because a lot of the bookkeeping that goes on inside modern programming, especially in C and C++, has to do with memory management. And that can be totally automated. And we believe--"believe" is the word--that you can do that efficiently and, essentially, latency-free; garbage collection technology has advanced to the point where a garbage-collected systems language actually makes a lot of sense, and so we'd like to do that. And also, we'd like the compiler to just run fast.
Everyone knows this slide and there's some truth to it. You know, you spend too much time waiting for compilers. So, let me give you a compilation demo here. This is a directory of Go code. If you look in here, there are 1,300 lines, so I make clean, and then I'll just do a make. There, I just built that library. That's compiled, efficient code. Clean again--in case you wonder if I can type--okay, time make. Okay. So that's, pretty accurately, 200 milliseconds of real time to build the library. That feels about right. For the last year or so, those of us working with Go have been getting used to 100-millisecond builds on one machine. That's running on my laptop, by the way. That's not some distributed thing. That's one processor, one build. And just to prove it a little better, let me make clean. This is the complete Go source tree. All of the libraries written in Go--there's something of the order of, I think, 120,000 or 130,000 lines. Included in here are regular expressions, flags, concurrency primitives, the runtime, the garbage collector, pieces--oh, it's finished. Okay. So that was one processor. I could've parallelized it a bit better. But that's eight seconds. That feels long to me now. That's a long time, because this is all the software that's there and, you know, you can get a lot done with that much time. Just to remind you, then we go back here, that's what you want to see. You want to have carriage-return time for your compilations, and it can be done. So let me get that set for later and we'll go back to this. Okay. So, we think we've got fast compilation, and I can't tell you what a difference that makes to your life. Sometimes hours and hours go by before people finally show up at my office and say, "Hey, you didn't answer my mail," and I have to go look at the mail, because I've actually been getting work done. Okay. So to do this language, we had to have some principles on which it's built.
And there are actually some fundamental principles in here that, in some ways, I think differ from some of the other languages out there, so I'd like to go through them. One of them is that we try as much as possible to keep the concepts orthogonal. For instance, with interfaces, I'll talk about how the concepts of implementation versus interface are completely orthogonal. They're two separate things; they don't interact and you don't define them together, you define them in separate objects, and that makes a lot of stuff cleaner. And in practice, a few orthogonal features that cover the space work a lot better than a lot of overlapping ones. That's a principle that's a little vague, but I think you know what I'm getting at. The grammar is very regular and very simple. It's actually conflict-free and can be parsed without a symbol table; there are no conflicts in it. That makes it easier to write tools like IDEs, editor plug-ins, debuggers, and things like that, because parsing and understanding the program is actually pretty straightforward. We try to reduce the typing. What I mean by that is that we want the language to work things out. You tend to type too much when you're programming in modern object-oriented languages. You tend to write statements like that. That's literal: if you change "Foo" for a much longer word, that's an example I found in some real code. And, you know, "Foo" should just go away. You shouldn't have to type all those "Foos"; that's crazy. So you want to get rid of the bookkeeping: you want the type system to automatically work out the "Foo," but you still want to be safe. You want it to be type-safe, but well handled. And also, not only do you want to reduce typing in that sense, you want to reduce typing in the other sense. You want the types to sort of melt away into the background.
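The kind of statement being described can be sketched in Go terms like this. Foo and NewFoo are hypothetical names purely for illustration, not anything from the slides:

```go
package main

// Foo and NewFoo are made-up names, just to illustrate the point.
type Foo struct{ n int }

func NewFoo() *Foo { return &Foo{42} }

func demo() *Foo {
	// A Java-like language makes you spell the type three times:
	//     Foo foo = new Foo();
	// Go's := declares foo and works out its type, *Foo, from the right side.
	foo := NewFoo()
	return foo
}
```

The type is still statically known and checked; it just isn't written out by hand.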
You want it to be clear. You don't want a type hierarchy--we don't want a type hierarchy--and I'll show you some examples of why that actually helps. We just think that programming by constructing type hierarchies is not the way to go. We have an alternate world. However, Go is a profoundly object-oriented language. Arguably, more object-oriented than, say, Java or C++, and I'll show you why. So, the big picture. There are some fundamentals. We have a clean, concise syntax. We have a very lightweight type system. We got rid of implicit conversions because, although convenient, they tend to cause a lot of confusion--particularly with the old C rules, where the "usual conversions" are actually not all that usual--so we wanted to get this right. So everything is very clear and very clean about what's going on on the page. Of course, when you have no implicit conversions, you tend to do a lot of casting or conversions by hand, and that's messy, so we address that by changing constants pretty profoundly. I'll talk about that in a little bit. You won't see "0x80ULL" in a Go program. The ULL is completely pointless--you can't even say it--numbers are just numbers. And there's a strict separation between interface and implementation. That's part of keeping the concepts orthogonal and the type system clean. So there's a runtime, there's garbage collection, there's good support for strings, maps, associative arrays, that kind of thing, and also the communication channels, which we'll talk about, and then there's really strong support for concurrency. It's actually some of the best concurrency stuff, I think, that's out there. It's really good. And then there's a package model, which I won't talk very much about because it takes a lot of explaining. But the notion here is that inside the packages, everything is explicit. What you depend on is explicit.
The compiler rejects your program if you put in a dependency you don't need. It's all very clean, and so it makes it easy for tools to understand dependencies, and it makes the builds guaranteed minimal. So that helps enormously: with the dependencies controlled, the linking can go faster and the compilation can go faster. And there's actually a notion in here that I think is important. The compiler pulls the transitive dependency information up the tree as it's compiling, simply to reduce the number of files it has to look at, which in turn speeds up the compilation process. So say you have a program, A.go, that depends on B.go, which depends on C.go, which is a pretty standard kind of thing. In the usual way, you have to compile them in reverse order, because you've got to build the depended-on stuff before the depending stuff. But the point is, when you go to compile A.go, it doesn't need to compile C. It doesn't need to look at C.go at all. It doesn't even need to look at the object code for C, because everything that B needs from C has already been pulled up into B's compiled output. And that doesn't sound like a big deal, but when you're at scale, it's literally exponentially faster at building software when you get into this huge dependency situation. For concurrency, we sort of have a model. I'll touch on it today; there's a lot more stuff online about it. But we have a suggestion for how to write systems and servers as concurrent, garbage-collected processes, which we call goroutines, with good support from the language and the runtime. We chose a new name because they're slightly different from what you're used to: words like thread or process mean something not quite the same, so we have our own word, goroutine. The language takes care of managing the goroutines, managing the memory they use, and the communication between them. Stacks grow automatically. That's taken care of by the language.
You don't have to declare how big the stack is going to be. If it needs a bigger stack, it gets a bigger stack. If it needs a smaller stack, it uses less memory. And the goroutines are then multiplexed onto the threads. That's all done automatically for you and [INDISTINCT] transparently. And one of the reasons for needing garbage collection is that concurrent programming is very hard without garbage collection, because as you hand items around between these concurrent things, who owns the memory? Whose job is it to free it? There's a lot of bookkeeping involved if the language or the system doesn't take care of that for you. So you need garbage collection to make concurrency work well. On the other hand, to get the garbage collection to work well, you need a language that makes garbage collection a feasible thing to do, which is another argument for doing a new language. So, I don't have time to explain all of Go to you. It's a significant language--I'd say probably bigger than C but not as big as C++--and there isn't enough time to go through all of it, but I want to give you the flavor. I'm going to go through about a dozen slides that are, I hope, representative of the kinds of things that go on, but I'm going to leave out quite a bit. Lexically, it looks a little bit like C--actually a little more like C than this slide would let you believe. There are usually keywords involved introducing things; you see const and var. Const means it's a constant--it's not the const keyword from C, it has a completely different meaning. It just means, in the first line there, N is a constant 1024. And it's just a number; it's not an int, it's not a float, it's not an unsigned int, it's just a number. And there's a const string, and you can see we support Unicode just fine. In fact, the language requires that the input be UTF-8 encoded.
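A minimal sketch of the kind of declarations being described; the identifier names here are my own, not necessarily those on the slide:

```go
package main

const N = 1024          // an untyped constant: just a number, not an int or float
const greeting = "日本語" // Unicode in source text is fine; input must be UTF-8

var x, y float64 // both x and y have type float64
```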
And then you can declare variables with the "var" keyword in the usual way. You'll notice--I'm quite used to it now, but for C programmers this looks backwards, because in C you put the float before the x and the y; here they're reversed, and there's a long explanation on the web about why we did this. But the fundamental reason is it actually works out cleaner because, for instance, in that declaration, x and y both have type float. Whereas if you flip it around and put the type first, things get a little funnier, and you can see what the C version of that would look like: you'd have to reproduce the star. So this is a simplification you get. There are characters, Unicode support, and stuff like that. Then in this example--not really an example, just some lines of code--there's a struct type, T, that's defined. It's got a couple of fields in it; they're both integers. Then we declare a variable, t0, which is a new T. It should be pretty familiar what that means, but there's too much typing going on there. So the next line there, t1 := new(T): that's an old notation from some languages that I worked with before. What that colon-equals means is declare and initialize. So those two lines are equivalent. "var t0 *T = new(T)" has too many Ts in it. We can get rid of one of them by just having the type of t1 be derived from the expression that's used to declare and initialize it. And you'll see that used a lot in Go code. The var doesn't show up nearly as much as the colon-equals notation. The control structures are actually quite interesting--at first blush, pretty much the same as C. They're actually quite a bit richer, but the basics are the same. The main difference is syntactic: the braces are required on the blocks, and there are no parentheses on, for instance, the conditional expression inside an "if" statement. That's just to clean up the grammar. So here's a real program.
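The two equivalent declarations just described look like this (note that := is only legal inside a function, so the short form is shown in one):

```go
package main

type T struct{ a, b int }

var t0 *T = new(T) // the explicit form: the type T appears twice

func shortForm() *T {
	t1 := new(T) // declare-and-initialize: t1's type, *T, is derived from new(T)
	return t1
}
```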
It’s not "hello, world," it's echo this time, but it’s got all the parts. The first time the code says, "package main," that has some meaning in the language about me get muted. We're thinking about simplifying a few things but at least for the moment it's "package main." then you can see there's a function called "main" is declared down below. And the program begins execution by starting "main.main" initialization inside. There’s a declaration of the flag from the flag’s package, so that makes minus N, turn off new lines from echo. It’s not strictly a UNIX echo because it’s got real flag parsing, but it’s essentially the same. Those import statements say, "We’re going to use package OS," which does some stuff for us and we’re going to use the flag package, obviously. Now, the way the initialization stuff works is that in-flag variable will be initialized before "main" starts executing. All the global variables get initialized by--and they can be initialized by arbitrary complex stuff before "main" starts. So then in "main," we do the flag parts that get the command line pulled apart. And then we just loop over the arguments which are now the [INDISTINCT] the flag package, and it’s the obvious loop accumulating. You can see += on the string there to append to the string, and stuff like that. And then at the bottom, the flags are actually pointers in Go, the flag package makes them pointers. So the *nFlag, which is a Boolean, is false. Then, you add a new line to the end of it. And then finally, you do a method called "os.Stdout" to get the open-up. So, although, pretty much every line in here is different from what you’ve seen before, it should be perfect and clear what’s going on. I don’t think you’ll find that a mysterious program. So let’s talk about constants. I said they are a little different. Constants in Go are what we like to think of as sort of type-less. They have no type. They have no size. They’re just numbers or strings. 
You can have constants that are strings and Booleans, but numbers are the interesting case. So this little snippet here is just some constants being used. We start by defining a type TZ, for time zone, which is an integer. And then we declare some constants, UTC and EST. We declare them to have type TZ and assign them those initial values. It's clear what's going on there. The parentheses around the const are distributing the const keyword across the elements inside the declaration, so that's the same as saying two const declarations with the two variables in them. You'll see on the next page that it actually matters that you can group them like this, because there's a thing called iota, which is an enumerating constant. For each const declaration, it counts which element you're on: at every semicolon, it increments. It starts at zero; on the next line it's one, the next line it's two. And so here you can see we're declaring bit0 and mask0 to be uint32s, and the first one is set to 1 shifted up by iota, and 1 shifted up by iota, minus 1. Iota starts at zero in every const, so that's 1 shifted up by zero, and 1 shifted up by zero minus 1. So that's a bit and a mask. And then the next line does it again, but now iota has gone up to one, so we get the next pair in the sequence. You can see how to use that to build enumerated constants, but this is a slightly richer example. And again, to reduce typing, if you want, you can just leave off the initializer, like the third line there with just the semicolon, and what that says is, "Just keep doing what I did." It takes the declaration initializer from the previous line and pretends that you typed it here. And so, if you want to declare enumerated constants, you can say const, open parenthesis, enum0 equals iota, semicolon, and then just enum1, semicolon, enum2, semicolon, all the way down the page, and have them declare sequential numbers for their constants.
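Here is roughly what the bit-and-mask declaration described above looks like; the third pair is my addition to show the "just keep doing what I did" repetition:

```go
package main

const (
	bit0, mask0 uint32 = 1 << iota, 1<<iota - 1 // iota == 0: bit0 == 1, mask0 == 0
	bit1, mask1                                 // iota == 1: the expressions repeat, giving 2 and 1
	bit2, mask2                                 // iota == 2: 4 and 3
)
```

The empty lines reuse the previous expression list with iota incremented, so the whole enumerated sequence comes from a single initializer.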
One very important property of these numeric constants in Go is that they are arbitrary precision. So here's a const, the logarithm of 2; you can see it's got like 40 or 50 decimal places in it, which is more than even a float64 can represent. But that's a valid declaration, because they're just numbers. And then on the next line, we can actually compute log2, log base 2 of e, as one over the log of 2, and that's a precise division because it's happening not in floating-point space but in arbitrary-precision number space. So that's a very accurate reciprocal. And that kind of stuff is nice too, because you don't see casts and conversions and worries about overflows and underflows. Constants just take care of the job for you. And then we have values, obviously. There's an array type, but arrays are kind of peculiar and you don't use them much. Instead, you use a reference type called a slice. So weekend, there, is declared to be a slice of strings with "Saturday" and "Sunday" declared inside it. And then we take the time zones and declare timeZones, a map from string to TZ, the time zone type. And you can see how that works. It's pretty straightforward. There's a function declaration to do an "add"; you can see the function syntax there. There are actually a lot of interesting things in how functions deal with return values, which I'm not going to talk about today, but you should check that out. And then there's a declaration of a function type, Op, which takes two integers and returns an int, so "add" is of type Op. And then here we declare an RPC, which is a structure. I showed that before, but this is a [INDISTINCT] example. It's got a and b, integers. It's got an operator, and it's got a result, which is a pointer to an integer. And all that should be fairly straightforward, even if the declarations look backwards to a C programmer. And then on the last line there, we declare, using a structure literal, an instance of an RPC and, again, it's very simple.
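The value declarations just described sketch out like this; the exact field names and the map values are my reconstruction from the description:

```go
package main

type TZ int // a time zone, as an integer number of seconds

var weekend = []string{"Saturday", "Sunday"} // a slice of strings

var timeZones = map[string]TZ{"UTC": 0 * 60 * 60, "EST": -5 * 60 * 60}

func add(a, b int) int { return a + b }

type Op func(int, int) int // a function type; add has type Op

type RPC struct {
	a, b   int  // operands
	op     Op   // the operation to apply
	result *int // where to store the answer
}

var rpc = RPC{1, 2, add, new(int)} // a structure literal
```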
You just say the type name, open brace, list the elements of the structure, closing brace, and then you've got a value; you can assign that, or in this case initialize a new variable, to that value. So, I said it's an object-oriented language. We haven't seen much of that yet--actually there was one method call, in the echo example, where we wrote to os.Stdout. But let's talk about methods, because they're kind of different. So here's a type called Point that has X and Y. Now, those are uppercase, and the reason is that, in Go, the complete decision about whether an identifier is visible outside the package or not is whether or not it starts with an uppercase Unicode letter. That's it. So if a variable or a type or whatever declared at the top level is uppercase, it's visible to the clients of the package. If it's lowercase, it's not. Inside a struct, if a method or field is uppercase, it's visible to the clients of the package; otherwise, it's not. That's it. It's really simple. So, there's that type. Now, we're going to define a couple of methods on it. In the declaration syntax for a method, you'll notice it's not declared inside the struct, and the reason, as we'll see in a minute, is that methods and structs are independent ideas. You can have methods on things other than structs, so it wouldn't make sense to put them inside the struct declaration. What you do is you say, "Here's a func," and then before the name of the method, you write, in parentheses, kind of like its own little one-element parameter list, a receiver declaration; the receiver is shorthand for the thing that's receiving the method when it's called. So this says that Scale is a method on type *Point, and it takes a float argument, and the implementation is inside, and it just multiplies the point up by the scale factor. You'll notice that the receiver is explicit. There's no automatic "this" or anything like that.
You have to actually declare the receiver and use it if you need to access the fields of the struct inside the method. And then there's Abs; you can imagine what that is. There's a square root, which is a function inside the math package, computing the hypotenuse--the length of the vector. And then here we declare an x. This time you'll notice there's an ampersand before the Point. Point of three, four is a value. Ampersand of Point of three, four allocates: it's like a constructor, creating a pointer to a new instance. Every time you execute that, you get a new instance of Point initialized to three, four. Then you can call x.Scale, which is what you'd expect to be able to do. Okay. So that's pretty simple. The syntax is slightly different, but the ideas are pretty simple. Now it gets more interesting, because you can put methods on any type you define. And here's a full example; this is another complete program that actually does something a little different. So again, we start with package main, and now we import fmt, which is the formatted printing package. It has Printf and stuff like that in it. And then here's our type TZ int coming back. And now we declare a few more of these constants: we have an Hour, which is of type TZ, which is 60 minutes times 60 seconds. And then we declare, just to make it short, two time zones: zero hours and minus five hours, for UTC and EST, okay? Now, we declare a map from the names to the values of those time zones. That should be familiar; we actually had exactly the same example before. And now we put a method on a time zone. Now, TZ is just an int, but I'm going to put a method on it. So I declare a function called String, capitalized, that returns a string, with receiver tz, lowercase, of type TZ, uppercase. Notice that it's not a pointer; that's really important. It doesn't have to be a pointer.
And then, using a variant of the for-loop that ranges over the elements of a collection, I say name, comma, zone := range timeZones. What I'm looking for is the element of the timeZones map whose value is equal to the value of tz. That's the name of the one I want. So it's like reversing the map: I just loop over all the elements looking for the one whose time zone is equal to the zone of that element of the map, and if that's true, I return the name. And you can see there's actually quite a bit going on in there. There's the range, which is giving you the iterator; there's the colon-equals to declare the loop variables that range over the elements; and so on. But of course you might get a value of the TZ integer that's not an exact time zone number. And so, if that happens, we finish the loop and we return fmt.Sprintf of blah-blah. Sprintf is analogous to the standard C sprintf, but it actually returns a new string. It doesn't put it into a buffer. So that's an implementation of a method that lets you print time zones in interesting ways. We can use it in our main function: if we just say fmt.Println(EST)--Println is a variant of print that doesn't require a format--it'll work. It'll actually print out "EST" by catching it in the loop of the String method. And notice that Println knows, under the covers, by mechanisms that are described at length on the website, that the TZ type has a String method and that that's how it should pretty-print a value of that type. So that's all you need to do to make it print itself. And then there's another example: we can Println five times Hour over two, and it comes out as plus two colon thirty, because the Sprintf at the bottom works. So this is a fairly contrived example, but you can see there's a [INDISTINCT] going on here. Having methods on general types, as any Smalltalk programmer will know, is a very convenient thing to do.
But by separating the methods away from structs or classes, we actually introduce some new ideas that you can play with, and that's pretty fun. Okay. So now we come to something that I will not be able to describe in full in the time available, but it's pretty interesting and arguably the most novel thing about Go, and that is the concept of an interface. An interface is, what's the word, a formalization of the concept of a set of methods. So we can declare an interface type Magnitude and say that anything that implements the Magnitude interface is something that implements the Abs method returning a float. And of course there could be lots of other things in there, but we'll assume for the moment there's just one. So, remember we declared an Abs method for the type *Point. We had an x on a previous slide--that x, the address of the Point three, four--that's the same x, and we can use it here. We declare a variable m with the interface type Magnitude and assign to it the value x. And the reason we can do that is that x is an implementation of that interface, so that's a valid assignment, okay? And having done that--var m Magnitude is a variable, we assign x to it, and then we declare a new variable mag that's m.Abs(), so it's just the vector length of the point x. But now we declare another type, Point3, which has three coordinates, and define the Abs method for that, same idea. And now we can assign a Point3 to m, and that value is still held in an interface variable, but inside it is an object that has its own implementation of the Abs method. And so we can call the Abs method on that and add the result to the variable we've got.
And then we can do it again, and this time, just for fun, we declare a Polar type, which is R and Theta--and of course that's a capital Theta, because it's a public field. And then we declare p Polar. Notice there's no pointer this time. Why make it a pointer? It doesn't matter. So we just make it a value receiver on Polar, and Abs returns the radius of the thing, which is the definition of the magnitude. Then we assign a value of that type to m, and now we can add another thing to the magnitude. So the interface variable is letting you say, "I can work with anything that implements these methods, and anything at all will do." Here I used three different representations of a coordinate, and they're quite different in character. Under the covers, it's all very simple. And the key point is that nowhere does Point3 or Polar have to tell you that it implements the Magnitude interface. It implements it in fact, because it satisfies the methods that are defined by the interface. And there's way more going on here than I have time to talk about, but I have to mention one very important concept behind interfaces, which is the generality they give you: any method can be part of any interface. It's not one-to-one; it's many-to-many. It's all mixed up. And so a struct of a given type may implement multiple interfaces, depending on which interface you're using. And that gives you the opportunity to define very simple interfaces that capture very general properties. A really good example of this is in the io package, and it's called the Writer interface. A Writer is anything that implements a standard Write call, and that's what we define the standard Write call to be. There's a slice of bytes, which is the data you're going to write; a slice carries its length inside it, so we don't need a byte count.
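The three-representations example just described can be sketched like this. Again a reconstruction from the narration, assuming these field names; note that none of the three types declares that it implements Magnitude:

```go
package main

import (
	"fmt"
	"math"
)

// Magnitude is satisfied by anything that has an Abs method.
type Magnitude interface {
	Abs() float64
}

type Point struct{ X, Y float64 }

func (p *Point) Abs() float64 { return math.Sqrt(p.X*p.X + p.Y*p.Y) }

type Point3 struct{ X, Y, Z float64 }

func (p *Point3) Abs() float64 {
	return math.Sqrt(p.X*p.X + p.Y*p.Y + p.Z*p.Z)
}

// Polar uses a value (non-pointer) receiver; either kind satisfies
// the interface. Its magnitude is just the radius.
type Polar struct{ R, Theta float64 }

func (p Polar) Abs() float64 { return p.R }

func main() {
	var m Magnitude
	var mag float64

	m = &Point{3, 4}
	mag += m.Abs() // 5
	m = &Point3{3, 4, 12}
	mag += m.Abs() // 13
	m = Polar{10, math.Pi / 2}
	mag += m.Abs() // 10

	fmt.Println(mag) // prints 28
}
```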
And it returns, there’s that function that [INDISTINCT] before returns a pair, a count and an error, but don’t worry about that. Just think about the writer. The writer said anything that implements this standard form of the write call can be used to write it to. It makes sense. This is just--it's almost a tautology. But now, anything you implement write for can be used by anybody that only needs something that implements write. It doesn’t matter where the properties it has. So for instance, fmt.Printf which is just what it sounds, fmt.Fprintf, pardon me, which is product much what it sounds like, doesn’t take a file as the first argument. It takes an io.Writer as its first argument. And a result, you can call fmt.Fprintf on a network connection, a pipe, a file descriptor, a buffer, all kinds of other things, anything that implements write, a cryptography pipeline. And then, the way buffer io is done in Go is that there is no such thing as buffer io, there’s just buffering. And what happens is if you want to create a buffered version of something, you give it a writer which is anything that implements write into the covers and it gives you back a buffered version of that same thing that you can use where you used it before. So here’s an example that puts that together, again, a complete program. Starting with package main, we import bufio, fmt and os again. And now, we wrote the os.Stdout before, but now were going to use Fprintf to write to it to make it explicit. So, we say, fmt.Fprnintf os.Stdout hello. Then, we call bufio.NewWriter. We pass it os.Stdout but NewWriter only--all it cares about os.Stdout is that it implements the right call. And the NewWriter function inside bufio returns something else that implements the writer interface, and so I can write to it. So, I can call fmt.Fprintf of buf and now I have a buffered write. So I better flush it when I’m done. 
But you see Fprintf used two different types in there for its output, and it worked, because underneath, if you look at the Fprintf declaration, all it cares about is that the thing being passed in implements the Write call--which of course they do, and it's type-checked, so you know they do. So there's a lot more of that stuff to talk about, but we're trying to hit the highlights. Communication channels are a little less radical than interfaces, because they're based on work in some earlier languages, going back to Hoare's CSP. But unlike Hoare's original CSP, the communication is done through first-class values called channels. His second round of CSP, done about ten years later, had these, but we'd played with them linguistically by then. The idea is: you make a channel, and it has type string. That means you can send and receive strings on that channel, just like a pipe that you can send typed values through. And that notation with the left-pointing arrow is the communication operator; when you use it infix, it means send. So that sends the string "Hello, world" down the channel. And somewhere else, some other time, some other goroutine can read from channel c by using the prefix form of that communication operator on c, and get out the greeting--so it will see greeting assigned "Hello, world". And this has to be done in another goroutine, because channels are synchronous by default. You can control that when you allocate them. And one of the cool things is that channels themselves are first-class values. That means you can pass a channel on a channel, and I'll show you in a minute that that gives you some interesting capabilities. So if someone else wants this channel that you can send greetings on, you can pass it to him over a channel, and then he can send you greetings, all right? But to do that, we have to talk a little bit about goroutines.
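The send-and-receive notation just described looks like this as a complete program (a minimal sketch; the goroutine wrapping is needed precisely because, as noted, an unbuffered channel synchronizes sender and receiver):

```go
package main

import "fmt"

func main() {
	c := make(chan string) // unbuffered: send and receive synchronize

	go func() {
		c <- "Hello, world" // infix arrow: send down the channel
	}()

	greeting := <-c // prefix arrow: receive from the channel
	fmt.Println(greeting)
}
```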
So, suppose some calculation just takes far too long, right? Who wants to wait around for that? You're going to block forever waiting for the calculation, but maybe you don't want to. What you can do instead is create a channel to get the answer back on, and then declare a function, wrapper, that takes the parameter for the calculation and a channel, invokes the calculation, and, when it's done, sends the result back. I broke it into two lines just to make it easier to read; you'd probably just write c <- longCalculation(a) in real code, okay? And then, to run the thing, there's a go keyword. Go is a keyword. go wrapper(17, c) says, "Take the function call wrapper(17, c), run it in the background, and let me keep going." Okay? And then, eventually, you want to hear what happened, so once you've finished doing your business, you come back and say, "Okay, I'm ready for the answer now," and you receive from c to get the result. So I've taken this calculation and shoved it off over here, and then it flows back to me through a receive on the channel down underneath. That should be fairly easy to follow. This might not be so easy, but we'll try. This is a multiplexed server--at least the server side of it; I'll show you the client on the next slide. And the idea here--remember I said you can pass a channel on a channel. What we have is a request, which is a request for the server to implement. It's trivial: it just carries a pair of integers and the channel on which to reply. Very much like what we did before on previous slides, generalized somewhat. We define a binary-operator type, and we define a function run that takes a binary operator and a request and sends, on the reply channel inside the request, the value of running the operator on the arguments inside the request.
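The wrapper pattern from the start of that passage can be sketched as follows. The body of longCalculation is a placeholder of my own (the talk never says what it computes); the shape of wrapper and the go call follow the narration:

```go
package main

import "fmt"

// longCalculation stands in for something slow; squaring the
// argument is a placeholder, not the talk's actual computation.
func longCalculation(a int) int { return a * a }

// wrapper runs the calculation and delivers the answer on c.
func wrapper(a int, c chan int) {
	result := longCalculation(a)
	c <- result // two lines for readability; c <- longCalculation(a) also works
}

func main() {
	c := make(chan int)
	go wrapper(17, c) // runs in the background; we keep going

	// ... do other work here ...

	result := <-c // ready for the answer now: blocks until it arrives
	fmt.Println(result)
}
```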
So that's just turning the sending of the thing from the previous slide into a function--not very exciting yet. But then we define a server function that takes the operator it's going to serve and a channel of requests. It just sits in a loop reading requests from the service channel and then saying run(op, req), but with a go keyword at the beginning. So it runs all these requests in parallel, in the background. The server is never blocked; it's always ready to receive another call no matter how much is going on. Obviously there are issues about rate-limiting and stuff like that, but let's not worry about those. The key point is that that's, basically, a server loop. And then, to make it easy for the client to use, we define a public function--an exported function, by starting it with a capital letter--StartServer. It says: take an operator, start a server for that operator, and return the channel on which requests can be sent. So what we do is we allocate the channel--a channel of requests--and start the server process, which in turn is going to start the little implementers of the operator as requests come in. And then we return the channel as the return value from StartServer. So you call a function, you get a channel back; now you've got a thing you can talk to, and that's the fundamental idea of how you use this stuff to build services, okay? On the client side, it looks like this. You say server := StartServer, and here's a function. Notice that I'm just writing a function in line there. Go has full closures; you can write them down as expressions. So now I've got a server running that's going to add numbers for me, and I've got a channel for that server in the variable called lowercase server. I allocate a couple of requests in the obvious way, and inside the requests I allocate new channels, because I want them to respond to me on separate channels, for whatever reason, okay?
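The server and client halves just described, put into one program, might look like this. A sketch reconstructed from the narration: the struct, function, and variable names are assumptions (the real talk slides may differ in detail):

```go
package main

import "fmt"

// A request carries the operands and a channel on which to reply.
type request struct {
	a, b   int
	replyc chan int
}

type binOp func(a, b int) int

// run applies the operator and sends the result on the reply channel.
func run(op binOp, req *request) {
	req.replyc <- op(req.a, req.b)
}

// server loops forever: receive a request, handle it in the background.
// The go keyword means the server itself is never blocked.
func server(op binOp, service chan *request) {
	for {
		req := <-service
		go run(op, req)
	}
}

// StartServer (capitalized, so exported) starts a server for op and
// returns the channel on which requests can be sent.
func StartServer(op binOp) chan *request {
	service := make(chan *request)
	go server(op, service)
	return service
}

func main() {
	// The client: an in-line closure defines the operator.
	adder := StartServer(func(a, b int) int { return a + b })

	req1 := &request{1, 2, make(chan int)}
	req2 := &request{10, 20, make(chan int)}
	adder <- req1 // send in either order
	adder <- req2
	fmt.Println(<-req2.replyc, <-req1.replyc) // receive in either order: 30 3
}
```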
And then I can send the requests to the server, and it doesn't matter what order I send them in; it's totally non-blocking. I just send them in any order I want. And then I can print the results as they come back, and I don't have to worry about the order because, again, it's totally parallel. So, in fact, I send one first but ask for two first. And if two takes forever, I'll still hang around waiting for its response to come back, and once it finally does, then I'll print the value of one, even if one was ready sooner. So this was very rushed, and I apologize if I've left you a little befuddled, but there are examples like this in the tutorial and some other documents that should make it a little clearer what's going on if that lost you. Now, if we go back to the server here--remember I said it has a rate-limiting issue, and also the problem that it runs forever. You might want to shut it down. One way to do that is to use a control structure in Go called select, which is quite a rich control structure. The basic idea is: given a set of communications on channels, select lets you wait for any one of them to become ready, and once one does become ready, lets that single communication proceed. So it's like a switch statement for communication. So we replace the inner loop of server--which was just, remember, get a request and run go on it--with a version that has a select statement inside it and a second channel called quit that's passed into the server when we create it. Then we can run this normally, and it will do what it was doing before. But as soon as you want it to go away, we just send a signal, by sending a boolean of any value--it doesn't matter what--on the quit channel, and that branch of the select will fire and the function will return, which is equivalent to exiting the goroutine. So, that's again pretty quick, and I apologize for that.
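Adding the quit channel and select to the server might look like this. Again a sketch with assumed names; the select's second branch is exactly the "any value on quit fires it" idea from the narration:

```go
package main

import "fmt"

type request struct {
	a, b   int
	replyc chan int
}

type binOp func(a, b int) int

// server loops until a value of any kind arrives on quit; then it
// returns, which exits the goroutine.
func server(op binOp, service chan *request, quit chan bool) {
	for {
		select {
		case req := <-service:
			go func() { req.replyc <- op(req.a, req.b) }()
		case <-quit:
			return
		}
	}
}

func StartServer(op binOp) (service chan *request, quit chan bool) {
	service = make(chan *request)
	quit = make(chan bool)
	go server(op, service, quit)
	return service, quit
}

func main() {
	adder, quit := StartServer(func(a, b int) int { return a + b })

	req := &request{1, 2, make(chan int)}
	adder <- req
	fmt.Println(<-req.replyc) // prints 3

	quit <- true // any boolean will do; the quit branch of the select fires
}
```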
But I wanted to make sure we hit some of those points. And in fact there's a lot more to talk about. I haven't talked much about package construction. Initialization is actually quite interesting in Go; it's sort of hard to see, but it does a lot for you. It can help you set up things like RPC servers with almost no code, and stuff like that. There's full reflection: you can reflect on functions, channels, maps, all kinds of stuff. There's some interesting dynamic typing going on if you want it, but you don't have to use it. There's a thing called embedding that is a little bit like inheritance, but simpler and yet general. There are iterators that come out of the box: range works in for loops over channels, functions, things like that. And there's some interesting testing software. I don't have time to talk about any of these here, but I want to make it clear there's a lot more to the language than what I have time to show you. So let me close this section with another interesting example. This is a real concurrency example. Remember, we started this language because we wanted to build servers. And one of the problems with building servers in the modern world is this model of a thread per request, which is quite labor-intensive and resource-intensive to run, and quite difficult to get right in most languages. We want to make it easy to think about having tens of thousands or even hundreds of thousands of threads--whatever they are--working on your behalf. And to prove that we've got some way towards that, here's an example that will create a hundred thousand goroutines and have them do something. You probably won't be able to decode this in real time, but what it does is: there's a little function f whose whole job is to read a value from a channel, add one to it, and send it out again. So, left gets one plus receive from right--that's the thing inside f.
It just passes its token along, but adds one to it, okay? And inside main, we declare the leftmost piece, and then a couple of variables. And then, for the number of goroutines you want to start, we sort of thread a new piece on and walk along, but we remember where we started. So we stitch together a bunch of goroutines like this, each of which is passing a value along a channel to the next one. And so we end up constructing, in this case, a hundred thousand goroutines all strung together like pearls on a string. And each one is waiting for the guy upstream to give it a value. Then, right at the bottom there, after the loop--once we've built this thing--we drop a value into the right-hand side, and it goes pfft and pops out the left. And the value that comes out is, of course, the number of goroutines in the chain, because we've added one at each step, okay? You can examine that later if it confuses you, but let me show it to you actually running. Okay. So--oh dear, I think that was the wrong file--okay, sorry it took so long. And now I'll run it--remember, this is a little MacBook Air, not very fast. That's a hundred thousand threads of control--I won't say threads, because that's not how it's done--but just in case you're wondering, it takes about 1.5 seconds to manage a hundred thousand goroutines. And it's real, honest-to-God management too: it's stitching together the communication, doing the communication, and tearing them down. This isn't create-and-instantly-kill; this is a legal, honest build them, use them, tear them down. It's kind of the minimum thing you can do like that, but still, a hundred thousand is an interesting number, and we could go higher, but it starts to use up memory. So that's concurrency, showing that you can, at least conceptually, have many, many things going on.
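The daisy-chain demo just described can be sketched as follows. This is a reconstruction from the narration (the variable names are assumptions): each iteration threads one more goroutine onto the chain, and the value dropped in at the right end is incremented once per link on its way to the left:

```go
package main

import "fmt"

const ngoroutine = 100000

// f reads one value from right, adds one, and sends it out on left.
func f(left, right chan int) {
	left <- 1 + <-right
}

func main() {
	leftmost := make(chan int)
	var left, right chan int = nil, leftmost
	for i := 0; i < ngoroutine; i++ {
		left, right = right, make(chan int) // thread a new piece on
		go f(left, right)
	}
	right <- 0              // drop a value in at the right-hand end...
	fmt.Println(<-leftmost) // ...and it pops out the left: prints 100000
}
```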
Now, the goroutine implementation has to worry about real threading and real stuff like that when you're doing I/O operations that could block, but that's taken care of. So, while that's a simple-minded example, it's indicative of something that really works. Let me talk a little bit about status. There's actually quite a bit to say: we have two complete implementations in two different compiler technologies. Ken Thompson wrote a suite loosely based on some of the Plan 9 tools--but from scratch--using their peculiar notation: 6g is the AMD64, the 64-bit x86, compiler. There's also one for 386 and one for ARM, which is 5g, for reasons--don't even ask. But anyway, the point is there's a compiler; that's the one I've been using today. It's more experimental than the other compiler, but as you've seen, it generates code quickly. The code it generates is pretty good--not as good as what comes out of GCC, but better than you might think for a compiler that goes that fast. In order to do that, it's got its own rules, and so the output is not linkable directly with GCC. But there's foreign-function-interface support that's coming along pretty well: we can call C code from Go with this compiler just fine, and once we get SWIG working, we'll be able to call C++ code. And then Ian Taylor wrote a GCC front end for Go, which is a complete implementation. I should say, by the way, that Ken's compiler is written in C for bootstrapping reasons--although, honestly, Go would be a great language to write a compiler in, it's written in C for now. And Ian Taylor's front end for the GCC back end is written in C++. It generates excellent code, of GCC quality, but it's not as fast--it's something like four or five times slower to compile. But the real advantage of compiling Go isn't the compiler's raw speed; it's the overall compilation speed. The whole picture is where the speed really comes from.
Both these compiler suites support 32- and 64-bit x86, and the ARM compiler that Kai has been working on is almost ready--I think it's up to 97% test-suite compliance now, so we're very close. We hope to have it ready in time for the open-source release. As for performance of the generated code: it varies, obviously, but if you avoid some of the libraries that have weak implementations, typical inner-loop performance for standard programming puts you in the 10 to 20 percent range--with GCC tending to be more like 10 percent slower than C, and Ken's compiler typically more like 20 percent slower. So, 1.1x to 1.2x--but this is for a type-safe, statically compiled, garbage-collected language. It's pretty nice. And I'm happy to give up 10 percent; that's still not very much clock time on the calendar. There's a run-time that handles all the memory allocation, garbage collection, stuff like that; the stack handling, which is pretty special but very important; the goroutine support, channels, slices, maps, reflection--all of that stuff is built into a real run-time. So, even though it's a systems language, it has a really powerful run-time system, including dynamic type reflection. And it's pretty solid. It's improving--it's got a ways to go, especially in memory allocation and some of the scheduling stuff, but we're working on it. 6g has very good goroutine support: it multiplexes them onto threads well and implements what we call segmented stacks, which is how we keep the footprint small but let stacks grow as they need to. Gccgo is a little behind on that, because Ian has been working on a few other things lately, but we hope by the end of the year to have all that stuff working in gccgo, and they'll basically be at the same stage at that point. Even so, gccgo can compile all the code and run it now.
The garbage collector: 6g has a simple but effective mark-and-sweep collector, and work is under way to do a much better job. We believe that with multicore machines you can actually do concurrent garbage collection with essentially zero latency and very little cost and overhead. IBM has some garbage-collector technology that we think is pretty exciting, and we think that, building on that stuff, we can actually meet our real goals, which are to avoid a lot of the pitfalls that garbage collectors tend to have. We've done work in other languages before that indicates this really can be done, and now that we have multicore machines, we believe we can really solve it. Gccgo at the moment has no collector, but, as I said, we're working on a common run-time, and the collector is designed to run with either compiler. It's part of the general run-time, so both compilers will have real garbage collectors inside them. There are lots of libraries--there's tons more to do, but we have a pretty good start. We've got, obviously, OS and I/O stuff; I showed you some of that. There's a nice, simple math package. There are strings, good support for Unicode, and a rudimentary but functional regular-expression implementation. There's run-time reflection, command-line flags, and logging, which are very nice to use. There are full hashes and crypto and all that kind of stuff. There's a really good testing tool and a library to support it, and standard networking libraries just like you'd expect, including a native RPC implementation. It's kind of pretty. There's a really interesting template library, based on some work that Andy Chu did, that lets you write HTML--or, in fact, anything at all--using really simple data-driven page generation. It's really quite nice; Andy gets most of the credit for that. There's a lot more, but you can see there's actually quite a bit in place already.
One of the most interesting programs written in Go is a pair of related things--they share a lot of code--called godoc and gofmt. Godoc is analogous to Javadoc: it serves documentation in response to requests to look up, you know, what this package does or what that function does. There's a set of links listed there. Golang.org is the top-level site, running godoc; it serves the landing page. Underneath that, you can find all the other documents--the specification, tutorials, FAQs, and stuff like that. But if you dig down into the package subdirectory, that's really interesting: the automatically generated package documentation is especially quite rich. And then under the source directory, the source gets served--you can get it out of the code repository, too. The difference is that the served source is processed by gofmt. Gofmt is a pretty-printer, and all of the code in the repository has been formatted by gofmt. So rather than set down a bunch of style rules, we have a program that says this is what the code looks like, and you just run your code through gofmt when you check it in, and that settles all those arguments. If you want to debate style, you have to debate by changing gofmt; you can't debate by changing a document. And I think it actually does an amazingly good job--Robert did a very good job on it. We have a debugger. It's not quite ready, but it's pretty close; I hope by the end of the year we'll have something functional that we'll be happy to show. It's coming along, and it will work with either environment, but at the moment it's working in the 6g world primarily. Gccgo users can, of course, use gdb. Gdb doesn't understand the symbol tables coming out of Ken's compiler, so it doesn't work very well with that. But with gccgo compilation you can use gdb, though it thinks it's debugging a C program, and there's some weirdness in the symbol table.
And gdb also doesn't know anything about the run-time, which is a critical thing when you're debugging in an environment like this. So we've really got to get our debugger up to scratch. We had a summer intern, Austin Clements, here working on that. It's pretty close, but not quite ready for real use. One question everybody asks is: what about generics? And the answer is Go doesn't have them yet. We don't understand exactly what they should do in Go's world; they're quite subtle. The easy examples are easy to write down--it's obvious what the easy examples do--but the complete, correct definition is going to be quite complicated, and we want to make sure we get it right. In some ways, Go makes it simpler, because there's no type inheritance, and that eliminates one branch of complexity in generics. But on the other hand, you can put methods on things that are values of arbitrary size, and that has consequences; that direction both complicates and simplifies at the same time. And we just don't think we understand it well enough to do them yet. Also, in the current language, although generics would definitely be useful and could solve a lot of problems, the map and slice implementations and the interfaces themselves actually cover a lot of the obvious use cases for generics--which is not to say generics wouldn't cover a lot of others, but the need isn't quite as acute as it is in a language that doesn't have those features. You can in fact build collections fairly nicely using what's called the empty interface, which is just interface, open brace, close brace. That's an interface that declares no methods, so everything in the world satisfies the empty interface. It's a little bit like--but with a completely different meaning from--Object in Java. And you can build collections that use it; there are quite a few checked in. But they have the disadvantage that they're not type-safe, since you have to unbox manually, and stuff like that.
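A small illustration of the empty-interface collection and the manual unboxing it requires (a sketch of my own, not code from the talk):

```go
package main

import "fmt"

func main() {
	// interface{} declares no methods, so every value satisfies it,
	// a bit like Object in Java but without any type hierarchy.
	v := []interface{}{42, "hello", 3.14}

	for _, item := range v {
		fmt.Println(item)
	}

	// The cost: getting a concrete type back out requires a
	// type assertion, i.e. manual unboxing.
	n := v[0].(int)
	fmt.Println(n + 1) // prints 43
}
```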
They're not really what we would propose to do instead; they're just what we're using for now while we struggle with this question. So: generics, not yet--they're very subtle. It's interesting: Josh Bloch has a long discussion about generics in his writing. And I think most people think that Java generics work really well, including Josh, but he points out they're actually quite a bit harder to get right than you'd think, and that maybe it's worth thinking really hard before you put them in. So they may come, but they're not there now. And, of course, there are a million other things: where's my feature? Everybody who programs has a feature they want, and chances are that some of the things you really think are important are missing, or they might be there but not in the way you're used to. A really good example is enums: there's no enum, but there is iota. And iota is an interesting thing, but it's not an enum; it's different. So your feature may not be there, but there may be some way to do what you want; it's just that Go is different--and almost as good, or maybe even better. But why isn't a feature there? Well, maybe it doesn't really work in the language. Like, there's no pointer arithmetic: pointer arithmetic is nice, but we can't do pointer arithmetic and keep the language safe, so it's just not there. And the compiler makes up for it by trying to compile away some of the inefficiencies you get from having index checks everywhere, and in exchange you get a safe language. So a lot of the things people expect in a language like this are not there because they break the rules--or at least they break principles about what we think the language should do--and we don't have them. And there are also things that are possible--generics are a good example--that are just not at the top of our list. That doesn't mean they won't happen. Union types are another one we've been thinking about; actually, I think there could be a very good proposal for them, but we haven't done that yet.
So it's possible that some of the things you're pining for will actually arrive fairly soon, or not too far out. But it's a different enough language that even if it doesn't have the features you want, it may have features you haven't used before--and I can guarantee that some of you haven't used some of these before; the interface stuff is very interesting. So don't let the lack of things you expect stop you from playing with the things that are there, and seeing that it actually does some pretty interesting stuff. A lot more information on this topic is available: there's an explicit language-design FAQ on the website, and it talks about a lot of the "why isn't this there" questions. So, to conclude, I think we really have a nice language. I'm having more fun programming than I've had in a long time, because I feel like I'm getting work done--because I am getting work done. It's early yet. The language is going to evolve some more--there are things we know--but a lot of the basics are locked down pretty well, and I feel pretty comfortable with them. It's a very comfortable language to work in. It's very productive: you can write code really fast and have it work and be safe. There's tons of documentation: there's a complete specification of the language; there's a simple tutorial for beginners; there's a document called Effective Go, which is growing to explain idioms and how you think about things differently in this language; there's a FAQ; and there's even more than that. All the implementations are available as open source: the 6g/8g/5g suite and all its tools, and gccgo, which lives in a different place but is also going out and will be linked from the landing page. So if you want to try it, go get it. If you want to help, please do. There's tons of work to do to build it up to another level.
We really welcome people to come in and talk about stuff, especially if they want to help us do some interesting library work or build new tools; there's plenty of room for things to happen. So that's our site and that's our language. Thanks for listening.