
  • [MUSIC PLAYING]

  • PROFESSOR: Well, Hal just told us how you build robust

  • systems. The key idea was--

  • I'm sure that many of you haven't really assimilated that

  • yet-- but the key idea is that in order to make a system

  • that's robust, it has to be insensitive to small changes,

  • that is, a small change in the problem should lead to only a

  • small change in the solution.

  • There ought to be a continuity.

  • The space of solutions ought to be continuous in this space

  • of problems.

  • The way he was explaining how to do that was: instead of

  • solving a particular problem, at every level of

  • decomposition of the problem into subproblems, you

  • solve the class of problems which are a neighborhood of

  • the particular problem that you're trying to solve.

  • The way you do that is by producing a language at that

  • level of detail in which the solutions to that class of

  • problems are representable.

  • Therefore when you make small changes to the problem you're

  • trying to solve, you generally have to make only small local

  • changes to the solution you've constructed, because at the

  • level of detail at which you're working, there's a language

  • where you can express the various solutions to alternate

  • problems of the same type.

  • Well that's the beginning of a very important idea, perhaps

  • the most important idea that makes computer science more

  • powerful than most of the other kinds of engineering

  • disciplines we know about.

  • What we've seen so far is sort of how to use

  • embedding of languages.

  • And, of course, the power of embedding languages partly

  • comes from procedures like this one that

  • I showed you yesterday.

  • What you see here is the derivative program that we

  • described yesterday.

  • It's a procedure that takes a procedure as an argument and

  • returns a procedure as a value.
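
A minimal sketch of that procedure in Scheme, assuming a small
fixed dx, as in the earlier lecture:

    ;; dx is a small increment (value assumed here).
    (define dx 0.00001)

    ;; deriv takes a procedure f as its argument and returns,
    ;; as its value, a new procedure: the numerical
    ;; approximation to the derivative of f.
    (define (deriv f)
      (lambda (x)
        (/ (- (f (+ x dx))
              (f x))
           dx)))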

  • And using such things is very nice.

  • You can make things like push combinators and all that sort

  • of wonderful thing that you saw last time.

  • However, now I'm going to really muddy the waters.

  • See this confuses the issue of what's the procedure and what

  • is data, but not very badly.

  • What we really want to do is confuse it very badly.

  • And the best way to do that is to get involved with the

  • manipulation of the algebraic expressions that the

  • procedures themselves are expressed in.

  • So at this point, I want to talk about instead of things

  • like on this slide, the derivative procedure being a

  • thing that manipulates a procedure--

  • this is a numerical method you see here.

  • And what you're seeing is a representation of the

  • numerical approximation to the derivative.

  • That's what's here.

  • In fact what I'd like to talk about is instead things that

  • look like this.

  • And what we have here are rules from a calculus book.

  • These are rules for finding the derivatives of the

  • expressions that one might write in

  • some algebraic language.

  • It says things like a derivative of a constant is 0.

  • The derivative of the variable with respect to which you are

  • taking the derivative is 1.

  • The derivative of a constant times the function is the

  • constant times the derivative of the function,

  • and things like that.

  • These are exact expressions.

  • These are not numerical approximations.
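
For reference, the rules being described, with c a constant, x
the variable of differentiation, and u and v subexpressions:

    dc/dx = 0
    dx/dx = 1
    d(cu)/dx = c (du/dx)
    d(u+v)/dx = (du/dx) + (dv/dx)
    d(uv)/dx = u (dv/dx) + v (du/dx)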

  • Can we make programs?

  • And, in fact, it's very easy to make programs that

  • manipulate these expressions.

  • Well let's see.

  • Let's look at these rules in some detail.

  • You all have seen these rules in your elementary calculus

  • class at one time or another.

  • And you know from calculus that it's easy to produce

  • derivatives of arbitrary expressions.

  • You also know from your elementary calculus that it's

  • hard to produce integrals.

  • Yet integrals and derivatives are opposites of each other.

  • They're inverse operations.

  • And they have the same rules.

  • What is special about these rules that makes it possible

  • for one to produce derivatives easily,

  • while integrals are so hard?

  • Let's think about that very simply.

  • Look at these rules.

  • Every one of these rules, when used in the direction for

  • taking derivatives, which is in the direction of this

  • arrow, the left side is matched against your

  • expression, and the right side is the thing which is the

  • derivative of that expression.

  • The arrow is going that way.

  • In each of these rules, the expressions on the right-hand

  • side of the rule that are contained within derivatives

  • are subexpressions, are proper subexpressions, of the

  • expression on the left-hand side.

  • So here we see that the derivative of the sum, which is the

  • expression on the left-hand side, is the sum of the

  • derivatives of the pieces.

  • So the rules, moving to the right, are reduction rules.

  • The problem becomes easier.

  • I turn a big complicated problem into lots of smaller

  • problems and then combine the results, a perfect place for

  • recursion to work.
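
To see the reduction concretely, here is one small example,
differentiating x*x + x with respect to x:

    d(x*x + x)/dx = d(x*x)/dx + dx/dx              [sum rule]
                  = (x (dx/dx) + (dx/dx) x) + 1    [product rule]
                  = (x*1 + 1*x) + 1

Each derivative on the right-hand side applies to a proper
subexpression, so the recursion must bottom out.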

  • If I'm going in the other direction like this, if I'm

  • trying to produce integrals, well there are several

  • problems you see here.

  • First of all, if I try to integrate an expression like a

  • sum, more than one rule matches.

  • Here's one that matches.

  • Here's one that matches.

  • I don't know which one to take.

  • And they may be different.

  • I may have to explore different things.

  • Also, the expressions become larger in that direction.

  • And when the expressions become larger, then there's no

  • guarantee that any particular path I choose will terminate,

  • because we will only terminate by accidental cancellation.

  • So that's why integrals are complicated

  • searches and hard to do.

  • Right now I don't want to do anything as hard as that.

  • Let's work on derivatives for a while.

  • Well, these rules are ones you know for

  • the most part hopefully.

  • So let's see if we can write a program which is these rules.

  • And that should be very easy.

  • Just write the program.

  • See, because what I showed you is that these are reduction

  • rules, something appropriate for a recursion.

  • And, of course, what we have for each of these rules is we

  • have a case in some case analysis.

  • So I'm just going to write this program down.

  • Now, of course, I'm going to be saying something you have

  • to believe.

  • Right?

  • What you have to believe is I can represent these algebraic

  • expressions, that I can grab their parts, that I can put

  • them together.

  • We've invented list structures so that you can do that.

  • But you don't want to worry about that now.

  • Right now I'm going to write the program that encapsulates

  • these rules independent of the representation of the

  • algebraic expressions.

  • You have a derivative of an expression with

  • respect to a variable.

  • This is a different thing than the

  • derivative of the function.

  • That's what we saw last time, that numerical approximation.

  • A function is something you can't open up.

  • It's just the answers.

  • The derivative of an expression is

  • the way it's written.

  • And therefore it's a syntactic phenomenon.

  • And so a lot of what we're going to be doing today is

  • worrying about syntax, syntax of expressions

  • and things like that.

  • Well, there's a case analysis.

  • Anytime we do anything complicated like this, by a

  • recursion, we presumably need a case analysis.

  • It's the essential way to begin.

  • And that's usually a conditional

  • of some large kind.

  • Well, what are the possibilities?

  • The first rule that you saw: is this something a constant?

  • And what I'm asking is, is the expression a constant with

  • respect to the variable given?

  • If so, the result is 0, because the derivative

  • represents the rate of change of something.

  • If, however, the expression that I'm taking the derivative

  • of is the variable I'm varying, that is, the expression is

  • the same variable var, then the rate of change of the

  • expression with respect to the variable is 1.

  • It's the same 1.

  • Well now there are a couple of other possibilities.

  • It could, for example, be a sum.

  • Well, I don't know how I'm going to express sums yet.

  • Actually I do.

  • But I haven't told you yet.

  • But is it a sum?

  • I'm imagining that there's some way of telling.

  • I'm doing a dispatch on the type of the expression here,

  • absolutely essential in building languages.

  • Languages are made out of different expressions.

  • And soon we're going to see that in our more powerful

  • methods of building languages on languages.

  • Is an expression a sum?

  • If it's a sum, well, we know the rule for derivative of the

  • sum is the sum of the derivatives of the parts.

  • One of them is called the addend and the

  • other is the augend.

  • But I don't have enough space on the blackboard

  • to write such long names.

  • So I'll call them A1 and A2.

  • I want to make a sum.

  • Do you remember which is the summand or the minuend?

  • Or was it the dividend and the divisor or

  • something like that?

  • Make a sum of the derivative of the A1, I'll call it,

  • the addend of the expression, with respect to the

  • variable, and the derivative of the A2 of the expression,

  • the second of the two arguments of the addition, with

  • respect to the variable.

  • And another rule that we know is the product rule, which

  • applies if the expression is a product.

  • By the way, it's a good idea when you're defining things,

  • when you're defining predicates, to give them a

  • name that ends in a question mark.

  • This question mark doesn't mean anything.

  • It's for us as an agreement.

  • It's a conventional interface between humans so you can read

  • my programs more easily.

  • So I want you to, when you write programs, if you define

  • a predicate procedure, that's something that returns true or

  • false, it should have a name which ends in a question mark.

  • The Lisp doesn't care.

  • I care.

  • I want to make a sum.

  • Because the derivative of a product is the sum of the

  • first times the derivative of the second plus the second

  • times the derivative of the first. Make a sum of two

  • things, a product of, well, I'm going to say the M1 of the

  • expression, and the derivative of the M2 of the expression

  • with respect to the variable, and the product of the

  • derivative of M1, the multiplier of the expression,

  • with respect to the variable.

  • It's the product of that and the multiplicand, M2, of the

  • expression.

  • Make that product.

  • Make the sum.

  • Close that case.

  • And, of course, I could add as many cases as I like here for

  • a complete set of rules you might find in a calculus book.

  • So this is what it takes to encapsulate those rules.
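
Assembled in Scheme, the blackboard program reads as the sketch
below; the predicates, the selectors A1, A2, M1, and M2, and the
constructors make-sum and make-product are exactly the wishful
thinking just mentioned, still to be defined:

    ;; Derivative of the expression exp with respect to var.
    ;; The representation of expressions is deliberately left
    ;; unspecified: constant?, same-var?, sum?, product?, a1,
    ;; a2, m1, m2, make-sum, and make-product are assumed.
    (define (deriv exp var)
      (cond ((constant? exp var) 0)
            ((same-var? exp var) 1)
            ((sum? exp)
             (make-sum (deriv (a1 exp) var)
                       (deriv (a2 exp) var)))
            ((product? exp)
             (make-sum
              (make-product (m1 exp)
                            (deriv (m2 exp) var))
              (make-product (deriv (m1 exp) var)
                            (m2 exp))))))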

  • And you see, you have to realize there's a lot of

  • wishful thinking here.

  • I haven't told you anything about how I'm going to make

  • these representations.

  • Now, once I've decided that this is my set of rules, I

  • think it's time to play with the representation.

  • Let's attack that.

  • Well, first of all, I'm going to play a pun.

  • It's an important pun.

  • It's a key to a sort of powerful idea.

  • If I want to represent sums, and products, and differences,

  • and quotients, and things like that, why not use the same

  • language as I'm writing my program in?

  • I write my program in algebraic expressions that

  • look like the sum of the product of a and the product

  • of x and x, and things like that.

  • And the sum of the product of b and x, and c, whatever,

  • make that a sum of products.

  • Right now I don't want to have procedures with unknown

  • numbers of arguments, such as a product of b and x and c.
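
One plausible representation along those lines, sketched as
quoted list structure with two-argument sums and products (the
lecture develops its own version of these definitions):

    ;; x + y is the list (+ x y); x * y is the list (* x y).
    (define (make-sum a1 a2) (list '+ a1 a2))
    (define (make-product m1 m2) (list '* m1 m2))

    (define (sum? e) (and (pair? e) (eq? (car e) '+)))
    (define (product? e) (and (pair? e) (eq? (car e) '*)))

    (define (a1 e) (cadr e))    ; the addend
    (define (a2 e) (caddr e))   ; the augend
    (define (m1 e) (cadr e))    ; the multiplier
    (define (m2 e) (caddr e))   ; the multiplicand

    ;; With respect to var, anything atomic that is not the
    ;; variable itself counts as a constant.
    (define (constant? e var)
      (and (not (pair? e)) (not (eq? e var))))
    (define (same-var? e var)
      (and (not (pair? e)) (eq? e var)))

With this in place, (deriv '(+ (* a (* x x)) (+ (* b x) c)) 'x)
returns a correct, though wildly unsimplified, expression for
2ax + b.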