
  • JOSEPH JAY WILLIAMS: So I just want to mention, my background

  • is in cognitive science.

  • So I'm really interested in doing experiments in terms of

  • understanding how people learn.

  • And to get a sense of the way that fits in with the research

  • landscape, because there's a ton of research on learning

  • and many insights.

  • I guess the best concept is thinking in terms of

  • qualitative analyses.

  • So these rich investigations that you see from a

  • sociological perspective.

  • Education schools that will take a student and really take

  • a very close look at what they're learning.

  • Explore how they're understanding algebra and

  • all the different misconceptions they can have.

  • Then there's also things that are much more like randomized

  • control trials.

  • So policy work, where

  • you might take a bunch of students in a school and

  • give them a treatment where they

  • undergo a training program.

  • And then you can see if that actually has an effect

  • compared to a control group.

  • And that's a very large scale.

  • There's longitudinal studies, which, again, are very long

  • time scale.

  • Collecting measures, like people's grades or students'

  • outcomes as they progress through traditional education.

  • And of course there's work in computer science and

  • Educational Data Mining, where you take very large data sets

  • in terms of observations, then try and induce what's going on

  • with learners.

  • In terms of the way cognitive science fits in with

  • this, I think it's in between something like a randomized

  • controlled trial or longitudinal study and something like

  • qualitative analysis.

  • Because most of the experiments we do are on a short

  • time scale.

  • And it really involves precisely controlling what

  • someone's learning in a particular situation and

  • trying out different forms of instruction and then assessing

  • how learning is occurring after that.

  • And you might think that from such micro experiments you

  • can't learn much.

  • But that's actually the expertise

  • in cognitive science.

  • And I think it's really insightful.

  • It's obviously using a lot of insights.

  • But I think it's really well suited for online education,

  • where often you want to ask questions that are intermediate

  • between, again, a qualitative assessment of what's going on,

  • and running a full-out randomized control trial where

  • you give people two different versions of the course.

  • There are questions about how you should frame instruction,

  • what kind of video you should show someone, and the kind of

  • strategies you should teach them.

  • So that's just to let you know where it sits.

  • And what's also nice about cognitive science as an

  • approach is it's pretty interdisciplinary.

  • In a sense, you've got psychology, which is heavily

  • focused on experiments.

  • Philosophy, in terms of really trying to do good conceptual

  • analysis of what problem you're facing and classifying

  • different kinds of learning.

  • Linguistics, anthropology, as I mentioned, neuroscience, and

  • of course, AI.

  • And what's also nice, I think, is that it's also a bit easier

  • for people in cognitive science perhaps to talk with

  • the kind of researchers at Google who are interested

  • often in things like modeling and machine learning.

  • So to give you a bit of a preview of [INAUDIBLE]

  • cover.

  • So I'm going to talk about two ways you

  • could think about learning.

  • One that I think gives us a lot of rich insights into how

  • to improve education.

  • And I'm going to talk about, very quickly, three findings in

  • cognitive science that I find have been

  • particularly powerful.

  • It's thinking about what can you do before someone starts

  • learning in terms of framing their learning as

  • an answer to a problem?

  • What can you do during learning in terms of

  • requesting explanations from the learner?

  • And then what can you do after learning?

  • A new and interesting finding here is that you can actually

  • use assessments as instructional tools.

  • Having people take a test can actually be more powerful for

  • learning than having them study material again.

  • And then having looked at some of what we know about how to

  • promote learning of a concept or some set of knowledge, it's

  • worth asking, well, what knowledge can we teach someone

  • that's actually going to have the biggest impact

  • practically?

  • And that's why I think it's worth thinking about this idea:

  • what can people learn that's going to help them learn more?

  • So it's not just on this concept but across a range of

  • situations.

  • And then I'll talk about how you can change people's

  • beliefs and have a massive effect on their motivation.

  • You can teach people strategies for learning that

  • then have trickle down effects on all the content they may

  • come across.

  • And then finally, I'm going to talk about online search,

  • which is something I really started to think about

  • seriously recently since looking [INAUDIBLE]

  • of a power searching course.

  • And I actually think this is a really fascinating topic

  • that's right at the junction of things that are really

  • important to industry and the business world, and also

  • really important to education.

  • And I think that's actually a great place to focus, because

  • it allows you to get the benefits of private innovation

  • as well as academic work.

  • And also it means that you can use insights across areas.

  • When you're teaching a business person about search,

  • you're also learning something about how you could help a

  • child to be more inventive in the way they discover

  • information on the internet.

  • How is my speed in terms of talking?

  • Because with an accent, it probably sounds

  • like twice as fast.

  • And I speak twice as fast, so there are a lot of powers

  • being multiplied there.

  • OK.

  • And one thing I'll cover at the end is I put on the

  • website just a list of resources that I found really

  • useful in terms of knowing what the literature is that

  • has shown really impressive effects on learning.

  • When you go on Google Scholar, I mean literally there are

  • thousands of papers there.

  • And I think that can be daunting.

  • And it's just easy to start with your

  • problem and work on that.

  • But there's always tons of really

  • interesting relevant work.

  • And so I tried to put aside some of the resources that I

  • found most useful.

  • So I have hyperlinks to some of the papers there and brief

  • explanations of what they're about.

  • And also information about other things that I'll talk

  • about at the end.

  • So in terms of learning, I think one kind of challenge,

  • even after studying learning for all my adult life, I still

  • think that there's this kind of intuition or this intuitive

  • theory that I have about learning that holds me back,

  • or misleads me when I'm making instructional decisions.

  • I think it's something that's sort of common

  • across many of us.

  • In the Cambridge Handbook of the Learning Sciences they

  • refer to this idea that when you learn something, you're

  • just dropping it into a bucket.

  • Learning is transferred from the teacher to the student.

  • So let's think about some content.

  • For example, learning from a video, like from a power

  • searching online video, a Khan Academy video.

  • Or reading text, which is how we absorb most of our

  • information.

  • Or you could think about learning from an exercise.

  • So a student solving a math problem, or someone attempting

  • to process a financial statement or do some budgeting

  • or solving a problem of managing

  • relationships with coworkers.

  • Anytime you see those icons I use to represent content, you

  • can stick your personal content in there.

  • So if you have a favorite example of learning or one

  • that's particularly relevant to your everyday experience or

  • your job, just feel free to insert that every time that

  • flashes up.

  • But I'll try to illustrate it with examples.

  • Because again there's good evidence that that's an

  • important thing to do for learning.

  • So I think there's this intuition that what learning

  • is about is just adding information.

  • So it's almost as if we have this model where

  • the mind is a bucket.

  • And so what does it mean to learn?

  • Well it means to take information and drop it in

  • that bucket.

  • Whether it's videos, text, solving stuff, just put it in.

  • And later on you can come back and get it.

  • But actually I think a much better way of thinking about

  • learning is that it's like taking a piece of information,

  • integrating it into the internet, as if it had

  • a web page.

  • So that galaxy is the internet.

  • And you can guess what those big giant planets are.

  • So how might you think about this analogy shedding more

  • light on learning?

  • Well under this kind of view, when you're adding content to

  • the internet, first of all, you have to

  • think about three stages.

  • Well what do you do before the content arrives?

  • Or what do you do in linking it to what's already known?

  • So linking is really important now.

  • You're not just dropping it in a bucket.

  • You're actually linking it to web pages that already exist

  • by citing them.

  • You're putting it on a particular domain.

  • And so I think this is very analogous to the challenge we

  • face as learners of how do you get information to link up to

  • what people already know?

  • And what kind of knowledge are you linking

  • new concepts to?

  • And I think that some things are obvious

  • when we look at it.

  • But it's definitely not at the forefront, I think, of a lot

  • of instructional decisions.

  • Also if we think now even of processes people engage in

  • while they're learning.

  • For everyone on the internet, time is limited; they're not going

  • to read your whole page.

  • If you're lucky, they'll even read the first few lines.

  • So we have to ask questions like, how do you structure the

  • information on a web page so that the really core

  • concepts jump out?

  • What are the key principles?

  • Or depending on the person, is the web page structured so

  • they can get to the information they need?

  • And so this is analogous to when we're processing

  • something like a video or a bunch of text, it's actually

  • just flat out impossible to remember all that information.

  • So what are the cognitive operations you're engaging in

  • that pick out some information as being more

  • important than others?

  • Some relationships or some principles or some details of

  • whatever content you're learning.

  • And then finally, what's the last part of getting a web

  • page off of the internet?

  • Again, I think you guys all have an advantage in

  • answering these audience type [INAUDIBLE] questions.

  • But it's actually being able to find it.

  • And this is probably the most overlooked part of what

  • learning involves.

  • Because anytime you learn something, it's only going to

  • be learning if it influences your mind in a way that, at

  • some point in the future, you're going to act

  • differently because of it.

  • So you have to retrieve that information somewhere.

  • So for example, the final stage of learning is it's got

  • to be set up in a way that it's got the right cues or the

  • right connections that when someone goes looking for that

  • information, they're actually going to find it.

  • For example I might be really interested in things about

  • mathematics from a video.

  • And you could say that I know it.

  • But when I am actually faced with a situation where I have

  • to solve or differentiate something, am I actually going

  • to remember that fact at that moment?

  • Or am I going to remember something else I learned about

  • differentiation, or something else about functions?

  • And so really a key challenge in learning is the instruction

  • has to actually ensure that people are storing information

  • in a way that they can access it later.

  • And actually a key part of that is going to be that after

  • instruction takes place, it's actually really important to

  • give people practice, testing them and sort of

  • having them access that information.

  • What could be called retrieval practice.

  • OK.

  • So one thing I wanted to hammer home at this point, in

  • terms of why it's really important to think about how

  • hard learning is and why these two models kind of matter, is

  • that there's this phenomenon called transfer, or this

  • distinction, which is that learning might just be taking in,

  • again, a piece of information.

  • Transfer is when you take an abstract principle from one

  • context and use it in another one.

  • So transfer is extremely rare.

  • And I think this is probably the biggest, most important

  • thing I feel like I know now that I didn't

  • know eight years ago.

  • It's that I assumed that if I'm told something and I don't

  • remember it, well, I just forgot it.

  • I have to practice a bit more.

  • But actually it's a much more profound problem than that.

  • Some people make this statement that almost nothing

  • that we learn in one context is going to get transferred to

  • a very different context.

  • And so actually the way we do most of our learning is by

  • really learning things in very specific contexts and slowly

  • generalizing them out.

  • So for example, let's think of a problem.

  • Let's say that you're a general

  • trying to invade a castle.

  • And you've got 100,000 people.

  • And you know that you can take it if you send them

  • all at the same time.

  • Unfortunately the person in the castle knows that as well.

  • And they've mined the roads so you can only

  • send 20,000 at a time.

  • So that's just not enough.

  • You can't send 20,000 people to take the castle.

  • So how do you solve this problem?

  • You need 100,000 to arrive at the castle.

  • But you can only send 20,000 along any

  • single road at a time.

  • OK.

  • Yeah.

  • There are multiple roads.

  • So the idea is that you divide your force up, and

  • they go and do it.

  • Here's a separate problem.

  • It's going to use a different kind of technique.

  • You're a physician.

  • And you need to kill a tumor in someone's stomach.

  • And you've got really high frequency rays that can

  • destroy the tumor.

  • But it's also going to take out a large part of the

  • person's flesh.

  • And you've got low frequency rays.

  • But they're not actually going to kill the

  • flesh or the tumor.

  • How do you solve that problem?

  • Yeah.

  • So here I gave you guys an advantage.

  • But what's shocking is that you can bring

  • someone right in front of you.

  • They'll do the problem, turn the page to the next problem.

  • Only 20% of people will actually transfer it spontaneously.

  • You could even help them and get them to elaborate the

  • principle, do all kinds of things when they're learning

  • their first problem.

  • They still don't make the transfer spontaneously.

  • If you tell them, I think the one before is relevant, then

  • they might stop and really think and

  • they're like, oh yes.

  • I see how to solve it.

  • But that kind of transfer, you obviously can't have someone

  • running around telling you what's relevant to what.

  • I think it really, in my mind, highlights something that

  • seems so obvious, like the common principle there.

  • It does not happen for people at all.

  • It just doesn't come across.

  • It's almost like they learn something about generals and

  • invasion, and they learn something

  • about medical practice.

  • They didn't learn that abstract principle, and it's

  • really tough to get people to.

  • In fact, I'd bet I could give you guys similar versions

  • of this problem embedded in your life in a week, or a few

  • weeks, and you'd be surprised how high a rate

  • you'd fail at that.

  • They've even done really compelling studies where they

  • take business students.

  • And they have them learn a negotiation strategy.

  • For example, we all think negotiation is about

  • compromise.

  • But actually one much better strategy is to find out what

  • the other person wants that you don't really care about

  • and what you want that they don't really care about and

  • trade off on that.

  • So we could have an orange, and if we're fighting you

  • could take half, I could take half.

  • But maybe I want the peel because I have to

  • flavor up some food.

  • And you actually want the inside because you're hungry.

  • That's a trade off negotiation strategy.

  • So what's interesting is when you take MBA students and

  • teach them this strategy, they do everything that you want

  • your students to do.

  • They understand the strategy, they can explain it to you,

  • they can give you other examples of the strategy.

  • They really get it.

  • Except when they actually have to use it.

  • If you put them in a face-to-face negotiation, they

  • fail to use the strategy.

  • So that's really compelling.

  • Because even in a situation where someone's done

  • everything that we like students to do, and students

  • almost never get there all of the time, it's still the case

  • that they're going to fail the transfer.

  • And so I think that really highlights the importance of

  • thinking carefully about what sort of processes you have

  • people engage in while they're learning, and also having

  • assessments that can really help you understand if people

  • have transferred or not.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: Oh sorry, yeah.

  • I'll clarify.

  • So they could tell the strategy to you and they could

  • write out and explain the principle.

  • But if two minutes later they were then in a situation where

  • they had to negotiate with someone face-to-face as part

  • of a team, 20 to 30% would do it.

  • 20 or 30, but it was very low.

  • AUDIENCE: Was anyone asking the question, what do you want

  • that I don't want?

  • JOSEPH JAY WILLIAMS: Exactly.

  • And it's shocking, because they really did

  • understand the strategy.

  • But face-to-face, it's almost like you have a different set

  • of concepts when you're dealing with people and a

  • different set of concepts when you're learning something in

  • your class.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: Yeah.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: I guess it's probably both true, but I

  • would say I lean towards the second one.

  • The way some people write about it, your mind is not

  • designed for transfer.

  • It's not designed.

  • You've got concrete concepts that correspond to different

  • things, and if you want transfer you have to do

  • something special with them.

  • Actually that's the original paper that showed the general

  • and the medical study.

  • This is one that showed it with business students.

  • And actually this guy wrote a book that sort of

  • put transfer on trial.

  • Where he pretty much argues, he thinks transfer almost

  • never occurs.

  • And so what we need to do is teach people specific skills

  • in specific contexts.

  • Obviously that's an extreme position but that's just to

  • let you know the tune of how people think about this.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: Yeah.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: Oh sure.

  • The question was, are these theories of transfer also true

  • if, while they're learning about the concept, they do

  • mock negotiations?

  • Or the other point was maybe if they do three different

  • negotiations across different contexts.

  • And so those things definitely help and they make a big

  • difference.

  • The really key principle I've seen is that there's a big

  • focus in business and medical education

  • on case-based reasoning.

  • And actually there's good evidence that how people

  • structure abstract concepts around a specific case is

  • really useful.

  • But the experiment by Gentner, Loewenstein, and Thompson,

  • what this showed is that even if it helps to have a specific

  • case, you still don't necessarily get

  • the abstract principle.

  • So what is really nice about the experiment is, they would

  • give people two cases, one after the other, as was

  • suggested, versus giving the two cases side by side, and

  • actually tell people what are the similarities and

  • differences between these two cases.

  • In their study it was about business, but say it's the

  • story about the general and the story about the medical doctor:

  • which agents are playing what role?

  • And what's the relationship that exists here?

  • And so when you actually have that pretty extensive process,

  • which is really good at helping people really get

  • abstractions in a way that just telling it to them in words

  • doesn't, when they actually have to process this

  • and align items, that actually makes a big difference in

  • abstraction.

  • But it's not enough for them to just see cases.

  • That helps.

  • But you've really got to add something extra onto it.

  • Comparison, I would say, is one of the super processes that

  • we've seen for getting people to really grasp an abstract

  • concept that they wouldn't get in words or won't get

  • just from examples.

  • OK.

  • Well, in terms of thinking about if someone's about to

  • learn some content, what are things that

  • we're going to do before?

  • Are they going to watch a video about some training they

  • have at work?

  • Or are they going to read a text about mathematics?

  • So what we typically would do is give them that information.

  • And we might ask one or two questions or tell them what

  • they're going to study.

  • You're going to learn about math.

  • But pretty much you give them the information very much in

  • line with putting it into their head like it's a bucket.

  • But actually there's this really interesting set of work

  • on what's called Problem Based Learning.

  • And so the idea here is that with a very minimal change you

  • can influence how people are going to process

  • that video or lesson.

  • If I ask them a question beforehand, that's going to make

  • them interpret everything they learn as a

  • solution to a problem.

  • Instead of, for example, just reading a bunch of

  • text about solving algebra equations, imagine that was

  • prefaced with: if you have two variables in an equation and

  • you're trying to figure out how to isolate one, how

  • would you do it?

  • The student will probably not actually know how to do that.

  • They'll fail.

  • They'll try all kinds of things that are wrong.

  • But the mere fact that you got them thinking about that

  • problem, that means that as they go through the lesson,

  • they are encoding all that information as a solution to

  • that problem.

  • For example, you can think of cases where people have told

  • you something and you take it for granted or encode it, but

  • if you'd first been trying to solve a problem and

  • then got that piece of information in response, that

  • would've been much more effective for learning.

  • And so what's really nice about this manipulation is

  • that it's very minimal in that all you've got to do is just

  • take whatever content you have and precede

  • it with some questions.

  • For example, if you're about to learn, let's say, about

  • power searching, you could ask someone first, how do you

  • find this kind of content?

  • Or if you're learning mathematics, ask, is it possible to

  • construct this kind of relation? Just having

  • people think about that will then lead them to process

  • everything they see afterwards differently.

  • It's also more motivating for students in that they're not

  • just absorbing facts, but they feel like they're actually

  • learning with some kind of purpose in mind.

  • In terms of thinking about what you can do during

  • learning, and here I'm going to focus on the work I've done

  • on explanation just because it's what I know best in terms

  • of presenting.

  • So the idea here is that we all have this intuition that

  • it's really good if we actively process information.

  • Or if we have ever tried to explain a concept to someone

  • else, something about actually explaining it to them might

  • actually lead you to realize something new.

  • But despite that, it's not so often that instructional

  • videos or texts have questions embedded in them or activities

  • that force you to generate explanations.

  • And so there's a good bit of evidence that this does

  • actually help learning.

  • If you can prompt someone as they're learning to generate

  • explanations, they understand it more deeply and they're

  • more likely to transfer the principles to a new situation.

  • The question that was interesting is understanding--

  • well, not to be too meta about it--

  • but why is it that explaining why helps you learn?

  • And this is really relevant in terms of thinking, when are

  • you going to ask learners for explanations?

  • You can't ask someone all the time.

  • Or is there some kind of content you should ask about?

  • Or what kind of knowledge is going to be targeted when you

  • ask for an explanation?

  • So it's got really important practical

  • implications in that sense.

  • So now we contrast just throwing someone into learning

  • and letting them use whatever strategies come to them, with

  • actually prompting them while they're learning to explain

  • why they're doing something, to explain why a fact is true

  • instead of just accepting it.

  • And so we looked at this in three contexts.

  • The first was in terms of learning artificial

  • categories.

  • So even though there's lots of work on explanation in rich

  • educational settings, this is the first study that's

  • actually in a lab context with artificial materials where we

  • can really control things very precisely.

  • Even if we have a sense that explaining is helping, it's

  • very hard to pin down exactly what kind of knowledge people

  • are gaining.

  • Is it just that explaining is making

  • people pay more attention?

  • So the mere fact that you have to answer a question means

  • that you've got to pay attention to what you're

  • learning instead of dozing off or looking up information.

  • You've got to spend more time processing it.

  • It could also be that because you have to provide an

  • explanation you're more motivated.

  • There's a standard social context where you have to

  • produce information that could possibly be relevant to

  • someone else.

  • And so if that's how explanation works, then that

  • suggests one way in which we'd use it as an

  • instructional tool.

  • But the idea that we wanted to test is actually that

  • explanation's effects are a lot more selective.

  • That there's something about explanation that doesn't just

  • boost your general processing or attention, but actually is

  • constraining you or forcing you to search for underlying

  • principles or patterns or generalizations.

  • And so this is a question we could test more readily when

  • we look at a constrained context like learning

  • artificial categories.

  • We also examined explaining people's behavior.

  • So for example, if you notice a friend donates to charity,

  • that wasn't me, right?

  • OK, that's fine.

  • If you notice a friend donates to charity, you could just

  • encode this fact about them.

  • You might make other predictions from it like,

  • they're going to have less money in their pocket at the

  • end of the day and probably won't treat you for drinks.

  • Or you could try and explain it.

  • Well why did that person give to charity?

  • And again, that's a different way of learning

  • about another person.

  • And so does that just boost your general attention to what

  • they're doing?

  • Or is it actually going to do something deeper like make you

  • search for an underlying principle?

  • And the final context we looked at this in was actually

  • trying to learn something more educational.

  • So the idea here was that we had people learn a bit about

  • the concept of variability.

  • And this is an example.

  • I'm actually going to talk more in depth so that I don't

  • spend too much time on this part, but I'll sum up the main

  • conclusions.

  • Across all of these domains the basic form was for

  • different materials and different kinds of knowledge,

  • but the basic form was asking a why question.

  • Explaining category membership, explaining why

  • someone did something, or explaining

  • the solution or problem.

  • And that would be contrasted with letting people choose

  • their own strategies, so being able to study freely.

  • And we also contrasted it with asking people to say out loud

  • what they were thinking.

  • Because it could be that the main effects of explanation

  • are actually having you clarify your thoughts.

  • Once you state it explicitly you can see what you were

  • thinking, what was missing, what the gaps

  • in knowledge were.

  • And so we're trying to pick apart that effect from whether

  • there's something else, where explaining is driving you to

  • search for underlying principles.

  • And then following the study, we gave people learning

  • measures that tapped exactly what they'd learned.

  • So to talk about going into that third one in more depth,

  • I'll just run through the procedure so that

  • you can get a sense.

  • So the idea here is that people have to learn a

  • university's ranking system from examples.

  • So you're told that a university has a way of

  • ranking students from different classes.

  • And you're going to see examples of students who have

  • been ranked by last year's officer.

  • And you have to learn, well, what's the

  • system for doing it?

  • What information are they using?

  • So for example, you've got information about someone like

  • their personal score, their class average, the class

  • minimum score, the class deviation.

  • So for example, this might be John.

  • He got an 86% in history.

  • You might see what the top score is, and what the class mean

  • and standard deviation are.

  • Then there's another student.

  • Then in physics, he got about 80%.

  • Again, you see the top score, the class mean, the class

  • standard deviation.

  • And so, you'd actually be told which one was ranked higher.

  • So for example you'd be told the person on the left was.

  • And so let's think about how we

  • can learn this.

  • There are a bunch of different ways you could approach it.

  • You could pay attention to lots of examples, and then

  • that would inform future situations.

  • You'd automatically get an intuition for

  • how people are ranked.

  • You could try and look for different patterns.

  • So for example, maybe people are ranked higher just if they

  • had a higher score.

  • The university is pretty egalitarian across courses.

  • You could think it's actually how far they are above the

  • class average.

  • So for example, this person's 6% above the class average,

  • whereas this person's 4%.

  • So even though they got a much lower score, they might still

  • actually be ranked higher.

  • Actually sorry, this is consistent in this case, but

  • the reason this person's ranked higher is not because

  • they got a higher score.

  • It's actually because they're farther from the average.

  • You could also think it's actually just a matter of

  • being close to the maximum score.

  • If you're close to the top person then you deserve to be

  • ranked higher no matter what.

  • And then finally, if you think about what statistics would

  • suggest, there's this idea that you should actually look at

  • the distance from the average, but weighted by whatever the

  • standard deviation is.

  • And so the idea here is that what you really want to know

  • is, how far above most of the other students in the class

  • was each of these people?

  • And that'll give you a measure of whether you should rank

  • someone highly or not.

  • How many standard deviations above were

  • they, what is the Z-score?
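
  • That Z-score rule can be sketched in a few lines of Python; this is just an illustration, and the class means and standard deviations below are made-up numbers, not figures from the study:

```python
# Rank two students by how many standard deviations each one is
# above their own class average (the Z-score rule).
# All numbers are hypothetical, chosen to mirror the talk's example
# (John is 6 points above his class mean, the other student is 4).

def z_score(score, class_mean, class_sd):
    """How many standard deviations a score is above the class mean."""
    return (score - class_mean) / class_sd

john = z_score(86, 80, 3)    # 86% in history, mean 80, SD 3 -> 2.0
other = z_score(80, 76, 4)   # 80% in physics, mean 76, SD 4 -> 1.0

# The raw-score rule also picks John here (86 > 80), but only the
# Z-score rule was consistent with all five ranked pairs in the study.
ranked_higher = "John" if john > other else "physics student"
```

  • Note that the rules can agree on a given pair, which is exactly why observations that conflict with some rules but not others are needed to tell them apart.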

  • And so we gave students a bunch of these observations.

  • They saw five ranked pairs in total, and the actual true

  • rule was using the number of standard deviations someone

  • was above the average.

  • But the other rules were pretty salient.

  • And in pre-testing, we find that a lot of

  • students endorse them.

  • Whoever got the highest score should be ranked higher.

  • Whoever's closest to the maximum, and so on.

  • And so some of the observations just didn't fit

  • with those rules.

  • So they might fit sometimes with the higher raw score, but

  • sometimes they don't.

  • They might match the close to maximum rule, but not always.

  • And so the idea here was, if explaining really forces

  • people to seek underlying principles, then explaining these

  • sorts of observations, ones that conflicted with some

  • rules but not others, should drive them to actually find the

  • underlying principle, the rule about deviations, that actually

  • applies to all the observations.

  • And so this is what we found.

  • This is showing you people's accuracy in classifying ranked

  • pairs before and after and how much they improved.

  • And what you see is that people who had to explain had

  • a much bigger increase in accuracy, of about 40%,

  • compared to those who were writing their thoughts.

  • That's interesting because writing your thoughts is a

  • pretty elaborative activity.

  • You have to express what you're

  • thinking about the materials.

  • You have to process them deeply.

  • But explaining [INAUDIBLE]

  • this broader principle.

  • And that's just an illustration because across

  • all those contexts where there was learning about categories,

  • explaining helped people find a broad generalization that

  • accounted for membership for all the robots.

  • If it was explaining people's behavior, again, you were more

  • likely to discover a general pattern like, young people

  • donated to charities more often than older people.

  • So this suggests that, in terms of thinking about

  • educational contexts, when you want to ask someone a why

  • question, it's not necessarily a good idea to just always be

  • prompting them, or just trying to boost their engagement.

  • You actually want to be selective: when

  • they're given examples that can point them towards an

  • underlying principle, then you prompt

  • them to explain why.

  • Or if you think they don't have enough knowledge to

  • induce the principle yet, then it might actually make more

  • sense to engage in other activities, like helping them

  • acquire new concepts or have more facility with

  • understanding what the different facts are.

  • And then they might reach a stage where they can actually,

  • with prompting, be able to induce

  • the underlying principle.

  • Another interesting thing is that in the second

  • line of experiments, we found that explaining actually

  • boosted the extent to which people used their knowledge in

  • finding underlying principles.

  • So the idea was that in the absence of explaining you

  • might have a lot of relevant knowledge that might suggest a

  • deviation is a good rule or something else is.

  • But you don't actually bring it to bear.

  • If you explain though, and you don't have that knowledge,

  • well then you might find some pattern but it's not

  • necessarily the correct one.

  • So the idea is that explaining and prior knowledge interact

  • to guide what patterns people are searching for, and what

  • they're likely to detect.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: You mean in general, or with respect--

  • AUDIENCE: You said [INAUDIBLE]?

  • JOSEPH JAY WILLIAMS: So we ran it online and we just asked

  • people afterwards how familiar with it they were.

  • And it was a pretty big range.

  • Some people had never heard of it before, but a decent number

  • had at least come across the concept before.

  • And those people though, they still did really

  • badly on the pre-test.

  • So in a lot of experimental situations you can look at

  • interesting effects of learning.

  • Because the truth is that learning just doesn't really

  • accrue that well the first time around.

  • So often there's relearning the material.

  • And even with extensive practice, it's hard to get

  • people to get that idea of the Z-score principle.

  • And we didn't find differences between people

  • who had low and high exposure.

  • But again that might be because we need

  • to broaden the sample.

  • Because it was online we might just be sampling people who

  • didn't have a lot of knowledge.

  • They wouldn't sign up for a math experiment maybe.

  • Also in terms of thinking about explanation's effect, it

  • really was selective.

  • And actually what we found is that it didn't help you

  • remember information better.

  • Sometimes even impaired it.

  • But especially if there wasn't a reliable pattern present, or

  • if you couldn't actually find it because you just didn't

  • have enough knowledge to, explaining actually impaired

  • learning in some cases.

  • Because it drove people to keep searching for a principle

  • that didn't exist.

  • Or one that was only partially reliable.

  • And so this is something you might think about in terms of

  • when people make these mini-generalizations, or they make

  • these kinds of bugs that seem sort of half right.

  • Children do this a lot in algebra for example.

  • And it might actually be from trying to seek explanations

  • and construct some kind of pattern.

  • But it's just that it characterizes the cases they've

  • seen so far.

  • But it's not actually going to help

  • them with future learning.

  • And so again that's another reason to think carefully

  • about well, when are you going to prompt someone for an

  • explanation?

  • You need to do it at a point where it's not going to hinder

  • their learning or make it worse.

  • On the flip side, it's not saying explanation is bad,

  • it's just saying it's sort of like a powerful selective tool

  • that, if you use it in the wrong cases,

  • will actually be harmful.

  • And then since most of the work has dealt with adults,

  • we also looked at the effects of explanation in

  • five-year-olds learning causal relationships.

  • And we found very similar results.

  • Even though they were young and their ability to verbalize

  • was pretty limited, they were still able, when prompted

  • to explain, to discover underlying patterns.

  • And especially to bring prior knowledge to bear, and

  • actually use it to discover patterns that they wouldn't if

  • they weren't explaining.

  • And in terms of thinking how to take these kinds of

  • findings online, one really good context is thinking about

  • things like online mathematics exercises, like

  • the ones that Khan Academy has in their framework.

  • So that's a mini shrunken version but basically people

  • will get a problem statement, and they'll try and solve it

  • and type in an answer.

  • And along the way if they wanted hints, clicking on a

  • hint button gradually exposes the actual solution.

  • And so in the literature this is called a worked example.

  • And it's sort of famous that sometimes they can be much

  • better for students' learning than just trying things out

  • and not succeeding.

  • But it's really hard to get people to process them deeply,

  • to actually not just skim over it but understand each

  • component, each step, and how it relates to something more

  • general, so to get the principle.

  • As in that last case, explaining could be really helpful.

  • And so there are a number of different ways we're thinking

  • of approaching this.

  • But the simplest version is something like, when someone

  • enters a correct answer, or even if they're following a

  • solution, you can prompt them to try and think, well,

  • explain why that solution's correct.

  • Or explain why that person took the step they did.

  • Even though it seems, of course, you just saw the

  • correct answer, you should be thinking

  • about why it's correct.

  • This is actually not necessarily that common.

  • I mean, it's not guaranteed.

  • A lot of students may be trying to memorize the answer.

  • They may be trying to explain other aspects of the solution.

  • There are many things that people may be doing.

  • And when you do detailed analyses of children's

  • strategies, you find incredible variation.

  • And so the idea is that just having that prompt focuses them on

  • understanding how that particular answer, or that

  • particular solution, is just an instance

  • of a general principle.

  • What's the general strategy that was used here?

  • What's the key concept?

  • Now one thing that could be problematic is even though

  • this is promising because you can get people to learn just

  • by being prompted, so you don't necessarily have to give

  • them feedback, when people don't get feedback they often

  • tend to feel like they're throwing

  • explanations into the void.

  • They're just talking and it's disappearing.

  • And so one thing I'm going to try is try and couple this

  • really closely with actually giving them an explanation, a

  • correct one, or almost correct one.

  • And tell them that's another student's or teacher's

  • explanation.

  • And so the idea is not only to change the task from doing

  • something that the teacher or instructor's asking you to do,

  • typing something in, to generating an explanation that

  • you're then going to compare to what someone else said.

  • Actually the idea here is that at this point you could ask

  • them to do things like, once they've got an explanation

  • written down and they see someone else's, well, what if

  • you grade both of them?

  • Do you think that was a good one on a scale from one to 10?

  • Was your explanation good on a scale from one to 10?

  • You could have them elaborate on what they thought might be

  • better or worse about an explanation.

  • Or you could even just have them make ratings like a

  • similarity or dissimilarity.

  • So it's not very common to ask for these kinds of ratings

  • in a MOOC or online.

  • And I think it could be because often if you just ask

  • a question, and at random, 1 to 10 rating, people may not

  • give informative answers.

  • But if you use the kinds of assessments that psychologists

  • have really honed for understanding things like what

  • concepts people think are similar, or people's ability

  • to predict when they make errors,

  • I think there's some really useful information you can

  • get out of this.

  • And what's even better is you're getting information

  • about what students think, and you're helping them learn at

  • the same time.

  • Having them compare what they generate to what

  • someone else generates.

  • It could be very powerful.

  • There's a good reason to think it would be very

  • powerful for learning.

  • Another thing that would fit in really well here is that

  • there's work by people at Khan Academy, like

  • [? Carolyn Rosie ?]

  • and David Adamson.

  • And what they've got is they've found a way to

  • leverage the minimum amount of AI you can get into a

  • web-based system to get lots of students learning.

  • And so the idea here is that when people might type in

  • something like this, like explain why you

  • think that's correct.

  • They're placed in a small chat room with a

  • couple of other students.

  • So this is not necessarily embedded in the context of

  • a Khan Academy exercise.

  • But it's embedded in the context of trying to solve a

  • problem together.

  • And so someone has to provide an explanation.

  • And then the chatbot can then say things like well, John,

  • what do you think about that person's explanation?

  • How would you say it in your own words?

  • Do you disagree?

  • And so these are simple questions you can ask people.

  • But they've been shown to have a really powerful effect on

  • students' learning.

  • The term for this class of strategies is accountable

  • talk moves.

  • And the idea is if I get people to really think about

  • what they're saying and how it differs from other people's

  • ideas, you can get a lot of learning.

  • And once you have that chatbot in there, it can also act as a

  • sort of instructional tutor, but one that doesn't need a

  • lot of attention, a lot of time.

  • Because students will often carry a lot of the

  • conversation themselves.

  • And the key thing is just doing things like adding

  • prompts to keep it going.

  • Or matching a student's explanation to something else

  • in its database.

  • And then, rather than telling them, just asking the student, is

  • this what you meant?

  • And so it's a great way that, with very simple technology, you

  • can give an answer, and the answer's probably not going to

  • be quite right because the NLP isn't there.

  • But then actually the student's having a learning

  • experience because they're now trying to correct what you

  • said and put exactly what they were trying to get across.

  • And it's a really exciting line of work to look at.

  • So in terms of thinking about the benefits of explanation, I

  • think it's a great way to have instructor guidance, because

  • you can ask people questions that

  • guide how they're thinking.

  • While at the same time having learners generate information

  • or generate knowledge.

  • So they're actually constructing it.

  • It's also especially on online courses useful because you can

  • do learning without any feedback.

  • So you might have to do things to get people to actually

  • generate explanations, but it's a great way of actually

  • maximizing learning even if

  • you don't have the resources to give people

  • individualized feedback.

  • And again it's really key, because as I mentioned,

  • transfer is really challenging to people, so the fact that

  • explanations help them understand these abstract

  • principles can have a very powerful

  • effect on their learning.

  • So in terms of thinking of what you can--

  • do you have a question?

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: That's a planned study, that's a

  • planned study, yeah.

  • So--

  • AUDIENCE: [INAUDIBLE]?

  • JOSEPH JAY WILLIAMS: Yeah.

  • I think we're really excited to look at--

  • oh sorry, the question was, did we find effects of having

  • them grade other people's explanations?

  • I was just saying that that's a planned study.

  • Because we can update tons of things in the lab when using

  • software [INAUDIBLE].

  • But when we have to deal with a platform that has its

  • constraints, unfortunately it takes a lot of adaptation.

  • Just on a side point, actually, one way I think that

  • research psychologists can be really relevant to people

  • actually doing practical work, like at edX, Coursera,

  • or Khan Academy, is they have a lot of constraints in terms of

  • the system they build having to scale, and

  • getting things done.

  • But actually I think something like a sand-boxing addition to

  • any of those platforms would be very useful.

  • For example there's like 50, I mean literally 50 software

  • packages that don't require programming experience.

  • You can author an experiment really quickly.

  • It won't be as beautiful or as glossy as, say, what

  • Khan Academy's perfected.

  • But it would be a great way to test out very quickly and

  • cheaply lots of different strategies for learning.

  • And then once you figure out which ones seem to work really

  • well for learners, that's what you put your

  • development time into.

  • Into actually investing in it.

  • For example with Google's Open Course Builder, maybe there's

  • functionality you're thinking of implementing.

  • Well before doing that, you could try and add an addition

  • to a power searching course where some people will opt

  • in to taking a different software, and then we can

  • replicate tons of things in terms of the questions they

  • asked or the examples they see or the format of video

  • presentation.

  • And then when you see which ones are really effective

  • that's some functionality we can then

  • build into the platform.

  • So in terms of thinking about what people can do after

  • learning, what happens after studying is done actually has a

  • huge effect on learning.

  • I think a really interesting idea here is to use

  • assessments as instructional tools.

  • So normally we think of a test as just being to measure what

  • someone's done.

  • So it's like an inconvenience.

  • It's taking away time from learning.

  • And actually at McKinsey, for example, especially when they

  • do training of soft skills, the instructional designers

  • complained that when they spent 20% of the time actually

  • testing people, it was a waste of time, because that was

  • less instruction people were getting.

  • I think that's something that maybe needs to be re-thought

  • in some ways.

  • Because there's now really good evidence that actually

  • testing people can have a really powerful effect on what

  • they learn.

  • If you think of the traditional model where you do

  • lesson one, lesson two, and lesson three.

  • And then maybe you do them again.

  • You drop some more information into the bucket.

  • Versus actually after each of those you actually add an

  • assessment.

  • So there's actually tons of reasons

  • testing can help learning.

  • For example if it gives you feedback, if it makes you more

  • motivated to study.

  • But I'm actually going to talk about the effects of testing

  • that are only due to taking the test.

  • So there's no feedback involved, and you're not

  • allowed to restudy.

  • So the idea is that there's something mnemonic, or

  • something cognitive, about having to

  • generate information yourself.

  • Even though you get no new information

  • and you don't study.

  • And on a test you normally generate much less than what

  • was in the actual lesson.

  • So this refers to the testing effect.

  • As you can imagine, if you give a learner the option to

  • study something once, and then following that they have the

  • option of studying it again, versus they had the option of

  • studying it and then taking a test.

  • Most learners would opt for this first one.

  • And in fact, there's good data that they do.

  • And in a way it's sort of strange to think that taking a

  • test would be helpful because you can't recall the 200 facts

  • from the video.

  • You can only recall a subset of them.

  • But what's actually been found is if you look at immediate

  • testing, so let's say you do the two videos, you do a

  • version of the test, and then you, let's say, do one more test

  • shortly after.

  • You actually don't find much of a difference, or you

  • find a slight advantage for spending extra time studying.

  • But if you wait an hour or a few days or weeks, what you

  • actually find is that studying twice is definitely much worse

  • than studying and being tested.

  • In fact the most recent paper in Science found that you

  • could have people study four times in a row, and it wasn't

  • as good as studying and then testing.

  • So the key thing that's going on here is thinking about the

  • retrieval stage of learning.

  • Where it's not just about having

  • information in your mind.

  • It's about actually being able to get at it.

  • And when you do a test you're forced to form the cues or the

  • retrieval pathways or connections that will help you

  • get back that information.

  • What's interesting is that in all these studies, even though

  • they were doing worse in the study/study condition,

  • they claimed that they were doing better.

  • And they expected their performance to be higher.

  • Another really interesting finding actually that I think

  • has gotten less attention but it's very powerful is what's

  • called the Mixing Effect.

  • So let's say you have questions of type A, B, and C.

  • Sorry, lessons A, B, and C. It's natural to put the

  • assessments for A, whatever exercises you have, right

  • afterwards, and so on for B and C.

  • But the idea behind the Mixing Effect is, you don't make new

  • exercises or do anything different.

  • You just scramble the order.

  • So after you learn A, you get one question on A. After you

  • learn B you get a question on B and a question on

  • A. And then when you finally finish it out on C, you get a

  • mix of all of them.
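
  • That schedule can be sketched directly; this is just an illustration, with generic lesson labels rather than anything from the talk:

```python
# Blocked vs. mixed (interleaved) quiz schedules over the same lessons.
# Lesson labels "A", "B", "C" are generic placeholders.

def blocked_checkpoints(lessons, per_lesson=2):
    """After each lesson, ask questions from that lesson only."""
    return {lesson: [lesson] * per_lesson for lesson in lessons}

def mixed_checkpoints(lessons):
    """After each lesson, ask one question from every lesson so far,
    so the learner must figure out which principle each one needs."""
    return {lesson: list(lessons[: i + 1])
            for i, lesson in enumerate(lessons)}

blocked = blocked_checkpoints(["A", "B", "C"])
mixed = mixed_checkpoints(["A", "B", "C"])
# mixed == {"A": ["A"], "B": ["A", "B"], "C": ["A", "B", "C"]}
```

  • The point of the sketch is that the question pool is unchanged; only which checkpoint a question appears at is scrambled.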

  • And so this is especially dramatic because you haven't

  • increased how many exercises you have to make.

  • Learners do terribly when they're doing this and they

  • think they're doing really badly.

  • But it has profound effects on their ability to actually

  • generalize.

  • Because the idea is that when you do a lesson and then

  • move straight into having a bunch of questions, it's

  • really easy to know which principle applies.

  • When you mix them together it's really necessary for them

  • to figure out what the correct principles are

  • and where they apply.

  • And so even though they do much worse in the learning

  • stage they do far better in generalizing.

  • And this is something you can easily do

  • in a textbook, right?

  • Take all the exercises and start mixing them around.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: Yeah.

  • Let me think a second.

  • So there's no study I can think of that said that.

  • Even though there might be one out there.

  • But I should have thought.

  • Because that would definitely be, as you

  • said, a great point.

  • AUDIENCE: The idea is to--

  • before you [INAUDIBLE]?

  • JOSEPH JAY WILLIAMS: Yeah.

  • That's a really good point, actually, yeah.

  • So I'll check the quiz.

  • And so this is the quiz [INAUDIBLE] on Mixing Effects.

  • So I'll take a look at the paper and see if these

  • [INAUDIBLE].

  • But that's true, that would work really well.

  • And it's not something, I think, that would necessarily

  • be obvious in this perspective.

  • But if you put together the problem-based learning work,

  • then it does make a lot of sense.

  • So basically I think what's really exciting about this

  • work is that before you had online education, you couldn't

  • have a real world lab.

  • You had to be in a psychologist's lab, because you need

  • control and you have to be able to measure

  • what people are doing.

  • But now that people have moved online, you've got a real

  • world environment that's exactly like a lab context.

  • You can do experiments, randomize assignments,

  • precisely deliver and control.

  • And you can have really quantitative measures of

  • learning that are automatically collected.
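
  • As a minimal sketch of that setup (the condition names here are hypothetical, echoing the explain-versus-write-thoughts comparison from earlier):

```python
import random

# Randomly assign each learner to an instructional condition --
# the basic move that online platforms make possible at scale.
# Condition names are hypothetical, not from any real platform.

CONDITIONS = ["explain_prompt", "write_thoughts"]

def assign(rng):
    """Give a learner a condition uniformly at random."""
    return rng.choice(CONDITIONS)

rng = random.Random(42)  # seeded so the sketch is reproducible
assignments = [assign(rng) for _ in range(1000)]

# With random assignment the groups come out roughly balanced, so a
# difference in an automatically collected learning measure can be
# attributed to the condition rather than to who signed up.
counts = {c: assignments.count(c) for c in CONDITIONS}
```

  • Everything downstream, which video a learner sees, which prompt they get, then follows from that one random draw.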

  • And so I hope that in the future what that means is that

  • you get a lot more cognitive scientists and experiment

  • psychologists actually working directly in

  • online education platforms.

  • Whereas normally it's incredibly difficult to do

  • education research.

  • You can't have a teacher around and assign instruction

  • to different students.

  • They just can't do it, you know?

  • Whereas you can have two different videos show up.

  • What's also really nice, then, is that we're doing research

  • in a context with real ecological validity.

  • And you're actually doing research on a product.

  • Where when you're done with your theoretical research it's

  • actually been improved.

  • It's better for learning.

  • And so what you can have is a lot of

  • evidence-based decisions.

  • You can take that product and deliver

  • it directly to someone.

  • So there's perfect fidelity to treatment.

  • Whereas a lot of research conclusions, even if they work

  • really well when the research is carried out, it's hard to

  • disseminate it, because the way someone else implements it

  • isn't guaranteed to be true to that.

  • There's great scalability because you've got these

  • implemented in an online system.

  • And especially you've got the knowledge really wrapped up in

  • the precise things that the researchers decide to ask.

  • And of course iterative improvement where it's

  • constantly getting better.

  • So given that all of these methods are improving

  • learning, I'm coming towards the end so I'm going to touch

  • on these things really quickly.

  • But the idea is, if you've got these ways of teaching a

  • concept that seem to be effective, well, what concepts

  • do you want to teach that are going to

  • have the maximum effect?

  • And any kind of concept that improves people's future

  • learning is a good candidate.

  • So, for example: improving motivation, learning

  • strategies, and online search.

  • And so in terms of increasing motivation, the idea here is

  • that one really powerful way to motivate people is actually

  • by targeting their beliefs about intelligence.

  • So do you agree, for example, that your intelligence is

  • something very basic about you that you

  • can't change very much?

  • Or do you think no matter how much intelligence you have,

  • you can always change it quite a bit?

  • And so Carol Dweck describes those as a fixed theory

  • versus a malleable theory of intelligence.

  • And it's pretty shocking what this predicts:

  • people who start off with equal IQs will diverge.

  • Because people with a fixed theory

  • avoid taking on hard problems.

  • People with a malleable theory always ask questions, they're

  • happy for challenges, they don't think it means something

  • negative about them.

  • And so what are the effects of actually teaching people a

  • malleable theory?

  • And so this is work that's done by PERTS, a project for

  • education that researches [INAUDIBLE] at Stanford.

  • That's Dave Paunesku and Carissa.

  • Actually Carissa's right here, she just

  • wanted to sit in.

  • And so what they did is try to distill all these insights

  • into a single lesson, where people will be taught that the

  • brain's malleable.

  • Then they would have some questions and make them think

  • about how it applies to their life.

  • And you can contrast that with the control condition, where the

  • same sort of lesson will teach people about the brain, but not

  • emphasize that your brain changes every time

  • you learn a new fact and that intelligence is constantly

  • being improved. Then they do exercises, and you try and take some

  • measurements of what people's mindsets are.

  • And so this is great in terms of actually being able to see

  • what effect this has on students.

  • And they've set up this whole system of actually running it in

  • high schools and middle schools and so on.

  • So it's got all those great features of a real world lab

  • that I mentioned.

  • But in addition to asking about mindset, they also just

  • collect the student's grades.

  • And normally I wouldn't collect measures on

  • [INAUDIBLE] grades, because it's so hard to impact

  • something that has so many variables feeding into it.

  • And there's a lot that goes on.

  • But actually what they found in this randomized controlled

  • trial is that just two lessons of about 45 minutes actually

  • boosted students' GPA.

  • Now you might think grades are not the best measure of

  • things, but if someone knows how to change grades they

  • really understand something really deep about the

  • psychological structure of what students are doing.

  • And you can imagine these kinds of effects applying to

  • all kinds of contexts.

  • For example in a workplace in terms of changing people's

  • beliefs about hard work or about what it means to get

  • negative feedback.

  • And so in terms of how I was thinking it would be good to

  • bring cognitive and social psychology together in this

  • way is just applying some of these principles, like

  • preparatory questions, explanations during the actual

  • learning of the materials, and afterwards, questions that

  • repeatedly apply the concept.

  • And so with Dave and Carissa, I have developed some initial

  • measures of how people might apply mindsets.

  • And we'd like to create video versions for MOOCs and Khan

  • Academy and that you can give to someone.

  • But we haven't gotten to that yet.

  • But what I've done is taken those cognitive principles and

  • applied them to the text they had.

  • And try and take this simple idea of a mindset, but teach

  • people it in a way that they're more likely to

  • remember it and apply it to situations in everyday life.

  • And so that's actually running now, but in some lab

  • studies at least it is having an effect, where my version

  • is better than the control for getting people to

  • believe in a growth mindset and make judgments about other

  • people that are consistent with the malleable theory, and the

  • effect of the addition of [? Khan ?]

  • [INAUDIBLE]

  • is increasing that further.

  • Well, that's backward.

  • But we have our fingers crossed that it's going to

  • turn out successfully, and especially

  • if it impacts grades.

  • Another thing that is really interesting is there's so many

  • ways this mindset idea can be implemented into courses.

  • For example changing feedback in exercises.

  • It's natural to tell someone who does well

  • that they did great.

  • And it's even better to tell them they're smart.

  • But the problem is when you tell someone that they're

  • smart, you're emphasizing something internal and fixed

  • about them, a quality.

  • And so there's actually work showing that that can actually

  • undermine children's abilities.

  • By praising them for their intelligence they can actually

  • start to become very resistant to trying things out or to being

  • wrong about things, to showing that they're dumb.

  • Whereas praising children for their effort, or praising them

  • for the way they've set up a problem or approached

  • something or tried to find out more is actually a lot more

  • effective in the long run.

  • As you can imagine, having Khan Academy exercises or workplace

  • evaluations frame things from the perspective of

  • improving, of people's abilities being malleable, can have a

  • really powerful effect on learning.

  • And so just quickly to go through, I guess I'll just

  • skip over this.

  • So basically I'd like to apply the same concept to learning

  • strategies.

  • And so the idea here is to integrate work not just from

  • cognitive psychology, but also ideas about mindset and habit

  • and behavior change, which is its own separate literature.

  • Get people to stop thinking about learning in terms of

  • memorizing facts but actually in terms of understanding.

  • And in particular that what it means to learn a concept is to

  • be able to understand it, to be able to teach

  • it to someone else.

  • So the idea would be to work within a MOOC, where they collect grades

  • automatically.

  • It's a great situation to introduce training videos on

  • learning strategies.

  • Like whenever you're studying something, imagine you're

  • going to be teaching it to someone.

  • Or when you finish something, imagine if you had to explain

  • what you learned to someone else.

  • And how technology can prompt people to do this.

  • And then see if it actually does have an

  • effect on final grades.

  • And MOOCs provide a great way to actually collect that data, to

  • do the kind of longitudinal study that you

  • just couldn't do before.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: What?

  • Oh, sorry.

  • OK.

  • A MOOC is a Massive Open Online Course.

  • So, for example, there was a course on Google Power Search.

  • You have 20,000 people and you teach them a course

  • on online search.

  • I guess you guys are all pretty much on board with

  • online search being important.

  • But actually I think everyone knows it's good

  • to be able to search.

  • But often we think of it as just a tool, like a tool for

  • something like fishing, versus a

  • tool kit for problem solving.

  • And online search I think is powerful for teaching because

  • if a student gets stuck on a math problem, and they

  • have the habit and the skills to be able to find

  • resources on the internet,

  • they can find a site that's better for them than maybe

  • their current textbook.

  • They could find a site that offers free tutoring.

  • If they ever later on in life need to solve a similar

  • problem, they can go online and find those resources.

  • And so I think online education that teaches

  • students better search skills, especially searching for

  • knowledge as opposed to just being able to find product

  • reviews online,

  • can be really powerful in terms of learning.

  • The fact that there's Google Scholar, it simply means that

  • if I hear a medical claim and I don't know if it's true, I

  • can go and look on Google Scholar and just

  • see what turns up.

  • I mean, I may not understand the journal article exactly, but

  • if I find a journal article relevant to it,

  • it tells me a lot.

  • So these kind of strategies could actually be a great way

  • to improve scientific literacy.

  • Even being able to pursue a project, or areas that are

  • not part of your current course, is also a great benefit

  • of getting students in the habit of online search.

  • Then you're allowing things to be a bit more independently

  • driven if they want to learn about a

  • topic that wasn't covered.

  • And you can even imagine using online search to do things

  • like, what are good learning strategies?

  • Or, how can I improve my grades?

  • We don't think to go and Google that online but it's

  • amazing what you will find on Google these days.

  • So in terms of just--

  • OK, I'm over time, but the one point I'd make is cognitive

  • science doesn't just have to inform instructional design.

  • I think it really interfaces nicely with the machine

  • learning work that people want to do.

  • Right now we've got binary answers on

  • multiple choice questions.

  • And it's true it's big data in that it's on the order of tons

  • of information.

  • But if you have a narrow bandwidth, you just can't

  • get that much out of it.

  • And so I think subtle changes to these things can make a big

  • difference in what we can get out of machine learning and

  • data mining.

  • For example, having people rate the plausibility of each

  • of four multiple choice options can turn our binary

  • score into four numbers that each correspond to a different

  • kind of concept.
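As a sketch of what that richer logging could look like: instead of storing one right/wrong bit per item, store a 1-to-7 plausibility rating for each option. The function and field names here are hypothetical, not from the talk; this is just a minimal illustration of the idea.

```python
# Minimal sketch: enrich a multiple-choice response from one bit to a
# per-option profile. All names and data are illustrative.

def enrich_response(ratings, correct_index):
    """Turn four plausibility ratings (1-7, one per option) into a
    record that keeps the binary score but adds the full profile."""
    # Treat the highest-rated option as the learner's choice.
    chosen = max(range(len(ratings)), key=lambda i: ratings[i])
    return {
        "binary_correct": chosen == correct_index,  # the old one-bit score
        "option_profile": ratings,                  # four numbers, not one bit
        "confidence_in_key": ratings[correct_index],
    }

record = enrich_response([2, 7, 3, 1], correct_index=1)
```

A data miner now sees not just that the learner was right, but how plausible each distractor looked, which is exactly the "four numbers that each correspond to a different kind of concept" idea.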

  • Just including those as standard practice.

  • For example, you just have them rate one to seven

  • [INAUDIBLE].

  • And so be like, everyone understands.

  • Having people predict their accuracy is also a really

  • powerful thing, because there was a study that [INAUDIBLE]

  • gave students, and then saw how the SAT correlated with their

  • later performance in undergrad.

  • But the SAT didn't correlate as well as how accurate students

  • were in predicting which questions they got wrong and

  • which they got right.

  • So this study had you do an essay question and say how

  • likely are you to get that right.

  • And the fact that you were good at judging what you

  • were getting wrong or right was a much better predictor of

  • undergrad success than actually the SAT itself.

  • And so if you're looking for an individual difference in

  • learners, that would be a great one.

  • How good they are at understanding gaps in their

  • own knowledge.
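That calibration measure can be sketched very simply: compare a learner's item-by-item predictions against which items they actually got right. The function name and data below are hypothetical, just to make the measure concrete.

```python
# Minimal sketch of a calibration score: the fraction of items where
# the learner's prediction of being right matched the graded outcome.
# Names and data are illustrative, not from the study described above.

def calibration_accuracy(predicted_right, actually_right):
    """Fraction of items where prediction matched outcome."""
    matches = sum(p == a for p, a in zip(predicted_right, actually_right))
    return matches / len(predicted_right)

# Note a learner can be well calibrated while still missing questions:
# knowing exactly which ones you missed scores 1.0.
score = calibration_accuracy(
    [True, True, False, True],   # "I think I got this one right"
    [True, False, False, True],  # what the grader said
)
```

Here the learner mispredicted one item out of four, so the score is 0.75; it is this match between self-judgment and outcome, not the raw grade, that the study found predictive.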

  • The same with similarity ratings.

  • It seems really simple, but that's again something that's

  • just been shown across tons of studies to reveal something

  • very deep about people's minds.

  • And I think also grading explanations.

  • So again that would be a much richer data set than is

  • currently available for actually understanding

  • what's going on.

  • And that's where you might see data turn into

  • the best data yet.

  • So let's say I give you two concepts and I say, how

  • similar is this mathematical concept to this

  • mathematical concept?

  • Or give you two explanations, and I say, well,

  • how similar are these?

  • How different?

  • And that's actually a really powerful way of getting to

  • what people think or believe.

  • OK.

  • And I'm happy to talk about this some more.

  • But I've tried to put together the resources that have been

  • useful to me on a website, in terms of citations.

  • And if you're really interested in improving

  • education, this is a great book on education policy that

  • looked at school systems across the world and tried to

  • see what insights they have for US education.

  • And I think especially online education is different.

  • Obviously there's a lot that's already known.

  • So we could benefit from a lot of that.

  • All right.

  • Thank you.

  • [APPLAUSE]

  • And do people have any questions?

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: So the question was, I was talking

  • about subjects to explain by drawing on knowledge that they

  • already had, and how did I measure it?

  • There are many different approaches people take to it.

  • Mostly past studies, like you said, administered a pre-test.

  • The way I did it was actually I manipulated knowledge

  • experimentally.

  • So they explained or didn't.

  • And as another experimental factor, I gave

  • them extra training or I didn't.

  • And so that's a way we can know exactly what

  • they've been learning.

  • AUDIENCE: That's one of the benefits of [INAUDIBLE]

  • is that they have learning without feedback.

  • But you mentioned that explanations were [INAUDIBLE].

  • Or those people could have had an idea of what you were

  • thinking, have a way of learning without feedback.

  • JOSEPH JAY WILLIAMS: Yeah.

  • So I think that's a really good point.

  • I think that's why I tried to see, well, what exactly is

  • an effective explanation?

  • Because then, once we have a bit of a better handle on what

  • explaining is doing, if it's driving people towards

  • principles, then we have a better sense of, OK, it's not

  • a good idea to ask this person to explain at

  • this point in time.

  • We should wait to ask them to explain when

  • they have more knowledge.

  • Or it would be a bad idea to ask them to explain why these

  • two problems are correct.

  • Because that might lead them to a local generalization that

  • only applies to those two math problems.

  • For example with the statistics example, it's

  • pretty reasonable to say people with higher scores

  • would be ranked more highly, but that's just accounting for

  • the observations you've seen so far.

  • It won't hold in the general case.

  • So I think features like that in terms of how you set up the

  • environment and which cases you ask people to explain

  • would be important variables to look at.

  • So I think then more research or using research that's

  • already there would be useful for that.

  • Another thing to keep in mind is actually that sometimes it

  • can actually be helpful for people to produce a wrong

  • explanation.

  • And there's a paper that suggests this and has some

  • evidence from the recording they did but it wasn't tested

  • experimentally.

  • Which is that once you've produced a misconception and

  • articulated it, it's then a lot easier to recognize it.

  • Because then you're not just implicitly

  • thinking through things.

  • You've actually made a statement.

  • Like, I think that that person's ranked high because

  • they got a higher grade.

  • And then when you see data that actually goes against

  • that, you can then revise your belief.

  • And for example, that's how a lot of children's learning

  • might work.

  • You could say, my prior over hypotheses is flat,

  • and I'm not going to commit to something unless I

  • have a lot of data.

  • Or you could say I'm going to try a lot of things out, and

  • then just be really open to revise them quickly.

  • AUDIENCE: [INAUDIBLE].

  • JOSEPH JAY WILLIAMS: Yeah.

  • So on that resource page I have some reference of the

  • [INAUDIBLE].

  • And I'll let you know some more.

  • But Eric Mazur, who's a physicist at Harvard, has

  • pushed this idea of, it's not a flipped classroom, but where

  • every class actually is structured around questions in

  • the sense that he'll start a lecture.

  • After five minutes he'll put a question on the board.

  • And people will start discussing it.

  • They'll turn to each other and they'll say what they think.

  • Then they'll argue for, they'll argue against.

  • Then he'll walk around and talk with them, and they'll get

  • back together.

  • And so I think that's got a lot of really

  • good lessons in there.

  • And also, this idea I think I

  • mentioned, reciprocal teaching.

  • Trying to teach someone a concept and have them teach it

  • back to you.

  • And I think that in terms of online environments, what

  • would be really powerful would be having people generate free

  • responses, and then again letting them see other

  • people's explanations, and use those as a base for

  • comparison.

  • And so that's something where I think you could get people to

  • be a lot more engaged.

  • Plus you get the benefits of generating information, plus

  • the benefits of seeing things that are

  • like the right answer.

  • So that's what I'm really excited about

  • also trying that out.

  • A big thing now is that, because people think they

  • have to give someone an explanation to choose from, there

  • are very few free responses.

  • So the cognitive tutors, which are these adaptive

  • programs, will actually give people a multiple choice

  • instead of free explanations.

  • And I think the simple manipulation of not starting with

  • multiple choice, but having a free response first, and then

  • showing the multiple choice, can make a really big difference

  • in terms of what people learn.
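The flow described here, eliciting a free response before revealing the options, can be sketched in a few lines. The prompt function is a stand-in for whatever interface a course platform actually provides; everything named below is illustrative.

```python
# Hypothetical sketch of a free-response-first exercise flow:
# the learner explains in their own words, and only then sees
# the multiple-choice options to pick from.

def ask(question, options, get_input=input):
    """Elicit a free response first, then a multiple-choice answer."""
    free_response = get_input(f"{question}\nExplain in your own words: ")
    menu = "\n".join(f"{i}. {opt}" for i, opt in enumerate(options))
    choice = int(get_input(f"Now pick the best option:\n{menu}\n> "))
    # Both the generated explanation and the selection are kept,
    # so the free response can be compared against the keyed answer.
    return free_response, options[choice]
```

The design point is that generation happens before recognition: the learner commits to an explanation, which the talk suggests makes the later right answer easier to learn from.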

  • You had a question?

  • AUDIENCE: You're running out of time.

  • JOSEPH JAY WILLIAMS: Oh, OK.

  • AUDIENCE: And so I think if anybody had questions for

  • Joseph, feel free to ask them after the talk.

  • JOSEPH JAY WILLIAMS: Thanks for having me.


How Can Cognitive Science Improve Online Learning?

  Published by Why Why on March 29, 2013