
  • - [automated message] This UCSD TV program is presented by University of California

  • Television. Like what you learn? Visit our website, or follow us on Facebook and

  • Twitter to keep up with the latest programs.

  • ♪ [music] ♪

  • - [male] We are the paradoxical ape, bipedal, naked, large-brained, long the

  • master of fire, tools, and language, but still trying to understand ourselves.

  • Aware that death is inevitable, yet filled with optimism. We grow up slowly. We hand

  • down knowledge. We empathize and deceive. We shape the future from our shared

  • understanding of the past. CARTA brings together experts from diverse disciplines

  • to exchange insights on who we are and how we got here. An exploration made possible

  • by the generosity of humans like you.

  • ♪ [music] ♪

  • - [Evelina] Thanks very much for having me here. So today, I will tell you about the

  • language system and some of its properties that may bear on the question of how this

  • system came about. I will begin by defining the scope of my inquiry, because

  • people mean many different things by language. I will then present you some

  • evidence, suggesting that our language system is highly specialized for language.

  • Finally I will speculate a bit on the evolutionary origins of language.

  • So linguistic input comes in through our ears or our eyes. Once it's been analyzed

  • perceptually, we interpret it by linking the incoming representations to our

  • stored language knowledge. This knowledge, of course, is also used for generating

  • utterances during language production. Once we've generated an utterance in our

  • heads, we can send it down to our articulation system. The part that I focus

  • on is this kind of high-level component of language processing.

  • This cartoon shows you the approximate locations of the brain regions that

  • support speech perception in yellow, visual letter and word recognition in

  • green (a region known as the visual word-form area), articulation in pink,

  • and high-level language processing in red.

  • What differentiates the high-level language processing regions from the

  • perceptual and articulation regions is that they're sensitive to the

  • meaningfulness of the signal. So for example, the speech regions will respond

  • just as strongly to foreign speech as they respond to meaningful speech in our own

  • language. The visual word-form area responds just as much to a string of

  • consonants as it does to real words. The articulation regions can be driven to a

  • full extent by people producing meaningless sequences of syllables.

  • But in contrast, the high-level language-processing system or network, or

  • sometimes I refer to it just as the language network for short, cares deeply

  • about whether the linguistic signal is meaningful or not.

  • In fact, the easiest way to find this system in the brain is to contrast

  • responses to meaningful language stimuli, like words, phrases, or sentences, with

  • control conditions like linguistically degraded stimuli.

  • The contrast I use most frequently is between sentences and sequences of

  • non-words. A key methodological innovation that laid the foundation for much of my

  • work was the development of tools that enable us to define the regions of the

  • language network functionally, at the individual-subject level, using contrasts

  • like these. Here, I'm showing you sample language regions in three individual

  • brains. This so-called functional localization approach has two advantages.

  • One, it circumvents the need to average brains together, which is what's done in

  • the common approach, and it's a very difficult thing to do because brains are

  • quite different across people. Instead, in this approach, we can just average the

  • signals that we extract from these key regions of interest.

  • The second advantage is that it allows us to establish a cumulative research

  • enterprise, which I think we all agree is important in science, because comparing

  • results across studies and labs is quite straightforward if we're confident that

  • we're referring to the same brain regions across different studies. This is just

  • hard or impossible to do in the traditional approach, which relies on very

  • coarse anatomical landmarks, like the inferior frontal gyrus or the superior

  • temporal sulcus, which span many centimeters of cortex and are just not at

  • the right level of analysis. So what drives me and my work is the

  • desire to understand the nature of our language knowledge and the computations

  • that mediate language comprehension and production. However, these questions are

  • hard, especially given the lack of animal models for language. So for now, I settle

  • on more tractable questions. For example, one, what is the relationship between the

  • language system and the rest of human cognition? Language didn't evolve, and it

  • doesn't exist in isolation from other evolutionarily older systems, which

  • include the memory and attention mechanisms, the visual and the motor

  • systems, the system that supports social cognition, and so on. That means that we

  • just can't study language as an isolated system. A lot of my research effort is

  • aimed at trying to figure out how language fits with the rest of our mind and brain.

  • The second question delves inside the language system, asking, "What does its

  • internal architecture look like?" It encompasses questions like, "What are the

  • component parts of the language system? And what is the division of labor among

  • them, in space and time?" Of course, both of those questions

  • ultimately should constrain the space of possibilities for how language actually

  • works. So they place constraints on both the data structures that underlie language

  • and the computations that are likely performed by the regions of the system.

  • Today, I focus on the first of these questions. Okay. So now onto some

  • evidence. So the relationship between language and the rest of the mind and

  • brain has been long debated, and the literature actually is quite abundant with

  • claims that language makes use of the same machinery that we use for performing other

  • cognitive tasks, including arithmetic processing, various kinds of executive

  • function tasks, perceiving music, perceiving actions, abstract conceptual

  • processing, and so on. I will argue that these claims are not

  • supported by the evidence. Two kinds of evidence speak to the relationship between

  • language and other cognitive systems. There is brain-imaging evidence, and

  • there are investigations of patients with brain damage. In fMRI

  • studies, we do something very simple. We find our regions that seem to respond a

  • lot to language, and then we ask how they respond to various other

  • non-linguistic tasks. If they don't show much of a response, then we can conclude

  • that these regions are not engaged during those tasks. In the patient studies, we

  • can evaluate non-linguistic abilities in individuals who don't have a functioning

  • language system. If they perform well, we can conclude that the language system is

  • not necessary for performing those various non-linguistic tasks. So starting with the

  • fMRI evidence, I will show you responses in the language regions to arithmetic,

  • executive function tasks, and music perception today. So here are two sample

  • regions: a region in the inferior frontal cortex, around Broca's area, and a

  • region in the posterior middle temporal gyrus. But the rest of the regions of this

  • network look similar in their profiles. The region on the top is kind of in and

  • around this region known as Broca's area, except I don't use that term because I

  • don't think it's a very useful term. In black and gray, I show you the responses

  • to the two localizer conditions, sentences and non-words. These are estimated in data

  • that's not used for defining these regions. So we divide the data in half,

  • use half of the data to define the regions and the other half to quantify their

  • responses. I will now show you how these regions respond when people are asked to

  • do some simple arithmetic additions, perform a series of executive function

  • tasks, like, for example, hold a set of spatial locations in working memory, or

  • perform this classic flanker task, or listen to

  • various musical stimuli. For arithmetic and various executive tasks, we included a

  • harder and an easier condition, because we wanted to make sure that we can identify

  • regions that are classically associated with performing these tasks, which is

  • typically done by contrasting a harder and an easier condition of a task.
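The analysis logic described here (define each subject's language regions from the sentences > non-words contrast in one half of the data, quantify responses in the held-out half, and test a hard > easy contrast in the same voxels) can be sketched roughly as follows. This is a toy illustration with random numbers standing in for per-voxel response estimates, not the speaker's actual analysis code; the 10% voxel-selection threshold is an assumed example criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-voxel response estimates for one subject: shape (n_voxels, n_conditions).
# Conditions: 0 = sentences, 1 = non-words (the localizer contrast),
# 2 = hard arithmetic, 3 = easy arithmetic. All values here are simulated.
n_voxels = 1000
run1 = rng.normal(size=(n_voxels, 4))  # half the data: used to DEFINE regions
run2 = rng.normal(size=(n_voxels, 4))  # other half: used to QUANTIFY responses

# Step 1: define a "language" region in run 1 as the top 10% of voxels
# for the sentences > non-words contrast (an assumed selection criterion).
contrast = run1[:, 0] - run1[:, 1]
threshold = np.percentile(contrast, 90)
language_voxels = contrast > threshold

# Step 2: quantify responses in the held-out run 2, so the estimates
# are statistically independent of how the voxels were selected.
mean_responses = run2[language_voxels].mean(axis=0)
print("sentences:", mean_responses[0], "non-words:", mean_responses[1])

# Step 3: a hard > easy contrast in the same voxels tests whether the
# region is sensitive to general task difficulty (e.g., arithmetic).
difficulty_effect = mean_responses[2] - mean_responses[3]
print("hard - easy arithmetic:", difficulty_effect)
```

The key point is that region definition and response estimation use independent data, so the selection step cannot bias the response estimates.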

  • So I'll show you now, in different colors, responses to these various tasks, starting

  • with the region on the lower part of the screen. So we find that this region

  • doesn't respond during arithmetic processing, doesn't respond during working

  • memory, doesn't respond during cognitive control tasks, and doesn't respond during

  • music perception. Quite strikingly to me at the time, a very similar profile is

  • observed around this region, which is smack in the middle of so-called Broca's

  • area, which appears to be incredibly selective in its response for language.

  • Note that it's not just the case that these demanding tasks, for example, fail

  • to show a hard versus an easy difference. They respond pretty much at or below

  • fixation baseline when people are engaged in these tasks. So that basically tells

  • you that these language regions work about as much when you're doing a bunch of

  • math in your head or holding information in working memory as they do when you're

  • looking at a blank screen. So they really do not care. So of course, to interpret

  • the lack of the response in these language regions, you want to make sure that these

  • tasks activate the brain somewhere else. Otherwise, you may have really bad tasks

  • that you don't want to use. Indeed, they do.

  • So here, I'll show you activations for the executive function tasks, but music also

  • robustly activates the brain outside of the language system. So here are two

  • sample regions, one in the right frontal cortex, one in the left parietal cortex.

  • You see the profiles of response are quite different from the language regions. For

  • each task, we see robust responses, but also a stronger response to the harder

  • than the easier condition across these various domains. These regions turn out to

  • be part of this bilateral fronto-parietal network, which is known in the literature

  • by many names, including the cognitive control network or the multiple demand

  • system, the latter term advanced by John Duncan, who wanted to highlight the notion

  • that these regions are driven by many different kinds of cognitive demands. So

  • these regions appeared to be sensitive to effort across tasks, and their activity

  • has been linked to a variety of goal-directed behaviors. Interestingly, if

  • you look at the responses of these regions to our language localizer conditions, we

  • find exactly the opposite of what we find in the language regions. They respond less

  • to sentences than to sequences of non-words, presumably because processing

  • sentences requires less effort. But this highlights, again, that the language and

  • cognitive control systems are clearly functionally distinct.

  • Moreover, damage to the regions of the multiple demand network has been shown to

  • lead to decreases in fluid intelligence. So Alex Woolgar reported a strong

  • relationship between the amount of tissue loss in frontal and parietal cortices and

  • a measure of IQ. This is not true for tissue loss in the temporal lobes. It's

  • quite striking. You can actually calculate that for so many cubic centimeters of

  • tissue loss in the MD system, you lose so many IQ points. It's a strong, clear

  • relationship. So this

  • system is clearly an important part of the cognitive arsenal of humans, because

  • the ability to think flexibly and abstractly and to solve new problems is exactly

  • the kind of ability that IQ tests aim to measure, and it's considered one of the

  • hallmarks of human cognition. Okay. So as I mentioned, the

  • complementary approach for addressing questions about language specificity and

  • relationship to other mental functions is to examine cognitive abilities in

  • individuals who lack a properly functioning language system. Most telling

  • are cases of global aphasia. This is a severe disorder which affects pretty much

  • the entire fronto-temporal language system, typically due to a large stroke in

  • the middle cerebral artery, and it leads to profound deficits in comprehension

  • and production. Rosemary Varley at UCL has been studying this population for a few

  • years now. With her colleagues, she has shown that

  • actually these patients seem to have preserved abilities across many, many

  • domains. So she showed that they have intact arithmetic abilities. They can

  • reason causally. They have good nonverbal social skills. They can navigate in the

  • world. They can perceive music and so on and so forth. Of course, these findings

  • are then consistent with the kind of picture that emerges in our work in fMRI.

  • Let's consider another important non-linguistic capacity, which a lot of

  • people often bring up when I tell them about this work. How about the ability to

  • extract meaning from non-linguistic stimuli? Right? So given that our language

  • regions are so sensitive to meaning, we can ask how much of that response is due

  • to the activation of some kind of abstract, conceptual representation that

  • language may elicit, rather than something more language-specific, some kind of

  • linguistic semantic representation. So to ask these questions, we can look at how language

  • regions respond to nonverbal, meaningful representations. In one study, we had

  • people look at events like this or the sentence-level descriptions of them, and

  • either we had them do kind of a high-level semantic judgment test, like decide

  • whether the event is plausible, or do a very demanding perceptual control task.

  • Basically what you find here is, again, the black and gray are responses to the

  • localizer conditions. So in red, as you would expect, you find strong responses to

  • this and to the condition where people see sentences and make semantic judgments on

  • them. So what happens when people make semantic judgments on pictures? We find

  • that some regions don't care at all about those conditions, and other regions show

  • reliable responses, but they're much weaker than those elicited by the

  • meaningful sentence condition. So could it be that some of our language regions are

  • actually abstract semantic regions? Perhaps. But for now, keep in mind that

  • the response to the sentence-meaning condition is twice as strong, and it is

  • also possible that participants may be activating linguistic representations to

  • some extent when they encounter meaningful visual stimuli. So to answer this question

  • more definitively, we're turning to the patient evidence again. If parts of the

  • language system are critical for processing meaning in non-linguistic

  • representations, then aphasic individuals should have some difficulties with

  • nonverbal semantics. First, I want to share a quote with you from Tom Lubbock, a

  • former art critic at The Independent, who developed a tumor in the left temporal

  • lobe which eventually killed him. As the tumor progressed, and he was losing his

  • linguistic abilities, he was documenting his impressions of what it feels like to

  • lose the capacity to express yourself using verbal means.

  • So he wrote, "My language to describe things in the world is very small,

  • limited. My thoughts, when I look at the world, are vast, limitless, and normal,

  • same as they ever were. My experience of the world is not made less by lack of

  • language, but is essentially unchanged." I think this quote quite powerfully

  • highlights the separability of language and thought. So in work that I'm currently

  • collaborating on with Rosemary Varley and Nancy Kanwisher, we are evaluating the

  • global aphasics' performance on a wide range of tasks that require processing

  • meaning in nonverbal stimuli. So for example, can they distinguish between real

  • objects and novel objects that are matched for low-level visual properties? Can they

  • make plausibility judgments for visual events? What about events where

  • plausibility is conveyed simply by the prototypicality of the roles? So you can't

  • do this task by simply inferring that a watering can doesn't appear next to an egg

  • very frequently. Right? It seems like the data so far is suggesting that they indeed

  • seem fine on all of these tasks, and they laugh just like we do when they see these

  • pictures because they're sometimes a little funny. So they seem to process

  • these just fine. So this suggests, to me, that these kinds of tasks can be performed

  • without a functioning language system. So even if our language system stores some

  • abstract conceptual knowledge in some parts of it, it tells me at least that

  • that code must live somewhere else as well. So even if we lose our linguistic

  • way to encode this information, we can have access to it elsewhere.

  • So to conclude this part, fMRI and patient studies converge in suggesting that

  • the fronto-temporal language system is not engaged in, and is not needed for,

  • non-linguistic cognition. Instead, it appears that these regions are highly

  • specialized for interpreting and generating linguistic signals. So just a couple

  • minutes on what

  • this means. So given this highly selective response to language stimuli that we

  • observe, can we make some guesses already about what these regions actually do? I

  • think so. I think a plausible hypothesis is that this network houses our linguistic

  • knowledge, including our knowledge of the sounds of the language, the words, the

  • constraints on how sounds and words can combine with one another. Then essentially

  • the process of language interpretation is finding matches between the pieces of the

  • input that are getting into