Making sense of language is related to what we already know—and don’t know
When the party of mountain climbers reached the summit, they were in awe of the magnificent view.
After reading this brief string of words, you probably have a distinct image in your mind, one that is rich with details, even though they are not specified in the sentence: a magnificent blue sky, glistening snow-capped peaks, crampons, snow goggles. The picture emerges because our brains, with lightning speed, make sense of words and sentences based on our previous experiences.
Language—maybe the defining characteristic of what it means to be human—remains one of the most opaque scientific mysteries. Gina Kuperberg is bringing some clarity to the puzzle. Trained as a psychiatrist and a cognitive neuroscientist, she is trying to understand how our brains derive meaning from language as they weave together new incoming information with what we already know.
A professor of psychology in the School of Arts and Sciences, Kuperberg uses brain-scanning electrodes to peer deep into the mind as people make sense of what they hear or read. By observing which parts of the brain go to work—and when—Kuperberg hopes to gain insight not only into how we process language, but also into what happens when things go wrong, as they do in people with schizophrenia, for example.
One of Kuperberg’s main research tools is the brain-scanning technology EEG. Familiar to science-fiction aficionados, the business end of an EEG (short for electroencephalography) resembles a shower cap embedded with electrode-bearing suction cups. The scanner measures electrical activity at the surface of the scalp, tracing out a graph of brain activity that can be matched—or time-locked—to the moment a particular stimulus is presented.
Some of these spikes and dips in activity are remarkably consistent across most healthy people. Neuroscientists were already aware of a specific deflection they nicknamed the N400: a negative-going wave that kicks in as the brain begins to process the meaning of words and peaks at around 400 milliseconds after word onset.
Kuperberg uses the relative strength of this signal as a gauge of how hard the brain has to work to make sense of things. Her research suggests that when we listen to each other talk or start to read a sentence, we bring a host of assumptions based on prior knowledge and experiences to the way we interpret that language. Kuperberg thinks the brain makes those assumptions, or predictions as she calls them, to help us comprehend language more quickly and efficiently. A weak N400 signal shows that those predictions have been confirmed.
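The logic behind that gauge can be sketched in a few lines. The sketch below is illustrative only, with synthetic data standing in for real recordings: it shows how averaging many trials time-locked to word onset cancels noise that is not synchronized to the stimulus, leaving an event-related potential whose mean amplitude in a 300–500 ms window serves as an N400 measure. The signal shape, noise level, and measurement window are assumptions for the demo, not values from Kuperberg's studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial EEG: 50 trials x 700 samples (one sample per
# millisecond after word onset), with a negative deflection peaking
# near 400 ms buried in heavy trial-to-trial noise.
n_trials, n_samples = 50, 700
t = np.arange(n_samples)  # time in ms relative to word onset
n400_shape = -4.0 * np.exp(-((t - 400) ** 2) / (2 * 60.0**2))  # microvolts
trials = n400_shape + rng.normal(0.0, 10.0, size=(n_trials, n_samples))

# Time-locked averaging: noise uncorrelated with the stimulus averages
# toward zero across trials, leaving the event-related potential (ERP).
erp = trials.mean(axis=0)

# Gauge of processing effort: mean amplitude in the 300-500 ms window.
# A more negative value corresponds to a larger N400.
n400_amplitude = erp[300:500].mean()
print(round(n400_amplitude, 2))
```

In a real study this averaging would be done per condition (e.g., predictable vs. unpredictable words) and the window amplitudes compared across groups; toolkits such as MNE-Python wrap these steps.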
Some Eggs to Eat
In an experiment she began as a postdoc at MGH in 1998, Kuperberg wanted to see whether people with schizophrenia process words in context differently from healthy people. Using EEG to measure when brain activity happens, along with other scanning technologies that help pinpoint where it occurs, such as functional magnetic resonance imaging, she and her colleagues observed people’s brains as they read sentences.
Kuperberg and her frequent collaborator, Phil Holcomb, who is also a professor of psychology in Tufts’ School of Arts and Sciences, provided their subjects with the nonsensical sentence Every morning at breakfast, the eggs would eat. The researchers suspected that the word eat would stop the healthy brains in their tracks, while the brains of people with schizophrenia might more easily accept it. But when the team ran the experiment, the word eat didn’t produce the N400 at all, not even in healthy people.
But Kuperberg and her colleagues found that the nonsense sentence did trip up the brain a couple of hundred milliseconds later. It generated another well-known marker, called the P600. This was interesting because the P600 had previously been most closely associated with making sense of grammatical or syntactic errors.
“Very quickly afterwards, your brain goes ‘Hang on! Eggs can’t eat,’ ” says Kuperberg. “That evoked this robust P600 in healthy people as they were trying to reconstruct meaning and recover. And it turns out patients with schizophrenia don’t do that.”
Today, Kuperberg suspects that the normal brain doesn’t immediately flinch at the word eat the way she had originally thought it might because “when you’re reading or hearing that kind of sentence, I think you’re activating this whole general script—this whole breakfast-eating script. It doesn’t matter that eggs can’t eat. The word ‘eat’ still fits with the general script, and that’s what the brain cares about initially. It’s only a bit later—when we actually combine meaning with structure—when our brains realize the error: that the specific event we had predicted is different to what the sentence actually said. I think that the P600 is the brain’s response to this prediction error, and that the reason why people with schizophrenia don’t produce such a strong P600 is because they fail to detect this prediction error.”
Since Kuperberg came to Tufts in 2005, she and her colleagues have used these markers of brain activity to reveal many additional insights into how our minds work. She has recently turned her attention to how we make sense of emotional language. For example, in one study, volunteers were given three types of nearly identical sentences that began: Sarah is in her hotel room when the man comes in with a [blank]. The last word—either rose, tray or gun—radically altered the scene described.
Here, the context was not highly predictable. Yet the N400 was still larger to emotional words (like rose and gun) than to neutral words like tray. Kuperberg concludes that the brain flags the more emotionally charged words, whether positive or negative. “It’s as though people were processing those words more deeply, trying to extract more of their meaning,” she says.
In a more recent study conducted by Eric Fields, a graduate student in her lab, volunteers were also given sentences like: You are in your hotel room when the man comes in with a [blank]. The study, which Kuperberg co-authored and which was published in the August 2012 issue of the journal NeuroImage, showed that our brains process emotional words quite differently when they are perceived as being relevant to ourselves.
Taken together, Kuperberg argues, her findings suggest the brain uses a lot of what it already knows to make sense of the world, even when presented with brand-new information. It’s an assertion that challenges the long-held dogma among linguists that the brain can derive meaning only through syntax, that word order governs the way we digest language. “That’s only partially true,” she says. “Our beliefs, our expectations, our experience and knowledge all have an immediate influence, and they can have a direct impact on how we derive new meaning from language.”
Jacqueline Mitchell, a senior health sciences writer in Tufts’ Office of Publications, can be reached at firstname.lastname@example.org.