You Don’t Need Words to Think

Scholars have long contemplated the connection between language and thought—and to what degree the two are intertwined—by asking whether language is somehow an essential prerequisite for thinking.

[…]

Evelina Fedorenko, a neuroscientist who studies language at the McGovern Institute for Brain Research at the Massachusetts Institute of Technology, has spent many years trying to answer these questions. She remembers being a Harvard University undergraduate in the early 2000s, when the language-begets-thought hypothesis was still highly prominent in academia.

[…]

She recently co-authored a perspective article in Nature that includes a summary of her findings over the ensuing years. It makes clear that the jury is no longer out, in Fedorenko’s view: language and thought are, in fact, distinct entities that the brain processes separately. The highest levels of cognition—from novel problem-solving to social reasoning—can proceed without an assist from words or linguistic structures.

[…]

Language works a little like telepathy in allowing us to communicate our thoughts to others and to pass to the next generation the knowledge and skills essential for our hypersocial species to flourish. But at the same time, people with aphasia, who are sometimes unable to utter a single word, can still engage in an array of cognitive tasks fundamental to thought. Scientific American talked to Fedorenko about the language-thought divide and the prospects of artificial intelligence tools such as large language models for continuing to explore interactions between thinking and speaking.

[…]

What evidence did you find that thought and language are separate systems?

The evidence comes from two separate methods. One is basically a very old method that scientists have been using for centuries: looking at deficits in different abilities—for instance, in people with brain damage.

Using this approach, we can look at individuals who have impairments in language—some form of aphasia. […] You can ask whether people who have these severe language impairments can perform tasks that require thinking. You can ask them to solve some math problems or to perform a social reasoning test, and all of the instructions, of course, have to be nonverbal because they can’t understand linguistic information anymore. Scientists have a lot of experience working with populations that don’t have language—studying preverbal infants or studying nonhuman animal species. So it’s definitely possible to convey instructions in a way that’s nonverbal. And the key finding from this line of work is that there are people with severe language impairments who nonetheless seem totally fine on all cognitive tasks that we’ve tested them on so far.

[…]

A nicely complementary approach, which started in the 1980s and 1990s, is a brain-imaging approach. We can measure blood flow changes when people engage in different tasks and ask questions about whether the two systems are distinct or overlapping—for example, whether your language regions overlap with regions that help you solve math problems. These brain-imaging tools are really good for these questions. But before I could ask these questions, I needed a way to robustly and reliably identify language areas in individual brains, so I spent the first bunch of years of my career developing tools to do this.

And once we have a way of finding these language regions, and we know that these are the regions that, when damaged in adulthood, lead to conditions such as aphasia, we can then ask whether these language regions are active when people engage in various thinking tasks. So you can come into the lab, and I can put you in the scanner, find your language regions by asking you to perform a short task that takes a few minutes—and then I can ask you to do some logic puzzles or sudoku or some complex working memory tasks or planning and decision-making. And then I can ask whether the regions that we know process language are working when you’re engaging in these other kinds of tasks. There are now dozens of studies that we’ve done looking at all sorts of nonlinguistic inputs and tasks, including many thinking tasks. We find time and again that the language regions are basically silent when people engage in these thinking activities.

[…]

Do the language and thinking systems interact with each other?

There aren’t great tools in neuroscience to study intersystem interactions between language and thought. But interesting new opportunities are opening up with advances in AI, where we now have a model system for studying language in the form of these large language models such as GPT-2 and its successors. These models do language really well, producing perfectly grammatical and meaningful sentences. They’re not so good at thinking, which aligns nicely with the idea that the language system by itself is not what makes you think.

But we and many other groups are doing work in which we take some version of an artificial neural network language model as a model of the human language system. And then we try to connect it to some system that is more like what we think human systems of thought look like—for example, a symbolic problem-solving system such as a math app. With these artificial intelligence tools, we can at least ask, “What are the ways in which a system of thought, a system of reasoning, can interact with a system that stores and uses linguistic representations?” These so-called neurosymbolic approaches provide an exciting opportunity to start tackling these questions.

So what do large language models do to help us understand the neuroscience of how language works?

They’re basically the first model organism for researchers studying the neuroscience of language. They are not a biological organism, but until these models came about, we just didn’t have anything other than the human brain that does language. And so what’s happening is incredibly exciting. You can do stuff on models that you can’t do on actual biological systems that you’re trying to understand. There are many, many questions that we can now ask that had been totally out of reach: for example, questions about development.

In humans, of course, you cannot manipulate linguistic input that children get. You cannot deprive kids of language, or restrict their input in some way, and see how they develop. But you can build these models that are trained on only particular kinds of linguistic input or are trained on speech inputs as opposed to textual inputs. And then you can see whether models trained in particular ways better recapitulate what we see in humans with respect to their linguistic behavior or brain responses to language.

So just as neuroscientists have long used a mouse or a macaque as a model organism, we can now use these in silico models, which are not biological but very powerful in their own way, to try to understand some aspects of how language develops or is processed or decays in aging or whatnot.

We have a lot more access to these models’ internals. The methods we have for messing with the brain, at least with the human brain, are much more limited compared with what we can do with these models.
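That difference in access can be made concrete with a toy sketch. Everything below is invented for illustration (a tiny random network, not any model actually used in this research): the point is simply that in an in-silico system, every intermediate activation is directly readable, and any unit can be silenced and restored at will, a perfectly clean "lesion" that no method for the human brain allows.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network standing in for an in-silico model organism.
W1 = rng.standard_normal((8, 4))   # input -> hidden weights
W2 = rng.standard_normal((4, 2))   # hidden -> output weights

def forward(x, ablate_unit=None):
    """Run the network, exposing every intermediate activation.

    Unlike a biological brain, each internal quantity is directly
    observable, and any single unit can be silenced on demand.
    """
    hidden = np.tanh(x @ W1)          # full read access to the hidden layer
    if ablate_unit is not None:
        hidden = hidden.copy()
        hidden[ablate_unit] = 0.0     # a clean, reversible "lesion"
    output = hidden @ W2
    return hidden, output

x = rng.standard_normal(8)
hidden, out = forward(x)
_, out_lesioned = forward(x, ablate_unit=2)

print(hidden)                          # every activation is inspectable
print(np.allclose(out, out_lesioned))  # silencing one unit shifts the output
```

Nothing analogous is possible in a person: we cannot read out every neuron's activity, and we certainly cannot switch one off and back on to see what it contributes.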

Source: You Don’t Need Words to Think | Scientific American
