
How deep is your language?

Computers using deep language models (DLMs) help us understand how the brain processes language, according to a new study by an international team of researchers from American, German and Israeli universities.

Traditionally, psycholinguistic research has modelled language in terms of how items such as nouns or verbs follow rules. DLMs come at language modelling very differently, with no ‘preconceptions’ of any kind – not even that there are in fact parts of speech or syntax.

When DLMs are exposed to language they learn by constantly predicting the next word and using feedback from the degree of ‘surprise’ to inform future predictions. It has long been theorised that human brains do something similar, but it has been fiendishly difficult to measure such specific, rapid activities in the brain.
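
A rough sense of this ‘predict, then learn from surprise’ loop can be sketched in a few lines of Python. The model below is a deliberately tiny, hypothetical stand-in (a single embedding plus a linear layer, nothing like a full DLM), and the training sentence is invented; the point is only that the quantity being minimised is the surprisal – the negative log probability the model assigned to the word that actually came next.

```python
# Toy sketch (not the study's model): a next-word predictor trained on surprisal.
# Assumes PyTorch is installed; the text and model are illustrative only.
import torch
import torch.nn as nn

text = "the horse walked into the bar and the monkey followed the horse".split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}
tokens = torch.tensor([idx[w] for w in text])

class TinyPredictor(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, x):
        return self.out(self.embed(x))          # logits over the next word

model = TinyPredictor(len(vocab))
optimiser = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()                  # mean surprisal: -log p(actual next word)

for step in range(50):
    logits = model(tokens[:-1])                  # predict each next word from the current one
    loss = loss_fn(logits, tokens[1:])           # 'surprise' at the word that actually followed
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

print(f"mean surprisal after training: {loss.item():.2f} nats")
```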

In this study, nine participants listened to a 30-minute podcast while their brain activity was recorded via electrocorticography (ECoG). ECoG is a more sensitive relative of the familiar EEG (electroencephalography): in EEG, electrodes are placed on the scalp, whereas in ECoG they sit directly on the surface of the brain. The nine volunteers were already undergoing ECoG monitoring for clinical reasons, and a total of 1,339 electrodes across the group recorded their brain activity while they listened to part of the story ‘So a Monkey and a Horse Walk Into a Bar: Act One, Monkey in the Middle’.

A DLM (GPT-2) also ‘listened’ to the same story and its processing was monitored.
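
For readers curious what ‘monitoring’ a model’s processing can look like in practice, the sketch below scores a short fragment with the publicly available GPT-2 (via the Hugging Face transformers library, not the study’s own pipeline; the example sentence is a stand-in, not the podcast transcript) and prints the surprisal of each word – how unexpected GPT-2 found it given everything before.

```python
# Illustrative only: per-word surprisal from off-the-shelf GPT-2 (not the study's code).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "So a monkey and a horse walk into a bar"   # stand-in for the story text
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                      # predictions at every position

log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
for pos, token_id in enumerate(ids[0, 1:]):
    surprisal = -log_probs[pos, token_id].item()    # -log p(actual token | preceding context)
    print(f"{tokenizer.decode(int(token_id)):>10}  {surprisal:5.2f} nats")
```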

Three main processes were shown to be similar in the human brain and DLM:

  1. The recorded brain activity clearly indicated that listeners were constantly predicting the next word in the story, hundreds of milliseconds before that word was heard and clearly separate from the processing of the actual next word.
  2. Both human brains and GPT-2 produced a ‘surprise’ response, a kind of error signal in response to the actual next word.
  3. The meanings of words were constantly evaluated with reference to preceding words: altering one word at the start of the story altered the neural activity evoked by all subsequent sentences (a rough model-side analogue of this context effect is sketched after this list).
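
That third point, the context effect, is easy to reproduce on the model side. The sketch below again uses off-the-shelf GPT-2 rather than anything from the study, with made-up sentences: it swaps a single early word and measures how far the contextual embedding of an identical later word drifts, using cosine similarity.

```python
# Illustrative only: how one early word shifts GPT-2's representation of later words.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def last_word_embedding(sentence):
    """Final-layer hidden state for the last token of the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids).hidden_states[-1]        # final layer, shape [1, seq, dim]
    return hidden[0, -1]

# Same final words, one different word near the start.
a = last_word_embedding("The monkey at the bar ordered a drink")
b = last_word_embedding("The banker at the bar ordered a drink")

similarity = torch.cosine_similarity(a, b, dim=0).item()
print(f"cosine similarity of the two embeddings of 'drink': {similarity:.3f}")
```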

To the extent that a DLM can mimic these brain processes to produce apparently meaningful sentences in context, does this constitute ‘thinking’? At present it does not. DLMs cannot ‘generate new meaningful ideas by integrating prior knowledge’. But the human brain can – and further research could analyse how it does so, using abilities such as the spontaneous next-word prediction and feedback demonstrated in this study.

REFERENCE

  • Goldstein, A., Zada, Z., Buchnik, E., et al. (2022). ‘Shared computational principles for language processing in humans and deep language models’, Nature Neuroscience 25: 369–380, https://doi.org/10.1038/s41593-022-01026-4 OPEN ACCESS.
Gill Ragsdale
Gill has a PhD in Evolutionary Psychology from Cambridge and teaches Psychology with the Open University, but also holds an RSA-Cert TEFL. Gill has taught EFL in the UK, Turkey, Egypt and to refugees in the Calais 'Jungle' in France. She currently teaches English to refugees in the UK.