Page 10 - ELG2503 March Issue 493
RESEARCH NEWS
What could possibly go wrong?
Hacking the human language network.
Artificial intelligence that mimics human language can also predict and control brain responses in the human language network, according to a recent Nature Human Behaviour paper from an international team led by Greta Tuckute and Evelina Fedorenko at the Massachusetts Institute of Technology, Cambridge, MA, USA.

The ability of large language models, such as GPT2-XL, to mimic human language also provides a model for human language processing. This study set out to test whether GPT2-XL could generate language and successfully predict the neural response in the human receiver's brain.

For the initial training, the researchers chose two sets of 250 sentences from a collection of 1.8 million. One set was chosen to elicit a strong neural response, termed 'drive sentences' (e.g., 'Turin loves me not, nor will'), and the other set to elicit a minimal response, termed 'suppress sentences' (e.g., 'We were sitting on the couch').

The responses to these sentences were recorded by functional magnetic resonance imaging (fMRI) of the recipients' brains. GPT2-XL was then trained on these fMRI responses and tasked to predict responses to the full set of 1.8 million sentences. In follow-up studies, also assessing the outcome by fMRI of the recipients' brains, GPT2-XL was able to select and modify sentences that would drive or suppress neural response in the human language network. Some of the drive sentences identified were unexpected and would not have been selected by human choice.

The effect of eleven sentence properties on neural responses was assessed to further identify the characteristics of drive versus suppress sentences. Stronger responses were associated with sentences that were more surprising, but the effect then declined as sentences became extremely implausible or ungrammatical.

The findings also contribute to a longstanding debate over whether language processing in the brain is tied to social cognitive functioning, such as inferring the mental states of others. If this is the case, then content associated with mental states should impact arousal of the language network, but no such effect was found in this study. Generally, sentences referring to objects, places and easily visualised content, as well as more emotionally positive sentences, all elicited lower responses.

If a language model such as GPT2-XL can select language items and predict the neural response, then it can manipulate and control the response of the human recipient to language input. This has far-reaching implications, not all of which are as positive and optimistic as the study authors' example of optimising stimuli for patients with brain disorders.

From a teaching perspective, a programme such as GPT2-XL could be modified to stimulate more active language processing in student learners, most obviously during reading comprehension or conversation practice. Future developments might also target better retention. Curiously, the authors do not mention the possible negative consequences of AI-facilitated control of neural responses in the human brain.

REFERENCE
Tuckute, G. et al. (2024) Driving and suppressing the human language network using large language models. Nature Human Behaviour. https://doi.org/10.1038/s41562-023-01783-7
10 March 2025