Incremental Accumulation of Linguistic Context in Artificial and Biological Neural Networks

Abstract

Accumulating evidence suggests that Large Language Models (LLMs) are useful for predicting neural signals related to narrative processing. However, the way LLMs integrate context over long timescales is fundamentally different from the way the brain does. In this study, we show that unlike LLMs, which process large contextual windows in parallel, the context available to the brain at any moment is limited to short windows of a few tens of words. We hypothesize that whereas lower-level brain areas process these short contextual windows, higher-order areas in the default-mode network (DMN) engage in an online incremental mechanism in which the incoming short context is summarized and integrated with information accumulated across long timescales. Accordingly, we introduce a novel LLM that, instead of processing the entire context at once, incrementally generates a concise summary of previous information. As predicted, neural activity in the DMN was better predicted by the incremental model, whereas lower-level areas were better predicted by a short-context-window LLM.
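
For illustration, here is a minimal sketch of how such an incremental accumulation loop could be organized: a running summary is repeatedly combined with each incoming short window of words and re-compressed, so the model never attends to the full history at once. The window size, function names, and the stand-in summarizer below are assumptions made for demonstration only, not the implementation described in the paper.

```python
# Sketch of an incremental context-accumulation loop (illustration only).
# `trivial_summarizer`, `incremental_context`, and `window_size` are
# hypothetical names; the model in the paper may differ in architecture.

from typing import Callable, List


def trivial_summarizer(text: str, max_words: int = 50) -> str:
    """Stand-in summarizer that keeps only the last `max_words` words.
    In practice this would be a learned abstractive summarization model."""
    words = text.split()
    return " ".join(words[-max_words:])


def incremental_context(
    words: List[str],
    window_size: int = 32,
    summarize: Callable[[str], str] = trivial_summarizer,
) -> List[str]:
    """Process a narrative in short windows while maintaining a running summary.

    At each step, the new short window is combined with the summary
    accumulated so far and compressed again, so the effective context is
    always a bounded summary plus the most recent few tens of words.
    """
    summary = ""
    contexts = []  # effective context available at each window
    for start in range(0, len(words), window_size):
        window = " ".join(words[start:start + window_size])
        # Effective input: accumulated summary + incoming short window.
        effective_context = (summary + " " + window).strip()
        contexts.append(effective_context)
        # Re-summarize to keep the accumulated context concise.
        summary = summarize(effective_context)
    return contexts


if __name__ == "__main__":
    story = ("once upon a time " * 40).split()
    for i, ctx in enumerate(incremental_context(story)):
        print(f"window {i}: context length = {len(ctx.split())} words")
```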

Publication
In bioRxiv
Yoav Meiri
MSc Student in Data Science

My research explores the reading behaviour of humans and language models across different reading regimes.