Friday, July 11, 2025

How humans store narrative memories using 'random trees' | Phys.org

To test the representation of meaningful memories as random trees, Tsodyks and his colleagues used online crowdsourcing platforms such as Amazon Mechanical Turk and Prolific to run recall experiments with a large number of subjects, using narratives borrowed from a paper by W. Labov.
Essentially, they asked 100 people to recall 11 narratives of various lengths (ranging from 20 to 200 sentences) and later analyzed the recorded recalls to test their theory.

Mathematical model reveals how humans store narrative memories using 'random trees'

https://scx1.b-cdn.net/csz/news/800a/2025/a-random-tree-model-of.jpg
Artistic image evoking the purpose of the study and created by Misha Tsodyks using LLMs. Credit: Physical Review Letters (2025). DOI: 10.1103/g1cz-wk1l

Humans can remember various types of information, including facts, dates, events and even intricate narratives. Understanding how meaningful stories are stored in people's memory has been a key objective of many cognitive psychology studies.

Tsodyks and his colleagues hypothesized that a tree representing a narrative is constructed when an individual first hears or reads a story and understands it.
Since past studies suggest that individuals comprehend the same narratives differently, the resulting trees would have unique structures.

"We formulated a model as an ensemble of random trees of a particular structure," said Tsodyks.  

"The beauty of this model is that it can be solved mathematically, and its predictions can be directly tested with the data, which we did. The main novelty of our random tree model of memory and recall is the assumption that any meaningful material is generically represented in the same fashion.

"Our study could have broader implications of this fact for because narratives seem to be a general way we reason about a wide range of phenomena in our individual lives and social and historical processes."

The recent work by this team of researchers highlights the promise of mathematical approaches and AI-based techniques for studying how humans store and represent meaningful information in their memories.

In their next studies, Tsodyks and his colleagues plan to assess the extent to which their theory and random tree modeling approach could apply to other types of narratives, such as fiction stories.

"A more ambitious direction for future research will be to find more direct proofs for the tree model," added Tsodyks. 

"This would require designing other experimental protocols beyond simple recall. Brain imaging with people engaged in narrative comprehension and recall could be another interesting direction."

================================================================= 

Written by Ingrid Fadelli, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.
More information: Weishun Zhong et al., Random Tree Model of Meaningful Memory, Physical Review Letters (2025). DOI: 10.1103/g1cz-wk1l
