Description
Visual perception requires integrating incoming contextual information with prior memories. Predictive processing theories propose that this integration is supported by the laminar architecture of the visual cortex and its interactions with the medial temporal lobe, particularly the hippocampus and entorhinal cortex. To examine these neural mechanisms, we acquired ultra-high-field 7T fMRI data using an occluder paradigm that dissociates memory signals from concurrent contextual cues. Participants (N=33) first learned scenes depicting real-world locations, each containing a specific target object. Twenty-four hours later, during fMRI scanning, they viewed the learned scenes with the target objects occluded and were asked to mentally retrieve the missing objects. Using layer-specific decoding and representational similarity analysis, preliminary results showed that contextual information could be decoded from the hippocampal subiculum, whereas memory-related information could be decoded from the deep layers of early visual cortex and from hippocampal CA2/3. These findings reveal how perceptual predictions arise from interactions between sensory input and memory-based representations.