Attention & Perception Talk Series
Links to the speakers' websites and talk information will be added as the semester goes along. Talks are online unless otherwise specified.
Spring 2025
- Brady Roberts — Memorable by design: The intrinsic properties of effective symbols
January 21st, 2025 (in-person)
Recent work has begun to evaluate the memorability of everyday visual symbols as a new way to understand how abstract concepts are processed in memory. Symbols were previously found to be highly memorable, especially relative to words, but it remained unclear what was driving memorability. Here, we bridged this gap by exploring the visual and conceptual attributes driving the high memorability observed for symbols. Participants were tested on their memory for conventional symbols (e.g., !@#$%) before sorting them based on visual or conceptual features. Principal component analyses performed on the sorting data then revealed which of these features predict symbol memorability. An artificial image generator was then used to form novel symbols while accentuating or downplaying predictive features to create a set of memorable and forgettable symbols, respectively. Both recognition and cued-recall memory performance were substantially improved for symbols that were designed to be memorable. This work suggests that specific visual attributes drive image memorability and offers initial evidence that memorability can be engineered.
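For readers who want a concrete sense of the pipeline described above (sorting-derived features, PCA, then prediction of memorability), here is a minimal sketch on synthetic data. Everything here, including the variable names `feature_matrix` and `memorability`, is an illustrative assumption, not the study's actual materials or analysis code.

```python
# Illustrative sketch only: random stand-in data, not the study's pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_symbols, n_features = 100, 20
# Rows = symbols; columns = visual/conceptual features derived from sorting.
feature_matrix = rng.normal(size=(n_symbols, n_features))
# Per-symbol memory scores (e.g., recognition hit rates); random here.
memorability = rng.uniform(0, 1, size=n_symbols)

# Reduce the sorting-derived features to a few principal components...
pca = PCA(n_components=5)
components = pca.fit_transform(feature_matrix)

# ...then ask which components predict memorability.
model = LinearRegression().fit(components, memorability)
print("R^2:", model.score(components, memorability))  # near 0 for random data
print("component weights:", model.coef_)
```

Components with large, reliable weights would be candidates for the "predictive features" that an image generator could then accentuate or downplay.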
- Thomas Langlois — Efficient Computations and Representations for Perceptual Inference and Communication
February 4th, 2025 (in-person)
In order to keep pace with a complex and ever-changing visual environment, the visual system must combine moment-to-moment sensory evidence with prior expectations that reflect predictable regularities in the environment. Although priors (and other subjective probability distributions) are key to visual perception, they are notoriously difficult to estimate because perception is an inherently private (subjective) experience. In this talk, I will highlight work using large-scale serial reproduction experiments to obtain stable estimates of subjective probability distributions in visual memory. I will also discuss recent work elucidating how neural population activity in the PFC integrates prior expectations with sensory signals during visual perception in macaque monkeys. Time permitting, I will highlight my current work investigating the relation between perceptual representations and emergent communication using the Information Bottleneck (IB) Principle.
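The serial-reproduction logic mentioned in the abstract has a compact illustration: if each participant in a chain reproduces a noisy stimulus as a Bayesian posterior mean, the chain's responses drift toward the shared prior. The Gaussian observer below is a textbook toy with made-up parameters, not the speaker's method.

```python
# Toy serial reproduction with a Gaussian Bayesian observer.
# Prior, noise level, and chain length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

prior_mean, prior_var = 0.0, 1.0   # the observers' shared internal prior
noise_var = 0.5                    # perceptual/memory noise

def reproduce(stimulus):
    """View a noisy stimulus, then reproduce the posterior mean."""
    observation = stimulus + rng.normal(scale=np.sqrt(noise_var))
    w = prior_var / (prior_var + noise_var)  # precision weighting
    return w * observation + (1 - w) * prior_mean

# Pass an initially extreme stimulus down a chain of observers.
value, chain = 5.0, [5.0]
for _ in range(20):
    value = reproduce(value)
    chain.append(value)

print(chain[-5:])  # late reproductions hover near the prior mean (0.0)
```

Reading the chain's stationary distribution as an estimate of the prior is what makes serial reproduction a tool for measuring otherwise private subjective distributions.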
- Qi Lin — Individual differences in prefrontal coding of visual features
February 11th, 2025 (online)
Each of us perceives the world differently. What may underlie such individual differences in perception? In this talk, I will characterize the role of the lateral prefrontal cortex (LPFC) in vision, with a specific focus on individual differences. Using a 7T fMRI dataset, I first show that encoding models relating visual features extracted from a deep neural network to brain responses to natural images robustly predict responses in patches of LPFC. Intriguingly, there are more substantial individual differences in the coding schemes of LPFC compared to visual regions. I will then present computational work showing how such amplification of individual differences could result from a neural architecture involving random reciprocal connections between sensory and high-level regions. Lastly, I will discuss ongoing work exploring the behavioral consequences of such individual differences in LPFC coding. Together, this work demonstrates the under-appreciated role of LPFC in visual processing and suggests that LPFC may underlie the idiosyncrasies in how different individuals experience the visual world.
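An encoding model of the kind described (deep-network features regressed onto brain responses) can be sketched in a few lines. The shapes, the ridge penalty, and the simulated responses below are assumptions for illustration; no real fMRI data or fitted model is involved.

```python
# Voxelwise encoding-model sketch: DNN features -> simulated brain responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

n_images, n_feats, n_voxels = 1000, 512, 50
dnn_features = rng.normal(size=(n_images, n_feats))
# Simulated responses: an arbitrary linear map plus noise.
weights = rng.normal(size=(n_feats, n_voxels))
responses = dnn_features @ weights + rng.normal(scale=5.0, size=(n_images, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(dnn_features, responses, random_state=0)
encoder = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Voxelwise accuracy: correlate predicted and held-out responses.
pred = encoder.predict(X_te)
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print("median voxel r:", round(float(np.median(r)), 3))
```

Comparing the fitted weights across subjects, rather than just prediction accuracy, is what would expose the individual differences in coding schemes the talk highlights.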
- Harini Sankar — Modeling the influence of semantic context in spoken word recognition
February 18th, 2025 (in-person)
Spoken word recognition is a context-dependent process. Studies have shown that the semantic association between words not only influences behavioral responses to ambiguous speech sounds but also influences how we encode the sounds themselves. The influence of semantic context has also been shown to persist over longer spans of time. While earlier computational models of spoken word recognition have captured various aspects of speech perception, they have yet to integrate the role of long-distance semantic dependencies. A model that learns these semantic associations in a self-supervised manner could also demonstrate how humans learn and use semantic associations between words in everyday speech. In this project, I created two models, a simple recurrent network (SRN) and a long short-term memory (LSTM) network, trained on word pairs that varied in their degree of semantic association. The models were able to learn the semantic associations between word pairs in a self-supervised manner. I will also discuss how the models could use these learned associations to influence how they encode ambiguous phonemes, in a way that reflects results from behavioral and electrophysiological data.
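As a rough illustration of the self-supervised setup the abstract describes, the toy PyTorch model below learns to predict the second word of a pair from the first. The vocabulary, pairs, and hyperparameters are invented for this sketch; this is not the project's SRN/LSTM code.

```python
# Toy self-supervised next-word prediction over word pairs,
# loosely in the spirit of the LSTM models described above.
import torch
import torch.nn as nn

vocab = ["doctor", "nurse", "bread", "butter", "car", "tree"]
word_to_ix = {w: i for i, w in enumerate(vocab)}
# Semantically associated pairs co-occur; the model must learn this.
pairs = [("doctor", "nurse"), ("bread", "butter")] * 50

class PairLSTM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, first_word):
        h, _ = self.lstm(self.embed(first_word).unsqueeze(1))
        return self.out(h[:, -1])  # predict the second word of the pair

model = PairLSTM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

firsts = torch.tensor([word_to_ix[a] for a, _ in pairs])
seconds = torch.tensor([word_to_ix[b] for _, b in pairs])
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(firsts), seconds)
    loss.backward()
    opt.step()

probe = torch.tensor([word_to_ix["doctor"]])
print(vocab[model(probe).argmax().item()])  # expect "nurse"
```

In the same spirit, a learned association could bias the model's interpretation of an ambiguous phoneme toward the semantically expected word.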
- Michael Beyeler — From Perception to Function: Toward a Usable and Intelligent Bionic Eye
March 4th, 2025 (online)
How can we restore meaningful vision to people with incurable blindness? Despite advances in visual neuroprosthetics (bionic eyes), current implants provide only rudimentary percepts that do not always translate into functional vision. In this talk, I will discuss our work on computational models that predict what implant users might "see", shedding light on the perceptual challenges of prosthetic vision. I will also explore why implantees use their implants less frequently than expected, highlighting key usability barriers and the need for smarter integration. Finally, I will outline how personalized stimulation strategies and closed-loop prostheses could help bridge the gap between perception and function: moving beyond basic phosphenes toward a usable and intelligent bionic eye.
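To make "what implant users might see" concrete, here is a deliberately crude phosphene simulation in which each electrode evokes a Gaussian blob of light. The grid size, spacing, and blur are arbitrary assumptions, and this is not the speaker's model.

```python
# Toy phosphene simulation: each electrode evokes a Gaussian blob.
import numpy as np

h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]

def render(electrode_positions, amplitudes, sigma=3.0):
    """Sum Gaussian phosphenes over all active electrodes."""
    percept = np.zeros((h, w))
    for (ey, ex), amp in zip(electrode_positions, amplitudes):
        percept += amp * np.exp(-((yy - ey) ** 2 + (xx - ex) ** 2) / (2 * sigma**2))
    return percept / percept.max()

# A 3x3 implant stimulated to draw a crude vertical bar.
grid = [(y, x) for y in range(16, 64, 16) for x in range(16, 64, 16)]
amps = [1.0 if x == 32 else 0.0 for (_, x) in grid]
image = render(grid, amps)
print(image.shape, image.max())
```

Even this toy version shows why percepts are coarse: a handful of blobs must stand in for an entire scene, which is part of the gap between perception and function the talk addresses.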
- Keith Doelling — Sequence prediction in Natural and Artificial Perception
March 11th, 2025 (online)
Lived experience unfolds as sequences of information. These sequences strongly shape the processing of current stimuli, providing context on which to establish surprise and attention or to reconstruct missing information. Still, the neural mechanisms that compute contextual information and implement it in the processing of current events remain a critical question for our understanding of naturalistic sequences like speech and music. My work combines human electrophysiology with computational models of naturalistic perception to assess how sequential information is computed and deployed in neural processing. I have applied this framework in several cases: 1) the study of temporal prediction in rhythms, linking Bayesian and neural mass models to show how oscillatory dynamics can be linked to a Bayesian prior for rhythmicity, 2) scaling to real-world content predictions using deep learning models of naturalistic music, and 3) examining the consequences of sequential processing for clinical audiograms. These projects demonstrate the broad implications of sequential information processing for neural activity, for processing disorders like hearing loss, and for higher-order cognition such as memory and attention.
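The link between rhythmic prediction and a Bayesian prior can be illustrated with a toy estimator: an observer updates its belief about the beat period from noisy inter-onset intervals, starting from a prior expectation of rhythmicity. All numbers below are illustrative assumptions, not the talk's models.

```python
# Toy Bayesian beat tracking: estimate a beat period from noisy intervals.
import numpy as np

rng = np.random.default_rng(3)

true_period = 0.5                  # seconds between beats
intervals = true_period + rng.normal(scale=0.05, size=30)  # noisy onsets

prior_mean, prior_var = 0.6, 0.02  # prior expectation of rhythmicity
obs_var = 0.05 ** 2

mean, var = prior_mean, prior_var
for iti in intervals:
    # Conjugate Gaussian update: each interval sharpens the estimate.
    k = var / (var + obs_var)
    mean = mean + k * (iti - mean)
    var = (1 - k) * var

print(f"estimated period: {mean:.3f} s (prior was {prior_mean} s)")
```

In an oscillatory reading, the prior corresponds to the system's expectation that events recur at a stable rate, which each new onset either confirms or corrects.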
- No talk on March 18th, 2025 (Spring Break)
- Galit Yovel, March 25th, 2025
- Cathleen Moore, April 1st, 2025
- Clara Colombatto, April 8th, 2025
- Daniel Albohn, April 15th, 2025 (in-person)
- Lucy Cui, April 22nd, 2025
- Michael Cohen, April 29th, 2025
- Yiwen Wang, May 6th, 2025 (in-person)