Attention & Perception Talk Series

Links to speakers' websites and talk information will be added as the semester progresses. Talks are online (Zoom link) unless otherwise specified.

Spring 2025

  • Brady Roberts — Memorable by design: The intrinsic properties of effective symbols

    January 21st, 2025 (in-person)

    Recent work has begun to evaluate the memorability of everyday visual symbols as a new way to understand how abstract concepts are processed in memory. Symbols were previously found to be highly memorable, especially relative to words, but it remained unclear what was driving memorability. Here, we bridged this gap by exploring the visual and conceptual attributes driving the high memorability observed for symbols. Participants were tested on their memory for conventional symbols (e.g., !@#$%) before sorting them based on visual or conceptual features. Principal component analyses performed on the sorting data then revealed which of these features predict symbol memorability. An artificial image generator was then used to form novel symbols while accentuating or downplaying predictive features to create a set of memorable and forgettable symbols, respectively. Both recognition and cued-recall memory performance were substantially improved for symbols that were designed to be memorable. This work suggests that specific visual attributes drive image memorability and offers initial evidence that memorability can be engineered.

  • Thomas Langlois — Efficient Computations and Representations for Perceptual Inference and Communication

    February 4th, 2025 (in-person)

    In order to keep pace with a complex and ever-changing visual environment, the visual system must combine moment-to-moment sensory evidence with prior expectations that reflect predictable regularities in the environment. Although priors (and other subjective probability distributions) are key to visual perception, they are notoriously difficult to estimate because perception is an inherently private (subjective) experience. In this talk, I will highlight work using large-scale serial reproduction experiments to obtain stable estimates of subjective probability distributions in visual memory. I will also discuss recent work elucidating how neural population activity in the PFC integrates prior expectations with sensory signals during visual perception in macaque monkeys. Time permitting, I will highlight my current work investigating the relation between perceptual representations and emergent communication using the Information Bottleneck (IB) Principle.

  • Qi Lin — Individual differences in prefrontal coding of visual features

    February 11th, 2025 (online)

    Each of us perceives the world differently. What may underlie such individual differences in perception? In this talk, I will characterize the role of the lateral prefrontal cortex (LPFC) in vision, with a specific focus on individual differences. Using a 7T fMRI dataset, I first show that encoding models relating visual features extracted from a deep neural network to brain responses to natural images robustly predict responses in patches of LPFC. Intriguingly, there are more substantial individual differences in the coding schemes of LPFC compared to visual regions. I will then present computational work showing how such amplification of individual differences could result from a neural architecture involving random reciprocal connections between sensory and high-level regions. Lastly, I will discuss ongoing work exploring the behavioral consequences of such individual differences in LPFC coding. Together, this work demonstrates the under-appreciated role of LPFC in visual processing and suggests that LPFC may underlie the idiosyncrasies in how different individuals experience the visual world.

  • Harini Sankar — Modeling the influence of semantic context in spoken word recognition

    February 18th, 2025 (in-person)

    Spoken word recognition is a context-dependent process. Studies have shown that the semantic association between words not only influences behavioral responses to ambiguous speech sounds but also influences how we encode the sounds themselves. The influence of semantic context has also been shown to persist over longer spans of time. While earlier computational models of spoken word recognition have captured various aspects of speech perception, they have yet to integrate the role of long-distance semantic dependencies. A model that learns these semantic associations in a self-supervised manner could also demonstrate how humans learn and use semantic associations between words in everyday speech. In this project, I created two models — a simple recurrent network (SRN) and a long short-term memory (LSTM) network — trained on word pairs that varied in their degree of semantic association. The models were able to learn the semantic associations between word pairs in a self-supervised manner. I will also discuss how the models could use these learned semantic associations to influence their encoding of ambiguous phonemes, in a way that reflects results from behavioral and electrophysiological data.

  • Michael Beyeler — From Perception to Function: Toward a Usable and Intelligent Bionic Eye

    March 4th, 2025 (online)

    How can we restore meaningful vision to people with incurable blindness? Despite advances in visual neuroprosthetics (bionic eyes), current implants provide only rudimentary percepts that do not always translate into functional vision. In this talk, I will discuss our work on computational models that predict what implant users might "see", shedding light on the perceptual challenges of prosthetic vision. I will also explore why implantees use their implants less frequently than expected, highlighting key usability barriers and the need for smarter integration. Finally, I will outline how personalized stimulation strategies and closed-loop prostheses could help bridge the gap between perception and function, moving beyond basic phosphenes toward a usable and intelligent bionic eye.

  • Keith Doelling — Sequence prediction in Natural and Artificial Perception

    March 11th, 2025 (online)

    Lived experience unfolds as sequences of information. These sequences strongly influence the processing of current stimuli, providing context on which to establish surprise and attention or reconstruct missing information. Still, the neural mechanisms that compute contextual information and implement it in the processing of current events remain a critical open question for our understanding of naturalistic sequences like speech and music. My work combines human electrophysiology with computational models of naturalistic perception to assess how sequential information is computed and deployed in neural processing. I have applied this framework in several cases: 1) the study of temporal prediction in rhythms, linking Bayesian and neural mass models to show how oscillatory activity can be linked to a Bayesian prior for rhythmicity, 2) scaling to real-world content predictions using deep learning models of naturalistic music, and 3) examining the consequences of sequential processing for clinical audiograms. These projects demonstrate the broad implications of sequential information processing for neural activity, its potential consequences for processing disorders like hearing loss, and its role in higher-order cognition such as memory and attention.

  • No talk on March 18th, 2025 (Spring Break)
  • Galit Yovel — Disentangling Vision and Language in Human Mental Representations with Deep Learning

    March 25th, 2025 (online)

    Mental representations of familiar categories are composed of both visual and semantic information. These two types of information are intertwined in mental representations, making them difficult to disentangle. Artificial neural networks trained on either images or text allow us to measure the distinct contributions of visual and semantic information in mental representations and neural responses to images of familiar categories. Our findings reveal that representations of familiar images in perception and short-term memory are dominated by visual information, whereas the representations in long-term memory are dominated by semantic information. We further use this approach to assess the extent to which vision and language convey similar or distinct information. Our findings highlight the interplay between visual and semantic information in shaping mental representations and offer new insights into their organization across cognitive processes.

  • Cathleen Moore — Exploring the Fundamental Role of Object Structure in Vision

    April 1st, 2025 (in-person)

    Demonstrations of perceptual organization processes like grouping, surface completion, figure-ground assignment, and correspondence across time and space are fascinating not just because of their compelling phenomenology, but because they provide insight into different levels of representation and their functions within the visual cognitive system more generally. I will present research that has focused on some of the functional consequences of perceptual organization in visual processing as it unfolds over time. I suggest that an implication of this work is that perceptual organization processes serve to establish dynamic representations of the current object structure of the world, which in turn determines how newly sampled visual information is integrated into existing representations of the external world. This function of perceptual organization, it will be argued, is a critical determinant of what we experience and what we miss.

  • Clara Colombatto — Perceiving Perception and Attending to Attention

    April 8th, 2025 (online)

    Our visual experience is determined not only by extrinsic properties (such as colors and shapes) but also by a type of peer pressure: We constantly monitor (and follow) where others are looking, and many past studies have emphasized the importance of others' eyes as uniquely powerful stimuli. In this talk, I will argue that perception is socially sophisticated, as it is driven not merely by tracking others' eye and head movements, but rather by the perception of underlying mental states such as others' attention and intentions. I will support this view by showing how gaze effects are attenuated when the eyes do not signal any underlying pattern of attention and intentions, and conversely, how our visual system spontaneously prioritizes others' degree of attention (vs. distraction). These studies span various aspects of perception, from visual awareness to time perception, and together demonstrate that perception itself is intrinsically 'social': ultimately, what matters for visual experience is not just perceiving and attending to the relevant physical features, but rather 'perceiving perception', and 'attending to attention'.

  • Daniel Albohn — Modeling Subjective Preferences

    April 15th, 2025 (in-person)

    Human preferences are inherently subjective—individuals have their preferred types of artwork, shoes to wear, and who they find attractive. However, much of what is known about human preference comes from aggregated data that essentially treat idiosyncratic differences as noise. A holistic understanding of human preference requires building individualized models. In this talk, I demonstrate that while some judgments exhibit shared patterns across individuals, others are highly idiosyncratic and difficult to statistically model. Using generative artificial intelligence, I showcase a novel method to visualize and quantify individual differences in perception, creating valid, robust, and photorealistic representations of how people perceive their world. These findings have significant implications for understanding the influence of individual variation on diverse domains, including visual stereotypes, consumer behavior, and clinical interventions.

  • Lucy Cui — Beyond the Bar: Judging Dot Means on Bar Graphs

    April 22nd, 2025 (online)

    Superimposing data points over bar graphs has become more popular. This practice may be motivated by data transparency, and it may also resolve some of the concerns that bar graphs raise for interpreting results and reasoning about data accurately. For example, people tend to believe that dots within the bar contribute more to the bar edge than dots outside the bar, a phenomenon known as the within-bar bias. I will present studies investigating whether people show a within-bar bias when viewing data points superimposed over a bar graph. We had participants view graphs in which we manipulated bar and data features (bar vs. line/control, filled vs. unfilled bar, same-color vs. different-color data, distribution of dots) and judge whether the data/dot mean was above or below the bar edge. We discuss how graph features and data features influence people's performance and response time on this task, as well as implications for data visualization design.

  • Michael Cohen — The cognitive and neural limitations of perceptual awareness

    April 22nd, 2025 (online)

    What are the limits of perceptual awareness, and what are the cognitive and neural factors that determine those limits? In this talk, I will first describe a series of behavioral experiments using head-mounted virtual reality (VR), traditional psychophysics, and deep learning methods (i.e., convolutional neural networks) that aim to identify what parts of the external world are consciously perceived at any given moment. Then, I will use a combination of EEG and fMRI to examine the neural factors that determine whether or not a piece of information is ultimately accessed by awareness, with particular focus on a newly discovered neural signature of conscious perception that generalizes to both auditory and visual consciousness. Taken together, this collection of results offers new insights into the limits of human cognition and opens the door for many future studies aimed at understanding these limitations.

  • Yiwen Wang

    May 6th, 2025 (in-person)