Discovering Temporally Compositional Neural Manifolds with Switching Infinite GPFA

Changmin Yu, Maneesh Sahani, Máté Lengyel

Research output: Working paper/Preprint

Abstract

Gaussian Process Factor Analysis (GPFA) is a powerful latent variable model for extracting low-dimensional manifolds underlying population neural activity. However, standard GPFA models have two limitations: the number of latent factors must be pre-specified or selected through heuristics, and all factors contribute at all times. We propose the infinite GPFA model, a fully Bayesian non-parametric extension of classical GPFA that incorporates an Indian Buffet Process (IBP) prior over the factor loading process, making it possible to infer a potentially infinite set of latent factors and to identify, at each time point, which factors contribute compositionally to neural firing. Learning and inference in the infinite GPFA model are performed through variational expectation-maximisation, and we additionally propose scalable extensions based on sparse variational Gaussian Process methods. We empirically demonstrate that the infinite GPFA model correctly infers dynamically changing activations of latent factors on a synthetic dataset. By fitting the infinite GPFA model to the population activity of hippocampal place cells during spatial navigation, we identify non-trivial and behaviourally meaningful dynamics in the neural encoding process.
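The generative structure described in the abstract can be illustrated with a minimal simulation. This is a sketch under stated assumptions, not the authors' implementation: it uses a truncated stick-breaking approximation to the IBP prior, a squared-exponential GP kernel, and per-time-point Bernoulli factor activations; all variable names and hyperparameters (alpha, ell, the truncation level K) are illustrative.

```python
# Sketch of the infinite GPFA generative process (illustrative, not the paper's code):
# GP latents, an IBP-style binary loading mask via truncated stick-breaking,
# and linear-Gaussian observations whose active factors change over time.
import numpy as np

rng = np.random.default_rng(0)
T, K, N = 200, 8, 30          # time points, truncated factor count, neurons
alpha, ell = 2.0, 20.0        # assumed IBP concentration and GP length-scale

# GP latents: one squared-exponential GP draw per factor.
t = np.arange(T)
Kmat = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)
X = rng.multivariate_normal(np.zeros(T), Kmat + 1e-6 * np.eye(T), size=K)  # (K, T)

# IBP prior, truncated stick-breaking: factor k is active with probability pi_k.
nu = rng.beta(alpha, 1.0, size=K)
pi = np.cumprod(nu)                       # decreasing activation probabilities
Z = rng.random((K, T)) < pi[:, None]      # binary mask: which factors load when

# Observations: the loading matrix applies only to currently active factors.
C = rng.normal(0.0, 1.0, size=(N, K))
d = rng.normal(0.0, 0.1, size=N)
Y = C @ (Z * X) + d[:, None] + 0.1 * rng.normal(size=(N, T))  # (N, T)
```

In this sketch, the compositionality described in the abstract corresponds to the time-varying binary mask Z: at each time point only a subset of the GP latents drives the observations, and inference in the actual model would recover both X and Z from Y.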
Original language: English
Number of pages: 20
State: In preparation - 4 Oct 2024
