Active sensing in the categorization of visual patterns

Scott Cheng Hsin Yang*, Máté Lengyel, Daniel M. Wolpert

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations.
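The abstract describes a Bayesian active sensor that selects fixation locations to maximize information about the category being judged. As a minimal sketch of that general idea, and not the paper's actual algorithm, the Python example below scores candidate locations by expected information gain under a simple discrete observation model; the function names, array shapes, and the discretization are assumptions made purely for illustration.

```python
import numpy as np


def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def expected_information_gain(p_category, likelihoods):
    """Expected reduction in category uncertainty from sampling one location.

    p_category  : (C,) current belief over categories.
    likelihoods : (C, O) probability of each discretized observation at this
                  location under each category (each row sums to 1).
    """
    p_obs = p_category @ likelihoods  # predictive distribution over observations, shape (O,)
    # Posterior over categories for each possible observation (Bayes' rule), shape (C, O).
    post = p_category[:, None] * likelihoods / np.maximum(p_obs, 1e-12)
    expected_post_entropy = sum(p_obs[o] * entropy(post[:, o]) for o in range(p_obs.size))
    return entropy(p_category) - expected_post_entropy


def choose_next_location(p_category, likelihoods_per_location):
    """Return the index of the candidate location with the highest expected information gain."""
    scores = [expected_information_gain(p_category, L) for L in likelihoods_per_location]
    return int(np.argmax(scores))


if __name__ == "__main__":
    # Two categories, two candidate locations, binary observations (toy example).
    belief = np.array([0.5, 0.5])
    loc_uninformative = np.array([[0.5, 0.5], [0.5, 0.5]])  # observations identical under both categories
    loc_informative = np.array([[0.9, 0.1], [0.2, 0.8]])    # observations discriminate the categories
    print(choose_next_location(belief, [loc_uninformative, loc_informative]))  # -> 1
```

In this toy setting the informative location is chosen because observing it is expected to reduce posterior entropy over categories, whereas the uninformative location yields zero expected gain; the paper's algorithm applies the same information-maximization principle to fixations over continuous visual patterns.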

Original language: English
Article number: e12215
Journal: eLife
Volume: 5
Issue number: FEBRUARY 2016
DOI: 10.7554/eLife.12215
State: Published - 10 Feb 2016
