Scott Cheng Hsin Yang*, Máté Lengyel, Daniel M. Wolpert
Research output: Contribution to journal › Article › peer-review
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise, and inaccuracies in eye movements, and that the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations.
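A Bayesian active sensor of the kind described in the abstract chooses each fixation to maximize the expected information gained about the pattern category. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the category names ("patchy", "stripy"), the Bernoulli per-location observation model, the noiseless observations, and the greedy expected-information-gain loop are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of information-maximizing active sensing for a
# two-category task (hypothetical setup, not the paper's stimuli or model).
rng = np.random.default_rng(0)

n_locations = 25
# Hypothetical category-specific probabilities that a location is "on".
p_on = {
    "patchy": rng.uniform(0.1, 0.9, n_locations),
    "stripy": rng.uniform(0.1, 0.9, n_locations),
}
categories = list(p_on)

def posterior(observations):
    """Posterior over categories given {location: 0/1} observations (uniform prior)."""
    log_p = np.zeros(len(categories))
    for c_idx, c in enumerate(categories):
        for loc, v in observations.items():
            p = p_on[c][loc]
            log_p[c_idx] += np.log(p if v == 1 else 1.0 - p)
    log_p -= log_p.max()
    p = np.exp(log_p)
    return p / p.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_information_gain(loc, observations):
    """Expected reduction in category entropy from sampling location `loc` next."""
    post = posterior(observations)
    h_now = entropy(post)
    eig = 0.0
    for v in (0, 1):
        # Predictive probability of observing value v at loc.
        p_v = sum(post[i] * (p_on[c][loc] if v == 1 else 1.0 - p_on[c][loc])
                  for i, c in enumerate(categories))
        if p_v > 0:
            eig += p_v * (h_now - entropy(posterior({**observations, loc: v})))
    return eig

# Greedy active-sensing loop: fixate the most informative location each step.
observations = {}
true_category = "patchy"
for step in range(5):
    unseen = [l for l in range(n_locations) if l not in observations]
    best = max(unseen, key=lambda l: expected_information_gain(l, observations))
    observations[best] = int(rng.random() < p_on[true_category][best])
    print(step, best, posterior(observations))
```

In this toy version, each step picks the unvisited location whose observation is expected to shrink the posterior entropy over categories the most, which is the information-maximization principle the abstract refers to.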
| Original language | English |
| --- | --- |
| Article number | e12215 |
| Number of pages | 22 |
| Journal | eLife |
| Volume | 5 |
| DOIs | |
| State | Published - 10 Feb 2016 |