Neural Network Models of Visual Learning and Development

C.E. Credits: P.A.C.E. CE, Florida CE
Speaker

Abstract

Children efficiently develop their visual systems by learning from their environment. How this development unfolds from noisy, real-world data streams remains largely unknown. Deep neural networks trained on large annotated visual datasets have become state-of-the-art models both for visual recognition tasks and for predicting neuronal responses in the primate visual stream. However, the large number of annotations required to train these networks makes them implausible as models of real visual development and learning. In this talk, I will first show that recent rapid progress in unsupervised learning has largely closed this gap. We find that neural network models trained with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of label-supervised models, even when trained on noisy and limited first-person-view datasets collected from infants. I will then show that augmenting these models with memory enables them to capture human learning dynamics on both real-time and lifelong time scales, and that more recent unsupervised learning algorithms have diverged from human-like learning compared with earlier algorithms.
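
The abstract refers to deep unsupervised contrastive embedding methods. As a rough illustration of what such an objective looks like, here is a minimal sketch of a SimCLR-style contrastive (NT-Xent) loss in PyTorch; the function name, temperature value, and batch setup are illustrative assumptions, not the specific training setup used in the work described in the talk.

```python
# Minimal sketch of a contrastive embedding objective (SimCLR-style NT-Xent).
# Illustrative only: the talk's actual architecture, augmentations, and
# hyperparameters are not specified in this abstract.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive loss over a batch of embeddings.

    z1, z2: [N, D] embeddings of two augmented views of the same N images.
    Matching rows are treated as positives; all other rows as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # [2N, D]
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    n = z1.shape[0]
    # Mask out self-similarity so each row only competes against other samples.
    sim.fill_diagonal_(float('-inf'))
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Example usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

Because the objective only asks that two views of the same image map to nearby embeddings, no category labels are required, which is what makes this family of methods relevant as a model of unsupervised visual development.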

Learning Objectives:

1. Explain why categorization-trained deep neural networks cannot model how humans develop their visual system.

2. Describe, at a high level, how contrastive learning algorithms train neural network models from images.

3. Paraphrase how real-time and lifelong visual learning dynamics can be modeled using unsupervised learning algorithms (a toy sketch follows this list).
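
As a rough illustration of the idea behind objective 3, the toy sketch below pairs encoder outputs with an explicit exemplar memory so that familiarity with a stimulus grows over repeated exposures. The class name, similarity measure, and simulated exposures are hypothetical stand-ins; the abstract does not describe the actual memory mechanism used in the models discussed in the talk.

```python
# Toy sketch: pairing (hypothetical) unsupervised encoder outputs with an
# explicit exemplar memory to illustrate real-time learning dynamics.
# All details here are assumptions for illustration, not the talk's method.
import numpy as np

class ExemplarMemory:
    def __init__(self):
        self.store = []                      # list of stored embedding vectors

    def familiarity(self, z):
        """Return max cosine similarity to any stored exemplar (0 if empty)."""
        if not self.store:
            return 0.0
        M = np.stack(self.store)
        M = M / np.linalg.norm(M, axis=1, keepdims=True)
        z = z / np.linalg.norm(z)
        return float(np.max(M @ z))

    def add(self, z):
        self.store.append(z)

# Repeated exposure to the same (noisy) stimulus increases familiarity.
rng = np.random.default_rng(0)
stimulus = rng.normal(size=64)               # stands in for an encoder output
memory = ExemplarMemory()
for t in range(5):
    view = stimulus + 0.1 * rng.normal(size=64)   # noisy re-encounter
    print(f"exposure {t}: familiarity = {memory.familiarity(view):.2f}")
    memory.add(view)
```

Running the loop shows familiarity starting at zero and rising across exposures, which is the kind of exposure-dependent dynamic a memory-augmented model can be compared against human learning curves.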
 

