Recent advances in machine learning have shown that deep neural networks (DNNs) can provide powerful and flexible models of neural sensory processing. In the auditory system, standard linear-nonlinear (LN) models are unable to account for high-order cortical representations, but it has remained unclear what additional insights DNNs can provide, particularly in the case of single-neuron activity. DNNs can be difficult to fit to the relatively small datasets available from single neurons. In the current study, we developed a population encoding model for a large number of neurons recorded during presentation of a large, fixed set of natural sounds. Leveraging signals from the population substantially improved performance over models fit to individual neurons. We tested a range of DNN architectures on data from primary and non-primary auditory cortex, varying the number and size of convolutional layers at the input and dense layers at the output. DNNs performed consistently better than LN models. Moreover, the DNNs were highly generalizable. The output layer of a model pre-fit using one population of neurons could be fit to different single units and/or different stimuli, with performance close to that of neurons in the original fit data. These results indicate that population encoding models capture a general set of computations performed by auditory cortex and can be analyzed to derive a characterization of auditory cortical function. Tools developed for this project as part of the BRAIN Initiative can be applied to a wide range of problems in the auditory and other neural systems.
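The population encoding architecture described above (convolutional layers over the stimulus at the input, dense layers shared across the population, and a per-neuron output layer that can be refit to new units) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: all dimensions, weight shapes, and the causal time-convolution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def population_encoding_model(spectrogram, conv_filters, dense_weights, readout_weights):
    """Map a stimulus spectrogram (freq x time) to predicted firing rates
    (neurons x time) via shared conv + dense stages and a per-neuron
    linear readout (hypothetical architecture for illustration)."""
    n_freq, n_time = spectrogram.shape
    n_filt, _, filt_len = conv_filters.shape
    # Causal convolution across time: each filter integrates the recent
    # spectrotemporal history at every time bin.
    conv_out = np.zeros((n_filt, n_time))
    for f in range(n_filt):
        for t in range(n_time):
            t0 = max(0, t - filt_len + 1)
            window = spectrogram[:, t0:t + 1]
            kernel = conv_filters[f, :, -window.shape[1]:]
            conv_out[f, t] = np.sum(window * kernel)
    hidden = relu(dense_weights @ relu(conv_out))  # dense layer shared across neurons
    return relu(readout_weights @ hidden)          # per-neuron output layer

# Toy dimensions: 18 frequency channels, 200 time bins, 40 neurons.
spec = rng.random((18, 200))
conv_w = rng.standard_normal((8, 18, 15)) * 0.1    # 8 filters, 15-bin history
dense_w = rng.standard_normal((16, 8)) * 0.1
readout_w = rng.standard_normal((40, 16)) * 0.1    # only this layer is refit for new units
rates = population_encoding_model(spec, conv_w, dense_w, readout_w)
print(rates.shape)  # (40, 200)
```

In this sketch, generalization to new neurons or stimuli corresponds to freezing `conv_w` and `dense_w` and refitting only `readout_w`, mirroring the pre-fit output-layer procedure described in the abstract.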
1. Applying deep learning to neural populations overcomes the data limitations that hinder model fitting for single neurons.
2. Deep learning models consistently outperform traditional linear-nonlinear models of single neurons in auditory cortex.
3. Population-based models generalize well to neurons not included in the original model fit.