Null Models of Large Neural Networks

Speaker

Abstract

One role of theory is to guide future experiments: What should we aim to measure? Which experimental results should surprise us? I will argue here that simple random-network models of neural dynamics explain, to within experimental accuracy, many features of large population recordings in a variety of nervous systems. Observing such features is therefore not surprising in the context of random null models. I will discuss how this affects, or can affect, our interpretation of experiments and our choice of which experiments to do next, and how it raises hopes for building next-level theories of nervous systems that finally leave the neuron behind.
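To make the idea of a "random network null model" concrete, here is a minimal sketch, not the speaker's model: a recurrent rate network with i.i.d. Gaussian connectivity, simulated and summarized by the population covariance spectrum. All parameter choices (network size, gain, tanh nonlinearity, time step) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500          # number of units (assumed)
g = 1.5          # coupling gain; g > 1 yields rich, irregular dynamics
dt = 0.1         # integration step (arbitrary units)
T = 2000         # number of time steps

# Random connectivity: entries drawn i.i.d. with variance g^2 / N.
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

x = rng.normal(0.0, 1.0, size=N)   # initial state
rates = np.empty((T, N))

for t in range(T):
    r = np.tanh(x)                  # firing rates
    x = x + dt * (-x + J @ r)       # leaky rate dynamics: dx/dt = -x + J r
    rates[t] = r

# A summary statistic one might compare against population recordings:
# the covariance spectrum, i.e. how much variance the top modes capture.
cov = np.cov(rates.T)
eigvals = np.linalg.eigvalsh(cov)[::-1]
print("fraction of variance in top 10 modes:",
      eigvals[:10].sum() / eigvals.sum())
```

A null model of this kind asks whether features found in recordings (e.g. low-dimensional shared variability) already arise from generic random connectivity, before any task-specific structure is invoked.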

Learning Objectives:

1. Explain how large, simple neural networks can give rise to the complex activity seen in experimental recordings.

2. Introduce the concept of latent-variable dynamics.

3. Define coarse-graining modeling approaches.

