SNAIL Blogs

Representational asymmetries in neural networks: why should we care?

Neural networks, both biological and artificial, don’t always distribute neurons evenly across input features: some features are encoded by more neurons than others. Such representational asymmetries are found all over the cortex, for features including cardinal orientations, centrifugal motion stimuli, horizontal disparities, and expansive optic flow.
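To make the idea concrete, here is a minimal sketch of what such an asymmetry looks like in a population of units. The numbers below are made up purely for illustration: we simulate a population whose preferred orientations oversample the cardinal directions (0° and 90°) relative to the obliques, loosely mimicking the cortical bias mentioned above, and then count the fraction of units preferring each orientation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate preferred orientations (degrees) and a hypothetical sampling bias:
# cardinal orientations (0, 90) are oversampled relative to obliques (45, 135).
# An even code would assign probability 0.25 to each orientation.
orientations = np.array([0, 45, 90, 135])
bias = np.array([0.35, 0.15, 0.35, 0.15])

# Assign each of 1000 simulated units a preferred orientation under this bias.
preferred = rng.choice(orientations, size=1000, p=bias)

# Fraction of the population preferring each orientation.
counts = {int(theta): float(np.mean(preferred == theta)) for theta in orientations}
print(counts)
```

Counting how many units are tuned to each feature value, as above, is one simple way to quantify how asymmetric a representation is; more units per feature generally means finer discrimination for that feature.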

As discussed in a recent paper by Andrew Lampinen and colleagues, several factors can shape this asymmetric representation of features (Figure 1). These include how easily a feature can be extracted from the input, its prevalence in the training data, and the order in which features are ... [continue]