How does attentional modulation of neural activity enhance performance? Here we use a deep convolutional neural network as a large-scale model of the visual system to address this question. We model the feature similarity gain model of attention, in which attentional modulation is applied according to neural stimulus tuning. Using a variety of visual tasks, we show that neural modulations of the kind and magnitude observed experimentally lead to performance changes of the kind and magnitude observed experimentally. We find that, at earlier layers, attention applied according to tuning does not successfully propagate through the network, and has a weaker impact on performance than attention applied according to values computed for optimally modulating higher areas. This raises the question of whether biological attention might be applied at least in part to optimize function rather than strictly according to tuning. We suggest a simple experiment to distinguish these alternatives.
Object Category Tuning Curves and Gradients
This file contains tuning curves computed from responses (to standard images) for the 20 object categories used. Four types of gradient values are also included, defined according to the image type used (1 = merged, 2 = array) and the classifier (binary or 1000-way).
object_GradsTCs.zip
Binary Object Classifiers
TensorFlow weights for each of the 20 binary classifiers (one per category) used to replace the original 1000-way classifier.
catbins.zip
Performance on Object Detection Tasks
These files contain true positive and true negative rates for attention applied in different ways to each of the 20 categories.
objperf.zip
CNN weights
The TensorFlow weights for the full VGG-16 convolutional neural network used in this study (originally provided by Davi Frossard).
vgg16_weights.npz
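The weight archive is a standard numpy `.npz` file, so it can be inspected without TensorFlow. A minimal sketch of loading it is below; the parameter names shown (`conv1_1_W`, `conv1_1_b`) are an assumption based on Frossard's layer naming, and the sketch builds a tiny stand-in archive so it runs without the real file.

```python
import numpy as np

def load_weights(path):
    # An .npz archive maps parameter names to arrays; .files lists the names.
    with np.load(path) as archive:
        return {name: archive[name] for name in archive.files}

# Stand-in for vgg16_weights.npz: key names here are assumed, not verified.
np.savez('demo_weights.npz',
         conv1_1_W=np.zeros((3, 3, 3, 64), dtype=np.float32),  # filter bank
         conv1_1_b=np.zeros(64, dtype=np.float32))             # biases
weights = load_weights('demo_weights.npz')
print(sorted(weights))            # parameter names in the archive
print(weights['conv1_1_W'].shape) # filter shape for the first conv layer
```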
Orientation Binary Classifiers
TensorFlow weights that replace the final 1000-way classifier with a series of binary classifiers for different orientations.
ori_catbins.zip
Orientation Tuning Curves and Gradients
Tuning curve values and binary detection gradient values for oriented grating stimuli
ori_TCGrads.zip
Performance on Orientation Detection Tasks
Files containing true positive and true negative performance values for attention applied in different ways
oriperf.zip
Orientation Task Images
This file contains images, each with two oriented stimuli, along with information on the stimulus locations, orientations, and colors.
Stim2Constr540_oriims.npz
Object Task Images
Merged and array object category images.
objims.zip
Recorded Activity during Oriented Stimuli
Each file in this folder (named according to the orientation and the layer at which attention was applied) contains activity recorded from each feature map, averaged over spatial dimensions and images. The data are arranged as a list of 5 numpy arrays (one for each set of convolutional layers between pooling layers). Each array has dimensions: number of convolutional layers in that part of the architecture (2 or 3) x number of feature maps at that layer x orientation presented (9) x strength of attention. The tuning values are also included, as they are needed to reproduce Figure 6.
ori_activity.zip
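The list-of-arrays layout described above can be sketched as follows. This is a synthetic illustration only: the feature-map and conv-layer counts are the standard VGG-16 values, and the number of attention strengths (4 here) is an assumption, since the actual count in the files is not stated.

```python
import numpy as np

n_orientations = 9                       # orientations presented, per the description
n_strengths = 4                          # assumed number of attention strengths
convs_per_block = [2, 2, 3, 3, 3]        # conv layers between pooling layers (VGG-16)
maps_per_block = [64, 128, 256, 512, 512]  # feature maps per block (VGG-16)

# One array per block: (n_convs, n_maps, n_orientations, n_strengths),
# filled with zeros here in place of the recorded activity.
activity = [np.zeros((c, m, n_orientations, n_strengths))
            for c, m in zip(convs_per_block, maps_per_block)]

for block, arr in enumerate(activity, start=1):
    print(f'block {block}: shape {arr.shape}')
```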