Alpha oscillations at roughly ten hertz are the strongest rhythmic signals recorded from the human brain. They arise in the occipital lobes and are associated with visual perception and cognition. A flicker stimulus can also evoke oscillations in the human EEG at the frequency of the flicker, and these oscillations are evoked even when the flicker is not consciously perceived. The evoked oscillations show resonances at 10, 20, 40, and 80 hertz. When a subject cannot perceive the flicker and the source appears as a steady light, flicker fusion is said to have occurred. Psychophysics experiments on flicker fusion present a human subject with a flicker stimulus, which the subject classifies as either flickering or fused; the physical parameters of the stimulus that determine this classification are varied across trials. Deep neural networks, on the other hand, have been proposed as models of perception and cognition that can make falsifiable predictions. In this work, motivated by the feedforward and feedback visual pathways, we propose a deep convolutional recurrent neural network that can be trained on psychophysics data. The network takes a time-series representation of the flicker stimulus as input, and the subject's binary classification of the stimulus as flickering or fused as the target output. We show that an intermediate convolution layer of such a network, trained on psychophysics data, produces a sinusoidal output when given a representation of a ten hertz stimulus, demonstrating that an in silico computation of alpha oscillations is possible with such a network.
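As a minimal sketch of the kind of input the network would receive, the snippet below builds a hypothetical time-series encoding of a 10 Hz flicker stimulus (a square-wave luminance trace; the sample rate, duty cycle, and encoding are illustrative assumptions, not the paper's actual representation), and then shows why an intermediate convolution layer could carry the stimulus rhythm: linear filtering preserves the dominant 10 Hz component of the input.

```python
import numpy as np

def flicker_stimulus(freq_hz, duration_s=1.0, sample_rate=1000):
    """Hypothetical square-wave luminance trace: 1.0 = light on, 0.0 = off,
    50% duty cycle. One possible time-series encoding of a flicker stimulus."""
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    return (np.sin(2 * np.pi * freq_hz * t) >= 0).astype(float)

# A 10 Hz flicker sampled at 1 kHz: ten on/off cycles per second.
stim = flicker_stimulus(10.0)

# The stimulus spectrum peaks at the flicker frequency (a square wave
# carries energy at the fundamental and its odd harmonics).
freqs = np.fft.rfftfreq(stim.size, d=1.0 / 1000)
spectrum = np.abs(np.fft.rfft(stim - stim.mean()))
peak_hz = freqs[np.argmax(spectrum)]

# A convolution layer (before its nonlinearity) is a linear filter, so its
# response to the 10 Hz input remains dominated by a 10 Hz component.
# The kernel here is an arbitrary smoothing filter, chosen for illustration.
kernel = np.exp(-np.arange(25) / 10.0)
filtered = np.convolve(stim - stim.mean(), kernel, mode="same")
filtered -= filtered.mean()
filt_peak_hz = freqs[np.argmax(np.abs(np.fft.rfft(filtered)))]
```

Both `peak_hz` and `filt_peak_hz` come out at 10 Hz, consistent with the idea that a rhythmic stimulus can drive rhythmic activity in an intermediate layer of the network.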