We investigate using a previously developed spiking neuron model of layer 4 of primary visual cortex (V1)
as a recurrent network whose activity is subsequently linearly decoded, given a set of complex visual stimuli.
Our motivation is based on the following: 1) linear decoders have proven useful in analyzing a variety of
neural signals, including spikes, firing rates, local field potentials, voltage-sensitive dye imaging, and scalp
EEG; 2) linear decoding of activity generated by highly recurrent, nonlinear networks with fixed
connections has been shown to provide universal computational capabilities, with such methods termed liquid
state machines (LSM) and echo state networks (ESN); 3) in LSMs and ESNs, often little is assumed about
the architecture of the recurrent network. However, it is likely that for a given type of stimulus/input the architecture of a biologically constrained recurrent network is important, since it shapes the spatio-temporal correlations across the neuronal population, which can potentially be exploited efficiently by an appropriate decoder.
We conduct experiments using a two-alternative forced choice paradigm of face and car discrimination, where
a set of 12 face (Max Planck Institute face database) and 12 car grey-scale images are used. All the images
(512 x 512 pixels, 8 bits/pixel) have identical Fourier magnitude spectra. The phase spectra of the images are
manipulated using the weighted mean phase method to introduce noise, resulting in a set of images graded by
phase coherence. The sequence of images is presented to the V1 model (detailed in ) in a block design,
where a face or car image is flashed for 50 ms, followed by an interval of 200 ms in which a mean-luminance
background is shown.
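As a rough illustration, the phase-coherence manipulation can be sketched as follows. This is a simplified stand-in rather than the exact weighted mean phase procedure used to build the stimulus set: the magnitude spectrum is held fixed while the image's phase is mixed, via a circular weighted mean, with uniform random phase, with the weight playing the role of the coherence level.

```python
import numpy as np

def phase_coherence_image(img, coherence, rng=None):
    """Degrade an image's phase spectrum toward random phase.

    Simplified stand-in for the weighted mean phase method: the output
    keeps the original magnitude spectrum and mixes the original phase
    with uniform random phase, weighted by `coherence` in [0, 1].
    """
    rng = np.random.default_rng(rng)
    F = np.fft.fft2(img)
    mag, phase = np.abs(F), np.angle(F)
    noise = rng.uniform(-np.pi, np.pi, size=phase.shape)
    # circular weighted mean of the original and random phases
    mixed = np.angle(coherence * np.exp(1j * phase)
                     + (1 - coherence) * np.exp(1j * noise))
    return np.real(np.fft.ifft2(mag * np.exp(1j * mixed)))

# coherence = 1 reproduces the original image (up to numerical error)
img = np.random.rand(64, 64)
restored = phase_coherence_image(img, 1.0)
```

At coherence 0 the output is pure phase noise with the original magnitude spectrum; intermediate values yield the graded image set described above.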
background is shown. We use a linear decoder to map the spatio-temporal activity in the recurrent V1 model to
a decision on whether the input stimulus is a face or a car. We employ a sparsity constraint on the decoder in
order to control the dimension of the effective feature space. Sparse decoding is also consistent with previous
research efforts on decoding multi-unit recordings and optical imaging data.
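One standard way to impose such a sparsity constraint is an L1 penalty on the decoder weights, trained by proximal gradient descent (ISTA). The sketch below is a minimal illustration under that assumption — a squared-error loss on binary labels — not necessarily the exact decoder used in the study; the flattened feature vector stands in for the spatio-temporal "word" of binned spikes.

```python
import numpy as np

def train_sparse_decoder(X, y, lam=0.01, lr=0.1, n_iter=500):
    """L1-regularized linear decoder trained by proximal gradient (ISTA).

    X: (trials, features) binned spike counts, flattened across neurons
       and time bins (the spatio-temporal "word").
    y: (trials,) labels, +1 = face, -1 = car.
    Returns a weight vector; sign(X @ w) gives the decision.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # gradient of the mean squared-error loss
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        # soft-threshold: the proximal step for the L1 penalty,
        # which drives small weights exactly to zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

# toy demo: two linearly separable classes in 20 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
w = train_sparse_decoder(X, y, lam=0.05)
accuracy = np.mean(np.sign(X @ w) == y)  # training accuracy on this toy problem
```

Raising `lam` shrinks more weights to exactly zero, which is how the penalty controls the dimension of the effective feature space.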
We evaluate the accuracy of linearly decoding the activity in the V1 model and compare it to a
set of psychophysical data collected using the same stimuli. We construct a neurometric function for the decoder, with
the variable of interest being the stimulus phase coherence. We find that linear decoding of neural activity in a recurrent V1 model can yield discrimination accuracy that is at least as good as, if not better than, human
psychophysical performance for relatively complex visual stimuli. Thus, substantial information supporting such
super-accurate decoding remains at the level of V1, and the loss of information needed to better match behavioral
performance is predicted to occur downstream, in the decision-making process. We also find a small
improvement in discrimination accuracy when a spatio-temporal word is used relative to a spatial-only word,
providing insight into the utility of a temporal vs. a rate code for behaviorally relevant decoding.
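The neurometric function described above amounts to tabulating the decoder's proportion correct at each phase-coherence level, directly comparable to the psychometric curve obtained from human observers. A minimal sketch — the variable names and the toy accuracy model in the demo are illustrative only:

```python
import numpy as np

def neurometric_function(coherences, decisions, labels):
    """Proportion correct at each phase-coherence level.

    coherences: (trials,) phase coherence of each stimulus
    decisions:  (trials,) decoder output, +1 = face, -1 = car
    labels:     (trials,) true stimulus category
    Returns (levels, proportion_correct): the neurometric curve.
    """
    levels = np.unique(coherences)
    pc = np.array([np.mean(decisions[coherences == c] == labels[coherences == c])
                   for c in levels])
    return levels, pc

# toy demo: simulate a decoder whose accuracy grows with coherence
rng = np.random.default_rng(0)
coh = rng.choice([0.2, 0.35, 0.5], size=300)
labels = rng.choice([-1, 1], size=300)
correct = rng.random(300) < (0.5 + coh)        # hypothetical accuracy model
decisions = np.where(correct, labels, -labels)
levels, pc = neurometric_function(coh, decisions, labels)
```

Overlaying this curve on the psychometric function from the same stimuli is what supports the comparison to human performance reported above.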