Tagged: Neurons

Brain-computer interfaces

The human brain is perhaps the most fascinating and complex signal processing machine in existence. It transduces a variety of environmental signals (the senses: taste, touch, smell, sound, and sight), extracts information from these disparate signal streams, and ultimately fuses this information to enable behavior, cognition, and action. What is perhaps surprising is that the basic signal processing elements of the brain, i.e., neurons, transmit information at a relatively slow rate compared to transistors, switching about 10^6 times slower in fact. The brain, however, has the advantage of a tremendous number of neurons, all operating in parallel, and a highly distributed memory system of synapses (over 100 trillion in the cerebral cortex), and thus its signal processing capabilities may largely arise from its unique architecture. These facts have inspired a great deal of study of the brain from a signal processing perspective. Recently, scientists and engineers have focused on developing means by which to interface directly with the brain, essentially measuring neural signals and decoding them to augment and emulate behavior. This research area has been termed brain-computer interfaces and is the topic of this issue of IEEE Signal Processing Magazine.

Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity

In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder that imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as few as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
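The "decoding neuron" described above (linear summation, sigmoidal nonlinearity, few non-zero weights) can be sketched as L1-penalized logistic regression trained by proximal gradient descent. This is a minimal toy illustration, not the model from the talk: the neuron counts, firing rates, and penalty strength are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not the talk's data): 500 model neurons, 200 trials,
# with only the first 10 neurons informative about a binary stimulus.
n_trials, n_neurons, n_informative = 200, 500, 10
labels = rng.integers(0, 2, n_trials)
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
rates[:, :n_informative] += 4.0 * labels[:, None]   # informative cells fire more
X = (rates - rates.mean(axis=0)) / rates.std(axis=0)  # standardize responses

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_sparse_decoder(X, y, lam=0.05, lr=0.1, n_iter=2000):
    """A 'decoding neuron' (linear summation + sigmoid) trained with an
    L1 penalty via proximal gradient descent (soft-thresholding)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # L1 proximal step
        b -= lr * np.mean(p - y)
    return w, b

w, b = fit_sparse_decoder(X, labels)
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == labels)
n_active = int(np.sum(w != 0))   # few non-zero "synaptic weights" survive
```

The soft-thresholding step is what produces sparsity: weights whose accumulated evidence stays below the penalty are driven exactly to zero, so only a small fraction of the population contributes to the decision, echoing finding (3) above.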

Perceptual Decision Making Investigated via Sparse Decoding of a Spiking Neuron Model of V1

Recent empirical evidence supports the hypothesis that invariant visual object recognition might result from non-linear encoding of the visual input followed by linear decoding [1]. This hypothesis has received theoretical support through the development of neural network architectures that are based on a non-linear encoding of the input via recurrent network dynamics followed by a linear decoder [2], [3]. In this paper we consider such an architecture in which the visual input is non-linearly encoded by a biologically realistic spiking model of V1 and mapped to a perceptual decision via a sparse linear decoder. Novel to this work is that we 1) utilize a large-scale conductance-based spiking neuron model of V1 which has been well-characterized in terms of classical and extra-classical response properties, and 2) use the model to investigate decoding over a large population of neurons. We compare decoding performance of the model system to human performance by comparing neurometric and psychometric curves.
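The encode-then-decode architecture referenced in [2], [3] can be illustrated with a deliberately simplified stand-in: a fixed random recurrent network (echo-state style) plays the role of the non-linear encoder, and only a linear readout is trained. Everything here (network size, input statistics, class structure) is a hypothetical toy, not the paper's V1 model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the V1 model: a fixed random recurrent network provides
# the non-linear encoding; only the linear readout is trained.
n_in, n_res, n_steps = 2, 100, 50

W_in = rng.normal(0.0, 1.0, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def encode(u_seq):
    """Run an input sequence through the recurrent non-linearity and
    return the final network state as the code."""
    x = np.zeros(n_res)
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ u)
    return x

# Two stimulus classes: noisy input sequences with opposite mean drifts.
def make_trial(label):
    drift = np.array([1.0, -1.0]) if label else np.array([-1.0, 1.0])
    return drift * 0.1 + rng.normal(0.0, 0.3, (n_steps, n_in))

labels = rng.integers(0, 2, 120)
states = np.stack([encode(make_trial(y)) for y in labels])

# Linear decoder fit by least squares on the encoded states.
A = np.hstack([states, np.ones((len(labels), 1))])
w, *_ = np.linalg.lstsq(A, 2.0 * labels - 1.0, rcond=None)
accuracy = np.mean((A @ w > 0) == labels)
```

The design point this sketch makes concrete is the division of labor in the hypothesis: all non-linearity lives in the fixed encoder dynamics, while the trained stage is purely linear.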

Analysis of a gain control model of V1: Is the goal redundancy reduction?

In this paper we analyze a popular divisive normalization model of V1 with respect to the relationship between its underlying coding strategy and the extraclassical physiological responses of its constituent modeled neurons. Specifically, we are interested in whether the optimization goal of redundancy reduction naturally leads to reasonable neural responses, including reasonable distributions of responses. The model is trained on an ensemble of natural images and tested using sinusoidal drifting gratings, with metrics such as the suppression index and contrast-dependent receptive field growth compared to the objective function values for a sample of neurons. We find that even though the divisive normalization model can produce “typical” neurons that agree with some neurophysiology data, distributions across samples do not agree with experimental data. Our results suggest that redundancy reduction by itself does not necessarily give rise to the observed extraclassical receptive field phenomena, and that additional optimization dimensions and/or biological constraints must be considered.
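The divisive normalization operation at the heart of the model class analyzed above has a standard form: each unit's squared filter drive is divided by a constant plus a weighted sum of the squared drives of its normalization pool. A minimal numerical sketch, with made-up drives and a uniform pool (not the paper's fitted parameters):

```python
import numpy as np

def divisive_normalization(drives, w, sigma=1.0):
    """Standard divisive normalization: each unit's squared filter drive is
    divided by a semisaturation constant plus a weighted sum of the squared
    drives of its normalization pool."""
    sq = drives ** 2
    return sq / (sigma ** 2 + w @ sq)

# Hypothetical linear-filter drives of 5 model neurons to one stimulus.
drives = np.array([3.0, 0.5, 1.0, 2.0, 0.2])
w = np.full((5, 5), 0.2)          # uniform normalization pool weights

r = divisive_normalization(drives, w)

# Contrast saturation: doubling the input less than doubles the output,
# the signature suppressive effect of the normalization pool.
r2 = divisive_normalization(2.0 * drives, w)
```

Because the pool term grows with the stimulus, responses saturate with contrast; it is this suppressive mechanism whose population-level statistics the paper compares against physiology.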

Simulated optical imaging of orientation preference in a model of V1

Optical imaging studies have played an important role in mapping the orientation selectivity and ocular dominance of neurons across an extended area of primary visual cortex (V1). Such studies have produced images with a more or less smooth and regular spatial distribution of relevant neuronal response properties. This is in spite of the fact that results from electrophysiological recordings, though limited in their number and spatial distribution, show significant scatter/variability in the relevant response properties of nearby neurons. In this paper we present a simulation of the optical imaging experiments of ocular dominance and orientation selectivity using a computational model of the primary visual cortex. The simulations assume that the optical imaging signal is proportional to the averaged response of neighboring neurons. The model faithfully reproduces ocular dominance columns and orientation pinwheels in the presence of realistic scatter of single-cell preferred responses. In addition, we find the simulated optical imaging of orientation pinwheels to be remarkably robust, with the pinwheel structure maintained even when substantial random scatter is added to the orientation preference of single cells. Our results suggest that an optical imaging result does not necessarily, by itself, provide any obvious upper bound for the scatter of the underlying neuronal response properties on local scales.
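The core assumption of the simulation (the imaging signal is a local average over neighboring neurons) can be sketched on a toy pinwheel map. One subtlety the sketch makes explicit: orientation is circular with period 180°, so neighboring preferences must be vector-averaged via the angle-doubling trick rather than averaged directly. The map geometry, scatter amplitude, and averaging radius below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pinwheel map: preferred orientation is half the polar angle
# around a single pinwheel centre on a small patch of "cortex".
n = 64
y, x = np.mgrid[0:n, 0:n] - n / 2.0
theta = 0.5 * np.arctan2(y, x)            # preferred orientation in [-pi/2, pi/2)

# Add large single-cell scatter (uniform, +/- 30 degrees).
scatter = rng.uniform(-np.pi / 6, np.pi / 6, theta.shape)
noisy = theta + scatter

def imaged_orientation(pref, radius=3):
    """Simulated optical signal: vector-average the doubled angles of
    neighbouring cells, mimicking the spatial blur of the imaging method."""
    z = np.exp(2j * pref)                 # angle doubling handles the pi period
    k = 2 * radius + 1
    zp = np.pad(z, radius, mode="edge")   # simple 'same' boundary handling
    out = np.zeros_like(z)
    for i in range(n):
        for j in range(n):
            out[i, j] = np.mean(zp[i:i + k, j:j + k])
    return 0.5 * np.angle(out)

smooth = imaged_orientation(noisy)

# Circular error between the blurred map and the noise-free map.
err = np.abs(np.angle(np.exp(2j * (smooth - theta)))) / 2.0
```

Even with per-cell scatter of tens of degrees, the averaged map recovers the underlying pinwheel far more accurately than any single cell reports it, which is the sense in which a smooth imaging result places no obvious upper bound on local scatter.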