Tagged: Visualization

A 3-D Immersive Environment for Characterizing EEG Signatures of Target Detection

Visual target detection is one of the most studied paradigms in human electrophysiology. Electroencephalographic (EEG) correlates of target detection include the well-characterized N1, P2, and P300 components. In almost all cases, the experimental paradigms used to study visual target detection are tightly controlled: very simple stimuli are presented so as to minimize eye movements, and scenarios involve minimal active participation by the subject. However, characterizing these EEG correlates in real-world scenarios, where the target or the subject may be moving and the two may interact, requires a more flexible paradigm. The environment must be immersive and interactive, and the system must enable synchronization between events in the world, the behavior of the subject, and simultaneously recorded EEG signals. We have developed a hardware/software system that enables us to precisely control the appearance of objects in a 3-D virtual environment, which subjects can navigate while the system tracks their eyes and records their EEG activity. We are using this environment to investigate a set of questions that focus on the relationships between the visibility, salience, and affect of the target; the agency and eye movements of the subject; and the resulting EEG signatures of detection. In this paper, we describe the design of our system and present preliminary results on the EEG signatures of target detection.
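The synchronization requirement described above amounts to aligning world/behavioral event timestamps with EEG sample indices on a shared clock, then cutting fixed-length epochs around each event. A minimal sketch of that alignment step (not the authors' actual system; the sampling rate, event times, and window lengths here are illustrative assumptions):

```python
import numpy as np

# Illustrative epoching around timestamped events; all values are assumed.
fs = 500.0                                 # assumed EEG sampling rate (Hz)
eeg = np.random.randn(1, int(10 * fs))     # 10 s of fake single-channel EEG
event_times = [1.23, 4.56, 7.89]           # target-onset times (s), shared clock

def epoch(eeg, event_times, fs, pre=0.2, post=0.8):
    """Cut a fixed window (pre/post in seconds) around each event time."""
    pre_s, post_s = int(pre * fs), int(post * fs)
    epochs = []
    for t in event_times:
        i = int(round(t * fs))             # nearest EEG sample to the event
        epochs.append(eeg[:, i - pre_s : i + post_s])
    return np.stack(epochs)                # shape: (events, channels, samples)

epochs = epoch(eeg, event_times, fs)
print(epochs.shape)                        # (3, 1, 500)
```

Averaging such epochs across trials is the standard route to event-related potentials like the N1, P2, and P300.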

Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity

In this talk I will describe our work on sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder that imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) with relatively few non-zero synaptic weights. We find that: (1) the best decoding performance is obtained for a representation that is sparse in both space and time; (2) decoding a temporal code yields better performance than decoding a rate code and is also a better fit to the psychophysical data; (3) the number of neurons required for decoding increases monotonically as the signal-to-noise ratio of the stimulus decreases, with as few as 1% of the neurons required at the highest signal-to-noise levels; and (4) sparse decoding yields a more accurate decoding of the stimulus and a better fit to psychophysical performance than distributed decoding, for example as imposed by an L2 norm. We conclude that sparse coding is well justified from a decoding perspective: when sparse representations can be decoded from the neural dynamics, decoding requires a minimum number of neurons and achieves maximum accuracy.
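The "decoding neuron" above (linear summation, sigmoidal nonlinearity, L1-induced sparsity) has the same form as L1-regularized logistic regression. The following sketch illustrates the idea on synthetic spike counts; the data, the number of informative neurons, and the regularization strength are placeholder assumptions, not the V1 model or parameters from the talk:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for model spike counts: only a few neurons are informative.
rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 200
informative = rng.choice(n_neurons, size=10, replace=False)

y = rng.integers(0, 2, n_trials)                       # binary perceptual decision
X = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
X[:, informative] += 3.0 * y[:, None]                  # stimulus-driven rate increase

# Linear summation + sigmoid = decoding neuron; the L1 penalty drives most
# synaptic weights to exactly zero, selecting a small subset of neurons.
decoder = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
decoder.fit(X, y)

n_nonzero = np.count_nonzero(decoder.coef_)
print(f"non-zero synaptic weights: {n_nonzero} / {n_neurons}")
```

Sweeping the regularization strength `C` trades off the number of retained neurons against decoding accuracy, which is the comparison the abstract draws between L1 (sparse) and L2 (distributed) decoding.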