Tagged: face perception

Coupling Retinal Imaging With Psychophysics to Assess Perceptual Consequences of AMD

Purpose: Retinal imaging does not necessarily provide a complete picture of the vision loss expected in macular disease. We use a psychophysics test coupled with computational modeling to relate pathologies, identified via fundus imaging, to expected perceptual function in a group of AMD patients.

Methods: We recruited 10 low-vision patients with mild yet progressive AMD, as well as 10 age-matched healthy controls, at the Edward Harkness Eye Institute, Columbia Presbyterian Medical Center. Both patients and controls, aged 65 to 84, were corrected to 20/20 to 20/50 visual acuity. All subjects performed a monocular two-alternative forced-choice (2-AFC) task in which they discriminated face and car images in the presence of variable noise. Color fundus photographs were collected with a Zeiss FF 450 Plus camera and segmented with a robust, automated algorithm to quantify disease-specific pathologies on the retina. We mapped each patient's retinal pathology to cortical activity and neurometric curves using a computational model of V1 and a decoding framework. We compared psychometric curves between controls and patients, assessed the quality of the neurometric predictions, and analyzed the correlation between the neurometric curves and drusen statistics derived from the segmentation masks.

Results: AMD patients had substantially lower discrimination accuracies than controls, and this degradation was most pronounced at higher signal-to-noise ratio (SNR) levels of the stimulus. We observed a positive correlation (r = 0.67) between the fraction of drusen-free area in the mask and the predicted perceptual discrimination at the highest SNR level.
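One of the mask statistics reported above, the fraction of drusen-free area, can be sketched from a binary segmentation mask as follows. This is a minimal illustration on a toy array; the study's actual segmentation algorithm and mask geometry are not reproduced here.

```python
# Hedged sketch: drusen-free area fraction from a binary segmentation mask.
# The toy mask below is illustrative, not data from the study.
import numpy as np

def drusen_free_fraction(mask):
    """mask: boolean array, True where drusen were segmented.
    Returns the fraction of the imaged area that is free of drusen."""
    mask = np.asarray(mask, dtype=bool)
    return 1.0 - mask.mean()

# Toy 8x8 mask with a 3x4 patch (12 pixels) flagged as drusen
toy = np.zeros((8, 8), dtype=bool)
toy[2:5, 3:7] = True
print(drusen_free_fraction(toy))  # 52 of 64 pixels are drusen-free
```

In practice the fraction would be computed within the region of clinical interest (e.g., the macula) rather than over the full image.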
Conclusions: The psychophysics and modeling framework we developed provides a quantitative assessment for the perceptual consequences of AMD and can potentially serve as a method for relating clinical findings in retinal imaging to perceptual function.
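The psychometric-curve comparison described above can be sketched as a fit of 2-AFC accuracy against stimulus SNR. The functional form, parameter names, and data below are illustrative assumptions, not the study's actual model or measurements.

```python
# Hedged sketch: fitting a 2-AFC psychometric function (accuracy vs. SNR).
# The logistic form and all numbers here are illustrative, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, thresh, slope):
    """2-AFC accuracy rising from chance (0.5) toward 1.0."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (snr - thresh)))

# Illustrative per-level accuracies for one hypothetical subject
snr      = np.array([0.20, 0.30, 0.35, 0.40, 0.45, 0.55])
accuracy = np.array([0.52, 0.60, 0.70, 0.82, 0.90, 0.97])

params, _ = curve_fit(psychometric, snr, accuracy, p0=[0.35, 10.0])
thresh, slope = params
print(f"threshold ~ {thresh:.2f}, slope ~ {slope:.1f}")
```

Comparing fitted thresholds and slopes between patient and control groups, or between measured psychometric and model-predicted neurometric curves, would follow the same pattern.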

Post-stimulus endogenous and exogenous oscillations are differentially modulated by task difficulty

We investigate how post-stimulus endogenous and exogenous oscillations are modulated when a visual discrimination task is made more difficult. We use exogenous frequency tagging to induce steady-state visually evoked potentials (SSVEPs) while subjects perform a face-versus-car discrimination task whose difficulty varies trial-to-trial with the noise (phase coherence) in the image. We simultaneously analyze amplitude modulations of the SSVEP and of endogenous alpha activity as a function of task difficulty. SSVEP modulation can be viewed as a neural marker of attention toward or away from the primary task, whereas modulation of post-stimulus alpha is closely related to cortical information processing. We find that as the task becomes more difficult, the SSVEP amplitude decreases significantly at approximately 250-450 ms post-stimulus. Significant changes in endogenous alpha amplitude follow the SSVEP modulation, occurring at approximately 400-700 ms post-stimulus; unlike the SSVEP, alpha amplitude is increasingly suppressed as the task becomes less difficult. Our results demonstrate that endogenous and exogenous oscillations modulated by task difficulty can be measured simultaneously, and that the specific timing of these modulations likely reflects the flow of information processing during perceptual decision-making.