Tagged: Electroencephalography

Brain-computer interfaces

The human brain is perhaps the most fascinating and complex signal processing machine in existence. It is capable of transducing a variety of environmental signals (the senses, including taste, touch, smell, sound, and sight) and extracting information from these disparate signal streams, ultimately fusing this information to enable behavior, cognition, and action. What is perhaps surprising is that the basic signal processing elements of the brain, i.e., neurons, transmit information at a relatively slow rate compared to transistors, switching roughly 10^6 times slower in fact. The brain has the advantage of having a tremendous number of neurons, all operating in parallel, and a highly distributed memory system of synapses (over 100 trillion in the cerebral cortex), and thus its signal processing capabilities may largely arise from its unique architecture. These facts have inspired a great deal of study of the brain from a signal processing perspective. Recently, scientists and engineers have focused on developing means by which to directly interface with the brain, essentially measuring neural signals and decoding them to augment and emulate behavior. This research area has been termed brain-computer interfaces and is the topic of this issue of IEEE Signal Processing Magazine.

Fusing multiple neuroimaging modalities to assess group differences in perception-action coupling

In the last few decades, noninvasive neuroimaging has revealed macroscale brain dynamics that underlie perception, cognition, and action. Advances in noninvasive neuroimaging target two capabilities: 1) increased spatial and temporal resolution of measured neural activity; and 2) innovative methodologies to extract brain–behavior relationships from evolving neuroimaging technology. We target the second. Our novel methodology integrated three neuroimaging methodologies and elucidated expertise-dependent differences in functional (fused EEG-fMRI) and structural (dMRI) brain networks for a perception–action coupling task. A set of baseball players and controls performed a Go/No-Go task designed to mimic the situation of hitting a baseball. In the functional analysis, our novel fusion methodology identifies 50-ms windows with predictive EEG neural correlates of expertise and fuses these temporal windows with fMRI activity in a whole-brain 2-mm voxel analysis, revealing time-localized correlations of expertise at a spatial scale of millimeters. The spatiotemporal cascade of brain activity reflecting expertise differences begins as early as 200 ms after the pitch starts and lasts up to 700 ms afterwards. Network differences are spatially localized to include motor and visual processing areas, providing evidence for differences in perception–action coupling between the groups. Furthermore, an analysis of structural connectivity reveals that the players have significantly more connections between cerebellar and left frontal/motor regions, and many of the functional activation differences between the groups are located within structurally defined network modules that differentiate expertise. In short, our novel method illustrates how multimodal neuroimaging can provide specific macroscale insights into the functional and structural correlates of expertise development.
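The core of the fusion step described above, relating single-trial EEG discriminant scores from a short temporal window to voxel-wise fMRI activity, can be sketched in a few lines. The following is a minimal illustration on synthetic data; every quantity (trial counts, voxel counts, coupling strength, threshold) is simulated and assumed, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 500
# Hypothetical single-trial EEG discriminant scores from one 50-ms window
eeg_scores = rng.standard_normal(n_trials)

# Synthetic fMRI trial amplitudes; a subset of voxels covaries with the EEG scores
fmri = rng.standard_normal((n_trials, n_voxels))
coupled = np.arange(20)                      # voxels "coupled" to the EEG component
fmri[:, coupled] += 0.8 * eeg_scores[:, None]

# Voxel-wise Pearson correlation between EEG scores and fMRI amplitudes
z_eeg = (eeg_scores - eeg_scores.mean()) / eeg_scores.std()
z_fmri = (fmri - fmri.mean(axis=0)) / fmri.std(axis=0)
r = z_fmri.T @ z_eeg / n_trials              # shape (n_voxels,)

# Voxels whose correlation survives a simple (illustrative) threshold
detected = np.flatnonzero(np.abs(r) > 0.4)
```

In the actual study this map is computed per 50-ms EEG window and thresholded with proper multiple-comparison control; the sketch only shows the correlational skeleton of the fusion.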

Learning EEG Components for Discriminating Multi-Class Perceptual Decisions

Logistic regression has been used as a supervised method for extracting EEG components predictive of binary perceptual decisions. However, perceptual decisions often require a choice between more than just two alternatives. In this paper we present results using multinomial logistic regression (MLR) for learning EEG components in a 3-way visual discrimination task. Subjects were required to decide between three object classes (faces, houses, and cars) for images which were embedded with varying amounts of noise. We recorded the subjects' EEG while they performed the task and then used MLR to predict the stimulus category, on a single-trial basis, for correct behavioral responses. We found an early component (at 170 ms) that was consistent across all subjects and with previous binary discrimination paradigms. However, a later component (at 300-400 ms), previously reported in the binary discrimination paradigms, was more variable across subjects in this three-way discrimination task. We also computed forward models for the EEG components, which showed a difference in the spatial distribution of component activity for the different categorical decisions. In summary, we find that logistic regression, generalized to the arbitrary N-class case, can be a useful approach for learning and analyzing EEG components underlying multi-class perceptual decisions.
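A multinomial logistic regression decoder of the kind described above can be sketched with scikit-learn. The data below are synthetic stand-ins (three Gaussian classes over 64 "electrode" features); the dimensions and class structure are assumptions for illustration, not the study's recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_per_class, n_features = 100, 64   # e.g., 64 electrodes at one time window
centers = np.zeros((3, n_features))
centers[0, :5], centers[1, 5:10], centers[2, 10:15] = 1.5, 1.5, 1.5

# Synthetic single-trial "EEG" feature vectors for face / house / car trials
X = np.vstack([c + rng.standard_normal((n_per_class, n_features)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

# With three classes, scikit-learn fits a multinomial (softmax) model
clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)               # training accuracy on the synthetic data
```

The fitted `clf.coef_` is one spatial weight vector per class; in the paper these weights are the learned EEG components, from which forward models are then derived.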

A 3-D Immersive Environment for Characterizing EEG Signatures of Target Detection

Visual target detection is one of the most studied paradigms in human electrophysiology. Electroencephalographic (EEG) correlates of target detection include the well-characterized N1, P2, and P300. In almost all cases the experimental paradigms used for studying visual target detection are extremely well-controlled – very simple stimuli are presented so as to minimize eye movements, and scenarios involve minimal active participation by the subject. However, to characterize these EEG correlates for real-world scenarios, where the target or the subject may be moving and the two may interact, a more flexible paradigm is required. The environment must be immersive and interactive, and the system must enable synchronization between events in the world, the behavior of the subject, and simultaneously recorded EEG signals. We have developed a hardware/software system that enables us to precisely control the appearance of objects in a 3D virtual environment, which subjects can navigate while the system tracks their eyes and records their EEG activity. We are using this environment to investigate a set of questions which focus on the relationship between the visibility, salience, and affect of the target; the agency and eye movements of the subject; and the resulting EEG signatures of detection. In this paper, we describe the design of our system and present some preliminary results regarding the EEG signatures of target detection.

The Bilinear Brain: Towards Subject-Invariant Analysis

A major challenge in single-trial electroencephalography (EEG) analysis and brain-computer interfacing (BCI) is the so-called inter-subject/inter-session variability, i.e., the large variability in measurements obtained during different recording sessions. This variability restricts the data available for single-trial analysis to what can be obtained during a single session. Here we propose a novel method, based on a bilinear formulation, that distinguishes between subject-invariant features and subject-specific features. The method allows one to combine multiple EEG recordings to estimate the subject-invariant parameters, hence addressing the issue of inter-subject variability, while reducing the complexity of estimation for the subject-specific parameters. The method is demonstrated on 34 datasets from two different experimental paradigms: a perceptual categorization task and a rapid serial visual presentation (RSVP) task. We show significant improvements in classification performance over state-of-the-art methods. Further, our method extracts neurological components never before reported for the RSVP task, demonstrating its ability to extract novel neural signatures from the data.
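The essence of a bilinear formulation is that each trial, a channels-by-time matrix X, is scored as uᵀXv, with a spatial weight vector u and a temporal weight vector v estimated jointly. The sketch below fits such a discriminant by alternating least squares on synthetic trials; it is a schematic of the bilinear idea only, not the paper's estimator (which additionally shares parameters across subjects):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_ch, n_t = 200, 16, 50

# Synthetic trials: a fixed spatial pattern times a fixed temporal course,
# scaled by the class label, plus noise
u_true = rng.standard_normal(n_ch)
v_true = np.sin(np.linspace(0, np.pi, n_t))
labels = rng.choice([-1.0, 1.0], n_trials)
X = labels[:, None, None] * np.einsum('c,t->ct', u_true, v_true) \
    + 2.0 * rng.standard_normal((n_trials, n_ch, n_t))

# Alternating least squares for the bilinear discriminant y = u^T X v
u = rng.standard_normal(n_ch)
v = np.ones(n_t)
for _ in range(20):
    # Fix v: each trial collapses to a channel vector X v
    Zu = X @ v                                  # (n_trials, n_ch)
    u, *_ = np.linalg.lstsq(Zu, labels, rcond=None)
    # Fix u: each trial collapses to a time vector u^T X
    Zv = np.einsum('c,nct->nt', u, X)           # (n_trials, n_t)
    v, *_ = np.linalg.lstsq(Zv, labels, rcond=None)

scores = np.einsum('c,nct,t->n', u, X, v)
acc = np.mean(np.sign(scores) == labels)
```

Fixing one factor makes the problem linear in the other, which is what keeps the subject-specific estimation cheap once the shared factor is pinned down.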

Do We See Before We Look?

We investigated neural correlates of target detection in the electroencephalogram (EEG) during a free viewing search task and analyzed signals locked to saccadic events. Subjects performed a search task over multiple random scenes while we simultaneously recorded 64 channels of EEG and tracked the subjects' eye position. For each subject we identified target saccades (TS) and distractor saccades (DS). We sampled the sets of TS and DS saccades such that they were equalized/matched for saccade direction and duration, ensuring that no information in the saccade properties themselves was discriminating for their type. We aligned the EEG to saccade onset and used logistic regression (LR), in the space of the 64 electrodes, to identify activity discriminating a TS from a DS on a single-trial basis. We found significant discriminating activity in the EEG both before and after the saccade. We also saw a substantial reduction in discriminating activity while the saccade was executed. We conclude that we can identify neural signatures of detection both before and after the saccade, indicating that subjects anticipate the target before the final saccade, which serves to foveate and confirm the target identity.

A system for single-trial analysis of simultaneously acquired EEG and fMRI

In this paper we describe a system for simultaneously acquiring EEG and fMRI and evaluate it in terms of discriminating, single-trial, task-related neural components in the EEG. Using an auditory oddball stimulus paradigm, we acquire EEG data both inside and outside a 1.5T MR scanner and compare both power spectra and single-trial discrimination performance for both conditions. We find that EEG activity acquired inside the MR scanner during echo planar image acquisition is of high enough quality to enable single-trial discrimination performance that is 95% of that acquired outside the scanner. We conclude that EEG acquired simultaneously with fMRI is of high enough fidelity to permit single-trial analysis.
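The power-spectrum comparison described above amounts to estimating a PSD per condition and contrasting band power. Here is a minimal sketch using Welch's method on toy signals (the sampling rate, noise levels, and 10-Hz rhythm are assumptions standing in for real inside/outside-scanner recordings):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 250.0                       # Hz, a typical EEG sampling rate (assumed)
t = np.arange(0, 30, 1 / fs)

# Toy stand-ins for one channel recorded outside vs. inside the scanner:
# the same 10-Hz alpha rhythm, with extra broadband noise "inside"
alpha = np.sin(2 * np.pi * 10 * t)
outside = alpha + 0.5 * rng.standard_normal(t.size)
inside = alpha + 1.5 * rng.standard_normal(t.size)

f, p_out = welch(outside, fs=fs, nperseg=512)
_, p_in = welch(inside, fs=fs, nperseg=512)

# Compare the spectra at the alpha peak vs. broadband
i10 = np.argmin(np.abs(f - 10.0))
alpha_ratio = p_in[i10] / p_out[i10]
```

With a strong narrowband rhythm the spectral peak survives the added broadband noise, which is the qualitative pattern a successful gradient-artifact cleanup should show.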

Classifying single-trial ERPs from visual and frontal cortex during free viewing

Event-related potentials (ERPs) recorded at the scalp are indicators of brain activity associated with event-related information processing; hence they may be suitable for the assessment of changes in cognitive processing load. While measuring and classifying ERPs in a laboratory setting is relatively straightforward, the task presents major challenges in a “real world” setting, where the EEG signals are recorded while subjects freely move their eyes and the sensory inputs are presented continuously, as opposed to discretely. Here we demonstrate that with the aid of second-order blind identification (SOBI), a blind source separation (BSS) algorithm: (1) we can extract ERPs from such challenging data sets; (2) we can obtain meaningful single-trial ERPs in addition to averaged ERPs; and (3) we can estimate the spatial origins of these ERPs. Finally, using back-propagation neural networks as classifiers, we show that these single-trial ERPs from specific brain regions can be used to determine moment-to-moment changes in cognitive processing load during a complex “real world” task.
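SOBI separates sources by jointly diagonalizing covariance matrices at many time lags, which takes some machinery to implement. As a compact stand-in, the sketch below implements AMUSE, a simpler one-lag relative of SOBI built on the same second-order principle (whiten, then diagonalize a lagged covariance), and applies it to two synthetic sources; this is an illustration of the family of methods, not the authors' SOBI pipeline:

```python
import numpy as np

def amuse(x, lag=1):
    """AMUSE: a one-lag relative of SOBI. x is (channels, samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    # Whiten the mixtures
    c0 = x @ x.T / x.shape[1]
    d, e = np.linalg.eigh(c0)
    w = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    z = w @ x
    # Symmetrized lagged covariance of the whitened data
    c1 = z[:, lag:] @ z[:, :-lag].T / (z.shape[1] - lag)
    c1 = (c1 + c1.T) / 2
    # Its eigenvectors give the remaining rotation to the sources
    _, v = np.linalg.eigh(c1)
    return v.T @ z

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)
# Two sources with distinct temporal structure (hence distinct lagged covariance)
s = np.vstack([np.sin(2 * np.pi * 0.1 * t),
               np.sign(np.sin(2 * np.pi * 0.003 * t))])
mix = rng.standard_normal((2, 2))          # unknown mixing matrix
y = amuse(mix @ s)

# Each recovered source should correlate strongly with one true source
corr = np.abs(np.corrcoef(np.vstack([s, y]))[:2, 2:])
```

The separation works because the two sources have different autocorrelations at the chosen lag; SOBI makes this robust by using many lags at once.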

Using single-trial EEG to estimate the timing of target onset during rapid serial visual presentation

The timing of a behavioral response, such as a button press in reaction to a visual stimulus, is highly variable across trials. In this paper we describe a methodology for single-trial analysis of electroencephalography (EEG) which can be used to reduce the error in the estimation of the timing of the behavioral response and thus reduce the error in estimating the onset time of the stimulus. We consider a rapid serial visual presentation (RSVP) paradigm consisting of concatenated video clips in which subjects are instructed to respond when they see a predefined target. We show that a linear discriminator, with inputs distributed across sensors and time and chosen via an information theoretic feature selection criterion, can be used in conjunction with the response to yield a lower error estimate of the onset time of the target stimulus compared to the response time. We compare our results to response time and previous EEG approaches using fixed windows in time, showing that our method has the lowest estimation error. We discuss potential applications, specifically with respect to cortically-coupled computer vision based triage of large image databases.
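The core intuition, that a stimulus-locked EEG component localizes the onset better than a jittered reaction time, can be shown in a toy simulation. Below, a sliding correlation with a known evoked template stands in for the learned discriminator (the template shape, SNR, and reaction-time statistics are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 250                      # Hz (assumed)
n_trials, n_samp = 100, 500
template = np.exp(-0.5 * ((np.arange(75) - 37) / 10.0) ** 2)  # evoked shape

true_onset = rng.integers(100, 300, n_trials)
# Button press lags the stimulus by ~350 ms with large trial-to-trial jitter
rt_jitter = rng.normal(0, 25, n_trials).astype(int)
response = true_onset + int(0.35 * fs) + rt_jitter

# Synthetic single-trial EEG: noise plus the template at the true onset
eeg = 0.5 * rng.standard_normal((n_trials, n_samp))
for i, o in enumerate(true_onset):
    eeg[i, o:o + template.size] += template

# "Discriminator" output: sliding correlation with the learned component
est_eeg = np.array([np.argmax(np.correlate(tr, template, mode='valid'))
                    for tr in eeg])
est_rt = response - int(0.35 * fs)     # fixed offset back from response time

err_eeg = np.mean(np.abs(est_eeg - true_onset))
err_rt = np.mean(np.abs(est_rt - true_onset))
```

Because the reaction-time route inherits all the motor jitter while the EEG route only inherits the (smaller) peak-localization error, the EEG-based estimate wins whenever the evoked component is reliably detectable.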

Spatio-temporal linear discrimination for inferring task difficulty from EEG

We present a spatio-temporal linear discrimination method for single-trial classification of multi-channel electroencephalography (EEG). No prior information about the characteristics of the neural activity is required; i.e., the algorithm requires no knowledge of the timing and/or spatial distribution of the evoked responses. The algorithm finds a temporal delay/window onset time for each EEG channel and then spatially integrates the channels at their channel-specific onset times. The algorithm can be seen as learning discrimination trajectories defined within the space of EEG channels. We demonstrate the method for detecting auditory evoked neural activity and for discriminating task difficulty in a complex visual-auditory environment.
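The two-step structure described above (first a per-channel window onset, then spatial integration) can be sketched as follows on synthetic data; the separability criterion and the least-squares integration step are simplified stand-ins for the paper's discriminant, and all dimensions are assumed:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_ch, n_t, win = 200, 8, 100, 20
labels = rng.choice([0, 1], n_trials)

# Each channel's evoked response appears at a different (unknown) latency
true_onsets = rng.integers(0, n_t - win, n_ch)
X = rng.standard_normal((n_trials, n_ch, n_t))
for c, o in enumerate(true_onsets):
    X[labels == 1, c, o:o + win] += 0.5

def separability(a, b):
    """A simple d'-like class-separation score for two 1-D samples."""
    return np.abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var() + 1e-12)

# Step 1: per channel, pick the window onset with the best class separation
onsets = np.zeros(n_ch, dtype=int)
for c in range(n_ch):
    scores = [separability(X[labels == 1, c, o:o + win].mean(axis=1),
                           X[labels == 0, c, o:o + win].mean(axis=1))
              for o in range(n_t - win)]
    onsets[c] = int(np.argmax(scores))

# Step 2: spatially integrate the channel-specific windows with a linear rule
feats = np.stack([X[:, c, onsets[c]:onsets[c] + win].mean(axis=1)
                  for c in range(n_ch)], axis=1)       # (n_trials, n_ch)
w, *_ = np.linalg.lstsq(feats - feats.mean(axis=0),
                        labels - labels.mean(), rcond=None)
pred = ((feats - feats.mean(axis=0)) @ w > 0).astype(int)
acc = np.mean(pred == labels)
```

The per-channel onsets are what make the learned weights a "trajectory" through channel space rather than a single spatial map at one latency.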

Comparison of supervised and unsupervised linear methods for recovering task-relevant activity in EEG

In this paper we compare three linear methods, independent component analysis (ICA), common spatial patterns (CSP), and linear discrimination (LD), for recovering task-relevant neural activity from high spatial density electroencephalography (EEG). Each linear method uses a different objective function to recover underlying source components by exploiting statistical structure across a large number of sensors. We test these methods using a dual-task event-related paradigm. While engaged in a primary task, subjects must detect infrequent changes in the visual display, which would be expected to evoke several well-known event-related potentials (ERPs), including the N2 and P3. We find that although each method utilizes a different objective function, they in fact yield similar components. We note that one advantage of the LD approach is that the recovered component is easily interpretable, namely it represents the component within a given time window which is most discriminating for the task, given a spatial integration of the sensors. Both ICA and CSP return multiple components, of which the most discriminating component may not be the first. Thus, for these methods, visual inspection or additional processing is required to determine the significance of these components for the task.
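Of the three objective functions compared above, CSP is the most compact to write down: it solves a generalized eigenvalue problem on the two class covariance matrices. The sketch below does this on synthetic trials (the mixing model, trial counts, and variance contrast are illustrative assumptions; ICA and LD are not reproduced here):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_ch, n_t = 100, 10, 200

# Two classes differing in the variance of one latent source
forward = rng.standard_normal(n_ch)          # the source's scalp projection
def make_trials(source_gain):
    s = source_gain * rng.standard_normal((n_trials, n_t))
    return (np.einsum('c,nt->nct', forward, s)
            + rng.standard_normal((n_trials, n_ch, n_t)))

Xa, Xb = make_trials(2.0), make_trials(0.5)

def avg_cov(X):
    return np.einsum('nct,ndt->cd', X, X) / (X.shape[0] * X.shape[2])

Ca, Cb = avg_cov(Xa), avg_cov(Xb)
# CSP: generalized eigendecomposition of the two class covariances
evals, evecs = eigh(Ca, Ca + Cb)
w_csp = evecs[:, -1]        # spatial filter maximizing class-A variance share

var_a = np.einsum('c,nct->nt', w_csp, Xa).var()
var_b = np.einsum('c,nct->nt', w_csp, Xb).var()
ratio = var_a / var_b
```

As the abstract notes, such a method returns a full set of eigenvector components, and the most task-discriminating one must still be identified, whereas LD yields a single discriminating component directly.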

Spatial signatures of visual object recognition events learned from single-trial analysis of EEG

In this paper we use linear discrimination for learning EEG signatures of object recognition events in a rapid serial visual presentation (RSVP) task. We record EEG using a high spatial density array (63 electrodes) during the rapid presentation (50-200 msec per image) of natural images. Each trial consists of 100 images, with a 50% chance of a single target being in a trial. Subjects are instructed to press a left mouse button at the end of the trial if they detected a target image; otherwise they are instructed to press the right button. Subject EEG was analyzed on a single-trial basis with an optimal spatial linear discriminator learned at multiple time windows after the presentation of an image. Analysis of discrimination results indicated a periodic fluctuation (time-localized oscillation) in Az performance. Analysis of the EEG using the discrimination components learned at the peaks of the Az fluctuations indicates 1) the presence of a positive evoked response, followed in time by a negative evoked response in strongly overlapping areas, and 2) a component which is not correlated with the discriminator learned during the time-localized fluctuation. Results suggest that multiple signatures, varying over time, may exist for discriminating between target and distractor trials.
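The analysis pattern above, training a spatial linear discriminator in successive post-stimulus time windows and tracking Az (the area under the ROC curve) across windows, can be sketched as follows. The data are synthetic (16 channels, an evoked spatial pattern injected into one latency range); window sizes and SNR are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_trials, n_ch, n_t = 200, 16, 60
labels = rng.choice([0, 1], n_trials)

# Target trials carry an evoked spatial pattern in samples 20-30
pattern = rng.standard_normal(n_ch)
X = rng.standard_normal((n_trials, n_ch, n_t))
X[labels == 1, :, 20:30] += 0.4 * pattern[:, None]

# Train a spatial discriminator in each time window; track Az across windows
az = []
for t0 in range(0, n_t - 10):
    feats = X[:, :, t0:t0 + 10].mean(axis=2)     # one spatial vector per trial
    clf = LogisticRegression(max_iter=1000).fit(feats, labels)
    az.append(roc_auc_score(labels, clf.decision_function(feats)))
az = np.array(az)
peak = int(np.argmax(az))            # window where discrimination peaks
```

In the paper the interesting structure is precisely the time course of this Az curve, whose peaks mark the windows at which distinct discriminating components are learned (cross-validated Az would be used in practice rather than the in-sample value shown here).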