Converging evidence of linear independent components in EEG

Blind source separation (BSS) has been proposed as a method to analyze multi-channel electroencephalography (EEG) data. A basic issue in applying BSS algorithms is the validity of the independence assumption. In this paper we investigate whether EEG can be considered to be a linear combination of independent sources. Linear BSS can be obtained under the assumptions of non-Gaussian, non-stationary, or non-white independent sources. If the linear independence hypothesis is violated these three different conditions will not necessarily lead to the same result. We show, using 64 channel EEG data, that different algorithms which incorporate the three different assumptions lead to the same results, thus supporting the linear independence hypothesis.
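
The linear mixing model and the non-Gaussianity route to source recovery can be illustrated with a minimal two-source simulation. This is only a sketch on invented data, not any of the algorithms compared in the paper; the grid-searched rotation below simply stands in for a kurtosis-based ICA.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two independent, non-Gaussian (uniform) sources -- simulated, not real EEG.
S = rng.uniform(-1, 1, size=(2, n))

# Unknown linear mixing, as in the EEG forward model x = A s.
A = np.array([[0.8, 0.3],
              [0.4, 0.9]])
X = A @ S

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
W_white = E @ np.diag(d ** -0.5) @ E.T
Z = W_white @ X

def kurt(y):
    """Excess kurtosis, a simple measure of non-Gaussianity."""
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

# After whitening, the remaining indeterminacy is a rotation; grid-search
# the angle that maximizes total non-Gaussianity (sum of squared kurtoses).
angles = np.linspace(0, np.pi / 2, 500)
scores = []
for th in angles:
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th), np.cos(th)]])
    Y = R @ Z
    scores.append(kurt(Y[0]) ** 2 + kurt(Y[1]) ** 2)
best = angles[int(np.argmax(scores))]
R = np.array([[np.cos(best), -np.sin(best)],
              [np.sin(best), np.cos(best)]])
S_hat = R @ Z

# Recovered components should match the true sources up to order and sign.
C = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
match = max(min(C[0, 0], C[1, 1]), min(C[0, 1], C[1, 0]))
```

The same mixture could instead be unmixed by exploiting non-stationarity or temporal structure; the paper's point is that, on real EEG, these different routes agree.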

Response error correction: a demonstration of improved human-machine performance using real-time EEG monitoring

We describe a brain–computer interface (BCI) system that uses a set of adaptive linear preprocessing and classification algorithms for single-trial detection of the error-related negativity (ERN). We use the detected ERN as an estimate of a subject’s perceived error during a forced-choice visual discrimination task. The detected ERN is then used to correct subject errors. Our initial results show an average improvement in subject performance of 21% when errors are automatically corrected via the BCI. We are currently investigating the generalization of the overall approach to other tasks and stimulus paradigms.

Towards Serious Games for Improved BCI

Brain-computer interface (BCI) technologies, i.e. technologies that use online brain signal processing, hold great promise to improve human interactions with computers, their environment, and even other humans. Despite this promise, no serious BCI technologies are in widespread use, largely because current BCI systems lack robustness. The key neural aspect of this lack of robustness is human variability, which has two main components: (1) individual differences in neural signals and (2) intraindividual variability over time. Developing widespread BCI technologies will require addressing this variability, yet it is currently unknown how neural variability affects BCI performance. To answer that question, it is essential to obtain data from large numbers of individuals using BCI technologies over considerable lengths of time. One promising method is to embed BCI technologies into games with a purpose (GWAP). GWAP are a game-based form of crowdsourcing that players choose to play for enjoyment, and during which they perform key tasks that cannot be automated but are required to solve research questions. By embedding BCI paradigms in GWAP and recording neural and behavioral data, it should be possible to understand much more clearly how neural signals differ between individuals and across time scales, enabling the development of novel and increasingly robust adaptive BCI algorithms.

Correlating Speaker Gestures in Political Debates with Audience Engagement Measured via EEG

We hypothesize that certain speaker gestures convey significant information that is correlated with audience engagement. We propose gesture attributes, derived from speakers’ tracked hand motions, to automatically quantify these gestures from video. We then demonstrate a correlation between gesture attributes and an objective measure of audience engagement, electroencephalography (EEG), in the domain of political debates. We collect 47 minutes of EEG recordings from each of 20 subjects watching clips of the 2012 U.S. Presidential debates. The subjects are examined in aggregate and in subgroups according to gender and political affiliation. We find statistically significant correlations between gesture attributes (particularly extremal pose) and our EEG-derived measure of engagement, both with and without audio. For some stratifications, the Spearman rank correlation reaches as high as ρ = 0.283 with p < 0.05, Bonferroni corrected. From these results, we identify those gestures that can be used to measure engagement, principally those that break habitual gestural patterns.

NEDE: An Open-Source Scripting Suite for Developing Experiments in 3D Virtual Environments

Background: As neuroscientists endeavor to understand the brain’s response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort.

New method: To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject’s experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts.

Results: Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities.

Comparison with existing methods: Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity’s extensive user base, a much more substantial body of assets and tutorials.

Conclusions: Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments.

Neurally and ocularly informed graph-based models for searching 3D environments

OBJECTIVE: As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions, our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment.

APPROACH: First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies.

MAIN RESULTS: We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling.

SIGNIFICANCE: In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
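
The graph-based spreading of implicit labels can be sketched in miniature. Below, a toy similarity graph over "objects" (two synthetic visual clusters; all data invented) receives one hBCI-style seed label per class, and a simple clamped label-propagation iteration spreads interest scores to unseen objects. The paper's actual computer-vision graph and semi-supervised learner are considerably more elaborate.

```python
import numpy as np

# Toy "objects": two visual clusters in a 2-D feature space (simulated).
rng = np.random.default_rng(6)
feats = np.vstack([rng.normal(0, 0.3, size=(10, 2)),   # cluster A
                   rng.normal(3, 0.3, size=(10, 2))])  # cluster B

# Gaussian affinity graph over all pairs of objects.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0)

# hBCI-inferred seed labels: object 0 "of interest", object 10 "not".
labels = np.full(20, np.nan)
labels[0], labels[10] = 1.0, 0.0

# Label propagation: iterate f <- D^-1 W f, clamping the seeds each step.
f = np.where(np.isnan(labels), 0.5, labels)
D_inv = 1.0 / W.sum(axis=1)
for _ in range(100):
    f = D_inv * (W @ f)
    f[0], f[10] = 1.0, 0.0

# Unseen objects inherit the label of their visual cluster.
pred = (f > 0.5).astype(int)
```

With well-separated clusters, every unlabeled object in the seeded "interest" cluster converges toward 1 and the rest toward 0, which is the behavior the route planner relies on.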

Simultaneous EEG-fMRI Reveals a Temporal Cascade of Task-Related and Default-Mode Activations During a Simple Target Detection Task

Focused attention continuously and inevitably fluctuates, and to completely understand the mechanisms responsible for these modulations it is necessary to localize the brain regions involved. During a simple visual oddball task, neural responses measured by electroencephalography (EEG) modulate primarily with attention, but source localization of the correlates is a challenge. In this study we use single-trial analysis of simultaneously-acquired scalp EEG and functional magnetic resonance imaging (fMRI) data to investigate the blood oxygen level dependent (BOLD) correlates of modulations in task-related attention, and we unravel the temporal cascade of these transient activations. We hypothesize that activity in brain regions associated with various task-related cognitive processes modulates with attention, and that their involvements occur transiently in a specific order. We analyze the fMRI BOLD signal by first regressing out the variance linked to observed stimulus and behavioral events. We then correlate the residual variance with the trial-to-trial variation of EEG discriminating components for identical stimuli, estimated at a sequence of times during a trial. Post-stimulus and early in the trial, we find activations in right-lateralized frontal regions and lateral occipital cortex, areas that are often linked to task-dependent processes such as attentional orienting and decision certainty. After the behavioral response we see correlates in areas often associated with the default-mode network and introspective processing, including precuneus, angular gyri, and posterior cingulate cortex. Our results demonstrate that during simple tasks both task-dependent and default-mode networks are transiently engaged, with a distinct temporal ordering and at a millisecond timescale.

Musical experts recruit action-related neural structures in harmonic anomaly detection: Evidence for embodied cognition in expertise

Humans are extremely good at detecting anomalies in sensory input. For example, while listening to a piece of Western-style music, an anomalous key change or an out-of-key pitch is readily apparent, even to the non-musician. In this paper we investigate differences between musical experts and non-experts during musical anomaly detection. Specifically, we analyzed the electroencephalograms (EEG) of five expert cello players and five non-musicians while they listened to excerpts of J.S. Bach’s Prelude from Cello Suite No. 1. All subjects were familiar with the piece, though experts also had extensive experience playing the piece. Subjects were told that anomalous musical events (AMEs) could occur at random within the excerpts of the piece and were told to report the number of AMEs after each excerpt. Furthermore, subjects were instructed to remain still while listening to the excerpts and their lack of movement was verified via visual and EEG monitoring. Experts had significantly better behavioral performance (i.e. correctly reporting AME counts) than non-experts, though both groups had mean accuracies greater than 80%. These group differences were also reflected in the EEG correlates of key-change detection post-stimulus, with experts showing more significant, greater magnitude, longer periods of, and earlier peaks in condition-discriminating EEG activity than novices. Using the timing of the maximum discriminating neural correlates, we performed source reconstruction and compared significant differences between cellists and non-musicians. We found significant differences that included a slightly right lateralized motor and frontal source distribution. The right lateralized motor activation is consistent with the cortical representation of the left hand, i.e. the hand a cellist would use, while playing, to generate the anomalous key changes. In general, these results suggest that sensory anomalies detected by experts may in fact be partially a result of an embodied cognition, with a model of the action for generating the anomaly playing a role in its detection.

Post-stimulus endogenous and exogenous oscillations are differentially modulated by task difficulty.

We investigate the modulation of post-stimulus endogenous and exogenous oscillations when a visual discrimination is made more difficult. We use exogenous frequency tagging to induce steady-state visually evoked potentials (SSVEP) while subjects perform a face-versus-car discrimination task whose difficulty is varied on a trial-to-trial basis by manipulating the noise (phase coherence) in the image. We simultaneously analyze amplitude modulations of the SSVEP and endogenous alpha activity as a function of task difficulty. SSVEP modulation can be viewed as a neural marker of attention toward/away from the primary task, while modulation of post-stimulus alpha is closely related to cortical information processing. We find that as the task becomes more difficult, the SSVEP amplitude decreases significantly, approximately 250-450 ms post-stimulus. Significant changes in endogenous alpha amplitude follow SSVEP modulation, occurring at approximately 400-700 ms post-stimulus and, unlike the SSVEP, the alpha amplitude is increasingly suppressed as the task becomes less difficult. Our results demonstrate simultaneous measurement of endogenous and exogenous oscillations that are modulated by task difficulty, and that the specific timing of these modulations likely reflects underlying information processing flow during perceptual decision-making.
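
Extracting the SSVEP amplitude at the tagging frequency is straightforward with a Fourier transform. The sketch below runs on simulated data; the sampling rate, tag frequency, epoch length, and noise level are invented values for illustration, not those of the study.

```python
import numpy as np

fs = 250.0      # sampling rate (Hz) -- an assumed value
tag_f = 15.0    # frequency-tagging rate (Hz) -- an assumed value
t = np.arange(0, 2.0, 1 / fs)   # a 2 s epoch

rng = np.random.default_rng(1)
# Simulated epoch: an SSVEP at the tag frequency plus broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * tag_f * t) + rng.normal(0, 1.0, t.size)

def ssvep_amplitude(x, fs, f):
    """Amplitude of the spectral component at frequency f (Hz)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    k = int(np.argmin(np.abs(freqs - f)))
    # Factor 2/N recovers the sinusoid amplitude at a non-DC bin.
    return 2 * np.abs(spec[k]) / x.size

amp = ssvep_amplitude(eeg, fs, tag_f)
```

Comparing `amp` across difficulty conditions (and across post-stimulus windows) is the kind of measurement the amplitude-modulation analysis above rests on.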

Components of ongoing EEG with high correlation point to emotionally-laden attention — a possible marker of engagement?

Recent evidence from functional magnetic resonance imaging suggests that cortical hemodynamic responses coincide in different subjects experiencing a common naturalistic stimulus. Here we utilize neural responses in the electroencephalogram (EEG) evoked by multiple presentations of short film clips to index brain states marked by high levels of correlation within and across subjects. We formulate a novel signal decomposition method which extracts maximally correlated signal components from multiple EEG records. The resulting components capture correlations down to a one-second time resolution, thus revealing that peak correlations of neural activity across viewings can occur in remarkable correspondence with arousing moments of the film. Moreover, a significant reduction in neural correlation occurs upon a second viewing of the film or when the narrative is disrupted by presenting its scenes scrambled in time. We also probe oscillatory brain activity during periods of heightened correlation, and observe during such times a significant increase in the theta band for a frontal component and reductions in the alpha and beta frequency bands for parietal and occipital components. Low-resolution EEG tomography of these components suggests that the correlated neural activity is consistent with sources in the cingulate and orbitofrontal cortices. Put together, these results suggest that the observed synchrony reflects attention- and emotion-modulated cortical processing which may be decoded with high temporal resolution by extracting maximally correlated components of neural activity.
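
The core idea of the decomposition, finding a single spatial projection that maximizes correlation between repeated EEG records, reduces to a generalized eigenvalue problem. The sketch below runs on simulated data (two "viewings" sharing one stimulus-driven source; channel counts, noise levels, and the shared forward model are all invented), not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_t = 8, 5000

# A shared stimulus-driven source, projected identically in two viewings
# (the common-forward-model assumption), plus per-viewing sensor noise.
s = rng.normal(size=n_t)
a = rng.normal(size=(n_ch, 1))
a = a / np.linalg.norm(a) * 3.0          # fix the signal-to-noise ratio
X1 = a @ s[None, :] + rng.normal(size=(n_ch, n_t))
X2 = a @ s[None, :] + rng.normal(size=(n_ch, n_t))

X1 = X1 - X1.mean(axis=1, keepdims=True)
X2 = X2 - X2.mean(axis=1, keepdims=True)

R11 = X1 @ X1.T / n_t
R22 = X2 @ X2.T / n_t
R12 = X1 @ X2.T / n_t

# Maximize correlation across records: generalized eigenproblem on the
# symmetrized cross-covariance vs. the pooled within-record covariance.
A_mat = (R12 + R12.T) / 2
B_mat = (R11 + R22) / 2
vals, vecs = np.linalg.eig(np.linalg.solve(B_mat, A_mat))
w = np.real(vecs[:, np.argmax(np.real(vals))])

y1, y2 = w @ X1, w @ X2
corr_records = np.corrcoef(y1, y2)[0, 1]          # correlation across viewings
corr_source = abs(np.corrcoef(y1, s)[0, 1])       # recovery of shared source
```

The leading component both correlates strongly across the two records and recovers the shared source, which is what allows second-by-second correlation to index arousing moments of the stimulus.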

In a Blink of an Eye and a Switch of a Transistor: Cortically-coupled Computer Vision

Our society’s information technology advancements have resulted in the increasingly problematic issue of information overload, i.e., we have more access to information than we can possibly process. This is nowhere more apparent than in the volume of imagery and video we can access on a daily basis: for the general public, the availability of YouTube video and Google Images; for the image analysis professional, the security video or satellite reconnaissance to be searched. Which images to look at, and how to ensure we see the images of most interest to us, raises the question of whether there are smart ways to triage this volume of imagery. Over the past decade, computer vision research has focused on the issue of ranking and indexing imagery. However, computer vision is limited in its ability to identify interesting imagery, particularly as “interesting” might be defined by an individual. In this paper we describe our efforts in developing brain-computer interfaces (BCIs) which synergistically integrate computer vision and human vision so as to construct a system for image triage. Our approach exploits machine learning for real-time decoding of brain signals which are recorded noninvasively via electroencephalography (EEG). The signals we decode are specific for events related to imagery attracting a user’s attention. We describe two architectures we have developed for this type of cortically coupled computer vision and discuss potential applications and challenges for the future.

Single-trial Analysis of Neuroimaging Data: Inferring Neural Networks Underlying Perceptual Decision Making in the Human Brain

Advances in neural signal and image acquisition as well as in multivariate signal processing and machine learning are enabling a richer and more rigorous understanding of the neural basis of human decision-making. Decision-making is essentially characterized behaviorally by the variability of the decision across individual trials—e.g., error and response time distributions. To infer the neural processes that govern decision-making requires identifying neural correlates of such trial-to-trial behavioral variability. In this paper, we review efforts that utilize signal processing and machine learning to enable single-trial analysis of neural signals acquired while subjects perform simple decision-making tasks. Our focus is on neuroimaging data collected noninvasively via electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). We review the specific framework for extracting decision-relevant neural components from the neuroimaging data, the goal being to analyze the trial-to-trial variability of the neural signal along these component directions and to relate them to elements of the decision-making process. We review results for perceptual decision-making and discrimination tasks, including paradigms in which EEG variability is used to inform an fMRI analysis. We discuss how single-trial analysis reveals aspects of the underlying decision-making networks that are unobservable using traditional trial-averaging methods.

Comparing neural correlates of visual target detection in serial visual presentations having different temporal correlations

Most visual stimuli we experience on a day-to-day basis are continuous sequences, with spatial structure highly correlated in time. During rapid serial visual presentation (RSVP), this correlation is absent. Here we study how subjects’ target detection responses, both behavioral and electrophysiological, differ between continuous serial visual sequences (CSVP), flashed serial visual presentation (FSVP) and RSVP. Behavioral results show longer reaction times for CSVP compared to the FSVP and RSVP conditions, as well as a difference in miss rate between RSVP and the other two conditions. Using mutual information, we measure electrophysiological differences in the electroencephalography (EEG) for these three conditions. We find two peaks in the mutual information between EEG and stimulus class (target vs. distractor), with the second peak occurring 30–40 ms earlier for the FSVP and RSVP conditions. In addition, we find differences in the persistence of the peak mutual information between FSVP and RSVP conditions. We further investigate these differences using a mutual information based functional connectivity analysis and find significant fronto-parietal functional coupling for RSVP and FSVP but no significant coupling for the CSVP condition. We discuss these findings within the context of attentional engagement, evidence accumulation and short-term visual memory.
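
Mutual information between a continuous single-trial EEG feature and the binary stimulus class can be estimated with a simple plug-in binned estimator, as sketched below on simulated features. The bin count, class separation, and quantile-binning scheme are illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000

# Binary stimulus class (target vs. distractor) and a simulated single-trial
# EEG feature whose mean shifts with class, plus a class-independent control.
cls = rng.integers(0, 2, size=n)
informative = rng.normal(loc=2.0 * cls, scale=1.0)
uninformative = rng.normal(size=n)

def mutual_information(x, y, n_bins=12):
    """Plug-in MI (bits) between continuous x and binary y via quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    xb = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    joint = np.zeros((n_bins, 2))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

mi_hi = mutual_information(informative, cls)   # class-informative feature
mi_lo = mutual_information(uninformative, cls) # near zero up to estimator bias
```

Tracing such an MI estimate across post-stimulus time, per condition, is how the two peaks and their latency differences above can be quantified; the plug-in estimator is biased upward, so permutation baselines are typically used in practice.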

Removal of BCG artifacts using a non-Kirchhoffian overcomplete representation

We present a nonlinear unmixing approach for extracting the ballistocardiogram (BCG) from EEG recorded in an MR scanner during simultaneous acquisition of functional MRI (fMRI). First, an overcomplete basis is identified in the EEG based on a custom multipath EEG electrode cap. Next, the overcomplete basis is used to infer non-Kirchhoffian latent variables that are not consistent with a conservative electric field. Neural activity is strictly Kirchhoffian while the BCG artifact is not, and the representation can hence be used to remove the artifacts from the data in a way that does not attenuate the neural signals needed for optimal single-trial classification performance. We compare our method to more standard methods for BCG removal, namely independent component analysis and optimal basis sets, by looking at single-trial classification performance for an auditory oddball experiment. We show that our overcomplete representation method for removing BCG artifacts results in better single-trial classification performance compared to the conventional approaches, indicating that the derived neural activity in this representation retains the complex information in the trial-to-trial variability.

EEG-Informed fMRI Reveals Spatiotemporal Characteristics of Perceptual Decision Making

Single-unit and multiunit recordings in primates have already established that decision making involves at least two general stages of neural processing: representation of evidence from early sensory areas and accumulation of evidence to a decision threshold from decision-related regions. However, the relay of information from early sensory to decision areas, such that the accumulation process is instigated, is not well understood. Using a cued paradigm and single-trial analysis of electroencephalography (EEG), we previously reported on temporally specific components related to perceptual decision making. Here, we use information derived from our previous EEG recordings to inform the analysis of fMRI data collected for the same behavioral task to ascertain the cortical origins of each of these EEG components. We demonstrate that a cascade of events associated with perceptual decision making takes place in a highly distributed neural network. Of particular importance is an activation in the lateral occipital complex implicating perceptual persistence as a mechanism by which object decision making in the human brain is instigated.

Causal influences in the human brain during face discrimination: a short-window directed transfer function approach

In this letter, we consider the application of parametric spectral analysis, namely a short-window directed transfer function (DTF) approach, to multichannel electroencephalography (EEG) data acquired during a face discrimination task. We identify causal influences between occipitoparietal and centrofrontal electrode sites, the timing of which corresponds to previously reported EEG face-selective components. More importantly, we present evidence that there are both feedforward and feedback influences, a finding in direct contrast to current computational models of perceptual discrimination and decision making, which tend to favor a purely feedforward processing scheme.

Cortically-coupled computer vision for rapid image search

We describe a real-time electroencephalography (EEG)-based brain-computer interface system for triaging imagery presented using rapid serial visual presentation. A target image in a sequence of nontarget distractor images elicits in the EEG a stereotypical spatiotemporal response, which can be detected. A pattern classifier uses this response to reprioritize the image sequence, placing detected targets in the front of an image stack. We use single-trial analysis based on linear discrimination to recover spatial components that reflect differences in EEG activity evoked by target versus nontarget images. We find an optimal set of spatial weights for 59 EEG sensors within a sliding 50-ms time window. Using this simple classifier allows us to process EEG in real time. The detection accuracy across five subjects is on average 92%, i.e., in a sequence of 2500 images, resorting images based on detector output results in 92% of target images being moved from a random position in the sequence to one of the first 250 images (first 10% of the sequence). The approach leverages the highly robust and invariant object recognition capabilities of the human visual system, using single-trial EEG analysis to efficiently detect neural signatures correlated with the recognition event.
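
The linear discrimination step can be sketched as follows, using simulated window-averaged sensor data and a Fisher-style spatial weighting. This is a minimal stand-in for the classifier described above; the channel count, trial count, and signal strength are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ch, n_trials = 16, 400

# Window-averaged sensor data per trial (simulated): target trials carry an
# added spatial pattern (the evoked response), distractor trials only noise.
labels = rng.integers(0, 2, size=n_trials)
pattern = rng.normal(size=n_ch)
pattern *= 4.0 / np.linalg.norm(pattern)     # fix the effect size
X = rng.normal(size=(n_trials, n_ch)) + 0.5 * labels[:, None] * pattern

# Fisher/LDA spatial weighting: w = Sigma^-1 (mu_target - mu_distractor),
# with a small ridge for numerical stability.
mu1 = X[labels == 1].mean(axis=0)
mu0 = X[labels == 0].mean(axis=0)
Xc = X - np.where(labels[:, None] == 1, mu1, mu0)
Sigma = Xc.T @ Xc / n_trials
w = np.linalg.solve(Sigma + 1e-6 * np.eye(n_ch), mu1 - mu0)

# One discriminant value per trial; detector output used for reprioritization.
y = X @ w

# Area under the ROC curve (Az) via the rank-sum formulation.
pos, neg = y[labels == 1], y[labels == 0]
Az = np.mean(pos[:, None] > neg[None, :])
```

In the system above this weighting is learned separately within each sliding 50-ms window, and the per-trial detector outputs drive the resorting of the image stack.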

Neural representation of task difficulty and decision making during perceptual categorization: a timing diagram

When does the brain know that a decision is difficult to make? How does decision difficulty affect the allocation of neural resources and timing of constituent cortical processing? Here, we use single-trial analysis of electroencephalography (EEG) to identify neural correlates of decision difficulty and relate these to neural correlates of decision accuracy. Using a cued paradigm, we show that we can identify a component in the EEG that reflects the inherent task difficulty and not simply a correlation with the stimulus. We find that this decision difficulty component arises ≈220 ms after stimulus presentation, between two EEG components that are predictive of decision accuracy [an “early” (170 ms) and a “late” (≈300 ms) component]. We use these results to develop a timing diagram for perceptual decision making and relate the component activities to parameters of a diffusion model for decision making.

Recipes for the linear analysis of EEG

In this paper, we describe a simple set of “recipes” for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.
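
As one concrete "recipe," eye-motion artifacts can be removed by linearly regressing a reference (EOG-like) channel out of each EEG channel, consistent with the linear mixing model above. The sketch below runs on simulated data; the channel count, blink waveform, topography, and gains are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_t = 5000

# Simulated neural sources, a blink source, and their linear mixture into
# four EEG channels; the blink is also seen (strongly) on an EOG channel.
neural = rng.normal(size=(3, n_t))
blink = (rng.random(n_t) < 0.01).astype(float)
blink = np.convolve(blink, np.hanning(50), mode="same") * 20  # blink bumps
mix_neural = rng.normal(size=(4, 3))
blink_topo = np.array([1.0, 0.7, 0.4, 0.2])   # frontal-to-posterior falloff
eeg = mix_neural @ neural + blink_topo[:, None] * blink
eog = 5.0 * blink + 0.1 * rng.normal(size=n_t)

# Recipe: regress the (centered) EOG reference out of every EEG channel.
eog_c = eog - eog.mean()
gains = (eeg @ eog_c) / (eog_c @ eog_c)       # per-channel blink gain
cleaned = eeg - gains[:, None] * eog_c[None, :]

# The cleaned channels should no longer correlate with the blink source.
resid_corr = max(abs(np.corrcoef(cleaned[i], blink)[0, 1]) for i in range(4))
raw_corr = abs(np.corrcoef(eeg[0], blink)[0, 1])
```

Because both the artifact and the neural activity enter the sensors linearly, this single least-squares projection removes the blink while leaving the neural mixture intact, the same property the ICA- and discriminant-based recipes exploit.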

Linear Spatial Integration for Single-Trial Detection in Encephalography

Conventional analysis of electroencephalography (EEG) and magnetoencephalography (MEG) often relies on averaging over multiple trials to extract statistically relevant differences between two or more experimental conditions. In this article we demonstrate single-trial detection by linearly integrating information over multiple spatially distributed sensors within a predefined time window. We report an average, single-trial discrimination performance of Az ≈ 0.80 and fraction correct between 0.70 and 0.80, across three distinct encephalographic data sets. We restrict our approach to linear integration, as it allows the computation of a spatial distribution of the discriminating component activity. In the present set of experiments the resulting component activity distributions are shown to correspond to the functional neuroanatomy consistent with the task (e.g., contralateral sensory–motor cortex and anterior cingulate). Our work demonstrates how a purely data-driven method for learning an optimal spatial weighting of encephalographic activity can be validated against the functional neuroanatomy.