Response error correction: a demonstration of improved human–machine performance using real-time EEG monitoring

We describe a brain–computer interface (BCI) system that uses a set of adaptive linear preprocessing and classification algorithms for single-trial detection of the error-related negativity (ERN). We use the detected ERN as an estimate of a subject’s perceived error during an alternative forced-choice visual discrimination task, and use it to correct the subject’s errors. Our initial results show an average improvement in subject performance of 21% when errors are automatically corrected via the BCI. We are currently investigating the generalization of the overall approach to other tasks and stimulus paradigms.
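The error-correction logic described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight vector `w`, the logistic output, and the 0.5 threshold are all assumptions standing in for the adaptive linear detector trained on each subject's EEG.

```python
import numpy as np

def detect_ern(epoch, w, b=0.0, threshold=0.5):
    """Linear single-trial detector (hypothetical parameters): project the
    EEG epoch (channels x samples, flattened) onto learned weights w and
    pass through a logistic to estimate the probability that an
    error-related negativity (ERN) occurred."""
    score = float(np.dot(w, epoch.ravel()) + b)
    p_error = 1.0 / (1.0 + np.exp(-score))
    return p_error > threshold

def corrected_response(subject_choice, epoch, w):
    """If the detector flags a perceived error on this trial of a binary
    forced-choice task, flip the subject's response; otherwise pass it
    through unchanged."""
    if detect_ern(epoch, w):
        return 1 - subject_choice  # two-choice task: flip the response
    return subject_choice
```

In this sketch an erroneous trial whose EEG projects strongly onto `w` has its response flipped, which is how an ERN detector can raise overall task accuracy even with an imperfect subject.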

Towards Serious Games for Improved BCI

Brain-computer interface (BCI) technologies, or technologies that use online brain signal processing, hold great promise to improve human interactions with computers, their environment, and even other humans. Despite this promise, there are currently no serious BCI technologies in widespread use, largely because BCI systems lack robustness. The key neural aspect of this lack of robustness is human variability, which has two main components: (1) individual differences in neural signals and (2) intraindividual variability over time. In order to develop widespread BCI technologies, it will be necessary to address this lack of robustness. However, it is currently unknown how neural variability affects BCI performance. To accomplish these goals, it is essential to obtain data from large numbers of individuals using BCI technologies over considerable lengths of time. One promising method for this is the use of BCI technologies embedded into games with a purpose (GWAP). GWAP are a game-based form of crowdsourcing in which players choose to play for enjoyment and in which the player performs key tasks that cannot be automated but are required to solve research questions. By embedding BCI paradigms in GWAP and recording neural and behavioral data, it should be possible to much more clearly understand the differences in neural signals between individuals and across different time scales, enabling the development of novel and increasingly robust adaptive BCI algorithms.

NEDE: An Open-Source Scripting Suite for Developing Experiments in 3D Virtual Environments

Background: As neuroscientists endeavor to understand the brain’s response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort.
New method: To reduce these startup costs and make virtual-environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject’s experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts.
Results: Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities.
Comparison with existing methods: Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity’s extensive user base, a much more substantial body of assets and tutorials.
Conclusions: Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments.

Second-Order Bilinear Discriminant Analysis

Traditional analysis methods for single-trial classification of electroencephalography (EEG) focus on two types of paradigms: phase-locked methods, in which the amplitude of the signal is used as the feature for classification, that is, event-related potentials; and second-order methods, in which the feature of interest is the power of the signal, that is, event-related (de)synchronization. The process of deciding which paradigm to use is ad hoc and is driven by assumptions regarding the underlying neural generators. Here we propose a method that provides a unified framework for the analysis of EEG, combining first- and second-order spatial and temporal features based on a bilinear model. Evaluation of the proposed method on simulated data shows that the technique outperforms state-of-the-art techniques for single-trial classification across a broad range of signal-to-noise ratios. Evaluations on human EEG, including one benchmark data set from the Brain Computer Interface (BCI) competition, show statistically significant gains in classification accuracy, with a reduction in overall classification error from 26–28% to 19%.
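The two feature types unified by the bilinear model can be illustrated on a single trial. This is a hedged sketch: the variable names `u`, `v`, `u2` are illustrative spatial/temporal weight vectors, not the paper's notation, and a real system would learn them discriminatively rather than fix them by hand.

```python
import numpy as np

def bilinear_features(X, u, v, u2):
    """Combined first- and second-order features for one EEG trial
    X (channels x samples).

    First-order: the bilinear projection u^T X v (spatial weights u,
    temporal weights v), which captures phase-locked, evoked activity
    such as event-related potentials.

    Second-order: the mean power of a spatially filtered signal, which
    captures non-phase-locked event-related (de)synchronization."""
    first_order = float(u @ X @ v)
    filtered = u2 @ X                                  # 1 x samples virtual channel
    second_order = float(filtered @ filtered) / X.shape[1]  # mean power
    return first_order, second_order
```

Putting both feature types into one classifier is what removes the ad hoc choice between ERP-style and power-style analyses that the abstract describes.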

Cortically-coupled computer vision for rapid image search

We describe a real-time electroencephalography (EEG)-based brain-computer interface system for triaging imagery presented using rapid serial visual presentation. A target image in a sequence of nontarget distractor images elicits in the EEG a stereotypical spatiotemporal response, which can be detected. A pattern classifier uses this response to reprioritize the image sequence, placing detected targets at the front of an image stack. We use single-trial analysis based on linear discrimination to recover spatial components that reflect differences in EEG activity evoked by target versus nontarget images. We find an optimal set of spatial weights for 59 EEG sensors within a sliding 50-ms time window. Using this simple classifier allows us to process EEG in real time. The detection accuracy across five subjects is on average 92%; i.e., in a sequence of 2500 images, resorting the images based on detector output moves 92% of target images from a random position in the sequence into the first 250 images (the first 10% of the sequence). The approach leverages the highly robust and invariant object recognition capabilities of the human visual system, using single-trial EEG analysis to efficiently detect neural signatures correlated with the recognition event.
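The two core steps above — learning spatial weights by linear discrimination within a time window, and resorting the image sequence by detector score — can be sketched as follows. This is a minimal illustration under stated assumptions: the Fisher-discriminant form of the weights and the ridge term are simplifications, not the paper's exact training procedure.

```python
import numpy as np

def fisher_weights(X_target, X_nontarget):
    """Spatial weights for one ~50-ms window via a Fisher linear
    discriminant: w = Sigma^-1 (mu_target - mu_nontarget), where each row
    of X_* is one trial's channel activity averaged within the window
    (trials x channels). A sketch, not the paper's exact estimator."""
    mu_t = X_target.mean(axis=0)
    mu_n = X_nontarget.mean(axis=0)
    Xc = np.vstack([X_target - mu_t, X_nontarget - mu_n])
    cov = Xc.T @ Xc / len(Xc) + 1e-6 * np.eye(Xc.shape[1])  # ridge for stability
    return np.linalg.solve(cov, mu_t - mu_n)

def reprioritize(images, scores):
    """Resort an image sequence so the highest detector scores come first,
    moving likely targets toward the front of the stack."""
    order = np.argsort(np.asarray(scores))[::-1]
    return [images[i] for i in order]
```

Because both steps are linear, the whole pipeline is cheap enough to run in real time at RSVP presentation rates, which is the point the abstract makes about the "simple classifier".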

Recipes for the linear analysis of EEG

In this paper, we describe a simple set of “recipes” for the analysis of high-density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis, and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.
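One of the simplest such recipes, removing eye-motion artifacts by linearly regressing reference (EOG) channels out of the EEG, can be sketched directly from the linear mixing model. This is a minimal illustration under that model's assumptions; the function name and interface are ours, not the paper's.

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Remove eye-motion artifacts by least-squares regression of EOG
    reference channels out of every EEG channel.

    eeg: channels x samples, eog: refs x samples. Under the linear mixing
    model, each EEG channel contains a fixed linear combination of the EOG
    sources, so subtracting the least-squares fit removes that leakage."""
    # B[i, j]: how much of EOG ref j leaks into EEG channel i
    B = eeg @ eog.T @ np.linalg.pinv(eog @ eog.T)
    return eeg - B @ eog
```

The same least-squares machinery generalizes to the other recipes: each one chooses a different statistical criterion (class difference, power, independence) for the linear weights applied across channels.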