Tagged: Image Analysis

Training neural networks for computer-aided diagnosis: experience in the intelligence community

Neural networks are often used in computer-aided diagnosis (CAD) systems for detecting clinically significant objects. They have also been applied in the intelligence community to cue image analysts (IAs) for assisted target recognition and wide-area search. Given the similarity between the applications in the two communities, there are a number of common issues that must be considered when training these neural networks. Two such issues are: (1) exploiting information at multiple scales (e.g., context and detail structure), and (2) dealing with uncertainty (e.g., errors in truth data). We address these two issues by transferring architectures and training algorithms originally developed for assisting IAs in search applications to improve CAD for mammography. These include hierarchical pyramid neural network (HPNN) architectures that automatically learn and integrate multi-resolution features for improving microcalcification and mass detection in CAD systems. These networks are trained using an uncertain object position (UOP) error function for the supervised learning of image search/detection tasks in which the position of the objects to be found is uncertain or ill-defined. The results show that the HPNN architecture trained with the UOP error function reduces the false-positive rate of a mammographic CAD system by 30%-50% without any significant loss in sensitivity. We conclude that the transfer of assisted target recognition technology from the intelligence community to the medical community can significantly impact the clinical utility of CAD systems.
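The UOP error function scores an entire training region rather than a single pixel, so a detection network can be trained even when the object's exact location within a positive region is unknown. Below is a minimal sketch of one way such a region-level objective can be written, aggregating per-pixel detection outputs with a noisy-OR; the aggregation rule and loss form here are illustrative assumptions, not the exact formulation from the paper.

```python
import numpy as np

def uop_loss(pixel_probs, region_label, eps=1e-7):
    """Uncertain-object-position (UOP) style loss -- a sketch.

    pixel_probs : 1-D array of per-pixel detection probabilities for one
                  training region (output of the detection network).
    region_label: 1 if the region contains an object somewhere, else 0.

    Assumption (not taken from the abstract): the region-level probability
    is aggregated with a noisy-OR, P(region) = 1 - prod(1 - p_i), and the
    loss is the cross-entropy against the region label.
    """
    p = np.clip(pixel_probs, eps, 1.0 - eps)
    p_region = 1.0 - np.prod(1.0 - p)          # object somewhere in the region
    p_region = np.clip(p_region, eps, 1.0 - eps)
    return -(region_label * np.log(p_region)
             + (1 - region_label) * np.log(1.0 - p_region))

# Example: a positive region where only one pixel responds strongly
print(uop_loss(np.array([0.05, 0.9, 0.1]), region_label=1))
```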

A system for single-trial analysis of simultaneously acquired EEG and fMRI

In this paper we describe a system for simultaneously acquiring EEG and fMRI and evaluate it in terms of discriminating single-trial, task-related neural components in the EEG. Using an auditory oddball stimulus paradigm, we acquire EEG data both inside and outside a 1.5T MR scanner and compare power spectra and single-trial discrimination performance across the two conditions. We find that EEG activity acquired inside the MR scanner during echo planar image acquisition is of high enough quality to enable single-trial discrimination performance that is 95% of that acquired outside the scanner. We conclude that EEG acquired simultaneously with fMRI is of high enough fidelity to permit single-trial analysis.
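As a rough illustration of what single-trial discrimination on such data involves, the sketch below applies a linear classifier to window-averaged epoch features and reports the ROC area (Az). The windowing, feature choice, and classifier are assumptions for illustration, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Toy data: trials x channels x samples (epochs time-locked to the auditory
# oddball stimulus). Shapes and preprocessing are illustrative only.
rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 32, 128
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)           # 1 = oddball, 0 = standard

# Feature: mean activity per channel inside an assumed post-stimulus window
window = slice(50, 90)                          # e.g. a ~300-500 ms window
features = eeg[:, :, window].mean(axis=2)       # trials x channels

# Linear discrimination of single trials, evaluated with the ROC area (Az)
train, test = slice(0, 150), slice(150, None)
clf = LogisticRegression(max_iter=1000).fit(features[train], labels[train])
scores = clf.decision_function(features[test])
print("single-trial Az:", roc_auc_score(labels[test], scores))
```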

Detection, synthesis and compression in mammographic image analysis with a hierarchical image probability model

We develop a probability model over image spaces and demonstrate its broad utility in mammographic image analysis. The model employs a pyramid representation to factor images across scale and a tree-structured set of hidden variables to capture long-range spatial dependencies. This factoring makes the computation of the density functions local and tractable. The result is a hierarchical mixture of conditional probabilities, similar to a hidden Markov model on a tree. The model parameters are found by maximum likelihood estimation using the EM algorithm. The utility of the model is demonstrated for three applications: (1) detection of mammographic masses in computer-aided diagnosis, (2) qualitative assessment of model structure through mammographic synthesis, and (3) compression of mammographic regions of interest.
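The "hidden Markov model on a tree" structure makes the likelihood computable exactly by a single leaves-to-root pass that marginalizes the hidden labels locally at each node. The sketch below shows such a pass on a toy two-level pyramid; the number of states, the Gaussian emissions, and the parameter values are illustrative assumptions, and the EM parameter updates are omitted.

```python
import numpy as np

# Sketch of the tree-structured likelihood computation underlying a
# hierarchical image probability model: each pyramid node carries a discrete
# hidden label, children depend on their parent, and local image features
# depend on the node's label. All distributions here are placeholders.
K = 3                                   # number of hidden label states
rng = np.random.default_rng(0)
prior = np.full(K, 1.0 / K)             # root label prior
trans = rng.dirichlet(np.ones(K), K)    # trans[parent, child] = P(child | parent)
means = np.array([-1.0, 0.0, 1.0])      # Gaussian emission mean per state

def emission(x):
    """P(observation | each state), Gaussian with unit variance."""
    return np.exp(-0.5 * (x - means) ** 2) / np.sqrt(2 * np.pi)

def upward(node):
    """Leaves-to-root pass: returns P(observations in subtree | node state)."""
    obs, children = node
    beta = emission(obs)
    for child in children:
        beta = beta * (trans @ upward(child))   # marginalize the child's label
    return beta

# Tiny two-level "pyramid": a root (coarse scale) with four children (fine scale)
tree = (0.2, [(-1.1, []), (0.9, []), (1.2, []), (0.1, [])])
log_likelihood = np.log(prior @ upward(tree))
print("log P(image features):", log_likelihood)
```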

Hierarchical multi-resolution models for object recognition: Applications to mammographic computer-aided diagnosis

A fundamental problem in image analysis is the integration of information across scale to detect and classify objects. We have developed, within a machine learning framework, two classes of multi-resolution models for integrating scale information for object detection and classification: a discriminative model called the hierarchical pyramid neural network and a generative model called a hierarchical image probability model. Using receiver operating characteristic analysis, we show that these models can significantly reduce the false-positive rates for a well-established computer-aided diagnosis system.
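For a sense of the receiver operating characteristic comparison referred to here, the sketch below takes detector scores for true and false detections, computes the area under the curve, and reads off the false-positive rate at a fixed sensitivity for a baseline versus a multi-resolution rescoring. The scores are synthetic and the printed numbers are not results from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic detector scores: 50 true lesions and 500 false detections, scored
# by a baseline CAD detector and by the same cases after multi-resolution rescoring.
rng = np.random.default_rng(1)
labels = np.r_[np.ones(50), np.zeros(500)]
baseline = np.r_[rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 500)]
rescored = np.r_[rng.normal(1.5, 1.0, 50), rng.normal(0.0, 1.0, 500)]

def fpr_at_sensitivity(y, scores, target_tpr=0.9):
    """False-positive rate at the first ROC point reaching the target sensitivity."""
    fpr, tpr, _ = roc_curve(y, scores)
    return fpr[np.searchsorted(tpr, target_tpr)]

for name, s in [("baseline", baseline), ("multi-resolution", rescored)]:
    print(name, "Az =", round(roc_auc_score(labels, s), 3),
          "FPR@90% sensitivity =", round(fpr_at_sensitivity(labels, s), 3))
```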

In a Blink of an Eye and a Switch of a Transistor: Cortically-coupled Computer Vision

Our society’s information technology advancements have resulted in the increasingly problematic issue of information overload; that is, we have access to more information than we can possibly process. This is nowhere more apparent than in the volume of imagery and video that we can access on a daily basis, whether for the general public, with the availability of YouTube video and Google Images, or for the image analysis professional tasked with searching security video or satellite reconnaissance imagery. Deciding which images to look at, and how to ensure we see the images that are of most interest to us, raises the question of whether there are smart ways to triage this volume of imagery. Over the past decade, computer vision research has focused on the issue of ranking and indexing imagery. However, computer vision is limited in its ability to identify interesting imagery, particularly as "interesting" might be defined by an individual. In this paper we describe our efforts in developing brain-computer interfaces (BCIs) which synergistically integrate computer vision and human vision so as to construct a system for image triage. Our approach exploits machine learning for real-time decoding of brain signals which are recorded noninvasively via electroencephalography (EEG). The signals we decode are specific to events related to imagery attracting a user’s attention. We describe two architectures we have developed for this type of cortically coupled computer vision and discuss potential applications and challenges for the future.
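A toy sketch of the triage step in such a system: EEG epochs time-locked to rapidly presented images are scored by a classifier trained on a calibration run, and the image stream is re-ranked by the decoded interest score. The data, epoch shapes, and the simple linear classifier on raw epochs are placeholders; real systems decode specific attention-related EEG components in real time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Images are flashed rapidly (RSVP) while EEG is recorded; a classifier trained
# on labeled epochs scores new epochs, and the image stream is re-ranked so
# likely targets surface first. All data and shapes are illustrative.
rng = np.random.default_rng(2)
n_images, n_channels, n_samples = 300, 64, 100
epochs = rng.standard_normal((n_images, n_channels, n_samples))
is_target = np.zeros(n_images, dtype=bool)
is_target[::20] = True                           # a few rare "interesting" images

# Train on the first half (calibration run), triage the second half
feats = epochs.reshape(n_images, -1)
clf = LogisticRegression(max_iter=2000).fit(feats[:150], is_target[:150])
interest = clf.decision_function(feats[150:])    # decoded per-image interest

triage_order = np.argsort(-interest)             # most interesting first
print("images to review first:", triage_order[:10] + 150)
```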