In contrast to static imagery, detection of events of interest in video involves evidence accumulation across space and time; the observer must integrate features from both motion and form to decide whether a behavior constitutes a target event. Do such temporally extended events elicit evoked responses of similar strength to those associated with instantaneous events, such as the presentation of a static target image? Using a set of simulated scenarios in which avatars/actors perform different behaviors, we identified evoked neural activity discriminative of target vs. distractor events (behaviors) at discrimination levels comparable to those for static imagery. EEG discriminative activity was largely in the time-locked evoked response and not in oscillatory activity, with the exception of very low EEG frequency bands such as delta and theta, which simply represent the bands dominating the event-related potential (ERP). This discriminative evoked-response activity is observed across all target/distractor conditions and is robust across different recordings from the same subjects. The results suggest that we have identified a robust neural correlate of target detection in video, at least for the stimulus set we used—i.e., dynamic behavior of an individual in a low-clutter environment. Additional work is needed to test a larger variety of behaviors and more diverse environments.
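The single-trial analysis described above can be illustrated with a minimal sketch. This is not the authors' pipeline: the synthetic data, window sizes, signal-to-noise ratio, and choice of logistic regression are all illustrative assumptions. It shows the general idea of discriminating target from distractor epochs using time-locked (evoked) amplitude features and summarizing performance with an ROC AUC.

```python
# Hedged sketch (not the paper's actual pipeline): single-trial discrimination
# of target vs. distractor EEG epochs from time-locked evoked features.
# All parameters (sampling rate, ERP latency/width, SNR) are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
fs = 250                        # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # 1 s epoch, stimulus onset at t = 0
n_trials = 200

# Target epochs carry a P300-like evoked deflection; distractors are noise only.
erp = 2.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
labels = rng.integers(0, 2, n_trials)
epochs = rng.standard_normal((n_trials, t.size)) + labels[:, None] * erp

# Features: mean amplitude in consecutive 100 ms windows (time-locked evidence,
# analogous to using the ERP rather than oscillatory power).
win = int(0.1 * fs)
feats = epochs[:, : (t.size // win) * win].reshape(n_trials, -1, win).mean(axis=2)

# Cross-validated single-trial discrimination, summarized by AUC.
scores = cross_val_predict(LogisticRegression(max_iter=1000), feats, labels,
                           cv=5, method="decision_function")
auc = roc_auc_score(labels, scores)
print(f"target vs. distractor AUC: {auc:.2f}")
```

With the strong synthetic ERP used here the cross-validated AUC is well above chance (0.5); in real EEG the effect would be weaker and would typically require regularized classifiers and many more trials.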