Early Parallel Processing of Auditory Word and Voice Information
The present study investigates the relationship between linguistic (phonetic) and extralinguistic (voice) information in preattentive auditory processing. We provide neurophysiological data showing, for the first time, that both kinds of information are processed in parallel at an early preattentive stage. To establish the temporal and spatial organization of the underlying neuronal processes, we studied the conjunction of voice and word deviations in a mismatch negativity experiment in which the listener's brain responses were recorded with magnetoencephalography. The stimuli consisted of single spoken words; the deviants manifested a change of the word, of the voice, or of both word and voice simultaneously (combined). First, we identified the N100m (overlain by the mismatch field, MMF) and localized its generators, analyzing N100m/MMF latency, dipole localization, and dipole strength. Although the responses evoked by the deviant stimuli were localized more anteriorly than those evoked by the standard, no localization differences between the deviants could be shown. Dipole strength was larger for the deviants than for the standard stimulus, but again, no differences between the deviants could be established. There was no difference in the hemispheric lateralization of the responses. However, the deviants did differ in latency: the N100m/MMF showed a significantly shorter and less variable latency for the combined stimulus than for all other experimental conditions. The data suggest an integral parallel processing model in which the early extraction of phonetic and voice information from the speech signal proceeds as parallel and contingent processes.