Reconstructing Speech from Human Auditory Cortex

Direct brain recordings from neurosurgical patients listening to speech reveal that the acoustic speech signal can be reconstructed from neural activity in auditory cortex.

Scientists used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity.

They found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram.
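
As a rough illustration of this kind of linear decoding, the Python sketch below maps time-lagged neural activity onto the frequency bands of an auditory spectrogram using ridge regression. The array names, lag window, and regularization strength are illustrative assumptions, not the authors' actual parameters.

```python
# Minimal sketch of a linear stimulus-reconstruction (decoding) model:
# predict each spectrogram frequency band from a window of lagged neural
# activity. Assumes `neural` is (time x electrodes) and `spectrogram` is
# (time x frequency bands), sampled at the same frame rate.
import numpy as np
from sklearn.linear_model import Ridge

def build_lagged_features(neural, n_lags):
    """Stack time-lagged copies of the neural activity matrix."""
    T, E = neural.shape
    lagged = np.zeros((T, E * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * E:(lag + 1) * E] = neural[:T - lag]
    return lagged

def fit_linear_decoder(neural, spectrogram, n_lags=20, alpha=1.0):
    """Fit one ridge regression per spectrogram frequency band (multi-output)."""
    X = build_lagged_features(neural, n_lags)
    decoder = Ridge(alpha=alpha)
    decoder.fit(X, spectrogram)
    return decoder

def reconstruct(decoder, neural, n_lags=20):
    """Reconstruct the spectrogram from held-out neural activity."""
    return decoder.predict(build_lagged_features(neural, n_lags))
```

In this framing, the decoder is trained on speech the patient heard while neural activity was recorded, and then applied to held-out recordings to produce the reconstructed spectrogram.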

However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility.
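
The sketch below shows one way a temporal modulation-energy representation can be computed: each spectrogram band is band-pass filtered at several modulation rates and the magnitude of the filtered signal is retained. The filter design, modulation rates, and 100 Hz frame rate are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative temporal modulation-energy representation: band-pass filter
# each spectrogram channel at a set of modulation rates (in Hz) and keep the
# magnitude of the analytic signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_energy(spectrogram, frame_rate=100.0,
                      rates=(2.0, 4.0, 8.0, 16.0, 32.0)):
    """Return modulation energy with shape (time, frequency bands, rates)."""
    T, F = spectrogram.shape
    energy = np.zeros((T, F, len(rates)))
    for r, rate in enumerate(rates):
        # Roughly octave-wide band centered on each modulation rate.
        low, high = rate / np.sqrt(2), rate * np.sqrt(2)
        b, a = butter(2, [low, high], btype="band", fs=frame_rate)
        filtered = filtfilt(b, a, spectrogram, axis=0)
        energy[:, :, r] = np.abs(hilbert(filtered, axis=0))
    return energy
```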

The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
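
One plausible form of such a readout, sketched below, identifies a word by correlating the reconstructed spectrogram against a set of candidate word spectrograms and choosing the best match. The correlation metric and the candidate dictionary are assumptions about how an identification step of this kind could work, not a description of the authors' exact procedure.

```python
# Hedged sketch of single-trial word identification by template matching.
import numpy as np

def identify_word(reconstructed, candidates):
    """candidates: dict mapping word -> spectrogram of the same shape."""
    scores = {}
    for word, template in candidates.items():
        r = np.corrcoef(reconstructed.ravel(), template.ravel())[0, 1]
        scores[word] = r
    best = max(scores, key=scores.get)
    return best, scores
```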

Spoken language is a uniquely human trait. The human brain has evolved computational mechanisms that decode highly variable acoustic inputs into meaningful elements of language such as phonemes and words. Unraveling these decoding mechanisms in humans has proven difficult, because invasive recording of cortical activity is usually not possible.

In this study, the scientists take advantage of rare neurosurgical procedures for the treatment of epilepsy, in which neural activity is measured directly from the cortical surface, providing a unique opportunity to characterize how the human brain performs speech recognition.

Using these recordings, the scientists sought to determine what aspects of speech sounds could be reconstructed, or decoded, from higher order brain areas in the human auditory system.

Posted on Friday, February 10, 2012, 4:54 AM EDT. Ref: Brian N. Pasley et al.
