Published in final edited form as: J Neural Eng. 2020 Nov 25;17(6):066007. doi: 10.1088/1741-2552/abbfef

Fig. 6:

Speech synthesis using ‘brain-to-speech’ unit selection.

(A) Audio waveforms for the actual words spoken by participant T5 (top) and the synthesized audio reconstructed from neural data (bottom). (B) Corresponding acoustic spectrograms. The correlation coefficient between the true and synthesized audio, averaged across all 40 Mel-frequency bins, was 0.696 for these nine examples.
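
To illustrate how a per-bin similarity metric of this kind could be computed, the sketch below estimates the Pearson correlation between true and synthesized audio within each of 40 Mel-frequency bins and averages across bins. It is a minimal re-implementation under assumed settings, not the authors' code: the sampling rate, log scaling, and framing parameters are assumptions, and the function name `mel_bin_correlation` is hypothetical.

```python
import numpy as np
import librosa


def mel_bin_correlation(true_audio, synth_audio, sr=22050, n_mels=40):
    """Average per-bin Pearson correlation between the Mel spectrograms of
    true and synthesized audio. Parameters other than n_mels are assumptions,
    not taken from the paper."""
    # Log-Mel spectrograms of each waveform (shape: n_mels x n_frames).
    true_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=true_audio, sr=sr, n_mels=n_mels))
    synth_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=synth_audio, sr=sr, n_mels=n_mels))

    # Truncate to a common length in case the two clips differ by a few frames.
    n_frames = min(true_mel.shape[1], synth_mel.shape[1])
    true_mel, synth_mel = true_mel[:, :n_frames], synth_mel[:, :n_frames]

    # Pearson correlation over time within each Mel bin, then average across bins.
    per_bin_r = [np.corrcoef(true_mel[b], synth_mel[b])[0, 1]
                 for b in range(n_mels)]
    return float(np.mean(per_bin_r))
```

A value near 1 would indicate that the synthesized audio closely tracks the spectral content of the spoken words in every frequency band, while a value near 0 would indicate little correspondence.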