
Machine translation of cortical activity to text with an encoder–decoder framework

Abstract

A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent advances in machine translation, we train a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence. For each participant, data consist of several spoken repeats of a set of 30–50 sentences, along with the contemporaneous signals from ~250 electrodes distributed over peri-Sylvian cortices. Average word error rates across a held-out repeat set are as low as 3%. Finally, we show how decoding with limited data can be improved with transfer learning, by training certain layers of the network under multiple participants’ data.
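As an illustration of the encoder–decoder scheme described above, the sketch below shows the general idea in Python (PyTorch): one recurrent network summarizes a sentence-length window of multi-electrode activity into a fixed-length state, and a second recurrent network unrolls that state into a word sequence. Every layer size, class name and variable name here is an assumption for illustration only; this is not the authors' released implementation (see "Code availability" below).

```python
# Minimal, illustrative sentence-level encoder–decoder (seq2seq) sketch.
# All sizes and names are assumptions, not the authors' code.
import torch
import torch.nn as nn

class NeuralSeq2Seq(nn.Module):
    def __init__(self, n_electrodes, vocab_size, hidden=256, emb=128):
        super().__init__()
        # Encoder: reads the neural time series for one sentence and
        # compresses it into a fixed-length hidden state.
        self.encoder = nn.LSTM(n_electrodes, hidden, batch_first=True)
        # Decoder: unrolls that state into a word sequence.
        self.embed = nn.Embedding(vocab_size, emb)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, vocab_size)

    def forward(self, neural, words_in):
        # neural:   (batch, time_steps, n_electrodes) neural features
        # words_in: (batch, sentence_length) previous words (teacher forcing)
        _, state = self.encoder(neural)               # sentence-level summary
        out, _ = self.decoder(self.embed(words_in), state)
        return self.readout(out)                      # (batch, length, vocab)
```

In a sketch like this, the decoder is fed the previous ground-truth word during training and its own previous prediction at test time. The transfer-learning result mentioned in the abstract corresponds to sharing selected layers across participants while keeping others participant-specific; exactly which layers are shared is specified in the paper, not in this illustration.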


Fig. 1: The decoding pipeline.
Fig. 2: WERs of the decoded sentences.
Fig. 3: WER of the decoded MOCHA-1 sentences for encoder–decoder models trained with transfer learning.
Fig. 4: The contributions of each anatomical area to decoding, as measured by the gradient of the loss function with respect to the input data (see “Anatomical contributions” for details; an illustrative sketch of this computation follows the figure list).
Fig. 5: Electrode coverage and contributions.
Fig. 6: Graphical model for the decoding process.
Fig. 7: Network architecture.
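Fig. 4 quantifies anatomical contributions by the gradient of the loss with respect to the input data. The snippet below is a rough Python (PyTorch) illustration of that kind of input-gradient (saliency) score only; `model` and `loss_fn` are hypothetical stand-ins, and the reduction over time is an assumption, not the paper's exact procedure.

```python
import torch

def electrode_saliency(model, neural, target_words, loss_fn):
    """Illustrative input-gradient score: how strongly the loss depends on
    each electrode's signal. `model` and `loss_fn` are hypothetical."""
    x = neural.clone().detach().requires_grad_(True)   # (time, n_electrodes)
    # Shift targets by one: previous words in, next words predicted.
    logits = model(x.unsqueeze(0), target_words.unsqueeze(0)[:, :-1])
    loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                   target_words[1:].reshape(-1))
    loss.backward()
    # Average the absolute gradient over time: one score per electrode.
    return x.grad.abs().mean(dim=0)
```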


Data availability

Deidentified copies of the data used in this study will be provided upon reasonable request. Please contact E.F.C. via e-mail with any inquiries. Source data for the figures are likewise available upon request; please contact J.G.M. via e-mail with inquiries.

Code availability

The code used to train and test the encoder–decoders is available at https://github.com/jgmakin/machine_learning. Code used to assemble data and generate figures is also available upon reasonable request; please contact J.G.M. via e-mail with any inquiries.


Acknowledgements

The project was funded by a research contract under Facebook’s Sponsored Academic Research Agreement. Data were collected and preprocessed by members of the Chang laboratory, some (MOCHA-TIMIT) under NIH grant no. U01 NS098971. Some neural networks were trained using GPUs generously donated by the Nvidia Corporation. We thank M. Leonard, B. Dichter and P. Hullett for comments on a draft of the manuscript and thank J. Burke for suggesting bipolar referencing.

Author information


Contributions

J.G.M. conceived and implemented the decoder and all analyses thereof, except the comparison to the phoneme-based decoder, which was conceived and implemented by D.A.M. E.F.C. led the research project. J.G.M. wrote the manuscript with input from all authors.

Corresponding authors

Correspondence to Joseph G. Makin or Edward F. Chang.

Ethics declarations

Competing interests

This work was funded in part by Facebook Reality Labs. UCSF holds patents related to speech decoding.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Fig. 1 and Supplementary Tables 1–6.

Reporting Summary


About this article


Cite this article

Makin, J.G., Moses, D.A. & Chang, E.F. Machine translation of cortical activity to text with an encoder–decoder framework. Nat Neurosci 23, 575–582 (2020). https://doi.org/10.1038/s41593-020-0608-8
