VIDEO DOI: https://doi.org/10.48448/v5rn-cz20

poster

EMNLP 2021

November 08, 2021

How much pretraining data do language models need to learn syntax?
