Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer

Zhengyuan Liu, Nancy Chen


Abstract
Text style transfer is an important task in controllable language generation. Supervised approaches have driven performance improvements on style-oriented rewriting such as formality conversion. However, challenges remain due to the scarcity of large-scale parallel data in many domains. While unsupervised approaches do not rely on annotated sentence pairs for each style, they often suffer from instability issues such as mode collapse or quality degradation. To take advantage of both supervised and unsupervised paradigms and tackle these challenges, in this work we propose a semi-supervised framework for text style transfer. First, the learning process is bootstrapped with supervision guided by automatically constructed pseudo-parallel pairs obtained via lexical and semantic-based methods. The model then learns from unlabeled data via reinforcement rewards. Specifically, we propose to improve the sequence-to-sequence policy gradient via stepwise reward optimization, providing fine-grained learning signals and stabilizing the reinforcement learning process. Experimental results show that the proposed approach achieves state-of-the-art performance on multiple datasets and produces effective generation with as little as 10% of the training data.
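The contrast between a sequence-level reward and the stepwise reward described in the abstract can be illustrated with a toy REINFORCE-style sketch. All numbers, function names, and the discount factor below are hypothetical illustrations, not the paper's actual implementation (which uses neural policy and reward models): a sequence-level reward scales every token's log-probability by one sparse scalar, while a stepwise reward scales each token by its own discounted reward-to-go.

```python
import math

def sequence_level_terms(log_probs, final_reward):
    # Sparse signal: one scalar reward for the whole generated sequence,
    # broadcast identically to every generation step.
    return [-lp * final_reward for lp in log_probs]

def stepwise_terms(log_probs, step_rewards, gamma=0.95):
    # Fine-grained signal: each token's log-probability is scaled by its
    # own discounted reward-to-go, so credit is assigned per step.
    terms = []
    for t, lp in enumerate(log_probs):
        reward_to_go = sum(
            (gamma ** (k - t)) * r
            for k, r in enumerate(step_rewards)
            if k >= t
        )
        terms.append(-lp * reward_to_go)
    return terms

# Toy log-probabilities for a three-token generated sequence.
log_probs = [math.log(0.6), math.log(0.4), math.log(0.8)]
seq_loss = sequence_level_terms(log_probs, final_reward=1.0)
step_loss = stepwise_terms(log_probs, step_rewards=[0.2, 0.5, 1.0])
print([round(x, 3) for x in seq_loss])
print([round(x, 3) for x in step_loss])
```

In the sequence-level case every step receives the same learning signal regardless of which token helped or hurt the final style score; in the stepwise case each term differs, which is the fine-grained, stabilizing property the abstract refers to.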
Anthology ID:
2022.findings-naacl.201
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2633–2648
URL:
https://aclanthology.org/2022.findings-naacl.201
DOI:
10.18653/v1/2022.findings-naacl.201
Cite (ACL):
Zhengyuan Liu and Nancy Chen. 2022. Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2633–2648, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Learning from Bootstrapping and Stepwise Reinforcement Reward: A Semi-Supervised Framework for Text Style Transfer (Liu & Chen, Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.201.pdf
Video:
https://aclanthology.org/2022.findings-naacl.201.mp4
Code:
seq-to-mind/semi-style-transfer
Data:
GYAFC