Voice Assignment in Vocal Quartets Using Deep Learning Models Based on Pitch Salience

Authors:

Abstract

This paper deals with the automatic transcription of audio performances of four-part a cappella singing. In particular, we exploit an existing deep-learning-based multiple-F0 estimation method and complement it with two neural network architectures for voice assignment (VA), creating a music transcription system that converts an input audio mixture into four pitch contours. To train our VA models, we create a novel synthetic dataset by collecting 5381 choral music scores from public-domain music archives, which we make publicly available for further research. We compare the performance of the proposed VA models on different types of input data, as well as against a hidden Markov model (HMM)-based baseline system. In addition, we assess the generalization capabilities of these models on audio recordings with differing pitch distributions and vocal music styles. Our experiments show that the two proposed models, a CNN and a ConvLSTM, perform very similarly, and both outperform the baseline HMM-based system. We also observe a high confusion rate between the alto and tenor voice parts, which commonly have overlapping pitch ranges, while the bass voice achieves the highest scores in all evaluated scenarios.

Keywords:

voice assignment; multi-pitch estimation; music information retrieval; vocal quartets; polyphonic vocal music; deep learning
  • Year: 2022
  • Volume: 5, Issue: 1
  • Page/Article: 99–112
  • DOI: 10.5334/tismir.121
  • Submitted on 18 Oct 2021
  • Accepted on 3 Mar 2022
  • Published on 26 May 2022
  • Peer Reviewed