Incremental text-to-speech (TTS) synthesis generates utterances in small linguistic units to enable real-time, low-latency applications. We previously proposed an incremental TTS method that leverages a large pre-trained language model to take unobserved future context into account without waiting for the subsequent segment. Although this method achieves speech quality comparable to that of a method that waits for the future context, it incurs substantial computation for sampling from the language model at each time step. In this paper, we propose an incremental TTS method that directly predicts the unobserved future context with a lightweight model, instead of sampling words from the large-scale language model. We perform knowledge distillation from a GPT2-based context prediction network into a simple recurrent model by minimizing a teacher-student loss defined between the context embedding vectors of the two models. Experimental results show that the proposed method requires about ten times less inference time to achieve synthetic speech quality comparable to that of our previous method, and that it performs incremental synthesis much faster than the average speaking rate of human English speakers, demonstrating its applicability to real-time applications.
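The distillation objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network shapes, the plain Elman recurrent cell, and the use of mean squared error between context embeddings are all assumptions, and random vectors stand in for real word embeddings and for the GPT2-based teacher's context embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn_step(x, h, Wx, Wh, b):
    # One step of a plain (Elman) recurrent cell: h' = tanh(Wx x + Wh h + b).
    return np.tanh(Wx @ x + Wh @ h + b)

def student_context_embedding(word_vecs, Wx, Wh, b):
    # Run the lightweight recurrent model over the observed words and use the
    # final hidden state as the predicted future-context embedding.
    h = np.zeros(Wh.shape[0])
    for x in word_vecs:
        h = simple_rnn_step(x, h, Wx, Wh, b)
    return h

def distillation_loss(student_emb, teacher_emb):
    # Teacher-student loss: mean squared error between context embeddings.
    return float(np.mean((student_emb - teacher_emb) ** 2))

# Toy dimensions (hypothetical): 8-dim word vectors, 16-dim context embedding.
d_in, d_ctx, n_words = 8, 16, 5
Wx = rng.normal(scale=0.1, size=(d_ctx, d_in))
Wh = rng.normal(scale=0.1, size=(d_ctx, d_ctx))
b = np.zeros(d_ctx)

words = rng.normal(size=(n_words, d_in))   # observed segment (stand-in data)
teacher_emb = rng.normal(size=d_ctx)       # stand-in for the teacher's context embedding

s = student_context_embedding(words, Wx, Wh, b)
loss = distillation_loss(s, teacher_emb)
```

In training, this loss would be minimized with respect to the student parameters (`Wx`, `Wh`, `b`) while the teacher stays frozen, so that the cheap recurrent model reproduces the teacher's context embeddings at inference time.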
※ wpm: words per minute

Audio samples (players omitted from this text version). Each sentence below was synthesized under six conditions: Groundtruth, Fullsentence, Unicontext (800 wpm), Teacher (80 wpm), Student λ=0.95 (800 wpm), and Student λ=1 (800 wpm).

1. "that is reflected in definite and comprehensive operating procedures."
2. "After this came the charge of administering oil of vitriol, which failed, as has been described."
3. "In planning its data processing techniques,"
4. "During November the Dallas papers reported frequently on the plans for protecting the President, stressing the thoroughness of the preparations."
5. "with hope to the last. There is always the chance of a flaw in the indictment, of a missing witness, or extenuating circumstances."
6. "The occupants of this terrible black pew were the last always to enter the chapel."
7. "who was one of the first witnesses to alert the police to the Depository as the source of the shots, as has been discussed in chapter three."