Real-time full-band voice conversion with sub-band modeling and data-driven phase estimation of spectral differentials


Paper
Coming soon

Authors
Takaaki Saeki, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari
(The University of Tokyo, Japan)

Abstract
This paper proposes two high-fidelity and computationally efficient neural voice conversion (VC) methods based on direct waveform modification using spectral differentials. The conventional spectral-differential VC method with a minimum-phase filter achieves high-quality conversion for narrow-band (16 kHz-sampled) VC but incurs a heavy computational cost in filtering. This is because the minimum phase obtained using a fixed lifter of the Hilbert transform often results in a long-tap filter. Furthermore, when we extend the method to full-band (48 kHz-sampled) VC, the computational cost grows further due to the increased number of sampling points, and the converted-speech quality degrades due to large fluctuations in the high-frequency band. To construct a short-tap filter, we propose a lifter-training method for data-driven phase reconstruction that trains a lifter of the Hilbert transform while taking filter truncation into account. We also propose a frequency-band-wise modeling method based on sub-band multi-rate signal processing (sub-band modeling method) for full-band VC. It improves computational efficiency by reducing the number of sampling points of the signals converted with filtering, and improves converted-speech quality by modeling only the low-frequency band. We conducted several objective and subjective evaluations to investigate the effectiveness of the proposed methods through a real-time, online, full-band VC system we developed on the basis of these methods. The results indicate that 1) the proposed lifter-training method for narrow-band VC can shorten the tap length to 1/16 without degrading the converted-speech quality, 2) the proposed sub-band modeling method for full-band VC can improve the converted-speech quality while reducing the computational cost, and 3) our real-time, online, full-band VC system can convert 48 kHz-sampled speech in real time, achieving a mean opinion score for naturalness of 3.6 out of 5.0.
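The conventional baseline described above derives a minimum-phase filter from a (log-)amplitude spectral differential via the fixed cepstrum lifter of the Hilbert transform, then truncates the impulse response to a finite number of taps. A minimal NumPy sketch of that fixed-lifter baseline is below; the function name, FFT size, and tap count are illustrative assumptions, not the paper's implementation (the proposed method instead trains the lifter so that truncation degrades quality less).

```python
import numpy as np

def minimum_phase_filter(log_amp, n_fft=1024, n_taps=64):
    """Reconstruct a minimum-phase FIR filter from a one-sided
    log-amplitude spectrum (e.g., a spectral differential) using the
    fixed cepstrum lifter of the Hilbert transform, then truncate it.

    log_amp: one-sided log-amplitude spectrum, shape (n_fft//2 + 1,).
    n_taps:  length of the truncated (short-tap) filter.
    """
    # Mirror to a full two-sided spectrum and take the real cepstrum.
    full_spec = np.concatenate([log_amp, log_amp[-2:0:-1]])
    cepstrum = np.fft.ifft(full_spec).real
    # Fixed minimum-phase lifter: keep c[0] and c[n_fft/2],
    # double the causal quefrencies, zero the anti-causal ones.
    lifter = np.zeros(n_fft)
    lifter[0] = 1.0
    lifter[1:n_fft // 2] = 2.0
    lifter[n_fft // 2] = 1.0
    # Back to the frequency domain with minimum phase attached.
    min_phase_spec = np.exp(np.fft.fft(cepstrum * lifter))
    impulse = np.fft.ifft(min_phase_spec).real
    # Truncation to a short-tap filter; with this fixed lifter the
    # discarded tail can be long, which is what the lifter-training
    # method is designed to mitigate.
    return impulse[:n_taps]
```

For a flat differential (all-zero log amplitude) the result is a unit impulse, i.e., the filter passes the signal unchanged, which is a quick sanity check for the lifter.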


Speech samples

F2F: Female speaker (JSUT corpus [2]) to Female speaker (VOICEACTRESS Corpus [3])

Source | Target | [Arakawa+, 2019] | Benchmark | Narrow-band+ | Full-band | Full-band+
Sample 1
Sample 2
Sample 3

NOTE: We publish only the female-to-female speech samples used in the experimental evaluations, due to licensing issues with the 48 kHz-sampled JVS corpus [2].


Real-time demo video

You can experience both the high quality and the low latency of our system by listening to the original and converted speech in the left and right headphones, respectively.



References

  1. R. Arakawa, et al., "DNN-based real-time voice conversion and its improvements by audio data augmentation and mask-shaped device," in Proc. SSW, Vienna, Austria, Sep. 2019, pp. 93-98.
  2. S. Takamichi, et al., "JSUT and JVS: Free Japanese voice corpora for accelerating speech synthesis research," Acoustical Science and Technology, vol. 41, pp. 761-768, 2020.
  3. y_benjo and MagnesiumRibbon, "Voice-actress corpus." http://voice-statistics.github.io/.