<ul class="dashed" data-apple-notes-indent-amount="0"><li><span style="font-family: '.PingFangSC-Regular'">Paper title: </span>Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech</li><li><span style="font-family: '.PingFangSC-Regular'">Paper link: </span><a href="https://arxiv.org/abs/2106.06103">https://arxiv.org/abs/2106.06103</a> </li><li>ICML 2021</li></ul> <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721737926/8B557D35-9FF2-4AFE-83E5-F644BAAE1977.png" style="background-color:initial;max-width:min(100%,2300px);max-height:min(1334px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721737926/8B557D35-9FF2-4AFE-83E5-F644BAAE1977.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="2300" height="1334"> The paper proposes VITS, a parallel end-to-end TTS model that generates more natural-sounding speech than conventional two-stage pipelines. The method adopts variational inference augmented with normalizing flows, which increases the expressive power of the generative model. To produce speech with diverse rhythms, the model introduces a stochastic duration predictor. Because both the latent representation and the phoneme durations are modeled with uncertainty, the model can synthesize the same text with varied pitch and rhythm. In subjective human evaluation (MOS), the model outperforms all publicly available TTS systems and approaches the quality of ground-truth audio. With the rise of neural networks, most TTS systems split synthesis into two stages: the first maps text to an intermediate acoustic representation (e.g., a mel-spectrogram or other acoustic features), and the second reconstructs the raw waveform from that representation. The two stages have largely been developed independently. A drawback of this design is that the second-stage model must be trained on samples produced by the first-stage model, and this sequential dependency degrades overall quality. The paper instead proposes a parallel end-to-end method that produces more natural speech than two-stage approaches. The method works as follows. Mathematically, the model is formulated as a conditional VAE that approximates the intractable data likelihood by maximizing the ELBO: <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721789317/AC313DA0-BA7C-4157-8F9B-11D0E237CA63.png" style="background-color:initial;max-width:min(100%,986px);max-height:min(134px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721789317/AC313DA0-BA7C-4157-8F9B-11D0E237CA63.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="986" height="134"> The training objective therefore splits into two parts: the first term, a reconstruction loss, and the second term, a KL divergence. The two terms are given by: <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721789509/3A8A0468-1E29-44CE-BC35-986DDF27AF5F.png" 
style="background-color:initial;max-width:min(100%,516px);max-height:min(84px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721789509/3A8A0468-1E29-44CE-BC35-986DDF27AF5F.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="516" height="84"> <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721789509/3B72C4BA-0FE7-4D0E-B957-B3401F1C2451.png" style="background-color:initial;max-width:min(100%,878px);max-height:min(150px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721789509/3B72C4BA-0FE7-4D0E-B957-B3401F1C2451.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="878" height="150"> For the reconstruction loss, the model uses the L1 distance between mel-spectrograms rather than between raw waveforms, since the mel scale is better aligned with human auditory perception. For the KL divergence, the posterior encoder is conditioned on linear-scale spectrograms rather than mel-spectrograms, and A denotes the alignment matrix between phonemes and latent variables. At this point both the prior and the posterior encoders output simple factorized normal distributions; since an expressive prior is important for generating realistic speech, the authors attach a normalizing flow that transforms the simple prior into a more complex distribution, and this transformation is invertible. <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721790155/74BCF4B1-B35C-43AB-B481-53765CBC4BD1.png" style="background-color:initial;max-width:min(100%,954px);max-height:min(194px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721790155/74BCF4B1-B35C-43AB-B481-53765CBC4BD1.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="954" height="194"> Next comes alignment estimation: the authors use the previously proposed Monotonic Alignment Search (MAS) to compute the alignment matrix A: <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721790539/0007E320-A9C0-4426-B1F1-9B0F6810A902.png" style="background-color:initial;max-width:min(100%,1000px);max-height:min(238px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721790539/0007E320-A9C0-4426-B1F1-9B0F6810A902.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="1000" height="238"> Given the alignment matrix, a deterministic duration can be read off for each input token, but deterministic durations lack diversity; the authors therefore propose a stochastic duration predictor, which injects randomness and increases rhythmic variety. Finally, for adversarial learning, the authors adopt the least-squares GAN loss together with a feature-matching loss: <img 
src="https://res.cloudinary.com/montaigne-io/image/upload/v1721790972/31812618-D3B1-4C2C-B23F-409AFC2AF8CA.png" style="background-color:initial;max-width:min(100%,988px);max-height:min(370px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721790972/31812618-D3B1-4C2C-B23F-409AFC2AF8CA.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="988" height="370"> The final training objective of the model is therefore: <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721790972/9E0E2852-0107-41FC-8DD2-2199214E50E2.png" style="background-color:initial;max-width:min(100%,1016px);max-height:min(96px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721790972/9E0E2852-0107-41FC-8DD2-2199214E50E2.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="1016" height="96"> The overall architecture is shown in the figure at the top; see the original paper for the remaining details. The authors run extensive experiments demonstrating the method's effectiveness, sample diversity, and fast inference. <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721791593/7DA824CF-39F3-4234-93FE-764C1BDE77F6.png" style="background-color:initial;max-width:min(100%,1158px);max-height:min(700px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721791593/7DA824CF-39F3-4234-93FE-764C1BDE77F6.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="1158" height="700"> <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721791593/52BC7665-27D3-49D3-B54B-E7121E7D6B8E.png" style="background-color:initial;max-width:min(100%,1150px);max-height:min(606px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721791593/52BC7665-27D3-49D3-B54B-E7121E7D6B8E.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="1150" height="606"> <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721791593/DE21B338-EF4A-4636-A7E0-C49AB1FF65D5.png" 
style="background-color:initial;max-width:min(100%,2368px);max-height:min(1586px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721791593/DE21B338-EF4A-4636-A7E0-C49AB1FF65D5.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="2368" height="1586"> <img src="https://res.cloudinary.com/montaigne-io/image/upload/v1721791783/E8B88042-4AA2-441C-BFA0-FC91C8BBB0C7.png" style="background-color:initial;max-width:min(100%,1122px);max-height:min(518px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1721791783/E8B88042-4AA2-441C-BFA0-FC91C8BBB0C7.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="1122" height="518">
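As a concrete illustration of the two VAE loss terms described above, here is a minimal pure-Python sketch (function names and data shapes are my own, not from the paper's codebase): an L1 reconstruction loss over mel-spectrogram frames, and a single-sample Monte-Carlo estimate of the KL divergence between two diagonal Gaussians, which is how the KL is typically evaluated once the flow-warped prior no longer admits a closed form.

```python
import math

def l1_mel_loss(mel_hat, mel):
    # mean L1 distance between predicted and target mel-spectrogram frames;
    # each argument is a list of frames, each frame a list of mel-bin values
    n = sum(len(frame) for frame in mel)
    diff = sum(abs(a - b) for fh, f in zip(mel_hat, mel) for a, b in zip(fh, f))
    return diff / n

def gaussian_logpdf(z, mu, sigma):
    # log density of a univariate normal N(mu, sigma^2) at z
    return -0.5 * math.log(2 * math.pi) - math.log(sigma) - 0.5 * ((z - mu) / sigma) ** 2

def kl_sample(z, mu_q, sig_q, mu_p, sig_p):
    # single-sample Monte-Carlo estimate of KL(q || p) at z ~ q:
    # log q(z) - log p(z), summed over independent (diagonal) dimensions
    log_q = sum(gaussian_logpdf(zi, m, s) for zi, m, s in zip(z, mu_q, sig_q))
    log_p = sum(gaussian_logpdf(zi, m, s) for zi, m, s in zip(z, mu_p, sig_p))
    return log_q - log_p
```

When q and p coincide the estimate is exactly zero for any sample, which is a quick sanity check on the implementation.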
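The invertible prior transformation can also be sketched. The snippet below uses a single scalar affine map as a toy stand-in for the paper's stack of coupling layers (the scale/shift parameterization is hypothetical, purely to show the change-of-variables mechanics): the log-density under the warped prior is the base Gaussian log-density of the transformed sample plus the log-determinant of the Jacobian, and the map is exactly invertible.

```python
import math

def affine_flow(z, scale, shift):
    # one invertible affine "flow" step: f(z) = scale * z + shift
    f_z = [scale * zi + shift for zi in z]
    log_det = len(z) * math.log(abs(scale))  # log|det df/dz|
    return f_z, log_det

def affine_flow_inverse(f_z, scale, shift):
    # invertibility lets samples from the complex prior be mapped
    # back to the simple base distribution exactly
    return [(yi - shift) / scale for yi in f_z]

def flow_prior_logprob(z, scale, shift):
    # change of variables: log p(z) = log N(f(z); 0, I) + log|det df/dz|
    f_z, log_det = affine_flow(z, scale, shift)
    base = sum(-0.5 * math.log(2 * math.pi) - 0.5 * y * y for y in f_z)
    return base + log_det
```

With `scale=1, shift=0` the "flow" is the identity and the log-probability reduces to the standard normal, matching the model before the flow is attached.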
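The MAS step mentioned above is a dynamic program: among all monotonic, non-skipping alignments between text tokens and latent frames, pick the one maximizing the total log-likelihood. A compact pure-Python version (a sketch assuming at least as many frames as tokens; the original is a vectorized/Cython implementation):

```python
def monotonic_alignment_search(logp):
    """Given logp[i][j], the log-likelihood of latent frame j under text
    token i, return align[j] = index of the token assigned to frame j,
    for the best monotonic non-skipping path."""
    t_text, t_spec = len(logp), len(logp[0])
    neg = float("-inf")
    # Q[i][j]: best total log-likelihood ending with frame j on token i
    q = [[neg] * t_spec for _ in range(t_text)]
    q[0][0] = logp[0][0]
    for j in range(1, t_spec):
        for i in range(min(j + 1, t_text)):
            stay = q[i][j - 1]                      # keep the same token
            move = q[i - 1][j - 1] if i > 0 else neg  # advance to next token
            q[i][j] = max(stay, move) + logp[i][j]
    # backtrack from the last token / last frame to recover the path
    align = [0] * t_spec
    i = t_text - 1
    for j in range(t_spec - 1, -1, -1):
        align[j] = i
        if j > 0 and i > 0 and q[i - 1][j - 1] >= q[i][j - 1]:
            i -= 1
    return align
```

The duration of each token is then simply the number of frames assigned to it; e.g., an alignment `[0, 1, 1]` gives durations `[1, 2]`, which is what the stochastic duration predictor is trained to model in place of these hard counts.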
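Finally, the adversarial side can be sketched the same way. Below is a minimal illustration (my own simplified shapes: discriminator scores as flat lists, feature maps as lists of flattened layers; the paper's multi-period discriminator is more involved) of the least-squares GAN losses and the feature-matching loss:

```python
def lsgan_d_loss(d_real, d_fake):
    # least-squares discriminator loss: push real scores to 1, fake scores to 0
    n = len(d_real)
    return sum((r - 1.0) ** 2 + f ** 2 for r, f in zip(d_real, d_fake)) / n

def lsgan_g_loss(d_fake):
    # generator loss: push discriminator scores on generated audio toward 1
    return sum((f - 1.0) ** 2 for f in d_fake) / len(d_fake)

def feature_matching_loss(feats_real, feats_fake):
    # mean L1 distance between discriminator feature maps of real and
    # generated audio, averaged over layers
    total = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        total += sum(abs(a - b) for a, b in zip(fr, ff)) / len(fr)
    return total / len(feats_real)
```

The final objective shown above then combines the pieces: reconstruction loss, KL divergence, duration-predictor loss, the generator's adversarial loss, and the feature-matching loss.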