<ul class="dashed" data-apple-notes-indent-amount="0"><li><span style="font-family: '.PingFangUITextSC-Regular'">Title: </span>FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling</li><li><span style="font-family: '.PingFangSC-Regular'">Link: </span><a href="https://arxiv.org/abs/2310.15169">https://arxiv.org/abs/2310.15169</a></li><li>ICLR 2024</li></ul>
<img src="https://res.cloudinary.com/montaigne-io/image/upload/v1748943604/91EDA054-FF75-4702-99C6-C1BB8AC57766.png" style="background-color:initial;max-width:min(100%,2528px);max-height:min(1148px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1748943604/91EDA054-FF75-4702-99C6-C1BB8AC57766.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="2528" height="1148">
This paper targets a limitation of existing text-to-video diffusion models: they can only generate videos with a fixed number of frames. FreeNoise extends them, without any tuning, to generate longer videos and to support multi-prompt control. Concretely, the initial noise is first extended, and the extended portion is built by randomly reordering the original noise frames within small windows, yielding a long noise sequence. However, the pretrained model cannot handle long noise sequences well (it was trained on clips of fixed length), so the authors propose a window-based attention fusion mechanism: temporal attention is always computed over windows of the original sequence length, and the overlapping window outputs are then blended by weighted averaging, enabling the model to process the long noise sequence. For multi-prompt generation, the method mixes the guiding text embeddings across denoising timesteps.
<img src="https://res.cloudinary.com/montaigne-io/image/upload/v1748944378/C73D14E8-9CE0-4678-B69A-56F8E1B07513.png" style="background-color:initial;max-width:min(100%,1916px);max-height:min(794px);;background-image:url(https://res.cloudinary.com/montaigne-io/image/upload/v1748944378/C73D14E8-9CE0-4678-B69A-56F8E1B07513.png);height:auto;width:100%;object-fit:cover;background-size:cover;display:block;" width="1916" height="794">
<ul class="dashed" data-apple-notes-indent-amount="0"><li>Data: no training data required (tuning-free)</li><li>Metrics: FVD; KVD; CLIP-SIM; inference time</li><li>Hardware: not important</li><li>Code: <a href="http://haonanqiu.com/projects/FreeNoise.html">http://haonanqiu.com/projects/FreeNoise.html</a></li></ul>
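The noise-rescheduling step above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the function name, the numpy stand-in for latent tensors, and the default window size are assumptions; the key idea shown is that the long noise sequence reuses the original fixed-length noise, with frames randomly reordered inside small local windows.

```python
import numpy as np

def reschedule_noise(base_noise, target_frames, window=4, rng=None):
    """Extend fixed-length initial noise to `target_frames` frames.

    The base noise (shape (F, C, H, W), F = the model's native clip
    length) is repeated, and within each repetition frames are shuffled
    inside small windows, so long-range frames share the same noise
    statistics while avoiding exact repetition.
    """
    rng = rng or np.random.default_rng(0)
    F = base_noise.shape[0]
    assert F % window == 0, "clip length must be divisible by window"
    out = [base_noise]
    total = F
    while total < target_frames:
        # Reuse the base noise, shuffling frames within each local window.
        shuffled = base_noise.copy()
        for s in range(0, F, window):
            perm = rng.permutation(window)
            shuffled[s:s + window] = base_noise[s:s + window][perm]
        out.append(shuffled)
        total += F
    return np.concatenate(out, axis=0)[:target_frames]
```

Because every extended frame is drawn from the original noise, the long sequence keeps the per-frame noise distribution the pretrained model expects.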
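The window-based attention fusion can be sketched as below. This is an illustrative sketch under stated assumptions: `fn` stands in for the pretrained temporal-attention block (which only handles the original clip length), plain averaging replaces whatever overlap weighting the paper uses, and the stride/window values are examples.

```python
import numpy as np

def window_fused_process(features, fn, window=16, stride=4):
    """Apply `fn` (e.g. temporal attention trained on `window`-frame
    clips) over sliding windows of a longer sequence of shape (F, D),
    then blend overlapping outputs by averaging per frame."""
    F, _ = features.shape
    assert (F - window) % stride == 0, "windows must tile the sequence"
    out = np.zeros_like(features)
    counts = np.zeros((F, 1))
    for start in range(0, F - window + 1, stride):
        sl = slice(start, start + window)
        out[sl] += fn(features[sl])   # attention over a native-length window
        counts[sl] += 1               # track overlap for averaging
    return out / counts
```

Each attention call still sees exactly the sequence length the model was trained on; only the blending step is aware of the longer video.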
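The multi-prompt guidance strategy mixes text embeddings across denoising timesteps. A minimal sketch of that idea, with heavy assumptions: the function, the `motion_frac` threshold, and the hard switch from a single global prompt (early, high-noise steps that shape motion) to per-segment prompts (later steps that refine appearance) are all illustrative, not the paper's exact schedule.

```python
import numpy as np

def select_prompt_embeds(prompt_embeds, frame_segments, t, T, motion_frac=0.4):
    """Choose per-frame text embeddings at denoising step t (T..0).

    prompt_embeds: (P, D) embeddings, one per prompt.
    frame_segments: (F,) int array mapping each frame to its prompt index.
    During the earliest `motion_frac` fraction of steps, every frame is
    guided by the first prompt; afterwards frames use their own prompt.
    """
    if t > (1 - motion_frac) * T:
        # early high-noise steps: one shared prompt keeps motion coherent
        return prompt_embeds[np.zeros_like(frame_segments)]
    # later steps: per-segment prompts refine content
    return prompt_embeds[frame_segments]
```

The design intuition is that global motion is decided early in the diffusion trajectory, so sharing one prompt there keeps transitions between prompt segments smooth.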