<ul class="dashed" data-apple-notes-indent-amount="0"><li>Paper title: Diffusion Self-Guidance for Controllable Image Generation</li><li>Paper link: <a href="https://arxiv.org/abs/2306.00986">https://arxiv.org/abs/2306.00986</a></li><li>NeurIPS 2023</li></ul>
<img src="https://res.cloudinary.com/montaigne-io/image/upload/v1734100239/1157D550-816F-47F4-8239-551FDC1EF09A.png" width="2008" height="956" style="max-width:100%;height:auto;display:block;">
<p>The method essentially revolves around the diffusion model's latent variables and its cross-attention maps: by defining a set of differentiable operations (properties) on them, it steers sampling to edit the generated image. Most current editing approaches seem to work this way. A rough sketch of the idea is given after the second figure below.</p>
<img src="https://res.cloudinary.com/montaigne-io/image/upload/v1734101654/F3D35E3B-8BEF-42CC-A18B-CF4C053B2D43.png" width="2564" height="1348" style="max-width:100%;height:auto;display:block;">
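<p>As a hedged illustration of this idea (not the authors' released code), the sketch below shows how one might define simple properties such as an object's size and centroid on a cross-attention map, and turn the gradient of an energy built from them into a classifier-guidance-style correction of the predicted noise. The <code>denoiser_internals</code> hook, <code>eps_model</code>, the tensor shapes, and the guidance scale are all assumptions made for this example.</p>
<pre><code class="language-python">import torch

def object_size(attn_map):
    # attn_map: (H, W) cross-attention map for one text token, values roughly in [0, 1].
    # The "size" property is (up to normalization) the total attention mass of the token.
    return attn_map.mean()

def object_centroid(attn_map):
    # Attention-weighted center of mass in normalized [0, 1] x [0, 1] coordinates.
    H, W = attn_map.shape
    ys = torch.linspace(0.0, 1.0, H, device=attn_map.device)
    xs = torch.linspace(0.0, 1.0, W, device=attn_map.device)
    w = attn_map / (attn_map.sum() + 1e-8)
    cy = (w.sum(dim=1) * ys).sum()
    cx = (w.sum(dim=0) * xs).sum()
    return torch.stack([cx, cy])

def self_guidance_step(z_t, sigma_t, eps_model, denoiser_internals, energy_fn, scale=1.0):
    # One guided denoising step, classifier-guidance style: the gradient of a scalar
    # energy defined on internal attention maps / activations w.r.t. the noisy latent
    # z_t is added to the predicted noise before the usual sampler update.
    z = z_t.detach().requires_grad_(True)
    attn_maps, activations = denoiser_internals(z)  # hypothetical hook exposing internals
    g = energy_fn(attn_maps, activations)           # scalar, e.g. squared error to a target size
    grad = torch.autograd.grad(g, z)[0]
    with torch.no_grad():
        eps_hat = eps_model(z_t) + scale * sigma_t * grad
    return eps_hat
</code></pre>
<p>Under these assumptions, an energy such as the squared difference between <code>object_size</code> of the current attention map and, say, half the size measured in an original generation would shrink the object; summing several such terms (size, position, appearance features) is what allows the compositional edits shown in the figures.</p>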