- Adding Conditional Control to Text-to-Image Diffusion Models — arXiv:2302.05543, published Feb 10, 2023
- Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators — arXiv:2303.13439, published Mar 23, 2023
- Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation — arXiv:2212.11565, published Dec 22, 2022
- ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation — arXiv:2302.13848, published Feb 27, 2023
- FateZero: Fusing Attentions for Zero-shot Text-based Video Editing — arXiv:2303.09535, published Mar 16, 2023
- Improving Sample Quality of Diffusion Models Using Self-Attention Guidance — arXiv:2210.00939, published Oct 3, 2022
- Understanding 3D Object Interaction from a Single Image — arXiv:2305.09664, published May 16, 2023
- Dense Text-to-Image Generation with Attention Modulation — arXiv:2308.12964, published Aug 24, 2023
- Masked Diffusion Transformer is a Strong Image Synthesizer — arXiv:2303.14389, published Mar 25, 2023
- Editing Implicit Assumptions in Text-to-Image Diffusion Models — arXiv:2303.08084, published Mar 14, 2023
- StableVideo: Text-driven Consistency-aware Diffusion Video Editing — arXiv:2308.09592, published Aug 18, 2023
- SALAD: Part-Level Latent Diffusion for 3D Shape Generation and Manipulation — arXiv:2303.12236, published Mar 21, 2023
- StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces — arXiv:2303.06146, published Mar 10, 2023