
3.7.25

LongAnimation promises Tokyo-quality color at indie-studio speed

 When you think about the most time-consuming part of anime production, flashy fight scenes or painstaking tweening may spring to mind. In reality, a huge chunk of budget and overtime goes into the unglamorous grind of coloring hundreds of frames so that a heroine’s yellow ribbon doesn’t silently morph into pink halfway through a scene. A new paper out of the University of Science and Technology of China and HKUST wants to make that tedium disappear.

Today the team unveiled LongAnimation: Long Animation Generation with Dynamic Global-Local Memory, a diffusion-transformer pipeline that can propagate colors consistently across 500-frame sequences—roughly 20 seconds at broadcast frame rates—without the dreaded color drift that plagues existing tools. Compared with state-of-the-art video colorization baselines, LongAnimation slashes Fréchet Video Distance (FVD) by 35.1% on short clips and 49.1% on long ones, while cutting perceptual error (LPIPS) by more than half.

How it works

  1. SketchDiT
    A customized DiT backbone ingests three control signals—line-art sketches, a single colored keyframe, and optional text prompts—to extract what the authors call a “hybrid reference embedding.” This keeps the model flexible enough to obey textual cues (“sunset sky”) while staying locked onto a character’s palette.

  2. Dynamic Global-Local Memory (DGLM)
    Prior systems only merge overlapping windows, so they see at best the last few seconds of footage. LongAnimation pipes every generated segment through Video-XL, a long-video understanding model, compressing thousands of frames into a global cache. During generation, the network adaptively fuses that global context with a short “local” cache, letting it remember that the yellow ribbon was, in fact, yellow back in frame 25.

  3. Color Consistency Reward (CCR)
    To train the system without back-propagating through a hefty 3D VAE, the authors bolt on a reinforcement-learning reward that directly scores low-frequency color coherence. A late-stage latent-space fusion trick during inference (their “CCF”) then smooths boundary artifacts between segments.
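The global-local fusion at the heart of DGLM can be sketched in a few lines. This is a simplification, not the paper's implementation: mean-pooling stands in for Video-XL's learned compression, and a plain softmax attention over the concatenated caches stands in for the model's adaptive fusion; the function names are illustrative.

```python
import numpy as np

def compress_segment(frames, keep=4):
    """Stand-in for Video-XL compression: pool a past segment's
    frame features down to a few summary tokens (chunked means)."""
    chunks = np.array_split(frames, keep, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])

def dglm_fuse(query, global_cache, local_cache):
    """Adaptively weight global (compressed, long-range) and local
    (recent, uncompressed) memory via softmax attention scores."""
    memory = np.concatenate([global_cache, local_cache], axis=0)
    scores = memory @ query / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory  # fused context vector

# Toy run: two past 16-frame segments with 8-dim features each.
rng = np.random.default_rng(0)
past = [rng.normal(size=(16, 8)) for _ in range(2)]
global_cache = np.concatenate([compress_segment(s) for s in past])
local_cache = past[-1][-4:]  # last few frames, kept uncompressed
context = dglm_fuse(rng.normal(size=8), global_cache, local_cache)
print(context.shape)  # (8,)
```

The point of the split is capacity: the global cache grows only a few tokens per segment no matter how long the video gets, while the local cache preserves full detail for the frames that matter most at the current boundary.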


Why it matters

Traditional colorization assistants like LVCD or ToonCrafter top out at roughly 100 frames, or quietly accumulate noise if you stitch segments together. LongAnimation’s five-fold leap in sequence length pushes automated coloring into territory that covers most dialogue and establishing shots, not just blink-and-you-miss-it gifs.

For mid-tier studios in Seoul or Manila that churn through thousands of outsourced cuts each month, the economics are compelling: one keyframe plus vectorized sketches could drive bulk coloring, leaving human artists to polish hero shots. And because SketchDiT still honors text instructions, directors can tweak backgrounds—“make it dawn instead of dusk”—without round-tripping to compositing.


Under the hood

  • Model size: Built on top of CogVideoX-1.5 (5B parameters).

  • Training set: ~80 k high-aesthetic clips from Sakuga-42M, filtered for >91 frames.

  • Hardware: 6 × NVIDIA A100 GPUs, LR = 1e-5, three-stage curriculum (SketchDiT 30 k steps → DGLM 10 k → CCR 10 k).

  • Code: The repo, demo videos, and Colab notebook are already live on GitHub.
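The training recipe above can be condensed into a single configuration sketch. The key names are made up for illustration; the values are the ones quoted in the bullet points.

```python
# Illustrative summary of the reported setup; field names are
# hypothetical, values come from the paper's stated hyperparameters.
longanimation_training = {
    "backbone": "CogVideoX-1.5",         # ~5B-parameter DiT
    "dataset": "Sakuga-42M (filtered)",  # ~80k clips, >91 frames each
    "hardware": "6x NVIDIA A100",
    "learning_rate": 1e-5,
    "curriculum": [                      # three sequential stages
        ("SketchDiT", 30_000),
        ("DGLM", 10_000),
        ("CCR", 10_000),
    ],
}

total_steps = sum(steps for _, steps in longanimation_training["curriculum"])
print(total_steps)  # 50000
```

The staged curriculum means each new component (memory, then reward) is trained on top of an already-converged predecessor rather than jointly from scratch.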


The bigger picture

LongAnimation lands amid a broader rush to extend diffusion transformers beyond blink-length video. Rival approaches such as DiTCtrl and SlowFast-VGen deliver longer shots but rely on window fusion or fine-tuned LoRA weights. By contrast, LongAnimation’s plug-and-play memory module could slot into any DiT-style architecture, making it a tempting drop-in upgrade for text-to-video startups chasing the next One Piece.

Just don’t expect the tech to kill colorists’ jobs overnight. Rendering frames is only half the battle; style supervision, motion cleanup and final compositing still demand human taste. But if the ribbon stays yellow without manual touch-ups, the conversation around AI in animation may shift from “Will it replace us?” to “How much budget does it free for better storytelling?”

Paper link: arXiv:2507.01945 (PDF)
