Meta VideoJAM: Motion Generation Surpassing Sora and Gen3
VideoJAM pushes motion quality beyond Sora and Gen3 by combining a joint appearance-motion representation with an Inner-Guidance mechanism, improving the realism and consistency of generated video.
"AI Disruption" publication New Year 30% discount link.
To address the issue of motion consistency in video generation, Meta's GenAI team has proposed a new framework called VideoJAM.
VideoJAM builds on the mainstream DiT architecture, but compared with pure DiT models such as Sora, it delivers noticeably stronger motion dynamics:
Even in fast, complex dance sequences with large pose changes, the movements look remarkably realistic, and the synchronization between the two dancers is preserved:
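To make the two ideas in the tagline a bit more concrete, here is a minimal, hypothetical sketch of what a joint appearance-motion training objective and an "Inner-Guidance"-style sampling rule could look like on top of a DiT-style denoiser. Everything below (module names such as `JointAppearanceMotionWrapper`, the toy backbone, the guidance formula, and the weight `w_motion`) is an assumption for illustration, not Meta's released implementation.

```python
import torch
import torch.nn as nn


class ToyBackbone(nn.Module):
    """Stand-in for a pretrained DiT denoiser; it ignores the timestep for brevity."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x, t=None):
        return self.net(x)


class JointAppearanceMotionWrapper(nn.Module):
    """Wraps a denoiser so one forward pass predicts both appearance and motion."""

    def __init__(self, backbone: nn.Module, dim: int):
        super().__init__()
        self.backbone = backbone
        self.in_proj = nn.Linear(2 * dim, dim)   # fuse appearance + motion tokens
        self.motion_head = nn.Linear(dim, dim)   # extra head for the motion prediction

    def forward(self, x_appearance, x_motion, t=None):
        # Build a single joint representation from the two noisy inputs.
        joint = self.in_proj(torch.cat([x_appearance, x_motion], dim=-1))
        h = self.backbone(joint, t)
        return h, self.motion_head(h)             # (appearance prediction, motion prediction)


def joint_loss(pred_app, target_app, pred_mot, target_mot, motion_weight=1.0):
    # Training objective: the usual appearance term plus a motion term on the same pass.
    return torch.mean((pred_app - target_app) ** 2) + motion_weight * torch.mean(
        (pred_mot - target_mot) ** 2
    )


def inner_guided_prediction(model, x_app, x_mot, t=None, w_motion=2.0):
    # "Inner-Guidance", sketched: contrast the full prediction with one where the
    # model's own motion input is dropped, and push sampling toward the former.
    # The exact guidance rule here is an illustrative assumption.
    pred_full, pred_motion = model(x_app, x_mot, t)
    pred_no_motion, _ = model(x_app, torch.zeros_like(x_mot), t)
    return pred_no_motion + w_motion * (pred_full - pred_no_motion), pred_motion


if __name__ == "__main__":
    dim = 64
    model = JointAppearanceMotionWrapper(ToyBackbone(dim), dim)
    x_app = torch.randn(2, 16, dim)               # (batch, tokens, dim) noisy video latents
    x_mot = torch.randn(2, 16, dim)               # matching motion (optical-flow) latents
    pred_app, pred_mot = model(x_app, x_mot)
    loss = joint_loss(pred_app, torch.randn_like(pred_app), pred_mot, torch.randn_like(pred_mot))
    guided, _ = inner_guided_prediction(model, x_app, x_mot)
    print(loss.item(), guided.shape)
```

The point the sketch tries to capture is that motion is predicted from the same shared representation as appearance, and that at sampling time it is the model's own motion prediction, rather than an external signal, that steers generation.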