Morphic's frames-to-video, with up to 5 frames and time control, is now open-source

October 29, 2025

Authored by Adithya Iyer.

What is frames-to-video?

Frames-to-video (F2V) with multi-frame input and flexible timing is Morphic's feature that lets creators transform up to 5 keyframes into a coherent, animated video, with precise timing control between frames. The model fills in the motion between keyframes, preserving the visual consistency and creative direction of your frames while giving you creative control over the visual narrative of your story.
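To make the inputs concrete, here is a minimal sketch of how up to 5 keyframes might be paired with timestamps. This is an illustration only: `generate_video` and its parameters are hypothetical placeholders, not the actual entry point from the released repository.

```python
from PIL import Image

# Pair each keyframe with a target timestamp (in seconds) within the
# output clip. The model synthesizes the motion between each pair of
# consecutive frames while preserving their visual style.
keyframes = [
    (Image.open("city_dawn.png"), 0.0),   # opening frame
    (Image.open("city_noon.png"), 2.0),   # mid-clip frame
    (Image.open("city_dusk.png"), 5.0),   # closing frame
]

# `generate_video` is a placeholder name; the real inference entry point
# lives in the modified Wan 2.2 repository linked at the end of this post.
# video = generate_video(keyframes=keyframes, fps=24)
```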

Whether it’s an evolving landscape, a cinematic moment, or a stylized character animation, Morphic's frames-to-video helps you go from keyframes to living motion in seconds.

Why we are open-sourcing it

By opening up frames-to-video, we're sharing a small part of how Morphic builds motion: specifically, the model that handles time-controlled transitions between frames, so others can experiment, remix, and invent their own ways to bring frames to life. We're inviting researchers, developers, and artists to:

  • Experiment freely with new motion synthesis techniques
  • Improve the pipeline for stylized or frame-consistent animation
  • Integrate it anywhere, from local workflows to creative apps
  • And most importantly, co-create the next layer of generative storytelling

What’s inside

This release would not have been possible without the amazing foundational model work from the Alibaba Wan AI team and their breakthrough open-source model Wan2.2, upon which this release is built.

The open-source release includes:

  • Modified Wan 2.2 repository with inference code for frames-to-video with time control.
  • Hugging Face hosted model weights (a download sketch follows below).
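
As one way to fetch the weights, the standard huggingface_hub client can be used. The repo id below is a placeholder, so substitute the actual repository name from the Hugging Face link at the end of this post.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- replace it with the actual repository name from
# the Hugging Face link below. snapshot_download caches the files locally
# and returns the path to the downloaded snapshot.
weights_dir = snapshot_download(repo_id="morphic/frames-to-video-wan2.2")
print(f"Model weights downloaded to: {weights_dir}")
```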

We have tried to keep things reproducible, extensible, and as close to the original Wan source as possible. If you run into any difficulties, please reach out to ml@morphic.com or open an issue on GitHub.

Frames-to-video is one of the many ways we're exploring how motion can be built from imagination. We're excited to see how developers and artists use it — whether to create visual special effects, prototype creative tools, or simply experiment with new forms of animation.

Explore the model on GitHub.
Find the model weights on Hugging Face.