# Video (From Animation)

You authored keyframes on the shot in Compose. Video (From Animation) takes those keyframes as the motion source for the video. The model follows the camera move, the subject motion, and the pose changes you set, rather than inventing motion of its own. Or pick Direct Render and skip the AI entirely.

![Visualizer panel with the mode dropdown open, showing Image, Video (Start Frame), Video (Start + End Frames), and Video (From Animation)](/files/TcBTHfAwbzJyr0ype74P)

## What it does

Most video models start from a still image and a prompt and invent the motion. Video (From Animation) is different: it takes the animated 3D scene as the input. The keyframes you set in Compose drive what the model renders. The output follows the camera path, the character pose changes, and the object motion you authored, within whatever the chosen model can interpret.

This is the mode to reach for when the motion matters more than the look. If you've spent time animating a precise camera move or character action, this is how you preserve it through video generation.

## How to use it

1. **Author the animation in Compose.** Set keyframes on the camera, subjects, and any animated objects in the shot. See [Animate an object](/overview/how-to/animate-an-object.md) for the full flow.
2. **Switch to Visualize.** The shot's animation is preserved across modes.
3. **Set the mode dropdown to Video (From Animation).** The panel updates to show the model picker for this mode.
4. **Pick a model.** There are three options; each does something different (see below).
5. **Write a prompt** (skip this for Direct Render; the scene drives everything).
6. **Click Capture Animation.** The job runs server-side. The output lands as the shot's video.

![Video (From Animation) selected in the Visualizer with Direct Render and the Capture Animation button visible](/files/w6Vk0C03vUIlX9IzIM4Z)

## The three model options

![Model dropdown for Video (From Animation) showing Seedance 2.0 Motion Guide, Luma Ray 3.14, and Direct Render](/files/FYVumR0xlvyMQkS8alo3)

* **Seedance 2.0 Motion Guide** (ByteDance). Generative video that follows the scene's motion. Use it when you want a stylized AI output that respects your authored blocking.
* **Luma Ray 3.14** (Luma). Video-to-video model with first-and-last frame anchoring. Tightest motion fidelity of the AI options.
* **Direct Render** (Intangible). Non-AI. Renders the 3D scene as video, frame-for-frame, no diffusion model involved. See [Direct Render](/visualize/direct-render.md) for the full picture.

## Motion versus appearance

When you use Video (From Animation) with an AI model, the *motion* is driven by your keyframes but the *appearance* still comes from the prompt and any reference images. A dolly through an empty 3D scene produces a dolly through whatever the prompt describes. A subject animating across frame stays animated, but its visual identity is decided by the prompt.

If you want the appearance locked to your scene too, that's [Direct Render](/visualize/direct-render.md).

## When to reach for each option

| If you want                                              | Use                                                                                                  |
| -------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| Stylized AI video that follows your authored motion      | Seedance 2.0 Motion Guide                                                                            |
| Tighter motion fidelity with start / end anchoring       | Luma Ray 3.14                                                                                        |
| Wireframe preview of the authored shot, no AI, no credit | [Direct Render](/visualize/direct-render.md)                                                         |
| A still-image generation with no motion                  | Switch the mode dropdown back to Image                                                               |
| Video from a single start image, no scene animation      | Switch to Video (Start Frame), see [Generate video](/visualize/generate-video.md)                    |
| Video from two anchor images                             | Switch to Video (Start + End Frames), see [First and last frame](/visualize/first-and-last-frame.md) |

## Limits and known issues

* **The model interprets your motion; it does not replay it.** Seedance Motion Guide and Luma Ray 3.14 honor authored motion, but they still resample through their own video generation. Sharp keyframe changes can soften; fast camera moves can drift. If a move has to land exactly, use Direct Render.
* **Direct Render output is wireframe / greybox**, not photoreal. It captures the 3D scene as it appears in the viewport. For a finished look, render through one of the AI models.
* **No audio.** The audio toggle that appears on other video modes is not present here. Add audio later in your editing tool, or use Video (Start Frame) with a model that generates audio.

## Related

* [Direct Render](/visualize/direct-render.md) – the non-AI option inside this mode.
* [Animate an object](/overview/how-to/animate-an-object.md) – how to author the motion that drives this.
* [Animation and shot time](/overview/concepts/animation-and-shot-time.md) – the concept.
* [Generate video](/visualize/generate-video.md) – the Start Frame and Start + End Frames modes.
* [Models](/visualize/ai-models.md) – the full model reference.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/visualize/video-from-animation.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
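As a minimal sketch of that request from Python's standard library: the question text must be URL-encoded before it goes into the `ask` parameter. The `build_ask_url` helper below is hypothetical (not part of any documented client), and only the URL pattern itself comes from this page.

```python
from urllib.parse import quote

# Page URL documented above; the `ask` parameter carries the question.
BASE = "https://help.intangible.ai/visualize/video-from-animation.md"

def build_ask_url(question: str) -> str:
    """Hypothetical helper: URL-encode the question and attach it as `ask`."""
    return f"{BASE}?ask={quote(question)}"

url = build_ask_url("Which models support Video (From Animation)?")
print(url)

# Issuing the request is then a plain HTTP GET, e.g.:
# from urllib.request import urlopen
# with urlopen(url) as resp:
#     print(resp.read().decode())
```

The response body contains the answer plus relevant excerpts, so no special parsing is required beyond reading the text.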
