# First and last frame

Hand the video model two images: where the shot starts and where it ends. The model interpolates the motion between them. It's the right tool when you have the start and end framings nailed but want the model to figure out the in-betweens, or when you've fixed a render in Photoshop and need it to anchor the start of a video.

## Watch

{% embed url="https://www.youtube.com/watch?v=tI7ODVUJd1o" %}

![Visualize mode with Video selected and Veo 3.1 as the model. The right panel shows Start Frame and End Frame slots, with prompt-guidance text and the Generate Video button at the bottom](/files/nyFn7zMRsfr9xugva3Di)

## What it does

Veo, Kling 2.6 and later, and Seedance support this today. Provide two images; the model produces a video that starts on the first image, ends on the second image, and interpolates plausibly between them. The output is a continuous shot.

Two specific scenarios this is the right tool for:

1. **Compositional bookends.** You know exactly what the first and last frames should be, but you don't want to author the in-between motion shot-by-shot. The model produces the connective motion.
2. **Fix-and-regenerate.** A rendered image is almost right but has a problem (the famous extra cop car). Download, fix in Photoshop, upload as the start frame. The video uses your edited start.

## How to use it

1. **Set the visualizer mode to Video (Start + End Frames).** The mode dropdown lives at the top of the visualizer panel. Picking this mode scopes the model dropdown to the models that support end frames (Veo, Kling 2.6+, Seedance) and exposes both anchor-frame slots.
2. **Pick a model.** Each does end-frame interpolation differently; see [Models](/visualize/ai-models.md) for guidance.
3. **Set the start frame.** Click the Start Frame slot. Same picker semantics as the End Frame picker below; pick from the project's renders or upload your own.
4. **Set the end frame.** Click the End Frame slot. The **Select an end frame** picker opens with these surfaces:
   * **Upload** – a fresh image from your machine.
   * **Favorites** – the renders you've hearted.
   * **Scene tabs** – one tab per scene in the project (e.g. "Manhattan city street", "Scene 1", "Scene 2"). The active scene's renders show first.
   * **Import media** – pull from project-level media imports.
5. **Pick from the per-scene grid.** Each scene shows its rendered shots and videos with a count summary ("3 shots, 2 images, 2 videos"). Click a thumbnail to set it as the end frame.
6. **Set the Duration / Orientation / Audio dropdowns** (see [Generate video](/visualize/generate-video.md)).
7. **Click Generate Video.** The model runs the interpolation; the result lands in the gallery.

![Select an end frame picker open with Upload / Favorites / scene tabs / Import media filters across the top, and a per-scene grid of rendered thumbnails below](/files/Kb1KyueGRvapCxlXMysn)

## How to produce two consistent frames from the same shot

A common workflow if you don't already have two compatible images:

{% stepper %}
{% step %}

#### Author the shot in Compose mode

Set up the camera and the subjects at frame 0.
{% endstep %}

{% step %}

#### Generate the start image

In Visualize mode, generate the image for the current state.
{% endstep %}

{% step %}

#### Move the camera or subjects to the end position

Back in Compose mode, advance the playhead to the shot's last frame and adjust accordingly.
{% endstep %}

{% step %}

#### Generate the end image

Visualize again to produce an image of the new state.
{% endstep %}

{% step %}

#### Switch to first-and-last frame video

Back in Visualize, set the mode dropdown to **Video (Start + End Frames)**. Pick a model (Veo 3.1, Kling 2.6 Pro or later, or Seedance 2.0), then drop the two images into the start and end slots.
{% endstep %}

{% step %}

#### Generate

The model interpolates motion between the two framings into a continuous video.
{% endstep %}
{% endstepper %}

## When to reach for it

* **Hero camera moves.** A push-in, a pull-back, a sweeping reveal. Author the start and end framings; let the model figure out the motion.
* **Fixed-up renders.** When external editing got the start frame perfect but the rest of the shot still needs to be a video.
* **Match-cuts between two compatible images.** A "before / after" structure where the same scene transitions from one state to another.

When not to reach for it:

* **Single-image animation.** When you just want subtle motion baked in from one image, use a regular video model with no end frame.
* **Long-duration shots.** First-and-last works best on shots a few seconds long. Beyond that, the model struggles to bridge plausibly.

## Limits and known issues

* **Not every video model supports end-frame inputs.** Veo, Kling 2.6 and later, and Seedance currently support first-and-last frame. The visualizer enforces this through the mode selector: when you pick **Video (Start + End Frames)** as the mode, the model dropdown shows only the models that support it. If a model you want isn't there, it doesn't support end-frame interpolation today.
* **Bridging diverges with extreme differences.** If the start and end are far apart (different scene, different camera position, different subjects), the model invents transitional content that may not match. Keep the framings related.
* **Audio is optional but adds cost.** Same audio toggle as regular video generation.

## Related

* [Generate video](/visualize/generate-video.md)
* [Generate image](/visualize/generate-image.md)
* [Edit images](/visualize/edit-images.md)
* [Models](/visualize/ai-models.md)
* [Managing outputs](/visualize/managing-outputs.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/visualize/first-and-last-frame.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
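Because the mechanism is a plain HTTP GET, any client works; the only detail to get right is URL-encoding the question into the `ask` parameter. A minimal sketch in Python (the `build_ask_url` helper name is hypothetical; only the page URL and the `ask` parameter come from this page):

```python
from urllib.parse import urlencode

# Page URL from the docs above; the `ask` parameter carries the question.
PAGE_URL = "https://help.intangible.ai/visualize/first-and-last-frame.md"

def build_ask_url(question: str) -> str:
    """Build a GET URL that asks this documentation page a question."""
    # urlencode handles spaces and punctuation in the natural-language question
    return f"{PAGE_URL}?{urlencode({'ask': question})}"

print(build_ask_url("Which models support end-frame interpolation?"))
```

Issuing a GET request against the resulting URL (with `urllib.request`, `requests`, or `curl`) returns the direct answer plus relevant excerpts and sources.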
