# Keep a character consistent across shots

The recurring failure mode in pure prompt tools: the hero looks different in shot 2 than in shot 1. A different jacket. A different face. The Lambo is now red. Re-prompting won't fix it because the model is being asked to invent the look every time, and small variations compound.

The fix has one moving part. Bind a reference image to the 3D object once, then reuse the same object across every shot. The visualizer locks the appearance to the reference for every render that includes the object.

## Steps

1. **Place the object in the scene.** Use the Asset Library for stock geometry or [import your own model](/build/import-your-own-models.md) for a client product. The object lives in the scene; shots are framings of it.
2. **Select the object and open Object Details.** Click the object in the viewport. The Details panel docks on the left. Scroll to the **Reference images** section.
3. **Attach a reference image.** Click **Upload** or **Add from Media Library**. JPEG and PNG accepted. Up to four images per object for multi-view references.
4. **Reuse the same object across shots.** Don't duplicate the object per shot. Move the same instance, or let the camera reframe it. The reference binding follows the object, so every shot that includes it inherits the locked appearance.
5. **Render.** In Visualize mode, generate. The reference travels into the auto-prompt's Subjects block and into the model alongside the prompt. The character, vehicle, or product comes back looking like the reference, not like a generic interpretation of its description. The whole bind-once, reuse-everywhere flow is sketched in code below.
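
The same idea in runnable form, as plain Python data rather than the product API (the names and fields here are hypothetical, purely for illustration): one instance, referenced by every shot, so a change to the binding shows up everywhere.

```python
# Illustration only: "bind once, reuse everywhere" as plain Python data.
# Names and fields are hypothetical -- this is not the Intangible API.
hero = {
    "name": "Mark",
    "references": ["refs/mark-front.png", "refs/mark-three-quarter.png"],
}

# Three shots share the one instance -- they are not three copies.
shots = [
    {"camera": "wide", "subjects": [hero]},
    {"camera": "close-up", "subjects": [hero]},
    {"camera": "over-shoulder", "subjects": [hero]},
]

# Adding a view to the one instance updates every shot that includes it.
hero["references"].append("refs/mark-side.png")
assert all(len(shot["subjects"][0]["references"]) == 3 for shot in shots)
```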

## Faster path: the AI Composer

Instead of opening the Details panel, type into the AI Composer prompt input at the bottom of Build mode:

> Attach this as a reference image to the chair: \<image URL>

The composer finds the matching object, attaches the reference, and the viewport updates. Faster when you can describe the object in a sentence. See [AI Composer](/build/ai-composer.md).

## Why this works

The model receives a structured 3D scene plus a prompt, not just text. The reference image attaches to a specific object in the scene, so the model knows exactly which thing in the rendered output should match the reference. Pure prompt tools can't do this because they have no concept of "this object" – they only have the description.
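
As a mental model, the render request looks less like a sentence and more like structured data. The field names below are hypothetical, not the actual wire format:

```python
# Hypothetical shape of the visualizer's input -- illustrative, not the real schema.
render_request = {
    "prompt": "Mark crosses the rain-soaked alley at night",
    "scene": {
        "camera": {"fov": 35, "position": [0.0, 1.6, 0.0]},
        "objects": [
            {
                "id": "mark",
                "position": [0.0, 0.0, 2.5],
                "references": ["refs/mark-front.png"],  # bound to THIS object
            },
        ],
    },
}
# A pure prompt tool sends only the "prompt" key; "this object" doesn't exist for it.
```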

For the deeper version, see [Image reference](/overview/concepts/image-reference.md) and [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md).

## Tips that level up the result

* **Vanilla object names beat descriptive ones for character consistency.** "Mark" and "Jeff" hold their look across renders better than "pirate captain" and "British sailor". Descriptive names pull the model toward generic interpretation and away from the specific reference.
* **Multi-view references improve mesh-aware results.** Front, three-quarter, and side photographs produce more consistent renders than a single angle.
* **Reference-tuned image models honor references most tightly.** Nano Banana Pro and Flux Pro Kontext are tuned for reference-image conditioning. Older video models honor references less tightly – if a reference seems ignored on video, try first-and-last-frame interpolation from a reference-locked still.

## Limits

* **Reference fidelity varies by model.** If a reference seems ignored, switch to Nano Banana Pro or Flux Pro Kontext for the image generation, then use first-and-last-frame interpolation for video.
* **Reference doesn't override composition.** A reference image tells the model what the object looks like, not where it goes. Composition is a Compose-mode concern.
* **One reference identity per object at a time.** You can attach up to four images for multi-view, but they're treated as multiple angles of the same subject, not as alternate looks.
* **References are downsampled and composited internally.** Every reference attached to an object lands on a single 720 × 1024 sheet before the model sees it. Multi-view (front, three-quarter, side) works well at this resolution; oversized source files give the model nothing extra. Sub-megapixel inputs are fine.
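
The practical consequence for asset prep: each view gets only a slice of a 720 × 1024 canvas, so export at modest sizes. A rough Pillow sketch of that kind of compositing (the layout is illustrative; the actual internal sheet format isn't documented here):

```python
from PIL import Image

SHEET_W, SHEET_H = 720, 1024

def composite_references(paths: list[str]) -> Image.Image:
    """Stack up to four reference views onto one sheet (illustrative layout)."""
    sheet = Image.new("RGB", (SHEET_W, SHEET_H), "white")
    rows = min(len(paths), 4) or 1
    cell_h = SHEET_H // rows
    for i, path in enumerate(paths[:4]):
        view = Image.open(path).convert("RGB")
        view.thumbnail((SHEET_W, cell_h))  # shrink-only: a 4K source lands in the same cell
        sheet.paste(view, (0, i * cell_h))
    return sheet
```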

## Related

* [Image reference (concept)](/overview/concepts/image-reference.md)
* [Controlling AI output](/overview/concepts/control-ai-output.md)
* [Object details](/build/object-details.md)
* [AI Composer](/build/ai-composer.md)
* [LoRAs](/build/loras.md) – the alternative when a fine-tuned model is required.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/how-to/consistent-character-ai.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
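
For example, with Python's `requests` (the question text is just a sample):

```python
import requests

resp = requests.get(
    "https://help.intangible.ai/overview/how-to/consistent-character-ai.md",
    params={"ask": "How many reference images can I attach to one object?"},
)
print(resp.text)  # direct answer plus relevant excerpts and sources
```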

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
