# Auto-prompt

The visualizer builds the prompt from your 3D scene, your camera, and the named details on your objects. You don't have to write a long string of adjectives – the scene already knows what's in it.

![Visualizer Scene tab with the auto-generated prompt grouped under bracketed headings: \[Scene Context\], \[Environment & Props\], \[Subjects\], \[Lighting\]](/files/BlQQ15pRtnoVcV66Hiyx)

## What it does

The default workflow most diffusion tools push you into is: write a paragraph describing your scene, hit generate, see what comes out, edit the paragraph. Intangible flips that. The 3D scene is the spatial truth; the prompt is generated from it. You read the auto-prompt to confirm the model sees what you intended, edit only the parts that are wrong, then generate.

The prompt is structured into three blocks:

* **\[Scene Context]**: the framing – what kind of shot, what mood, what location.
* **\[Environment & Props]**: the physical setting and secondary objects.
* **\[Subjects]**: the hero objects – the car, the character, the product.

Each block draws from a different source. Editing the right source upstream is faster than editing the prompt every iteration.
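The three-block assembly can be sketched in a few lines. This is a minimal illustration of the structure described above, not Intangible's actual implementation; the dataclass fields and function names are hypothetical.

```python
# Hypothetical sketch: assembling the auto-prompt from its three blocks.
# Block headings and order come from the docs; everything else is illustrative.
from dataclasses import dataclass


@dataclass
class SceneState:
    scene_context: str       # shot framing, mood, location
    environment_props: str   # setting and secondary objects
    subjects: str            # hero objects


def assemble_prompt(state: SceneState) -> str:
    """Join the three blocks under their bracketed headings."""
    blocks = [
        ("[Scene Context]", state.scene_context),
        ("[Environment & Props]", state.environment_props),
        ("[Subjects]", state.subjects),
    ]
    return "\n\n".join(f"{heading}\n{text}" for heading, text in blocks)


prompt = assemble_prompt(SceneState(
    scene_context="wide establishing shot, 24mm lens, 16:9, dusk mood",
    environment_props="tropical forest, low fog, rolling terrain",
    subjects="hero car: matte-black coupe, image reference attached",
))
```

Editing a block in place, in this picture, means editing the assembled string after the join; regenerating means rebuilding `SceneState` from the scene and joining again.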

## How to use it

The auto-prompt lives in the **Scene** tab of the right-hand Visualizer panel.

1. **Open Visualize mode** with a shot active. The auto-prompt appears, populated from the scene and the active shot's camera.
2. **Read the three blocks.** Confirm the model is reading the scene correctly. If a hero object isn't in the \[Subjects] block, the model isn't going to render it well – go fix the source.
3. **Edit a block in place.** Click any block's text and edit directly. Manual edits stick on the next generate.
4. **Regenerate the prompt.** Click **Generate prompt** to rebuild from the current scene state. Useful after you've made Build-mode or Compose-mode changes – the auto-prompt won't update on its own.
5. **Generate.** Pick a model and click **Generate Image** (or switch to video). The prompt the model receives is the assembled three-block string.

### Where each block comes from

* **\[Scene Context]** is built from the active scene's name and description, the active shot's name and description, the camera lens, and the aspect ratio. Editing a shot's name or description (in [Shot details](/compose/shot-details.md)) is the cleanest way to influence this block.
* **\[Environment & Props]** is built from populator labels, environment settings (sun position, fog, terrain), and the names of secondary objects in the scene. Renaming a populator from "populator-1" to "tropical forest" changes this block immediately.
* **\[Subjects]** is built from the named hero objects, their descriptions, and any image references attached to them. This block is where attaching an image reference has the strongest effect on the rendered output.
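To make the "rename the populator, change the block" behavior concrete, here is a hedged sketch of how \[Environment & Props] might be derived from its three sources. The function and parameter names are hypothetical, not Intangible's real API.

```python
# Hypothetical derivation of the [Environment & Props] block from
# populator labels, environment settings, and secondary-object names.
def environment_block(populator_labels, env_settings, secondary_objects):
    parts = []
    parts.extend(populator_labels)                            # e.g. "tropical forest"
    parts.extend(f"{k}: {v}" for k, v in env_settings.items())  # sun, fog, terrain
    parts.extend(secondary_objects)                           # named props
    return "[Environment & Props]\n" + ", ".join(parts)


# Renaming a populator changes the block immediately:
before = environment_block(["populator-1"], {"fog": "low"}, ["bench"])
after = environment_block(["tropical forest"], {"fog": "low"}, ["bench"])
```

The default label carries no meaning for the model, so the rename is doing real work: `before` describes the scene as "populator-1, fog: low, bench" while `after` describes it as "tropical forest, fog: low, bench".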

## Details

| Block                  | Source                                            | Best way to influence                   |
| ---------------------- | ------------------------------------------------- | --------------------------------------- |
| \[Scene Context]       | Scene + shot names, lens, aspect ratio            | Rename the shot; change lens            |
| \[Environment & Props] | Populator labels, env settings, secondary objects | Rename populators; adjust environment   |
| \[Subjects]            | Hero object names, descriptions, image references | Attach image references to hero objects |

## Limits and known issues

* **The auto-prompt doesn't refresh automatically.** Build-mode changes don't trigger regeneration. Click **Generate prompt** when you've made meaningful scene changes.
* **Manual edits get overwritten on regeneration.** Clicking **Generate prompt** rebuilds every block from the scene and discards your in-place edits. Decide before you click: either keep the manual edits and skip regeneration, or push the change upstream (rename the object, edit the shot description) so the rebuilt prompt carries it.
* **Long object descriptions get truncated.** The visualizer trims very long descriptions before they reach the model. Keep object descriptions to a sentence or two.
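The truncation behavior can be sketched like this. The character limit and the word-boundary handling here are assumptions for illustration; the docs only say that very long descriptions are trimmed.

```python
# Hypothetical truncation of an object description before it reaches the model.
# MAX_DESC_CHARS is an assumed limit; the real cutoff is not documented.
MAX_DESC_CHARS = 200


def trim_description(desc: str, limit: int = MAX_DESC_CHARS) -> str:
    if len(desc) <= limit:
        return desc
    # Cut at the last word boundary inside the limit, then mark the trim.
    return desc[:limit].rsplit(" ", 1)[0] + "…"
```

Whatever the real limit is, the practical advice stands: anything past the first sentence or two may never reach the model, so front-load the words that matter.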

## Further viewing

Phil walks through this concept in depth at the 2026 Production Summit (32 minutes).

{% embed url="https://www.youtube.com/watch?v=Y3CoibxA_ag" %}

## Related

* [Scene context](/visualize/scene-context.md)
* [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md)
* [Image reference](/overview/concepts/image-reference.md)
* [Generate image](/visualize/generate-image.md)
* [Generate video](/visualize/generate-video.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/visualize/auto-prompt.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
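For example, a query can be built with Python's standard library alone. The endpoint and `ask` parameter come from this page; the question text is just an illustration.

```python
# Build the ask-query URL for this page using only the stdlib.
from urllib.parse import urlencode

base = "https://help.intangible.ai/visualize/auto-prompt.md"
question = "Which block does the camera lens feed into?"
url = f"{base}?{urlencode({'ask': question})}"  # urlencode escapes spaces etc.
print(url)

# To actually fetch the answer (requires network access):
# from urllib.request import urlopen
# print(urlopen(url).read().decode())
```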
