# Generate image

The core image-generation flow. Pick an image model from the dropdown, set 1K or 2K resolution, click **Generate Image**. The result lands in the gallery to the right of the viewport.

## Watch

{% embed url="https://www.youtube.com/watch?v=gC8yn3Zyedo" %}

![Visualize mode in image mode with a rendered Lamborghini on a wet Manhattan street, the Visualizer panel on the right showing Image / Nano Banana Pro selected and the Scene tab populated with the auto-prompt](/files/hM1HlspfmwRKC488I3Kh)

## What it does

Reads the 3D scene plus the active shot's camera, assembles the prompt across the Scene / Style / Lighting tabs, picks the chosen image model, and generates. The output is one image per click; iterating is just clicking again.

## How to use it

1. **Switch to image mode.** The top of the visualizer panel has an Image / Video toggle. Image is the default.
2. **Pick a model.** The model dropdown lists the currently supported image models: Flux Pro Kontext, Flux2 Pro, Nano Banana Pro, Nano Banana 2. Pick one. See [Models](/visualize/ai-models.md) for guidance on when to reach for which.
3. **Read the auto-prompt.** Scan the Scene tab. Confirm the \[Subjects\] block mentions your hero objects. If a hero is missing, fix upstream in Build mode (rename the object, attach an image reference) before regenerating.
4. **Pick resolution.** 1K for iteration, 2K for final. The resolution toggle sits near the bottom of the Visualize panel, just above the Generate button.
5. **Pick style and lighting.** Optional but high-value. Click the Style tab, pick a [preset](/visualize/style-presets.md). Click the Lighting tab, pick a [preset](/visualize/lighting-presets.md). Skip if photoreal-default is what you want.
6. **Click Generate Image.** The job runs. The result lands in the gallery to the right of the viewport with a thumbnail and a heart icon for favoriting.

## Reading what came back

A render lands in the gallery in a few seconds to a minute, depending on model and resolution. Hover the result for actions:

* **Favorite (heart)** – marks the render. Favorites are easier to find later in [Managing outputs](/visualize/managing-outputs.md).
* **Download** – saves a full-resolution copy.
* **Edit (pencil)** – opens [Edit images](/visualize/edit-images.md) for in-place corrections.
* **Delete** – removes from the gallery, with a confirmation prompt.

To use a render as the start frame of a video, select it in the gallery, then switch the visualizer dropdown from Image to Video. The selected render auto-populates as the first frame for video generation. See [First and last frame](/visualize/first-and-last-frame.md).

## When the render is wrong

A few common patterns:

* **Wrong subject.** The hero isn't what you wanted. Fix upstream: attach an image reference to the object in Build mode. Don't re-prompt.
* **Wrong composition.** The framing is off. Fix in Compose mode: move the camera, change the lens.
* **Wrong style.** The image is too clean, too dark, too painted. Try a different [style preset](/visualize/style-presets.md).
* **Wrong mood.** The lighting feels off for the scene. Try a different [lighting preset](/visualize/lighting-presets.md).
* **Stubborn extra object.** Something keeps appearing that you don't want (the famous extra cop car). Edit the render directly via [Edit images](/visualize/edit-images.md) or download and edit externally, then re-upload as a start frame for video. See [First and last frame](/visualize/first-and-last-frame.md).

{% hint style="info" %}
Resolution choice matters more than people expect. 1K iterations are cheap; use them aggressively. Commit to 2K only when the composition is right. A 2K image render is roughly 2x the cost of a 1K render on the same model; the exact ratio varies by model provider.
{% endhint %}

## Limits and known issues

* **Generation can be slow under heavy load.** Most renders return within a minute. If a job sits beyond that, the model provider's queue may be backed up; try a different model.
* **Model availability varies.** Some models are gated to higher tiers or come and go as providers update their APIs. Check the model dropdown for the current set.
* **Output gallery has finite history.** Past a few hundred renders, older results may not be retrievable from the gallery. Download anything you want to keep.

## Related

* [Auto-prompt](/visualize/auto-prompt.md)
* [Scene context](/visualize/scene-context.md)
* [Style presets](/visualize/style-presets.md)
* [Lighting presets](/visualize/lighting-presets.md)
* [Edit images](/visualize/edit-images.md)
* [First and last frame](/visualize/first-and-last-frame.md)
* [Models](/visualize/ai-models.md)
* [Resolution and cost](/visualize/resolution-and-cost.md)
* [Managing outputs](/visualize/managing-outputs.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/visualize/generate-image.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
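As a minimal sketch, the request can be built with Python's standard library. The question string below is a hypothetical example; the key detail is that it must be URL-encoded before being placed in the `ask` parameter:

```python
from urllib.parse import quote
from urllib.request import urlopen

# Hypothetical example question; any specific, self-contained
# natural-language question works here.
question = "Which image models support 2K resolution?"

# URL-encode the question and append it as the `ask` query parameter.
url = f"https://help.intangible.ai/visualize/generate-image.md?ask={quote(question)}"

# Perform the GET request and read the plain-text answer.
with urlopen(url) as resp:
    answer = resp.read().decode("utf-8")
print(answer)
```

The response body is a direct answer plus relevant excerpts, so it can be read as plain text rather than parsed as structured data.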
