# Edit images

A render came back almost right. The composition is good, the subject is right, but one element is off – a stubborn extra cop car, a sign that should be a different color, a character whose shirt should be red instead of blue. The pencil tool on a generated image lets you describe the change in plain language and the model edits the image in place.

## Watch

{% embed url="https://www.youtube.com/watch?v=01Q3rSDuxC0" %}

![Edit Image modal open over Visualize mode with a side-by-side before/after view - the original render on the left and the edited version on the right (a second car added to the scene), an Edit Instructions field below, plus a model picker and Cancel and Generate actions](/files/lGoeOvAWH2mJhMyPs1rt)

![Edit Image model picker dropdown open showing two provider sections - Black Forest Labs (Flux Pro Kontext) and Google (Nano Banana Pro selected, Nano Banana 2)](/files/FjYGYewc47DLfhekqf6m)

## What it does

Diffusion models can edit images they generated, given a textual instruction and the original image as the spatial truth. The pencil tool exposes that flow: describe what should change, and the model returns a new version of the image with that change applied. The rest of the image is preserved as much as the model can manage.
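The pencil tool doesn't expose its pipeline as code, but the shape of the operation – instruction plus source image in, edited image out – can be sketched with an open-source instruct-editing pipeline. This is illustrative only; the checkpoint and parameters below are assumptions for the sketch, not what the product actually runs:

```python
# Illustrative sketch of instruction-guided image editing, NOT the product's pipeline.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

# Assumption: a public instruction-tuned editing checkpoint stands in for the real model.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

source = Image.open("render.png").convert("RGB")  # the original render is the spatial truth

edited = pipe(
    prompt="Make the car lime green",   # the plain-language edit instruction
    image=source,
    num_inference_steps=20,
    image_guidance_scale=1.5,           # higher values keep the output closer to the source image
).images[0]

edited.save("render_v2.png")            # a new version; the original file is untouched
```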

## How to use it

1. **Pick the rendered image** in the gallery to the right of the viewport. The image opens in a larger view.
2. **Click the pencil icon.** The Edit Image modal opens with a before / after preview, an Edit Instructions field, and a model picker.
3. **Type the change.** Plain language. *"Make the car lime green."* *"Remove the trash can from the sidewalk."* *"Add a streetlight on the left."* There's no resolution control inside the modal – the edit runs against the source image based on your prompt.
4. **Click Generate.** The job runs; the edited image lands as a new version in the gallery.
5. **Iterate.** If the edit isn't quite right, click the pencil on the edited version and describe the next change.

## What works well

* **Color swaps.** *"Make the car blue."* The model is good at recoloring specific objects.
* **Object removal.** *"Remove the trash can."* The model fills the space behind with plausible content.
* **Object addition.** *"Add a flag on the building."* The model places the object plausibly.
* **Surface treatment changes.** *"Make the floor wet."* The model adds reflections and atmosphere.
* **Lighting adjustments.** *"Add warm light from the right."* Limited but workable.

## What works less well

* **Replacing a subject with a specific other subject.** *"Change the car to a Lamborghini Revuelto"* often produces a generic Lamborghini. For specific brand replacements, regenerate from the 3D scene with the right asset and image reference.
* **Major composition changes.** *"Move the character to the left of the frame"* doesn't work cleanly. Composition lives upstream in the 3D scene.
* **Multi-object orchestration.** *"Make the cars red and the trees green and the sky stormy"* tends to confuse the model. Do edits one at a time.

{% hint style="info" %}
The image-edit flow is fast iteration on small problems. For large problems, regenerate from the 3D scene. The dividing line: if the change you want is something a colorist would do in post, edit-in-place; if a director or DP would do it, fix upstream.
{% endhint %}

## Specific reference language

Help the model find the thing you want to change:

* *"Remove the trash can from the sidewalk"* beats *"remove the trash can"* if there are multiple trash cans in the image.
* *"Make the closest car red"* beats *"make the car red"* when there are multiple cars.
* *"Make the dashboard glow blue"* beats *"add blue light"* for specific surface lighting.

The model honors specific reference language better than vague language. If you'd describe the change to a colorist, describe it to the model the same way.

## Limits and known issues

* **Edits don't overwrite the original.** Each edit produces a new version in the gallery; the original is preserved alongside, and you can return to it at any time.
* **Edit cost is per attempt.** A failed or unusable edit still costs credits.
* **Edit support is per-model.** The pencil icon dims if the active model doesn't support in-place editing – the cue to switch models before retrying. All of the current image models do.
* **Quality drops on heavy iteration.** Editing an already-edited image five generations deep produces compound artifacts. After a few iterations, return to the source and start fresh.

## Related

* [Generate image](/visualize/generate-image.md)
* [First and last frame](/visualize/first-and-last-frame.md): for "fix in Photoshop, regenerate as start frame"-style edits.
* [Custom styles](/visualize/custom-styles.md)
* [Managing outputs](/visualize/managing-outputs.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/visualize/edit-images.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
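For example, a minimal sketch of that request in Python (the question text is only an illustration):

```python
# Query the documentation's ask endpoint for an answer not on this page.
import requests

question = "Which image models support in-place editing?"
resp = requests.get(
    "https://help.intangible.ai/visualize/edit-images.md",
    params={"ask": question},  # the question is URL-encoded automatically
)
resp.raise_for_status()
print(resp.text)  # a direct answer plus relevant excerpts and sources
```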
