# Troubleshooting rendering

A render came back not quite right. The fix depends on what kind of wrong it is. The patterns below cover most of what surfaces in support tickets.

## The composition is wrong

**Symptom**: the camera angle, the framing, or the set of visible subjects doesn't match what you intended.

**Fix**: not in Visualize. The composition is determined by the 3D scene plus the active shot's camera. Switch to Compose mode and adjust the camera; switch to Build mode and adjust object positions. Re-prompting won't help because the prompt didn't cause the issue.

If the auto-prompt's \[Subjects] block doesn't list a hero you expected to see, the model isn't being told the hero exists. New objects default to the asset name plus a number (for example `Chair 2`); rename it to something specific in [Object details](/build/object-details.md), regenerate the prompt, and render again.

## The subject is generic

**Symptom**: the rendered car is "a car" rather than the specific Bronco the brief calls for.

**Fix**: attach an [image reference](/overview/concepts/image-reference.md) to the object in Build mode. The model receives the image at generation time and constrains the rendered subject to honor it. Description fields alone aren't enough for brand-accurate rendering.

For characters specifically: vanilla object names ("Mark") plus a reference image hold consistency across shots better than descriptive names ("pirate captain") with no reference.

## A stubborn extra object keeps appearing

**Symptom**: an extra car, character, or prop that the model keeps including even though it isn't in your scene. The famous "extra cop car" problem from the image-to-video tutorial.

**Fix**: don't expect prompt edits to remove it; the model is hallucinating, and re-prompting tends to invent different artifacts. Two real fixes:

1. **Edit the rendered image** with the pencil tool. *"Remove the extra car on the right."* See [Edit images](/visualize/edit-images.md).
2. **Download, edit externally, re-upload as a start frame.** Photoshop the artifact out, then use the cleaned image as the start frame for video generation. See [First and last frame](/visualize/first-and-last-frame.md).

The second pattern is what Phil walks through in the image-to-video tutorial (video `tI7ODVUJd1o`).

## The render is darker or muddier than the scene

**Symptom**: the viewport looks bright and clean; the rendered output is dim, hazy, or low-contrast.

**Fix**: check the active [lighting preset](/visualize/lighting-presets.md) and the scene's [environment](/build/environment.md) settings. A "stormy coastal tempest" lighting preset overrides any sunny look the viewport implies. Also check fog density – high fog density darkens the model's interpretation of the scene. Swap the preset, change the time of day, or turn fog off.

If lighting and environment look right and the render is still muddy, the model may be the issue. Try a different image model; Nano Banana Pro and Flux Pro Kontext have different default tonal registers.

## The character drifts between shots

**Symptom**: the same character renders differently in shot 1 and shot 3 of the same sequence.

**Fix**: image reference plus vanilla naming. The character object should have:

* A vanilla name (Mark, Jeff, Sarah – not "pirate captain")
* An attached image reference (preferably multi-view: front, three-quarter, side)

See [Image reference](/overview/concepts/image-reference.md). The webinar pirate-scene walkthrough is the canonical example.

If consistency is still slipping despite vanilla naming and reference images, consider a [LoRA](/build/loras.md) trained on the character. LoRAs hold consistency more tightly across many renders.

## The model ignores my reference image

**Symptom**: image reference is attached, the auto-prompt mentions it, but the rendered subject still looks generic.

**Fix**: not all models honor references equally. On the image side, Nano Banana Pro and Flux Pro Kontext are tuned for image-reference conditioning. Older video models, including older Kling versions, are less capable with references than newer releases – treat their reference behavior as guidance rather than constraint. Switch to a newer model if the reference is being ignored. See [Models](/visualize/ai-models.md).

## The render takes too long

**Symptom**: a generation job sits for several minutes without returning.

**Fix**: there's no way to cancel a render once you've hit Generate. The visualizer offloads generation to external providers, and provider queues fluctuate; the only signal you get is elapsed time. If a job fails (provider-side or network), the system retries automatically, and you aren't charged credits for the failed attempt.

If multiple jobs are slow, the culprit is usually the network: see [System requirements](/overview/reference/system-requirements.md).

## The render failed and I don't know why

**Symptom**: the render comes back as a failure card or a red toast with a generic "generation failed" message. You can't tell whether the prompt tripped a content filter, the provider timed out, or something else went wrong.

**Fix**: hover or click into the failure to read the provider's reason. The failure surface has three layers of detail:

* **Toast**: the short summary that appears top-right when the job fails. Often includes the provider's primary reason in one line.
* **Tooltip**: hover the failed render's tile in the gallery. The tooltip expands to show the captured provider reason, usually one to two sentences.
* **Media banner with "Show more"**: the failed render's tile shows a generic banner. Click **Show more** to reveal the full provider error message and any details captured at the time of failure.

The text comes directly from the underlying provider (Fal, the model vendor, or the Intangible job manager). Sometimes the reason is useful ("input image rejected by content filter", "request timed out after 600s"); sometimes it's opaque ("internal error"). When the reason names a specific cause, the fix is usually obvious from the text. When it doesn't, retry once – transient provider errors often clear on the next attempt – before escalating via [Help and feedback](/overview/get-started/help-and-feedback.md).

Failed jobs don't consume credits, so a retry is free.

## I want a specific style but can't get it

**Symptom**: every render comes back photoreal even though you've described a specific illustrative look.

**Fix**: pick a [style preset](/visualize/style-presets.md). The preset's prose is what the model needs; describing a style in your own words sometimes lands and sometimes doesn't. Presets bundle the language that works.

If the preset isn't quite the look you want, [Custom styles](/visualize/custom-styles.md) lets you upload reference images for the look and save it as a reusable preset.

## Related

* [Edit images](/visualize/edit-images.md)
* [First and last frame](/visualize/first-and-last-frame.md)
* [Image reference](/overview/concepts/image-reference.md)
* [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md)
* [Models](/visualize/ai-models.md)
* [Help and feedback](/overview/get-started/help-and-feedback.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/faq/troubleshooting-rendering.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
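For example, here is a minimal Python sketch of the same request. It assumes the third-party `requests` library is installed, and the question text is purely illustrative:

```python
import requests

# Ask the documentation a specific, self-contained question.
# `requests` URL-encodes the `ask` parameter automatically.
url = "https://help.intangible.ai/overview/faq/troubleshooting-rendering.md"
params = {"ask": "Which image models honor image references most reliably?"}

response = requests.get(url, params=params)
response.raise_for_status()

# Per the description above, the body contains a direct answer
# plus relevant excerpts and sources from the documentation.
print(response.text)
```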
