# Match a client's product exactly

The brief said *2024 Bronco* and the AI rendered a generic mid-sized SUV. The brief said *Lululemon storefront* and the AI rendered a glass-fronted retail box. The brief said *iPhone 15 Pro Max in Natural Titanium* and the AI rendered a phone. Pure prompt tools fail at brand fidelity because text descriptions are lossy and the model fills in what it doesn't know.

The fix combines two techniques: bring the client's actual geometry into the scene, and bind a hero photograph as the visual reference. The model gets structural truth from the mesh and surface truth from the photograph. The render comes back as the actual product.

## Steps

1. **Get the client's CAD or 3D file.** FBX, OBJ, USD, GLB, or DXF. Most agency briefs include this; if not, request it before generating anything. A 3D file beats reverse-engineering geometry from photographs.
2. **Import via Smart Import.** Build mode → **Import Asset**. Drop the file. Smart Import opens an editing environment that lets you fix scale, orientation, and category before the asset joins the scene. See [Import your own models](/build/import-your-own-models.md).
3. **Place the asset in the scene.** Position, rotate, scale to fit the composition. The geometry is now the source of truth for the object's shape and position.
4. **Open Object Details and attach a hero photograph as the reference image.** Select the imported asset, open Details (left side panel), scroll to **Reference images**, click **Upload**. Pick a clean photograph of the actual product – not a render, not a stylized image, the photograph the client uses on their site.
5. **Optionally add multi-view references.** Up to four images per object. Front, three-quarter, side. Multi-view produces more consistent renders than a single angle.
6. **Compose the shot.** Switch to Compose mode, frame the product the way the brief asks. Lens, aspect, aim target. The reference travels with the object across every shot in the project.
7. **Render in Visualize.** Use a reference-tuned image model first. Nano Banana Pro and Flux Pro Kontext are tuned for reference-image conditioning and will honor the photograph most tightly.
8. **Iterate without losing fidelity.** Re-frame, change lighting, switch shots. The reference stays bound to the object. Every render comes back as the actual product, not a new interpretation each time.

## Why this works

The visualizer receives the 3D geometry, the bound reference image, and the prompt as structured input – not as a single text description. The model sees the actual shape and the actual surface and renders both. Pure prompt tools can't do this because they have no concept of "this specific object" – they only have words.

For the technical version, see [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md).

## When the client only has photographs

No CAD, no 3D file, just photographs. Two options:

* **Generate a 3D model from the photograph.** Build mode → [Generate 3D Asset](/build/generate-3d-asset.md) → image-to-mesh. The output is a 3D asset based on the photograph. Then attach the same photograph as the reference image. Mesh from photo, surface from photo – it works.
* **Use [Image to scene and shot](/build/image-to-scene-and-shot.md).** Drop a photograph in. Intangible builds an approximate scene and a matching shot. Faster when the entire scene comes from one reference rather than just one product in it.

## Tips

* **Object names matter.** Specific names like the actual product code ("Bronco-2024", "Air-Max-1") beat descriptive names ("the SUV", "the running shoe") because descriptive names pull the model toward a generic interpretation.
* **Photo quality matters more than resolution.** A clean product photograph against a plain background outperforms a noisy file at any size.
* **One reference per object at a time, but multi-view counts as one.** Four photographs of the same product from different angles is one reference (multi-view). Four photographs of four different products is four references on four objects.

## Limits

* **Reference fidelity varies by model.** If a reference seems ignored, switch to Nano Banana Pro or Flux Pro Kontext for image generation. For video, render a reference-locked still first, then use first-and-last-frame interpolation.
* **The reference doesn't override composition.** A photograph of a product tells the model what it looks like, not where to put it. Composition is a Compose-mode decision.
* **Highly complex CAD can stress Smart Import.** Aerospace-grade FBX with thousands of nested groups may need flattening in the source DCC before import.

## Related

* [Image reference (concept)](/overview/concepts/image-reference.md)
* [Import your own models](/build/import-your-own-models.md)
* [Generate 3D Asset](/build/generate-3d-asset.md)
* [Image to scene and shot](/build/image-to-scene-and-shot.md)
* [Keep a character consistent across shots](/overview/how-to/consistent-character-ai.md)
* [Models](/visualize/ai-models.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/how-to/brand-true-ai-render.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
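For automated querying, the request above can be sketched in Python. This is a minimal illustration of building the query URL with proper encoding; `build_ask_url` and the example question are hypothetical names introduced here, and only the endpoint shape (`?ask=<question>` on the page URL) comes from this page.

```python
from urllib.parse import urlencode

# Page URL from the example above (assumed to be the current page).
BASE = "https://help.intangible.ai/overview/how-to/brand-true-ai-render.md"

def build_ask_url(question: str) -> str:
    """Return the GET URL that asks this documentation page a question.

    urlencode percent-/plus-encodes the question so spaces and
    punctuation survive transport in the query string.
    """
    return f"{BASE}?{urlencode({'ask': question})}"

url = build_ask_url("Which image models honor reference images most tightly?")
# Perform an HTTP GET on `url` with any client (requests, urllib, curl)
# to receive the answer plus relevant excerpts and sources.
```

The encoding step matters because a natural-language question almost always contains spaces and question marks that would otherwise corrupt the query string.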
