# Controlling AI output

Pro creatives walked away from Midjourney, Sora, Runway, ChatGPT image generation, Stable Diffusion, and stock Flux when they couldn't keep a character consistent across shots, frame a specific composition, or stop the model from re-interpreting the client's product. Powerful generation, no control surface. The render that came back was someone else's idea of the brief.

Intangible's three modes are the control surface. Each mode is one lever.

* **Build locks the scene.** Objects, props, time of day, imported CAD, brand-true reference images. Everything that belongs in the shot, you place. The model honors what you brought.
* **Compose frames the moment.** Camera position, lens, aspect, aim target. Spatial decisions the model can't override.
* **Visualize picks the model.** Photoreal, stylized, video with first-and-last-frame interpolation. The auto-prompt is editable; the scene grounds it.

A render you can put your name on isn't a single prompt. It's three locked decisions.

## Which lever for which problem

The most common control failures with prompt-only tools, and the Intangible lever that fixes each:

| What you're trying to control                       | The right lever                               | Where to start                                                               |
| --------------------------------------------------- | --------------------------------------------- | ---------------------------------------------------------------------------- |
| Same character or product across every shot         | Build (Image Reference)                       | [Image reference](/overview/concepts/image-reference.md)                     |
| Specific framing, lens, aspect ratio                | Compose                                       | [Camera controls](/compose/camera-controls.md), [Lenses](/compose/lenses.md) |
| Photoreal vs stylized look                          | Visualize (model picker)                      | [Models](/visualize/ai-models.md)                                            |
| Lighting and time of day                            | Build (Sky) plus Visualize (Lighting Presets) | [Lighting presets](/visualize/lighting-presets.md)                           |
| Brand-specific style across the project             | Visualize (Custom Styles)                     | [Custom styles](/visualize/custom-styles.md)                                 |
| Specific 3D geometry the asset library doesn't have | Build (Smart Import or Generate 3D)           | [Generate 3D Asset](/build/generate-3d-asset.md)                             |
| A character or product the model has never seen     | Build (LoRAs)                                 | [LoRAs](/build/loras.md)                                                     |

## Why this works when prompting alone doesn't

Prompt-only tools render from text. Text is lossy. "A red SUV on a wet street at dusk" generates a plausible red SUV, not the 2024 Bronco the client briefed. The model fills in what it doesn't know, and the gaps drift between renders.

Intangible passes the AI model a structured 3D scene plus a prompt. The geometry, the placement, the bound reference images, and the camera framing are all there as structured input. The prompt is one piece, not the whole brief. See [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md) for the technical version.

The practical version: every page linked from this guide maps to one of the three levers. When a render comes back wrong, the fix is almost never in the prompt.
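
To make the contrast concrete, here is a purely illustrative sketch of the difference in what reaches the model. Every field name below is invented for this example; it is not Intangible's actual scene format or API:

```python
# Hypothetical sketch only: field names are invented for illustration,
# not Intangible's real scene schema.
prompt_only = "a red SUV on a wet street at dusk"  # the text is the whole brief

scene_grounded = {
    "prompt": "wet street at dusk",  # the text is one piece of the brief
    "objects": [
        {
            "asset": "bronco_2024.glb",    # imported CAD, not a guess
            "position": [0.0, 0.0, 4.2],   # placed by you, not inferred
        }
    ],
    "references": ["bronco_brand_photo.jpg"],  # bound image reference
    "camera": {"lens_mm": 35, "aspect": "16:9", "aim": "bronco_2024"},
}
```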

## Where to go next

* [/pages/aN2Javl8dFdLVVSpVQXM](/pages/aN2Javl8dFdLVVSpVQXM): The single most-used technique for keeping a character or product consistent across shots.
* [/pages/b3XcyyF7lIaWzF1q9iuG](/pages/b3XcyyF7lIaWzF1q9iuG): How Build, Compose, and Visualize map to a real production pipeline.
* [/pages/qCaRmCYWAzGbxTlKnwL3](/pages/qCaRmCYWAzGbxTlKnwL3): Why structured 3D plus prompt beats prompt alone.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/concepts/control-ai-output.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
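
For example, a minimal Python sketch using only the standard library (the example question is illustrative, not drawn from this page):

```python
import urllib.parse
import urllib.request

# The question goes in the `ask` query parameter, URL-encoded.
page = "https://help.intangible.ai/overview/concepts/control-ai-output.md"
question = "Which lens presets does Compose support?"  # illustrative question
url = f"{page}?ask={urllib.parse.quote(question)}"

# The response body contains a direct answer plus relevant excerpts and sources.
with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))
```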

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
