# Brief AI like a DP

A director walks onto a stage with a brief. They don't hand the DP a paragraph and walk away. They walk through the set, point at where the action will happen, agree the lens, agree the framing, agree the look. Three locked decisions. Then the DP shoots.

Pure prompt tools collapse all three decisions into a paragraph and ask the model to back-fill. That's why they fail at finished work. Intangible separates the three decisions across three modes, so each is a deliberate choice rather than a hope.

This guide is the mental model, not a step list. Use it to make sense of the rest of the docs.

## The three decisions, in order

### 1. The scene (Build mode)

The set. The world. Everything that's *in* the shot.

A DP doesn't pick the lens before the set is built. They walk the location, see the geometry, see the props, see the natural light. The scene is the first decision because every later decision depends on it.

In Intangible, Build mode is where the scene is constructed. Place objects, import client CAD, generate the mesh that doesn't exist yet, attach reference images for brand-true geometry, set the time of day, drop in populators where you need crowds or trees or vehicles in volume. When Build mode is done, the scene is locked. The model can't override what you placed.

See [Build mode](/build/build.md), [Image reference](/overview/concepts/image-reference.md), [Match a client's product exactly](/overview/how-to/brand-true-ai-render.md).

### 2. The camera (Compose mode)

The framing. The lens. The aspect.

A DP picks the lens to do specific work. 24mm to immerse, 50mm for natural perspective, 85mm to flatter, telephoto to compress. Aspect ratio sets the proportion the audience reads – cinemascope for film, vertical for social, square for branded content. Position and aim set what's in frame.
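That lens intuition can be made numeric with the standard rectilinear field-of-view formula. The sketch below assumes a full-frame 36mm-wide sensor; Intangible's internal camera model may use different film-gate sizes.

```python
import math

def horizontal_fov(focal_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal field of view in degrees for a rectilinear lens:
    fov = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Wider lens, wider view: ~74 degrees at 24mm vs ~24 degrees at 85mm.
for f in (24, 50, 85):
    print(f"{f}mm -> {horizontal_fov(f):.1f} degrees horizontal")
```

The compression effect of a telephoto falls out of the same math: a narrow field of view means distant objects fill the frame, flattening apparent depth.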

Compose mode holds these decisions as 3D camera state, not as English. The model receives the actual camera, not a description of one. Re-prompting won't change the framing because the framing isn't in the prompt – it's in the scene.

See [Cameras and shots](/compose/shots.md), [Lenses](/compose/lenses.md), [Aspect ratios and film gates](/compose/aspect-ratios-and-film-gates.md), [Control shot composition](/overview/how-to/control-ai-composition.md).

### 3. The model (Visualize mode)

The look. The render. The technology that turns the brief into pixels.

A DP doesn't pick the camera body for every shot. They pick a working camera at the start of the production and stay with it. AI rendering is similar: pick the model that fits the brief – photoreal, illustrative, video with motion, video with end-frame interpolation – and stay with it across the project.

Visualize mode is where you pick the model and tune its inputs. The scene's auto-prompt is editable, but you don't write it from scratch; it's pre-populated from the structured Build and Compose state. The model receives the scene, the camera, the prompt, the bound references, the lighting preset – all of it at once.

See [Models](/visualize/ai-models.md), [Generate image](/visualize/generate-image.md), [Generate video](/visualize/generate-video.md), [Style presets](/visualize/style-presets.md).

## What this gets you

When the brief lives in the scene and is locked across three deliberate decisions, the renders come back consistent. The same character, the same composition language, the same look across every shot.

When the brief drifts because it lives in a paragraph, the renders drift with it.

This is what Philip's tutorials walk through end to end. Watch one to see the mental model in motion:

{% embed url="https://www.youtube.com/watch?v=x_glYD5b8jw" %}

## Common mistakes

* **Trying to brief in Visualize first.** Skipping Build and Compose to get to the render is the most common mistake among people coming from pure prompt tools. The render won't honor the brief because the brief isn't in the scene yet. Build first, compose second, render third.
* **Re-prompting to fix composition.** If the framing is wrong, the fix is in Compose, not Visualize. The model can't reframe what's locked spatially.
* **Re-prompting to fix the product.** If the product looks off-brand, the fix is in Build (Image Reference), not Visualize. The model isn't being given the visual truth.
* **Picking the model before the scene exists.** Different models have different strengths, but you can't choose well without a composed shot to test against. Build the shot, then test models against it.

## A concrete walkthrough of the pattern

Here's the sequence for an agency brief – a car ad, hero drive, urban setting:

1. **Scene** – import the client's vehicle CAD via Smart Import, attach a hero photograph as Image Reference, populate a city street with buildings and crowd, set Time to 5:30pm for golden-hour light.
2. **Camera** – add the hero shot, position low and forward of the vehicle, set lens to 35mm for wide environmental coverage, set aspect to 2.39:1 cinemascope, aim camera at the vehicle so the framing tracks if the car drives.
3. **Model** – switch to Visualize, pick the photoreal image model that's tuned for reference-image conditioning, pick a Lighting Preset that fits the brand language, render. Then render the video version with the same scene + camera, switching only the model.

Three deliberate decisions, locked. The render reflects the brief because the brief reflects the production order.

## Related

* [Controlling AI output](/overview/concepts/control-ai-output.md) – the conceptual umbrella.
* [The three modes](/overview/concepts/the-three-modes.md) – how Build, Compose, and Visualize map to a real production pipeline.
* [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md) – the technical version of why this works.
* [From idea to finished render](/overview/how-to/from-idea-to-finished-render.md) – the same pattern walked end to end as steps.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/how-to/brief-ai-like-a-dp.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
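Because the question is passed as a query parameter, it must be URL-encoded. A minimal Python sketch that builds a valid query URL (the question string is an illustrative placeholder):

```python
from urllib.parse import urlencode

# Documentation-query endpoint for this page.
base = "https://help.intangible.ai/overview/how-to/brief-ai-like-a-dp.md"

# Hypothetical question; urlencode handles spaces and punctuation.
question = "Which lenses are available in Compose mode?"
url = f"{base}?{urlencode({'ask': question})}"

# An agent would then issue an HTTP GET on this URL.
print(url)
```

The encoded URL ends in `?ask=Which+lenses+are+available+in+Compose+mode%3F`; any HTTP client can then perform the GET.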
