# Control shot composition

Pure prompt tools randomize composition. Ask for "a wide shot of a car on a wet street at night" and you get whatever wide shot the model trained on. Ask for "from a low angle, 35mm, 2.39:1 cinemascope, lead with negative space on the left" and you get back a generic wide shot anyway. Composition is spatial; text is bad at carrying it.

Compose mode is where you make composition non-negotiable. Camera position, lens, aspect, aim target – all spatial decisions, all passed to the model as structured input alongside the prompt. The model can't override what you set; it renders the framing you placed.

## Steps

1. **Add a shot.** In Compose mode, click the camera **+** icon directly below the viewport. The new shot snapshots the current viewport framing as its starting point.
2. **Position the camera.** Orbit, pan, dolly, and zoom to find the framing. The viewport shows the camera's actual view. See [Camera controls](/compose/camera-controls.md).
3. **Set the lens.** Open the lens picker and choose the focal length. Ultra-wide for environmental establishment, 24mm or 35mm for documentary feel, 50mm for natural perspective, 85mm for portraiture, telephoto for compression. See [Lenses](/compose/lenses.md).
4. **Set the aspect ratio.** 16:9 for digital delivery, 2.39:1 for cinemascope, 1:1 for social, 9:16 for vertical. The picker has the standard production aspects pre-loaded. See [Aspect ratios and film gates](/compose/aspect-ratios-and-film-gates.md).
5. **Set an aim target if the camera should track an object.** Aim Camera binds the camera's look-at point to a 3D object, so when the object moves the camera follows. The target appears as a crosshair anchored to the object. See [Aim Camera and Target](/compose/aim-camera-and-target.md).
6. **Frame the action.** Use the PIP (picture-in-picture) overhead view in the top-left of the viewport to check spatial relationships – where the subject sits relative to the environment. The PIP is on by default in Compose.
7. **Lock with Shot Details.** Open [Shot details](/compose/shot-details.md). Name the shot from the brief and write a description. The shot is now ready for render.
8. **Render.** In Visualize mode, generate. The composition you set – position, lens, aspect, aim – is passed to the model as part of the scene. The render comes back framed exactly the way you composed it.
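As a mental model, the structured input a shot carries into the render can be sketched as plain data. The field names below are hypothetical, illustrative only, not the app's actual schema; the point is that composition travels as data, not English:

```python
# Hypothetical sketch -- field names are illustrative, not the app's
# actual internal schema.
shot = {
    "name": "SC04_car_wide",             # named from the brief (step 7)
    "camera": {
        "position": [-12.0, 1.2, 30.0],  # world-space placement (step 2)
        "aim_target": "hero_car",        # look-at binding (step 5)
        "focal_length_mm": 35,           # lens pick (step 3)
        "aspect_ratio": "2.39:1",        # aspect pick (step 4)
    },
    "prompt": "wet street at night, neon reflections",
}
```

The model can reinterpret the prompt string; it can't reinterpret the camera block.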

## Why this works

The Compose mode camera is a real 3D camera, not a description of one. The model receives the camera's position, target, focal length, and aspect ratio as structured parameters – not as English. Lens choice changes perspective and field of view in the rendered output the way it would in a real camera, because the same math applies.
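The math in question is the standard pinhole relationship between focal length and field of view. A quick sketch, assuming a full-frame 36 mm gate (the actual gate depends on your film gate settings):

```python
import math

def horizontal_fov_deg(focal_length_mm: float, gate_width_mm: float = 36.0) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(gate_width_mm / (2 * focal_length_mm)))

horizontal_fov_deg(35)  # about 54 degrees -- the documentary range
horizontal_fov_deg(85)  # about 24 degrees -- compressed, portrait range
```

Halving the field of view by swapping lenses changes perspective compression in the render for the same reason it does on set: the geometry feeding the model changed.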

Prompt tools can't do this because they have no 3D camera state. "35mm" in a prompt is a token the model has loosely associated with certain visual qualities; it isn't actually a 35mm lens.

For the deeper version, see [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md).

## Composition decisions in production order

| Decision            | Where                                         | Affects                                                |
| ------------------- | --------------------------------------------- | ------------------------------------------------------ |
| Camera position     | Compose viewport (orbit / pan / dolly / zoom) | Where the camera is in the scene                       |
| Lens / focal length | Compose lens picker                           | Field of view, perspective compression, depth of field |
| Aspect ratio        | Compose aspect picker                         | Frame proportions, where to place subjects in frame    |
| Aim target          | Aim Camera tool                               | What the camera tracks if the subject moves            |
| Subject placement   | Build mode (move the object)                  | What's in frame at all                                 |

The order matters in production: build the world, then choose the lens, then position the camera, then aim. Reverse the order and you spend more time hunting for the framing.

## Tips

* **Pick the lens before fine-tuning the position.** Different focal lengths require different camera distances for the same composition. A 24mm at 5 feet and an 85mm at about 18 feet frame the subject the same size, but with very different perspective. Choose the lens, then position.
* **Use the PIP overhead view for staging.** When characters or vehicles need specific blocking relative to each other, the PIP shows the spatial layout that the camera view obscures.
* **Aim Camera for moving subjects.** A racing car shot frames itself if you aim the camera at the car and let the car drive past. The camera tracks the subject across the shot's duration without manual keyframing.
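The distance math behind the first tip is the pinhole model: to hold subject size constant, camera distance scales linearly with focal length. A minimal sketch (an approximation; it ignores the subject's own depth):

```python
def matched_distance(d1_ft: float, f1_mm: float, f2_mm: float) -> float:
    """Distance for lens f2 that frames the subject the same size as f1 at d1."""
    return d1_ft * f2_mm / f1_mm

matched_distance(5, 24, 85)  # ~17.7 ft: same subject size, much flatter perspective
```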

## Limits

* **No DOF / aperture control as a numeric setting.** Depth of field is implied by lens choice and the model's interpretation. Telephoto lenses produce more compressed depth in renders than wide lenses, but you don't dial f-stop directly.
* **No camera shake or handheld simulation.** The camera path is smooth unless you keyframe variation. For handheld feel, add it in your edit.
* **The first-and-last-frame video flow uses two stills.** For controlled video composition with camera motion, see [First and last frame](/visualize/first-and-last-frame.md).

## Related

* [Camera controls](/compose/camera-controls.md)
* [Lenses](/compose/lenses.md)
* [Aspect ratios and film gates](/compose/aspect-ratios-and-film-gates.md)
* [Aim Camera and Target](/compose/aim-camera-and-target.md)
* [Shot details](/compose/shot-details.md)
* [Controlling AI output](/overview/concepts/control-ai-output.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/how-to/control-ai-composition.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
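For example, a minimal Python sketch that builds such a query URL (using only the endpoint shown above; stdlib only):

```python
from urllib.parse import urlencode

BASE = "https://help.intangible.ai/overview/how-to/control-ai-composition.md"

def ask_url(question: str) -> str:
    """Build the GET URL for a natural-language documentation question."""
    return f"{BASE}?{urlencode({'ask': question})}"

# Fetch with any HTTP client, e.g.:
#   import urllib.request
#   answer = urllib.request.urlopen(ask_url("How do I lock a shot?")).read().decode()
```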
