# AI Composer

A conversational agent that builds environments and adjusts scenes from natural-language prompts. Ask for a tropical forest, a city block, a living room with characters – it goes and gets the assets, arranges them, and reports back.

## Watch

<https://www.youtube.com/watch?v=XxbxqcGCuR4>

![AI Composer Chat History panel on the right side of Build mode showing a conversation - user prompt asks to add a second sports car behind the Lamborghini, AI responds with the asset it found and placed. A second yellow super car is now visible in the scene](/files/JEejhsFf1Ro6k5zQ4mzx)

## What it does

If you've staged a previs scene before, you know the shape of the work: an hour collecting forty pieces of vegetation for a forest, then another hour distributing them so it looks like an actual forest and not a plant nursery. AI Composer collapses that. Prompt for the environment – "create a tropical forest" – and the agent collects the assets it needs and arranges them in a populator, ready for you to nudge.

The agent runs on a multi-agent architecture under the hood. Asset-lookup agents identify what your prompt implies, scene-composition agents decide how those assets should be arranged, and Intangible's proprietary world-model agents reason about the spatial layout. The point is that you get to direct, not drag – which matters when your director asks for a different forest twenty minutes before the client meeting.

## How to use it

The AI Composer dock sits at the bottom-right of Build mode. Open it with the chat-bubble icon, type a prompt, and press Enter.

1. **Prompt for an environment.** "Create a city block." "Build a wedding scene." "Make a tropical forest, roughly 100 by 100 meters." Specificity helps; vague prompts produce generic results.
2. **Wait for the populator.** Most environments come back as a populator – a procedural container that distributes assets across an area. Expect 30 seconds to a couple of minutes depending on the complexity.
3. **Adjust the populator.** Click the populator in the viewport to open its contents panel. Increase or decrease specific asset types ("more palm trees, fewer banana trees"), change footprint, or add new asset categories.
4. **Add objects to an existing scene.** "Add a character on the left side of the cabinet." "Drop a coffee table between the two couches." The agent uses spatial language – "left", "right", "between", "behind" – and tries to honor it.
5. **Delete or replace.** "Delete the rightmost couch." "Replace the dragon with a horse." The agent matches against object names and descriptions in the scene.
6. **Pose a character.** "Make the character stand." "Rotate the character to face the camera." Pose changes apply to whichever character the agent thinks you mean; name your characters if you have multiples.
7. **Attach a reference image to a specific object.** "Attach this as a reference image to the chair: \<image URL>". The agent finds the matching object, attaches the image, and the viewport updates live to render the textured and colored variant of that object. This is the conversational shortcut for attaching image references, one of the most important Build actions; see [Image reference](/overview/concepts/image-reference.md).
8. **Thumbs up or thumbs down.** Each agent response has feedback icons – click them. The signal trains the agent to do better next time.

### Working with character names

The agent has a habit of latching onto descriptive names ("pirate captain") and over-rendering them in subsequent prompts. If you want a character to stay visually consistent across shots, give them a vanilla name – "Mark", "Jeff" – and use image references on the character object instead of relying on the description. The webinar walkthrough hits this directly.

## Details

| Capability           | What it can do                                                                                     |
| -------------------- | -------------------------------------------------------------------------------------------------- |
| Environment creation | Cities, forests, interiors, beaches, urban districts – anything with characteristic asset families |
| Asset addition       | Add a single object to an existing scene with spatial language                                     |
| Asset deletion       | Delete by reference – "the rightmost couch", "the dragon", "everything red"                        |
| Character pose       | Standing, sitting, running, gesturing, custom prompts                                              |
| Character placement  | Spatial moves – "behind the bar", "in the back of the truck"                                       |
| Population scaling   | "More vegetation", "fewer cars", "double the crowd"                                                |

## Limits and known issues

* **Spatial accuracy is approximate.** "Move the character forward by 75 cm" works directionally; the exact distance often comes back wrong. Re-prompt or use [Transform](/build/transform.md) for fine placement.
* **Generation can go off the rails.** A vague prompt occasionally returns nothing useful. Refine with more specificity or undo and try again.
* **Character consistency is best with image references, not names.** See "Working with character names" above and [Image reference](/overview/concepts/image-reference.md).
* **The agent doesn't know your custom assets unless you tell it.** Imported objects in My Assets won't be picked up by general prompts. Drag them in manually, or reference them by name.

## Related

* [Asset library](/build/asset-library.md)
* [Populators](/overview/concepts/populators.md)
* [Image reference](/overview/concepts/image-reference.md)
* [Smart Import](/build/import-your-own-models.md)
* [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/build/ai-composer.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
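As a sketch of how a client might form that request, the snippet below builds the query URL with the `ask` parameter URL-encoded. The `build_ask_url` helper and the sample question are illustrative, not part of any SDK; only the base URL and the `ask` parameter come from this page.

```python
from urllib.parse import urlencode

def build_ask_url(question: str) -> str:
    # Illustrative helper (not an official SDK function): append the
    # documented `ask` query parameter to this page's URL, URL-encoding
    # the natural-language question.
    base = "https://help.intangible.ai/build/ai-composer.md"
    return f"{base}?{urlencode({'ask': question})}"

# A GET request on the resulting URL returns a direct answer plus
# relevant excerpts and sources from the documentation.
url = build_ask_url("How do I scale a populator's footprint?")
```

Any HTTP client works; the only contract is a GET on the page URL with a specific, self-contained question in `ask`.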
