# Generate 3D Asset

Type a description, drop in a reference image, or hand the system three photographs from different angles. The output is a 3D mesh ready to drag into your scene like any other library asset.

## Watch

{% embed url="https://www.youtube.com/watch?v=5jKS24GhIzU" %}

![Generate 3D Asset workflow producing a mesh from a reference image](/files/W94UGX4PCjnK550H0N91)

{% hint style="info" %}
The Free plan has limited access to Generate 3D Asset. Explorer, Business, and Enterprise plans have full access. See [intangible.ai/pricing](https://intangible.ai/pricing) for specifics.
{% endhint %}

## What it does

Three input modes, all producing a usable 3D mesh:

* **Single reference image.** Upload one photograph; the system generates a mesh that matches the visual.
* **Multi-view reference.** Upload front, side, and back views (and optionally other angles) of the same object. The resulting mesh honors the geometry implied by the multiple angles far more accurately than single-image generation.
* **Text only.** Describe what you want in plain language. "A medieval gauntlet, articulated, dark steel." The system invents the geometry from the description.

The result is a mesh that lives in the asset library next to your imports and library items. Drag it, swap it, attach an [image reference](/overview/concepts/image-reference.md): the same workflows as any regular asset.

## How to use it

![Generate Asset modal with the Text / Single Image / Multi-View input-mode tabs, an image drop zone, a Name field, a Detail picker (Standard / Medium / High), and a 'Check to review and edit' toggle](/files/F0RMx017K9yZ8PiYbZwj)

1. **Open the asset library.** Bottom of Build mode.
2. **Click Generate 3D** (next to **Import**). The Generate 3D modal opens.
3. **Pick the input mode.** Single image, multi-view (front/left/back/right), or text.
4. **Provide the input.** Upload the image(s) or type the description.
5. **Set the Detail tier.** Standard, Medium, or High. Higher tiers cost more credits and take longer; Standard is fine for most blocking work.
6. **Leave "Check to review and edit" on** if you want to inspect and adjust the result before it lands in your library. It's recommended (and on by default). Turn it off to skip the review step and drop the mesh straight into the library.
7. **Click Generate.** The job runs in the background. A toast appears with progress; the asset library reopens when the result arrives.
8. **Review the result (if the toggle was on).** The same editing environment as [Smart Import](/build/import-your-own-models.md) opens: set scale, orientation, name, and description.
9. **Save.** The mesh lands in your asset library, available to drag into the scene like any other asset.

## When to reach for which input mode

* **Single image** – fastest, cheapest, lowest fidelity. Good for "approximately right" mass that you'll attach an image reference to anyway.
* **Multi-view** – more credits per generation, much better geometry. Use when the object has distinct front/side/back features (vehicles, characters, hardware with handles or controls).
* **Text only** – when there's no reference image at all, or when you're brainstorming and the prompt itself is the test ("a robot vacuum that looks like a manta ray").

## Custom assets at scale

Once a generated asset is in your library, it behaves like any other library asset. Team-asset visibility covers the agency case: one creative generates a fleet of vehicles for a campaign, and the rest of the team drags them in without re-generating.

The webinar's automotive walkthrough shows the canonical flow:

1. Generate the hero asset from multi-view reference photographs.
2. Save to Team Assets with a descriptive name.
3. Every team member can drop the same hero into different shots and scenes.
4. An image reference attached to the hero locks the visual look at render time.

## Limits and known issues

{% hint style="warning" %}
**Single-image generation can produce ambiguous geometry.** A photo of a car from the front gives the system no information about the back. The result will invent something that may not match the actual rear of the vehicle. Use multi-view when accurate geometry matters.
{% endhint %}

* **Topology is generation-time, not production-grade.** Generated meshes have non-uniform topology that's fine for scene blocking and rendering through the visualizer, but not for downstream pipelines that need clean retopology (Unreal cinematics, real-time engines). For those, treat the generated mesh as a starting point and retopologize externally (see the sketch after this list).
* **Materials are inferred.** A texture is generated from the reference image; specialty shaders aren't authored. For brand-accurate surface treatment, attach an [image reference](/overview/concepts/image-reference.md) to the object after import.
* **Generation cost is per attempt.** A failed or unusable result still costs credits. Leave the "Check to review and edit" toggle on so you can inspect before the mesh lands in your library.
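
If Blender is your external tool, its Quadriflow remesher is one way to get clean quads from a generated mesh. A minimal sketch, assuming the mesh has already been exported and imported into Blender as the active object (the face count is illustrative, not a recommendation from Intangible):

```python
import bpy

# Assumes Blender's bundled Python, and that the generated mesh has been
# imported and is the active object in Object Mode.
obj = bpy.context.active_object
assert obj is not None and obj.type == 'MESH'

# Quadriflow rebuilds the surface as a clean quad grid.
# target_faces is illustrative; tune it to your engine's polygon budget.
bpy.ops.object.quadriflow_remesh(mode='FACES', target_faces=5000)
```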

## Related

* [Asset library](/build/asset-library.md)
* [Smart Import](/build/import-your-own-models.md)
* [Image reference](/overview/concepts/image-reference.md)
* [Plans and billing](/teams-and-billing/plans-and-billing.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/build/generate-3d-asset.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
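
As a concrete illustration, here is a minimal Python sketch of that request using only the standard library. The question string is an example; any specific, self-contained natural-language question works, and it must be URL-encoded before being passed as the `ask` parameter:

```python
import urllib.parse
import urllib.request

# Illustrative question; replace with your own.
question = "Which plans include multi-view 3D generation?"

# URL-encode the question and pass it as the `ask` query parameter.
url = (
    "https://help.intangible.ai/build/generate-3d-asset.md?ask="
    + urllib.parse.quote(question)
)

with urllib.request.urlopen(url) as response:
    # The body contains a direct answer plus relevant excerpts and sources.
    print(response.read().decode("utf-8"))
```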
