# Generate a 3D model from an image

The Asset Library covers the common ground – chairs, vehicles, characters, environments. It doesn't cover every brief. The brief for a deeptech pitch wants a specific reentry capsule. The brief for a fashion campaign wants a specific bag. The brief for an architecture flythrough wants a specific facade.

When the library doesn't have it, you generate it. Intangible's Generate 3D Asset takes a text prompt or a reference image and returns a 3D mesh you can drag into the scene like any library asset. Image-to-mesh is the mode that pulls its weight – text-to-mesh produces generic shapes, while image-to-mesh reproduces the specific shape in the photograph.

## Steps

1. **Open Build mode.** Generate 3D Asset is a Build-mode action; it adds a new asset to the library, not to a specific shot.
2. **Open Generate 3D Asset.** Look for **Generate 3D** in the Asset Library footer, or open it through the AI Composer with a prompt like "generate a 3D model of a 1965 Ford Mustang from this image".
3. **Pick the input mode.** Three options:
   * **Text-to-mesh** – describe the asset. Works for generic shapes; weak for specific products or branded geometry.
   * **Image-to-mesh** – drop a single reference image. The output mesh approximates the shape in the photograph. The most-used path.
   * **Multi-view-to-mesh** – supply three photographs from different angles (front, three-quarter, side). The most accurate path; it produces a topologically cleaner mesh than single-view.
4. **Provide the input.** For image modes, drop a clean photograph against a plain background if possible. Cluttered backgrounds confuse the segmentation step (see the preprocessing sketch after this list). For text mode, write a specific prompt – "a 1965 Ford Mustang, side view, glossy red paint" beats "a sports car".
5. **Generate.** The job is queued and the mesh is produced; generation usually takes a minute or two.
6. **Drag the result into the scene.** When the mesh is ready, it appears in the **My Assets** category of the Asset Library. Drag it into the viewport like any other asset.
7. **Optionally bind a reference image to the new asset.** Open Object Details, attach the same photograph (or a hero shot) as an Image Reference. The mesh provides the shape, the reference provides the surface – the model gets both.
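
Step 4's clean-background advice can be automated before you ever open Intangible. Here's a minimal preprocessing sketch, assuming the open-source `rembg` package and a hypothetical `client_photo.jpg`; this is generic image preparation, not an Intangible API:

```python
from PIL import Image
from rembg import remove  # pip install rembg

# Hypothetical input: a product photo with a busy background.
photo = Image.open("client_photo.jpg")

# Cut the subject out, then flatten it onto plain white so the
# mesh-extraction step has as little as possible to ignore.
cutout = remove(photo)  # RGBA image with a transparent background
clean = Image.new("RGB", cutout.size, "white")
clean.paste(cutout, mask=cutout.split()[-1])  # alpha channel as paste mask
clean.save("client_photo_clean.png")
```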

## Why this works

Text-to-3D models work by sampling shapes from the training distribution, conditioned on the text. They produce believable shapes, but not specific ones. Image-to-3D conditions on the actual shape in the photograph, so the output is the specific object's silhouette and proportions, not a category-average version.
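
To see why conditioning on a measurement pins down an instance while conditioning on a label samples the category, here's a toy numerical illustration – a one-number stand-in for "shape", not Intangible's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training distribution for the "car" category, with shape reduced
# to a single number (say, body length in metres) for illustration.
category_shapes = rng.normal(loc=4.5, scale=0.6, size=10_000)

# Text-to-3D: conditioning on the label "car" alone samples from the
# whole category distribution -- believable, but generic.
text_sample = rng.choice(category_shapes)

# Image-to-3D: the photograph acts as a (noisy) measurement of one
# specific car, so sampling concentrates around that instance.
true_length = 5.2   # the client's actual car
photo_noise = 0.05  # residual uncertainty after seeing the photo
image_sample = rng.normal(loc=true_length, scale=photo_noise)

print(f"text-conditioned:  {text_sample:.2f} m (somewhere in the category)")
print(f"image-conditioned: {image_sample:.2f} m (pinned to the real car)")
```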

For agency briefs, image-to-mesh is almost always the right reach – the client has photographs, the photograph is the brief, the mesh is downstream.

## Tips

* **Clean backgrounds beat hero photographs for image-to-mesh.** A product shot against white outperforms an editorial photograph against a complicated environment, because the mesh-extraction step has less to ignore.
* **Multi-view is worth the extra setup if accuracy matters.** Three angles produce noticeably better topology than one. For complex shapes (vehicles, hardware, characters), use multi-view if you have the references – a framing sketch follows this list.
* **Generate once, reference everywhere.** A single Generate 3D output sits in My Assets and can be dragged into any project. You don't need to regenerate the same shape across projects.
* **Pair with Image Reference for finished work.** Generate 3D gives you the geometry. Image Reference (attached to the same object) gives you the surface. Both are needed for brand-true output.
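
If you shoot for multi-view, consistent framing across the three angles helps. A small Pillow sketch, assuming hypothetical filenames; the square white canvas is our convention, not a documented Intangible requirement:

```python
from PIL import Image

# Hypothetical filenames for the three suggested angles.
VIEWS = ["front.jpg", "three_quarter.jpg", "side.jpg"]

def normalise(path: str, size: int = 1024) -> Image.Image:
    """Fit a photo onto a square white canvas at a fixed resolution,
    so all three views enter the generator framed alike."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((size, size))  # shrink in place, keep aspect ratio
    canvas = Image.new("RGB", (size, size), "white")
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))
    return canvas

for path in VIEWS:
    normalise(path).save(path.replace(".jpg", "_clean.jpg"))
```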

## Plan and limits

* **Generate 3D Asset is a Business plan feature.** Free and Explorer plans don't have access. See [Plans and billing](/teams-and-billing/plans-and-billing.md).
* **Mesh quality varies with input quality.** Garbage in, garbage out applies. A blurry phone photograph at an awkward angle produces a worse mesh than a clean studio shot – the sharpness check after this list can screen inputs before you spend a generation.
* **Topology isn't production-ready for animation rigging.** The output mesh is suitable for static placement and shot rendering, not for character rigging or simulation. For rigged work, generate the static mesh, then use it as reference for traditional 3D modeling outside Intangible.
* **No internal-cavity or hidden-geometry capture.** Image-to-mesh sees what the photograph sees. The back of an object photographed from the front is approximated, not measured.
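
Since input quality dominates output quality, a cheap sharpness screen can save a wasted generation. A heuristic sketch using OpenCV's variance-of-Laplacian focus measure; the threshold is an assumption to tune, not an Intangible limit:

```python
import cv2

# Hypothetical input: the reference photo you are about to upload.
img = cv2.imread("client_photo.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("client_photo.jpg")

# Variance of the Laplacian is a standard focus measure: low variance
# means few sharp edges, i.e. a likely-blurry image.
focus = cv2.Laplacian(img, cv2.CV_64F).var()

THRESHOLD = 100.0  # assumed cutoff -- tune on your own reference shots
if focus < THRESHOLD:
    print(f"Likely blurry (focus={focus:.0f}); reshoot before generating.")
else:
    print(f"Sharp enough (focus={focus:.0f}) to try image-to-mesh.")
```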

## Related

* [Generate 3D Asset (reference)](/build/generate-3d-asset.md)
* [Image to scene and shot](/build/image-to-scene-and-shot.md) – generate a whole scene from an image, not just one asset.
* [Import your own models](/build/import-your-own-models.md) – when the client has actual CAD instead of just photos.
* [Match a client's product exactly](/overview/how-to/brand-true-ai-render.md) – the canonical workflow that combines Generate 3D with Image Reference.
* [Image reference (concept)](/overview/concepts/image-reference.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/overview/how-to/ai-3d-model-from-image.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
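
For example, a minimal Python sketch of that request using `requests`; the question text is illustrative:

```python
import requests

PAGE = "https://help.intangible.ai/overview/how-to/ai-3d-model-from-image.md"

# An illustrative question -- specific, self-contained, natural language.
question = "Which plans include the Generate 3D Asset feature?"

resp = requests.get(PAGE, params={"ask": question}, timeout=30)
resp.raise_for_status()
print(resp.text)  # direct answer plus relevant excerpts and sources
```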
