# LoRAs

A LoRA is a small fine-tuning file that biases the diffusion model toward a specific style, character, or product. Attach one to a Build-mode object and the visualizer applies the LoRA's behavior at render time, on top of whatever the base model would otherwise do.

![Create New LoRA modal with fields for Name (Dragon), Notes, Triggers, plus Upload LoRA and Public URL inputs and Cancel / Save buttons](/files/MatYfvMPGUNMz4h9rDpk)

## Image Reference vs LoRA vs Custom Style

Three different ways to lock visual consistency. They serve different jobs.

|                        | Image Reference                      | LoRA                                                             | Custom Style                            |
| ---------------------- | ------------------------------------ | ---------------------------------------------------------------- | --------------------------------------- |
| Scope                  | One specific object                  | One object, model-level bias                                     | Whole render, every shot                |
| Controls               | What this object looks like          | A learned style or character bias                                | Project-wide brand treatment            |
| Setup                  | Drop in a photo                      | Train or download `.safetensors` or `.pt`                        | Upload references in Visualize, save    |
| Best for               | Brand-true product, hero with photos | Illustrator style, internal aesthetic, characters without photos | Brand- or campaign-wide look            |
| Cross-shot consistency | Yes, automatic                       | Yes, automatic                                                   | Yes, automatic                          |
| Plan requirement       | All plans                            | All plans                                                        | All plans                               |
| Reach first when       | Almost always start here             | Can't capture the look in a single photo                         | The style is bigger than any one object |

The fastest correct answer for most agency briefs: try Image Reference first. Layer in LoRA when one specific character or style needs more than a photograph can carry. Switch to Custom Style when the brand language has to apply globally.

## What it does

Image references handle most consistency cases (see [Image reference](/overview/concepts/image-reference.md)) and they don't require a fine-tuning step. LoRAs cover the cases image reference can't:

* **Cross-object style consistency.** A whole rendered scene in a specific illustrator's pen-and-ink style isn't a per-object reference – it's a model-level bias. LoRA on a global "scene style" object handles it.
* **Character that needs more than one reference image can capture.** A custom character with many expressions, costumes, and angles benefits from a trained LoRA more than from a stack of reference images.
* **Brand-specific surface treatments.** A material library that the team has trained internally as a LoRA and wants to apply consistently.

If you don't already have a LoRA on hand, you probably don't need one. Image reference is the right starting tool.

## How to use it

LoRAs attach per object, via **More Options** in the object's contextual menu.

1. **Select the object** in the viewport. The contextual menu opens above it.
2. **Open More Options** – the three-dot (⋯) icon at the right end of the contextual menu.
3. **Click Add LoRA.** A dialog opens with inputs for the file (or URL) and an optional trigger word.
   * **Upload from computer.** Browse to a `.safetensors` or `.pt` file. The file gets stored alongside the project; subsequent renders reuse it without re-uploading.
   * **Add via URL.** Paste a URL from a service that hosts LoRA files (Civitai, Hugging Face, internal artifact storage). The system fetches and caches the file.
   * **Trigger word.** Some LoRAs are trained to activate only when a specific word appears in the prompt. Enter the trigger word here; the visualizer prepends it to the prompt block for this object.
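The trigger-word behavior described in step 3 can be sketched as a simple prompt-assembly step. This is a hypothetical illustration only (the function name and prompt format are assumptions; the visualizer's actual auto-prompt internals are not exposed):

```python
def build_object_prompt(object_description, trigger_word=""):
    """Prepend a LoRA trigger word to an object's prompt block, if one is set.

    Hypothetical sketch of the behavior described in the docs: the visualizer
    prepends the trigger so the LoRA activates for this object's render.
    """
    if trigger_word:
        return f"{trigger_word}, {object_description}"
    return object_description

# A LoRA trained with the trigger "dragon-style" activates only when that
# exact word appears in the prompt.
print(build_object_prompt("a castle on a cliff", "dragon-style"))
# → dragon-style, a castle on a cliff
```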

Once attached, the LoRA is bound to that object. The visualizer's auto-prompt notes the LoRA's presence in the \[Subjects] block at render time.

{% hint style="info" %}
LoRAs apply per-object, not per-scene. Attach the LoRA to the object whose look should be biased. To bias the entire scene's style, attach to a "global style" placeholder object, or use the [Custom styles](/visualize/custom-styles.md) feature in Visualize mode for the scene-wide case.
{% endhint %}

## How LoRAs interact with image references

```mermaid
flowchart LR
    A[Object] --> B[Image reference]
    A --> C[LoRA]
    B --> D[Visualizer]
    C --> D
    D --> E[Render]
```

Both can apply to the same object. The model receives the prompt text plus the image reference plus the LoRA bias at generation time. In practice:

* **Image reference dominates for "what does this specific thing look like".** A photograph of the actual car constrains the rendered car's appearance directly.
* **LoRA dominates for "what is the style of this rendering".** The pen-and-ink LoRA bends every render through that aesthetic regardless of the reference image's surface.

Use both when both apply. They don't fight; they layer.
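The layering can be pictured as one generation request per object, where the reference image and the LoRA are independent inputs that travel together. A minimal sketch with hypothetical field names, not the visualizer's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectRender:
    """Hypothetical per-object render request: both conditionings coexist."""
    prompt: str
    reference_image: Optional[str] = None      # "what this specific thing looks like"
    loras: list = field(default_factory=list)  # "what style is this rendering"

    def to_request(self) -> dict:
        # Neither input replaces the other; the model receives all three.
        return {
            "prompt": self.prompt,
            "image_reference": self.reference_image,
            "loras": self.loras,
        }

car = ObjectRender(
    prompt="a red sports car on a coastal road",
    reference_image="car_photo.jpg",  # constrains appearance
    loras=[{"file": "pen_and_ink.safetensors", "weight": 0.8}],  # biases style
)
print(car.to_request())
```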

## Limits and known issues

* **Not every base model honors LoRAs.** Some of the visualizer's models (notably some video models) don't accept LoRA conditioning. The LoRA attaches successfully but is silently ignored at render time. Test against your target model before committing.
* **LoRA files are large.** Multi-hundred-megabyte LoRAs slow project sync and increase storage costs. Use compressed formats where possible.
* **Trigger words must be exact.** A LoRA trained on the trigger "robocar" won't activate from "robo car" or "robotic car". Match the training data exactly.
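The exact-match requirement can be sanity-checked before rendering. A hedged sketch (token-level matching is an assumption here; some LoRAs match substrings, so defer to the LoRA's own documentation):

```python
def trigger_present(prompt, trigger):
    """True only if the trigger appears as an exact word in the prompt."""
    words = prompt.lower().replace(",", " ").split()
    return trigger.lower() in words

print(trigger_present("a robocar racing at night", "robocar"))   # True
print(trigger_present("a robo car racing at night", "robocar"))  # False: trigger split in two
```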

## Where to find LoRAs

* **Civitai** – community library for image-model LoRAs. Most are stylistic biases.
* **Hugging Face** – broader model repository; LoRAs live alongside their parent base models.
* **Internal artifacts** – if your agency or team has trained custom LoRAs, host them in your own storage and share URLs.

## Related

* [Object details](/build/object-details.md)
* [Image reference](/overview/concepts/image-reference.md)
* [Custom styles](/visualize/custom-styles.md)
* [How the visualizer thinks](/overview/concepts/how-the-visualizer-thinks.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://help.intangible.ai/build/loras.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
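For example, the query URL can be constructed with standard URL encoding before issuing the GET request (a sketch using Python's standard library; any HTTP client works):

```python
from urllib.parse import urlencode

base = "https://help.intangible.ai/build/loras.md"
question = "Which base models ignore LoRA conditioning at render time?"

# urlencode handles spaces and punctuation in the natural-language question.
url = f"{base}?{urlencode({'ask': question})}"
print(url)
# The GET request itself can then be issued with any HTTP client.
```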
