Every Reactor model must be decorated with @model. This decorator is metadata: it tells Reactor what your model is and how to identify it. It does not contain runtime arguments or configuration values for your model logic.
from reactor_runtime import VideoModel, model

@model(name="my-video-model")
class MyVideoModel(VideoModel):
    def start_session(self):
        ...
Everything Reactor needs to know about your model’s identity is declared right on the class itself.

Required Fields

name

The only required field. This is the unique identifier for your model in the Reactor ecosystem.
@model(name="my-diffusion-model")
class MyVideoModel(VideoModel):
    ...
Rules for name:
  • Use lowercase letters, numbers, and hyphens
  • No spaces or special characters
  • Should be unique across your models
  • This is used in logs, deployments, and URLs
  • This is what clients will use to identify your model and connect to it
Examples:
@model(name="longlive")

@model(name="my-diffusion-v2")

@model(name="world-simulator-beta")

Optional Fields

config (Default Configuration File)

@model(name="my-model", config="configs/default.yaml")
class MyVideoModel(VideoModel):
    ...
Points to a YAML file containing default configuration values passed to your model’s __init__. This is the factory configuration that ships with your model when it is built and deployed. If you do not specify config, your model receives an empty configuration (unless the user passes -c on the command line).
Configuration values can be overridden via CLI flags like --model.fps=60. See Configuration for details on overrides and best practices.
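For example, a defaults file and a per-launch override might look like this. The keys below are illustrative; use whatever your __init__ reads, and see Configuration for the exact override syntax.
# configs/default.yaml
fps: 30
num_steps: 8
guidance_scale: 7.5
At launch, a single value can then be overridden without editing the file:
reactor run --runtime http --model.fps=60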

weights

@model(name="my-model", weights=["my-model-weights", "vae-decoder"])
class MyVideoModel(VideoModel):
    ...
A list of weight folder names that your model requires. These are used by Reactor’s weight management system in production deployments where weights are distributed separately from code. For local development, you can load weight files using normal Python paths. The weights field becomes important when deploying to Reactor’s infrastructure.
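As a rough sketch, the names you list in weights are the same names you later pass to VideoModel.weights() to resolve a location at load time (assuming, as in the complete example below, that it returns a filesystem path; the checkpoint filename here is hypothetical):
def _load_pipeline(self):
    # Resolve the folder declared in @model(weights=["my-model-weights", "vae-decoder"])
    weights_path = VideoModel.weights("my-model-weights")
    # Hypothetical file layout; load the checkpoint however your framework expects
    checkpoint = f"{weights_path}/model.safetensors"
    ...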

Additional Metadata

You can pass any additional keyword arguments to store custom metadata:
@model(
    name="my-model",
    config="configs/default.yaml",
    author="Your Name",
    version="1.0.0",
    description="A real-time diffusion model"
)
class MyVideoModel(VideoModel):
    ...
This metadata is available for tooling and deployment systems but does not affect runtime behavior.

Running Your Model

When you run reactor run, the CLI scans Python files in the current directory for a class decorated with @model. Navigate to the folder containing your model file and run:
cd my-model
reactor run --runtime http
The CLI will:
  1. Import Python files in the directory
  2. Find the class decorated with @model
  3. Read the decorator’s configuration
  4. Instantiate your model and start the runtime
Only one @model decorator per project. If multiple classes are decorated with @model in the same directory, Reactor will raise an error. Each project should have exactly one entry point.
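For reference, a minimal project for the commands above might be laid out like this (file names are illustrative; only the @model-decorated class is required):
my-model/
├── model.py            # the single @model-decorated class
├── configs/
│   └── default.yaml    # referenced by @model(config=...)
└── requirements.txt    # dependencies installed before reactor run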

Import Errors

If your model file raises an import error before the decorated class definition, the CLI will report that no decorated class was found. This happens because Python aborts importing the file before it ever reaches the decorator, so the class is never registered.
Error: No @model decorated class found in project.
If you see this error but your class is correctly decorated, check for:
  • Missing dependencies (pip install your requirements)
  • Syntax errors earlier in the file
  • Import errors in modules your file depends on
Fix the import errors first, then reactor run will find your decorated class.
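For example, a failing top-level import hides an otherwise correct decorator (some_missing_dependency stands in for any package that is not installed):
from reactor_runtime import VideoModel, model
import some_missing_dependency  # ImportError is raised here...

@model(name="my-video-model")   # ...so the CLI never reaches this class
class MyVideoModel(VideoModel):
    ...
Installing the dependency (or removing the import) lets the CLI discover the class again.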

Complete Example

from reactor_runtime import VideoModel, model, command, get_ctx
from omegaconf import DictConfig

@model(
    name="realtime-diffusion",
    config="configs/production.yaml",
    weights=["diffusion-weights", "vae-decoder"]
)
class RealtimeDiffusionModel(VideoModel):
    def __init__(self, config: DictConfig):
        self.num_steps = config.get("num_steps", 5)
        self.pipeline = self._load_pipeline()

    def _load_pipeline(self):
        weights_path = VideoModel.weights("diffusion-weights")
        # Load your model...

    @command("set_prompt", description="Set the generation prompt")
    def set_prompt(self, prompt: str):
        self.current_prompt = prompt

    def start_session(self):
        while not get_ctx().should_stop():
            frames = self.generate()
            get_ctx().emit_block(frames)
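A matching configs/production.yaml only needs to provide the values __init__ actually reads; here that is num_steps (the value shown is illustrative):
# configs/production.yaml
num_steps: 5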

Decorator vs Configuration

It is important to understand what goes where:
@model Decorator                     Configuration (YAML file)
What your model is                   How your model runs
Model identity and metadata          Runtime parameters
Weight folder names                  FPS, resolution, steps
Does not change per environment      Can vary per deployment
Decorator is about identity:
@model(name="my-diffusion", weights=["my-weights"])
Configuration is about behavior:
fps: 30
num_steps: 8
guidance_scale: 7.5