Reactor Runtime turns any video generation model into a real-time, interactive API. Define your model in Python, and we handle WebRTC streaming, session management, and command routing.
  • Stream video to any frontend at generation speed
  • Accept real-time input from users while generating
  • Load weights once, serve many users with zero cold start
  • Auto-generate UI controls from your model’s command schema
from reactor_runtime import VideoModel, model, command, get_ctx
from pydantic import Field

@model(name="my-model", config="config.yml")
class MyVideoModel(VideoModel):
    def __init__(self, config):
        self.pipeline = load_pipeline(device="cuda")  # your own loading code
        self._prompt = "a calm forest"
        self._speed = 1.0

    @command("set_prompt", description="Change the prompt in real time")
    def set_prompt(self, prompt: str):
        self._prompt = prompt

    @command("set_speed", description="Adjust generation speed")
    def set_speed(self, speed: float = Field(1.0, ge=0.1, le=5.0)):
        self._speed = speed

    def start_session(self):
        while not get_ctx().should_stop():
            frames = self.pipeline.generate(prompt=self._prompt, speed=self._speed)
            get_ctx().get_track().emit(frames)
reactor run --runtime http
That’s it. Your model is now a WebRTC server streaming video and accepting commands.

How It Works

Your model loads weights once in __init__. When a user connects, start_session() runs your generation loop — streaming frames while @command methods handle input on a separate thread. When the user disconnects, session state resets and the model is immediately ready for the next user. Weights stay loaded.

Your @command decorators define the input schema. Any frontend that speaks this schema can connect: web apps, game engines, mobile apps. You define the API once.
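The key pattern here is that the generation loop and the command handlers share state across threads: a command simply reassigns an attribute, and the loop picks up the new value on its next pass. The sketch below illustrates that handoff with a toy stand-in using only the standard library — ToyModel, set_prompt, and the stop event are hypothetical names for illustration, not the reactor_runtime API.

```python
import threading
import time

class ToyModel:
    """Minimal stand-in for a VideoModel: a generation loop reading
    state that a command thread mutates concurrently."""

    def __init__(self):
        self._prompt = "a calm forest"
        self._stop = threading.Event()
        self.frames_seen = []  # stand-in for emitted frames

    def set_prompt(self, prompt: str):
        # Runs on the command thread; plain attribute assignment is
        # atomic in CPython, so the loop sees the new value next pass.
        self._prompt = prompt

    def start_session(self):
        # Stand-in for the generation loop: "emit" one frame per pass.
        while not self._stop.is_set():
            self.frames_seen.append(self._prompt)
            time.sleep(0.01)

model = ToyModel()
session = threading.Thread(target=model.start_session)
session.start()
time.sleep(0.05)
model.set_prompt("a stormy sea")  # simulate a user command mid-stream
time.sleep(0.05)
model._stop.set()
session.join()
```

After the run, frames_seen contains frames generated under the old prompt followed by frames under the new one — the same shape of interaction a connected frontend triggers by sending a set_prompt command mid-session.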

Quickstart

Install, run a model, and connect a frontend in 5 minutes.