This guide takes you from nothing to a working Reactor model with a connected frontend. By the end, you will have a model running locally and a web UI controlling it in real time.

Install the Runtime

First, install the Reactor Runtime package:
pip install reactor-runtime
Verify the installation by checking the CLI:
reactor --help
You should see the available commands, including run and init.
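If the reactor command is not found, confirm the package actually installed into the environment you are currently using:
pip show reactor-runtime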

Create a Model Project

Reactor provides a starter template to get you going. Create a new project:
reactor init my-first-model
cd my-first-model
This creates a directory with the following structure:
my-first-model/
├── brightness_example.py   # Your model code
├── config.yml              # Model configuration
├── requirements.txt        # Python dependencies
└── README.md               # Project documentation
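The startup logs later echo config.yml as Model config: {'text_x': 10, 'text_y': 10}, presumably the position of the text the template draws on each frame. The runtime hands these values to the model as a DictConfig (from omegaconf, per the __init__ signature shown later in this guide); a minimal sketch of reading them inside the model class, with everything beyond the two field names assumed:
def __init__(self, config: DictConfig):
    # config is parsed from config.yml; text_x/text_y match the
    # "Model config" line printed at startup
    self._text_x = config.text_x
    self._text_y = config.text_y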
Install the model’s dependencies:
pip install -r requirements.txt

Start the Model

Run the model with the HTTP runtime:
reactor run --runtime http
You will see output as the model loads:
Startup Logs
user@host % reactor run --runtime http
Starting reactor runtime (http)...
Model: brightness-example (brightness_example:BrightnessExample)
Model config: {'text_x': 10, 'text_y': 10}
INFO:root:Launching Reactor Runtime with model=brightness_example:BrightnessExample
INFO:reactor_runtime.output.frame_buffer:Frame Buffer initialized with fps_debuff_factor: 1.0
INFO:brightness_example:Loading model weights...
INFO:brightness_example:Model ready!
INFO:reactor_runtime.runtime_api:Model brightness-example loaded successfully and now available for inference.
INFO:reactor_runtime.runtimes.http.http_runtime:Starting HTTP runtime on 0.0.0.0:8080
INFO:     Started server process [21642]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
The model is now running as a local server. It has loaded its weights (in this example, simulated with a 2-second delay) and is waiting for a client to connect.
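Before wiring up a frontend, you can sanity-check that the runtime is listening. This is a plain TCP probe of the port from the Uvicorn line above, not a Reactor-specific API:
import socket

# Open (and immediately close) a TCP connection to the runtime's port;
# raises ConnectionRefusedError / TimeoutError if nothing is listening
with socket.create_connection(("localhost", 8080), timeout=2):
    print("Runtime is accepting connections on :8080")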
Figure: the runtime state machine (load model → wait for user → run inference → clean up → repeat)
The model loads once and stays in memory. When users connect and disconnect, only the session state changes; the weights remain loaded. This is the key to low-latency serving. See Model Instancing for details.
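In model code, that split maps onto the hooks this guide touches below: __init__ runs once at startup and loads the weights, while _reset_state rebuilds the cheap per-session values. A condensed sketch, assuming _reset_state is invoked between sessions (the sleep mirrors the template's simulated weight load):
import logging
import time
from omegaconf import DictConfig

logger = logging.getLogger("brightness_example")

class BrightnessExample:
    def __init__(self, config: DictConfig):
        # Runs once, at startup: load weights and keep them resident
        logger.info("Loading model weights...")
        time.sleep(2)  # the template simulates a real weight load
        logger.info("Model ready!")
        self._brightness = 1.0
        self._frame_count = 0

    def _reset_state(self):
        # Runs between sessions: per-user state resets, weights stay loaded
        self._brightness = 1.0
        self._frame_count = 0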

Configuring WebRTC

WebRTC uses STUN/TURN servers to establish connections and UDP ports for media streaming. The defaults work for most local development, but you can customize them for restrictive network environments.
reactor run --runtime http \
  --webrtc-port-range 10000:20000 \
  --stun-server stun:stun.example.com:3478 \
  --turn-server "user;password;turn:turn.example.com:3478"
See Local Setup for detailed configuration options.

Connect a Frontend

With the model running, you need a client to connect to it. Reactor provides a scaffolding tool that sets up a Next.js frontend with the SDK pre-configured. Open a new terminal and run:
pnpm dlx create-reactor-app
When prompted:
  1. Project name: Enter any name (e.g., my-frontend)
  2. Template: Select dynamic from the list
Figure: the interactive CLI for creating a new Reactor app
Once complete, start the frontend:
cd my-frontend
pnpm dev
Open http://localhost:3000 in your browser.
Figure: the dynamic UI example frontend

See It Work

Ensure that the local checkbox is selected in the dynamic UI, then click Connect in the frontend. You will see:
  1. The video stream playing in the browser
  2. The brightness slider, generated from the model's command schema
In the model terminal, log lines confirm the new session:
INFO:reactor_runtime.output.frame_buffer:Frame emission started
INFO:brightness_example:Session started
INFO:reactor_runtime.runtime_api:Model session started in background thread
INFO:reactor_runtime.runtimes.http.http_runtime:HTTP session started
Move the brightness slider. The video updates in real time as your model receives commands and adjusts its output on the fly. Click Disconnect when done. Your model terminal will show Session ended, and the model returns to idle, ready for the next connection.

Extending the Model

Now that you have a working setup, let’s add a new capability. Open brightness_example.py and add a new command to control the animation speed:
@command("set_speed", description="Control animation speed")
def set_speed(
    self,
    speed: float = Field(1.0, ge=0.1, le=5.0, description="Animation speed multiplier")
):
    """Set the speed of the moving bar."""
    self._speed = speed
Update __init__ to initialize the speed:
def __init__(self, config: DictConfig):
    # ... existing code ...
    self._speed = 1.0  # Add this line
And use it in _generate_frame:
def _generate_frame(self) -> np.ndarray:
    # ... existing code ...
    # Change this line:
    offset = int(self._frame_count * 2 * self._speed) % w
    # ... rest of the method ...
Do not forget to reset it in _reset_state:
def _reset_state(self):
    self._brightness = 1.0
    self._speed = 1.0  # Add this line
    self._frame_count = 0
Restart your model (Ctrl+C then reactor run --runtime http), refresh the frontend, and reconnect. A new slider for animation speed appears automatically. The frontend reads your model’s command schema and generates the appropriate UI.
The @command decorator turns methods into real-time API endpoints. Type annotations and Pydantic Field constraints become the schema that frontends use to generate controls. See Accepting User Inputs for the full API.
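The same pattern covers other parameter types. As a purely illustrative example (not part of the template), a boolean command that a schema-driven frontend like the dynamic UI would presumably render as a toggle:
@command("set_inverted", description="Invert the output colors")
def set_inverted(
    self,
    inverted: bool = Field(False, description="Invert colors when true")
):
    """Toggle color inversion on the generated frames."""
    self._inverted = inverted
As with _speed above, initialize self._inverted in __init__ and reset it in _reset_state.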

Next Steps

You now have a working local development environment. From here: