You have written a VideoModel with commands, a generation loop, frame emission, and cleanup. Now it is time to run it. If you are here after completing the guide, this is where you see your model come to life. If you are here because you have a model you want to run locally (cloned or written), this page covers everything you need.

Runtimes

Your VideoModel does not know where frames go or where commands come from. Runtimes are the connectors that hook it up to inputs and outputs. When your code calls get_ctx().get_track().emit(frames), the runtime decides what happens: stream to a browser, write to disk, etc. Your model code stays the same. Reactor ships two runtimes:
| Runtime | Description |
|---|---|
| HTTP | Runs your model in an HTTP server with WebRTC streaming. Best for local development and demos. |
| Headless | Runs in a CLI. You type commands interactively (e.g., `start`, `stop`, `cmd set_prompt {"prompt": "hello"}`), and frames are written to disk as PNG files. Supports feeding input frames from a folder of PNGs via `start --input-folder PATH [--input-fps FPS]`. Useful for testing and batch processing. |
For local development, use the HTTP runtime. The headless runtime is more limited: you send commands by typing JSON into a terminal instead of using a frontend.

The HTTP Runtime

For local development and demos, the HTTP runtime is the way to go. It starts a FastAPI server that:
  • Exposes REST endpoints for session control
  • Handles WebRTC connections for real-time video streaming
  • Routes commands from the frontend to your model
When running locally, this setup is identical to production. Sessions, disconnection handling, and cleanup all behave the same: the full model lifecycle is identical. If your model works locally, it will work when deployed.

Starting the Runtime

If you have not already, install the runtime:
pip install reactor-runtime
The runtime uses GStreamer by default. To make it available, install the Python bindings:
pip install reactor-runtime[gst]
See GStreamer Transport below for the required system libraries and platform-specific instructions. If GStreamer is not set up correctly, you will see a warning at startup and the runtime will fall back to aiortc; see Troubleshooting for details.

Navigate to your model's directory (the one containing your @model decorated class) and run:
reactor run --runtime http
If your model is in a different directory, use --path:
reactor run --runtime http --path /path/to/my-model
The server starts on http://localhost:8080 by default. You will see logs as the model loads:
Loading model weights...
Weights loaded!
INFO:     Started server process
INFO:     Uvicorn running on http://0.0.0.0:8080
Use --help to see available options:
reactor run --runtime http --help

Base Options

These flags apply to all runtimes:
| Flag | Description | Default |
|---|---|---|
| `--path`, `-p` | Path to model directory | Current directory |
| `--runtime` | Runtime to use (`http` or `headless`) | `headless` |
| `-c`, `--config` | Path to model config file (overrides `@model` decorator config) | None |
| `-v`, `--verbose` | Enable verbose (DEBUG) logging | Off |
| `--debug` | Enable debugpy and wait for a debugger to attach before starting | Off |
| `--debug-port` | Port for debugpy to listen on (requires `--debug`) | 5678 |
| `--bucket`, `-b` | S3 bucket name for weights (overrides `REACTOR_MODELS_BUCKET` env var) | `reactor-models` |

HTTP Runtime Options

| Flag | Description | Default |
|---|---|---|
| `--host` | Host to bind the server to | `0.0.0.0` |
| `--port` | Port to bind the server to | 8080 |
| `--orphan-timeout` | Seconds to wait before stopping a session with no connected client. Set to 0 to disable | 30.0 |
| `--max-session-duration` | Maximum session duration in seconds. Sessions are automatically stopped after this duration. Also configurable via `MAX_SESSION_DURATION_SECONDS` env var | Disabled |
| `--webrtc-port-range` | UDP port range for WebRTC ICE in format `min:max` | Ephemeral |
| `--stun-server` | STUN server URL (can be specified multiple times) | Google STUN |
| `--turn-server` | TURN server in format `username;credential;url` (can be specified multiple times) | None |
| `--ice-transport-policy` | ICE transport policy: `all` gathers all candidate types, `relay` forces traffic through TURN server only. Also configurable via `ICE_TRANSPORT_POLICY` env var | `all` |
| `--transport` | WebRTC transport backend: `gstreamer` or `aiortc`. See GStreamer Transport | `gstreamer` (falls back to `aiortc`) |
| `--enable-profiling` | Enable file-based profiling output. Timing data is written to JSON files for later visualization | Off |
| `--profiling-output-dir` | Directory for profiling output files | `./profiling` |

Configuring WebRTC Port Range

By default, WebRTC uses ephemeral ports for ICE (Interactive Connectivity Establishment) UDP traffic. In environments with strict firewall rules, you may need to restrict WebRTC to a specific port range.

Via CLI Flag

Specify a port range using the --webrtc-port-range flag:
# Full range
reactor run --runtime http --webrtc-port-range 10000:20000

# Only set minimum (open-ended max)
reactor run --runtime http --webrtc-port-range 10000:

# Only set maximum (open-ended min)
reactor run --runtime http --webrtc-port-range :20000

Via Environment Variable

Set the WEBRTC_PORT_RANGE environment variable with the same format:
export WEBRTC_PORT_RANGE="10000:20000"
reactor run --runtime http
Ensure your firewall allows UDP traffic on the configured port range. The range must use ports above 1023 (non-privileged ports).

Configuring STUN/TURN Servers

WebRTC requires STUN/TURN servers to establish peer-to-peer connections, especially when clients are behind NAT or firewalls. By default, the runtime uses Google’s public STUN server (stun:stun.l.google.com:19302), which works for most local development scenarios. For production or restrictive network environments, you may need to configure your own STUN/TURN servers.

Via CLI Flags

You can specify STUN and TURN servers using CLI flags. Both flags can be repeated to add multiple servers:
reactor run --runtime http \
  --stun-server stun:stun.relay.metered.ca:80 \
  --turn-server "myuser;mypassword;turn:global.relay.metered.ca:80" \
  --turn-server "myuser;mypassword;turns:global.relay.metered.ca:443?transport=tcp"

Via Environment Variables

Alternatively, configure ICE servers using environment variables:
| Variable | Format | Example |
|---|---|---|
| `STUN_SERVERS` | Comma-separated URLs | `stun:stun1.example.com,stun:stun2.example.com` |
| `TURN_SERVERS` | Comma-separated `username;credential;url` entries | `user;pass;turn:turn.example.com:3478` |
| `ICE_TRANSPORT_POLICY` | `all` or `relay` | `relay` |
export STUN_SERVERS="stun:stun.relay.metered.ca:80"
export TURN_SERVERS="myuser;mypassword;turn:global.relay.metered.ca:80,myuser;mypassword;turns:global.relay.metered.ca:443?transport=tcp"
reactor run --runtime http
CLI flags take precedence over environment variables. If you specify --stun-server or --turn-server on the command line, the corresponding environment variable is ignored.
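The precedence rule can be pictured with a small resolver. This is a hypothetical sketch of the behavior described above, not the runtime's internals; the default Google STUN server is the one named earlier on this page:

```python
import os

GOOGLE_STUN = "stun:stun.l.google.com:19302"  # default noted in the docs above

def resolve_stun_servers(cli_flags: list[str]) -> list[str]:
    """CLI flags win outright; the env var is consulted only when no flag was passed."""
    if cli_flags:
        return cli_flags  # env var ignored entirely, not merged
    env = os.environ.get("STUN_SERVERS", "")
    from_env = [s.strip() for s in env.split(",") if s.strip()]
    return from_env or [GOOGLE_STUN]
```

Note that flags and environment variables are not merged: passing any `--stun-server` means `STUN_SERVERS` contributes nothing.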
TURN servers require authentication. The format is username;credential;url where:
  • username is your TURN server username
  • credential is your TURN server password
  • url is the server URL (e.g., turn:example.com:3478 or turns:example.com:443?transport=tcp for TLS)
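To make the entry format concrete, here is a small parser for `username;credential;url` strings. It is illustrative only (the runtime does its own parsing); the key detail is splitting on the first two semicolons so that URL suffixes like `?transport=tcp` stay intact:

```python
from typing import NamedTuple

class TurnServer(NamedTuple):
    username: str
    credential: str
    url: str

def parse_turn_server(entry: str) -> TurnServer:
    # Split on the first two semicolons only: the URL itself may contain
    # extra characters such as "?transport=tcp" and must stay whole.
    parts = entry.split(";", 2)
    if len(parts) != 3:
        raise ValueError(f"expected username;credential;url, got {entry!r}")
    username, credential, url = parts
    if not url.startswith(("turn:", "turns:")):
        raise ValueError(f"not a turn:/turns: URL: {url!r}")
    return TurnServer(username, credential, url)
```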

GStreamer Transport

The runtime supports two WebRTC transport backends:
| Transport | Description |
|---|---|
| GStreamer (default) | Uses the GStreamer multimedia framework. Supports hardware-accelerated video encoding and a wider range of codecs. Recommended for local development and production. |
| aiortc (fallback) | Pure-Python WebRTC implementation. Works out of the box with no system dependencies. Used automatically when GStreamer is not installed. |
The runtime uses GStreamer by default. If GStreamer is not installed, it falls back to aiortc automatically. You can override this with the --transport flag:
# Force aiortc
reactor run --runtime http --transport aiortc

# Force GStreamer (fails if not installed)
reactor run --runtime http --transport gstreamer

Installing GStreamer

GStreamer requires two things: the Python bindings and the system libraries.

1. Python Bindings

Install the gst optional extra:
pip install reactor-runtime[gst]
This installs PyGObject, the Python bindings for GObject/GStreamer.

2. System Libraries

PyGObject needs the GStreamer runtime and development libraries installed at the OS level. Follow the official GStreamer installation guide for platform-specific instructions (macOS, Linux, Windows).

After installing both components, run reactor run --runtime http. If GStreamer is set up correctly, the runtime will use it with no extra flags. If something is missing, you will see a warning at startup and the runtime falls back to aiortc. See Troubleshooting for details on each warning.
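You can check which stage of the setup you have reached with a short diagnostic that mirrors the fallback checks described in Troubleshooting. This is a sketch for your own debugging, not part of reactor-runtime:

```python
def check_gstreamer() -> str:
    """Report which transport the runtime would likely select, and why."""
    try:
        import gi  # PyGObject, installed by the reactor-runtime[gst] extra
    except ImportError:
        return "aiortc: PyGObject not installed"
    try:
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst  # needs GStreamer system libraries/typelibs
    except (ValueError, ImportError):
        return "aiortc: GStreamer system libraries or typelibs missing"
    try:
        Gst.init(None)
    except Exception:
        return "aiortc: GStreamer failed to initialize"
    return "gstreamer"

print(check_gstreamer())
```

Each failure branch corresponds to one of the startup warnings listed under Troubleshooting below.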

Connecting a Frontend

The fastest way to get a frontend running is with our interactive CLI. It scaffolds a complete React app with the Reactor SDK already configured:
npx create-reactor-app my-app
See our Quickstart for the full walkthrough and links to live demos.

Manual Setup

If you prefer to add Reactor to an existing project, install the SDK:
npm install @reactor-team/js-sdk

Basic Structure

Here’s a minimal React setup to connect to your local model:
page.tsx
"use client";

import { ReactorProvider, ReactorView } from "@reactor-team/js-sdk";

export default function Home() {
  return (
    <ReactorProvider modelName="my-model" local autoConnect>
      <ReactorView className="w-full aspect-video" />
    </ReactorProvider>
  );
}
That’s it. The ReactorProvider manages the connection, and ReactorView displays the video stream. Set modelName to the name from your model’s @model(name="...") decorator.
When running locally, any model name will work since only one model runs on the HTTP server. However, use the actual model name for consistency and to avoid issues when deploying to production.

Adding Controls

Use the useReactor hook to access connection state and actions:
Controls.tsx
"use client";

import { useReactor } from "@reactor-team/js-sdk";

export function Controls() {
  const { status, connect, disconnect, sendCommand } = useReactor((state) => ({
    status: state.status,
    connect: state.connect,
    disconnect: state.disconnect,
    sendCommand: state.sendCommand,
  }));

  return (
    <div>
      <p>Status: {status}</p>
      <button onClick={() => connect()}>Connect</button>
      <button onClick={() => disconnect(false)}>Disconnect</button>
      <button onClick={() => sendCommand("set_prompt", { prompt: "Hello" })}>
        Send Command
      </button>
    </div>
  );
}

Using the Dynamic Controller

For quick prototyping, the SDK includes a ReactorController that auto-generates UI for your model’s commands:
page.tsx
"use client";

import { ReactorProvider, ReactorView, ReactorController } from "@reactor-team/js-sdk";

export default function Home() {
  return (
    <ReactorProvider modelName="my-model" local autoConnect>
      <ReactorView className="w-full aspect-video" />
      <ReactorController />
    </ReactorProvider>
  );
}
The controller automatically discovers your model’s @command decorated methods and renders appropriate inputs.
Dynamic controller UI

Full Example

Here’s a complete page with status display and controls:
page.tsx
"use client";

import { ReactorProvider, ReactorView, ReactorController, useReactor } from "@reactor-team/js-sdk";

function Status() {
  const { status, sessionId, connect, disconnect } = useReactor((state) => ({
    status: state.status,
    sessionId: state.sessionId,
    connect: state.connect,
    disconnect: state.disconnect,
  }));

  return (
    <div className="flex items-center gap-4">
      <span className={status === "ready" ? "text-green-500" : "text-yellow-500"}>
        {status}
      </span>
      {sessionId && <span className="text-gray-400">Session: {sessionId}</span>}
      <button onClick={() => connect()}>Connect</button>
      <button onClick={() => disconnect(false)}>Stop</button>
    </div>
  );
}

export default function Home() {
  return (
    <ReactorProvider modelName="my-model" local autoConnect>
      <div className="flex flex-col gap-4 p-4">
        <Status />
        <ReactorView className="w-full aspect-video bg-black rounded-lg" />
        <ReactorController />
      </div>
    </ReactorProvider>
  );
}

Provider Props

| Prop | Type | Default | Description |
|---|---|---|---|
| `modelName` | string | required | Name from the `@model` decorator |
| `local` | boolean | `false` | Connect to `localhost:8080` instead of production |
| `autoConnect` | boolean | `true` | Automatically connect on mount |
| `coordinatorUrl` | string | | Override the coordinator URL (useful for custom ports) |
| `jwtToken` | string | | Authentication token for production |
If your model runs on a different port, set coordinatorUrl:
<ReactorProvider modelName="my-model" local coordinatorUrl="http://localhost:3001">

Quick Start Checklist

1. **Decorate your model class.** Your VideoModel subclass must have the @model(name="...") decorator.
2. **Start the runtime.** Run reactor run --runtime http from your model's directory.
3. **Create your frontend.** Use create-reactor-app or add the SDK to an existing project.
4. **Wrap with ReactorProvider.** Add ReactorProvider with local set to true and your model name.
5. **Display the stream.** Add ReactorView to show the video output.

Congratulations! You have completed the Reactor Runtime guide. Your model can now:
  • Load weights once and serve many users
  • Accept real-time commands from clients
  • Generate frames in a continuous loop with real-time conditioning
  • Emit frames smoothly regardless of generation speed
  • Clean up properly between sessions
  • Run locally with the same behavior as production

Troubleshooting

GStreamer warnings at startup

When the runtime cannot use GStreamer, it falls back to aiortc and logs a warning explaining what is missing. The warning tells you exactly what to fix.

Python bindings not installed

WARNING: GStreamer Python bindings (PyGObject) are not installed, falling back to aiortc.
Install them with: pip install reactor-runtime[gst]
Fix: Install the gst extra:
pip install reactor-runtime[gst]

System libraries missing

WARNING: PyGObject is installed but the GStreamer system libraries or typelibs are missing,
falling back to aiortc. Install GStreamer:
https://gstreamer.freedesktop.org/documentation/installing/index.html
Fix: The Python bindings are installed, but the GStreamer system packages are not. Install them using your platform’s package manager. See Installing GStreamer above for platform-specific commands.

GStreamer fails to initialize

WARNING: GStreamer failed to initialize, falling back to aiortc.
Fix: Both the Python bindings and system libraries are present, but GStreamer could not start. This usually means a corrupted or incomplete GStreamer installation. Reinstall the system packages and try again.

Next Steps

Dive deeper into the concepts behind the runtime: