Installation
The Reactor CLI is available after installing the reactor-runtime package:
pip install reactor-runtime
After installation, the reactor command is available globally.
Commands Overview
| Command | Purpose |
|---------|---------|
| reactor run | Start the runtime and serve your model |
| reactor init | Create a new model from a template |
| reactor download | Download model weights from the Model Registry |
| reactor upload | Upload a model to the Supabase database |
| reactor capabilities | Extract and display model capabilities |
| reactor setup | Interactive setup for credentials |
Only commands available in the public repository are documented below.
reactor run
Start the Reactor runtime with your model.
Usage
reactor run [OPTIONS]
Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| --deploy | flag | False | Run in production deployment mode |
| --debug | flag | False | Run in headless debug mode (runtime only) |
| --headless | flag | False | Alias for --debug |
| --host | string | 0.0.0.0 | Host to bind the FastAPI server |
| --port | integer | 8081 | FastAPI server port |
| --log-level | string | INFO | Logging level (CRITICAL, ERROR, WARNING, INFO, DEBUG) |
What It Does
When you run reactor run:

1. Validates Workspace
   - Checks for manifest.json in the current directory
   - Validates the required fields (class, reactor-runtime, model_name, model_version)
   - Verifies the installed runtime version matches the manifest
2. Loads Model
   - Reads the class field from the manifest (e.g., model_longlive:LongLiveVideoModel)
   - Instantiates your VideoModel class with the args from the manifest
   - Calls your __init__ method (heavy weight loading happens here)
3. Starts Components
   - Runtime Server (Port 8081): FastAPI server hosting your model
   - LiveKit Server (Port 7880): Handles WebRTC video streaming
   - Development Services: Session management and coordination
4. Ready State
   - Model loaded in GPU memory
   - All three components listening
   - Ready to accept session requests
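The class field follows a module:ClassName convention. As an illustrative sketch of how such a spec can be resolved (the runtime's actual loader is internal; the helper name and use of importlib are assumptions):

```python
import importlib


def load_model_class(spec: str):
    """Resolve a manifest "class" entry such as
    "model_longlive:LongLiveVideoModel" into a Python class."""
    module_name, class_name = spec.split(":")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Demonstrated with a standard-library class standing in for a model module:
cls = load_model_class("collections:OrderedDict")
```

The runtime then instantiates the resolved class with the args from your manifest.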
Examples
Basic usage:
# Run model in current directory
reactor run
Custom port:
# Run on port 9000
reactor run --port 9000
Debug mode:
# Run with debug logging
reactor run --log-level DEBUG
Debug/Headless mode:
# Run without coordinator and LiveKit (testing only)
reactor run --debug
Production deployment:
# Requires REDIS_URL environment variable
reactor run --deploy
Runtime Modes
Local Mode (Default)
- Starts all components automatically
- Uses local LiveKit server
- Development services enabled
- Perfect for local testing
Debug Mode (--debug or --headless)
- Runs only the runtime server
- No streaming infrastructure
- For testing model initialization only
- Cannot accept client connections
Deploy Mode (--deploy)
- Production mode
- Requires the REDIS_URL environment variable
- Connects to cloud services
- For actual deployment
- Not present in public repository release.
Validation
The command validates your manifest.json:
{
"reactor-runtime": "0.0.0", // Must match installed version
"class": "model_file:ClassName", // Required
"model_name": "your-model", // Required
"model_version": "1.0.0", // Required
"args": {}, // Optional
"weights": [] // Optional
}
Error messages you might see:
- "No manifest.json found": run the command from your model directory
- "Version mismatch": update the manifest or the installed runtime version
- "Missing 'class' field": add the class specification to the manifest
- "Missing 'model_name' field": add the model name to the manifest
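A minimal sketch of these checks, assuming the field names shown above (the CLI's real validator is internal and may differ):

```python
REQUIRED_FIELDS = ("reactor-runtime", "class", "model_name", "model_version")


def validate_manifest(manifest: dict, runtime_version: str) -> list:
    """Return a list of validation errors, mirroring the checks
    described above: required fields plus a runtime-version match."""
    errors = []
    for field in REQUIRED_FIELDS:
        if field not in manifest:
            errors.append(f"Missing '{field}' field")
    declared = manifest.get("reactor-runtime")
    if declared is not None and declared != runtime_version:
        errors.append(
            f"Version mismatch: manifest wants {declared}, "
            f"installed {runtime_version}"
        )
    return errors
```

An empty list means the manifest passes all of these checks.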
Process Management
Stopping the runtime:
# Press CTRL+C
# Runtime will:
# 1. Stop accepting new sessions
# 2. Clean up current session (if any)
# 3. Shutdown all three components
Between sessions:
- Model stays loaded in memory
- No restart needed
- Next user connects instantly
reactor download
Download model weights from the Model Registry.
For Deployments Only: The reactor download command is designed for
production deployments and requires access to Reactor’s private S3 bucket and
Supabase database. If you’re developing locally or don’t have these
credentials, skip this command and continue using your local weights as you
normally would (e.g., from Hugging Face, local files, etc.).
Usage
reactor download [OPTIONS]
Options
| Option | Type | Description |
|--------|------|-------------|
| --weights | list | Explicit list of weight folder names |
| --models | list | List of model IDs to fetch weights from (format: model@version) |
| --no-cache | flag | Force re-download even if weights exist locally |
What It Does
Downloads weights from S3 Model Registry to local cache:
~/.cache/reactor_registry/
├── LongLive/
│ └── models/
│ └── longlive_base.pt
├── Wan2_1_VAE/
│ └── vae_weights.pt
└── ...
Modes
Mode 1: From Local Manifest (Default)
# Downloads all weights specified in local manifest.json
reactor download
Reads the weights array from your manifest.json:
{
"weights": ["LongLive", "Wan2_1_VAE"]
}
Downloads these weight folders from S3.
Mode 2: Explicit Weights List
# Download specific weight folders
reactor download --weights LongLive Wan2_1_VAE Matrix-Game-2_0-base
Downloads the specified weight folders directly.
Mode 3: From Model IDs
# Download weights for specific models
reactor download --models longlive matrix-2
# Specify versions
reactor download --models [email protected] matrix-2@latest
Fetches manifests from Supabase database, extracts their weights arrays, and downloads all unique weights.
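The dedup step can be sketched like this (collect_unique_weights is a hypothetical helper, not CLI API; the real command fetches the manifests from the registry first):

```python
def collect_unique_weights(manifests: list) -> list:
    """Merge the "weights" arrays from several model manifests,
    dropping duplicates while keeping first-seen order."""
    seen = []
    for manifest in manifests:
        for name in manifest.get("weights", []):
            if name not in seen:
                seen.append(name)
    return seen
```

Each unique weight folder is then downloaded once, even if several models share it.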
Examples
Pre-download for current model:
# Downloads the weights listed in manifest.json
reactor download
Download for multiple models:
# Download weights needed for longlive and matrix-2
reactor download --models longlive matrix-2
Force re-download:
# Re-download even if cached
reactor download --no-cache
Download specific weights:
reactor download --weights LongLive Wan2_1_VAE
Parallel Downloads
The command downloads weights in parallel for speed:
- Uses ThreadPoolExecutor
- Shows progress bars for each download
- Skips already downloaded weights (unless --no-cache is set)
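A rough sketch of that pattern, with fetch standing in for the private S3 transfer (the helper names here are illustrative, not CLI API):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "reactor_registry"


def fetch_weight(name: str, fetch) -> str:
    """Download one weight folder unless it is already cached."""
    target = CACHE_DIR / name
    if target.exists():
        return f"cached: {name}"
    fetch(name, target)
    return f"downloaded: {name}"


def download_all(names, fetch):
    """Fetch several weight folders concurrently, as the CLI does
    with a ThreadPoolExecutor."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda n: fetch_weight(n, fetch), names))
```

Cached folders are skipped entirely, which is why re-running the command is cheap.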
Cache Location
Weights are cached at:
~/.cache/reactor_registry/<weight-folder-name>/
Example:
~/.cache/reactor_registry/LongLive/
~/.cache/reactor_registry/Wan2_1_VAE/
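The cache path can be computed with pathlib; weight_path here is a hypothetical helper mirroring the layout above, not part of the runtime API:

```python
from pathlib import Path


def weight_path(folder: str) -> Path:
    """Return where the CLI caches a given weight folder,
    following the ~/.cache/reactor_registry/<name>/ layout."""
    return Path.home() / ".cache" / "reactor_registry" / folder
```

In model code you would normally use VideoModel.weights() instead, as shown below.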
Usage in Code
Access downloaded weights in your model:
import torch

from reactor_runtime import VideoModel

class YourModel(VideoModel):
    def __init__(self):
        # Returns a Path to ~/.cache/reactor_registry/LongLive/
        weights_path = VideoModel.weights("LongLive")

        # Load from the cached location
        # (`model` stands for your network, constructed earlier in __init__)
        model.load_state_dict(torch.load(weights_path / "model.pt"))
reactor init
Create a new model from a template.
Usage
reactor init <MODEL_NAME>
Arguments
| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| MODEL_NAME | string | Yes | Name for your new model |
What It Does
Creates a new directory with starter files:
<MODEL_NAME>/
├── manifest.json # Model configuration template
├── model_template.py # VideoModel skeleton implementation
├── requirements.txt # Empty dependencies file
└── README.md # Getting started guide
Examples
# Create a new model called "my-awesome-model"
reactor init my-awesome-model
# Output:
# Created directory: my-awesome-model
# Created: manifest.json
# Created: model_template.py
# Created: requirements.txt
# Created: README.md
Next Steps After Init
# 1. Navigate to your model
cd my-awesome-model
# 2. Edit the template
# Edit model_template.py
# 3. Update manifest
# Edit manifest.json
# 4. Add dependencies
# Edit requirements.txt
# 5. Run your model
reactor run
Check out the Coding Models Guide for more information on how to get started coding your models.
reactor capabilities
Extract and display your model’s capabilities (commands).
Usage
reactor capabilities
What It Does
Analyzes your VideoModel class and extracts all @command decorated methods:
from pydantic import Field
from reactor_runtime import VideoModel

class YourModel(VideoModel):
    # @command is provided by the runtime
    @command("set_prompt", description="Change the prompt")
    def set_prompt_command(self, prompt: str = Field(..., description="New prompt")):
        pass
Outputs JSON:
{
"commands": {
"set_prompt": {
"description": "Change the prompt",
"schema": {
"prompt": {
"type": "string",
"required": true,
"description": "New prompt"
}
}
}
}
}
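How such extraction can work, sketched with a toy stand-in for the runtime's @command decorator (the real decorator and extraction logic live inside reactor-runtime and may differ):

```python
import inspect


def command(name, description=""):
    """Toy decorator that tags a method with command metadata,
    standing in for the runtime's @command."""
    def wrap(fn):
        fn._command = {"name": name, "description": description}
        return fn
    return wrap


class ToyModel:
    @command("set_prompt", description="Change the prompt")
    def set_prompt_command(self, prompt: str):
        pass


def extract_capabilities(cls) -> dict:
    """Collect tagged methods into a capabilities dict, roughly
    as `reactor capabilities` does for @command methods."""
    commands = {}
    for _, member in inspect.getmembers(cls, inspect.isfunction):
        meta = getattr(member, "_command", None)
        if meta:
            commands[meta["name"]] = {"description": meta["description"]}
    return {"commands": commands}
```

The real command also derives a parameter schema from the method signature and Field metadata.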
Ports Reference
The runtime uses these default ports:
| Component | Port | Purpose | Configurable |
|-----------|------|---------|--------------|
| Runtime Server | 8081 | FastAPI server hosting model | Yes (--port) |
| Local LiveKit Server | 7880 | WebRTC streaming server | No |
Port Conflicts
If ports are in use:
# Find processes using ports
lsof -i :8081
lsof -i :7880
# Kill if needed
lsof -ti :8081 | xargs kill
Or use a different port for the runtime:
reactor run --port 9000
Common Workflows
First Time Setup
# 1. Install runtime
pip install reactor-runtime
# 2. Configure credentials
reactor setup
# 3. Create a model
reactor init my-model
cd my-model
# 4. Implement your model
# Edit model_template.py
# 5. Add dependencies
# Edit requirements.txt
# 6. Run locally
reactor run
Next Steps