A track is a named channel for sending or receiving media between your model and the client.
The simplest models need only one output track — Reactor provides this by default. But models
that receive webcam input, output multiple video streams, or work with audio need explicit track
declarations.
## Default Track

If you do not declare any tracks, Reactor gives your model a single video output track called
`main_video`. This is why the basic emit pattern works without any track setup:

```python
from reactor_runtime import get_ctx

def start_session(self):
    while not get_ctx().should_stop():
        frame = self.generate()
        get_ctx().get_track().emit(frame)
```
Calling `get_track()` with no arguments returns the default track. For most video-out models, this
is all you need. The rest of this page covers models that need more.
## Declaring Tracks

Tracks are declared as class attributes on your model using descriptors from
`reactor_runtime.tracks`:

```python
from reactor_runtime import VideoModel, model, get_ctx
from reactor_runtime.tracks import VideoOut, VideoIn

@model(name="face-transform", config="config.yml")
class FaceTransform(VideoModel):
    webcam = VideoIn()
    main_video = VideoOut(default=True)

    def start_session(self):
        ctx = get_ctx()
        while not ctx.should_stop():
            frame = ctx.get_track("webcam").latest()
            if frame is not None:
                output = self.transform(frame)
                ctx.get_track().emit(output)
```
The class attribute name is the track’s identifier — it is the same string you pass to
`get_track()` in your Python code, and the same string the frontend uses to route media. In the
example above, `webcam` is declared as `VideoIn()`, so:

- In the model: `get_ctx().get_track("webcam").latest()` reads the webcam feed.
- On the frontend: `send={[video("webcam")]}` tells the SDK to publish the user’s camera to that track.

Similarly, `main_video` is declared as `VideoOut(default=True)`, so:

- In the model: `get_ctx().get_track("main_video").emit(frame)` (or just `get_track()` since it is the default).
- On the frontend: `receive={[video("main_video")]}` tells the SDK to subscribe to that track.

Choose descriptive names — they appear in your runtime code, your frontend code, and in WebRTC
signaling. Names like `webcam`, `main_video`, `microphone`, or `narration` make the data flow
obvious to anyone reading either side.
## Track Types

| Type | Direction | Description |
|---|---|---|
| `VideoOut` | model → client | Output video track. Send frames with `emit()`. |
| `AudioOut` | model → client | Output audio track. Send audio with `emit()`. |
| `VideoIn` | client → model | Input video track. Read with `latest()`. |
| `AudioIn` | client → model | Input audio track. Read with `latest()`. |
## Track Parameters

Each descriptor accepts two optional parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `rate` | `float` | `0.0` | Frame rate (video) or sample rate (audio). `0.0` means automatic — Reactor adapts to your model’s actual push rate. |
| `default` | `bool` | `False` | Mark as the default track for shorthand access via `get_track()`. |

If no track is explicitly marked as default, the first `VideoOut` track declared on your class
becomes the default automatically.
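The resolution rule can be pictured with a small sketch in plain Python. This is an illustration of the rule above, not Reactor’s actual implementation; `TrackDescriptor` and `resolve_default` are hypothetical names used only for this sketch:

```python
# Illustrative sketch (not Reactor's internals) of the default-track rule:
# an explicit default=True wins; otherwise the first declared VideoOut is used.

class TrackDescriptor:
    def __init__(self, kind, default=False):
        self.kind = kind          # e.g. "VideoOut", "VideoIn"
        self.default = default

def resolve_default(declared):
    """declared: list of (name, descriptor) pairs in declaration order."""
    for name, desc in declared:
        if desc.default:          # explicit default wins
            return name
    for name, desc in declared:
        if desc.kind == "VideoOut":   # fall back to first VideoOut
            return name
    return None

tracks = [
    ("webcam", TrackDescriptor("VideoIn")),
    ("main_video", TrackDescriptor("VideoOut")),   # first VideoOut declared
    ("overlay", TrackDescriptor("VideoOut")),
]
print(resolve_default(tracks))   # main_video
```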
## Accessing Tracks

Use `get_ctx().get_track()` to access tracks by name:

```python
ctx = get_ctx()
ctx.get_track().emit(frame)               # default output track
ctx.get_track("main_video").emit(frame)   # by name
ctx.get_track("webcam").latest()          # input track by name
```

This is a global accessor — you can call it from your `VideoModel`, a separate pipeline class,
utility functions, or any module. No need to pass track references through constructors:

```python
from reactor_runtime import get_ctx

class MyPipeline:
    def run(self):
        while not get_ctx().should_stop():
            decoded = self.forward()
            get_ctx().get_track().emit(decoded)
```
## Output Tracks: emit()

Call `emit()` on an output track to send data to the client:

```python
ctx.get_track().emit(frame)   # numpy array
ctx.get_track().emit(None)    # black frame (see below)
```

Parameters:

- `frame`: A NumPy array with shape `(N, H, W, 3)` or `(H, W, 3)`, `dtype=np.uint8`, values 0–255, RGB channel order. Or `None` to emit a black frame.

When you pass `None`, the track synthesises a real 720p black frame
(`np.zeros((720, 1280, 3), dtype=np.uint8)`) for video, or a silence buffer for audio.
The synthesised frame flows through the pipeline normally — `None` never propagates beyond
`emit()`. This is useful for placeholder output during loading or between generation runs.

For format details and PyTorch conversion examples, see Emitting Frames.
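The frame contract can be checked with plain NumPy. A minimal sketch of the format described above, using only shapes and dtypes stated in this section:

```python
import numpy as np

# What the track synthesises when you emit(None) for video:
# a 720p black frame in the same format real frames must use.
black = np.zeros((720, 1280, 3), dtype=np.uint8)

# A frame you emit yourself must match the contract:
# shape (H, W, 3), dtype uint8, RGB order, values 0-255.
frame = np.full((720, 1280, 3), 128, dtype=np.uint8)   # mid-gray test pattern

assert black.shape == frame.shape == (720, 1280, 3)
assert black.dtype == frame.dtype == np.uint8
```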
## Input Tracks: latest()

Call `latest()` on an input track to read the most recent data from the client:

```python
frame = ctx.get_track("webcam").latest()
```

Returns: A copy of the most recent frame from the client, or `None` if no data has arrived
yet. The runtime updates input tracks in the background as new data arrives.

`latest()` is a polling API — it always returns the most recent frame, skipping any that arrived
between calls. This is the right choice for most models, where you want the freshest input each
iteration rather than processing every frame.
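The polling semantics can be pictured with a minimal latest-wins buffer. This is an illustration of the behaviour described above, not Reactor’s internals (the real `latest()` also returns a copy, which this sketch omits):

```python
import threading

class LatestOnly:
    """Minimal latest-wins buffer: writers overwrite, readers poll."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def push(self, frame):
        # Runtime thread: new data from the client overwrites the old.
        with self._lock:
            self._frame = frame

    def latest(self):
        # Model loop: poll whatever is freshest right now.
        with self._lock:
            return self._frame

buf = LatestOnly()
print(buf.latest())    # None - nothing has arrived yet
buf.push("frame-1")
buf.push("frame-2")    # overwrites frame-1; it is never observed
print(buf.latest())    # frame-2
```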
For models that need to process every inbound frame rather than just the latest, override
`on_media()` on your `VideoModel`:

```python
from reactor_runtime import VideoModel, model
from reactor_runtime.tracks import VideoOut, VideoIn
from reactor_runtime.transports.media import MediaBundle

@model(name="frame-processor")
class FrameProcessor(VideoModel):
    webcam = VideoIn()
    main_video = VideoOut(default=True)

    def on_media(self, bundle: MediaBundle):
        webcam_data = bundle.get_track("webcam")
        if webcam_data is not None:
            self.pipeline.push_frame(webcam_data.data)

    def start_session(self):
        self.pipeline.run()
```

The `MediaBundle` contains a `TrackData` entry for each input track that had data in the current
media event. Use `bundle.get_track(name)` to look up a track’s data, and access the NumPy array
via `.data`.

For most models, `latest()` is simpler and preferred. Use `on_media()` only when your pipeline
needs to process the full stream of inbound frames without dropping any.
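The tradeoff between the two styles can be sketched in plain Python. This mirrors the `latest()` vs. `on_media()` distinction only in spirit; none of these names are part of the runtime API:

```python
# Illustration of the tradeoff: a polled latest-wins slot drops
# intermediate frames, while a per-frame callback sees every one.

latest_slot = None
seen_by_callback = []

def on_frame(frame):
    # Callback style: every inbound frame is delivered.
    seen_by_callback.append(frame)

for frame in ["f1", "f2", "f3"]:
    latest_slot = frame   # polling style: each arrival overwrites the slot
    on_frame(frame)

# A poll at this point sees only the newest frame...
print(latest_slot)         # f3
# ...while the callback observed the full stream.
print(seen_by_callback)    # ['f1', 'f2', 'f3']
```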
## Multi-Track Models

A model can declare any number of input and output tracks. Here is a model that receives a webcam
feed and outputs a transformed video:

```python
from reactor_runtime import VideoModel, model, command, get_ctx
from reactor_runtime.tracks import VideoOut, VideoIn
from reactor_runtime.transports.media import MediaBundle

@model(name="morpheus", config="config.yml")
class Morpheus(VideoModel):
    webcam = VideoIn()
    main_video = VideoOut(default=True)

    def __init__(self, config):
        self.pipeline = load_pipeline(config)

    @command("set_prompt")
    def set_prompt(self, prompt: str):
        self.pipeline.set_prompt(prompt)

    def on_media(self, bundle: MediaBundle):
        webcam_data = bundle.get_track("webcam")
        if webcam_data is not None:
            self.pipeline.push_frame(webcam_data.data)

    def start_session(self):
        ctx = get_ctx()
        ctx.get_track().emit(None)  # black frame while pipeline warms up
        self.pipeline.run()
```
On the frontend side, the JS SDK references the exact same names you chose as class attributes:

```tsx
<ReactorProvider
  modelName="morpheus"
  coordinatorUrl={url}
  jwtToken={token}
  receive={[video("main_video")]}  // matches `main_video = VideoOut()`
  send={[video("webcam")]}         // matches `webcam = VideoIn()`
>
```

The strings `"main_video"` and `"webcam"` must match the Python class attribute names exactly.
This is how Reactor routes media between the frontend and the model over WebRTC. For single-track
video-out models, these props can be omitted — the SDK defaults to receiving from `main_video`.
## Quick Reference

| Task | Code |
|---|---|
| Emit to default track | `get_ctx().get_track().emit(frame)` |
| Emit to named track | `get_ctx().get_track("name").emit(frame)` |
| Emit black frame | `get_ctx().get_track().emit(None)` |
| Read latest input | `get_ctx().get_track("webcam").latest()` |
| Declare video output | `main_video = VideoOut()` |
| Declare video input | `webcam = VideoIn()` |
| Declare audio output | `narration = AudioOut()` |
| Declare audio input | `microphone = AudioIn()` |
| Set default track | `main_video = VideoOut(default=True)` |
| Set explicit rate | `cam = VideoIn(rate=30.0)` |