The Context is your model’s interface to the outside world during a session. It provides methods for emitting frames, sending messages to the frontend, checking for stop signals, and managing frame monitoring. Access it anywhere in your code via get_ctx().

Accessing the Context

Import get_ctx from reactor_runtime and call it to get the current session’s context:
from reactor_runtime import get_ctx

def start_session(self):
    while not get_ctx().should_stop():
        frames = self.generate()
        get_ctx().emit_block(frames)
The context is a global singleton scoped to the current session. You can call get_ctx() from anywhere in your code: your VideoModel, your pipeline class, utility functions, or any module. As long as a session is active, get_ctx() returns the same context instance.
Calling get_ctx() outside of an active session raises a RuntimeError. Only use it within start_session() or code called from within start_session().
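For example, a helper buried deep in your pipeline can reach the context without it being threaded through function arguments. A minimal sketch (emit_safe is a hypothetical utility, not part of the runtime):
from reactor_runtime import get_ctx

def emit_safe(frames):
    # Hypothetical utility: callable from anywhere while a session is active,
    # because get_ctx() always returns the current session's context.
    ctx = get_ctx()
    if not ctx.should_stop():
        ctx.emit_block(frames)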

API Reference

should_stop()

Check if the session should stop. Returns True when the user disconnects, an explicit stop request is made, or the runtime is shutting down.
while not get_ctx().should_stop():
    # Continue generating
Check this regularly in your generation loop and inside long-running operations for responsiveness.
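For example, a long-running step such as decoding a large batch can check the flag between chunks so a stop request does not have to wait for the whole batch. A sketch, assuming get_ctx is imported as above and decode_chunk is a hypothetical method of your model:
def decode_all(self, latents, chunk_size=8):
    decoded = []
    for i in range(0, len(latents), chunk_size):
        # Bail out between chunks so the session can stop promptly
        if get_ctx().should_stop():
            return None
        decoded.append(self.decode_chunk(latents[i:i + chunk_size]))
    return decoded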

emit_block(frames)

Emit frames to the video stream. Reactor buffers and paces frames automatically for smooth playback.
get_ctx().emit_block(frames)
Parameters:
  • frames: NumPy array with shape (N, H, W, 3) for N frames or (H, W, 3) for a single frame. Must be dtype=np.uint8 with values in range 0-255, RGB channel order. Pass None to emit a black frame.
See Emitting Frames for format details and conversion examples.
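As a quick sketch, if your decoder produces float output in [0, 1] with RGB channels already in the expected order (an assumption about your model), the conversion to the required layout looks like this:
import numpy as np
from reactor_runtime import get_ctx

# Assumption: `decoded` is a float array of shape (N, H, W, 3) with values in [0, 1]
frames = (np.clip(decoded, 0.0, 1.0) * 255).astype(np.uint8)
get_ctx().emit_block(frames)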

send(data)

Send a JSON-serializable message to the frontend. Use this for progress updates, state changes, or any custom communication your application needs.
get_ctx().send({
    "type": "progress",
    "data": {"current_frame": 42, "total_frames": 120}
})
Parameters:
  • data: A dictionary that will be JSON-serialized and sent to the client.
The runtime wraps your data in an ApplicationMessage envelope before sending. Your frontend receives the payload and can react accordingly. Common use cases:
  • Progress indicators
  • State synchronization
  • Error notifications (non-fatal)
  • Custom events for your UI
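For instance, a non-fatal error notification might look like the following; the "error" type and its fields are illustrative, since your frontend defines the message schema it expects:
get_ctx().send({
    "type": "error",
    "data": {"message": "Prompt was truncated to the model's maximum length", "fatal": False}
})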

enable_monitoring()

Resume frame rate monitoring. Call this when your model resumes active generation after an intentional pause.
get_ctx().enable_monitoring()

disable_monitoring()

Pause frame rate monitoring. Call this when your model intentionally stops generating frames (e.g., waiting for user input) to prevent Reactor from lowering the target framerate.
get_ctx().disable_monitoring()
See Frame Monitoring for usage patterns.
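A common pattern is to bracket an intentional idle period with these two calls, as in this minimal sketch (wait_for_prompt is a hypothetical blocking wait on user input):
get_ctx().disable_monitoring()  # Intentionally idle: don't count against the framerate
self.wait_for_prompt()          # Hypothetical: block until the user provides input
get_ctx().enable_monitoring()   # Generating again: resume tracking
The Complete Example below shows the same pattern written out inline.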

Complete Example

Here is a pipeline using all context methods:
from reactor_runtime import get_ctx
import time

class MyPipeline:
    def __init__(self):
        self.prompt = None
        self.frame_count = 0

    def set_prompt(self, prompt: str):
        self.prompt = prompt

    def inference(self):
        # Wait for initial input
        get_ctx().disable_monitoring()  # Not generating yet, don't track
        while self.prompt is None:
            if get_ctx().should_stop():
                return
            time.sleep(0.1)

        get_ctx().enable_monitoring()  # Resume tracking

        # Generation loop
        while not get_ctx().should_stop():
            curr_prompt = self.prompt
            frames = self.generate_block(cond=curr_prompt)

            if frames is None:
                return  # Stop requested mid-generation

            get_ctx().emit_block(frames)
            self.frame_count += len(frames)  # one block = N frames

            # Send progress to frontend
            get_ctx().send({
                "type": "progress",
                "data": {"frames_generated": self.frame_count}
            })

    def generate_block(self, cond):
        for step in self.denoising_steps:
            if get_ctx().should_stop():
                return None
            self.latent = self.denoise_step(self.latent, step, cond)
        return self.decode(self.latent)

Context Lifecycle

The context is created when a session starts and destroyed when it ends:
  1. User connects → Runtime creates a new ReactorContext
  2. start_session() runs → get_ctx() returns the active context
  3. Session ends → Context is invalidated
  4. Next user connects → New context is created
Each session gets its own context. Never store the context in instance variables that persist across sessions:
# BAD: Context stored across sessions
def __init__(self):
    self.ctx = get_ctx()  # Will break on next session

# GOOD: Get context when needed
def generate(self):
    get_ctx().emit_block(frames)  # Fresh reference each time

Thread Safety

The context methods are thread-safe. You can call them from the generation thread, command handler threads, or any other thread. The runtime handles synchronization internally. However, the context itself is session-scoped. If you spawn background threads, ensure they stop when should_stop() returns True, or they may try to use an invalidated context.
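For example, a background worker spawned from your pipeline can poll should_stop() so it exits with the session and never touches a stale context. A minimal sketch (the heartbeat payload is illustrative):
import threading
import time
from reactor_runtime import get_ctx

def heartbeat_worker():
    # Exits as soon as the session stops, before the context is invalidated
    while not get_ctx().should_stop():
        get_ctx().send({"type": "heartbeat", "data": {"ts": time.time()}})
        time.sleep(1.0)

threading.Thread(target=heartbeat_worker, daemon=True).start()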