Helios is an interactive, autoregressive video generation model that produces continuous video streams in real-time. Built on a 14B-parameter Diffusion Transformer, Helios generates video in 33-frame chunks with multi-resolution history conditioning, enabling minute-scale video generation with smooth temporal coherence.

Key Features

Autoregressive Streaming

Generates video in 33-frame chunks with multi-resolution history conditioning for long-form coherence

Image-to-Video

Provide a reference image to guide generation and swap it mid-stream with cut or blend transitions

Real-Time Prompt Control

Schedule prompt changes at specific chunks during generation for dynamic scene transitions

Quick Start

Get started with Helios in seconds:
npx create-reactor-app my-helios-app helios-interactive
This creates a Next.js application from our open-source example.

Model Name

helios

Prompt Guide

Getting good results from Helios depends heavily on how you write your prompts.
Helios responds best to long, richly detailed prompts. Instead of short descriptions, paint a full picture: describe the scene, lighting, camera angle, atmosphere, and any objects or characters present.
A rotating camera view inside a large New York museum gallery, showcasing a towering stack of vintage televisions, each displaying different programs from the 1950s and 1970s. The televisions show a mix of 1950s sci-fi movies, horror films, news broadcasts, static, and a 1970s sitcom. The gallery space is filled with the nostalgic glow of the old TV screens, their edges worn and frames aged. The background features other vintage exhibits and artifacts, adding to the historical ambiance. The televisions are arranged in a dynamic, almost chaotic pattern, creating a sense of visual interest and movement. A wide-angle shot capturing the entire stack and the surrounding gallery space.
When updating the prompt mid-generation with set_prompt or schedule_prompt, the new prompt should be just as detailed as the original. Vague transition prompts will produce subtle or unnoticeable changes.
A rotating camera view inside a large New York museum gallery, showcasing a towering stack of vintage televisions, each displaying different programs from the 1950s and 1970s. The televisions show the text REACTOR. The gallery space is filled with the nostalgic glow of the old TV screens, their edges worn and frames aged. The background features other vintage exhibits and artifacts, adding to the historical ambiance. The televisions are arranged in a dynamic, almost chaotic pattern, creating a sense of visual interest and movement. A wide-angle shot capturing the entire stack and the surrounding gallery space.
Helios generates video in 33-frame chunks. Prompt changes take effect at chunk boundaries, so visual transitions are not instantaneous — there is a natural delay of one chunk before the new prompt influences the output.
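The chunk arithmetic behind that delay is straightforward; a minimal sketch, assuming the 33-frame chunk size described above and that a mid-stream change lands at the next chunk boundary:

```typescript
const FRAMES_PER_CHUNK = 33;

// Chunk index that contains a given global frame
function chunkOf(frame: number): number {
  return Math.floor(frame / FRAMES_PER_CHUNK);
}

// First frame of the next chunk boundary after `frame`, i.e. the earliest
// point a prompt changed "now" can influence the output
function nextBoundaryFrame(frame: number): number {
  return (chunkOf(frame) + 1) * FRAMES_PER_CHUNK;
}
```

For example, at frame 40 (inside chunk 1), a newly scheduled prompt can first influence frame 66, the start of chunk 2.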

How It Works

When you connect to Helios, the model is active and ready but won’t start generating until you explicitly tell it to. Here’s the workflow:
  1. Connect to the model
  2. Set a prompt using set_prompt or schedule_prompt at chunk 0 (required)
  3. Optionally set a reference image for image-to-video mode
  4. Start generation
  5. Control playback with pause/resume as needed
  6. Schedule additional prompts at future chunks or reset to start over
  7. Change or remove the reference image mid-generation, or adjust image strength
Helios generates video in chunks (33 frames each), not individual frames. Prompt scheduling and state tracking operate at the chunk level.
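The workflow above can be sketched as an ordered command sequence. To keep the sketch self-contained and runnable, it records commands with a local stand-in rather than a live connection; on a connected Reactor instance you would `await reactor.sendCommand(name, params)` instead. Command names and parameters come from the Commands section:

```typescript
type Sent = { command: string; params: Record<string, unknown> };
const sent: Sent[] = [];

// Local stand-in: records commands instead of sending them to a live model
function sendCommand(command: string, params: Record<string, unknown>): void {
  sent.push({ command, params });
}

// 2. A prompt at chunk 0 is required before starting
sendCommand("schedule_prompt", {
  prompt: "A peaceful forest at dawn, soft light filtering through the trees",
  chunk: 0,
});
// 4. Begin generation (step 3, setting a reference image, is optional)
sendCommand("start", {});
// 5. Pause and resume playback as needed
sendCommand("pause", {});
sendCommand("resume", {});
// 6. Schedule an additional prompt at a future chunk boundary
sendCommand("schedule_prompt", {
  prompt: "A storm rolling in over the treeline, wind bending the branches",
  chunk: 5,
});
```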

Commands

Send commands to the model using reactor.sendCommand(). Below are all available commands:
| Command | Description |
| --- | --- |
| set_prompt | Set the prompt (at chunk 0, or at the current chunk if running) |
| schedule_prompt | Schedule a prompt at a specific chunk index |
| set_image | Set or change a reference image (works mid-generation) |
| clear_image | Remove the image anchor and continue with text only |
| set_seed | Set the RNG seed for reproducible generation |
| start | Begin video generation |
| pause | Pause generation |
| resume | Resume generation |
| reset | Reset to the initial state |

set_prompt

Convenience wrapper around schedule_prompt that automatically picks the right chunk index so you don’t have to track it yourself:
  • Not started: schedules at chunk 0
  • Paused: schedules at the current chunk (takes effect on resume)
  • Running: schedules at the next chunk (current chunk is already being processed)
This is the simplest way to change the prompt: call set_prompt and the model figures out when to apply it.

Parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| prompt | string | Yes | The prompt text to use for generation |
Example:
// Set initial prompt (before starting)
await reactor.sendCommand("set_prompt", {
  prompt: "A serene mountain landscape at sunrise"
});

// Change prompt mid-generation — automatically applies at the next chunk
await reactor.sendCommand("set_prompt", {
  prompt: "The landscape transitions to a stormy ocean"
});
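The chunk-selection rules above can be expressed as a small pure function. This is a sketch of the documented behavior, not SDK code; the ModelState shape is invented here for illustration:

```typescript
// Minimal state needed to decide where set_prompt schedules the prompt
type ModelState = { started: boolean; paused: boolean; currentChunk: number };

// Chunk index at which set_prompt would schedule a new prompt
function targetChunk(state: ModelState): number {
  if (!state.started) return 0;                // not started: chunk 0
  if (state.paused) return state.currentChunk; // paused: current chunk, applies on resume
  return state.currentChunk + 1;               // running: next chunk
}
```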

Messages from Model

Listen for messages from the model using reactor.on("message", ...) (imperative) or the useReactorMessage() hook (React). The model emits two types of messages:

State Messages

State messages provide a snapshot of the current model state. They are emitted:
  • When a prompt is scheduled
  • When generation starts
  • When generation is paused or resumed
  • When a prompt switch occurs at a chunk boundary
  • When generation is reset
  • After each chunk is processed
{
  type: "state",
  data: {
    running: boolean,                // Whether generation is actively running
    current_frame: number,           // Global pixel frame counter
    current_chunk: number,           // Current chunk index
    current_prompt: string | null,   // Active prompt text (null if not started)
    paused: boolean,                 // Whether generation is paused
    scheduled_prompts: {             // Map of chunk indices to prompts
      [chunk: number]: string
    }
  }
}
Example handler:
reactor.on("message", (msg) => {
  if (msg.type === "state") {
    console.log(`Running: ${msg.data.running}`);
    console.log(`Frame: ${msg.data.current_frame}`);
    console.log(`Chunk: ${msg.data.current_chunk}`);
    console.log(`Prompt: ${msg.data.current_prompt}`);
    console.log(`Paused: ${msg.data.paused}`);
  }
});

Event Messages

Emitted when lifecycle events occur:
{
  type: "event",
  data: {
    event: string,  // Event type
    // Additional fields depending on event type
  }
}
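The two message shapes can be modeled as a TypeScript discriminated union on the type field, which lets a handler narrow each branch safely. A sketch, with field names taken from the shapes above:

```typescript
type StateMessage = {
  type: "state";
  data: {
    running: boolean;
    current_frame: number;
    current_chunk: number;
    current_prompt: string | null;
    paused: boolean;
    scheduled_prompts: Record<number, string>;
  };
};

type EventMessage = {
  type: "event";
  // Event payloads carry extra fields depending on the event type
  data: { event: string; [key: string]: unknown };
};

type ModelMessage = StateMessage | EventMessage;

// Narrowing on `type` gives typed access to each shape
function describe(msg: ModelMessage): string {
  if (msg.type === "state") {
    return `chunk ${msg.data.current_chunk}, frame ${msg.data.current_frame}`;
  }
  return `event: ${msg.data.event}`;
}
```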

Event Types

| Event | Description | Additional Fields |
| --- | --- | --- |
| generation_started | Generation has begun | prompt: string |
| generation_paused | Generation was paused | frame: number, chunk: number |
| generation_resumed | Generation was resumed | frame: number, chunk: number |
| generation_reset | Model was reset | frame: number, chunk: number |
| image_set | Reference image was set or changed | width: number, height: number, transition: string |
| image_cleared | Reference image was removed | (none) |
| seed_set | RNG seed was updated | seed: number |
| prompt_scheduled | A prompt was scheduled | chunk: number, prompt: string |
| prompt_switched | Active prompt changed at a chunk boundary | frame: number, chunk: number, new_prompt: string, previous_prompt: string |
| error | An error occurred | message: string |
Example handler:
reactor.on("message", (msg) => {
  if (msg.type === "event") {
    switch (msg.data.event) {
      case "generation_started":
        console.log(`Generation started with prompt: ${msg.data.prompt}`);
        break;
      case "generation_reset":
        console.log(`Reset at frame ${msg.data.frame}, chunk ${msg.data.chunk}`);
        break;
      case "image_set":
        console.log(`Reference image set: ${msg.data.width}x${msg.data.height}`);
        break;
      case "seed_set":
        console.log(`Seed set to ${msg.data.seed}`);
        break;
      case "prompt_scheduled":
        console.log(`Prompt scheduled at chunk ${msg.data.chunk}: ${msg.data.prompt}`);
        break;
      case "prompt_switched":
        console.log(`Switched to: ${msg.data.new_prompt}`);
        break;
      case "error":
        console.error(`Error: ${msg.data.message}`);
        break;
    }
  }
});

Complete Example

import { Reactor, fetchInsecureToken } from "@reactor-team/js-sdk";

// Authenticate: exchange your API key for a JWT
const jwtToken = await fetchInsecureToken(apiKey);

const reactor = new Reactor({
  modelName: "helios",
});

// Set up video display
const videoElement = document.getElementById("video") as HTMLVideoElement;
reactor.on("trackReceived", (name, track, stream) => {
  if (name === "main_video") {
    videoElement.srcObject = stream;
    videoElement.play().catch(console.warn);
  }
});

// Listen for state updates
reactor.on("message", (msg) => {
  if (msg.type === "state") {
    document.getElementById("info")!.textContent =
      `Chunk: ${msg.data.current_chunk} | Frame: ${msg.data.current_frame}`;
  }
  if (msg.type === "event" && msg.data.event === "prompt_switched") {
    console.log(`Now showing: ${msg.data.new_prompt}`);
  }
});

// Connect
await reactor.connect(jwtToken);

// Set a seed for reproducibility
await reactor.sendCommand("set_seed", { seed: 42 });

// Schedule prompts for a cinematic sequence
await reactor.sendCommand("schedule_prompt", {
  prompt: "A peaceful forest at dawn, soft morning light filtering through the trees",
  chunk: 0
});

await reactor.sendCommand("schedule_prompt", {
  prompt: "Sunlight breaking through the canopy, golden rays illuminating the forest floor",
  chunk: 5
});

await reactor.sendCommand("schedule_prompt", {
  prompt: "A deer walking through the misty forest clearing",
  chunk: 10
});

// Start generation
await reactor.sendCommand("start", {});
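For intuition about how the three scheduled prompts above play out: at any chunk, the active prompt is the most recently scheduled one at or before that chunk. A small sketch of that bookkeeping, modeling the documented behavior rather than calling SDK code:

```typescript
// Active prompt at a chunk: the scheduled entry with the largest index <= chunk
function activePrompt(scheduled: Record<number, string>, chunk: number): string | null {
  let best: number | null = null;
  for (const key of Object.keys(scheduled).map(Number)) {
    if (key <= chunk && (best === null || key > best)) best = key;
  }
  return best === null ? null : scheduled[best];
}

// The schedule built in the Complete Example above
const scheduled = { 0: "forest at dawn", 5: "golden rays", 10: "a deer" };
```

So chunks 0 through 4 show the forest prompt, chunks 5 through 9 the golden-rays prompt, and chunk 10 onward the deer prompt.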