# Long-form real-time video generation with autoregressive chunked diffusion
Helios is an interactive, autoregressive video generation model that produces continuous video streams in real time. Built on a 14B-parameter Diffusion Transformer, Helios generates video in 33-frame chunks with multi-resolution history conditioning, enabling minute-scale video generation with smooth temporal coherence.
Getting good results from Helios depends heavily on how you write your prompts.
## Be detailed and descriptive
Helios responds best to long, richly detailed prompts. Instead of short descriptions, paint a full picture: describe the scene, lighting, camera angle, atmosphere, and any objects or characters present.
**Good:**

```text
A rotating camera view inside a large New York museum gallery, showcasing a towering stack of vintage televisions, each displaying different programs from the 1950s and 1970s. The televisions show a mix of 1950s sci-fi movies, horror films, news broadcasts, static, and a 1970s sitcom. The gallery space is filled with the nostalgic glow of the old TV screens, their edges worn and frames aged. The background features other vintage exhibits and artifacts, adding to the historical ambiance. The televisions are arranged in a dynamic, almost chaotic pattern, creating a sense of visual interest and movement. A wide-angle shot capturing the entire stack and the surrounding gallery space.
```

**Too vague:**

```text
A desert with a camel
```
## Prompt transitions need detail too
When updating the prompt mid-generation with `set_prompt` or `schedule_prompt`, the new prompt should be just as detailed as the original. Vague transition prompts will produce subtle or unnoticeable changes.
**Good:**

```text
A rotating camera view inside a large New York museum gallery, showcasing a towering stack of vintage televisions, each displaying different programs from the 1950s and 1970s. The televisions show the text REACTOR. The gallery space is filled with the nostalgic glow of the old TV screens, their edges worn and frames aged. The background features other vintage exhibits and artifacts, adding to the historical ambiance. The televisions are arranged in a dynamic, almost chaotic pattern, creating a sense of visual interest and movement. A wide-angle shot capturing the entire stack and the surrounding gallery space.
```

**Too vague:**

```text
The televisions show the text REACTOR.
```
Helios generates video in 33-frame chunks. Prompt changes take effect at chunk boundaries, so visual transitions are not instantaneous — there is a natural delay of one chunk before the new prompt influences the output.
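To reason about when a mid-stream change will land, it helps to convert between frame and chunk indices. A minimal sketch, assuming only the 33-frames-per-chunk figure stated above (the helper names are illustrative, not part of the SDK):

```typescript
const FRAMES_PER_CHUNK = 33;

/** Chunk index that contains a given global frame index. */
function frameToChunk(frame: number): number {
  return Math.floor(frame / FRAMES_PER_CHUNK);
}

/**
 * First frame of the next chunk boundary: the earliest point at which
 * a newly scheduled prompt can start influencing the output.
 */
function nextBoundaryFrame(frame: number): number {
  return (frameToChunk(frame) + 1) * FRAMES_PER_CHUNK;
}
```

For example, at frame 40 the model is inside chunk 1, so a prompt scheduled now cannot take effect before frame 66 at the earliest.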
## set_prompt

A convenience wrapper around `schedule_prompt` that automatically picks the right chunk index so you don't have to track it yourself:
- **Not started:** schedules at chunk 0
- **Paused:** schedules at the current chunk (takes effect on resume)
- **Running:** schedules at the next chunk (the current chunk is already being processed)
This is the simplest way to change the prompt: just call `set_prompt` and the model figures out when to apply it.

Parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `prompt` | string | Yes | The prompt text to use for generation |
Example:

```ts
// Set initial prompt (before starting)
await reactor.sendCommand("set_prompt", {
  prompt: "A serene mountain landscape at sunrise"
});

// Change prompt mid-generation: automatically applies at the next chunk
await reactor.sendCommand("set_prompt", {
  prompt: "The landscape transitions to a stormy ocean"
});
```
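The selection rule (not started → chunk 0, paused → current chunk, running → next chunk) can be sketched as a pure function. This is an illustrative reconstruction, not the SDK's implementation; the state shape mirrors the state messages documented below:

```typescript
interface GenState {
  running: boolean;
  paused: boolean;
  current_chunk: number;
}

/** Pick the chunk index set_prompt would schedule at, per the rules above. */
function chunkForSetPrompt(state: GenState | null): number {
  // Not started yet: schedule at chunk 0.
  if (state === null || (!state.running && !state.paused)) return 0;
  // Paused: schedule at the current chunk; takes effect on resume.
  if (state.paused) return state.current_chunk;
  // Running: the current chunk is already in flight, so target the next one.
  return state.current_chunk + 1;
}
```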
## schedule_prompt

Schedule a prompt to be applied at a specific chunk index during video generation.

Parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `prompt` | string | Yes | The prompt text to use |
| `chunk` | integer | Yes | The chunk index at which to apply the prompt |
Behavior:

- Scheduling a prompt at a chunk that already has a prompt overwrites the existing one
- Prompts scheduled in the past are rejected (e.g., if the model is at chunk 10 and you schedule at chunk 5, an error is emitted)
- A prompt must exist at chunk 0 before calling `start`
- Prompts can be scheduled while generation is running for real-time control
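These rules can be checked client-side before sending the command. A hedged sketch: the helper names and error messages are illustrative, and the server remains the authoritative validator:

```typescript
/**
 * Apply the documented schedule_prompt rules to a local schedule map.
 * Throws if the target chunk is already in the past; overwrites silently.
 */
function validateSchedule(
  scheduled: Record<number, string>, // chunk index -> prompt
  currentChunk: number,
  prompt: string,
  chunk: number
): Record<number, string> {
  if (chunk < currentChunk) {
    throw new Error(`Cannot schedule at chunk ${chunk}: model is already at chunk ${currentChunk}`);
  }
  // Scheduling at an occupied chunk overwrites the existing prompt.
  return { ...scheduled, [chunk]: prompt };
}

/** start requires a prompt at chunk 0. */
function canStart(scheduled: Record<number, string>): boolean {
  return 0 in scheduled;
}
```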
Example:

```ts
// Schedule initial prompt (required before start)
await reactor.sendCommand("schedule_prompt", {
  prompt: "A serene mountain landscape at sunrise",
  chunk: 0
});

// Schedule a transition at chunk 10
await reactor.sendCommand("schedule_prompt", {
  prompt: "The mountain transforms into a futuristic city",
  chunk: 10
});
```
## set_image

Set or change the reference image for image-to-video conditioning. Can be called before or during generation.

Parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `image_b64` | string | Yes | Base64-encoded reference image (RGB). Data URL prefix is stripped automatically. |
| `transition` | string | No | Transition mode: `"cut"` for immediate switch (default), `"blend"` for interpolation |
Behavior:

- Can be set before starting or while generation is running
- The image is resized to match the model's output resolution
- Use `transition: "blend"` for a smooth interpolation to the new image, or `"cut"` for an immediate switch
- The `image_b64` payload currently has a maximum size of ~64 KB; you must resize and compress the image before sending
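A quick client-side guard against the ~64 KB limit can simply measure the base64 string after the data URL prefix, since the prefix is stripped server-side anyway. A sketch, assuming the limit applies to the base64 payload itself (the helper name is illustrative):

```typescript
const MAX_IMAGE_B64_BYTES = 64 * 1024;

/** True if a base64 image payload fits under the documented ~64 KB limit. */
function fitsImageLimit(imageB64: string): boolean {
  // The data URL prefix is stripped automatically, so exclude it from the size check.
  const comma = imageB64.indexOf(",");
  const body =
    imageB64.startsWith("data:") && comma !== -1 ? imageB64.slice(comma + 1) : imageB64;
  return body.length <= MAX_IMAGE_B64_BYTES;
}
```

Checking before sending lets you re-encode at a lower quality instead of waiting for a server-side rejection.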
**Preparing an image:** Use a helper like the one below to resize and JPEG-compress an image file before sending. This produces a payload well under the 64 KB limit.
```ts
/** Resize and JPEG-compress an image for set_image. */
function prepareImageForUpload(
  file: File,
  maxWidth = 640,
  maxHeight = 384,
  quality = 0.5
): Promise<string> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      // Scale down to fit within maxWidth x maxHeight; never upscale.
      const scale = Math.min(maxWidth / img.width, maxHeight / img.height, 1);
      const w = Math.round(img.width * scale);
      const h = Math.round(img.height * scale);
      const canvas = document.createElement("canvas");
      canvas.width = w;
      canvas.height = h;
      const ctx = canvas.getContext("2d");
      if (!ctx) return reject(new Error("Failed to get canvas context"));
      ctx.drawImage(img, 0, 0, w, h);
      // Release the temporary object URL to avoid leaking memory.
      URL.revokeObjectURL(img.src);
      resolve(canvas.toDataURL("image/jpeg", quality));
    };
    img.onerror = reject;
    img.src = URL.createObjectURL(file);
  });
}
```
If you’re working server-side (Node.js, Python, etc.), resize the image to at most 640×384 and encode as JPEG at ~50% quality before base64-encoding. The resulting string should be well under 64 KB.
Example:

```ts
// Set initial reference image
const imageB64 = await prepareImageForUpload(imageFile);
await reactor.sendCommand("set_image", {
  image_b64: imageB64
});

// Change image mid-generation with a smooth blend
const newImageB64 = await prepareImageForUpload(newImageFile);
await reactor.sendCommand("set_image", {
  image_b64: newImageB64,
  transition: "blend"
});
```
## pause

Pause the video generation after the current chunk finishes processing. The model retains its full state, including history buffers.

Parameters: None
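A usage sketch, assuming the command names are `pause` and `resume` (resume is implied by the pause/resume state events below but not specified in this section). The mock `reactor` object below is a stand-in so the snippet is self-contained; in practice it comes from the SDK:

```typescript
// Minimal stand-in for the SDK client; records commands instead of sending them.
const sent: string[] = [];
const reactor = {
  async sendCommand(cmd: string, _args: Record<string, unknown>): Promise<void> {
    sent.push(cmd);
  },
};

async function pauseThenResume(): Promise<void> {
  // Pause after the current chunk finishes; history buffers are retained.
  await reactor.sendCommand("pause", {});
  // Resume from the retained state.
  await reactor.sendCommand("resume", {});
}
```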
## Messages

Listen for messages from the model using `reactor.on("message", ...)` (imperative) or the `useReactorMessage()` hook (React). The model emits two types of messages: **state** and **event**.
State messages provide a snapshot of the current model state. They are emitted:

- When a prompt is scheduled
- When generation starts
- When generation is paused or resumed
- When a prompt switch occurs at a chunk boundary
- When generation is reset
- After each chunk is processed
```ts
{
  type: "state",
  data: {
    running: boolean,               // Whether generation is actively running
    current_frame: number,          // Global pixel frame counter
    current_chunk: number,          // Current chunk index
    current_prompt: string | null,  // Active prompt text (null if not started)
    paused: boolean,                // Whether generation is paused
    scheduled_prompts: {            // Map of chunk indices to prompts
      [chunk: number]: string
    }
  }
}
```
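For TypeScript clients it can be convenient to model this shape as a type with a narrowing guard. A sketch based on the shape above; the type and function names are illustrative, not exported by the SDK:

```typescript
interface StateMessage {
  type: "state";
  data: {
    running: boolean;
    current_frame: number;
    current_chunk: number;
    current_prompt: string | null;
    paused: boolean;
    scheduled_prompts: Record<number, string>;
  };
}

/** Narrow an unknown incoming message to a StateMessage. */
function isStateMessage(msg: unknown): msg is StateMessage {
  if (typeof msg !== "object" || msg === null) return false;
  const m = msg as { type?: unknown; data?: unknown };
  return m.type === "state" && typeof m.data === "object" && m.data !== null;
}
```

With the guard in place, a handler can read `msg.data.current_chunk` without casts once `isStateMessage(msg)` returns true.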
Event messages report discrete occurrences such as generation start, resets, prompt switches, and errors:

```ts
reactor.on("message", (msg) => {
  if (msg.type === "event") {
    switch (msg.data.event) {
      case "generation_started":
        console.log(`Generation started with prompt: ${msg.data.prompt}`);
        break;
      case "generation_reset":
        console.log(`Reset at frame ${msg.data.frame}, chunk ${msg.data.chunk}`);
        break;
      case "image_set":
        console.log(`Reference image set: ${msg.data.width}x${msg.data.height}`);
        break;
      case "seed_set":
        console.log(`Seed set to ${msg.data.seed}`);
        break;
      case "prompt_scheduled":
        console.log(`Prompt scheduled at chunk ${msg.data.chunk}: ${msg.data.prompt}`);
        break;
      case "prompt_switched":
        console.log(`Switched to: ${msg.data.new_prompt}`);
        break;
      case "error":
        console.error(`Error: ${msg.data.message}`);
        break;
    }
  }
});
```