The LiveCore model is an interactive, real-time video generation system that creates continuous video streams using diffusion-based AI. Unlike traditional video generation models that produce fixed-length clips, LiveCore generates videos in a streaming fashion, allowing for dynamic prompt changes and real-time interaction during the generation process. Each generation cycle produces 240 frames. After completing a cycle, the model resets to a black screen and waits for new prompts to be scheduled before the next generation can begin.

Key Features

Interactive Generation

Generate videos that can be influenced in real-time through prompt scheduling

Streaming Output

Produces video frames continuously rather than generating complete videos at once

Dynamic Prompt Switching

Change prompts at specific timestamps during generation to alter the video content

Quick Start

Get started with LiveCore in seconds:
npx create-reactor-app my-livecore-app livecore
This creates a Next.js application from our open-source example.

Model Name

livecore

How It Works

When you connect to LiveCore, the model is active and ready but won’t start generating until you explicitly tell it to. Here’s the workflow:
  1. Connect to the model
  2. Schedule at least one prompt at frame 0 (required)
  3. Start generation
  4. Control playback with pause/resume as needed
  5. Schedule additional prompts in real-time or reset to start over
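The startup portion of this workflow (steps 2-3) can be sketched as follows. FakeReactor and runStartupWorkflow are hypothetical stand-ins used here so the command sequence is runnable without a live connection; in a real app you would call sendCommand on the SDK's Reactor instance instead.

```typescript
// Hypothetical in-memory stand-in for the SDK's Reactor client, recording
// commands so the required ordering can be checked without a live model.
type Command = { name: string; params: Record<string, unknown> };

class FakeReactor {
  sent: Command[] = [];
  async sendCommand(name: string, params: Record<string, unknown>): Promise<void> {
    this.sent.push({ name, params });
  }
}

// Steps 2-3 of the workflow: schedule a frame-0 prompt, then start generation.
async function runStartupWorkflow(reactor: FakeReactor): Promise<void> {
  await reactor.sendCommand("schedule_prompt", {
    new_prompt: "A calm ocean at dusk", // any initial prompt
    timestamp: 0, // required: at least one prompt at frame 0 before start
  });
  await reactor.sendCommand("start", {});
}
```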

Commands

Send commands to the model using reactor.sendCommand(). Below are all available commands:
| Command | Description |
| --- | --- |
| schedule_prompt | Schedule a prompt at a specific frame |
| start | Begin video generation |
| pause | Pause generation |
| resume | Resume generation |
| reset | Reset to initial state |
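The pause, resume, and reset commands are listed without parameters. Their documented effects on playback state can be sketched as a tiny client-side state machine; this is an illustration of the behavior described on this page, not the SDK's implementation (applyPlaybackCommand and Playback are illustrative names):

```typescript
// Minimal model of playback state as the pause/resume/reset commands affect it.
type Playback = { paused: boolean; frame: number };

// After reset the model returns to frame 0 and waits for new prompts;
// state messages report this waiting state with paused: true.
function applyPlaybackCommand(
  state: Playback,
  command: "pause" | "resume" | "reset",
): Playback {
  switch (command) {
    case "pause":
      return { ...state, paused: true };
    case "resume":
      return { ...state, paused: false };
    case "reset":
      return { paused: true, frame: 0 };
  }
}
```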

schedule_prompt

Schedule a prompt to be applied at a specific frame during video generation.

Parameters:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| new_prompt | string | Yes | The prompt text to use |
| timestamp | integer | Yes | The frame number at which to apply the prompt (0-239) |
Each generation cycle is 240 frames (0-239). After reaching frame 239, the model resets to a black screen and waits for new prompts before starting the next generation.
Behavior:
  • Scheduling a prompt at a timestamp that already has a prompt will overwrite it
  • Prompts scheduled in the past are ignored (e.g., if the model is at frame 100 and you schedule at frame 50, it will be ignored)
  • At least one prompt must be scheduled at frame 0 before calling start
  • Prompts can be scheduled while generation is running for real-time control
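These rules can be mirrored client-side to validate a prompt before sending it. The sketch below is illustrative (applySchedule and CYCLE_FRAMES are not SDK APIs); the model enforces the same rules server-side regardless.

```typescript
// Client-side mirror of the scheduling rules: out-of-range frames are
// rejected, prompts in the past are ignored, and scheduling at an
// already-used timestamp overwrites the earlier prompt.
const CYCLE_FRAMES = 240; // each generation cycle spans frames 0-239

function applySchedule(
  scheduled: Map<number, string>,
  currentFrame: number,
  timestamp: number,
  newPrompt: string,
): Map<number, string> {
  if (timestamp < 0 || timestamp >= CYCLE_FRAMES) {
    throw new RangeError(`timestamp must be in 0-${CYCLE_FRAMES - 1}`);
  }
  if (timestamp < currentFrame) {
    return scheduled; // past prompts are ignored
  }
  const next = new Map(scheduled);
  next.set(timestamp, newPrompt); // same-timestamp scheduling overwrites
  return next;
}
```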
Example:
// Schedule initial prompt (required before start)
await reactor.sendCommand("schedule_prompt", {
  new_prompt: "A serene mountain landscape at sunrise",
  timestamp: 0
});

// Schedule a transition at frame 30
await reactor.sendCommand("schedule_prompt", {
  new_prompt: "The mountain transforms into a futuristic city",
  timestamp: 30
});

Messages from Model

Listen for messages from the model using reactor.on("newMessage", ...) (imperative) or the useReactorMessage() hook (React). The model emits two types of messages:

State Messages

State messages provide a snapshot of the current model state. They are emitted:
  • When a prompt is scheduled
  • When generation starts
  • When generation is paused or resumed
  • After each frame block is processed
{
  type: "state",
  data: {
    current_frame: number,           // Current frame being processed
    current_prompt: string | null,   // Active prompt text (null if not started)
    paused: boolean,                 // Whether generation is paused or waiting
    scheduled_prompts: {             // Map of frame numbers to prompts
      [frame: number]: string
    }
  }
}
The paused field is true both when explicitly paused and when waiting for generation to start (before a prompt at frame 0 is scheduled and the start command is issued).
Example handler:
reactor.on("newMessage", (msg) => {
  if (msg.type === "state") {
    console.log(`Frame: ${msg.data.current_frame}`);
    console.log(`Prompt: ${msg.data.current_prompt}`);
    console.log(`Paused: ${msg.data.paused}`);
  }
});
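In TypeScript you can narrow incoming messages with a type guard built from the shape above. The interface and guard below are local definitions for illustration, matching the documented fields; the SDK may export its own message types.

```typescript
// Local type mirroring the documented state-message shape.
interface StateMessage {
  type: "state";
  data: {
    current_frame: number;
    current_prompt: string | null;
    paused: boolean;
    scheduled_prompts: Record<number, string>;
  };
}

// Narrow an unknown message to StateMessage by checking the documented fields.
function isStateMessage(msg: unknown): msg is StateMessage {
  if (typeof msg !== "object" || msg === null) return false;
  const m = msg as { type?: unknown; data?: unknown };
  if (m.type !== "state" || typeof m.data !== "object" || m.data === null) {
    return false;
  }
  const d = m.data as Record<string, unknown>;
  return (
    typeof d.current_frame === "number" &&
    (typeof d.current_prompt === "string" || d.current_prompt === null) &&
    typeof d.paused === "boolean" &&
    typeof d.scheduled_prompts === "object" &&
    d.scheduled_prompts !== null
  );
}
```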

Event Messages

Emitted when lifecycle events occur:
{
  type: "event",
  data: {
    event: string,  // Event type
    // Additional fields depending on event type
  }
}

Event Types

| Event | Description | Additional Fields |
| --- | --- | --- |
| generation_started | Generation has begun | - |
| generation_paused | Generation was paused | frame: number |
| generation_resumed | Generation was resumed | frame: number |
| generation_reset | Model was reset | - |
| prompt_switched | Active prompt changed | frame, new_prompt, previous_prompt |
| error | An error occurred | message: string |
Example handler:
reactor.on("newMessage", (msg) => {
  if (msg.type === "event") {
    switch (msg.data.event) {
      case "generation_started":
        console.log("Generation started!");
        break;
      case "prompt_switched":
        console.log(`Switched to: ${msg.data.new_prompt}`);
        break;
      case "error":
        console.error(`Error: ${msg.data.message}`);
        break;
    }
  }
});
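Since prompt_switched fires whenever the active prompt changes, you can anticipate which prompt applies at a given frame by mirroring the schedule client-side: the active prompt is the scheduled entry with the largest timestamp at or below the current frame. The helper below is an illustrative sketch (activePromptAt is not an SDK function), assuming the scheduled_prompts map from a state message as input.

```typescript
// Given a scheduled_prompts map, return the prompt active at `frame`:
// the entry with the largest timestamp <= frame, or null if none applies.
function activePromptAt(
  scheduled: Record<number, string>,
  frame: number,
): string | null {
  let bestFrame = -1;
  let best: string | null = null;
  for (const [key, prompt] of Object.entries(scheduled)) {
    const ts = Number(key);
    if (ts <= frame && ts > bestFrame) {
      bestFrame = ts;
      best = prompt;
    }
  }
  return best;
}
```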

Complete Example

import { Reactor, fetchInsecureJwtToken } from "@reactor-team/js-sdk";

// Authenticate (assumes apiKey holds your Reactor API key)
const jwtToken = await fetchInsecureJwtToken(apiKey);

const reactor = new Reactor({
  modelName: "livecore",
});

// Set up video display
const videoElement = document.getElementById("video") as HTMLVideoElement;
reactor.on("streamChanged", (track) => {
  if (track) {
    const stream = new MediaStream([track]);
    videoElement.srcObject = stream;
    videoElement.play().catch(console.warn);
  }
});

// Listen for state updates
reactor.on("newMessage", (msg) => {
  if (msg.type === "state") {
    document.getElementById("frame")!.textContent = 
      `Frame: ${msg.data.current_frame}`;
  }
  if (msg.type === "event" && msg.data.event === "prompt_switched") {
    console.log(`Now showing: ${msg.data.new_prompt}`);
  }
});

// Connect
await reactor.connect(jwtToken);

// Schedule prompts for a cinematic sequence
await reactor.sendCommand("schedule_prompt", {
  new_prompt: "A peaceful forest at dawn",
  timestamp: 0
});

await reactor.sendCommand("schedule_prompt", {
  new_prompt: "Sunlight breaking through the trees",
  timestamp: 60
});

await reactor.sendCommand("schedule_prompt", {
  new_prompt: "A deer walking through the forest",
  timestamp: 120
});

// Start generation
await reactor.sendCommand("start", {});