The LongLive model is an interactive, real-time video generation system that creates continuous video streams using diffusion-based AI. Unlike traditional video generation models that produce fixed-length clips, LongLive generates videos in a streaming fashion, allowing for dynamic prompt changes and real-time interaction during the generation process.

Key Features

Interactive Generation

Generate videos that can be influenced in real-time through prompt scheduling

Streaming Output

Produces video frames continuously rather than generating complete videos at once

Dynamic Prompt Switching

Change prompts at specific timestamps during generation to alter the video content

Quick Start

Get started with LongLive in seconds:
npx create-reactor-app my-longlive-app longlive
This creates a Next.js application from our open-source example. You can also try the live demo first.

Prompting and Directing

When you first connect to the model, generation does not begin immediately. The model is active and ready, listening for prompts, but it will not start until you send an explicit start command. Here’s how it works:
  • Upon connection, the model does not know what prompt to start from.
  • Before calling “start”, you must schedule at least one prompt as the starting prompt (at timestamp 0).
  • After scheduling the first prompt, you have two options: start immediately and build an application that schedules further prompts in parallel while the video is being generated.
  • Alternatively, let the user schedule all the prompts in advance and call “start” only once they’re ready.
A minimal version of this flow is sketched below.
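For illustration, here is a minimal sketch of the two required steps. It assumes the start command takes no parameters beyond its type (the exact payload may differ in your client version):

// Schedule the required starting prompt at timestamp 0
await reactor.sendMessage({
  type: "schedule_prompt",
  data: {
    new_prompt: "A serene mountain landscape at sunrise",
    timestamp: 0
  }
});

// Begin generation; frames start streaming after this call
// (assumes "start" takes no additional data)
await reactor.sendMessage({ type: "start" });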

Tracking Progress

The current_start_frame is a key progress indicator that tracks the model’s position in the video generation sequence. Here’s how it works:

What is current_start_frame?

The current_start_frame represents the index of the next frame that will be generated. It starts at 0 when generation begins and increments as frames are produced.

How it’s Emitted

The current_start_frame is emitted as a progress update:
  1. Initialization: Set to 0 when the inference process starts
  2. Frame Generation: After each block of frames is generated and decoded, current_start_frame is incremented by the number of frames in that block
  3. Progress Updates: The updated value is sent to clients as a message:
    {
      "type": "progress",
      "data": {
        "current_start_frame": <frame_number>
      }
    }
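As a sketch of how a client might consume these updates (reactor.onMessage is a hypothetical callback name; substitute whatever message handler your client library exposes):

// Hypothetical handler registration; use your client's actual message callback
reactor.onMessage((message) => {
  if (message.type === "progress") {
    const frame = message.data.current_start_frame;
    // frame is the index of the next frame to be generated
    console.log(`Progress: ${frame} frames generated so far`);
  }
});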
    

Use Cases

Clients can use current_start_frame to:
  • Display generation progress to users
  • Synchronize prompt changes with specific frame positions
  • Estimate remaining generation time
  • Implement frame-accurate interactions or overlays
The current_start_frame provides a reliable way to track the model’s progress through the video sequence and coordinate real-time interactions with the generation process.
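For example, a client that knows its target frame count can turn current_start_frame into a rough time estimate (targetFrames and the timing bookkeeping below are app-side values, not part of the API):

// Illustrative ETA estimate driven by progress updates
const targetFrames = 300;        // total frames the app intends to generate
const startTime = Date.now();    // recorded when "start" is sent

function estimateRemainingMs(currentStartFrame) {
  if (currentStartFrame === 0) return Infinity; // no frames yet, no estimate
  const msPerFrame = (Date.now() - startTime) / currentStartFrame;
  return msPerFrame * (targetFrames - currentStartFrame);
}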

Model Name

longlive

Commands

Once connected, send commands using reactor.sendMessage() to schedule prompts, control generation, and manage state. Below are all available commands:
  • schedule_prompt
  • start
  • reset

schedule_prompt

Description: Schedule a prompt to be applied at a specific timestamp during video generation.
Parameters:
  • new_prompt (string, required): The prompt text to use at the scheduled timestamp
  • timestamp (integer, required): The frame number at which to apply the prompt
Use Cases:
  • Pre-scheduled sequences: Schedule multiple prompts before starting generation to create a cinematic experience with predefined scene transitions
  • Real-time control: Schedule prompts while generation is running to dynamically modify the output
Behavior:
  • Scheduling a prompt at a timestamp that already has a prompt will overwrite the existing prompt
  • The model broadcasts its current frame position to connected clients, enabling real-time synchronization for dynamic video control
Note: To clear a specific scheduled prompt, you currently need to use the reset command to clear all prompts. A feature to remove individual prompts is coming soon.
Example:
// Schedule a prompt at the beginning
await reactor.sendMessage({
  type: "schedule_prompt",
  data: {
    new_prompt: "A serene mountain landscape at sunrise",
    timestamp: 0
  }
});

// Schedule a transition at frame 30
await reactor.sendMessage({
  type: "schedule_prompt",
  data: {
    new_prompt: "The mountain transforms into a futuristic city",
    timestamp: 30
  }
});

// Schedule another scene at frame 90
await reactor.sendMessage({
  type: "schedule_prompt",
  data: {
    new_prompt: "Zooming through the city streets at night",
    timestamp: 90
  }
});
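
If you need to clear all scheduled prompts and start over, the reset command mentioned above is sent the same way (a minimal sketch, assuming reset takes no parameters):

// Clear all scheduled prompts and reset generation state
await reactor.sendMessage({ type: "reset" });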

Credits

LongLive is developed by Shuai Yang, Wei Huang, Ruihang Chu, Yicheng Xiao, Yuyang Zhao, Xianbang Wang, Muyang Li, Enze Xie, Yingcong Chen, Yao Lu, Song Han, and Yukang Chen (NVIDIA, MIT, HKUST(GZ), HKU, THU). Project Page · View on GitHub