WorldCore is an interactive world foundation model for real-time long video generation. Built on an auto-regressive, diffusion-based image-to-world framework, it generates video conditioned on keyboard and mouse inputs, enabling fine-grained control and dynamic scene evolution.

Key Features

Real-Time Control

Navigate through generated environments using WASD movement and camera rotation controls

Custom Starting Images

Initialize video generation from any image, creating explorable worlds from static scenes

Interactive World Building

Dynamically explore and generate content as you move through the environment

The model responds to movement and camera controls in real-time, creating an immersive, game-like experience where users can explore AI-generated worlds with immediate visual feedback.

Quick Start

Get started with WorldCore in seconds:
npx create-reactor-app worldcore-app worldcore
This creates a Next.js application from our open-source example.

Model Name

worldcore

How It Works

When you connect to WorldCore, the model immediately starts generating video. Here’s the workflow:
  1. Connect to the model - generation starts automatically from the default starting image
  2. Control movement with set_keyboard_action (WASD) and camera with set_camera_action
  3. Adjust speed using set_movement_speed and set_rotation_speed
  4. Change the scene by sending a new set_starting_image or set_prompt
  5. Reset the generation at any time while preserving your custom settings
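The steps above can be sketched as a command sequence. This is a hedged sketch written against a minimal interface so it can run without a live connection; the real Reactor instance from @reactor-team/js-sdk provides sendCommand(). The `speed` and `prompt` parameter names are assumptions, not confirmed by this page.

```typescript
// Minimal surface this sketch needs; the real SDK's Reactor satisfies it.
interface CommandSender {
  sendCommand(name: string, params?: Record<string, unknown>): Promise<void>;
}

// Walk through steps 2-5 of the workflow above.
async function runWorkflow(reactor: CommandSender): Promise<void> {
  await reactor.sendCommand("set_keyboard_action", { action: "w" });     // step 2: move forward
  await reactor.sendCommand("set_camera_action", { action: "left" });    // step 2: turn left
  await reactor.sendCommand("set_movement_speed", { speed: 0.5 });       // step 3: half speed (param name assumed)
  await reactor.sendCommand("set_prompt", { prompt: "a foggy forest" }); // step 4: new scene (param name assumed)
  await reactor.sendCommand("reset");                                    // step 5: restart, keeping settings
}
```

Because each command is fire-and-forget, the model applies them to the next generated chunk rather than blocking the video stream.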

Commands

Send commands to the model using reactor.sendCommand(). Below are all available commands:
Command               Description
set_keyboard_action   Set WASD movement (forward, backward, strafe)
set_camera_action     Set camera rotation (turn, look up/down)
set_prompt            Change the generation prompt in real-time
set_starting_image    Set a custom starting image
set_movement_speed    Adjust movement speed (0.0-1.0)
set_rotation_speed    Adjust rotation speed (0.0-1.0)
reset                 Restart generation (preserves custom settings)
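The two speed commands accept values in the 0.0-1.0 range. A small hedged helper that clamps out-of-range input before sending; the command names come from the table above, but the `speed` parameter name is an assumption.

```typescript
// Clamp a value into the documented 0.0-1.0 range.
function clampSpeed(value: number): number {
  return Math.min(1.0, Math.max(0.0, value));
}

// Send both speed commands with validated values.
// The `speed` parameter name is assumed; check the SDK for the actual shape.
async function setSpeeds(
  reactor: { sendCommand(name: string, params: { speed: number }): Promise<void> },
  movement: number,
  rotation: number
): Promise<void> {
  await reactor.sendCommand("set_movement_speed", { speed: clampSpeed(movement) });
  await reactor.sendCommand("set_rotation_speed", { speed: clampSpeed(rotation) });
}
```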

set_keyboard_action

Description: Set WASD movement action for navigating through the generated environment.
Parameters:
  • action (string, required): Movement control
    • w = Move forward
    • a = Strafe left
    • s = Move backward
    • d = Strafe right
    • still = No movement
Behavior:
  • Can be used simultaneously with set_camera_action for combined control (e.g., move forward while turning left)
  • The action remains active until you send a new action or set it to still
  • This allows you to react only to user input changes (key press/release) rather than sending continuous commands
Example Use Case: Create an interactive, game-like experience where users can navigate through generated environments in real-time.
Example:
// Move forward
await reactor.sendCommand("set_keyboard_action", { action: "w" });

// Strafe left
await reactor.sendCommand("set_keyboard_action", { action: "a" });

// Stop movement
await reactor.sendCommand("set_keyboard_action", { action: "still" });

Messages from Model

Listen for messages from the model using reactor.on("newMessage", ...) (imperative) or the useReactorMessage() hook (React). The model emits state messages to keep you informed of the current generation state.

State Messages

State messages provide a snapshot of the current model state. They are emitted when any control value changes (keyboard action, camera action, prompt, or speed).
{
  type: "state",
  data: {
    // Current generation position
    chunk_index: number,              // The current chunk being generated

    // Current state (what's actively being used for generation)
    current_prompt: string | null,    // Prompt text currently in use
    current_keyboard_action: string,  // Keyboard action applied to current chunk
    current_camera_action: string,    // Camera action applied to current chunk
    current_movement_speed: number,   // Movement speed applied to current chunk
    current_rotation_speed: number,   // Rotation speed applied to current chunk

    // Pending state (what will be applied on the next chunk)
    pending_keyboard_action: string,  // Keyboard action queued for next chunk
    pending_camera_action: string,    // Camera action queued for next chunk
    pending_movement_speed: number,   // Movement speed queued for next chunk
    pending_rotation_speed: number    // Rotation speed queued for next chunk
  }
}
The model distinguishes between current (actively in use) and pending (queued for next chunk) values. This allows you to see both what’s happening now and what will happen next.
Example handler:
reactor.on("newMessage", (msg) => {
  if (msg.type === "state") {
    console.log(`Chunk: ${msg.data.chunk_index}`);
    console.log(`Current: keyboard=${msg.data.current_keyboard_action}, camera=${msg.data.current_camera_action}`);
    console.log(`Pending: keyboard=${msg.data.pending_keyboard_action}, camera=${msg.data.pending_camera_action}`);
  }
});
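Since the state message exposes matching current_* and pending_* fields, a small helper can report which controls still have a queued change. The field names mirror the message shape above; the helper itself is hypothetical, not part of the SDK.

```typescript
// Control fields from the state message (other fields omitted for brevity).
interface WorldCoreState {
  current_keyboard_action: string;
  current_camera_action: string;
  current_movement_speed: number;
  current_rotation_speed: number;
  pending_keyboard_action: string;
  pending_camera_action: string;
  pending_movement_speed: number;
  pending_rotation_speed: number;
}

// Return the names of controls whose pending value differs from the
// value applied to the current chunk.
function pendingChanges(s: WorldCoreState): string[] {
  const changed: string[] = [];
  if (s.pending_keyboard_action !== s.current_keyboard_action) changed.push("keyboard");
  if (s.pending_camera_action !== s.current_camera_action) changed.push("camera");
  if (s.pending_movement_speed !== s.current_movement_speed) changed.push("movement_speed");
  if (s.pending_rotation_speed !== s.current_rotation_speed) changed.push("rotation_speed");
  return changed;
}
```

This is useful for UI feedback, e.g. showing a "queued" indicator until a command takes effect on the next chunk.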

Complete Example

import { Reactor, fetchInsecureJwtToken } from "@reactor-team/js-sdk";

// Authenticate
const jwtToken = await fetchInsecureJwtToken(apiKey);

const reactor = new Reactor({
  modelName: "worldcore",
});

// Set up video display
const videoElement = document.getElementById("video") as HTMLVideoElement;
reactor.on("streamChanged", (track) => {
  if (track) {
    const stream = new MediaStream([track]);
    videoElement.srcObject = stream;
    videoElement.play().catch(console.warn);
  }
});

// Listen for state updates
reactor.on("newMessage", (msg) => {
  if (msg.type === "state") {
    document.getElementById("info")!.textContent = 
      `Chunk: ${msg.data.chunk_index} | Move: ${msg.data.current_keyboard_action} | Camera: ${msg.data.current_camera_action}`;
  }
});

// Connect - generation starts automatically
await reactor.connect(jwtToken);

// Set up keyboard controls
document.addEventListener("keydown", async (e) => {
  switch (e.key.toLowerCase()) {
    case "w":
      await reactor.sendCommand("set_keyboard_action", { action: "w" });
      break;
    case "a":
      await reactor.sendCommand("set_keyboard_action", { action: "a" });
      break;
    case "s":
      await reactor.sendCommand("set_keyboard_action", { action: "s" });
      break;
    case "d":
      await reactor.sendCommand("set_keyboard_action", { action: "d" });
      break;
    case "arrowup":
      await reactor.sendCommand("set_camera_action", { action: "up" });
      break;
    case "arrowdown":
      await reactor.sendCommand("set_camera_action", { action: "down" });
      break;
    case "arrowleft":
      await reactor.sendCommand("set_camera_action", { action: "left" });
      break;
    case "arrowright":
      await reactor.sendCommand("set_camera_action", { action: "right" });
      break;
  }
});

// Stop movement/camera on key release
document.addEventListener("keyup", async (e) => {
  const key = e.key.toLowerCase();
  if (["w", "a", "s", "d"].includes(key)) {
    await reactor.sendCommand("set_keyboard_action", { action: "still" });
  }
  if (e.key.startsWith("Arrow")) {
    await reactor.sendCommand("set_camera_action", { action: "still" });
  }
});
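The keydown and keyup handlers above repeat one switch branch per key. A pure mapping function (a hypothetical helper, not part of the SDK) keeps the key-to-command logic in one testable place and the event handlers thin.

```typescript
type Command = { name: "set_keyboard_action" | "set_camera_action"; action: string };

const MOVE_KEYS = new Set(["w", "a", "s", "d"]);
const CAMERA_KEYS: Record<string, string> = {
  arrowup: "up",
  arrowdown: "down",
  arrowleft: "left",
  arrowright: "right",
};

// Map a KeyboardEvent.key to the command it should trigger,
// or null for keys the controls don't use. On release (isDown = false),
// both command types fall back to "still".
function keyToCommand(key: string, isDown: boolean): Command | null {
  const k = key.toLowerCase();
  if (MOVE_KEYS.has(k)) {
    return { name: "set_keyboard_action", action: isDown ? k : "still" };
  }
  if (k in CAMERA_KEYS) {
    return { name: "set_camera_action", action: isDown ? CAMERA_KEYS[k] : "still" };
  }
  return null;
}
```

Usage inside the handlers: `const cmd = keyToCommand(e.key, true); if (cmd) await reactor.sendCommand(cmd.name, { action: cmd.action });`. Note that, like the handlers above, releasing any movement key sends "still", so holding W+D and releasing D stops forward motion too; tracking the set of held keys would avoid that if it matters for your UI.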