WorldCore is an interactive world foundation model for real-time long video generation. Built on an autoregressive, diffusion-based image-to-world framework, it generates long videos in real time, conditioned on keyboard and mouse inputs for fine-grained control and dynamic scene evolution.
- Navigate through generated environments using WASD movement and camera rotation controls.
- Custom Starting Images: Initialize video generation from any image, creating explorable worlds from static scenes.
- Interactive World Building: Dynamically explore and generate content as you move through the environment.
The model responds to movement and camera controls in real-time, creating an immersive, game-like experience where users can explore AI-generated worlds with immediate visual feedback.
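In practice, a client forwards input events to the model as they happen. The sketch below is a minimal, hypothetical example of wiring browser key events into the generation loop; the command name "set_keyboard_action" and the action strings are assumptions (only the `reactor.sendCommand` pattern and the keyboard/camera action fields in the state messages below appear in this documentation).

```javascript
// Forward WASD key presses to the model in real time.
// NOTE: "set_keyboard_action" and the action values are assumed names,
// not confirmed by this document; only reactor.sendCommand is documented.
const KEY_ACTIONS = { w: "forward", a: "left", s: "backward", d: "right" };

window.addEventListener("keydown", async (event) => {
  const action = KEY_ACTIONS[event.key.toLowerCase()];
  if (action) {
    await reactor.sendCommand("set_keyboard_action", { action });
  }
});

window.addEventListener("keyup", async () => {
  // "none" is an assumed idle value meaning "stop moving".
  await reactor.sendCommand("set_keyboard_action", { action: "none" });
});
```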
Description: Upload a custom image to use as the initial frame for video generation. This triggers an immediate restart of the generation loop to use the new image.

Parameters:
image_base64 (string, required): The starting image encoded as a base64 string
Use Cases:
Start generation from a specific scene or frame
Initialize the model with custom artwork or photographs
Behavior:
The custom starting image is preserved across soft restarts (reset command)
The image will be reset to factory default when the session ends
Example:
```javascript
// Set a custom starting image
await reactor.sendCommand("set_starting_image", {
  image_base64: "iVBORw0KGgoAAAANSUhEUgA..."
});
```
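As a usage note, the image must be base64-encoded on the client first. Below is a minimal browser-side sketch; it assumes the API expects raw base64 without a data: URL prefix (consistent with the truncated example string above), and the sendStartingImage helper is illustrative, not part of the API.

```javascript
// Read a user-selected file and send it as the starting image.
// Assumption: the API expects raw base64 (no "data:image/png;base64," prefix),
// matching the example string above.
async function sendStartingImage(file) {
  const dataUrl = await new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
  const base64 = dataUrl.split(",")[1]; // strip the data URL prefix
  await reactor.sendCommand("set_starting_image", { image_base64: base64 });
}
```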
Description: Set the movement speed multiplier for keyboard (WASD) actions.

Parameters:
speed (number, required): Speed multiplier from 0.0 to 1.0
0.0 = No movement
1.0 = Full speed (default)
Use Case: Adjust how fast forward/backward/strafe movements are. Speed can be adjusted in real time during generation.

Example:
```javascript
// Set movement to half speed
await reactor.sendCommand("set_movement_speed", { speed: 0.5 });

// Set movement to full speed
await reactor.sendCommand("set_movement_speed", { speed: 1.0 });
```
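Because the speed takes effect mid-generation, a natural pattern is binding it to a UI control. Here is a small sketch, assuming a hypothetical `<input type="range">` element; the clamp simply enforces the documented 0.0 to 1.0 range.

```javascript
// Bind a range input (hypothetical element id "speed-slider") to the
// movement speed, clamping to the documented 0.0–1.0 range.
const slider = document.getElementById("speed-slider");
slider.addEventListener("input", async () => {
  const speed = Math.min(1.0, Math.max(0.0, Number(slider.value)));
  await reactor.sendCommand("set_movement_speed", { speed });
});
```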
Listen for messages from the model using reactor.on("newMessage", ...) (imperative) or the useReactorMessage() hook (React). The model emits state messages to keep you informed of the current generation state.
State messages provide a snapshot of the current model state. They are emitted when any control value changes (keyboard action, camera action, prompt, or speed).
{ type: "state", data: { // Current generation position chunk_index: number, // The current chunk being generated // Current state (what's actively being used for generation) current_prompt: string | null, // Prompt text currently in use current_keyboard_action: string, // Keyboard action applied to current chunk current_camera_action: string, // Camera action applied to current chunk current_movement_speed: number, // Movement speed applied to current chunk current_rotation_speed: number, // Rotation speed applied to current chunk // Pending state (what will be applied on the next chunk) pending_keyboard_action: string, // Keyboard action queued for next chunk pending_camera_action: string, // Camera action queued for next chunk pending_movement_speed: number, // Movement speed queued for next chunk pending_rotation_speed: number // Rotation speed queued for next chunk }}
The model distinguishes between current (actively in use) and pending (queued for next chunk) values. This allows you to see both what’s happening now and what will happen next.
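For example, here is a minimal sketch of a listener that tracks how pending values become current as chunks advance. It uses the documented reactor.on("newMessage", ...) API and assumes messages arrive as parsed objects of the shape shown above.

```javascript
// Log keyboard control transitions as generation advances chunk by chunk.
// Assumption: messages are delivered as parsed objects matching the schema above.
reactor.on("newMessage", (message) => {
  if (message.type !== "state") return;
  const { data } = message;
  console.log(
    `chunk ${data.chunk_index}:`,
    `keyboard now=${data.current_keyboard_action},`,
    `next=${data.pending_keyboard_action}`
  );
});
```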