Runway

GWM-1

Runway develops AI tools for video generation and creative projects in art and entertainment, fueling innovation and storytelling.

Pricing: Free + from $12/mo
Featured alternatives

  • Vyond
  • Wan
  • Luma Dream Machine
  • Pika
  • DeepMotion
  • Seedance

Overview

Runway GWM-1 (General World Model 1) marks a pivotal shift from static video generation to interactive world simulation. GWM-1 is an autoregressive model built on top of Runway Gen-4.5, designed to simulate the physics and dynamics of the real world through frame-by-frame video generation that responds to user actions in real time. This release introduces a family of models that let users not just generate a scene but interact with and explore it through action-conditioned controls.

The GWM-1 family is categorized into three primary variants—Worlds, Avatars, and Robotics—each tailored for specific interactive use cases. Whether navigating through generated environments, creating responsive character animations, or simulating robotic actions, GWM-1 provides action-conditioned video simulation with temporal consistency and spatial coherence.

What's New

GWM Worlds: Interactive Environments

GWM Worlds enables the creation of immersive, explorable environments through real-time video generation. Unlike traditional pre-rendered video, these environments respond to camera actions dynamically, allowing users to navigate through generated spaces. The model maintains spatial consistency and coherent geometry, lighting, and physics as the viewpoint changes, creating the experience of exploring a simulated world.

GWM Avatars: Audio-Driven Characters

The Avatars variant is an audio-driven interactive video generation model that focuses on human-centric visual representation. It generates characters with natural facial expressions, eye movements, lip sync, and body language driven by audio input. This model maintains temporal consistency during extended interactions, making it suitable for interactive storytelling, virtual presenters, and character animation. Note that dialogue content generation typically requires a separate language model or dialogue system.
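To make "audio-driven" concrete: an avatar model generating at a fixed frame rate has to map each video frame to a window of the input audio stream. The sketch below shows that alignment at the reported 24 fps output rate; the sample rate, function names, and windowing scheme are illustrative assumptions, not Runway's actual pipeline.

```python
# Hedged sketch: aligning an audio stream to per-frame conditioning
# windows at 24 fps, as an audio-driven avatar model would need.
# Sample rate and windowing are assumed for illustration only.

SAMPLE_RATE = 16_000  # audio samples per second (assumed)
FPS = 24              # GWM-1's reported output frame rate

samples_per_frame = SAMPLE_RATE / FPS  # ≈ 666.7 samples per frame

def frame_window(frame_idx: int) -> tuple[int, int]:
    """Return the [start, end) sample indices conditioning one frame."""
    start = round(frame_idx * samples_per_frame)
    end = round((frame_idx + 1) * samples_per_frame)
    return start, end

# One second of video should consume exactly one second of audio:
print(frame_window(0), frame_window(24))  # (0, 667) (16000, 16667)
```

Because 16,000 is not evenly divisible by 24, the rounding keeps frame windows within one sample of each other while guaranteeing no audio is skipped or double-counted at second boundaries.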

GWM Robotics: Action-Conditioned Simulation

GWM Robotics is a specialized model designed for robotics applications. It can be conditioned on robot actions and robot pose inputs to generate video simulations showing how a robot would interact with its environment. This provides action-conditioned video rollouts useful for training and evaluating robotic policies, serving as a visual simulation tool for robotics research and development.
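The policy-evaluation pattern this describes can be sketched in a few lines: a policy proposes an action, the simulator produces the next state conditioned on it, and the rollout is scored. Both the controller and the one-step simulator below are toy stand-ins; Runway has not published the GWM Robotics SDK interface, so none of these names are its API.

```python
import random

# Illustrative policy-rollout loop against an action-conditioned
# simulator. All names are stand-ins; GWM Robotics' SDK is not public.

random.seed(0)

def policy(observation: float) -> float:
    """Toy proportional controller driving the observation toward 0."""
    return -0.5 * observation

def simulate_step(state: float, action: float) -> float:
    """Stand-in for one action-conditioned frame: next state given action."""
    return state + action + random.uniform(-0.01, 0.01)  # small sim noise

state = 1.0
for _ in range(20):          # roll the policy out for 20 simulated frames
    state = simulate_step(state, policy(state))

print(abs(state) < 0.1)      # the controller settles near 0 -> True
```

The value of a video-based simulator in this loop is that `simulate_step` is learned from real-world footage rather than hand-built physics, so rollouts can be inspected visually as well as scored numerically.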

Action-Conditioned Generation

A core capability of GWM-1 is its action-conditioned generation approach. The model can be controlled through various action inputs, including camera pose, robot commands, and audio signals. By conditioning each frame on the current state and the provided action, GWM-1 enables interactive control over the simulation, allowing users to influence how the generated world evolves in real time.
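Structurally, "conditioning each frame on the current state and the provided action" is an autoregressive loop. The minimal sketch below uses a toy stand-in model whose state is just a camera position; GWM-1's real interface is not public, so every name here is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of an action-conditioned autoregressive loop.
# ToyWorldModel is a stand-in for a world model; a real one would
# carry a learned latent state and render a frame at each step.

@dataclass
class ToyWorldModel:
    x: float = 0.0  # "state" here is just a camera position
    y: float = 0.0

    def step(self, action: str) -> tuple[float, float]:
        """Advance one frame, conditioned on current state + action."""
        moves = {"forward": (0.0, 1.0), "back": (0.0, -1.0),
                 "left": (-1.0, 0.0), "right": (1.0, 0.0)}
        dx, dy = moves.get(action, (0.0, 0.0))
        self.x += dx
        self.y += dy
        return (self.x, self.y)  # pose the rendered frame would reflect

model = ToyWorldModel()
trajectory = [model.step(a) for a in ["forward", "forward", "right"]]
print(trajectory)  # [(0.0, 1.0), (0.0, 2.0), (1.0, 2.0)]
```

The key property the loop illustrates is that output is not a fixed sequence: change any action mid-stream and every subsequent frame diverges, which is what distinguishes world simulation from conventional text-to-video generation.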

Availability & Access

Runway GWM-1 is currently available through an early access application process. Access varies by variant, and Runway has not yet announced general availability for all users.

  • GWM Robotics: SDK access is available by request through Runway's application form.
  • GWM Avatars: Announced as coming soon to the Runway web product and Runway API.
  • GWM Worlds: Described as an interactive app experience; access may require early access approval.
  • Output Constraints: The current research release supports up to 2 minutes of output at 720p resolution (24fps, per media reports).
  • Platform: Access is provided via Runway's hosted platform, with API and SDK pathways for specific use cases.
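Taken together, the reported limits imply a fixed per-simulation frame budget, which is worth a quick back-of-envelope check (figures are from the media-reported constraints above):

```python
# Frame budget implied by the reported research-release limits:
# up to 2 minutes of output at 24 fps.
minutes, fps = 2, 24
total_frames = minutes * 60 * fps
print(total_frames)  # 2880 frames per maximum-length simulation
```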

Pricing & Plans

Runway has not publicly detailed plan-level eligibility or specific pricing for GWM-1. Access is currently positioned around early access requests rather than standard subscription tiers.

Runway Platform Subscriptions (for reference):

  • Standard Plan: $12/month (billed annually) — includes access to Gen-3 Alpha Turbo and other standard models
  • Pro Plan: $28/month (billed annually) — includes Gen-3 Alpha and enhanced features
  • Unlimited Plan: $76/month (billed annually) — includes unlimited standard generations and explore mode
  • Enterprise Plan: Custom pricing for organizations requiring advanced features, SDK integration, and dedicated support

GWM-1 Pricing Status:
Runway has not announced whether GWM-1 will be included in existing subscription plans, offered as a separate add-on, or charged based on compute credits. Given the real-time simulation nature of GWM-1, a separate pricing structure or credit system is possible. Users interested in GWM-1 should apply for early access and await official pricing announcements.

Pros & Cons

Pros

  • Real-Time Interactivity — Enables action-conditioned video simulation, allowing users to control and explore generated environments dynamically.
  • Specialized Variants — Three distinct models (Worlds, Avatars, Robotics) address specific use cases from storytelling to robotics research.
  • Physics-Aware Simulation — Runway positions GWM-1 as a physics-aware model that maintains coherent geometry, lighting, and temporal consistency across frames.
  • Flexible Action Control — Supports various conditioning inputs including camera pose, robot commands, and audio for precise control over simulation.

Cons

  • Limited Availability — Currently requires early access approval; general availability timeline not announced.
  • Resolution Constraints — Limited to 720p at 24fps, which may not meet professional production requirements for high-resolution content.
  • Duration Limits — Current output is capped at 2 minutes, which may be restrictive for longer simulations or narratives.
  • Uncertain Pricing — Public pricing and credit consumption rates have not been disclosed, making cost planning difficult.

Best For

  • Game Designers & World Builders — Prototyping interactive environments and exploring procedurally generated spaces through camera-controlled navigation.
  • Robotics Researchers — Generating action-conditioned video simulations for evaluating robotic behaviors and policies.
  • Content Creators & Storytellers — Creating interactive character animations and audio-driven avatars for immersive narratives.
  • AI Researchers & Developers — Experimenting with world models, action-conditioned generation, and real-time simulation technologies.
  • Enterprise Innovation Teams — Exploring applications of interactive world simulation for training, visualization, and research purposes.

FAQ

What makes GWM-1 different from Gen-3 or Gen-4.5?

Gen-3 and Gen-4.5 are video generation models that produce high-quality video from text or image prompts, creating a fixed output sequence. GWM-1, by contrast, is an action-conditioned world simulation model that generates video frame-by-frame in response to user actions. This allows for real-time interaction and exploration, where users can influence the simulation through camera movements, audio inputs, or robotic commands, creating a dynamic rather than predetermined output.

Can I download GWM-1 for local use?

Runway currently provides GWM-1 access exclusively through its hosted platform, API, and SDK pathways (for early access participants). Local or offline deployment options have not been publicly described. Given the real-time simulation requirements, cloud-based access appears to be the primary distribution model.

Does GWM-1 support audio?

Yes, GWM-1 supports audio as an action-conditioning signal, particularly in the Avatars variant where audio drives character expressions, lip sync, and movements. This is distinct from audio generation: GWM-1 uses audio as an input control signal rather than generating audio content itself. (Note: Runway's Gen-4.5 update separately introduced native audio generation capabilities.)

What is the maximum duration for a GWM-1 simulation?

The current research release supports up to 2 minutes of output at 720p resolution. This constraint may evolve as the technology matures and infrastructure scales to support longer simulations.

Version History

Gen-4.5

Released on December 11, 2025

  • Generate highly realistic videos with superior motion quality and precise prompt adherence, enabling professional creators to produce cinematic content with minimal retakes
  • Achieve top text-to-video performance with a 1,247 Elo rating on Artificial Analysis benchmark, helping maintain consistent visual fidelity and temporal stability across complex action sequences

GWM-1

Current Version

Released on December 11, 2025

  • Real-time, action-conditioned world simulation rendered as video, enabling explorable and coherent environments with dynamic control
  • Access specialized variants including GWM Worlds, Avatars, and Robotics for interactive storytelling, character animation, and robotic simulation

Gen-4

Released on April 1, 2025

  • Produce consistent characters, locations, and objects across multiple scenes to maintain narrative continuity without the need for complex model fine-tuning or training
  • Define specific aesthetic styles and cinematographic moods that remain coherent throughout an entire production, ensuring a professional and unified visual identity

Gen-4 Images

Released on November 25, 2024

  • Achieve advanced stylistic control and visual fidelity in image generation, creating high-quality concept art and brand assets with consistent aesthetic signatures
  • Integrate image generation directly into video workflows and APIs, allowing creators to bridge the gap between static visual assets and dynamic video content production

Gen-3 Alpha Turbo

Released on August 14, 2024

  • Generate high-quality video 7x faster and at half the price of the original model, enabling rapid prototyping and cost-effective scaling for social media content production
  • Access improved generation capabilities with the same quality standards as Gen-3 Alpha, providing editors with a faster alternative for iterative creative workflows

Gen-3 Alpha

Released on July 1, 2024

  • Create highly detailed videos with significant improvements in motion and fidelity, powered by a new infrastructure designed for large-scale multimodal training
  • Generate cinematic text-to-video content with enhanced temporal consistency and visual quality, establishing a new standard for AI-generated video production

Gen-2

Released on March 11, 2023

  • Transform text, images, or existing video clips into entirely new cinematic sequences, offering creators multiple ways to guide the AI video generation process
  • Support multimodal video generation through Text to Video, Image to Video, and Video to Video modes, enabling flexible creative workflows for filmmakers and content creators

Gen-1

Released on December 11, 2022

  • Apply artistic styles to existing videos using text or image references, enabling low-cost conceptualization and stylistic transformation for filmmakers and editors
  • Utilize multiple editing modes like Stylization and Storyboard to iterate on video concepts quickly, establishing a new foundation for generative video-to-video tools
