
Which AI Models Does OpenClaw Support? Full 2026 List

OpenClaw supports all major AI model providers, including Claude, GPT, Gemini, DeepSeek, MiniMax, and local Ollama models. Users outside China can start with Claude or GPT-4o; users in China should start with MiniMax or DeepSeek, which are directly accessible without a VPN.

OpenClaw Supported Models Overview

OpenClaw uses a model-agnostic design, meaning you can configure any supported AI model and switch freely between them. The framework supports three types of model access:

  1. Cloud API models: Claude, GPT-4, Gemini, DeepSeek, MiniMax, etc.; charged by the model provider based on usage
  2. Local models (Ollama): Fully offline, no API fees, runs on your own hardware
  3. Multi-model failover: Configure multiple models as primary + backup for automatic switching
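The failover behavior in (3) works like a priority list: try the primary model first, and if it fails, fall through the backups in order. As an illustrative sketch only (not OpenClaw's actual implementation; all function names here are hypothetical):

```python
def call_with_failover(prompt, model_order, clients):
    """Try each model in priority order; return the first successful response.

    `model_order` is a list like ["minimax/MiniMax-M2.1", "anthropic/claude-haiku-4-5"];
    `clients` maps each model id to a callable that sends the request.
    """
    last_error = None
    for model_id in model_order:
        try:
            return clients[model_id](prompt)
        except Exception as err:  # e.g. rate limit, timeout, provider outage
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")

# Demo: a failing primary falls back to the backup.
def primary(prompt):
    raise TimeoutError("rate limited")

def backup(prompt):
    return f"backup answered: {prompt}"

clients = {"primary/model": primary, "backup/model": backup}
print(call_with_failover("hello", ["primary/model", "backup/model"], clients))
# backup answered: hello
```

The key design point is that failover is transparent to the caller: the requesting code never needs to know which model actually answered.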

Full Supported Model List (2026)

Anthropic – Claude Series

| Model | Strengths | Context Window |
|-------|-----------|----------------|
| Claude Opus 4.6 | Strongest reasoning, complex tasks | 1M tokens |
| Claude Sonnet 4.6 | Best performance/cost balance | 1M tokens |
| Claude Haiku 4.5 | Fastest response, low cost | 200K tokens |

Best for: Complex reasoning, code generation, long document analysis, multilingual tasks

Access: Direct API internationally; accessible from China without a VPN via a compliant domestic relay service


OpenAI – GPT Series

| Model | Strengths | Context Window |
|-------|-----------|----------------|
| GPT-4o | Multimodal, comprehensive capability | 128K tokens |
| GPT-4.1 | Enhanced instruction following | 128K tokens |
| GPT-5.4 mini | Fast, low cost | 64K tokens |
| GPT-5.4 nano | Lightweight, ultra-low cost | 32K tokens |

Best for: General-purpose tasks, multimodal inputs (images + text), instruction following

Access: Requires an API key; direct access from China is not available without a relay


Google – Gemini Series

| Model | Strengths | Context Window |
|-------|-----------|----------------|
| Gemini 2.5 Pro | Native multimodal reasoning | 1M tokens |
| Gemini 2.0 Flash | Fast, cost-effective | 1M tokens |

Best for: Multimodal tasks, long document analysis, Google Workspace integration


DeepSeek – Directly Accessible in China

| Model | Strengths | Context Window |
|-------|-----------|----------------|
| DeepSeek R2 | Math reasoning, code, Chinese | 64K tokens |
| DeepSeek V3 | General tasks, Chinese | 64K tokens |

Best for: Users in China (direct access, no VPN), code generation, Chinese content, cost-sensitive workloads

Access: Direct connection in China, generous free tier


MiniMax – Directly Accessible in China

| Model | Strengths | Context Window |
|-------|-----------|----------------|
| MiniMax-M2.1 | Multilingual, long context | 200K tokens |
| MiniMax-M2.5 | Enhanced reasoning | 200K tokens |

Best for: users in China getting started (free quota upon registration); the default recommendation in Lesson 01

Access: Direct connection in China; free quota on registration, ideal for getting started


Ollama – Local Models (Fully Offline)

Ollama lets you run open-source models locally on your Mac, Linux, or Windows machine (via WSL2):

| Model | Size | Best For |
|-------|------|----------|
| Llama 3.3 70B | ~40GB | Best quality local model |
| Qwen2.5 32B | ~20GB | Chinese language, coding |
| Mistral 7B | ~4GB | Light tasks, fast response |
| Phi-4 14B | ~8GB | Reasoning, low hardware requirements |

Best for: Users with strict privacy requirements, scenarios where data must not leave the device, zero API cost
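Assuming Ollama follows the same providers schema shown in the configuration section below, a local provider entry might look like this. The `ollama` provider key and `api` value are assumptions (modeled on the `anthropic-messages` value used for MiniMax); the `baseUrl` uses Ollama's default local endpoint, which serves an OpenAI-compatible API at port 11434:

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434/v1",
        "api": "openai-completions",
        "models": [
          { "id": "llama3.3:70b", "name": "Llama 3.3 70B (local)" }
        ]
      }
    }
  }
}
```

No `apiKey` is needed since requests never leave your machine.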


How to Configure Models

Add model providers in openclaw.json:

{
  "models": {
    "mode": "merge",
    "providers": {
      "anthropic": {
        "apiKey": "${ANTHROPIC_API_KEY}",
        "models": [
          { "id": "claude-sonnet-4-6", "name": "Claude Sonnet 4.6" }
        ]
      },
      "minimax": {
        "baseUrl": "https://api.minimax.io/anthropic",
        "apiKey": "${MINIMAX_API_KEY}",
        "api": "anthropic-messages",
        "models": [
          { "id": "MiniMax-M2.1", "name": "MiniMax M2.1" }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "minimax/MiniMax-M2.1",
        "fallback": ["anthropic/claude-haiku-4-5"]
      }
    }
  }
}

See Lesson 05: Multi-Model & Failover for detailed configuration.
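The `${ANTHROPIC_API_KEY}` and `${MINIMAX_API_KEY}` placeholders in the config above are filled in from environment variables, so keys never live in the config file itself. A minimal sketch of how such `${VAR}` expansion works (the exact mechanism OpenClaw uses is not documented here):

```python
import os
import re

def expand_env(value):
    """Replace ${VAR} placeholders with values from the environment."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-example"
print(expand_env("${ANTHROPIC_API_KEY}"))  # sk-ant-example
```

In practice you would set the real key in your shell profile (e.g. `export ANTHROPIC_API_KEY=...`) before starting OpenClaw.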


How to Choose the Right Model

| Situation | Recommended Model |
|-----------|-------------------|
| Just getting started (China users) | MiniMax M2.1 (free quota) |
| Just getting started (international users) | Claude Haiku 4.5 (low cost) |
| Complex reasoning / code | Claude Sonnet 4.6 or DeepSeek R2 |
| Maximum privacy, offline use | Ollama + Llama 3.3 70B |
| Cost-sensitive, high-frequency use | GPT-5.4 mini or DeepSeek V3 |
| Long document processing (700+ pages) | Claude Opus 4.6 or Gemini 2.5 Pro |

FAQ

How often is the model list updated?

The OpenClaw team typically adds support for major new models within 1–2 weeks of release. The community also contributes adapters for smaller models.

How does model billing work?

OpenClaw itself doesn't charge for model usage. You pay the model provider directly (Anthropic, OpenAI, etc.) based on your actual API usage. Ollama local models incur no API fees; you pay only for your own hardware and electricity.

Can I limit which model a specific skill uses?

Yes. You can specify a model at the skill level in OpenClaw's configuration, so different skills use different models optimally (e.g., light tasks use a cheap fast model, complex reasoning uses a more capable model).
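As an illustration only (the `skills` key and its shape are assumptions, not verified against OpenClaw's configuration schema), a per-skill override might look like:

```json
{
  "skills": {
    "summarize": { "model": "anthropic/claude-haiku-4-5" },
    "code-review": { "model": "anthropic/claude-sonnet-4-6" }
  }
}
```

This routes a cheap, fast model to lightweight skills while reserving the more capable model for demanding ones.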

Stay up to date with OpenClaw

Follow @lanmiaoai on X for tips, updates and new tutorials.