Configuring OpenClaw with a Local Ollama Model


OpenClaw currently burns through tokens quickly, so this note tries running it against a local Ollama instance with the qwen3-coder:30b model.

This was tested on macOS.

1. Configure OpenClaw the usual way, then manually enter the model name ollama/qwen3-coder:30b during onboarding.

2. Open the configuration directory in Finder (or a terminal):

/Users/kwxcc/.openclaw/

3. Edit the configuration file, updating the models and agents sections:

  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://192.168.5.14:3000/v1",
        "apiKey": "sk-wn4LavM0mFLU026ySpjkV09CYDT1xxist54EqJijggx9ofa0",
        "api": "openai-completions",
        "models": [
          {
            "id": "qwen3-coder:30b",
            "name": "qwen3-coder Chat",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/Users/kevin/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      },
      "model": {
        "primary": "ollama/qwen3-coder:30b"
      }
    }
  },
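
The JSON above is a fragment of the full config file. A quick sanity check before restarting can be sketched in Python (check_config and the keys it inspects are illustrative, not an official OpenClaw schema):

```python
# Sketch: sanity-check the edited sections before restarting.
# check_config and the keys it looks for are illustrative, not an official
# OpenClaw schema; extend the checks to match your own config.
import json

def check_config(raw: str) -> list[str]:
    """Return a list of problems found in the models/agents sections."""
    cfg = json.loads(raw)
    problems = []
    providers = cfg.get("models", {}).get("providers", {})
    if "ollama" not in providers:
        problems.append("missing models.providers.ollama")
    else:
        for key in ("baseUrl", "api", "models"):
            if key not in providers["ollama"]:
                problems.append(f"ollama provider missing '{key}'")
    primary = (cfg.get("agents", {}).get("defaults", {})
                  .get("model", {}).get("primary", ""))
    if not primary.startswith("ollama/"):
        problems.append("agents.defaults.model.primary is not an ollama/ model")
    return problems
```

Run it against the saved config file, e.g. check_config(open(path).read()); an empty list means the fragment has the expected shape.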

Adjust the IP address, model name, and apiKey to match your environment (for a direct Ollama connection the key can be left empty).
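
Before restarting it is worth confirming that the OpenAI-compatible endpoint in baseUrl actually answers. A minimal probe can be sketched as follows (build_models_request is an illustrative helper, not part of OpenClaw or Ollama; adjust the host and port for your own setup — Ollama itself listens on :11434):

```python
# Sketch: probe the OpenAI-compatible endpoint before restarting OpenClaw.
# build_models_request is an illustrative helper, not part of OpenClaw or Ollama.
import urllib.request

def build_models_request(base_url: str, api_key: str = "") -> urllib.request.Request:
    """Build a GET request for /models on an OpenAI-compatible server."""
    headers = {}
    if api_key:  # Ollama ignores the key; a proxy in front of it may require one
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(f"{base_url.rstrip('/')}/models", headers=headers)

# Live check (requires the server to be reachable):
#   with urllib.request.urlopen(build_models_request("http://192.168.5.14:3000/v1")) as r:
#       print(r.read().decode())
```

If the request returns a model list that includes qwen3-coder:30b, the baseUrl and key are good to paste into the config.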

4. Save the file and restart the gateway:

openclaw gateway restart