openclaw Ongoing Usage Notes

What is openclaw?

openclaw is an open-source, locally deployed agent that pulls SKILLs and MCP servers together in one place and can hook into various IM clients. It went viral on the back of marketing claims that it can "make money automatically."

This document is a running log of installation, usage, and configuration issues.

1. Install

1.1 Environment setup

# A proxy is strongly recommended
# export https_proxy=http://127.0.0.1:7890
# export http_proxy=http://127.0.0.1:7890
# export all_proxy=socks5://127.0.0.1:7890
export https_proxy=http://192.168.124.200:7890 
export http_proxy=http://192.168.124.200:7890
export all_proxy=socks5://192.168.124.200:7890
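A quick way to confirm the proxy actually answers before piping apt and npm through it (host and port copied from the exports above; adjust to your setup):

```shell
# Probe the proxy with a short timeout; falls through cleanly if curl
# is missing or the proxy is down.
proxy=http://192.168.124.200:7890
if curl -s -o /dev/null --connect-timeout 3 -x "$proxy" https://github.com; then
  status="proxy OK"
else
  status="proxy unreachable"
fi
echo "$status"
```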

# Ubuntu as the example distro
apt-get update && apt-get install -y curl git ca-certificates gnupg build-essential && rm -rf /var/lib/apt/lists/*

# On the Ubuntu desktop, toggle hidden folders with:
# Ctrl + H

# NOTE: if Git has no SSH key configured, rewrite SSH remotes to HTTPS:
git config --global url."https://github.com/".insteadOf ssh://git@github.com/

# Update Node and npm to the latest LTS via the n version manager
sudo apt install -y npm
sudo npm install -g n
sudo n lts
node -v
npm -v

# Refresh the shell's cached command lookup table
hash -r 
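After the upgrade it is worth guarding against a stale node still sitting on PATH. A minimal version-check sketch (the >= 18 floor is my assumption, not a documented openclaw requirement):

```shell
# Parse the major version out of "node -v"-style output (e.g. v22.11.0).
# Falls back to v0.0.0 when node is not installed at all.
version=$(node -v 2>/dev/null || echo v0.0.0)
major=${version#v}
major=${major%%.*}

if [ "$major" -ge 18 ]; then
  echo "Node $major looks recent enough"
else
  echo "Node $major is too old; re-run: sudo n lts" >&2
fi
```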

# Homebrew on Linux
sudo apt update
sudo apt install -y build-essential curl file git
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

1.2 Install openclaw

# install via npm
npm install -g openclaw@latest

# initial configuration
openclaw onboard

# update openclaw
openclaw gateway stop
npm cache clean --force
hash -r 
npm i -g openclaw@latest --verbose
openclaw doctor
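The update steps above wrapped into a reusable function (a convenience sketch; the || true just keeps it going when the gateway was not running):

```shell
# Stop, clean, reinstall, verify - same sequence as the notes above.
update_openclaw() {
  openclaw gateway stop || true   # fine if the gateway is already stopped
  npm cache clean --force
  hash -r
  npm i -g openclaw@latest --verbose
  openclaw doctor                 # health check after the reinstall
}
```

Run update_openclaw whenever a new release lands.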

1.3 ClawHub

https://clawhub.ai/

# install clawhub
npm i -g clawhub
npx clawhub@latest install <skill-slug>

2. Manual model configuration examples

Use these if onboard did not get a model working right away, or your model is not in the built-in list.

2.1 Local Ollama

// models.providers
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "apiKey": "ollama_local",
        "api": "openai-completions",
        "models": [
          {
            "id": "kimi-k2.5:cloud",
            "name": "kimi-k2.5:cloud",
            "api": "openai-completions",
            "reasoning": true,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 256000,
            "maxTokens": 256000,
            "compat": {
              "supportsDeveloperRole": false,
              "supportsReasoningEffort": true
            }
          }
        ]
      },
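A provider entry alone does not select the model; the default agent model has to point at it as well. The agents.defaults.model key below is taken from the error message in section 9.2, and the provider/model syntax mirrors the ollama/... form seen in the logs there, so treat this as a sketch rather than confirmed schema:

```jsonc
// agents.defaults
"model": "ollama/kimi-k2.5:cloud"
```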

2.2 Remote Ollama

// models.providers

      "ollama-remote": {
        "baseUrl": "https://ollama.com/v1",
        "apiKey": "apiKey",
        "api": "openai-completions",
        "models": [
          {
            "id": "kimi-k2.5:cloud",
            "name": "kimi-k2.5:cloud",
            "api": "openai-completions",
            "reasoning": true,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 256000,
            "maxTokens": 256000,
            "compat": {
              "supportsDeveloperRole": false,
              "supportsReasoningEffort": true
            }
          }
        ]
      },

3. Channels

3.1 Feishu

The Feishu plugin ships built in; no separate install step is needed.

Notes:

  1. Configure the app credentials first; only then can you enable the long connection in the Feishu open platform
  2. After the long connection is enabled, the first conversation prompts you to pair; follow the prompt
  3. If no pairing prompt appears, check the command-line console
  4. Finally, check the logs for any permissions that are still disabled
{
  "channels": {
    "feishu": {
      "appSecret": "...", // fill in
      "appId": "...", // fill in
      "allowFrom": []
    }
  },
  "plugins": {
    "entries": {
      "feishu": {
        "enabled": true
      }
    },
    "installs": {
      "feishu": {
        "source": "npm",
        "spec": "@openclaw/feishu",
        "installPath": "/home/username/.openclaw/extensions/feishu",
        "version": "2026.2.9",
        "installedAt": "2026-02-11T08:59:30.793Z"
      }
    }
  }
}

3.2 TUI

openclaw tui

# By default, a plain openclaw tui connects to the local gateway
# If that errors out, pass the connection parameters explicitly

openclaw tui --url ws://127.0.0.1:18789 --token XXX

4. Web access

4.1 Browsing via the openclaw Chrome extension

The extension's status frequently shows as unavailable, though.

  1. Run openclaw browser extension install on the command line
  2. Enable developer mode in the browser
  3. Load the unpacked extension from the directory the command printed
  4. Pin the extension to the toolbar
  5. Toggle the extension ON
  6. Test it

4.2 Aggregated search APIs

4.2.1 Brave

Requires a payment card on file: https://brave.com/zh/search/api/

4.2.2 Tavily

No card required:

  1. Install the tavily skill
  2. Sign up for a Tavily account and get an API key
  3. Put the API key in an environment variable
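For step 3, a sketch of wiring the key in. TAVILY_API_KEY is the conventional variable name for Tavily tooling, but confirm the exact name the skill reads from its docs; the key value here is a placeholder:

```shell
# Export for the current shell, then persist it for future sessions
export TAVILY_API_KEY="tvly-your-key-here"
echo 'export TAVILY_API_KEY="tvly-your-key-here"' >> ~/.bashrc
```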

4.3

4.4

8. Tests

TODO

9. Errors

9.1 context window too small

low context window: ollama/qwen3:d6b ctx=4096 (warn<32000) source=modelsConfig

blocked model (context window too small): ollama/qwen3:d6b ctx=4096 (min=16000) source=modelsConfig
lane task error: lane=main durationMs=17 error="FailoverError: Model context window too small (4096 tokens). Minimum is 16000."

lane task error: lane=session:agent:main:main durationMs=19 error="FailoverError: Model context window too small (4096 tokens). Minimum is 16000."

Embedded agent failed before reply: Model context window too small (4096 tokens). Minimum is 16000.

openclaw attaches a very long prompt to every conversation, on the order of 16K tokens, so the model needs a context window of at least 16K.
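For Ollama-served models, one fix is to rebuild the model with a larger context window; num_ctx is a standard Ollama Modelfile parameter. A sketch using the model tag from the log above (16384 is just a value that clears the 16000 minimum):

```shell
# Build a variant of the model whose context window clears openclaw's minimum
cat > Modelfile <<'EOF'
FROM qwen3:d6b
PARAMETER num_ctx 16384
EOF

if command -v ollama >/dev/null; then
  ollama create qwen3-16k -f Modelfile
else
  echo "ollama not on PATH; skipping create" >&2
fi
```

Remember to bump the contextWindow field in the provider config (section 2.1) to match.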

9.2 Unrecognized key: glm-4.7-flash:latest

Invalid config at /home/parallels/.openclaw/openclaw.json:\n- agents.defaults.model: Unrecognized key: "ollama/glm-4.7-flash:latest"

Index building apparently cannot handle dots in model names, so rename the model and update the name in the config as well:

ollama cp glm-4.7-flash:latest glm-47-flash:latest
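After the ollama cp, both the model entry and the default model reference need the new dot-free name. A sketch against the provider config shape from section 2.1 (the agents.defaults.model key comes from the error message above):

```jsonc
// models.providers.ollama.models[...]
"id": "glm-47-flash:latest",
"name": "glm-47-flash:latest"

// agents.defaults
"model": "ollama/glm-47-flash:latest"
```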

9.3 Empty model replies

Check in order:

  1. Whether the logs show any errors
  2. Whether the model rejects certain parameters; e.g. kimi-k2.5 does not support supportsDeveloperRole
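For case 2, the compat block in the provider config (section 2.1) is where such parameters get switched off, e.g. for kimi-k2.5:

```jsonc
// models.providers.<provider>.models[...].compat
"compat": {
  "supportsDeveloperRole": false
}
```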

9.4 The lobster won't follow instructions

The model is a poor fit: a general-purpose LLM may simply lack the ability to execute instructions. Switching qwen3 for qwen3-coder fixed it.
