
ModelReady

Start using a local or Hugging Face model instantly, directly from chat.

Downloads: 958 · Stars: 0 · Version: 1.0.0 · Category: Developer Tools · Security check: passed

Skill description


name: modelready
description: Start using a local or Hugging Face model instantly, directly from chat.
metadata: {"openclaw":{"requires":{"bins":["bash", "curl"]}, "env": ["URL"]}}

ModelReady

ModelReady lets you start using a local or Hugging Face model immediately, without leaving clawdbot.

It turns a model into a running, OpenAI-compatible endpoint and allows you to chat with it directly from a conversation.
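Because the endpoint follows the OpenAI API convention, standard OpenAI clients and plain curl work against it. A minimal sketch, assuming the server runs on localhost and the usual `/v1/models` listing path (port 8010 is just an example):

```shell
#!/usr/bin/env bash
# List the models the local server exposes (OpenAI API shape).
# If nothing is listening yet, fall back to a hint instead of failing.
port=8010
out=$(curl -fs --max-time 2 "http://127.0.0.1:${port}/v1/models" \
      || echo "no server on port ${port}; start one with /modelready start")
echo "$out"
```

A successful response is a JSON list (`{"object": "list", "data": [...]}`) naming the served model.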

When to use

Use this skill when you want to:

  • Quickly start using a local or Hugging Face model
  • Chat with a locally running model
  • Test or interact with a model directly from chat

Commands

Start a model server

/modelready start repo=<path-or-hf-repo> port=<port> [tp=<n>] [dtype=<dtype>]

Examples:

/modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
/modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
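Under the hood the skill serves the model with vLLM (see Notes), so the second example above plausibly maps to a `vllm serve` invocation like the one sketched below. The `serve` subcommand and the `--port`, `--tensor-parallel-size`, and `--dtype` flags are real vLLM options; the exact mapping from skill arguments is an assumption.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the vLLM command behind:
#   /modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
repo="/home/user/models/Qwen-2.5"
port=8010
cmd="vllm serve ${repo} --port ${port} --tensor-parallel-size 4 --dtype bfloat16"
echo "$cmd"
```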

Chat with a running model

/modelready chat port=<port> text="<message>"

Example:

/modelready chat port=8010 text="hello"
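The chat command is a thin wrapper over the endpoint's chat-completions route. A minimal curl equivalent, assuming the default OpenAI-style path and that the `model` field matches the name the server was started with (the model name here is a placeholder):

```shell
#!/usr/bin/env bash
# Hypothetical curl equivalent of: /modelready chat port=8010 text="hello"
port=8010
payload='{"model": "Qwen/Qwen2.5-7B-Instruct", "messages": [{"role": "user", "content": "hello"}]}'
out=$(curl -fs --max-time 5 "http://127.0.0.1:${port}/v1/chat/completions" \
        -H "Content-Type: application/json" \
        -d "$payload" \
      || echo "no server listening on port ${port}")
echo "$out"
```

The reply text lives at `choices[0].message.content` in the JSON response.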

Check status or stop the server

/modelready status port=<port>
/modelready stop port=<port>
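A status check can be approximated by probing the server directly; vLLM's OpenAI-compatible server exposes a `/health` endpoint that returns HTTP 200 when the model is loaded. A sketch (port 8010 assumed):

```shell
#!/usr/bin/env bash
# Probe the server's health endpoint; prints "up" or "down" either way.
port=8010
if curl -fs --max-time 2 "http://127.0.0.1:${port}/health" >/dev/null; then
  status="up"
else
  status="down"
fi
echo "server on port ${port}: ${status}"
```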

Set default host or port

/modelready set_ip   ip=<host>
/modelready set_port port=<port>

Notes

  • The model is served locally using vLLM.
  • The exposed endpoint follows the OpenAI API format.
  • The server must be started before sending chat requests.
