🤖
LangCache Semantic Caching for OpenClaw
This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", "search the semantic cache", or "store responses in cache", or mentions Redis LangCache, semantic similarity caching, or LLM response caching. It provides integration with the Redis LangCache managed service for semantic caching of prompts and responses.
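The core idea behind semantic caching is that a lookup matches on meaning rather than exact text: a new prompt is embedded, compared against stored prompts by similarity, and a cached response is returned when the similarity clears a threshold. The sketch below illustrates that mechanism in plain Python with a toy bag-of-words "embedding" and cosine similarity; it is a conceptual illustration only, not the LangCache client API (LangCache computes real embeddings server-side in its managed service), and all names here (`SemanticCache`, `embed`, the 0.8 threshold) are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" for illustration; a real semantic
    # cache uses a proper embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Minimal in-process sketch of a semantic cache (hypothetical)."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, prompt, response)

    def store(self, prompt, response):
        self.entries.append((embed(prompt), prompt, response))

    def search(self, prompt):
        # Return the cached response for the most similar stored
        # prompt, or None when nothing clears the threshold.
        q = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[2]
        return None

cache = SemanticCache(threshold=0.8)
cache.store("what is the capital of france", "Paris")
print(cache.search("what is the capital of france?"))  # near-duplicate -> "Paris"
print(cache.search("how do I bake bread"))             # unrelated -> None
```

The threshold is the key tuning knob: set too low, unrelated prompts collide and return stale answers; set too high, paraphrases miss the cache and you pay for a fresh LLM call anyway.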