---
name: neural-memory
description: |
  Associative memory with spreading activation for persistent, intelligent recall.
  Use PROACTIVELY when:
  (1) You need to remember facts, decisions, errors, or context across sessions
  (2) User asks "do you remember..." or references past conversations
  (3) Starting a new task — inject relevant context from memory
  (4) After making decisions or encountering errors — store for future reference
  (5) User asks "why did X happen?" — trace causal chains through memory
  Zero LLM dependency. Neural graph with Hebbian learning, memory decay,
  contradiction detection, and temporal reasoning.
homepage: https://github.com/nhadaututtheky/neural-memory
metadata: {"openclaw":{"emoji":"brain","primaryEnv":"NEURALMEMORY_BRAIN","requires":{"bins":["python3"],"env":["NEURALMEMORY_BRAIN"]},"os":["darwin","linux","win32"],"install":[{"id":"pip","kind":"node","package":"neural-memory","bins":["nmem"],"label":"pip install neural-memory"}]}}
---

NeuralMemory — Associative Memory for AI Agents

A biologically-inspired memory system that uses spreading activation instead of keyword/vector search. Memories form a neural graph where neurons connect via 20 typed synapses. Frequently co-accessed memories strengthen their connections (Hebbian learning). Stale memories decay naturally. Contradictions are auto-detected.

Why not just vector search? Vector search finds documents similar to your query. NeuralMemory finds conceptually related memories through graph traversal — even when there's no keyword or embedding overlap. "What decision did we make about auth?" activates time + entity + concept neurons simultaneously and finds the intersection.
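To make the contrast concrete, here is a minimal, illustrative sketch of spreading activation over a typed graph. This is not NeuralMemory's implementation; the graph shape, node names, and parameters (`decay`, `threshold`) are invented for the example. Activation starts at the query's seed neurons, attenuates per hop and per synapse weight, and sums where paths converge, so a memory linked to several seeds outranks one linked to a single seed:

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.05, max_depth=3):
    """Toy spreading activation over a weighted graph.

    graph: {node: [(neighbor, weight), ...]}
    seeds: nodes activated directly by the query (activation 1.0).
    Activation attenuates by `decay` per hop and by synapse weight;
    pulses below `threshold` stop propagating. Converging pulses sum.
    """
    activation = {s: 1.0 for s in seeds}
    frontier = dict(activation)
    for _ in range(max_depth):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, []):
                pulse = act * decay * weight
                if pulse < threshold:
                    continue
                activation[neighbor] = activation.get(neighbor, 0.0) + pulse
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + pulse
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

# An "auth decision" query seeds a concept neuron and a time neuron;
# the memory at their intersection accumulates pulses from both.
graph = {
    "concept:auth": [("memory:use-jwt", 0.9)],
    "time:last-week": [("memory:use-jwt", 0.8), ("memory:fix-ci", 0.7)],
}
ranked = spread_activation(graph, ["concept:auth", "time:last-week"], max_depth=1)
```

In this toy run, `memory:use-jwt` receives pulses from both seeds and ranks above `memory:fix-ci`, which shares no keyword with the query and is reached only through the time neuron.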

Setup

1. Install NeuralMemory

pip install neural-memory
nmem init

This creates ~/.neuralmemory/ with a default brain and configures MCP automatically.

2. Configure MCP for OpenClaw

Add to your OpenClaw MCP configuration (~/.openclaw/mcp.json or project openclaw.json):

{
  "mcpServers": {
    "neural-memory": {
      "command": "python3",
      "args": ["-m", "neural_memory.mcp"],
      "env": {
        "NEURALMEMORY_BRAIN": "default"
      }
    }
  }
}

3. Verify

nmem stats

You should see brain statistics (neurons, synapses, fibers).

Tools Reference

Core Memory Tools

| Tool | Purpose | When to Use |
|---|---|---|
| `nmem_remember` | Store a memory | After decisions, errors, facts, insights, user preferences |
| `nmem_recall` | Query memories | Before tasks, when user references past context, "do you remember..." |
| `nmem_context` | Get recent memories | At session start, inject fresh context |
| `nmem_todo` | Quick TODO with 30-day expiry | Task tracking |

Intelligence Tools

| Tool | Purpose | When to Use |
|---|---|---|
| `nmem_auto` | Auto-extract memories from text | After important conversations — captures decisions, errors, TODOs automatically |
| `nmem_recall` (depth=3) | Deep associative recall | Complex questions requiring cross-domain connections |
| `nmem_habits` | Workflow pattern suggestions | When user repeats similar action sequences |

Management Tools

| Tool | Purpose | When to Use |
|---|---|---|
| `nmem_health` | Brain health diagnostics | Periodic checkup, before sharing brain |
| `nmem_stats` | Brain statistics | Quick overview of memory counts |
| `nmem_version` | Brain snapshots and rollback | Before risky operations, version checkpoints |
| `nmem_transplant` | Transfer memories between brains | Cross-project knowledge sharing |

Workflow

At Session Start

  1. Call nmem_context to inject recent memories into your awareness
  2. If user mentions a specific topic, call nmem_recall with that topic

During Conversation

  1. When a decision is made: nmem_remember with type="decision"
  2. When an error occurs: nmem_remember with type="error"
  3. When user states a preference: nmem_remember with type="preference"
  4. When asked about past events: nmem_recall with appropriate depth

At Session End

  1. Call nmem_auto with action="process" on important conversation segments
  2. This auto-extracts facts, decisions, errors, and TODOs

Examples

Remember a decision

nmem_remember(
  content="Use PostgreSQL for production, SQLite for development",
  type="decision",
  tags=["database", "infrastructure"],
  priority=8
)

Recall with spreading activation

nmem_recall(
  query="database configuration for production",
  depth=1,
  max_tokens=500
)

Returns memories found via graph traversal, not keyword matching. Related memories (e.g., "deploy uses Docker with pg_dump backups") surface even without shared keywords.

Trace causal chains

nmem_recall(
  query="why did the deployment fail last week?",
  depth=2
)

Follows CAUSED_BY and LEADS_TO synapses to trace cause-and-effect chains.
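The traversal idea can be sketched in a few lines. This is an illustrative toy, not NeuralMemory's code: the `edges` store, event names, and the `trace_causes` helper are all hypothetical, and a real traversal would also weigh synapse strength rather than taking the first cause:

```python
def trace_causes(edges, start, max_hops=5):
    """Walk CAUSED_BY synapses backwards from an event to its root causes.

    edges: {(source, synapse_type): [targets]} — a toy typed-synapse store.
    Returns the causal chain as a list, nearest cause first.
    """
    chain, current = [], start
    for _ in range(max_hops):
        causes = edges.get((current, "CAUSED_BY"), [])
        if not causes:
            break                 # reached a root cause
        current = causes[0]       # toy choice: follow the first cause
        chain.append(current)
    return chain

events = {
    ("deploy-failed", "CAUSED_BY"): ["migration-timeout"],
    ("migration-timeout", "CAUSED_BY"): ["unindexed-table"],
}
chain = trace_causes(events, "deploy-failed")
# chain == ["migration-timeout", "unindexed-table"]
```

Answering "why did X happen?" then amounts to narrating this chain in reverse, from root cause to observed failure.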

Auto-capture from conversation

nmem_auto(
  action="process",
  text="We decided to switch from REST to GraphQL because the frontend needs flexible queries. The migration will take 2 sprints. TODO: update API docs."
)

Automatically extracts: 1 decision, 1 fact, 1 TODO.

Key Features

  • Zero LLM dependency — Pure algorithmic: regex, graph traversal, Hebbian learning
  • Spreading activation — Associative recall through neural graph, not keyword/vector search
  • 20 synapse types — Temporal (BEFORE/AFTER), causal (CAUSED_BY/LEADS_TO), semantic (IS_A/HAS_PROPERTY), emotional (FELT/EVOKES), conflict (CONTRADICTS)
  • Memory lifecycle — Short-term → Working → Episodic → Semantic with Ebbinghaus decay
  • Contradiction detection — Auto-detects conflicting memories, deprioritizes outdated ones
  • Hebbian learning — "Neurons that fire together wire together" — memory improves with use
  • Temporal reasoning — Causal chain traversal, event sequences, temporal range queries
  • Brain versioning — Snapshot, rollback, diff brain state
  • Brain transplant — Transfer filtered knowledge between brains
  • Vietnamese + English — Full bilingual support for extraction and sentiment
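Two of the mechanisms above have standard textbook forms, shown here as a minimal sketch rather than NeuralMemory's actual update rules (the learning rate and stability constants are invented). Hebbian reinforcement nudges a co-activated synapse's weight toward 1.0, and the Ebbinghaus curve models retention as exponential decay, R = exp(-t / S), where a larger stability S means slower forgetting:

```python
import math

def hebbian_update(weight, lr=0.1):
    """Strengthen a synapse when its two neurons co-activate,
    saturating at 1.0 ("neurons that fire together wire together")."""
    return weight + lr * (1.0 - weight)

def retention(hours_since_access, stability=24.0):
    """Ebbinghaus forgetting curve: R = exp(-t / S).
    Long-lived (e.g. semantic) memories would get a larger S."""
    return math.exp(-hours_since_access / stability)

w = 0.5
for _ in range(3):            # three co-activations
    w = hebbian_update(w)     # 0.5 -> 0.55 -> 0.595 -> 0.6355

fresh = retention(1.0)        # accessed an hour ago: retention near 1
stale = retention(168.0)      # untouched for a week: retention near 0
```

Together these give the recall ranking its bias: frequently co-accessed memories accumulate strong synapses, while stale ones fade unless promoted to a longer-lived stage.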

Depth Levels

| Depth | Name | Speed | Use Case |
|---|---|---|---|
| 0 | Instant | <10ms | Quick facts, recent context |
| 1 | Context | ~50ms | Standard recall (default) |
| 2 | Habit | ~200ms | Pattern matching, workflow suggestions |
| 3 | Deep | ~500ms | Cross-domain associations, causal chains |

Notes

  • Memories are stored locally in SQLite at ~/.neuralmemory/brains/<brain>.db
  • No data is sent to external services (unless optional embedding provider is configured)
  • Brain isolation: each brain is independent, no cross-contamination
  • nmem_remember returns fiber_id for reference tracking
  • Priority scale: 0 (trivial) to 10 (critical), default 5
  • Memory types: fact, decision, preference, todo, insight, context, instruction, error, workflow, reference
