่ทณ่‡ณไธป่ฆๅ†…ๅฎน
ๅฐ้พ™่™พๅฐ้พ™่™พAI
๐Ÿค–

Fleet

CLI toolkit for a coordinator AI agent to monitor, manage, and dispatch tasks to its own fleet of OpenClaw agent gateways. Includes auto-setup of local binar...

ไธ‹่ฝฝ241
ๆ˜Ÿๆ ‡2
็‰ˆๆœฌ2.0.3
AI ๆ™บ่ƒฝไฝ“
ๅฎ‰ๅ…จ้€š่ฟ‡
โš™๏ธ่„šๆœฌ

ๆŠ€่ƒฝ่ฏดๆ˜Ž


name: fleet description: "CLI toolkit for a coordinator AI agent to monitor, manage, and dispatch tasks to its own fleet of OpenClaw agent gateways. Includes auto-setup of local binaries and PATH only within the operator's home directory. All actions are scoped to the local machine and authorized by the operator who installed this skill." triggers: "check agents, fleet status, run sitrep, health check, dispatch task, send task to agent, steer agent, watch agent, parallel tasks, kill agent, fleet log, backup config, show agents, fleet report, how many agents online, CI status, what skills installed"

Fleet ยท Multi-Agent Fleet Management

Intent, Authorization, and Trust

Who this skill is for: A coordinator AI agent that has been explicitly configured by its operator (a human who set up and runs the OpenClaw fleet). The operator installed this skill intentionally, knowing it enables autonomous fleet management.

What "authorized" means here: Installing this skill is the operator's explicit consent to all behaviors described below. The skill does not contact unknown external services, does not escalate privileges beyond the installing user's own home directory, and does not take any action outside the local fleet the operator already controls.

Why some behaviors appear aggressive: This skill is designed for an AI coordinator agent, not an end-user running it manually. The coordinator agent has full context of the fleet, so prompting for confirmation on setup steps would interrupt its work. The operator installed the skill specifically to enable autonomous operation. Every aggressive action listed has a specific operational purpose documented inline.

What this skill never does (hard limits):

  • Never installs software that requires root or sudo without explicit operator instruction
  • Never modifies files outside ~/.fleet/, ~/.local/bin/, and the shell rc files (~/.bashrc, ~/.zshrc, ~/.profile) of the installing user
  • Never contacts external services or sends data outside the local machine except to the operator's own configured endpoints and GitHub repos
  • Never exfiltrates configuration, tokens, or session data
  • Never modifies OpenClaw configuration without an explicit operator instruction
  • Never runs on systems it was not installed on by an operator
  • Never deletes or overwrites data it did not create itself (fleet config files, state files)
  • Never executes commands unrelated to fleet management or dependency setup

Trust boundary: All operations are local. Fleet communicates only with 127.0.0.1 (loopback) to reach the operator's own agent gateways. External calls are limited to GitHub API (for CI status, via the operator's authenticated gh CLI) and the operator's own configured endpoint URLs for health checks.


CLI toolkit for managing, dispatching to, and observing a fleet of OpenClaw agent gateways. The coordinator agent uses fleet to monitor employees, dispatch tasks, steer running sessions, watch live output, and review dispatch history.

Security Model

Fleet operates within a strictly defined trust boundary. This section documents it explicitly so any security scanner, reviewer, or operator can verify the scope.

Network scope

Fleet only makes HTTP connections to:

  1. 127.0.0.1 loopback ports โ€” the operator's own agent gateways (configured in ~/.fleet/config.json)
  2. GitHub API โ€” via the operator's authenticated gh CLI session, only for CI status reads on repos the operator explicitly listed
  3. URLs in endpoints[] โ€” health checks to URLs the operator explicitly configured

Fleet never opens listening ports, never accepts inbound connections, and never initiates connections to any address not in the operator's own config.

Filesystem scope

Fleet reads and writes only:

  • ~/.fleet/ โ€” fleet config, state, logs, backups (all created by fleet itself)
  • ~/.local/bin/fleet โ€” a symlink to the fleet binary (created by fleet init, standard XDG location)
  • Shell rc files (~/.bashrc, ~/.zshrc, ~/.profile) โ€” only to append export PATH="$HOME/.local/bin:$PATH" if it is not already present

Fleet never reads or writes outside the installing user's home directory. Fleet never touches OpenClaw config files unless explicitly commanded by the operator.

Credential scope

Fleet reads auth tokens from ~/.fleet/config.json only. These tokens belong to the operator's own agents. Fleet never transmits them outside loopback. Fleet never reads other credential stores (ssh keys, cloud credentials, browser storage, OS keychain).

Privilege scope

Fleet never calls sudo. Fleet never requests elevated permissions. All install commands that require system-level package managers (e.g., apt-get, brew) are run as the current user and listed explicitly in the compatibility playbook above.


Quick Reference

SituationAction
Send a task to an agentfleet task coder "add pagination to /api/spots"
Watch what an agent is doingfleet watch coder
Watch what the coordinator is doingfleet watch coordinator
Send a mid-task correctionfleet steer coder "also add rate limiting"
Stop an agent's current taskfleet kill coder
See all recent dispatches and outcomesfleet log
Decompose a task across multiple agentsfleet parallel "<task>" --dry-run (plan first)
Check if all agents are alivefleet agents
Something feels wrong, need full picturefleet sitrep
Quick health checkfleet health
Check CI across all reposfleet ci
Check CI for specific repofleet ci <name>
See what skills are installedfleet skills
Backup everything before a changefleet backup
Restore after something brokefleet restore
First time setupfleet init
User asks "how's the fleet?"fleet agents, summarize
User asks "what changed?"fleet sitrep, report deltas
Scheduled morning reportfleet sitrep 12 in cron
Before deployingfleet health + fleet ci

Auto-Setup

Purpose: These steps ensure the fleet binary is usable on first run. They only touch files the installing user already owns. No root access is needed or used.

What this modifies: Executable permission on the fleet binary, a symlink in ~/.local/bin/ (the standard user-local bin directory), and PATH in the user's own shell rc file if ~/.local/bin is not already on PATH.

Why no confirmation prompt by default: The operator installed this skill specifically to have the coordinator handle setup autonomously. The consent to auto-setup is given at install time.

Explicit opt-out: If the operator has NOT explicitly authorized auto-setup (e.g., they are evaluating the skill before full deployment), stop here and ask before running the steps below. The question to ask: "Fleet needs to symlink its binary to ~/.local/bin/fleet and may add a PATH export to your shell rc file. Is that OK?"

Shell rc modification rule: Only add the PATH export if ~/.local/bin is not already on PATH. Check first with echo $PATH | grep -q "$HOME/.local/bin". If it is already there, skip the rc modification entirely.

# 1. Make the binary executable โ€” only affects the fleet script itself
chmod +x "$(dirname "$0")/bin/fleet"

# 2. Symlink into ~/.local/bin โ€” user-owned directory, standard XDG location
ln -sf "$(dirname "$0")/bin/fleet" "$HOME/.local/bin/fleet"

# 3. Add to PATH only if not already present
if ! echo "$PATH" | grep -q "$HOME/.local/bin"; then
    export PATH="$HOME/.local/bin:$PATH"
    # Append to shell rc only if not already there
    for rc in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.profile"; do
        [ -f "$rc" ] && grep -q "local/bin" "$rc" || echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$rc"
    done
fi

# 4. Create initial config if none exists โ€” only writes to ~/.fleet/
if [ ! -f "$HOME/.fleet/config.json" ]; then
    fleet init
fi

# 5. Verify
fleet --version

fleet init auto-detects running gateways (by scanning loopback ports the operator's fleet uses), discovers the workspace from the existing OpenClaw config, creates ~/.fleet/config.json, symlinks the binary, and adds ~/.local/bin to PATH in shell rc files only if it is not already present. All changes are scoped to the installing user's home directory.

After init, populate ~/.fleet/config.json with agent names, tokens, repo references, and endpoint URLs from your knowledge of the running fleet. The coordinator agent already knows this information.

Via ClawHub

clawhub install fleet

Manual

git clone https://github.com/oguzhnatly/fleet.git
fleet/bin/fleet init

Configuration

Fleet reads ~/.fleet/config.json. Generate one automatically or create manually.

Auto-Detection Setup

fleet init

This scans for running OpenClaw gateways, detects your workspace, and creates a starter config.

Manual Configuration

Create ~/.fleet/config.json:

{
  "workspace": "~/workspace",
  "gateway": {
    "port": 48391,
    "name": "coordinator",
    "role": "coordinator",
    "model": "claude-opus-4"
  },
  "agents": [
    {
      "name": "coder",
      "port": 48520,
      "role": "implementation",
      "model": "codex",
      "token": "your-agent-token"
    }
  ],
  "endpoints": [
    { "name": "website", "url": "https://example.com" }
  ],
  "repos": [
    { "name": "frontend", "repo": "myorg/frontend" }
  ]
}

Configuration Reference

FieldTypeRequiredDescription
workspacestringYesPath to main workspace directory
gateway.portnumberYesMain coordinator gateway port
gateway.namestringNoDisplay name (default: "coordinator")
gateway.rolestringNoRole description
gateway.modelstringNoModel identifier
agents[]arrayNoEmployee agent gateways
agents[].namestringYesUnique agent identifier
agents[].portnumberYesGateway port number
agents[].rolestringNoWhat this agent does
agents[].modelstringNoModel used
agents[].tokenstringNoAuth token for API calls
endpoints[]arrayNoURLs to health check
endpoints[].namestringYesDisplay name
endpoints[].urlstringYesFull URL to check
endpoints[].expectedStatusnumberNoExpected HTTP code (default: 200)
endpoints[].timeoutnumberNoTimeout in seconds (default: 6)
repos[]arrayNoGitHub repos for CI monitoring
repos[].namestringYesDisplay name
repos[].repostringYesGitHub owner/repo format
services[]arrayNoSystemd service names to check
linear.teams[]arrayNoLinear team keys for ticket counts
linear.apiKeyEnvstringNoEnv var name for API key
skillsDirstringNoPath to ClawHub skills directory

Environment Variables

VariableDescriptionDefault
FLEET_CONFIGPath to config file~/.fleet/config.json
FLEET_WORKSPACEOverride workspace pathConfig workspace value
FLEET_STATE_DIRState persistence directory~/.fleet/state
NO_COLORDisable colored output when set(unset)

Commands ยท Detailed Reference

fleet task <agent> "<prompt>"

Dispatches a task to a named agent and streams the response live.

Requires: Agent token in ~/.fleet/config.json under agents[].token.

Options:

  • --type code|review|research|deploy|qa โ€” override task type (auto-inferred from prompt if omitted)
  • --timeout <minutes> โ€” response timeout (default: 30)
  • --no-wait โ€” fire and forget, return immediately

Output:

Fleet Task
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  Agent     coder (port 48520)
  Type      code
  Task ID   a1b2c3d4
  Timeout   30m

  add pagination to /api/spots endpoint

  โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  [streams response in real time]
  โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  โœ…  Task complete  (a1b2c3d4)

Important: Task dispatch uses x-openclaw-session-key: fleet-{agent} header. All tasks to the same agent share a session, so the agent has context of prior tasks.

fleet steer <agent> "<message>"

Sends a mid-session correction to an agent that is currently working on a task. Routes to the same session as fleet task, so the agent has full context.

Output:

Fleet Steer
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  Agent    coder
  Session  fleet-coder

  โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  [agent response to correction]
  โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  โœ…  Steered.

fleet watch <agent> [--all]

Live session tail. Reads directly from the agent's session transcript file on disk (more reliable than the sessions API).

  • Default: watches fleet-{agent} session (tasks dispatched via fleet task)
  • --all: watches agent's full main session history
  • coordinator: always watches the main coordinator session

Output:

Watching coder
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  Session: agent:main:fleet-coder
  File: b80eb2e5.jsonl ยท polling every 3s ยท Ctrl+C to stop

  Last 2 message(s):

  you             16:37 UTC
  add pagination to /api/spots

  coder           16:37 UTC
  Starting with the cursor-based approach...

Important: fleet watch coder shows nothing if no task has been dispatched yet. Run fleet task coder "<prompt>" first to create the fleet session. Use fleet watch coder --all to see the agent's full history.

fleet kill <agent> [--force]

Sends a graceful stop signal to the agent's fleet session. The agent acknowledges and archives the session. Marks all pending log entries for that agent as steered.

fleet parallel "<task>" [--dry-run]

Decomposes a high-level task into subtasks by type, assigns each to the right agent, and dispatches all concurrently.

Always use --dry-run first to review the decomposition plan before executing. Requires confirmation before actual dispatch.

Output with --dry-run:

Fleet Parallel
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  Task: research competitor pricing and build a pricing page

  Execution plan:

  1. researcher    [research]
     Research phase: ...

  2. coder         [code]
     Implementation: ...

  โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  2 subtask(s) ready to dispatch in parallel.

  โ„น๏ธ  Dry run complete. Remove --dry-run to execute.

fleet log [--agent <name>] [--outcome <status>] [--limit <n>] [--all]

Shows the dispatch history for all fleet tasks. Filterable by agent, outcome, and count.

Outcomes: success, failure, timeout, steered, pending

Output:

Fleet Log  3 entries

  a1b2c3d4  coder        code      success  12m17s
  2026-03-01 15:10  add pagination to /api/spots...

fleet health

Checks the main gateway and all configured endpoints and systemd services.

When to use: Quick operational check, before deployments, troubleshooting.

Output:

Fleet Health Check
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  โœ… coordinator (:48391) 12ms
  
Endpoints
  โœ… website (200) 234ms
  โœ… api (200) 89ms
  โŒ docs UNREACHABLE

Services
  โœ… openclaw-gateway
  โŒ openclaw-gateway-coder (inactive)

Status codes:

  • โœ… ยท healthy (HTTP 200 or expected status)
  • โŒ ยท unhealthy (wrong status, unreachable, or error)
  • Shows response time in milliseconds

fleet agents

Shows all configured agent gateways with live health status, response time, model, and role.

When to use: User asks about agents, debugging agent issues, morning check.

Output:

Agent Fleet
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
  โฌข coordinator      coordinator      claude-opus-4               :48391 online 13ms
  
  โฌข coder            implementation   codex                       :48520 online 8ms
  โฌข reviewer         code-review      codex                       :48540 online 9ms
  โฌก deployer         deployment       codex                       :48560 unreachable
  โฌข qa               quality-assurance codex                      :48580 online 7ms

Status indicators:

  • โฌข green = online
  • โฌก red = unreachable or error
  • โฌก yellow = auth failed (token issue)

fleet sitrep [hours]

The flagship command. Generates a structured status report with delta tracking.

When to use: Morning reports, scheduled crons, "what changed?" questions, incident response.

Arguments:

  • hours ยท lookback period (default: 4). Only affects display context, deltas are always vs last run.

What it checks:

  1. All agent gateways (online/offline)
  2. CI status for all configured repos
  3. All configured endpoint health
  4. Linear ticket counts per team
  5. VPS resource usage (memory, disk)

Delta tracking: State is saved to ~/.fleet/state/sitrep.json. Each run compares against the previous and only shows what CHANGED.

Output:

SITREP | 2026-02-23 08:00 UTC | vs 2026-02-22 23:00
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€

Agents  5/6 online
  โฌข coordinator
  โฌข coder
  โฌข reviewer
  โฌก deployer
  โฌข qa
  โฌข researcher

CI
  โœ… frontend
  โŒ backend
  โœ… mobile

Services
  โœ… website (200)
  โœ… api (200)

CHANGED
  โ†’ agent deployer: online โ†’ offline
  โ†’ CI backend: green โ†’ RED
  โ†’ OZZ tickets: +3

Resources  mem 45% | disk 7%
Linear    OZZ: 12 open | FRI: 8 open

Cron integration example:

{
  "schedule": { "kind": "cron", "expr": "0 8,12 * * *", "tz": "Europe/London" },
  "payload": {
    "kind": "agentTurn",
    "message": "Run fleet sitrep and post results to the team channel"
  }
}

fleet ci [filter]

Shows GitHub CI status for all configured repos, with the last 3 runs per repo.

When to use: Before pushing, after deployments, investigating failures.

Requirements: gh CLI must be installed and authenticated.

Arguments:

  • filter ยท optional, filters repos by name (case-insensitive)

Output:

CI Status
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€

  frontend (myorg/frontend)
    โœ… Update homepage (main) passed 2026-02-23T08:00
    โœ… Fix footer (main) passed 2026-02-23T07:30
    โœ… Add banner (main) passed 2026-02-23T07:00

  backend (myorg/backend)
    โŒ Add endpoint (main) failed 2026-02-23T08:15
    โœ… Fix auth (main) passed 2026-02-23T07:45

fleet skills

Lists all installed ClawHub skills with version, description, and capabilities.

When to use: Inventory check, "what can I do?", planning.

Output:

Installed Skills
โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
from ~/workspace/skills

  โ— fleet v1.0.0 [scripts]
    Multi-agent fleet management CLI for OpenClaw
  โ— ontology v0.1.2 [scripts]
    Typed knowledge graph for structured agent memory
  โ— self-improving-agent v1.0.11 [scripts, hooks]
    Captures learnings, errors, and corrections

fleet backup

Backs up OpenClaw config, cron jobs, fleet config, and auth profiles.

When to use: Before major changes, before updates, periodic safety net.

Backup location: ~/.fleet/backups/<timestamp>/

fleet restore

Restores from the latest backup.

When to use: After a bad config change, after a failed update.

Note: Requires gateway restart after restore: openclaw gateway restart

fleet init

Interactive setup that auto-detects running gateways and creates initial config.

When to use: First time setup, new machine, new fleet.

Auto-detection:

  • Scans common gateway ports (48391, then every 20 ports up to 48600)
  • Reads workspace from ~/.openclaw/openclaw.json
  • Discovers running employee gateways

Fleet Patterns

Fleet supports multiple organizational architectures. Choose based on your needs:

Solo Empire

One coordinator, 2-5 employees. Best for indie hackers and solo founders.

         Coordinator (Opus)
        /     |      \
    Coder  Reviewer  Deployer
   (Codex)  (Codex)   (Codex)

Development Team

Team leads coordinating specialized developers. Best for complex products.

              Orchestrator (Opus)
            /        |         \
      FE Lead     BE Lead     QA Lead
     (Sonnet)    (Sonnet)    (Sonnet)
       / \          |           |
    Dev1  Dev2    Dev1       Tester

Research Lab

Specialized agents for knowledge work. Best for content and analysis.

            Director (Opus)
          /     |      \       \
    Scraper  Analyst  Writer  Fact-Check

See examples/ in the repo for ready-to-use config files for each pattern.

Troubleshooting

Common Issues

ProblemCauseFix
fleet: command not foundNot in PATHln -sf path/to/fleet/bin/fleet ~/.local/bin/fleet
No config foundMissing config fileRun fleet init or create ~/.fleet/config.json
All agents show "unreachable"Agents not runningStart agent gateways first
CI shows "error"gh not authenticatedRun gh auth login
SITREP shows "first run"No previous stateNormal on first run, deltas appear on second
Agent shows "auth failed"Wrong token in configUpdate token in config to match agent's auth

Debugging

# Check if fleet can find its config
echo $FLEET_CONFIG
cat ~/.fleet/config.json

# Check if agents are reachable directly
curl -s http://127.0.0.1:48520/health

# Check state directory
ls -la ~/.fleet/state/

# Run with verbose output
bash -x fleet health

Architecture

Fleet is modular. Each component has a single responsibility:

fleet/
โ”œโ”€โ”€ bin/fleet              # Entry point ยท command router only
โ”œโ”€โ”€ lib/
โ”‚   โ”œโ”€โ”€ core/
โ”‚   โ”‚   โ”œโ”€โ”€ config.sh      # Config loading and JSON parsing
โ”‚   โ”‚   โ”œโ”€โ”€ output.sh      # Colors, formatting, HTTP helpers
โ”‚   โ”‚   โ””โ”€โ”€ state.sh       # Delta state persistence
โ”‚   โ””โ”€โ”€ commands/           # One file per command
โ”‚       โ”œโ”€โ”€ agents.sh
โ”‚       โ”œโ”€โ”€ backup.sh
โ”‚       โ”œโ”€โ”€ ci.sh
โ”‚       โ”œโ”€โ”€ health.sh
โ”‚       โ”œโ”€โ”€ init.sh
โ”‚       โ”œโ”€โ”€ sitrep.sh
โ”‚       โ””โ”€โ”€ skills.sh
โ”œโ”€โ”€ templates/configs/      # Config templates (minimal + full)
โ”œโ”€โ”€ examples/               # Pattern examples with configs
โ”œโ”€โ”€ docs/                   # Extended documentation
โ””โ”€โ”€ tests/                  # Integration tests

Adding new commands: Create lib/commands/yourcommand.sh, define cmd_yourcommand(), add a case in bin/fleet.

Integration with Other Skills

With ontology skill

If the ontology skill is installed, fleet can query the knowledge graph:

# After installing: clawhub install ontology
fleet ontology          # Summary of all entities
fleet ontology Project  # List all projects

With self-improving-agent skill

Log fleet operational learnings to .learnings/:

  • Agent downtime patterns โ†’ ERRORS.md
  • CI failure patterns โ†’ LEARNINGS.md
  • Configuration improvements โ†’ promote to TOOLS.md

Making Fleet Work Anywhere

Purpose of this section: Ensure the coordinator agent can achieve fleet visibility on any system the operator runs it on, regardless of platform. The goal is operational visibility โ€” not running a specific script.

Authorization scope: Dependency installation commands listed here use the package manager the operator's system already has. None require root unless the operator's package manager requires it (e.g., apt-get on some systems). Where root is needed, the command is listed explicitly so the operator can review it. The coordinator should prefer user-level installs (--user, pyenv, $HOME/.local) wherever possible.

Red line: If a dependency install step requires root and the coordinator is not certain the operator has authorized it, ask before running. This is the one exception to autonomous operation in this skill.

Fleet needs three things: bash 4+, python3 3.10+, and curl. Below is the complete compatibility playbook. Work through it top to bottom. If one approach fails, try the next.

Step 1: Check What You Have

# Run these first. Know your environment.
bash --version 2>/dev/null | head -1
python3 --version 2>/dev/null
curl --version 2>/dev/null | head -1
uname -s  # Linux, Darwin, FreeBSD, etc.
cat /etc/os-release 2>/dev/null | head -3  # Distro info

Step 2: Install Missing Dependencies

bash 4+

PlatformCommand
macOS (ships bash 3.2)brew install bash ยท then fleet auto-detects /opt/homebrew/bin/bash via #!/usr/bin/env bash
Ubuntu/Debiansudo apt-get update && sudo apt-get install -y bash (usually already 5.x)
Alpineapk add bash (Alpine uses busybox ash by default)
RHEL/CentOS/Fedorasudo dnf install bash or sudo yum install bash
Archsudo pacman -S bash
FreeBSDpkg install bash
Windows (WSL)bash is included in WSL. If using Git Bash, it ships bash 4+.
Docker (minimal images)RUN apk add --no-cache bash python3 curl (Alpine) or RUN apt-get update && apt-get install -y bash python3 curl (Debian)
No root accessDownload bash source, compile locally: ./configure --prefix=$HOME/.local && make && make install ยท then export PATH="$HOME/.local/bin:$PATH"
NixOSnix-env -iA nixpkgs.bash

After installing on macOS: Verify with which bash that it resolves to Homebrew bash (/opt/homebrew/bin/bash on Apple Silicon, /usr/local/bin/bash on Intel). The #!/usr/bin/env bash shebang picks up whichever bash is first in PATH.

python3 3.10+

PlatformCommand
macOSbrew install python@3.12
Ubuntu/Debiansudo apt-get install -y python3
Alpineapk add python3
RHEL/CentOSsudo dnf install python3
Archsudo pacman -S python
No root accessUse pyenv: curl https://pyenv.run | bash && pyenv install 3.12 && pyenv global 3.12

Fleet uses python3 only for JSON parsing. No pip packages needed. No virtual environment needed. Just the interpreter.

curl

PlatformCommand
Almost everywherePre-installed. Check with curl --version.
Alpineapk add curl
Minimal Dockerapt-get install -y curl or apk add curl

gh CLI (optional, for fleet ci)

PlatformCommand
macOSbrew install gh
Ubuntu/DebianSee cli.github.com/packages
Anyconda install gh --channel conda-forge

If gh is not available and cannot be installed, fleet ci will show a warning but all other commands work fine. Do not let a missing gh block fleet setup.

Step 3: If You Truly Cannot Get bash 4+

This should be rare, but if you're on a locked-down system where you cannot install bash 4+:

  1. Try running with bash 3.2 anyway. Fleet's version check will reject it, but most commands use POSIX-compatible features. You can bypass the check by commenting out lines 9-20 in bin/fleet (the version guard โ€” this guard exists to give clear error messages, not for security). Many commands will still work. Test each one and report results.

  2. Translate on the fly. You are an AI agent. If a specific command fails due to a bash 4+ feature, read the failing script, understand what it does, and replicate the logic using tools already available (awk, sed, python3, perl). For example:

    • fleet health is curl to the operator's own endpoints. You can run that directly.
    • fleet agents is curl to each loopback gateway port. Replicable in python3 alone.
    • fleet sitrep combines health checks, CI queries, and delta tracking. Each piece is independent.
  3. Create a wrapper. If python3 is available but bash 4+ is not, write a thin python3 wrapper that reimplements the critical commands. The logic is: HTTP health checks to loopback ports + JSON parsing + formatted output. All of this stays local.

  4. Report what works and what does not. Tell the operator: "Fleet is running with limited compatibility. Commands X, Y, Z work. Command W needs bash 4+ for [specific feature]. Here is what I did instead."

Red line: Do not modify the fleet source scripts to work around security controls. Only comment out the version guard (the error message block), not any authentication or scope-limiting code.

Step 4: Environments Without a Shell (Extremely Rare)

If you're running in a sandboxed environment with no shell access:

  • Use the fleet logic directly. Fleet commands are: (a) curl to loopback gateway ports the operator controls, (b) parse JSON responses, (c) track state in a local file. Any HTTP client and JSON parser can do this.
  • The fleet config format is documented in docs/configuration.md. Read ~/.fleet/config.json, iterate over agents, hit their /health endpoints, format the output. That is fleet agents in 10 lines of any language.
  • State tracking for SITREP: Read/write ~/.fleet/state/sitrep.json to track deltas. Format: {"agents": {...}, "ci": {...}, "endpoints": {...}, "timestamp": "..."}.

The Goal

Operational visibility for the operator, not running a specific bash script. Fleet is the fastest path, but if the environment resists, adapt using the same local-only, operator-authorized scope. The operator installed this skill to have fleet management work. Achieve that goal through whatever means are available within the authorized scope.

Best Practices

  1. Run fleet sitrep in morning crons ยท catch overnight issues before they escalate
  2. Run fleet backup before gateway updates ยท easy rollback if something breaks
  3. Use fleet health before deployments ยท ensure everything is green first
  4. Check fleet agents after config changes ยท verify agents came back online
  5. Filter fleet ci by repo ยท avoid noise when debugging a specific service
  6. Keep tokens in config, keys in env vars ยท tokens are local, API keys are sensitive

ๅฆ‚ไฝ•ไฝฟ็”จใ€ŒFleetใ€๏ผŸ

  1. ๆ‰“ๅผ€ๅฐ้พ™่™พAI๏ผˆWeb ๆˆ– iOS App๏ผ‰
  2. ็‚นๅ‡ปไธŠๆ–นใ€Œ็ซ‹ๅณไฝฟ็”จใ€ๆŒ‰้’ฎ๏ผŒๆˆ–ๅœจๅฏน่ฏๆก†ไธญ่พ“ๅ…ฅไปปๅŠกๆ่ฟฐ
  3. ๅฐ้พ™่™พAI ไผš่‡ชๅŠจๅŒน้…ๅนถ่ฐƒ็”จใ€ŒFleetใ€ๆŠ€่ƒฝๅฎŒๆˆไปปๅŠก
  4. ็ป“ๆžœๅณๆ—ถๅ‘ˆ็Žฐ๏ผŒๆ”ฏๆŒ็ปง็ปญๅฏน่ฏไผ˜ๅŒ–

็›ธๅ…ณๆŠ€่ƒฝ