Fleet
CLI toolkit for a coordinator AI agent to monitor, manage, and dispatch tasks to its own fleet of OpenClaw agent gateways. Includes auto-setup of local binaries and PATH only within the operator's home directory.
Skill Description
name: fleet
description: "CLI toolkit for a coordinator AI agent to monitor, manage, and dispatch tasks to its own fleet of OpenClaw agent gateways. Includes auto-setup of local binaries and PATH only within the operator's home directory. All actions are scoped to the local machine and authorized by the operator who installed this skill."
triggers: "check agents, fleet status, run sitrep, health check, dispatch task, send task to agent, steer agent, watch agent, parallel tasks, kill agent, fleet log, backup config, show agents, fleet report, how many agents online, CI status, what skills installed"
Fleet ยท Multi-Agent Fleet Management
Intent, Authorization, and Trust
Who this skill is for: A coordinator AI agent that has been explicitly configured by its operator (a human who set up and runs the OpenClaw fleet). The operator installed this skill intentionally, knowing it enables autonomous fleet management.
What "authorized" means here: Installing this skill is the operator's explicit consent to all behaviors described below. The skill does not contact unknown external services, does not escalate privileges beyond the installing user's own home directory, and does not take any action outside the local fleet the operator already controls.
Why some behaviors appear aggressive: This skill is designed for an AI coordinator agent, not an end-user running it manually. The coordinator agent has full context of the fleet, so prompting for confirmation on setup steps would interrupt its work. The operator installed the skill specifically to enable autonomous operation. Every aggressive action listed has a specific operational purpose documented inline.
What this skill never does (hard limits):
- Never installs software that requires root or sudo without explicit operator instruction
- Never modifies files outside ~/.fleet/, ~/.local/bin/, and the shell rc files (~/.bashrc, ~/.zshrc, ~/.profile) of the installing user
- Never contacts external services or sends data outside the local machine, except to the operator's own configured endpoints and GitHub repos
- Never exfiltrates configuration, tokens, or session data
- Never modifies OpenClaw configuration without an explicit operator instruction
- Never runs on systems it was not installed on by an operator
- Never deletes or overwrites data it did not create itself (fleet config files, state files)
- Never executes commands unrelated to fleet management or dependency setup
Trust boundary: All operations are local. Fleet communicates only with 127.0.0.1 (loopback) to reach the operator's own agent gateways. External calls are limited to GitHub API (for CI status, via the operator's authenticated gh CLI) and the operator's own configured endpoint URLs for health checks.
CLI toolkit for managing, dispatching to, and observing a fleet of OpenClaw agent gateways. The coordinator agent uses fleet to monitor employees, dispatch tasks, steer running sessions, watch live output, and review dispatch history.
Security Model
Fleet operates within a strictly defined trust boundary. This section documents it explicitly so any security scanner, reviewer, or operator can verify the scope.
Network scope
Fleet only makes HTTP connections to:
- 127.0.0.1 loopback ports – the operator's own agent gateways (configured in ~/.fleet/config.json)
- GitHub API – via the operator's authenticated gh CLI session, only for CI status reads on repos the operator explicitly listed
- URLs in endpoints[] – health checks to URLs the operator explicitly configured
Fleet never opens listening ports, never accepts inbound connections, and never initiates connections to any address not in the operator's own config.
Filesystem scope
Fleet reads and writes only:
- ~/.fleet/ – fleet config, state, logs, backups (all created by fleet itself)
- ~/.local/bin/fleet – a symlink to the fleet binary (created by fleet init, standard XDG location)
- Shell rc files (~/.bashrc, ~/.zshrc, ~/.profile) – only to append export PATH="$HOME/.local/bin:$PATH" if it is not already present
Fleet never reads or writes outside the installing user's home directory. Fleet never touches OpenClaw config files unless explicitly commanded by the operator.
Credential scope
Fleet reads auth tokens from ~/.fleet/config.json only. These tokens belong to the operator's own agents. Fleet never transmits them outside loopback. Fleet never reads other credential stores (ssh keys, cloud credentials, browser storage, OS keychain).
Privilege scope
Fleet never calls sudo. Fleet never requests elevated permissions. All install commands that require system-level package managers (e.g., apt-get, brew) are run as the current user and listed explicitly in the compatibility playbook below.
Quick Reference
| Situation | Action |
|---|---|
| Send a task to an agent | fleet task coder "add pagination to /api/spots" |
| Watch what an agent is doing | fleet watch coder |
| Watch what the coordinator is doing | fleet watch coordinator |
| Send a mid-task correction | fleet steer coder "also add rate limiting" |
| Stop an agent's current task | fleet kill coder |
| See all recent dispatches and outcomes | fleet log |
| Decompose a task across multiple agents | fleet parallel "<task>" --dry-run (plan first) |
| Check if all agents are alive | fleet agents |
| Something feels wrong, need full picture | fleet sitrep |
| Quick health check | fleet health |
| Check CI across all repos | fleet ci |
| Check CI for specific repo | fleet ci <name> |
| See what skills are installed | fleet skills |
| Backup everything before a change | fleet backup |
| Restore after something broke | fleet restore |
| First time setup | fleet init |
| User asks "how's the fleet?" | fleet agents, summarize |
| User asks "what changed?" | fleet sitrep, report deltas |
| Scheduled morning report | fleet sitrep 12 in cron |
| Before deploying | fleet health + fleet ci |
Auto-Setup
Purpose: These steps ensure the fleet binary is usable on first run. They only touch files the installing user already owns. No root access is needed or used.
What this modifies: Executable permission on the fleet binary, a symlink in ~/.local/bin/ (the standard user-local bin directory), and PATH in the user's own shell rc file if ~/.local/bin is not already on PATH.
Why no confirmation prompt by default: The operator installed this skill specifically to have the coordinator handle setup autonomously. The consent to auto-setup is given at install time.
Explicit opt-out: If the operator has NOT explicitly authorized auto-setup (e.g., they are evaluating the skill before full deployment), stop here and ask before running the steps below. The question to ask: "Fleet needs to symlink its binary to ~/.local/bin/fleet and may add a PATH export to your shell rc file. Is that OK?"
Shell rc modification rule: Only add the PATH export if ~/.local/bin is not already on PATH. Check first with echo $PATH | grep -q "$HOME/.local/bin". If it is already there, skip the rc modification entirely.
# 1. Make the binary executable โ only affects the fleet script itself
chmod +x "$(dirname "$0")/bin/fleet"
# 2. Symlink into ~/.local/bin โ user-owned directory, standard XDG location
ln -sf "$(dirname "$0")/bin/fleet" "$HOME/.local/bin/fleet"
# 3. Add to PATH only if not already present
if ! echo "$PATH" | grep -q "$HOME/.local/bin"; then
export PATH="$HOME/.local/bin:$PATH"
# Append to shell rc only if not already there
for rc in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.profile"; do
  if [ -f "$rc" ] && ! grep -q "local/bin" "$rc"; then
    echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$rc"
  fi
done
fi
# 4. Create initial config if none exists โ only writes to ~/.fleet/
if [ ! -f "$HOME/.fleet/config.json" ]; then
fleet init
fi
# 5. Verify
fleet --version
fleet init auto-detects running gateways (by scanning loopback ports the operator's fleet uses), discovers the workspace from the existing OpenClaw config, creates ~/.fleet/config.json, symlinks the binary, and adds ~/.local/bin to PATH in shell rc files only if it is not already present. All changes are scoped to the installing user's home directory.
After init, populate ~/.fleet/config.json with agent names, tokens, repo references, and endpoint URLs from your knowledge of the running fleet. The coordinator agent already knows this information.
Via ClawHub
clawhub install fleet
Manual
git clone https://github.com/oguzhnatly/fleet.git
fleet/bin/fleet init
Configuration
Fleet reads ~/.fleet/config.json. Generate one automatically or create manually.
Auto-Detection Setup
fleet init
This scans for running OpenClaw gateways, detects your workspace, and creates a starter config.
Manual Configuration
Create ~/.fleet/config.json:
{
"workspace": "~/workspace",
"gateway": {
"port": 48391,
"name": "coordinator",
"role": "coordinator",
"model": "claude-opus-4"
},
"agents": [
{
"name": "coder",
"port": 48520,
"role": "implementation",
"model": "codex",
"token": "your-agent-token"
}
],
"endpoints": [
{ "name": "website", "url": "https://example.com" }
],
"repos": [
{ "name": "frontend", "repo": "myorg/frontend" }
]
}
Configuration Reference
| Field | Type | Required | Description |
|---|---|---|---|
| workspace | string | Yes | Path to main workspace directory |
| gateway.port | number | Yes | Main coordinator gateway port |
| gateway.name | string | No | Display name (default: "coordinator") |
| gateway.role | string | No | Role description |
| gateway.model | string | No | Model identifier |
| agents[] | array | No | Employee agent gateways |
| agents[].name | string | Yes | Unique agent identifier |
| agents[].port | number | Yes | Gateway port number |
| agents[].role | string | No | What this agent does |
| agents[].model | string | No | Model used |
| agents[].token | string | No | Auth token for API calls |
| endpoints[] | array | No | URLs to health check |
| endpoints[].name | string | Yes | Display name |
| endpoints[].url | string | Yes | Full URL to check |
| endpoints[].expectedStatus | number | No | Expected HTTP code (default: 200) |
| endpoints[].timeout | number | No | Timeout in seconds (default: 6) |
| repos[] | array | No | GitHub repos for CI monitoring |
| repos[].name | string | Yes | Display name |
| repos[].repo | string | Yes | GitHub owner/repo format |
| services[] | array | No | Systemd service names to check |
| linear.teams[] | array | No | Linear team keys for ticket counts |
| linear.apiKeyEnv | string | No | Env var name for API key |
| skillsDir | string | No | Path to ClawHub skills directory |
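To make the endpoint fields concrete, here is a minimal sketch of how one endpoints[] entry could drive a curl probe. check_endpoint is a hypothetical helper written for illustration, not fleet's actual implementation; the parameter names mirror the config fields above.

```shell
# Hypothetical helper (not fleet's own code) showing how name, url,
# expectedStatus, and timeout from an endpoints[] entry map to a curl probe.
check_endpoint() {
  name=$1; url=$2; expected=${3:-200}; timeout=${4:-6}
  # -w '%{http_code}' prints only the status code; --max-time enforces the timeout
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time "$timeout" "$url")
  if [ "$status" = "$expected" ]; then
    echo "OK $name ($status)"
  else
    echo "FAIL $name (got $status, expected $expected)"
  fi
}

# Example: check_endpoint website https://example.com 200 6
```

The defaults (200, 6 seconds) match the table's documented defaults, so an entry with only name and url still gets a sensible probe.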
Environment Variables
| Variable | Description | Default |
|---|---|---|
| FLEET_CONFIG | Path to config file | ~/.fleet/config.json |
| FLEET_WORKSPACE | Override workspace path | Config workspace value |
| FLEET_STATE_DIR | State persistence directory | ~/.fleet/state |
| NO_COLOR | Disable colored output when set | (unset) |
Commands ยท Detailed Reference
fleet task <agent> "<prompt>"
Dispatches a task to a named agent and streams the response live.
Requires: Agent token in ~/.fleet/config.json under agents[].token.
Options:
- --type code|review|research|deploy|qa – override task type (auto-inferred from prompt if omitted)
- --timeout <minutes> – response timeout (default: 30)
- --no-wait – fire and forget, return immediately
Output:
Fleet Task
──────────
Agent coder (port 48520)
Type code
Task ID a1b2c3d4
Timeout 30m
add pagination to /api/spots endpoint
────────────────────────────────────────
[streams response in real time]
────────────────────────────────────────
✅ Task complete (a1b2c3d4)
Important: Task dispatch uses x-openclaw-session-key: fleet-{agent} header. All tasks to the same agent share a session, so the agent has context of prior tasks.
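The session header above is the documented behavior; a hedged sketch of what such a dispatch request could look like follows. The URL path and JSON body shape here are illustrative assumptions, not fleet's actual wire format — only the header name comes from the source.

```shell
# Hypothetical sketch of a task dispatch. The x-openclaw-session-key header
# is documented; the /v1/chat path and the JSON payload are assumptions made
# for illustration only.
dispatch_task() {
  agent=$1; port=$2; token=$3; prompt=$4
  curl -s "http://127.0.0.1:${port}/v1/chat" \
    -H "Authorization: Bearer ${token}" \
    -H "x-openclaw-session-key: fleet-${agent}" \
    --data "{\"message\": \"${prompt}\"}"
}

# Example: dispatch_task coder 48520 "$TOKEN" "add pagination to /api/spots"
```

Because the session key is derived from the agent name, every dispatch to the same agent lands in the same session, which is what gives the agent context of prior tasks.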
fleet steer <agent> "<message>"
Sends a mid-session correction to an agent that is currently working on a task. Routes to the same session as fleet task, so the agent has full context.
Output:
Fleet Steer
───────────
Agent coder
Session fleet-coder
────────────────────────────────────────
[agent response to correction]
────────────────────────────────────────
✅ Steered.
fleet watch <agent> [--all]
Live session tail. Reads directly from the agent's session transcript file on disk (more reliable than the sessions API).
- Default: watches the fleet-{agent} session (tasks dispatched via fleet task)
- --all: watches the agent's full main session history
- coordinator: always watches the main coordinator session
Output:
Watching coder
──────────────
Session: agent:main:fleet-coder
File: b80eb2e5.jsonl · polling every 3s · Ctrl+C to stop
Last 2 message(s):
you 16:37 UTC
add pagination to /api/spots
coder 16:37 UTC
Starting with the cursor-based approach...
Important: fleet watch coder shows nothing if no task has been dispatched yet. Run fleet task coder "<prompt>" first to create the fleet session. Use fleet watch coder --all to see the agent's full history.
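Since watch reads the transcript file directly from disk, the poll loop is easy to picture. Below is a minimal sketch of that read, assuming each .jsonl line is a JSON object; the field names (sender, text) are assumptions about the transcript format, shown only to illustrate the read-from-disk approach.

```shell
# Sketch of the transcript read behind fleet watch: print the last N entries
# of a session .jsonl file. Field names are assumed, not fleet's actual schema.
tail_session() {
  file=$1; n=${2:-2}
  tail -n "$n" "$file" | python3 -c '
import json, sys
for line in sys.stdin:
    msg = json.loads(line)
    print(msg["sender"], msg["text"])'
}

# Example (path illustrative): tail_session b80eb2e5.jsonl 2
```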
fleet kill <agent> [--force]
Sends a graceful stop signal to the agent's fleet session. The agent acknowledges and archives the session. Marks all pending log entries for that agent as steered.
fleet parallel "<task>" [--dry-run]
Decomposes a high-level task into subtasks by type, assigns each to the right agent, and dispatches all concurrently.
Always use --dry-run first to review the decomposition plan before executing. Requires confirmation before actual dispatch.
Output with --dry-run:
Fleet Parallel
──────────────
Task: research competitor pricing and build a pricing page
Execution plan:
1. researcher [research]
Research phase: ...
2. coder [code]
Implementation: ...
────────────────────────────────────────
2 subtask(s) ready to dispatch in parallel.
ℹ️ Dry run complete. Remove --dry-run to execute.
fleet log [--agent <name>] [--outcome <status>] [--limit <n>] [--all]
Shows the dispatch history for all fleet tasks. Filterable by agent, outcome, and count.
Outcomes: success, failure, timeout, steered, pending
Output:
Fleet Log 3 entries
a1b2c3d4 coder code success 12m17s
2026-03-01 15:10 add pagination to /api/spots...
fleet health
Checks the main gateway and all configured endpoints and systemd services.
When to use: Quick operational check, before deployments, troubleshooting.
Output:
Fleet Health Check
──────────────────
✅ coordinator (:48391) 12ms
Endpoints
✅ website (200) 234ms
✅ api (200) 89ms
❌ docs UNREACHABLE
Services
✅ openclaw-gateway
❌ openclaw-gateway-coder (inactive)
Status codes:
- ✅ · healthy (HTTP 200 or expected status)
- ❌ · unhealthy (wrong status, unreachable, or error)
- Shows response time in milliseconds
fleet agents
Shows all configured agent gateways with live health status, response time, model, and role.
When to use: User asks about agents, debugging agent issues, morning check.
Output:
Agent Fleet
───────────
⬢ coordinator coordinator claude-opus-4 :48391 online 13ms
⬢ coder implementation codex :48520 online 8ms
⬢ reviewer code-review codex :48540 online 9ms
⬡ deployer deployment codex :48560 unreachable
⬢ qa quality-assurance codex :48580 online 7ms
Status indicators:
- ⬢ green = online
- ⬡ red = unreachable or error
- ⬡ yellow = auth failed (token issue)
fleet sitrep [hours]
The flagship command. Generates a structured status report with delta tracking.
When to use: Morning reports, scheduled crons, "what changed?" questions, incident response.
Arguments:
hours · lookback period (default: 4). Only affects display context; deltas are always vs last run.
What it checks:
- All agent gateways (online/offline)
- CI status for all configured repos
- All configured endpoint health
- Linear ticket counts per team
- VPS resource usage (memory, disk)
Delta tracking: State is saved to ~/.fleet/state/sitrep.json. Each run compares against the previous and only shows what CHANGED.
Output:
SITREP | 2026-02-23 08:00 UTC | vs 2026-02-22 23:00
────────────────────────────────────────────────────────────
Agents 5/6 online
⬢ coordinator
⬢ coder
⬢ reviewer
⬡ deployer
⬢ qa
⬢ researcher
CI
✅ frontend
❌ backend
✅ mobile
Services
✅ website (200)
✅ api (200)
CHANGED
• agent deployer: online → offline
• CI backend: green → RED
• OZZ tickets: +3
Resources mem 45% | disk 7%
Linear OZZ: 12 open | FRI: 8 open
Cron integration example:
{
"schedule": { "kind": "cron", "expr": "0 8,12 * * *", "tz": "Europe/London" },
"payload": {
"kind": "agentTurn",
"message": "Run fleet sitrep and post results to the team channel"
}
}
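The delta tracking described above is a plain comparison of two state snapshots. A minimal sketch, assuming state files shaped like the documented ~/.fleet/state/sitrep.json format (agents keyed by name); the helper and its parameters are illustrative, not fleet's own code:

```shell
# Sketch of sitrep delta tracking: compare the agents map in two state
# snapshots and print only what changed. File paths are parameters.
sitrep_delta() {
  prev=$1; curr=$2
  python3 -c '
import json, sys
prev = json.load(open(sys.argv[1])).get("agents", {})
curr = json.load(open(sys.argv[2])).get("agents", {})
for name, status in curr.items():
    if prev.get(name) != status:
        print(f"agent {name}: {prev.get(name)} -> {status}")' "$prev" "$curr"
}
```

Running this against yesterday's and today's snapshots yields exactly the kind of CHANGED lines shown in the output example.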
fleet ci [filter]
Shows GitHub CI status for all configured repos, with the last 3 runs per repo.
When to use: Before pushing, after deployments, investigating failures.
Requirements: gh CLI must be installed and authenticated.
Arguments:
filter · optional; filters repos by name (case-insensitive)
Output:
CI Status
─────────
frontend (myorg/frontend)
✅ Update homepage (main) passed 2026-02-23T08:00
✅ Fix footer (main) passed 2026-02-23T07:30
✅ Add banner (main) passed 2026-02-23T07:00
backend (myorg/backend)
❌ Add endpoint (main) failed 2026-02-23T08:15
✅ Fix auth (main) passed 2026-02-23T07:45
fleet skills
Lists all installed ClawHub skills with version, description, and capabilities.
When to use: Inventory check, "what can I do?", planning.
Output:
Installed Skills
────────────────
from ~/workspace/skills
• fleet v1.0.0 [scripts]
  Multi-agent fleet management CLI for OpenClaw
• ontology v0.1.2 [scripts]
  Typed knowledge graph for structured agent memory
• self-improving-agent v1.0.11 [scripts, hooks]
  Captures learnings, errors, and corrections
fleet backup
Backs up OpenClaw config, cron jobs, fleet config, and auth profiles.
When to use: Before major changes, before updates, periodic safety net.
Backup location: ~/.fleet/backups/<timestamp>/
fleet restore
Restores from the latest backup.
When to use: After a bad config change, after a failed update.
Note: Requires gateway restart after restore: openclaw gateway restart
fleet init
Interactive setup that auto-detects running gateways and creates initial config.
When to use: First time setup, new machine, new fleet.
Auto-detection:
- Scans common gateway ports (48391, then every 20 ports up to 48600)
- Reads workspace from ~/.openclaw/openclaw.json
- Discovers running employee gateways
Fleet Patterns
Fleet supports multiple organizational architectures. Choose based on your needs:
Solo Empire
One coordinator, 2-5 employees. Best for indie hackers and solo founders.
Coordinator (Opus)
/ | \
Coder Reviewer Deployer
(Codex) (Codex) (Codex)
Development Team
Team leads coordinating specialized developers. Best for complex products.
Orchestrator (Opus)
/ | \
FE Lead BE Lead QA Lead
(Sonnet) (Sonnet) (Sonnet)
/ \ | |
Dev1 Dev2 Dev1 Tester
Research Lab
Specialized agents for knowledge work. Best for content and analysis.
Director (Opus)
/ | \ \
Scraper Analyst Writer Fact-Check
See examples/ in the repo for ready-to-use config files for each pattern.
Troubleshooting
Common Issues
| Problem | Cause | Fix |
|---|---|---|
| fleet: command not found | Not in PATH | ln -sf path/to/fleet/bin/fleet ~/.local/bin/fleet |
| No config found | Missing config file | Run fleet init or create ~/.fleet/config.json |
| All agents show "unreachable" | Agents not running | Start agent gateways first |
| CI shows "error" | gh not authenticated | Run gh auth login |
| SITREP shows "first run" | No previous state | Normal on first run, deltas appear on second |
| Agent shows "auth failed" | Wrong token in config | Update token in config to match agent's auth |
Debugging
# Check if fleet can find its config
echo $FLEET_CONFIG
cat ~/.fleet/config.json
# Check if agents are reachable directly
curl -s http://127.0.0.1:48520/health
# Check state directory
ls -la ~/.fleet/state/
# Run with verbose output
bash -x fleet health
Architecture
Fleet is modular. Each component has a single responsibility:
fleet/
├── bin/fleet            # Entry point · command router only
├── lib/
│   ├── core/
│   │   ├── config.sh    # Config loading and JSON parsing
│   │   ├── output.sh    # Colors, formatting, HTTP helpers
│   │   └── state.sh     # Delta state persistence
│   └── commands/        # One file per command
│       ├── agents.sh
│       ├── backup.sh
│       ├── ci.sh
│       ├── health.sh
│       ├── init.sh
│       ├── sitrep.sh
│       └── skills.sh
├── templates/configs/   # Config templates (minimal + full)
├── examples/            # Pattern examples with configs
├── docs/                # Extended documentation
└── tests/               # Integration tests
Adding new commands: Create lib/commands/yourcommand.sh, define cmd_yourcommand(), add a case in bin/fleet.
Integration with Other Skills
With ontology skill
If the ontology skill is installed, fleet can query the knowledge graph:
# After installing: clawhub install ontology
fleet ontology # Summary of all entities
fleet ontology Project # List all projects
With self-improving-agent skill
Log fleet operational learnings to .learnings/:
- Agent downtime patterns → ERRORS.md
- CI failure patterns → LEARNINGS.md
- Configuration improvements → promote to TOOLS.md
Making Fleet Work Anywhere
Purpose of this section: Ensure the coordinator agent can achieve fleet visibility on any system the operator runs it on, regardless of platform. The goal is operational visibility โ not running a specific script.
Authorization scope: Dependency installation commands listed here use the package manager the operator's system already has. None require root unless the operator's package manager requires it (e.g., apt-get on some systems). Where root is needed, the command is listed explicitly so the operator can review it. The coordinator should prefer user-level installs (--user, pyenv, $HOME/.local) wherever possible.
Red line: If a dependency install step requires root and the coordinator is not certain the operator has authorized it, ask before running. This is the one exception to autonomous operation in this skill.
Fleet needs three things: bash 4+, python3 3.10+, and curl. Below is the complete compatibility playbook. Work through it top to bottom. If one approach fails, try the next.
Step 1: Check What You Have
# Run these first. Know your environment.
bash --version 2>/dev/null | head -1
python3 --version 2>/dev/null
curl --version 2>/dev/null | head -1
uname -s # Linux, Darwin, FreeBSD, etc.
cat /etc/os-release 2>/dev/null | head -3 # Distro info
Step 2: Install Missing Dependencies
bash 4+
| Platform | Command |
|---|---|
| macOS (ships bash 3.2) | brew install bash · then fleet auto-detects /opt/homebrew/bin/bash via #!/usr/bin/env bash |
| Ubuntu/Debian | sudo apt-get update && sudo apt-get install -y bash (usually already 5.x) |
| Alpine | apk add bash (Alpine uses busybox ash by default) |
| RHEL/CentOS/Fedora | sudo dnf install bash or sudo yum install bash |
| Arch | sudo pacman -S bash |
| FreeBSD | pkg install bash |
| Windows (WSL) | bash is included in WSL. If using Git Bash, it ships bash 4+. |
| Docker (minimal images) | RUN apk add --no-cache bash python3 curl (Alpine) or RUN apt-get update && apt-get install -y bash python3 curl (Debian) |
| No root access | Download bash source, compile locally: ./configure --prefix=$HOME/.local && make && make install · then export PATH="$HOME/.local/bin:$PATH" |
| NixOS | nix-env -iA nixpkgs.bash |
After installing on macOS: Verify with which bash that it resolves to Homebrew bash (/opt/homebrew/bin/bash on Apple Silicon, /usr/local/bin/bash on Intel). The #!/usr/bin/env bash shebang picks up whichever bash is first in PATH.
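A quick way to confirm the bash that env resolves is actually 4+ is to ask the running interpreter for its major version. A small sketch using bash's built-in BASH_VERSINFO array:

```shell
# Verify that the bash found first on PATH is 4+, the version fleet requires.
# BASH_VERSINFO[0] is the major version of the bash that is executing.
major=$(bash -c 'echo "${BASH_VERSINFO[0]}"')
if [ "$major" -ge 4 ]; then
  echo "bash $major is new enough"
else
  echo "bash $major is too old; install bash 4+ per the table above"
fi
```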
python3 3.10+
| Platform | Command |
|---|---|
| macOS | brew install python@3.12 |
| Ubuntu/Debian | sudo apt-get install -y python3 |
| Alpine | apk add python3 |
| RHEL/CentOS | sudo dnf install python3 |
| Arch | sudo pacman -S python |
| No root access | Use pyenv: curl https://pyenv.run | bash && pyenv install 3.12 && pyenv global 3.12 |
Fleet uses python3 only for JSON parsing. No pip packages needed. No virtual environment needed. Just the interpreter.
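What "python3 only for JSON parsing" looks like in practice: one interpreter call, standard library only. The helper below is a hypothetical illustration (it takes the config path as an argument), not code copied from fleet.

```shell
# Illustrative helper: pull each agent's name and port out of a fleet-style
# config with a single python3 call. Standard library only, no pip packages.
list_agents() {
  python3 -c '
import json, sys
for a in json.load(open(sys.argv[1])).get("agents", []):
    print(a["name"], a["port"])' "$1"
}

# Example: list_agents ~/.fleet/config.json
```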
curl
| Platform | Command |
|---|---|
| Almost everywhere | Pre-installed. Check with curl --version. |
| Alpine | apk add curl |
| Minimal Docker | apt-get install -y curl or apk add curl |
gh CLI (optional, for fleet ci)
| Platform | Command |
|---|---|
| macOS | brew install gh |
| Ubuntu/Debian | See cli.github.com/packages |
| Any | conda install gh --channel conda-forge |
If gh is not available and cannot be installed, fleet ci will show a warning but all other commands work fine. Do not let a missing gh block fleet setup.
Step 3: If You Truly Cannot Get bash 4+
This should be rare, but if you're on a locked-down system where you cannot install bash 4+:
- Try running with bash 3.2 anyway. Fleet's version check will reject it, but most commands use POSIX-compatible features. You can bypass the check by commenting out lines 9-20 in bin/fleet (the version guard exists to give clear error messages, not for security). Many commands will still work. Test each one and report results.
- Translate on the fly. You are an AI agent. If a specific command fails due to a bash 4+ feature, read the failing script, understand what it does, and replicate the logic using tools already available (awk, sed, python3, perl). For example:
  - fleet health is curl to the operator's own endpoints. You can run that directly.
  - fleet agents is curl to each loopback gateway port. Replicable in python3 alone.
  - fleet sitrep combines health checks, CI queries, and delta tracking. Each piece is independent.
- Create a wrapper. If python3 is available but bash 4+ is not, write a thin python3 wrapper that reimplements the critical commands. The logic is: HTTP health checks to loopback ports + JSON parsing + formatted output. All of this stays local.
- Report what works and what does not. Tell the operator: "Fleet is running with limited compatibility. Commands X, Y, Z work. Command W needs bash 4+ for [specific feature]. Here is what I did instead."
Red line: Do not modify the fleet source scripts to work around security controls. Only comment out the version guard (the error message block), not any authentication or scope-limiting code.
Step 4: Environments Without a Shell (Extremely Rare)
If you're running in a sandboxed environment with no shell access:
- Use the fleet logic directly. Fleet commands are: (a) curl to loopback gateway ports the operator controls, (b) parse JSON responses, (c) track state in a local file. Any HTTP client and JSON parser can do this.
- The fleet config format is documented in docs/configuration.md. Read ~/.fleet/config.json, iterate over agents, hit their /health endpoints, format the output. That is fleet agents in 10 lines of any language.
- State tracking for SITREP: Read/write ~/.fleet/state/sitrep.json to track deltas. Format: {"agents": {...}, "ci": {...}, "endpoints": {...}, "timestamp": "..."}.
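The "fleet agents in 10 lines" claim above can be sketched directly. This is a hypothetical reimplementation, not fleet's own code: it reads the config, probes each agent's /health endpoint (the path used in the Debugging section) on loopback, and prints status. Everything stays on 127.0.0.1.

```shell
# Hypothetical minimal reimplementation of fleet agents: config in,
# one loopback /health probe per agent, status line out.
agents_status() {
  config=${1:-$HOME/.fleet/config.json}
  python3 -c '
import json, sys
for a in json.load(open(sys.argv[1])).get("agents", []):
    print(a["name"], a["port"])' "$config" |
  while read -r name port; do
    if curl -s --max-time 3 "http://127.0.0.1:${port}/health" > /dev/null; then
      echo "$name online"
    else
      echo "$name unreachable"
    fi
  done
}
```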
The Goal
Operational visibility for the operator, not running a specific bash script. Fleet is the fastest path, but if the environment resists, adapt using the same local-only, operator-authorized scope. The operator installed this skill to have fleet management work. Achieve that goal through whatever means are available within the authorized scope.
Best Practices
- Run fleet sitrep in morning crons · catch overnight issues before they escalate
- Run fleet backup before gateway updates · easy rollback if something breaks
- Use fleet health before deployments · ensure everything is green first
- Check fleet agents after config changes · verify agents came back online
- Filter fleet ci by repo · avoid noise when debugging a specific service
- Keep tokens in config, keys in env vars · tokens are local, API keys are sensitive