Growing Exoskeletons 4: Building the Control Room
n8n as a secure execution layer for OpenClaw. The agent triggers workflows, n8n holds the secrets, git controls what's allowed.

Series context: Part 3.5 gave the agent full machine access and moved the security boundary to the network. The agent owns the box. Tailscale ACLs are the cage. This post adds a new layer: what happens when the agent needs to use services that require secrets it shouldn’t hold.
Why the Agent Shouldn’t Hold Every Secret
In 3.5, we made a bet: give the agent the box, make the network the wall. It worked. The agent runs native, full filesystem, full shell, full browser. The machine is disposable. If compromised, nuke and rebuild.
But then you want the agent to send an email. Post to social media. Query a database. Call a paid API. Check the weather and escalate to your phone if a tornado is coming.
Each of those requires a credential. An API key, an OAuth token, a webhook secret. The agent already holds the secrets it needs for its core job: its LLM provider key, its gateway token. But external service credentials are a different category. If the agent holds every secret for every integration, a successful prompt injection compromises them all. The agent has full filesystem access. Any secret stored in a file the agent can read is a secret the agent can leak.
Anthropic’s transparency reports show Claude blocking 86-94% of prompt injection attempts depending on model and deployment context. That leaves 6-14% getting through. On a box with full shell access, even 6% is not a rounding error. It’s multiple opportunities per day for a compromised agent to exfiltrate every API key on the machine.
The inverted perimeter solved the network problem. This solves the credential problem.
The architecture:
OpenClaw (full machine access, no external secrets)
↓ triggers via webhook
n8n (holds secrets, executes sensitive tasks)
↓ defined by
Git repo (agent can read, can't write without approval)
n8n is a self-hosted workflow automation platform. It runs on the same box as OpenClaw but as a different Linux user. The agent can trigger n8n workflows via webhook. It cannot read n8n’s credential store, its database, or its environment variables. The secrets never cross the boundary.
The agent has hands. But the hands are n8n’s. And n8n only moves the way the playbook says.
The pattern is universal even if the implementation is specific. If you’re here for the concept (separate agent intent from execution credentials), skip to What Else Fits This Pattern? and The Trust Architecture. If you’re here to build it, keep reading.
Installing n8n on the Same Box
n8n runs on Node.js. Your box already has Node 24 from the OpenClaw install. But n8n and OpenClaw should never share a user, a data directory, or an environment.
Create the n8n user
sudo addgroup n8n   # --ingroup requires the group to exist first
sudo adduser --disabled-password --gecos "n8n Service" --ingroup n8n n8n
sudo chmod 700 /home/n8n
Verify with id n8n. The --ingroup n8n flag prevents Ubuntu from silently adding the user to the users group (gid 100), which openclaw may also belong to. If you already created the user without it, sudo deluser n8n users fixes it after the fact.
No sudo, no docker group, no shared groups with openclaw. This user exists only for n8n.
Install n8n
sudo -u n8n bash -c '
mkdir -p $HOME/.npm-global
npm config set prefix $HOME/.npm-global
echo "export PATH=\$HOME/.npm-global/bin:\$PATH" >> $HOME/.bashrc
export PATH=$HOME/.npm-global/bin:$PATH
npm install -g n8n
n8n --version
'
Keep n8n updated. n8n had three critical CVEs in a single month (March 2026). This is not a service you install and forget. To upgrade: sudo -u n8n bash -c 'export PATH=$HOME/.npm-global/bin:$PATH && npm install -g n8n', then sudo systemctl restart n8n. Workflows and encrypted credentials survive upgrades. The encryption key and database stay in /home/n8n/.n8n/. Put this on a calendar.
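To make “put this on a calendar” concrete, here’s a sketch of a check you could cron and pipe into your alerting. The version_lt helper and the PATH handling are our assumptions, not part of the n8n CLI; it assumes npm can reach the registry.

```shell
#!/bin/bash
# version_lt A B -> true when version A sorts strictly before B (GNU sort -V)
version_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# Ask the running install and the npm registry; both are best-effort here
installed=$(sudo -n -u n8n bash -c 'export PATH=$HOME/.npm-global/bin:$PATH && n8n --version' 2>/dev/null)
latest=$(npm view n8n version 2>/dev/null)

if [ -n "$installed" ] && [ -n "$latest" ] && version_lt "$installed" "$latest"; then
  echo "n8n $installed is behind $latest -- upgrade and restart the service"
fi
```

Wire the echo into the Red Alert workflow (low priority) and the calendar reminder becomes a push notification instead.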
Configure secrets
Generate an encryption key before first launch. This key encrypts all credentials stored in n8n’s database. If you lose it, every stored credential is permanently unrecoverable.
sudo -u n8n bash -c '
mkdir -p $HOME/.n8n
cat > $HOME/.n8n/.env << EOF
# Core
N8N_HOST=localhost
N8N_PORT=5678
N8N_LISTEN_ADDRESS=127.0.0.1
N8N_PROTOCOL=http
N8N_SECURE_COOKIE=false
N8N_USER_FOLDER=/home/n8n
N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
# Encryption (generated once, never change)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
# Hardening: disable public API
N8N_PUBLIC_API_DISABLED=true
N8N_PUBLIC_API_SWAGGERUI_DISABLED=true
# Hardening: block workflows from reading env vars and n8n config files
N8N_BLOCK_ENV_ACCESS_IN_NODE=true
N8N_BLOCK_FILE_ACCESS_TO_N8N_FILES=true
# Hardening: no community package installs
N8N_COMMUNITY_PACKAGES_ENABLED=false
# Hardening: session timeout (hours)
N8N_USER_MANAGEMENT_JWT_DURATION_HOURS=8
# Hardening: auto-prune execution data (7 days)
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
# Telemetry off
N8N_DIAGNOSTICS_ENABLED=false
# Enable Execute Command node (excluded by default in n8n 2.0+)
NODES_EXCLUDE=[]
EOF
chmod 600 $HOME/.n8n/.env
'
N8N_LISTEN_ADDRESS=127.0.0.1 binds to loopback only. No one reaches n8n except through Tailscale Serve. N8N_SECURE_COOKIE=false because TLS terminates at Tailscale, not at n8n. N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true makes n8n refuse to start if its config files are world-readable.
The hardening variables close several attack surfaces:
- N8N_PUBLIC_API_DISABLED removes the REST API entirely. Even if someone reaches n8n through Tailscale, they can’t script against it.
- N8N_BLOCK_ENV_ACCESS_IN_NODE prevents Code nodes from reading environment variables, which is where the encryption key lives.
- N8N_BLOCK_FILE_ACCESS_TO_N8N_FILES blocks workflows from reading n8n’s own configuration and database.
- N8N_COMMUNITY_PACKAGES_ENABLED=false prevents installing third-party nodes that could introduce supply chain risk.
- EXECUTIONS_DATA_PRUNE auto-deletes execution logs after 7 days so sensitive data from workflow runs doesn’t accumulate.
systemd service
Create /etc/systemd/system/n8n.service:
sudo tee /etc/systemd/system/n8n.service << 'EOF'
[Unit]
Description=n8n Workflow Automation
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=n8n
Group=n8n
EnvironmentFile=/home/n8n/.n8n/.env
WorkingDirectory=/home/n8n
ExecStart=/home/n8n/.npm-global/bin/n8n start
Restart=on-failure
RestartSec=5
StartLimitBurst=5
StartLimitIntervalSec=300
# Filesystem
PrivateTmp=true
ProtectSystem=strict
ProtectProc=invisible
ProtectClock=true
ProtectHostname=true
ProtectControlGroups=true
UMask=0077
# Capabilities (minimum set for sudo cross-user execution)
CapabilityBoundingSet=CAP_SETUID CAP_SETGID CAP_DAC_READ_SEARCH CAP_AUDIT_WRITE
AmbientCapabilities=
# Isolation
RestrictNamespaces=true
LockPersonality=true
RemoveIPC=true
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now n8n
Run systemd-analyze security n8n to check your score. You should land around 5.4 MEDIUM. That’s the honest score for a Node.js service that needs to sudo across user boundaries.
Here’s why the score isn’t lower. Many systemd hardening directives (NoNewPrivileges, ProtectKernelModules, ProtectKernelTunables, SystemCallArchitectures, RestrictSUIDSGID, PrivateDevices, RestrictAddressFamilies, RestrictRealtime) implicitly set NoNewPrivileges=yes. That single flag blocks the SUID bit on sudo, which breaks the entire cross-user execution model. If n8n can’t sudo -u openclaw, it can’t run backup or tamper detection scripts. You can’t have both maximum systemd sandboxing and sudo-based user bridging on the same service. We chose the bridge.
CapabilityBoundingSet grants only the four capabilities sudo needs: CAP_SETUID and CAP_SETGID to switch users, CAP_DAC_READ_SEARCH to read sudoers files, and CAP_AUDIT_WRITE for audit logging (without it, Ubuntu’s sudo-rs panics). Every other capability is dropped. ProtectProc=invisible means n8n can’t see OpenClaw’s processes. UMask=0077 ensures files n8n creates are readable only by its own user.
If your workflows only use HTTP Request nodes and don’t need Execute Command with sudo, add back the full hardening directives (NoNewPrivileges=true, ProtectKernelModules=true, SystemCallArchitectures=native, etc.). You’ll score around 3.0 instead of 5.4.
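A sketch of what “add back the full hardening” could look like as a systemd drop-in, assuming your workflows never call Execute Command with sudo. The directive names are standard systemd; the file path is our choice.

```ini
# /etc/systemd/system/n8n.service.d/hardening.conf
# Only for instances that never need to sudo across users.
[Service]
NoNewPrivileges=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectKernelLogs=true
SystemCallArchitectures=native
RestrictSUIDSGID=true
PrivateDevices=true
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
RestrictRealtime=true
# Reset the sudo capabilities granted in the main unit
CapabilityBoundingSet=
```

Apply with sudo systemctl daemon-reload && sudo systemctl restart n8n, then re-run systemd-analyze security n8n to see the score drop.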
Firewall and Tailscale Serve
# Allow n8n from Tailscale only
sudo ufw allow from 100.64.0.0/10 to any port 5678 proto tcp comment 'n8n via Tailscale'
# HTTPS access via Tailscale
sudo tailscale serve --bg --https 8443 http://127.0.0.1:5678
Don’t use port 5679 for Tailscale Serve. n8n’s internal Task Broker binds to 127.0.0.1:5679 at startup. Use 8443 or another free port instead.
Access the n8n dashboard at https://your-machine.tailnet.ts.net:8443. Create your admin account on first login. Use a strong password. This is the only account that can create workflows and manage credentials.
Verify
sudo systemctl status n8n
sudo ss -tlnp | grep 5678 # Should show 127.0.0.1:5678
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5678/healthz # n8n's health endpoint; should print 200
Note on dangerous nodes: n8n 2.0+ excludes executeCommand and localFileTrigger by default via NODES_EXCLUDE. Our environment file sets NODES_EXCLUDE=[] to re-enable Execute Command, which our backup and tamper detection workflows need. We’re running on a dedicated box behind Tailscale with a single admin user. The risk profile is different from a shared n8n instance.
Locking n8n Away from the Agent
n8n is installed. Now prove the agent can’t read its secrets.
The setup so far gives us two users on the same box: openclaw and n8n. Each owns their home directory with chmod 700. That single permission bit is the foundation. But we should verify it, harden it, and make it visible.
Verify the boundary
# As openclaw, try to read n8n's encryption key
sudo -u openclaw cat /home/n8n/.n8n/.env
# Expected: Permission denied
# Try to read n8n's database (where encrypted credentials live)
sudo -u openclaw cat /home/n8n/.n8n/database.sqlite
# Expected: Permission denied
# Try to list n8n's home directory
sudo -u openclaw ls /home/n8n/
# Expected: Permission denied
# Confirm no shared groups
id openclaw
id n8n
# These should share NO groups
If any of those succeed, stop. Fix permissions before proceeding.
Hide processes from each other
By default, Linux lets any user see any other user’s processes via /proc. That means openclaw can see n8n’s process ID, its command-line arguments, and confirm it’s running. On its own, that’s not a credential leak. But it’s information a compromised agent shouldn’t have.
# Remount /proc with hidepid=2
sudo mount -o remount,hidepid=2 /proc
# Make persistent (on kernel 5.8+, hidepid=invisible is the modern alternative
# and plays better with systemd user sessions, but hidepid=2 works here)
echo 'proc /proc proc defaults,hidepid=2 0 0' | sudo tee -a /etc/fstab
# Verify: openclaw can only see its own processes
sudo -u openclaw ps aux
# Should NOT show n8n processes
Prevent environment variable snooping
/proc/PID/environ is already mode 0400 (owner-only) by default on modern kernels. But /proc/PID/cmdline is world-readable. Never pass secrets as command-line arguments. Our systemd unit uses EnvironmentFile instead, which keeps secrets out of the process command line.
# Verify: openclaw can't read n8n's environment
# (run pgrep via sudo -- with hidepid=2 active, a non-root shell can't see n8n's PIDs)
sudo -u openclaw cat /proc/$(sudo pgrep -u n8n | head -1)/environ 2>&1
# Expected: Permission denied (or No such file or directory once hidepid=2 is active)
# With hidepid=2, openclaw can't even find the PID
sudo -u openclaw ls /proc/$(sudo pgrep -u n8n | head -1)/ 2>&1
# Expected: No such file or directory
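The cmdline exposure is easy to demonstrate without touching n8n. This throwaway sketch puts a fake secret on a process’s argv and reads it back through /proc, the same way any other local user could:

```shell
# Start a throwaway process with a fake secret on its command line.
# "fake-secret-123" is a placeholder, not a real credential.
sh -c 'sleep 30' argv0 --api-key=fake-secret-123 &
pid=$!

# /proc/PID/cmdline is world-readable; any user on the box can do this
tr '\0' ' ' < "/proc/$pid/cmdline"; echo
# Shows (on Linux): sh -c sleep 30 argv0 --api-key=fake-secret-123

kill "$pid"
```

That is why the systemd unit uses EnvironmentFile: the secret lands in the environment (owner-readable), never on argv (world-readable).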
What this means for the trust model
n8n encrypts credentials with AES-256-CBC using the N8N_ENCRYPTION_KEY. That key lives in /home/n8n/.n8n/.env, which is mode 600, owned by n8n:n8n, inside a 700 directory. The openclaw user cannot traverse the path, read the file, or see the process environment.
The agent can trigger n8n workflows via webhook on 127.0.0.1:5678. It sends an HTTP POST with a payload. n8n decrypts its own credentials server-side, executes the workflow, and returns only the result. The webhook caller never sees, receives, or has access to the credentials used inside the workflow.
The agent can turn the key through the slot in the glass. It can’t pull the key off the ring.
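From the agent’s side, turning that key is one HTTP call. A minimal sketch, assuming a Webhook node listening at /webhook/agent-tasks and a header named X-Webhook-Token; the path, header name, and WEBHOOK_TOKEN variable are all illustrative, matching whatever you configure on the node:

```shell
# Hypothetical helper the agent could use to trigger a workflow.
trigger_workflow() {
  local path="$1" payload="$2"
  curl -fsS -X POST "http://127.0.0.1:5678/webhook/${path}" \
    -H "Content-Type: application/json" \
    -H "X-Webhook-Token: ${WEBHOOK_TOKEN:?not set}" \
    -d "$payload"
}

# Build the payload explicitly so quoting stays correct
payload=$(printf '{"event":"notify","message":"%s"}' "draft ready for review")
# trigger_workflow agent-tasks "$payload"   # would run on the box
```

The response contains only what the workflow chooses to return. Nothing in this exchange ever carries a credential.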
Workflow-as-Code: The Git Repo
n8n has built-in source control. You connect it to a Git repo, push workflows as JSON, and pull them into production. Credential stubs sync (name and type only). Secrets never touch Git.
Set up the repo
Create a private repo (e.g., your-org/n8n-workflows). In the n8n dashboard:
- Go to Settings > Environments
- Enter your Git repo SSH URL
- n8n generates an ED25519 SSH key
- Add that key as a deploy key with write access on GitHub
- Choose your branch (main)
The n8n dashboard is open for editing
You can edit workflows directly in the n8n UI. The protection isn’t locking the UI. It’s that the agent has no n8n API key and no access to the n8n database where one would be stored. If you prefer a harder lock, enable Protected Instance mode in Settings > Environments and route all changes through git.
What the agent sees
The agent can read the Git repo. It can clone it, browse the workflow JSON files, and understand what automations are available. It knows which webhooks exist and what payloads they expect.
The agent cannot push to the repo without approval. It cannot modify workflows through n8n’s API (it has no API key). It knows the menu. It can order from it. It can’t rewrite it.
Export workflows for the repo
# Export all workflows as separate JSON files
sudo -u n8n bash -c '
export PATH=$HOME/.npm-global/bin:$PATH
n8n export:workflow --all --separate --output=/home/n8n/workflows/
'
The --separate flag creates one JSON file per workflow. This makes git diffs readable and merge conflicts manageable.
Workflow 1: Workspace Backup
The first workflow is simple on purpose. It proves the pattern before we add complexity.
What it does: Every night at 2 AM, commit any changes in OpenClaw’s workspace and push to GitHub. If the backup fails, fire a high-priority alert via Red Alert. Success is silent.
Why n8n and not a cron script: The agent could run git commit && git push itself. But then the agent is its own backup system. If the agent is compromised, it can skip backups, rewrite history, or push poisoned commits. n8n runs as a different user, on a different schedule, with no dependency on the agent being healthy.
The workflow
[Cron Trigger] → [Execute Command] → [IF success] → (silent)
 (0 2 * * *)    (git add/commit/push)      ↘
                              [HTTP: Red Alert "Backup FAILED"]
In n8n, build this:
- Schedule Trigger node: cron expression 0 2 * * *
- Execute Command node: sudo -u openclaw /home/openclaw/scripts/workspace-backup.sh
The backup script (/home/openclaw/scripts/workspace-backup.sh):
#!/bin/bash
cd /home/openclaw/.openclaw/workspace || exit 1
git add -A
if git diff --cached --quiet; then
  echo '{"status":"no_changes"}'
else
  git commit -m "daily backup $(date +%Y-%m-%d)" && \
    git push origin main && \
    echo '{"status":"success","date":"'"$(date +%Y-%m-%d)"'"}'
fi
- IF node: check if output contains "status":"success" or "status":"no_changes"
- Success/no_changes branch: silent. Routine backups don’t need a notification. You’ll notice when one fails.
- HTTP Request node (failure branch): POST to https://redalert.cc/api/v1/alerts with the Red Alert API key in the Authorization header. Body: { "title": "Workspace backup FAILED", "body": "Check n8n logs on omegon-hive.", "priority": "high" }. High priority triggers push and SMS. Not critical, because a single missed backup isn’t an emergency. But you want to know before the second one fails too.
The permission bridge
The Execute Command node runs as the n8n user. But the workspace is owned by openclaw. The n8n user can’t write to it.
Give the n8n user passwordless sudo for exactly one script:
# /etc/sudoers.d/n8n-backup
n8n ALL=(openclaw) NOPASSWD: /home/openclaw/scripts/workspace-backup.sh
The n8n user can execute this one script as openclaw. Nothing else. Validate the fragment with sudo visudo -cf /etc/sudoers.d/n8n-backup before relying on it; a syntax error in /etc/sudoers.d can break sudo for every user on the box.
The scripts live in /home/openclaw/scripts/, owned by openclaw:openclaw, mode 755. The n8n user can execute them via the sudoers entry but cannot modify them. The openclaw user can modify them, but only by pulling from the git repo. The agent can’t push without approval. So the chain is: you commit a script change, push to GitHub, pull on the box. Neither n8n nor the agent can rewrite the scripts they execute.
What n8n holds
The Red Alert API key (ra_ak_ prefix). That key can trigger escalating alerts to your phone. The agent doesn’t need it for this workflow. The agent doesn’t know it exists. If the agent is compromised, it can’t fire fake alerts or exfiltrate the key to spam your phone at 3 AM.
Red Alert supports a self-service agent connection flow via /agents.txt. POST the operator’s email, they approve via a link, and get an API key back. No dashboard account needed. The key goes straight into n8n’s credential store.
Workflow 2: Tamper Detection
This workflow answers one question: did something modify the agent’s own rules?
SOUL.md defines what the agent will and won’t do. MEMORY.md stores what it remembers. AGENTS.md defines how it operates. If any of these change without a git commit from you, something is wrong. Either the agent rewrote its own rules (bad), or something else did (worse).
What it does: Every night at 3 AM, compare the live workspace files against the last known-good versions in Git. If they diverge, fire a critical alert via Red Alert.
The workflow
[Cron Trigger] → [Execute Command] → [IF diff found] → [HTTP: Red Alert "TAMPER DETECTED"]
 (0 3 * * *)        (git diff)             ↘
                                        (silent)
- Schedule Trigger node: cron expression 0 3 * * *
- Execute Command node: sudo -u openclaw /home/openclaw/scripts/tamper-check.sh
Same sudoers pattern as the backup workflow:
# /etc/sudoers.d/n8n-tamper
n8n ALL=(openclaw) NOPASSWD: /home/openclaw/scripts/tamper-check.sh
The tamper check script (/home/openclaw/scripts/tamper-check.sh):
#!/bin/bash
cd /home/openclaw/.openclaw/workspace || exit 1
git fetch origin main || exit 1  # a failed fetch fails the node; n8n surfaces it in the execution log
CHANGED=""
for file in SOUL.md MEMORY.md AGENTS.md TOOLS.md IDENTITY.md; do
  if [ -f "$file" ]; then
    LOCAL=$(sha256sum "$file" | cut -d" " -f1)
    REMOTE=$(git show origin/main:"$file" 2>/dev/null | sha256sum | cut -d" " -f1)
    if [ "$LOCAL" != "$REMOTE" ]; then
      CHANGED="$CHANGED $file"
    fi
  fi
done
if [ -n "$CHANGED" ]; then
  echo "{\"status\":\"tamper\",\"files\":\"$CHANGED\"}"
else
  echo "{\"status\":\"clean\"}"
fi
- IF node: check if status equals tamper
- HTTP Request node (tamper branch): POST to https://redalert.cc/api/v1/alerts with body: { "title": "TAMPER DETECTED on omegon-hive", "body": "Modified files: {{ $json.files }}. Check the box immediately.", "priority": "critical", "dedupeKey": "tamper-omegon-hive" }. Critical priority escalates from push to SMS to phone call until acknowledged. The dedupeKey prevents duplicate alerts if the tamper persists across multiple check cycles.
- Clean branch: silent. You don’t need an “all clear” message every night.
What this catches
The comparison is local files vs the last pushed commit. If you made changes and haven’t pushed yet, this will flag them. That’s by design. It forces a habit: push your changes. If you didn’t push and the files changed, something else did.
This also catches a subtle attack: a compromised agent that modifies SOUL.md to weaken its own restrictions, then behaves normally until the next prompt injection succeeds with the relaxed rules. The tamper check runs as n8n, not as the agent. The agent can’t suppress it, delay it, or falsify the result.
A note on scaling
This sudoers pattern works for a handful of scripts. If you find yourself writing fifteen sudoers entries, you’ve recreated the allowlist problem from 3.5 in /etc/sudoers.d/. At that point, consider a single wrapper script that accepts a command name and dispatches to known operations, or move the execution logic entirely into n8n nodes that don’t need cross-user access. The sudoers pattern is for bridging the user boundary on specific, auditable operations. Not for general-purpose command execution.
Workflow 3: Severe Weather → Red Alert
This workflow ties two external services together. One is free and public. The other is your break-glass escalation channel. n8n holds the secret for one and needs nothing for the other. The agent never touches either directly.
What it does: Every 30 minutes, check the National Weather Service for severe weather alerts in your area. If a tornado warning, severe thunderstorm warning, or other high-severity alert is active, trigger a critical alert via Red Alert that escalates from push notification to SMS to phone call until you acknowledge.
The workflow
[Cron Trigger] → [HTTP: NWS API] → [IF severe] → [HTTP: Red Alert] → [Wait for callback]
(*/30 * * * *) (free, no auth) ↘ ↓
[Log: no alerts] [Webhook: acknowledged]
- Schedule Trigger node: cron expression */30 * * * *
- HTTP Request node (NWS): GET https://api.weather.gov/alerts/active?area=OK&severity=Extreme,Severe. No authentication needed; the NWS API is free and public. Set a User-Agent header (NWS requires it): (your-app, your-email).
- IF node: check if $json.features.length > 0
- HTTP Request node (Red Alert): POST to https://redalert.cc/api/v1/alerts with the Red Alert API credential in the Authorization header. The body includes the NWS event title, headline, critical priority, a deduplication key built from the NWS alert ID, and a callback URL pointing to your n8n webhook for acknowledgment.
- Webhook node (separate workflow or wait node): listens for the callback when you acknowledge the alert.
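The IF-node condition can be prototyped from the shell before you wire it into n8n. A sketch, assuming jq is installed; the fixture JSON mirrors the shape of the NWS /alerts/active response, and has_severe_alerts is our name, not an n8n or NWS one:

```shell
# Succeeds when the NWS response contains at least one active alert feature
has_severe_alerts() { jq -e '.features | length > 0' >/dev/null; }

# Live query (requires network; NWS rejects requests without a User-Agent):
# curl -fsS -H "User-Agent: (your-app, your-email)" \
#   "https://api.weather.gov/alerts/active?area=OK&severity=Extreme,Severe" | has_severe_alerts

# Local check against a canned response
echo '{"features":[{"properties":{"event":"Tornado Warning"}}]}' | has_severe_alerts \
  && echo "would alert"
```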
Why this workflow matters
It demonstrates three things at once:
Two APIs, one secret. The NWS API needs no auth. Red Alert needs an API key. n8n holds the Red Alert credential. The agent knows neither endpoint is called directly. It doesn’t even know this workflow runs. n8n checks the weather independently.
Conditional escalation. Not every API response triggers an alert. n8n parses the NWS response, evaluates severity, and only escalates when it matters. A drizzle doesn’t page you at 3 AM. A tornado warning does.
The deduplication key. dedupeKey prevents the same NWS alert from triggering multiple Red Alert notifications across polling cycles. The NWS alert ID is stable. Red Alert ignores duplicates. This is the kind of operational detail that matters when a workflow runs every 30 minutes.
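As a sketch of how stable that key is, here’s the derivation outside n8n. Inside n8n this would be an expression on the Red Alert node; the helper name and the sanitization rule (alphanumerics kept, everything else mapped to a dash) are our choices:

```shell
# Turn the first active alert's NWS id into a dedupe-safe token
nws_dedupe_key() { jq -r '.features[0].properties.id // empty' | tr -c 'A-Za-z0-9\n' '-'; }

echo '{"features":[{"properties":{"id":"urn:oid:2.49.0.1.840.0.abc"}}]}' | nws_dedupe_key
# urn-oid-2-49-0-1-840-0-abc
```

The same NWS alert always yields the same token, so repeated polls collapse into one Red Alert notification.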
What n8n holds
The Red Alert API key. That key can trigger phone calls to your personal number. It’s the most sensitive credential in the entire setup. It lives in n8n’s encrypted credential store, owned by the n8n user, in a database the openclaw user cannot read.
If the agent is compromised, it cannot page you with fake emergencies. It cannot exfiltrate the Red Alert key. It doesn’t know the key exists.
What Else Fits This Pattern?
The three workflows above are the ones running on omegon-hive today. But the pattern generalizes to any case where the agent needs something done that requires a credential it shouldn’t hold.
Posting to social media. The agent drafts a thread. n8n holds the X/Twitter API key. A webhook triggers the post. The agent can compose content but cannot post directly, cannot read the API key, and cannot post without the workflow validating the payload first. If the agent is compromised, the attacker gets draft access, not publish access.
Sending email. The agent composes a message. n8n holds the SendGrid API key. The webhook triggers delivery. You could add an approval step: n8n sends the draft to Red Alert as a low-priority notification with “Send” and “Reject” action buttons. You review on your phone. Only your explicit acknowledgment releases the email. The agent never touches SMTP credentials.
Querying a production database. (Higher risk, evaluate carefully.) The agent needs customer data for a report. n8n holds the read-only database connection string. The workflow runs a parameterized query (not raw SQL), returns only the columns the workflow defines, and logs the query. The agent gets the data it asked for. It never gets a connection string it could use to run DROP TABLE. Note that putting database credentials on the same box as the agent raises the stakes significantly. An n8n vulnerability becomes a database breach. The social media and email examples above have smaller blast radii.
Each of these follows the same trust architecture: the agent has intent, n8n has capability, and git defines what’s allowed. The credential never crosses the boundary. The webhook is the narrow slot in the glass.
The Trust Architecture
Let’s look at the full picture. Four articles, four layers. Three are deployed. The fourth, off-box log shipping, is the planned next step.
┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐
│ Axiom (THE RECORD) [planned]                            │
│ Immutable, off-box. Agent can’t reach, can’t tamper.    │
│ Every log from every layer ships here.                  │
└ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘
├─────────────────────────────────────────────────────────────┤
│ Tailscale ACLs (THE WALL) │
│ Agent can't reach your laptop, NAS, or other devices. │
│ n8n webhooks work on loopback. Nothing leaves the box │
│ except API calls to workflow endpoints. │
├─────────────────────────────────────────────────────────────┤
│ ┌──────────────────┐ webhook ┌──────────────────┐ │
│ │ OpenClaw │ ──────────→ │ n8n │ │
│ │ (THE AGENT) │ │ (THE HANDS) │ │
│ │ │ ← result ─ │ │ │
│ │ Full machine │ │ Holds secrets │ │
│ │ No ext secrets │ │ Runs workflows │ │
│ │ Can trigger │ │ Can't be edited │ │
│ │ Can't read keys │ │ by the agent │ │
│ └──────────────────┘ └──────────────────┘ │
│ │ │ │
│ reads only defined by │
│ ↓ ↓ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Git Repo (THE PLAYBOOK) │ │
│ │ Workflow JSON + credential stubs. Agent can read. │ │
│ │ Agent can't push without approval. │ │
│ │ Every change has a commit hash and an author. │ │
│ └──────────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────────────┤
│ Linux (THE CELL) │
│ Dedicated machine. No personal data. Cattle, not pets. │
│ Nuke and rebuild in 10 minutes. │
└─────────────────────────────────────────────────────────────┘
Each layer fails independently. Mostly.
If the agent is compromised: It can’t read n8n’s secrets (user isolation). It can’t modify workflows (no API key, no database access). It can’t reach your other devices (Tailscale ACLs). With off-box log shipping (the planned next layer), it won’t be able to suppress audit trails either. The damage ceiling is one disposable box with no external credentials on it.
If n8n is compromised: The attacker has the credentials n8n holds, but n8n can’t reach anything outside the box except the specific API endpoints its workflows call. Tailscale ACLs apply to n8n’s traffic too.
If the box is compromised: Root access collapses two layers at once. User isolation between openclaw and n8n is gone. The local Tailscale daemon can be reconfigured. These are correlated failures, not independent ones. What survives: Tailscale’s coordination server still enforces ACLs from the network side (the attacker can tamper with the local daemon but can’t change the policy other devices enforce), and the box has no personal data worth stealing. With off-box logging, the audit trail would survive too. The response isn’t forensics. It’s destruction. Nuke the box, rebuild from the playbook, rotate every credential n8n held.
No single layer is bulletproof. The point is that a breach in one layer doesn’t cascade into all the others. The Swiss Cheese model: each layer has holes, but the holes mostly don’t align.
Acknowledged risks
User isolation is not root isolation. The agent runs with full shell access. A local privilege escalation (kernel CVE, misconfigured SUID binary) could let a compromised agent become root and read n8n’s secrets. This is the same risk as any shared-host service. The proper fix is separate machines. We’re running both on one box because that’s what we have. Keep the kernel patched, keep the attack surface small, and know that user isolation is a speed bump for a determined local attacker, not a wall.
The loopback is shared. Both services listen on 127.0.0.1. The agent can reach n8n’s webhook port. User isolation stops file reads, not HTTP requests. If n8n has an unpatched vulnerability in its webhook handler (and it has before: CVE-2026-21858, CVSS 10.0), the agent could exploit it from localhost. Mitigation: keep n8n patched.
The agent holds the webhook key. The agent needs a pre-shared secret to authenticate against n8n’s webhooks. That secret lives somewhere the agent can read. This is inherently risky. A compromised agent can trigger any workflow it knows the URL for, with any payload. It can’t read the credentials inside those workflows, but it can invoke them. A compromised agent could trigger the Red Alert workflow with a fake emergency. It could trigger the backup workflow in a loop. The webhook secret is the one key we can’t take away from the agent without breaking the integration. Scope the damage by keeping webhook payloads simple, validating inputs inside n8n workflows, and logging every invocation.
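“Validating inputs inside n8n workflows” deserves a concrete shape. A sketch of the kind of schema check a webhook workflow should run before acting on agent-supplied data; it’s shown here as shell and jq, though in n8n it would be an IF or Code node, and the field names are illustrative:

```shell
# Reject anything that isn't a small, well-formed object with a short
# string "event" field -- a compromised agent gets a narrow slot, not a pipe
valid_payload() {
  jq -e '
    type == "object"
    and (keys | length <= 5)
    and (.event | type == "string")
    and (.event | length <= 64)
  ' >/dev/null 2>&1
}

echo '{"event":"backup"}' | valid_payload && echo "accepted"
echo '"just a string"'    | valid_payload || echo "rejected"
```

Anything that fails validation should short-circuit to a logged rejection, never to the node that holds a credential.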
n8n is not a vault. Three critical CVEs in March 2026 (CVSS 10.0, 9.4, 9.5) are a reminder that n8n’s credential store is a convenience, not a hardened secrets manager. For higher-stakes deployments, layer a purpose-built secret manager on top. n8n supports External Secrets integration with HashiCorp Vault, AWS Secrets Manager, and 1Password. Credentials are fetched at runtime and never stored in n8n’s database. We’re not doing that here. We’re calling out that the option exists when your threat model demands it.
Why we’re here
The obvious objection: a $5/month VPS running just n8n gives you stronger isolation than everything in this article. A second box means the kernel is the boundary, not chmod 700. Or skip self-hosting entirely. n8n Cloud exists. So do Zapier, Make, and every other managed workflow platform. You could have this running in 10 minutes without touching a systemd unit file.
We know. We’re building in the garage on purpose. Not because it’s optimal, but because it’s how you learn where the walls actually are. Managed services abstract the trust boundaries. Self-hosting on one box forces you to draw them yourself, discover where they leak, and decide which tradeoffs you’ll accept. This series documents what that looks like before the industry has converged on best practices. Some of this will look obvious in a year. Some will look overbuilt. We’re writing it down so we find out which is which.
In Part 2.5, we quoted Rahul Sood: “The perimeter is what the agent can do.” In 3.5, we moved the wall to the network. Now the perimeter has three boundaries: what the agent can reach (Tailscale), what the agent can access (user isolation), and what the agent can invoke (workflow definitions in Git). Each one is enforced by a different mechanism. Each one fails independently.
This is the control room. Not a dashboard. A trust architecture.