Lucas Berger 903e73d616 feat: v1.3 Unraid Update Status Sync
Unraid GraphQL API foundation — connectivity, authentication, and
container ID format verified for native Unraid API integration.

Phase 14: Unraid API Access (2 plans, 4 tasks)
- Established Unraid GraphQL API connectivity via myunraid.net cloud relay
- Dual credential storage (.env.unraid-api + n8n env vars)
- Container ID format: {server_hash}:{container_hash} (128-char SHA256 pair)
- Complete API contract documented in ARCHITECTURE.md
- "unraid" test command added to Telegram bot

Phases 15-16 dropped (superseded by v1.4 Unraid API Native).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 07:47:31 -05:00

# Claude Code Instructions — Unraid Docker Manager
## n8n API Access
Credentials are stored in `.env.n8n-api` (gitignored). Contains `N8N_HOST` and `N8N_API_KEY`.
**IMPORTANT: Each Bash tool call is a fresh shell.** You must source the env file in the SAME command chain as the curl call. Variables do not persist across Bash calls.
### Loading credentials
```bash
. .env.n8n-api; curl -s ...
```
Never do this (variables are lost between calls):
```bash
# WRONG - separate Bash calls
source .env.n8n-api # Call 1: variables set, then lost
curl ... $N8N_HOST # Call 2: N8N_HOST is empty
```
### API response handling
n8n API responses for workflow GETs are very large (400KB+). **Never pipe curl directly to `python3 -c`** — it fails silently. Always save to a temp file first:
```bash
. .env.n8n-api; curl -s -o /tmp/n8n-result.txt -w "%{http_code}" \
  "${N8N_HOST}/api/v1/workflows/${WF_ID}" \
  -H "X-N8N-API-KEY: ${N8N_API_KEY}" \
  && python3 -c "
import json
with open('/tmp/n8n-result.txt') as f:
    d = json.load(f)
print(d.get('id'), d.get('updatedAt'), d.get('active'))
"
```
### Pushing workflows (PUT)
The `active` field is **read-only** — do NOT include it in the PUT body or you get HTTP 400.
```bash
. .env.n8n-api
# Prepare payload (strip active field, keep nodes/connections/settings)
python3 -c "
import json
with open('n8n-workflow.json') as f:
wf = json.load(f)
payload = {
'name': wf.get('name', 'Docker Manager'),
'nodes': wf['nodes'],
'connections': wf['connections'],
'settings': wf.get('settings', {}),
}
if wf.get('staticData'):
payload['staticData'] = wf['staticData']
with open('/tmp/n8n-push-payload.json', 'w') as f:
json.dump(payload, f)
"
# Push via PUT
curl -s -o /tmp/n8n-push-result.txt -w "%{http_code}" \
-X PUT "${N8N_HOST}/api/v1/workflows/${WF_ID}" \
-H "X-N8N-API-KEY: ${N8N_API_KEY}" \
-H "Content-Type: application/json" \
-d @/tmp/n8n-push-payload.json
```
### Workflow IDs
| Workflow | File | n8n ID |
|----------|------|--------|
| Main (Docker Manager) | n8n-workflow.json | `HmiXBlJefBRPMS0m4iNYc` |
| Container Update | n8n-update.json | `7AvTzLtKXM2hZTio92_mC` |
| Container Actions | n8n-actions.json | `fYSZS5PkH0VSEaT5` |
| Container Logs | n8n-logs.json | `oE7aO2GhbksXDEIw` |
| Batch UI | n8n-batch-ui.json | `ZJhnGzJT26UUmW45` |
| Container Status | n8n-status.json | `lqpg2CqesnKE2RJQ` |
| Confirmation | n8n-confirmation.json | `fZ1hu8eiovkCk08G` |
| Matching | n8n-matching.json | `kL4BoI8ITSP9Oxek` |
### Push all workflows (copy-paste recipe)
```bash
. .env.n8n-api
push_workflow() {
  local FILE=$1 WF_ID=$2 WF_NAME=$3
  python3 -c "
import json
with open('$FILE') as f:
    wf = json.load(f)
payload = {'name': wf.get('name', '$WF_NAME'), 'nodes': wf['nodes'], 'connections': wf['connections'], 'settings': wf.get('settings', {})}
if wf.get('staticData'): payload['staticData'] = wf['staticData']
with open('/tmp/n8n-push-payload.json', 'w') as f:
    json.dump(payload, f)
"
  local CODE=$(curl -s -o /tmp/n8n-push-result.txt -w "%{http_code}" \
    -X PUT "${N8N_HOST}/api/v1/workflows/${WF_ID}" \
    -H "X-N8N-API-KEY: ${N8N_API_KEY}" \
    -H "Content-Type: application/json" \
    -d @/tmp/n8n-push-payload.json)
  echo "  ${WF_NAME}: HTTP ${CODE}"
}
push_workflow "n8n-workflow.json" "HmiXBlJefBRPMS0m4iNYc" "Main"
push_workflow "n8n-update.json" "7AvTzLtKXM2hZTio92_mC" "Update"
push_workflow "n8n-actions.json" "fYSZS5PkH0VSEaT5" "Actions"
push_workflow "n8n-logs.json" "oE7aO2GhbksXDEIw" "Logs"
push_workflow "n8n-batch-ui.json" "ZJhnGzJT26UUmW45" "Batch UI"
push_workflow "n8n-status.json" "lqpg2CqesnKE2RJQ" "Status"
push_workflow "n8n-confirmation.json" "fZ1hu8eiovkCk08G" "Confirmation"
push_workflow "n8n-matching.json" "kL4BoI8ITSP9Oxek" "Matching"
```
### Common API endpoints
```
GET /api/v1/workflows — List all workflows
GET /api/v1/workflows/{id} — Get workflow by ID
PUT /api/v1/workflows/{id} — Update workflow (no `active` field!)
POST /api/v1/workflows/{id}/activate — Activate workflow
POST /api/v1/workflows/{id}/deactivate — Deactivate workflow
```
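As a sketch of how these endpoints compose, the helper below builds (but does not send) a request against the n8n REST API. The `n8n_request` function name and the host/key placeholders are illustrative, not part of the project; in practice the project drives these endpoints with `curl` as shown above.

```python
import urllib.request

def n8n_request(host: str, api_key: str, method: str, path: str) -> urllib.request.Request:
    """Build (but do not send) a request against the n8n REST API."""
    req = urllib.request.Request(f"{host}/api/v1{path}", method=method)
    req.add_header("X-N8N-API-KEY", api_key)
    return req

# e.g. activate the Main workflow; send with urllib.request.urlopen(req)
req = n8n_request("https://n8n.example", "my-key", "POST",
                  "/workflows/HmiXBlJefBRPMS0m4iNYc/activate")
```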
## Unraid API Access
Credentials use dual storage:
- `.env.unraid-api` (gitignored) for CLI testing — contains `UNRAID_HOST` and `UNRAID_API_KEY`
- n8n Header Auth credential "Unraid API Key" for workflow nodes
**IMPORTANT: Each Bash tool call is a fresh shell.** You must source the env file in the SAME command chain as the curl call.
### Loading credentials (CLI)
```bash
. .env.unraid-api; curl -X POST "${UNRAID_HOST}/graphql" \
-H "Content-Type: application/json" \
-H "x-api-key: ${UNRAID_API_KEY}" \
-d '{"query": "query { docker { containers { id names state } } }"}'
```
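The response shape assumed below follows the fields requested in the query above (`data.docker.containers[]` with `id`, `names`, `state`); treat it as a sketch rather than a verified contract. Parsing it in Python might look like:

```python
import json

# Assumed response shape, mirroring the GraphQL query fields above
resp = json.loads('''{"data": {"docker": {"containers": [
    {"id": "aaa:bbb", "names": ["/plex"], "state": "RUNNING"},
    {"id": "aaa:ccc", "names": ["/sonarr"], "state": "EXITED"}
]}}}''')

containers = resp["data"]["docker"]["containers"]
running = [c["names"][0] for c in containers if c["state"] == "RUNNING"]
```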
### n8n workflow nodes
- **Authentication:** n8n Header Auth credential (not environment variables)
- **URL:** `={{ $env.UNRAID_HOST }}/graphql` — reads host from n8n container env var
- **Credential:** "Unraid API Key" Header Auth — sends `x-api-key` header automatically
### n8n container setup
**Required environment variable:**
- `UNRAID_HOST` — Unraid myunraid.net URL (without /graphql suffix)
- Format: `https://{ip-dashed}.{hash}.myunraid.net:8443`
**Required n8n credential:**
- Type: Header Auth, Name: "Unraid API Key", Header: `x-api-key`, Value: API key
**Why myunraid.net URL:** Direct LAN IP fails because Unraid's nginx redirects HTTP→HTTPS, stripping auth headers on redirect. The myunraid.net cloud relay URL avoids this issue and provides valid SSL certs.
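A minimal sketch of that URL format (the `myunraid_url` helper is hypothetical, shown only to make the `{ip-dashed}.{hash}` convention concrete):

```python
def myunraid_url(lan_ip: str, cert_hash: str, port: int = 8443) -> str:
    """Build a myunraid.net relay URL: https://{ip-dashed}.{hash}.myunraid.net:{port}."""
    return f"https://{lan_ip.replace('.', '-')}.{cert_hash}.myunraid.net:{port}"

url = myunraid_url("192.168.1.50", "ab12cd34")
# url == "https://192-168-1-50.ab12cd34.myunraid.net:8443"
```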
### API key creation
Create via Unraid WebGUI or SSH:
**WebGUI:** Settings -> Management Access -> API Keys -> Create
- Name: "Docker Manager Bot"
- Permissions: `DOCKER:UPDATE_ANY`
- Description: "Container update status sync"
**SSH:**
```bash
unraid-api apikey --create \
--name "Docker Manager Bot" \
--permissions "DOCKER:UPDATE_ANY" \
--description "Container update status sync" \
--json
```
## Project Structure
- `n8n-workflow.json` — Main n8n workflow (Telegram bot entry point)
- `n8n-*.json` — Sub-workflows (7 total, see table above)
- `.planning/` — GSD planning directory (STATE.md, ROADMAP.md, phases/)
- `ARCHITECTURE.md` — Architecture docs, contracts, node analysis
- `.env.n8n-api` — n8n API credentials (gitignored)
## n8n Workflow Conventions
- **typeVersion 1.2** for Execute Workflow nodes: `"workflowId": { "__rl": true, "mode": "list", "value": "<id>" }`
- **Docker API success**: 204 No Content = success (empty response body). Check `!response.message && !response.error`
- **Data chain pattern**: Use `$('Node Name').item.json` to reference data across async nodes. Do NOT rely on `$json` after Telegram API calls (response overwrites data).
- **Dynamic input pattern**: Use `$input.item.json` for nodes with multiple predecessors.
- **Telegram credential**: ID `I0xTTiASl7C1NZhJ`, name "Telegram account"
- **Static data persistence**: `$getWorkflowStaticData('global')` only tracks **top-level** property changes. Deep nested mutations are silently lost. Always use JSON serialization:
```javascript
const staticData = $getWorkflowStaticData('global');
// READ
const errorLog = JSON.parse(staticData._errorLog || '{}');
// MODIFY (guard nested objects before writing into them)
errorLog.debug = errorLog.debug || {};
errorLog.debug.enabled = true;
// WRITE (top-level assignment is what n8n actually persists)
staticData._errorLog = JSON.stringify(errorLog);
```
- **Keyword Router rule ordering**: `startsWith` rules (e.g., `/debug`, `/errors`) must come BEFORE generic `contains` rules (e.g., `status`, `start`), otherwise `/debug status` matches `contains "status"` first. Connection array indices must match rule indices, with fallback as the last slot.
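The ordering constraint can be sketched as first-match routing. The rule list below is illustrative (in plain Python rather than an n8n Switch node), but the match semantics mirror the bullet above: each output connection is wired by rule index, with the fallback in the last slot.

```python
# Illustrative rule list: startsWith rules first, then generic contains rules
RULES = [
    ("startsWith", "/debug"),
    ("startsWith", "/errors"),
    ("contains", "status"),
    ("contains", "start"),
]

def route(text: str) -> int:
    """Return the index of the first matching rule; len(RULES) is the fallback slot."""
    for i, (op, needle) in enumerate(RULES):
        if op == "startsWith" and text.startswith(needle):
            return i
        if op == "contains" and needle in text:
            return i
    return len(RULES)

# "/debug status" hits rule 0; with the contains rules listed first,
# it would be swallowed by the generic "status" rule instead.
```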