docs(16): create API migration phase plans (5 plans in 2 waves)

Lucas Berger
2026-02-09 09:19:10 -05:00
parent 5880dc4573
commit 4fc791dd43
6 changed files with 943 additions and 3 deletions
@@ -0,0 +1,240 @@
---
phase: 16-api-migration
plan: 05
type: execute
wave: 2
depends_on: [16-01, 16-02, 16-03, 16-04]
files_modified: [n8n-workflow.json]
autonomous: true
must_haves:
truths:
- "Inline keyboard action callbacks resolve container and execute start/stop/restart/update via Unraid API"
- "Text command 'update all' shows :latest containers with update availability via Unraid API"
- "Batch update loop calls update sub-workflow for each container successfully"
- "Callback update from inline keyboard works via Unraid API"
- "Batch stop confirmation resolves bitmap to container names via Unraid API"
- "Cancel-return-to-submenu resolves container via Unraid API"
artifacts:
- path: "n8n-workflow.json"
provides: "Main workflow with all Docker API calls replaced by Unraid GraphQL queries"
contains: "graphql"
key_links:
- from: "n8n-workflow.json container query nodes"
to: "Unraid GraphQL API"
via: "POST container list queries"
pattern: "UNRAID_HOST.*graphql"
- from: "GraphQL Response Normalizer nodes"
to: "Existing consumer Code nodes (Prepare Inline Action Input, Check Available Updates, etc.)"
via: "Docker API contract format"
pattern: "Names.*State.*Id"
- from: "Container ID Registry"
to: "Sub-workflow Execute nodes"
via: "Name→PrefixedID mapping for mutation operations"
pattern: "unraidId|prefixedId"
---
<objective>
Migrate all 6 Docker socket proxy HTTP Request nodes in the main workflow (n8n-workflow.json) to Unraid GraphQL API queries.
Purpose: The main workflow is the Telegram bot entry point. It contains 6 Docker API calls for container lookups used by inline keyboard actions, update-all flow, callback updates, batch stop, and cancel-return navigation. These are read-only lookups (no mutations — mutations happen in sub-workflows), so this is a query-only migration with normalizer and registry updates.
Output: n8n-workflow.json with zero Docker socket proxy references, all container lookups via GraphQL, Container ID Registry updated on every query, Phase 15 utility nodes wired into active flows.
</objective>
<execution_context>
@/home/luc/.claude/get-shit-done/workflows/execute-plan.md
@/home/luc/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/16-api-migration/16-RESEARCH.md
@.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md
@.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md
@.planning/phases/16-api-migration/16-01-SUMMARY.md
@.planning/phases/16-api-migration/16-02-SUMMARY.md
@.planning/phases/16-api-migration/16-03-SUMMARY.md
@.planning/phases/16-api-migration/16-04-SUMMARY.md
@n8n-workflow.json
@ARCHITECTURE.md
</context>
<tasks>
<task type="auto">
<name>Task 1: Replace all 6 Docker API container queries with Unraid GraphQL queries in main workflow</name>
<files>n8n-workflow.json</files>
<action>
Replace all 6 Docker socket proxy HTTP Request nodes in n8n-workflow.json with Unraid GraphQL queries. Each currently does GET to `docker-socket-proxy:2375/containers/json?all=true` (or `all=false` for update-all).
**Nodes to migrate:**
1. **"Get Container For Action"** (inline keyboard action callbacks)
- Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
- Feeds into: "Prepare Inline Action Input" Code node
- Change to: POST `={{ $env.UNRAID_HOST }}/graphql`
- Body: `{"query": "query { docker { containers { id names state image } } }"}`
- Add Normalizer + Registry Update Code nodes between HTTP and "Prepare Inline Action Input"
2. **"Get Container For Cancel"** (cancel-return-to-submenu)
- Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
- Feeds into: "Build Cancel Return Submenu" Code node
- Same GraphQL transformation + normalizer + registry update
3. **"Get All Containers For Update All"** (update-all text command)
- Currently: GET `http://docker-socket-proxy:2375/containers/json?all=false` (only running containers)
- Feeds into: "Check Available Updates" Code node
- GraphQL query should filter to running containers: `{"query": "query { docker { containers { id names state image imageId } } }"}`
- NOTE: GraphQL API may not have a `running-only` filter. Query all containers and let existing "Check Available Updates" Code node filter (it already filters by `:latest` tag and excludes infrastructure). The existing code handles both running and stopped containers.
- Add `imageId` to the query for update-all flow (needed for update availability checking)
4. **"Fetch Containers For Update All Exec"** (update-all execution)
- Currently: GET `http://docker-socket-proxy:2375/containers/json?all=false`
- Feeds into: "Prepare Update All Batch" Code node
- Same transformation as #3 (query all, let Code node filter)
5. **"Get Container For Callback Update"** (inline keyboard update callback)
- Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
- Feeds into: "Find Container For Callback Update" Code node
- Standard GraphQL transformation + normalizer + registry update
6. **"Fetch Containers For Bitmap Stop"** (batch stop confirmation)
- Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
- Feeds into: "Resolve Batch Stop Names" Code node
- Standard GraphQL transformation + normalizer + registry update
**For EACH node, apply:**
a. Change HTTP Request to POST method
b. URL: `={{ $env.UNRAID_HOST }}/graphql`
c. Body: `{"query": "query { docker { containers { id names state image } } }"}` (add `imageId` for update-all nodes #3 and #4)
d. Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}`
e. Timeout: 15000ms
f. Error handling: `continueRegularOutput`
g. Add GraphQL Response Normalizer Code node after HTTP Request
h. Add Container ID Registry update Code node after normalizer (updates static data cache)
i. Wire normalizer/registry output to existing downstream Code node
**Wiring pattern for each:**
```
[upstream] → HTTP Request (GraphQL) → Normalizer → Registry Update → [existing downstream Code node]
```
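The normalizer's job is to reshape the GraphQL response into the Docker API contract the downstream Code nodes already expect. A minimal sketch of that logic (field names `id`, `names`, `state`, `image`, `imageId` are assumed from the queries above; in n8n this would run in a Code node reading `$input.first().json`):

```javascript
// Sketch of the GraphQL Response Normalizer logic. In n8n, the input would
// come from $input.first().json; here a plain function stands in.
function normalizeContainers(graphqlResponse) {
  const containers = graphqlResponse?.data?.docker?.containers ?? [];
  return containers.map((c) => ({
    Id: c.id,                                   // 129-char PrefixedID
    // Docker's API prefixes names with '/'; whether existing consumers
    // expect that prefix should be checked against the Phase 15 template.
    Names: (c.names ?? []).map((n) => (n.startsWith('/') ? n : '/' + n)),
    State: (c.state ?? '').toLowerCase(),       // e.g. "running"
    Image: c.image,
    ImageID: c.imageId,                         // only queried on update-all paths
  }));
}

// Assumed response shape from the Unraid GraphQL endpoint:
const sample = {
  data: { docker: { containers: [
    { id: 'PREFIX:abc', names: ['plex'], state: 'RUNNING', image: 'plexinc/pms-docker:latest' },
  ] } },
};
const normalized = normalizeContainers(sample);
```

The actual Phase 15 "GraphQL Response Normalizer" template node is the source of truth; this sketch only illustrates the contract mapping (`Names[0]`, `State`, `Image`, `Id`) the consumers depend on.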
**Phase 15 standalone utility nodes:** The standalone "GraphQL Response Normalizer", "Container ID Registry", "GraphQL Error Handler", "Unraid API HTTP Template", "Callback Token Encoder", and "Callback Token Decoder" nodes at positions [200-1000, 2400-2600] should remain in the workflow as reference templates. They are not wired to any active flow (and that's intentional — they serve as code templates for copy-paste during migration). Do NOT remove them.
**Consumer Code nodes remain UNCHANGED:**
- "Prepare Inline Action Input" — searches containers by name using `Names[0]`, `State`, `Id`
- "Build Cancel Return Submenu" — same pattern
- "Check Available Updates" — filters `:latest` containers, checks update availability
- "Prepare Update All Batch" — prepares batch execution data
- "Find Container For Callback Update" — finds container by name
- "Resolve Batch Stop Names" — decodes bitmap to container names
All these nodes reference `Names[0]`, `State`, `Image`, `Id` — the normalizer ensures these fields exist in correct format.
**Special case: "Prepare Inline Action Input" and "Find Container For Callback Update"** — These nodes output `containerId: container.Id` which feeds into sub-workflow calls. The `Id` field now contains a 129-char PrefixedID (from normalizer), not a 64-char Docker hex ID. This is correct — the sub-workflows (Plan 02 actions, Plan 03 update) have been migrated to use this PrefixedID format in their GraphQL mutations.
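The Container ID Registry update step described above can be sketched as a Name→PrefixedID map refresh (in an n8n Code node the registry would live in `$getWorkflowStaticData('global')`; a plain object stands in here, and the exact key format is an assumption to verify against the Phase 15 template):

```javascript
// Sketch of the Container ID Registry update: map container name →
// PrefixedID so mutation sub-workflows can resolve IDs by name.
function updateRegistry(registry, normalizedContainers) {
  for (const c of normalizedContainers) {
    const name = (c.Names?.[0] ?? '').replace(/^\//, ''); // strip leading '/'
    if (name) registry[name] = c.Id;
  }
  return registry;
}

const registry = {}; // in n8n: $getWorkflowStaticData('global').containerRegistry
updateRegistry(registry, [
  { Id: 'PREFIX:abc', Names: ['/plex'] },
  { Id: 'PREFIX:def', Names: ['/sonarr'] },
]);
```

Running this on every query path keeps the registry fresh, which is what the key_links entry "Name→PrefixedID mapping for mutation operations" requires.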
</action>
<verify>
Load n8n-workflow.json with python3 and verify:
1. Zero HTTP Request nodes contain "docker-socket-proxy" in URL (excluding the Unraid API Test node which already uses $env.UNRAID_HOST)
2. All 6 former Docker API nodes now use POST to `$env.UNRAID_HOST/graphql`
3. 6 GraphQL Response Normalizer Code nodes exist (one per query path)
4. 6 Container ID Registry update Code nodes exist
5. All downstream consumer Code nodes are UNCHANGED
6. Phase 15 standalone utility nodes still present at positions [200-1000, 2400-2600]
7. All connections valid (no dangling references)
8. Push to n8n via API and verify HTTP 200
</verify>
<done>
All 6 Docker API queries in main workflow replaced with Unraid GraphQL queries. Normalizer and Registry update on every query path. Consumer Code nodes unchanged. Phase 15 utility nodes preserved as templates. Workflow pushed to n8n.
</done>
</task>
<task type="auto">
<name>Task 2: Wire Callback Token Encoder/Decoder into inline keyboard flows</name>
<files>n8n-workflow.json</files>
<action>
Wire the Callback Token Encoder and Decoder from Phase 15 into the main workflow's inline keyboard callback flows. This ensures Telegram callback_data uses 8-char tokens instead of full container IDs (which are now 129-char PrefixedIDs, far exceeding Telegram's 64-byte limit).
**IMPORTANT: First investigate the current callback_data encoding pattern.**
Before implementing, read the existing Code nodes that generate inline keyboard buttons to understand how callback_data is currently structured. The nodes to examine:
- "Build Container List" (in n8n-status.json, but called via Execute Workflow from main)
- "Build Container Submenu" (in n8n-status.json)
- Any Code node in main workflow that creates `inline_keyboard` arrays
The current pattern likely uses short container names or Docker short IDs (12 chars) in callback_data. Any path that embeds container IDs cannot simply substitute PrefixedIDs (129 chars); those paths MUST switch to the Callback Token Encoder.
**If callback_data currently uses container NAMES (not IDs):**
- Container names are short (e.g., "plex", "sonarr") and fit within 64 bytes
- In this case, callback token encoding may NOT be needed for all paths
- Only paths that embed container IDs in callback_data need token encoding
**If callback_data currently uses container IDs:**
- ALL paths generating callback_data with container IDs must use Token Encoder
- ALL paths parsing callback_data with container IDs must use Token Decoder
**Investigation steps:**
1. Read Code nodes that create inline keyboards in n8n-status.json and main workflow
2. Identify the exact callback_data format (e.g., "start:containerName", "s:dockerId", "select:name")
3. Determine which paths (if any) embed container IDs in callback_data
4. Only wire Token Encoder/Decoder for paths that need it
**If token encoding IS needed, wire as follows:**
For keyboard GENERATION (encoder):
- Find Code nodes that build `inline_keyboard` with container IDs
- Before those nodes, add a Code node that calls the Token Encoder logic to convert each PrefixedID to an 8-char token
- Update callback_data format to use tokens instead of IDs
For callback PARSING (decoder):
- Find the "Parse Callback Data" Code node in main workflow
- Add Token Decoder logic to resolve tokens back to container names/PrefixedIDs
- Update downstream routing to use decoded values
**If token encoding is NOT needed (names used, not IDs):**
- Document this finding in the SUMMARY
- Leave Token Encoder/Decoder as standalone utility nodes for future use
- Verify that all callback_data fits within 64 bytes with current naming
**Key constraint:** Telegram inline keyboard callback_data has a 64-byte limit. Current callback_data must be verified to fit within this limit with the new data format.
</action>
<verify>
1. Identify current callback_data format in all inline keyboard Code nodes
2. If tokens needed: verify Token Encoder/Decoder wired correctly, callback_data fits 64 bytes
3. If tokens NOT needed: verify all callback_data still fits 64 bytes with new container ID format
4. All connections valid
5. Push to n8n if changes were made
</verify>
<done>
Callback data encoding verified or updated for Telegram's 64-byte limit. Token Encoder/Decoder wired if needed, or documented as unnecessary if container names (not IDs) are used in callback_data.
</done>
</task>
</tasks>
<verification>
1. Zero "docker-socket-proxy" references in n8n-workflow.json
2. All container queries use Unraid GraphQL API
3. Container ID Registry updated on every query
4. Callback data fits within Telegram's 64-byte limit
5. All sub-workflow Execute nodes pass correct data format (PrefixedIDs work with migrated sub-workflows)
6. Phase 15 utility nodes preserved as templates
7. Push to n8n with HTTP 200
</verification>
<success_criteria>
- n8n-workflow.json has zero Docker socket proxy references (the Unraid API Test node already targets `$env.UNRAID_HOST` and needs no change)
- All 6 container lookups use GraphQL queries with normalizer
- Container ID Registry refreshed on every query path
- Callback data encoding works within Telegram's 64-byte limit
- Sub-workflow integration verified (actions, update, status, batch-ui all receive correct data format)
- Workflow valid and pushed to n8n
</success_criteria>
<output>
After completion, create `.planning/phases/16-api-migration/16-05-SUMMARY.md`
</output>