| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 16-api-migration | 05 | execute | 2 | | | true | |
Purpose: The main workflow is the Telegram bot entry point. It contains 6 Docker API calls for container lookups used by inline keyboard actions, update-all flow, callback updates, batch stop, and cancel-return navigation. Additionally, the batch update flow currently calls the update sub-workflow serially per container — this plan also implements the updateContainers (plural) mutation for efficient parallel batch updates.
Output: n8n-workflow.json with zero Docker socket proxy references, all container lookups via GraphQL, Container ID Registry updated on every query, Phase 15 utility nodes wired into active flows, and hybrid batch update strategy (plural mutation for small batches, serial with progress for large batches).
<execution_context> @/home/luc/.claude/get-shit-done/workflows/execute-plan.md @/home/luc/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md @.planning/phases/16-api-migration/16-RESEARCH.md @.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md @.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md @.planning/phases/16-api-migration/16-01-SUMMARY.md @.planning/phases/16-api-migration/16-02-SUMMARY.md @.planning/phases/16-api-migration/16-03-SUMMARY.md @.planning/phases/16-api-migration/16-04-SUMMARY.md @n8n-workflow.json @ARCHITECTURE.md

Task 1: Replace all 6 Docker API container queries with Unraid GraphQL queries in the main workflow (n8n-workflow.json)

Replace all 6 Docker socket proxy HTTP Request nodes in n8n-workflow.json with Unraid GraphQL queries. Each currently does a GET to `docker-socket-proxy:2375/containers/json?all=true` (or `all=false` for update-all). Nodes to migrate:
1. "Get Container For Action" (inline keyboard action callbacks)
   - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
   - Feeds into: "Prepare Inline Action Input" Code node
   - Change to: POST `={{ $env.UNRAID_HOST }}/graphql`
   - Body: `{"query": "query { docker { containers { id names state image } } }"}`
   - Add Normalizer + Registry Update Code nodes between the HTTP Request and "Prepare Inline Action Input"

2. "Get Container For Cancel" (cancel-return-to-submenu)
   - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
   - Feeds into: "Build Cancel Return Submenu" Code node
   - Same GraphQL transformation + normalizer + registry update

3. "Get All Containers For Update All" (update-all text command)
   - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=false` (only running containers)
   - Feeds into: "Check Available Updates" Code node
   - GraphQL query should filter to running containers: `{"query": "query { docker { containers { id names state image imageId } } }"}`
   - NOTE: the GraphQL API may not have a running-only filter. Query all containers and let the existing "Check Available Updates" Code node filter (it already filters by the `:latest` tag and excludes infrastructure; the existing code handles both running and stopped containers).
   - Add `imageId` to the query for the update-all flow (needed for update availability checking)

4. "Fetch Containers For Update All Exec" (update-all execution)
   - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=false`
   - Feeds into: "Prepare Update All Batch" Code node
   - Same transformation as #3 (query all, let the Code node filter)

5. "Get Container For Callback Update" (inline keyboard update callback)
   - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
   - Feeds into: "Find Container For Callback Update" Code node
   - Standard GraphQL transformation + normalizer + registry update

6. "Fetch Containers For Bitmap Stop" (batch stop confirmation)
   - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true`
   - Feeds into: "Resolve Batch Stop Names" Code node
   - Standard GraphQL transformation + normalizer + registry update
For EACH node, apply:
a. Change HTTP Request to POST method
b. URL: ={{ $env.UNRAID_HOST }}/graphql
c. Body: {"query": "query { docker { containers { id names state image } } }"} (add imageId for update-all nodes #3 and #4)
d. Headers: Content-Type: application/json, x-api-key: ={{ $env.UNRAID_API_KEY }}
e. Timeout: 15000ms
f. Error handling: continueRegularOutput
g. Add GraphQL Response Normalizer Code node after HTTP Request
h. Add Container ID Registry update Code node after normalizer (updates static data cache)
i. Wire normalizer/registry output to existing downstream Code node
Wiring pattern for each:
[upstream] → HTTP Request (GraphQL) → Normalizer → Registry Update → [existing downstream Code node]
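Applied to the workflow JSON, each migrated node's configuration would look roughly like the object below. This is a hedged sketch: the exact parameter keys vary by HTTP Request node typeVersion, so treat the key names as assumptions to verify against an existing GraphQL node (e.g. the Unraid API Test node) rather than as the definitive n8n schema.

```javascript
// Sketch of a migrated HTTP Request node's parameters (key names assumed
// from recent n8n HTTP Request node versions; verify against the workflow).
const migratedParameters = {
  method: 'POST',
  url: '={{ $env.UNRAID_HOST }}/graphql',
  sendHeaders: true,
  headerParameters: {
    parameters: [
      { name: 'Content-Type', value: 'application/json' },
      { name: 'x-api-key', value: '={{ $env.UNRAID_API_KEY }}' },
    ],
  },
  sendBody: true,
  specifyBody: 'json',
  jsonBody: '{"query": "query { docker { containers { id names state image } } }"}',
  options: { timeout: 15000 }, // step e: 15000ms timeout
};

// Error handling (step f) is set at the node level, not in parameters:
const nodeLevelSettings = { onError: 'continueRegularOutput' };
```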
Phase 15 standalone utility nodes: The standalone "GraphQL Response Normalizer", "Container ID Registry", "GraphQL Error Handler", "Unraid API HTTP Template", "Callback Token Encoder", and "Callback Token Decoder" nodes at positions [200-1000, 2400-2600] should remain in the workflow as reference templates. They are not wired to any active flow (and that's intentional — they serve as code templates for copy-paste during migration). Do NOT remove them.
Consumer Code nodes remain UNCHANGED:
- "Prepare Inline Action Input" — searches containers by name using `Names[0]`, `State`, `Id`
- "Build Cancel Return Submenu" — same pattern
- "Check Available Updates" — filters `:latest` containers, checks update availability
- "Prepare Update All Batch" — prepares batch execution data
- "Find Container For Callback Update" — finds container by name
- "Resolve Batch Stop Names" — decodes bitmap to container names

All these nodes reference `Names[0]`, `State`, `Image`, `Id` — the normalizer ensures these fields exist in the correct format.
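As an illustration of what the normalizer must guarantee, here is a minimal sketch. It is not the actual Phase 15 node; `normalizeContainers` is a name invented here, and the exact field handling (leading slash on names, state casing) is an assumption based on the Docker API conventions the consumer nodes rely on.

```javascript
// Hedged sketch: map the GraphQL container shape to the Docker-API field
// names (Names[0], State, Image, Id) the downstream Code nodes expect.
function normalizeContainers(graphqlResponse) {
  const containers = graphqlResponse.data?.docker?.containers || [];
  return containers.map((c) => ({
    Id: c.id, // PrefixedID, not the 64-char Docker hex ID
    Names: (c.names || []).map((n) => (n.startsWith('/') ? n : '/' + n)),
    State: (c.state || '').toLowerCase(), // GraphQL may report "RUNNING"
    Image: c.image,
    ImageID: c.imageId, // only present when the query requested imageId
  }));
}

// Inside the actual n8n Code node this would be applied as:
// return normalizeContainers($input.item.json).map((json) => ({ json }));
```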
Special case: "Prepare Inline Action Input" and "Find Container For Callback Update" — These nodes output containerId: container.Id which feeds into sub-workflow calls. The Id field now contains a 129-char PrefixedID (from normalizer), not a 64-char Docker hex ID. This is correct — the sub-workflows (Plan 02 actions, Plan 03 update) have been migrated to use this PrefixedID format in their GraphQL mutations.
Load n8n-workflow.json with python3 and verify:
- Zero HTTP Request nodes contain "docker-socket-proxy" in the URL (excluding the Unraid API Test node, which already uses `$env.UNRAID_HOST`)
- All 6 former Docker API nodes now use POST to `$env.UNRAID_HOST/graphql`
- 6 GraphQL Response Normalizer Code nodes exist (one per query path)
- 6 Container ID Registry update Code nodes exist
- All downstream consumer Code nodes are UNCHANGED
- Phase 15 standalone utility nodes still present at positions [200-1000, 2400-2600]
- All connections valid (no dangling references)
- Push to n8n via API and verify HTTP 200

All 6 Docker API queries in the main workflow replaced with Unraid GraphQL queries. Normalizer and Registry update on every query path. Consumer Code nodes unchanged. Phase 15 utility nodes preserved as templates. Workflow pushed to n8n.
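The structural checks can be expressed as a pure function over the parsed workflow. The sketch below is in JavaScript for consistency with the Code node examples (the plan itself runs the equivalent checks via python3); the node-name substrings it matches on are assumptions about how the new nodes will be named.

```javascript
// Hedged sketch: count socket-proxy references and migration helper nodes
// in a parsed n8n workflow object.
function auditWorkflow(workflow) {
  const nodes = workflow.nodes || [];
  const socketRefs = nodes.filter((n) =>
    JSON.stringify(n.parameters || {}).includes('docker-socket-proxy'));
  const normalizers = nodes.filter((n) =>
    (n.name || '').includes('GraphQL Response Normalizer'));
  const registryUpdates = nodes.filter((n) =>
    (n.name || '').includes('Container ID Registry'));
  return {
    socketRefCount: socketRefs.length,     // expected: 0 after migration
    normalizerCount: normalizers.length,   // expected: 6 (plus templates)
    registryUpdateCount: registryUpdates.length,
  };
}
```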
IMPORTANT: First investigate the current callback_data encoding pattern.
Before implementing, read the existing Code nodes that generate inline keyboard buttons to understand how callback_data is currently structured. The nodes to examine:
- "Build Container List" (in n8n-status.json, but called via Execute Workflow from main)
- "Build Container Submenu" (in n8n-status.json)
- Any Code node in the main workflow that creates `inline_keyboard` arrays
Current pattern likely uses short container names or Docker short IDs (12 chars) in callback_data. With PrefixedIDs (129 chars), this MUST change to use the Callback Token Encoder.
If callback_data currently uses container NAMES (not IDs):
- Container names are short (e.g., "plex", "sonarr") and fit within 64 bytes
- In this case, callback token encoding may NOT be needed for all paths
- Only paths that embed container IDs in callback_data need token encoding
If callback_data currently uses container IDs:
- ALL paths generating callback_data with container IDs must use Token Encoder
- ALL paths parsing callback_data with container IDs must use Token Decoder
Investigation steps:
- Read Code nodes that create inline keyboards in n8n-status.json and main workflow
- Identify the exact callback_data format (e.g., "start:containerName", "s:dockerId", "select:name")
- Determine which paths (if any) embed container IDs in callback_data
- Only wire Token Encoder/Decoder for paths that need it
If token encoding IS needed, wire as follows:
For keyboard GENERATION (encoder):
- Find Code nodes that build `inline_keyboard` with container IDs
- Before those nodes, add a Code node that calls the Token Encoder logic to convert each PrefixedID to an 8-char token
- Update the callback_data format to use tokens instead of IDs
For callback PARSING (decoder):
- Find the "Parse Callback Data" Code node in main workflow
- Add Token Decoder logic to resolve tokens back to container names/PrefixedIDs
- Update downstream routing to use decoded values
If token encoding is NOT needed (names used, not IDs):
- Document this finding in the SUMMARY
- Leave Token Encoder/Decoder as standalone utility nodes for future use
- Verify that all callback_data fits within 64 bytes with current naming
Key constraint: Telegram inline keyboard callback_data has a 64-byte limit. Current callback_data must be verified to fit within this limit with the new data format.
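Note that Telegram measures the limit in UTF-8 bytes, not characters, so the check should use byte length. A one-line helper for the verification step (`callbackDataFits` is an illustrative name, not an existing node):

```javascript
// Telegram's callback_data limit is 1-64 bytes, measured in UTF-8 bytes.
function callbackDataFits(callbackData) {
  return Buffer.byteLength(String(callbackData), 'utf8') <= 64;
}
```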
- Identify current callback_data format in all inline keyboard Code nodes
- If tokens needed: verify Token Encoder/Decoder wired correctly, callback_data fits 64 bytes
- If tokens NOT needed: verify all callback_data still fits 64 bytes with new container ID format
- All connections valid
- Push to n8n if changes were made

Callback data encoding verified or updated for Telegram's 64-byte limit. Token Encoder/Decoder wired if needed, or documented as unnecessary if container names (not IDs) are used in callback_data.
Hybrid strategy (from research Pattern 4):
- Batches of 1-5 containers: use a single `updateContainers(ids: [PrefixedID!]!)` mutation directly in the main workflow (fast, parallel, no progress updates needed for a small count)
- Batches of >5 containers: keep the existing serial loop calling the update sub-workflow per container, with Telegram message edits showing progress (user sees "Updated 3/10: plex" etc.)
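The branch condition itself is trivial; as a sketch (the mode names below are illustrative, not the actual IF node outputs):

```javascript
// Hedged sketch of the batch-size routing decision.
function chooseBatchMode(containerCount) {
  // 1-5 containers: one updateContainers mutation (parallel, no progress UI)
  // >5 containers: existing serial loop with Telegram progress edits
  return containerCount <= 5 ? 'mutation' : 'serial';
}
```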
Implementation in the batch update Code node ("Prepare Update All Batch" or equivalent):
Find the Code node that prepares the batch update execution. This node currently builds a list of containers to update and feeds them to a loop that calls Execute Workflow (n8n-update.json) per container.
Add a branching IF node after the batch preparation:
- IF `containerCount <= 5` → "Batch Update Via Mutation" path (new)
- IF `containerCount > 5` → existing serial loop path (unchanged)
New "Batch Update Via Mutation" path:

- "Build Batch Update Mutation" Code node:

```javascript
const containers = $input.all().map(item => item.json);

// Look up PrefixedIDs from the Container ID Registry (static data)
const staticData = $getWorkflowStaticData('global');
const registry = JSON.parse(staticData._containerIdRegistry || '{}');

const ids = [];
const nameMap = {};
for (const container of containers) {
  const name = container.containerName || container.name;
  const entry = registry[name];
  if (entry && entry.prefixedId) {
    ids.push(entry.prefixedId);
    nameMap[entry.prefixedId] = name;
  }
}

return [{ json: {
  query: `mutation { docker { updateContainers(ids: ${JSON.stringify(ids)}) { id state image imageId } } }`,
  ids,
  nameMap,
  containerCount: ids.length,
  chatId: containers[0].chatId,
  messageId: containers[0].messageId
} }];
```
- "Execute Batch Update" HTTP Request node:
  - POST `={{ $env.UNRAID_HOST }}/graphql`
  - Body: from `$json` (query field)
  - Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}`
  - Timeout: 120000ms (120 seconds) — batch updates pull multiple images and need an extended timeout
  - Error handling: continueRegularOutput
- "Handle Batch Update Response" Code node:

```javascript
const response = $input.item.json;
const prevData = $('Build Batch Update Mutation').item.json;

// Check for GraphQL errors
if (response.errors) {
  return { json: {
    success: false,
    error: true,
    errorMessage: response.errors[0].message,
    chatId: prevData.chatId,
    messageId: prevData.messageId
  } };
}

const updated = response.data?.docker?.updateContainers || [];
// NOTE: if the update recreates containers, the returned id may differ from
// the id the nameMap was keyed on; fall back to the raw id in that case.
const results = updated.map(container => ({
  name: prevData.nameMap[container.id] || container.id,
  id: container.id, // carry the (possibly new) id so the registry can refresh
  imageId: container.imageId,
  state: container.state
}));

return { json: {
  success: true,
  batchMode: 'parallel',
  updatedCount: results.length,
  results,
  chatId: prevData.chatId,
  messageId: prevData.messageId
} };
```
- Update the Container ID Registry after the batch mutation — container IDs change after update:

```javascript
const response = $input.item.json;
if (response.success && Array.isArray(response.results)) {
  const staticData = $getWorkflowStaticData('global');
  const registry = JSON.parse(staticData._containerIdRegistry || '{}');
  // Container IDs change when an update recreates a container:
  // refresh each registry entry from the mutation response.
  for (const r of response.results) {
    if (r.name && r.id) {
      registry[r.name] = { ...(registry[r.name] || {}), prefixedId: r.id, updatedAt: Date.now() };
    }
  }
  staticData._containerIdRegistry = JSON.stringify(registry);
}
return $input.all();
```
Wire the batch mutation result into the existing batch update success messaging path (the same path that currently receives results from the serial loop). The response format should match what the existing success messaging expects.
Serial path (>5 containers) — UNCHANGED: Keep the existing loop calling Execute Workflow (n8n-update.json) per container with Telegram progress edits. This path is already migrated by Plan 16-03 (n8n-update.json uses GraphQL internally).
Key wiring:
Prepare Update All Batch → Check Batch Size (IF: count <= 5)
→ True: Build Batch Mutation → Execute Batch Update (HTTP, 120s) → Handle Batch Response → Registry Update → Format Batch Result
→ False: [existing serial loop with Execute Workflow calls, unchanged]
<success_criteria>
- n8n-workflow.json has zero Docker socket proxy references (except possibly Unraid API Test node which is already correct)
- All 6 container lookups use GraphQL queries with normalizer
- Container ID Registry refreshed on every query path
- Callback data encoding works within Telegram's 64-byte limit
- Sub-workflow integration verified (actions, update, status, batch-ui all receive correct data format)
- Hybrid batch update: small batches (<=5) use updateContainers mutation, large batches (>5) use serial with progress
- Container ID Registry refreshed after batch mutations
- Workflow valid and pushed to n8n </success_criteria>