| phase | plan | type | wave | depends_on | files_modified | autonomous | must_haves |
|---|---|---|---|---|---|---|---|
| 16-api-migration | 05 | execute | 2 | | | true | |
Purpose: The main workflow is the Telegram bot entry point. It contains 6 Docker API calls for container lookups used by inline keyboard actions, update-all flow, callback updates, batch stop, and cancel-return navigation. These are read-only lookups (no mutations — mutations happen in sub-workflows), so this is a query-only migration with normalizer and registry updates.
Output: n8n-workflow.json with zero Docker socket proxy references, all container lookups via GraphQL, Container ID Registry updated on every query, Phase 15 utility nodes wired into active flows.
<execution_context> @/home/luc/.claude/get-shit-done/workflows/execute-plan.md @/home/luc/.claude/get-shit-done/templates/summary.md </execution_context>
@.planning/PROJECT.md @.planning/ROADMAP.md @.planning/STATE.md @.planning/phases/16-api-migration/16-RESEARCH.md @.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md @.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md @.planning/phases/16-api-migration/16-01-SUMMARY.md @.planning/phases/16-api-migration/16-02-SUMMARY.md @.planning/phases/16-api-migration/16-03-SUMMARY.md @.planning/phases/16-api-migration/16-04-SUMMARY.md @n8n-workflow.json @ARCHITECTURE.md

Task 1: Replace all 6 Docker API container queries with Unraid GraphQL queries in main workflow (n8n-workflow.json)

Replace all 6 Docker socket proxy HTTP Request nodes in n8n-workflow.json with Unraid GraphQL queries. Each currently does a GET to `docker-socket-proxy:2375/containers/json?all=true` (or `all=false` for update-all).

Nodes to migrate:
1. "Get Container For Action" (inline keyboard action callbacks)
   - Currently: GET http://docker-socket-proxy:2375/containers/json?all=true
   - Feeds into: "Prepare Inline Action Input" Code node
   - Change to: POST ={{ $env.UNRAID_HOST }}/graphql
   - Body: {"query": "query { docker { containers { id names state image } } }"}
   - Add Normalizer + Registry Update Code nodes between the HTTP Request and "Prepare Inline Action Input"

2. "Get Container For Cancel" (cancel-return-to-submenu)
   - Currently: GET http://docker-socket-proxy:2375/containers/json?all=true
   - Feeds into: "Build Cancel Return Submenu" Code node
   - Same GraphQL transformation + normalizer + registry update

3. "Get All Containers For Update All" (update-all text command)
   - Currently: GET http://docker-socket-proxy:2375/containers/json?all=false (only running containers)
   - Feeds into: "Check Available Updates" Code node
   - GraphQL body: {"query": "query { docker { containers { id names state image imageId } } }"}
   - NOTE: the GraphQL API may not have a running-only filter. Query all containers and let the existing "Check Available Updates" Code node filter (it already filters by :latest tag and excludes infrastructure). The existing code handles both running and stopped containers.
   - Add imageId to the query for the update-all flow (needed for update availability checking)

4. "Fetch Containers For Update All Exec" (update-all execution)
   - Currently: GET http://docker-socket-proxy:2375/containers/json?all=false
   - Feeds into: "Prepare Update All Batch" Code node
   - Same transformation as #3 (query all, let the Code node filter)

5. "Get Container For Callback Update" (inline keyboard update callback)
   - Currently: GET http://docker-socket-proxy:2375/containers/json?all=true
   - Feeds into: "Find Container For Callback Update" Code node
   - Standard GraphQL transformation + normalizer + registry update

6. "Fetch Containers For Bitmap Stop" (batch stop confirmation)
   - Currently: GET http://docker-socket-proxy:2375/containers/json?all=true
   - Feeds into: "Resolve Batch Stop Names" Code node
   - Standard GraphQL transformation + normalizer + registry update
For EACH node, apply:
a. Change the HTTP Request method to POST
b. URL: ={{ $env.UNRAID_HOST }}/graphql
c. Body: {"query": "query { docker { containers { id names state image } } }"} (add imageId for update-all nodes #3 and #4)
d. Headers: Content-Type: application/json, x-api-key: ={{ $env.UNRAID_API_KEY }}
e. Timeout: 15000ms
f. Error handling: continueRegularOutput
g. Add GraphQL Response Normalizer Code node after HTTP Request
h. Add Container ID Registry update Code node after normalizer (updates static data cache)
i. Wire normalizer/registry output to existing downstream Code node
Wiring pattern for each:
[upstream] → HTTP Request (GraphQL) → Normalizer → Registry Update → [existing downstream Code node]
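Steps a–f amount to a mechanical rewrite of each node's JSON. The sketch below shows one way to script it; the parameter key names (sendBody, jsonBody, headerParameters, onError) are assumptions about the n8n HTTP Request node schema, so copy the exact field layout from the already-working Unraid API Test node in n8n-workflow.json rather than trusting this verbatim:

```python
import json

# Hypothetical sketch of the migrated node JSON (steps a-f).
# Key names are assumptions -- verify against the real schema of the
# existing "Unraid API Test" node before applying.
GRAPHQL_BODY = json.dumps(
    {"query": "query { docker { containers { id names state image } } }"}
)

def migrate_node(node: dict, with_image_id: bool = False) -> dict:
    """Rewrite a Docker socket proxy HTTP Request node in place."""
    body = GRAPHQL_BODY
    if with_image_id:  # nodes #3 and #4 (update-all) also need imageId
        body = body.replace("state image", "state image imageId")
    node["parameters"] = {
        "method": "POST",                              # step a
        "url": "={{ $env.UNRAID_HOST }}/graphql",      # step b
        "sendBody": True,
        "specifyBody": "json",
        "jsonBody": body,                              # step c
        "sendHeaders": True,
        "headerParameters": {"parameters": [           # step d
            {"name": "Content-Type", "value": "application/json"},
            {"name": "x-api-key", "value": "={{ $env.UNRAID_API_KEY }}"},
        ]},
        "options": {"timeout": 15000},                 # step e (15000ms)
    }
    node["onError"] = "continueRegularOutput"          # step f
    return node

node = migrate_node({"name": "Get Container For Action"})
assert "docker-socket-proxy" not in node["parameters"]["url"]
```

Running this over the six node names (passing with_image_id=True for #3 and #4) keeps the per-node changes uniform.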
Phase 15 standalone utility nodes: The standalone "GraphQL Response Normalizer", "Container ID Registry", "GraphQL Error Handler", "Unraid API HTTP Template", "Callback Token Encoder", and "Callback Token Decoder" nodes at positions [200-1000, 2400-2600] should remain in the workflow as reference templates. They are not wired to any active flow (and that's intentional — they serve as code templates for copy-paste during migration). Do NOT remove them.
Consumer Code nodes remain UNCHANGED:
- "Prepare Inline Action Input" — searches containers by name using Names[0], State, Id
- "Build Cancel Return Submenu" — same pattern
- "Check Available Updates" — filters :latest containers, checks update availability
- "Prepare Update All Batch" — prepares batch execution data
- "Find Container For Callback Update" — finds container by name
- "Resolve Batch Stop Names" — decodes bitmap to container names

All these nodes reference Names[0], State, Image, Id — the normalizer ensures these fields exist in the correct format.
Special case: "Prepare Inline Action Input" and "Find Container For Callback Update" — These nodes output containerId: container.Id which feeds into sub-workflow calls. The Id field now contains a 129-char PrefixedID (from normalizer), not a 64-char Docker hex ID. This is correct — the sub-workflows (Plan 02 actions, Plan 03 update) have been migrated to use this PrefixedID format in their GraphQL mutations.
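The real normalizer is a JavaScript Code node copied from the Phase 15 template, but the field mapping it must perform can be sketched in Python. The GraphQL response shape and the leading-slash name convention here are assumptions inferred from the query and from the Docker API format the consumers expect:

```python
# Sketch of the GraphQL Response Normalizer's field mapping (assumed
# response shape; the authoritative logic lives in the Phase 15
# template Code node, which is JavaScript).
def normalize(graphql_response: dict) -> list[dict]:
    containers = graphql_response["data"]["docker"]["containers"]
    out = []
    for c in containers:
        names = c.get("names") or []
        out.append({
            # PrefixedID passes through unchanged; migrated sub-workflows
            # expect it in their GraphQL mutations
            "Id": c["id"],
            # Docker API names carry a leading slash; preserve that shape
            # so consumers reading Names[0] keep working
            "Names": [n if n.startswith("/") else "/" + n for n in names],
            # note: GraphQL may report state in a different case than the
            # Docker API did; match whatever the consumer nodes compare
            "State": c.get("state"),
            "Image": c.get("image"),
            "ImageID": c.get("imageId"),  # present only for update-all queries
        })
    return out

sample = {"data": {"docker": {"containers": [
    {"id": "docker:abc123", "names": ["plex"], "state": "RUNNING",
     "image": "plexinc/pms-docker:latest"}]}}}
items = normalize(sample)
assert items[0]["Names"][0] == "/plex"
```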
Load n8n-workflow.json with python3 and verify:
- Zero HTTP Request nodes contain "docker-socket-proxy" in URL (excluding the Unraid API Test node which already uses $env.UNRAID_HOST)
- All 6 former Docker API nodes now use POST to $env.UNRAID_HOST/graphql
- 6 GraphQL Response Normalizer Code nodes exist (one per query path)
- 6 Container ID Registry update Code nodes exist
- All downstream consumer Code nodes are UNCHANGED
- Phase 15 standalone utility nodes still present at positions [200-1000, 2400-2600]
- All connections valid (no dangling references)
- Push to n8n via API and verify HTTP 200

All 6 Docker API queries in the main workflow replaced with Unraid GraphQL queries. Normalizer and Registry update on every query path. Consumer Code nodes unchanged. Phase 15 utility nodes preserved as templates. Workflow pushed to n8n.
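The python3 verification can be written as a few pure functions over the loaded workflow dict, which also makes them easy to spot-check on a synthetic example before running against the real file. The assumption here is the standard n8n export layout (top-level "nodes" and "connections" keyed by node name):

```python
import json

# Verification sketch over a loaded n8n workflow dict
# (wf = json.load(open("n8n-workflow.json"))).
def find_proxy_refs(wf: dict) -> list[str]:
    """Names of nodes whose parameters still mention the socket proxy."""
    return [n["name"] for n in wf["nodes"]
            if "docker-socket-proxy" in json.dumps(n.get("parameters", {}))]

def dangling_connections(wf: dict) -> list[str]:
    """Connection endpoints that reference a nonexistent node name."""
    names = {n["name"] for n in wf["nodes"]}
    bad = [src for src in wf.get("connections", {}) if src not in names]
    for outs in wf.get("connections", {}).values():
        for branch in outs.get("main", []):
            bad += [t["node"] for t in (branch or []) if t["node"] not in names]
    return bad

# Synthetic example: one migrated node wired to its normalizer.
wf = {
    "nodes": [
        {"name": "Get Container For Action",
         "parameters": {"url": "={{ $env.UNRAID_HOST }}/graphql"}},
        {"name": "GraphQL Response Normalizer", "parameters": {}},
    ],
    "connections": {"Get Container For Action": {"main": [[
        {"node": "GraphQL Response Normalizer"}]]}},
}
assert find_proxy_refs(wf) == []
assert dangling_connections(wf) == []
```

Counting the normalizer and registry Code nodes is a similar list comprehension over wf["nodes"]; remember that one extra standalone copy of each exists as the Phase 15 template.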
IMPORTANT: First investigate the current callback_data encoding pattern.
Before implementing, read the existing Code nodes that generate inline keyboard buttons to understand how callback_data is currently structured. The nodes to examine:
- "Build Container List" (in n8n-status.json, but called via Execute Workflow from main)
- "Build Container Submenu" (in n8n-status.json)
- Any Code node in the main workflow that creates inline_keyboard arrays
Current pattern likely uses short container names or Docker short IDs (12 chars) in callback_data. With PrefixedIDs (129 chars), this MUST change to use the Callback Token Encoder.
If callback_data currently uses container NAMES (not IDs):
- Container names are short (e.g., "plex", "sonarr") and fit within 64 bytes
- In this case, callback token encoding may NOT be needed for all paths
- Only paths that embed container IDs in callback_data need token encoding
If callback_data currently uses container IDs:
- ALL paths generating callback_data with container IDs must use Token Encoder
- ALL paths parsing callback_data with container IDs must use Token Decoder
Investigation steps:
- Read Code nodes that create inline keyboards in n8n-status.json and main workflow
- Identify the exact callback_data format (e.g., "start:containerName", "s:dockerId", "select:name")
- Determine which paths (if any) embed container IDs in callback_data
- Only wire Token Encoder/Decoder for paths that need it
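Part of this investigation can be automated by scanning the Code nodes' source for callback_data literals. The regex below is a rough heuristic (it will miss multi-line or indirect assignments), and the sample jsCode is illustrative, not the actual node contents:

```python
import re

# Heuristic scan: list (node name, callback_data expression) pairs
# found in Code node source. Regex is approximate by design.
def callback_formats(wf: dict) -> list[tuple[str, str]]:
    hits = []
    for n in wf["nodes"]:
        code = n.get("parameters", {}).get("jsCode", "")
        for m in re.findall(r"callback_data\s*[:=]\s*([^,\n}]+)", code):
            hits.append((n["name"], m.strip()))
    return hits

# Illustrative example, not the real "Build Container List" source.
wf = {"nodes": [{"name": "Build Container List", "parameters": {
    "jsCode": 'return [{callback_data: "start:" + name}];'}}]}
assert callback_formats(wf) == [("Build Container List", '"start:" + name')]
```

Run it over both n8n-workflow.json and n8n-status.json, then inspect each hit by hand to classify it as name-based or ID-based.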
If token encoding IS needed, wire as follows:
For keyboard GENERATION (encoder):
- Find Code nodes that build inline_keyboard with container IDs
- Before those nodes, add a Code node that calls the Token Encoder logic to convert each PrefixedID to an 8-char token
- Update callback_data format to use tokens instead of IDs
For callback PARSING (decoder):
- Find the "Parse Callback Data" Code node in main workflow
- Add Token Decoder logic to resolve tokens back to container names/PrefixedIDs
- Update downstream routing to use decoded values
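The intended round trip can be sketched as follows. The hash-prefix-plus-lookup-table scheme here is an assumption about how the Phase 15 Callback Token Encoder/Decoder work; copy the actual algorithm from those template nodes, since encoder and decoder must agree exactly:

```python
import hashlib

# Hypothetical encode/decode round trip. The real Phase 15 nodes define
# the authoritative algorithm; this 8-char-sha256-prefix scheme plus a
# registry dict (persisted in workflow static data) is an assumption.
def encode_token(prefixed_id: str, registry: dict) -> str:
    token = hashlib.sha256(prefixed_id.encode()).hexdigest()[:8]
    registry[token] = prefixed_id  # decoder needs this mapping later
    return token

def decode_token(token: str, registry: dict) -> str:
    return registry[token]

registry: dict = {}
pid = "docker:" + "a" * 122          # 129-char PrefixedID (shape assumed)
tok = encode_token(pid, registry)
cb = f"start:{tok}"                  # e.g. what a keyboard button would carry
assert len(cb.encode()) <= 64        # Telegram callback_data limit
assert decode_token(tok, registry) == pid
```

Whatever the real scheme is, the two properties asserted above are the ones to verify: the encoded callback_data stays within 64 bytes, and decoding recovers the exact PrefixedID the sub-workflows expect.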
If token encoding is NOT needed (names used, not IDs):
- Document this finding in the SUMMARY
- Leave Token Encoder/Decoder as standalone utility nodes for future use
- Verify that all callback_data fits within 64 bytes with current naming
Key constraint: Telegram inline keyboard callback_data has a 64-byte limit. Current callback_data must be verified to fit within this limit with the new data format.
- Identify current callback_data format in all inline keyboard Code nodes
- If tokens needed: verify Token Encoder/Decoder wired correctly, callback_data fits 64 bytes
- If tokens NOT needed: verify all callback_data still fits 64 bytes with new container ID format
- All connections valid
- Push to n8n if changes were made

Callback data encoding verified or updated for Telegram's 64-byte limit. Token Encoder/Decoder wired if needed, or documented as unnecessary if container names (not IDs) are used in callback_data.
<success_criteria>
- n8n-workflow.json has zero Docker socket proxy references (except possibly Unraid API Test node which is already correct)
- All 6 container lookups use GraphQL queries with normalizer
- Container ID Registry refreshed on every query path
- Callback data encoding works within Telegram's 64-byte limit
- Sub-workflow integration verified (actions, update, status, batch-ui all receive correct data format)
- Workflow valid and pushed to n8n
</success_criteria>