From 4fc791dd43e966406c5583c36ac2d545db8e8c19 Mon Sep 17 00:00:00 2001 From: Lucas Berger Date: Mon, 9 Feb 2026 09:19:10 -0500 Subject: [PATCH] docs(16): create API migration phase plans (5 plans in 2 waves) --- .planning/ROADMAP.md | 10 +- .../phases/16-api-migration/16-01-PLAN.md | 139 ++++++++++ .../phases/16-api-migration/16-02-PLAN.md | 193 ++++++++++++++ .../phases/16-api-migration/16-03-PLAN.md | 219 ++++++++++++++++ .../phases/16-api-migration/16-04-PLAN.md | 145 +++++++++++ .../phases/16-api-migration/16-05-PLAN.md | 240 ++++++++++++++++++ 6 files changed, 943 insertions(+), 3 deletions(-) create mode 100644 .planning/phases/16-api-migration/16-01-PLAN.md create mode 100644 .planning/phases/16-api-migration/16-02-PLAN.md create mode 100644 .planning/phases/16-api-migration/16-03-PLAN.md create mode 100644 .planning/phases/16-api-migration/16-04-PLAN.md create mode 100644 .planning/phases/16-api-migration/16-05-PLAN.md diff --git a/.planning/ROADMAP.md b/.planning/ROADMAP.md index cc9d76c..196165a 100644 --- a/.planning/ROADMAP.md +++ b/.planning/ROADMAP.md @@ -83,10 +83,14 @@ Plans: 4. User can batch update multiple containers via Unraid API 5. User can "update all :latest" via Unraid API 6. 
Unraid update badges clear automatically after bot-initiated updates (no manual sync) -**Plans**: TBD +**Plans**: 5 plans Plans: -- [ ] 16-01: TBD +- [ ] 16-01-PLAN.md -- Container Status workflow migration (n8n-status.json) +- [ ] 16-02-PLAN.md -- Container Actions workflow migration (n8n-actions.json) +- [ ] 16-03-PLAN.md -- Container Update workflow migration (n8n-update.json) +- [ ] 16-04-PLAN.md -- Batch UI workflow migration (n8n-batch-ui.json) +- [ ] 16-05-PLAN.md -- Main workflow routing migration (n8n-workflow.json) #### Phase 17: Cleanup **Goal**: All Docker socket proxy artifacts removed from codebase @@ -134,7 +138,7 @@ Phases execute in numeric order: 1-14 (complete) → 15 → 16 → 17 → 18 | 13 | Documentation Overhaul | v1.2 | 1/1 | Complete | 2026-02-08 | | 14 | Unraid API Access | v1.3 | 2/2 | Complete | 2026-02-08 | | 15 | Infrastructure Foundation | v1.4 | 2/2 | Complete | 2026-02-09 | -| 16 | API Migration | v1.4 | 0/? | Not started | - | +| 16 | API Migration | v1.4 | 0/5 | Not started | - | | 17 | Cleanup | v1.4 | 0/? | Not started | - | | 18 | Documentation | v1.4 | 0/? 
| Not started | - | diff --git a/.planning/phases/16-api-migration/16-01-PLAN.md b/.planning/phases/16-api-migration/16-01-PLAN.md new file mode 100644 index 0000000..facb9f4 --- /dev/null +++ b/.planning/phases/16-api-migration/16-01-PLAN.md @@ -0,0 +1,139 @@ +--- +phase: 16-api-migration +plan: 01 +type: execute +wave: 1 +depends_on: [] +files_modified: [n8n-status.json] +autonomous: true + +must_haves: + truths: + - "Container list displays same containers with same names and states as before" + - "Container submenu shows correct status for selected container" + - "Pagination works identically (same page size, same navigation)" + artifacts: + - path: "n8n-status.json" + provides: "Container status queries via Unraid GraphQL API" + contains: "graphql" + key_links: + - from: "n8n-status.json HTTP Request nodes" + to: "Unraid GraphQL API" + via: "POST to $env.UNRAID_HOST/graphql" + pattern: "UNRAID_HOST.*graphql" + - from: "n8n-status.json HTTP Request nodes" + to: "Existing Code nodes (Build Container List, Build Container Submenu, Build Paginated List)" + via: "GraphQL Response Normalizer transforms Unraid response to Docker API contract" + pattern: "Names.*State.*Id" +--- + + +Migrate n8n-status.json from Docker socket proxy to Unraid GraphQL API for all container listing and status queries. + +Purpose: Container status is the most-used feature and simplest migration target (3 read-only GET queries become 3 GraphQL POST queries). Establishes the query migration pattern for all subsequent plans. + +Output: n8n-status.json with all Docker API HTTP Request nodes replaced by Unraid GraphQL queries, wired through GraphQL Response Normalizer so downstream Code nodes see identical data shape. 
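The Docker-contract normalization this plan relies on can be sketched in plain JavaScript — a hypothetical standalone illustration only; the real implementation is the "GraphQL Response Normalizer" utility Code node copied from n8n-workflow.json, and the sample container data below is invented:

```javascript
// Hypothetical standalone sketch of the GraphQL Response Normalizer logic;
// the real implementation lives in the utility Code node in n8n-workflow.json.
const STATE_MAP = { RUNNING: 'running', STOPPED: 'exited', PAUSED: 'paused' };

function normalizeGraphqlResponse(response) {
  // Unwrap the Unraid GraphQL envelope: {data: {docker: {containers: [...]}}}
  const containers = (response && response.data && response.data.docker &&
    response.data.docker.containers) || [];
  // Re-shape each entry to the Docker API contract downstream Code nodes expect.
  return containers.map((c) => ({
    Id: c.id,                                             // preserve full PrefixedID
    Names: c.names,                                       // keep leading-slash convention
    State: STATE_MAP[c.state] || String(c.state || '').toLowerCase(),
    Image: c.image,
    Status: c.status,
  }));
}

// Invented sample response, for illustration only.
const normalized = normalizeGraphqlResponse({
  data: { docker: { containers: [
    { id: 'PrefixedID:abc123', names: ['/plex'], state: 'RUNNING',
      image: 'plexinc/pms-docker:latest', status: 'Up 2 days' },
  ] } },
});
```

With this shape preserved, downstream nodes such as Build Container List keep reading `Names[0]` and `State` exactly as they did against the Docker socket proxy.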
+ + + +@/home/luc/.claude/get-shit-done/workflows/execute-plan.md +@/home/luc/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/STATE.md +@.planning/phases/16-api-migration/16-RESEARCH.md +@.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md +@.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md +@n8n-status.json +@n8n-workflow.json (for Phase 15 utility node code — Container ID Registry, GraphQL Response Normalizer) +@ARCHITECTURE.md + + + + + + Task 1: Replace Docker API queries with Unraid GraphQL queries in n8n-status.json + n8n-status.json + +Replace all 3 Docker API HTTP Request nodes with Unraid GraphQL query equivalents. For each node: + +1. **Docker List Containers** (used for list view): + - Change from: GET `http://docker-socket-proxy:2375/containers/json?all=true` + - Change to: POST `={{ $env.UNRAID_HOST }}/graphql` with body `{"query": "query { docker { containers { id names state image status } } }"}` + - Set method to POST, add headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}` + - Set timeout to 15000ms (15s for myunraid.net relay) + - Set `options.response.response.fullResponse` to false (we want body only, matching current Docker API behavior) + - Set error handling to `continueRegularOutput` to match existing pattern + +2. **Docker Get Container** (used for submenu/status view): + - Same transformation as above (same query — downstream Code node filters by name) + +3. **Docker List For Paginate** (used for pagination): + - Same transformation as above + +After converting each HTTP Request node, add a **GraphQL Response Normalizer** Code node between each HTTP Request and its downstream Code node consumer. The normalizer code must be copied from the standalone "GraphQL Response Normalizer" utility node in n8n-workflow.json (at position [200, 2600]). 
The normalizer transforms Unraid response shape `{data: {docker: {containers: [...]}}}` to flat array `[{Id, Names, State, Image, Status}]` matching Docker API contract. + +**Wiring pattern for each of the 3 queries:** +``` +HTTP Request (GraphQL) → GraphQL Response Normalizer (Code) → existing Code node (unchanged) +``` + +The normalizer handles: +- Extract `data.docker.containers` from GraphQL response +- Map `id` → `Id` (preserve full Unraid PrefixedID) +- Map `names` → `Names` (array, keep leading slash convention) +- Map `state` → `State` (UPPERCASE → lowercase: RUNNING→running, STOPPED→exited, PAUSED→paused) +- Map `image` → `Image` +- Map `status` → `Status` + +**Also update Container ID Registry cache** after normalizer: Add a Code node after each normalizer that updates the Container ID Registry static data. Copy the registry update logic from the "Container ID Registry" utility node (position [200, 2400] in n8n-workflow.json). This ensures name-to-PrefixedID mapping stays fresh for downstream mutation operations. + +Rename the HTTP Request nodes from Docker-centric names: +- "Docker List Containers" → "Query Containers" +- "Docker Get Container" → "Query Container Status" +- "Docker List For Paginate" → "Query Containers For Paginate" + +Keep all downstream Code nodes (Build Container List, Build Container Submenu, Build Paginated List, Prepare * Request) completely unchanged — the normalizer ensures they receive Docker API format. + +**Implementation note:** The normalizer should be implemented as inline Code nodes in this sub-workflow (not references to the main workflow utility node, since sub-workflows cannot cross-reference parent workflow nodes). Copy the normalizer logic from n8n-workflow.json's "GraphQL Response Normalizer" node and embed it in each position needed. Similarly for registry cache updates. + +To keep the workflow lean, use a single shared normalizer node where possible. 
If all 3 HTTP Request queries produce the same shape and feed into separate downstream paths, consider whether a single normalizer can serve multiple paths via the existing Route Action switch node routing, or if 3 separate normalizers are needed due to n8n's connection model. + + +Load n8n-status.json with python3 and verify: +1. Zero HTTP Request nodes contain "docker-socket-proxy" in URL +2. All HTTP Request nodes use POST method to `$env.UNRAID_HOST/graphql` +3. GraphQL Response Normalizer Code nodes exist between HTTP requests and downstream Code nodes +4. Downstream Code nodes (Build Container List, Build Container Submenu, Build Paginated List) are UNCHANGED +5. All connections are valid (no dangling references) +6. Push to n8n via API and verify HTTP 200 + + +All 3 container queries in n8n-status.json use Unraid GraphQL API instead of Docker socket proxy. Normalizer transforms responses to Docker API contract. Downstream Code nodes unchanged. Workflow pushed to n8n successfully. + + + + + + +1. Load n8n-status.json and confirm zero "docker-socket-proxy" references +2. Confirm all HTTP Request nodes point to `$env.UNRAID_HOST/graphql` +3. Confirm normalizer Code nodes exist with correct state mapping (RUNNING→running, STOPPED→exited) +4. Confirm downstream Code nodes are byte-for-byte identical to pre-migration versions +5. 
Push to n8n and verify HTTP 200 response + + + +- n8n-status.json has zero Docker socket proxy references +- All container data flows through GraphQL Response Normalizer +- Container ID Registry cache updated on every query +- Downstream Code nodes unchanged (zero-change migration for consumers) +- Workflow valid and pushed to n8n + + + +After completion, create `.planning/phases/16-api-migration/16-01-SUMMARY.md` + diff --git a/.planning/phases/16-api-migration/16-02-PLAN.md b/.planning/phases/16-api-migration/16-02-PLAN.md new file mode 100644 index 0000000..9d868bc --- /dev/null +++ b/.planning/phases/16-api-migration/16-02-PLAN.md @@ -0,0 +1,193 @@ +--- +phase: 16-api-migration +plan: 02 +type: execute +wave: 1 +depends_on: [] +files_modified: [n8n-actions.json] +autonomous: true + +must_haves: + truths: + - "User can start a stopped container via Telegram and sees success message" + - "User can stop a running container via Telegram and sees success message" + - "User can restart a container via Telegram and sees success message" + - "Starting an already-running container shows 'already started' (not an error)" + - "Stopping an already-stopped container shows 'already stopped' (not an error)" + artifacts: + - path: "n8n-actions.json" + provides: "Container lifecycle operations via Unraid GraphQL mutations" + contains: "graphql" + key_links: + - from: "n8n-actions.json mutation nodes" + to: "Unraid GraphQL API" + via: "POST mutations (start, stop)" + pattern: "mutation.*docker.*start|stop" + - from: "GraphQL Error Handler" + to: "Format Start/Stop/Restart Result Code nodes" + via: "ALREADY_IN_STATE mapped to statusCode 304" + pattern: "statusCode.*304" +--- + + +Migrate n8n-actions.json from Docker socket proxy to Unraid GraphQL API for container start, stop, and restart operations. + +Purpose: Container lifecycle actions are the second-most-used feature. 
This plan replaces 4 Docker API HTTP Request nodes (1 container list + 3 actions) with GraphQL equivalents, using Container ID Registry for name-to-PrefixedID translation and GraphQL Error Handler for ALREADY_IN_STATE mapping. + +Output: n8n-actions.json with all Docker API nodes replaced by Unraid GraphQL mutations, restart implemented as sequential stop+start (no native restart mutation), error handling preserving existing statusCode 304 pattern. + + + +@/home/luc/.claude/get-shit-done/workflows/execute-plan.md +@/home/luc/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/STATE.md +@.planning/phases/16-api-migration/16-RESEARCH.md +@.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md +@.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md +@n8n-actions.json +@n8n-workflow.json (for Phase 15 utility node code — Container ID Registry, GraphQL Error Handler, HTTP Template) +@ARCHITECTURE.md + + + + + + Task 1: Replace container list query and resolve with Container ID Registry + n8n-actions.json + +Replace the container lookup flow in n8n-actions.json. Currently: +- "Has Container ID?" IF node → "Get All Containers" HTTP Request → "Resolve Container ID" Code node + +The current flow fetches ALL containers from Docker API, then searches by name in Code node to find the container ID. Replace with Unraid GraphQL query + Container ID Registry: + +1. **Replace "Get All Containers"** (GET docker-socket-proxy:2375/v1.47/containers/json?all=true): + - Change to: POST `={{ $env.UNRAID_HOST }}/graphql` + - Body: `{"query": "query { docker { containers { id names state image } } }"}` + - Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}` + - Timeout: 15000ms, error handling: `continueRegularOutput` + - Rename to "Query All Containers" + +2. **Add GraphQL Response Normalizer** Code node after the HTTP Request (before Resolve Container ID). 
Copy normalizer logic from n8n-workflow.json utility node. This transforms GraphQL response to Docker API contract format so "Resolve Container ID" Code node works unchanged. + +3. **Add Container ID Registry update** after normalizer — a Code node that updates the static data registry with fresh name→PrefixedID mappings. This is critical because downstream mutations need PrefixedIDs. + +4. **Update "Resolve Container ID"** Code node: After normalization, this node already works (it searches by `Names[0]`). However, enhance it to also output the `unraidId` (PrefixedID) from the `Id` field, so downstream mutation nodes can use it directly. Add to the output: `unraidId: matched.Id` (the normalizer preserves the full PrefixedID in the `Id` field). + +Wire: Has Container ID? (false) → Query All Containers → Normalizer → Registry Update → Resolve Container ID → Route Action + + +Load n8n-actions.json and verify: +1. "Get All Containers" node replaced with GraphQL query +2. Normalizer Code node exists between HTTP Request and Resolve Container ID +3. Resolve Container ID outputs unraidId field +4. No "docker-socket-proxy" in any URL + + +Container lookup uses Unraid GraphQL API with normalizer. Container ID Registry updated on every lookup. Resolve Container ID outputs unraidId (PrefixedID) for downstream mutations. + + + + + Task 2: Replace start/stop/restart HTTP nodes with GraphQL mutations + n8n-actions.json + +Replace the 3 Docker API action nodes with Unraid GraphQL mutations: + +1. 
**Replace "Start Container"** (POST docker-socket-proxy:2375/v1.47/containers/{id}/start): + - Add a **"Build Start Mutation"** Code node before the HTTP Request that constructs the GraphQL mutation body: + ```javascript + const data = $('Route Action').item.json; + const unraidId = data.unraidId || data.containerId; + return { json: { query: `mutation { docker { start(id: "${unraidId}") { id state } } }` } }; + ``` + - Change HTTP Request to: POST `={{ $env.UNRAID_HOST }}/graphql`, body from expression `={{ JSON.stringify({query: $json.query}) }}` + - Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}` + - Timeout: 15000ms, error handling: `continueRegularOutput` + - Add **GraphQL Error Handler** Code node after HTTP Request (before Format Start Result). Copy error handler logic from n8n-workflow.json utility node. Maps `ALREADY_IN_STATE` → `{statusCode: 304}`, `NOT_FOUND` → `{statusCode: 404}`. + - Wire: Route Action → Build Start Mutation → Start Container (HTTP) → Error Handler → Format Start Result + +2. **Replace "Stop Container"** (POST docker-socket-proxy:2375/v1.47/containers/{id}/stop?t=10): + - Same pattern as Start: Build Stop Mutation → HTTP Request → Error Handler → Format Stop Result + - Mutation: `mutation { docker { stop(id: "${unraidId}") { id state } } }` + - Timeout: 15000ms + +3. **Replace "Restart Container"** (POST docker-socket-proxy:2375/v1.47/containers/{id}/restart?t=10): + Unraid has NO native restart mutation. Implement as sequential stop + start: + + a. **Build Stop-for-Restart Mutation** Code node: + ```javascript + const data = $('Route Action').item.json; + const unraidId = data.unraidId || data.containerId; + return { json: { query: `mutation { docker { stop(id: "${unraidId}") { id state } } }`, unraidId } }; + ``` + b. **Stop For Restart** HTTP Request node (same config as Stop Container) + c. 
**Handle Stop-for-Restart Result** Code node: + - Check response: if success OR statusCode 304 (already stopped) → proceed to start + - If error → fail restart + ```javascript + const response = $input.item.json; + const prevData = $('Build Stop-for-Restart Mutation').item.json; + if (response.statusCode && response.statusCode !== 304 && !response.data) { + return { json: { error: true, statusCode: response.statusCode, message: 'Failed to stop container for restart' } }; + } + return { json: { query: `mutation { docker { start(id: "${prevData.unraidId}") { id state } } }` } }; + ``` + d. **Start After Stop** HTTP Request node (same config as Start Container) + e. **Restart Error Handler** Code node (same GraphQL Error Handler logic) + f. Wire: Route Action → Build Stop-for-Restart → Stop For Restart (HTTP) → Handle Stop-for-Restart → Start After Stop (HTTP) → Restart Error Handler → Format Restart Result + + **Critical:** The existing "Format Restart Result" Code node checks `response.statusCode === 304` which means "already running". For restart, 304 on the start step would mean the container didn't actually stop then start — it was already running. This is correct behavior for the existing Format Restart Result node. + +**Existing Format Start/Stop/Restart Result Code nodes remain UNCHANGED.** They already check: +- `response.statusCode === 304` → "already in desired state" +- `!response.message && !response.error` → success (Docker 204 No Content pattern) +- The GraphQL Error Handler output maps to match these exact patterns. + +Rename Docker-centric HTTP Request nodes: +- "Start Container" → "Start Container" (keep name, just change URL/method) +- "Stop Container" → "Stop Container" (keep name) +- Remove old "Restart Container" single-node and replace with stop+start chain + + +Load n8n-actions.json and verify: +1. Zero "docker-socket-proxy" references in any node URL +2. Start and Stop nodes use POST to `$env.UNRAID_HOST/graphql` with mutation bodies +3. 
Restart implemented as 2 HTTP Request nodes (stop then start) with intermediate error handling +4. GraphQL Error Handler Code nodes exist after each mutation HTTP Request +5. Format Start/Stop/Restart Result Code nodes are UNCHANGED from pre-migration +6. All connections valid +7. Push to n8n via API and verify HTTP 200 + + +Container start/stop use single GraphQL mutations. Restart uses sequential stop+start with ALREADY_IN_STATE tolerance on stop step. Error Handler maps GraphQL errors to statusCode 304 pattern. Format Result nodes unchanged. Workflow pushed to n8n. + + + + + + +1. Load n8n-actions.json and confirm zero "docker-socket-proxy" references +2. Confirm start/stop mutations use correct GraphQL syntax +3. Confirm restart is 2-step (stop → start) with 304 tolerance on stop +4. Confirm GraphQL Error Handler maps ALREADY_IN_STATE to statusCode 304 +5. Confirm Format Start/Stop/Restart Result Code nodes are byte-for-byte identical to pre-migration +6. Push to n8n and verify HTTP 200 + + + +- n8n-actions.json has zero Docker socket proxy references +- Start/stop operations use GraphQL mutations with Error Handler +- Restart operates as sequential stop+start with ALREADY_IN_STATE tolerance +- Format Result Code nodes unchanged (zero-change migration for output formatting) +- Container ID Registry updated on container lookup +- Workflow valid and pushed to n8n + + + +After completion, create `.planning/phases/16-api-migration/16-02-SUMMARY.md` + diff --git a/.planning/phases/16-api-migration/16-03-PLAN.md b/.planning/phases/16-api-migration/16-03-PLAN.md new file mode 100644 index 0000000..5eced76 --- /dev/null +++ b/.planning/phases/16-api-migration/16-03-PLAN.md @@ -0,0 +1,219 @@ +--- +phase: 16-api-migration +plan: 03 +type: execute +wave: 1 +depends_on: [] +files_modified: [n8n-update.json] +autonomous: true + +must_haves: + truths: + - "User can update a single container and sees 'updated: old_version -> new_version' message" + - "User sees 'already up to 
date' when no update is available" + - "User sees error message when update fails (pull error, container not found)" + - "Update success/failure messages sent via both text and inline keyboard response modes" + artifacts: + - path: "n8n-update.json" + provides: "Single container update via Unraid GraphQL updateContainer mutation" + contains: "updateContainer" + key_links: + - from: "n8n-update.json mutation node" + to: "Unraid GraphQL API" + via: "POST updateContainer mutation" + pattern: "updateContainer" + - from: "n8n-update.json" + to: "Telegram response nodes" + via: "Format Update Success/No Update/Error Code nodes" + pattern: "Format.*Result|Format.*Update|Format.*Error" +--- + + +Replace the 5-step Docker update flow in n8n-update.json with a single Unraid GraphQL `updateContainer` mutation. + +Purpose: The most complex workflow migration. Docker requires 5 sequential steps (inspect→stop→remove→create→start+cleanup) to update a container. Unraid's `updateContainer` mutation does all this atomically. This dramatically simplifies the workflow from 34 nodes to ~15-18 nodes. + +Output: n8n-update.json with single `updateContainer` mutation replacing the 5-step Docker flow, 60-second timeout for large image pulls, and identical user-facing messages (success, no-update, error). 
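As a rough illustration of the single call that replaces the 5-step flow, the HTTP Request node's effective configuration might look like this (hypothetical plain-JavaScript sketch; the host and API key values are invented stand-ins for the `UNRAID_HOST` and `UNRAID_API_KEY` environment variables):

```javascript
// Hypothetical sketch of the request issued by the "Update Container" HTTP node.
function buildUpdateRequest(unraidId, host, apiKey) {
  return {
    method: 'POST',
    url: `${host}/graphql`,
    headers: { 'Content-Type': 'application/json', 'x-api-key': apiKey },
    // One atomic mutation replaces the old inspect/stop/remove/create/start chain.
    body: JSON.stringify({
      query: `mutation { docker { updateContainer(id: "${unraidId}") { id state image imageId } } }`,
    }),
    timeout: 60000, // 60s: image pulls for large containers can take 30+ seconds
  };
}

// Invented PrefixedID and credentials, for illustration only.
const req = buildUpdateRequest('PrefixedID:abc123', 'https://tower.example.net', 'example-key');
```

The returned `imageId` field is what lets the workflow decide afterwards whether an update actually happened.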
+ + + +@/home/luc/.claude/get-shit-done/workflows/execute-plan.md +@/home/luc/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/STATE.md +@.planning/phases/16-api-migration/16-RESEARCH.md +@.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md +@.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md +@n8n-update.json +@n8n-workflow.json (for Phase 15 utility node code — Container ID Registry, GraphQL Response Normalizer, Error Handler) +@ARCHITECTURE.md + + + + + + Task 1: Replace container lookup and 5-step Docker update with single GraphQL mutation + n8n-update.json + +Completely restructure n8n-update.json to replace the 5-step Docker update flow with a single `updateContainer` GraphQL mutation. The current 34-node workflow has these stages: + +**Current flow (to be replaced):** +1. Container lookup: Has Container ID? → Get All Containers → Resolve Container ID +2. Image inspection: Inspect Container → Parse Container Config +3. Image pull: Pull Image (Execute Command via docker pull) → Check Pull Response → Check Pull Success +4. Digest comparison: Inspect New Image → Compare Digests → Check If Update Needed +5. Container recreation: Stop → Remove → Build Create Body → Create → Parse Create Response → Start +6. Messaging: Format Success/No Update/Error → Check Response Mode → Send Inline/Text → Return + +**New flow (replacement):** + +**Stage 1: Container lookup** (similar to Plan 02 pattern) +- Keep "When executed by another workflow" trigger (unchanged) +- Keep "Has Container ID?" 
IF node (unchanged) +- Replace "Get All Containers" with GraphQL query: POST `={{ $env.UNRAID_HOST }}/graphql`, body `{"query": "query { docker { containers { id names state image imageId } } }"}`, timeout 15000ms +- Add GraphQL Response Normalizer after query +- Add Container ID Registry update after normalizer +- Update "Resolve Container ID" to also output `unraidId` and `currentImageId` (from `imageId` field in normalized response for later comparison) + +**Stage 2: Pre-update state capture** (new Code node) +- "Capture Pre-Update State" Code node: Extracts `unraidId`, `containerName`, `currentImageId`, `currentImage` from resolved container data. Passes through `chatId`, `messageId`, `responseMode`, `correlationId`. + +**Stage 3: Update mutation** (replaces stages 3-5 above) +- "Build Update Mutation" Code node: Constructs GraphQL mutation body: + ```javascript + const data = $input.item.json; + return { json: { + ...data, + query: `mutation { docker { updateContainer(id: "${data.unraidId}") { id state image imageId } } }` + }}; + ``` +- "Update Container" HTTP Request node: + - POST `={{ $env.UNRAID_HOST }}/graphql` + - Body: from $json (query field) + - Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}` + - **Timeout: 60000ms (60 seconds)** — container updates pull images which can take 30+ seconds for large images + - Error handling: `continueRegularOutput` +- "Handle Update Response" Code node (replaces Compare Digests + Check If Update Needed): + ```javascript + const response = $input.item.json; + const prevData = $('Capture Pre-Update State').item.json; + + // Check for GraphQL errors + if (response.errors) { + const error = response.errors[0]; + return { json: { success: false, error: true, errorMessage: error.message, ...prevData } }; + } + + // Extract updated container from response + const updated = response.data?.docker?.updateContainer; + if (!updated) { + return { json: { success: false, error: true, errorMessage: 
'No response from update mutation', ...prevData } }; + } + + // Compare imageId to determine if update happened + const newImageId = updated.imageId || ''; + const oldImageId = prevData.currentImageId || ''; + const wasUpdated = (newImageId !== oldImageId); + + return { json: { + success: true, + needsUpdate: wasUpdated, // matches existing Check If Update Needed output field name + updated: wasUpdated, + containerName: prevData.containerName, + currentVersion: prevData.currentImage, + newVersion: updated.image, + currentImageId: oldImageId, + newImageId: newImageId, + chatId: prevData.chatId, + messageId: prevData.messageId, + responseMode: prevData.responseMode, + correlationId: prevData.correlationId + }}; + ``` + +**Stage 4: Route result** (simplified) +- "Check If Updated" IF node: Checks `$json.needsUpdate === true` + - True → "Format Update Success" (existing node — may need minor field name adjustments) + - False → "Format No Update Needed" (existing node — may need minor field name adjustments) +- Error path: from "Handle Update Response" error output → "Format Pull Error" (reuse existing error formatting) + +**Stage 5: Messaging** (preserve existing) +- Keep ALL existing messaging nodes unchanged: + - "Format Update Success" / "Check Response Mode (Success)" / "Send Inline Success" / "Send Text Success" / "Return Success" + - "Format No Update Needed" / "Check Response Mode (No Update)" / "Send Inline No Update" / "Send Text No Update" / "Return No Update" + - "Format Pull Error" / "Check Response Mode (Error)" / "Send Inline Error" / "Send Text Error" / "Return Error" +- These 15 messaging nodes stay exactly as they are. The "Handle Update Response" Code node formats its output to match what these nodes expect. 
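The Stage 4 routing decision can be sketched as a small pure function (hypothetical illustration; in the workflow this is the "Check If Updated" IF node plus the error path, not an actual Code node):

```javascript
// Hypothetical sketch of how a Handle Update Response payload is routed to
// the three preserved messaging paths.
function routeUpdateResult(result) {
  if (result.error) return 'Format Pull Error';   // GraphQL error or empty response
  return result.needsUpdate
    ? 'Format Update Success'                      // imageId changed after mutation
    : 'Format No Update Needed';                   // imageId unchanged: already current
}

const routes = [
  routeUpdateResult({ success: false, error: true, errorMessage: 'pull failed' }),
  routeUpdateResult({ success: true, needsUpdate: true }),
  routeUpdateResult({ success: true, needsUpdate: false }),
];
```

Because error payloads carry `error: true` but no `needsUpdate` flag, checking the error case before the update check keeps failures from falling into the "no update needed" path.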
+ +**Nodes to REMOVE** (no longer needed — Docker-specific operations replaced by single mutation): +- "Inspect Container" (HTTP Request) +- "Parse Container Config" (Code) +- "Pull Image" (Execute Command) +- "Check Pull Response" (Code) +- "Check Pull Success" (IF) +- "Inspect New Image" (HTTP Request) +- "Compare Digests" (Code) +- "Check If Update Needed" (IF) +- "Stop Container" (HTTP Request) +- "Remove Container" (HTTP Request) +- "Build Create Body" (Code) +- "Create Container" (HTTP Request) +- "Parse Create Response" (Code) +- "Start Container" (HTTP Request) +- "Remove Old Image (Success)" (HTTP Request) + +That's 15 nodes removed, replaced by ~4 new nodes (Normalizer, Registry Update, Build Mutation, Handle Response). Plus updated query and resolve nodes. Net reduction from 34 to ~19 nodes. + +**Adjust "Format Update Success"** Code node if needed: It currently references `$('Parse Create Response').item.json` for container data. Update to reference `$('Handle Update Response').item.json` instead. The output fields (`containerName`, `currentVersion`, `newVersion`, `chatId`, `messageId`, `responseMode`, `correlationId`) must match what Format Update Success expects. + +**Adjust "Format No Update Needed"** similarly: Currently references `$('Check If Update Needed').item.json`. Update reference to `$('Handle Update Response').item.json`. + +**Adjust "Format Pull Error"** similarly: Currently references `$('Check Pull Success').item.json`. Update reference to `$('Handle Update Response').item.json`. Field mapping: `errorMessage` stays as-is. + +**CRITICAL: Update Container ID Registry after mutation** — Container updates recreate containers with new IDs. After successful update, the old PrefixedID is invalid. Add registry cache refresh in the success path. However, since we can't easily query the full container list mid-update, rely on the mutation response (which includes the new `id`) and do a targeted registry update for just the updated container. 
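The targeted refresh can be sketched as follows (hypothetical; in an n8n Code node the registry would live in workflow static data via `$getWorkflowStaticData('global')`, modeled here as a plain object, with invented ids):

```javascript
// Hypothetical sketch of a targeted Container ID Registry refresh after a
// successful updateContainer mutation; the registry maps container name to
// the current Unraid PrefixedID.
function refreshRegistryEntry(registry, containerName, mutationResult) {
  // The update recreates the container, so the cached PrefixedID is stale;
  // overwrite just this one entry with the id from the mutation response.
  if (mutationResult && mutationResult.id) {
    registry[containerName] = mutationResult.id;
  }
  return registry;
}

// Invented ids, for illustration only.
const registry = { plex: 'PrefixedID:old111', sonarr: 'PrefixedID:keep222' };
refreshRegistryEntry(registry, 'plex', { id: 'PrefixedID:new333', state: 'RUNNING' });
```

Only the updated container's entry changes; every other cached mapping stays intact, which is what makes the targeted update safe without re-querying the full list.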
+ + +Load n8n-update.json with python3 and verify: +1. Zero "docker-socket-proxy" references +2. Zero "Execute Command" nodes (docker pull removed) +3. Single "updateContainer" mutation HTTP Request node exists with 60000ms timeout +4. Container lookup uses GraphQL query with normalizer +5. Handle Update Response properly routes to existing Format Success/No Update/Error nodes +6. All 15 messaging nodes (Format/Check/Send/Return) are present +7. Node count reduced from 34 to ~19 +8. All connections valid (no references to deleted nodes) +9. Push to n8n via API and verify HTTP 200 + + +n8n-update.json uses single updateContainer GraphQL mutation instead of 5-step Docker flow. 60-second timeout accommodates large image pulls. Format Success/No Update/Error messaging nodes preserved (with updated node references). Container ID Registry refreshed after update. Workflow reduced from 34 to ~19 nodes. Pushed to n8n successfully. + + + + + + +1. Zero "docker-socket-proxy" references in n8n-update.json +2. Zero "Execute Command" nodes (no docker pull) +3. Single updateContainer mutation with 60s timeout +4. ImageId comparison determines if update happened (not image digest) +5. All 3 response paths work: success, no-update, error +6. Format Result Code nodes reference correct upstream nodes +7. 
Push to n8n with HTTP 200 + + + +- n8n-update.json has zero Docker socket proxy references +- Single GraphQL mutation replaces 5-step Docker flow +- 60-second timeout for update mutation (accommodates large image pulls) +- Success/no-update/error messaging identical to user +- Container ID Registry refreshed after successful update +- Node count reduced by ~15 nodes +- Workflow valid and pushed to n8n + + + +After completion, create `.planning/phases/16-api-migration/16-03-SUMMARY.md` + diff --git a/.planning/phases/16-api-migration/16-04-PLAN.md b/.planning/phases/16-api-migration/16-04-PLAN.md new file mode 100644 index 0000000..7b06536 --- /dev/null +++ b/.planning/phases/16-api-migration/16-04-PLAN.md @@ -0,0 +1,145 @@ +--- +phase: 16-api-migration +plan: 04 +type: execute +wave: 1 +depends_on: [] +files_modified: [n8n-batch-ui.json] +autonomous: true + +must_haves: + truths: + - "Batch selection keyboard displays all containers with correct names and states" + - "Toggling container selection updates bitmap and keyboard correctly" + - "Navigation between pages works with correct container ordering" + - "Batch exec resolves bitmap to correct container names" + - "Clear selection resets to empty state" + artifacts: + - path: "n8n-batch-ui.json" + provides: "Batch container selection UI via Unraid GraphQL API" + contains: "graphql" + key_links: + - from: "n8n-batch-ui.json HTTP Request nodes" + to: "Unraid GraphQL API" + via: "POST container list queries" + pattern: "UNRAID_HOST.*graphql" + - from: "GraphQL Response Normalizer" + to: "Existing Code nodes (Build Batch Keyboard, Handle Toggle, etc.)" + via: "Docker API contract format (Names, State, Image)" + pattern: "Names.*State" +--- + + +Migrate n8n-batch-ui.json from Docker socket proxy to Unraid GraphQL API for all 5 container listing queries. + +Purpose: The batch UI sub-workflow fetches the container list 5 times (once per action path: mode selection, toggle update, exec, navigation, clear). 
All 5 are identical GET queries to Docker API. Replace with GraphQL queries plus normalizer for Docker API contract compatibility. + +Output: n8n-batch-ui.json with all Docker API HTTP Request nodes replaced by Unraid GraphQL queries, wired through normalizer. All existing Code nodes (bitmap encoding, keyboard building, toggle handling) unchanged. + + + +@/home/luc/.claude/get-shit-done/workflows/execute-plan.md +@/home/luc/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/STATE.md +@.planning/phases/16-api-migration/16-RESEARCH.md +@.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md +@n8n-batch-ui.json +@n8n-workflow.json (for Phase 15 utility node code — GraphQL Response Normalizer) +@ARCHITECTURE.md + + + + + + Task 1: Replace all 5 Docker API container queries with Unraid GraphQL queries in n8n-batch-ui.json + n8n-batch-ui.json + +Replace all 5 Docker API HTTP Request nodes with Unraid GraphQL query equivalents. All 5 nodes are identical GET requests to `docker-socket-proxy:2375/containers/json?all=true`. Each one: + +**Nodes to migrate:** +1. "Fetch Containers For Mode" — used when entering batch selection +2. "Fetch Containers For Update" — used after toggling a container +3. "Fetch Containers For Exec" — used when executing batch action +4. "Fetch Containers For Nav" — used when navigating pages +5. "Fetch Containers For Clear" — used after clearing selection + +**For EACH of the 5 nodes, apply the same transformation:** + +a. Change HTTP Request configuration: + - Method: POST + - URL: `={{ $env.UNRAID_HOST }}/graphql` + - Body: `{"query": "query { docker { containers { id names state image } } }"}` + - Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}` + - Timeout: 15000ms + - Error handling: `continueRegularOutput` + +b. Add a **GraphQL Response Normalizer** Code node between each HTTP Request and its downstream Code node consumer. 
Copy normalizer logic from n8n-workflow.json's "GraphQL Response Normalizer" utility node. + + The normalizer transforms: + - `{data: {docker: {containers: [...]}}}` → flat array `[{Id, Names, State, Image}]` + - State mapping: RUNNING→running, STOPPED→exited, PAUSED→paused + +**Wiring for each of the 5 paths:** +``` +[upstream] → HTTP Request (GraphQL) → Normalizer (Code) → [existing downstream Code node] +``` + +Specifically: +1. Route Batch UI Action → Fetch Containers For Mode → **Normalizer** → Build Batch Keyboard +2. Needs Keyboard Update? (true) → Fetch Containers For Update → **Normalizer** → Rebuild Keyboard After Toggle +3. [exec path] → Fetch Containers For Exec → **Normalizer** → Handle Exec +4. Handle Nav → Fetch Containers For Nav → **Normalizer** → Rebuild Keyboard For Nav +5. Handle Clear → Fetch Containers For Clear → **Normalizer** → Rebuild Keyboard After Clear + +**All downstream Code nodes remain UNCHANGED.** They use bitmap encoding with container arrays and reference `Names[0]`, `State`, `Image` — the normalizer ensures these fields exist in the correct format. + +**Implementation optimization:** Since all 5 normalizers do the exact same thing, consider creating them as 5 identical Code nodes (n8n sub-workflows cannot share nodes across paths — each path needs its own node instance). Keep the Code identical across all 5 to simplify future maintenance. + +Rename HTTP Request nodes to remove Docker-specific naming: +- "Fetch Containers For Mode" → keep name (not Docker-specific) +- "Fetch Containers For Update" → keep name +- "Fetch Containers For Exec" → keep name +- "Fetch Containers For Nav" → keep name +- "Fetch Containers For Clear" → keep name + + +Load n8n-batch-ui.json with python3 and verify: +1. Zero HTTP Request nodes contain "docker-socket-proxy" in URL +2. All 5 HTTP Request nodes use POST to `$env.UNRAID_HOST/graphql` +3. 5 GraphQL Response Normalizer Code nodes exist (one per query path) +4. 
All downstream Code nodes (Build Batch Keyboard, Handle Toggle, Handle Exec, etc.) are UNCHANGED +5. Node count increased from 17 to 22 (5 normalizer nodes added) +6. All connections valid +7. Push to n8n via API and verify HTTP 200 + + +All 5 container queries in n8n-batch-ui.json use Unraid GraphQL API. Normalizer transforms responses to Docker API contract. All bitmap encoding and keyboard building Code nodes unchanged. Workflow pushed to n8n successfully. + + + + + + +1. Zero "docker-socket-proxy" references in n8n-batch-ui.json +2. All 5 HTTP Request nodes point to `$env.UNRAID_HOST/graphql` +3. Normalizer nodes present on all 5 query paths +4. Downstream Code nodes byte-for-byte identical to pre-migration +5. Push to n8n with HTTP 200 + + + +- n8n-batch-ui.json has zero Docker socket proxy references +- All container data flows through GraphQL Response Normalizer +- Batch selection keyboard, toggle, exec, nav, clear all work with normalized data +- Downstream Code nodes unchanged (zero-change migration for consumers) +- Workflow valid and pushed to n8n + + + +After completion, create `.planning/phases/16-api-migration/16-04-SUMMARY.md` + diff --git a/.planning/phases/16-api-migration/16-05-PLAN.md b/.planning/phases/16-api-migration/16-05-PLAN.md new file mode 100644 index 0000000..30936b6 --- /dev/null +++ b/.planning/phases/16-api-migration/16-05-PLAN.md @@ -0,0 +1,240 @@ +--- +phase: 16-api-migration +plan: 05 +type: execute +wave: 2 +depends_on: [16-01, 16-02, 16-03, 16-04] +files_modified: [n8n-workflow.json] +autonomous: true + +must_haves: + truths: + - "Inline keyboard action callbacks resolve container and execute start/stop/restart/update via Unraid API" + - "Text command 'update all' shows :latest containers with update availability via Unraid API" + - "Batch update loop calls update sub-workflow for each container successfully" + - "Callback update from inline keyboard works via Unraid API" + - "Batch stop confirmation resolves bitmap to container 
names via Unraid API" + - "Cancel-return-to-submenu resolves container via Unraid API" + artifacts: + - path: "n8n-workflow.json" + provides: "Main workflow with all Docker API calls replaced by Unraid GraphQL queries" + contains: "graphql" + key_links: + - from: "n8n-workflow.json container query nodes" + to: "Unraid GraphQL API" + via: "POST container list queries" + pattern: "UNRAID_HOST.*graphql" + - from: "GraphQL Response Normalizer nodes" + to: "Existing consumer Code nodes (Prepare Inline Action Input, Check Available Updates, etc.)" + via: "Docker API contract format" + pattern: "Names.*State.*Id" + - from: "Container ID Registry" + to: "Sub-workflow Execute nodes" + via: "Name→PrefixedID mapping for mutation operations" + pattern: "unraidId|prefixedId" +--- + + +Migrate all 6 Docker socket proxy HTTP Request nodes in the main workflow (n8n-workflow.json) to Unraid GraphQL API queries. + +Purpose: The main workflow is the Telegram bot entry point. It contains 6 Docker API calls for container lookups used by inline keyboard actions, update-all flow, callback updates, batch stop, and cancel-return navigation. These are read-only lookups (no mutations — mutations happen in sub-workflows), so this is a query-only migration with normalizer and registry updates. + +Output: n8n-workflow.json with zero Docker socket proxy references, all container lookups via GraphQL, Container ID Registry updated on every query, Phase 15 utility nodes wired into active flows. 
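For reference, the query-plus-normalizer contract these plans rely on can be sketched as plain JavaScript matching what an n8n Code node would run. This is a sketch, not the canonical implementation — the authoritative version is the Phase 15 "GraphQL Response Normalizer" utility node in n8n-workflow.json, and details such as the leading slash on `Names` entries are assumptions to verify there:

```javascript
// Sketch of the GraphQL Response Normalizer contract (verify against the
// Phase 15 utility node before copying — the leading-slash convention on
// Names is an assumption mirroring Docker's /containers/json output).
const STATE_MAP = { RUNNING: 'running', STOPPED: 'exited', PAUSED: 'paused' };

function normalize(response) {
  // GraphQL shape: {data: {docker: {containers: [{id, names, state, image}]}}}
  const containers = response?.data?.docker?.containers ?? [];
  return containers.map((c) => ({
    Id: c.id, // 129-char PrefixedID, not a 64-char Docker hex ID
    Names: (c.names ?? []).map((n) => (n.startsWith('/') ? n : `/${n}`)),
    State: STATE_MAP[c.state] ?? String(c.state).toLowerCase(),
    Image: c.image,
  }));
}

// In an n8n Code node the return would be item-wrapped, roughly:
// return normalize($input.first().json).map((json) => ({ json }));
```

Consumers keep referencing `Names[0]`, `State`, and `Image` exactly as before, which is what makes this a zero-change migration for the downstream Code nodes.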
+ + + +@/home/luc/.claude/get-shit-done/workflows/execute-plan.md +@/home/luc/.claude/get-shit-done/templates/summary.md + + + +@.planning/PROJECT.md +@.planning/ROADMAP.md +@.planning/STATE.md +@.planning/phases/16-api-migration/16-RESEARCH.md +@.planning/phases/15-infrastructure-foundation/15-01-SUMMARY.md +@.planning/phases/15-infrastructure-foundation/15-02-SUMMARY.md +@.planning/phases/16-api-migration/16-01-SUMMARY.md +@.planning/phases/16-api-migration/16-02-SUMMARY.md +@.planning/phases/16-api-migration/16-03-SUMMARY.md +@.planning/phases/16-api-migration/16-04-SUMMARY.md +@n8n-workflow.json +@ARCHITECTURE.md + + + + + + Task 1: Replace all 6 Docker API container queries with Unraid GraphQL queries in main workflow + n8n-workflow.json + +Replace all 6 Docker socket proxy HTTP Request nodes in n8n-workflow.json with Unraid GraphQL queries. Each currently does GET to `docker-socket-proxy:2375/containers/json?all=true` (or `all=false` for update-all). + +**Nodes to migrate:** + +1. **"Get Container For Action"** (inline keyboard action callbacks) + - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true` + - Feeds into: "Prepare Inline Action Input" Code node + - Change to: POST `={{ $env.UNRAID_HOST }}/graphql` + - Body: `{"query": "query { docker { containers { id names state image } } }"}` + - Add Normalizer + Registry Update Code nodes between HTTP and "Prepare Inline Action Input" + +2. **"Get Container For Cancel"** (cancel-return-to-submenu) + - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true` + - Feeds into: "Build Cancel Return Submenu" Code node + - Same GraphQL transformation + normalizer + registry update + +3. 
**"Get All Containers For Update All"** (update-all text command) + - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=false` (only running containers) + - Feeds into: "Check Available Updates" Code node + - GraphQL query should filter to running containers: `{"query": "query { docker { containers { id names state image imageId } } }"}` + - NOTE: GraphQL API may not have a `running-only` filter. Query all containers and let existing "Check Available Updates" Code node filter (it already filters by `:latest` tag and excludes infrastructure). The existing code handles both running and stopped containers. + - Add `imageId` to the query for update-all flow (needed for update availability checking) + +4. **"Fetch Containers For Update All Exec"** (update-all execution) + - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=false` + - Feeds into: "Prepare Update All Batch" Code node + - Same transformation as #3 (query all, let Code node filter) + +5. **"Get Container For Callback Update"** (inline keyboard update callback) + - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true` + - Feeds into: "Find Container For Callback Update" Code node + - Standard GraphQL transformation + normalizer + registry update + +6. **"Fetch Containers For Bitmap Stop"** (batch stop confirmation) + - Currently: GET `http://docker-socket-proxy:2375/containers/json?all=true` + - Feeds into: "Resolve Batch Stop Names" Code node + - Standard GraphQL transformation + normalizer + registry update + +**For EACH node, apply:** + +a. Change HTTP Request to POST method +b. URL: `={{ $env.UNRAID_HOST }}/graphql` +c. Body: `{"query": "query { docker { containers { id names state image } } }"}` (add `imageId` for update-all nodes #3 and #4) +d. Headers: `Content-Type: application/json`, `x-api-key: ={{ $env.UNRAID_API_KEY }}` +e. Timeout: 15000ms +f. Error handling: `continueRegularOutput` +g. 
Add GraphQL Response Normalizer Code node after HTTP Request +h. Add Container ID Registry update Code node after normalizer (updates static data cache) +i. Wire normalizer/registry output to existing downstream Code node + +**Wiring pattern for each:** +``` +[upstream] → HTTP Request (GraphQL) → Normalizer → Registry Update → [existing downstream Code node] +``` + +**Phase 15 standalone utility nodes:** The standalone "GraphQL Response Normalizer", "Container ID Registry", "GraphQL Error Handler", "Unraid API HTTP Template", "Callback Token Encoder", and "Callback Token Decoder" nodes at positions [200-1000, 2400-2600] should remain in the workflow as reference templates. They are not wired to any active flow (and that's intentional — they serve as code templates for copy-paste during migration). Do NOT remove them. + +**Consumer Code nodes remain UNCHANGED:** +- "Prepare Inline Action Input" — searches containers by name using `Names[0]`, `State`, `Id` +- "Build Cancel Return Submenu" — same pattern +- "Check Available Updates" — filters `:latest` containers, checks update availability +- "Prepare Update All Batch" — prepares batch execution data +- "Find Container For Callback Update" — finds container by name +- "Resolve Batch Stop Names" — decodes bitmap to container names + +All these nodes reference `Names[0]`, `State`, `Image`, `Id` — the normalizer ensures these fields exist in correct format. + +**Special case: "Prepare Inline Action Input" and "Find Container For Callback Update"** — These nodes output `containerId: container.Id` which feeds into sub-workflow calls. The `Id` field now contains a 129-char PrefixedID (from normalizer), not a 64-char Docker hex ID. This is correct — the sub-workflows (Plan 02 actions, Plan 03 update) have been migrated to use this PrefixedID format in their GraphQL mutations. + + +Load n8n-workflow.json with python3 and verify: +1. 
Zero HTTP Request nodes contain "docker-socket-proxy" in URL (excluding the Unraid API Test node which already uses $env.UNRAID_HOST) +2. All 6 former Docker API nodes now use POST to `$env.UNRAID_HOST/graphql` +3. 6 GraphQL Response Normalizer Code nodes exist (one per query path) +4. 6 Container ID Registry update Code nodes exist +5. All downstream consumer Code nodes are UNCHANGED +6. Phase 15 standalone utility nodes still present at positions [200-1000, 2400-2600] +7. All connections valid (no dangling references) +8. Push to n8n via API and verify HTTP 200 + + +All 6 Docker API queries in main workflow replaced with Unraid GraphQL queries. Normalizer and Registry update on every query path. Consumer Code nodes unchanged. Phase 15 utility nodes preserved as templates. Workflow pushed to n8n. + + + + + Task 2: Wire Callback Token Encoder/Decoder into inline keyboard flows + n8n-workflow.json + +Wire the Callback Token Encoder and Decoder from Phase 15 into the main workflow's inline keyboard callback flows. This ensures Telegram callback_data uses 8-char tokens instead of full container IDs (which are now 129-char PrefixedIDs, far exceeding Telegram's 64-byte limit). + +**IMPORTANT: First investigate the current callback_data encoding pattern.** + +Before implementing, read the existing Code nodes that generate inline keyboard buttons to understand how callback_data is currently structured. The nodes to examine: +- "Build Container List" (in n8n-status.json, but called via Execute Workflow from main) +- "Build Container Submenu" (in n8n-status.json) +- Any Code node in main workflow that creates `inline_keyboard` arrays + +Current pattern likely uses short container names or Docker short IDs (12 chars) in callback_data. With PrefixedIDs (129 chars), this MUST change to use the Callback Token Encoder. 
+ +**If callback_data currently uses container NAMES (not IDs):** +- Container names are short (e.g., "plex", "sonarr") and fit within 64 bytes +- In this case, callback token encoding may NOT be needed for all paths +- Only paths that embed container IDs in callback_data need token encoding + +**If callback_data currently uses container IDs:** +- ALL paths generating callback_data with container IDs must use Token Encoder +- ALL paths parsing callback_data with container IDs must use Token Decoder + +**Investigation steps:** +1. Read Code nodes that create inline keyboards in n8n-status.json and main workflow +2. Identify the exact callback_data format (e.g., "start:containerName", "s:dockerId", "select:name") +3. Determine which paths (if any) embed container IDs in callback_data +4. Only wire Token Encoder/Decoder for paths that need it + +**If token encoding IS needed, wire as follows:** + +For keyboard GENERATION (encoder): +- Find Code nodes that build `inline_keyboard` with container IDs +- Before those nodes, add a Code node that calls the Token Encoder logic to convert each PrefixedID to an 8-char token +- Update callback_data format to use tokens instead of IDs + +For callback PARSING (decoder): +- Find the "Parse Callback Data" Code node in main workflow +- Add Token Decoder logic to resolve tokens back to container names/PrefixedIDs +- Update downstream routing to use decoded values + +**If token encoding is NOT needed (names used, not IDs):** +- Document this finding in the SUMMARY +- Leave Token Encoder/Decoder as standalone utility nodes for future use +- Verify that all callback_data fits within 64 bytes with current naming + +**Key constraint:** Telegram inline keyboard callback_data has a 64-byte limit. Current callback_data must be verified to fit within this limit with the new data format. + + +1. Identify current callback_data format in all inline keyboard Code nodes +2. 
If tokens needed: verify Token Encoder/Decoder wired correctly, callback_data fits 64 bytes +3. If tokens NOT needed: verify all callback_data still fits 64 bytes with new container ID format +4. All connections valid +5. Push to n8n if changes were made + + +Callback data encoding verified or updated for Telegram's 64-byte limit. Token Encoder/Decoder wired if needed, or documented as unnecessary if container names (not IDs) are used in callback_data. + + + + + + +1. Zero "docker-socket-proxy" references in n8n-workflow.json +2. All container queries use Unraid GraphQL API +3. Container ID Registry updated on every query +4. Callback data fits within Telegram's 64-byte limit +5. All sub-workflow Execute nodes pass correct data format (PrefixedIDs work with migrated sub-workflows) +6. Phase 15 utility nodes preserved as templates +7. Push to n8n with HTTP 200 + + + +- n8n-workflow.json has zero Docker socket proxy references (except possibly Unraid API Test node which is already correct) +- All 6 container lookups use GraphQL queries with normalizer +- Container ID Registry refreshed on every query path +- Callback data encoding works within Telegram's 64-byte limit +- Sub-workflow integration verified (actions, update, status, batch-ui all receive correct data format) +- Workflow valid and pushed to n8n + + + +After completion, create `.planning/phases/16-api-migration/16-05-SUMMARY.md` +