docs(03): create phase plan

Phase 03: Container Actions
- 4 plans in 4 waves (sequential due to shared workflow file)
- Ready for execution
Lucas Berger
2026-01-29 21:48:58 -05:00
parent f0694f6c8c
commit 893412f405
5 changed files with 1310 additions and 0 deletions
@@ -0,0 +1,234 @@
---
phase: 03-container-actions
plan: 01
type: execute
wave: 1
depends_on: []
files_modified: [n8n-workflow.json]
autonomous: true
must_haves:
truths:
- "User can start a stopped container by name"
- "User can stop a running container by name"
- "User can restart a container by name"
- "Single container matches execute immediately without confirmation"
artifacts:
- path: "n8n-workflow.json"
provides: "Action routing and Docker API POST calls"
contains: "Route Action Message"
key_links:
- from: "Switch node (Route Message)"
to: "Action routing branch"
via: "contains start/stop/restart"
pattern: "start|stop|restart"
- from: "Execute Command node"
to: "Docker API"
via: "curl POST to /containers/{id}/start|stop|restart"
pattern: "-X POST.*containers"
---
<objective>
Implement basic container actions (start, stop, restart) for single-container matches.
Purpose: Enable users to control containers through Telegram by sending commands like "start plex" or "stop sonarr". When exactly one container matches, the action executes immediately.
Output: Extended n8n workflow with action command routing and Docker API POST calls.
</objective>
<execution_context>
@/home/luc/.claude/get-shit-done/workflows/execute-plan.md
@/home/luc/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/03-container-actions/03-CONTEXT.md
@.planning/phases/03-container-actions/03-RESEARCH.md
@.planning/phases/02-docker-integration/02-02-SUMMARY.md
@n8n-workflow.json
</context>
<tasks>
<task type="auto">
<name>Task 1: Add action command routing to workflow</name>
<files>n8n-workflow.json</files>
<action>
Extend the existing "Route Message" Switch node to detect action commands.
Add new routes for patterns:
- "start <name>" → action branch
- "stop <name>" → action branch
- "restart <name>" → action branch
The route should match case-insensitively and capture the container name portion.
Add a Code node after the route that:
1. Parses the action type (start/stop/restart) from message text
2. Parses the container name from message text
3. Returns: { action, containerQuery, chatId, messageId }
Example parsing:
```javascript
const text = $json.message.text.toLowerCase().trim();
const match = text.match(/^(start|stop|restart)\s+(.+)$/i);
if (!match) {
return { json: { error: 'Invalid action format' } };
}
return {
json: {
action: match[1].toLowerCase(),
containerQuery: match[2].trim(),
chatId: $json.message.chat.id,
messageId: $json.message.message_id
}
};
```
</action>
<verify>Send "start test" to bot - should route to action branch (may fail on execution, but routing works)</verify>
<done>Action commands route to dedicated branch, action and container name are parsed</done>
</task>
<task type="auto">
<name>Task 2: Implement container matching and action execution</name>
<files>n8n-workflow.json</files>
<action>
After the action parsing node, add nodes to:
1. **Docker List Containers** (Execute Command node):
- Same as existing status query: `curl -s --unix-socket /var/run/docker.sock 'http://localhost/v1.47/containers/json?all=true'`
2. **Match Container** (Code node):
- Reuse the fuzzy matching logic from Phase 2:
- Case-insensitive substring match
- Strip common prefixes (linuxserver-, binhex-)
- Return match results:
- `matches`: array of matching containers (Id, Name, State)
- `matchCount`: number of matches
- `action`: preserved from input
- `chatId` and `containerQuery`: preserved from input
- `allContainers`: the full container list, passed through for downstream suggestion handling
3. **Check Match Count** (Switch node):
- Route based on matchCount:
- 0 matches → "No Match" branch
- 1 match → "Single Match" branch (execute action)
- >1 matches → "Multiple Matches" branch
4. **Execute Action** (on "Single Match" branch):
- In a Code node, build the curl command for the chosen action; a following Execute Command node runs `{{ $json.cmd }}`:
```javascript
const containerId = $json.matches[0].Id;
const action = $json.action;
// stop and restart use ?t=10 for graceful timeout
const timeout = (action === 'stop' || action === 'restart') ? '?t=10' : '';
const cmd = `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/containers/${containerId}/${action}${timeout}'`;
return { json: { cmd, containerId, action, containerName: $json.matches[0].Name } };
```
5. **Parse Result** (Code node):
- Handle HTTP response codes:
- 204: Success
- 304: Already in state (also success for user)
- 404: Container not found (shouldn't happen after match)
- 500: Docker error
```javascript
const statusCode = parseInt($json.stdout.trim());
const containerName = $('Execute Action').first().json.containerName.replace(/^\//, '');
const action = $('Match Container').first().json.action;
if (statusCode === 204 || statusCode === 304) {
const verb = action === 'start' ? 'started' :
action === 'stop' ? 'stopped' : 'restarted';
return { json: { success: true, message: `${containerName} ${verb} successfully` } };
}
return { json: { success: false, message: `Failed to ${action} ${containerName}: HTTP ${statusCode}` } };
```
6. **Send Response** (Telegram Send Message node):
- Chat ID: `{{ $json.chatId }}`
- Text: `{{ $json.message }}`
- Parse Mode: HTML
For "No Match" and "Multiple Matches" branches, add placeholder Send Message nodes:
- No Match: "No container found matching '{{ $json.containerQuery }}'"
- Multiple Matches: "Found {{ $json.matchCount }} containers matching '{{ $json.containerQuery }}'. Confirmation required."
These placeholders will be replaced with proper callback flows in Plan 03-02.
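The Phase 2 fuzzy matching logic referenced in step 2 is not reproduced in this plan; a minimal standalone sketch (function name and prefix list are assumptions, container shape follows the Docker Engine API `/containers/json` response) might look like:
```javascript
// Hypothetical sketch of the Phase 2 fuzzy match reused by "Match Container".
// Containers follow the Docker Engine API shape: { Id, Names: ["/name"], State }.
const STRIP_PREFIXES = ['linuxserver-', 'binhex-'];

function matchContainers(containers, query) {
  const q = query.toLowerCase();
  const matches = containers.filter(c => {
    let name = c.Names[0].replace(/^\//, '').toLowerCase();
    for (const prefix of STRIP_PREFIXES) {
      if (name.startsWith(prefix)) name = name.slice(prefix.length);
    }
    // Case-insensitive substring match after prefix stripping
    return name.includes(q);
  });
  return {
    matches: matches.map(c => ({ Id: c.Id, Name: c.Names[0], State: c.State })),
    matchCount: matches.length
  };
}
```
In the workflow this would run inside the Match Container Code node, with `action`, `chatId`, and `containerQuery` copied through from the parsing step.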
</action>
<verify>
1. Start a stopped container: "start [container-name]" → should start and report success
2. Stop a running container: "stop [container-name]" → should stop and report success
3. Restart a container: "restart [container-name]" → should restart and report success
4. Try with partial name (fuzzy match): "restart plex" for container named "plex-server" → should work
</verify>
<done>Single-match container actions execute via Docker API and report results to Telegram</done>
</task>
<task type="auto">
<name>Task 3: Handle action errors gracefully</name>
<files>n8n-workflow.json</files>
<action>
Add error handling throughout the action flow:
1. **Docker List Error** (after Docker List Containers):
- Add IF node to check for curl errors
- On error: Send diagnostic message to user
```javascript
const hasError = !$json.stdout || $json.stdout.trim() === '' || !$json.stdout.startsWith('[');
return { json: { hasError, errorDetail: $json.stderr || 'Empty response from Docker API' } };
```
2. **Execute Action Error** (after Execute Command):
- The parse result node already handles non-success codes
- Add stderr check for curl-level failures:
```javascript
if ($json.stderr && $json.stderr.trim()) {
return { json: { success: false, message: `Docker error: ${$json.stderr}` } };
}
```
3. **Error Response Format** (per CONTEXT.md - diagnostic details):
- Include actual error info in messages
- Example: "Failed to stop plex: HTTP 500 - Container is not running"
- Don't hide technical details from user
Ensure all error paths eventually reach a Send Message node so the user always gets feedback.
</action>
<verify>
1. Try to stop an already-stopped container → should report success (304 treated as success)
2. Try to start an already-running container → should report success (304 treated as success)
3. Try action on non-existent container → should report "No container found"
</verify>
<done>All error cases report diagnostic details to user, no silent failures</done>
</task>
</tasks>
<verification>
End-to-end verification:
1. "start [stopped-container]" → Container starts, user sees "started successfully"
2. "stop [running-container]" → Container stops, user sees "stopped successfully"
3. "restart [any-container]" → Container restarts, user sees "restarted successfully"
4. "stop [already-stopped]" → User sees success (not error)
5. "stop nonexistent" → User sees "No container found matching 'nonexistent'"
6. "stop arr" (matches sonarr, radarr, lidarr) → User sees placeholder about multiple matches
Import updated workflow into n8n and verify all scenarios via Telegram.
</verification>
<success_criteria>
- Single-match actions execute immediately without confirmation
- All three actions (start/stop/restart) work correctly
- Fuzzy matching finds containers by partial name
- 204 and 304 responses both treated as success
- Error messages include diagnostic details
- No silent failures - user always gets response
</success_criteria>
<output>
After completion, create `.planning/phases/03-container-actions/03-01-SUMMARY.md`
</output>
@@ -0,0 +1,310 @@
---
phase: 03-container-actions
plan: 02
type: execute
wave: 2
depends_on: ["03-01"]
files_modified: [n8n-workflow.json]
autonomous: true
must_haves:
truths:
- "Telegram Trigger receives callback_query updates from inline buttons"
- "Callback queries route to dedicated handler branch"
- "No-match suggestions show 'Did you mean X?' with inline button"
- "User can accept suggestion without retyping command"
artifacts:
- path: "n8n-workflow.json"
provides: "Callback query handling and suggestion flow"
contains: "Route Update Type"
key_links:
- from: "Telegram Trigger"
to: "Switch node (Route Update Type)"
via: "message or callback_query routing"
pattern: "callback_query"
- from: "HTTP Request node"
to: "Telegram Bot API"
via: "sendMessage with inline_keyboard"
pattern: "api.telegram.org.*sendMessage"
---
<objective>
Add callback query infrastructure and implement the "did you mean?" suggestion flow for no-match cases.
Purpose: Enable inline button interactions in Telegram. When a user's container name doesn't match exactly, show a suggestion with an inline button they can click to accept without retyping.
Output: Extended n8n workflow with callback handling and suggestion UI.
</objective>
<execution_context>
@/home/luc/.claude/get-shit-done/workflows/execute-plan.md
@/home/luc/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/03-container-actions/03-CONTEXT.md
@.planning/phases/03-container-actions/03-RESEARCH.md
@.planning/phases/03-container-actions/03-01-SUMMARY.md
@n8n-workflow.json
</context>
<tasks>
<task type="auto">
<name>Task 1: Configure Telegram Trigger for callback queries</name>
<files>n8n-workflow.json</files>
<action>
Modify the Telegram Trigger node to receive both message and callback_query updates:
1. Find the Telegram Trigger node in the workflow
2. Update the "Updates" field to include both types:
- In n8n UI: Updates → ["message", "callback_query"]
- In JSON: "updates": ["message", "callback_query"]
3. Add a new Switch node immediately after the Telegram Trigger and before the authentication IF node:
- Name: "Route Update Type"
- Mode: Rules
- Rules:
- Rule 1: `{{ $json.message }}` is not empty → Output "message"
- Rule 2: `{{ $json.callback_query }}` is not empty → Output "callback_query"
4. Restructure connections:
- Telegram Trigger → Route Update Type
- Route Update Type (message) → IF User Authenticated (existing flow)
- Route Update Type (callback_query) → new callback handler branch
For callback_query authentication, add a new IF node:
- Name: "IF Callback Authenticated"
- Condition: `{{ $json.callback_query.from.id }}` equals authorized user ID
- True: continue to callback processing
- False: no connection (silent ignore per CONTEXT.md)
</action>
<verify>
1. Send a regular message → should route to message branch and work as before
2. Workflow should not error on receiving callback_query updates
</verify>
<done>Telegram Trigger receives callback_query, updates route to appropriate branches</done>
</task>
<task type="auto">
<name>Task 2: Implement suggestion flow for no-match cases</name>
<files>n8n-workflow.json</files>
<action>
Replace the placeholder "No Match" branch from Plan 03-01 with a suggestion flow:
1. **Find Closest Match** (Code node):
After determining zero exact matches, find the closest container name:
```javascript
const query = $json.containerQuery.toLowerCase();
const containers = $json.allContainers; // Full list from Docker - the Match Container node (Plan 03-01) must pass this through
const action = $json.action;
const chatId = $json.chatId;
// Simple closest match: longest common substring or starts-with
let bestMatch = null;
let bestScore = 0;
for (const container of containers) {
const name = container.Names[0].replace(/^\//, '').toLowerCase();
// Score by: contains query, or query contains name, or Levenshtein-like
let score = 0;
if (name.includes(query)) score = query.length;
else if (query.includes(name)) score = name.length * 0.8;
else {
// Simple: count matching characters
for (let i = 0; i < Math.min(query.length, name.length); i++) {
if (query[i] === name[i]) score++;
}
}
if (score > bestScore) {
bestScore = score;
bestMatch = container;
}
}
if (!bestMatch || bestScore < 2) {
return { json: { hasSuggestion: false, query, action, chatId } };
}
const suggestedName = bestMatch.Names[0].replace(/^\//, '');
const suggestedId = bestMatch.Id.substring(0, 12); // Short ID for callback_data
return {
json: {
hasSuggestion: true,
query,
action,
chatId,
suggestedName,
suggestedId,
timestamp: Date.now()
}
};
```
2. **Check Suggestion** (IF node):
- Condition: `{{ $json.hasSuggestion }}` equals true
- True: send suggestion with button
- False: send "no container found" message
3. **Build Suggestion Keyboard** (Code node, on True branch):
```javascript
const { chatId, query, action, suggestedName, suggestedId, timestamp } = $json;
// callback_data must be ≤64 bytes - use short keys
// a=action (1 char: s=start, t=stop, r=restart)
// c=container short ID
// t=timestamp
const actionCode = action === 'start' ? 's' : action === 'stop' ? 't' : 'r';
const callbackData = JSON.stringify({ a: actionCode, c: suggestedId, t: timestamp });
return {
json: {
chat_id: chatId,
text: `No container '<b>${query}</b>' found.\n\nDid you mean <b>${suggestedName}</b>?`,
parse_mode: "HTML",
reply_markup: {
inline_keyboard: [
[
{ text: `Yes, ${action} ${suggestedName}`, callback_data: callbackData },
{ text: "Cancel", callback_data: '{"a":"x"}' }
]
]
}
}
};
```
4. **Send Suggestion** (HTTP Request node):
- Method: POST
- URL: `https://api.telegram.org/bot{{ $credentials.telegramApi.accessToken }}/sendMessage`
- Body Content Type: JSON
- Body: `{{ JSON.stringify($json) }}`
5. **No Suggestion Message** (Telegram Send Message, on False branch):
- Chat ID: `{{ $json.chatId }}`
- Text: `No container found matching '{{ $json.query }}'`
</action>
<verify>
1. "stop nonexistent" → should show "No container found" (no suggestion if nothing close)
2. "stop plx" when "plex" exists → should show "Did you mean plex?" with button
3. Verify button appears and is clickable (don't click yet - callback handling in next task)
</verify>
<done>No-match cases show suggestion with inline button when a close match exists</done>
</task>
<task type="auto">
<name>Task 3: Handle suggestion callback and execute action</name>
<files>n8n-workflow.json</files>
<action>
Add callback processing for suggestion buttons on the callback_query branch:
1. **Parse Callback Data** (Code node, after IF Callback Authenticated):
```javascript
const callback = $json.callback_query;
let data;
try {
data = JSON.parse(callback.data);
} catch (e) {
data = { a: 'x' }; // Treat parse error as cancel
}
const queryId = callback.id;
const chatId = callback.message.chat.id;
const messageId = callback.message.message_id;
// Check 2-minute timeout
const TWO_MINUTES = 120000;
const isExpired = data.t && (Date.now() - data.t > TWO_MINUTES);
// Decode action
const actionMap = { s: 'start', t: 'stop', r: 'restart', x: 'cancel' };
const action = actionMap[data.a] || 'cancel';
return {
json: {
queryId,
chatId,
messageId,
action,
containerId: data.c || null,
expired: isExpired,
isSuggestion: true, // Single container suggestion, not batch
isCancel: action === 'cancel'
}
};
```
2. **Route Callback** (Switch node):
- Rule 1: `{{ $json.isCancel }}` equals true → Cancel branch
- Rule 2: `{{ $json.expired }}` equals true → Expired branch
- Rule 3: Default → Execute branch
3. **Cancel Handler** (Telegram node):
- Operation: Answer Query
- Query ID: `{{ $json.queryId }}`
- Text: "Cancelled"
- Show Alert: false
Then HTTP Request to delete the suggestion message:
- POST to `https://api.telegram.org/bot{{ $credentials.telegramApi.accessToken }}/deleteMessage`
- Body: `{ "chat_id": {{ $json.chatId }}, "message_id": {{ $json.messageId }} }`
4. **Expired Handler**:
- Answer callback query with "Confirmation expired. Please try again."
- Delete the old message
5. **Execute from Callback** (Code node building the command, followed by an Execute Command node running `{{ $json.cmd }}`):
```javascript
const containerId = $json.containerId;
const action = $json.action;
const timeout = (action === 'stop' || action === 'restart') ? '?t=10' : '';
const cmd = `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/containers/${containerId}/${action}${timeout}'`;
return { json: { cmd, containerId, action, queryId: $json.queryId, chatId: $json.chatId, messageId: $json.messageId } };
```
6. **Parse and Respond** (Code node):
- Check status code (204/304 = success)
- Fetch container name for response message
- Answer callback query
- Delete suggestion message
- Send success/failure message
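Step 6 is only outlined above; as a sketch, the status-code handling could be factored into a plain function (context field names are assumptions carried over from the earlier steps):
```javascript
// Hedged sketch of the status-code handling for "Parse and Respond".
// stdout is the HTTP status code written by curl's -w "%{http_code}".
function parseActionResult(stdout, ctx) {
  const statusCode = parseInt(stdout.trim(), 10);
  const success = statusCode === 204 || statusCode === 304; // 304 = already in desired state
  return {
    queryId: ctx.queryId,
    chatId: ctx.chatId,
    messageId: ctx.messageId, // the suggestion message to delete afterwards
    success,
    message: success
      ? `${ctx.containerName || ctx.containerId} ${ctx.action} succeeded`
      : `Failed to ${ctx.action} ${ctx.containerName || ctx.containerId}: HTTP ${statusCode}`
  };
}
```
The result then feeds the answer-query, delete-message, and send-message nodes in order.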
</action>
<verify>
1. "stop plx" → suggestion appears → click "Yes, stop plex" → container stops, suggestion message deleted
2. "stop plx" → suggestion appears → click "Cancel" → suggestion deleted, "Cancelled" toast
3. "stop plx" → wait 2+ minutes → click button → shows "expired" message
</verify>
<done>Clicking suggestion button executes the action and cleans up the UI</done>
</task>
</tasks>
<verification>
End-to-end callback flow verification:
1. Regular messages still work (status, echo, actions from Plan 01)
2. "stop typo" when similar container exists → suggestion with button
3. Click "Yes" → action executes, success message appears
4. Click "Cancel" → suggestion dismissed
5. Wait 2 minutes, click → "expired" message
6. "stop nonexistent" with no close match → plain "not found" message
Import updated workflow and test all scenarios.
</verification>
<success_criteria>
- Telegram Trigger receives both messages and callback_queries
- Suggestion buttons appear for typos/close matches
- Clicking suggestion executes the action
- Cancel button dismisses suggestion
- Expired confirmations handled gracefully
- Old messages cleaned up after interaction
</success_criteria>
<output>
After completion, create `.planning/phases/03-container-actions/03-02-SUMMARY.md`
</output>
@@ -0,0 +1,329 @@
---
phase: 03-container-actions
plan: 03
type: execute
wave: 3
depends_on: ["03-02"]
files_modified: [n8n-workflow.json]
autonomous: true
must_haves:
truths:
- "Multiple container matches show confirmation with inline buttons"
- "Confirmation shows list of matching containers"
- "User can confirm batch action with single button click"
- "Batch actions execute all matching containers in sequence"
artifacts:
- path: "n8n-workflow.json"
provides: "Batch confirmation flow with inline buttons"
contains: "Multiple Matches"
key_links:
- from: "Multiple Matches branch"
to: "HTTP Request for keyboard"
via: "Build confirmation keyboard"
pattern: "inline_keyboard.*Yes.*containers"
- from: "Callback handler"
to: "Batch execution loop"
via: "Execute action for each container"
pattern: "for.*containers"
---
<objective>
Implement batch confirmation flow for actions matching multiple containers.
Purpose: When a user's query matches multiple containers (e.g., "stop arr" matches sonarr, radarr, lidarr), show a confirmation with the list and an inline button. Per CONTEXT.md, batch actions require confirmation before execution.
Output: Extended n8n workflow with batch confirmation and sequential execution.
</objective>
<execution_context>
@/home/luc/.claude/get-shit-done/workflows/execute-plan.md
@/home/luc/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/03-container-actions/03-CONTEXT.md
@.planning/phases/03-container-actions/03-RESEARCH.md
@.planning/phases/03-container-actions/03-02-SUMMARY.md
@n8n-workflow.json
</context>
<tasks>
<task type="auto">
<name>Task 1: Build batch confirmation message with inline keyboard</name>
<files>n8n-workflow.json</files>
<action>
Replace the placeholder "Multiple Matches" branch from Plan 03-01:
1. **Build Batch Keyboard** (Code node):
```javascript
const matches = $json.matches;
const action = $json.action;
const chatId = $json.chatId;
const query = $json.containerQuery;
// List matched container names
const names = matches.map(m => m.Names[0].replace(/^\//, ''));
const shortIds = matches.map(m => m.Id.substring(0, 12));
// Build callback_data - must be ≤64 bytes
// For batch: a=action code, c=array of short IDs, t=timestamp
const actionCode = action === 'start' ? 's' : action === 'stop' ? 't' : 'r';
const timestamp = Date.now();
// callback_data is limited to 64 bytes. With JSON overhead plus quoted 12-char
// short IDs, only ~2 containers fit at full length. Docker accepts any
// unambiguous ID prefix, so shrink the prefixes until the payload fits.
// (Everything here is ASCII, so .length equals the byte count.)
let idLen = 12;
let callbackData = JSON.stringify({ a: actionCode, c: shortIds, t: timestamp });
while (callbackData.length > 64 && idLen > 6) {
  idLen--;
  callbackData = JSON.stringify({ a: actionCode, c: shortIds.map(id => id.substring(0, idLen)), t: timestamp });
}
// Even with 6-char prefixes this caps out around 3 containers; larger batches
// would need the container list stored server-side behind a short token.
// Format container list
const listText = names.map(n => ` • ${n}`).join('\n');
return {
json: {
chat_id: chatId,
text: `Found <b>${matches.length}</b> containers matching '<b>${query}</b>':\n\n${listText}\n\n${action.charAt(0).toUpperCase() + action.slice(1)} all?`,
parse_mode: "HTML",
reply_markup: {
inline_keyboard: [
[
{ text: `Yes, ${action} ${matches.length} containers`, callback_data: callbackData },
{ text: "Cancel", callback_data: '{"a":"x"}' }
]
]
},
// Store full data for potential later use
_meta: {
action,
containers: matches,
timestamp
}
}
};
```
2. **Send Batch Confirmation** (HTTP Request node):
- Method: POST
- URL: `https://api.telegram.org/bot{{ $credentials.telegramApi.accessToken }}/sendMessage`
- Body Content Type: JSON
- Body: `{{ JSON.stringify({ chat_id: $json.chat_id, text: $json.text, parse_mode: $json.parse_mode, reply_markup: $json.reply_markup }) }}`
</action>
<verify>
1. "stop arr" when sonarr, radarr, lidarr exist → shows confirmation with list
2. Verify button text shows "Yes, stop 3 containers"
3. Both buttons are visible and clickable
</verify>
<done>Multiple matches show batch confirmation message with inline buttons</done>
</task>
<task type="auto">
<name>Task 2: Handle batch confirmation callback</name>
<files>n8n-workflow.json</files>
<action>
Extend the callback handler from Plan 03-02 to handle batch confirmations:
1. **Update Parse Callback Data** (modify existing Code node):
Add detection for batch vs single suggestion:
```javascript
// Existing code from Plan 03-02...
// Detect batch (c is array vs single string)
const isBatch = Array.isArray(data.c);
const containerIds = isBatch ? data.c : [data.c].filter(Boolean);
return {
json: {
queryId,
chatId,
messageId,
action,
containerIds, // Array for batch support
containerId: containerIds[0], // For single-container compat
expired: isExpired,
isBatch,
isCancel: action === 'cancel'
}
};
```
2. **Route for Batch** (update Switch node):
Add rule before single execution:
- Rule: `{{ $json.isBatch }}` equals true AND not cancel AND not expired → Batch Execute branch
3. **Batch Execute** (Code node that builds commands):
```javascript
const containerIds = $json.containerIds;
const action = $json.action;
const timeout = (action === 'stop' || action === 'restart') ? '?t=10' : '';
// Build array of commands
const commands = containerIds.map(id => ({
cmd: `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/containers/${id}/${action}${timeout}'`,
containerId: id
}));
return {
json: {
commands,
action,
queryId: $json.queryId,
chatId: $json.chatId,
messageId: $json.messageId,
totalCount: containerIds.length
}
};
```
4. **Execute Batch Loop** (use n8n's SplitInBatches or loop approach):
Option A - Sequential Execute (simpler):
```javascript
// In n8n, use a Code node that executes sequentially
const { execSync } = require('child_process');
const commands = $json.commands;
const results = [];
for (const { cmd, containerId } of commands) {
try {
const output = execSync(cmd, { encoding: 'utf8' }).trim();
const statusCode = parseInt(output);
results.push({
containerId,
success: statusCode === 204 || statusCode === 304,
statusCode
});
} catch (err) {
results.push({
containerId,
success: false,
error: err.message
});
}
}
const successCount = results.filter(r => r.success).length;
const failCount = results.length - successCount;
return {
json: {
results,
successCount,
failCount,
totalCount: results.length,
action: $json.action,
queryId: $json.queryId,
chatId: $json.chatId,
messageId: $json.messageId
}
};
```
NOTE: Using execSync in an n8n Code node requires allowing the built-in module via the `NODE_FUNCTION_ALLOW_BUILTIN=child_process` environment variable. If that is not an option, use a SplitInBatches node feeding an Execute Command node instead.
5. **Format Batch Result** (Code node):
```javascript
const { successCount, failCount, totalCount, action } = $json;
const verb = action === 'start' ? 'started' :
action === 'stop' ? 'stopped' : 'restarted';
let message;
if (failCount === 0) {
message = `Successfully ${verb} ${successCount} container${successCount > 1 ? 's' : ''}`;
} else if (successCount === 0) {
message = `Failed to ${action} all ${totalCount} containers`;
} else {
message = `${verb.charAt(0).toUpperCase() + verb.slice(1)} ${successCount}/${totalCount} containers (${failCount} failed)`;
}
return {
json: {
message,
chatId: $json.chatId,
queryId: $json.queryId,
messageId: $json.messageId
}
};
```
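For the SplitInBatches alternative mentioned in step 4, the command builder from step 3 could instead emit one n8n item per container, so a SplitInBatches → Execute Command loop runs each curl in turn. A sketch (plain function for illustration, field names assumed):
```javascript
// Sketch of a per-container item builder for the SplitInBatches alternative.
// Each returned item carries its own curl command for an Execute Command node.
function buildBatchItems(containerIds, action, ctx) {
  const timeout = (action === 'stop' || action === 'restart') ? '?t=10' : '';
  return containerIds.map(id => ({
    json: {
      cmd: `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/containers/${id}/${action}${timeout}'`,
      containerId: id,
      ...ctx // queryId, chatId, messageId preserved on every item
    }
  }));
}
```
After the loop, an aggregation Code node would collect the per-item status codes into the same `{ successCount, failCount, totalCount }` shape used by Format Batch Result.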
</action>
<verify>
1. "stop arr" → confirm → all matching containers stop
2. Verify success message shows correct count
3. If one container fails, message shows partial success
</verify>
<done>Batch confirmation executes actions on all matching containers</done>
</task>
<task type="auto">
<name>Task 3: Clean up UI after batch action</name>
<files>n8n-workflow.json</files>
<action>
After batch execution, clean up the Telegram UI:
1. **Answer Callback Query** (Telegram node or HTTP Request):
- Query ID: `{{ $json.queryId }}`
- Text: (empty or brief toast)
- Show Alert: false
2. **Delete Confirmation Message** (HTTP Request node):
- POST to `https://api.telegram.org/bot{{ $credentials.telegramApi.accessToken }}/deleteMessage`
- Body: `{ "chat_id": {{ $json.chatId }}, "message_id": {{ $json.messageId }} }`
3. **Send Result Message** (Telegram Send Message):
- Chat ID: `{{ $json.chatId }}`
- Text: `{{ $json.message }}`
- Parse Mode: HTML
Ensure the flow is:
1. User clicks confirm → callback query answered (removes loading state)
2. Confirmation message deleted
3. Result message sent
This keeps the chat clean - only the result remains, not the intermediate confirmation.
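The three-step cleanup above can be sketched as a request builder (a plain function for illustration; the endpoints are standard Bot API methods, the token handling is an assumption):
```javascript
// Sketch of the batch cleanup sequence as Telegram Bot API requests, in order:
// 1) answerCallbackQuery clears the button's loading state,
// 2) deleteMessage removes the confirmation,
// 3) sendMessage posts the final result.
function buildCleanupRequests(botToken, { queryId, chatId, messageId, message }) {
  const base = `https://api.telegram.org/bot${botToken}`;
  return [
    { url: `${base}/answerCallbackQuery`, body: { callback_query_id: queryId } },
    { url: `${base}/deleteMessage`, body: { chat_id: chatId, message_id: messageId } },
    { url: `${base}/sendMessage`, body: { chat_id: chatId, text: message, parse_mode: 'HTML' } }
  ];
}
```
In the workflow each request maps onto one node (Telegram node or HTTP Request) wired in that order.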
</action>
<verify>
1. Click confirm → confirmation message disappears
2. Result message appears with count
3. No duplicate messages
4. Click cancel → confirmation message disappears, no result message
</verify>
<done>UI cleaned up after batch action, only result message remains</done>
</task>
</tasks>
<verification>
End-to-end batch confirmation verification:
1. "stop arr" (matches 3 containers) → confirmation with list → click confirm → all stop, "Successfully stopped 3 containers"
2. "restart arr" → confirmation → click confirm → all restart
3. "stop arr" → confirmation → click cancel → confirmation deleted, no action
4. "stop arr" → wait 2+ minutes → click → "expired"
5. Single container match (e.g., "stop plex") → still works (no confirmation, direct execution)
6. Suggestion flow (e.g., "stop plx") → still works (single suggestion button)
Import updated workflow and test all scenarios.
</verification>
<success_criteria>
- Multiple matches show batch confirmation with container list
- Confirm button executes all containers in sequence
- Cancel button dismisses without action
- Expired confirmations handled gracefully
- Success message shows accurate count
- Partial failures reported correctly
- UI cleaned up after action (confirmation deleted)
</success_criteria>
<output>
After completion, create `.planning/phases/03-container-actions/03-03-SUMMARY.md`
</output>
@@ -0,0 +1,429 @@
---
phase: 03-container-actions
plan: 04
type: execute
wave: 4
depends_on: ["03-01"]
files_modified: [n8n-workflow.json]
autonomous: true
must_haves:
truths:
- "User can update a container by name (pull new image, recreate)"
- "Update detects if image actually changed"
- "Version change shown when detectable from image labels"
- "No notification if image was already up to date"
artifacts:
- path: "n8n-workflow.json"
provides: "Container update workflow (pull + recreate)"
contains: "Update Container"
key_links:
- from: "Route Message switch"
to: "Update branch"
via: "update <name> pattern"
pattern: "update"
- from: "Docker inspect"
to: "Docker create"
via: "Config extraction and recreation"
pattern: "containers/create"
---
<objective>
Implement container update action (pull new image + recreate container with same config).
Purpose: Allow users to update containers via "update plex" command. The workflow pulls the latest image, compares digests to detect changes, and recreates the container with the same configuration. Per CONTEXT.md, only notify if an actual update occurred.
Output: Extended n8n workflow with full container update flow.
</objective>
<execution_context>
@/home/luc/.claude/get-shit-done/workflows/execute-plan.md
@/home/luc/.claude/get-shit-done/templates/summary.md
</execution_context>
<context>
@.planning/PROJECT.md
@.planning/ROADMAP.md
@.planning/STATE.md
@.planning/phases/03-container-actions/03-CONTEXT.md
@.planning/phases/03-container-actions/03-RESEARCH.md
@.planning/phases/03-container-actions/03-01-SUMMARY.md
@n8n-workflow.json
</context>
<tasks>
<task type="auto">
<name>Task 1: Add update command routing and container matching</name>
<files>n8n-workflow.json</files>
<action>
Extend the "Route Message" Switch node to handle update commands:
1. **Add Update Route** (to existing Switch node):
- Pattern: message contains "update" (case-insensitive)
- Route to new "Update Branch"
2. **Parse Update Command** (Code node):
```javascript
const text = $json.message.text.toLowerCase().trim();
const match = text.match(/^update\s+(.+)$/i);
if (!match) {
return { json: { error: 'Invalid update format', chatId: $json.message.chat.id } };
}
return {
json: {
containerQuery: match[1].trim(),
chatId: $json.message.chat.id,
messageId: $json.message.message_id
}
};
```
3. **Match Container** (reuse existing matching pattern):
- Docker List Containers (Execute Command)
- Fuzzy match logic (Code node)
- For updates: only single match supported (no batch update confirmation)
- If 0 matches: "No container found" (can reuse suggestion flow from 03-02 if available)
- If >1 matches: "Update requires exact container name. Found: sonarr, radarr, lidarr"
- If 1 match: proceed to update flow
4. **Multiple Match Handler** (for update only):
```javascript
const matches = $json.matches;
const names = matches.map(m => m.Names[0].replace(/^\//, '')).join(', ');
return {
json: {
message: `Update requires an exact container name.\n\nFound ${matches.length} matches: ${names}`,
chatId: $json.chatId
}
};
```
Then Send Message node.
</action>
<verify>
1. "update plex" (single match) → should proceed to update flow (may fail at execution, but routing works)
2. "update arr" (multiple matches) → should show "requires exact name" message
3. "update nonexistent" → should show "no container found"
</verify>
<done>Update commands route correctly, single matches proceed to update flow</done>
</task>
<task type="auto">
<name>Task 2: Implement image pull and change detection</name>
<files>n8n-workflow.json</files>
<action>
After single-match routing, implement the update steps:
1. **Inspect Container** (Execute Command node):
```javascript
const containerId = $json.matches[0].Id;
return {
  json: {
    cmd: `curl -s --unix-socket /var/run/docker.sock 'http://localhost/v1.47/containers/${containerId}/json'`,
    containerId,
    containerName: $json.matches[0].Names[0].replace(/^\//, ''),
    chatId: $json.chatId
  }
};
```
2. **Parse Container Config** (Code node):
Parse the inspect output and extract what we need:
```javascript
const inspect = JSON.parse($json.stdout);
const imageName = inspect.Config.Image;
const currentImageId = inspect.Image;
// Extract version from labels if available
const labels = inspect.Config.Labels || {};
const currentVersion = labels['org.opencontainers.image.version']
  || labels['version']
  || currentImageId.substring(7, 19); // strip "sha256:" prefix, keep 12-char short ID
return {
  json: {
    imageName,
    currentImageId,
    currentVersion,
    containerConfig: inspect.Config,
    hostConfig: inspect.HostConfig,
    networkSettings: inspect.NetworkSettings,
    containerName: $json.containerName,
    containerId: $json.containerId,
    chatId: $json.chatId
  }
};
```
3. **Pull Image** (Execute Command node):
```javascript
const imageName = $json.imageName;
return {
  json: {
    cmd: `curl -s --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/images/create?fromImage=${encodeURIComponent(imageName)}'`,
    ...($json) // Preserve all context
  }
};
```
4. **Inspect New Image** (Execute Command node):
```javascript
const imageName = $json.imageName;
return {
  json: {
    cmd: `curl -s --unix-socket /var/run/docker.sock 'http://localhost/v1.47/images/${encodeURIComponent(imageName)}/json'`,
    ...($json)
  }
};
```
5. **Compare Digests** (Code node):
```javascript
// Parse the new image inspect output
const newImage = JSON.parse($json.stdout);
const newImageId = newImage.Id;
const prev = $('Parse Container Config').first().json;
if (prev.currentImageId === newImageId) {
  // No update needed - stay silent per CONTEXT.md
  return { json: { needsUpdate: false, chatId: $json.chatId } };
}
// Extract new version
const labels = newImage.Config?.Labels || {};
const newVersion = labels['org.opencontainers.image.version']
  || labels['version']
  || newImageId.substring(7, 19);
return {
  json: {
    needsUpdate: true,
    currentImageId: prev.currentImageId,
    newImageId,
    currentVersion: prev.currentVersion,
    newVersion,
    containerConfig: prev.containerConfig,
    hostConfig: prev.hostConfig,
    networkSettings: prev.networkSettings,
    containerName: prev.containerName,
    containerId: prev.containerId,
    chatId: $json.chatId
  }
};
```
6. **Check If Update Needed** (IF node):
   - Condition: `{{ $json.needsUpdate }}` equals true
   - True: proceed to recreate
   - False: do nothing (silent, no message)
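One caveat for the pull step (item 3): if `Config.Image` carries no explicit tag, `POST /images/create?fromImage=` pulls *every* tag of that image. A small helper, sketched here as a plain function with an assumed `:latest` default, can normalize the reference before the curl command is built:

```javascript
// Append ":latest" when the image reference lacks an explicit tag, so the
// pull fetches a single tag instead of all of them. Only the last path
// segment is checked, which keeps registries with ports (e.g.
// "registry:5000/app") and digest references ("app@sha256:...") intact.
function normalizeImageRef(ref) {
  const lastSeg = ref.split('/').pop();
  return lastSeg.includes(':') ? ref : `${ref}:latest`;
}

console.log(normalizeImageRef('plex'));               // "plex:latest"
console.log(normalizeImageRef('registry:5000/app'));  // "registry:5000/app:latest"
console.log(normalizeImageRef('plex:1.32.0'));        // "plex:1.32.0"
```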
</action>
<verify>
1. "update [container]" with no new image → no message sent (silent)
2. Check workflow logs to confirm pull was attempted and digests compared
</verify>
<done>Image pull works, change detection compares digests correctly</done>
</task>
<task type="auto">
<name>Task 3: Implement container recreation workflow</name>
<files>n8n-workflow.json</files>
<action>
When update is needed, stop old container, remove it, create new one, start it:
1. **Stop Container** (Execute Command node):
```javascript
const containerId = $json.containerId;
return {
  json: {
    cmd: `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/containers/${containerId}/stop?t=10'`,
    ...($json)
  }
};
```
2. **Verify Stopped** (Code node):
```javascript
const statusCode = parseInt($json.stdout.trim());
// 204 = stopped, 304 = container was already stopped
if (statusCode !== 204 && statusCode !== 304) {
  return {
    json: {
      error: true,
      message: `Failed to stop container: HTTP ${statusCode}`,
      chatId: $json.chatId
    }
  };
}
return { json: { ...$json, stopped: true } };
```
3. **Remove Container** (Execute Command node):
```javascript
const containerId = $json.containerId;
return {
  json: {
    cmd: `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X DELETE 'http://localhost/v1.47/containers/${containerId}'`,
    ...($json)
  }
};
```
4. **Build Create Body** (Code node):
Build the container creation request from saved config:
```javascript
const config = $json.containerConfig;
const hostConfig = $json.hostConfig;
const networkSettings = $json.networkSettings;
// Build NetworkingConfig from NetworkSettings
const networks = {};
for (const [name, netConfig] of Object.entries(networkSettings.Networks || {})) {
  networks[name] = {
    IPAMConfig: netConfig.IPAMConfig,
    Links: netConfig.Links,
    Aliases: netConfig.Aliases
  };
}
const createBody = {
  ...config,
  HostConfig: hostConfig,
  NetworkingConfig: {
    EndpointsConfig: networks
  }
};
// Remove fields that shouldn't be in create request
delete createBody.Hostname; // Let Docker assign
delete createBody.Domainname;
return {
  json: {
    createBody: JSON.stringify(createBody),
    containerName: $json.containerName,
    currentVersion: $json.currentVersion,
    newVersion: $json.newVersion,
    chatId: $json.chatId
  }
};
```
5. **Create Container** (Execute Command node):
```javascript
const containerName = $json.containerName;
const createBody = $json.createBody;
// Pipe the JSON body to curl on stdin (-d @-), escaping embedded single
// quotes so the body survives the surrounding shell quoting
return {
  json: {
    cmd: `echo '${createBody.replace(/'/g, "'\\''")}' | curl -s -X POST --unix-socket /var/run/docker.sock -H "Content-Type: application/json" -d @- 'http://localhost/v1.47/containers/create?name=${encodeURIComponent(containerName)}'`,
    ...($json)
  }
};
```
Alternative if shell escaping proves problematic: write the body to a temp file from the Code node itself, then let curl read it from disk. This assumes the n8n environment permits built-in modules in Code nodes (`NODE_FUNCTION_ALLOW_BUILTIN=fs`) and that the Code and Execute Command nodes share the same container filesystem:
```javascript
// Hypothetical variant: persist the body, bypassing shell quoting entirely
const fs = require('fs');
fs.writeFileSync('/tmp/create-body.json', $json.createBody);
return {
  json: {
    cmd: `curl -s -X POST --unix-socket /var/run/docker.sock -H "Content-Type: application/json" -d @/tmp/create-body.json 'http://localhost/v1.47/containers/create?name=${encodeURIComponent($json.containerName)}'`,
    ...($json)
  }
};
```
6. **Parse Create Response** (Code node):
```javascript
let response;
try {
  response = JSON.parse($json.stdout);
} catch (e) {
  return { json: { error: true, message: `Create failed: ${$json.stdout}`, chatId: $json.chatId } };
}
if (response.message) {
  // Error response
  return { json: { error: true, message: `Create failed: ${response.message}`, chatId: $json.chatId } };
}
return {
  json: {
    newContainerId: response.Id,
    currentVersion: $json.currentVersion,
    newVersion: $json.newVersion,
    containerName: $json.containerName,
    chatId: $json.chatId
  }
};
```
7. **Start New Container** (Execute Command node):
```javascript
const newContainerId = $json.newContainerId;
return {
  json: {
    cmd: `curl -s -o /dev/null -w "%{http_code}" --unix-socket /var/run/docker.sock -X POST 'http://localhost/v1.47/containers/${newContainerId}/start'`,
    ...($json)
  }
};
```
8. **Send Update Result** (Code node feeding a Telegram Send Message node; set the Telegram node's parse mode to HTML, since the text uses `<b>` tags):
```javascript
const { containerName, currentVersion, newVersion } = $json;
const message = `<b>${containerName}</b> updated: ${currentVersion} → ${newVersion}`;
return { json: { text: message, chatId: $json.chatId } };
```
</action>
<verify>
1. "update [container]" when update available → container recreated, version change message sent
2. Container restarts successfully with same ports, volumes, networks
3. Check container is running after update
</verify>
<done>Container recreation workflow works, preserves configuration, reports version change</done>
</task>
</tasks>
<verification>
End-to-end update verification:
1. "update plex" (when update available):
   - Image pulled
   - Container stops
   - Container removed
   - New container created with same config
   - Container starts
   - Message: "plex updated: v1.32.0 → v1.32.1"
2. "update plex" (when already up to date):
   - Image pulled
   - Digests compared
   - No further action
   - No message sent (silent per CONTEXT.md)
3. "update arr" (multiple matches):
   - Message: "Update requires exact container name..."
4. "update nonexistent":
   - Message: "No container found..."
5. Post-update verification:
   - Container running
   - Same ports mapped
   - Same volumes mounted
   - Same network connections
Import updated workflow and test with a real container that has an update available.
</verification>
<success_criteria>
- Update command parses container name correctly
- Image pull succeeds
- Digest comparison detects changes accurately
- Container recreation preserves Config, HostConfig, Networks
- Version change displayed when detectable
- Silent when no update available
- All error cases report diagnostic details
- Container runs correctly after update
</success_criteria>
<output>
After completion, create `.planning/phases/03-container-actions/03-04-SUMMARY.md`
</output>