docs: complete v1.3 project research (STACK, FEATURES, ARCHITECTURE, PITFALLS, SUMMARY)

Lucas Berger
2026-02-08 19:52:57 -05:00
parent c071b890ef
commit 07cde0490a
5 changed files with 1554 additions and 1288 deletions
# Feature Research: Unraid Update Status Sync (v1.3)
**Domain:** Docker container management integration with Unraid server
**Researched:** 2026-02-08
**Confidence:** MEDIUM
## Context
This research covers the v1.3 milestone: syncing update status back to Unraid after bot-initiated container updates. The bot already updates containers successfully, but Unraid's UI continues to show "update available" badges and sends false-positive notifications afterward.
**Existing capabilities (v1.0-v1.2):**
- Container update via bot (pull image, recreate container)
- "Update All :latest" batch operation
- Container status display with inline keyboards
- Confirmation dialogs for dangerous actions
- Progress feedback during operations
**New scope:** Two directions for Unraid integration:
1. **Sync-back:** Clear Unraid's "update available" badge after bot updates container
2. **Read-forward:** Use Unraid's update detection data as source of truth for which containers need updates
---
## Feature Landscape
### Table Stakes (Users Expect These)
Features users assume exist when managing containers outside Unraid's UI.
| Feature | Why Expected | Complexity | Notes |
|---------|--------------|------------|-------|
| Clear "update available" badge after bot update | Users expect Unraid UI to reflect reality after external updates | MEDIUM | Requires writing to `/var/lib/docker/unraid-update-status.json` - known workaround for Watchtower/Portainer users |
| Prevent duplicate update notifications | After bot updates a container, Unraid shouldn't send false-positive Telegram notifications | MEDIUM | Same mechanism as clearing badge - update status file tracks whether updates are pending |
| Avoid breaking Unraid's update tracking | External tools shouldn't corrupt Unraid's internal state | LOW | Docker API operations are safe - Unraid tracks via separate metadata files |
| No manual "Apply Update" clicks | Point of remote management is to eliminate manual steps | HIGH | Core pain point - users want "update from bot = done" not "update from bot = still need to click in Unraid" |
### Differentiators (Competitive Advantage)
Features that set the bot apart from other Docker management tools.
| Feature | Value Proposition | Complexity | Notes |
|---------|-------------------|------------|-------|
| Automatic sync after every update | Bot updates container AND clears Unraid badge in single operation - zero user intervention | MEDIUM | Requires detecting update success and writing status file atomically |
| Use Unraid's update detection data | If Unraid already knows which containers need updates, bot could use that source of truth instead of its own Docker image comparison | HIGH | Requires parsing Unraid's update status JSON and integrating with existing container selection/matching logic |
| Bidirectional status awareness | Bot shows which containers Unraid thinks need updates, not just Docker image digest comparison | MEDIUM-HIGH | Depends on reading update status file - enhances accuracy for edge cases (registry issues, multi-arch images) |
| Manual sync command | Users can manually trigger "sync status to Unraid" if they updated containers through another tool | LOW | Simple command that iterates running containers and updates status file |
### Anti-Features (Commonly Requested, Often Problematic)
Features that seem good but create problems.
| Feature | Why Requested | Why Problematic | Alternative |
|---------|---------------|-----------------|-------------|
| Full Unraid API integration (authentication, template parsing) | "Properly" integrate with Unraid's web interface instead of file manipulation | Adds authentication complexity, XML parsing, API version compatibility, web session management - all for a cosmetic badge | Direct file writes are the established community workaround - simpler and more reliable |
| Automatic template XML regeneration | Update container templates so Unraid thinks it initiated the update | Template XML is generated by Community Applications and Docker Manager - modifying it risks breaking container configuration | Clearing update status file is sufficient - templates are source of truth for config, not update state |
| Sync status for ALL containers on every operation | Keep Unraid 100% in sync with Docker state at all times | Performance impact (Docker API queries for all containers on every update), unnecessary for user's pain point | Sync only the container(s) just updated by the bot - targeted and efficient |
| Persistent monitoring daemon | Background process that watches Docker events and updates Unraid status in real-time | Requires separate container/service, adds operational complexity, duplicates n8n's event model | On-demand sync triggered by bot operations - aligns with n8n's workflow execution model |
---
## Feature Dependencies
```
Clear Update Badge (sync-back)
└──requires──> Update operation success detection (existing)
└──requires──> File write to /var/lib/docker/unraid-update-status.json
Read Unraid Update Status (read-forward)
└──requires──> Parse update status JSON file
└──requires──> Container ID to name mapping (existing)
└──enhances──> Container selection UI (show Unraid's view)
Manual Sync Command
└──requires──> Clear Update Badge mechanism
└──requires──> Container list enumeration (existing)
Bidirectional Status Awareness
└──requires──> Read Unraid Update Status
└──requires──> Clear Update Badge
└──conflicts──> Current Docker-only update detection (source of truth ambiguity)
```
### Dependency Notes
- **Clear Update Badge requires Update success detection:** Already have this - n8n-update.json returns `success: true, updated: true` with digest comparison
- **Read Unraid Status enhances Container selection:** Could show "(Unraid: update available)" badge in status keyboard - helps users see what Unraid sees
- **Bidirectional Status conflicts with Docker-only detection:** Need to decide: is Unraid's update status file the source of truth, or is Docker image digest comparison? Mixing both creates confusion about "which containers need updates"
---
## MVP Definition
### Launch With (v1.3)
Minimum viable - eliminates the core pain point.
- [ ] Clear update badge after bot-initiated updates - Write to `/var/lib/docker/unraid-update-status.json` after successful update operation
- [ ] Prevent false-positive notifications - Ensure status file write happens before user sees "update complete" message
- [ ] Integration with existing n8n-update.json sub-workflow - Add status sync as final step in update flow (text, inline, batch modes)
**Rationale:** This solves the stated pain: "after updating containers through the bot, Unraid still shows update available badges and sends false-positive Telegram notifications."
### Add After Validation (v1.4+)
Features to add once core sync-back is working.
- [ ] Manual sync command (`/sync` or `/sync <container>`) - Trigger when user updates via other tools (Portainer, CLI, Watchtower)
- [ ] Read Unraid update status for better detection - Parse `/var/lib/docker/unraid-update-status.json` to see which containers Unraid thinks need updates
- [ ] Show Unraid's view in status keyboard - Display "(Unraid: update ready)" badge alongside container state
**Trigger:** User requests ability to see "what Unraid sees" or needs to sync status after non-bot updates.
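If the read-forward direction is pursued, the parsing step itself is small. A minimal sketch in Python, assuming the inferred per-container schema described under Implementation Notes - both the helper name and the schema are assumptions, not a documented Unraid format:

```python
import json

def containers_with_updates(raw: str) -> list[str]:
    """Return keys whose entry claims an update is available.

    Assumes the inferred schema {"<id>": {"status": "...", "checked": "..."}};
    the real Unraid file format is unverified (LOW confidence in this doc).
    """
    status = json.loads(raw)
    return [
        key
        for key, entry in status.items()
        if isinstance(entry, dict) and entry.get("status") == "update_available"
    ]
```

The real work is verifying the schema and mapping the file's keys (container IDs or names?) onto the bot's existing container-matching logic.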
### Future Consideration (v2+)
Features to defer until core functionality is proven.
- [ ] Bidirectional status awareness - Use Unraid's update detection as source of truth instead of Docker digest comparison
- [ ] Sync on container list view - Automatically update status file when user views container list (proactive sync)
- [ ] Batch status sync - `/sync all` command to reconcile all containers
**Why defer:** Unraid's update detection has known bugs (doesn't detect external updates, false positives persist). Using it as source of truth may import those bugs. Better to prove sync-back works first, then evaluate whether read-forward adds value.
---
## Feature Prioritization Matrix
| Feature | User Value | Implementation Cost | Priority |
|---------|------------|---------------------|----------|
| Clear update badge after bot update | HIGH | MEDIUM | P1 |
| Prevent false-positive notifications | HIGH | LOW | P1 |
| Manual sync command | MEDIUM | LOW | P2 |
| Read Unraid update status | MEDIUM | MEDIUM | P2 |
| Show Unraid's view in UI | LOW | MEDIUM | P3 |
| Bidirectional status (Unraid as source of truth) | MEDIUM | HIGH | P3 |
**Priority key:**
- P1: Must have for v1.3 launch - solves core pain point
- P2: Should have for v1.4 - adds convenience, not critical
- P3: Nice to have for v2+ - explore after validating core
---
## Implementation Notes
### File Format: /var/lib/docker/unraid-update-status.json
Based on community forum discussions, this file tracks update status per container. When Unraid checks for updates, it compares registry manifests and writes results here. To clear the badge, remove the container's entry from this JSON.
**Example structure (inferred from community discussions):**
```json
{
"containerId1": { "status": "update_available", "checked": "timestamp" },
"containerId2": { "status": "up_to_date", "checked": "timestamp" }
}
```
**Operation:** After bot updates a container successfully:
1. Read existing JSON file
2. Remove entry for updated container (or set status to "up_to_date")
3. Write JSON back atomically
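The read-edit-write steps above can be sketched as follows. This is a sketch under stated assumptions: the file path comes from community reports, and keying the JSON by container is the unverified inferred schema:

```python
import json
import os
import tempfile

STATUS_FILE = "/var/lib/docker/unraid-update-status.json"  # path per community reports

def clear_update_badge(container_key: str, path: str = STATUS_FILE) -> None:
    """Drop a container's entry so Unraid stops showing a stale update badge."""
    try:
        with open(path) as f:
            status = json.load(f)
    except FileNotFoundError:
        return  # nothing to clear; Unraid regenerates the file on its next check
    if container_key not in status:
        return  # keying by container is an assumption (schema unverified)
    del status[container_key]
    # Write to a temp file in the same directory, then rename over the original:
    # os.replace is atomic on POSIX, so Unraid's checker never sees a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(status, f, indent=2)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

In the n8n workflow this logic would likely live in an Execute Command or Code node rather than a standalone script; the atomic temp-file-plus-rename pattern is the part worth keeping regardless of where it runs.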
**Confidence: LOW** - the exact format is not officially documented and needs verification by reading the actual file.
### Integration Points
**Existing bot architecture:**
- n8n-update.json sub-workflow already returns `success: true, updated: true, oldDigest, newDigest` on successful update
- Three callers: Execute Text Update, Execute Callback Update, Execute Batch Update
- All three modes need status sync (text, inline keyboard, batch operations)
**New node requirements:**
- Read Update Status File (HTTP Request or Execute Command node - read JSON file)
- Parse Update Status (Code node - JSON manipulation)
- Write Update Status File (HTTP Request or Execute Command node - write JSON file)
- Update n8n-update.json to call status sync before returning success
**File access:** n8n runs in a Docker container and needs a volume mount (or some other path, such as the Unraid API) to reach the Unraid filesystem. The Docker socket proxy already covers Docker API calls, but it does not expose the host filesystem - file access must be added separately.
### Update Status Sync Mechanism
**Current state:** Unraid checks for updates by comparing the local image digest with the registry manifest digest, storing results in `/var/lib/docker/unraid-update-status.json`. When a container is updated externally (bot, Watchtower, CLI), Unraid doesn't re-check - the status file keeps showing a stale "update available" until manually cleared.
**Community workaround:** Delete `/var/lib/docker/unraid-update-status.json` to force complete reset, OR edit JSON to remove specific container entry.
**Bot approach:** After successful update (pull + recreate), programmatically edit JSON file to mark container as up-to-date. This is what Unraid would do if it had performed the update itself.
**Alternatives considered:**
1. Call Unraid's "Check for Updates" API endpoint - requires authentication, web session, not documented
2. Trigger Unraid's update check via CLI - no known CLI command for this
3. Reboot server - clears status (per forum posts) but obviously unacceptable
4. Edit XML templates - risky, templates are config source of truth
**Selected approach:** Direct JSON file edit (community-proven workaround, lowest risk).
---
## Competitor Analysis
### Watchtower
- Automatically updates containers on schedule
- Does NOT sync status back to Unraid
- Community complaint: "Watchtower running on unraid but containers still say update after it runs"
- Workaround: Manual deletion of update status file
### Portainer
- UI-based container management
- Shows its own "update available" indicator (independent of Unraid)
- Does NOT sync with Unraid's update tracking
- Users run both Portainer and Unraid UI, see conflicting status
### Unraid Docker Compose Manager
- Manages docker-compose stacks
- Known issue: "Docker tab reports updates available after even after updating stack"
- No automatic sync with Unraid Docker Manager
### Our Approach
- Automatic sync after every bot-initiated update
- Transparent to user - no manual steps after update completes
- Solves pain point that all other tools ignore
- Differentiator: tight integration with Unraid's native tracking system
---
## Edge Cases & Considerations
### Race Conditions
- Unraid's update checker runs on a schedule (user configurable)
- If the checker runs between the bot's update and the status-file write, it may re-detect the update
- Mitigation: write the status file immediately after the image pull, before the container recreate
### Multi-Arch Images
- Unraid uses manifest digests for update detection
- Bot uses `docker inspect` image ID comparison
- May disagree on whether update is needed (manifest vs image layer digest)
- Research needed: Does Unraid use manifest digest or image digest in status file?
### Failed Updates
- A bot update may fail after pulling the image (recreate fails, container won't start)
- The badge should NOT be cleared if the container is broken
- Status sync must be conditional on full update success (container running)
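A guard along these lines keeps the badge intact when the recreate failed. This is a sketch assuming the `docker` CLI is reachable from wherever the check runs; the existing workflow may prefer the Docker API via the socket proxy instead:

```python
import subprocess

def container_is_running(name: str) -> bool:
    """True only if `docker inspect` reports the container's State.Running as true."""
    try:
        out = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Running}}", name],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False  # no docker CLI here / unresponsive daemon: fail safe, keep the badge
    return out.returncode == 0 and out.stdout.strip() == "true"

# Only clear Unraid's badge when the updated container actually came back up:
# if container_is_running("plex"): proceed with the status-file sync
```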
### Infrastructure Containers
- Bot already excludes n8n and docker-socket-proxy from batch operations
- Status sync should respect same exclusions (don't clear badge for bot's own container)
### File Permissions
- `/var/lib/docker/` typically requires root access
- n8n container may not have write permissions
- Need to verify access method: direct mount, docker exec, or Unraid API
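If the direct-mount option is chosen, a compose-style bind mount is one candidate shape (a sketch only - whether Unraid's n8n template passes this through, and whether the container user can write to the host file, both need verification):

```yaml
# sketch: expose the Unraid status file to the n8n container
services:
  n8n:
    volumes:
      - /var/lib/docker/unraid-update-status.json:/data/unraid-update-status.json
```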
---
## Sources
**Community Forums & Issue Discussions:**
- [Regression: Incorrect docker update notification - Unraid Forums](https://forums.unraid.net/bug-reports/stable-releases/regression-incorrect-docker-update-notification-r2807/)
- [Docker Update Check not reliable for external container - Unraid Forums](https://forums.unraid.net/bug-reports/stable-releases/691-docker-update-check-not-reliable-for-external-container-r940/)
- [Watchtower running on unraid but containers still say update after it runs - GitHub Discussion](https://github.com/containrrr/watchtower/discussions/1389)
- [Docker update via Watchtower - Status not reflected in Unraid - Unraid Forums](https://forums.unraid.net/topic/149953-docker-update-via-watchtower-status-not-reflected-in-unraid/)
- [Docker compose: Docker tab reports updates available after updating stack - Unraid Forums](https://forums.unraid.net/topic/149264-docker-compose-docker-tab-reports-updates-available-after-even-after-updating-stack/)
**Workarounds & Solutions:**
- [Containers show update available even when up-to-date - Unraid Forums](https://forums.unraid.net/topic/142238-containers-show-update-available-even-when-it-is-up-to-date/)
- [binhex Documentation - Docker FAQ for Unraid](https://github.com/binhex/documentation/blob/master/docker/faq/unraid.md)
**Unraid API & Architecture:**
- [Docker and VM Integration - Unraid API DeepWiki](https://deepwiki.com/unraid/api/2.4.2-notification-system)
- [Using the Unraid API - Official Docs](https://docs.unraid.net/API/how-to-use-the-api/)
- [Dynamix Docker Manager - GitHub Source](https://github.com/limetech/dynamix/blob/master/plugins/dynamix.docker.manager/include/DockerClient.php)
**Docker Digest Comparison:**
- [Image digests - Docker Docs](https://docs.docker.com/dhi/core-concepts/digests/)
- [Digests in Docker - Mike Newswanger](https://www.mikenewswanger.com/posts/2020/docker-image-digests/)
---
*Feature research for: Unraid Update Status Sync (v1.3)*
*Researched: 2026-02-08*