Hardening My Docker Media Stack (ARR + Jellyfin)

After a busy PEN-100 training week, I finally had time to revisit something that had been bothering me in my homelab: my Docker media stack was functional, but not properly hardened. I run the typical *arr ecosystem:

  • Prowlarr
  • Sonarr
  • Radarr
  • Bazarr
  • NZBGet
  • Jellyfin
  • Jellyseerr

It worked perfectly. Security-wise? It needed work. This post walks through how I hardened it, validated the changes, and tested my own setup — without pretending it’s bulletproof.

The Initial Problem

Originally, my containers were running with the UID/GID of my primary user (1000:1000).

On most Linux systems, UID 1000 is the first regular user account. It's not root (UID 0), but it still has broad access to personal files. Additionally:

  • Volumes were writable
  • Default Docker capabilities were active
  • No explicit resource limits were defined

Functionality came first. Security was “future me’s problem.” Time to fix that.

Threat Model (Be Realistic)

This system:

  • Is not publicly exposed
  • Sits behind a Proxmox firewall (default DROP)
  • Is only reachable via Nginx Proxy Manager in a separate VLAN
  • Does not expose the Docker API
  • Does not mount docker.sock into any container

So the realistic threat scenario is:

An application-level compromise (e.g., RCE in Sonarr or Prowlarr)

Not:

  • Internet-wide scanning
  • Remote Docker daemon takeover
  • Exposed privileged containers

The goal is not “perfect security.” The goal is reducing blast radius.

Step 1 – Dedicated Non-Root Service Account

Instead of using my primary user (UID 1000), I created a dedicated service account:
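Roughly like this — the account name `mediasvc` is illustrative; UID/GID 1100 matches the ownership used throughout:

```bash
# System account: fixed UID/GID, no login shell, no home directory
sudo useradd --system --uid 1100 --user-group \
  --no-create-home --shell /usr/sbin/nologin mediasvc
```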

Why:

  • No interactive login
  • No home directory
  • Predictable UID/GID mapping
  • Isolation from my personal user account

Then I reassigned ownership:
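Something along these lines — the paths are placeholders for wherever your config and media actually live:

```bash
# Hand the app config and media trees to the service account
sudo chown -R 1100:1100 /opt/mediastack/config
sudo chown -R 1100:1100 /srv/media
```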

All containers now run as this service account, either through `PUID=1100` / `PGID=1100` environment variables (for images that honor them) or through an explicit `user:` entry in the compose file.
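In compose terms it looks something like this (image names are my assumption based on the stack above):

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1100   # images that support it map their internal user to this UID
      - PGID=1100
  jellyfin:
    image: jellyfin/jellyfin:latest
    user: "1100:1100"   # explicit UID:GID for images without PUID support
```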

Impact: If a container is compromised, it can only access what UID 1100 owns — not my personal files, not system files. This reduces blast radius significantly.

Step 2 – Network Isolation

I defined a user-defined bridge:
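For instance (the network name is whatever you choose):

```bash
docker network create --driver bridge mediastack
```

The compose file then joins every service to it via a top-level `networks:` entry marked `external: true`.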

All containers run inside this network.

This isolates them from Docker’s default bridge and allows internal DNS resolution (e.g., curl http://sonarr:8989).

Important nuance:

Containers on the same bridge can communicate freely.
This is acceptable for my homelab threat model.

Full micro-segmentation would add complexity without significant security gain in this context.

Step 3 – Enforcing No New Privileges

Every container includes:
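The compose form is standard Docker syntax:

```yaml
services:
  sonarr:
    security_opt:
      - no-new-privileges:true
```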

What this does:

  • Prevents processes from gaining privileges beyond those of their parent
  • Blocks setuid- and file-capability-based escalation
  • Neutralizes a whole class of in-container privilege-escalation tricks

This does not make container escape impossible. It limits in-container privilege escalation. It’s a strong and low-cost control.

Step 4 – Dropping Linux Capabilities

This was one of the biggest improvements. By default, Docker grants containers a reduced but still meaningful set of Linux capabilities. Even without --privileged, containers can:

  • Override file permission checks
  • Use raw sockets
  • Change file ownership
  • Bind to low ports
  • Modify certain kernel-level behaviors

To reduce this attack surface, I explicitly dropped all capabilities:

For example:
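A sketch of what a hardened service definition ends up looking like (capabilities get added back only if a specific image breaks without them):

```yaml
services:
  sonarr:
    cap_drop:
      - ALL
    # cap_add:
    #   - CHOWN   # re-add selectively, one at a time, only when needed
    security_opt:
      - no-new-privileges:true
```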

After redeploying, I verified:
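One way to check is the capability bitmask of the container's PID 1 (container name assumed):

```bash
# Effective capabilities of the container's init process
docker exec sonarr grep CapEff /proc/1/status
# All zeros (0000000000000000) means every capability was dropped
```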

Now, even if an attacker gains root inside the container, kernel-level interaction is heavily restricted. This does not prevent container escape entirely — but it meaningfully reduces the attack surface.

Step 5 – Resource Limits (DoS Protection)

Before hardening, containers had unlimited access to system resources. A compromised container could:

  • Exhaust RAM
  • Consume all CPU
  • Fork bomb the host

To mitigate this, I added:
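Illustrative values — tune per service:

```yaml
services:
  sonarr:
    mem_limit: 512m
    cpus: "1.0"
    pids_limit: 256   # caps process count, blunting fork bombs
```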

For heavier services like Jellyfin:
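Jellyfin gets more headroom for transcoding (again, values illustrative):

```yaml
services:
  jellyfin:
    mem_limit: 4g
    cpus: "4.0"
    pids_limit: 512
```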

Now, even if compromised, a container cannot easily take down the host. This improves resilience more than most people realize.

Step 6 – Read-Only Media Mounts

For Jellyfin:
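The media volume carries the `:ro` flag while config stays writable (host paths are placeholders):

```yaml
services:
  jellyfin:
    volumes:
      - /srv/media:/media:ro          # read-only: Jellyfin can stream, never write
      - ./jellyfin/config:/config     # config and metadata stay writable
```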

If Jellyfin is compromised:

  • Media cannot be modified
  • Files cannot be deleted
  • Ransomware-style attacks are prevented at the container level

This is a simple but powerful control.

Pentesting My Own Setup

After hardening, I tested it.

1. External Scanning

Port scans using nmap fail from external networks as well as from other internal VLANs. Access is restricted by firewall policies at both the VLAN boundary and the Proxmox host level. This layered filtering is intentional and part of the overall segmentation strategy.
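From another VLAN, this is the kind of scan that should now come back empty (the target IP is a placeholder):

```bash
nmap -Pn -p- 192.168.40.10   # expect every port to show as filtered
```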

2. Docker Daemon Exposure

Checked dockerd:
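Two quick checks on the host:

```bash
# Any TCP listener owned by dockerd? (should print nothing)
sudo ss -tlnp | grep dockerd

# Daemon command line: should show fd:// or a unix socket, no tcp:// -H flag
ps -o args= -C dockerd
```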

No TCP listener.
No exposed Docker API.
Good.

3. Inside Container Testing

From inside a container:

  • Verified mounts (no docker.sock exposed)
  • Verified no host filesystem mounted
  • Verified /sys is read-only
  • Verified cgroup v2 is read-only
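Those checks, roughly, from a shell inside any of the containers:

```bash
# No docker.sock should be mounted (expect no output)
grep docker.sock /proc/mounts

# /sys and the cgroup hierarchy should carry the "ro" mount flag
grep -E ' /sys | /sys/fs/cgroup ' /proc/mounts
```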

Attempting:
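The attempt was a simple write test as container root, something along these lines:

```bash
# Inside the container: write to the root filesystem
touch /pwned && ls -l /pwned
```

The file lands only in the container's writable overlay layer; nothing on the host filesystem changes.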

Worked — but only inside the container overlay filesystem. Not on the host. Important distinction: Container root ≠ Host root.

What This Does NOT Protect Against

Let’s stay honest. This setup does not protect against:

  • Kernel vulnerabilities
  • Docker daemon zero-days
  • Malicious container images
  • Application-layer RCE
  • API key abuse between services

It reduces impact. It does not eliminate risk. Security is layered, not absolute.

Final State Summary

After hardening, the stack now has:

  • Dedicated non-root service account
  • Explicit UID/GID mapping
  • Firewall-level isolation
  • No exposed Docker API
  • No docker.sock mounts
  • Dropped Linux capabilities
  • no-new-privileges enabled
  • Resource limits enforced
  • Read-only media mounts
  • Seccomp + AppArmor active (default Docker security options)

If one *arr service is compromised:

The attacker gains:

  • Container-level access
  • Limited internal network access

They do NOT gain:

  • Host root
  • Docker daemon control
  • System-wide filesystem access

That’s a major improvement over the original state.

Lessons Learned

  1. Docker is not a security boundary.
  2. UID mapping reduces blast radius.
  3. Capabilities matter more than most people think.
  4. Resource limits protect availability.
  5. Network segmentation should match your threat model.
  6. Hardening is iterative — not a one-time task.

Originally, I built this stack for convenience. Now it’s layered, validated, and intentionally designed. Not perfect. But significantly more resilient. And most importantly — I understand why.