If your unprivileged LXC started throwing "Status 30" after a Proxmox 9.x upgrade, and the container uses MergerFS, NFS, or another non-native bind mount, the problem is usually not inside the container. It is almost always at the mount layer. On affected systems, the container fails during startup because Proxmox tries to preserve or propagate ownership data across a bind mount that the underlying filesystem or storage stack does not handle the way Proxmox expects. The result is ugly and misleading: the LXC may fail to boot, or it may look mounted but behave as read-only.

In our experience managing Proxmox environments, this is exactly the kind of incident that burns time because the visible symptom points in the wrong direction. Jellyfin, Plex, Arr apps, backup jobs, or media importers start complaining about read-only storage, "os error 30", or broken mount points, so admins chase application permissions inside the container. That is usually wasted effort. If the host cannot stage the mount correctly for the unprivileged LXC, the container never had a clean storage path to begin with.

This runbook is for the real-world pattern people actually use: unprivileged LXCs with MergerFS pools, NFS shares, or mixed host-side storage mounted into the container. It is not a generic "containers versus VMs" article. If you need that broader context later, ServerSpan already covers it in When to Run a Workload in Proxmox LXC vs KVM in 2026. This article is the incident-response path for the "Status 30" failure in unprivileged LXCs on Proxmox.

What "Status 30" usually means in this Proxmox 9.x scenario

"Status 30" is not a nice, self-explanatory error. In this context it usually means the container hit a mount or filesystem problem early enough that startup or later write access broke, but the front-end symptom you see is generic. On affected Proxmox 9.1.5 systems, the more useful clues are in the LXC and host logs, not in the short GUI error.

Typical clues include:

  • failed to propagate uid and gid to mountpoint: Operation not permitted
  • failed to propagate uid and gid to mountpoint: Read-only file system
  • startup for container 'CTID' failed
  • Applications inside the container reporting read-only filesystem errors even though the host-side mount looks fine

The practical interpretation is simple. The host can usually still see the NFS or MergerFS path. The container cannot consume it correctly through the old bind-mount path you were using. That is why removing the mount point often lets the container boot again immediately.

Step 1: Confirm whether you are hitting the 9.1.5 bind-mount regression or a later configuration issue

Do not start rewriting configs until you know what package level you are actually on. Proxmox forum reports showed the regression after 9.1.5, and later reports showed that pve-container 6.1.1 fixed the bind-mount regression for many NFS users. So your first question is not "how do I remap UIDs?" It is "am I still on the broken container package?"

pveversion -v | grep -E 'pve-manager|pve-container|kernel'
apt policy pve-container

If you are on the affected 6.1.0-era package path, update first before doing surgery on the container config.

apt update
apt install pve-container
pveversion -v | grep pve-container

If you are still in an outage and need an emergency rollback while planning the real fix, the forum-documented temporary downgrade path was:

apt install pve-container=6.0.18

That is not a long-term strategy. Staying pinned to an old container package is a stopgap, not a solution. The real goal is to get onto the fixed package train and then verify whether your MergerFS or NFS bind-mount design still needs a manual LXC mount approach.

Step 2: Capture the actual failing config and logs

Once package level is known, capture the exact config before touching it. On Proxmox, that means both the CT config and the live mount diagnostics.

CTID=108
cp /etc/pve/lxc/${CTID}.conf /root/${CTID}.conf.bak.$(date +%F-%H%M%S)

pct config ${CTID}
grep -E '^(mp[0-9]+:|lxc.mount.entry:|lxc.idmap:|unprivileged:|features:)' /etc/pve/lxc/${CTID}.conf

Then run the container in the foreground with debug logging so you can see the mount failure directly:

lxc-start -n ${CTID} -F -l DEBUG -o /tmp/${CTID}-lxc-debug.log

And in another shell:

journalctl -b | grep -Ei "lxc|pct|mount|idmap|read-only|status 30"
mount | grep -E 'mergerfs|nfs|\.pve-staged-mounts'
findmnt -T /mnt/media
stat -f -c %T /mnt/media

These commands prove different things:

  • lxc-start ... DEBUG shows the real pre-start failure.
  • journalctl helps correlate container startup with kernel-side mount refusal.
  • mount and findmnt confirm what the host actually mounted.
  • stat -f -c %T helps you see whether you are sitting on a FUSE or network-backed filesystem rather than a native local one.

If the container works without the mount but fails with it, and your logs point at UID/GID propagation or read-only behavior, stop blaming the app stack. You are in mount-layer territory now.

Step 3: Identify the old mpX trap

This is the pattern that bites most people. The container config has an old-style mount point such as:

mp0: /mnt/media,mp=/media,ro=1

That used to be "good enough" for plenty of real setups. Then Proxmox tightened the bind-mount handling path. Native local storage usually survived. MergerFS, NFS, and other non-native layouts often did not.

If your storage is:

  • MergerFS
  • NFS mounted on the host and then bind-mounted into the LXC
  • Other FUSE-based or layered storage

then old mpX entries are the first thing to distrust.
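A quick way to confirm which backend you are actually dealing with is to check the filesystem type of the mount source before deciding whether the old mpX path is trustworthy. The sketch below is illustrative, not a Proxmox tool; `classify_backend` is a hypothetical helper, and on a live host you would feed it the output of `findmnt -no FSTYPE <path>`:

```shell
# Sketch: classify a mount source by filesystem type. fuse.* and
# network filesystems are the backends that most often broke under
# the 9.1.5 bind-mount handling; native local storage usually survived.
classify_backend() {
    case "$1" in
        fuse.*)             echo "fuse-layered (distrust old mpX)" ;;
        nfs|nfs4|cifs)      echo "network share (distrust old mpX)" ;;
        ext4|xfs|btrfs|zfs) echo "native local (usually fine)" ;;
        *)                  echo "unknown ($1) - inspect manually" ;;
    esac
}

# On a real host:  classify_backend "$(findmnt -no FSTYPE /mnt/media)"
classify_backend "fuse.mergerfs"
classify_backend "nfs4"
classify_backend "ext4"
```

If the answer is anything other than "native local", treat the mpX entry as a suspect until proven otherwise.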

This is also why containers without bind mounts often continue to boot fine after the same Proxmox update. The issue is not the container in isolation. It is the interaction between unprivileged mounting, bind-mount handling, and your storage backend.

Step 4: Replace mpX with raw lxc.mount.entry where appropriate

If you are still seeing failures after updating to a fixed pve-container release, or you are running MergerFS or another FUSE-backed stack that remains fragile with the old path, stop using mpX for that mount and switch to direct lxc.mount.entry lines.

First remove or comment out the old mount point line from /etc/pve/lxc/CTID.conf.

# old
# mp0: /mnt/media,mp=/media,ro=1

Then add a raw mount entry. Example for a read-only media path:

lxc.mount.entry: /mnt/media media none bind,create=dir,ro 0 0

Example for a writable path:

lxc.mount.entry: /mnt/downloads downloads none bind,create=dir 0 0

Two details matter here:

  • The source path is the host path.
  • The target path is relative to the container rootfs, so use media or downloads, not /media with a leading slash in this syntax.

This approach hands the mount to LXC more directly and avoids the exact bind-mount handling path that caused so many 9.1.5 headaches. It is not pretty. It is practical.
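The mechanical edit itself can be scripted so you can diff the result before committing. The sketch below works on a scratch copy; `convert_mp` is a hypothetical helper, not a Proxmox command, and the paths and mount key are placeholders for your setup:

```shell
# Sketch: comment out an mpX line and append a raw lxc.mount.entry.
# convert_mp is an illustrative helper; run it against a COPY of the
# config, diff against /etc/pve/lxc/CTID.conf, then copy back.
convert_mp() {
    conf="$1"; mpkey="$2"; entry="$3"
    sed -i "s/^${mpkey}:/# ${mpkey}:/" "$conf"   # disable the old mount point
    printf '%s\n' "$entry" >> "$conf"            # add the raw mount entry
}

# Demo against a scratch file standing in for the real CT config:
conf=$(mktemp)
printf 'unprivileged: 1\nmp0: /mnt/media,mp=/media,ro=1\n' > "$conf"
convert_mp "$conf" mp0 'lxc.mount.entry: /mnt/media media none bind,create=dir,ro 0 0'
cat "$conf"
```

Working on a copy matters because /etc/pve is a clustered filesystem: review the diff, then move the file into place in one step.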

Step 5: Fix the unprivileged UID and GID map instead of cheating with a privileged container

This is the part people try to avoid by flipping the container to privileged mode. That is the lazy workaround, and it is the wrong one. A privileged LXC does not just "make permissions easier." It removes one of the main security boundaries between the container and the host.

Proxmox documents that unprivileged containers use remapped UID and GID ranges and support custom mapping through lxc.idmap plus delegated ranges in /etc/subuid and /etc/subgid. If you want an unprivileged container to access host files owned by a real media user or group, map that identity explicitly instead of giving up and running privileged.

Suppose the host-side media user is UID 1000 and GID 1000. A clean unprivileged map example looks like this:

unprivileged: 1
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

Then allow root on the host to use those ranges in /etc/subuid and /etc/subgid:

# /etc/subuid
root:100000:1000
root:1000:1
root:101001:64535

# /etc/subgid
root:100000:1000
root:1000:1
root:101001:64535

The logic is simple:

  • Map container IDs 0 through 999 to the normal unprivileged host range.
  • Map container ID 1000 directly to host ID 1000.
  • Map the rest of the container range back into the shifted host range.

The counts still need to total 65536. If you need more than one passthrough UID or GID, the map gets more complex fast. If you also need GPU groups like render or you are running Docker inside the LXC, stop improvising and document the math carefully before you restart anything.
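Since the counts must total 65536 for both the u and g maps, it is worth checking the arithmetic mechanically instead of by eye. The awk check below is a sketch; `check_idmap` is an illustrative helper that sums the count field of every lxc.idmap line in a config file:

```shell
# Sketch: verify that the lxc.idmap counts in a CT config sum to 65536
# for both the u and g maps. check_idmap is an illustrative helper.
check_idmap() {
    awk '/^lxc.idmap:/ { sum[$2] += $5 }
         END {
             ok = (sum["u"] == 65536 && sum["g"] == 65536)
             printf "u=%d g=%d %s\n", sum["u"], sum["g"], ok ? "OK" : "BROKEN"
             exit ok ? 0 : 1
         }' "$1"
}

# Demo with the map from this article:
conf=$(mktemp)
cat > "$conf" <<'EOF'
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
EOF
check_idmap "$conf"   # prints: u=65536 g=65536 OK
```

On a real host you would point it at /etc/pve/lxc/CTID.conf before restarting anything.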

Check the real host IDs before you write the map:

id mediauser
getent group render
getent group media

If you map the wrong UID or GID, the container may boot but your permissions will still be nonsense.

Step 6: Clear stale staged mounts before retesting

One thing the bad startup path can leave behind is a stale staged mount under Proxmox’s temporary LXC mount area. If you do not clear it, your next test can lie to you.

mount | grep ".pve-staged-mounts"
find /var/lib/lxc/.pve-staged-mounts -maxdepth 2 -type d -ls

If the failed container left a stale staged mount behind, unmount only the affected path:

umount -l /var/lib/lxc/.pve-staged-mounts/mp0

Do not blindly unmount random paths you do not understand. Match the staged mount to the container and mount point you were actually testing.

Step 7: Retest like an operator, not like someone hoping it works

Now restart the container cleanly and verify both startup and mount behavior.

pct stop ${CTID}
pct start ${CTID}
pct exec ${CTID} -- mount | grep -E 'media|downloads|nfs|mergerfs'
pct exec ${CTID} -- sh -c 'id && ls -ld /media /downloads 2>/dev/null'

If the path should be writable, prove that it is writable:

pct exec ${CTID} -- sh -c 'touch /downloads/.pve-write-test && rm /downloads/.pve-write-test'

If the path is intentionally read-only, do not run a write test and then act surprised when it fails. Instead confirm that reads work and that your application sees the data it is supposed to see.
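A small helper keeps the write-versus-read intent explicit, so you test what the mount is supposed to do rather than what you hope it does. `verify_path` is an illustrative name, not a pct subcommand; inside the container you would invoke it via `pct exec ${CTID} -- sh -c '...'`:

```shell
# Sketch: verify a path against its INTENDED mode ("rw" or "ro").
# Never run a write test against a deliberately read-only mount.
verify_path() {
    path="$1"; mode="$2"
    [ -d "$path" ] || { echo "$path: missing"; return 1; }
    if [ "$mode" = rw ]; then
        if touch "$path/.write-test" 2>/dev/null; then
            rm -f "$path/.write-test"
            echo "$path: writable OK"
        else
            echo "$path: expected writable, got read-only"; return 1
        fi
    else
        # Read-only intent: prove reads work, do not attempt a write.
        ls "$path" >/dev/null 2>&1 && echo "$path: readable OK" \
            || { echo "$path: unreadable"; return 1; }
    fi
}

# Demo against a scratch directory; reports it as writable.
d=$(mktemp -d)
verify_path "$d" rw
```

Run it once per mount point with the mode you actually configured, and a nonzero exit tells you exactly which assumption broke.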

This is also the point where read-only storage errors inside the app stop being vague. If the container starts and the mount is present but writes still fail, you now know whether it is:

  • a deliberate ro mount
  • a host-side permission mismatch
  • a bad UID or GID map
  • a deeper application-level assumption about ownership

What usually goes wrong in this runbook

  • You are still on the broken container package. Update first.
  • You replaced the mount syntax but kept the wrong UID/GID map.
  • You mapped one media UID but forgot the group your app actually needs.
  • You changed the config but left stale staged mounts behind.
  • You tested permissions inside the container without checking the host-side ownership first.
  • You flipped the LXC to privileged because it was faster. That "fix" trades a mount problem for a security problem.

A production insight here: the worst version of this incident is not the container that fails to boot. It is the one that boots, mounts something half-wrong, and then gives you delayed write failures or app corruption later. A hard startup failure is annoying. A fake-success storage mount is worse.
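That fake-success failure mode is worth catching with a standing canary rather than with memory. The sketch below writes a timestamp and reads it back; `mount_canary` is a hypothetical helper, and on a real host you would run it from cron against each path that is supposed to stay writable:

```shell
# Sketch: a write canary for catching silently read-only or half-mounted
# storage. mount_canary is an illustrative helper; schedule it against
# every writable bind mount and alert on nonzero exit.
mount_canary() {
    path="$1"; canary="$path/.mount-canary"
    stamp=$(date +%s)
    if ! printf '%s\n' "$stamp" 2>/dev/null > "$canary"; then
        echo "CANARY FAIL: $path is not writable"; return 1
    fi
    back=$(cat "$canary" 2>/dev/null)
    if [ "$back" != "$stamp" ]; then
        echo "CANARY FAIL: $path readback mismatch"; return 1
    fi
    echo "CANARY OK: $path"
}

# Demo against a scratch directory:
d=$(mktemp -d)
mount_canary "$d"
```

The readback step matters: it catches the mount that accepts writes but serves stale or detached data, which is exactly the delayed-corruption case described above.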

When to stop patching and move the workload or the operations model

If your workload depends on layered storage, multiple passthrough IDs, GPU groups, Docker-inside-LXC, or fragile network shares, you need to decide whether this still belongs in an unprivileged LXC at all. Sometimes the right answer is "yes, but clean up the mapping and mount design." Sometimes the right answer is "this workload wants KVM, not LXC."

That is where ServerSpan’s existing Proxmox context helps. If you are still deciding whether the workload should stay in LXC or move to KVM, read the LXC vs KVM decision playbook. If the bigger issue is that your virtualization stack has outgrown home-lab style maintenance, this is exactly where managed Proxmox help becomes rational instead of optional.

For teams that need clean infrastructure without babysitting every host-level storage edge case, a properly planned virtual server environment is often the more stable path than turning every storage quirk into a permanent container exception. And if you need the broader Proxmox positioning, ServerSpan already covers that in Why Choose Proxmox for Your Virtualization Infrastructure.

The short runbook

  1. Check your package versions and confirm whether you are still on the broken 9.1.5-era container path.
  2. Capture the failing container config and debug startup logs.
  3. If you are still on the bad package version, update to pve-container 6.1.1 or later.
  4. Identify old mpX mounts on MergerFS, NFS, or FUSE-backed storage.
  5. Replace fragile mpX bind mounts with raw lxc.mount.entry where appropriate.
  6. Fix explicit unprivileged UID and GID mapping with lxc.idmap, /etc/subuid, and /etc/subgid.
  7. Clear stale staged mounts before retesting.
  8. Retest startup, mount visibility, and write behavior intentionally.

If you work through the problem in that order, "Status 30" stops being a vague Proxmox annoyance and becomes what it really is: a mount-path and identity-mapping failure with a specific fix path.

Source & Attribution

This article is based on original reporting from the serverspan.com blog. For the complete methodology, cite the original article. The canonical source is available at: Proxmox 9.1.5 Status 30 in Unprivileged LXC: Runbook for MergerFS and NFS Storage.