If Proxmox VE 9 shows “The current guest configuration does not support taking new snapshots” on an LVM thick-backed VM, the usual cause is not a broken snapshot feature. The usual cause is that the VM is not yet eligible for the new snapshot-as-volume-chain workflow. In practice, that almost always means one of three things: the LVM storage does not have snapshot-as-volume-chain 1 set, the VM disk is still raw instead of qcow2, or the guest has an unsupported device or configuration element that makes the UI refuse new snapshots.
In our experience managing Proxmox environments, this is exactly the kind of issue that wastes time because the UI message is technically correct but operationally incomplete. It tells you the guest configuration is unsupported. It does not tell you which part is unsupported. That leads people to chase storage bugs, cluster bugs, and permission bugs when the real answer is usually sitting in /etc/pve/storage.cfg or the VM disk line in qm config.
This runbook is for LVM thick on Proxmox VE 9, not LVM-thin and not a general snapshot overview. The goal is simple: identify why the UI is refusing snapshots, fix the exact blocker, and verify the guest is now eligible.
What Proxmox is actually checking
With the new LVM thick snapshot support in Proxmox VE 9, snapshots are not implemented with native LVM snapshots of the guest disk. Proxmox added snapshot-as-volume-chain support precisely to avoid the performance and operational penalties of native LVM snapshots. That matters because the new feature has prerequisites. If the storage and disk layout do not match those prerequisites, the snapshot button stays unavailable and you get the “current guest configuration does not support taking new snapshots” message.
The practical translation is this:
- The storage must be LVM thick with snapshot-as-volume-chain enabled.
- The VM disk must be in the expected format for that workflow.
- The guest configuration must not include a device that still blocks the new snapshot path.
If any one of those fails, the snapshot UI refuses the entire guest.
The two most common causes
Most operators hit one of these first:
- The storage flag is missing. The LVM storage exists and the disk lives there, but snapshot-as-volume-chain 1 is not present in /etc/pve/storage.cfg.
- The disk is still raw. The VM was created or migrated earlier as format=raw, and the new LVM thick snapshot path is not available to that disk layout.
A third blocker shows up often enough to deserve its own mention:
- The guest has TPM attached. Recent operator reports show VMs with TPM can still present the same “unsupported” snapshot error even after the storage flag is enabled and the disk is qcow2.
That last one is important because it creates a false negative during troubleshooting. You fix the storage, convert the disk, and the UI still says no. At that point many admins assume they misunderstood the feature. Sometimes the real blocker is simply that the VM still has a TPM disk configured.
Step 1: inspect the storage definition
Start with the storage layer. Do not guess from the GUI. Read the actual cluster config.
cat /etc/pve/storage.cfg
For the target LVM storage, you want to see something like this:
lvm: lunlvm
vgname vg_lun
content images,rootdir
shared 1
snapshot-as-volume-chain 1
If that last line is missing, the UI refusal makes sense. You enabled or expected “LVM snapshots in Proxmox 9,” but not the exact storage property the new path requires.
A common issue we see in production is that admins create a new storage in the GUI, assume the toggle stuck, then later compare behavior across nodes or clusters and discover the flag never made it into the actual cluster config. The first thing we check is always the cluster file, not the memory of how the storage was created.
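Grepping for the flag by eye works, but on a node with several storages it is easy to match the wrong stanza. The helper below is our own sketch (the name check_snapshot_flag and the POSIX-shell parsing are ours, not a Proxmox tool): it only reports success if the named LVM stanza itself carries the flag.

```shell
# check_snapshot_flag NAME < /etc/pve/storage.cfg
# Exits 0 only if the "lvm: NAME" stanza carries "snapshot-as-volume-chain 1".
# Indented lines belong to the stanza above them, so a flag set on a
# different storage will not produce a false positive.
check_snapshot_flag() {
  awk -v stanza="lvm: $1" '
    $0 == stanza    { in_block = 1; next }   # our stanza starts here
    /^[^[:space:]]/ { in_block = 0 }         # a new stanza starts, stop
    in_block && $1 == "snapshot-as-volume-chain" && $2 == "1" { found = 1 }
    END { exit found ? 0 : 1 }
  '
}
```

Run it as check_snapshot_flag lunlvm < /etc/pve/storage.cfg; a zero exit status means the stanza has the flag.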
Step 2: inspect the VM disk format
Once the storage flag is confirmed, inspect the guest itself.
qm config 104
You are looking for the disk line. On a guest that still fails, it often looks like this:
scsi0: lunlvm:vm-104-disk-0,format=raw,size=20G
That is the key clue. On LVM thick with snapshot-as-volume-chain, the real-world blocker is frequently that the disk is still raw. Recent operator threads show the exact same pattern: snapshot support was enabled on the storage, but snapshots still failed until the virtual disk was converted to qcow2. That is why the UI refuses. The storage supports the feature. The guest disk layout still does not.
On a guest that is aligned correctly, the disk path should reflect a qcow2-backed workflow rather than a raw disk left over from older placement or migration decisions.
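A quick way to classify the disk line from this step mechanically. The helper name disk_format is ours, not a Proxmox tool; it treats an absent format= option as raw, since LVM thick volumes were always raw before the volume-chain feature existed.

```shell
# disk_format "DISK_LINE" -> prints qcow2 or raw.
# An absent format= option is treated as raw, which matches how LVM
# thick volumes behaved before the new snapshot workflow.
disk_format() {
  case ",$1," in
    *,format=qcow2,*) echo qcow2 ;;
    *)                echo raw ;;
  esac
}

disk_format "scsi0: lunlvm:vm-104-disk-0,format=raw,size=20G"
# → raw
```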
Step 3: convert the disk instead of rebuilding the VM
You usually do not need to rebuild the guest. The normal fix is to move the disk and convert it to qcow2 in the process. In the Proxmox UI, select the disk under the VM’s Hardware panel, use the disk action to move it to the same LVM storage, and choose qcow2 as the target format if the option is available.
This is the part admins often miss. They assume “same storage” means “no actual change.” In practice, moving the disk to the same target storage can still be the cleanest way to force a format conversion and align the guest with the new snapshot model.
After the move, run qm config again and verify the disk definition no longer reflects the old raw layout.
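The UI move also has a CLI equivalent: qm disk move (the current spelling of qm move-disk) accepts a target format. The wrapper below is our sketch, not Proxmox tooling; DRY_RUN=1 prints the command instead of running it, since qm only exists on a Proxmox node. Check qm help disk move on your node before relying on the exact options.

```shell
# move_to_qcow2 VMID DISK STORAGE
# Moves DISK onto STORAGE, converting to qcow2; --delete drops the old
# raw volume after a successful move. With DRY_RUN=1 the command is
# printed rather than executed.
move_to_qcow2() {
  set -- qm disk move "$1" "$2" "$3" --format qcow2 --delete
  if [ "${DRY_RUN:-0}" = 1 ]; then
    echo "$@"
  else
    "$@"
  fi
}

DRY_RUN=1 move_to_qcow2 104 scsi0 lunlvm
# → qm disk move 104 scsi0 lunlvm --format qcow2 --delete
```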
On thick LVM, that also explains why the feature exists in its current form. Proxmox is not pretending thick LVM suddenly became thin-provisioned. The snapshot chain is created by layering volumes while qcow2 handles the metadata needed for that chain. That is why the docs describe snapshot volumes as thick-provisioned LVM logical volumes while also explaining that the chain approach avoids native LVM snapshot penalties.
Step 4: check for TPM and other guest-side blockers
If the storage flag is present and the disk is qcow2 but the UI still refuses snapshots, inspect the rest of the guest config. The most visible current edge case is TPM.
qm config 104 | grep -Ei 'tpm|efidisk|scsi|virtio|sata|ide'
If you see a TPM device attached, test with that in mind before you keep changing storage settings. Current operator reports show that a VM with TPM can still keep the snapshot option inactive even when the LVM storage and qcow2 disk are otherwise correct.
That does not mean “delete security features casually.” It means isolate the blocker. If this is a lab VM, you can test by removing TPM temporarily and checking whether snapshot availability returns. If this is a production Windows guest or anything tied to BitLocker, secure boot workflows, or compliance expectations, do not treat TPM removal like a harmless toggle. Understand the guest dependency first.
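To make the TPM check mechanical rather than visual, a small filter over qm config output can flag the blocker. The helper name list_tpm_devices is ours; Proxmox stores the TPM backing volume under a tpmstate<N> key in the VM config. The lab-only removal command is left as a comment, for the reasons above.

```shell
# list_tpm_devices < qm-config-output
# Prints any TPM state disks attached to the guest; prints nothing
# (and still exits 0) when no tpmstate<N> key is present.
list_tpm_devices() {
  grep -E '^tpmstate[0-9]+:' || true
}

# Lab-only isolation test -- do NOT run casually on BitLocker guests:
#   qm set 104 --delete tpmstate0
```

Usage: qm config 104 | list_tpm_devices.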
Why the UI refuses instead of partially allowing it
This part confuses people because the guest may look simple. One disk. One node. No obvious complexity. Yet the snapshot panel still refuses the whole VM.
The reason is operationally sensible. Proxmox is not evaluating your intent. It is evaluating whether the entire guest configuration supports a new snapshot safely under the current storage model. If one required piece does not line up, the UI does not offer a half-working snapshot workflow. It blocks the action.
In other words, the message is broad because the eligibility check is broad. That is why you need to inspect all three layers:
- storage eligibility
- disk format eligibility
- guest device eligibility
Admins lose time here because they stop at the first partial fix. They enable the storage flag and assume the feature should now work for every disk already sitting there. That is not how the current rollout behaves.
A practical validation sequence
When a client opens a “why is snapshot still greyed out” ticket, this is the sequence we use:
- Read /etc/pve/storage.cfg and confirm snapshot-as-volume-chain 1 is present on the correct LVM storage.
- Run qm config <VMID> and inspect the actual disk line.
- If the disk is raw, move or convert it to qcow2 on the target LVM storage.
- Run qm config <VMID> again and confirm the new layout.
- Check for TPM and other guest devices if snapshots are still unavailable.
- Retry the snapshot only after all three layers line up.
This order matters. If you start with random guest edits before you confirm storage and disk format, you create noise. If you start by blaming storage before you read the VM config, you also create noise. The fastest fix is a disciplined sequence.
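The sequence above can be collapsed into one crude summary check. snapshot_blockers is our sketch, not a Proxmox tool, and it deliberately greps whole files: on a node with several storages, prefer a per-stanza check so a flag on the wrong storage does not mask the problem.

```shell
# snapshot_blockers STORAGE_CFG VM_CFG
# Prints one line per layer that still blocks snapshots, in the same
# order as the validation sequence: storage flag, disk format, TPM.
# Prints nothing when all three layers look eligible.
snapshot_blockers() {
  grep -q 'snapshot-as-volume-chain 1' "$1" \
    || echo "storage: snapshot-as-volume-chain 1 missing"
  grep -q 'format=raw' "$2" \
    && echo "disk: still format=raw"
  grep -Eq '^tpmstate[0-9]+:' "$2" \
    && echo "guest: TPM state disk attached"
  return 0
}
```

Usage on a node: snapshot_blockers /etc/pve/storage.cfg <(qm config 104), or dump qm config to a file first on shells without process substitution.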
What usually goes wrong during remediation
- You enabled the flag on the wrong storage. The VM disk is on a different LVM backend than the one you edited.
- You assumed old raw disks would become eligible automatically. They usually do not.
- You checked the GUI, not the cluster config. The truth is in /etc/pve/storage.cfg.
- You converted one disk but forgot the guest has another unsupported disk or device.
- You ignored TPM as a blocker. Then you keep rechecking storage while the real issue is the VM device layout.
A production insight here: the most common reason this ticket drags on is not that the fix is hard. It is that teams assume there can only be one blocker. On Proxmox, storage eligibility and guest eligibility are separate checks that can both matter at the same time.
When LVM thick snapshots are the wrong hill to die on
Be honest about the bigger design question. If your environment leans heavily on snapshots, clones, rapid rollbacks, and frequent template work, LVM-thin or another storage backend may still be the cleaner operational fit. Proxmox VE 9’s LVM thick snapshot support is useful, but it is a specific workflow with specific requirements. It is not a free pass to treat every old thick LVM layout like a snapshot-first design.
That is where teams without dedicated virtualization expertise usually lose time. They are not just debugging one error. They are trying to force old storage decisions to behave like a different platform model.
For teams that do not want to spend hours tracing storage flags, disk format mismatches, and guest-level edge cases, ServerSpan’s Proxmox management service is the practical alternative. If the underlying issue is that your virtualization stack has outgrown ad hoc maintenance, it often makes more sense to fix the operations layer than to keep firefighting one storage ticket at a time.
If the problem is bigger than one VM and you are rethinking where supporting workloads should live, a clean deployment on the right virtual server footprint is usually more valuable than dragging one aging storage design through another quarter of exceptions.
For broader context, see Beyond Proxmox: The 2026 Virtualization Landscape and What Comes Next and CVE-2025-11234: QEMU-KVM VNC WebSocket Use-After-Free Enables Pre-Authentication DoS.
The short runbook
- Open /etc/pve/storage.cfg.
- Confirm the target LVM storage has snapshot-as-volume-chain 1.
- Run qm config <VMID>.
- If the VM disk is format=raw, move or convert it to qcow2.
- Recheck qm config <VMID>.
- If snapshots are still blocked, inspect the guest for TPM and other unsupported devices.
- Retry only after storage, disk format, and guest config all line up.
If you work through it in that order, the UI error stops being vague. It becomes a checklist.
Source & Attribution
This article is based on original data from the serverspan.com blog. For the complete methodology and to preserve data integrity, cite the original article. The canonical source is available at: Fix Proxmox VE 9 Error: “The current guest configuration does not support taking new snapshots”.