VM snapshots failing:
LVM_THIN_ENABLE=YES is set on both the image and system datastores, but VMs are provisioned with standard (thick) LVs, not thin LVs. The fs_lvm/clone script has no thin logic, and snap_create just returns 'Operation not supported'.
Versions of the related components and OS (frontend, hypervisors, VMs): OpenNebula 7.0.1
Frontend:
opennebula-common-onecfg-7.0.1-1.el9.noarch
opennebula-common-7.0.1-1.el9.noarch
opennebula-rubygems-7.0.1-1.el9.x86_64
opennebula-libs-7.0.1-1.el9.noarch
opennebula-guacd-7.0.1-1.2.0+1.el9.x86_64
opennebula-tools-7.0.1-1.el9.noarch
opennebula-migration-7.0.1-1.el9.noarch
opennebula-7.0.1-1.el9.x86_64
opennebula-fireedge-7.0.1-1.el9.x86_64
opennebula-flow-7.0.1-1.el9.noarch
opennebula-gate-7.0.1-1.el9.noarch
Steps to reproduce: try to create a disk snapshot via the command line (`onevm disk-snapshot-create <vm_id> <disk_id> <name>`), either while the VM is running or powered off.
**Current results:**
Tue Feb 24 14:35:53 2026 [Z0][VM][I]: New LCM state is RUNNING
Tue Feb 24 14:47:05 2026 [Z0][VM][I]: New state is ACTIVE
Tue Feb 24 14:47:05 2026 [Z0][VM][I]: New LCM state is DISK_SNAPSHOT
Tue Feb 24 14:47:05 2026 [Z0][VMM][E]: DISKSNAPSHOTCREATE: Cannot perform a live DISKSNAPSHOTCREATE operation
Tue Feb 24 14:47:05 2026 [Z0][VM][I]: New LCM state is RUNNING
Tue Feb 24 14:47:05 2026 [Z0][LCM][E]: Could not take disk snapshot.
Tue Feb 24 14:47:11 2026 [Z0][VMM][I]: Ignoring VM state update
Tue Feb 24 14:49:54 2026 [Z0][VM][I]: New LCM state is SHUTDOWN_POWEROFF
Tue Feb 24 14:50:04 2026 [Z0][VMM][I]: ExitCode: 0
Tue Feb 24 14:50:04 2026 [Z0][VMM][I]: Successfully execute virtualization driver operation: shutdown.
Tue Feb 24 14:50:04 2026 [Z0][VMM][I]: clean: Executed “sudo -n ovs-ofctl del-flows ovsbr0 in_port=27”.
Tue Feb 24 14:50:04 2026 [Z0][VMM][I]: clean: Executed “sudo -n ovs-vsctl --if-exists del-port ovsbr0 one-15-0”.
Tue Feb 24 14:50:04 2026 [Z0][VMM][I]: ExitCode: 0
Tue Feb 24 14:50:04 2026 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Tue Feb 24 14:50:04 2026 [Z0][VM][I]: New state is POWEROFF
Tue Feb 24 14:50:04 2026 [Z0][VM][I]: New LCM state is LCM_INIT
Tue Feb 24 14:50:14 2026 [Z0][VMM][I]: Ignoring VM state update
Tue Feb 24 14:50:30 2026 [Z0][VMM][I]: Ignoring VM state update
Tue Feb 24 14:51:36 2026 [Z0][VM][I]: New state is ACTIVE
Tue Feb 24 14:51:36 2026 [Z0][VM][I]: New LCM state is DISK_SNAPSHOT_POWEROFF
Tue Feb 24 14:51:36 2026 [Z0][TrM][I]: Command execution failed (exit code: 1): /var/lib/one/remotes/tm/fs_lvm/snap_create onenode03:/var/lib/one//datastores/102/15/disk.0 1 15 101
Tue Feb 24 14:51:36 2026 [Z0][TrM][I]: snap_create: Operation not supported
Tue Feb 24 14:51:36 2026 [Z0][TrM][E]: Error executing image transfer script: snap_create: Operation not supported
Tue Feb 24 14:51:36 2026 [Z0][LCM][E]: Could not take disk snapshot.
What you’re seeing with snapshots failing on SAN LVols in OpenNebula 7.0.1 is consistent with current limitations in the snapshot implementation when using the fs_lvm driver and standard LVM logical volumes.
From the logs you shared, the VMs are being created on regular (thick) LVs, not thin-provisioned ones, even though you have LVM_THIN_ENABLE=YES. Because the fs_lvm snapshot scripts have no logic for handling snapshots on non-thin LVs, the snap_create command returns "Operation not supported". That's why the VM transitions to the DISK_SNAPSHOT or DISK_SNAPSHOT_POWEROFF state and then fails with that message.
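A quick way to confirm which kind of LV you got: in `lvs` output, the first character of the Attr field is `V` for a thin volume and `-` for a plain linear LV. A self-contained sketch of that check (the SAMPLE string stands in for real `sudo lvs --noheadings -o lv_name,lv_attr vg-one-102` output, with made-up LV names):

```shell
#!/bin/sh
# Classify LVs as thin or thick from the first character of the lvs attr
# field. SAMPLE is a captured stand-in for:
#   sudo lvs --noheadings -o lv_name,lv_attr vg-one-102
SAMPLE='lv-one-15-0 -wi-a-----
lv-one-16-0 Vwi-a-t---'
RESULT=$(printf '%s\n' "$SAMPLE" | while read -r name attr; do
  case "$attr" in
    V*) echo "$name: thin"  ;;   # 'V' = thin volume
    *)  echo "$name: thick" ;;   # '-' = plain linear LV
  esac
done)
echo "$RESULT"
```

On your system every `lv-one-*` volume should be showing the thick pattern, which matches the Pool column being empty.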
So, in a nutshell, are you advocating creating the new VG/LVols as thin from the start? I created them without any thin params and then added LVM_THIN_ENABLE=YES ex post facto via `onedatastore update`.
This POC is going REALLY well (we want to move off VMware), but I need to get snapshots dialed in correctly.
Unfortunately, LVM_THIN_ENABLE can only be set at datastore creation time, or when there are no images on the datastore. This is because, by OpenNebula design, an LVM datastore cannot hold a mix of thin and thick images.
I read that restriction after I created the VMs. However, deleting the (test) VMs and then updating the datastores with LVM_THIN_ENABLE didn't convert the underlying LVols to thin. That seems not to work on my system (RHEL 9.7 + Pure array), as the fs_lvm/clone script has no thin logic and snap_create just returns 'Operation not supported'.
Mmmm… That is pretty strange. I understand that there are no LVs on that VG?
I cannot be totally sure, but a quick test could be deleting and recreating the datastore, adding LVM_THIN_ENABLE=yes to the creation template. The datastore ID will change, but this may be the easiest way. Something like the following (where ${CURRENT_DATASTORE_ID} is the ID of the current datastore):
```shell
EDITOR=cat onedatastore update ${CURRENT_DATASTORE_ID} > ds_template
echo "LVM_THIN_ENABLE=yes" >> ds_template
echo "NAME=..." >> ds_template   # choose a name for the new datastore
onedatastore delete ${CURRENT_DATASTORE_ID}
onedatastore create ds_template
# ID: ${NEW_DATASTORE_ID}   <- the ID reported for the new datastore
vgrename vg-one-${CURRENT_DATASTORE_ID} vg-one-${NEW_DATASTORE_ID}
```
If thin LVs are still not created after that, I may ask you for some logs.
Tested with a completely empty VG (no LVs, no thin pool). Created a fresh datastore with LVM_THIN_ENABLE=yes from the start. ONE still creates standard LVs: the `lvs` Pool column is empty, and the attr field shows a standard LV, not thin. What logs would help debug this?
```shell
grep -E -i -A5 -B5 "lvcreate|thin" /var/lib/one/remotes/tm/fs_lvm/clone
```

```shell
LV_NAME="lv-one-$VM_ID-$DISK_ID"
VG_NAME="vg-one-$DS_SYS_ID"
DEV="/dev/${VG_NAME}/${LV_NAME}"

# Execute lvcreate with a lock in the frontend
CREATE_CMD=$(cat <<EOF
set -e -o pipefail
$SYNC
$SUDO $LVSCAN
$SUDO $LVCREATE --wipesignatures n -L${SIZE}M -n $LV_NAME $VG_NAME
EOF
)
```
There it is. The clone script just does:

```shell
$SUDO $LVCREATE --wipesignatures n -L${SIZE}M -n $LV_NAME $VG_NAME
```

No thin logic at all. No check for LVM_THIN_ENABLE, no `--thin` or `--thinpool` options.
For thin provisioning it would need something like:
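A minimal sketch of what that thin-aware branch could look like. This is not actual driver code: the variable values are stand-ins for what the real script receives at runtime, `POOL_NAME` is an assumed thin-pool name, and the command is printed rather than executed so the logic reads as a dry run:

```shell
#!/bin/sh
# Hypothetical thin-aware create logic for fs_lvm/clone -- a sketch, not
# actual driver code. All variable values are illustrative stand-ins;
# POOL_NAME is an assumed thin-pool name.
SUDO="sudo"
LVCREATE="lvcreate"
SIZE=10240
VM_ID=15
DISK_ID=0
DS_SYS_ID=102
LVM_THIN_ENABLE="YES"
POOL_NAME="lv-one-tpool"
LV_NAME="lv-one-${VM_ID}-${DISK_ID}"
VG_NAME="vg-one-${DS_SYS_ID}"

if [ "$LVM_THIN_ENABLE" = "YES" ]; then
    # Thin LV: virtual size (-V) allocated from an existing thin pool
    CREATE_CMD="$SUDO $LVCREATE --wipesignatures n -V ${SIZE}M --thinpool $POOL_NAME -n $LV_NAME $VG_NAME"
else
    # Current driver behavior: plain thick LV
    CREATE_CMD="$SUDO $LVCREATE --wipesignatures n -L${SIZE}M -n $LV_NAME $VG_NAME"
fi
echo "$CREATE_CMD"   # dry run: print the command instead of executing it
```

With a thin origin in place, a thin-aware snap_create could then use a plain `lvcreate -s ${VG_NAME}/${LV_NAME} -n <snap_name>` (thin snapshots need no size argument); that is standard LVM usage, not something the current driver implements.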