Issue with shared datastore


I’ve installed a fresh KVM host + frontend and added NFS storage shared between the KVM host and the frontend. Everything works as expected, except that when I attach a disk from this datastore, instead of creating a link in the system datastore directory of Nebula, it copies the image into the VM’s directory in /var/lib/one/datastores/0/ID/. I don’t have this issue with our 7 other locations (a location includes a frontend + KVM hosts).

Using OpenNebula 6.4.0 with the community packages, installed on AlmaLinux 8.8.


Do you have the same disk persistence in both cases? Each disk image should have a PERSISTENT attribute.

Yes, the persistent flag is there, but whether it is set to YES or NO, the image is still copied into the system datastore of Nebula.

Compare both datastore templates and look out for the TM_MAD attribute.
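As a sketch of that comparison, using made-up template dumps (in practice each file would come from `onedatastore show <ID>` for the faulty and the working datastore):

```shell
# Hypothetical template dumps; the TM_MAD values below are only examples.
cat > ds_faulty.txt <<'EOF'
TM_MAD="ssh"
EOF
cat > ds_working.txt <<'EOF'
TM_MAD="shared"
EOF
# diff exits non-zero when the templates differ, which is the interesting case here
diff ds_faulty.txt ds_working.txt || echo "TM_MAD differs between the datastores"
```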

They are completely the same.

faulty location:

location where linking images works:

Hello Yavor,

Could you check what deployment mode is set in the VM(s)?
You should look for the TM_MAD_SYSTEM attribute in the VM metadata (on this forum, this is IMHO often wrongly referred to as the VM Template too…)

onevm show ${VMID} -x

Also, check what is set in the VM Template used for instantiating the VM. (It is not clear what the “default” value there is, though…)
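As a rough sketch, the attribute can be grepped out of the saved dump. The XML fragment below is made up for illustration; the exact position of TM_MAD_SYSTEM in the real `onevm show ${VMID} -x` output may differ:

```shell
# Stand-in for: onevm show ${VMID} -x > vm.xml  (fragment below is fabricated)
cat > vm.xml <<'EOF'
<VM>
  <TEMPLATE>
    <DISK>
      <TM_MAD_SYSTEM><![CDATA[ssh]]></TM_MAD_SYSTEM>
    </DISK>
  </TEMPLATE>
</VM>
EOF
# Show the deployment mode the VM is actually using
grep -o '<TM_MAD_SYSTEM>.*</TM_MAD_SYSTEM>' vm.xml
```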

Another thing for new installations to consider is using qcow2 for the TM_MAD. In one-6.4 the shared tm_mad is a symlink to the qcow2 tm_mad, but I believe setting it to qcow2 explicitly is clearer.

Best Regards,
Anton Todorov


The TM_MAD_SYSTEM for the VM is set to:
The same is set for the datastore.

I’ve already tried creating an additional qcow2 datastore for testing, but the result is just the same: the image gets copied to the system datastore of Nebula upon attaching it to the VM.

I’m completely lost with this issue. It now appears at other locations where we hadn’t had issues since the upgrade. It’s either the storage template or the VM template, but in both cases I’m completely unable to locate the problem.

I see that my response from the previous week doesn’t show the TM_MAD_SYSTEM value; it is currently set to <![CDATA[ssh]]>, i.e. ssh.

Upgrading to 6.6 doesn’t solve the issue either. I’m seeing that Nebula creates a “ds.xml” file on the KVM host, which is not created at the other locations for VMs whose links are created properly.

Using qcow2 as the datastore type actually produces an error:

ERROR: ln.ssh: Command "set -ex -o pipefail
rebase_backing_files ()
local DST_FILE=$1;
for SNAP_ID in $(find * -maxdepth 0 -type f -print);
INFO=$(qemu-img info --output=json $SNAP_ID);
if [[ $INFO =~ "backing-filename" ]]; then
BACKING_FILE=${INFO/backing-filename": "/};
qemu-img rebase -f qcow2 -F qcow2 -u -b "${DST_FILE}.snap/$BACKING_FILE" $SNAP_ID;

cp /var/lib/one/datastores/106/088234fdd36a8f2da81b2beea72101b3 /var/lib/one/datastores/0/16/disk.0" failed: + cp /var/lib/one/datastores/106/088234fdd36a8f2da81b2beea72101b3 /var/lib/one/datastores/0/16/disk.0
cp: not writing through dangling symlink '/var/lib/one/datastores/0/16/disk.0'


This error is quite odd; I’m not sure why it is happening. It is probably worth looking at the symlink and what it points to.
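For what it’s worth, the cp failure itself is easy to reproduce with throwaway paths (nothing below touches a real datastore) — disk.0 points at a target that was never created, just like the dangling disk.0 -> disk.0.snap/0 link in the error:

```shell
cd "$(mktemp -d)"
# disk.0.snap/ was never created, so this link is dangling
ln -s disk.0.snap/0 disk.0
echo data > image
# GNU cp refuses to write through a dangling symlink and reports
# "not writing through dangling symlink", matching the error above
cp image disk.0 2>cp.err || cat cp.err
```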

When the image is not persistent, it should point to a qcow2 overlay copied onto the system datastore.

root@provisionengine-one:/var/lib/one/datastores/0/1# ls -l
total 388
-rw-rw-r-- 1 oneadmin oneadmin   1839 Aug 16 15:47 deployment.0
lrwxrwxrwx 1 oneadmin oneadmin     13 Aug 16 15:47 disk.0 -> disk.0.snap/0
drwxrwxr-x 2 oneadmin oneadmin   4096 Aug 16 15:47 disk.0.snap
-rw-r--r-- 1 oneadmin oneadmin 374784 Aug 16 15:47 disk.1
-rw-rw-r-- 1 oneadmin oneadmin    871 Aug 16 15:47 ds.xml
-rw-rw-r-- 1 oneadmin oneadmin   6636 Aug 16 15:47 vm.xml
root@provisionengine-one:/var/lib/one/datastores/0/1# ls -l disk.0.snap/
total 6152
-rw-r--r-- 1 oneadmin oneadmin 6356992 Aug 17 15:11 0
lrwxrwxrwx 1 oneadmin oneadmin       1 Aug 16 15:47 disk.0.snap -> .

When the image is persistent, the symlink should point to the image datastore (no copying is done)

root@provisionengine-one:/var/lib/one/datastores/0/8# ls -l
total 388
-rw-rw-r-- 1 oneadmin oneadmin   1890 Aug 17 15:11 deployment.0
lrwxrwxrwx 1 oneadmin oneadmin     65 Aug 17 15:11 disk.0 -> /var/lib/one/datastores/1/7018dff3de0a50dcf2821952f91f0717.snap/0
-rw-r--r-- 1 oneadmin oneadmin 374784 Aug 17 15:11 disk.1
-rw-rw-r-- 1 oneadmin oneadmin    871 Aug 17 15:11 ds.xml
-rw-rw-r-- 1 oneadmin oneadmin   7032 Aug 17 15:11 vm.xml
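The two layouts above boil down to copy-an-overlay versus link-back-to-the-image-datastore. A throwaway sketch of that distinction (made-up paths and file names, no real datastores involved):

```shell
img_ds=$(mktemp -d)   # stand-in for the image datastore, e.g. /var/lib/one/datastores/1
vm_dir=$(mktemp -d)   # stand-in for the VM dir on the system datastore

echo base-image > "$img_ds/image.snap_0"   # hypothetical image file

# Non-persistent: the system datastore gets its own copy (in reality a qcow2
# overlay, approximated here by a plain cp)
cp "$img_ds/image.snap_0" "$vm_dir/disk.0.overlay"

# Persistent: disk.0 is only a symlink back into the image datastore,
# so no copying happens
ln -s "$img_ds/image.snap_0" "$vm_dir/disk.0"
readlink "$vm_dir/disk.0"
```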


I’m sorry, but showing me your working qcow2 datastore won’t make my installation work.