Datastores not created on a new KVM host


I have just installed my first KVM host. Until today, my OpenNebula “cluster” had only one server, which acted as both front-end and KVM host. Today I added a second KVM host, following the documentation step by step (including setting a “temporary” password for the “oneadmin” user on the KVM host to allow scp and ssh-copy-id). Then I tested the installation… and it failed. The error occurs when the front-end executes this “remote” script:

[oneadmin@front-end ~]$ /var/lib/one/remotes/tm/qcow2/clone front-end:/var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805 kvm-host:/var/lib/one//datastores/0/7233/disk.0 7233 1
INFO: clone: Cloning /var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805 in kvm-host:/var/lib/one//datastores/0/7233/disk.0
ERROR: clone: Command "set -e -o pipefail

cd /var/lib/one//datastores/0/7233

rm -rf "/var/lib/one//datastores/0/7233/disk.0.snap"

mkdir -p "/var/lib/one//datastores/0/7233/disk.0.snap"

B_FORMAT=$(qemu-img info /var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805 | grep "^file format:" | awk '{print $3}' || :)
qemu-img create -o backing_fmt=${B_FORMAT:-raw} -b /var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805 -f qcow2  /var/lib/one//datastores/0/7233/disk.0.snap/0

rm -f "/var/lib/one//datastores/0/7233/disk.0"

ln -s disk.0.snap/0 /var/lib/one//datastores/0/7233/disk.0

cd /var/lib/one//datastores/0/7233/disk.0.snap

ln -s . /var/lib/one//datastores/0/7233/disk.0.snap/disk.0.snap" failed: qemu-img: Could not open '/var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805': Could not open '/var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805': No such file or directory
qemu-img: /var/lib/one//datastores/0/7233/disk.0.snap/0: Could not open '/var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805': No such file or directory
ERROR MESSAGE --8<------
Error copying front-end:/var/lib/one//datastores/1/2f73546291bb78b3d0fe356e1c950805 to kvm-host:/var/lib/one//datastores/0/7233/disk.0
ERROR MESSAGE ------>8--

From this error in oned.log (on the front-end node), I understand that the “remote” script fails because datastore “1” does not exist on kvm-host; only “0” does.
I have also noticed that in “Storage → Datastores” (in the Sunstone web interface) only the three datastores created on the front-end node appear, and nothing about kvm-host… so which process is supposed to create the datastore structure on kvm-host?

While doing some tests, I added a new “image” datastore on kvm-host, but of course it gets a different ID (something like “100”), so the script still fails.

What can I do?

On top of that, in Sunstone the free space for VMs (datastore 1) only shows the front-end’s capacity, not the kvm-host’s free space. I have 7.6 TB on the front-end node and 9.1 TB on my new kvm-host, but those 9.1 TB do not appear in Storage → Datastores.

Help, please :wink:


The qcow2 driver expects a shared filesystem to be configured (it is just a better/improved option than the standard shared TM_MAD). That said, you have two options: define a shared filesystem (NFS, for example) and symlink /var/lib/one/datastores/$DATASTORE_ID to a folder in the shared mount on both the front-end and the host(s), or reconfigure the datastores to use the ssh driver instead.
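For the NFS option, a minimal sketch of what I mean (the hostnames `front-end` and `kvm-host`, the export path and the datastore ID 1 are assumptions; adapt them to your setup):

```shell
# On the front-end (NFS server): export the image datastore directory.
# Add to /etc/exports (assumed datastore ID 1):
#   /var/lib/one/datastores/1  kvm-host(rw,sync,no_subtree_check,no_root_squash)
exportfs -ra

# On kvm-host (NFS client): mount it at the path OpenNebula expects.
mkdir -p /var/lib/one/datastores/1
mount -t nfs front-end:/var/lib/one/datastores/1 /var/lib/one/datastores/1

# Or mount the share elsewhere and symlink the datastore ID into place:
#   mount -t nfs front-end:/var/lib/one/datastores/1 /mnt/shared_ds1
#   ln -s /mnt/shared_ds1 /var/lib/one/datastores/1
```

Make sure the uid/gid of oneadmin match on both machines, otherwise the mounted files will show up with the wrong ownership.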

Hope this helps,
Anton Todorov

For the moment, I will run more tests exporting the “default” (ID 1) and “files” (ID 2) datastores from the front-end to the kvm host via NFS. On the kvm host, the opennebula-kvm-host installation packages created a “system” datastore (also with ID 0). I will then mount datastores “1” and “2” on the kvm host and create two new datastores, “default” (with ID 101, for example) and “files” (with ID 102).

With this configuration, everything should run correctly… shouldn’t it?
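The mounts I have in mind would look roughly like this (mount points and options are my assumptions, not tested yet):

```
# /etc/fstab on the kvm host -- hypothetical NFS mounts for datastores 1 and 2
front-end:/var/lib/one/datastores/1  /var/lib/one/datastores/1  nfs  defaults,_netdev  0 0
front-end:/var/lib/one/datastores/2  /var/lib/one/datastores/2  nfs  defaults,_netdev  0 0
```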

Another question along the same lines: is there any way to configure both systems (the front-end, which also acts as a KVM host, and my new KVM host) “transparently” for users? My idea is that a user logs in to Sunstone, instantiates a template and, without any further interaction, the OpenNebula scheduler decides on which host this new VM must be created and run, depending on free CPUs, for example.
This configuration should always use local datastores on the front-end (which acts as a KVM host too) and on the new kvm-host. I can’t set up a “shared” (distributed) datastore on Ceph, Gluster, Lustre or similar… I just can’t :frowning:
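Since I cannot use shared storage, I guess the ssh transfer driver Anton mentioned would be the alternative for local disks on each host. Something like this is what I understand (my assumption, not tested):

```
# Hypothetical: switch the system datastore (ID 0) to the ssh transfer driver,
# so images are copied to each host over ssh instead of expecting a shared FS.
# $ onedatastore update 0
TM_MAD = "ssh"
```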