I didn’t find the answer in the documentation, so I’d like to ask here.
I have an OpenNebula server (front-end, oned) and a separate host with NVMe disks.
I also have some hypervisor nodes that I plan to use with these NVMe disks.
I created an iSCSI LUN for each server, and now I want to configure an LVM datastore on each hypervisor to reduce the overhead of a local filesystem. And I completely don’t understand how to do it.
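To clarify what I have so far, this is roughly how the LUN is attached and a volume group is prepared on a hypervisor (the portal address, target IQN, device name, and VG name below are placeholders for my setup):

```shell
# Discover and log in to the iSCSI target
# (portal IP and IQN are placeholders for your storage host)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2024-01.example:nvme-lun0 -p 192.0.2.10 --login

# The LUN appears as a new block device; check lsblk for the actual name
lsblk

# Create an LVM physical volume and a volume group on it
pvcreate /dev/sdb
vgcreate nvme_vg /dev/sdb
```

The open question is how to make OpenNebula use such a volume group as a datastore.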
Hello, you can use this addon https://github.com/OpenNebula/addon-lvm, which was part of the OpenNebula core in the past and was replaced by fs_lvm. It depends on your usage.
LVM driver - stores all images (persistent, non-persistent, and deployment images) on LVM volumes.
FS_LVM driver - stores persistent and non-persistent images on a filesystem; deployment images are copied to LVM volumes.
I used LVM in the past and I preferred the LVM driver.
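As a sketch, registering a datastore for the addon could look something like this. The attribute names and values here are assumptions based on the old core LVM drivers; check the addon's README for the exact names in your version:

```shell
# Hypothetical datastore template for addon-lvm
# (DS_MAD/TM_MAD/VG_NAME values are assumptions, verify against the addon README)
cat > lvm_ds.conf <<'EOF'
NAME      = lvm_images
DS_MAD    = lvm
TM_MAD    = lvm
DISK_TYPE = BLOCK
VG_NAME   = nvme_vg
EOF

onedatastore create lvm_ds.conf
```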
I don’t think this is related to my question.
That add-on is quite similar to NFS, and it doesn’t look like a high-performance solution to me.
So my question stands: did the creators of OpenNebula consider how to use local LVM as system storage?
Because right now this looks like a school-project solution: Ceph, NFS, LVM over TCP, iSCSI over NFS…
None of the presented solutions provides stable operation of the disk subsystem. OpenNebula itself doesn’t provide any tools for managing storage or for capacity planning. It is full of frustration.
Yes, you can buy the “enterprise” edition and ask them for an enterprise solution :D. Or develop your own, as I did with HPE 3PAR support.
But you can use FS_LVM: keep the images datastore on the front-end, with images saved as files, and use LVM as the system datastore. When you deploy a VM, the image is copied from the images datastore to an LVM volume in the system datastore.
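A minimal sketch of that setup, based on the stock FS_LVM driver documentation (verify the attribute names and the volume group naming convention against the docs for your release):

```shell
# System datastore using the FS_LVM transfer driver
cat > system_lvm.conf <<'EOF'
NAME   = lvm_system
TM_MAD = fs_lvm
TYPE   = SYSTEM_DS
EOF
onedatastore create system_lvm.conf

# On each hypervisor, the driver expects a volume group named
# vg-one-<system_ds_id>; e.g. if the datastore above got ID 100:
vgcreate vg-one-100 /dev/sdb
```

The images datastore stays a regular filesystem datastore on the front-end; only the running VM disks land on the LVM volumes.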