LVM setup on OpenNebula?

I have set up OpenNebula with a single KVM node. The datastore is currently shared using NFS, but I want to switch the datastores to LVM. All of this runs on Proxmox using nested virtualization. I am trying to replicate our production setup to test some of our custom backup scripts; the production server uses LVM for its datastores.
What I did:
I attached another disk to the KVM node and created two volume groups (vg-one-110 and vg-one-111), then added datastores on the OpenNebula front-end so the IDs (110 and 111) matched. When I try to deploy a VM in vg-one-110, which is the system datastore, the scheduler log says there is no suitable datastore.
My suspicion: for NFS I shared the storage with all hosts, so maybe the problem is that the OpenNebula front-end cannot see or access the VGs on the KVM node.
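For context, here is roughly what I am checking on both sides (the /dev/vdb device below is just the disk I attached in my lab, adjust as needed):

# on the front-end: are the new datastores monitored and showing capacity?
onedatastore list
onedatastore show 110

# on the KVM node: are the volume groups visible to the oneadmin user?
sudo vgs vg-one-110 vg-one-111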

Hi, for LVM datastore support you need to install this addon.

Thanks for the reply.
I followed this doc for the LVM setup: LVM Datastore — OpenNebula 5.12.10 documentation. It describes the same procedure; should I redo the steps?

Hi, there are two LVM datastore types.

- fs_lvm
- block LVM

The first is included in OpenNebula; the second used to be part of OpenNebula but was later split out into an addon. The main difference is how the image datastore is handled.

Fs-lvm stores images as files on a shared filesystem.

Block LVM stores images as LVs in a VG.

If you want to use persistent images, I think it is better to use block LVM. The only drawback is that you need cLVM if you want more than one node.

With fs_lvm, persistent image deployment takes a long time because the image is copied from the filesystem to an LV, and on VM termination the LV is copied back to the filesystem. The advantage is that no cLVM is needed.
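To make the fs_lvm flow a bit more concrete, this is roughly what you should see on a node after a VM is deployed (the datastore IDs and the lv-one-<vmid>-<diskid> naming below are how I understand the driver behaves, so check against your own setup):

# image datastore: the registered image is still a plain file on the shared filesystem
ls -l /var/lib/one/datastores/<image_ds_id>/

# system datastore: on deployment the image is dumped into an LV inside vg-one-<system_ds_id>
sudo lvs vg-one-110
# expected: an LV named roughly lv-one-<vmid>-<diskid> backing the running VM disk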

Hi there,

I followed all the guides out there, but none of them worked for me.
In the end, this is what finally worked:

  1. Created the system datastore with the attributes below:
DATASTORE 112 INFORMATION                                                       
ID             : 112                 
NAME           : lvm-system          
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : SYSTEM              
DS_MAD         : -                   
TM_MAD         : fs_lvm              
BASE PATH      : /var/lib/one//datastores/112
DISK_TYPE      : FILE                
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 10G                 
FREE:          : 9.8G                
USED:          : 200M                
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="NO"
BRIDGE_LIST="172.17.26.223"
DISK_TYPE="FILE"
DS_MIGRATE="YES"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="fs_lvm"
TYPE="SYSTEM_DS"

IMAGES         
  2. Then created a VG named vg-one-112 on the KVM node, as the docs suggest.
  3. Created the image datastore with the config below:
ID             : 113                 
NAME           : lvm-image           
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : IMAGE               
DS_MAD         : fs                  
TM_MAD         : fs_lvm              
BASE PATH      : /var/lib/one//datastores/113
DISK_TYPE      : FILE                
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 31G                 
FREE:          : 26.7G               
USED:          : 4.3G                
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="NO"
CLONE_TARGET="SYSTEM"
CLONE_TARGET_SSH="SYSTEM"
DISK_TYPE="FILE"
DISK_TYPE_SSH="file"
DRIVER="raw"
DS_MAD="fs"
LN_TARGET="SYSTEM"
LN_TARGET_SSH="SYSTEM"
RESTRICTED_DIRS="/"
SAFE_DIRS="/"
TM_MAD="fs_lvm"
TM_MAD_SYSTEM="ssh"
TYPE="IMAGE_DS"

IMAGES         
11             
12             
13             
  4. Shared the image datastore with the KVM node (a rough sketch of the commands for steps 2 and 4 follows below).
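Roughly, steps 2 and 4 looked like this on my side (the /dev/vdb device, the host-name placeholders and the NFS export options are from my test lab, adjust them to your environment):

# step 2, on the KVM node: create the VG matching the system datastore ID (112)
sudo pvcreate /dev/vdb
sudo vgcreate vg-one-112 /dev/vdb

# step 4, on the front-end: export the image datastore over NFS
# /etc/exports
/var/lib/one/datastores/113  <kvm-node>(rw,sync,no_subtree_check,no_root_squash)
sudo exportfs -ra

# step 4, on the KVM node: mount it at the same path
sudo mount -t nfs <front-end>:/var/lib/one/datastores/113 /var/lib/one/datastores/113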

And it is working.
Even though it works, I copied the datastore configuration from a working production environment and I don't understand all of the attributes. Creating the datastores from Sunstone failed, and I had to add the extra attributes from the production environment to the new ones to make them work.
Can you point me to the documentation for this kind of configuration?

The docs are here: SAN Datastore — OpenNebula 6.2.0 documentation

The most important options are, for Image DS:

DS_MAD         : fs                  
TM_MAD         : fs_lvm 

and for System DS:

TM_MAD         : fs_lvm
BRIDGE_LIST
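For reference, a minimal pair of templates with just those options could be registered from the CLI like this (the datastore names and bridge hosts are only placeholders):

# image_ds.txt
NAME   = lvm_images
TYPE   = IMAGE_DS
DS_MAD = fs
TM_MAD = fs_lvm

# system_ds.txt
NAME   = lvm_system
TYPE   = SYSTEM_DS
TM_MAD = fs_lvm
BRIDGE_LIST = "node1 node2"

# then register them
onedatastore create image_ds.txt
onedatastore create system_ds.txt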

Hi,

I'm using LVM with SAN storage and 2 hosts, but I can't create a snapshot of a VM on this type of datastore.

This is the error I get when I try to create one:
Fri Dec 10 09:18:51 2021: SNAPSHOTCREATE: error: unsupported configuration: internal snapshot for disk vda unsupported for storage type raw Could not create snapshot for domain 3af12166-4883-4899-bc25-7c9d027141ca.
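For reference, the driver type libvirt actually uses for that disk can be checked with something like this (domain ID taken from the error above):

# list the domain's disks and check the driver type in the libvirt XML
virsh domblklist 3af12166-4883-4899-bc25-7c9d027141ca
virsh dumpxml 3af12166-4883-4899-bc25-7c9d027141ca | grep "driver name"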

Here is the configuration of my SAN datastore (SYSTEM):

root@CYLN-OPN-FRONT01:~# onedatastore show 103
DATASTORE 103 INFORMATION
ID : 103
NAME : CYLN-KVM-LVM-01
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 100
TYPE : SYSTEM
DS_MAD : -
TM_MAD : fs_lvm_ssh
BASE PATH : /var/lib/one//datastores/103
DISK_TYPE : FILE
STATE : READY

DATASTORE CAPACITY
TOTAL: : 200G
FREE: : 170G
USED: : 30G
LIMIT: : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
BRIDGE_LIST="cyln-kvm-01 cyln-kvm-02"
DISK_TYPE="FILE"
DS_MIGRATE="YES"
FORMAT="qcow2"
SHARED="YES"
TM_MAD="fs_lvm_ssh"
TYPE="SYSTEM_DS"

I don't understand why, when I create a virtual machine, the qcow2 disk stored in my image datastore ends up as a raw disk on the VM…

<DISK>
  <ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS>
  <CLONE><![CDATA[YES]]></CLONE>
  <CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET>
  <CLUSTER_ID><![CDATA[100]]></CLUSTER_ID>
  <DATASTORE><![CDATA[CYLN-KVM-REPO-NFS]]></DATASTORE>
  <DATASTORE_ID><![CDATA[104]]></DATASTORE_ID>
  <DEV_PREFIX><![CDATA[vd]]></DEV_PREFIX>
  <DISK_ID><![CDATA[0]]></DISK_ID>
  <DISK_SNAPSHOT_TOTAL_SIZE><![CDATA[0]]></DISK_SNAPSHOT_TOTAL_SIZE>
  <DISK_TYPE><![CDATA[BLOCK]]></DISK_TYPE>
  <DRIVER><![CDATA[raw]]></DRIVER>
  <FORMAT><![CDATA[raw]]></FORMAT>
  <IMAGE><![CDATA[W2K19-STD-US_NP_QEMU_VIRTIO]]></IMAGE>
  <IMAGE_ID><![CDATA[17]]></IMAGE_ID>
  <IMAGE_STATE><![CDATA[2]]></IMAGE_STATE>
  <LN_TARGET><![CDATA[SYSTEM]]></LN_TARGET>
  <ORIGINAL_SIZE><![CDATA[30720]]></ORIGINAL_SIZE>
  <READONLY><![CDATA[NO]]></READONLY>
  <SAVE><![CDATA[NO]]></SAVE>
  <SIZE><![CDATA[30720]]></SIZE>
  <SOURCE><![CDATA[/var/lib/one//datastores/104/54e4a8296c37ea47a608e50f3d466487]]></SOURCE>
  <TARGET><![CDATA[vda]]></TARGET>
  <TM_MAD><![CDATA[fs_lvm_ssh]]></TM_MAD>
  <TYPE><![CDATA[BLOCK]]></TYPE>
</DISK>

I think it's impossible to take a snapshot of a raw disk… but I would like to know if it's possible to change the type of my SAN datastore so it stores qcow2 disks?

Kind regards

Please note that when using LVM, your images (which are stored as files in the image DS) are converted to an LV on the system DS to be consumed by the VM that way [1]. As you're using LVs, only the raw format is supported.

[1] SAN Datastore — OpenNebula 6.2.0 documentation
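As a quick sanity check on the node, you can see that the disk the VM consumes is a plain LV in raw format, which is why libvirt's internal snapshots (they need qcow2) are not available. Something along these lines, assuming the usual vg-one-<system_ds_id> naming (103 in your case) and with the LV name as a placeholder:

# the VM disk is an LV in the system datastore VG, consumed as a raw device
sudo lvs vg-one-103
sudo qemu-img info /dev/vg-one-103/<lv-name>   # should report "file format: raw"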

Hi conzales,
Thanks for your feedback,

So there is no volume-formatting solution on the SAN side like the one offered today by OLVM / RHEV, or VMFS on VMware, or CSVFS on Hyper-V.
Don't you have any customers who use an FC / SAN solution and need snapshots of their virtual machines on their KVM / OpenNebula infrastructure?

I have the impression that my only option is to move to NFS to get at least the same functionality as on VMware or RHEV, but our company mainly uses SAN solutions, with very large investments in high-performance arrays.

Hope to get feedback from the community :slight_smile:

Hi @David_Martins,

At the moment there's no way of achieving this using LVM, though we have a ticket to implement it. You can find the info here: Implement LVM snapshot operations · Issue #5109 · OpenNebula/one · GitHub

Okay, thanks for your feedback. So for us the OpenNebula solution isn't workable today with KVM :confused:

Kind regards