Raw Device, KVM VMs without LVM?

Is it possible in OpenNebula to create KVM VMs on raw iSCSI devices without using LVM at the VM host level?

I read the Raw Device Mapping (RDM) Datastore page in the OpenNebula 6.2.2 documentation, but it is still not clear whether LVM would still be layered on at VM creation.

Here is my use case:
We have existing QEMU/KVM VMs that are running on raw LUNs presented to the Hosts via iSCSI.
Each iSCSI LUN:

  • is available across all Hosts in the cluster (VMs are able to migrate among the hosts)
  • is dedicated to one VM
  • is presented to the VM in its raw form

If you were to look at the 'virsh dumpxml' output of any VM, the disk definition would look something like:

  <emulator>/usr/libexec/qemu-kvm</emulator>
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/disk/by-id/scsi-3xxasomestring here01'/>
    <target dev='vda' bus='virtio'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </disk>

Can OpenNebula support this use case?

I am researching Virtualization/Private Cloud management tools before narrowing the list of tools to evaluate. The tool must be able to support our existing environments. This is the one criterion that I have not been able to verify from my reading of the OpenNebula docs thus far.

Thanks in advance for your input.

The RDM datastore simply passes the device path to the hypervisor. The <source dev='/dev/disk/by-id/scsi-3xxasomestring here01'/> value is gathered from the PATH variable in the image template.
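For reference, a minimal sketch of an image template for such a device, assuming a dev/RDM image datastore already registered as rdm_ds (the datastore name, file name and device path below are hypothetical):

# vm01-os.tmpl -- raw block device already visible on every Host
NAME       = "vm01-os"
PATH       = "/dev/disk/by-id/scsi-3xxasomestring"
PERSISTENT = "YES"

oneimage create -d rdm_ds vm01-os.tmpl

The PATH value is the device path that is ultimately handed to the hypervisor, as noted above.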

Thank you - We will move forward with the eval

Per the docs: “The RDM Datastore is an Image Datastore that enables raw access to block devices on Nodes. This Datastore enables fast VM deployments due to a non-existent transfer operation from the Image Datastore to the System Datastore.”

Given the above, if ALL images for the OSes and their data will be RDMs, do we still need a SYSTEM datastore? If yes, how will it be used?

Yes, you still need a system datastore as noted here.
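OpenNebula ships with a default system datastore, so in many setups nothing extra is needed; if you do want a dedicated one for these Hosts, a bare-bones definition is roughly the following (TM_MAD = ssh is just one option, shared/NFS drivers work too; names are hypothetical):

# system_ds.tmpl
NAME   = rdm_system
TYPE   = SYSTEM_DS
TM_MAD = ssh

onedatastore create system_ds.tmpl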

Thanks Daniel

I went ahead and tried it and see that it’s used to store the config files and a symlink to the iSCSI LUN used for the VM’s OS.

-rw-rw-r--. 1 oneadmin oneadmin 1893 Sep 27 09:14 deployment.0
lrwxrwxrwx. 1 oneadmin oneadmin 54 Sep 27 09:14 disk.0 -> /dev/disk/by-id/scsi-361c5a0b06b5a616b0000629e2aa70001
-rw-r--r--. 1 oneadmin oneadmin 372736 Sep 27 09:14 disk.1
-rw-rw-r--. 1 oneadmin oneadmin 1049 Sep 27 09:14 ds.xml
-rw-rw-r--. 1 oneadmin oneadmin 5394 Sep 27 09:14 vm.xml

where
deployment.0 is the config file for libvirt (as would usually be found in /etc/libvirt/qemu/<vm_name>.xml)
disk.0 is a symlink to the image/LUN in the RDM datastore assigned to the VM (raw device/OS disk)
disk.1 is an ISO 9660 CD-ROM image containing a single script, context.sh, that sets variable values such as ETH0_MAC, TARGET='hda' etc. (see the snippet after this list)
ds.xml contains the config for the system datastore - onedatastore show <ds_name>
vm.xml contains the vm config - onevm show <vm_id>
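If you want to confirm what is on that ISO yourself, a quick read-only loop mount is enough (the mount point is arbitrary):

mount -o loop,ro disk.1 /mnt
cat /mnt/context.sh
umount /mnt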

The ISO is the contextualization image used to configure the guest OS. The variables set in the Context section of the VM Template are copied to the context.sh file and are then used by a daemon that runs in the guest OS, provided it has the contextualization package installed.
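For example, a minimal CONTEXT section in the VM Template might look like the following (the values are placeholders); each attribute shows up as a variable in context.sh on the ISO:

CONTEXT = [
  NETWORK        = "YES",
  SET_HOSTNAME   = "$NAME",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]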