Using datastores with ZFS

Hi,

While integrating my diskless Xen infrastructure with OpenNebula, I am now trying to use the datastore storage and have run into some problems.

I can monitor my Xen hypervisor and fully create my VM on it. When I create the VM from my template, the deploy script does the following:

  • create a new ZFS dataset on my storage server
  • set a ZFS quota from the template’s ‘User Input’ attributes
  • rsync the master rootfs from the storage server into the newly created dataset
  • add an NFS export directive so the VM can access the rootfs

All this works well with the Xen deploy script.
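Roughly, the deploy script does something like this (a simplified sketch; ‘tank’, ‘storage-server’ and the network address are placeholders, not my real values):

```bash
#!/bin/bash
# Simplified sketch of my deploy steps; tank/vms, storage-server and
# the network in the NFS export line are placeholders.
VM_NAME="$1"
QUOTA="$2"   # taken from the template's 'User Input' attribute

ssh root@storage-server /bin/sh <<EOF
# 1. create a new dataset for the VM
zfs create tank/vms/${VM_NAME}
# 2. apply the quota from the template
zfs set quota=${QUOTA} tank/vms/${VM_NAME}
# 3. copy the master rootfs into the new dataset
rsync -a /tank/masters/jessie/ /tank/vms/${VM_NAME}/
# 4. export the dataset over NFS so the VM can use it as nfsroot
zfs set sharenfs='rw=@192.168.0.0/24,no_root_squash' tank/vms/${VM_NAME}
EOF
```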

So I thought I could do this:

When I create a new VM, OpenNebula creates a new dataset in my system datastore, sets the quota and NFS settings, rsyncs the master I chose from the images datastore into the newly created dataset in the system datastore, and optionally adds the correct kernel/initramfs to my OS boot line => launches my new VM.
Goal: manage users’ disk space quotas from the frontend.

So I created 3 datasets on my ZFS pool:

  • For my “master” VM
  • For my VMs
  • For my custom kernels and initramfs images

I installed the ZFS add-on in OpenNebula.
I created the image and system datastores; that seems OK, capacity is reported.

Then I tried to create a new image, “master-jessie”, and here I am stuck!
My “image” is a Jessie rootfs, so I chose the ‘generic storage datablock’ type and gave an NFS path… I am not really sure about the options. For the path I tried different things, and it always returns “Cannot parse image SIZE:”.

Could someone explain the path attribute, and how the datastores work?
Do you think this is possible, even partially? :slight_smile: I would appreciate some clues :blush:

Sorry if this is rough, but it is just as rough in my head :stuck_out_tongue:

Thanks,
Mura

Hi,

I do not have much experience with ZFS, but generally, for importing images, you should take a look at the datastore/zfs/stat and datastore/zfs/cp scripts. It looks like the stat script is failing to detect the size of the source image. If I understand correctly, your source for the filesystem is a ZFS device, not a file. In that case you must rework the stat script to report the size of the root ZFS device instead of a plain file, and then alter the cp script to use the device instead of a file as well.
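For example, the size-reporting part of stat could look something like this (only a sketch; the host and dataset names are placeholders, and you should check the actual arguments your stat script receives):

```bash
# Hypothetical fragment for datastore/zfs/stat: report the space
# referenced by the source dataset instead of the size of a plain file.
# 'storage-server' and 'tank/masters/jessie' are placeholders.
SRC_DS="tank/masters/jessie"
SIZE_BYTES=$(ssh root@storage-server zfs get -Hp -o value referenced "${SRC_DS}")

# OpenNebula expects the image size in MB on stdout
echo $(( SIZE_BYTES / 1024 / 1024 ))
```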

More details regarding the storage driver: http://docs.opennebula.org/5.0/integration/infrastructure_integration/sd.html

This is an interesting topic, but I think it should be moved to the Development category.

Kind Regards,
Anton Todorov

Thanks for the answer Anton,

I feel lonely using this configuration on OpenNebula! :slight_smile:
I tried to look into the scripts; making some modifications seems to be the right way, but I did not have much time to make progress last week.

I will come back soon with more questions about the datastores…

Tell me if I should move the topic to the Development category.

Mura

Hi,

Does somebody know how the monitoring of the system datastore works?
I created my first VM in the ZFS system datastore, so the used size was updated, but since then it has never changed. I do not understand when and where (on the ONE frontend or on the host server) the monitoring script is executed.

Thanks,
Mura

Hi, did you ever take a look at the ZFS OpenNebula add-on by @kvaps?
EDIT: never mind, I see you have used that exact plugin, sorry :slight_smile:

EDIT2: the ZFS add-on in its current form (as far as I understood from reading the docs) is able to create a block device on a ZFS volume and control it from ONE using the SSH transfer manager.
The system datastore just holds symlinks from the default/files datastore, so ONE can tag them as “in use” or “locked”, etc.

If I understand your use case correctly, you want to start a VM and manage ZFS volumes from ONE, right? Wouldn’t it be much easier to start the VM from a block device on a ZFS volume (which the ZFS add-on currently handles for you), and use OpenNebula contextualization to run a script that creates a ZFS volume and sets its quota from the contextualization info (the user input)?

If you want to be able to create a zvol (assuming that’s what you mean when you say “dataset”) and set a quota on that zvol from ONE, you would have to change this script: https://github.com/OpenNebula/addon-zfs/blob/master/datastore/zfs/mkfs
so that it logs in to the ZFS host with SSH and runs “zfs create <name>” and “zfs set …” (and uses any added “user input” to pass a value from the template to a ZFS volume on the remote host); see the sketch below.
Then, to be able to monitor it, you would have to change these scripts: https://github.com/OpenNebula/addon-zfs/tree/master/tm/zfs (a lot of work…)
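Something along these lines, for example (only a sketch; ZFS_HOST, ZFS_POOL and the QUOTA variable are my assumptions, not part of the current add-on):

```bash
# Hypothetical fragment for datastore/zfs/mkfs: create the dataset on a
# remote ZFS server over SSH and apply a quota taken from the template.
ZFS_HOST="storage-server"   # assumption: the remote ZFS box
ZFS_POOL="tank/vms"         # assumption: the parent dataset
DS_NAME="one-${ID}"         # ${ID} would be derived from the image ID
QUOTA="${QUOTA:-10G}"       # would come from the template "user input"

ssh "root@${ZFS_HOST}" \
    "zfs create ${ZFS_POOL}/${DS_NAME} && zfs set quota=${QUOTA} ${ZFS_POOL}/${DS_NAME}"

# the mkfs driver must print the image source on stdout
echo "${ZFS_HOST}:${ZFS_POOL}/${DS_NAME}"
```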

So, hoping I understand you correctly: the ZFS add-on currently creates block devices on a ZFS volume, while you want to create ZFS datasets (as you call them) from the ONE host on a remote ZFS server, right?

Hi Murasakiiru, you need to know:
ZFS is not a cluster filesystem and it will not work in a multiple-host configuration.

Also, you should know that the custom system datastore feature has only been available since OpenNebula 5.0, and it is currently not implemented in my zfs-addon.

If anyone wants to develop it, they need to write 2 new files:
/vmm/kvm/restore.zfs and /vmm/kvm/save.zfs
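As a very rough, untested skeleton (the argument order follows the standard vmm save script; check it against your version):

```bash
#!/bin/bash
# Hypothetical skeleton for /vmm/kvm/save.zfs; restore.zfs would be the
# mirror operation using 'virsh restore'. Untested sketch.
DEPLOY_ID="$1"   # libvirt domain name
FILE="$2"        # checkpoint path on the zfs-backed system datastore

# save the domain memory state onto the system datastore
virsh --connect qemu:///system save "${DEPLOY_ID}" "${FILE}"
```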

In your case, with several hosts, or if the OpenNebula service runs separately from the host, I would suggest using a different driver:

  • iscsi:
    if you want to have the ZFS storage separate from the hosts. You need yagnasrinath’s addon-iscsi fork with ZFS support, or the same fork with my enhancements (I have not tested this).

  • ceph:
    if you have several hosts and want something like ZFS, but more clustered and scalable.

  • lvm:
    you may configure CLVM (LVM with the cluster extension) and share it between your hosts (this may be quite difficult).

If I understand your use case correctly, you want to start a VM and manage ZFS volumes from ONE, right ?

Yes, globally, I want to do as many operations as possible via OpenNebula (create a new VM, create a new template…) and report as much information as possible to OpenNebula (how much CPU/MEM/DISK the users use…) with my Xen/ZFS/diskless system.

Wouldn’t it be much easier to start the VM from a block device on a ZFS volume (which the ZFS add-on currently handles for you), and use OpenNebula contextualization to run a script that creates a ZFS volume and sets its quota from the contextualization info (the user input)?

To launch my VMs diskless, I cannot use a block device (so that does not seem to be the main use case of the ZFS driver from @kvaps). Today I am able to launch a VM completely with my deploy script (create the dataset, rsync my filesystem, set the quota and launch), but I am unable to manage my datastores completely via OpenNebula (the subject of this topic).

If you want to be able to create a zvol (assuming that’s what you mean when you say “dataset”)

For me, a dataset is not a zvol: a zvol is a block device, and a dataset is… what I need for diskless virtual machines (a ZFS filesystem that I share over NFS)… I don’t know how else to name it :stuck_out_tongue:

ZFS is not a cluster filesystem and it will not work in a multiple-host configuration.

Yes, I have read that several times on this forum, but today I cannot change this, unfortunately, and it seems to work pretty well (several Xen hosts and 1 ZFS storage server with NFS sharing for the ‘nfsroot’). Do you know how problematic this configuration can be? Or do you have some reading references?

Also, you should know that the custom system datastore feature has only been available since OpenNebula 5.0, and it is currently not implemented in my zfs-addon.
If anyone wants to develop it, they need to write 2 new files:
/vmm/kvm/restore.zfs and /vmm/kvm/save.zfs

So I cannot use your ZFS driver out of the box for a system datastore right now, even with a block device?
Maybe that explains why the information the system datastore reports is strange :slight_smile: : at the beginning the datastore showed no information (I read that this is normal); when I deployed my first VM, some information was reported, but after that first VM the information never changed.
I forgot to say that I use version 5.

It would be sooo cool if I could have 1 datastore per user with his global quotas, and have this information shown in his dashboard. But it seems that my goal is sooo far away.
If I downgrade to OpenNebula 4, I won’t have access to custom system datastores, right?

Mura.