CephFS mountpoint for system datastore?

As VM snapshots are not possible with SAN or Ceph RBD, I am wondering how bad an idea it is to back the VMs with qcow2 on CephFS. Any words of warning?

reference: Overview — OpenNebula 6.8.3 documentation

Of course it can be done; just remember to configure the required datastores, setting TM_MAD to qcow2 or shared and DS_MAD to fs. But… I would not recommend it because of the performance penalty.
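For reference, a minimal sketch of what those datastore templates could look like (the names and the CephFS mount path are hypothetical, just for illustration):

```
# Image datastore backed by CephFS (hypothetical name, sketch only)
NAME   = cephfs_images
DS_MAD = fs
TM_MAD = qcow2

# System datastore backed by CephFS
NAME   = cephfs_system
TYPE   = SYSTEM_DS
TM_MAD = qcow2
```

This assumes the corresponding /var/lib/one/datastores/<datastore_id> directories sit on the CephFS mount on the hosts; you would create each datastore with `onedatastore create <template_file>`.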

RBD is intended for block devices and works with them really well. It outperforms CephFS because it mostly just stores chunks of the VM image and does not have to worry about filesystem metadata (which kills a lot of performance). It depends on your installation, but I have personally seen performance penalties of around 30% when using CephFS instead of RBD, and even more in some scenarios.

I’ve even seen RBDs with an OCFS2 filesystem on top of them to allow concurrent access (some people said the performance was better than CephFS :flushed: )