I am using Gluster as the back-end storage for my image and system datastores, which consists of nine RHEL 7 servers in a replica 3 configuration. Our goal is to take out 3 of the Gluster servers, rebuild them on RHEL 8 with CEPH, and then migrate the data, VMs, etc. from the Gluster datastores to the new CEPH datastores.
First off, I am wondering if this is even possible. And if it is, what is the best course of action (e.g., rebuild the servers, create new datastores attached as CEPH, then use something like ‘rsync’ to sync the old Gluster datastore to the new CEPH datastore)?
For an existing VM, you can import each of its disk images into the CEPH datastore and modify the VM template to point to the newly imported disks. Another option is to clone the existing VM template without cloning the disks, and then modify the cloned template to point to the new disk images imported into CEPH.
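A minimal sketch of both options with the OpenNebula CLI, assuming a CEPH image datastore with ID 100; the template ID, image name, and source path are made up for illustration:

```bash
# Register an existing disk file as a new image in the CEPH image
# datastore; OpenNebula copies it into the CEPH pool on registration.
# Datastore ID 100, the image name, and the source path are hypothetical.
oneimage create --datastore 100 \
    --name "vm42-disk0-ceph" \
    --path /var/lib/one/datastores/1/a1b2c3d4 \
    --type OS

# Clone the template without cloning its disks (the default behaviour;
# --recursive would clone the images as well)...
onetemplate clone 42 "vm42-ceph"

# ...then edit the clone so its DISK section references the imported
# CEPH image instead of the GlusterFS-backed one.
onetemplate update <new_template_id>
```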
Is it possible to sync the Gluster filesystem with the Ceph filesystem and then just re-link the datastore directories?
i.e., I stop all OpenNebula services, disable the hosts, and power off all VMs. I then sync Gluster to Ceph. I remove the soft-links from Gluster to /var/lib/one/datastores/{x…z} and re-create the soft-links pointing at Ceph. With all of the same data in place, is this technically doable?
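For concreteness, the steps I have in mind look something like this (datastore ID 101, mount points, and the VM/host IDs are made up):

```bash
# Quiesce the cloud while oned is still running (IDs are placeholders)
onevm poweroff <vm_ids>
onehost disable <host_ids>

# Stop the OpenNebula services on the front-end
systemctl stop opennebula opennebula-scheduler

# Sync one datastore's contents from the Gluster mount to the Ceph mount
rsync -aHAX --progress /mnt/gluster/101/ /mnt/cephfs/101/

# Re-point the datastore soft-link at the Ceph copy
rm /var/lib/one/datastores/101
ln -s /mnt/cephfs/101 /var/lib/one/datastores/101
```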
We have about 70 VMs and 30 images, so I’m not sure the method you described would make for a quick or easy turnaround.
Unfortunately, the sync process is not simple, as the GlusterFS and CEPH internals are very different. VMs with CEPH disks do not access files in a POSIX filesystem location (as they do with GlusterFS); instead, they use the libRADOS CEPH library to access the image objects directly in block mode.
Part of the process can be automated by scripting the registration of the VM disks in CEPH and creating customized templates based on the source VMs running in the GlusterFS datastore.
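A rough sketch of such a script, assuming the VM is already powered off, a CEPH image datastore with ID 100, and the default system-datastore layout (all IDs, paths, and naming here are assumptions, not a tested procedure):

```bash
#!/bin/bash
# Rough sketch: import every disk of a powered-off VM into a CEPH image
# datastore. Datastore ID 100, the system-datastore path, and the
# naming scheme are assumptions for illustration.
CEPH_DS=100
VMID="$1"
SYS_DS=/var/lib/one/datastores/0   # default system datastore path

for disk in "$SYS_DS/$VMID"/disk.*; do
    n=$(basename "$disk")          # e.g. disk.0
    oneimage create --datastore "$CEPH_DS" \
        --name "vm${VMID}-${n}-ceph" \
        --path "$disk" \
        --type OS
done
```

The image IDs reported by `oneimage` can then be referenced from the customized clone templates, as in the earlier example.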
What if both filesystems are mounted via NFS and synced from one NFS mount (the Gluster one) to the other NFS mount (the Ceph one)? They should both appear to be the same, would they not?
Unfortunately, using CEPHfs as an NFS mountpoint to serve disk image files for VMs is not recommended, as it carries a huge performance penalty on VM block I/O operations.
As we stated, disks should be imported one by one (scripted or manually) into the CEPH pool in order to migrate from GlusterFS to CEPH RBD. Using CEPHfs (via an NFS client) to store VM disks is not recommended.