In ONe 4.14, I used a non-shared system datastore with TM_MAD=ssh, and a CEPH-based image datastore. In 5.0, it is apparently possible to use CEPH as a system datastore as well, and to use multiple CEPH datastores over a single CEPH RBD pool. What is the suggested way to migrate an existing ONe installation to a CEPH-based system datastore without losing the running VMs and images?
When I create a new datastore over the same CEPH RBD pool, what should I do so that there is no clash between the file names of the existing image datastore and the new system datastore?
How can I move the VMs to the new system datastore? Is suspend/resume sufficient, or should they be undeployed and then redeployed?
What about the files datastore? My existing FILES_DS is also fs-based with TM_MAD=ssh. Could it also be moved to CEPH?
In ONe 4.14, I used a non-shared system datastore with TM_MAD=ssh, and a CEPH-based image datastore. In 5.0, it is apparently possible to use CEPH as a system datastore as well, and to use multiple CEPH datastores over a single CEPH RBD pool. What is the suggested way to migrate an existing ONe installation to a CEPH-based system datastore without losing the running VMs and images?
Migrating from the fs system datastore (shared or ssh) to the Ceph system datastore is not supported; there is currently no migration path. Note that using an fs system datastore along with Ceph is 100% supported.
When I create a new datastore over the same CEPH RBD pool, what should I do so that there is no clash between the file names of the existing image datastore and the new system datastore?
Because of the naming schema, there will be no clashes. Objects in the system datastore will never have the same name as objects in the image datastore.
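For illustration, a hedged sketch of what listing a shared pool could look like (the IDs, pool name and Ceph user are only examples): images are named one-<IMAGE_ID>, while system-datastore objects such as volatile disks get one-sys-<VMID>-<DISK_ID>, so the two namespaces never overlap:

    rbd --id libvirt ls one
    one-156          # image 156 from the image datastore
    one-157          # image 157 from the image datastore
    one-sys-477-1    # volatile disk 1 of VM 477, from the system datastore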
How can I move the VMs to the new system datastore? Is suspend/resume sufficient, or should they be undeployed and then redeployed?
That is not supported… you can only migrate to another system datastore of the same type, so that won’t work.
What about the files datastore? My existing FILES_DS is also fs-based with TM_MAD=ssh. Could it also be moved to CEPH?
That’s not supported, but keep in mind that we think ssh is the best option for that, so we are not planning to migrate it to Ceph.
As the CEPH driver is based on ssh, if you have no volatile disks and no suspended VMs, why not just replace the TM_MAD on the current system datastore from ssh to ceph?
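A minimal CLI sketch of that swap, assuming the system datastore has ID 0 (the ID is hypothetical) and using the usual Ceph datastore attributes; all values are placeholders for your environment:

    # append the Ceph attributes to the existing system datastore template;
    # TM_MAD alone is not enough - the driver also needs the pool, user,
    # secret and monitor hosts
    cat > ceph_sys.txt <<'EOF'
    TM_MAD      = ceph
    DISK_TYPE   = RBD
    POOL_NAME   = one
    CEPH_USER   = libvirt
    CEPH_HOST   = "ceph-mon1:6789"
    CEPH_SECRET = "uuid-of-the-libvirt-secret"
    BRIDGE_LIST = "node1 node2"
    EOF
    onedatastore update 0 --append ceph_sys.txt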
With volatile disks I think it will be a little tricky: it is best to undeploy the VMs with volatile disks, import the disks manually into CEPH following the naming convention (like "$POOL_NAME/one-sys-$VMID-$DISK_ID"; check your current format in /var/lib/one/remotes/tm/ceph/mkimage), and then change the system datastore TM_MAD to ceph.
For the suspended VMs you should import the VM's checkpoint file into CEPH too.
It is best if you can test the procedure in a testing environment first (create a new "cluster" from a single host and assign new SYSTEM (ssh) and IMAGES (ceph) datastores; the FILES datastore is irrelevant). To be on the safe side, use a different POOL_NAME for CEPH, just in case.
I have no experience with CEPH so I can't help further, but I believe such a "migration" is possible because I did a similar procedure a year ago when I "migrated" a customer from ssh to StorPool (ssh) as the SYSTEM datastore.
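A hedged sketch of how such a test setup could be registered from the CLI (all names, the host, the pool and the Ceph attributes are hypothetical; adjust them to your environment):

    # create an isolated cluster with a single host
    onecluster create migration-test
    onehost create testnode1 -i kvm -v kvm
    onecluster addhost migration-test testnode1

    # ssh-based SYSTEM datastore for the test cluster
    cat > test_sys_ds.txt <<'EOF'
    NAME   = test-system-ssh
    TYPE   = SYSTEM_DS
    TM_MAD = ssh
    EOF
    onedatastore create test_sys_ds.txt

    # Ceph-based IMAGES datastore on a separate pool, as suggested above
    cat > test_img_ds.txt <<'EOF'
    NAME        = test-images-ceph
    TYPE        = IMAGE_DS
    DS_MAD      = ceph
    TM_MAD      = ceph
    DISK_TYPE   = RBD
    POOL_NAME   = one-test
    CEPH_HOST   = "ceph-mon1:6789"
    CEPH_USER   = libvirt
    CEPH_SECRET = "uuid-of-the-libvirt-secret"
    BRIDGE_LIST = "testnode1"
    EOF
    onedatastore create test_img_ds.txt

    # assign both datastores to the test cluster
    onecluster adddatastore migration-test <sys_ds_id>
    onecluster adddatastore migration-test <img_ds_id>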
OK, I will try to create a test cluster with an FS-based system datastore, and try to migrate it to the CEPH-based one under oned's hands ;-).
One question, though: is it possible to have multiple datastores of the same type on top of a single CEPH RBD pool? I have tried this with DS type “image”, and a newly created image was named “one-157”, while the last image in the pre-existing “image” datastore was “one-156”. So I guess I should be safe, as all the image names appear to be taken from a single sequence, and thus are unique.
Am I right, or am I on the verge of losing my data?
OK, I have tested the migration on a test cluster. The workflow is as follows:
1. Undeploy all VMs on the cluster [1].
2. For each volatile disk, run the following command on the cluster node where it is currently stored (a loop version is sketched after this list):
   rbd --id ceph_login --pool ceph_pool import /var/lib/one/datastores/system_datastore_id/vm_id/disk.seq one-sys-vm_id-seq
   In my case, the exact command for one disk was:
   rbd --id libvirt --pool one import /var/lib/one/datastores/108/477/disk.1 one-sys-477-1
3. In Sunstone, edit the parameters of the fs-based system datastore (open Storage -> Datastores -> its ID) and copy the parameters from another CEPH-based datastore which uses the same CEPH pool. I copied the following parameters: BRIDGE_LIST, CEPH_HOST, CEPH_SECRET, CEPH_USER, DISK_TYPE, POOL_NAME, SHARED, TM_MAD.
4. Resume (redeploy) the VMs (I selected them all in Sunstone and then clicked the |> (play) icon).
5. Profit!
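Here is a hedged sketch of step 2 as a loop, using the IDs from my example above (datastore 108, VM 477, pool "one", Ceph user "libvirt"); the disk indexes are placeholders:

    # import only the volatile disks - image-backed disks already live in Ceph,
    # and the context ISO stays file-based (see the note below)
    vmid=477
    for seq in 1; do    # put the volatile disk indexes of the VM here
        rbd --id libvirt --pool one import \
            "/var/lib/one/datastores/108/${vmid}/disk.${seq}" "one-sys-${vmid}-${seq}"
    done

    # verify the imported objects before resuming the VMs
    rbd --id libvirt --pool one ls | grep "^one-sys-${vmid}-"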
I don’t have any VMs with snapshots, etc. Interestingly enough, the context CD image is still stored in the filesystem and not in CEPH (verified on a freshly created cluster with a freshly created CEPH-based system datastore, not a migrated one).
Does anybody see any problem with this approach? So far I have tested it on a test cluster only.
-Yenya
[1] What is the difference between “Undeploy” and “Power off” in ONe 5.0?
According to the sources, the ssh and ceph TM_MADs share the same context script, so that is correct.
On “Undeploy” the image files are “returned” to the image datastore (detached if shared, or copied… depending on the setup). On “Poweroff” the VM is just powered off: all disk files are kept on the hypervisor and all attachments remain intact.
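For reference, both operations (and the way back) are also available from the CLI; VM ID 477 is just the example ID used earlier in this thread:

    # Undeploy: the VM is shut down and its files leave the host
    # (returned to the datastores, as described above)
    onevm undeploy 477

    # Poweroff: the VM is just powered off; disks and attachments
    # stay on the hypervisor
    onevm poweroff 477

    # either way, bring the VM back with
    onevm resume 477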
What are the consequences? I have verified that even non-persistent VMs retain their data during an undeploy/resume cycle, and transient disks are also kept intact.
Well, I think it depends on the storage backend. For example, if you have the ssh TM_MAD and qcow2 for images: if the host disk fails, you lose the data of the powered-off VMs. If they are undeployed, their data is kept in the “datastore” (by default on the front-end). On shared storage, where the disks are always stored remotely, there is not a big difference IMO.