I would like to have multiple CEPH-based datastores. What can and what cannot be shared between them?
Can they be on the same CEPH cluster? (I suppose yes)
Can they share the same CEPH pool? (probably not, as images are named “one-<IMAGE_ID>” with IDs starting from zero - or can datastores define their own prefix instead of “one-”?)
Can they share the same CEPH user/secret?
Can they share the same libvirt secret UUID?
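To make the question concrete: I imagine the two datastore templates would differ only in NAME and POOL_NAME and share everything else, roughly like this (all values below are made up for illustration, not from a real setup):

```
# ceph_hdd.ds - first image datastore (example values only)
NAME        = "ceph_hdd"
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one_hdd
CEPH_HOST   = "mon1:6789 mon2:6789 mon3:6789"
CEPH_USER   = libvirt
CEPH_SECRET = "a1b2c3d4-placeholder-uuid"
BRIDGE_LIST = "fe1 fe2 fe3"

# ceph_ssd.ds - second image datastore on the same cluster,
# same user/secret/mons, only the name and the pool differ
NAME        = "ceph_ssd"
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one_ssd
CEPH_HOST   = "mon1:6789 mon2:6789 mon3:6789"
CEPH_USER   = libvirt
CEPH_SECRET = "a1b2c3d4-placeholder-uuid"
BRIDGE_LIST = "fe1 fe2 fe3"
```

and then `onedatastore create ceph_hdd.ds` and `onedatastore create ceph_ssd.ds`. Is this the intended way to do it?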
@Yenya, did you manage to enable multiple Ceph-based DSes?
I am trying to add a second Ceph-based datastore to our OpenNebula 5.12.0.4 installation.
Both datastores use the same list of hosts in the BRIDGE_LIST attribute (the 3 Raft HA ONE front-end nodes) and the same CEPH_SECRET and CEPH_USER. Only the CEPH_HOST attribute differs between the two Ceph-based DSes.
The file /etc/ceph/ceph.conf on all 3 Raft HA FNs lists the monitors of both setups.
The hosts listed in the [mon.a] section are for the HDD Ceph-based DS (#135); the hosts listed in the [mon.b] section are for the SSD Ceph-based DS (#136).
The ‘onedatastore list’ command sometimes shows the real capacity of DS #135 (and at that moment the capacity of DS #136 is 0), and sometimes it is the other way around.
When I try to create an empty block device in DS #136 (at a moment when it shows non-zero capacity), I get the error below:
Thu Dec 23 15:38:31 2021 [Z0][ImM][I]: rbd --id cloud create --image-format 2 cloudssd/one-3674 --size 1048576 || exit $?" failed: rbd: error opening pool 'cloudssd': (2) No such file or directory
It seems to me that OpenNebula cannot operate on two Ceph-based DSes and randomly picks one DS’s settings for both of them. I guess that to work properly, each ceph/rbd command would have to be given the list of mon hosts of the right cluster explicitly, e.g. as in the sketch below.
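Something along these lines (the monitor addresses are made up; `cloud` and `cloudssd` are the user and pool from the log above):

```
# tell rbd explicitly which monitors to contact instead of relying on ceph.conf
rbd --id cloud -m 10.0.1.1:6789,10.0.1.2:6789,10.0.1.3:6789 \
    create --image-format 2 cloudssd/one-3674 --size 1048576
```

i.e. the commands generated by the Ceph drivers would have to target the monitors of the DS they operate on, not whatever ceph.conf happens to point to.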
@knawnd: so you want to use not just two Ceph pools, but two different Ceph clusters. I don’t believe this is supported in ONe, and even in Ceph itself, using more than one cluster in ceph.conf is apparently deprecated from v16 on.
Do you have any special reason to use two clusters instead of one cluster with two pools (and one set of mons for both pools)?
Using more pools from one Ceph cluster should work (although I did not test it myself).
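If you go that way, the Ceph side would be roughly the following (I am assuming here that your user is client.cloud as in your log, that your existing pool is the default one, and the PG count and CRUSH rule name are just examples):

```
# create and initialize a second RBD pool on the same cluster
ceph osd pool create cloudssd 128
rbd pool init cloudssd

# optionally pin the new pool to SSD OSDs via a device-class CRUSH rule
ceph osd crush rule create-replicated ssd_rule default host ssd
ceph osd pool set cloudssd crush_rule ssd_rule

# check the current caps first, then extend them to cover both pools
# (ceph auth caps replaces the caps, so list every pool the user needs)
ceph auth get client.cloud
ceph auth caps client.cloud \
    mon 'profile rbd' \
    osd 'profile rbd pool=one, profile rbd pool=cloudssd'
```

The second ONe datastore would then use the same CEPH_HOST, CEPH_USER and CEPH_SECRET as the first one, just with POOL_NAME set to the new pool.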
As for me, I have been upgrading my Ceph cluster from the initial setup with Ceph 8 (IIRC) and CentOS 7.1, through several generations of hardware, up to the present setup of Ceph 15 on CentOS 8 Stream. Ceph handles in-place upgrades really well.