@Yenya , did you manage to enable multiple Ceph-based DSes?
I am trying to add a second Ceph-based datastore to our OpenNebula installation.
I am using the same list of hosts in the BRIDGE_LIST attribute (the 3 Raft HA ONE front-end nodes), and the same CEPH_SECRET and CEPH_USER. The CEPH_HOST attribute has a different value for each of the two Ceph-based DSes.
The file /etc/ceph/ceph.conf on all 3 Raft HA FNs has:

[mon.a]
mon addr = 192.168.220.121:6789 192.168.220.122:6789 192.168.220.123:6789
[mon.b]
mon addr = 192.168.0.18:6789 192.168.0.19:6789 192.168.0.20:6789

The hosts listed in the [mon.a] section are for the HDD Ceph-based DS (#135); the hosts listed in the [mon.b] section are for the SSD Ceph-based DS (#136).
The `onedatastore list` command sometimes shows the real capacity of DS #135 (and at that moment the capacity of DS #136 is 0), and sometimes it's vice versa.
When I try to create an empty block device in DS #136 (at a moment when it has non-zero capacity), I get the following error:
Thu Dec 23 15:38:31 2021 [Z0][ImM][I]: rbd --id cloud create --image-format 2 cloudssd/one-3674 --size 1048576 || exit $?" failed: rbd: error opening pool 'cloudssd': (2) No such file or directory
It seems to me that OpenNebula can't operate on two Ceph-based DSes and randomly picks the settings for each of them. I guess that to work properly, each ceph command would need an explicit list of ceph mon hosts, e.g.:
rbd --id cloud create --image-format 2 cloudssd/one-3674 --size 1048576 -m 192.168.0.18:6789,192.168.0.19:6789,192.168.0.20:6789
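For what it's worth, here is a sketch of the workaround I have in mind (plain Ceph client behaviour, nothing OpenNebula-specific; the `ssd` cluster name and file paths are illustrative assumptions, not from my setup): split the shared ceph.conf into one file per cluster, so each rbd call resolves only the monitors of the cluster it targets.

```shell
# Illustrative: write the SSD cluster's monitors into their own conf file
# (in a real deployment this would live at /etc/ceph/ssd.conf on each FN).
printf '[global]\nmon host = 192.168.0.18:6789 192.168.0.19:6789 192.168.0.20:6789\n' > ssd.conf

# Ceph CLIs read /etc/ceph/<cluster>.conf when given --cluster <cluster>:
#   rbd --cluster ssd --id cloud ls cloudssd
# or can be pointed at a conf file directly:
#   rbd -c /path/to/ssd.conf --id cloud ls cloudssd
cat ssd.conf
```

That way the per-command `-m` list would not be needed, since each invocation picks up the right `mon host` line from its own file.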
So I wonder: has anyone managed to configure several Ceph-based DSes in OpenNebula? If so, I would appreciate a recipe.