Multiple CEPH datastores

I would like to have multiple CEPH-based datastores. What can and what cannot be shared between them?

Can they be on the same CEPH cluster? (I suppose yes)
Can they share the same CEPH pool? (probably not, as images are named “one-<image ID>” starting from zero - can datastores define their own prefix instead of “one-”?)
Can they share the same CEPH user/secret?
Can they share the same libvirt secret UUID?
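
What I have in mind is roughly the following (just a sketch, not a tested template - the names, mon hosts and the secret UUID are placeholders):

# DS 1 - HDD pool
NAME        = "ceph_hdd"
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one_hdd
CEPH_HOST   = "mon1 mon2 mon3"
CEPH_USER   = libvirt
CEPH_SECRET = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# DS 2 - SSD pool: same cluster, same user/secret, different pool
NAME        = "ceph_ssd"
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one_ssd
CEPH_HOST   = "mon1 mon2 mon3"
CEPH_USER   = libvirt
CEPH_SECRET = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"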

Thanks!

@Yenya, did you manage to enable multiple ceph-based DSes?

I am trying to add a second ceph-based datastore to our OpenNebula 5.12.0.4 installation.
I am using the same list of hosts in the BRIDGE_LIST attribute (there are 3 Raft HA ONE Front-end nodes), the same CEPH_SECRET and the same CEPH_USER. The CEPH_HOST attribute of each of the two ceph-based DSes has a different value.
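
To make the difference explicit, a simplified view of the relevant attributes of the two DSes (values as in the ceph.conf below):

DS #135 (HDD):
  CEPH_USER = "cloud"
  POOL_NAME = "cloud"
  CEPH_HOST = "192.168.220.121 192.168.220.122 192.168.220.123"

DS #136 (SSD):
  CEPH_USER = "cloud"
  POOL_NAME = "cloudssd"
  CEPH_HOST = "192.168.0.18 192.168.0.19 192.168.0.20"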
The file /etc/ceph/ceph.conf on all 3 Raft HA FNs contains:

[global]
STAGING_DIR=/var/tmp
RBD_FORMAT=2

[mon.a]
mon addr = 192.168.220.121:6789 192.168.220.122:6789 192.168.220.123:6789
POOL_NAME=cloud

[mon.b]
mon addr = 192.168.0.18:6789 192.168.0.19:6789 192.168.0.20:6789
POOL_NAME=cloudssd

The hosts listed in the [mon.a] section are for the HDD ceph-based DS (#135); the hosts listed in the [mon.b] section are for the SSD ceph-based DS (#136).

The ‘onedatastore list’ command sometimes shows the real capacity of DS #135 (and at that moment the capacity of DS #136 is 0), and sometimes it is the other way around.

When I try to create an empty block device in DS #136 (at a moment when it shows a non-zero capacity), I get an error like the one below:

Thu Dec 23 15:38:31 2021 [Z0][ImM][I]: rbd --id cloud create --image-format 2 cloudssd/one-3674 --size 1048576 || exit $?" failed: rbd: error opening pool 'cloudssd': (2) No such file or directory

It seems to me that OpenNebula cannot operate on two ceph-based DSes and randomly picks which DS’s settings to use for each of them. I guess that to work properly, each ceph command would have to be given the list of ceph mon hosts explicitly, e.g.

rbd --id cloud create --image-format 2 cloudssd/one-3674 --size 1048576 -m 192.168.0.18:6789 -m 192.168.0.19:6789 -m 192.168.0.20:6789

So I wonder whether anyone has managed to configure several ceph-based DSes in OpenNebula? If so, I would appreciate a recipe.

@knawnd: so you want to use not just two Ceph pools, but two different Ceph clusters. I don’t believe this is supported in ONe, and even in Ceph itself, using more than one cluster in ceph.conf is apparently deprecated from v16 on.

Do you have any special reason to use two clusters instead of one cluster with two pools (and one set of mons for both pools)?

Using more pools from one Ceph cluster should work (although I did not test it myself).
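
On the Ceph side, adding a second pool for your existing user should be something like this (a sketch only, assuming your existing client.cloud user and a new “cloudssd” pool in the same cluster; the PG count is just an example):

# create the pool and initialize it for RBD use
ceph osd pool create cloudssd 128
rbd pool init cloudssd
# re-set the caps of the existing user to cover both pools
ceph auth caps client.cloud mon 'profile rbd' osd 'profile rbd pool=cloud, profile rbd pool=cloudssd'

The second ONe datastore would then differ from the first one only in POOL_NAME, with the same CEPH_HOST, CEPH_USER and CEPH_SECRET.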

exactly

It seems to me it shouldn’t be hard to add such a feature: one just needs to pass the ceph mon hosts to the rbd commands, as in my example above.
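
Something along these lines somewhere in the ceph driver scripts (only a sketch, I have not checked where exactly the rbd command line is built):

# hypothetical: take the mon list from the datastore's CEPH_HOST attribute
MONS="192.168.0.18:6789,192.168.0.19:6789,192.168.0.20:6789"
RBD="rbd --id cloud -m $MONS"
$RBD create --image-format 2 cloudssd/one-3674 --size 1048576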

I was not aware of that. I couldn’t find such info in the ceph docs. I would appreciate a link to that “feature” in the ceph docs.

We initially had HDD-based ceph storage running an older ceph version, but now we would like to add an SSD-based one running the newest ceph version.

@knawnd: I meant this:
https://docs.ceph.com/en/pacific/rados/configuration/common/#running-multiple-clusters-deprecated
But reading it more carefully, it seems that only running the server-side parts of multiple Ceph clusters on the same hardware is deprecated, not using multiple clusters from the same client.
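If that is the case, the client should presumably be able to address the second cluster via a different cluster name or a separate config file, something like this (an untested sketch; the “ssdceph” name and file paths are just examples):

# with /etc/ceph/ssdceph.conf and /etc/ceph/ssdceph.client.cloud.keyring in place:
rbd --cluster ssdceph --id cloud ls cloudssd
# or pointing to the config file directly:
rbd -c /etc/ceph/ssdceph.conf --id cloud ls cloudssd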

As for me, I have been upgrading my Ceph cluster from the initial setup with Ceph 8 (IIRC) and CentOS 7.1 through several generations of hardware, up to the present setup of Ceph 15 on CentOS 8 Stream. Ceph is really well upgradable in-place.