OpenNebula with Ceph integration

We are having an issue with the Ceph integration. I have installed OpenNebula 6.10.0.1 with Ceph version 17.2.9 Quincy (stable). We created an OSD pool from the disks to be added, enabled the rbd application on it, and created a test image.
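
For reference, the pool and test image were created roughly as follows (a sketch: the pool and image names match the outputs below, while the PG count and image size are assumptions):

ceph osd pool create one 128
ceph osd pool application enable one rbd
rbd create one/test-image-01 --size 10G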

root@master:/etc/ceph# ceph osd pool ls
one
root@master:/etc/ceph# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.81859 root default
-3 0.27280 host worker1
0 hdd 0.27280 osd.0 up 1.00000 1.00000
-5 0.54579 host worker2
1 hdd 0.54579 osd.1 up 1.00000 1.00000
root@master:/etc/ceph# ceph osd pool application get one
{
"rbd": {}
}
root@master:/etc/ceph# sudo -u oneadmin rbd -p one ls
test-image-01
root@master:/etc/ceph# onedatastore list
ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT
115 ceph-images 0M - 0 0 img ceph ceph on
102 backup2 97.9G 80% 0 0 bck rsync - on
101 backup 0M - 0 0 bck restic - on
2 files 97.9G 48% 0 0 fil fs ssh on
1 default 97.9G 48% 0 33 img fs ssh on
0 system - - 0 0 sys - ssh on

The capacity still always shows 0M for the Ceph datastore we added in OpenNebula:
root@master:/etc/ceph# sudo -u oneadmin onedatastore show 115
DATASTORE 115 INFORMATION
ID : 115
NAME : ceph-images
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : ceph
TM_MAD : ceph
BASE PATH : /var/lib/one//datastores/115
DISK_TYPE : RBD
STATE : READY

DATASTORE CAPACITY
TOTAL: : 0M
FREE: : 0M
USED: : 0M
LIMIT: : -

PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="mixed"
BRIDGE_LIST="worker1 worker2"
CEPH_CONF="/etc/ceph/ceph.conf"
CEPH_HOST="10.10.144.3:6789,10.10.144.2:6789,10.10.144.4:6789"
CEPH_KEY="/etc/ceph/client.libvirt.keyring"
CEPH_USER="libvirt"
CLONE_TARGET="SELF"
CLONE_TARGET_SHARED="SELF"
CLONE_TARGET_SSH="SYSTEM"
DISK_TYPE="RBD"
DISK_TYPE_SHARED="RBD"
DISK_TYPE_SSH="FILE"
DRIVER="raw"
DS_MAD="ceph"
LN_TARGET="NONE"
LN_TARGET_SHARED="NONE"
LN_TARGET_SSH="SYSTEM"
POOL_NAME="one"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="ceph"
TM_MAD_SYSTEM="ssh,shared"
TYPE="IMAGE_DS"

IMAGES
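
For completeness, a datastore like this would typically be registered with onedatastore create and a template along these lines (a sketch reconstructed from the onedatastore show output above, not the exact file used):

cat > ceph-images.ds <<'EOF'
NAME        = "ceph-images"
TYPE        = "IMAGE_DS"
DS_MAD      = "ceph"
TM_MAD      = "ceph"
DISK_TYPE   = "RBD"
POOL_NAME   = "one"
BRIDGE_LIST = "worker1 worker2"
CEPH_HOST   = "10.10.144.3:6789,10.10.144.2:6789,10.10.144.4:6789"
CEPH_USER   = "libvirt"
CEPH_KEY    = "/etc/ceph/client.libvirt.keyring"
CEPH_CONF   = "/etc/ceph/ceph.conf"
EOF
onedatastore create ceph-images.ds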

Hello,

The available space is computed via the hosts in BRIDGE_LIST (worker1 and worker2).

Have you tried to execute the command ceph osd pool application get one from both of them?

There may be an authentication problem or a connectivity problem from these nodes to the Ceph cluster and pool.
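
Something along these lines, run on worker1 and worker2, should show whether oneadmin can reach the pool with the credentials from your datastore template (a sketch using the CEPH_USER/CEPH_KEY values shown above; the exact commands the monitor probe runs may differ):

sudo -u oneadmin cat /etc/ceph/client.libvirt.keyring
sudo -u oneadmin ceph df --id libvirt --keyring /etc/ceph/client.libvirt.keyring --conf /etc/ceph/ceph.conf
sudo -u oneadmin rbd ls -p one --id libvirt --keyring /etc/ceph/client.libvirt.keyring --conf /etc/ceph/ceph.conf

If any of these fail (keyring not readable by oneadmin, missing caps for client.libvirt, monitors unreachable), the monitoring will fail and the reported capacity will stay at 0M.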

Hello brunorro,

Please see the output from the two nodes:

"root@worker2:/home/worker# ceph osd pool application get one
{
“rbd”: {}
}
root@worker2:/home/worker#

root@worker1:/# ceph osd pool application get one
{
“rbd”: {}
}
root@worker1:/# "