Datastore issue after ceph upgrade

Hi Community,

I have set up a test environment with a Ceph cluster and a couple of miniONE single-node environments, just to see how OpenNebula behaves during a Ceph upgrade.

My environment:

Ceph cluster: 1 admin node, 3 monitor nodes, 3 OSD nodes, all running Luminous.

OpenNebula nodes (all miniONE):

OpenNebula 6.4, CentOS 7, Ceph 10.x packages from the jewel repo (download.ceph.com)
OpenNebula 6.4, CentOS 7, Ceph 12.x packages from the luminous repo (download.ceph.com)
OpenNebula 6.4, CentOS 7, Ceph 14.x packages from the nautilus repo (download.ceph.com)
OpenNebula 6.4, CentOS 7, Ceph 15.x packages from the octopus repo (download.ceph.com)
OpenNebula 6.6, AlmaLinux 9, Ceph 17.x packages from the quincy repo (download.ceph.com)

My datastore configs:

SYSTEMDS.txt:

NAME="CEPH-SYSTEM-$CEPH_RELEASE"
ALLOW_ORPHANS="mixed"
BRIDGE_LIST="127.0.0.1"
CEPH_HOST="$MON_LIST"
CEPH_SECRET="$UUID"
CEPH_USER="$CEPH_RELEASE"
DISK_TYPE="RBD"
DS_MIGRATE="NO"
POOL_NAME="one-$CEPH_RELEASE"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="ceph"
TYPE="SYSTEM_DS"

IMAGEDS.txt:

NAME="CEPH-IMAGES-$CEPH_RELEASE"
ALLOW_ORPHANS="mixed"
BRIDGE_LIST="127.0.0.1"
CEPH_HOST="$MON_LIST"
CEPH_SECRET="$UUID"
CEPH_USER="$CEPH_RELEASE"
CLONE_TARGET="SELF"
CLONE_TARGET_SHARED="SELF"
CLONE_TARGET_SSH="SYSTEM"
DISK_TYPE="RBD"
DISK_TYPE_SHARED="RBD"
DISK_TYPE_SSH="FILE"
DRIVER="raw"
DS_MAD="ceph"
LN_TARGET="NONE"
LN_TARGET_SHARED="NONE"
LN_TARGET_SSH="SYSTEM"
POOL_NAME="one-$CEPH_RELEASE"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="ceph"
TM_MAD_SYSTEM="ssh,shared"
TYPE="IMAGE_DS"
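
For anyone reproducing this: the templates are registered with onedatastore create after substituting the variables. The envsubst step and the example values below are only illustrative of how $CEPH_RELEASE, $MON_LIST and $UUID (the libvirt secret already defined on the KVM host) get filled in:

# example values; adjust per release
export CEPH_RELEASE=nautilus
export MON_LIST="mon1 mon2 mon3"
export UUID="<libvirt-secret-uuid>"

# render the templates and register both datastores
envsubst < SYSTEMDS.txt > /tmp/systemds.txt
envsubst < IMAGEDS.txt > /tmp/imageds.txt
onedatastore create /tmp/systemds.txt
onedatastore create /tmp/imageds.txt
onedatastore list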

Each OpenNebula host gets its own pool on the Ceph cluster. So far so good.
I can download apps from the marketplace on each of the miniONE hosts and store them in the image datastore. I can see the datastore taking up space, and I can see the RADOS objects on the Ceph cluster.
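
For reference, something like this from the Ceph admin node shows them (pool name per the templates above; one-jewel is just one example):

rbd ls -p one-jewel          # the imported images as RBD volumes
rados -p one-jewel ls | head # the underlying RADOS objects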

Now I upgrade my Ceph cluster to Nautilus. After the upgrade, my image datastore seems to be gone: in the storage tab it shows 0B in size (and 0B in use). I can still see the RADOS objects on the Ceph cluster, though.
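
Is there something I should check by hand from the BRIDGE_LIST host? I am only guessing at what the datastore monitor needs, but with the Nautilus pool and user from the templates above I would try:

ceph df --id nautilus
rados df -p one-nautilus --id nautilus
rbd ls -p one-nautilus --id nautilus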

What am I missing?

NAME="CEPH-IMAGES-$CEPH_RELEASE"
CEPH_USER="$CEPH_RELEASE"
POOL_NAME="one-$CEPH_RELEASE"

If these attributes changed during the upgrade, you would need to update them in the datastore as well.
The libvirt secret will keep working, but if ONE tries to connect to the RBD pool with the old username or old pool name, it won't see the pool. If you upgraded Ceph and these names changed, you can probably re-add the Ceph datastore with the updated names, and you should then be able to see the pool and images again.
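
Something along these lines should do it if the names did change (the datastore ID and the new values here are just examples; --append keeps the rest of the template intact):

onedatastore list
onedatastore show <image-ds-id>

cat > ds-fix.txt <<EOF
CEPH_USER="nautilus"
POOL_NAME="one-nautilus"
EOF

onedatastore update <image-ds-id> ds-fix.txt --append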

Hi Roland,

The pool names don't change after the upgrade. They're just there to allow me to connect multiple miniONEs to a single Ceph cluster.

So, for example, the miniONE with the Jewel Ceph libraries has a user called jewel and, on the Ceph side, a pool named one-jewel.

My Ceph cluster has a couple of pools:

one-jewel
one-luminous
one-nautilus
one-octopus
one-quincy

That doesn't change after the upgrade; it's just a name.
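
That is easy enough to confirm from the admin node, e.g.:

ceph osd pool ls
ceph auth get client.jewel
rbd ls -p one-jewel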