Error registering image in Ceph pool "one"

Hello everyone, we have been trying to upload a 10 GB CentOS template in raw format for KVM into a Ceph image datastore, but the image is failing to register. The setup is the following:
1 server running OpenNebula
4 servers running KVM
4 servers running Ceph
The KVM and OpenNebula servers are Ceph clients with ceph-common installed.
BRIDGE_LIST is set to: ceph2 ceph3 ceph4 (the OSD nodes)
CEPH_HOST is set to the ceph1 monitor
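
For reference, the image datastore template looks roughly like this (reconstructed from the settings above; CEPH_SECRET and the remaining attributes are omitted, so treat the values as illustrative):

    NAME        = "CEPH RBD Image"
    DS_MAD      = ceph
    TM_MAD      = ceph
    DISK_TYPE   = RBD
    POOL_NAME   = one
    CEPH_USER   = libvirt
    CEPH_HOST   = "ceph1"
    BRIDGE_LIST = "ceph2 ceph3 ceph4"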

qemu-img is installed on all the servers, and oneadmin can use libvirt to access the "one" pool.
The image gets transferred, but when trying to register it the following can be seen in oned.log:

Tue May 19 14:53:30 2020 [Z0][ImM][I]: cp: Copying local image /var/tmp/10737418240-kvm_template_centos7img_2019-08-09 to the image repository
Tue May 19 14:53:30 2020 [Z0][ImM][E]: cp: Command " set -e -o pipefail
Tue May 19 14:53:30 2020 [Z0][ImM][I]:
Tue May 19 14:53:30 2020 [Z0][ImM][I]: FORMAT=$(qemu-img info /var/tmp/957e5da8806400efebe27669d271a93c | grep "^file format:" | awk '{print $3}' || : )
Tue May 19 14:53:30 2020 [Z0][ImM][I]:
Tue May 19 14:53:30 2020 [Z0][ImM][I]: if [ "$FORMAT" != "raw" ]; then
Tue May 19 14:53:30 2020 [Z0][ImM][I]: qemu-img convert -O raw /var/tmp/957e5da8806400efebe27669d271a93c /var/tmp/957e5da8806400efebe27669d271a93c.raw
Tue May 19 14:53:30 2020 [Z0][ImM][I]: mv /var/tmp/957e5da8806400efebe27669d271a93c.raw /var/tmp/957e5da8806400efebe27669d271a93c
Tue May 19 14:53:30 2020 [Z0][ImM][I]: fi
Tue May 19 14:53:30 2020 [Z0][ImM][I]:
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd --id libvirt import --image-format 2 /var/tmp/957e5da8806400efebe27669d271a93c one/one-111
Tue May 19 14:53:30 2020 [Z0][ImM][I]:
Tue May 19 14:53:30 2020 [Z0][ImM][I]: # remove original
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rm -f /var/tmp/957e5da8806400efebe27669d271a93c" failed: rbd: error writing to destination image at offset 0: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: error writing to destination image at offset 1048576: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: 2020-05-19T14:53:30.324+0000 7fa7bf7fe700 -1 librbd::io::ObjectRequest: 0x7fa7b4006ee0 handle_write_object: rbd_data.2824f9bf52a53.0000000000000000 failed to write object: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: 2020-05-19T14:53:30.324+0000 7fa7bf7fe700 -1 librbd::io::ObjectRequest: 0x557c4db62e00 handle_write_object: rbd_data.2824f9bf52a53.0000000000000000 failed to write object: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: error writing to destination image at offset 1081344: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: 2020-05-19T14:53:30.341+0000 7fa7bf7fe700 -1 librbd::io::ObjectRequest: 0x557c4db633e0 handle_write_object: rbd_data.2824f9bf52a53.0000000000000000 failed to write object: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: error writing to destination image at offset 4194304: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: 2020-05-19T14:53:30.349+0000 7fa7bf7fe700 -1 librbd::io::ObjectRequest: 0x557c4dcd1440 handle_write_object: rbd_data.2824f9bf52a53.0000000000000001 failed to write object: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: error writing to destination image at offset 8388608: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: 2020-05-19T14:53:30.360+0000 7fa7bf7fe700 -1 librbd::io::ObjectRequest: 0x557c4dcd1640 handle_write_object: rbd_data.2824f9bf52a53.0000000000000002 failed to write object: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: error writing to destination image at offset 12582912: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: 2020-05-19T14:53:30.387+0000 7fa7bf7fe700 -1 librbd::io::ObjectRequest: 0x557c4dcd1840 handle_write_object: rbd_data.2824f9bf52a53.0000000000000003 failed to write object: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: failed to import image
Tue May 19 14:53:30 2020 [Z0][ImM][I]: Importing image: 0% complete...failed.
Tue May 19 14:53:30 2020 [Z0][ImM][I]: rbd: import failed: (27) File too large
Tue May 19 14:53:30 2020 [Z0][ImM][E]: Error registering one/one-111 in 10.226.41.22
Tue May 19 14:53:30 2020 [Z0][ImM][E]: Error copying image in the datastore: Error registering one/one-111 in 10.226.41.22
Tue May 19 14:53:30 2020 [Z0][InM][D]: Monitoring datastore CEPH RBD Image (166)
Tue May 19 14:53:31 2020 [Z0][AuM][D]: Message received: AUTHENTICATE SUCCESS 47 -

Can anyone tell me how to increase the size that is accepted? (Ceph is set up with BlueStore.)

Hello,

I see this issue persists. Frustrating. I have not seen any accepted-size configuration in either Ceph or OpenNebula. When an image has failed to upload for me, it has been because my OpenNebula frontend was too weak (which caused a cluster leader failover in the middle of an upload) or because the filesystem was too small. I see the error librbd is throwing, but I think there is a good chance we can mostly ignore it, since neither Ceph nor OpenNebula cares about image size as far as I can tell.

However, we need to make sure there is enough space for your image on the filesystems and in your storage cluster; a few quick checks are sketched after this list.

  • Does the frontend filesystem have at least 20 GB of free space? It needs room for the initial image upload.
  • Do your Ceph OSD servers have enough space on / ? They need to hold the copy and the conversion.
  • Does your Ceph pool have enough free space? Are you using mostly defaults and replicated pools? Erasure coding?
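
A minimal set of checks, assuming the /var/tmp staging path seen in your log and the pool name "one":

    # free space on the staging filesystem used for the upload/convert step
    df -h /var/tmp

    # overall cluster and per-pool usage
    ceph df

    # per-OSD utilisation, to spot a single full OSD
    ceph osd df

    # replication / erasure-coding settings of the image pool
    ceph osd pool ls detail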

Another trouble spot could be the ceph bridge.

  • Do your Ceph OSD servers have the oneadmin user set up for passwordless SSH?
  • Is your storage network on a separate VLAN/subnet?
  • Try changing your Ceph bridge setting to your OpenNebula frontend server (see the sketch below).

I don’t recommend having your Ceph OSD servers act as the bridge.
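
If you want to try pointing the bridge at the frontend, something along these lines should do it; "one-frontend" is a placeholder hostname and 166 is the datastore ID from your log:

    # edit the image datastore and change the bridge host
    onedatastore update 166

    # in the editor, set for example:
    BRIDGE_LIST = "one-frontend"

    # verify the change
    onedatastore show 166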

Hello,
what size is your datastore?
what is your Ceph OSD CRUSH tunables mode?
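
For the second question, the active tunables profile (and the replication settings of the pool) can be read straight from the cluster, e.g.:

    # active CRUSH tunables profile
    ceph osd crush show-tunables

    # replication factor of the "one" pool
    ceph osd pool get one size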

Hello everyone, I managed to get the problem resolved. It ended up being that the disks had been formatted with GPT using parted, which was causing the entire storage cluster to go bananas. I fixed it by disassembling the cluster and, before creating new OSDs, running the command ceph-volume lvm zap /dev/vdb, after which I was able to properly upload data into my OSDs. Thanks for the help in solving this. This one can be marked as solved.
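
For anyone else hitting this, the per-disk cleanup boils down to the following; /dev/vdb is the device from the post above, and the OSD re-creation line is an assumption about how the OSDs were rebuilt afterwards:

    # wipe the stale GPT/partition metadata from the data disk (destructive)
    ceph-volume lvm zap /dev/vdb

    # re-create the OSD on the clean device (assumed follow-up step)
    ceph-volume lvm create --data /dev/vdb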