During clone, dd (/dev/zero) uses 64 KB blocks, which is not aligned to the LVM LV size

We are using LVM iSCSI datastores.
During instantiation of a template, a clone of the template image takes place since it is persistent.

I have created an extra disk like:

      DISK = [
        FORMAT = "qcow2",
        FS = "xfs",
        SIZE = "614400",
        TYPE = "fs" ]

This leads to the LV size and dd alignment issues. The block size is set to 64 KiB by default and the LV size gets set to 600 GiB (644 GB). This leads to dd attempting to write an extra block past the LV boundary.
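For reference, a quick sanity check of the numbers that show up in the error log further down (a sketch; the sizes are taken from the dd output there). Since the driver's dd is started without a count, it reads one block past the end of the LV and exits non-zero even when, as here, the size divides evenly:

```shell
# Block math for the failing dd run: 600 GiB LV, bs=64k as used by the driver script.
LV_BYTES=$((600 * 1024 * 1024 * 1024))   # 644245094400 bytes (600 GiB / 644 GB)
BS=$((64 * 1024))                        # 64 KiB
echo "$((LV_BYTES / BS)) blocks, $((LV_BYTES % BS)) bytes left over"
# → 9830400 blocks, 0 bytes left over (matching "9830400+0 records out")
```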

Found that we could set ZERO_LVM_ON_CREATE="NO" on the datastore, but that does not take effect. The block size could also be set, but that would still cause issues since I need different sizes.

Found the script:

Could potentially patch it there, but the script needs attention regarding the block size. Skipping the last block could be a workaround, but could leave a partial block not zeroed out.
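One possible patch direction: bound dd with an explicit count derived from the device size instead of letting it run into ENOSPC. A minimal sketch, demonstrated on a temp file so it is runnable anywhere; on the real LV the size would come from `blockdev --getsize64 "$DEV"` instead of `stat`:

```shell
# Hypothetical count-bounded zero fill (stand-in file instead of a real LV).
BS=$((64 * 1024))
DEV=$(mktemp)
truncate -s $((10 * BS)) "$DEV"          # stand-in for a 10-block device
SIZE=$(stat -c %s "$DEV")                # real LV: blockdev --getsize64 "$DEV"
COUNT=$((SIZE / BS))                     # whole blocks that fit exactly
dd if=/dev/zero of="$DEV" bs=$BS count=$COUNT conv=notrunc,fsync status=none
RC=$?
echo "dd exit=$RC count=$COUNT"
rm -f "$DEV"
```

With an exact count, dd exits 0 instead of relying on `|| :` to swallow the ENOSPC error; any remainder smaller than one block could be zeroed with a second dd using a smaller block size.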

dd will almost always end with a non-zero exit value.
Why not switch to having a WIPE_CMD which could be either dd or maybe shred as well?

shred could be used to wipe more securely, with 1, 2 or even 3 passes.
It also has an option for zero fill where maximum security is not a requirement.

sudo shred -v -n 0 --random-source=/dev/urandom -z /dev/sdb
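To illustrate the idea, a minimal dispatch sketch (the WIPE_CMD attribute name is hypothetical, not an existing OpenNebula option; the demo runs shred on a temp file, since shred zeroes a regular file in place without changing its size):

```shell
# Hypothetical WIPE_CMD switch: fall back to dd (the driver's current behavior).
wipe_device() {
    DEV=$1
    case "${WIPE_CMD:-dd}" in
        # dd: zero fill until the device is full, errors swallowed as today
        dd)    dd if=/dev/zero of="$DEV" bs=64k || : ;;
        # shred: -n 0 skips random passes, -z adds a final zero pass
        shred) shred -n 0 -z "$DEV" ;;
        *)     echo "unsupported WIPE_CMD: $WIPE_CMD" >&2; return 1 ;;
    esac
}

# Demo on a temp file filled with random data.
F=$(mktemp)
head -c 65536 /dev/urandom > "$F"
WIPE_CMD=shred wipe_device "$F"
NONZERO=$(tr -d '\0' < "$F" | wc -c)
echo "non-zero bytes after wipe: $NONZERO"
rm -f "$F"
```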

But in a quick test comparing shred vs dd for zero fill, only dd seems faster. Could be the block size. I didn't check whether shred can be optimized in that sense, but…

But it appears that the actual error is that the mkfs fails. The sudoers files don't seem to allow that. What's missing, really?

Driver Error

    Sat Dec 17 11:08:55 2022: Error executing image transfer script: ERROR: mkimage: Command "
        # prints function content
        contains () {
            LIST=$1; ELEMENT=$2; SEPARATOR=$3; MATCH=0;
            for i in $(echo $LIST | sed "s/$SEPARATOR/ /g"); do
                if [[ "${ELEMENT}" = "${i}" ]]; then MATCH=1; break; fi;
            done;
            if [ "${MATCH}" = "0" ]; then return 1; fi;
            return 0
        }
        set -e -o pipefail
        export PATH=/usr/sbin:/sbin:$PATH
        mkdir -p /var/lib/one//datastores/101/17
        hostname -f >"/var/lib/one//datastores/101/17/.host" || :
        if [ "yes" = "yes" ]; then
            dd if=/dev/zero of="/dev/vg-one-101/lv-one-17-1" bs=64k || :
        fi
        [ "raw" = "swap" ] && mkswap /dev/vg-one-101/lv-one-17-1
        if [ ! -z "xfs" ]; then
            contains "ext2,ext3,ext4,xfs" "xfs" ","
            if [ 0 != 0 ]; then
                log_error "Unsupported file system type: xfs. Supported types are: ext2,ext3,ext4,xfs"
                exit -1
            fi
            FS_OPTS=
            mkfs -t xfs /dev/vg-one-101/lv-one-17-1
        fi
        rm -f "/var/lib/one//datastores/101/17/disk.1"
        ln -s "/dev/vg-one-101/lv-one-17-1" "/var/lib/one//datastores/101/17/disk.1"
    " failed:
        dd: error writing '/dev/vg-one-101/lv-one-17-1': No space left on device
        9830401+0 records in
        9830400+0 records out
        644245094400 bytes (644 GB, 600 GiB) copied, 459.276 s, 1.4 GB/s
        mkfs.xfs: error - cannot set blocksize 512 on block device /dev/vg-one-101/lv-one-17-1: Permission denied
    Could not create image /var/lib/one//datastores/101/17/disk.1

I added a ONE_FS alias with /sbin/mkfs to /etc/sudoers.d/opennebula and added that "role" to /etc/sudoers.d/opennebula-node-kvm.
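Roughly what I added (the ONE_FS alias name is mine; the exact contents of the stock opennebula-node-kvm file may differ per install):

    # /etc/sudoers.d/opennebula -- new command alias
    Cmnd_Alias ONE_FS = /sbin/mkfs, /usr/sbin/mkfs

    # /etc/sudoers.d/opennebula-node-kvm -- grant the alias to oneadmin
    oneadmin ALL=(ALL) NOPASSWD: ONE_FS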

But how should it really work?

But… the script never calls mkfs via sudo… Yet running it manually as oneadmin works:

    $ sudo mkfs -t xfs /dev/vg-one-101/lv-one-18-1

This sounds to me like a bug. Could you please summarize it into an issue?