[one-5.2] fs_lvm driver and VM termination

Hi,

I am trying the “new” fs_lvm datastore on one-5.2 (single node for the moment).
Deployment works fine, but I have 2 questions:

  1. On terminate, the image file is stored back in raw format, not qcow2. It was initially in qcow2 (downloaded from the AppMarket; I verified this with qemu-img info, see the check below the lvs output). Is there a variable missing?

Here is the info of one of the images:

[root@one-2 ~]# oneimage show 8
IMAGE 8 INFORMATION
ID             : 8
NAME           : Front - Varnish - nginx-disk-0
USER           : oneadmin
GROUP          : oneadmin
DATASTORE      : images
TYPE           : OS
REGISTER TIME  : 11/03 14:34:56
PERSISTENT     : Yes
SOURCE         : /var/lib/one//datastores/101/c8a421a8121ec770bd20440842c6922c
PATH           : /var/lib/one//datastores/101/939628485af7d27998ed54d42e5b6225
FSTYPE         : qcow2
SIZE           : 8G
STATE          : used
RUNNING_VMS    : 1

PERMISSIONS
OWNER          : um-
GROUP          : ---
OTHER          : ---

IMAGE TEMPLATE
DEV_PREFIX="vd"
FROM_APP="47"
FROM_APP_NAME="CentOS 7.2 - KVM"
FSTYPE="qcow2"
MD5="eef00404ef9c2c347bb7d674d4b6eb9c"

VIRTUAL MACHINES

    ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME
    16 oneadmin oneadmin Front - Varnish runn  1.0      2G localhost    3d 12h21

[root@one-2 ~]# lvs
  LV          VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root        centos     -wi-ao---- 50.00g
  swap        centos     -wi-ao---- 25.00g
  var         centos     -wi-ao---- 20.00g
  var_log     centos     -wi-ao---- 20.00g
  lv-one-10-0 vg-one-100 twi-aotz--  8.00g             0.00   0.54
  lv-one-12-0 vg-one-100 twi-aotz--  1.00g             0.00   0.88
  lv-one-16-0 vg-one-100 twi-aotz--  8.00g             0.00   0.54
  lv-one-21-0 vg-one-100 twi-aotz--  8.00g             0.00   0.54
  lv-one-9-0  vg-one-100 twi-aotz--  8.00g             0.00   0.54
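
For reference, the qemu-img check was done along these lines; as far as I can tell the FSTYPE attribute in the image template is just metadata, so the actual on-disk format is whatever qemu-img info reports on the SOURCE file:

# inspect the actual format of the saved image file (SOURCE path from "oneimage show 8")
qemu-img info /var/lib/one//datastores/101/c8a421a8121ec770bd20440842c6922c
# the "file format:" line is the real format (raw vs qcow2)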

  2. On two or more nodes, if the hypervisor crashes, what happens to the VM in this configuration?
     How can a new (respawned) VM boot from the LV on another hypervisor without any loss of data?

Thank you
Nicolas.

Hi,

Just a bump, because I had a network split-brain (oned was separated from the hypervisors), and it then failed to find the VMs.
Using the HA configuration, VMs were launched twice…
This is on 4.12.

I would like to make sure that it is impossible to launch a VM twice using the same disk.

On image clone the image is converted to raw, as that is the format used to store the data in LVM. After the VM is shut down it is copied back to the images datastore with a dd command. The only way to change that behavior is to modify the TM driver so it converts the image to qcow2. The file is /var/lib/one/remotes/tm/fs_lvm/mvds. This is the part that copies the image back:

DUMP_CMD=$(cat <<EOF
    $DD if=$DEV of=$DST_PATH bs=64k
    $SUDO $LVREMOVE -f $DEV
EOF
)

A possible solution can be changing it to:

DUMP_CMD=$(cat <<EOF
    qemu-img convert -O qcow2 $DEV $DST_PATH
    $SUDO $LVREMOVE -f $DEV
EOF
)
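
One small refinement, untested and just an assumption on my part: since the LV is known to hold raw data, passing the source format explicitly avoids qemu-img probing the device contents:

    qemu-img convert -f raw -O qcow2 $DEV $DST_PATH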

If the LVM storage and system datastore NFS is available from both hosts you can use the fault tolerance hook:

http://docs.opennebula.org/5.2/advanced_components/ha/ftguide.html
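
If it helps, the hook is enabled in /etc/one/oned.conf; quoting from memory (so double-check the arguments against the guide), it looks roughly like this:

# Host fault tolerance hook for oned.conf (sketch, arguments quoted from memory;
# -m migrates the VMs to another host, see the FT guide for the other flags)
HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    arguments = "$ID -m -p 5",
    remote    = "no" ]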

You should use some kind of fencing mechanism. This is explained in the Frontend HA guide:

http://docs.opennebula.org/5.2/advanced_components/ha/frontend_ha_setup.html?highlight=fence#step-4-define-the-opennebula-service
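
The point of fencing is to make sure the crashed host is really powered off before its VMs are recreated, so two instances can never write to the same LV. As a rough illustration only (the BMC address and credentials below are placeholders), the fencing action boils down to something like:

# power off the failed hypervisor through its BMC before recreating its VMs elsewhere
ipmitool -I lanplus -H bmc-of-failed-host -U admin -P secret chassis power off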
