LVM datastore usage: terminate VM issue

I’ve been trying to configure an LVM datastore in a test environment consisting of two hosts:

  • nebula (front machine) with Sunstone
  • kvm-node-1 host with configured VG

The nebula machine contains the following:

root@nebula:/var/lib/one/datastores# onedatastore list
      ID NAME                SIZE AVAIL CLUSTERS     IMAGES TYPE DS      TM      STAT
       0 system                 - -     0                 0 sys  -       ssh     on  
       1 default            39.1G 70%   0                 4 img  fs      ssh     on  
       2 files              39.1G 70%   0                 0 fil  fs      ssh     on  
     100 images_shared      39.1G 70%   0                 2 img  fs      shared  on  
     104 lvm_system         39.1G 76%   0                 0 sys  -       fs_lvm  on  
     105 lvm_images         39.1G 70%   0                 1 img  fs      fs_lvm  on  
     106 lvm_system2        39.1G 76%   0                 0 sys  -       fs_lvm  on
root@nebula:/var/lib/one/datastores# ls /var/lib/one/datastores/
0  1  100  101  105  2
root@nebula:/var/lib/one/datastores# showmount -e
Export list for nebula:
/var/lib/one/datastores/105 192.168.122.0/24
/var/lib/one/datastores/100 192.168.122.0/24
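
For reference, the corresponding /etc/exports on the front-end would look roughly like this. The paths and network are taken from the showmount output above; the export options are an assumption, not copied from the host:

    # /etc/exports on nebula (options are assumptions)
    /var/lib/one/datastores/100 192.168.122.0/24(rw,sync,no_subtree_check,no_root_squash)
    /var/lib/one/datastores/105 192.168.122.0/24(rw,sync,no_subtree_check,no_root_squash)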

The kvm-node-1 machine contains the following:

root@kvm-node-1:/var/lib/one/datastores# ls /var/lib/one/datastores/
0  100  104  105  106
root@kvm-node-1:/var/lib/one/datastores# mount|grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.122.240:/var/lib/one/datastores/100 on /var/lib/one/datastores/100 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.74,local_lock=none,addr=192.168.122.240)
192.168.122.240:/var/lib/one/datastores/105 on /var/lib/one/datastores/105 type nfs4 (rw,relatime,vers=4.2,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.74,local_lock=none,addr=192.168.122.240)
root@kvm-node-1:/var/lib/one/datastores# vgs
  VG       #PV #LV #SN Attr   VSize   VFree 
  vg-one-0   1   1   0 wz--n- <10,00g <9,98g
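
(For completeness: the VG on the node was created manually beforehand, presumably with something like the following; the device name is hypothetical.)

    # on kvm-node-1, using a spare block device (/dev/vdb is an assumption)
    pvcreate /dev/vdb
    vgcreate vg-one-0 /dev/vdb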

I can deploy a VM with an image to the hypervisor via Sunstone, and the VM starts successfully. But I can’t terminate the VM due to the following errors:

Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Command execution failed (exit code: 5): /var/lib/one/remotes/tm/fs_lvm/delete nebula:/var/lib/one//datastores/0/29/disk.0 29 105
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG E 29 delete: Command "    set -x
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 DEV=$(readlink /var/lib/one/datastores/0/29/disk.0)
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if [ -d "/var/lib/one/datastores/0/29/disk.0" ]; then
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 rm -rf "/var/lib/one/datastores/0/29/disk.0"
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 else
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 rm -f /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if [ -z "$DEV" ]; then
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 exit 0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 if echo "$DEV" | grep "^/dev/" &>/dev/null; then
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 sudo lvremove -f $DEV
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 fi" failed: ++ readlink /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + DEV=/dev/vg-one-0/lv-one-29-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + '[' -d /var/lib/one/datastores/0/29/disk.0 ']'
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + rm -f /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + '[' -z /dev/vg-one-0/lv-one-29-0 ']'
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + echo /dev/vg-one-0/lv-one-29-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + grep '^/dev/'
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 + sudo lvremove -f /dev/vg-one-0/lv-one-29-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Volume group "vg-one-0" not found
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG I 29 Cannot process volume group vg-one-0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: LOG E 29 Error deleting /var/lib/one/datastores/0/29/disk.0
Fri Nov  9 16:04:55 2018 [Z0][TM][D]: Message received: TRANSFER FAILURE 29 Error deleting /var/lib/one/datastores/0/29/disk.0
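
For context, the delete script above runs on the front-end (the TM argument is nebula:/var/lib/one//datastores/0/29/disk.0), and there is no vg-one-0 volume group on nebula, hence the "Volume group not found" error; the LV itself lives on the node. A quick way to confirm that, using the host names from this thread:

    # on the front-end (nebula): no vg-one-0 here
    vgs
    # on the node (kvm-node-1): the LV for VM 29 is still present
    lvs vg-one-0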

How should I organise the interaction between the front-end machine and the hypervisor machine for the LVM datastore to solve this issue?

Why is nebula trying to remove the LV from its own host?

Hi, did you specify BRIDGE_LIST in the LVM datastore config? It has to contain the IP/hostname on which LVM is accessible, so in your case kvm-node-1.
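
For reference, a sketch of how that attribute can be added; onedatastore update opens the datastore template in an editor (datastore IDs taken from this thread):

    # on the front-end, for both the LVM image and system datastores
    onedatastore update 105
    onedatastore update 104
    # add the node(s) on which the LVM commands should be executed:
    BRIDGE_LIST="kvm-node-1"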

Hi, thanks for the reply!

Yes, I specified BRIDGE_LIST:

root@nebula:/var/lib/one/datastores/0# onedatastore list
      ID NAME                SIZE AVAIL CLUSTERS     IMAGES TYPE DS      TM      STAT
       0 system                 - -     0                 0 sys  -       ssh     on  
       1 default            39.1G 70%   0                 4 img  fs      ssh     on  
       2 files              39.1G 70%   0                 0 fil  fs      ssh     on  
     100 images_shared      39.1G 70%   0                 2 img  fs      shared  on  
     104 lvm_system         39.1G 76%   0                 0 sys  -       fs_lvm  on  
     105 lvm_images         39.1G 70%   0                 1 img  fs      fs_lvm  on  
     106 lvm_system2        39.1G 76%   0                 0 sys  -       fs_lvm  on  
    root@nebula:/var/lib/one/datastores/0# onedatastore show lvm_images
    DATASTORE 105 INFORMATION                                                       
    ID             : 105                 
    NAME           : lvm_images          
    USER           : oneadmin            
    GROUP          : oneadmin            
    CLUSTERS       : 0                   
    TYPE           : IMAGE               
    DS_MAD         : fs                  
    TM_MAD         : fs_lvm              
    BASE PATH      : /var/lib/one//datastores/105
    DISK_TYPE      : BLOCK               
    STATE          : READY               

    DATASTORE CAPACITY                                                              
    TOTAL:         : 39.1G               
    FREE:          : 27.4G               
    USED:          : 9.7G                
    LIMIT:         : -                   

    PERMISSIONS                                                                     
    OWNER          : um-                 
    GROUP          : u--                 
    OTHER          : ---                 

    DATASTORE TEMPLATE                                                              
    ALLOW_ORPHANS="NO"
    BRIDGE_LIST="kvm-node-1"
    CLONE_TARGET="SYSTEM"
    DISK_TYPE="BLOCK"
    DRIVER="raw"
    DS_MAD="fs"
    LN_TARGET="SYSTEM"
    SAFE_DIRS="/var/tmp /tmp"
    TM_MAD="fs_lvm"
    TYPE="IMAGE_DS"

    IMAGES         
    17             
    root@nebula:/var/lib/one/datastores/0# onedatastore show lvm_system
    DATASTORE 104 INFORMATION                                                       
    ID             : 104                 
    NAME           : lvm_system          
    USER           : oneadmin            
    GROUP          : oneadmin            
    CLUSTERS       : 0                   
    TYPE           : SYSTEM              
    DS_MAD         : -                   
    TM_MAD         : fs_lvm              
    BASE PATH      : /var/lib/one//datastores/104
    DISK_TYPE      : FILE                
    STATE          : READY               

    DATASTORE CAPACITY                                                              
    TOTAL:         : 39.1G               
    FREE:          : 29.9G               
    USED:          : 7.2G                
    LIMIT:         : -                   

    PERMISSIONS                                                                     
    OWNER          : um-                 
    GROUP          : u--                 
    OTHER          : ---                 

    DATASTORE TEMPLATE                                                              
    ALLOW_ORPHANS="NO"
    BRIDGE_LIST="kvm-node-1"
    DISK_TYPE="FILE"
    DS_MIGRATE="YES"
    RESTRICTED_DIRS="/"
    SAFE_DIRS="/var/tmp"
    SHARED="YES"
    TM_MAD="fs_lvm"
    TYPE="SYSTEM_DS"

    IMAGES         
    root@nebula:/var/lib/one/datastores/0# ping kvm-node-1
    PING kvm-node-1 (192.168.20.2) 56(84) bytes of data.
    64 bytes from kvm-node-1 (192.168.20.2): icmp_seq=1 ttl=64 time=2.93 ms
    ^C
    --- kvm-node-1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 2.931/2.931/2.931/0.000 ms

Hi, in lvm_system you have DS_MAD: -. Try changing it to fs, like you have in lvm_images.

I’ve just tried to set DS_MAD="fs", but it can’t be set, either via Sunstone or via onedatastore update 104.
Also, I’ve read the LVM Datastore documentation (Create System Datastore), and DS_MAD isn’t mentioned for the System Datastore.

I’ve also tried to approach the DS_MAD issue from another angle and got this:

root@nebula:~# cat lvm_system.conf 
NAME="lvm_system2"
TM_MAD="fs_lvm"
DS_MAD="fs"
TYPE="SYSTEM_DS"
BRIDGE_LIST="kvm-node-1"
root@nebula:~# onedatastore create lvm_system.conf 
[one.datastore.allocate] SYSTEM datastores cannot have DS_MAD defined.
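
For system datastores the template has to omit DS_MAD entirely; a minimal sketch of the same definition without it, using the names from this thread:

    NAME="lvm_system2"
    TM_MAD="fs_lvm"
    TYPE="SYSTEM_DS"
    BRIDGE_LIST="kvm-node-1"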

I’ve partially solved my problem by removing the default System Datastore with ID 0. Now VM instances are created in the right VG (vg-one-104 instead of vg-one-0). I don’t know whether removing the default System Datastore is the right approach, but it works for me for now, and VM instances also terminate correctly. I’m marking this topic as solved.
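
For reference, the fs_lvm driver derives the volume group name from the system datastore ID (vg-one-<system_ds_id>), which is why the instances now land in vg-one-104; the node then needs a VG with exactly that name. A sketch, assuming the existing VG is simply renamed:

    # on kvm-node-1: make the VG name match the fs_lvm system datastore ID
    vgrename vg-one-0 vg-one-104
    vgs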

I have another problem which might be related to this topic. I’ve opened a new question, which is available by clicking this link.