Datastore LVM + GFS2 - losing VMs

Hi! I need help understanding how this works. I have 3 nodes. I created the cluster and everything was OK. I created 30 VMs (10 on each node). Then I shut down 1 node and lost all the VMs on that node! What am I doing wrong? The VM disks are not on the other nodes. The directories are there, but disk.0 is not present.
Thanks!

This needs more information about your setup. I personally use cLVM and GFS2 too, without problems.

What info do you need? I used this manual: https://www.server-world.info/en/note?os=CentOS_7&p=pacemaker&f=3
After I shut down a node, its VMs go into UNKNOWN state. Then I power the node on again. After it is up, the VMs don't start. Error: can't find disk.0.

I'd like to see the output of pcs status, the OpenNebula datastore config, and oned.log.

Now, looking at your question more closely: you shut down a compute node and expect the VMs to automatically migrate to another one?

[root@node01 ~]# pcs status
Cluster name: Skillserver
Last updated: Wed Sep 7 11:38:54 2016 Last change: Mon Sep 5 16:22:08 2016 by hacluster via crmd on node01
Stack: corosync
Current DC: node03 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
3 nodes and 15 resources configured

Online: [ node01 node02 node03 ]

Full list of resources:

 scsi-shooter (stonith:fence_scsi): Started node01
 Clone Set: dlm-clone [dlm]
     Started: [ node01 node02 node03 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node01 node02 node03 ]
 Clone Set: fs_gfs2-clone [fs_gfs2]
     Started: [ node01 node02 node03 ]
 Resource Group: Cloud
     Cluster_VIP (ocf::heartbeat:IPaddr2): Started node03
     opennebula (systemd:opennebula): Started node03
     opennebula-sunstone (systemd:opennebula-sunstone): Started node03
     opennebula-novnc (systemd:opennebula-novnc): Started node03
     nginx (systemd:nginx): Started node03

PCSD Status:
node01: Online
node02: Online
node03: Online

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

DATASTORE

[root@node03 ~]# onedatastore list
ID NAME SIZE AVAIL CLUSTERS IMAGES TYPE DS TM STAT
0 system - - 0 0 sys - ssh on
1 default 2T 94% 0 7 img fs ssh on
2 files 2T 94% 0 2 fil fs ssh on

[root@node03 ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 158G 2.9G 147G 2% /
devtmpfs devtmpfs 24G 0 24G 0% /dev
tmpfs tmpfs 24G 76M 24G 1% /dev/shm
tmpfs tmpfs 24G 34M 24G 1% /run
tmpfs tmpfs 24G 0 24G 0% /sys/fs/cgroup
tmpfs tmpfs 4.8G 0 4.8G 0% /run/user/9869
/dev/mapper/vg_cluster-lv_cluster gfs2 2.0T 132G 1.9T 7% /var/lib/one/datastores
tmpfs tmpfs 4.8G 0 4.8G 0% /run/user/0

Yes, but the VM disks disappear.

Why do you have TM_MAD set to ssh? You should set it to shared; then you will also be able to do live migrations.
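For example, something like this (just a sketch: datastore IDs 0 and 1 are taken from your onedatastore list output above, and I'm assuming the --append flag of the 5.x CLI, which merges the attribute into the existing template instead of replacing it):

[root@node03 ~]# echo 'TM_MAD="shared"' > shared_tm.txt
[root@node03 ~]# onedatastore update 0 shared_tm.txt --append
[root@node03 ~]# onedatastore update 1 shared_tm.txt --append

The point is that with the ssh transfer driver OpenNebula treats each host's datastore directory as local storage and copies/cleans up disk files per host, while shared expects /var/lib/one/datastores to be the same shared filesystem on every node, which is exactly what your GFS2 mount gives you.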

When you need to shut down a node, you should migrate its VMs to another one first. If you want VMs to be migrated automatically on a node error, you need to use a hook, as documented here: http://docs.opennebula.org/5.0/advanced_components/ha/ftguide.html, but don't forget to add fencing to that example hook script!
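For reference, a sketch of what that looks like in /etc/one/oned.conf, based on the 5.0 HA guide linked above (verify the ft/host_error.rb arguments against your version, and remember the fencing warning):

# Sketch of the host failure hook from the HA guide; -m reschedules the
# failed host's VMs on another node (check the exact flags in the docs)
HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    arguments = "$ID -m -p 5",
    remote    = "no" ]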

You should use this transfer mode

http://docs.opennebula.org/5.0/deployment/open_cloud_storage_setup/fs_ds.html#shared-qcow2-transfer-modes

Also, you should read the GFS2 documentation from Red Hat:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Global_File_System_2/

It covers filesystem sizing (smaller is better), mount considerations, and so on.
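For instance, just as a sketch (fs_gfs2 is the Filesystem resource name from your pcs status output; noatime/nodiratime are the mount options Red Hat suggests to reduce cluster-wide lock traffic when atime is not needed):

[root@node01 ~]# pcs resource update fs_gfs2 options="noatime,nodiratime"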

Thanks! Reading it now.