I’m using LVM as the system datastore.
The same LUN is provisioned to 4 nodes via FC.
Everything works great, but when new VMs are created, all IO on the other VMs in this datastore hangs for 10-15 seconds.
I think it happens when the new LV is created.
What can cause this? Can it be fixed?
Nodes are CentOS 7.6.
clvmd is not used.
use_lvmetad = 0
VMs on other datastores are not affected.
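For context, the setup described above corresponds to roughly this in /etc/lvm/lvm.conf (a sketch; `locking_type = 1` is the CentOS 7 default, i.e. purely local file-based locking with no cluster awareness):

```
global {
    use_lvmetad = 0   # metadata caching daemon disabled (and stopped)
    locking_type = 1  # default: local file locking only, not cluster-aware
}
```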
I’m not able to deploy a new cluster with Rancher on the same datastore where Rancher itself runs,
because when the new VMs are created, Rancher hangs and the creation fails.
But there is no problem deploying a cluster on another datastore on the same storage, just in a different VG.
My topology is:
The LUN is presented to four nodes via FC, and not to the frontend.
When a VM is created, a new LV is created for it, and at that moment all other VMs in the same VG hang their IO for a while (5-15 seconds).
All other LVM operations on the nodes hang as well (lvdisplay, ...).
I’m not using any locking mechanisms such as clvmd.
I’m fully following the documentation.
Is this a bug or a feature?
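The stall described above can be timed with something like the following (hardware-dependent sketch; the VG/LV names `vg_one`/`test_lv` are placeholders for your own, and the probe LV size is arbitrary):

```shell
# Node B: once per second, time a small direct write to an existing LV
# in the shared VG and print the elapsed time
while sleep 1; do
    ( time dd if=/dev/zero of=/dev/vg_one/test_lv bs=4k count=1 oflag=direct 2>/dev/null ) 2>&1 | grep real
done

# Node A (meanwhile): create and remove a probe LV in the same VG
lvcreate -n probe -L 1G vg_one
lvremove -f vg_one/probe
```

If the `real` times on node B spike during the lvcreate on node A, the stall correlates with LV creation rather than with general storage load.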
Hello, what type of storage do you use for exporting the LUN? We have developed a driver for HPE 3PAR, which creates and exports a LUN for each VM disk.
Right now I’m using self-made all-flash storage based on quadstor.
It’s a test environment.
In production we use EMC (Unity, VNX).
It seems to me that the problem is in the LVM locking mechanism, so it doesn’t depend on the backend.
VMs on the same storage but a different datastore are not affected.
We are using LVM with the default configuration, except use_lvmetad = 0, and the daemon itself is stopped.
The locking type is set to the default.
LVs are created by the nodes themselves, not by the frontend. The frontend is not connected to FC.
Maybe I should adopt one of the LVM locking management mechanisms?
OK, I only have experience with cLVM in a multi-node (3-node) cluster. There is also the relatively new and better lvmlockd, which can use DLM or sanlock.
Is the storage driver compatible with lvmlockd?
I see that the LV commands differ a little when lvmlockd is used.
For example, activations/mounts should be exclusive, and so on.
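For reference, converting an existing shared VG to lvmlockd with sanlock looks roughly like this (a sketch based on lvmlockd(8); `vg_one` is a placeholder VG name, and each host additionally needs a unique `host_id` in /etc/lvm/lvmlocal.conf):

```shell
# 1. In /etc/lvm/lvm.conf on every node:
#      global { use_lvmlockd = 1 }

# 2. Start the daemons on every node (sanlock uses the wdmd watchdog):
systemctl start wdmd sanlock lvmlockd

# 3. Convert the existing VG to the sanlock lock type (once, from one node):
vgchange --lock-type sanlock vg_one

# 4. On each node, start the VG's lockspace before using it:
vgchange --lock-start vg_one

# 5. Activate LVs exclusively (a VM disk should be active on one node only):
lvchange -aey vg_one/some_lv
```

Note that sanlock stores its leases in a small internal LV inside the VG, so the VG needs a little free space for the conversion to succeed.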