Do people using either the official LINSTOR addon or the unofficial one (ping @kvaps) have feedback on one big VG containing a dozen disks versus one VG per disk (if that's what "one storage pool per physical backing device" means)?
I’d like to help you on your journey with LINSTOR.
We have community members that use either the official or the unofficial plugin; it depends on what you are looking for.
The official driver is supported by us, updated periodically, and improves with every release.
When it comes to the storage layout, if you have 10 SSDs on each node, my choice would be in line with the official recommendation. One storage pool per physical backing device gives you the advantage of confining failure domains to a single storage device.
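To make that concrete, here is a minimal sketch of what "one storage pool per physical backing device" could look like on a single node. The node name (nebula1), device paths, and VG/pool names are placeholders, not anything from this thread, and thin LVM is assumed because LINSTOR snapshots (relevant further down) need a thin-provisioned backend:

```bash
# /dev/sdb -> vg_ssd0 -> LINSTOR storage pool "pool_ssd0"
pvcreate /dev/sdb
vgcreate vg_ssd0 /dev/sdb
lvcreate -T -l 90%FREE vg_ssd0/thin_ssd0      # thin pool on this one SSD
linstor storage-pool create lvmthin nebula1 pool_ssd0 vg_ssd0/thin_ssd0

# /dev/sdc -> vg_ssd1 -> LINSTOR storage pool "pool_ssd1"
pvcreate /dev/sdc
vgcreate vg_ssd1 /dev/sdc
lvcreate -T -l 90%FREE vg_ssd1/thin_ssd1
linstor storage-pool create lvmthin nebula1 pool_ssd1 vg_ssd1/thin_ssd1

# repeat for the remaining SSDs and nodes, then verify:
linstor storage-pool list
```

Because each pool is backed by exactly one disk, losing an SSD only affects the resources that were placed on that pool.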
For now, I have not definitely chosen between the official and unofficial plugins; I started with the unofficial one after seeing the webinar and the blog post.
Do you have any concerns about this setup?
So, if I understand correctly, 1 SSD == 1 PV == 1 VG == 1 storage pool?
So, a resource group can span multiple storage pools?
One thing we want is to use snapshots for TM clone, but I'm wondering how that works with one storage pool per SSD.
And LINSTOR resource groups (RGs) can have multiple LINSTOR storage pools. So you always use the same LINSTOR RG; it will choose one of the SPs, and since there is a 1:1 mapping down to one physical disk, the failure domain for that resource is that single disk.
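A rough sketch of that flow, reusing the placeholder pool names from above and a hypothetical RG called rg_ssd. Whether --storage-pool accepts a list of pools depends on your LINSTOR version; if it does not, simply omit the option and let the autoplacer consider every pool:

```bash
# One RG covering the per-disk SSD pools; the autoplacer picks a pool per resource.
linstor resource-group create rg_ssd --place-count 2 --storage-pool pool_ssd0 pool_ssd1
linstor volume-group create rg_ssd

# Each spawned resource lands on exactly one pool (one physical disk) per replica node.
linstor resource-group spawn-resources rg_ssd vm-42-disk-0 20G
linstor resource list
```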
So, as far as I understand:
when a VM disk is first created (e.g. imported from the marketplace), a storage pool is chosen to store it
for any running VM, the disk will be "cloned" on the same storage pool when possible, falling back to "copy" if that storage pool is full
saving the running VM disk to long-term storage is always done in "copy" mode, so no storage is shared between them
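If it helps, this is roughly what the snapshot-based "clone" path looks like at the LINSTOR level, assuming a thin-provisioned pool (snapshots are not available on thick LVM) and hypothetical resource names; the exact restore subcommands may differ slightly between LINSTOR versions. The "copy" fallback would instead create a fresh resource (possibly on another pool) and copy the data block by block:

```bash
# Snapshot the source image resource; the snapshot lives in the same thin pool.
linstor snapshot create one-image-7 snap-clone-42

# Restore the snapshot into a new resource, which therefore stays on the
# same backing pool (same SSD) as the source image on that node.
linstor resource-definition create one-vm-42-disk-0
linstor snapshot volume-definition restore --from-resource one-image-7 \
        --from-snapshot snap-clone-42 --to-resource one-vm-42-disk-0
linstor snapshot resource restore --from-resource one-image-7 \
        --from-snapshot snap-clone-42 --to-resource one-vm-42-disk-0
```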