Best Linstor strategy to manage multi-disk hyperconverged infrastructure

Hello.

I discovered the LINSTOR storage driver at OpenNebulaCon 2022 and regret not knowing about it when I set up our LizardFS infrastructure.

I started some tests and I’m quite happy with the solution, but I wonder how to correctly manage my 10 SSDs per hypervisor.

Reading the documentation, I found that it may be better to create one storage pool per physical backing device.

Do people using either the official LINSTOR addon or the unofficial one (ping @kvaps :wink:) have feedback on one big VG with a dozen disks versus one VG per disk (if that’s what one storage pool per physical backing device means)?
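To make the two options concrete, this is what I mean (just a sketch, with /dev/sdb…/dev/sdk and the VG names as placeholders):

```
# Option A: one big VG spanning all ten SSDs
pvcreate /dev/sd{b..k}
vgcreate vg_ssd /dev/sd{b..k}

# Option B: one PV/VG per SSD
for dev in /dev/sd{b..k}; do
    pvcreate "$dev"
    vgcreate "vg_${dev##*/}" "$dev"    # vg_sdb, vg_sdc, ...
done
```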

Regards.

Hello Daniel,

I’d like to help you on your journey with LINSTOR.

We have community members using either the official or the unofficial plugin; it depends on what you are looking for.

The official driver is supported by us, updated periodically, and improves with every release.

When it comes to the storage layout, with 10 SSDs on each node my recommendation follows the official statement: one storage pool per physical backing device gives you the advantage of confining the failure domain to a single storage device.
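As a rough sketch of that layout (node, device, VG and pool names are only examples; please check `linstor storage-pool create lvm --help` for your version, and consider lvmthin-backed pools if you need snapshots):

```
# On each node: one PV/VG per SSD
pvcreate /dev/sdb && vgcreate vg_ssd0 /dev/sdb
pvcreate /dev/sdc && vgcreate vg_ssd1 /dev/sdc
# ... repeat for the remaining SSDs

# Register each VG as its own LINSTOR storage pool on that node
linstor storage-pool create lvm hv01 ssd0 vg_ssd0
linstor storage-pool create lvm hv01 ssd1 vg_ssd1
# ... and the same on hv02, hv03, ...
```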

Do you have any concerns about this setup?


Hello @yusuf.

For now, I have not definitively chosen between the official and unofficial plugins; I started with the unofficial one after seeing the webinar and the blog post.

Regarding “Do you have any concerns about this setup?”:

So, if I understand correctly: 1 SSD == 1 PV == 1 VG == 1 storage pool.

So a resource group can span multiple storage pools?

One thing we want is to use snapshots for the TM clone, but I’m wondering how that works with one storage pool per SSD.

Someone replied on the mailing list:

And LINSTOR RGs can have multiple LINSTOR storage pools. So you always use the same LINSTOR RG, it will choose one of the SPs and as there is a 1:1 mapping down to 1 physical disk your failure domain for that resource is that single disk.
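If I read that right, the setup would look roughly like this (a sketch only; I have not checked whether listing several storage pools on `--storage-pool` works like this in every LINSTOR version, so `linstor resource-group create --help` is your friend):

```
# A resource group allowed to place volumes in any of the per-disk pools
linstor resource-group create rg_ssd --place-count 2 --storage-pool ssd0 ssd1 ssd2
linstor volume-group create rg_ssd

# The autoplacer then picks one of those pools for each replica it places
linstor resource-group spawn-resources rg_ssd vm42-disk0 20G
```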

So, as far as I understand:

  1. when a VM disk is first created (e.g. imported from the marketplace), a storage pool is chosen to store it
  2. for any running VM, the disk will be “cloned” on the same storage pool when possible, falling back to “copy” if that storage pool is full (see the sketch after this list)
  3. saving the running VM disk to long-term storage is always done in “copy” mode, so no storage is shared between them
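For reference, this is my rough understanding of what the snapshot-based clone does at the LINSTOR level (thin pools required; the commands follow the user guide, but I have not verified every flag, and the resource/snapshot names are made up):

```
# Snapshot the source image resource (needs a thin-provisioned storage pool)
linstor snapshot create one-image-42 snap1

# Restore the snapshot into a new resource (the clone); with lvmthin the
# restored volume lives in the same thin pool as the snapshot, which is why
# the clone stays on the same storage pool as the original
linstor resource-definition create one-vm-7-disk-0
linstor snapshot volume-definition restore --from-resource one-image-42 --from-snapshot snap1 --to-resource one-vm-7-disk-0
linstor snapshot resource restore --from-resource one-image-42 --from-snapshot snap1 --to-resource one-vm-7-disk-0
```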

Thanks.