Best Linstor strategy to manage multi-disk hyperconverged infrastructure

Hello.

I discovered the LINSTOR storage driver at OpenNebulaCon 2022 and regret not knowing about it when I set up our LizardFS infrastructure.

I started some tests and I’m quite happy with the solution, but I wonder how to correctly manage my 10 SSDs per hypervisor.

Reading the documentation, I found that it may be better to create one storage pool per physical backing device.

Do people using either the official LINSTOR addon or the unofficial one (ping @kvaps :wink:) have feedback on one big VG with a dozen disks versus one VG per disk (if that’s what one storage pool per physical backing device means)?
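
To make sure I’m comparing the right things, here is roughly what I understand the two options to be at the LVM level (device and VG names are just examples from my test setup):

```bash
# Option A: one big VG spanning all the SSDs, later exposed as a single storage pool
pvcreate /dev/sdb /dev/sdc /dev/sdd    # ...and so on for the remaining disks
vgcreate vg_ssd_all /dev/sdb /dev/sdc /dev/sdd

# Option B: one VG per SSD, each later exposed as its own storage pool
pvcreate /dev/sdb
vgcreate vg_ssd0 /dev/sdb
pvcreate /dev/sdc
vgcreate vg_ssd1 /dev/sdc
# ...repeated for each of the remaining disks
```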

Regards.

Hello Daniel,

I’d like to help you on your journey with LINSTOR.

We have community members who use either the official or the unofficial plugin; it depends on what you are looking for.

The official driver is supported by us, updated periodically, and improves with every release.

When it comes to the storage layout, if you have 10 SSDs on each node, my choice would follow the official recommendation: one storage pool per physical backing device gives you the advantage of confining failure domains to a single storage device.
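
As a minimal sketch, assuming LVM thin pools as the backing storage (thin provisioning is also what you need if you ever want LINSTOR snapshots), the per-device layout would look something like this on each node; the node, device, VG and pool names are only placeholders:

```bash
# repeat for every SSD on every hypervisor
pvcreate /dev/sdb
vgcreate vg_ssd0 /dev/sdb
lvcreate -l 100%FREE -T vg_ssd0/thin_ssd0       # one thin pool per VG

# register the thin pool as its own LINSTOR storage pool
linstor storage-pool create lvmthin hyp1 pool_ssd0 vg_ssd0/thin_ssd0

# verify
linstor storage-pool list
```

If one SSD fails, only the resources placed in that single storage pool are affected, which is the failure-domain advantage mentioned above.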

Do you have any concerns about this setup?


Hello @yusuf.

For now, I have not definitely chosen between the official and unofficial plugins; I started with the unofficial one after seeing the webinar and the blog post.

> Do you have any concerns about this setup?

So, if I understand correctly, 1 SSD == 1 PV == 1 VG == 1 storage pool?

And can a resource group span multiple storage pools?
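
For context, here is the kind of layout I am planning to test: a resource group where I don’t pin a specific --storage-pool, in the hope that the autoplacer then picks freely among the per-disk pools (the names are just from my tests):

```bash
# resource group with 2 replicas; no --storage-pool restriction, so (if I
# understand the autoplacer correctly) any of the per-disk pools can be chosen
linstor resource-group create rg_ssd --place-count 2
linstor volume-group create rg_ssd

# spawn a test resource and check where its replicas actually landed
linstor resource-group spawn-resources rg_ssd test_disk0 20G
linstor volume list        # shows the storage pool used on each node
```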

One thing we want is to use snapshots for the TM clone operation, but I’m wondering how that works with one storage pool per SSD.
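
To clarify what I mean, this is the flow I would like to end up with, assuming the per-disk pools are thin-provisioned (as far as I understand, LINSTOR snapshots need an lvmthin or ZFS backed pool):

```bash
# take a snapshot of a resource; the snapshot lives in the same thin pool,
# i.e. on the same SSD, as the source volume
linstor snapshot create test_disk0 snap1
linstor snapshot list
```

So my question is essentially whether a clone made from such a snapshot has to stay on the same per-disk storage pool as the source image, or whether it can end up on any pool of the resource group.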