what’s your experience with running virtualization (KVM/libvirt) and OpenNebula on large NUMA machines? Think SGI® UV™ series and similar.
Have you tried it?
What’s the performance?
Is manual CPU pinning necessary?
Any pitfalls we should be aware of (software versions, HW issues, etc.)?
I’ve not done anything on those mainframe-sized systems (what we used to call “real servers[tm]”), but I have done at least some benchmarking…
Xen: the cpupools implementation should cope OK with such workloads, I think.
Ask on xen-users or visit a hackathon to get better input.
With the default Xen scheduler you would absolutely need vCPU pinning to survive if you run high-load VMs. If you just run many VMs, none with very large vCPU counts (say 64-128 vCPUs), then pinning isn’t strictly necessary.
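For the pinning case above, a minimal sketch of what that looks like in an xl guest config. The guest name, vCPU count, and the assumption that pCPUs 0-15 belong to one NUMA node are all made up for illustration; check your actual topology with `xl info -n` first:

```
# numa-guest.cfg -- hypothetical example, adjust CPU ranges to your topology
name   = "numa-guest"
vcpus  = 8
memory = 16384
# Restrict all vCPUs to pCPUs 0-15 (assumed here to be one NUMA node)
cpus   = "0-15"
```

Individual vCPUs can also be re-pinned at runtime with `xl vcpu-pin <domain> <vcpu> <pcpu>`.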
KVM: I don’t know anyone running anything heavy virtualized on KVM, so hopefully someone else can chime in there. I’d expect you’d need to take more care with NUMA placement, but the interrupt routing / tuning side of NUMA should be easier (as far as “easy” goes on x86 big iron).
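On the KVM/libvirt side, the NUMA care mentioned above is usually expressed in the domain XML via `<cputune>` and `<numatune>`. A hedged sketch, assuming an 8-vCPU guest and that pCPUs 0-7 sit on NUMA node 0 (verify with `virsh capabilities` or `numactl --hardware`):

```xml
<!-- Illustrative fragment only; CPU/node numbering is an assumption -->
<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
  <vcpupin vcpu='4' cpuset='4'/>
  <vcpupin vcpu='5' cpuset='5'/>
  <vcpupin vcpu='6' cpuset='6'/>
  <vcpupin vcpu='7' cpuset='7'/>
</cputune>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```

Pinning memory (`numatune`) together with the vCPUs matters: pinned vCPUs with memory allocated on a remote node would defeat the purpose.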
Should be fine; even systems that large aren’t bigger than a large cluster.
Yes, that’s pretty much what I expected. If I run a bunch of small(ish) VMs without crossing NUMA node boundaries, the system should behave reasonably well. Anything else will require pinning and tuning.
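The “small VMs that don’t cross node boundaries” idea above can be sketched as a simple placement plan: given the host’s NUMA topology, assign each VM’s vCPUs to pCPUs of a single node. The topology and VM sizes below are invented for illustration, not taken from the thread:

```python
# Hypothetical first-fit placement keeping each VM inside one NUMA node.
def plan_pinning(nodes, vms):
    """nodes: {node_id: [pcpu, ...]}; vms: {vm_name: vcpu_count}.
    Returns {vm_name: (node_id, [pcpus])}; raises if a VM can't fit in one node."""
    free = {n: list(cpus) for n, cpus in nodes.items()}
    plan = {}
    # Place the largest VMs first to reduce fragmentation.
    for vm, count in sorted(vms.items(), key=lambda kv: -kv[1]):
        for node, cpus in free.items():
            if len(cpus) >= count:
                plan[vm] = (node, cpus[:count])
                free[node] = cpus[count:]
                break
        else:
            raise ValueError(f"{vm} ({count} vCPUs) does not fit in any single node")
    return plan

# Example: two 16-core nodes, a handful of small(ish) VMs.
nodes = {0: list(range(0, 16)), 1: list(range(16, 32))}
vms = {"web": 4, "db": 8, "cache": 4, "batch": 12}
for vm, (node, cpus) in plan_pinning(nodes, vms).items():
    print(f"{vm}: node {node}, pCPUs {cpus[0]}-{cpus[-1]}")
```

The output of such a plan maps directly onto `xl vcpu-pin` or libvirt `<vcpupin>` entries; anything that doesn’t fit in one node is exactly the case that needs the heavier pinning and tuning.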
If I get a chance to run a few hands-on tests, I will post the results here. For posterity.