Sorry in advance if this is a silly/impossible idea.
I have 5+ workstations set up throughout the house that my family and I use as our main computers. They are all currently running Windows 10.
What I’d like to do is set these all up as part of an OpenNebula cloud, and then run a Windows gaming VM on each workstation with a hardware pass-through for a GPU, keyboard, mouse, USB sound, and local storage for the VM. A friend has shown me the gaming VM concept using Unraid, and it seems like a very good way to utilize hardware more fully.
The idea is to get full usage out of the hardware I already have and that I may add in the future, as well as start expanding my career capabilities, as I have no practical cloud infrastructure/devops experience.
I realize that this isn’t a common use case, but I am curious whether it’s possible at this stage of development. From the reading I’ve done so far, GPU passthrough does seem to be supported, but more as a cloud-workload concept (headless compute) than local gaming.
Anyone have any thoughts/opinions, or ideally a pointer to guides that may help?
What you’re describing is “multiseat” technology. I know it’s possible to configure your workstation as an OpenNebula node. The most difficult part would be using, for example, an Nvidia GPU in your Windows VM, due to Nvidia’s driver limitations. Also, the motherboard configuration sometimes obstructs PCI passthrough between VMs, e.g. for the USB controller and so on.
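For reference, the usual workaround for Nvidia’s VM detection (the “Code 43” error in the Windows guest) is done at the libvirt level. Something like the following fragment in the domain XML hides the hypervisor from the driver; whether it’s still needed depends on your driver version (Nvidia relaxed this check in recent drivers), and the `vendor_id` value is arbitrary:

```xml
<features>
  <hyperv>
    <!-- spoof the Hyper-V vendor id the Nvidia driver checks for -->
    <vendor_id state='on' value='1234567890ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```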
I have gotten this to work using Unraid (linux with some proprietary/open source gui tools out-of-the-box to set up gaming VMs), and have read about doing the same with multiple Linux distributions and several open-source tools to accomplish the same thing, using NVidia graphics cards.
When you say that may be more difficult here, is that because OpenNebula doesn’t really support Nvidia graphics cards in general, or doesn’t really support GPU passthrough to VMs? I mean, it would make sense, since this isn’t what OpenNebula is typically used for.
Edit:
After looking up what exactly multiseat technology is, I’m not sure that’s really the case here, unless OpenNebula compute nodes are required to have a separate video card, mouse, and keyboard after configuration.
I only really need one video card, mouse, and keyboard for the local gaming VM; the other compute node resources would be accessed/assigned from the OpenNebula management console, correct? The compute node VMs I’d be using wouldn’t really need GUIs.
Hello!
Generally speaking, OpenNebula is just orchestration software on top of KVM, so you get almost all of KVM’s features, including PCI passthrough.
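As an illustration, OpenNebula exposes host PCI devices to VMs through the `PCI` attribute in the VM template; a fragment might look like this (the vendor/device IDs below are just examples for an Nvidia card, taken from `lspci -nn` on the host):

```
PCI = [
  VENDOR = "10de",    # Nvidia
  DEVICE = "1b80",    # example device id, check lspci -nn on your host
  CLASS  = "0300" ]   # VGA controller
```

The host’s PCI devices have to show up in the node monitoring first, so check that the device is listed for the host in the management console before scheduling the VM.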
My friend, who tried to build a “multiseat” setup on a desktop KVM machine, ran into several issues:
Not all PCI slots on the motherboard are equal: GPU passthrough doesn’t work from every slot (it depends on how the slots map to IOMMU groups);
If you use an Nvidia GPU, you should know about Nvidia’s driver limitations inside KVM virtual machines. The protection depends on the driver version, card manufacturer, and so on;
He had to patch the GPU BIOS for VFIO support;
He had to try various qemu-kvm parameters (I’m not sure all of them can be configured from OpenNebula);
In the Windows VM he had issues installing the GPU drivers;
You need to find the correct boot order for the KVM node so that all of these conditions are met.
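For the bare-KVM experiments, the basic host-side setup is enabling the IOMMU and binding the GPU to vfio-pci before the host driver grabs it. A minimal sketch, assuming an AMD board and example device IDs (get your real IDs from `lspci -nn`; the GPU and its HDMI audio function must both be bound):

```
# /etc/default/grub — enable the IOMMU
# (use intel_iommu=on instead on Intel platforms)
GRUB_CMDLINE_LINUX="amd_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — claim the GPU + its audio function for vfio-pci
# IDs are examples only
options vfio-pci ids=10de:1b80,10de:10f0
```

After regenerating the grub config and rebooting, `lspci -nnk` should show `vfio-pci` as the kernel driver in use for both functions.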
To wrap up: I’d suggest you first get this working on bare KVM, and then compare that experience with OpenNebula’s capabilities.
This is actually very helpful, and encouraging, as I’ve seen several guides for getting this working on a few Linux distros using KVM as the hypervisor.
Thanks for your time, and I’ll try to get this set up! I will start with just Linux using KVM as recommended, and move on from there.
It will also help that I’m using the AMD Threadripper processor and motherboard from my friend, who was previously running multiseat on it with Nvidia video cards (under Unraid).