Hello,
I’m trying to share an NVIDIA L40S GPU among several VMs using vGPU (up to 32 VMs, depending on the vGPU profile; I’m using L40S-1Q). First of all, I read the NVIDIA documentation and, after requesting the 90-day free license, I installed the “nvidia-vgpu” driver on my Ubuntu 22.04 host.
Then I enabled the “virtual functions” through the NVIDIA driver, and according to lspci the operating system now sees 32 vGPU devices.
After that, I read and followed the NVIDIA vGPU & MIG and PCI Passthrough guides. However, I still don’t get any available PCI device.
My /var/lib/one/remotes/etc/im/kvm-probes.d/pci.conf is:
:filter:
- '10de:*'
:short_address:
- 'c3:00.4'
- 'c3:00.5'
- 'c3:00.6'
- 'c3:00.7'
- 'c3:01.0'
- 'c3:01.2'
- 'c3:01.3'
- 'c3:01.4'
- 'c3:01.5'
- 'c3:01.6'
- 'c3:01.7'
- 'c3:02.0'
- 'c3:02.1'
- 'c3:02.2'
- 'c3:02.3'
- 'c3:02.4'
- 'c3:02.5'
- 'c3:02.6'
- 'c3:02.7'
- 'c3:03.0'
- 'c3:03.1'
- 'c3:03.2'
- 'c3:03.3'
- 'c3:03.4'
- 'c3:03.5'
- 'c3:03.6'
- 'c3:03.7'
- 'c3:04.0'
- 'c3:04.1'
- 'c3:04.2'
- 'c3:04.3'
:device_name:
- 'NVIDIA L40S'
:nvidia_vendors:
- '10de'
From the CLI:
virsh nodedev-dumpxml pci_0000_c3_00_0 | egrep 'domain|bus|slot|function'
<domain>0</domain>
<bus>195</bus>
<slot>0</slot>
<function>0</function>
<capability type='virt_functions' maxCount='32'>
<address domain='0x0000' bus='0xc3' slot='0x00' function='0x4'/>
<address domain='0x0000' bus='0xc3' slot='0x00' function='0x5'/>
<address domain='0x0000' bus='0xc3' slot='0x00' function='0x6'/>
<address domain='0x0000' bus='0xc3' slot='0x00' function='0x7'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x0'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x1'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x2'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x3'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x4'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x5'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x6'/>
<address domain='0x0000' bus='0xc3' slot='0x01' function='0x7'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x0'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x1'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x2'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x3'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x4'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x5'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x6'/>
<address domain='0x0000' bus='0xc3' slot='0x02' function='0x7'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x0'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x1'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x2'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x3'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x4'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x5'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x6'/>
<address domain='0x0000' bus='0xc3' slot='0x03' function='0x7'/>
<address domain='0x0000' bus='0xc3' slot='0x04' function='0x0'/>
<address domain='0x0000' bus='0xc3' slot='0x04' function='0x1'/>
<address domain='0x0000' bus='0xc3' slot='0x04' function='0x2'/>
<address domain='0x0000' bus='0xc3' slot='0x04' function='0x3'/>
<address domain='0x0000' bus='0xc3' slot='0x00' function='0x0'/>
It seems the whole process from the NVIDIA documentation completed correctly, but on the OpenNebula side something is not working: after modifying /var/lib/one/remotes/etc/im/kvm-probes.d/pci.conf and running “onehost sync --force”, “onehost show -j 0” shows this:
[...]
"MAX_DISK": "1800515",
"USED_DISK": "74082"
},
"PCI_DEVICES": {},
"NUMA_NODES": {
"NODE": {
"CORE": [
{
"CPUS": "21:-1,45:-1",
"DEDICATED": "NO",
"FREE": "2",
[...]
There are NO PCI_DEVICES, although the PCI filter is set:
oneadmin@test-gpu:~$ onehost show 0 -j | grep PCI
"PCI_DEVICES": {},
"PCI_FILTER": "10de:26b9",
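If it helps with debugging: as far as I understand (I may be wrong here), the KVM monitoring probe builds PCI_DEVICES by parsing `lspci -vmmnn` output and applying the :filter: entries, so ‘10de:*’ should match any NVIDIA function. A tiny simulation of that match against a made-up record in that format (only the format matters, the contents are invented):

```shell
# Hypothetical `lspci -vmmnn` record for one of the VFs (contents invented for illustration)
record='Slot: c3:01.3
Class: 3D controller [0302]
Vendor: NVIDIA Corporation [10de]
Device: AD102GL [L40S] [26b9]'

# A filter like '10de:*' only constrains the vendor id, so this should match
echo "$record" | grep -q 'Vendor:.*\[10de\]' && echo match
```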
Then, in Sunstone, if I try to modify the host in the Infrastructure menu, the PCI tab doesn’t show any PCI device.
So I can’t attach any PCI device (i.e. any NVIDIA vGPU device) to any VM.
Also, if I modify the VM template and, under “PCI Devices”, add a PCI device as “Specific device” with the value “c3:01.3”, the VM doesn’t start and the scheduler logs this error in rank_sched.log:
rank_sched.log:Fri Dec 19 11:39:56 2025 [Z0][SCHED][DD]: Host 0 discarded for VM 19. Unavailable PCI device.
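For reference, my understanding from the PCI Passthrough guide is that the resulting template section should look roughly like this (SHORT_ADDRESS taken from my host; treat this as a sketch, not a verified-working snippet):

```
PCI = [
  SHORT_ADDRESS = "c3:01.3"
]
```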
What am I doing wrong? Am I missing something?
Please, I need help. If someone wants more detailed information about how my system is configured, please tell me.
Thanks!!!