I can't run Docker containers with Firecracker

Hello guys!

I'm new to OpenNebula, and after successfully integrating vCenter into my OpenNebula setup, I want to level up and add a Firecracker hypervisor for running Docker containers.
I followed the OpenNebula documentation, and my front-end communicates well with my Firecracker node (passwordless SSH works). I successfully added my Firecracker host in Sunstone, and I can see its RAM and CPU.
The problem begins with the datastores. I added [Images] and [Files] datastores on the Firecracker host, but when I try to add a [System] datastore, I don't see its capacity.

I tried adding it in [shared mode], and this time it works.

But when I instantiate a VM from a template, once again following the docs, my freshly instantiated VM never runs; it stays pending forever.

So if anybody has had a similar issue, any information would be helpful.


Info:
Firecracker node: HP ProLiant G6 - AlmaLinux 8 and Firecracker
Front-end: HP ProLiant G6 - ESXi 6.5 - vOneCloud.ova
SSH is used for the interconnection
Local storage on the Firecracker node is used for the containers

Hello @Tetley,

Please check that your VM has no special SCHED_REQUIREMENTS. You can also check /var/log/one/sched.log for scheduling errors.
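A minimal sketch of that log check — the sample line below is fabricated for illustration (the real file is /var/log/one/sched.log on the front-end, and the exact message wording may differ):

```shell
# Fabricated sched.log sample; on a real front-end you would grep the
# live file at /var/log/one/sched.log instead. The message wording is
# an assumption for illustration only.
SCHED_LOG=$(mktemp)
cat > "$SCHED_LOG" <<'EOF'
Wed Feb 23 13:40:01 2022 [Z0][VM][I]: Cannot dispatch VM: datastore capacity unknown
EOF

# Pending VMs usually leave a "Cannot dispatch" trace in the scheduler log:
grep -i 'cannot dispatch' "$SCHED_LOG"
```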


Hello @ahuertas

I checked, and my VM (a Debian image found on Docker Hub) doesn't have SCHED_REQUIREMENTS.

But now I get this error:

Driver Error
Wed Feb 23 13:44:44 2022: Error executing image transfer script:
INFO: clone: Cloning /home/OpenNebula/164/102294766cba48eb2ba304862bbfbea0 in /home/OpenNebula//166/57/disk.0
ERROR: clone: Command "
    set -e -o pipefail
    if [ -d "/home/OpenNebula/164/102294766cba48eb2ba304862bbfbea0.snap" ]; then
        SRC_SNAP="102294766cba48eb2ba304862bbfbea0.snap"
    fi
    tar -C /home/OpenNebula/164 --transform="flags=r;s|102294766cba48eb2ba304862bbfbea0|disk.0|" -cSf - 102294766cba48eb2ba304862bbfbea0 $SRC_SNAP | ssh "tar -xSf - -C /home/OpenNebula//166/57"
" failed:
tar: /home/OpenNebula/164: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
Error copying /home/OpenNebula/164/102294766cba48eb2ba304862bbfbea0 to

The /home/OpenNebula/164/ path is the [Images] datastore on my Firecracker node.
The /home/OpenNebula//166/ path is the [System] datastore on my Firecracker node.

If I understand correctly, my [Images] and [System] datastores, which are on the same host, are not communicating with each other. Once again, I think the problem is the [System] datastore of my Firecracker host, which seems faulty.

Do you think I should reinstall my Firecracker node, or is something wrong with my front-end configuration?

Thank you

Hello @Tetley,

Before reinstalling I would check that everything is in place:

  • Check that the file /home/OpenNebula/164/102294766cba48eb2ba304862bbfbea0 exists and is readable.
  • Run the failing command manually with set -x to see exactly where it breaks.
  • Check that the host and datastores are correctly monitored.
  • Check /var/log/one/oned.log to see if there are any other errors.
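To illustrate the second point, here is a minimal, self-contained sketch of the clone step from the error message, re-run locally with set -x. The temp directories and dummy image are stand-ins for the real datastore paths (/home/OpenNebula/164 and /home/OpenNebula//166/57) and the real image hash, and the ssh hop is dropped so the tar pipeline itself can be exercised:

```shell
# Local re-run of the failing clone pipeline (assumes GNU tar;
# SRC_DIR/DST_DIR stand in for the Images and System datastore paths).
set -e   # the real driver script also sets -o pipefail
set -x

SRC_DIR=$(mktemp -d)   # stands in for /home/OpenNebula/164
DST_DIR=$(mktemp -d)   # stands in for /home/OpenNebula//166/57
IMAGE=102294766cba48eb2ba304862bbfbea0

# Dummy image file so the pipeline has something to copy
echo "dummy image" > "$SRC_DIR/$IMAGE"

# Same logic as the driver: include the .snap directory if it exists
if [ -d "$SRC_DIR/$IMAGE.snap" ]; then
    SRC_SNAP="$IMAGE.snap"
fi

# The tar pipeline from the error message, renaming the image to disk.0
# on the fly; the real driver pipes this through ssh to the System datastore.
tar -C "$SRC_DIR" --transform="flags=r;s|$IMAGE|disk.0|" -cSf - "$IMAGE" $SRC_SNAP \
  | tar -xSf - -C "$DST_DIR"

ls -l "$DST_DIR/disk.0"
```

If the real command fails with "Cannot open: No such file or directory" on the source directory, that path simply does not exist on the host where the script runs, which is worth verifying on both the front-end and the node.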