How to debug a VM BOOT_FAILURE

Hello

I’m hitting a bug when instantiating multiple VMs. With 2 VMs, the first one gets an error and the second one boots correctly (same VM template and same command: onetemplate instantiate templateName)

If I terminate that VM and deploy it again, it works fine

This only happens when I instantiate through the CLI. Sunstone can deploy both of them

This is the VM log, is there any other log I can check?

Fri Oct 22 14:34:07 2021 [Z0][VM][I]: New state is ACTIVE
Fri Oct 22 14:34:07 2021 [Z0][VM][I]: New LCM state is PROLOG
Fri Oct 22 14:34:45 2021 [Z0][VM][I]: New LCM state is BOOT
Fri Oct 22 14:34:45 2021 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/173/deployment.0
Fri Oct 22 14:34:49 2021 [Z0][VMM][E]: DEPLOY
Fri Oct 22 14:34:49 2021 [Z0][VM][I]: New LCM state is BOOT_FAILURE
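A quick way to pull just the error lines and the BOOT_FAILURE transition out of a VM log is a small grep filter. This is only a sketch: the log format matches the excerpt above, and the path /var/log/one/<VMID>.log is the usual front-end location, but check where your deployment writes VM logs.

```shell
# Filter an OpenNebula VM log down to error-level lines ([E]) and
# BOOT_FAILURE state changes. Log line format assumed to match the
# excerpt above: "Day Mon DD HH:MM:SS YYYY [Zx][SUBSYS][LEVEL]: msg".
vm_log_errors() {
  grep -E '\[E\]:|BOOT_FAILURE'
}

# On a real front-end you would run something like:
#   vm_log_errors < /var/log/one/173.log
# Demo on the lines from the post:
printf '%s\n' \
  'Fri Oct 22 14:34:45 2021 [Z0][VM][I]: New LCM state is BOOT' \
  'Fri Oct 22 14:34:49 2021 [Z0][VMM][E]: DEPLOY' \
  'Fri Oct 22 14:34:49 2021 [Z0][VM][I]: New LCM state is BOOT_FAILURE' \
  | vm_log_errors
```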

If I try to instantiate 3, the first two fail and only the last one works

This seems to happen only in the first group of instantiations after a cluster is created. If I execute the same commands after terminating all the VMs, it works fine. I’m waiting for the hosts to be “on” — should I be waiting for something else?
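One way to script the “wait for hosts” step is to check the STAT column of onehost list before instantiating anything. This is a sketch under assumptions: the column layout shown below mimics typical onehost list output, but verify it against your version (or use an XML/JSON output mode for robust parsing).

```shell
# hosts_ready reads `onehost list`-style output on stdin and succeeds
# only when the last column (STAT) of every host row is "on".
hosts_ready() {
  # NR > 1 skips the header line; exit nonzero if any host is not "on"
  awk 'NR > 1 && $NF != "on" { bad = 1 } END { exit bad }'
}

# In a real cluster you would poll, e.g.:
#   until onehost list | hosts_ready; do sleep 5; done
# Demo with sample output (column layout assumed, not taken from a live cluster):
printf '%s\n' \
  '  ID NAME    CLUSTER  TVM    ALLOCATED_CPU    ALLOCATED_MEM STAT' \
  '   0 host01  default    1   100 / 800 (12%)  1G / 7.5G (13%) on' \
  '   1 host02  default    0     0 / 800 (0%)   0K / 7.5G (0%) on' \
  | hosts_ready && echo "all hosts on"
```

That said, a host reporting “on” only means monitoring succeeded; it doesn’t guarantee every driver-side resource (bridges, datastores) is ready, which may matter for the first deployment after cluster creation.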

Hi @Samu,

Could you share the content of oned.log containing the VM boot failure?

Sure, here it is: oned.log (186.6 KB)