Failed VM deployment vs. host state

Hi guys,
I’m experiencing some HW-related errors that from time to time make my datastores unavailable on some Nebula nodes. Deployment of any VM then fails, but since libvirtd is still running and the probes do not time out, Nebula considers the host healthy and keeps deploying more VMs onto it.

Is there a way to feed this information back into host monitoring or the scheduler? Something like: the last six VM deployments on this node failed, so the node is probably faulty and should be avoided.
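As a possible workaround until something smarter exists: OpenNebula lets you drop custom IM probes onto the hosts (e.g. under `remotes/im/kvm-probes.d/` for KVM, depending on your version), and any `KEY=VALUE` lines they print end up as host attributes. A minimal sketch of such a probe that checks whether the datastore directories are actually writable follows; the attribute name `DATASTORES_WRITABLE` and the default path `/var/lib/one/datastores` are my assumptions, adjust them to your setup:

```shell
#!/bin/sh
# Sketch of a custom IM probe: report whether all local datastore
# directories are writable. DATASTORES_WRITABLE is a made-up attribute
# name; OpenNebula turns the KEY=VALUE output into a host attribute.

check_datastores() {
    ds_root="$1"
    for ds in "$ds_root"/*; do
        # Skip anything that is not a directory (or an empty glob).
        [ -d "$ds" ] || continue
        probe_file="$ds/.one_probe_$$"
        # If we cannot create a file, the datastore is effectively down.
        if ! touch "$probe_file" 2>/dev/null; then
            echo "DATASTORES_WRITABLE=NO"
            return
        fi
        rm -f "$probe_file"
    done
    echo "DATASTORES_WRITABLE=YES"
}

# Default datastore root on a typical KVM node (assumption).
check_datastores "${1:-/var/lib/one/datastores}"
```

With that attribute in place, VM templates could then carry a scheduler requirement such as `SCHED_REQUIREMENTS = "DATASTORES_WRITABLE = \"YES\""`, so the scheduler stops placing VMs on the broken node even though libvirtd itself looks fine. It doesn't cover the "last N deployments failed" heuristic you describe, though; that would still need support in the scheduler itself.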