In our setup, the OpenNebula frontend is not exposed on the Internet for security reasons.
With such a setup, we can't run a service like OneKE on our public network, because OneGate is not publicly reachable.
I started reading the libvirt documentation on channels to see whether there is a way to expose a communication port inside the virtual machine and have the onegate CLI communicate through that port instead.
On the hypervisor side, we would probably need something to forward that communication to the frontend.
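To make the idea concrete, here is a minimal sketch of what such a libvirt channel could look like. The socket path and target name are purely illustrative (they are not an OpenNebula convention), and note that a single virtio-serial port does not multiplex connections, so this would only cover simple request/response traffic:

```xml
<!-- Hypothetical channel in the domain XML: a virtio-serial port inside
     the guest, backed by a UNIX socket on the hypervisor.
     Path and name below are made up for illustration. -->
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/channels/one-123.onegate.sock'/>
  <target type='virtio' name='org.example.onegate.0'/>
</channel>
```

Inside the guest the port would show up as `/dev/virtio-ports/org.example.onegate.0`, and on the hypervisor something like `socat` could bridge the UNIX socket to the OneGate endpoint on the frontend.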
Before digging deeper into the subject, does anyone have an idea or suggestion on this topic?
Yes, we are aware of this issue. We're actively working on providing the "onegate-proxy" service (currently in the testing phase), which will be installed on the hypervisor hosts. It will be based on https://www.kernel.org/doc/Documentation/networking/tproxy.txt, so that all traffic targeted at the 169.254.169.254:5030 (example) endpoint inside guests is intercepted and routed via the hypervisor hosts. Then something like an SSL tunnel or a VPN (between frontends and hosts) could be used to reach the OneGate endpoint.
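For reference, the standard TPROXY recipe from the kernel documentation looks roughly like the sketch below on the hypervisor. The address 169.254.169.254:5030 is the example endpoint from above; the mark value, routing table number and proxy port are arbitrary choices, and a userspace proxy bound with `IP_TRANSPARENT` still has to listen on the `--on-port` and relay to the real frontend:

```
# Route marked packets to the local machine (config fragment, needs root).
ip rule add fwmark 0x1/0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

# Intercept guest traffic to the OneGate example endpoint and hand it
# to a transparent proxy listening locally on port 5030.
iptables -t mangle -A PREROUTING -p tcp -d 169.254.169.254 --dport 5030 \
    -j TPROXY --tproxy-mark 0x1/0x1 --on-ip 127.0.0.1 --on-port 5030
```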
It seems that the proxy is not a good solution for you after all. Our suggestion would be to modify the templates (Flow and VNF VM) and add a third VNET, such that hosts deployed there would have access only to OneGate. This VNET would then be attached to the VNF only, and with the help of a start script you could configure NAT inside the VNF to route OneGate traffic coming from the RKE2 nodes. Please let us know whether that is possible in your infrastructure; we can provide a more detailed description of how to achieve all that.
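A start script along these lines could do the NAT inside the VNF. The interface name and the OneGate address are assumptions for the sake of the example (here `eth2` stands for the third VNET and `ONEGATE_IP` for the frontend's address on it); adapt them to the actual context variables of your VNF:

```
#!/bin/sh
# Sketch of a VNF start script: NAT OneGate traffic from the RKE2 nodes
# out through the OneGate-only VNET. eth2 and ONEGATE_IP are assumptions.
ONEGATE_IP=10.0.0.1   # hypothetical frontend address on the third VNET

# Allow the VNF to forward traffic between VNETs.
sysctl -w net.ipv4.ip_forward=1

# Send OneGate traffic from the nodes to the frontend on the third VNET,
# and masquerade it so replies come back through the VNF.
iptables -t nat -A PREROUTING -p tcp --dport 5030 \
    -j DNAT --to-destination "${ONEGATE_IP}:5030"
iptables -t nat -A POSTROUTING -o eth2 -p tcp --dport 5030 -j MASQUERADE
```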