Both the front end (Sunstone) and the back end (KVM) are on the same physical server.
The server has two network interfaces.
I carefully followed the OpenNebula install guide and everything went very well; it all works. The problem came when I decided to use the second network interface as well. Various resources on the Internet suggest it is best to utilize both Ethernet cards: one can be put into a virtual bridge to serve the networking needs of the virtual machines (in a separate VLAN), while the other can be used for administrative purposes, such as accessing the web GUI or transferring the VM disk images (in a separate VLAN).
But when I configure the second LAN card, I lock myself out of the machine. More specifically, I can then connect to the server only via the second network interface (enp5s2); I cannot connect via the first interface (eno1), which is attached to the bridge (br0).
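For reference, this is roughly the kind of configuration I mean - a sketch, not my literal files: the bridge address 10.24.1.10 is a placeholder, and lines like UUID/HWADDR are omitted. The 192.168.1.153 address and the two gateways are the real ones from my setup.

```
# /etc/sysconfig/network-scripts/ifcfg-eno1  (enslaved to the bridge, carries no IP)
DEVICE=eno1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0  (the bridge carries the IP for the VM network)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.24.1.10
NETMASK=255.255.255.0
GATEWAY=10.24.1.1

# /etc/sysconfig/network-scripts/ifcfg-enp5s2  (the administrative interface)
DEVICE=enp5s2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.153
NETMASK=255.255.255.0
GATEWAY=192.168.1.200
```

Note that with GATEWAY= set in both files the machine ends up with two default routes, which is what I tried to eliminate next.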
I commented out GATEWAY= in the ifcfg-enp5s2 file, and the result was the same. I tried the same in ifcfg-br0 - still the same. Any other thoughts?
By the way, please note that everything works fine when eno1 is taken out of the virtual bridge (even with GATEWAY= set in both files), so there must be something else going on.
Well, OK, it seems I know very little about networking then. I guess this is going to be one of those things that I won't figure out unless someone tells me exactly how to do it, step by step.
So, is anyone willing to tell me just how to “set up a route for the 192.168.1.0 network”? I would appreciate it.
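In case it helps, here is my guess at what that could look like - and it really is just a guess at what was meant; the interface name and addresses are the ones from my setup above:

```
# one-off, lost on reboot:
ip route add 192.168.1.0/24 dev enp5s2 src 192.168.1.153

# or persistently, by creating /etc/sysconfig/network-scripts/route-enp5s2
# containing this single line:
192.168.1.0/24 dev enp5s2 src 192.168.1.153
```

Is that it, or did you mean something else?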
P.S.: Besides, this seems to be quite a standard setup. Judging by the Installation & Getting Started guide of OpenNebula, having two network interfaces is the norm, and one of them must be in a virtual bridge anyway. CentOS is also one of the few supported OSes, and quite a popular one (I'd say a go-to option for virtualization hosts, the other one being, maybe, Ubuntu Server). So I am amazed that I am the only one having this kind of issue, since I kept everything default and by the book.

By the way, I had the exact same issue with OpenNebula 4.12.1, and that was the reason I went with just one network card back then; I didn't have the time to investigate. Now I am setting up a brand-new server with the latest OpenNebula, and this time I have decided to track the issue down. I wasn't sure where to ask - here, on some CentOS-related forum, or maybe on Stack Overflow - so I decided to start here first. The reason is that if I asked elsewhere, I would have to explain why one of the two interfaces is added to a virtual bridge (in the normal setup both network cards work fine). Asking here, I don't have to explain that, because this is what the documentation suggests.
No, I don’t. Please note that the server is accessible on both network interfaces WHEN eno1 IS TAKEN OUT OF THE BRIDGE br0 and the bridge is deleted, i.e. the default configuration you get when you just install CentOS with two network cards.
So, for example, the following scenario WORKS AS EXPECTED:
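no bridge at all, and eno1 carries the address directly. Again a sketch with the same placeholder address, just to illustrate the difference:

```
# /etc/sysconfig/network-scripts/ifcfg-eno1  (no BRIDGE= line; ifcfg-br0 deleted)
DEVICE=eno1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.24.1.10
NETMASK=255.255.255.0
GATEWAY=10.24.1.1

# ifcfg-enp5s2 stays exactly as shown earlier
```

In this configuration I can reach the server on both interfaces.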
Could you post the output of ip route list and ip -d link list when the two interfaces are raw and when the bridge is configured? You said that you can reach the node via the second interface when the bridge is activated…
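That is, run these in both states and paste the output:

```
ip route list
ip -d link list
```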
When you bring br0 up, the default route is replaced with one via 10.24.1.1 (br0), and you can connect only to 192.168.1.153 via enp5s2.
In the working state, traffic goes out via the default route through 192.168.1.200 (enp5s2).
There is nothing suspicious besides the doubled default route, so the issue could be on the other end - in the router or the workstation. Can you check the router for suspicious packets? Is rp_filter enabled on the node (though that should also interfere with the claimed working state)? Etc…
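For the rp_filter part, something like this shows the current settings on the node (0 = off, 1 = strict reverse-path filtering, 2 = loose; the kernel applies the maximum of the “all” value and the per-interface value):

```
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.eno1.rp_filter
sysctl net.ipv4.conf.enp5s2.rp_filter
sysctl net.ipv4.conf.br0.rp_filter
```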
> When you bring br0 up, the default route is replaced with one via 10.24.1.1 (br0), and you can connect only to 192.168.1.153 via enp5s2.
Yes.
> In the working state, traffic goes out via the default route through 192.168.1.200 (enp5s2).
I guess so.
> Can you check the router for suspicious packets? Is rp_filter enabled?
I would have to understand what rp_filter is in the first place, but yes, I will check that. By the way, I forgot to mention that in both cases reverse access IS possible, i.e. I can ssh FROM the node TO my workstation in both scenarios.