How to segregate service network from internet access for VMs

Hi.
I am new to OpenNebula and there is one networking concept I have not yet grasped.

Here is my physical architecture:

  • 1 controller node
  • 2 hypervisors

Those three machines are connected through a private network on eth1. I can create an OVS bridge, say br1.
Each of these machines is also connected to the internet through eth0. I can create a bridge named br0, for example.
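For reference, a minimal sketch of creating those two OVS bridges on each machine (assuming Open vSwitch is installed; the interface names eth0/eth1 and bridge names br0/br1 are the ones from this post, and you would still need to move each host's IP configuration onto the bridge):

```shell
# Private service-network bridge on eth1 (sketch, names assumed from the post)
ovs-vsctl add-br br1
ovs-vsctl add-port br1 eth1

# Public / internet-facing bridge on eth0
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# Verify the resulting layout
ovs-vsctl show
```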

Then when I add a node, how do I tell the controller that it should connect through the private network for service tasks, and how do I tell the nodes that they should reach the internet through eth0?

Thank you in advance for pointing me in the right direction! I look forward to getting my hands dirty :slight_smile:

I think you need to define as many vnets as you will use.

For example, one vnet for private traffic:

onevnet show 18

VIRTUAL NETWORK 18 INFORMATION
ID             : 18
NAME           : Default-Private
USER           : user
GROUP          : users
CLUSTER        : -
BRIDGE         : virbrDEVEL
VLAN           : No
USED LEASES    : 26

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : u--

VIRTUAL NETWORK TEMPLATE
BRIDGE="virbrDEVEL"
BROADCAST="10.38.2.255"
DESCRIPTION="Default net for testing "
DNS="xx.xx.xx.xx"
GATEWAY="10.38.0.1"
NETMASK="255.255.0.0"
NETWORK="10.38.2.0"
NETWORK_MASK="255.255.0.0"
PHYDEV=""
SECURITY_GROUPS="0"
VLAN="NO"
VLAN_ID=""

And one vnet for public traffic:

onevnet show 11
VIRTUAL NETWORK 11 INFORMATION
ID             : 11
NAME           : CloudPyme
USER           : user
GROUP          : users
CLUSTER        : production
BRIDGE         : virbrPUBLIC
VLAN           : No
USED LEASES    : 5

After that, on the dom0 you must define the bridges (in this case, virbrPUBLIC and virbrDEVEL).

Sample from our dom0:

bridge name    bridge id          STP enabled    interfaces
virbrPRIVATE   8000.d8x3x5ff38ax  no             eth1
virbrPUBLIC    8000.d8x3x5ff38ax  no             eth1.35

As you can see, our PRIVATE bridge goes through eth1 and our public one through the VLAN sub-interface eth1.35.
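In case it helps, a hedged sketch of how such a layout could be built with plain Linux bridges and iproute2 (the names virbrPRIVATE, virbrPUBLIC, and VLAN ID 35 are taken from the output above; this is not necessarily how our dom0 was originally configured):

```shell
# VLAN 35 sub-interface on eth1, used for public traffic
ip link add link eth1 name eth1.35 type vlan id 35

# Private bridge enslaving eth1 directly
ip link add virbrPRIVATE type bridge
ip link set eth1 master virbrPRIVATE

# Public bridge enslaving the VLAN sub-interface
ip link add virbrPUBLIC type bridge
ip link set eth1.35 master virbrPUBLIC

# Bring everything up
ip link set eth1.35 up
ip link set virbrPRIVATE up
ip link set virbrPUBLIC up
```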

I hope that can help you :slight_smile:

Thanks a lot for your answer, @alfeijoo.
What I am trying to achieve is that the controller talks to each hypervisor through the private network.
For example, if I attach a node like this:
onehost create host01 --im kvm --vm kvm --net ovswitch

Then it doesn’t say which network it should use…
I am thinking… unless maybe I use an IP from the private network! Would that be the solution?

Hi.

I’m not sure what (and how) ovswitch works; here we usually use dummy.

First, you need to know that when adding a host to OpenNebula you do not need to define vnets, because when you create a host “host01”, OpenNebula will read its IP from /etc/hosts or simply ask the DNS service, and of course, if you defined it with a private IP it will try to connect using that.

For example, our dom0s have a private network (10G) and a public network, but /etc/hosts on the OpenNebula host just defines “host01” as 10.112.1.XXX.

But, as you know, inside the dom0 you have the public IP and a lot of bridges.

If you want to know which network OpenNebula uses to talk to “host01”, just do a ping host01 or, better, perform an ssh to host01 as oneadmin :smile:
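A quick sketch of that check, run on the frontend (host01 is the example name used in this thread):

```shell
# Resolve the name the same way OpenNebula will (/etc/hosts first, then DNS)
getent hosts host01

# Confirm passwordless SSH works over that path, as the oneadmin user
sudo -u oneadmin ssh host01 hostname
```

If getent returns the private IP, the frontend will use the private network for all service traffic to that host.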

Sure, that’s clear now.
I am implementing this right away. I’ll keep you posted.
Thank you so much.

hi vialcollet,

we tried the same a while ago, and found that the easiest way is to define internal names for your hosts (in /etc/hosts or DNS) and add them to your cluster using those internal names.

example:
hypervisor.01.example.com (on a public IP) and hypervisor.01.internal.example.com (on 10.0.0.1)
If you add the hypervisors using the internal name, all orchestration commands etc. are sent over the internal network, and you can use the public interface for VM traffic.
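To illustrate with the example names above (the public IP 203.0.113.10 is an assumption; 10.0.0.1 is from the post):

```shell
# /etc/hosts on the frontend (sketch):
#   203.0.113.10  hypervisor.01.example.com            # public, for VM traffic
#   10.0.0.1      hypervisor.01.internal.example.com   # private, for orchestration

# Register the host by its INTERNAL name, so monitoring, SSH,
# and image transfers all go over 10.0.0.1
onehost create hypervisor.01.internal.example.com --im kvm --vm kvm --net ovswitch
```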

hope this helps !

Hi @VURoland
Thanks a lot for your answer.
Yes that’s working fine like this.
Thanks again.