How to create an isolated multi-tenant network with public IP mapping

Hi everyone,

I just came across OpenNebula and I think it offers the simplicity I am searching for in my little project. I will set up a multi-node (2 to X) environment for our customers and projects.

My goal is to create a “provider network” which is reachable from everywhere in our building. Within the cluster I want to create a number of projects and teams. These projects and teams all need isolated private network addresses and do not need to talk to each other. I think for this I have to create a single Open vSwitch bridge with VXLAN on top, right?

So, my customers then just start instances and create their own “mini infrastructure”. If they need to connect directly to a VM, they should be able to grab an IP address from the “provider” network and map it to the VM. How can I realize this?

Currently it seems that I have to create three network bridges: a simple Linux bridge for the provider network, an Open vSwitch/VXLAN bridge to isolate the different tenant networks from each other and connect the virtual networks between the hypervisors, and a simple Linux bridge for my storage/management network.
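
For illustration, a rough host-side sketch of those three bridges could look like the following. All names are my own assumptions, and note that OpenNebula's `vxlan` and `ovswitch_vxlan` drivers can create the per-network bridges for you, so the OVS bridge may only be needed as an attachment point:

```bash
# Hypothetical host-side layout for the three networks described above.
# Bridge names (br-public, br-mgmt, ovsbr0) are illustrative.
ip link add br-public type bridge   # provider/public network
ip link set br-public up
ip link add br-mgmt type bridge     # storage/management network
ip link set br-mgmt up
ovs-vsctl add-br ovsbr0             # attachment point for tenant VXLANs
```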

Does anyone have suggestions on whether my setup will work?

To be a little more complete, here is a simple example chart of the infrastructure I want to build.

Customers should be able to create their own networks, and all VMs should be able to talk to each other, no matter which host they are on.

If this works fine and a customer wants to connect directly to a service on a VM, they should be able to “request” a public IP and map it to the service.

It is a little bit similar to what Docker/Kubernetes does…

Hello @Jorg_Bode

Maybe the IPAM driver can help you, please take a look!

Hi,

thank you for the idea with IPAM, but I think this is not what I want. ONE is already able to manage addresses in virtual networks; my task is to isolate the networks from each other without touching all the switches every time to implement new VLANs.

When that task is finished, I have to find a way to “map” addresses of the public network to a VM, just like setting up the DMZ zone on a home router…

Perhaps I simply do not understand how to isolate the “cloud” virtual networks. I want to minimize the workload when creating new tenants.

I don’t know if I’m understanding you correctly.

If you want to isolate two networks, you just have to create two different virtual networks in OpenNebula and then use different bridges on your physical hosts.
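
For illustration, a minimal sketch of those two virtual networks, assuming hypothetical bridges br-tenant-a and br-tenant-b already exist on every host (all names and addresses are mine):

```bash
# Two isolated virtual networks, each on its own host bridge.
cat > tenant-a.vnet <<'EOF'
NAME   = "tenant-a"
VN_MAD = "bridge"
BRIDGE = "br-tenant-a"
AR = [ TYPE = "IP4", IP = "10.0.0.2", SIZE = "250" ]
EOF
onevnet create tenant-a.vnet

cat > tenant-b.vnet <<'EOF'
NAME   = "tenant-b"
VN_MAD = "bridge"
BRIDGE = "br-tenant-b"
AR = [ TYPE = "IP4", IP = "10.0.1.2", SIZE = "250" ]
EOF
onevnet create tenant-b.vnet
```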

I do not want to isolate only two networks. I want to realize this:

1 network for public access. This is only a simple bridge.
1 network for server/management communication. This is only a simple bridge or a single NIC.
1 network for the different tenant networks. Here I have to separate all the networks that my tenants create from each other. This is the part I do not understand (see the sketch after this list).
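
To address the third point: the usual way to separate tenant networks without touching the physical switches is one VXLAN-backed virtual network per tenant, so each tenant gets its own VNI. A sketch, with PHYDEV and the addresses as my own assumptions:

```bash
# One VXLAN virtual network per tenant; OpenNebula derives the VNI from
# the (automatically assigned) VLAN_ID, so no switch reconfiguration is needed.
cat > tenant-blue.vnet <<'EOF'
NAME              = "tenant-blue"
VN_MAD            = "vxlan"
PHYDEV            = "eth1"  # underlay NIC carrying the VXLAN traffic (illustrative)
AUTOMATIC_VLAN_ID = "YES"   # let OpenNebula pick a unique ID/VNI per network
AR = [ TYPE = "IP4", IP = "192.168.0.2", SIZE = "200" ]
EOF
onevnet create tenant-blue.vnet
```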

The second part is creating the connectivity to “the outside world”. I think the “floating IP” concept that OpenStack has is not available? In that case I want to “map” an external IP through a virtual router to the VM. Is this possible?

Hello, as you can read in a prior post, we are facing the same scenario at the university.
Have a look there and let me know if you need some guidance about what to look for.

The setup is a complex one and you need to know that there is no recipe that will fit you perfectly. I have spent more than two months of nearly full-time work just figuring out how to do it.

https://forum.opennebula.io/t/iimplementation-of-openvirtualswitch-in-opennebula/7083

  1. Do you have access/config/admin rights to the switches and routers in your network?
  2. Which technologies do they support?

Another post you can read:
https://forum.opennebula.io/t/how-to-create-isolated-mutlitenand-network-with-public-ip-mapping

:smiley:

Hello,

We did something similar on our side. It is not yet automated, but I hope the following diagrams help to understand:

In this diagram, all the VMs at the bottom can have a NIC on the public VXLAN (VXLAN 1). They are all deployed in private VXLANs, each completely isolated from the other private networks thanks to the “Priv Buffered” VXLAN.
Each time a new VM is created in a private VXLAN, if you attach a NIC from the public network, an IP from a free lease is picked and used on your VM.
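
In OpenNebula CLI terms, attaching such a public NIC to a running VM could look like this sketch (the VM ID and network name are placeholders):

```bash
# Attach a NIC from the public network; OpenNebula picks a free lease
# from that network's address range automatically.
onevm nic-attach 42 --network "public-vxlan"
```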

A NAT GW provides internet access to VMs without a public IP.
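
A minimal sketch of what such a NAT gateway does, assuming eth0 is its public uplink and 192.168.0.0/24 is an illustrative tenant range:

```bash
# Enable forwarding and masquerade tenant traffic leaving via the public uplink.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
```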

I don’t know exactly if this kind of deployment is close enough to your needs, but it might help.

Regards


Thank you for your explanation. It seems that your setup is a little more complex than mine, but your network isolation is what I am looking for. As you describe, I need to create multiple tenants with their own private networks, with the option that one tenant uses the same private network address range as another tenant.

I think in my case (I am starting with 3 compute nodes) an SDN controller is not needed and I can set up the VXLAN Open vSwitch bridge manually with redundant connections between the nodes. Can you provide me with an example configuration of your network setup?

The next part is to “map” “public” IP addresses to internal VMs. I think this can be realized with port forwarding on the virtual router, right?

Hi @Jorg_Bode

Sorry for the delay.

> Thank you for your explanation. It seems that your setup is a little more complex than mine, but your network isolation is what I am looking for. As you describe, I need to create multiple tenants with their own private networks, with the option that one tenant uses the same private network address range as another tenant.

It is only possible to have the same private addresses twice if you have two separate VXLANs on two different VNETs in OpenNebula.
The setup I presented supports this: each customer has its own private network on a VXLAN.
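
For illustration, overlapping address ranges on two such networks might be defined like this (tenant names, PHYDEV, and the range are my own assumptions):

```bash
# Two tenants reusing the same 10.0.0.0/24 range on separate VXLANs.
for t in alpha beta; do
  cat > "tenant-$t.vnet" <<EOF
NAME              = "tenant-$t"
VN_MAD            = "vxlan"
PHYDEV            = "eth1"
AUTOMATIC_VLAN_ID = "YES"
AR = [ TYPE = "IP4", IP = "10.0.0.2", SIZE = "200" ]
EOF
  onevnet create "tenant-$t.vnet"
done
```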

> I think in my case (I am starting with 3 compute nodes) an SDN controller is not needed and I can set up the VXLAN Open vSwitch bridge manually with redundant connections between the nodes. Can you provide me with an example configuration of your network setup?

What do you mean by configuration?
The diagram presents the logic.
In terms of OpenNebula configuration, we configured the VXLAN driver in EVPN mode. This is done in the file /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf on the front-end.
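
For reference, a sketch of what that could look like. The exact keys may vary between OpenNebula versions, and EVPN additionally needs a BGP/FRR setup on the hosts, so treat this as illustrative:

```bash
# Illustrative EVPN-related settings; check the shipped file for the defaults.
cat >> /var/lib/one/remotes/etc/vnm/OpenNebulaNetwork.conf <<'EOF'
:vxlan_mode: evpn   # BGP EVPN control plane instead of multicast flooding
:vxlan_tep: dev     # take the tunnel endpoint address from PHYDEV
EOF
onehost sync --force   # push the updated remotes to the hypervisors
```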

> The next part is to “map” “public” IP addresses to internal VMs. I think this can be realized with port forwarding on the virtual router, right?

It looks like you want to do what AWS is doing. We do not do such NATing on our system, as it requires more systems. Basically port forwarding should work, but IMHO it might not be the best solution. I do not have a definitive suggestion for this at the moment.
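
If you do go the port-forwarding route, a hedged sketch of the DNAT rules on the virtual router could look like this (all addresses and ports are illustrative):

```bash
# Map TCP 443 on the public lease 203.0.113.10 to the tenant VM 192.168.0.50.
iptables -t nat -A PREROUTING  -d 203.0.113.10 -p tcp --dport 443 \
         -j DNAT --to-destination 192.168.0.50:443
# Ensure replies leave with the public address.
iptables -t nat -A POSTROUTING -s 192.168.0.50 -j SNAT --to-source 203.0.113.10
```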

Hope this helps

Hi!

Do you have any tutorial available for a complete setup of this functionality? It would be very helpful!

I would like to use it with the Windows Server 2019 IPAM role.

Thank you :slight_smile: