Is there anyone who deploy with success opennebula on Hetzner?
I cannot find any tutorials for opennebula networking with Hetzner.
I’m running OpenNebula, with the frontend based outside of Hetzner and the hosts at Hetzner. Basically it’s multiple virtual networks (each a single IPv4 /32 address, with the default GW being br0’s main IPv4/v6) that are linked to clusters, each containing a single Hetzner host. The host’s network has a bridge for all its IPs, as outlined in Hetzner’s wiki. Datastores are SSH-based, with the main (image) datastore located outside Hetzner in my case.
It depends on what you need to do. Basically, you set up br0 on your hosts with the default networking config of your servers, and, as @wusel suggests, you can use clusters to split out single hosts, so that each one gets a consistent networking configuration based on the additional IPs you purchase for your Hetzner hosts.
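As a sketch, the one-cluster-per-host split described above could be set up from the CLI roughly like this (cluster, host, and vnet names are placeholders, not from this thread):

```shell
# sketch: one cluster per Hetzner host, so each host-local vnet
# is only schedulable onto the host it belongs to
onecluster create hetzner-host1
onehost create host1.example.net --im kvm --vm kvm --cluster hetzner-host1
onecluster addvnet hetzner-host1 vnet-host1
```

The scheduler then only places VMs using `vnet-host1` onto `host1`, which is what makes the per-host additional-IP setup consistent.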
I know this may not answer your question, but a warm suggestion: you may have better results using OVH dedicated hardware, where you can have a vRack (transparent private networking, https://www.ovh.com/us/solutions/vrack/). There you set br0 on the vRack interface (usually eth1) and configure the gateway access and default server IP on the public interface (usually eth0).
Then you can purchase IP address blocks and link them to the vRack itself (so they are not tied to a single server, but to your whole infrastructure); this way you can live-migrate VMs.
I’ve been running NodeWeaver (an OpenNebula-based hyperconverged infrastructure product) clusters at OVH for years, using both dummy (br0) networking and Open vSwitch networking without any issue.
Note: I’m not affiliated with OVH in any way, but I also started out using Hetzner, faced a lot of difficulties getting the network to work the way I wanted, and switched to OVH.
To me it totally depends on the features I need vs. the price tag I can afford. It’s ~40€ for a Core-i7 4c/8t, 36 GB box there, plus 6× 1€ for 6 additional v4 IPs. That’s roughly half of where OVH starts, and as I’m running this for myself and in part for a non-profit club (Freifunk), Hetzner trumps OVH here. Yes, Hetzner’s network setup is somewhat peculiar, but if you treat the hosts just as edge routers and put several of them into a tunneled segment (l2tpeth or OpenVPN/PeerVPN), it’s a rather cheap solution for computing as well.
Configuring networking with Hetzner is a special topic ;-). I read a lot of tutorials about it; all VM solutions share this problem when running on Hetzner servers.
The best solution is provided by Hetzner itself:
“Hetzner KVM IPs - the easy way”
All IP addresses are routed directly to the server; if you are unlucky, the IP addresses of your host are not in the same subnet as the addresses of the additional subnet you ordered.
So the guest system needs a direct access link to the host machine. This is done with the pointopoint parameter in the network configuration.
This keyword enables the point-to-point mode of an interface, meaning that it is a direct link between two machines with nobody else listening on it. If the address argument is also given, set the protocol address of the other side of the link, just like the obsolete dstaddr keyword does. Otherwise, set or clear the IFF_POINTOPOINT flag for the interface.
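As a sketch, a guest’s /etc/network/interfaces using pointopoint might look like this (all addresses below are placeholders for illustration, not the ones from this thread):

```
# /etc/network/interfaces inside the guest (sketch; placeholder addresses)
auto eth0
iface eth0 inet static
    address 203.0.113.10      # additional IP that Hetzner routes to the host
    netmask 255.255.255.255   # /32: no on-link subnet
    pointopoint 198.51.100.1  # host's main IP is the other end of the link
    gateway 198.51.100.1      # default route via the host
```

The pointopoint line is what lets the guest reach its gateway at all, since the gateway is not inside the guest’s (/32) subnet.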
Totally agree with this.
I was just saying that if you have a little more than zero budget, you’ll save a lot of hassle and have a better infrastructure using OVH. Hetzner’s server-bidding pricing is dirt cheap and unbeatable, but the extra € at OVH are completely worth it IMHO.
To ease usage across hosts, I connect all my current Hetzner hosts via PeerVPN over the hosts’ main IPv4 addresses. Since internal traffic is free of charge and usually fast within Hetzner’s network, to me it’s the quickest option. As an alternative you could use e.g. L2TP and bird with OSPF across the fully meshed links; Open vSwitch could be an option as well. For my use case, PeerVPN is simply the most elegant solution.
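A minimal PeerVPN configuration for such a mesh might look like the following sketch, built from PeerVPN’s documented options; the network name, PSK, and peer address are placeholders:

```
# peervpn.conf (sketch; placeholder values)
networkname FKSNET
psk someSharedSecret
port 7000
interface fks0
# bootstrap against one existing node; the mesh discovers the rest
initpeers host2.example.net 7000
enabletunneling yes
```

Each host runs the same config (pointing initpeers at any node already in the mesh), and the up script below attaches the resulting fks0 interface to the bridge.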
Therefore I basically have a shared VLAN between my hosts. I use 100.100.101.0/24 for it; the last octet matches the last octet of the host’s main IPv4 address. Let’s look at the hosts:
```
auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    address 126.96.36.199
    netmask 255.255.255.224
    gateway 188.8.131.52
```
PeerVPN up script (fks0 is the PeerVPN interface):
```
#!/bin/bash
/sbin/ip link set up dev fks0
/sbin/brctl addif brfks fks0
/sbin/ip addr add 100.100.101.36/24 dev brfks
/sbin/ip route add 184.108.40.206 dev brfks
/sbin/ip route add 220.127.116.11 dev brfks
/sbin/ip route add 18.104.22.168 dev brfks
```
Looking at the OpenNebula network template:
```
VIRTUAL NETWORK 7 INFORMATION
ID          : 7
NAME        : FKS-host1
USER        : oneadmin
GROUP       : oneadmin
CLUSTERS    : 104,105,106
BRIDGE      : brfks
VN_MAD      : dummy
USED LEASES : 2
[…]
VIRTUAL NETWORK TEMPLATE
BRIDGE="brfks"
DESCRIPTION="FKS P2P via brfks at host1"
DNS="22.214.171.124 2001:4860:4860::8888"
GATEWAY="100.100.101.36"
GATEWAY6="2001:db8:1703::de3"
NETWORK_MASK="255.255.255.255"
PHYDEV=""
SECURITY_GROUPS="0"
VN_MAD="dummy"
[…]
```
I then add single IPv4 addresses as address ranges of size 1 in OpenNebula. Because of GATEWAY="100.100.101.36", the return path will go through our host, even when using a public v4 IP. (With the 100.100.101.0/24 subnet, one could set up internal/NATed services as a by-product as well.)
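Adding such a size-1 address range can be done from the CLI, e.g. against the vnet shown above (a sketch; the IP is a placeholder):

```shell
# add a single public IPv4 as an address range of size 1 to vnet 7
onevnet addar 7 --ip 203.0.113.10 --size 1
```

Each additional Hetzner IP then becomes one lease that OpenNebula can hand out to exactly one VM on that host.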
Deploying a VM then just works (note the 255.255.255.255: it’s a host route, which on Linux works even without the pointopoint keyword):
```
root@some-fks-vm:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
    address 126.96.36.199
    network 188.8.131.52
    netmask 255.255.255.255
    gateway 100.100.101.36

source /etc/network/interfaces.d/*.cfg
root@some-fks-vm:~# ip route show
default via 100.100.101.36 dev ens3 onlink
```
So, adding computing resources at Hetzner to an OpenNebula cloud is rather easy, even though Hetzner’s networking looks a bit strange at first.
Note that I have no use for br0 anymore; initially, the hosts ran plain KVM via virt-manager, and not all VMs have been migrated to OpenNebula yet.
Yes, it’s not as cool as having a “real” VLAN provided by one’s hoster. Then again, in the era of software-defined networking (SDN), you usually don’t get a “real” VLAN anymore anyway, and if I can cut costs with a bit of DIY, I’m all in.
(I use non-Hetzner v6 in a similar way there, but that’s a bit more complicated ;-))
Thank you all!
With a custom pointopoint parameter in the OpenNebula network config, it works!
I assume that with VMs based on LXD it is the same procedure?
Is it possible with OpenNebula to create an internal vnet with some VMs on a Hetzner host and expose some ports through the main host IP address?
This solution looks too complicated for my single Hetzner host with a few VMs (for now), but with multiple nodes it makes sense!
I wonder what the proper solution is when I want to expose some ports from the internal VM network through the main Hetzner host IP, also/or on all host nodes. Is it possible?
I doubt it; the point of VMs is that they are independent of each other. You could set up e.g. stunnel or similar for TCP, but that’s far more complicated than getting additional v4 IPs and using them on your VMs. For v6 you already have a /64, so plenty of addresses.
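For completeness, the “or similar” route for plain TCP forwarding would usually be an iptables DNAT rule on the host; a sketch (the public IP, internal VM address, and ports are all placeholders, and this trades the per-VM-IP approach above for host-level plumbing):

```
# on the host: forward TCP port 8080 of the public IP to a VM on the internal net
iptables -t nat -A PREROUTING -d 203.0.113.5 -p tcp --dport 8080 \
         -j DNAT --to-destination 100.100.101.50:80
# allow the forwarded traffic through
iptables -A FORWARD -d 100.100.101.50 -p tcp --dport 80 -j ACCEPT
# and make sure the host routes between its interfaces at all
sysctl -w net.ipv4.ip_forward=1
```

This exposes one VM service per host port, which is exactly the bookkeeping that additional per-VM IPs avoid.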