Network contextualization with Linux Container LXC images

Hello,

I’m running some tests with an LXC node because I want to run several containers to save memory and CPU as opposed to KVM VMs. After downloading some Linux Containers from the Marketplaces (CentOS-7, Alpine-3.16 and Debian_Sid), I have been able to run containers on my LXC node, but networking is not working in any of them. In Sunstone, each container gets an IP address from my virtual network, but on the LXC node, after running “lxc-attach $VM_ID”, I can see that the container has no IP.
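For reference, this is roughly how I check it from the node (OpenNebula names the containers one-<VM_ID>, so the exact name below is an assumption based on that):

root@nodo5:~# lxc-ls -f                            # list containers and their IPs
root@nodo5:~# lxc-attach -n one-$VM_ID -- ip addr  # eth0 shows no IPv4 address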

Where is the problem?

The network configuration on my LXC node is the same as on my KVM nodes: an eth0 interface with no IP, attached to a Linux bridge “br0” that holds the IP (see the netplan sketch after the config below). However, I have some doubts about this bridge, because after installing the “opennebula-node-lxc” package (my LXC node is running Ubuntu-20.04), an “lxcbr0” bridge gets created automatically. I have modified the configuration in /etc/lxc/default.conf to use “br0”:

root@nodo5:~# cat /etc/lxc/default.conf
lxc.net.0.type = veth
lxc.net.0.link = br0
#lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
#lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
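For reference, the “br0” bridge itself is defined with netplan on the node; a minimal sketch of that setup (interface names and addresses are just examples, adjust to your network):

root@nodo5:~# cat /etc/netplan/01-br0.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
  bridges:
    br0:
      interfaces: [eth0]
      addresses: [192.168.1.5/24]   # example address
      gateway4: 192.168.1.1         # example gateway
root@nodo5:~# netplan apply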

After seeing that networking still failed, I also modified /etc/default/lxc-net:

root@nodo5:/etc# cat default/lxc-net
# This file is auto-generated by lxc.postinst if it does not
# exist.  Customizations will not be overridden.
# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
# containers.  Set to "false" if you'll use virbr0 or another existing
# bridge, or mavlan to your host's NIC.
USE_LXC_BRIDGE="false"
#USE_LXC_BRIDGE="true"

# If you change the LXC_BRIDGE to something other than lxcbr0, then
# you will also need to update your /etc/lxc/default.conf as well as the
# configuration (/var/lib/lxc/<container>/config) for any containers
# already created using the default config to reflect the new bridge
# name.
# If you have the dnsmasq daemon installed, you'll also have to update
# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
LXC_BRIDGE="br0"
#LXC_BRIDGE="lxcbr0"
#LXC_ADDR="10.0.3.1"
#LXC_NETMASK="255.255.255.0"
#LXC_NETWORK="10.0.3.0/24"
#LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
#LXC_DHCP_MAX="253"
# Uncomment the next line if you'd like to use a conf-file for the lxcbr0
# dnsmasq.  For instance, you can use 'dhcp-host=mail1,10.0.3.100' to have
# container 'mail1' always get ip address 10.0.3.100.
#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

# Uncomment the next line if you want lxcbr0's dnsmasq to resolve the .lxc
# domain.  You can then add "server=/lxc/10.0.3.1' (or your actual $LXC_ADDR)
# to your system dnsmasq configuration file (normally /etc/dnsmasq.conf,
# or /etc/NetworkManager/dnsmasq.d/lxc.conf on systems that use NetworkManager).
# Once these changes are made, restart the lxc-net and network-manager services.
# 'container1.lxc' will then resolve on your host.
#LXC_DOMAIN="lxc"

Also, after some failed tests, I stopped the lxc-net daemon, and then the “lxcbr0” bridge disappeared.
However, my containers still don’t get an IP address…
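For reference, this is how I stopped it (lxc-net ships as a regular systemd unit on Ubuntu):

root@nodo5:~# systemctl stop lxc-net
root@nodo5:~# systemctl disable lxc-net   # keep it from recreating lxcbr0 on boot
root@nodo5:~# ip link show lxcbr0         # now fails: the bridge is gone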

Also, I have noticed that the containers are running as “root”. Is there any way to make them run as “oneadmin”?

Thanks.

@pczerny, @ahuertas

Hi,

After checking some configuration files, I have seen that the “one-context” RPM package wasn’t installed in the LXC CentOS-7 image downloaded from Apps (MarketPlaces). After running “lxc-attach $VM_ID”, copying the one-context RPM in via USB and configuring the IP manually, YUM downloaded all the dependencies. Then I rebooted the LXC container and, voilà, the network is working fine.
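The steps were roughly these (the RPM file name and mount path are just examples):

root@nodo5:~# lxc-attach -n one-$VM_ID
[root@centos7 /]# yum install -y /mnt/usb/one-context-*.el7.noarch.rpm   # yum resolves the dependencies
[root@centos7 /]# reboot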

Now… how can I modify the original CentOS-7-LXC container image and recreate it with the “one-context” RPM package already installed?

Thanks a lot!

Hello, if you ever get issues like these (images with context problems or images that fail to boot) when using marketplace images, there is a log inside the image that records pretty much the whole auto-contextualization process that runs when the image is downloaded. You can check that file by creating a VM and inspecting it from inside, or by manually mounting the image from the image datastore on the frontend filesystem.
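For example, mounting the image on the frontend would look something like this (assuming a raw image in a file-based datastore; the datastore ID, file name and log location are placeholders):

oneadmin@frontend:~$ sudo mount -o loop /var/lib/one/datastores/1/<IMAGE_FILE> /mnt
oneadmin@frontend:~$ ls /mnt/var/log/   # look for the contextualization log here
oneadmin@frontend:~$ sudo umount /mnt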

Hello @dclavijo,

I have reconfigured the LXC CentOS-7 image from the MarketPlace and installed the contextualization package, so now, when my LXC container boots, it gets an IP address. However, to connect to the container I need to log into the LXC host and then run “lxc-attach $VM_ID”, because what I see in the VNC tab is this:

so I can’t log into the container from the VNC tab (I have tried pressing “Ctrl+a” several times, but I don’t know whether the problem is the keyboard layout (Spanish) or something else).

Also, I have added “lxc/tty1” to the /etc/securetty file in the running container, but after rebooting it I still see the same message in the VNC tab, so I can’t work with my LXC container the way I work with KVM images (from the VNC tab using Guacamole).
I have also tried adding “1:2345:respawn:/sbin/agetty tty1 38400 linux” to /etc/inittab, but it doesn’t work (which makes sense in hindsight: CentOS 7 uses systemd, so /etc/inittab is ignored).
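For what it’s worth, the systemd equivalent of that inittab line would be something like this inside the container (these are the stock systemd getty units; whether they attach to the console LXC exposes is an assumption on my part):

[root@centos7 /]# systemctl enable --now console-getty.service       # getty on /dev/console
[root@centos7 /]# systemctl enable --now container-getty@1.service   # getty on container tty1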

How can I get the console working in the VNC tab?

Thanks.

So, thanks for pointing this out. This is actually a known issue from back when the LXD driver was implemented. It only seems to affect CentOS containers, and to work around it you need to change the command that the VNC server runs to, for example, bash; you’ll then get a login-less bash shell.
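A sketch of what that looks like in the VM template, via the COMMAND attribute of the GRAPHICS section (check the LXC/LXD driver docs for your version, since the exact attribute may vary):

GRAPHICS = [
  TYPE    = "VNC",
  LISTEN  = "0.0.0.0",
  COMMAND = "bash" ]   # run bash instead of the default login command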