Securing a new install

hi,

I am doing an install to test OpenNebula. After following the first steps I have Sunstone running well.

One thing that bothers me is that a lot of services listen on the internet with no filtering, and I wanted to know what the risks are and whether there is any guide on what can be blocked/filtered, etc…

tcp 0 0 0.0.0.0:2633 0.0.0.0:* LISTEN 4123/oned
tcp 0 0 0.0.0.0:29876 0.0.0.0:* LISTEN 4442/python2
udp 0 0 0.0.0.0:4124 0.0.0.0:* 4221/collectd
tcp 0 0 0.0.0.0:9869 0.0.0.0:* LISTEN 4423/ruby

I found this one: https://www.youtube.com/watch?v=j7i_RsjFjC4

I am listening to it right now and hope it gives some advice on this.

How do you secure your OpenNebula installs when your machines are hosted in datacenters like Ikoula/OVH and other “public” hosting companies?

best regards,
Ghislain.

The IP address to bind the sockets to can be defined in the configuration files:

oned.conf for oned and collectd (2633, 4124), sunstone.conf for the UI and the VNC proxy (9869, 29876).
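
A minimal sketch of what binding those services away from 0.0.0.0 could look like; the key names (LISTEN_ADDRESS, :host:) and file paths below assume a standard packaged install, so check them against the configuration reference for your version:

    # /etc/one/oned.conf -- bind the XML-RPC endpoint (2633) to loopback only
    LISTEN_ADDRESS = "127.0.0.1"

    # /etc/one/sunstone-server.conf -- bind the web UI (9869) to a private address
    :host: 192.168.0.10
    :port: 9869

The collectd listen address is likewise configured in oned.conf, in the collectd monitoring driver section.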

You may want to proxy the services exposed to the Internet through an SSL proxy.

OK, so:

  • Sunstone is the web GUI, so limit it to the IPs of the admins and users of the GUI
  • oned is … ? Searching for oned in the docs returns nothing, so I don't know what access it should have :slight_smile:
  • collectd: seems to be monitoring, so it should be limited to:
    guests and hosts => frontend, and
    the frontend => all hosts/guests
  • VNC proxy: should be the same as collectd, I guess (a rough firewall sketch along these lines is below)
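
A minimal sketch of such filtering with iptables on the frontend; the admin address (203.0.113.10) and the private hypervisor network (192.168.0.0/24) are placeholders for whatever applies to your setup:

    # Sunstone web GUI (9869/tcp): only the admin/user workstation
    iptables -A INPUT -p tcp --dport 9869 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9869 -j DROP

    # collectd monitoring (4124/udp): only the private network of the hosts
    iptables -A INPUT -p udp --dport 4124 -s 192.168.0.0/24 -j ACCEPT
    iptables -A INPUT -p udp --dport 4124 -j DROP

    # VNC proxy (29876/tcp): same sources as Sunstone
    iptables -A INPUT -p tcp --dport 29876 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 29876 -j DROP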

I am speaking from a simple install with all of OpenNebula on the same machine plus one test host, all on Ubuntu 18.04. Does that sound right?

regards,
Ghislain

oned's port (2633) is the XML-RPC endpoint; all components communicate with the OpenNebula core through this port, including the CLI and Sunstone. It could be bound to localhost in most cases.
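
To make the XML-RPC part concrete, here is a minimal client sketch assuming oned is reachable on localhost:2633 and oneadmin's credentials are in the usual /var/lib/one/.one/one_auth; the one.vmpool.info call and its arguments should be checked against the XML-RPC reference for your version:

    import xmlrpc.client

    # Session string as stored in one_auth ("user:password")
    with open("/var/lib/one/.one/one_auth") as f:
        session = f.read().strip()

    # oned's XML-RPC endpoint, here bound to loopback only
    server = xmlrpc.client.ServerProxy("http://127.0.0.1:2633/RPC2")

    # List all VMs: filter -2 (all), id range -1..-1 (any), state -1 (any but DONE)
    ok, result, *_ = server.one.vmpool.info(session, -2, -1, -1, -1)
    print("success:", ok)
    print(str(result)[:200])  # XML document describing the VM pool on success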

collectd talks to your hypervisors; it can listen on the private IP used to reach them.

vnc-proxy: same as Sunstone, if you want to access VMs through VNC.

Thanks for that information.

One more thing about security. The controller has SSH access to the hosts, which seems quite normal. The question I have is that, if I am not mistaken, all the hosts also have SSH access to the controller via the oneadmin account.

If I am not mistaken, that means that if any host is compromised, it will be able to compromise all the other hosts “via” the controller and destroy a lot of things on the controller, including sneaky things like embedding malware in the guest images.

Am I wrong here?

best regards,
Ghislain.

Yes, you are right: host-to-host SSH is required for some operations, so usually the oneadmin credentials are shared. This means that if the oneadmin account on a host is compromised, it could potentially log on to the frontend and perform any operation (as oneadmin). This includes, but is not limited to: altering the VLAN_ID of networks, QoS parameters, full DB access, changing VM images…

BTW, if you do not want to share the oneadmin credentials, this can be done (to some extent) as long as you do not need certain features, live migration I think…

This frightens me.

We have seen KVM escapes in the past, and recently Docker and LXD escapes, so this means the whole default OpenNebula install is incredibly fragile against any escape (or direct compromise of any node).

Isn't it a problem to have such a vulnerable architecture in a 2019 system that uses a central controller? You have a system with a central controller, and that central controller is not doing any control or security validation? :frowning:

It could open point-to-point temporary channels limited to the operation needed (create an SSH account with a forced command, or a temporary rsync share with limited access) and then close them when done. I don't know, I am just puzzled by this.
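
To illustrate the forced-command idea (this is not how OpenNebula works out of the box, just a hypothetical hardening sketch), an authorized_keys entry on the frontend can pin a host's key to a single command and disable forwarding; the wrapper script name here is made up:

    # /var/lib/one/.ssh/authorized_keys on the frontend (hypothetical)
    command="/var/lib/one/bin/restricted-image-copy",no-port-forwarding,no-agent-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA...key... oneadmin@host1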

Ghislain.

It is doing that, based on SSH credentials. If the credentials get compromised you are in trouble, as in any other system… As I said, you can opt not to share the credentials.

hi Ruben,

First I wanted to thank all the devs who take the time to answer a noob question here that perhaps does not make sense. I am just throwing out what bothers me, so please do not take anything as an attack or anything other than a desire to understand how things work. It's just that some signs set off my spider senses, so I ask bluntly :slight_smile: and there is no scientific proof that spider senses exist, so I am surely just having mental issues.

So I agree there is perhaps a way to tweak the scripts so as not to have a two-way open SSH channel, but if OpenNebula is basically designed so that every host has complete control of the cluster via the main daemon running on a controller, then removing this could lead to breakage at several levels and unseen consequences for the people doing it, especially if they are noobs like me in this ecosystem, and that at each upgrade of the packages.

This is a little like the scp issue we had in 5.8rc: the scp method did not work, which shows that the test infrastructure and all the devs use something else like Ceph or iSCSI, or they would surely have caught it before. This perhaps means this is not the way the OpenNebula core team expects it to work IRL. It is always better to use a tool the way the devs intended, to avoid stumbling into corner cases that nobody uses.

So I prefer to learn from the devs the way they intended it to be used, so I don't fall into a corner case that will destroy my servers 6 months from now :slight_smile:

The way I see it, the OpenNebula cluster expects two-way access. It seems the host => controller direction is only for file-copy purposes. I don't know yet whether it uses a plugin-type mechanism for the transfer method, as I have just started, but I will have a look, and if it does I could try to write my own as an exercise to learn OpenNebula's internals.

I'll google it for myself then :stuck_out_tongue:

Ghislain.

hi Ghislain.

No problem at all! I think we all benefit from the interaction of the community. And you are right, this can be seen as a problem. I was trying to point out that you don't need to share the credentials across the nodes.

Sometimes we have to balance ease of use against security/performance. Most of the time we've opted for the former, so it is easier to play with a technology that is difficult by nature, as it involves a lot of concepts from networking, virtualization and storage. We usually leave the tricky parts to the documentation, although we don't always succeed…

I've double-checked the documentation, and the secure configuration is spelled out; quoting from here:

If an extra layer of security is needed, it's possible to keep the private key just on the frontend node instead of copying it to all the hypervisors. In this fashion the oneadmin user in the hypervisors won't be able to access other hypervisors. This is achieved by modifying /var/lib/one/.ssh/config in the front-end and adding the ForwardAgent option to the hypervisor hosts for forwarding the key: …

This way a compromised node will not automatically grant access to the rest of the nodes.
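
A minimal sketch of what that /var/lib/one/.ssh/config on the front-end could look like (the host names are placeholders; see the documentation for the exact snippet):

    Host host01 host02
        ForwardAgent yes

The private key then stays on the front-end, loaded into an ssh-agent, and is only forwarded for the duration of each connection.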

About the 5.8rc scp issue: it was caused by an update in the OpenSSH client, where using ‘.’ in the paths was reported as insecure. So even if you are using a tool as intended, the tools evolve and you have to adapt :wink:

Anyway, I just wanted to thank you for your time testing OpenNebula and sharing your concerns and doubts with all of us. It's certainly the best way for us, as a community, to improve and build a better OpenNebula.

Thanks

Ruben

Thanks Ruben, I will test those!

I have to try it, but I wonder whether a host that changes oneadmin's .sshrc could not abuse the forwarded agent against the controller calling that host the next time it connects. I will test this when I have the time.

Hi, you should not leave any of these ports open to the public. For inter-node communication, use a private network. For Sunstone and XML-RPC, use nginx with HTTPS.
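
A minimal sketch of such an nginx HTTPS front for Sunstone, assuming Sunstone is bound to 127.0.0.1:9869; the server name and certificate paths are placeholders:

    server {
        listen 443 ssl;
        server_name cloud.example.com;

        ssl_certificate     /etc/ssl/certs/cloud.example.com.pem;
        ssl_certificate_key /etc/ssl/private/cloud.example.com.key;

        location / {
            # Sunstone listening only on loopback
            proxy_pass http://127.0.0.1:9869;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }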