I would like to ask for a suggestion.
I only have one big server instead of multiple small servers.
Is it OK to install everything on one server for production?
Intel Xeon, 96 cores after hyperthreading
7 TB RAID 10
There could be up to 50 users, with each VM running a minimum of 2 vCPUs.
Hi Fazli, I believe that, as many “infrastructure people” reading this might tell you, this will work, but what happens when it fails? A single failed memory module (out of all the sticks needed to reach 512 GB) will reboot your server.
You can install a hypervisor and then create the OpenNebula instances on top (let’s assume 1x ONE front-end and 3x KVM hosts). You will then have to create those 4 VM instances as described.
To create VM instances inside those KVM hosts (which are now themselves VMs), you need to make sure the CPU supports L2 nested virtualization.
Once again, it can be done, and since you reach 96 cores with hyperthreading, I believe you are using a 4-socket machine with 12-core CPUs, most probably E5-2600 v2 to v4 parts, which support the nested (L2) VM feature described.
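Before building the nested setup, it may be worth verifying that the CPU exposes hardware virtualization and that the `kvm_intel` module allows nested guests. A minimal sketch (the `check_nested` helper and the sample flag strings are illustrative, not from this thread; assumes Linux with an Intel CPU):

```shell
#!/bin/sh
# Sketch only: decide whether a host can run L2 (nested) guests.

check_nested() {
    # $1: a CPU "flags" line (e.g. from /proc/cpuinfo)
    # $2: the value of /sys/module/kvm_intel/parameters/nested ('Y' or '1' = enabled)
    if echo "$1" | grep -Eq '(vmx|svm)' && { [ "$2" = "Y" ] || [ "$2" = "1" ]; }; then
        echo "nested OK"
    else
        echo "nested NOT available"
    fi
}

# On a real host you would call:
#   check_nested "$(grep -m1 '^flags' /proc/cpuinfo)" \
#                "$(cat /sys/module/kvm_intel/parameters/nested)"
check_nested "flags : fpu vmx sse2" "Y"   # prints: nested OK
check_nested "flags : fpu sse2" "Y"       # prints: nested NOT available
```

If nesting is off, it can usually be enabled by setting `options kvm_intel nested=1` in a file under `/etc/modprobe.d/` and reloading the module.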
Bottom line from my end: my first OpenNebula PoC was built this way and it worked, but you will have some interesting moments with networking.
Thanks for the reply.
Really appreciate it.
I will investigate this further.
Hello @fazli and @luke.camilleri, I have it running on one server in production. It is a smaller config running not-so-important services, but I think there is no problem with running everything on one server.
- I don’t recommend nested virtualization: you lose performance and the setup is more complicated.
- Loss of a memory module: I experienced this once on a Fujitsu server, and nothing happened; the failed module was simply subtracted from the total memory. In the worst case, the Out Of Memory killer kills some process, e.g. a VM :). You can prevent this by installing twice the memory capacity and using mirroring. On the other hand, when you are using a single server, you don’t care much about availability anyway. In my experience and that of other people, a single server often has better availability than a complex clustered setup.
One more thing: after a power failure, VMs don’t start automatically at boot unless you set up a VM hook in OpenNebula to auto-start VMs in the POWEROFF state.
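For reference, a sketch of the kind of hook described above, assuming the OpenNebula 5.x `VM_HOOK` syntax in `/etc/one/oned.conf`; the script name is illustrative, and the script itself would essentially run `onevm resume` on the VM ID it receives:

```
VM_HOOK = [
    name      = "autostart-after-poweroff",
    on        = "CUSTOM",
    state     = "POWEROFF",
    lcm_state = "LCM_INIT",
    command   = "autostart-vm.sh",   # illustrative script calling 'onevm resume'
    arguments = "$ID" ]
```

After editing `oned.conf`, restart the `opennebula` service for the hook to take effect.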
Hello @feldsam. Noted on that. I will try to find a small server to act as the front-end in the meantime. Thanks for the suggestion.
It’s always better to have two servers instead of just one for hosting VMs, because when you have an outage/upgrade/backup/etc., you can migrate the VMs away (flush a host). Way better for your users.
I can also confirm that OpenNebula has a small footprint, so you can easily run it on your hypervisor; it’s not like OpenStack. And if you stop OpenNebula, the VMs keep running, so it’s not even a big deal to have only one OpenNebula instance.
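To make the “flush a host” step concrete, a hedged sketch with the ONE CLI (the host name `host0` is illustrative; check `onehost list` for your own hosts):

```shell
# Drain host0 before maintenance: migrate its running VMs to other enabled hosts
onehost flush host0

# Keep the scheduler from placing new VMs on it while you work
onehost disable host0

# When maintenance is done, bring it back
onehost enable host0
```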
However, how do you justify the low utilization of a backup server? I find it very hard to convince management, as they calculate utilization across all servers as a whole.
The other server would not be a backup server; just spread the load as equally as possible across both nodes, and also split similar services, e.g. one DNS server or one AD server on each node in case you have two of each.
If one of the nodes fails, the service will be only partially down, or there may be no downtime at all if you plan for redundancy at the application level.
Yes, I didn’t say it was meant to be a backup server. It’s active-active, and if you keep your allocation below 50%, then if one host fails you can migrate all the VMs to the second host. I think you get the point.
And sometimes it can be less expensive to buy two medium servers than one very big one (especially when you start to hit 500+ GB of RAM, but this may depend on your provider/country/market/etc.).
That is the problem in my case. I cannot justify utilization below 50%. They will only take note when overall utilization hits 80%.
@madko However, I agree with the idea. For a small company, it is better to purchase two medium servers.
@luke.camilleri It seems that two nodes are the minimum for a production-level deployment.
So they never let you buy spare disks, spare servers, etc.? Don’t they have risk management? But I can understand.
You can choose to allocate 80% on both servers; if there is a problem, you accept overallocation and perhaps a degraded service state, or you just pick and migrate only the critical VMs.
Yes, you can run your production workload on OpenNebula. Personally, I have had a good experience with the KVM hypervisor in OpenNebula, but you should install the older version 5.4.0/5.4.1, since the latest version has many bugs (network interfaces, disk resize, the KVM service, VM password reset, etc.).