Thanks for taking an interest in our appliance!
As for your questions:
Multiple versions of Kubernetes should not be a problem - the marketplace could host several instances of the Kubernetes appliance, differing by version. We would just have to design a sensible update scheme. For example, what if some version of Kubernetes requires a particular version of a container runtime (e.g. docker 17.*) which is not reasonably installable on the current, updated system? Should we leave the system old and insecure, or rather discard that Kubernetes version? We definitely should improve the update scheme of appliances in general, but supporting multiple versions of the same appliance would end up on a best-effort basis anyhow. This could be simplified in the future, once Kubernetes and cri-o are locked in step with each other - but it is not there yet (AFAIK), cri-o is not production-ready (IMHO), and you would anyway need some docker-compliant CLI to debug the containers, which cri-o lacks (except for its sister project podman - which is not production-ready either…). On top of that, there could still be other dependencies which may not be satisfied, and my worry stands: should we leave an insecure old system in the marketplace, or drop the obsolete version completely?
Regarding CNCF certification - I don’t have much knowledge about the process, but if you have this in mind: https://github.com/cncf/k8s-conformance/#certified-kubernetes then I think we could attempt it.
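For reference, the conformance run linked above is driven by the Sonobuoy tool. A rough sketch of what it would involve (assuming a reachable cluster in KUBECONFIG and the sonobuoy CLI installed - none of this is wired into the appliance today) looks like:

```shell
# Run the full certified-conformance suite against the current cluster.
sonobuoy run --mode=certified-conformance --wait

# Fetch the results tarball once the run has finished.
results=$(sonobuoy retrieve)

# Inspect the outcome; the submission PR to cncf/k8s-conformance needs
# e2e.log and junit_01.xml from this tarball, plus a PRODUCT.yaml and README.md.
sonobuoy results "$results"

# Clean up the sonobuoy namespace afterwards.
sonobuoy delete --wait
```

The actual submission is then a pull request against the cncf/k8s-conformance repository with those files.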
Your last point is currently out of scope for the appliance - doing the update (as far as I know) is tricky, and doing it while preserving the user data is even more so. The idea behind the current appliance is to provide as simple a deployment as possible; from that point on, the life of the appliance is ultimately in the hands of the user.
And because I am not aware of a way to upgrade Kubernetes without any downtime, you can just follow the procedure below - the result will be the same (possibly with a much shorter downtime) and in the end it will be much safer.
I would recommend doing the upgrade by switching over to another cluster:
- deploy a new cluster with the desired version next to the old one
- deploy the applications from the old cluster onto the new one
- freeze and migrate the latest data from the old one
- switch traffic to the new cluster
- destroy the old cluster
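The steps above can be sketched with standard tooling. Everything concrete here - the context names, velero as the data-migration tool, the manifest path - is an assumption for illustration, not something the appliance provides:

```shell
# 1. With the new cluster already deployed next to the old one, point a
#    second kubeconfig context at it and deploy the applications there.
kubectl --context=new-cluster apply -f ./app-manifests/

# 2. Freeze writes on the old cluster (here: scale the app down).
kubectl --context=old-cluster scale deployment/my-app --replicas=0

# 3. Migrate the latest data, e.g. with velero: back up on the old
#    cluster, restore on the new one.
velero --kubeconfig=old.kubeconfig backup create final-state --wait
velero --kubeconfig=new.kubeconfig restore create --from-backup final-state --wait

# 4. Switch traffic: repoint DNS / the external load balancer at the
#    new cluster's ingress, then verify it answers.
kubectl --context=new-cluster get ingress

# 5. Destroy the old cluster only once the new one is confirmed healthy.
```

The freeze in step 2 is what bounds the downtime - everything before it can happen while the old cluster still serves traffic.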
There may be other schemes or procedures which do a better job. Of course, the appliance and/or OpenNebula could attempt to simulate and automate such a process, but as I said, it is currently out of scope.
Nevertheless, the Kubernetes appliance is still a WIP, and I personally plan to revisit and improve it. So maybe in the future the deployment and management will be smarter and may even provide the features you are requesting, but I cannot commit to any timeline.
I hope my reply helps to clear up some of your questions.