Hi all, I’ve been trying to understand how to make OpenNebula work with our Docker build pipeline. Our repository isn’t public, so I have to do a docker login to pull the images down locally, but that doesn’t make them available in OpenNebula. We’re on the latest stable that just came out, 6.8.
I’ve read the docs multiple times, but I haven’t been able to work out what I’m doing wrong when using oneimage create in a terminal with our repo images, nor have I been able to add our repo to OpenNebula and have it just… make our repo images available the way the standard Docker Hub repository is.
Can anyone explain what I’m doing wrong? Better yet, can someone explain how to make this example repo, appflowyio, available in OpenNebula so it can be managed through the interface? Thanks.
I’ve read through Appliances and Marketplaces — OpenNebula 6.8.0 documentation multiple times and I really have no idea whether there is any way to do this. I’m not running my own registry; I’m just using Docker Hub.
Docker repository for appflowyio
Versions of the related components and OS (frontend, hypervisors, VMs):
- Debian 11 VM: minione:6.8 --frontend
- Debian 11 VM: opennebula-lxc latest stable from repo
- Debian 11 VM: opennebula-kvm latest stable from repo
Steps to reproduce:
Try to add a public repo, or add a public image that has already been pulled to the frontend.
I’ll use the appflowyio repo as an example, since it is a project I’ve been running elsewhere until I can consolidate.
root@debian:~# docker pull appflowyio/appflowy_client
Using default tag: latest
latest: Pulling from appflowyio/appflowy_client
cc3a38616e4b: Pull complete
9d96c5160b75: Pull complete
2789a99a9a2e: Pull complete
5777b20efffc: Pull complete
f5d7413d3ccb: Pull complete
b1e806158c61: Pull complete
92ed9a8b920c: Pull complete
e614991f780d: Pull complete
b16658aec5dd: Pull complete
Digest: sha256:d16fdde63f45a3c1c9b69c94f5c8d4932dbe25c0e97c3faf960f2fbc9308bfc7
Status: Downloaded newer image for appflowyio/appflowy_client:latest
docker.io/appflowyio/appflowy_client:latest
REPOSITORY                   TAG      IMAGE ID       CREATED         SIZE
appflowyio/appflowy_client   latest   5a672fa28498   20 months ago   1.2GB
root@debian:~# oneimage create --name appflowy_client --path 'docker://appflowy_client?size=640' --datastore 1
ID: 8
root@debian:~# oneimage create --name appflowy_client2 --path 'docker://appflowyio/appflowy_client?size=640' --datastore 1
ID: 9
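For what it’s worth, here is how I understand the source URL should be built: as far as I can tell from the docs, the `docker://` path takes the full `namespace/image` name plus a `size` parameter (in MB), and the whole thing has to be quoted so the shell doesn’t eat everything after the `?`. The helper below is just my own sanity-check sketch; `make_docker_url` is not an OpenNebula command.

```shell
# Hypothetical helper: build a quoted docker:// source URL for
# `oneimage create --path`. Without quoting, the shell treats `?`
# as a glob character and silently drops everything after an `&`.
make_docker_url() {
  local image="$1" size_mb="$2"
  printf 'docker://%s?size=%s\n' "$image" "$size_mb"
}

make_docker_url "appflowyio/appflowy_client" 640
# would then be used as:
#   oneimage create --name appflowy_client \
#     --path "$(make_docker_url appflowyio/appflowy_client 640)" --datastore 1
```

As shown in the transcript above, I did quote the URL in my actual commands, so I don’t think quoting is the problem here.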
Current results: the marketplaces show up empty, and the import fails on the server with “no such image” errors.
Errors on the server:
Thu Oct 26 12:33:40 2023 : Error copying image in the datastore:
INFO: cp: Copying local image docker://appflowy_client?size=640 to the image repository
ERROR: cp: Command "set -e -o pipefail; /var/lib/one/remotes/datastore/fs/../downloader.sh 'docker://appflowy_client?size=640' '/var/lib/one//datastores/1/f7b425197ecd77f10eba999ae5f853be'" failed:
Unable to find image 'appflowy_client:latest' locally
docker: Error response from daemon: pull access denied for appflowy_client, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Error response from daemon: No such image: appflowy_client:latest
Error response from daemon: No such image: appflowy_client:latest
Error copying
Error copying docker://appflowy_client?size=640 to /var/lib/one//datastores/1/f7b425197ecd77f10eba999ae5f853be
Thu Oct 26 12:33:52 2023 : Error copying image in the datastore:
INFO: cp: Copying local image docker://appflowyio/appflowy_client?size=640 to the image repository
ERROR: cp: Command "set -e -o pipefail; /var/lib/one/remotes/datastore/fs/../downloader.sh 'docker://appflowyio/appflowy_client?size=640' '/var/lib/one//datastores/1/90c78a46b7eb90fe2eeee20cfbbbcf11'" failed:
Error response from daemon: No such image: appflowyio:latest
Error response from daemon: No such image: appflowyio:latest
Error copying
Error copying docker://appflowyio/appflowy_client?size=640 to /var/lib/one//datastores/1/90c78a46b7eb90fe2eeee20cfbbbcf11
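One thing that strikes me in that second error: it complains about `appflowyio:latest`, i.e. only the namespace half of the image name. My guess, and this is pure speculation from the log (the variable names below are mine, not from downloader.sh), is that the image name is being split on `/` somewhere and only the first segment is kept:

```shell
# Speculative illustration only: if the downloader kept just the first
# path segment of the docker:// source, it would try to pull "appflowyio",
# which is exactly what the "No such image: appflowyio:latest" error shows.
url='docker://appflowyio/appflowy_client?size=640'
name="${url#docker://}"   # strip the scheme -> appflowyio/appflowy_client?size=640
first="${name%%/*}"       # keep text before the first '/' -> appflowyio
echo "$first"
```

If that’s roughly what is happening, is there a supported way to pass a namespaced `user/image` name to oneimage create?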
Expected results: a way to have both public-repo and custom-repo Docker images available in OpenNebula (ideally without any local importing via commands, as we build dozens of images weekly).