Ceph S3 private marketplace



Versions of the related components and OS (frontend, hypervisors, VMs): OpenNebula 7.0.1 and cephadm-based Ceph 19.2.3 on Ubuntu 24.04.3 LTS (GNU/Linux 6.8.0-94-generic x86_64) in an HCI configuration.

Steps to reproduce: Set up a Ceph S3 bucket and connect OpenNebula to it via a private MarketPlace. There are two HAProxy instances (containers managed by cephadm) with two virtual IPs in front of the Ceph RGW. The two VIPs share the same DNS name for redundancy and load balancing.
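For context, the kind of HAProxy frontend/backend pair cephadm puts in front of RGW looks roughly like this (all names, addresses and the backend port below are illustrative assumptions, not my actual generated config):

```
frontend rgw-frontend
    bind *:8080
    mode http
    default_backend rgw-backend

backend rgw-backend
    mode http
    balance roundrobin
    # Hypothetical RGW daemons behind the proxy
    server rgw1 10.0.0.11:8081 check
    server rgw2 10.0.0.12:8081 check
```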

Current results:
When I create images, they show up under Apps in the private marketplace. When I create apps from VMs, I can see from the used data size of the S3 bucket that data has been transferred to Ceph, but the VM does not show up as an app in the private MarketPlace. I see the same strange behavior for VM templates as for VMs.

Expected results:
The VMs should also show up as apps under the private MarketPlace.

Does anybody have any idea why this is the case for me?

The last time I installed OpenNebula this worked, but at that time I did not use HAProxy. For this reason I suspect it may have something to do with it. Does anyone know whether any special adjustments are needed when the S3 connection from OpenNebula passes through HAProxy?

Hello @strandte !

To isolate the issue, have you tried removing the HAProxy from the equation and using the S3 gateway directly as the endpoint for the marketplace?
Could you please share your S3 marketplace config here (e.g. the output of onemarket show -j <mp_id>)? Remember to remove any sensitive information from the output before sharing it here.

Good idea to share the config:
oneadmin@svonefront-pub:~$ onemarket show -j 103
{
  "MARKETPLACE": {
    "ID": "103",
    "UID": "0",
    "GID": "0",
    "UNAME": "oneadmin",
    "GNAME": "oneadmin",
    "NAME": "Hbr MarketPlace",
    "STATE": "0",
    "MARKET_MAD": "s3",
    "ZONE_ID": "0",
    "TOTAL_MB": "1048576",
    "FREE_MB": "1048576",
    "USED_MB": "0",
    "MARKETPLACEAPPS": {},
    "PERMISSIONS": {
      "OWNER_U": "1",
      "OWNER_M": "1",
      "OWNER_A": "0",
      "GROUP_U": "0",
      "GROUP_M": "0",
      "GROUP_A": "0",
      "OTHER_U": "0",
      "OTHER_M": "0",
      "OTHER_A": "0"
    },
    "TEMPLATE": {
      "ACCESS_KEY_ID": "XXXXXXXXXXXXXXXXXXXX",
      "AWS": "NO",
      "BUCKET": "hbr.opennebula.markedplace",
      "DESCRIPTION": "Hbr MarketPlace",
      "ENDPOINT": "http://rgw.example.com:8080",
      "FORCE_PATH_STYLE": "YES",
      "MARKET_MAD": "s3",
      "REGION": "default",
      "SECRET_ACCESS_KEY": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
      "SIGNATURE_VERSION": "s3"
    }
  }
}

As I said, I am able to save images to the S3 bucket with this command:

oneadmin@svonefront:~$ onemarketapp create --name 'test' --image 131 --market "Hbr MarketPlace"
ID: 154

You can see the file with the AWS CLI:

C:\Program Files\Amazon\AWSCLIV2>aws s3 ls s3://hbr.opennebula.markedplace/ --recursive --human-readable --endpoint-url http://rgw.example.com:8080
2026-02-04 14:35:17 1.2 MiB marketapp-154

I also seem to be able to save a VM to the marketplace with this command:

oneadmin@svonefront:~$ onemarketapp vm import 28 --vmname 'WIN-SRV-fast' --market 103
Do you want to import images too? (yes/no, default=yes): yes
ID: 155

To delete saved template use: onetemplate delete 73 --recursive

As you can see, there are no error messages!

But when I check with the AWS CLI it is not there, only the image transferred earlier:

C:\Program Files\Amazon\AWSCLIV2>aws s3 ls s3://hbr.opennebula.markedplace/ --recursive --human-readable --endpoint-url http://rgw.example.com:8080
2026-02-04 14:35:17 1.2 MiB marketapp-154

If I try to transfer an image that is in a used or error state, I get an error like this:
oneadmin@svonefront:~$ onemarketapp create --name 'test34' --image 138 --market "Hbr MarketPlace"
[one.marketapp.allocate] Cannot clone image in current state
This makes me believe that if the VM has some of its images in the wrong state for transferring to the marketplace, it would probably give a similar error?

I have tried to use the RGW directly, by pointing it to http://:8081.

RGW uses port 8081 and HAProxy uses port 8080. With both OpenNebula and the AWS CLI the results are no different whether I go through HAProxy or directly, so my question about extra HAProxy settings can probably be set aside.

Also, I have not been able to find the right logs for troubleshooting this problem. Where should I look?

Marketplace related events are written to the /var/log/one/oned.log file.
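A quick way to filter the relevant entries on the frontend could look like this (the grep pattern is a guess at likely keywords, not an exact message string):

```shell
# Show recent marketplace-related lines from oned.log on the frontend.
# 'market' is a case-insensitive keyword guess; widen it if nothing matches.
sudo grep -iE 'market' /var/log/one/oned.log | tail -n 50
```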

We can’t reproduce the issue in our lab.

Do you have any relevant error messages in ceph logs?

Does replacing s3 with v4 in SIGNATURE_VERSION change the behavior?

According to our documentation, possible values for that attribute are s3, v2 and v4:

SIGNATURE_VERSION Depends on the S3 implementation, possible values are s3, v2 or v4

To change the value of that attribute, run the onemarket update <marketplace_id> command, make the change, and save it. After that, please try to repeat the failing operation.
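A non-interactive way to make that change could be the following sketch (the marketplace ID 103 is taken from the config above; the temp-file path is arbitrary):

```shell
# Write the attribute to a temp file and merge it into the marketplace
# template instead of using the interactive editor.
cat > /tmp/mp-sig.tmpl <<'EOF'
SIGNATURE_VERSION="v4"
EOF
onemarket update 103 /tmp/mp-sig.tmpl --append
```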

As for your question:

It shouldn't trigger such an error for used images (both persistent and non-persistent ones), but it is raised for images in the error state.

What is the output of the oneimage show -j 138 command? What state does that image have?

oneadmin@svonefront-pub:~$ oneimage show -j 138
{
  "IMAGE": {
    "ID": "138",
    "UID": "2",
    "GID": "0",
    "UNAME": "tstrand",
    "GNAME": "oneadmin",
    "NAME": "WINSRVOPN2-disk-1",
    "PERMISSIONS": {
      "OWNER_U": "1",
      "OWNER_M": "1",
      "OWNER_A": "0",
      "GROUP_U": "0",
      "GROUP_M": "0",
      "GROUP_A": "0",
      "OTHER_U": "0",
      "OTHER_M": "0",
      "OTHER_A": "0"
    },
    "TYPE": "1",
    "DISK_TYPE": "3",
    "PERSISTENT": "1",
    "REGTIME": "1769779382",
    "MODTIME": "1769779382",
    "SOURCE": "one-ceph/one-138",
    "PATH": "one-ceph/one-126",
    "FORMAT": "raw",
    "FS": "",
    "SIZE": "754",
    "STATE": "8",
    "PREV_STATE": "8",
    "RUNNING_VMS": "1",
    "CLONING_OPS": "0",
    "CLONING_ID": "-1",
    "TARGET_SNAPSHOT": "-1",
    "DATASTORE_ID": "112",
    "DATASTORE": "one-ceph",
    "VMS": {
      "ID": "28"
    },
    "CLONES": {},
    "APP_CLONES": {},
    "TEMPLATE": {
      "DEV_PREFIX": "hd",
      "FROM_APP": "111",
      "FROM_APP_MD5": "9e650d0e7c6e017a91ca299c8f7ed766",
      "FROM_APP_NAME": "Windows VirtIO Drivers - v0.1.285"
    },
    "SNAPSHOTS": {
      "ALLOW_ORPHANS": "NO",
      "CURRENT_BASE": "-1",
      "NEXT_SNAPSHOT": "0"
    },
    "BACKUP_INCREMENTS": {},
    "BACKUP_DISK_IDS": {}
  }
}

Even after I changed SIGNATURE_VERSION from s3 to v4, it still does not seem to transfer a VM to the S3 bucket.

I have been busy with other things for a while, but I wanted to answer your questions quickly. I will look into this more deeply later.

Hello @strandte !

The provided information helped to identify the issue: image #138 is a CDROM (TYPE=1) and it is persistent (PERSISTENT=1). According to its other attributes (e.g. "FROM_APP_NAME": "Windows VirtIO Drivers - v0.1.285" and "SIZE": "754") it looks like a Windows VirtIO ISO disk.
We could reproduce the issue in our lab: indeed, the commands
onemarketapp vm import <vm_id> --vmname <name> --market <mp_id> --yes and onemarketapp vm-template import <vm_id> --vmname <name> --market <mp_id> --yes
do not upload files to the S3 private marketplace and do not provide any hint or error.

While we clarify this with our developers, the workaround is to use non-persistent CDROM/ISO images. We will also clarify whether assigning the persistent flag to read-only ISO/CDROM disks is a valid operation.

So please try to detach image #138 from the VM, make the image non-persistent, attach it back to the VM, and then create the app in the S3 private marketplace.
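A sketch of that workaround with the IDs from this thread (the disk ID is a placeholder to look up first, and depending on the hypervisor the VM may need to be powered off for the detach):

```shell
# Find the disk ID of image 138 in the VM's disk list first:
onevm show 28

# Detach the ISO, flip its persistent flag off, and attach it back.
onevm disk-detach 28 <disk_id>     # <disk_id> is a placeholder
oneimage nonpersistent 138
onevm disk-attach 28 --image 138

# Then retry the import.
onemarketapp vm import 28 --vmname 'WIN-SRV-fast' --market 103
```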

Another test could be to instantiate any test VM with disk type OS and try to create an app from it in your S3 private marketplace.
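That control test could be sketched like this (all names, sizes and the datastore ID 112 are placeholders/assumptions; substitute your own values and the new VM's ID):

```shell
# Create a small OS-type image, build a minimal template around it,
# instantiate it, then try creating a marketplace app from the VM.
oneimage create --name test-os --type OS --size 256 --format raw --datastore 112
onetemplate create --name test-vm --cpu 1 --memory 128 --disk test-os
onetemplate instantiate test-vm
onemarketapp vm import <new_vm_id> --vmname test-app --market 103
```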