Sunstone restarts during image upload

Hi All,

When I try to upload an image file to the default image datastore (ID 1), the Sunstone service restarts, resulting in a failed upload. I see the following in the Sunstone logs:

Jun 10 19:53:15 one-vm-ubuntu opennebula-sunstone[3259]: == Sinatra has ended his set (crowd applauds)
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: /usr/lib/ruby/3.0.0/tempfile.rb:238:in `size': No such file or directory @ rb_file_s_size - /var/tmp/thin-body20240610-3259-4a7tg9 (Errno::ENOENT)
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/lib/ruby/3.0.0/tempfile.rb:238:in `size'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/thin-1.8.2/lib/thin/request.rb:104:in `finished?'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/thin-1.8.2/lib/thin/request.rb:78:in `parse'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/thin-1.8.2/lib/thin/connection.rb:39:in `receive_data'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run_machine'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/thin-1.8.2/lib/thin/backends/base.rb:75:in `start'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/thin-1.8.2/lib/thin/server.rb:162:in `start'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/rack-2.2.8/lib/rack/handler/thin.rb:22:in `run'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/sinatra-3.1.0/lib/sinatra/base.rb:1650:in `start_server'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/share/one/gems-dist/gems/sinatra-3.1.0/lib/sinatra/base.rb:1589:in `run!'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: #011from /usr/lib/one/sunstone/sunstone-server.rb:1247:in `<main>'
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: 2024-06-10 19:39:11 +0000 Thin web server (v1.8.2 codename Ruby Razor)
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: 2024-06-10 19:39:11 +0000 Maximum connections set to 1024
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: 2024-06-10 19:39:11 +0000 Listening on 0.0.0.0:9869, CTRL+C to stop
Jun 10 19:53:20 one-vm-ubuntu opennebula-sunstone[3259]: 2024-06-10 19:53:15 +0000 Stopping ...
Jun 10 19:53:20 one-vm-ubuntu systemd[1]: opennebula-sunstone.service: Main process exited, code=exited, status=1/FAILURE
Jun 10 19:53:20 one-vm-ubuntu systemd[1]: opennebula-sunstone.service: Failed with result 'exit-code'.
Jun 10 19:53:20 one-vm-ubuntu systemd[1]: opennebula-sunstone.service: Consumed 2min 32.159s CPU time.
Jun 10 19:53:25 one-vm-ubuntu systemd[1]: opennebula-sunstone.service: Scheduled restart job, restart counter is at 1.
Jun 10 19:53:25 one-vm-ubuntu systemd[1]: Stopped OpenNebula Web UI Server.
Jun 10 19:53:25 one-vm-ubuntu systemd[1]: opennebula-sunstone.service: Consumed 2min 32.159s CPU time.

This is a fresh default installation of OpenNebula 6.8.0 CE on an Ubuntu 22.04 VM (8 GB memory, 4 CPU cores, 100 GB disk), and I am accessing the UI at http://IP:9869. I have checked that there is plenty of space in /var/tmp. The issue seems to occur only with larger files, since I was able to upload a 250 MB ISO file. I did not face this issue when OpenNebula was installed directly on bare metal.
I am out of ideas about what else to check. I would appreciate any help.


Versions of the related components and OS (frontend, hypervisors, VMs):

  • OpenNebula 6.8.0 running in an Ubuntu 22.04 VM.
  • Host OS: Ubuntu 22.04 64-bit, kernel 5.15.0-112-generic
  • Hypervisor: KVM

Steps to reproduce:
Upload an ISO or disk image through the Sunstone UI while logged in as the default oneadmin user.

Current results:
Sunstone is restarted and the user needs to log in again. The upload session is lost.

Expected results:
One should be able to upload larger image files, provided there is sufficient disk space.

Has anyone faced a similar issue? Thanks in advance for any input.

Hi, Alex.

Are you using Apache/Passenger to proxy Sunstone requests? If so, can you please try applying the following change to your systemd configuration?

Create the file /etc/systemd/system/httpd.service.d/override.conf with the following contents:

[Service]
PrivateTmp=false

Then restart the apache2/httpd service.
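Something like this should apply it (just a sketch; on Ubuntu the Apache unit is usually apache2.service rather than httpd.service, so adjust the directory and service name accordingly):

# create the drop-in directory and override, then reload systemd and restart Apache
sudo mkdir -p /etc/systemd/system/apache2.service.d
sudo tee /etc/systemd/system/apache2.service.d/override.conf <<'EOF'
[Service]
PrivateTmp=false
EOF
sudo systemctl daemon-reload
sudo systemctl restart apache2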

Best Regards,

Hi Alberto, I am not using Apache or Passenger; I am accessing Sunstone directly on its plain port at http://IP:9869. Let me know if there is anything else I can check. Thanks.

I tested the same default setup on a clean Debian 11 installation and get the same consistent error when I try to upload an ISO file to the default datastore:

Jun 17 07:22:27 debian11-one opennebula-sunstone[1461]: /usr/lib/ruby/2.7.0/tempfile.rb:226:in `size': No such file or directory @ rb_file_s_size - /var/tmp/thin-body20240617-1461-bkufps (Errno::ENOENT)

Just to repeat: I am accessing Sunstone directly on port 9869, without any reverse proxy or intermediaries.
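For what it is worth, while the upload is running I can watch the temporary file and the service journal with something like this (a rough sketch; the thin-body file name changes on every upload):

# in one terminal, watch the upload temp file and free space in /var/tmp
watch -n 1 'ls -lh /var/tmp/thin-body* 2>/dev/null; df -h /var/tmp'
# in another terminal, follow the Sunstone journal
journalctl -u opennebula-sunstone -f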

Hi, Alex.

Can you please post the contents of the following file?

/lib/systemd/system/opennebula-sunstone.service
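If there are any drop-in overrides in place, systemctl cat will show them together with the packaged unit, for example:

systemctl cat opennebula-sunstone.service
# optionally, check which sandboxing options are actually in effect
systemctl show opennebula-sunstone.service -p PrivateTmp -p ProtectSystem -p ProtectHome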

Hi,

Please find it below:

[Unit]
Description=OpenNebula Web UI Server
After=syslog.target network.target
After=opennebula.service
Wants=opennebula-novnc.service
AssertFileNotEmpty=/var/lib/one/.one/sunstone_auth

[Service]
Type=simple
Group=oneadmin
User=oneadmin
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStartPre=-/usr/sbin/logrotate -f /etc/logrotate.d/opennebula-sunstone -s /var/lib/one/.logrotate.status
ExecStartPre=-sh 'gzip -9 /var/log/one/sunstone.log-* &'
ExecStart=/usr/bin/ruby /usr/lib/one/sunstone/sunstone-server.rb
ReadWriteDirectories=/var/lib/one /var/log/one/
ReadOnlyDirectories=-/var/lib/one/remotes
InaccessibleDirectories=-/var/lib/one/datastores
InaccessibleDirectories=-/var/lib/one/.ssh
InaccessibleDirectories=-/var/lib/one/.ssh-oneprovision
ReadWriteDirectories=/var/tmp
PrivateTmp=no
NoNewPrivileges=yes
PrivateDevices=yes
# ProtectSystem=strict is not known by old systemd, so we set
# full everywhere, and override by strict only where supported.
ProtectSystem=full
ProtectSystem=strict
ProtectHome=yes
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectKernelLogs=yes
StartLimitInterval=60
StartLimitBurst=3
Restart=on-failure
RestartSec=5
SyslogIdentifier=opennebula-sunstone

[Install]
WantedBy=multi-user.target

It is the default one. I also tried changing the temp directory (/var/tmp) both in the Sunstone config and in the systemd service, but got the same results. Thanks for assisting.
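For completeness, the temp-directory change I tried looked roughly like this (a sketch; as far as I know :tmpdir: is the Sunstone option for the upload temp directory, and /srv/one-upload-tmp is just an example path I created and chowned to oneadmin):

# /etc/one/sunstone-server.conf
:tmpdir: /srv/one-upload-tmp

# /etc/systemd/system/opennebula-sunstone.service.d/override.conf
[Service]
ReadWriteDirectories=/srv/one-upload-tmp

followed by systemctl daemon-reload and a restart of opennebula-sunstone.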

Interestingly enough, when I put Sunstone behind an nginx reverse proxy I do not have the issue. I tested it with a 6 GB ISO file and the upload worked fine.

Below is the nginx config I used:

# No squealing.
server_tokens off;

# OpenNebula Sunstone upstream
upstream sunstone {
  server 127.0.0.1:9869;
}
# OpenNebula websocketproxy upstream
upstream websocketproxy {
  server 127.0.0.1:29876;
}
# OpenNebula FireEdge
upstream fireedge {
  server 127.0.0.1:2616;
}

# HTTP virtual host, redirect to HTTPS
server {
    listen 80 default_server;
    return 301 https://$server_name:443;
}

#
# Example Sunstone configuration (/etc/one/sunstone-server.conf)
#
#:vnc_proxy_port: 127.0.0.1:29876
#:vnc_proxy_support_wss: only
#:vnc_proxy_cert: /etc/ssl/certs/one-selfsigned.crt
#:vnc_proxy_key: /etc/ssl/private/one-selfsigned.key
#:vnc_proxy_ipv6: false
#:vnc_request_password: false
#:vnc_client_port: 443

# HTTPS virtual host, proxy to Sunstone
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name frontend;
    root         /usr/share/nginx/html;

    access_log  /var/log/nginx/opennebula-sunstone-access.log;
    error_log  /var/log/nginx/opennebula-sunstone-error.log;

    client_max_body_size 1G;

    error_page 404 /404.html;
        location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }

    location / {
        # Handle inconsistency in the websockify URLs provided by Sunstone
        if ($args ~* "host=.+&port=.+&token=.+&encrypt=.*") {
            rewrite ^/$ /websockify/ last;
        }
        proxy_pass http://sunstone;
        proxy_redirect     off;
        log_not_found      off;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host $http_host;
        proxy_set_header   X-Forwarded-FOR $proxy_add_x_forwarded_for;
    }

    location /websockify {
        proxy_http_version 1.1;
        proxy_pass https://websocketproxy;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 61s;
        proxy_buffering off;
    }

    ssl_certificate     /etc/ssl/certs/one-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/one-selfsigned.key;
    ssl_stapling on;
}

##
## OpenNebula XML-RPC proxy (optional)
##
upstream onexmlrpc {
  server 127.0.0.1:2633;
}
upstream vncxmlrpc {
  server 127.0.0.1:2644;
}
server {
    listen       2634 ssl;
    listen       [::]:2634 ssl;
    server_name  frontend;
    root         /usr/share/nginx/html;

    error_page 404 /404.html;
        location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }

    location / {
        proxy_http_version 1.1;
        proxy_pass http://onexmlrpc;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_read_timeout 180s;
        proxy_buffering off;
    }
    location /RPC2/vnctoken {
        proxy_http_version 1.1;
        proxy_pass http://vncxmlrpc;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_buffering off;
    }

    ssl_certificate     /etc/ssl/certs/one-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/one-selfsigned.key;
    ssl_stapling on;
}

I grabbed this from the "Sunstone with Nginx Proxy - Working Configuration" topic.
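After saving the file under /etc/nginx (an assumption about layout; adjust to your distribution's sites-available/sites-enabled scheme), I validated and reloaded nginx the usual way:

nginx -t
systemctl reload nginx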

It seems I have replaced one issue with another, though, as I still need to figure out how to make the nginx setup fully work: Sunstone is complaining that it cannot reach the FireEdge public endpoint.

I fixed the FireEdge access issue through the reverse proxy by adding the section below to the nginx configuration:

# FireEdge
server {
    listen 2646 ssl;
    listen [::]:2646 ssl;
    server_name frontend;
    root /usr/share/nginx/html;

    error_page 404 /404.html;
        location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
        location = /50x.html {
    }

    location / {
        proxy_http_version 1.1;
        proxy_pass http://fireedge;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_buffering off;
    }

    ssl_certificate     /etc/ssl/certs/one-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/one-selfsigned.key;
}

I also configured the FireEdge public endpoint to use port 2646. VNC access through Guacamole is now working again. The only remaining issue is that SPICE is now broken, but I will open another topic for that.
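For reference, the Sunstone-side change was roughly the following (a sketch from memory; the key names are the FireEdge endpoint options in /etc/one/sunstone-server.conf, and "frontend" is the server_name used in the nginx config above):

# /etc/one/sunstone-server.conf
:private_fireedge_endpoint: http://localhost:2616
:public_fireedge_endpoint: https://frontend:2646

followed by a restart of the opennebula-sunstone service.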