Tar Mapper for LXD

Hello folks.

LXD support for ONE is a really cool thing, especially if you plan to run only Linux-based machines. Unfortunately, its disk support is limited to loop or qemu-nbd devices, with the VM running entirely on an emulated block device. I believe this adds unnecessary complexity and keeps LXD from reaching its full potential. I haven’t run any benchmarks, but getting rid of the emulation layers should benefit any IO-intensive workloads you run on your cloud.

So, since the LXD driver already has the concept of mappers, I’ve implemented a simple TAR mapper. Its logic is dead simple: it extracts the supplied TAR archive onto an existing file system on the host and maps it into your LXD-driven VM. If the underlying FS is btrfs or ZFS, it also tries to create a subvolume for it.

For ONE 5.10.3

Default POOL_DIR is ‘/tank/one/lxd’
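
To give a feel for the flow before the code, here is a minimal usage sketch; the one_vm, disk and directory arguments are the objects the LXD driver normally passes in, so this is illustration only, not something you would run by hand:

# Illustration only -- the LXD driver drives these calls for you
mapper = TarMapper.new

# prolog/boot: untar the disk image into POOL_DIR (as a subvolume or
# snapshot when the pool sits on btrfs/zfs) and bind-mount it into the VM
mapper.map(one_vm, disk, directory)

# ... the container now runs directly on the host file system ...

# shutdown with save: unmount the bind mount, then pack the directory
# back into the original tar image so it can be restored or migrated
mapper.unmap(one_vm, disk, directory)
mapper.save(one_vm, disk, directory)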

tar.rb

#!/usr/bin/ruby

# -------------------------------------------------------------------------- #
# Copyright 2002-2019, OpenNebula Project, OpenNebula Systems                #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
# http://www.apache.org/licenses/LICENSE-2.0                                 #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

$LOAD_PATH.unshift File.dirname(__FILE__)

require 'mapper'

# Tar file mapping for system images, backed by tar and btrfs/zfs
class TarMapper < Mapper
    POOL_DIR = '/tank/one/lxd'

    COMMANDS = COMMANDS.merge({
        :btrfs      => 'sudo btrfs',
        :zfs        => 'sudo zfs',
        :tar        => 'sudo tar',
        :rm         => 'sudo rm',
        :chown      => 'sudo chown',
        :stat       => 'stat',
        :sha256sum  => 'sha256sum'
    })

    def initialize
        @has_pigz   = File.exists?('/usr/bin/pigz')
        @has_pbzip2 = File.exists?('/usr/bin/pbzip2')
        @has_pixz   = File.exists?('/usr/bin/pixz')
        @pool_fs    = fs_type(POOL_DIR)
    end

    def do_map(one_vm, disk, _directory)
        dsrc  = one_vm.disk_source(disk)
        
        disk_id = disk['DISK_ID']
        path    = target_path(one_vm.vm_name, disk_id)

        # If the path already exists, the disk should already be unpacked
        return path if File.exists?(path)

        self.send(@pool_fs+"_unpack", dsrc, path)
    end

    def do_unmap(device, one_vm, disk, _directory)
        umount_dev(_directory)
    end

    def map(one_vm, disk, directory)
        return true if mount_on?(directory)

        device = do_map(one_vm, disk, directory)

        OpenNebula.log_info "Mapping disk at #{directory} using device #{device}"

        return false unless device

        return mount_dev(device, directory)
    end

    def unmap(one_vm, disk, directory)
        OpenNebula.log_info "Unmapping disk at #{directory}"

        real_path = directory

        is_rootfs = real_path =~ %r{.*/rootfs}
        is_shared_ds = File.symlink?(one_vm.sysds_path)

        real_path = File.realpath(directory) if !is_rootfs && is_shared_ds
        device    = one_vm.disk_source(disk)

        return unless do_unmap(device, one_vm, disk, real_path)

        true
    end

    def save(one_vm, disk, directory)
        dsrc  = one_vm.disk_source(disk)
        
        disk_id = disk['DISK_ID']
        path    = target_path(one_vm.vm_name, disk_id)

        return unless File.exists?(path)

        OpenNebula.log_info "Saving directory #{directory} back to disk.#{disk_id}"

        self.send(@pool_fs+"_pack", dsrc, path)
    end

    private

    def target_path(name, disk)
        POOL_DIR + "/containers/#{name}/disk.#{disk}/"
    end

    def get_vm_path(path)
        path.sub(/(containers\/.+?)\/.*$/, '\1')
    end

    def btrfs_pack(src, dst)
        OpenNebula.log_info "Packing of #{dst} back to #{src}"
        return false unless tar(src, dst)
        # Recursively delete subvols
        subvols = btrfs_get_subvols(dst)
        subvols.each do |vol|
            btrfs_subvol_delete(vol)
        end

        # Delete root dir if no subvols left
        vm_path = get_vm_path(dst)
        if File.exists?(vm_path) && dir_empty?(vm_path)
            btrfs_subvol_delete(vm_path)
        end
        true
    end

    def btrfs_unpack(src, dst)
        unless File.exists?(POOL_DIR + '/containers/')
            OpenNebula.log_info "Creating subvolume for containers at " + POOL_DIR + "/containers/"
            return false unless btrfs_subvol_create(POOL_DIR+"/containers/")
        end

        vm_path = get_vm_path(dst)
        unless File.exists?(vm_path)
            return false unless btrfs_subvol_create(vm_path)
        end

        # Speed hack using subvolumes
        if hash = cksum(src)
            unless File.exists?(POOL_DIR + '/images/')
                OpenNebula.log_info "Creating subvolume for images at " + POOL_DIR + "/images/"
                return false unless btrfs_subvol_create(POOL_DIR+"/images/")
            end

            # Create image snapshot first
            image = POOL_DIR + "/images/#{hash}"
            unless File.exists?(image)
                OpenNebula.log_info "Creating subvolume for disk #{src} at #{image}"

                tmp = POOL_DIR + "/tmp/"
                return false unless btrfs_subvol_delete(tmp)
                return false unless btrfs_subvol_create(tmp)

                OpenNebula.log_info "Unpacking disk #{src} to #{image}"
                return false unless untar(src, tmp)

                return false unless btrfs_snapshot(tmp, image, true)
                return false unless btrfs_subvol_delete(tmp)
            end

            # Instantiate 
            OpenNebula.log_info "Creating image instance of #{src} at #{dst}"
            return false unless btrfs_snapshot(image, dst)
        else
            # Just unpack the image
            OpenNebula.log_info "Creating image instance of #{src} at #{dst}"
            return false unless btrfs_subvol_create(dst)
            return false unless untar(src, dst)
        end

        dst
    end
    
    def zfs_pack(src, dst)
        OpenNebula.log_info "Packing of #{dst} back to #{src}"
        return false unless tar(src, dst)
        return false unless zfs_delete_subvol(dst)

        # Delete root dir if no subvols left
        vm_path = get_vm_path(dst)
        if File.exists?(vm_path) && dir_empty?(vm_path)
            zfs_delete_subvol(vm_path)
        end

        true
    end

    def zfs_unpack(src, dst)
        unless File.exists?(POOL_DIR + '/containers/')
            OpenNebula.log_info "Creating subvolume for containers at " + POOL_DIR + "/containers/"
            return false unless zfs_subvol_create(POOL_DIR+"/containers/")
        end

        vm_path = get_vm_path(dst)
        unless File.exists?(vm_path)
            return false unless zfs_subvol_create(vm_path)
        end

        # Just unpack the image
        OpenNebula.log_info "Creating image instance of #{src} at #{dst}"
        return false unless zfs_subvol_create(dst)
        return false unless untar(src, dst)
        dst
    end

    def dir_pack(src, dst)
        OpenNebula.log_info "Packing of #{dst} back to #{src}"
        return false unless tar(src, dst)
        return false unless dir_delete(dst)
        
        # Delete root dir if no subdirs left
        vm_path = get_vm_path(dst)
        if File.exists?(vm_path) && dir_empty?(vm_path)
            dir_delete(vm_path)
        end
        
        true
    end

    def dir_unpack(src, dst)
        OpenNebula.log_info "Creating image instance of #{src} at #{dst}"
        return false unless mkdirp_safe(dst)
        return false unless untar(src, dst)
        dst
    end

    def dir_delete(path)
        return true unless File.exists?(path)

        OpenNebula.log_info "Deleting #{path}"
        return false unless path_sanity_check(path)

        rc, out, err = Command.execute("#{COMMANDS[:rm]} -rf #{path}", false)

        unless rc.zero?
            OpenNebula.log_error("#{__method}: #{err}")
            return false
        end
        true 
    end

    def btrfs_subvol_create(path)
        OpenNebula.log_info "Creating btrfs subvolume #{path}"

        return false unless path_sanity_check(path)

        rc, out, err = Command.execute("#{COMMANDS[:btrfs]} subvolume create #{path}", false)

        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def btrfs_get_subvols(path)
        rc, out, err = Command.execute("#{COMMANDS[:btrfs]} inspect-internal rootid #{path}", false)
        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        rootid = out.chomp.to_i
      
        subvols = {rootid => path}
      
        rc, out, err = Command.execute("#{COMMANDS[:btrfs]} subvolume list -p #{path}", false)
        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        out.split(/\n/).each do |line|
            line.match(/ID (\d+) .*parent (\d+) .*path (.*)$/) do |m|
                if subvols.key?(m[2].to_i)
                    subvols[m[1].to_i] = File.join(path, m[3])
                end
            end
        end
      
        subvols.sort_by { |k, v| k }.reverse.map {|k, v| v }
    end

    def btrfs_subvol_delete(path)
        return true unless File.exists?(path)

        OpenNebula.log_info "Deleting btrfs subvolume #{path}"

        return false unless path_sanity_check(path)

        rc, out, err = Command.execute("#{COMMANDS[:btrfs]} subvolume delete #{path}", false)
        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def btrfs_snapshot(src, dst, readonly=false)
        OpenNebula.log_info "Creating btrfs snapshot of #{src} to #{dst}"

        return false unless path_sanity_check(src) && path_sanity_check(dst)

        arg = readonly ? "-r" : ""

        rc, out, err = Command.execute("#{COMMANDS[:btrfs]} subvolume snapshot #{arg} #{src} #{dst}", false)
        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def zfs_subvol_create(path)
        return false unless path_sanity_check(path)
        # Change /tank/ to tank to comply with ZFS
        pool = path.sub(/^\//, '').sub(/\/$/, '')
        OpenNebula.log_info "Creating zfs subvolume #{pool}"

        rc, out, err = Command.execute("#{COMMANDS[:zfs]} create #{pool}", false)

        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def zfs_subvol_delete(path)
        return true unless File.exists?(path)
        return false unless path_sanity_check(path)

        # Change /tank/ to tank to comply with ZFS
        pool = path.sub(/^\//, '').sub(/\/$/, '')
        OpenNebula.log_info "Deleting zfs subvolume #{pool}"

        rc, out, err = Command.execute("#{COMMANDS[:zfs]} destroy #{pool}", false)
        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def path_sanity_check(path)
        if path.empty? or !path.start_with?(POOL_DIR)
            OpenNebula.log_error("#{__method__}: Path not within #{POOL_DIR}")
            return false
        end
        true
    end

    def mkdirp_safe(path)
        return false unless path_sanity_check(path)

        rc, _out, err = Command.execute("#{COMMANDS[:su_mkdir]} -p #{path}", false)

        return true if rc.zero?

        OpenNebula.log_error("#{__method__}: #{err}")
        false
    end

    def cksum(path)
        # Do not checksum files larger than 4 GB
        if File.size(path) <= 4294967296
            rc, out, err = Command.execute("#{COMMANDS[:sha256sum]} -b #{path}", false)

            unless rc.zero?
                OpenNebula.log_error("#{__method__}: #{err}")
                return false
            end

            if out.length < 64
                OpenNebula.log_error("#{__method__}: unexpected sha256sum output: #{out}")
                return false
            end

            out[0..63]
        else
            false
        end
    end

    def tar(file, path)
        return false unless path_sanity_check(path)
        driver = disk_type(file)

        c = case driver
        when :bzip2
            @has_pbzip2 ? '-I pbzip2' : '-j'
        when :gzip
            @has_pigz ? '-I pigz' : '-z'
        when :xz
            @has_pixz ? '-I pixz' : '-J'
        else
            ''
        end

        # Aim for faster compression
        env = case driver
        when :bzip2
            "BZIP2=-1"
        when :gzip
            "GZIP=-1"
        when :xz
            "XZ_OPT=-1"
        else
          ""
        end

        rc, out, err = Command.execute("#{env} #{COMMANDS[:tar]} #{c} --numeric-owner -Scf #{file} -C #{path} .", false)

        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def untar(file, path)
        return false unless path_sanity_check(path)

        c = case disk_type(file)
        when :bzip2
            @has_pbzip2 ? '-I pbzip2' : '-j'
        when :gzip
            @has_pigz ? '-I pigz' : '-z'
        when :xz
            @has_pixz ? '-I pixz' : '-J'
        else
            ''
        end

        rc, out, err = Command.execute("#{COMMANDS[:tar]} #{c} -xif #{file} -C #{path}", false)

        unless rc.zero?
            OpenNebula.log_error("#{__method__}: #{err}")
            return false
        end
        true
    end

    def disk_type(ds)
        rc, out, err = Command.execute("#{COMMANDS[:file]} #{ds}", false)

        unless rc.zero?
            OpenNebula.log_error("#{__method__} #{err}")
            return
        end

        case out
        when /.*tar archive.*/
            :tar
        when /.*XZ compressed data.*/
            :xz
        when /.*bzip2 compressed data.*/
            :bzip2
        when /.*gzip compressed data.*/
            :gzip
        else
            OpenNebula.log("Unknown #{out} image format")
            nil
        end
    end

    def dir_empty?(path)
        (Dir.entries(path) - %w{. ..}).empty?
    end

    def mount_on?(path)
        _rc, out, _err = Command.execute("#{COMMANDS[:mount]}", false)

        if out.match(/ on #{path}/)
            OpenNebula.log_error("#{__method__}: Mount detected in #{path}")
            return true
        end
        false
    end

    def mount_dev(dev, path)
        OpenNebula.log_info "Mounting #{dev} at #{path}"

        mkdir_safe(path)

        rc, _out, err = Command.execute("#{COMMANDS[:mount]} #{dev} #{path} -o bind", true)

        if rc != 0
            OpenNebula.log_error("mount_dev: #{err}")
            return false
        end

        file = File.stat(dev)
        rc, _out, err = Command.execute("#{COMMANDS[:chown]} #{file.uid}:#{file.gid} #{path}", true)

        if rc != 0
            OpenNebula.log_error("chown_path: #{err}")
            return false
        end

        true
    end

    def fs_type(path)
        _rc, out, _err = Command.execute("#{COMMANDS[:stat]} --file-system --format=%T #{path}", false)

        if out.match(/btrfs|zfs/)
            out.chomp
        else
            'dir'
        end
    end
end
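
One installation note: the COMMANDS hash above runs btrfs, zfs, tar, rm and chown through sudo, so the oneadmin user on the host needs passwordless sudo for those binaries. Something along these lines in a sudoers drop-in should do (the binary paths are assumptions and differ between distros, check them with `which`):

# /etc/sudoers.d/opennebula-tar-mapper -- example only, adjust the paths
oneadmin ALL=(root) NOPASSWD: /usr/bin/tar, /usr/bin/btrfs, /usr/sbin/zfs, /usr/bin/rm, /usr/bin/chown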

container.rb

  def new_disk_mapper(disk)
   ...

            when /.*gzip compressed data.*/, /.*bzip2 compressed data.*/, /.*XZ compressed data.*/, /.*tar archive.*/
                OpenNebula.log "Using tar disk mapper for #{ds}"
                TarMapper.new
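
For context, this branch slots into the case over the `file` output that new_disk_mapper already uses to pick a mapper, and the new class has to be required at the top of container.rb. A rough sketch of where it lands (the surrounding vanilla code is abbreviated, and the require path assumes tar.rb sits next to the other mappers; your 5.10.3 tree may differ slightly):

# container.rb -- sketch only, vanilla code abbreviated
require 'mapper/tar'    # assumes tar.rb is installed alongside the other mappers

def new_disk_mapper(disk)
    # ... vanilla code resolves the disk source into `ds` and runs
    # `file` on it, capturing the description in `out` ...
    case out
    # existing branches (qcow2, raw filesystem, ...) stay untouched
    when /.*gzip compressed data.*/, /.*bzip2 compressed data.*/,
         /.*XZ compressed data.*/, /.*tar archive.*/
        OpenNebula.log "Using tar disk mapper for #{ds}"
        TarMapper.new
    # ...
    end
end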

Unfortunately, to properly support save and restore operations, I had to add a method to
mapper.rb

    # Saves a disk back to its image from a given directory
    # @param one_vm [OpenNebulaVM] with the VM description
    # @param disk [XMLElement] with the disk data
    # @param directory [String] Path to the directory where the disk is
    # mounted. Example: /var/lib/one/datastores/100/3/mapper/disk.2
    #
    # @return true on success
    def save(one_vm, disk, directory)
        # Nothing to do most of the time
    end
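
The save script below calls container.setup_storage('save'), so the container class also needs to route that operation to the mapper; that edit isn't shown here, but a minimal sketch could look like this (method and variable names follow the vanilla driver loosely and may not match your tree exactly):

# container.rb -- sketch only: route 'save' to the mapper
def setup_disk(disk, operation)
    # ... vanilla code computes the mount directory for this disk ...
    mapper = new_disk_mapper(disk)

    case operation
    when 'map'
        mapper.map(@one, disk, directory)
    when 'unmap'
        mapper.unmap(@one, disk, directory)
    when 'save'
        # new: pack the extracted directory back into the tar image
        mapper.save(@one, disk, directory)
    end
end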

With these edits, cold migrations are now possible.

save

#!/usr/bin/env ruby

# -------------------------------------------------------------------------- #
# Copyright 2002-2019, OpenNebula Project, OpenNebula Systems                #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
# http://www.apache.org/licenses/LICENSE-2.0                                 #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

$LOAD_PATH.unshift File.dirname(__FILE__)

require 'container'

require_relative '../../scripts_common'

vm_name    = ARGV[0]
checkpoint = ARGV[1]
xml        = $stdin.read

client    = LXDClient.new
one_xml   = begin
    client.get("containers/#{vm_name}")['metadata']['config']['user.xml']
rescue StandardError
    nil
end
container = Container.get(vm_name, one_xml, client)

# ------------------------------------------------------------------------------
# Stop vnc connection and container & unmap devices if not a wild container
# ------------------------------------------------------------------------------
container.vnc('stop')
container.check_stop(false)

exit 0 if container.wild?

checkpoint = File.absolute_path(checkpoint)
File.open(checkpoint, 'w') { |f| f.write(one_xml) } unless one_xml.nil?

raise 'Failed to dismantle container storage' unless \
container.setup_storage('unmap')
container.setup_storage('save')

container.delete

restore

#!/usr/bin/env ruby

# -------------------------------------------------------------------------- #
# Copyright 2002-2019, OpenNebula Project, OpenNebula Systems                #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
# http://www.apache.org/licenses/LICENSE-2.0                                 #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

$LOAD_PATH.unshift File.dirname(__FILE__)

require 'container'

require_relative '../../scripts_common'

checkpoint   = ARGV[0]
checkpoint   = File.absolute_path(checkpoint)

unless File.exists?(checkpoint)
    OpenNebula.log_error("Checkpoint file not found: #{checkpoint}")
    exit -1
end

puts %x{cat #{checkpoint} | #{__dir__}/deploy '#{ARGV.join("' '")}'}
File.unlink(checkpoint) if File.exists?(checkpoint)

The major downside of this approach is that if you delete your VM, the actual container files are left stale on your machine’s FS. Luckily, we can cope with that by using a hook on LCM_STATE=“CLEAN_RESUBMIT”:

lxd_cleanup

#!/usr/bin/env ruby
require 'open3'
require 'base64'
require 'rexml/document'

POOL_DIR = '/tank/one/lxd'

def log_function(severity, message)
    STDERR.puts "#{severity}: #{File.basename $0}: #{message}"
end

# Logs an info message
def log_info(message)
    log_function("INFO", message)
end

# Logs an error message
def log_error(message)
    log_function("ERROR", message)
end

# Logs a debug message
def log_debug(message)
    log_function("DEBUG", message)
end

def dir_empty?(path)
    (Dir.entries(path) - %w{. ..}).empty?
end

def execute(cmd, lock=false)
    stdout, stderr, s = Open3.capture3(cmd)
    [s.exitstatus, stdout, stderr]
end

def fs_type(path)
    rc, out, err = execute("stat --file-system --format=%T #{path}", false)

    if out.match(/btrfs|zfs/)
        out.chomp
    else
        'dir'
    end
end

def get_vm_path(id)
    File.join(POOL_DIR, 'containers', "one-#{id}")
end

def path_sanity_check(path)
    !(path.empty? or !path.start_with?(POOL_DIR))
end

def zfs_subvol_delete(path)
    return true unless File.exists?(path)
    return false unless path_sanity_check(path)

    # Change /tank/ to tank to comply with ZFS
    pool = path.sub(/^\//, '').sub(/\/$/, '')
    log_info "Deleting zfs subvolume #{pool}"

    rc, out, err = execute("sudo zfs destroy #{pool}", false)
    unless rc.zero?
        log_error("#{__method__}: #{err}")
        return false
    end
    true
end
alias zfs_clean zfs_subvol_delete

def btrfs_get_subvols(path)
    rc, out, err = execute("sudo btrfs inspect-internal rootid #{path}", false)
    unless rc.zero?
        log_error("#{__method__}: #{err}")
        return false
    end
    rootid = out.chomp.to_i
  
    subvols = {rootid => path}
  
    rc, out, err = execute("sudo btrfs subvolume list -p #{path}", false)
    unless rc.zero?
        log_error("#{__method__}: #{err}")
        return false
    end
    out.split(/\n/).each do |line|
        line.match(/ID (\d+) .*parent (\d+) .*path (.*)$/) do |m|
            if subvols.key?(m[2].to_i)
            subvols[m[1].to_i] = File.join(path, m[3])
            end
        end
    end
  
    subvols.sort_by { |k, v| k }.reverse.map {|k, v| v }
end

def btrfs_subvol_delete(path)
    return true unless File.exists?(path)

    log_info "Deleting btrfs subvolume #{path}"

    return false unless path_sanity_check(path)

    rc, out, err = execute("sudo btrfs subvolume delete #{path}", false)
    unless rc.zero?
        log_error("#{__method__}: #{err}")
        return false
    end
    true
end

def btrfs_clean(path)
    return true unless File.exists?(path)
    subvols = btrfs_get_subvols(path)
    subvols.each do |vol|
        btrfs_subvol_delete(vol)
    end
end

def dir_delete(path)
    return true unless File.exists?(path)

    log_info "Deleting #{path}"
    return false unless path_sanity_check(path)

    rc, out, err = execute("sudo rm -rf #{path}", false)

    unless rc.zero?
        log_error("#{__method}: #{err}")
        return false
    end
    true 
end
alias dir_clean dir_delete

def clean(id)
    log_info "Cleaning VM #{id}"
    path = get_vm_path(id)
    return unless File.exists?(path)

    pool_fs = fs_type(path)
    self.send(pool_fs+'_clean', path)
end

xml = Base64.decode64(ARGV[0])
doc = REXML::Document.new(xml)
id  = doc.root.elements['ID'].text

clean(id)
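
Such a hook would be registered through 5.10’s hook subsystem with a template along these lines. This is only a sketch under my assumptions ($TEMPLATE passes the base64-encoded VM template that the script above decodes, and REMOTE = YES runs it on the host where the pool lives), so verify the attribute names and the exact LCM state string against the hook documentation for your release:

# lxd_cleanup.tmpl -- sketch, verify against your release's hook docs
NAME      = "lxd_cleanup"
TYPE      = state
RESOURCE  = VM
STATE     = ACTIVE
LCM_STATE = CLEAN_RESUBMIT      # state name as used above; verify on your setup
ON        = CUSTOM
COMMAND   = lxd_cleanup         # the script above, installed in the hooks directory
ARGUMENTS = "$TEMPLATE"         # base64-encoded VM template, decoded by the script
REMOTE    = YES                 # run on the virtualization host

It would then be created with something like onehook create lxd_cleanup.tmpl.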

The whole thing is not thoroughly tested, but it has worked as a concept for quite some time: I’ve been running it on my two 5.10 setups for about a year. Beware, it can still eat your data, so DO BACKUPS.

Is anyone interested in such a thing?

BTW, a great reason to use btrfs for hosting LXD virtual machines is that you can create subvolumes inside them even with unprivileged containers. Unfortunately, those subvolumes won’t survive cold migrations to another host, but it is still a cool feature to use.

This is not about OpenNebula but about LXD VMs and containers.

I use Ubuntu 20.04 & 18.04, all with snap LXD.

I just recently created both Windows 7 Ultimate and Ubuntu LXD VMs

I also had a variety of LXD containers (debian, ubuntu, centos) in use

I used CLI or REST API for everything

For the REST API, if you are a programmer you’d use Go, Python, Java, PHP, or Rust.

Wanted to have backups. BUT I’m a neanderthal.

I use BASH scripts :sunglasses:

I create descriptively named bash scripts, and inside each script use curl to send get/set commands to LXD’s REST API.

Again, installed SNAP LXD on 2 other machines

Then … on one machine I used the CLI or REST (via bash scripts) to COPY/CLONE my LXD VMs (Windows or Linux)
or
my LXD Linux “system containers” (Debian, Fedora, CentOS, Ubuntu, Alpine, Oracle Linux, openSUSE etc.) to/from all 3 machines

$ lxc remote add MyLXDName YourLXDName YourIP

answer 2 security questions…

Then

CLI or REST API

$ lxc stop ctnr

$ lxc copy ctnr YourLXDName:YourCtnrName

All the other LXD/LXC CLI and REST API functions work the same.

If OpenNebula uses Go, Python, PHP, or Rust, they are probably using the LXD REST API already…

The main problem with using tar images for creating VMs in OpenNebula is the extra archive/extract steps needed to get the container up and running, resulting in longer prolog/undeploy times, a problem that scales with image size. While creating the driver that was a problem we wanted to avoid, because tar images on image datastores are the vanilla way of creating containers in LXD.

There is, however, a clear performance gain once the container is running on top of a non-virtual FS. Would you be up for creating this mapper as a separate addon in a GitHub repo, with install instructions and the important notes (required hooks, etc.)? That way it would be easier for users to test and compare it.

I’m trying to cut instantiation times with several approaches: using parallel versions of the archivers, and using snapshots on supported file systems. Also, I do not pack the image back on a simple poweroff, only when the save action is called. And if your image is somewhere below 1 GB, it won’t take long to start anyway, even with full unpacking.

I’m not sure if I can really publish it as a separate addon, because it is kind of a hack on the LXD driver, so installing it wouldn’t be just git-cloning a folder.

I’m not sure what you are commenting about. This “hack” is just to support a native FS in LXD, without virtualization. If you meant my “can’t survive migrations”: you CAN create subvols in a running VM, but if you migrate it to another machine, everything is just packed back into a tar, and on extraction it won’t contain any subvolumes; everything ends up in one root subvolume.

I’m not sure if I can really publish it as a separate addon, because it is kind of a hack on the LXD driver, so installing it wouldn’t be just git-cloning a folder.

Since, besides adding a new mapper library, you also edit vanilla files, it is not a simple install. We could, however, update the vanilla files so that they are capable of interacting with your tar mapper when the image is detected as an archive.

In any case, it is way better to have the code for the hack in a separate git repo than as files uploaded to a forum thread. You could create branches for the releases it works with, and being able to get a diff on the files should be enough for getting the hack up and running, with a link to this thread.

Seems to be out of context

as intended?

OK, a git repo is not a problem. I’m just unsure about the format: I can place only the edited files there, or the whole LXD driver. Unfortunately, all my edits are against 5.10.3.

It sounds a bit flaky to me as it is: demo code that would need a few rounds of “let’s make sure it doesn’t eat anyone’s data” before it becomes nice and tame.
Anything on the ONE side that would ease the setup of the tar wrapper would be a good thing.

Is it important? I’d say yes: having benchmarked the current LXC setup in depth, I agree there is a strong need for a solution without nbd. It’s outright maddening that my (well-tuned) KVM VMs, with KVM’s (shitty) overhead, outperform LXC containers. LXC should by all means be the one that simply gives bare-metal performance, and the current implementation pretty much ruins that.

So the questions remain:

  • did anything happen?
  • will it get good enough support from ONE so that it could become sustainable, or will it be another dead addon in 2 years?