I’m working on a bachelor thesis, and part of it is about building Windows images for OpenNebula using HashiCorp Packer. I have forked the repository and made some local changes. Before I commit them, I want to ask the OpenNebula developers a few questions:
What should the policy be regarding Windows ISO installer images? They can’t be part of the project, and Microsoft does not provide an easy, official way to download them, yet they need to be referenced in the Packer build files for the builds to work. Should I use a subfolder named ISO, where the user of this build system provides the ISO images? Alternatively, there is a hacky, unofficial, and probably maintenance-costly way of automating the downloads by generating temporary download links from Microsoft, for example GitHub - ElliotKillick/Mido: The Secure Microsoft Windows Downloader, which is based on GitHub - pbatard/Fido: A PowerShell script to download Windows or UEFI Shell ISOs.
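To illustrate the ISO-subfolder option, here is a minimal pre-build check I have in mind; the folder name and ISO filename are just examples, not part of the existing repo:

```sh
# Minimal sketch of the "user-provided ISO" policy: refuse to build unless the
# installer image has been placed in a local ISO/ subfolder (folder and filename
# are examples only).
ISO_PATH="ISO/Win11_24H2_English_x64.iso"

if [ ! -f "$ISO_PATH" ]; then
    echo "Missing Windows installer image: $ISO_PATH" >&2
    echo "Download it from Microsoft and place it there before building." >&2
    exit 1
fi
```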
What should the policy be for naming the build (Makefile) options? I think the following could work (a rough usage sketch follows the list), but maybe you know better:
Edition names
windows10Home
windows10HomeN
windows10Pro
windows10ProN
windows10ProEdu
windows10ProEduN
windows10Edu
windows10EduN
windows10ProWorkstations
windows10ProWorkstationsN
windows10Ent
windows10EntLTSC2015
windows10EntLTSC2016
windows10EntLTSC2019
windows10EntLTSC2021
windows11Home
windows11HomeN
windows11Pro
windows11ProN
windows11ProEdu
windows11ProEduN
windows11Edu
windows11EduN
windows11ProWorkstations
windows11ProWorkstationsN
windows11Ent
windows2016Essentials
windows2016Standard
windows2016Datacenter
windows2016StandardCore
windows2016DatacenterCore
windows2019Essentials
windows2019Standard
windows2019Datacenter
windows2019StandardCore
windows2019DatacenterCore
windows2022Standard
windows2022Datacenter
windows2022StandardCore
windows2022DatacenterCore
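To make the intent concrete, here is how I imagine one of these names being used as a build target; the expansion into a Packer invocation is only an illustration, not the actual one-apps wiring:

```sh
# Illustrative usage of the proposed edition names as build targets.
make windows11Pro

# ...which, in this sketch, would boil down to something like (variable names are made up):
packer build -var "edition=windows11Pro" -var "iso=ISO/Win11_24H2_English_x64.iso" packer/windows
```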
Should we mention somewhere that users should read the Microsoft EULA before using this build tool, because the build process skips the license terms and conditions page of the Windows installer and the OOBE (out-of-the-box experience)?
In the end, I want to mention that my build approach relies only on Windows Unattend.xml and Autounattend.xml files; I have set the Packer communicator to none.
Build process workflow:
Import VirtIO storage drivers
Partition the drive
Bypass Windows 11 requirements checks
Install Windows (apply the Windows image to the newly partitioned drive and make it bootable)
Install the rest of the VirtIO drivers, the guest agent, and the SPICE agent (in the Specialize configuration pass)
Boot Windows into audit mode (not OOBE) and install Windows updates via the PowerShell module PSWindowsUpdate or by some other means (I’m currently evaluating the Ansible Windows Update module)
Put the Unattend file in C:\ so that OOBE will be skipped and the Unattend files will be cleaned up on the next boot
Sysprep (generalize) Windows and shut down
When the user starts the new Windows image for the first time, OOBE will be skipped and the local Administrator account will be disabled. This leaves room for the Windows contextualization package to create the first user from context variables. This also needs to be discussed, because Windows Server editions have a default Administrator user, and there is also the possibility that the user of the images chooses Administrator as their user. This needs to be reflected in the contextualization package, which I plan to contribute as part of my thesis.
We also had an issue with this bit when building our testing images. Having the ISO available locally on the builder machine, or on a private web server, and referencing it in the Packer script is probably less of a headache.
Your naming looks perfect
In the one-apps wiki, mentioning the Windows specifics such as the ISO download and the reason why the image is not available on the marketplace.
These packages are built by Red Hat directly from public sources and are referenced by the virtio-win-pkg-scripts repository. The download speeds are slow, but they provide a permalink to the latest stable or latest release version. The Fedora documentation also mentions that they can’t ship this with their distribution, but the reasons have nothing to do with licensing. So this choice depends on the policies of the OpenNebula project. These builds can be easily downloaded using curl or wget.
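For reference, this is roughly how the download looks; the permalinks are the ones documented in the virtio-win-pkg-scripts README at the time of writing, so they should be double-checked:

```sh
# Fetch the VirtIO driver ISO via the permalinks documented in the
# virtio-win-pkg-scripts README (verify the URLs before relying on them).
wget -O virtio-win.iso \
  https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso

# ...or the latest (possibly pre-release) build:
# wget -O virtio-win.iso \
#   https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
```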
These images are probably the same as the ones from the Fedora project. The major advantage is that OpenNebula controls this marketplace, and the download speeds are fast. As far as I know, the disadvantage is that there is no easy way to get links for specific versions or for the latest stable/latest release builds; only the current latest stable build is accessible. Also, the download happens through a CDN, but this should not be an issue, as wget can handle that.
Another issue I found with this approach concerns where the download happens. One option is a separate builder, the same way the cloud-init ISOs are built for the existing distributions. The problem is that this runs asynchronously, and a long download over a slow internet link can cause the build to fail if the VirtIO drivers aren’t ready in time for QEMU to use them in the attached CD-ROM. The second option is a shell-local provisioner in the QEMU builder, but from my testing it behaves very similarly: the commands are executed asynchronously alongside the QEMU builder, and we hit the same race condition.
A possible solution could be modifying the build.sh file to download the VirtIO ISO before the build process starts. Modifying the packer build command arguments, such as adding -parallel-builds=1, is also possible (a rough sketch follows).
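A minimal sketch of what I mean, assuming build.sh simply wraps the packer build call (file layout and paths are examples only):

```sh
# Sketch of a build.sh-style change: fetch the VirtIO ISO up front so it is already
# in place before Packer starts, then serialize the builds.
set -e

VIRTIO_URL="https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso"
ISO_DIR="iso"   # example location only

mkdir -p "$ISO_DIR"
[ -f "$ISO_DIR/virtio-win.iso" ] || wget -O "$ISO_DIR/virtio-win.iso" "$VIRTIO_URL"

# -parallel-builds=1 is a real packer flag; the template path is illustrative.
packer build -parallel-builds=1 packer/windows
```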
Which handles the upload of the Packer image to OpenNebula in the standard Packer manner, like for the other targets.
(I suppose the two pieces of work are complementary, and the largest gain would come from combining them.)
Before I started this journey, I thought that the building process would happen directly in OpenNebula. Only after looking at the source did I see that it uses the QEMU builder.
I will look into your idea, since one of my goals is to integrate the build process into our OpenNebula instance, and it will probably be more efficient to build directly in OpenNebula than to create an image using QEMU and upload it to OpenNebula afterwards.
On the other hand, if a person has enough privileges over the OpenNebula hosts, it’s easy to run this QEMU builder there and then link the finished image into the OpenNebula database. This approach only works if the OpenNebula hosts use KVM and not something else like vSphere.
The third option is to use a build container that runs the QEMU builder inside and has access to /dev/kvm. Currently, I am using this approach on my local machine and on one powerful server to build and test the images. After the build is finished, I start an HTTP server in my container/host serving the exported image and import it into OpenNebula using the oneimage create command. This is not necessary when the container has a folder from the OpenNebula datastore mounted; in that case, the import can be done using a local path.
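For completeness, a rough sketch of that container-based flow as I run it; the container image, build script, paths, and datastore name are placeholders, while --device /dev/kvm, the ad-hoc HTTP server, and oneimage create reflect what I actually use:

```sh
# Rough sketch of the container-based flow (image names, paths and datastore are placeholders).

# 1. Run the build inside a container that can reach /dev/kvm.
docker run --rm -it --device /dev/kvm \
  -v "$PWD:/work" -w /work \
  my-packer-build-image ./build.sh windows11Pro

# 2. Serve the exported image over HTTP from the build host...
python3 -m http.server 8000 --directory export &

# 3. ...and register it in OpenNebula (run on the frontend).
oneimage create --name windows11Pro --datastore default \
  --path "http://builder.example.org:8000/windows11Pro.qcow2"

# If the datastore folder is mounted into the container instead,
# --path can simply point at the local file.
```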
Finally, I want to apologize for my late reply. I thought I had already replied.
I have revised my strategy since Windows 11 24H2 came out. I had trouble executing MSI installers in the specialize phase of Windows installation, so I have moved the installation of drivers and contextualization package into the audit phase of the installation. Next, I disabled the OpenNebula contextualization service right after the installation so it would not run and produce unnecessary log files. The service is enabled again in the OOBE phase when the finished image is started up for the first time. I have also changed the naming scheme for some Windows editions a bit and added more. I still need to do more testing for some editions.
After a lot of delays, you can finally check out my repo with the changes.
You must provide your own Windows ISO files and set their paths in the variables.auto.pkrvars.hcl file.
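Something along these lines; the variable name below is made up for illustration, the real ones are defined in the repo:

```sh
# Illustrative only: the real variable names are defined in the repo's Packer templates.
cat > variables.auto.pkrvars.hcl <<'EOF'
# Path to a locally provided Windows installer ISO (example variable name).
windows_iso_path = "/data/iso/Win11_24H2_English_x64.iso"
EOF
```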
Feel free to reach out via DM if you need some harder-to-get ISO files.
I am currently working on documentation for Windows builds.
I want to ask for help regarding the structure of my commits and the files in the repo so that I can create a pull request, for example, whether it will be merged with a merge commit, whether the commits should be squashed, or whether a rebase will happen.
Also, I can create an issue for this if it is needed. But there are already some issues for some Windows Server appliances.
This is wonderful news. Please open a PR and we can discuss any needed changes over there. Documentation-wise, clone the one-apps wiki and add the necessary changes in your fork; we can pull from there later on. If possible, link it in the PR as well.
Also, I can create an issue for this if it is needed. But there are already some issues for some Windows Server appliances.
No need for an issue. The reason these issues existed is the static builds we’ve had so far. If your changes allow building said Windows apps, then we will close them as soon as they are regularly built and tested on our internal CI.
Thanks a lot for the dedication. This is really wonderful.