Hi, can anyone help with the error I’m seeing from Terraform after the VMs are created and running? It’s as though Terraform creates the VMs but has no reference to tell it when they are actually running, which is what should let the script continue to the next step, where Ansible configures the VMs.
I run the scripts from the command line, connecting remotely to OpenNebula. Once I get the VM creation loop working correctly, the next step is to investigate the automation features of OpenNebula.
Thanks in advance.
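For context, the "wait until the VMs are reachable, then run Ansible" step I'm aiming for is sketched below. It is only a sketch: the SSH user, key, inventory and playbook names are placeholders, and it assumes the VM's address comes back through the resource's ip attribute (which the plan output further down suggests it does).
resource "null_resource" "wait_and_configure" {
  count = length(opennebula_virtual_machine.tfdemo)

  # Block until SSH on the VM answers, i.e. the VM is really running.
  provisioner "remote-exec" {
    inline = ["echo vm is up"]

    connection {
      type        = "ssh"
      host        = opennebula_virtual_machine.tfdemo[count.index].ip
      user        = "root"                                 # placeholder user
      private_key = file(pathexpand("~/.ssh/id_rsa"))      # placeholder key
    }
  }

  # Only then hand over to Ansible.
  provisioner "local-exec" {
    command = "ansible-playbook -i inventory.ini configure-vms.yml"   # placeholder inventory/playbook
  }
}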
Terraform apply output snippet:
opennebula_virtual_machine.tfdemo[2]: Creating...
opennebula_virtual_machine.tfdemo[0]: Creating...
opennebula_virtual_machine.tfdemo[1]: Creating...
opennebula_virtual_machine.tfdemo[2]: Still creating... [10s elapsed]
opennebula_virtual_machine.tfdemo[1]: Still creating... [10s elapsed]
opennebula_virtual_machine.tfdemo[0]: Still creating... [10s elapsed]
opennebula_virtual_machine.tfdemo[2]: Still creating... [20s elapsed]
opennebula_virtual_machine.tfdemo[1]: Still creating... [20s elapsed]
opennebula_virtual_machine.tfdemo[2]: Still creating... [30s elapsed]
opennebula_virtual_machine.tfdemo[1]: Still creating... [30s elapsed]
opennebula_virtual_machine.tfdemo[1]: Still creating... [40s elapsed]
╷
│ Error: resource not found
│
│ with opennebula_virtual_machine.tfdemo[2],
│ on terraform.tf line 217, in resource "opennebula_virtual_machine" "tfdemo":
│ 217: resource "opennebula_virtual_machine" "tfdemo" {
│
╵
╷
│ Error: resource not found
│
│ with opennebula_virtual_machine.tfdemo[1],
│ on terraform.tf line 217, in resource "opennebula_virtual_machine" "tfdemo":
│ 217: resource "opennebula_virtual_machine" "tfdemo" {
│
╵
╷
│ Error: resource not found
│
│ with opennebula_virtual_machine.tfdemo[0],
│ on terraform.tf line 217, in resource "opennebula_virtual_machine" "tfdemo":
│ 217: resource "opennebula_virtual_machine" "tfdemo" {
│
╵
Is there a bug in the OpenNebula Terraform provider?
I’m still seeing the following error after a valid, running VM has been created.
I’ve tried many permutations of defining resource "opennebula_virtual_machine" "tfdemo" and am still not able to stop the following error (a stripped-down sketch of the resource is included after the error below).
╷
│ Error: resource not found
│
│ with opennebula_virtual_machine.tfdemo[0],
│ on terraform.tf line 208, in resource "opennebula_virtual_machine" "tfdemo":
│ 208: resource "opennebula_virtual_machine" "tfdemo" {
│
╵
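For reference, the resource I keep reshuffling is essentially the one shown in the plan output below. Stripped down it looks something like this; count, the image/network references and the per-index name are my reconstruction, since the plan only shows resolved values for index 0:
resource "opennebula_virtual_machine" "tfdemo" {
  count       = 3
  name        = "dev-ks-one-node-1"                # interpolated per index in the real config
  cpu         = 4
  vcpu        = 8
  memory      = 15360
  permissions = "600"
  timeout     = 15

  os {
    arch = "x86_64"
    boot = "disk0"
  }

  disk {
    image_id = opennebula_image.goldimage.id       # plan only shows "known after apply"
    size     = 200000
    target   = "vda"
    driver   = "qcow2"
  }

  nic {
    model      = "virtio"
    network_id = opennebula_virtual_network.vnet.id
  }

  graphics {
    type   = "vnc"
    listen = "0.0.0.0"
    keymap = "en-gb"
  }
}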
Full output from terraform init, plan, and apply:
15:41 $ ./terraform-apply-dev.sh
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/opennebula/opennebula v0.3.0
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of opennebula/opennebula from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Using previously-installed opennebula/opennebula v0.3.0
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed hashicorp/local v2.1.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# opennebula_group.group will be created
+ resource "opennebula_group" "group" {
+ admins = (known after apply)
+ delete_on_destruction = true
+ id = (known after apply)
+ name = "dev-ks-one-node-grp"
+ template = <<-EOT
SUNSTONE = [
DEFAULT_VIEW = "cloud",
GROUP_ADMIN_DEFAULT_VIEW = "groupadmin",
GROUP_ADMIN_VIEWS = "cloud,groupadmin",
VIEWS = "cloud"
]
EOT
+ users = (known after apply)
+ quotas {
+ datastore_quotas {
+ id = 1
+ images = 3
+ size = 10000
}
+ image_quotas {
+ id = 8
+ running_vms = 3
}
+ network_quotas {
+ id = 10
+ leases = 6
}
+ vm_quotas {
+ cpu = 4
+ memory = 15360
+ running_cpu = 4
+ running_memory = 15360
}
}
}
# opennebula_image.goldimage will be created
+ resource "opennebula_image" "goldimage" {
+ datastore_id = 1
+ description = "Terraform image"
+ dev_prefix = "vd"
+ driver = "qcow2"
+ format = (known after apply)
+ gid = (known after apply)
+ gname = (known after apply)
+ group = "dev-ks-one-node-grp"
+ id = (known after apply)
+ lock = "MANAGE"
+ name = "Ubuntu 20.04 KVM"
+ path = "https://marketplace.opennebula.io/appliance/695f1a36-b970-4ccf-ace3-0863dcc86d2a/download/0"
+ permissions = (known after apply)
+ persistent = false
+ size = (known after apply)
+ target = (known after apply)
+ timeout = 10
+ type = (known after apply)
+ uid = (known after apply)
+ uname = (known after apply)
}
# opennebula_security_group.baseruleset will be created
+ resource "opennebula_security_group" "baseruleset" {
+ commit = true
+ description = "terraform security group"
+ gid = (known after apply)
+ gname = (known after apply)
+ id = (known after apply)
+ name = "dev-ks-one-node-sec"
+ permissions = (known after apply)
+ uid = (known after apply)
+ uname = (known after apply)
+ rule {
+ icmp_type = (known after apply)
+ ip = (known after apply)
+ network_id = (known after apply)
+ protocol = "ALL"
+ range = (known after apply)
+ rule_type = "OUTBOUND"
+ size = (known after apply)
}
+ rule {
+ icmp_type = (known after apply)
+ ip = (known after apply)
+ network_id = (known after apply)
+ protocol = "TCP"
+ range = (known after apply)
+ rule_type = "INBOUND"
+ size = (known after apply)
}
+ rule {
+ icmp_type = (known after apply)
+ ip = (known after apply)
+ network_id = (known after apply)
+ protocol = "ICMP"
+ range = (known after apply)
+ rule_type = "INBOUND"
+ size = (known after apply)
}
}
# opennebula_virtual_machine.tfdemo[0] will be created
+ resource "opennebula_virtual_machine" "tfdemo" {
+ context = {
+ "DNS_HOSTNAME" = "YES"
+ "HOSTNAME" = "$NAME"
+ "INIT_SCRIPTS" = "allow-root-access.sh"
+ "NETWORK" = "YES"
+ "PASSWORD" = "borg1a"
+ "SET_HOSTNAME" = "dev-ks-one-node-1"
+ "SSH_PUBLIC_KEY" = "$USER[SSH_PUBLIC_KEY]"
+ "START_SCRIPT" = "sed -i 's/.*PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;sed -i 's/.*PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;service sshd restart"
+ "TERRAFORM" = "is awesome"
+ "USER_DATA" = <<-EOT
#cloud-config
# Add groups to the system
# The following example adds the ubuntu group with members 'root' and 'sys'
# and the empty group cloud-users.
groups:
- linux: [root,sys]
- cloud-users
## Add users to the system. Users are added after groups are added.
users:
- name: linux
groups: sudo
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD:ALL']
ssh-authorized-keys:
- ssh-rsa xxxxxxxxxxxxxxxxxx xxx@xxx-vm
runcmd:
- echo cloud-comfig complete;ls -la
EOT
}
+ cpu = 4
+ gid = (known after apply)
+ gname = (known after apply)
+ group = "opennebula_group.group.name"
+ id = (known after apply)
+ instance = (known after apply)
+ ip = (known after apply)
+ lcmstate = (known after apply)
+ memory = 15360
+ name = "dev-ks-one-node-1"
+ pending = false
+ permissions = "600"
+ state = (known after apply)
+ template_id = -1
+ timeout = 15
+ uid = (known after apply)
+ uname = (known after apply)
+ vcpu = 8
+ disk {
+ computed_driver = (known after apply)
+ computed_size = (known after apply)
+ computed_target = (known after apply)
+ disk_id = (known after apply)
+ driver = "qcow2"
+ image_id = (known after apply)
+ size = 200000
+ target = "vda"
}
+ graphics {
+ keymap = "en-gb"
+ listen = "0.0.0.0"
+ port = (known after apply)
+ type = "vnc"
}
+ nic {
+ computed_ip = (known after apply)
+ computed_mac = (known after apply)
+ computed_model = (known after apply)
+ computed_physical_device = (known after apply)
+ computed_security_groups = (known after apply)
+ model = "virtio"
+ network = (known after apply)
+ network_id = (known after apply)
+ nic_id = (known after apply)
+ security_groups = (known after apply)
}
+ os {
+ arch = "x86_64"
+ boot = "disk0"
}
+ vmgroup {
+ role = (known after apply)
+ vmgroup_id = (known after apply)
}
}
# opennebula_virtual_network.vnet will be created
+ resource "opennebula_virtual_network" "vnet" {
+ automatic_vlan_id = (known after apply)
+ bridge = "br0"
+ clusters = [
+ 0,
]
+ description = "vm"
+ dns = "192.168.1.254"
+ gateway = "192.168.1.254"
+ gid = (known after apply)
+ gname = (known after apply)
+ group = "dev-ks-one-node-grp"
+ guest_mtu = 1500
+ hold_size = (known after apply)
+ id = (known after apply)
+ ip_hold = (known after apply)
+ mtu = 1500
+ name = "dev-ks-one-node-vnet"
+ network_mask = "255.255.255.0"
+ permissions = "660"
+ physical_device = (known after apply)
+ reservation_size = (known after apply)
+ reservation_vnet = (known after apply)
+ security_groups = (known after apply)
+ type = "fw"
+ uid = (known after apply)
+ uname = (known after apply)
+ vlan_id = (known after apply)
+ ar {
+ ar_type = "IP4"
+ global_prefix = (known after apply)
+ ip4 = "192.168.1.150"
+ ip6 = (known after apply)
+ mac = "02:09:0a:00:00:99"
+ prefix_length = (known after apply)
+ size = 1
+ ula_prefix = (known after apply)
}
}
Plan: 5 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────
Saved the plan to: plan.out
To perform exactly these actions, run the following command to apply:
terraform apply "plan.out"
make_bucket failed: s3://ubuntu-terraform-state An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: ./terraform.tfstate to s3://ubuntu-terraform-state/terraform.tfstate
opennebula_security_group.baseruleset: Creating...
opennebula_group.group: Creating...
opennebula_security_group.baseruleset: Creation complete after 1s [id=234]
opennebula_group.group: Creation complete after 1s [id=238]
opennebula_image.goldimage: Creating...
opennebula_virtual_network.vnet: Creating...
opennebula_virtual_network.vnet: Creation complete after 0s [id=133]
opennebula_image.goldimage: Still creating... [10s elapsed]
opennebula_image.goldimage: Still creating... [20s elapsed]
opennebula_image.goldimage: Still creating... [30s elapsed]
opennebula_image.goldimage: Still creating... [40s elapsed]
opennebula_image.goldimage: Still creating... [50s elapsed]
opennebula_image.goldimage: Still creating... [1m0s elapsed]
opennebula_image.goldimage: Creation complete after 1m0s [id=191]
opennebula_virtual_machine.tfdemo[0]: Creating...
opennebula_virtual_machine.tfdemo[0]: Still creating... [10s elapsed]
╷
│ Error: resource not found
│
│ with opennebula_virtual_machine.tfdemo[0],
│ on terraform.tf line 208, in resource "opennebula_virtual_machine" "tfdemo":
│ 208: resource "opennebula_virtual_machine" "tfdemo" {
│
╵
Since upgrading to the latest OpenNebula Terraform provider everything works great; it’s many times better than what was there before. For anyone hitting the same error, this is the provider constraint I’m now using:
terraform {
  required_providers {
    opennebula = {
      source  = "opennebula/opennebula"
      version = "~> 1.0.2"
    }