An error occurred while creating a VM in version 7.0.0

With version 7.0.0, the SAN storage was working fine after deployment. However, after I restarted the front-end node, I started getting an error when creating a virtual machine. I've confirmed that the SAN storage can still be connected properly, but it keeps showing an error.

I don’t have any ideas on how to handle this right now and I hope to get some help.

The VMs that were created previously can still be used as normal, but an error occurs on the front-end when creating new ones:

Wed Jul 16 10:31:10 2025: Error executing image transfer script: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 94: [: too many a... see more details in VM log
root@manager1:/var/log/one# pvs
  PV                          VG         Fmt  Attr PSize     PFree  
  /dev/mapper/one_image_data  vg-image   lvm2 a--  <1024.00g      0 
  /dev/mapper/one_system_data vg-one-103 lvm2 a--  <1024.00g 274.37g
root@manager1:/var/log/one# vgs
  VG         #PV #LV #SN Attr   VSize     VFree  
  vg-image     1   1   0 wz--n- <1024.00g      0 
  vg-one-103   1  30   0 wz--n- <1024.00g 274.37g
root@manager1:/var/log/one# lvs
  LV             VG         Attr       LSize     Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-image       vg-image   -wi-ao---- <1024.00g                                                              
  lv-one-15-0    vg-one-103 Vwi-a-tz--    <2.20g lv-one-15-pool        92.44                                  
  lv-one-15-2    vg-one-103 -wi-a-----    50.00g                                                              
  lv-one-15-pool vg-one-103 twi-aotz--    <2.20g                       92.44  25.68                           
  lv-one-17-0    vg-one-103 Vwi-a-tz--    <2.20g lv-one-17-pool        90.71                                  
  lv-one-17-pool vg-one-103 twi-aotz--    <2.20g                       90.71  25.59                      
root@manager1:/var/log/one# onedatastore list
  ID NAME                                                                               SIZE AVA CLUSTERS IMAGES TYPE DS      TM      STAT
 103 lvm_system                                                                        1024G 27% 0             0 sys  -       fs_lvm_ on  
 102 lvm_image                                                                        1006.9 87% 0            17 img  fs      fs_lvm_ on

Detailed error

root@manager1:/var/log/one# cat 76.log 
Wed Jul 16 10:30:55 2025 [Z0][VM][I]: New state is CLONING
Wed Jul 16 10:30:58 2025 [Z0][VM][I]: New state is PENDING
Wed Jul 16 10:31:02 2025 [Z0][VM][I]: New state is ACTIVE
Wed Jul 16 10:31:03 2025 [Z0][VM][I]: New LCM state is PROLOG
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: Command execution failed (exit code: 2): /var/lib/one/remotes/tm/fs_lvm_ssh/ln manager1:/var/lib/one//datastores/102/41c75168f1b131ea7dc5463019dd8479 10.9.200.3:/var/lib/one//datastores/103/76/disk.0 76 102
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 94: [: too many arguments
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 106: [: cannot access '/var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479' (No such file or directory): integer expression expected
Wed Jul 16 10:31:10 2025 [Z0][TrM][E]: ln: Command "    set -e -o pipefail
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: mkdir -p /var/lib/one/datastores/103/76
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: 
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: hostname -f >"/var/lib/one/datastores/103/76/.host" || :
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: 
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: # zero trailing space
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: if [ "yes" = "yes" ] && [ "yes" != 'yes' ]; then
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: LVSIZE=$(sudo -n lvs --nosuffix --noheadings --units B -o lv_size "/dev/vg-one-103/lv-one-76-0" | tr -d '[:blank:]')
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: dd if=/dev/zero of="/dev/vg-one-103/lv-one-76-0" bs=64k             oflag=seek_bytes iflag=count_bytes             seek="42948624384" count="$(( LVSIZE - 42948624384 ))"
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: fi
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: 
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: rm -f "/var/lib/one/datastores/103/76/disk.0"
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: ln -s "/dev/vg-one-103/lv-one-76-0" "/var/lib/one/datastores/103/76/disk.0"
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: 
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: ssh -n manager1 "tar -cSO /var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479 2> /dev/null" | tar -xO 2> /dev/null > "/dev/vg-one-103/lv-one-76-0"" failed:
Wed Jul 16 10:31:10 2025 [Z0][TrM][I]: Error cloning /var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479 to lv-one-76-0
Wed Jul 16 10:31:10 2025 [Z0][TrM][E]: Error executing image transfer script: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 94: [: too many arguments/var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 106: [: cannot access '/var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479' (No such file or directory): integer expression expectedmkdir -p /var/lib/one/datastores/103/76hostname -f >"/var/lib/one/datastores/103/76/.host" || :# zero trailing spaceif [ "yes" = "yes" ] && [ "yes" != 'yes' ]; thenLVSIZE=$(sudo -n lvs --nosuffix --noheadings --units B -o lv_size "/dev/vg-one-103/lv-one-76-0" | tr -d '[:blank:]')dd if=/dev/zero of="/dev/vg-one-103/lv-one-76-0" bs=64k             oflag=seek_bytes iflag=count_bytes             seek="42948624384" count="$(( LVSIZE - 42948624384 ))"firm -f "/var/lib/one/datastores/103/76/disk.0"ln -s "/dev/vg-one-103/lv-one-76-0" "/var/lib/one/datastores/103/76/disk.0"ssh -n manager1 "tar -cSO /var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479 2> /dev/null" | tar -xO 2> /dev/null > "/dev/vg-one-103/lv-one-76-0"" failed:Error cloning /var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479 to lv-one-76-0
Wed Jul 16 10:31:10 2025 [Z0][VM][I]: New LCM state is PROLOG_FAILURE
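From what I can tell, the two bash "[" errors appear because the image path does not exist on the host where the script runs, so the size check inside the ln script gets an error message instead of a number. A minimal reproduction outside OpenNebula (only a sketch; I'm assuming the script feeds the output of a size command into a numeric test, and SRC/SIZE are just illustrative names):

SRC=/var/lib/one/datastores/102/41c75168f1b131ea7dc5463019dd8479   # path from the log above
SIZE=$(du -b "$SRC" 2>&1 | cut -f1)    # with a missing file this captures the error text, not a number
[ $SIZE -gt 0 ]                        # unquoted multi-word value -> "[: too many arguments"
[ "$SIZE" -gt 0 ]                      # quoted value -> "[: ...: integer expression expected"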

Hello,

Could you provide the output of the following commands for both datastores:

onedatastore show 102
onedatastore show 103

Could you confirm whether I understood correctly: both datastores are available on the Frontend node, and the SYSTEM DS is only available on the Compute nodes?
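If it helps, a quick check from the front-end like the one below would show whether each datastore path is really a separate mount on every host (only a sketch; I am assuming manager1 is the front-end and node1/node2 are the compute nodes, so adjust the hostnames):

for h in manager1 node1 node2; do
  echo "== $h"
  ssh "$h" "findmnt /var/lib/one/datastores/102; findmnt /var/lib/one/datastores/103"
done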

Thank you for your support.

root@manager1:~# onedatastore show 102
DATASTORE 102 INFORMATION                                                       
ID             : 102                 
NAME           : lvm_image           
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : IMAGE               
DS_MAD         : fs                  
TM_MAD         : fs_lvm_ssh          
BASE PATH      : /var/lib/one//datastores/102
DISK_TYPE      : BLOCK               
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 547.6G              
FREE:          : 515.5G              
USED:          : 4.2G                
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="YES"
BRIDGE_LIST="node1 node2"
CLONE_TARGET="SYSTEM"
DISK_TYPE="BLOCK"
DRIVER="raw"
DS_MAD="fs"
LN_TARGET="SYSTEM"
LVM_THIN_ENABLE="yes"
PERSISTENT_SNAPSHOTS="NO"
QCOW2_STANDALONE="YES"
SAFE_DIRS="/var/tmp /tmp /var/lib/one"
TM_MAD="fs_lvm_ssh"
TYPE="IMAGE_DS"

IMAGES         
2              
6              
8              
14             
16             
26             
30             
39             
41             
...
root@manager1:~# onedatastore show 103
DATASTORE 103 INFORMATION                                                       
ID             : 103                 
NAME           : lvm_system          
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : SYSTEM              
DS_MAD         : -                   
TM_MAD         : fs_lvm_ssh          
BASE PATH      : /var/lib/one//datastores/103
DISK_TYPE      : BLOCK               
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 1024G               
FREE:          : 334.4G              
USED:          : 689.6G              
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="YES"
DISK_TYPE="BLOCK"
DS_MIGRATE="YES"
PERSISTENT_SNAPSHOTS="NO"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="fs_lvm_ssh"
TYPE="SYSTEM_DS"

IMAGES        

The information I see on each node is consistent.
manager1

root@manager1:~# pvs
  PV                             VG           Fmt  Attr PSize     PFree   
  /dev/mapper/new_one_image_data vg-image-new lvm2 a--  <1024.00g       0 
  /dev/mapper/one_image_data     vg-image     lvm2 a--  <1024.00g       0 
  /dev/mapper/one_system_data    vg-one-103   lvm2 a--  <1024.00g <334.43g
root@manager1:~# vgs
  VG           #PV #LV #SN Attr   VSize     VFree   
  vg-image       1   1   0 wz--n- <1024.00g       0 
  vg-image-new   1   1   0 wz--n- <1024.00g       0 
  vg-one-103     1  26   0 wz--n- <1024.00g <334.43g
root@manager1:~# lvs
  LV             VG           Attr       LSize     Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-image       vg-image     -wi-a----- <1024.00g                                                              
  lv-image-new   vg-image-new -wi-ao---- <1024.00g                                                              
  lv-one-15-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-15-pool        92.44                                  
  lv-one-15-2    vg-one-103   -wi-a-----    50.00g                                                              
  lv-one-15-pool vg-one-103   twi-aotz--    <2.20g                       92.44  25.68                           
  lv-one-17-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-17-pool        90.71                                  
  lv-one-17-pool vg-one-103   twi-aotz--    <2.20g                       90.71  25.59                           
  lv-one-21-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-21-pool        91.35                                  
  lv-one-21-pool vg-one-103   twi-aotz--    <2.20g                       91.35  25.88                           
  lv-one-23-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-23-pool        92.47                                  
  lv-one-23-pool vg-one-103   twi-aotz--    <2.20g                       92.47  25.49                           
  lv-one-28-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-28-pool        34.04                                  
  lv-one-28-2    vg-one-103   Vwi-a-tz--    20.00g lv-one-28-pool        0.01                                   
  lv-one-28-pool vg-one-103   twi-aotz--    40.00g                       17.02  15.59                           
  lv-one-32-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-32-pool        14.03                                  
  lv-one-32-2    vg-one-103   -wi-a-----   100.00g                                                              
  lv-one-32-3    vg-one-103   Vwi-a-tz--    60.00g lv-one-32-pool        0.00                                   
  lv-one-32-pool vg-one-103   twi-aotz--   240.00g                       1.17   7.44                            
  lv-one-40-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-40-pool        14.04                                  
  lv-one-40-pool vg-one-103   twi-aotz--    20.00g                       14.04  15.06                           
  lv-one-41-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-41-pool        14.13                                  
  lv-one-41-pool vg-one-103   twi-cotzM-    20.00g                       14.13  15.14                           
  lv-one-53-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-53-pool        14.12                                  
  lv-one-53-pool vg-one-103   twi-cotzM-    80.00g                       3.53   11.54                           
  lv-one-55-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-55-pool        19.87                                  
  lv-one-55-pool vg-one-103   twi-aotz--    90.00g                       4.42   11.90                           
  lv-one-57-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-57-pool        11.35                                  
  lv-one-57-pool vg-one-103   twi-aotz--    40.00g                       5.68   12.04           

node1

root@node1:~# pvs
  PV                             VG           Fmt  Attr PSize     PFree   
  /dev/mapper/new_one_image_data vg-image-new lvm2 a--  <1024.00g       0 
  /dev/mapper/one_image_data     vg-image     lvm2 a--  <1024.00g       0 
  /dev/mapper/one_system_data    vg-one-103   lvm2 a--  <1024.00g <334.43g
root@node1:~# vgs
  VG           #PV #LV #SN Attr   VSize     VFree   
  vg-image       1   1   0 wz--n- <1024.00g       0 
  vg-image-new   1   1   0 wz--n- <1024.00g       0 
  vg-one-103     1  26   0 wz--n- <1024.00g <334.43g
root@node1:~# lvs
  LV             VG           Attr       LSize     Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-image       vg-image     -wi-a----- <1024.00g                                                              
  lv-image-new   vg-image-new -wi-ao---- <1024.00g                                                              
  lv-one-15-0    vg-one-103   Vwi-aotz--    <2.20g lv-one-15-pool        92.44                                  
  lv-one-15-2    vg-one-103   -wi-ao----    50.00g                                                              
  lv-one-15-pool vg-one-103   twi-aotz--    <2.20g                       92.44  25.68                           
  lv-one-17-0    vg-one-103   Vwi-aotz--    <2.20g lv-one-17-pool        90.78                                  
  lv-one-17-pool vg-one-103   twi-aotz--    <2.20g                       90.78  25.59                           
  lv-one-21-0    vg-one-103   Vwi-aotz--    <2.20g lv-one-21-pool        91.41                                  
  lv-one-21-pool vg-one-103   twi-aotz--    <2.20g                       91.41  25.88                           
  lv-one-23-0    vg-one-103   Vwi-aotz--    <2.20g lv-one-23-pool        92.48                                  
  lv-one-23-pool vg-one-103   twi-aotz--    <2.20g                       92.48  25.49                           
  lv-one-28-0    vg-one-103   Vwi-aotz--    20.00g lv-one-28-pool        34.05                                  
  lv-one-28-2    vg-one-103   Vwi-aotz--    20.00g lv-one-28-pool        0.01                                   
  lv-one-28-pool vg-one-103   twi-aotz--    40.00g                       17.02  15.59                           
  lv-one-32-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-32-pool        14.03                                  
  lv-one-32-2    vg-one-103   -wi-a-----   100.00g                                                              
  lv-one-32-3    vg-one-103   Vwi-a-tz--    60.00g lv-one-32-pool        0.00                                   
  lv-one-32-pool vg-one-103   twi-aotz--   240.00g                       1.17   7.44                            
  lv-one-40-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-40-pool        14.04                                  
  lv-one-40-pool vg-one-103   twi-aotz--    20.00g                       14.04  15.06                           
  lv-one-41-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-41-pool        14.13                                  
  lv-one-41-pool vg-one-103   twi-cotzM-    20.00g                       14.13  15.14                           
  lv-one-53-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-53-pool        14.12                                  
  lv-one-53-pool vg-one-103   twi-cotzM-    80.00g                       3.53   11.54                           
  lv-one-55-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-55-pool        19.87                                  
  lv-one-55-pool vg-one-103   twi-aotz--    90.00g                       4.42   11.90                           
  lv-one-57-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-57-pool        11.35                                  
  lv-one-57-pool vg-one-103   twi-aotz--    40.00g                       5.68   12.04        

node2

root@node2:~# pvs
  PV                             VG           Fmt  Attr PSize     PFree   
  /dev/mapper/new_one_image_data vg-image-new lvm2 a--  <1024.00g       0 
  /dev/mapper/one_image_data     vg-image     lvm2 a--  <1024.00g       0 
  /dev/mapper/one_system_data    vg-one-103   lvm2 a--  <1024.00g <334.43g
root@node2:~# vgs
  VG           #PV #LV #SN Attr   VSize     VFree   
  vg-image       1   1   0 wz--n- <1024.00g       0 
  vg-image-new   1   1   0 wz--n- <1024.00g       0 
  vg-one-103     1  26   0 wz--n- <1024.00g <334.43g
root@node2:~# lvs
  LV             VG           Attr       LSize     Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-image       vg-image     -wi-a----- <1024.00g                                                              
  lv-image-new   vg-image-new -wi-ao---- <1024.00g                                                              
  lv-one-15-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-15-pool        92.44                                  
  lv-one-15-2    vg-one-103   -wi-a-----    50.00g                                                              
  lv-one-15-pool vg-one-103   twi-aotz--    <2.20g                       92.44  25.68                           
  lv-one-17-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-17-pool        90.71                                  
  lv-one-17-pool vg-one-103   twi-aotz--    <2.20g                       90.71  25.59                           
  lv-one-21-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-21-pool        91.35                                  
  lv-one-21-pool vg-one-103   twi-aotz--    <2.20g                       91.35  25.88                           
  lv-one-23-0    vg-one-103   Vwi-a-tz--    <2.20g lv-one-23-pool        92.47                                  
  lv-one-23-pool vg-one-103   twi-aotz--    <2.20g                       92.47  25.49                           
  lv-one-28-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-28-pool        34.04                                  
  lv-one-28-2    vg-one-103   Vwi-a-tz--    20.00g lv-one-28-pool        0.01                                   
  lv-one-28-pool vg-one-103   twi-aotz--    40.00g                       17.02  15.59                           
  lv-one-32-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-32-pool        14.03                                  
  lv-one-32-2    vg-one-103   -wi-a-----   100.00g                                                              
  lv-one-32-3    vg-one-103   Vwi-a-tz--    60.00g lv-one-32-pool        0.00                                   
  lv-one-32-pool vg-one-103   twi-aotz--   240.00g                       1.17   7.44                            
  lv-one-40-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-40-pool        14.04                                  
  lv-one-40-pool vg-one-103   twi-aotz--    20.00g                       14.04  15.06                           
  lv-one-41-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-41-pool        14.13                                  
  lv-one-41-pool vg-one-103   twi-cotzM-    20.00g                       14.13  15.14                           
  lv-one-53-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-53-pool        14.12                                  
  lv-one-53-pool vg-one-103   twi-cotzM-    80.00g                       3.53   11.54                           
  lv-one-55-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-55-pool        19.87                                  
  lv-one-55-pool vg-one-103   twi-aotz--    90.00g                       4.42   11.90                           
  lv-one-57-0    vg-one-103   Vwi-a-tz--    20.00g lv-one-57-pool        11.35                                  
  lv-one-57-pool vg-one-103   twi-aotz--    40.00g                       5.68   12.04 

I still have two questions that I don’t quite understand.

1. For an image source file that I create through the CLI, will it be copied in full to the corresponding path of the IMAGE DS, and will it then be synchronized by LVM to the LUN on the real backing storage?
2. If multiple physical nodes such as manager1, node1 and node2 mount the same LUN, will the data remain consistent? I've found that, with the same LUN mounted everywhere, the UUID-named files for the created images are inconsistent across nodes. Is this normal?

Please ignore the issue with the 104 IMAGE DS. It has the same configuration as 102, except that it was re-created later for testing purposes.

root@manager1:~# ls -alh /var/lib/one/datastores/104
ls: cannot access '/var/lib/one/datastores/104/f795e301c65d5d7550800340f235ca7a': Bad message
ls: cannot access '/var/lib/one/datastores/104/64409128b71e66f71a39fc087f28bf00': No such file or directory
ls: cannot access '/var/lib/one/datastores/104/1.txt': Bad message
total 24K
drwxr-xr-x  3 oneadmin oneadmin 4.0K Jul 16 17:24 .
drwxr-x--- 10 oneadmin oneadmin 4.0K Jul 16 16:31 ..
-?????????  ? ?        ?           ?            ? 1.txt
-?????????  ? ?        ?           ?            ? 64409128b71e66f71a39fc087f28bf00
-?????????  ? ?        ?           ?            ? f795e301c65d5d7550800340f235ca7a
drwx------  2 root     root      16K Jul 16 17:24 lost+found
root@manager1:~# 

root@node1:~# ls -alh /var/lib/one/datastores/104
ls: cannot access '/var/lib/one/datastores/104/f795e301c65d5d7550800340f235ca7a': No such file or directory
ls: cannot access '/var/lib/one/datastores/104/64409128b71e66f71a39fc087f28bf00': Bad message
ls: cannot access '/var/lib/one/datastores/104/1.txt': Bad message
total 24K
drwxr-xr-x 3 oneadmin oneadmin 4.0K Jul 16 17:24 .
drwxr-xr-x 7 oneadmin oneadmin 4.0K Jul 16 16:31 ..
-????????? ? ?        ?           ?            ? 1.txt
-????????? ? ?        ?           ?            ? 64409128b71e66f71a39fc087f28bf00
-????????? ? ?        ?           ?            ? f795e301c65d5d7550800340f235ca7a
drwx------ 2 root     root      16K Jul 16 17:24 lost+found

root@node2:~# ls -alh /var/lib/one/datastores/104
total 1.6G
drwxr-xr-x 3 oneadmin oneadmin 4.0K Jul 17 10:43 .
drwxr-xr-x 5 oneadmin oneadmin 4.0K Jul 16 16:36 ..
-rw-r--r-- 1 root     root        4 Jul 16 17:50 1.txt
-rw-r--r-- 1 oneadmin oneadmin  40G Jul 16 17:46 f795e301c65d5d7550800340f235ca7a
drwx------ 2 root     root      16K Jul 16 17:24 lost+found

I have also carefully read the content related to LVM metadata caching (lvmetad).

Neither the configuration parameter nor the service exists on my system:

root@manager1:~# cat /etc/lvm/lvm.conf |grep use_lvmetad
root@manager1:~# 
root@manager1:~# systemctl stop  lvm2-lvmetad.service
Failed to stop lvm2-lvmetad.service: Unit lvm2-lvmetad.service not loaded.

Please ignore the reported disk size of datastore 102. For later testing I mounted the LUN of 102 on the 104 IMAGE DS, so it now shows around 500G; it was normal when the problem occurred.

Hello,
Could you check on your FE node whether the image file exists there?

Please also provide the output from your Compute nodes:
ls -l /var/lib/one/datastores/102/

1. For an image source file that I create through the CLI, will it be copied in full to the corresponding path of the IMAGE DS, and will it then be synchronized by LVM to the LUN on the real backing storage?

Yes, the image should be copied to the mount point of the configured IMAGE datastore. If you're using a shared LVM setup, then synchronization depends on your storage architecture. LVM itself doesn't replicate data; it assumes the underlying storage (e.g., a SAN or shared block device) is already shared across nodes.
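One way to see where a given image's backing file actually lives is to check its SOURCE attribute and then list that path on each host; for example, with image 2 from your datastore 102 list (just a sketch using the IDs from your output):

oneimage show 2 | grep SOURCE
ls -l /var/lib/one/datastores/102/     # run this on the front-end and on each compute node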

2. If multiple physical nodes such as manager1, node1 and node2 mount the same LUN, will the data remain consistent? I've found that, with the same LUN mounted everywhere, the UUID-named files for the created images are inconsistent across nodes. Is this normal?

It's not recommended to activate and mount LVM volumes on many nodes at the same time, unless you're using a clustered filesystem (like GFS2 or OCFS2) together with CLVM or LVM2 locking mechanisms.

The 102 IMAGE DS has been deleted. I created a new image in the 104 IMAGE DS and then created a VM for testing.

root@manager1:~# oneimage create --datastore 104 --name Ubuntu22-test --path /var/lib/one/images/ubuntu22.qcow2 --description "Ubuntu22 test"
ID: 118

root@manager1:~# onedatastore show 104
DATASTORE 104 INFORMATION                                                       
ID             : 104                 
NAME           : lvm_image_new       
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 0                   
TYPE           : IMAGE               
DS_MAD         : fs                  
TM_MAD         : fs_lvm_ssh          
BASE PATH      : /var/lib/one//datastores/104
DISK_TYPE      : BLOCK               
STATE          : READY               

DATASTORE CAPACITY                                                              
TOTAL:         : 1006.9G             
FREE:          : 954.1G              
USED:          : 1.5G                
LIMIT:         : -                   

PERMISSIONS                                                                     
OWNER          : um-                 
GROUP          : u--                 
OTHER          : ---                 

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="YES"
BRIDGE_LIST="node1 node2"
CLONE_TARGET="SYSTEM"
DISK_TYPE="BLOCK"
DRIVER="raw"
DS_MAD="fs"
LN_TARGET="SYSTEM"
LVM_THIN_ENABLE="yes"
PERSISTENT_SNAPSHOTS="NO"
SAFE_DIRS="/var/tmp /tmp /var/lib/one"
TM_MAD="fs_lvm_ssh"
TYPE="IMAGE_DS"

IMAGES         
109            
113            
114            
115            
117            
118     

root@manager1:~# df -h
Filesystem                                 Size  Used Avail Use% Mounted on
...
/dev/mapper/vg--image-lv--image           1007G   87G  870G  10% /var/lib/one/datastores/102
/dev/mapper/vg--image--new-lv--image--new 1007G  1.6G  955G   1% /var/lib/one/datastores/104

vim /opt/tp/vm-7.conf 
NAME = "ubuntu22-vm8-test"
...
DISK = [
  IMAGE_ID = "118",
  IMAGE_UNAME = "oneadmin",
  CLONE = "YES",
  CLONE_NAME = "$NAME-$VMID"
]
...



root@manager1:~# onetemplate create /opt/tp/vm-7.conf 
ID: 119

Then I ran a VM creation test through the web UI, and the error messages are:
oned.log

Fri Jul 18 08:41:58 2025 [Z0][PLM][I]: Adding new placement plan
Fri Jul 18 08:41:58 2025 [Z0][PLM][I]: Found 1 active plans
Fri Jul 18 08:41:59 2025 [Z0][DiM][D]: Deploying VM 92
Fri Jul 18 08:42:00 2025 [Z0][ReM][D]: Req:2784 UID:0 IP:127.0.0.1 one.vm.info invoked , 92, false
Fri Jul 18 08:42:00 2025 [Z0][ReM][D]: Req:2784 UID:0 one.vm.info result SUCCESS, "<VM><ID>92</ID><UID>..."
Fri Jul 18 08:42:04 2025 [Z0][TrM][E]: ln: Command "    set -e -o pipefail
Fri Jul 18 08:42:04 2025 [Z0][TrM][D]: Message received: TRANSFER FAILURE 92 /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 94: [: too many arguments/var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 106: [: cannot access '/var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa' (No such file or directory): integer expression expectedmkdir -p /var/lib/one/datastores/103/92hostname -f >"/var/lib/one/datastores/103/92/.host" || :# zero trailing spaceif [ "yes" = "yes" ] && [ "yes" != 'yes' ]; thenLVSIZE=$(sudo -n lvs --nosuffix --noheadings --units B -o lv_size "/dev/vg-one-103/lv-one-92-0" | tr -d '[:blank:]')dd if=/dev/zero of="/dev/vg-one-103/lv-one-92-0" bs=64k             oflag=seek_bytes iflag=count_bytes             seek="42948624384" count="$(( LVSIZE - 42948624384 ))"firm -f "/var/lib/one/datastores/103/92/disk.0"ln -s "/dev/vg-one-103/lv-one-92-0" "/var/lib/one/datastores/103/92/disk.0"ssh -n manager1 "tar -cSO /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa 2> /dev/null" | tar -xO 2> /dev/null > "/dev/vg-one-103/lv-one-92-0"" failed:Error cloning /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa to lv-one-92-0

Fri Jul 18 08:42:05 2025 [Z0][RCM][I]: --Mark--
Fri Jul 18 08:42:05 2025 [Z0][DBM][I]: Purging obsolete LogDB records: 0 records purged. Log state: 0,0 - 0,0
Fri Jul 18 08:42:05 2025 [Z0][DBM][I]: Purging obsolete federated LogDB records: 0 records purged. Federated log size: 0.
Fri Jul 18 08:42:07 2025 [Z0][MKP][I]: --Mark--
Fri Jul 18 08:42:08 2025 [Z0][PLM][I]: Starting Plan Manager timer action...
Fri Jul 18 08:42:08 2025 [Z0][PLM][I]: Found 1 active plans

92.log

root@manager1:~# cat /var/log/one/92.log 
Fri Jul 18 08:41:47 2025 [Z0][VM][I]: New state is CLONING
Fri Jul 18 08:41:50 2025 [Z0][VM][I]: New state is PENDING
Fri Jul 18 08:41:59 2025 [Z0][VM][I]: New state is ACTIVE
Fri Jul 18 08:41:59 2025 [Z0][VM][I]: New LCM state is PROLOG
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: Command execution failed (exit code: 2): /var/lib/one/remotes/tm/fs_lvm_ssh/ln manager1:/var/lib/one//datastores/104/7830ecf4b16758b854ea94a6e107a4aa 10.9.200.3:/var/lib/one//datastores/103/92/disk.0 92 104
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 94: [: too many arguments
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 106: [: cannot access '/var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa' (No such file or directory): integer expression expected
Fri Jul 18 08:42:04 2025 [Z0][TrM][E]: ln: Command "    set -e -o pipefail
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: mkdir -p /var/lib/one/datastores/103/92
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: 
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: hostname -f >"/var/lib/one/datastores/103/92/.host" || :
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: 
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: # zero trailing space
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: if [ "yes" = "yes" ] && [ "yes" != 'yes' ]; then
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: LVSIZE=$(sudo -n lvs --nosuffix --noheadings --units B -o lv_size "/dev/vg-one-103/lv-one-92-0" | tr -d '[:blank:]')
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: dd if=/dev/zero of="/dev/vg-one-103/lv-one-92-0" bs=64k             oflag=seek_bytes iflag=count_bytes             seek="42948624384" count="$(( LVSIZE - 42948624384 ))"
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: fi
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: 
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: rm -f "/var/lib/one/datastores/103/92/disk.0"
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: ln -s "/dev/vg-one-103/lv-one-92-0" "/var/lib/one/datastores/103/92/disk.0"
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: 
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: ssh -n manager1 "tar -cSO /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa 2> /dev/null" | tar -xO 2> /dev/null > "/dev/vg-one-103/lv-one-92-0"" failed:
Fri Jul 18 08:42:04 2025 [Z0][TrM][I]: Error cloning /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa to lv-one-92-0
Fri Jul 18 08:42:04 2025 [Z0][TrM][E]: Error executing image transfer script: /var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 94: [: too many arguments/var/lib/one/remotes/tm/fs_lvm_ssh/ln: line 106: [: cannot access '/var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa' (No such file or directory): integer expression expectedmkdir -p /var/lib/one/datastores/103/92hostname -f >"/var/lib/one/datastores/103/92/.host" || :# zero trailing spaceif [ "yes" = "yes" ] && [ "yes" != 'yes' ]; thenLVSIZE=$(sudo -n lvs --nosuffix --noheadings --units B -o lv_size "/dev/vg-one-103/lv-one-92-0" | tr -d '[:blank:]')dd if=/dev/zero of="/dev/vg-one-103/lv-one-92-0" bs=64k             oflag=seek_bytes iflag=count_bytes             seek="42948624384" count="$(( LVSIZE - 42948624384 ))"firm -f "/var/lib/one/datastores/103/92/disk.0"ln -s "/dev/vg-one-103/lv-one-92-0" "/var/lib/one/datastores/103/92/disk.0"ssh -n manager1 "tar -cSO /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa 2> /dev/null" | tar -xO 2> /dev/null > "/dev/vg-one-103/lv-one-92-0"" failed:Error cloning /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa to lv-one-92-0
Fri Jul 18 08:42:04 2025 [Z0][VM][I]: New LCM state is PROLOG_FAILURE

I've noticed something strange: after this image was created, it was not in the corresponding 104 directory on the manager node, but in the 104 directory on node1. Is this normal?

root@manager1:~# ls -alh  /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa
ls: cannot access '/var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa': No such file or directory

root@node1:~# ls -alh  /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa
-rw-r--r-- 1 oneadmin oneadmin 40G Jul 18 08:41 /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa

root@node2:~# ls -alh  /var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa
ls: cannot access '/var/lib/one/datastores/104/7830ecf4b16758b854ea94a6e107a4aa': No such file or directory

When the problem occurred, I had the same LUN mounted on three nodes. As you said, I didn't use any distributed file system or CLVM. I feel the problem lies with LVM, but I don't know if it's normal for the image source file to be stored like this. What is the overall image storage mechanism, and how is the image replicated when a VM is launched?

I'm quite clear about the SYSTEM DS: it requires creating the VG on all three nodes (manager1, node1, node2), activating it, and using the correct naming format vg-one-<system_ds_id>.
What is the normal way to mount an IMAGE DS?

  1. When creating an LV for the IMAGE DS, does it need to be mounted on all three nodes simultaneously? Should the directory corresponding to the 104 IMAGE DS ID be kept consistent across the three nodes?
  2. Or, when creating an LV for the IMAGE DS, is it only necessary to mount it on the directory corresponding to the front-end's IMAGE DS ID?

How does a VM get its image copied when it is started? Is it copied directly from the image file on the manager, or from the respective compute node?

Hello,
Let me shed some light on this behavior.

  1. When your IMAGE DS has the BRIDGE_LIST attribute, the image is downloaded directly to a Compute Node.
  2. If your SYSTEM DS has the attribute TM_MAD='fs_lvm_ssh', then when the VM is built OpenNebula looks for the image on the FE.

So, you may try one of these other configurations (a sketch of option 1 follows below):

  1. Remove BRIDGE_LIST from the IMAGE DS and redeploy the image. (The image will then be downloaded to the FE, which is where OpenNebula will look for it when building the VM.)
  2. Change TM_MAD for the SYSTEM DS to fs_lvm and redeploy the image. (In this case you should have a shared block device mounted on all Compute nodes. The image will be downloaded to a Compute Node and will be available for building new VMs, because OpenNebula will look for the image on the Compute Node where the VM is scheduled to build.)
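A rough sketch of option 1, reusing the datastore ID and image parameters from your earlier test (the image name below is made up, adjust names and paths as needed):

onedatastore update 104                      # opens the template in $EDITOR; delete the BRIDGE_LIST line
onedatastore show 104 | grep BRIDGE_LIST     # should now print nothing
# re-register the image so that this time it is downloaded to the front-end path:
oneimage create --datastore 104 --name Ubuntu22-test2 --path /var/lib/one/images/ubuntu22.qcow2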

What is the normal way to mount an IMAGE DS?
As long as the IMAGE DS doesn't require updating files from different places at the same time, you may mount it in different places, but the LVM volume should be active on all nodes. However, you may also just use a mounted block device for this purpose.
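For the LVM side, keeping the image volume active (though not necessarily mounted) on every node could look roughly like this, using the VG name from your earlier outputs (again only a sketch, adjust the hostnames):

for h in manager1 node1 node2; do
  ssh "$h" "sudo vgchange -ay vg-image && sudo lvs vg-image"
done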

Thank you for your support. I tested according to the suggestions you provided and solved the problem. Now it can run normally. Thank you very much. :+1::100: