I have a question about the different datastore types. I have some SSD disks in one of my hosts. I created an image datastore with its path pointing to the mounted SSD RAID, then created a DATABLOCK image in that datastore. Everything looked fine; I can see it created under that path. But when I attach the image to a running VM, it works, yet the entire file gets copied into my system datastore. I also checked the read/write IOPS, and they land on the system datastore, not the SSDs.
Question 1: is it normal behavior that copies of the images are always stored in and run from the system datastore, or have I misconfigured something in my disk/format type?
Question 2: if this is normal behavior, what are the best practices for taking advantage of faster disks and different datastore types by simply creating persistent DATABLOCK images and attaching them to specific VMs? I need this on some of my VMs running databases.
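For context, the image datastore was created along these lines (the name and path below are illustrative, not necessarily my exact values):

```
# Illustrative image datastore template, e.g. ssd_images.ds
NAME      = "ssd_images"
DS_MAD    = "fs"
TM_MAD    = "shared"       # transfer driver; governs how images reach the system DS
BASE_PATH = "/ssd-test/ssd"
```

Registered with `onedatastore create ssd_images.ds` and assigned to the cluster.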
OK, I followed a blog post about multiple file systems covering exactly this situation (local SSD plus a shared NFS datastore).
I added this line to my VM template:
SCHED_DS_REQUIREMENTS = "NAME = \"ssd_system\""
Where ssd_system is:
DATASTORE 158 INFORMATION
ID : 158
NAME : ssd_system
USER : maziyar
GROUP : oneadmin
CLUSTER : Multivac
TYPE : SYSTEM
DS_MAD : -
TM_MAD : shared
BASE PATH : /ssd-test/ssd/158
DISK_TYPE : FILE
STATE : READY
But the VM still gets created in the cluster's default system datastore path. As you can see, it reports 37 TB, which is my default NFS system datastore, not the SSDs. Am I missing something in my datastore config?
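To see where a VM actually landed, I'm checking its history records and the datastore list (the VM ID 42 below is a placeholder):

```
# Placeholder VM ID; the DS column in the HISTORY section
# shows the system datastore the VM was deployed on
onevm show 42

# List datastores with type and capacity to compare IDs
onedatastore list
```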
Many thanks.