- TL;DR – 64GB of RAM with a 256GB NVMe drive adds 238.47 GB of tiered memory at a TierNvmePct of 400, or 15.92 GB at the default of 25, since the percentage is based on the 64GB of physical RAM.
- Software
VCSA 8.0.3 24022515
ESXi 8.0.3 24022510 (fresh install)
- Hardware
Dell OptiPlex 5070 Micro
CPU – i7-9700T
RAM – 64GB RAM (2x 32GB SODIMMS)
NVMe – 256GB Samsung PM9B1 empty for Memory Tiering – vmhba1
SATA – 2TB SanDisk SDSSDH32 for VMFS6 Guest VMs – vmhba0
USB – 128GB SanDisk Fit for ESXi 8 – vmhba32
- Reading – Memory_Tiering_over_NVMe_Tech_Preview_-_vSphere_8.0.3_Technical_Guide_Revision_1.0.pdf
ESXi 8.0 Memory Tiering 256GB NVMe 64GB RAM
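- The vmhba numbering called out in the hardware list can be confirmed from the ESXi shell once SSH is open; a minimal check that just lists the host's storage adapters:
# Lists vmhba0, vmhba1, vmhba32, etc. with their drivers and descriptions
esxcli storage core adapter list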
- Enter maintenance mode
- SSH in
- Run 4 commands to list the 256GB NVMe device name, create the tier device, validate it, and enable MemoryTiering, then reboot the host via vCenter.
esxcli storage core adapter device list
-
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____PM9B1_NVMe_Samsung_256GB________________5C48BB31E1382500
-
esxcli system tierdevice list
-
esxcli system settings kernel set -s MemoryTiering -v TRUE
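- Optional sanity checks before the reboot; a minimal sketch that only reads back the setting just changed (the reboot can also be issued from the shell instead of vCenter since the host is already in maintenance mode):
# Should show TRUE in the Configured column after the set command above
esxcli system settings kernel list -o MemoryTiering
# Optional: reboot from the shell; the -r reason string is free-form
esxcli system shutdown reboot -r "Enable Memory Tiering"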
- After the reboot, in vCenter, the ESXi host under Configure / Hardware / Overview shows
Memory Tiering – Software
Tier 0 63.69 GB DRAM (Memory)
- “By default, the hosts are configured to use a DRAM to NVMe ratio of 4:1”
“The host advanced setting for Mem.TierNvmePct sets the amount of NVMe to be used as tiered memory using a percentage equivalent of the total amount of DRAM. A value between 1 and 400. The default value is 25”
- 64GB of RAM x 0.25 = 16 GB
-
esxcfg-advcfg -s 25 /Mem/TierNvmePct
- Results in 79.61 GB of total RAM with 15.92 GB NVMe being added.
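- The current value can be read back at any time; either of these works from the ESXi shell:
# esxcfg-advcfg with -g gets an advanced option instead of setting it
esxcfg-advcfg -g /Mem/TierNvmePct
# esxcli equivalent of the same advanced option
esxcli system settings advanced list -o /Mem/TierNvmePct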
- Breaking the soft recommendation to add no more than 64GB of RAM via NVMe: “It is recommended that the amount of NVMe configured as tiered memory does not exceed the total amount of DRAM”
- 64GB of RAM x 4.00 = 256 GB
-
esxcfg-advcfg -s 400 /Mem/TierNvmePct
- Results in 302.16 GB of total RAM, with 238.47 GB of NVMe being added, since the 256GB NVMe is 238.47 GB once formatted.
- Interesting to see the “datastore” is labeled “Consumed for Memory Tiering”.
- 372 TierNvmePct
- To keep things proper: 238.47 / 64 = 3.726, x 100 = 372.6
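- The same math can be reproduced from the ESXi shell with busybox awk, using the 238.47 GB usable NVMe capacity and 64 GB of DRAM from above:
# usable NVMe capacity / physical DRAM, expressed as a percent
awk 'BEGIN { printf "%.1f\n", 238.47 / 64 * 100 }'   # prints 372.6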
-
esxcfg-advcfg -s 372 /Mem/TierNvmePct
- Resulted in 2GB less – 300.62 GB total and 236.93 GB NVMe. I set it back to 400 since ESXi appears to work out the maximum usable amount on its own.
- In vCenter, on the ESXi host's VMs tab.
- You can pull the following metrics:
-
Read BW Memory DRAM (Tier 0)
Read BW Memory PMem (Tier 1)
Consumed Memory DRAM (Tier 0)
Consumed Memory PMem (Tier 1)
Active Memory DRAM (Tier 0)
Active Memory PMem (Tier 1)
- The Tier 1 (NVMe) metrics show “-1 MB” for all VMs, which could be a bad reading since this is a tech preview, or the tiering is smart enough not to use NVMe while there is Tier 0 real RAM available for the VMs… yet “Read BW Memory DRAM (Tier 0)” also reports -1.
- TBD whether this tech preview causes a PSOD or any VM performance issues. Aug 31st to Sept 7th, 2024: no issues.
The homelab host has not used all 64 GB of RAM during its peak workload.
It is not economical for a lab to use the default 4:1 DRAM-to-NVMe ratio, which only adds 16GB of RAM from the 256GB NVMe; that is why a production host would use drives up to 4TB with 7300 TBW endurance specifications.