Server 2012 R2 Storage Tiering NTFS vs ReFS


markpower28

Active Member
Apr 9, 2013
I am seeing a very interesting result with storage tiering.

Windows Server 2012 R2 on a Dell T110 (X3430) with 16 GB RAM.

Storage Pool: LSI 9207 connecting 4 x Seagate 240 GB SSDs and 4 x WD Se 4 TB SATA drives.

Storage Tiering:

Get-StoragePool Pool1 | New-VirtualDisk -FriendlyName TieredSpace01 -ResiliencySettingName Mirror -StorageTiers $SSD, $HDD -StorageTierSizes 300GB, 7450GB -WriteCacheSize 16GB

(This gives 33% over-provisioning on the SSD tier and a 16 GB write-back cache.)
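For anyone reproducing this, here is a minimal sketch of the setup the command above assumes, i.e. the pool and the $SSD/$HDD tier objects. The pool and tier names are illustrative, not necessarily what was used here:

# Build a pool from all poolable disks behind the HBA (names are illustrative)
$disks  = Get-PhysicalDisk -CanPool $true
$subsys = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
New-StoragePool -FriendlyName Pool1 `
    -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks $disks

# Tier objects referenced as $SSD and $HDD in the New-VirtualDisk call
$SSD = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
$HDD = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

# Tiered mirror space, as in the command above
Get-StoragePool Pool1 | New-VirtualDisk -FriendlyName TieredSpace01 `
    -ResiliencySettingName Mirror -StorageTiers $SSD, $HDD `
    -StorageTierSizes 300GB, 7450GB -WriteCacheSize 16GB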

Benchmarks are fine when using NTFS but terrible with ReFS. Has anyone seen this yet?


NTFS: [benchmark screenshot]

ReFS: [benchmark screenshot]


Regards,

Mark
 

markpower28

Active Member
Apr 9, 2013
For comparison:

4 x SSD RAID 0, NTFS: [benchmark screenshot]

4 x SSD RAID 0, ReFS: [benchmark screenshot]

4 x HDD RAID 0, NTFS: [benchmark screenshot]

4 x HDD RAID 0, ReFS: [benchmark screenshot]


It seems ReFS may somehow not be optimized for storage tiering. I just want to see if other people observe the same behavior.
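For reference, a hedged sketch of how the striped comparison sets could be built, assuming Simple spaces in Storage Spaces rather than controller RAID (the 9207 is an HBA); the pool name, vDisk name, and drive letter are illustrative:

# Separate pool of just the four SSDs, then a Simple (striped) space across them
$subsys = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
$ssd    = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq SSD
New-StoragePool -FriendlyName SSDPool `
    -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks $ssd
New-VirtualDisk -StoragePoolFriendlyName SSDPool -FriendlyName SSDStripe `
    -ResiliencySettingName Simple -UseMaximumSize

# Initialize and format the new vDisk; swap ReFS for NTFS for the A/B comparison
Get-VirtualDisk SSDStripe | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "SSD-ReFS" -Confirm:$false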
 

PigLover

Moderator
Jan 26, 2011
I'm not sure ATTO is going to be the best tool for this. ATTO uses direct disk IO and measures the performance of the disk itself (or, in this case, the Storage Spaces vDisk). The results should be fairly independent of filesystem formatting.

Not sure how to explain the differences in your results.

You need to use a tool that works at the filesystem level. A simple graphical tool that doesn't take much effort to use or interpret is CrystalDiskMark. A more sophisticated tool is IOmeter.

In general, even Microsoft acknowledges that ReFS is not really mature yet, so seeing performance issues with it - tiered or not - would not be surprising.
 

PigLover

Moderator
Jan 26, 2011
I think "WOW" is a better word than "Interesting"!

NTFS shows 15x lower IOPS, 5x lower throughput, 2x lower average IO latency. Shocking.

Any chance you've got time to run this same IOmeter test using a single SSD instead of the Storage Spaces pool? I'd be very interested to know if it's just ReFS being inefficient vs. some odd interaction between ReFS and Storage Spaces.

My prediction is that you will see similar results for ReFS w/out Storage Spaces.
 

markpower28

Active Member
Apr 9, 2013
A couple more tests.

4 workers, 1 MB block, throughput, NTFS: [screenshot]

4 workers, 4K block, IOPS, NTFS: [screenshot]

4 workers, 1 MB block, throughput, ReFS: [screenshot]

4 workers, 4K block, IOPS, ReFS: [screenshot]
 

Patrick

Administrator
Staff member
Dec 21, 2010
I think "WOW" is a better word than "Interesting"!

NTFS shows 15x lower IOPs, 5x lower throughput, 2x lower average IO latency. Shocking.

Any chance you've got time to run this same IOmeter using a single SSD instead of the Storage Spaces pool? I'd be very interested to know if its just ReFS being inefficient vs some odd interaction between ReFS & Storage Spaces.

My prediction is that you will see similar results for ReFS w/out Storage Spaces.
Am I reading that wrong? Looks like NTFS is running more IOPS/ throughput than ReFS
 

markpower28

Active Member
Apr 9, 2013
NTFS and ReFS do not show much difference when tested on a single SSD.

Single SSD, NTFS, 4K: [screenshot]

Single SSD, ReFS, 4K: [screenshot]

4 x SSD RAID 0, NTFS, 4K: [screenshot]

4 x SSD RAID 0, ReFS, 4K: [screenshot]


So far, the only issue I have seen is with storage tiering on ReFS.
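For anyone comparing against their own setup, a quick hedged sketch for confirming the tier layout and write-back cache on the tiered vDisk (using the names from earlier in the thread):

# Confirm resiliency setting and write-back cache size on the tiered space
Get-VirtualDisk TieredSpace01 |
    Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize, Size

# List the tier definitions known to the pool
Get-StorageTier | Select-Object FriendlyName, MediaType, Size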