Tiered storage configuration


Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
You've indicated your primary pain point / use case is bulk writes -- you want these to complete at 10Gbps wire rate. My question is: why? If it's a bulk write, you're unlikely to need to read it right away, so what's the point of a write cache when the data eventually has to be moved to the spinners anyway? The spinners are still the bottleneck. A ZFS special vdev is primarily for metadata and small-block data that benefit from high I/O; that's not your expressed use case.

If you really have a use case where bulk writes immediately need to be read back, then you don't have a file-sharing scenario but a streaming-data one (e.g., video distribution, data logging).
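(For reference, adding a special vdev to an existing pool looks roughly like the lines below -- a sketch only, with placeholder pool and device names; special_small_blocks is the per-dataset knob that lets small file blocks land on it too.)

  # Mirror the special vdev -- it is pool-critical, so losing it loses the pool
  zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
  # Optionally let file blocks up to 32K also go to the special vdev
  zfs set special_small_blocks=32K tank/data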
 

jabuzzard

Member
Mar 22, 2021
Something like this then?

That's the closest thing I have seen to GPFS's tiering ability, and it looks very interesting. In my personal experience, tiering on GPFS makes a huge difference to the user experience of file system performance. Most files are only accessed in a short period after creation; certainly in an academic environment, data is rarely accessed more than three months after it was created, and I have a lot of collected data showing exactly that. That said, GPFS does have the small-file-in-inode feature, which is a huge boon to performance if you are using dedicated SSDs for metadata.

However, I would add that if only a small number of users access the file system, tiering is not as useful.
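(For anyone curious what that looks like in practice, GPFS/Spectrum Scale tiering is driven by ILM policy rules applied with mmapplypolicy; a rough sketch of the three-month pattern above, with placeholder pool names:)

  /* Move files not accessed for ~3 months from the SSD pool to the capacity pool */
  RULE 'age_out' MIGRATE FROM POOL 'ssd' THRESHOLD(80,60)
    TO POOL 'capacity'
    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90

THRESHOLD(80,60) here just means migration kicks in when the source pool reaches 80% full and stops once it drops to 60%.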
 

donedeal19

Member
Jul 10, 2013
Hi, I'm interested in how this is turning out for you. Have you looked into the PowerShell script for setting up tiering yet? Will you be moving to Server 2022?
I just built a tiered pool with 4 SSDs and 4 HDDs: I created a parity space with a 20 GB write-back cache, then created a simple tiered space with a 5 GB write-back cache.
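(In case it helps compare notes, that layout can be scripted roughly like this -- a sketch that assumes the eight disks are already in a pool named Pool1, with placeholder tier sizes:)

  # Define the two media tiers in the existing pool
  $ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
  $hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

  # Simple (striped) tiered space with a 5 GB write-back cache
  New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName TieredSpace `
    -StorageTiers $ssd,$hdd -StorageTierSizes 200GB,2TB `
    -ResiliencySettingName Simple -WriteCacheSize 5GB

  # Separate parity space with a 20 GB write-back cache
  New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName ParitySpace `
    -ResiliencySettingName Parity -Size 2TB -WriteCacheSize 20GB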
I used diskspd to look at latency and IOPS, using a 20 GB VHD that happened to be nearby.
The first run copied the VHD from USB to the parity space; then I copied the VHD from the parity space to the tiered space.
Those file transfers wrote at roughly 150 MB/s to parity and 600 MB/s to the tiered space. Then I decided to delete the VHD from the tiered space and paste it again.
On the repaste I'm seeing transfers of 2.6 GB per second, and I'm clueless why it's so much faster when cached. Puzzled, I ran diskspd, and to see what was happening I also opened Performance Monitor to watch in real time. The first and second runs on the tiered space were slow, about 600 MB at a 1M block size; switching to the parity space I saw about 86 GB at 1M. My test runs were reads only, for 60 seconds each. I'd like to compare with your findings.
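(For reference, a read-only 60-second diskspd run at a 1M block size looks roughly like the line below -- the target path, thread count, and queue depth are placeholders; -Sh disables software and hardware caching, which helps rule the RAM file cache out when re-testing a freshly re-pasted file:)

  diskspd -b1M -d60 -t4 -o8 -w0 -Sh -L T:\test\20gb.vhd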
I can rebuild the pool to try out three-way tiering, adding 4 NVMe drives to keep it at 4 columns, and then move to Server 2022 at a later time.
For now I will play with this, as I like NVMe to be in a separate pool.