Hi All,
After years of trying to get the hardware together for this, I have finally made it happen.
Got a good deal on a 2113S-WN24RT (2U, 24x NVMe backplane). Finally, my dream of having enough PCIe lanes for each drive plus a 100GbE adapter (do I need it? No! Is it cool? Heck yeah!!!)
I also scored a good deal on 12x 11TB 9200 ECO NVMe drives, and I just got a decent deal on 2x 12.8TB 9300 MAX. I also got a 50GbE ConnectX-5 that I crossflashed to 100GbE (thanks to the forum).
The system unfortunately came with a rev 1.0 H11SSW-NT board, which meant no Rome upgrade. I then decided to pull the trigger on a H12SSW-NTR, plus the relevant PCIe 4.0 risers and retimer cards, so that the whole system would be PCIe 4.0 capable (supposedly the included BPN-NVME3-216N-S4 backplane can handle PCIe 4.0 signals anyway).
I am yet to receive the riser boards, but I did get a 7443 as well as 8x 64GB 3200MHz with the H12SSW-NTR. Unfortunately that board has an issue, but I was able to find a rev 2.0 H11SSW-NT and a 7K62.
I will definitely continue with my PCIe 4.0 upgrade at a later date, but for now I will set up the system as-is.
I am thinking that 48 cores would be wasted on a pure storage node, plus the thing idles at 260W, so I want to consolidate all my systems onto it. I have installed Proxmox and am running TrueNAS as a VM.
All 14 drives and the 100GbE NIC were easy to pass through. I am now trying to optimise my TrueNAS setup to see if I can saturate 100GbE (because why not???)
So far I have played around a bit, and it seems that an 8-drive RAIDZ1 pool (9200 ECOs) with 32 threads will just about saturate it.
Single-threaded performance, though, is only 389 MB/s. Not sure if this is normal; I have only tested with TN-Bench so far.
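In case anyone wants to reproduce the single-stream number, this is roughly the fio job I plan to cross-check TN-Bench against (the dataset path and file size are just placeholders for my setup):

```ini
; one sequential reader at queue depth 1: roughly what a single
; SMB client stream would look like
[seq-read-single]
; placeholder dataset path
directory=/mnt/tank/fio-test
ioengine=posixaio
rw=read
bs=1M
iodepth=1
numjobs=1
size=16G
```

Bumping numjobs and iodepth back up should show whether the limit is per-stream or pool-wide.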
I have a few questions for you guys:
1. Given the fault tolerance of SSDs vs HDDs (and that I will have a rust pool for backup), plus the ability to resilver a lot faster, would it be OK to do a 12-wide RAIDZ1 vdev?
I feel like an 8-wide RAIDZ1 would be good, but then I am left with 4 spare disks. Alternatively, is a 12-wide RAIDZ2 OK (or would 2x 6-wide RAIDZ1 vdevs in one pool be better)? This pool will mainly just be media storage.
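To sanity-check the layouts above, here's the quick back-of-the-envelope script I used for raw usable capacity (raw TB only; it ignores ZFS overhead, RAIDZ padding, and the keep-it-well-under-full guideline):

```python
# Rough usable-capacity comparison for the 12x 11TB layouts discussed
# above. Numbers are raw TB before ZFS overhead and padding.

SIZE_TB = 11  # per-drive capacity

def usable_tb(vdev_width: int, parity: int, vdevs: int) -> int:
    """Raw usable TB for `vdevs` RAIDZ vdevs, each `vdev_width` drives
    wide with `parity` parity drives."""
    return vdevs * (vdev_width - parity) * SIZE_TB

layouts = {
    "12-wide RAIDZ1":   usable_tb(12, 1, 1),  # survives 1 drive loss
    "12-wide RAIDZ2":   usable_tb(12, 2, 1),  # survives any 2 losses
    "2x 6-wide RAIDZ1": usable_tb(6, 1, 2),   # survives 1 loss per vdev
}
for name, tb in layouts.items():
    print(f"{name}: {tb} TB usable")
```

So RAIDZ2 and the 2x 6-wide option cost the same capacity; the difference is whether I'd rather have any-two-drive tolerance or two vdevs' worth of IOPS.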
2. I was thinking about mirroring the 2x 12.8TB 9300 MAX drives and using those to store all my critical data. Am I better off just chucking those into the above pool instead, getting more resilience and storing my critical data alongside my media? (Realistically I probably have 4TB at best of data that is important to me.)
3. Any particular advice on tuning the pool/system for NVMe? I haven't tried SMB transfers yet, but I am worried they will run single-threaded.
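One thing I plan to try for the single-threaded SMB worry is Samba's multi-channel support, which can spread one client's traffic across several TCP connections. On TrueNAS it would go in as an auxiliary parameter; I haven't verified how well it behaves at 100GbE yet, so treat this as a sketch:

```ini
; smb.conf auxiliary parameter; the client needs SMB3 with
; multichannel support too (e.g. Windows 10/11)
server multi channel support = yes
```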
I have only seen a few posts from people setting up systems like this, and they are a bit dated now. So whilst I am a noob when it comes to this stuff, I thought it would be good to share this project.
Cheers,
Gio