Recommendations for upgrading an aging all-in-one build


rhombus

New Member
Feb 25, 2022
I have an all-in-one build that may be getting long in the tooth. In 2019 I had 2 drives go bad, and recently I have had another drive go bad in my ZFS array. It's time to at minimum replace the drives, and it may also be time for some upgrades. My ZFS dataset has been going for 10+ years, starting back on Solaris and early napp-it. I love ZFS.

I have upgraded the hardware several times over the years; this latest iteration was built in 2016, mostly from second-hand parts, and has been running ever since. I also run a few VMs.

ESXi VMs:
• Windows Server for AD, local DNS, WINS: 2 cores, 8GB
• FreeNAS (passthrough of HBAs): 4 cores, 64GB of RAM
• Hosts 2 pools: one main 2-vdev RAIDZ2 pool for data storage (just one user accessing and copying, working on large files stored in the pool), and a 2-disk Plex storage pool that gets a mirrored copy of the media so Plex can read from it and keep the main pool spun down.
• Plex server (Windows VM), transcoding up to 6 streams at a time: 16 cores, 16GB allocated. Quadro added for transcoding.
• CrashPlan Pro (Windows VM), set to back up the entire TrueNAS pool using mapped drives: 2 cores, 16GB. CrashPlan seems to use a lot of RAM for a large backup dataset.
• Windows VM for downloading, 7-Zip, UnRAR: 6 cores, 16GB
• Windows VM for management: 2 cores, 6GB


Hardware

2x Supermicro SC846TQ 24-bay chassis; one is set up as a JBOD chassis.

Main Chassis - ESXi 6
Supermicro SC846TQ 846TQ-R900B - power supplies upgraded to SQ (PWS-920P-SQ)
BPN-SAS-846TQ backplane
Supermicro X9DRi-LN4F
2x E5-2690 v1
128GB DDR3-1333
Intel 82598EB dual-port 10Gb SFP+ - going to a 10GbE switch (my desktop is 10GbE)
LSI 9261-8i PCIe 2.0 RAID card w/ BBU for VM storage
2x 480GB Samsung PM853T (MZ7GE480HMHP-00005) OEM enterprise SSDs, mirrored (mounted inside the case)
4x 2TB SAS Ultrastar drives in RAID 10
1x 4TB SATA drive for temporary storage of VM downloads (mounted inside the case)
LSI SAS9207-8i PCIe 3.0 HBA to the Intel expander (passed through to the FreeNAS VM)
LSI SAS9207-8e to the expander chassis (passed through to the FreeNAS VM)
Intel RES2SV240 24-port 6Gb/s SAS/SATA expander card (PCIe x4 slot for power)
SAS breakout fan-out cables to the backplane
Quadro P2000 for Plex transcoding, since the E5 has no iGPU (passed through to the Plex VM)

Expander JBOD Chassis - 20 free drive bays
SC846TQ 846TQ-R900B 24-bay chassis - PWS-920P-SQ
BPN-SAS-846TQ Backplane
Supermicro CSE-PTJBOD-CB3 JBOD IPMI controller
Intel RES2CV240 SG27402 RAID Expander
SAS breakout cables to chassis backplane

Power Usage:
Main chassis avg 280W
JBOD chassis avg 150W

Everything is on a UPS, with communication set up for shutdowns.

FreeNAS VM config
Main pool - 32TB, almost full (3TB left)
2x RAIDZ2 vdevs of 10x 3TB HGST Ultrastar 7K4000 drives
2x hot spares

Plex pool of 2x 8TB WD Red drives (no redundancy); this is almost full, with 2TB left. Data is mirrored to it from the main ZFS pool by a cron job every 2 hours, so bitrot protection still comes from the main pool. Plex reads from this pool to keep the main array spun down and save power.
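The mirror job itself boils down to snapshot-plus-incremental-send; a minimal sketch of that shape is below (the dataset names "tank/media" and "plexpool/media" are placeholders, and an rsync cron job works just as well):

Code:
#!/usr/bin/env python3
"""Minimal 2-hourly mirror job (run from cron). Dataset names
"tank/media" and "plexpool/media" are placeholders; assumes the
destination was seeded with an initial full send."""
import subprocess
from datetime import datetime, timezone

SRC, DST = "tank/media", "plexpool/media"

def zfs(*args):
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

# Find the newest snapshot on the source to use as the incremental base.
snaps = zfs("list", "-t", "snapshot", "-H", "-o", "name",
            "-s", "creation", "-d", "1", SRC).split()
base = snaps[-1].split("@")[1] if snaps else None

# Take a fresh snapshot, then send the delta to the plex pool.
snap = "mirror-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M")
zfs("snapshot", f"{SRC}@{snap}")

send_cmd = ["zfs", "send", f"{SRC}@{snap}"] if base is None else \
           ["zfs", "send", "-i", f"@{base}", f"{SRC}@{snap}"]
send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", DST], stdin=send.stdout, check=True)
send.wait()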

24 drives total in the main case: 4 for VM storage, 20 for the ZFS array. 4 drives in the JBOD expander case.




Goals
• Replace the old 3TB drives, which seem to be failing more often (could be the drives, could also be the backplane or breakout cables). Thinking 8TB or 10TB drives.
• Add space to both ZFS pools, main and Plex. Target at least 2-3x the current storage.
• Maintain ZFS for bitrot protection.
• Potentially upgrade mobo/CPU/RAM for lower power usage.
• I suspect the majority of the power usage is from the number of disks.
• Reuse the old 3TB drives for an on-site backup pool.
• Simplify if possible.


My questions are:
Any general recommendations on what to do?
Should I upgrade my hardware or just my disks?
Should I stick with the passive backplanes in my cases or go to expander backplanes?
Should I create a new pool and move the data, or replace drives one by one and resilver (seems like a lot of work on older drives)?
Should I add a ZIL or SLOG? If so, what?
What kind of pool should I create for a backup pool? A single RAIDZ1 vdev?
What's the best price/performance disk for my main pool these days? Shucking external drives? Buying used enterprise SAS or SATA drives? I'm thinking 8-10TB drives. I don't want shingled (SMR) drives. I like getting 500 MB/s while transferring and working on files.


Thanks!
 

berrmich

New Member
Jun 15, 2016
Well, I got excited when I saw this post, as I'm looking at the same thing. Mine is even more ancient than yours: Supermicro with a Xeon X3440, 16GB RAM. I'm recovering from a boot disk failure now, but I noted that with my hardware the latest ESXi I can use is 5.5. I'm also running out of room; I have 5x 2TB and 6x 4TB drives. Did you come to any decisions?
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Goals
• Replace the old 3TB drives, which seem to be failing more often (could be the drives, could also be the backplane or breakout cables). Thinking 8TB or 10TB drives.
• Add space to both ZFS pools, main and Plex. Target at least 2-3x the current storage.
• Maintain ZFS for bitrot protection.

see here

At $100USD per 10TB drive that seems like a pretty good price - read through the thread to see others' thoughts on the drives they're receiving. Are you okay with used? 3TB * 3 = 9TB, so at first glance 10TB seems like a good size, doesn't it?

• Potentially upgrade mobo/CPU/RAM for lower power usage.
• I suspect the majority of the power usage is from the number of disks.
• Reuse the old 3TB drives for an on-site backup pool.
Going to a single CPU will help under load, and a little (but probably less than you think) at idle.
Maybe something like an X10SRL-F and a 2680 v4 if you want to get a bit newer - DDR4 though.
Or reuse one of your E5-2690s and find an X9SRL-F so you can reuse your existing memory.

All things being equal, I counted 40 PCIe lanes used by your current card configuration. However, if your expander handled all the spinners, you could get by with one of your HBAs in an x4 slot (x8 physical).

• Simplify if possible.
How do you feel about going to a single chassis? You'd probably drop 150W (probably a little more, since you'd also no longer need the -8e HBA).


My questions are:
Any general recommendations on what to do?
Should I upgrade my hardware or just my disks?
Should I stick with the passive backplanes in my cases or go to expander backplanes?
Should I create a new pool and move the data, or replace drives one by one and resilver (seems like a lot of work on older drives)?
Should I add a ZIL or SLOG? If so, what?
What kind of pool should I create for a backup pool? A single RAIDZ1 vdev?
What's the best price/performance disk for my main pool these days? Shucking external drives? Buying used enterprise SAS or SATA drives? I'm thinking 8-10TB drives. I don't want shingled (SMR) drives. I like getting 500 MB/s while transferring and working on files.
Thanks!
Your questions are tough because they are pretty subjective.

No budget mentioned (that I saw). If the answer is "as cheap as possible", that makes it even more difficult to answer.
I had a tough time determining your use case(s), and therefore guessing what your working set looks like at any given time.
FWIW, it's possible that the lack of specific questions, and things being of a more general nature, is why you haven't had folks chime in.
It was a bit of a puzzle figuring out which drives are connected to what. For example, are your 480GB SSDs mirrored on the HW RAID controller, or is that a ZFS mirror?
I didn't see you mention which type-1 hypervisor you are running. Since you are using HW RAID, that makes me think ESXi - but I probably shouldn't assume.
Since you have LSI 2308s, I'm going to guess you are NOT running ESXi 7? 2308s can be passed through in 7, but if I recall what I read, you have to mess with it a bit, and you didn't mention that either.

I'd definitely upgrade your disks. I had 69K+ hours on my 24x 3TB spinners - SMART seemed fine - but I ran out of space.
Went from 3TB to SAS3 8TB and powered the old ones down. Went back and checked them a couple of months later - 6 failed immediately... LOL. Lots of hours; things do die.

I'm a fan of the TQ backplanes. Again, budget, *but* you could get rid of the internal PCIe-powered expander if you went to a -24i (or maybe -16i) HBA... In my mind you'd go from two SPoFs to one, and you'd reduce the power draw a bit. To a degree this will depend on your use cases...
With the TQ you have the flexibility to do exactly what you are doing - some drives on the HBA, some drives on a HW RAID controller.

You have a mostly empty JBOD chassis. Why not build your new pool there, migrate the data, then shut down and swap the new pool's drives into your existing main chassis?
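The migration step itself is a recursive snapshot plus a full replication stream; a one-shot sketch, with "tank" and "newtank" as placeholder pool names:

Code:
#!/usr/bin/env python3
"""One-shot pool migration sketch: old pool "tank" to new pool
"newtank" built in the JBOD chassis (names are placeholders)."""
import subprocess

OLD, NEW = "tank", "newtank"

# Recursive snapshot of the entire old pool.
subprocess.run(["zfs", "snapshot", "-r", f"{OLD}@migrate"], check=True)

# send -R carries the whole dataset tree, properties, and snapshots;
# recv -Fu leaves the copy unmounted until you're ready to cut over.
send = subprocess.Popen(["zfs", "send", "-R", f"{OLD}@migrate"],
                        stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-Fu", NEW], stdin=send.stdout, check=True)
send.wait()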

You already have a ZIL - every pool does.
I think you mean to ask about a dedicated SLOG device.
Is most of your IO sync or async? That matters.
What level of performance do you hope to achieve? Will you be going faster than 10GbE? That impacts the size of the device you need.
Budget? NVMe? Radian RMS100? Lots of options, but you'll need lanes. If you can simplify the cards you are deploying, you may free up slots/lanes.
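On sizing, the usual rule of thumb: a SLOG only ever holds the last few seconds of incoming sync writes (roughly two transaction groups, 5 s each by default), so even over 10GbE the capacity you need is tiny - latency and power-loss protection matter far more. Back-of-envelope:

Code:
# Back-of-envelope SLOG sizing (rule of thumb, not gospel).
# The SLOG holds only sync writes not yet committed in a txg,
# so a few seconds of maximum ingest is the upper bound.
line_rate_GBps = 10 / 8        # 10GbE wire speed ~ 1.25 GB/s
txg_window_s = 2 * 5           # ~two txg intervals at the 5 s default
print(f"~{line_rate_GBps * txg_window_s:.0f} GB is plenty")  # ~13 GB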

Dealer's choice on your backup pool... it is backup. Personally, I like a separate box. You can always have it auto power on, replicate, then auto shut down...
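That auto-on/replicate/auto-off loop is a short script if the backup box has IPMI; a rough sketch (the BMC address, credentials, hostname, and snapshot names are all placeholders):

Code:
#!/usr/bin/env python3
"""Wake the backup box, replicate, power it back down.
BMC address/credentials, hostname, and snapshot names are placeholders."""
import subprocess, time

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.50",
       "-U", "admin", "-P", "changeme"]
HOST = "root@backup.local"

subprocess.run(BMC + ["chassis", "power", "on"], check=True)
time.sleep(300)  # crude boot wait; polling ssh until it answers is nicer

# Incremental replication over ssh (discover @prev/@now the same way
# as in the mirror-job sketch earlier in the thread).
subprocess.run(f"zfs send -R -i @prev tank@now | ssh {HOST} zfs recv -Fu backup",
               shell=True, check=True)

subprocess.run(["ssh", HOST, "shutdown", "-p", "now"], check=True)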

Best price/performance - I think that changes almost daily. New? Used? $10/TB for used drives under 12TB seems to be where it was pre-Chia, and it has gotten back there again. Shucking IMO is a personal choice; I also think it's a bit of a game and an "alpha geek mentality". Lots of literature around on shingled drives and what to avoid. I'm NOT aware of any SAS3 drives that are shingled...

If it were me:
Come up with a plan I can execute in phases to get where I want.
If you are on ESXi < 7, I'd also research whether I want to replace the HBAs (even for passthrough).
Start with storage - I'm good with used enterprise spinners; I'd probably go after the 10TB drives at $99USD each. Do I want 8, 10, 16, or 20 drives at the outset?
10TB works? Great. How many vdevs do you want? How does the storage line up with the performance you want? I get about 5Gbps with 2x 8-wide RAIDZ2 vdevs of 16TB SAS3 drives (rough math in the sketch after this list); it's good enough for me.
Get my data moved onto the (new to me) drives using the JBOD.
Look at getting rid of the expander in the main chassis and obviating the JBOD for the main system - maybe build a pure backup server in the JBOD chassis that will auto-on, replicate, auto-off.
Once that all works:
Replace the motherboard, CPU, and memory, going to a single-CPU (E5 v4) board - so an X10SRL-F (~$200USD), an E5-2680 v4 (~$100USD), and DDR4, probably PC4-2400T since I can find 32GB sticks for under $80USD each. If you wanted even newer, you could probably go Scalable and get a mobo with onboard SFP+ and HW RAID (3108)... and simplify your card deployment a bit more.
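For a rough feel of how vdev count lines up with sequential throughput (hand-wavy: streaming speed scales with data disks per vdev times number of vdevs; the ~200 MB/s per-disk figure is an assumption for modern 7200 rpm drives):

Code:
# Hand-wavy ceiling for RAIDZ sequential throughput; real-world numbers
# land well below this once fragmentation and recordsize effects kick in.
def raidz_streaming_mbps(vdevs, width, parity, per_disk=200):
    """Data disks per vdev x vdevs x assumed per-disk MB/s."""
    return vdevs * (width - parity) * per_disk

print(raidz_streaming_mbps(2, 8, 2))    # 2x 8-wide Z2  -> ~2400 MB/s ceiling
print(raidz_streaming_mbps(2, 10, 2))   # 2x 10-wide Z2 -> ~3200 MB/s ceiling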

I've rambled a bit here but hopefully provided some food for thought. These are ideas. Not saying you have to do this or that. Just thinking out loud and using examples of what I've done to give you some starting points.
 