1 Petabyte FreeNAS build


Patrick

Administrator
Staff member
Dec 21, 2010
That is more than a little bit scary. Hundreds of SATA drives via cascading SAS expanders and nothing labeled?

Please someone tell this gentleman to label those drives ASAP!
 

cactus

Moderator
Jan 25, 2011
CA
That "usable" keyword changes the meaning of this a lot for me. And he is using mirrors.
 

gea

Well-Known Member
Dec 31, 2010
DE
My main concern would be the number of disks.
With an annual failure rate of around 2-3%, rising as the drives age, you will statistically have a failed disk every few weeks.
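
As a back-of-envelope sketch (the drive size and AFR here are assumptions, not figures from the build):

```
# 1 PB usable on mirrors ~ 2 PB raw; assume 8 TB drives and a 2.5% AFR
drives=$((2000 / 8))                     # ~250 drives
echo "$drives" | awk '{ f = $1 * 0.025;  # expected failures per year
  printf "%.1f failures/yr, one every %.1f weeks\n", f, 52 / f }'
# prints: 6.2 failures/yr, one every 8.3 weeks; more or smaller drives,
# or a higher AFR, pushes this toward "every few weeks"
```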

While you can switch on an alert LED for a working disk to find its bay, you also need a way to find a dead disk. That requires a disk-map function with history, or a printed location list and proper labelling of the JBODs.
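For the working-disk case, the LSI/Broadcom HBA tooling can blink the bay LED; a minimal sketch (the controller number and enclosure:slot pair are placeholders):

```
sas3ircu 0 display           # list enclosures, slots, serials and WWNs
sas3ircu 0 locate 2:17 ON    # blink the bay LED on enclosure 2, slot 17
sas3ircu 0 locate 2:17 OFF   # turn it off again after the swap
```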

For this reason I would use 10-12 TB disks, and the best ones I could get, probably HGST He.

I would avoid SATA. I have seen trouble reports with SATA + expander setups where a semi-dead disk blocks the expander, which makes it really hard to find the troublesome disk. While newer expanders may be better than older ones, I would not use this combination outside the home. It is simply a risk that can be avoided.

I would try to reduce cabling and the number of parts.
With 12 TB disks, a single 90-bay Supermicro top-loader (or two 60-bay units from HGST or Supermicro with 10 TB disks) gives you around 1 PB.

As this is not a high-performance build, I would prefer RAID-Z2 arrays with 6 or 10 disks per vdev. That is fast enough, drastically reduces the number of disks, and allows any two disks per vdev to fail without data loss.
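
A minimal sketch of one such layout (device names are placeholders; with 10-wide Z2 on 12 TB disks each vdev contributes roughly 96 TB usable, so about 11 vdevs reach 1 PB):

```
# two of the ~11 raidz2 vdevs a ~1 PB usable pool would need
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
```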

What is the backup plan?
 

PigLover

Moderator
Jan 26, 2011
Lots of problems here. A single host for that many disks will have terrible availability (any maintenance on any chassis is an outage). RAID 10 yields paired failure domains: any disk fault needs to be dealt with immediately because you have data at risk, etc. Just say no...

Add to this that it is archival storage, so performance probably isn't an issue, and you have a perfect use case for Ceph. Get a low-end server motherboard for each of those chassis. Run the pools with a replication of 1 (one extra copy, i.e. two copies in total) and you get the same redundancy as your RAID 10, except that faulted drives are rebalanced automatically across all the other disks, and you can take each chassis offline for maintenance, one at a time, with no impact on availability.
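
In Ceph terms that is a replicated pool with size 2; a minimal sketch (the pool name and placement-group count are placeholders):

```
ceph osd pool create archive 2048    # create the pool
ceph osd pool set archive size 2     # two copies, like mirrored pairs
# the default CRUSH rule separates copies by host, so with one server
# per chassis a whole JBOD can go offline without losing data access
```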

This is, BTW, exactly what CERN does using Ceph - they are just a bit bigger.

 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
That is more than a little bit scary. Hundreds of SATA drives via cascading SAS expanders and nothing labeled?

Please someone tell this gentleman to label those drives ASAP!
From the Facebook thread: he's not labeling the drives; he plans to use the sas3ircu method to locate a drive and then, since the caddies have holes in them, verify the serial numbers that way.

What is the backup plan?
Apparently he's not replicating the array to a twin.
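
For the record, a twin would typically be fed with ZFS replication; a minimal sketch (the pool, snapshot, and host names are placeholders):

```
zfs snapshot -r tank@2017-08-01                      # consistent point in time
zfs send -R tank@2017-08-01 | ssh twin zfs recv -Fu tank
# subsequent runs ship only the delta between snapshots
zfs send -R -i tank@2017-08-01 tank@2017-08-02 | ssh twin zfs recv -Fu tank
```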


Add to this that it is archival storage, so performance probably isn't an issue, and you have a perfect use case for Ceph. Get a low-end server motherboard for each of those chassis. Run the pools with a replication of 1 (one extra copy, i.e. two copies in total) and you get the same redundancy as your RAID 10, except that faulted drives are rebalanced automatically across all the other disks, and you can take each chassis offline for maintenance, one at a time, with no impact on availability.

This is, BTW, exactly what CERN does using Ceph - they are just a bit bigger.
I was thinking this as well; too many moving parts.
 

gea

Well-Known Member
Dec 31, 2010
DE
From the Facebook thread: he's not labeling the drives; he plans to use the sas3ircu method to locate a drive and then, since the caddies have holes in them, verify the serial numbers that way.
If a disk is dead or fails completely, sas3ircu is no help; you only have its former WWN. This is why I have included a history function in my disk-map feature. (Alternatively, you need a disk list with WWN, serial, and enclosure slot.)
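
On FreeNAS/FreeBSD such a list can be captured ahead of time; a sketch, assuming sesutil (FreeBSD 11 and later) and smartmontools are available:

```
#!/bin/sh
# snapshot the enclosure slot -> device mapping while all disks are alive
sesutil map > /root/slot-map.txt
# record device, serial, and WWN for every disk the kernel reports
for d in $(sysctl -n kern.disks); do
  sn=$(smartctl -i /dev/$d | sed -n 's/^Serial Number:[[:space:]]*//p')
  wwn=$(smartctl -i /dev/$d | sed -n 's/^LU WWN Device Id:[[:space:]]*//p')
  printf '%s\t%s\t%s\n' "$d" "$sn" "$wwn"
done > /root/disk-list.txt
```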
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
If a disk is dead or fails completely, sas3ircu is no help; you only have its former WWN. This is why I have included a history function in my disk-map feature. (Alternatively, you need a disk list with WWN, serial, and enclosure slot.)
ZoL 0.7.0 just added this function natively; not sure how long it will take for the code to be ported over to FreeNAS.
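
Independent of the new 0.7.0 feature, one existing ZoL mechanism for keeping slot locations visible in zpool status is /etc/zfs/vdev_id.conf, which pins enclosure slots to stable names; a sketch with placeholder by-path targets:

```
# append illustrative slot aliases (the by-path targets are placeholders)
cat <<'EOF' >> /etc/zfs/vdev_id.conf
alias J1-S00 /dev/disk/by-path/pci-0000:03:00.0-sas-phy0-lun-0
alias J1-S01 /dev/disk/by-path/pci-0000:03:00.0-sas-phy1-lun-0
EOF
udevadm trigger    # re-run udev rules to create the /dev/disk/by-vdev links
```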
 

cliffr

Member
Apr 2, 2017
I'm shocked this guy isn't on STH.

@PigLover @gea @Patrick and all are right. This is going to be a disaster.

Gluster, Lustre, or Ceph are good for that many.

One expander chassis per system is my max with SATA; at most I'd explore two.

I know it's archival storage. This is the problem with social media: people do stunts to show off and get likes. But doing this publicly just ensures your next employer can see how passionate you are about bad ideas.

When the old "been there, done that" guys say label your drives, it's wisdom worth heeding.

And I've seen a SAS expander shelf f* its marbles and its enclosure services stop working.
 