
1 Petabyte FreeNAS build

Discussion in 'FreeBSD and FreeNAS' started by cheezehead, Aug 11, 2017.

  1. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    524
    Likes Received:
    119
  2. Patrick

    Patrick Administrator
    Staff Member

    Joined:
    Dec 21, 2010
    Messages:
    9,562
    Likes Received:
    3,032
    That is more than a little bit scary. Hundreds of SATA drives via cascading SAS expanders and nothing labeled?

    Please someone tell this gentleman to label those drives ASAP!
     
    #2
    Hank C, gea, K D and 3 others like this.
  3. cactus

    cactus Moderator

    Joined:
    Jan 25, 2011
    Messages:
    700
    Likes Received:
    39
    That "usable" keyword changes the meaning of this a lot for me. AND he is using mirrors.
     
    #3
  4. pricklypunter

    pricklypunter Well-Known Member

    Joined:
    Nov 10, 2015
    Messages:
    1,018
    Likes Received:
    275
    Bravery....I like it :)
     
    #4
  5. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,455
    Likes Received:
    426
    My main concern would be the number of disks.
    With an annual failure rate of around 2-3%, rising as the drives age, you will statistically have a faulted disk every few weeks.
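
    To put rough numbers on it (assuming roughly 1 PB of mirrors built from 8 TB drives, i.e. around 250 spindles): 250 drives x 2-3% AFR is already 5-8 failures a year, and once the fleet ages toward 5%+ that becomes a dozen or more, i.e. a dead disk every few weeks.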

    While you can switch on an alert LED for a working disk to find its bay, you also need a way to find a dead disk. This requires a disk map function with history, or a printed location list and proper labeling of the JBODs.

    I would use 10-12 TB disks for this reason, and the best ones that I could get, probably HGST He.

    I would avoid SATA. I have seen trouble reports with SATA + expander where a semi-dead disk blocks an expander, which makes it really hard to find the troublesome disk. While newer expanders may be better than older ones, I would not use this combination outside the home. Simply a risk that can be avoided.

    I would try to reduce cabling and the number of parts.
    With 12 TB disks, a single 90-bay Supermicro toploader (or two 60-bay ones from HGST or Supermicro with 10 TB disks) gives you around 1 PB.

    As this is not a high-performance build, I would prefer Z2 arrays with 6 or 10 disks per vdev. Fast enough, and it would drastically reduce the number of disks. It would also tolerate any two disks failing in a vdev (see the sketch at the end of this post).

    What is the backup plan?
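
    A minimal sketch of what such a layout could look like, assuming 10-wide Z2 vdevs of 12 TB disks and plain da* device names (the pool name and device numbering are placeholders, not the OP's actual config):

        # Each 10-wide raidz2 vdev of 12 TB disks yields about 8 x 12 = 96 TB usable,
        # so eleven of them is roughly 1.05 PB from 110 drives.
        # Only the first two vdevs are shown; the remaining nine follow the same pattern.
        zpool create tank \
          raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9 \
          raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19

    (On FreeNAS you would normally let the GUI build the pool; this is just the equivalent zpool layout.)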
     
    #5
    Last edited: Aug 12, 2017
    cactus likes this.
  6. PigLover

    PigLover Moderator

    Joined:
    Jan 26, 2011
    Messages:
    2,476
    Likes Received:
    943
    Lots of problems here. A single host for that many disks will have terrible availability problems (any maintenance in any chassis is an outage). RAID 10 yields paired failure domains: any disk fault needs to be dealt with immediately because you have data at risk, etc. Just say no...

    Add to this that it is archival storage, so performance probably isn't an issue, and you have a perfect use case for Ceph. Get a low-end server MB for each of those chassis. Run the pools with two-way replication (size=2) and you get the same protection as your RAID 10, except that faulted drives are rebalanced automatically across all other disks and you can take each chassis offline for maintenance, one at a time, with no impact to availability.

    This is, BTW, exactly what CERN does using Ceph - they are just a bit bigger.
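
    As a rough sketch of that pool setup (the pool name and PG count here are made-up assumptions; size=2 keeps two copies of every object, which is what corresponds to the mirrors above):

        # Replicated pool with two copies per object; with the default CRUSH rule
        # the copies land on different hosts, so a whole chassis can be taken offline
        ceph osd pool create archive 4096 4096 replicated
        ceph osd pool set archive size 2
        ceph osd pool set archive min_size 1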

    Sent from my SM-G950U using Tapatalk
     
    #6
  7. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    524
    Likes Received:
    119
    From the Facebook thread, he's not labeling the drives; he's planning on using sas3ircu to locate a drive and then, since the caddies have holes in them, verifying the serial numbers that way (roughly the approach sketched at the end of this post).

    Apparently he's not replicating the array to a twin.


    I was thinking this as well, too many moving parts.
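
    For reference, the sas3ircu locate method is presumably something along these lines (controller 0 and enclosure:bay 2:5 are just example values):

        # Blink the locate LED on the drive in enclosure 2, bay 5 ...
        sas3ircu 0 locate 2:5 ON
        # ... and switch it off again after the swap
        sas3ircu 0 locate 2:5 OFF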
     
    #7
  8. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    1,455
    Likes Received:
    426
    If a disk is dead or fails completely, sas3ircu is no help; you only have the former WWN. This is why I have included a history function in my disk map function. (Alternatively, you need a disk list with WWN, serial and enclosure slot; something like the sketch below.)
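
    A quick-and-dirty way to keep such a list current (the controller number and output path are assumptions; the field names are those sas3ircu prints in its DISPLAY output):

        # Dump enclosure #, slot #, serial, state and GUID for every drive into a
        # dated file, so a dead disk can still be located later
        sas3ircu 0 display | egrep 'Enclosure #|Slot #|Serial No|State|GUID' \
          > /root/disk-map-$(date +%Y-%m-%d).txt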
     
    #8
    cheezehead and Patrick like this.
  9. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    524
    Likes Received:
    119
    ZoL 0.7.0 just added this function natively; not sure how long until the code gets ported over to FreeNAS.
     
    #9
  10. cliffr

    cliffr Member

    Joined:
    Apr 2, 2017
    Messages:
    49
    Likes Received:
    18
    I'm shocked this guy isn't on STH.

    @PigLover @gea @Patrick and all are right. This is going to be a disaster.

    Gluster, Lustre or Ceph are good for that many drives.

    One expander chassis per system is my max with SATA. I'd explore two.

    I know it's archival storage. This is the problem with social media: people do stunts to show off and get likes. But doing this publicly just ensures your next employer can see how passionate you are about bad ideas.

    When the old "been there, done that" guys who have made these mistakes say to label your drives, it's wisdom worth heeding.

    And I've seen a SAS expander shelf f* its marbles and the enclosure services stop working.
     
    #10
    realtomatoes likes this.
  11. Tom5051

    Tom5051 Member

    Joined:
    Jan 18, 2017
    Messages:
    209
    Likes Received:
    24
    hahah nice! So what is your backup strategy... Cloud?
     
    #11