Marriage = Storage Consolidation

Discussion in 'DIY Server and Workstation Builds' started by IamSpartacus, Oct 20, 2017.

  1. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    Some of you may remember my Final Bachelor Build from last year. Well, I'm coming up on my 1-year anniversary now and you know what marriage is all about...SACRIFICE!

    In all seriousness, I'd like to consolidate my home storage as much as I can, since I'm running more hardware than I need or am making use of. I can easily eliminate one node by combining my VM storage and bulk storage (media, surveillance video, VM snapshots, software, personal data, etc.) into a single physical server.

    Currently I'm running my VM storage on a bare-metal FreeNAS server that is shared to my 2 ESXi hosts. I have a separate bare-metal UnRAID server that houses all my other data. Now that I'm in the process of upgrading my spinners from 8TB Seagate SMRs to 10TB WD Gold drives, combining my storage is more viable than ever.

    Putting everything on FreeNAS would be one way to go, as it would be easy to simply create a new RAIDZ2 pool for my spinners and be done with it. However, the whole drama that happened with FreeNAS Corral has left a bit of a bad taste in my mouth, and I'm a little concerned about FreeNAS' future. So before I decide to go that route, I wanted to get some opinions on what other comparable alternatives are out there.

    Ideally I'd like to up my storage performance if I'm going to take the time to reconfigure things, so I'm not considering non-striped RAID setups (UnRAID/SnapRAID/FlexRAID) at this time.
     
    #1
    Patrick likes this.
  2. Peanuthead

    Peanuthead Active Member

    Joined:
    Jun 12, 2015
    Messages:
    757
    Likes Received:
    115
    OpenIndiana with napp-it?
     
    #2
    audio catalyst, Monoman and cperalt1 like this.
  3. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    Steep learning curve? Time is money these days.
     
    #3
  4. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    694
    Likes Received:
    167
    IMO, FreeNAS will still be fine. Given the drive sizes, though, I'm a bit leery about RAIDZ2; I'm looking at rebuilding my setup on RAID1 given the current pains with resilvering large drives.
     
    #4
    cactus likes this.
  5. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    What drive sizes are you using, and what are your resilver times? I really need the 60TB of space, but adding a 9th drive at this point will be tough.
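    For what it's worth, here's the rough capacity math behind why mirrors are tough for me (a back-of-the-envelope sketch, ignoring ZFS slop/metadata overhead and TB-vs-TiB differences):

    Code:
    # Usable capacity of the layouts being discussed: 8x 10TB WD Golds.
    DRIVES, DRIVE_TB = 8, 10

    def usable_raidz(drives, parity, size_tb):
        # RAIDZ vdev: only the data disks count toward usable space.
        return (drives - parity) * size_tb

    def usable_mirrors(drives, size_tb):
        # 2-way striped mirrors: half the raw capacity.
        return drives // 2 * size_tb

    print(usable_raidz(DRIVES, 2, DRIVE_TB))   # RAIDZ2 -> 60 TB
    print(usable_mirrors(DRIVES, DRIVE_TB))    # mirrors -> 40 TB (I'd need 12 drives for 60)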
     
    #5
  6. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    694
    Likes Received:
    167
    A couple of 2TB RAIDZ2 sets. With the sets 80% full, the rebuild times are long enough to bother me.
     
    #6
  7. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    Do you mind sharing the actual rebuild time so I can compare it to my 8TB UnRAID rebuilds?
     
    #7
  8. cheezehead

    cheezehead Active Member

    Joined:
    Sep 23, 2012
    Messages:
    694
    Likes Received:
    167
    18-22 hours, depending on what other usage there is.
     
    #8
  9. K D

    K D Well-Known Member

    Joined:
    Dec 24, 2016
    Messages:
    1,411
    Likes Received:
    300
    With 5x 4-disk RAIDZ1 vdevs in FreeNAS, rebuild time is around 4-6 hours. I just replaced 4x 7200 RPM drives with 5400 RPM ones.
     
    #9
  10. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    What size are the disks?
     
    #10
  11. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    OK, thinking about this more: given that roughly 60-70% of the data on my array is media, non-striped RAID still seems the way to go for my use case.

    However, in the interest of running both my VM and bulk storage on the same physical box, does something like SnapRAID (calling @rubylaser) work in, say, a Linux VM with simple HBA passthrough? The biggest reason I haven't gone SnapRAID in the past is the lack of caching. But the more I think about it: if SnapRAID doesn't calculate parity on the spot but instead relies on manual/scheduled scripts, does that mean writing to the array will simply perform at the speed of a single drive? Because if that's the case, that should be more than enough for my needs.
     
    #11
  12. nitrobass24

    nitrobass24 Moderator

    Joined:
    Dec 26, 2010
    Messages:
    1,081
    Likes Received:
    125
    Could always go with Syno SHR and throw an SSD cache on it.


    Sent from my iPhone using Tapatalk
     
    #12
  13. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    I'm not looking for any hardware, only software. I've got all the hardware I need. Thanks for the suggestion though.
     
    #13
  14. bitrot

    bitrot Member

    Joined:
    Aug 7, 2017
    Messages:
    95
    Likes Received:
    23
    He probably means XPEnology for the software, the unofficial Synology DSM for non-Synology hardware.

    Worth a try, but updates can be a bit tricky. I personally stick to unRAID for my media server needs. With enough cache space, the relatively low array performance doesn't really matter in everyday use.
     
    #14
  15. msg7086

    msg7086 Active Member

    Joined:
    May 2, 2017
    Messages:
    184
    Likes Received:
    26
    SnapRAID only builds parity blocks when you sync, so read and write speeds will be single-disk performance (you can always build RAID 0 underneath, though).

    To prevent data loss you can sync every night, which should provide good protection for your media files.

    Files other than media can go on a RAIDZx pool. That pool would be much smaller, which helps reduce the rebuild cost.
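    If you want to script the nightly sync, a bare-bones job could look something like this (just a sketch; the config is assumed to exist already and the deletion threshold is an arbitrary safety number — something like snapraid-runner on GitHub is a more polished take):

    Code:
    #!/usr/bin/env python3
    # Bare-bones nightly SnapRAID job (sketch). Assumes snapraid is
    # installed and /etc/snapraid.conf already lists your disks/parity.
    import subprocess, sys

    def snapraid(*args):
        return subprocess.run(["snapraid", *args], capture_output=True, text=True)

    diff = snapraid("diff")
    if diff.returncode == 0:   # diff exits 2 when a sync is needed, 0 when clean
        sys.exit(0)

    # Crude guard: don't bake a mass deletion into parity overnight.
    removed = sum(1 for line in diff.stdout.splitlines() if line.startswith("remove"))
    if removed > 100:
        sys.exit("too many deletions since last sync, skipping")

    snapraid("sync")
    snapraid("scrub", "-p", "5")   # also verify a small slice of the array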
     
    #15
  16. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    842
    Likes Received:
    229
    Yes, HBA passthrough via VMware or Proxmox will work great with SnapRAID. I ran my array like that for a long time (I'm now back to a dedicated fileserver). With SnapRAID, syncing can use multiple disks; I often see my syncs going at 1800 MB/s, so it's not slow.

    Also, trapexit has published directions for doing a write cache volume with mergerfs, so you can have a write cache like you are used to with UnRAID. It's not nearly as simple to set up, but it's not too tricky either.

    GitHub - trapexit/mergerfs: a featureful union filesystem

    Reads still come from the main array, but as you know, those modern 10TB disks move data very quickly on large sequential reads.
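    To make the idea concrete: you point writes at an SSD branch of the mergerfs pool and run a scheduled "mover" that demotes cold files down to the spinners. A rough sketch of the mover in Python (the mergerfs docs themselves use find/rsync; the paths and age cutoff here are placeholders):

    Code:
    #!/usr/bin/env python3
    # Sketch of the mover half of a mergerfs tiered cache. Both branches
    # sit under one mergerfs mount, so files keep the same user-facing
    # path after moving. Paths and MAX_AGE are hypothetical.
    import os, shutil, time

    CACHE = "/mnt/cache"       # fast SSD branch, takes new writes
    BACKING = "/mnt/pool"      # slow spinner branch
    MAX_AGE = 24 * 60 * 60     # demote anything untouched for a day

    now = time.time()
    for root, _dirs, files in os.walk(CACHE):
        for name in files:
            src = os.path.join(root, name)
            if now - os.path.getmtime(src) < MAX_AGE:
                continue
            dst = os.path.join(BACKING, os.path.relpath(src, CACHE))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)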



    Sent from my SM-G930V using Tapatalk
     
    #16
  17. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    Thanks!

    @rubylaser Now that you're back on a dedicated fileserver, what are you using for your VM storage? I'm actually giving strong consideration to moving back to local datastores on my ESXi boxes for simplicity.
     
    #17
  18. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    842
    Likes Received:
    229
    I'm using a local ZFS pool on my Proxmox node at home (7 striped mirrors of 200GB HGST SAS SSDs). It's super fast and just easier for me to manage with my limited free time at home.
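    For anyone weighing striped mirrors for VM storage: you give up half the raw space, but random IOPS scale with the number of mirror vdevs, which is why it feels so fast. Back-of-the-napkin numbers (rules of thumb only, and the per-disk IOPS figure is made up, not a benchmark):

    Code:
    # Rough numbers for the pool above: 7 two-way mirrors of 200GB SSDs.
    MIRRORS, DISK_GB = 7, 200
    PER_DISK_IOPS = 30000    # illustrative SSD random-read figure

    print("usable:", MIRRORS * DISK_GB, "GB")           # 1400 GB of 2800 GB raw
    print("read IOPS ~", MIRRORS * 2 * PER_DISK_IOPS)   # reads can hit both sides
    print("write IOPS ~", MIRRORS * PER_DISK_IOPS)      # each write lands on one vdev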
     
    #18
  19. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,930
    Likes Received:
    421
    Ahh, you only have a single compute node, huh? Having 2+ complicates things a bit because I need to be able to move my VMs between nodes from time to time. But having separate boxes for bulk and VM storage at home is just becoming overkill at this point, so I want to shrink my footprint.
     
    #19
  20. rubylaser

    rubylaser Active Member

    Joined:
    Jan 4, 2013
    Messages:
    842
    Likes Received:
    229
    I do at home. My home Proxmox node is just for goofing around with :) I have a 3-node Proxmox cluster colocated at a local datacenter where I use Ceph for shared storage. I host all my production VMs there.
     
    #20