Marriage = Storage Consolidation

Discussion in 'DIY Server and Workstation Builds' started by IamSpartacus, Oct 20, 2017.

  1. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,933
    Likes Received:
    423
    I see. Yea my home cluster IS my "production" cluster ;). But as much as I like to plan for max uptime, I'm starting to realize there's only so far I'm willing to go for my home network. And having a shared bulk storage server and a separate shared VM storage server is just overkill. I want to either combine them into one physical box or move my VM storage back locally onto the hosts. I'm honestly leaning towards #2 for simplicity's sake since I just don't have the time I used to spend on my "toys" (see thread title :D). Since I use Veeam, I can bring a VM back up in a matter of minutes on a different host if one host were to become unavailable for whatever reason, and that should be good enough for at home.
     
    #21
  2. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    2,867
    Likes Received:
    431
    If you think you have no time now... children ;)
     
    #22
    Stux and rubylaser like this.
  3. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,933
    Likes Received:
    423
    Exactly, what do you think this is in preparation for!
     
    #23
    Leonardo Rassi and Evan like this.
  4. whitey

    whitey Moderator

    Joined:
    Jun 30, 2014
    Messages:
    2,764
    Likes Received:
    858
    x's 3 here...so sad my 'geeking out' time late night jam sessions have largely gone out the door but wouldn't trade it for the world! Tons of fun those lil' savages are :-D haha
     
    #24
  5. mackle

    mackle Active Member

    Joined:
    Nov 13, 2013
    Messages:
    199
    Likes Received:
    34
    Enjoying my first, two days old, here :)
     
    #25
    rubylaser, msg7086 and Evan like this.
  6. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,933
    Likes Received:
    423
    Congrats!
     
    #26
  7. Stux

    Stux Member

    Joined:
    May 29, 2017
    Messages:
    30
    Likes Received:
    10
    Why not just get rid of the unraid and put the bulk and vm store under FreeNAS?

    You can use different pools.
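
    To illustrate the suggestion, a two-pool layout on a single FreeNAS box might look something like this sketch. The pool names and device names here are placeholders, not anything from the thread:

    ```shell
    # Bulk media pool: wide raidz2 for capacity (placeholder disks da0-da5)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # VM pool: mirrored SSDs for random IO (placeholder disks ada0-ada1)
    zpool create vmstore mirror ada0 ada1

    # lz4 compression is cheap and generally worth enabling on both
    zfs set compression=lz4 tank
    zfs set compression=lz4 vmstore
    ```

    The point being that each pool keeps its own redundancy and performance profile, so bulk media and VM storage don't have to share a vdev layout.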
     
    #27
  8. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,933
    Likes Received:
    423
    I've considered it, but based on my use case (mainly media serving), non-striped pooling of my data seems to fit best. Not only will performance be better (streaming multiple files stored on different disks, as opposed to lots of random IO from a multi-disk striped array), but I also have the added security of not losing all my data (we're talking 60TB and growing) if I lose more disks than I have parity.
     
    #28
  9. CookiesLikeWhoa

    Joined:
    Sep 7, 2016
    Messages:
    111
    Likes Received:
    24
    I don't see how a non-striped pool would perform better than a striped pool, since a non-striped pool is limited to the IOPS/throughput of one disk, but I digress.

    I would vote for some flavor of ZFS, either FreeNAS or ZFS on Linux. With the new feature coming to OpenZFS where you can expand a pool and have it rebalance, I can't think of any reason not to run ZFS. (OpenZFS on Twitter)
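
    The pool-expansion feature referenced above eventually shipped in OpenZFS as RAIDZ expansion. Assuming that feature and a hypothetical pool named `tank`, growing a raidz vdev by one disk looks like:

    ```shell
    # Attach one new disk (placeholder da6) to an existing raidz2 vdev.
    # Existing data is reflowed across the widened vdev in the background.
    zpool attach tank raidz2-0 da6

    # Watch the expansion/reflow progress
    zpool status tank
    ```

    Note that reflowed data keeps its original parity ratio; only newly written data uses the wider stripe, which is a known caveat of the feature.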
     
    #29
  10. nitrobass24

    nitrobass24 Moderator

    Joined:
    Dec 26, 2010
    Messages:
    1,081
    Likes Received:
    125
    It’s all about your use-case. For him, streaming a media file is sequential IO. So on a single disk this is no problem. Streaming a second file, if it’s on the same disk, the read pattern becomes random as it’s trying to read multiple files. However, in a non-striped RAID you have a chance the second stream is located on a second disk and you still have two sequential operations.

    In a striped scenario you are almost always going to be doing random reads.

    For media playback ZFS doesn’t buy you anything since ARC and even L2ARC are not read-ahead caching, so unless you are playing your file a second time it will always be pulled from disk. So you still end up being limited to single drive performance (assuming single vdev).


    Sent from my iPhone using Tapatalk
     
    #30
  11. CookiesLikeWhoa

    Joined:
    Sep 7, 2016
    Messages:
    111
    Likes Received:
    24
    I understand that. The question then is how many streams does he have at once? If it's regularly more than one, then it would seem that a striped array would be the clear winner. Even then, I really question if a single disk's sequential IO is greater than an array's random IO. Though I guess it depends on the number of disks and what type of array we are talking about. (A RAIDZ2 array with 2 vdevs, each consisting of 5 drives, would likely have greater random read IO than a single drive's sequential IO for media, I would think.)

    Again, I don't think performance is the right question here. I agree ARC/L2ARC would be useless for a media server, but that's not really the point of using a ZFS based file system for media. It's more about data integrity.

    If data integrity isn't a concern, then I can see why you would want to use JBOD. It would give the most raw storage; but I seem to recall the OP talking about parity earlier in the thread.
     
    #31
    Stux likes this.
  12. bitrot

    bitrot Member

    Joined:
    Aug 7, 2017
    Messages:
    95
    Likes Received:
    23
    One negative aspect of striped arrays when it comes to home media servers is power consumption, an often overlooked factor. In a non-striped array, only the disks containing the media being accessed need to spin, not all of them as is the case in a striped array. Depending on the number of disks (and the kind of disks you’re using) in your array, the power saved can be quite significant.

    Another advantage of non-striped arrays, particularly in comparison to ZFS pools, is the relative ease and cost effectiveness of expanding storage.
     
    #32
  13. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,683
    Likes Received:
    412
    :eek:
    I hope that works with raid z2/z3 too and I can finally ditch hardware raid:cool:
     
    #33
  14. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,273
    Likes Received:
    752
    For L2ARC you can enable read-ahead with a setting in /etc/system (Solarish):
    Code:
    set zfs:l2arc_noprefetch=0
    This is why an L2ARC can be an advantage even with a lot of RAM.
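
    For what it's worth, ZFS on Linux exposes what I believe is the same tunable as a module parameter, so the Linux equivalent would be something like:

    ```shell
    # Runtime: allow prefetched (sequential) reads to be cached in L2ARC
    echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

    # Persistent across reboots, via modprobe options
    echo "options zfs l2arc_noprefetch=0" > /etc/modprobe.d/zfs.conf
    ```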
     
    #34
  15. Joel

    Joel Active Member

    Joined:
    Jan 30, 2015
    Messages:
    807
    Likes Received:
    155
    RAIDZ2 has MUCH better READ performance than a single drive; its WRITE performance is limited by the number of vdevs.

    True statements all. ZFS pools have the other advantage of bitrot protection though. Not a huge factor for a video library perhaps, but for my DSLR photos it is something very attractive.
     
    #35
  16. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,933
    Likes Received:
    423
    If I'm reading 6 files off a single 8-disk RAIDZ2 array, vs. reading 6 files off 6 different drives in a non-striped array, you're saying the RAIDZ2 performance would be much better?
     
    #36
  17. markarr

    markarr Active Member

    Joined:
    Oct 31, 2013
    Messages:
    391
    Likes Received:
    101
    I know SnapRAID does, and I think a couple of others have bitrot protection. It's also significantly easier to expand a non-striped array: all you do is add the drive to the config and run a sync, and it's added. You can add disks one at a time, up to as many as you want.
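
    To show how little that expansion involves, here's a minimal SnapRAID sketch. All the paths and disk labels are placeholders:

    ```shell
    # /etc/snapraid.conf (placeholder paths):
    #   parity  /mnt/parity1/snapraid.parity
    #   content /mnt/disk1/snapraid.content
    #   data d1 /mnt/disk1/
    #   data d2 /mnt/disk2/
    #
    # To expand, append one line for the new drive, e.g. "data d3 /mnt/disk3/",
    # then:
    snapraid sync    # recompute parity to cover the new disk
    snapraid scrub   # optionally verify data against parity (bitrot check)
    ```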
     
    #37
  18. IamSpartacus

    IamSpartacus Well-Known Member

    Joined:
    Mar 14, 2016
    Messages:
    1,933
    Likes Received:
    423
    ZFS proponents often tout bitrot protection as a huge feature. While I definitely see it as an important consideration for enterprise/production servers and irreplaceable data, it's honestly not a factor at all for me when talking about a media server.

    For my VMs, ZFS is my choice. For my 50TB of media, not at all necessary. But then again I have gigabit internet, so the thought of having to "replace" my media isn't so daunting :cool:.
     
    #38
  19. Joel

    Joel Active Member

    Joined:
    Jan 30, 2015
    Messages:
    807
    Likes Received:
    155
    In this scenario, I am assuming you're talking about a video server playing movies?

    Hmm, in this hypothetical, maybe not, because now we're comparing sequential (non-striped array) vs. random (ZFS) reads. I suspect it would be fairly close though. In the real world, though, ZFS would still be more flexible.
    - Can you guarantee that all six files that you want to access will be on different drives?
    - What if three files are on a single drive?

    So in the end, you're asking what's best, and the answer is always "It depends..." and it's really something that only you can decide.
     
    #39
  20. Joel

    Joel Active Member

    Joined:
    Jan 30, 2015
    Messages:
    807
    Likes Received:
    155
    Agree, which is exactly why I said...

     
    #40