Marriage = Storage Consolidation


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Some of you may remember my Final Bachelor Build from last year. Well, I'm coming up on my 1-year anniversary now and you know what marriage is all about...SACRIFICE!

In all seriousness, I'd like to consolidate my home storage as much as I can, as I'm running more hardware than I need or am making use of. I can easily eliminate one node by combining my VM storage and bulk storage (media, surveillance video, VM snapshots, software, personal data, etc.) into a single physical server.

Currently I'm running my VM storage on a bare-metal FreeNAS server that is shared out to my two ESXi hosts. I have a separate bare-metal UnRAID server that houses all my other data. Now that I'm in the process of upgrading my spinners from 8TB Seagate SMRs to 10TB WD Gold drives, combining my storage is more viable than ever.

Putting everything on FreeNAS would be one way to go, as it would be easy to simply create a new RAIDZ2 pool for my spinners and be done with it. However, the whole drama around FreeNAS Corral has left a bit of a bad taste in my mouth, and I'm a little concerned about FreeNAS' future. So before I decide to go that route, I wanted to get some opinions on what comparable alternatives are out there.

Ideally I'd like to up my storage's performance if I'm going to take the time to reconfigure things so I'm not considering non-striped RAID setups (UnRAID/SnapRAID/FlexRAID) at this time.
 

cheezehead

Active Member
Sep 23, 2012
723
175
43
Midwest, US
IMO, FreeNAS will still be fine... given the drive size I'm a bit leery about RAIDZ2, though; I'm looking at rebuilding my setup on RAID1 given the current pains with resilvering drives.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
IMO, FreeNAS will still be fine... given the drive size I'm a bit leery about RAIDZ2, though; I'm looking at rebuilding my setup on RAID1 given the current pains with resilvering drives.
What drive sizes are you using, and what is your resilvering time? I really need the 60TB of space, but adding a 9th drive at this point will be tough.
 

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
With 5x 4-disk RAIDZ1 vdevs in FreeNAS, rebuild time is around 4-6 hours. I just replaced 4x 7200 RPM drives with 5400 RPM ones.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Ok, thinking about this more: given that roughly 60-70% of the data on my array is media, non-striped RAID still seems like the way to go for my use case.

However, in the interest of running both my VM and bulk storage on the same physical box, does something like SnapRAID (calling @rubylaser) work in, say, a Linux VM with simple HBA passthrough? The biggest reason I haven't gone with SnapRAID in the past is the lack of caching. But the more I think about it: if SnapRAID doesn't calculate parity on the fly and instead relies on manual/scheduled syncs, does that mean writes to the array will simply perform at the speed of a single drive? If that's the case, it should be more than enough for my needs.
 

bitrot

Member
Aug 7, 2017
95
25
8
He probably means XPEnology for the software, the unofficial Synology DSM for non-Synology hardware.

Worth a try, but updates can be a bit tricky. I personally stick with unRAID for my media server needs. With enough cache space, the relatively low array performance doesn't really matter in everyday use.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
SnapRAID only builds parity blocks when you sync, so read and write speeds will be single-disk performance (you can always build RAID 0 underneath, though).

To protect against data loss you can sync every night, which should provide good protection for your media files.

Files other than media can go on a RAIDZx setup. That should lead to a much smaller pool and help reduce the rebuild cost.
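If it helps, here's a rough sketch of what that nightly job could look like as a small Python wrapper run from cron. The script name, scrub percentage, and log output are just placeholders; it assumes the snapraid binary is on PATH and your snapraid.conf is already set up.

```python
#!/usr/bin/env python3
# Sketch of a nightly SnapRAID maintenance job, meant to be run from cron.
# Assumes the snapraid binary is on PATH and snapraid.conf is already configured.
import subprocess
import sys

def run(args):
    """Run one snapraid subcommand and abort the job if it fails."""
    print("running:", " ".join(args), flush=True)
    if subprocess.run(args).returncode != 0:
        sys.exit(1)

run(["snapraid", "sync"])              # update parity for everything written since the last sync
run(["snapraid", "scrub", "-p", "5"])  # re-verify a small slice of existing data each night
```

Scheduled with something like `0 3 * * * /usr/local/bin/snapraid-nightly.py` (path hypothetical), anything written during the day is only unprotected until the next sync runs.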
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Ok, thinking about this more: given that roughly 60-70% of the data on my array is media, non-striped RAID still seems like the way to go for my use case.

However, in the interest of running both my VM and bulk storage on the same physical box, does something like SnapRAID (calling @rubylaser) work in, say, a Linux VM with simple HBA passthrough? The biggest reason I haven't gone with SnapRAID in the past is the lack of caching. But the more I think about it: if SnapRAID doesn't calculate parity on the fly and instead relies on manual/scheduled syncs, does that mean writes to the array will simply perform at the speed of a single drive? If that's the case, it should be more than enough for my needs.
Yes, HBA passthrough under VMware or Proxmox works great with SnapRAID. I ran my array like that for a long time (I'm now back on a dedicated fileserver). With SnapRAID, a sync can use multiple disks at once; I often see my syncs going at 1800 MB/s, so it's not slow.

Also, trapexit has published directions for building a write-cache volume with mergerfs, so you can have a write cache like you're used to with UnRAID. It's not as simple to set up, but it's not too tricky either.

GitHub - trapexit/mergerfs: a featureful union filesystem

Reads still come from the main array, but as you know, modern 10TB disks move data very quickly on large sequential reads.
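To give an idea of the moving parts of that mergerfs cache arrangement (not trapexit's actual script, just a sketch with made-up paths): new writes land on an SSD "cache" branch, and a scheduled mover later migrates anything that hasn't been touched in a while over to the slow pooled branch, while mergerfs keeps presenting both branches as one tree.

```python
#!/usr/bin/env python3
# Sketch of the scheduled "mover" a mergerfs tiered-cache setup relies on.
# /mnt/cache (SSD branch) and /mnt/slow (spinning-disk branch) are hypothetical;
# adjust the paths and age threshold for your own layout.
import os
import shutil
import time

CACHE_BRANCH = "/mnt/cache"
BACKING_BRANCH = "/mnt/slow"
MAX_AGE_DAYS = 14

cutoff = time.time() - MAX_AGE_DAYS * 86400

for dirpath, _, filenames in os.walk(CACHE_BRANCH):
    for name in filenames:
        src = os.path.join(dirpath, name)
        if os.path.getmtime(src) >= cutoff:
            continue  # still hot, leave it on the SSD branch
        rel = os.path.relpath(src, CACHE_BRANCH)
        dst = os.path.join(BACKING_BRANCH, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)  # the merged mount keeps showing the same path
```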

 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
SnapRAID only builds parity blocks when you sync, so read and write speeds will be single-disk performance (you can always build RAID 0 underneath, though).

To protect against data loss you can sync every night, which should provide good protection for your media files.

Files other than media can go on a RAIDZx setup. That should lead to a much smaller pool and help reduce the rebuild cost.
Thanks!

Yes, HBA passthrough under VMware or Proxmox works great with SnapRAID. I ran my array like that for a long time (I'm now back on a dedicated fileserver). With SnapRAID, a sync can use multiple disks at once; I often see my syncs going at 1800 MB/s, so it's not slow.

Also, trapexit has published directions for building a write-cache volume with mergerfs, so you can have a write cache like you're used to with UnRAID. It's not as simple to set up, but it's not too tricky either.

GitHub - trapexit/mergerfs: a featureful union filesystem

Reads still come from the main array, but as you know, modern 10TB disks move data very quickly on large sequential reads.
@rubylaser Now that you're back on a dedicated fileserver, what are you using for your VM storage? I'm actually giving strong consideration to moving back to local datastores on my ESXi boxes for simplicity.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Thanks!



@rubylaser Now that you're back on a dedicated fileserver, what are you using for your VM storage? I'm actually giving strong consideration to moving back to local datastores on my ESXi boxes for simplicity.
I'm using a local ZFS pool on my Proxmox node at home (7 striped mirrors of 200GB HGST SAS SSDs). It's super fast and just easier for me to manage with my limited free time at home.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
I'm using a local ZFS pool on my Proxmox node at home (7 striped mirrors of 200GB HGST SAS SSDs). It's super fast and just easier for me to manage with my limited free time at home.
Ahh, you only have a single compute node, huh? Having 2+ complicates things a bit because I need to be able to move my VMs between nodes from time to time. But having separate boxes for bulk and VM storage at home is just becoming overkill at this point, so I want to shrink my footprint.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Ahh, you only have a single compute node, huh? Having 2+ complicates things a bit because I need to be able to move my VMs between nodes from time to time. But having separate boxes for bulk and VM storage at home is just becoming overkill at this point, so I want to shrink my footprint.
I do at home. My home Proxmox node is just for goofing around with :) I have a 3-node Proxmox cluster colocated at a local datacenter, where I use Ceph for shared storage. I host all my production VMs there.