32TB ZFS Home Media Server


Benfish

New Member
Jul 15, 2014
Build’s Name: 32TB ZFS Home Media Server
Operating System / Storage Platform: ESXi with napp-it and OmniOS
CPU: Intel i7 920, 930, or 950 (have all three sitting around unused)
Motherboard: EVGA X58 (or new)
Chassis: NORCO RPC-4220 4U Rackmount Server Chassis w/ 20 Hot-Swappable SATA/SAS 6G Drive Bays
Drives: 20x 2TB SATA (looking at running RAID-Z2 for ~32 TB usable space - rough math below)
RAM: 16 GB DDR3 1600 MHz
Add-in Cards: 2x IBM ServeRAID M1015 (flashed to LSI 9211-8i IT firmware)
Power Supply: standard ~500 W ATX PSU
Other Bits: Blood, sweat, and tears!
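For anyone wondering where the ~32 TB comes from, here is a rough back-of-the-envelope check (a sketch only - it assumes a single 20-disk RAID-Z2 vdev and ignores ZFS metadata/slop overhead, so the real figure will come out a little lower):

```python
# Rough usable-capacity estimate for 20x 2TB drives in one RAID-Z2 vdev.
# "2TB" drives hold 2e12 bytes; usable space is usually reported in TiB.
drives = 20
parity = 2                      # RAID-Z2 reserves two drives' worth of parity
bytes_per_drive = 2 * 10**12

data_drives = drives - parity
usable_tib = data_drives * bytes_per_drive / 2**40

print(f"~{usable_tib:.1f} TiB usable")   # ~32.7 TiB, i.e. the "~32 TB" above
```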

Usage Profile: I plan to use the box to replace my current NAS, a Thecus N7700Pro that currently has 500 GB free of 16 TB. I work in the IT industry as a field engineer and have many of the above parts just sitting around doing nothing, so rather than spend a few grand on another out-of-the-box solution, I figured I would do it the hard way this time. The other option, if it all gets too hard, is to buy a second N7700Pro for $500, fill it with 2TB drives, and stack it with my current one.

The reason I would like to do the above instead of just adding a second N7700Pro is that it would let me run Plex Media Server on a Windows 2k11 VM that I can also use for other things like uTorrent, and possibly retire my current server running pfSense and fold that into the VM environment as well.

I am looking for opinions on the above option, including specifically:

Will the last-gen i7 have enough grunt for what I need?

Can you confirm my suspicion that 16 GB will not be enough RAM? (I have a heap of 4GB PC3L-10600R DIMMs that I don't otherwise have a use for, if I end up going with a server board.)

Will I have issues if I mix 2TB WD Green drives with 2TB IBM Storage 7.2k drives (rebadged Hitachi Ultrastar A7K2000) that have had their SAS interposers removed?

In addition to the above, I also have a fully specced Dell PowerEdge 2850, 12x IBM HS20 blades, and an IBM x3550 that I can steal some parts from - although I doubt I can get anything useful out of them!

Thanks for taking the time to read this far! I would appreciate any thoughts or suggestions on where to go from here.
 

Stanza

Active Member
Jan 11, 2014
Jeebus, that's one overspec'd media server :eek:

If all you want is to serve media files, download torrents, etc.:

Might I suggest a really cheap, low-powered motherboard with 2-4 GB of RAM, and keep your M1015s.

Run Xpenology on it, make an SHR-2 disk group (e.g. RAID 6), then make some volumes on that (partition it up however you like) to share out.

Keep all your other hardware for virtualization machines... connect them to the media server via NFS and live a simple life.

Whilst I love ZFS and all its benefits... it's way, way overkill (hardware-hungry-wise) for a simple "bunch a heap of drives together as a large pool" box to hold some media.

 

rubylaser

Active Member
Jan 4, 2013
+1

Another option is to use SnapRAID + mhddfs/AUFS for home bulk media storage. SnapRAID supports up to 6 parity disks, you only need to spin up one disk to read a file rather than the whole array as with traditional RAID, you can use disks that already have data on them, and you can mix disks of different sizes without losing the extra space on the larger ones. Also, since each disk stands alone, even if you lost both parity disks and a third disk, the remaining disks would still be viable and readable independently, so not all your data is lost.

Maybe I didn't understand your goal; if you want this to be an all-in-one virtualization/storage platform, then the hardware you have would be a good start (you'd want more RAM), but I would still move the VM storage to a pool of SSDs and use something like SnapRAID for the bulk storage.

Your hardware will be more than enough for transcoding multiple 1080p streams via Plex, so I wouldn't worry about "grunt".
 

Chuckleb

Moderator
Mar 5, 2013
It looks like the backplane consists of 5 sets of 4 drives. You'd need a SAS expander regardless of the number of M1015 cards, so maybe only use one card and an expander?
 

rubylaser

Active Member
Jan 4, 2013
The Norco 4220 only houses 20 hot-swap drives plus 2 non-hot-swap OS drives. I would assume he is planning on connecting 16 drives to the two M1015s and then the remaining 4 bays to the onboard SATA ports on the X58 motherboard via a reverse breakout cable. I think he should be fine for drive connectivity :)
 

Chuckleb

Moderator
Mar 5, 2013
Awesome! I wasn't aware of how the Norcos are designed. Yes, the reverse breakout cable solution works pretty well in that case!
 

gea

Well-Known Member
Dec 31, 2010
Using ESXi as the base allows you to use different operating systems for the NAS, the media server, and the firewall, where each system is a VM that you can easily copy, clone, or back up (and you can use the "best of all" for every task).

16 GB seems OK, as you can count it like this:
- use 1-2 GB for ESXi
- add 1-2 GB for the firewall OS
- add 3-4 GB for the Windows VM

That leaves about 8 GB for your ZFS NAS (I would never, ever go back to a filesystem without copy-on-write, snaps, or checksums, like ext4 or NTFS). As OmniOS needs about 1-2 GB for itself, you can use about 6 GB as read cache. More is faster, but a 6 GB read cache is very good - not needed for a single person watching a movie, but helpful if a second person watches the same movie with a delay, or if you want to work on secure storage with snaps and previous versions.
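A quick tally of that budget (just the upper end of gea's estimates above, nothing measured):

```python
# Adding up the per-VM RAM estimates above (GB, upper end of each range).
budget = {
    "ESXi host": 2,
    "firewall VM": 2,
    "Windows VM": 4,
    "OmniOS base": 2,
}
total_ram = 16
arc = total_ram - sum(budget.values())
print(f"left for the ZFS read cache (ARC): ~{arc} GB")  # ~6 GB
```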

What you may also do is use separate pools for the VMs and the media files, as this allows the disks of the media pool to spin down.
 

rubylaser

Active Member
Jan 4, 2013
It's good to have all these options :) I love ZFS for my VM storage and for documents that need versioning, but I have found that the checksumming, scrubbing, single-disk spin-up, lightweight system requirements, and ease of adding a single disk at a time made SnapRAID a simple decision for my media storage.

As Gea says, ESXi makes for an easy virtualization platform (other good options are Hyper-V, KVM, Xen, or Proxmox (KVM + OpenVZ)). Another option could be to use Docker for things like uTorrent and Plex and bypass virtualization altogether (this wouldn't be a good solution if you decided you really did want to move pfSense over to a VM environment).

Like I said, options are good, and there have been a number of them in this thread.
 

Chuckleb

Moderator
Mar 5, 2013
I am running Docker instances of Plex and SABnzbd on my main fileserver; it works quite well for simple things. If you want to try it, check out timhaak's versions - I wrote up the setup docs for him.
 

Benfish

New Member
Jul 15, 2014
Thanks for all the thoughts and suggestions! I had not come across SnapRAID, but it looks great - only having to spin up one drive for contiguous reads seems like a good way to lower the power bill on this setup. Will this avoid any issues with mixing WD Green drives and the IBM near-line SAS drives in the same pool?

Are there any issues with running SnapRAID in a VM?

I was planning on using two IBM ServeRAID cards and the planar (onboard SATA) for the rest, but I could just add a third card. Would SnapRAID be able to see the drives through ESXi? Should the HBAs be flashed with IT or IR firmware?

If ESXi is not an option, then Docker looks like it could solve a few of my problems.
 

rubylaser

Active Member
Jan 4, 2013
1. Yes, SnapRAID will work fine with a mix of Green drives, because unlike a traditional RAID system you don't need to worry about TLER with it.
2. Yes, SnapRAID will work fine in a VM (I would pass the disks through to the virtual machine). Unfortunately, none of those i7 CPUs support VT-d, so passing a controller straight through is not an option.
3. I would flash your M1015s to IT mode with no Option ROM (not required, but it's cleaner and makes the boot process faster).
 

Benfish

New Member
Jul 15, 2014
Cheers for the answers, Rubylaser. What issues will running ESXi without VT-d cause? Can I actually run three IBM ServeRAID controllers, or will the third be running at such a slow PCIe speed that it's not worth it?
 

Chuckleb

Moderator
Mar 5, 2013
The EVGA X58 boards that I have seen have 3x PCIe x16 slots, so you should have enough bandwidth available to power all the cards at once. The M1015s are PCIe x8 cards, so they will fit and should not throttle down. IIRC, these boards have about 40 PCIe lanes, so there's no problem there. Some consumer boards only accept video cards in the first PCIe slot (closest to the CPU), but I don't think this is one of those. I think I ran this board in some of my older servers and used a simple PCI video card for the VGA since it doesn't have onboard video.

Yes, roughly 40 PCIe lanes, and you can run up to 4 slots at x8, so you're set for 3 cards.
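To spell out the lane math (using the rough 40-lane figure quoted above; the exact routing depends on the board's block diagram):

```python
# Quick PCIe lane check for three M1015 HBAs on an X58 board.
lanes_per_card = 8       # the M1015 is a PCIe 2.0 x8 card
cards = 3
lanes_available = 40     # approximate figure quoted above

lanes_used = lanes_per_card * cards
print(f"{lanes_used} of ~{lanes_available} lanes used")  # 24 of ~40
```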

Link to block diagram
 

rubylaser

Active Member
Jan 4, 2013
VT-d is the most transparent method of passing disks through to a virtual machine via a passed-through controller. Since that isn't an option here, the next best choice is a physical RDM. This adds a small layer of abstraction, but still allows you to easily get at SMART data, etc. Here is a pretty thorough writeup on the different methods of providing disks to a virtual machine in ESXi:

Storage deployment on VMware ESXi: IOMMU/Vt-d vs Physical RDM vs Virtual RDM vs VMDK - FlexRAID