Building a new FreeNAS box, thoughts/feedback?


GCM

Active Member
Aug 24, 2015
Nothing purchased yet; this is all in the planning phase.

So, any feedback would be greatly appreciated!

Here is the plan:

Chassis/barebones: SYS-6018R-TD8

HD: 8x UltraStar 7K4000

SSD: 2x 240GB (1 for write cache, 1 for read cache)

RAM: 64 GB

CPU: E5-2603V3

Thoughts?
 

BlueLineSwinger

Active Member
Mar 11, 2013
Without more detail, it's kinda hard to say. What's the environment? How are you planning to use it? What applications? How many users? Are you planning to run any plugins/jails?
 

GCM

Active Member
Aug 24, 2015
Without more detail, it's kinda hard to say. What's the environment? How are you planning to use it? What applications? How many users? Are you planning to run any plugins/jails?
Graphic design storage (video, assets, etc.), about 10 users. No plugins planned as of now.

Why do you need 64 gig of ram? :)
Using ZFS, from what I've read, the more RAM the better.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
If you have the money, 64GB of RAM is nice, but if you don't, 1GB per TB is plenty (8 x 4TB = 32TB, so 32GB).

I would prioritise network performance over an extra 32GB of RAM.

A 10Gbit NIC and suitable transceivers/cable/switch would be a worthwhile investment for that number of users and that workload.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
It might actually be worthwhile going all 10Gbit, with multiple teamed NICs in the server (and in the clients) if you are hitting it with video, depending on whether you plan to work directly from it or not.

It would also be worth considering fast-IOPS drives for the cache, e.g. PCIe SSDs.
 

GCM

Active Member
Aug 24, 2015
Thanks for the tips. Any chassis/mobo combo you can recommend?

The main issue is pricing; the client is trying to come in at $5k or under.
 

Deslok

Well-Known Member
Jul 15, 2015
Why the separate read and write caches? Also, from my understanding of ZFS, that CPU might be a bit slow in that environment.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Graphic design storage (video, assets, etc.), about 10 users. No plugins planned as of now.



Using ZFS, from what I've read, the more RAM the better.
IMHO, this needs further explaining before we can give you accurate guidance.

Are projects going to be "worked on" from the storage or placed there when done?

Are all 10 users going to be doing video editing and graphic design on this storage at once, or could they all be backing up at 4PM before they head home? Could there be streaming from storage to a conference room and other users while others are doing intensive tasks on the storage at the same time?

Can you explain a bit more how it will be utilized exactly?

Is sound an issue for chassis? What about size? What about drive capacity?
 

GCM

Active Member
Aug 24, 2015
IMHO, this needs further explaining before we can give you accurate guidance.

Are projects going to be "worked on" from the storage or placed there when done?

Are all 10 users going to be doing video editing and graphic design on this storage at once, or could they all be backing up at 4PM before they head home? Could there be streaming from storage to a conference room and other users while others are doing intensive tasks on the storage at the same time?

Can you explain a bit more how it will be utilized exactly?

Is sound an issue for chassis? What about size? What about drive capacity?

Are projects going to be "worked on" from the storage or placed there when done?
It's currently a mix. It's mostly Photoshop/InDesign work, but some Premiere/After Effects work will be done as well. Same goes for working locally vs. off the NAS.

Are all 10 users going to be doing video editing and graphic design on this storage at once, or could they all be backing up at 4PM before they head home? Could there be streaming from storage to a conference room and other users while others are doing intensive tasks on the storage at the same time?
Most likely the answer is no. Use will be sporadic, perhaps with a few back-to-back sessions, but the real-world example would probably be 2-3 concurrent users, with one of them being video.

Is sound an issue for chassis? What about size? What about drive capacity?
If by issue you mean fans blazing all day, then yes. If it's the usual hum of an average server, no. And we're shooting for a minimum of 18TB of usable space.
 

GCM

Active Member
Aug 24, 2015
Why the separate read and write caches? Also, from my understanding of ZFS, that CPU might be a bit slow in that environment.
I'd like to keep as much space as possible for the caches. Originally, I was planning on 2x read and 1x write.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I'd change the chassis for sure then.

If you're just using it as a file server for image manipulation and asset storage, then the rest is fine and likely pretty overkill.

Another option may be:
- E5-1620 V3
- 32GB RAM
- No Cache Drives

Monitor usage and add cache drives and ram when/where needed.
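Very roughly, that "monitor first, add later" approach looks something like this from a FreeBSD shell (just a sketch; "tank" and the ada* device names are placeholders, and FreeNAS normally handles all of this through the GUI):

Code:
# Watch per-vdev throughput/IOPS to see whether the spindles are actually the bottleneck
zpool iostat -v tank 5

# ARC size and hit/miss counters (FreeBSD sysctls); lots of misses suggests more RAM or an L2ARC may help
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# Cache (L2ARC) and log (SLOG) devices can be added to a live pool later if needed
zpool add tank cache ada1
zpool add tank log ada2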
 

GCM

Active Member
Aug 24, 2015
I'd change the chassis for sure then.

If you're just using it as a file server for image manipulation and asset storage, then the rest is fine and likely pretty overkill.

Another option may be:
- E5-1620 V3
- 32GB RAM
- No Cache Drives

Monitor usage and add cache drives and ram when/where needed.
Good idea, but they're on the other side of the country ;)

Trying to hammer out a suitable setup that will require minimal hardware tinkering once it's deployed.
 

Keljian

Active Member
Sep 9, 2015
Melbourne Australia
You will find it's possible to wire up the server with 10GbE and put a MikroTik CRS226 switch into production for under $500 total, even with optical links, if eBay is an option for the (dual Chelsio) network card.

The real question is: are people going to be editing video that lives on the server, using the server as their storage? If the answer is yes, then fast storage is justified, as is a fast network.
 

GCM

Active Member
Aug 24, 2015
You will find it's possible to wire up the server with 10GbE and put a MikroTik CRS226 switch into production for under $500 total, even with optical links, if eBay is an option for the (dual Chelsio) network card.

The real question is: are people going to be editing video that lives on the server, using the server as their storage? If the answer is yes, then fast storage is justified, as is a fast network.
Yeah, I was looking at the MikroTik switch. This is one of my first "low budget" builds, so I'm a bit out of my element!
 

miraculix

Active Member
Mar 6, 2015
I recommend reading Cyberjock's ZFS noob guide here. He's the official FreeNAS Mr. Crankypants but he provides very helpful info.

I may have totally misread/misunderstood something (hopefully someone corrects me if I did), but I think these are the main points...

  • ZIL
    • There's no "write cache" SSD per se. There might be write caching in RAM but I don't recall for sure.
There is the concept of a SLOG for sync writes: a dedicated ZIL device that's faster than the pool it supports.
      • Use an SSD ZIL with good write endurance (or underprovision it) and write performance to support a pool of HDDs
      • Use an SSD ZIL with *really* good write performance/endurance to support a pool of SSDs (and maybe higher performance mirrored vdev setups).
    • The need for a ZIL device depends on what you're using FreeNAS for
      • NFS syncs writes by default, so definitely add a ZIL if you use NFS.
      • vSphere via iSCSI does not sync writes by default.
      • I'm not sure about CIFS/SMB... anyone?
    • Check the FreeNAS forum for specific ZIL device recommendations. I noticed Intel S3700 is recommended very often but there are other more exotic possibilities like ZeusRAM.
  • L2ARC
    • Primary read cache is in RAM ("ARC") and secondary read cache ("L2ARC") is optional.
    • Any SSD with good read performance is probably fine for L2ARC though there are specific recommendations (and specific SSDs to avoid)
    • Increasing RAM for more ARC generally provides better performance gains than adding L2ARC.
    • However, adding L2ARC actually increases RAM consumption, and you can kill performance if you are not careful and don't have adequate RAM.
    • 1GB RAM per 5GB L2ARC is the rule of thumb, so for a 240GB L2ARC you've already exceeded 48GB minimum... therefore go with 64GB RAM or more if you do decide to use that SSD for L2ARC.
  • vdevs
    • A ZFS pool is made up of one or more vdevs
Each vdev consists of multiple physical drives, and an individual vdev corresponds roughly to the non-ZFS RAID volumes you may be more familiar with. The most common examples:
      • Z1 uses 1 parity drive, similar to RAID5
      • Z2 uses 2 parity drives similar to RAID6
      • A pool with a single mirrored vdev is effectively RAID1
      • A pool with multiple mirrored vdevs is effectively RAID10 since there's striping across those multiple mirror vdevs. This arrangement is most recommended for situations requiring high performance, high availability, or both (iSCSI based vSphere datastore for VMs, 10GE networking etc.)
    • Just remember that performance (specifically IOPS) is constrained to the slowest disk within the vdev, striping happens across multiple vdevs in a pool, and striping is good (increases performance). Therefore for 8 drives...
      • A pool of 4 mirror vdevs (2 drives each) yields the best performance but lowest usable capacity.
      • A pool of two Z1 or Z2 vdevs (4 drives per vdev) is an alternative with lower performance but higher usable capacity (Z2 yielding less usable capacity than Z1 due to the extra parity disk)
Adding one more drive (nine total) to your system might allow a decent performance/capacity compromise: one pool striping across three Z1 vdevs (3 drives per vdev). This is something I want to test myself, but for now I don't know how well the performance would compare to a pool of multiple mirrored vdevs. (Rough zpool commands for these layouts are sketched right after this list.)
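To put rough commands to the terms above (again, just a sketch; "tank" and the da*/ada* device names are made up, and FreeNAS wraps all of this in its volume manager GUI):

Code:
# 8 drives as a pool of 4 mirror vdevs (RAID10-style: best IOPS, ~50% usable)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# ...or the same 8 drives as two 4-disk RAIDZ2 vdevs (more capacity, fewer IOPS)
zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7

# Dedicated ZIL/SLOG device (only helps sync writes, e.g. NFS)
zpool add tank log ada0

# L2ARC read cache (remember it also consumes RAM to index)
zpool add tank cache ada1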
Good luck!

EDIT: attempted to make this more readable :p
 

markarr

Active Member
Oct 31, 2013
Also, to keep with ZFS best practices, follow the guidelines below when making vdevs for best performance:

RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev

You can make them whatever size you want, but stripe performance starts to drop; FreeNAS by default will only show you the above options.
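As a quick raw-capacity sanity check against those widths with 4TB drives (before ZFS overhead and the usual advice to leave free space), something like this works from any shell:

Code:
# raw TB = (disks per vdev - parity disks) * number of vdevs * TB per disk
echo $(( (8 - 2) * 1 * 4 ))   # one 8-wide RAIDZ2  -> 24 TB raw
echo $(( (6 - 2) * 1 * 4 ))   # one 6-wide RAIDZ2  -> 16 TB raw
echo $(( (4 - 2) * 2 * 4 ))   # two 4-wide RAIDZ2  -> 16 TB raw
echo $(( (2 - 1) * 4 * 4 ))   # four 2-way mirrors -> 16 TB raw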
 

GCM

Active Member
Aug 24, 2015
Thank you everyone for the information!

I've changed the config around a ton, based on reading and based on hardware I already have.

I already had a Lenovo TS440 NIB, so I'll be utilizing that. It'll be much quieter than the last option.

So here is what I have planned out:


Lenovo TS440 (with 4 extra hot-swap bays)
32GB RAM
8x 7k4's (Or perhaps I can step up to 5TB?)
Chelsio T4 variant

@markarr From my understanding, an 8-disk RAIDZ2 shouldn't have too much of a performance impact. Unless I up my drive capacity to 6TB, I'd fall short of my target storage number.
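For my own sanity, the back-of-the-envelope numbers on that layout (raw TB; real usable space will be somewhat lower after ZFS overhead, the TB-to-TiB conversion, and leaving ~20% free):

Code:
echo $(( (8 - 2) * 4 ))   # 8x 4TB RAIDZ2 -> 24 TB raw (~21.8 TiB)
echo $(( (8 - 2) * 5 ))   # 8x 5TB RAIDZ2 -> 30 TB raw
echo $(( (8 - 2) * 6 ))   # 8x 6TB RAIDZ2 -> 36 TB raw

So 4TB drives clear the 18TB target on paper, but it gets tight once the pool starts filling up; 5TB or 6TB drives would leave more headroom.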