NAS on Baremetal or VM...


OBasel

Active Member
Dec 28, 2010
494
62
28
I'd use Windows. More wife friendly since you stated it is a goal. I'd give up 200MB/s for quiet at home.
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
I'm interested in Plex because I've read it plays well with Roku, which my family happens to like. I don't know whether the same is true of mediatomb or not. Again, there are so many different alternatives there just isn't time to properly vet and rank them all.
This is true for us too. We use Roku as our only source for TV/shows/movies/etc., and Plex is a built-in channel, which makes it look SUPER easy so far.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
I'd use Windows. More wife friendly since you stated it is a goal. I'd give up 200MB/s for quiet at home.
The wife uses the shares/iSCSI, not the actual OS or anything related to it.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
Why do you think a hardware RAID can give performance similar to a modern 4+ core Xeon with high-speed RAM in the high-gigabyte range? The extra CPU load of a modern ZFS system due to the data XOR is near zero. Hardware RAID is mainly a relic from the age when CPU performance was limited, server RAM was a few hundred megabytes, and software RAID options were slow and not really a professional option.

But the main reason for Sun to develop a completely new way of thinking about storage was security and scalability of storage up to the petabyte/exabyte range. Price was never a concern, because ZFS was targeted at datacenter use to compete against solutions like NetApp with a very similar feature set.

These are the problems ZFS addresses:
- Silent data errors and corruption due to statistical long-term storage degradation and disk, backplane, cable, controller and driver problems.
The answer of ZFS is end-to-end checksumming of all data and metadata, with auto-repair on read or scrub.

While you can run ZFS on top of a hardware RAID, this repair option is only available if you use ZFS not only as a filesystem but also use its software RAID features.
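In practice a scrub is just one command; roughly like this (the pool name "tank" below is only a placeholder):

# read every block in the pool and verify it against its checksum
zpool scrub tank

# show scrub progress and any checksum errors that were found and repaired
zpool status -v tank

# reset the error counters after a repaired or replaced device
zpool clear tank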

Inconsistent filesystems. A power outage during a write can damage a filesystem like NTFS or ext4.
The answer is Copy on Write, where every data block is written newly (no inline update of data). Only after the whole data block is written are the pointers updated, so a write either completes fully or not at all.
ZFS adds an online scrubbing option: you do not need to take the pool offline to check data integrity, which can take days or a week on large arrays. Another result of Copy on Write are snaps. These are frozen states of former data (or bootable system environments), taken without delay and without a practical limit on the number of snaps.
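A snapshot is also a single command; a small sketch with placeholder dataset and snapshot names:

# freeze the current state of a filesystem; costs no extra space up front
zfs snapshot tank/media@before-migration

# list snapshots, and roll back if something went wrong
zfs list -t snapshot
zfs rollback tank/media@before-migration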

Scalability of capacity and performance.
While ZFS may be slower on a single disk than, for example, ext4 due to the extra security, ZFS scales very well with the number of data disks regarding sequential performance and with the number of RAID arrays (ZFS calls them vdevs) regarding IOPS. There is no practical limit on capacity or the number of files, as ZFS is a 128-bit filesystem.

Storage virtualisation
All disks and RAID arrays/vdevs together build a pool on which you create filesystems (like partitions) that can grow dynamically up to the pool size. You limit or guarantee space with quotas and reservations.
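As a sketch of that layering (disk and dataset names below are only placeholders):

# build one pool from two raidz2 vdevs; ZFS stripes across both vdevs
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  raidz2 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

# filesystems grow dynamically up to pool size; cap one with a quota,
# guarantee space for another with a reservation
zfs create tank/media
zfs set quota=10T tank/media
zfs create tank/vms
zfs set reservation=2T tank/vms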

ZFS and its ideas are storage for the next decade. Even Microsoft is adding ZFS-like features to ReFS. The same is the case with btrfs.

Let's talk about the problems of hardware RAID.
The main problem, beside limited performance and cache, is the write-hole problem. On a power outage during a write, mirrored disks may end up updated differently, or RAID 5/6 stripes may be left inconsistent. You can, or rather must, add a BBU to limit this problem, but you will never achieve the same quality as when you avoid the problem entirely with Copy on Write.

You are tied to a specific controller. If you do not have a compatible spare controller, you are lost when it fails.

Limited scalability to really big data.
While you can add disks with expanders, a hardware RAID does not really scale to petabytes.

Additional advantages of ZFS
- Superior RAM cache (ARC) that uses all free RAM; you can extend it with an SSD when needed (L2ARC). See the sketch after this list.
- Superior logging options for secure sync writes on dedicated high-speed log devices.
- Easy way to transfer or update filesystems, locally or to a remote system.
- Easy management, as ZFS combines filesystem, volume management, RAID management and share management.
- Available on Solaris and its free forks like OmniOS, where it comes from, as well as on BSD, OSX and Linux.
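The cache and log devices from the first two points are attached like this (device names are placeholders):

# add an SSD as L2ARC read cache and another SSD as a dedicated log (SLOG) device
zpool add tank cache c2t0d0
zpool add tank log c2t1d0

# confirm the layout, including cache and log devices
zpool status tank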

Only a few aspects.
Amazing information and explanation. I greatly appreciate the time you took to write all that, and it's actually making me think more about using ZFS for the HD RAID6 array, with triple parity even, because, well, I could do that if I went ZFS. I also have additional RAM and could run 152GB instead of 128GB for some extra cache. You're opening my eyes more.

Is there any performance increase/decrease when using ZFS with different controllers? Obviously SAS1, SAS2, SAS3 controllers/expanders/backplanes will provide limited throughput based on their spec, but is ZFS itself limited by anything?

What about SSD arrays, how does ZFS handle those? I imagine I'm still better off using my 12Gb/s RAID card for 12Gb/s SSDs; if that's the case, I assume I can pass the entire thing through to OmniOS and make the iSCSI targets from there?

What about adding additional drives to the array pools?

What about iSCSI targets, is that all done in OmniOS with ZFS as the filesystem, or is it an add-on?

Do you run your media center/server stuff in OmniOS too or in another VM with another OS?

Has anyone made the onboard LSI 2108/2208 based controller 'pass through' instead of "RAID MODE" for ZFS? I have 7 PCIe slots but plan to cram at least 4 things in there and, as mentioned, would like to avoid an extra HBA/RAID card if not needed.

With the parity drives, do you guys run the same setup as I would with hardware RAID, or is there a benefit to using a faster parity drive or parity drive pool?

Just to be 100% clear, you are suggesting OmniOS ZFS (file system) + napp-it, all in a VM, correct?


I need to decide, and get this new RAID6 array live, data migrated and going today/tomorrow.
 

NeverDie

Active Member
Jan 28, 2015
307
27
28
USA
I realize this may sound cavalier, but you may want to just download and install napp-it so you can give it a spin. You can be up and running in literally just a few minutes.
 
  • Like
Reactions: Patrick and T_Minus

britinpdx

Active Member
Feb 8, 2013
367
184
43
Portland OR
Has anyone made the onboard LSI 2108/2208 based controller 'pass through' instead of "RAID MODE" for ZFS?
There's information over at Calomel.org that documents testing done with an LSI 9265-8i, with the author stating "You need to setup each disk as a separate RAID0 array in the LSI raid controller. This is similar to a JBOD mode, but this method allows us to use all the caching and efficiency algorithms the LSI card can offer".
In that write-up they mention using a single 9265-8i but testing up to 24 drives, so I presume the "Supermicro RAID 24 slot chassis" that they mention must have an expander?

NOTE: I've never implemented ZFS yet; I'm a total noob trying to research and understand ZFS options and trade-offs, and this was just one of the sources that I came across.
 

Deci

Active Member
Feb 15, 2015
197
69
28
It isn't a dedicated parity drive (like unRAID etc.); it's the same as hardware RAID in that all the disks contain parity data. You don't add additional drives to an existing vdev, you add additional vdevs to the pool, and the pool acts as a RAID 0 of all the vdevs within it. You will notice that without an SSD log device (a 200GB S3700 is a cheap option for this), write speeds will be terrible when using it as VM storage over iSCSI/NFS with sync writes enabled (for testing you can disable sync, but I wouldn't suggest it long term).

iSCSI is done via Comstar in the napp-it web UI; there are a heap of guides out there to step you through it if it looks confusing.
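For reference, the CLI steps that the napp-it menus wrap look roughly like this (zvol name, size and the GUID are placeholders):

# create a zvol as the backing store for an iSCSI LUN
zfs create -V 500G tank/vmstore

# register it as a COMSTAR logical unit; note the GUID it prints
stmfadm create-lu /dev/zvol/rdsk/tank/vmstore
# make the LU visible to initiators (paste the GUID from the step above)
stmfadm add-view 600144f0xxxxxxxxxxxxxxxxxxxxxxxx

# enable the iSCSI target service and create a target
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target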
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
I realize this may sound cavalier, but you may want to just download and install napp-it so you can give it a spin. You can be up and running in literally just a few minutes.
I've got it already, as well as various other software. I plan to do what you said with them, and just "look" into what I can do :) I appreciate the suggestion as it really does make the most sense.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
There's information over at Calomel.org that documents testing done with an LSI 9265-8i, with the author stating "You need to setup each disk as a separate RAID0 array in the LSI raid controller. This is similar to a JBOD mode, but this method allows us to use all the caching and efficiency algorithms the LSI card can offer".
In that write-up they mention using a single 9265-8i but testing up to 24 drives, so I presume the "Supermicro RAID 24 slot chassis" that they mention must have an expander?

NOTE: I've never implemented ZFS yet; I'm a total noob trying to research and understand ZFS options and trade-offs, and this was just one of the sources that I came across.
Awesome, I thought this could be another option. Yes, the 24-bay chassis have a SAS1 or SAS2 expander, maybe the newer ones SAS3.
I've done the whole single-drive RAID0 thing with the controller, so I know exactly what you mean too.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
It isn't a dedicated parity drive (like unRAID etc.); it's the same as hardware RAID in that all the disks contain parity data. You don't add additional drives to an existing vdev, you add additional vdevs to the pool, and the pool acts as a RAID 0 of all the vdevs within it. You will notice that without an SSD log device (a 200GB S3700 is a cheap option for this), write speeds will be terrible when using it as VM storage over iSCSI/NFS with sync writes enabled (for testing you can disable sync, but I wouldn't suggest it long term).

iSCSI is done via Comstar in the napp-it web UI; there are a heap of guides out there to step you through it if it looks confusing.
Great info. The RAID6 is just for storage and media. VMs will go on a RAID10 of 4x 200GB S3700. I have some HGST SAS SSDs I was planning to use for cache, but that will depend on performance; I think their endurance is ~4PB for the sizes I have. Other arrays will be SSD too, not sure if I should use ZFS for those or pass through hardware RAID, I guess it depends on the SSD. I want to use the expander & hot swaps, so anything NOT 12Gb/s will go that route.

I also posted another thread about snagging the 1 missing piece of RAM; then I would have 192GB in this box, so I could dedicate a good 64GB of RAM to cache if "dedicated" cache is an option.
 

Deci

Active Member
Feb 15, 2015
197
69
28
In a home situation it's unlikely that you will need 64GB of RAM cache unless the most frequently accessed data needs to be super fast. I would suggest giving the storage VM at least 16-24GB as a good baseline though.

So to make your RAID 10 within ZFS, you would make a pool with 2 disks in a mirror, then expand that pool by adding a second vdev of 2 disks in a mirror, and you end up with a pool that looks like the layout sketched below (that pool just has a few more mirrors added to it).
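Something along these lines (placeholder device names; napp-it does the same through its menus):

# start with a single mirrored pair
zpool create ssdpool mirror c3t0d0 c3t1d0

# grow the pool by adding a second mirror vdev; writes stripe across both mirrors
zpool add ssdpool mirror c3t2d0 c3t3d0

# zpool status then shows something like:
#   ssdpool
#     mirror-0  c3t0d0  c3t1d0
#     mirror-1  c3t2d0  c3t3d0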

 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
Well, I hate waiting on things ;) so faster is better.

Any idea what kind of sustained transfer to expect with 5x 5TB REDs in RAID6 on ZFS?



Todd
 

chinesestunna

Active Member
Jan 23, 2015
622
195
43
56
@T_Minus - if you are indeed going to ZFS, I would recommend against using hardware RAID cards. Most people at FreeNAS have deep reservations about that approach, as it adds a layer of abstraction (running drives as JBOD on the RAID card) vs. direct drive access via something like an M1015. I don't think hardware RAID is the way to go at this point, especially for home media/content storage. Unless you're running highly concurrent, high-queue-depth applications with many users, I think software RAID gives more flexibility and resiliency (if a hardware RAID controller fails, you'd ideally need to get something of the same brand/make in order to bring the array back up).
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,652
2,066
113
"If you like to order napp-it Pro with updates to newest bugfixes and nonfree extensions like comfortable ACL handling, disk and realtime monitoring or remote replication, please request a quotation."

There's a concern: how much is the "Pro" version with the bug fixes, real-time monitoring, etc.?
 

gea

Well-Known Member
Dec 31, 2010
3,175
1,198
113
DE
There is no real need for the Pro version at home, but there are offerings for home use
(from 25 Euro per year up), as napp-it free is no crippleware.

With the free version, you miss nightly updates (you only get a stable release every few months), realtime graphs and other monitoring options (only text output, or you must use CLI commands), ACL management (you must use Windows or the CLI) and network replication (you must use free scripts or SSH/CLI).
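For the replication point, the script/CLI route is essentially zfs send over SSH; a sketch with placeholder host and dataset names:

# initial full copy of a snapshot to another box
zfs snapshot tank/media@rep1
zfs send tank/media@rep1 | ssh backuphost zfs receive backup/media

# later runs only send the changes since the last replicated snapshot
zfs snapshot tank/media@rep2
zfs send -i tank/media@rep1 tank/media@rep2 | ssh backuphost zfs receive backup/media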

But of course, those who order Pro are paying for those who use the free edition as there is nobody else paying for development.
 
Last edited:
  • Like
Reactions: Chuckleb