Looking for TrueNAS - iSCSI - Windows filer advice


jcl333

Active Member
May 28, 2011
Hello all,

I posted a question to the TrueNAS forums about this and got some good replies, but I am still not sure I understand.
The post(s) is here for reference: Hardware config and iSCSI with Windows
- Also some great conceptual talk here: Similar system to iX-3216 WWIX do?
- Helpful mainly because it lists all my hardware with lots of detail
I think maybe they decided I am just an idiot and gave up on me, and maybe they are right :oops:
Basically, I don't just want the advice, I am trying to understand the why... and I want to finally make a decision.
I work in IT, and I am not comfortable with anything I don't understand; I didn't like driving cars until I took one completely apart in college.
- I work on a server team and do mainly VMware and Citrix; I am a specialist in hardware and networking, and we have a separate guy for the SAN

So, I found the articles here written by Patrick about the ZIL/SLOG/L2ARC and so forth, and I think I might *finally* get it.

To summarize, given the following scenario: (I will get to why in a minute)
- SM X10SRH-CF w/128GB RAM and 16-core CPU
- 16x 4TB 7200RPM SAS2 (Seagate ST4000NM0023) - set up as two 8-disk RAIDZ2 vdevs striped, ≈43TB usable (rough capacity math in the sketch below this list)
- iSCSI connected to Server 2019 DE, just a filer, no VMs
- TrueNAS and Server 2019 are VMs on the same host (using passthrough, etc.)
- I won't provision 100% of the storage to iSCSI, maybe 24-32TB
- I will have another smaller volume just so that I can use some of the other features of TrueNAS, such as Transmission​
- My networking in the house is all 1Gig CAT6, I only use WiFi for mobile and IOT, might start getting into 2.5/5Gig Ethernet
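For the capacity figure above, here is a minimal back-of-the-envelope sketch (assuming two 8-disk RAIDZ2 vdevs of 4 TB drives; it ignores ZFS metadata, padding, and slop space, so treat the numbers as rough):

```python
# Rough usable-capacity estimate for a striped pair of 8-disk RAIDZ2 vdevs.
# Assumptions (not exact ZFS accounting): each RAIDZ2 vdev loses 2 disks to
# parity, and "usable" is quoted in TiB the way most GUIs report it.

DRIVE_TB = 4                # marketing terabytes (10**12 bytes) per drive
DISKS_PER_VDEV = 8
VDEVS = 2
PARITY_PER_VDEV = 2         # RAIDZ2

data_disks = VDEVS * (DISKS_PER_VDEV - PARITY_PER_VDEV)    # 12
raw_data_tb = data_disks * DRIVE_TB                        # 48 TB
usable_tib = raw_data_tb * 10**12 / 2**40                  # ~43.7 TiB

print(f"data disks: {data_disks}, raw data: {raw_data_tb} TB, "
      f"~{usable_tib:.1f} TiB before metadata/slop")

# Fraction of that usable space the 24-32 TB iSCSI zvol would consume.
for iscsi_tb in (24, 32):
    print(f"{iscsi_tb} TB iSCSI zvol = {iscsi_tb / usable_tib:.0%} of usable")
```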

OK, here is what I think I now understand about this:
- L2ARC is mostly pointless here; this application is just too small, and you need much more storage and RAM before it pays off
- Might even do more harm than good, since the L2ARC index consumes RAM
- Adding a ZIL/SLOG is of questionable benefit
- Could be a mirrored pair of 400GB 12Gb/s SAS WD/HGST HUSMM1640ASS204
- These are rated 10 DWPD over 5 years, have PLP, and do 1000+MB/s read/write - good devices, but they just may not be utilized
- I also have some 400GB U.2 NVMe Seagate Nytro ST400KN0001, faster, but these are read/mixed-workload optimized drives
- 3 DWPD, but that may be splitting hairs because that is still around 2PB each over 5 years, and I will never hit that (quick endurance arithmetic in the sketch after this list)
- After reading Patrick's articles, I imagine it would go slightly faster, but it may not be worth it
- Don't know about the new features in TrueNAS, don't understand them well enough
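For the endurance point above, a minimal arithmetic sketch (using the usual vendor-style definition: DWPD × capacity × 365 × warranty years):

```python
# Total-bytes-written budget implied by a DWPD rating.
# TBW ~ DWPD * capacity * 365 * warranty_years (vendor-style approximation).

def tbw_terabytes(dwpd: float, capacity_tb: float, years: int = 5) -> float:
    """Endurance budget in TB written over the warranty period."""
    return dwpd * capacity_tb * 365 * years

for name, dwpd in [("HGST HUSMM1640 (10 DWPD)", 10), ("Seagate Nytro (3 DWPD)", 3)]:
    tb = tbw_terabytes(dwpd, capacity_tb=0.4)   # 400 GB drives
    print(f"{name}: ~{tb:,.0f} TB written, ~{tb / 1000:.1f} PB over 5 years")

# 3 DWPD on a 400 GB drive works out to ~2,190 TB (~2.2 PB), which matches
# the "around 2PB each over 5 years" figure above.
```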

So, I would love to hear from some folks here, especially if Patrick feels like chiming in, on what you would do and especially why you think so.
Smacking me upside the head and telling me to get over this and move on from analysis paralysis is perfectly OK ;)

As to the why:
- I am much more comfortable with Windows than TrueNAS/BSD, and there is a lot of software I can use on it
- In a few years after I am more comfortable with TrueNAS, I will probably look back on what a fool I was not to just use it as a filer directly​
- Doing SMB3+ natively in Server 2019 (or Windows vNext) has benefits vs. Samba, especially with 6-8 Win10 clients
- I could do SMB<->SMB and make Windows just another client, tempting, but harder to do things like RAM caching as a file server​
- I can do asynchronous deduplication (with ReFS or NTFS) from Windows with non-insane RAM requirements (saves me 4-5TB currently)
- I have heavily researched just doing Storage Spaces, but I don't think it is ready for me to trust my data to it yet, as much as I want to like it
- I may setup a separate test server just to run it for a couple years and see how it goes, or use it as a backup or something​
- On the other hand, ZFS seems rock solid, basically it has one job and it does it really well​
- Long term data integrity is my #1, then maybe power, performance, noise, etc. (have at least 2 other backups + one in the cloud)​
- My primary use cases are 20+ years of family pictures and video (a lot), general files, and a bunch of DVD/BD ISOs (a lot)
- Building my new theater, and thinking of getting into Plex or Kodi as I have not tried them before​

Thanks

-JCL
 

i386

Well-Known Member
Mar 18, 2016
Germany
- I can do asynchronous deduplication (with ReFS or NTFS) from Windows with non-insane RAM requirements (saves me 4-5TB currently)
I don't know if it's still true for Server 2016/2019, but 2012R2 could only use deduplication on volumes formatted with 4KByte cluster size. This will limit you to 16TByte volumes.

Edit: Microsoft bumped the limit to 16KByte cluster size/64TByte volumes (What's New in Data Deduplication)
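For context on why cluster size caps the volume size: NTFS addresses roughly 2^32 clusters per volume, so the maximum volume size scales with cluster size. A quick sketch of that arithmetic (the dedup-support claims themselves are per the posts above, not something this sketch verifies):

```python
# NTFS addresses at most ~2**32 clusters per volume, so the maximum volume
# size scales linearly with the cluster size chosen at format time.

MAX_CLUSTERS = 2**32

for cluster_kb in (4, 16, 64):
    max_bytes = cluster_kb * 1024 * MAX_CLUSTERS
    print(f"{cluster_kb:>2} KB clusters -> max volume ~ {max_bytes / 2**40:.0f} TiB")

# 4 KB clusters  -> ~16 TiB  (the 2012 R2 dedup limit mentioned above)
# 16 KB clusters -> ~64 TiB  (the newer limit from "What's New in Data Deduplication")
```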
 

kapone

Well-Known Member
May 23, 2015
You're WAY overthinking this. Sure, understanding things "under the hood" is always a good thing, and I'd say go for it. That being said...

- Long term data integrity is my #1, then maybe power, performance, noise, etc. (have at least 2 other backups + one in the cloud)
- My primary use cases are 20+ years of family pictures and video (a lot), general files, and a bunch of DVD/BD ISOs (a lot)
These two "requirements/needs/whatchamacallit" don't gel with what you've speced out. 16 core CPU and 128GB RAM for this?? A dual core with 4GB RAM will easily saturate a gigabit network. Now, if you already have the hardware...by all means, knock yourself out. I'm not even sure how/where Server 2019, iSCSI, TrueNAS etc came into the picture.
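To put rough numbers on that claim (a back-of-the-envelope sketch; real-world SMB throughput depends on protocol overhead, but the order of magnitude is the point):

```python
# Why a modest box saturates gigabit: on a 1 GbE LAN the wire is the
# bottleneck, not the CPU or the disks, for a home file server.

GIGABIT_BPS = 1_000_000_000          # 1 Gb/s line rate
wire_mb_s = GIGABIT_BPS / 8 / 1e6    # ~125 MB/s before protocol overhead
practical_mb_s = wire_mb_s * 0.94    # rough allowance for TCP/SMB overhead

print(f"Gigabit ceiling: ~{wire_mb_s:.0f} MB/s raw, ~{practical_mb_s:.0f} MB/s practical")

# A single modern 7200 RPM drive sustains roughly 150-250 MB/s sequentially,
# and a 16-drive pool far more, so the network (not 16 cores / 128 GB RAM)
# is the limiting factor at 1 GbE.
```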

You need a NAS to store your bits. Is that statement accurate enough to describe your needs?
 

gea

Well-Known Member
Dec 31, 2010
DE
If I understand correctly:
You mainly want a simple SMB fileserver for your files.

Your current approach:
Use ESXi as the base, then add a ZFS filer VM for mass storage, create an iSCSI LUN there and give it to a second VM, a Windows server, as a local raw disk, and format it as NTFS to share files via the Windows server.

OK, this will work. I do the same with a Windows server and an application that insists on an NTFS filesystem, where I want ZFS security underneath NTFS and ZFS snaps rather than Windows shadow copies. It works, but it has several problems

- performance is lower than sharing directly from ZFS
- if you accidentally delete a file or want to go back to a snap, you can only roll back the whole LUN, or you must hope for Windows shadow copies (a poor replacement for ZFS snaps)
- ZFS can guarantee filesystem consistency for itself thanks to Copy on Write, but not for a filesystem on top of it like NTFS over iSCSI. You MUST enable sync, and if you want decent performance you then need a good Slog. Even then, performance is much lower than with a simple ZFS filer, where you do not need sync for plain SMB file sharing.
- You add a lot of complexity only to make it slower and to lose some ZFS features!

I would simply use the ZFS filer to share the files to your clients. Sync is not needed, so you get the best performance, and you have file-based access to ZFS snaps, e.g. via Windows "Previous Versions". With SAMBA you need to take care of that setup yourself, as SAMBA knows nothing about ZFS.

About dedup.
Unless your dedup rate is > 10 I would not bother. Simply activate LZ4 compression under ZFS and you are mostly done. If you really need dedup, you can think about ZFS realtime dedup. Calculate up to 5 GB RAM per TB of dedup data (not pool size). With current ZFS you can also use a special vdev (an NVMe mirror) to hold the dedup table, so the table does not have to sit in RAM.
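To make the RAM rule of thumb above concrete, a minimal sketch (the 5 GB/TB figure is the worst-case guideline quoted above, not an exact dedup-table measurement):

```python
# Rough dedup-table RAM estimate using the ~5 GB per TB of deduplicated
# data rule of thumb quoted above.

GB_PER_TB_DEDUP = 5   # worst-case guideline, not an exact DDT size

def dedup_ram_gb(dedup_data_tb: float) -> float:
    return dedup_data_tb * GB_PER_TB_DEDUP

for tb in (4, 10, 24):
    print(f"{tb:>2} TB of dedup data -> up to ~{dedup_ram_gb(tb):.0f} GB RAM for the table")

# Deduping a 24 TB iSCSI volume could want on the order of 120 GB of RAM just
# for the dedup table, which is why a dedup special vdev (NVMe mirror) or
# simply LZ4 compression is usually the saner choice at this scale.
```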

If you care about SAMBA and SMB, you may consider a Solarish-based OS instead of FreeBSD. This gives you the best integration of ZFS into the OS, plus its ZFS/kernel-based multithreaded SMB server. It is often faster than SAMBA, offers SMB 3.1.1, and provides NTFS-like ACL permissions with local Windows SMB groups. From the view of a Windows desktop it behaves much like a real Windows server, including a perfect and always-working mapping of ZFS snaps to Windows "Previous Versions". It is also much simpler to configure than SAMBA: just set sharesmb=on on a dataset and that is it, no hassle with a SAMBA configuration file. It is also more CPU- and RAM-efficient (less overhead compared to a FreeNAS VM). To try it you can use my free ESXi ZFS server template with OmniOS, which I have offered for more than 10 years: https://napp-it.org/doc/downloads/napp-in-one.pdf

The ZFS/kernel-based SMB server is available on Oracle Solaris (a commercial enterprise Unix where ZFS comes from and is native, free for demo and development) and on the free Solaris forks like OmniOS, which offers the newest Open-ZFS features: OmniOS Community Edition
 

jcl333

Active Member
May 28, 2011
You're WAY overthinking this. Sure, understanding things "under the hood" is always a good thing, and I'd say go for it. That being said...





These two "requirements/needs/whatchamacallit" don't gel with what you've speced out. 16 core CPU and 128GB RAM for this?? A dual core with 4GB RAM will easily saturate a gigabit network. Now, if you already have the hardware...by all means, knock yourself out. I'm not even sure how/where Server 2019, iSCSI, TrueNAS etc came into the picture.

You need a NAS to store your bits. Is that statement accurate enough to describe your needs?
This is why I like this forum - good observation. Yes, this system is overkill for this use case, for a few reasons:
- I got the CPU, RAM, and most of my drives for free so all I had to do was buy the MB and the chassis, cables, other stuff
- I will run other workloads on this host besides TrueNAS and this Windows server
- For 10+ years I ran a 10-bay QNAP, then the backplane died and they wouldn't sell me another one, so I won't do that again
- I was able to re-purpose the chassis into a pretty sweet JBOD though​
- To your point, that QNAP was I think dual-core / 8GB RAM when I bought it, that hovered between 60-90W​
- For this system the average power consumption with 16 drives is around 300W, I modified IPMI to make the fans quiet​
- I actually have two of these setups, but only one chassis, the other one is a regular tower case that can hold 12 drives
- I also have an older quad-core Xeon SM setup with 32GB ECC RAM​
- That would be much more appropriate if this was the only workload​
- I may build the other system with the 7x 10TB WD drives using that

As to your question on "how/where Server 2019, iSCSI, TrueNAS etc came into the picture" I am not sure what you mean? I need to run some OS on this to serve out files somehow, right? I am not opposed to just sticking to TrueNAS directly on the hardware and calling it a day, especially if this path I am on just turns into a hot mess.

I admit hooking TrueNAS and Windows together with iSCSI for just a filer is odd, but in a way I also think that is what makes it interesting.
As I mentioned, I much prefer Windows as it is what I support at work, but TrueNAS is the better filer today. Windows is getting there, but really only in the Storage Spaces Direct area, and I don't feel inclined to run 4+ hosts and try to meet the strict hardware requirements.

TrueNAS with Samba worked in a very similar way to the QNAP; I never liked the way it stored the files and handled SMB, but I just put up with it.

But yes, "You need a NAS to store your bits" is the basic idea.

-JCL
 

jcl333

Active Member
May 28, 2011
Thanks very much for taking the time for such a meaningful reply.

If I understand correctly:
You mainly want a simple SMB fileserver for your files.
Well, it isn't the only thing I plan to do on the host hardware, but besides that yes.

Your current approach:
Use ESXi as the base, then add a ZFS filer VM for mass storage, create an iSCSI LUN there and give it to a second VM, a Windows server, as a local raw disk, and format it as NTFS to share files via the Windows server.
Yes, you got it. I might go with ReFS, still thinking it over.
Note that the ZFS filer and Windows VMs live on an all-flash RAID 10 hosted by an LSI RAID controller, along with whatever other VMs I run.


OK, this will work. I do the same with a Windows server and an application that insists on an NTFS filesystem, where I want ZFS security underneath NTFS and ZFS snaps rather than Windows shadow copies. It works, but it has several problems
Finally! Someone who not only gets it, but has done it themselves, and doesn't think it's silly. Thank you so much for chiming in! I do this sort of thing at work as well, but usually with Fibre Channel RDMs or VVOLs to a Windows scale-out file server cluster, those are extremely nice.

- performance is lower than sharing directly from ZFS
Yup, I would expect that, but it should be fast enough for my needs, if I can get this config worked out

- if you accidentally delete a file or want to go back to a snap, you can only roll back the whole LUN, or you must hope for Windows shadow copies (a poor replacement for ZFS snaps)
Good point, I too feel that shadow copies can be hit or miss. We use it on a 500TB server at work with user files, and shadow copies save us maybe 30% of the time, the rest is retrieved from backup (EMC Networker, which is OK). This is an approach I might take as well, with say Veeam. I am assuming using snaps here would be most useful in case of say, undoing a cryptoware attack or similar.

- ZFS can guarantee filesystem consistency for itself thanks to Copy on Write, but not for a filesystem on top of it like NTFS over iSCSI.
Yup, ReFS has similar capabilities (you have to enable them manually) but it is not as mature as ZFS, and it has its own set of design challenges.

You MUST enable sync, and if you want decent performance you then need a good Slog. Even then, performance is much lower than with a simple ZFS filer, where you do not need sync for plain SMB file sharing.
OK, so regardless of the workload, you recommend using a Slog if you are doing iSCSI, it will definitely make a difference because of the synchronous writes.

One thing I could do is do that, and then just have the Windows Server access things using SMB3. I don't know what kind of issues I would have, but SMB3 has been improved to the point where you can put VMs on a remote share for Hyper-V clustering. In that case I assume it would incur some of the same kinds of synchronous write issues discussed above.
Ref: Overview of file sharing using the SMB 3 protocol in Windows Server

Actually, I could also look at doing profile containers (FSLogix) A Practical Guide to FSLogix Containers Capacity Planning and Maintenance
It works a lot like iSCSI but uses SMB3. You mount a VHDX file on a remote server, and it is presented as block storage.
This would mean that there would just be a big (or several small) VHDX file(s) sitting on ZFS, so it would still defeat some of the benefits and still have the sync and performance challenges, but the ZFS machine could run stock with few changes. The only thing I don't know is how much of this is supported by Samba. Yeah, I know just adding complexity, but block storage is much better than network storage in a variety of use cases.


- You add a lot of complexity only to make it slower and to lose some ZFS features!
Yes, I admit, these last couple points you make do make a good case for just using TrueNAS by itself.

I would simply use the ZFS filer to share the files to your clients. Sync is not needed, so you get the best performance, and you have file-based access to ZFS snaps, e.g. via Windows "Previous Versions". With SAMBA you need to take care of that setup yourself, as SAMBA knows nothing about ZFS.
Oh really? Are there a lot of customizations in SAMBA to do things like this? That is very interesting. Being able to use the snaps in a similar way to shadow copies sounds really cool.

About dedup.
Unless your dedup rate is > 10 I would not bother. Simply activate LZ4 compression under ZFS and you are mostly done.
I think I agree with you, it still sounds like it is not worth it unless you were doing something like VDI where dedupe makes a tremendous difference. With Server 2019 dedupe is basically free, I wish TrueNAS could implement it that way, or have the option when you don't need it to be real-time.

If you really need dedup, you can think about ZFS realtime dedup. Calculate up to 5 GB RAM per TB of dedup data (not pool size). With current ZFS you can also use a special vdev (an NVMe mirror) to hold the dedup table, so the table does not have to sit in RAM.
Huh, that sounds really interesting, I do have some OK NVMe drives, but they probably would not have the endurance for a use case like this.
The 12Gig SAS SSDs I have would have the endurance, but about 1/2-1/3 the speed of NVMe.
I am wondering about some of the new features in TrueNAS and if any of them would be applicable to me.


If you care about SAMBA and SMB, you may consider a Solarish-based OS instead of FreeBSD. This gives you the best integration of ZFS into the OS, plus its ZFS/kernel-based multithreaded SMB server. It is often faster than SAMBA, offers SMB 3.1.1, and provides NTFS-like ACL permissions with local Windows SMB groups. From the view of a Windows desktop it behaves much like a real Windows server, including a perfect and always-working mapping of ZFS snaps to Windows "Previous Versions". It is also much simpler to configure than SAMBA: just set sharesmb=on on a dataset and that is it, no hassle with a SAMBA configuration file. It is also more CPU- and RAM-efficient (less overhead compared to a FreeNAS VM). To try it you can use my free ESXi ZFS server template with OmniOS, which I have offered for more than 10 years: https://napp-it.org/doc/downloads/napp-in-one.pdf
This is actually very interesting, I have heard of OmniOS and I know a little of the history, but I have not played with it.
How actively developed would you say OmniOS is compared to TrueNAS? How active is the community?
Does OmniOS support the plugins/add-ons features like TrueNAS does?
You sound like you are one of the people fairly deeply involved in this.
I am a little worried that I would have to learn quite a lot about Solaris/OmniOS to be comfortable with it. This is kind of my original issue for wanting to combine ZFS+Windows on this thread in the first place. But I will take a look at it anyway.

If something breaks, I want to be able to fix it. One thing I am doing is having discrete servers as backups, and maybe have them use different tech, then if one goes completely FUBAR my data is not at risk.

The ZFS/kernel-based SMB server is available on Oracle Solaris (a commercial enterprise Unix where ZFS comes from and is native, free for demo and development) and on the free Solaris forks like OmniOS, which offers the newest Open-ZFS features: OmniOS Community Edition
Yeah, I have a lot of respect for Solaris, BSD, and AT&T UNIX, which I used thousands of years ago when I was a Banyan VINES guy. To this day there are things it could do that have not been replicated, but it is long gone now.

-JCL
 

gea

Well-Known Member
Dec 31, 2010
DE
I am actively involved in an alternative to FreeNAS and FreeBSD, so I can only give some arguments and you can decide whether they are worth considering.

You cannot compare FreeNAS/TrueNAS against OmniOS.
FreeBSD and OmniOS are Unix operating systems. Indeed, Solaris was originally based on BSD Unix, so there are a lot of similarities between them. Some things in FreeBSD like ZFS, dtrace or zones are adopted from Solaris. Others, like the bootloader or drivers in Illumos/OmniOS, come from FreeBSD, as both share a similarly permissive open-source licensing scheme that is less restrictive than the Linux GPL.

FreeNAS/XigmaNAS are more like management tools. The comparable tools on Solarish (Oracle Solaris and the free forks) are, for example, my napp-it or NexentaStor. The main difference between the two is that FreeNAS is more or less the market leader, with a huge community and many options due to the add-ons you mentioned, while OmniOS with its tools is a quite small and very specialized storage-only solution.

OmniOS with or without napp-it is a very minimalistic and self-sufficient operating system with a stable release every 6 months and a long-term stable release every 2 years. OmniOS is one of the smallest Unix distributions but includes ZFS, a ZFS/kernel-based NFS and SMB server, Comstar, an enterprise-ready FC/iSCSI stack (Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems), and network virtualisation (the ability to create virtual NICs with VLANs or virtual switches, similar to ESXi's capabilities). All of these were developed by Sun and are a full part of the OS itself. This means no 3rd-party tools like SAMBA are needed for a basic FC/iSCSI, NFS or SMB storage server. OmniOS has a strong focus on server and storage, so there are no plugins and add-ons beyond server services like a webserver, S3 cloud storage or databases.

There is another distribution based on the same Solaris fork (Illumos): OpenIndiana. This is the successor of OpenSolaris, with a desktop and a server edition and many more add-ons, openindiana – Community-driven illumos Distribution. You can see it as a plug-and-play replacement for OmniOS, but without the stable, long-term stable or commercial support options of OmniOS; it is more intended for home use.

In short, OmniOS is perfect for a simple and fast ZFS filer, and not so good if you want the add-ons - but that is where you use ESXi (or bhyve, the virtualisation solution in FreeBSD and Illumos/OmniOS). In my opinion, the VM server and storage must always be running, so it is best to keep them as simple and minimalistic as possible, without dependencies that may affect stability.
 

kapone

Well-Known Member
May 23, 2015
This is why I like this forum - good observation. Yes, this system is overkill for this use case, for a few reasons:
- I got the CPU, RAM, and most of my drives for free so all I had to do was buy the MB and the chassis, cables, other stuff
- I will run other workloads on this host besides TrueNAS and this Windows server
- For 10+ years I ran a 10-bay QNAP, then the backplane died and they wouldn't sell me another one, so I won't do that again
- I was able to re-purpose the chassis into a pretty sweet JBOD though​
- To your point, that QNAP was I think dual-core / 8GB RAM when I bought it, that hovered between 60-90W​
- For this system the average power consumption with 16 drives is around 300W, I modified IPMI to make the fans quiet​
- I actually have two of these setups, but only one chassis, the other one is a regular tower case that can hold 12 drives
- I also have an older quad-core Xeon SM setup with 32GB ECC RAM​
- That would be much more appropriate if this was the only workload​
- I may build the other system with the 7x 10TB WD drives using that

As to your question on "how/where Server 2019, iSCSI, TrueNAS etc came into the picture" I am not sure what you mean? I need to run some OS on this to serve out files somehow, right? I am not opposed to just sticking to TrueNAS directly on the hardware and calling it a day, especially if this path I am on just turns into a hot mess.

I admit hooking TrueNAS and Windows together with iSCSI for just a filer is odd, but in a way I also think that is what makes it interesting.
As I mentioned, I much prefer Windows as it is what I support at work, but TrueNAS is the better filer today. Windows is getting there, but really only in the Storage Spaces Direct area, and I don't feel inclined to run 4+ hosts and try to meet the strict hardware requirements.

TrueNAS with Samba worked in a very similar way to the QNAP; I never liked the way it stored the files and handled SMB, but I just put up with it.

But yes, "You need a NAS to store your bits" is the basic idea.

-JCL
That's fair. I'll ask the obvious question.

"Truenas is the better filer today" - What makes you think so? While ZFS has its place, it is not the end all of storage. And Freenas is not without its own issues.
"I much prefer Windows" - Well, stick to it. There is absolutely nothing wrong with using Windows as a file server.

"I will run other workloads on this hardware" - That's fair, and THAT is where your complication(s) come in. Now we're edging away from just a NAS to something else (not quite SAN, but somewhere in the middle). This is the part where your hypervisor choices, storage fabric choices, network hardware/software choices make everything way more complicated.

My suggestion? Think hard. I'd do just a simple NAS (whatever your OS choice is, Windows/Freenas/whatever) on bare metal for now and call it a day. I would not use the hardware you have for that. eBay is full of tiny little machines that have at least one slot for add-in cards and they will be more than enough for this.

As an e.g.: This is one of the tiny systems I use. I use it for a bare metal firewall (pfSense) and another one for a bare metal domain controller (Server 2019, the second DC is virtualized).

I recently re-did my pfSense from virtual to physical and used a teeny tiny board. It's a Gigabyte B75-TN thin-mini ITX motherboard. Has an mSATA slot onboard, a pci-e 3.0 x4 slot (which is where my 10g Nic is) and takes direct 12v/19v DC :)

The power consumption (i5-3570s, 2x4GB SO-DIMM RAM, mSATA SSD, passive heatsink , Mellanox CX3 10g Nic) is excellent. ~14w at idle with a platinum PSU.

Note the power consumption. Idles at ~14w with a 10g card. If I take that out, it idles at under 10w. This is a 4c/4t system with 8GB of RAM, that will easily saturate a gigabit network. The pci-e 3.0 x4 slot onboard can easily hold a SAS/SATA card giving you many many storage options.

And this motherboard/cpu/ram combo was probably less than $75 when I bought it a year or two ago. It's likely even cheaper now.

There's your NAS (this is just one example).

Once you decide what "else" you want to use that other hardware for, that will change your design.
 

jcl333

Active Member
May 28, 2011
FreeNAS/XigmaNAS are more like management tools. The comparable tools on Solarish (Oracle Solaris and the free forks) are, for example, my napp-it or NexentaStor. The main difference between the two is that FreeNAS is more or less the market leader, with a huge community and many options due to the add-ons you mentioned, while OmniOS with its tools is a quite small and very specialized storage-only solution.
Yes, way back when I did try out NexentaStor, but at the time I had even less knowledge than now and couldn't get comfortable with it.
The market leader part is important, and while I'm not afraid of the command line, having a web UI that can do most things (I know the others you mention probably have one as well) to keep it simple is valuable. I have two kids and other time constraints, and I have already sunk a tremendous amount of time into this (yes, my fault, I know).

OmniOS with or without napp-it is a very minimalistic and self-sufficient operating system with a stable release every 6 months and a long-term stable release every 2 years. OmniOS is one of the smallest Unix distributions but includes ZFS, a ZFS/kernel-based NFS and SMB server, Comstar, an enterprise-ready FC/iSCSI stack (Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems), and network virtualisation (the ability to create virtual NICs with VLANs or virtual switches, similar to ESXi's capabilities). All of these were developed by Sun and are a full part of the OS itself. This means no 3rd-party tools like SAMBA are needed for a basic FC/iSCSI, NFS or SMB storage server. OmniOS has a strong focus on server and storage, so there are no plugins and add-ons beyond server services like a webserver, S3 cloud storage or databases.
I understand. On your second point, I do run into issues with the different interpretations of SMB - one of my bigger sticking points. SAMBA is good, and Microsoft even helped develop it, but there are still issues. We have an EMC Unity at work; they made their own version of SMB, and we hate it so much that we are switching back to Windows servers and connecting them with VVols. Otherwise they are good units though.

There is another distribution based on the same Solaris fork (Illumos): OpenIndiana. This is the successor of OpenSolaris, with a desktop and a server edition and many more add-ons, openindiana – Community-driven illumos Distribution. You can see it as a plug-and-play replacement for OmniOS, but without the stable, long-term stable or commercial support options of OmniOS; it is more intended for home use.
Yes, I heard of this one as well, especially before FreeNAS really settled in to the lead for some things like this.

In short, OmniOS is perfect for a simple and fast ZFS filer, and not so good if you want the add-ons - but that is where you use ESXi (or bhyve, the virtualisation solution in FreeBSD and Illumos/OmniOS). In my opinion, the VM server and storage must always be running, so it is best to keep them as simple and minimalistic as possible, without dependencies that may affect stability.
Yes, I don't disagree, people at work often want to put many roles on a server, and I stop them and try to keep things separated, because that is one of the great things about virtualization.

Thank you for sharing your perspectives.

-JCL
 

jcl333

Active Member
May 28, 2011
"Truenas is the better filer today" - What makes you think so? While ZFS has its place, it is not the end all of storage. And Freenas is not without its own issues.
So, this is mostly about comparing TrueNAS to Windows, at least in this use case. Yes, lots of commercial organizations use Windows as a filer. But when you get into checksumming, copy-on-write, and that type of data-integrity stuff, you have to get into Storage Spaces and ReFS, and once you are there, I think ZFS just has a much longer track record. I do play with Storage Spaces a lot, and I think the day will come when I don't have such reservations about it. But it will probably always stay out of the TrueNAS space just because it isn't free.

I think what will get it there is Microsoft is using this tech heavily in Azure, so they will beat the crap out of it and then it will trickle down to the lower scale use cases.

"I much prefer Windows" - Well, stick to it. There is absolutely nothing wrong with using Windows as a file server.
Yes, if I were not worried about silent data corruption, the RAID 5 "write hole", or other issues that start to become more important when you get into tens of TB and long-term data storage. I had been using a QNAP for years until the backplane died, and that used plain old mdadm under the covers, so straight-up software RAID.

Otherwise I would just toss in my really nice LSI hardware RAID controller with R6 and not look back.

"I will run other workloads on this hardware" - That's fair, and THAT is where your complication(s) come in. Now we're edging away from just a NAS to something else (not quite SAN, but somewhere in the middle). This is the part where your hypervisor choices, storage fabric choices, network hardware/software choices make everything way more complicated.
You are not wrong, but I do have a pretty high comfort level with VMware and all of the things you mention. By that I mean I think I understand what I am doing, and the different ways this could break or go south, well enough that I would trust my data to it - and I have backups no matter what. So really my only problem is my lower comfort level with TrueNAS itself vs. my apprehension about just sticking with Storage Spaces and Windows. I have reached the point where I can make Storage Spaces work, but after looking at all the steps, requirements, and PowerShell commands to get there, you have to take a step back and consider whether it is a good idea. Again, backups - and I am not saying I won't go this way; I might even run both, one backing up the other. Conversely, I think the TrueNAS method is fairly straightforward if I can wrap my brain around the requirements for the iSCSI part of it.

Heck, this is not that different to what we do at work with RDMs passed through to Windows Servers, and that is 500TB+ in production for 4000+ people.

My suggestion? Think hard. I'd do just a simple NAS (whatever your OS choice is, Windows/Freenas/whatever) on bare metal for now and call it a day. I would not use the hardware you have for that. eBay is full of tiny little machines that have at least one slot for add-in cards and they will be more than enough for this.
Yup, that is the rub isn't it? That being said, it won't HURT anything to use something greatly overpowered. Heck, with 128GB of RAM and everything else I have, I think it could be quite pleasant to use. And of course, I already have it.

As an e.g.: This is one of the tiny systems I use. I use it for a bare metal firewall (pfSense) and another one for a bare metal domain controller (Server 2019, the second DC is virtualized).
Hehe, my pfSense box is a former HPe SFF CAD workstation I pulled out of the trash at work, because it is small and quiet. I popped a Xeon, ECC RAM, and a server-grade NIC in there. But I guess I just can't help myself - I put ESXi on there and run pfSense as a VM, passing through the two NICs I use for the WAN/LAN interfaces. I know, just because I can. It does have its utility though; I can make clones, snapshots, and the like. I am going to put some other VMs on there, such as a controller for my Ubiquiti access points, and I may start running a UTM inline with pfSense so that I can do filtering for when my kids get a little older and start using the Internet more.

Note the power consumption. Idles at ~14w with a 10g card. If I take that out, it idles at under 10w. This is a 4c/4t system with 8GB of RAM, that will easily saturate a gigabit network. The pci-e 3.0 x4 slot onboard can easily hold a SAS/SATA card giving you many many storage options.
See, now you are getting into my area of expertise. I have not used Gigabyte's server stuff, but almost all the desktops in my house use their boards.
For servers I tend to stay on the side of Xeon and ECC RAM and other server-grade stuff as much as possible.

Note that, unless you found a 30W platinum power supply, you don't get their best efficiency unless you are in the middle of their output range.
The SM chassis I have has a pair of 920W platinum power supplies, I unplug one of them to get better efficiency.
One thing I could do here is, I could ditch my separate pfSense box, and run it as a VM along with everything else, probably save 30-50W that way.

I was honestly quite tempted to buy one of the physical pfSense boxes they sell, you can get into the single digit power draw with those.

Do you have a 10Gig Internet connection? I have 1Gig fiber and routinely get 900/900, I have found that most of the time I am constrained by what the other end can do. The NIC I use is an Intel 340-T4, they are extremely solid in pfSense.

All this being said, it of course depends on what you are doing. If you ran a lot of VPNs or other workloads, these boxes would fall down. It's our use cases that let us get away with it.

And this motherboard/cpu/ram combo was probably less than $75 when I bought it a year or two ago. It's likely even cheaper now.

There's your NAS (this is just one example).

Once you decide what "else" you want to use that other hardware for, that will change your design.
So, I do have an extra SM X9SCM-F, very solid board. And I have a E3-1280 v2 on it which is near the top of the chart, a beast for what it is. And I have 32GB of ECC RAM on there. Two Intel NICs and IPMI.

I have considered taking that, slapping my Broadcom/LSI 9400-16i in there, and calling it a day for a filer; at the very least I think I would not feel bad running that 24/7 even with 16x 7200RPM SAS spinners and some SSDs. I'll bet it would hover just under 200W, maybe less. Now that I think about it, if I did this and used the 7x 10TB WD drives in RAID 6, I would get about 43TB, and the power draw should be reasonable, maybe even near 100W. I would not use the big chassis; I have a tower case that can hold 12 spinners, and an external JBOD (the former QNAP) that can hold 10 drives plus some SSDs, with an Intel expander in it. Hmm, this is sounding better now.

Then there is the question of what to do with the much beefier server boards. For VMs I could just stick to hardware RAID, but it would be tempting to do iSCSI with that from the filer box. Maybe I make an insane workstation with it ;-) Seriously though, I was thinking I would just build one of those out as a server to backup the main filer, and thus only turn it on a few hours a week/month and leave it off all other times.

Now, at 10Gig are you still thinking the lower-power rig will keep up? The only place that would really matter, though, is doing a full backup of the data, which took over a day with the QNAP; I have not clocked the new stuff yet.

Not sure if the age of the X9 board and CPU are going to become an issue. I did get ESXi 7u1 on there but that will be the last version it will support. I don't know how good an idea it would be to run pfSense on that.... mostly I am thinking about things like Spectre and Meltdown and other vulnerabilities, you could make the argument that a firewall should be on newer hardware albeit low power. The bigger server board is much newer, but also overkill. This is one of the reasons I pass the NIC through to pfSense, it performs much better and does not expose the HV to the Internet.

Good discussion, but you can see how I can get into analysis paralysis, all this thinking and designing, I have to finally do something.

-JCL
 

kapone

Well-Known Member
May 23, 2015
Good discussion, but you can see how I can get into analysis paralysis, all this thinking and designing, I have to finally do something.

-JCL
Your entire response is completely fair, and completely...^ (read that statement above) :)

I don't blame you for it though, we all do it. I probably spent months designing just my storage backend (I run a business) and trying out many many different scenarios, including OmniOS, Freenas, Linux and a couple of other esoteric systems. Wanna know what I'm using (now)? :)

For the "production" side, I run a HA iSCSI SAN on Windows with Starwind vSAN. The config uses Adaptec HW RAID cards with RAID 6 arrays (for data) and pci-e SSDs (for VMs). Both nodes are identical, with identical storage types and capacity. But...I need less than 10TB in this HA config.

For the rest of my storage needs, it's plain jane Windows/HW RAID 6/SMB. No storage spaces, no ReFS, no additional moving parts. This storage (general purpose) uses Stablebit Drivepool with a RAMdisk as the "SSD cache". Drivepool also integrates with their "Cloud Drive" for backups. I use Cloud drive to do two things.
- Encrypted backups to Backblaze (this is a fairly small dataset)
- Asynchronous backup to my second house, which has another storage server (hint: it's running on that tiny motherboard I mentioned above) - This dataset again is fairly small.
- I don't care about additional protection for my media type stuff beyond putting it on a RAID-6 array. It can be re-created easily.

The two storage servers run both of these requirements flawlessly over 40gb links through a Brocade ICX-6610. Now, why didn't I do all this on Freenas/OmniOS/Linux etc etc? Well, performance for one and comfort factor for another. That's my choice, and by no means does it mean that any of the other systems/software are bad.
 

jcl333

Active Member
May 28, 2011
I don't blame you for it though, we all do it. I probably spent months designing just my storage backend (I run a business) and trying out many many different scenarios, including OmniOS, Freenas, Linux and a couple of other esoteric systems. Wanna know what I'm using (now)? :)
Well, this makes me feel a little better, hehe a little OCD group therapy...:)
My hope is that it will be worth it if I can reach a solution that can last me the next few years.


For the "production" side, I run a HA iSCSI SAN on Windows with Starwind vSAN. The config uses Adaptec HW RAID cards with RAID 6 arrays (for data) and pci-e SSDs (for VMs). Both nodes are identical, with identical storage types and capacity. But...I need less than 10TB in this HA config.
I like the Starwind stuff, I researched it a little bit. Adaptec and Areca are my favorites after LSI/Broadcom.
I am assuming you have multiple iSCSI LUNs so that the R6 serves the data and the SSDs serve the VMs with say, R10.
Do you think about the risk of the "write hole" or "silent data corruption" with this setup?
That aside, I would love it if I could just use my LSI Raid cards and call it a day, I have really nice ones with FBWC and super caps. I could also add SSD R/W caching if I bought a license for that feature.

At one point in my research, I started to question if the "write hole" or "silent data corruption" stuff was real, BS, or just an extreme corner case.
Then I wondered: wait, what do large SAN vendors do about this problem? The answer is that they do address it - that is what PI "Protection Information" (T10 PI) is for, and that is why SAN disks are formatted with 520-byte sectors instead of 512, so that they can essentially checksum all the data.
I even started to look into doing this myself, as there are some RAID controllers that support PI, but I had to give up because information about it appears to be too scarce, I could not find anyone doing this outside of SAN vendors.
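For anyone curious about the mechanics mentioned above, here is a minimal sketch of what those extra 8 bytes per sector buy you (the 2/2/4-byte layout and the 0x8BB7 guard-tag CRC polynomial are from the T10 DIF specification; the code is purely illustrative, not how any particular controller implements it):

```python
# T10 Protection Information: each 512-byte sector grows to 520 bytes, the
# extra 8 bytes holding a 2-byte guard tag (CRC-16 over the sector data),
# a 2-byte application tag, and a 4-byte reference tag (typically the LBA).

SECTOR = 512
PI_BYTES = 8
print(f"PI overhead: {PI_BYTES / SECTOR:.2%} extra bytes per sector")   # ~1.56%

def crc16_t10dif(data: bytes, poly: int = 0x8BB7) -> int:
    """Bit-by-bit CRC-16 with the T10-DIF polynomial (the guard tag)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

sector_data = bytes(range(256)) * 2          # a dummy 512-byte sector
print(f"guard tag for the dummy sector: 0x{crc16_t10dif(sector_data):04X}")
```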

What we need is a "best of both" - a hardware RAID card you can buy that employs some kind of data-integrity method. It almost looks like you could come close to this with RAID 6 + ReFS, where you can manually turn on checksumming. But my expectation is that it could detect corruption yet not correct it without access to another copy of the data, and I don't know if the whole volume would go FUBAR without direct disk access.

It is hard to tell how real all this is or who is actually correct. You can read articles like this one: ZFS won't save you: fancy filesystem fanatics need to get a clue about bit rot (and RAID-5) - Jody Bruchon and think, maybe it is such a rare problem it is not worth worrying about. Hundreds of millions of people store data on hardware and software without these features for years, but I guess you could say "their data is probably corrupt in some places, they just don't know it".


For my own anecdotal take on this, my data has been on a QNAP for 10+ years that was just software R6, no special ZFS features or anything. I can't say I have ever found corrupted data that I am aware of. For that matter, at my work we have used standard RAID hardware for many years and I can't remember any data going corrupt for no apparent reason. It was always something a user did, or some hardware that failed, and in all cases it was just restore from backup. Why don't backups suffer from this problem? I suppose they do?

I am tempted to contact Broadcom or another RAID vendor and ask them what their take is on it.


For the rest of my storage needs, it's plain jane Windows/HW RAID 6/SMB. No storage spaces, no ReFS, no additional moving parts. This storage (general purpose) uses Stablebit Drivepool with a RAMdisk as the "SSD cache". Drivepool also integrates with their "Cloud Drive" for backups. I use Cloud drive to do two things.
How are you using Stablebit Drivepool and R6 at the same time?
I am curious about your RAMdisk, I know Starwinds has a nice option for that.


- Encrypted backups to Backblaze (this is a fairly small dataset)
Yep, I plan to use Backblaze as my "3rd copy" offsite.

- Asynchronous backup to my second house, which has another storage server (hint: it's running on that tiny motherboard I mentioned above) - This dataset again is fairly small.
Yeah, I was thinking of doing this and locating the other one at my parents house which is about 70 miles away. But, they are on Comcast and I am afraid I would blow through their data caps too easily.

- I don't care about additional protection for my media type stuff beyond putting it on a RAID-6 array. It can be re-created easily.
No argument here, I am mostly worried about data that can't easily be re-created. Not that the time it takes to re-rip hundreds of DVDs (for example) is not worth anything. I imagine a scenario where I want to look at kids pictures or videos 10 years later.

I wish optical media would increase capacity faster... still waiting for that 1TB disc. There are MO drives that are really expensive, and I even looked into getting an LTO drive.


The two storage servers run both of these requirements flawlessly over 40gb links through a Brocade ICX-6610. Now, why didn't I do all this on Freenas/OmniOS/Linux etc etc? Well, performance for one and comfort factor for another. That's my choice, and by no means does it mean that any of the other systems/software are bad.
Yup, I have noticed that ZFS and similar solutions are prioritizing data integrity over performance, you have to design carefully and specifically if you also want performance. I seldom find myself wanting for performance with traditional solutions without even putting much thought into it, especially in the era of SSDs and NVMe.

When I went down this road I thought I would find a sweet spot / comfort zone solution in a few weeks / months, not so much.

-JCL
 

kapone

Well-Known Member
May 23, 2015
Stablebit - The way you do "SSD caching" with a RAMDisk is:

- Create your hardware RAID array
- Create a RAMDisk, let's say 16GB. This has nothing to do with anything else. It's just a 16GB RAM disk. I use the SoftPerfect one, but others may work just as well.
- Create a pool in Drivepool with just that RAID array. No other disk(s)
- Tell Drivepool to use the above created RAM disk as the SSD cache for this pool and configure it for immediate balancing.

Essentially, Drivepool is adding a very fast cache to a Windows disk (that's what a pool is, just a disk). Now, you can use this pool disk for ISCSI/SMB/NFS etc etc. I can disable the SSD cache anytime and nothing else has to change in the config, to continue operating.

Edit: To your other questions:

- I don't worry about the write-hole/data corruption for my production stuff, because keep in mind, it is duplicated across two nodes (and then a third copy in Backblaze). HW RAID card manufacturers have been doing this for a very long time. The Adaptecs have a BBU, my systems are on a UPS (that can sustain them for hours). They send out email alerts if a disk drops off etc etc.
- I decided not to use ZFS because of a number of reasons, but mainly comfort factor. I can manage my RAID arrays with a very nice and granular UI, without hunting down arcane commands on forums with broken English... :p. What I have works for me, works well, and requires very little maintenance on my part.

End of the day, I don't want to spend more time maintaining my systems than necessary. I have two little boys who eat up all my time as such.
 

jcl333

Active Member
May 28, 2011
Stablebit - The way you do "SSD caching" with a RAMDisk is:

- Create your hardware RAID array
- Create a RAMDisk, let's say 16GB. This has nothing to do with anything else. It's just a 16GB RAM disk. I use the SoftPerfect one, but others may work just as well.
- Create a pool in Drivepool with just that RAID array. No other disk(s)
- Tell Drivepool to use the above created RAM disk as the SSD cache for this pool and configure it for immediate balancing.

Essentially, Drivepool is adding a very fast cache to a Windows disk (that's what a pool is, just a disk). Now, you can use this pool disk for ISCSI/SMB/NFS etc etc. I can disable the SSD cache anytime and nothing else has to change in the config, to continue operating.
So, you would have to wait for the cache to populate after a reboot.
BTW, you know that the built-in RAM caching in Windows Server is actually really good; it might be interesting to see the trade-off between the RAM disk and the in-box caching. I don't know what happens if you try to serve out iSCSI, though.

Edit: To your other questions:

- I don't worry about the write-hole/data corruption for my production stuff, because keep in mind, it is duplicated across two nodes (and then a third copy in Backblaze). HW RAID card manufacturers have been doing this for a very long time. The Adaptecs have a BBU, my systems are on a UPS (that can sustain them for hours). They send out email alerts if a disk drops off etc etc.
Well, I have everything I need to do the same thing, and I would like to.
But, the theory is that if you had corruption, it would just get replicated / backed up in a setup like that.

- I decided not to use ZFS because of a number of reasons, but mainly comfort factor. I can manage my RAID arrays with a very nice and granular UI, without hunting down arcane commands on forums with broken English... :p. What I have works for me, works well, and requires very little maintenance on my part.

End of the day, I don't want to spend more time maintaining my systems than necessary. I have two little boys who eat up all my time as such.
Well, I guess you and I are in same boat. Maybe I could look for some details on the RAID controllers I have to be more comfortable going that way.

-JCL
 

kapone

Well-Known Member
May 23, 2015
So, you would have to wait for the cache to populate after a reboot.
That's not how this caching works. After a reboot, the cache is empty. Notice I said, configure it for immediate rebalancing? A client writing to this pool disk (with the cache) is basically writing to the cache and the cache is immediately writing to the underlying disk (which is the HW RAID array). Think of a FIFO queue. The write operation for the client will finish sooner than the cache is done writing to the underlying disk, which is the point.
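Conceptually, what is described above is a write-back staging tier: the client's write is acknowledged once it lands in the fast tier (the RAM disk), and a background worker immediately drains it to the slow tier (the hardware RAID array) in FIFO order. A toy sketch of that idea, with made-up names and sizes (this is not Drivepool's actual implementation):

```python
# Toy model of a write-back staging cache: writes are acknowledged as soon as
# they hit the fast tier, while a background thread drains them to the slow
# tier in FIFO order. Purely illustrative.

import queue
import threading
import time

class StagingCache:
    def __init__(self, backing_store: dict):
        self.fast_tier = queue.Queue()      # stands in for the RAM disk
        self.backing_store = backing_store  # stands in for the RAID array
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, key: str, data: bytes) -> None:
        """Returns (acknowledges) as soon as the data is staged in RAM."""
        self.fast_tier.put((key, data))

    def _drain(self) -> None:
        while True:
            key, data = self.fast_tier.get()   # FIFO order
            time.sleep(0.05)                   # simulate a slow disk write
            self.backing_store[key] = data
            self.fast_tier.task_done()

    def flush(self) -> None:
        self.fast_tier.join()                  # wait until the cache is empty

array = {}
cache = StagingCache(array)
start = time.time()
for i in range(10):
    cache.write(f"block{i}", b"x" * 4096)      # each call returns almost instantly
print(f"10 writes acknowledged in {time.time() - start:.3f}s")
cache.flush()
print(f"blocks persisted to the array: {len(array)}")
```

The obvious trade-off is that anything still sitting in the fast tier when the box loses power is gone, which is where the UPS and controller BBU mentioned earlier in the thread come in.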

BTW, you know that the built-in RAM caching in Windows Server is actually really good; it might be interesting to see the trade-off between the RAM disk and the in-box caching. I don't know what happens if you try to serve out iSCSI, though.
There is no built in RAM caching in Windows Server, unless you go the Storage Spaces route. I don't want to use it. Too many moving parts.

As far as iSCSI serving is concerned, that is exactly what I do out of the Stablebit pool disk (with the ram cache). The .vhdx disks are on that pool and automatically use the ram cache.

But, the theory is that if you had corruption, it would just get replicated / backed up in a setup like that.
No, it won't. The replication is done by Starwind before any data gets to any disk. This is RAM to RAM replication on the Starwind iSCSI stack on the two machines. Nothing to do with the underlying storage.

Well, I guess you and I are in same boat. Maybe I could look for some details on the RAID controllers I have to be more comfortable going that way.
:)
 

jcl333

Active Member
May 28, 2011
That's not how this caching works. After a reboot, the cache is empty. Notice I said, configure it for immediate rebalancing? A client writing to this pool disk (with the cache) is basically writing to the cache and the cache is immediately writing to the underlying disk (which is the HW RAID array). Think of a FIFO queue. The write operation for the client will finish sooner than the cache is done writing to the underlying disk, which is the point.
OK, so you start off with write caching, and then read caching as it gets populated? Yes, this is very much like the cache on a RAID controller.

There is no built in RAM caching in Windows Server, unless you go the Storage Spaces route. I don't want to use it. Too many moving parts.
Performance Tuning for Cache and Memory Manager Subsystems
At work, we give our Windows file servers as much RAM as we can, usually 32-64GB. You can go into the memory section of Resource Monitor to see how much is being used; it's called "Standby" memory. We often see our servers filling whatever RAM they have left to cache as much as possible. This long predates Storage Spaces; I'm not sure, but it might go back to NT 4.0 or even 3.51. We use it for file servers and also for our Citrix Provisioning Servers for VDI (which serve .vhdx files), and the difference is night and day. There are some settings you can tweak for it, and under some circumstances it will not be enabled. And of course you want a UPS if you are going to use write caching.

I am sure there are big differences between this and a separate / 3rd party solution like what you are using, but I think you should give this a look.

As far as iSCSI serving is concerned, that is exactly what I do out of the Stablebit pool disk (with the ram cache). The .vhdx disks are on that pool and automatically use the ram cache.
Are there iSCSI features built into Stablebit, or does it just work well together with it?
The Stablebit seems nice, and seems to me you would use it instead of a raid controller rather than with one, but I guess it is nice to have the flexibility.

So, I was thinking about using the RAID controller, and I just noticed that you can buy the MegaRAID CacheCade Pro 2.0 license key for around $50 (these cost around $270 when they came out with the controllers years ago). Reading up on this feature, it was great even back in the days of 30GB SSDs and 3G/6G SAS. This is still heavily used on OEM cards by Lenovo and others.
LSI MegaRAID CacheCade Pro 2.0 Review
LSI MegaRAID CacheCade Pro 2.0 Review – Real World Results and Conclusion - The SSD Review
Hybrid SSD storage & LSI MegaRAID CacheCade Pro v2.0
Imagine putting 4x 400GB 12G SAS SSDs into this thing. I'm thinking 16x 4TB SAS drives in RAID 60 plus 4x 400GB RAID 1 cache drives; it could be really nice. And this RAID controller has a million cool little features - the user manual is almost 400 pages.

I'm thinking I do that with this server, and then for the 2nd server do either TrueNAS or Storage Spaces with the 7x 10TB SATA drives; they are shucked drives and supposedly WD Reds, but I am still hesitant to put them into a hardware RAID.

-JCL
 

Whaaat

Active Member
Jan 31, 2020
- iSCSI connected to Server 2019 DE, just a filer, no VMs
- My networking in the house is all 1Gig CAT6, I only use WiFi for mobile and IOT, might start getting into 2.5/5Gig Ethernet
I would start by switching to 40/56Gb Ethernet: buy a couple of Mellanox cards, rewire the home with cheap cabling, and forget about iSCSI, because with Windows Server 2019 you can have SMB Direct (RoCE), which combined with 40GbE is way cooler than iSCSI or plain Samba over 2.5/5/10Gig Ethernet. The only caveat is that the client side has to run Windows 10 for Workstations or Enterprise edition.