Shared storage ideas...vSphere 6.5


BSDguy

Member
Sep 22, 2014
Hi All

For a year I have been using a Windows box for my shared iSCSI storage, with all-flash (SATA) drives and 10Gb SFP+ networking. I was running StarWind Virtual SAN, but now that the license has expired StarWind won't renew it (even for a home lab). I've had so many issues with this setup that I'm almost glad the license wasn't renewed, so I'm looking for a replacement for the shared iSCSI storage. I'm open to ideas!

So I have storage vMotioned all my VMs to temporary local SSD storage in my 3 hosts. I have about 35 VMs that use around 1TB of disk space. Not having shared storage is just painful and you lose HA/DRS.

To be honest I'm overwhelmed with all the storage options but I have narrowed it down to these 3 so far:

1) Buy a Synology (like the DS3018xs or DS3617xs) and fit the unit with a dual port 10Gb NIC - pricey but convenient and I'm assuming more stable than what I am used to

2) Use VMware vSAN (all flash NVMe) - would be great but trying to find affordable AND compatible cache AND capacity tier drives seems tricky

3) Build a new storage server based on all flash NVMe storage and Supermicro - not sure if this is possible but can you RAID NVMe SSDs yet? And what NAS/SAN software to use for iSCSI?

10Gb is a must, as is an all-flash setup (preferably with NVMe SSDs). I have two Supermicro 5028D-TN4T servers and a custom-built Supermicro X10SL7-F based server. I have four Samsung SM863 480GB drives, two Samsung 850 Pro 512GB and two Samsung 960 EVO 250GB NVMe drives.

Hope the forum can help me make the right decision for my home lab's storage to take it to the next level ;-)
 

StammesOpfer

Active Member
Mar 15, 2016
Why not drop FreeNAS or one of the other NAS distros onto the existing box to replace Windows? Or is this a multi-purpose Windows box?
 

I_D

Member
Aug 3, 2017
I too am interested in this question.

Let's say we go the FreeNAS route and turn it into an iSCSI box.
If we then put a dual-port 10GbE NIC in the ESXi host, could I segregate the iSCSI traffic on one port and the "regular" LAN traffic on the other?
Has anyone done this? If so, what settings did you configure on your ESXi hosts to get this working, and what models of dual 10GbE cards do you recommend?
How do you guys have your VMware storage set up? I am very interested in learning more about what works and what does not!
 

StammesOpfer

Active Member
Mar 15, 2016
Yeah, it is pretty easy to do. Give the two interfaces IPs, then point the SAN traffic at one IP and the management LAN traffic at the other. Even better, use separate VLANs and subnets.
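For anyone wanting the concrete steps, here is a minimal sketch from the ESXi shell (esxcli), assuming the second 10GbE port shows up as vmnic2, the iSCSI VLAN is 20 and 10.0.20.0/24 is the storage subnet; all names, VLAN IDs and addresses are placeholders, and the same thing can be done in the vSphere client.

# new vSwitch dedicated to storage, with the second 10GbE port as its uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# port group tagged with the iSCSI VLAN
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=20

# dedicated VMkernel interface with its own IP for iSCSI traffic
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.20.11 --netmask=255.255.255.0 --type=static

After that, bind vmk1 to the software iSCSI adapter (esxcli iscsi networkportal add --adapter=vmhbaXX --nic=vmk1, where the vmhba name depends on the host) and leave the regular LAN port group on the other uplink.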
 

gea

Well-Known Member
Dec 31, 2010
If you have enough RAM for a storage VM and some CPU power, you can virtualise a ZFS SAN/NAS storage server. This gives you all the ZFS advantages with the ease of an appliance and the best performance, as connectivity between ESXi and storage is handled in software over the ESXi vSwitch via vmxnet3 vnics. You can add external 10G connectivity for external traffic. For VMs, you can use an SSD pool or a mirror of Intel NVMe devices in pass-through mode.

My preferred ZFS solution is Solarish, where ZFS comes from, due to its tight integration of OS, ZFS and services, and because even the smallest Solaris distribution includes enterprise-class iSCSI, NFS, SMB and network virtualisation, all developed by Sun/Oracle and now also maintained in the free Solaris fork Illumos.

For ESXi I have offered a storage VM for more than 7 years now; see https://www.napp-it.org/doc/downloads/napp-in-one.pdf
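For reference, a minimal sketch of the pool side of what gea describes, assuming two NVMe devices passed through to the storage VM and illumos/OmniOS-style sharing (Oracle Solaris uses the share.nfs property instead of sharenfs); pool, dataset and device names are placeholders:

# mirrored pool from the two passed-through NVMe devices
zpool create -O compression=lz4 vmpool mirror c1t0d0 c2t0d0

# dataset for VM storage, exported over NFS to the ESXi hosts
zfs create vmpool/nfs_vmstore
zfs set sharenfs=on vmpool/nfs_vmstore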
 

BSDguy

Member
Sep 22, 2014
I considered FreeNAS before (over a year ago though, and I haven't been following it since), but from what I remember, using FreeNAS/ZFS/iSCSI causes the storage to become fragmented, which is problematic? Correct me if I am wrong...

Up until this point I have had 3 servers: two were used in my vSphere 6.5 cluster and the third was a Windows box running StarWind Virtual SAN for shared storage, as well as Veeam for backups.

I can rebuild the 3rd server and use it as a SAN with different software, and run Veeam backups from a VM instead. The problem is, my SAN server is about 4 years old, so it is limited in terms of PCIe slots. There are only two, and one is used for the dual-port 10Gb NIC. The other PCIe slot tops out around 2000MB/s, which is too slow for NVMe drives. I used my SAN server as a FreeBSD server with ZFS for 2 years and then for a year with Hyper-V and ESXi 6.0 with no problems, but for some reason while using StarWind Virtual SAN for a year I had so many issues (corrupt drives, drives disappearing in Windows, VMs becoming corrupted upon reboot and entire datastores vanishing in vSphere) that I am tired of it and looking for something rock solid/stable from a storage point of view!

I'm really keen on vSAN but am trying to find a practical set of drives to use. SATA drives seem to be a no-no due to their low queue depth, so I am considering an all-flash NVMe setup for vSAN. For this I was considering installing a Supermicro AOC-SLG3-2M2 NVMe PCIe 3.0 card in each ESXi server and then installing a Samsung 960 EVO 1TB SSD on this card for the capacity tier. The cache tier is where I am struggling, so I was thinking of using an Intel Optane SSD 900P 280GB with a U.2 to M.2 adapter cable.
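If it helps, claiming the drives into an all-flash disk group can be done per host from the ESXi shell; a rough sketch, where both t10.* identifiers below are placeholders standing in for the Optane (cache) and the 960 EVO (capacity) - the real names come from the device list, and consumer NVMe like the 960 EVO is not on the vSAN HCL, so treat this as lab-only:

# find the NVMe device identifiers on the host
esxcli storage core device list | grep -i nvme

# claim one cache device and one capacity device into a vSAN disk group
esxcli vsan storage add --ssd=t10.NVMe____INTEL_OPTANE_900P --disks=t10.NVMe____Samsung_960_EVO_1TB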
 

whitey

Moderator
Jun 30, 2014
vSphere 6.5 has iSCSI unmap enabled by default, so no issues with space reclaim there anymore. Why go to that hassle though... 'just use NFS' :-D
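For what it's worth, on a VMFS6 datastore the automatic reclaim setting can be checked and adjusted from the ESXi shell, and older VMFS5 datastores can still be unmapped manually; a quick sketch, with the datastore name as a placeholder:

# check automatic space reclamation on a VMFS6 datastore
esxcli storage vmfs reclaim config get --volume-label=flash-ds01

# set the reclaim priority (low is the default)
esxcli storage vmfs reclaim config set --volume-label=flash-ds01 --reclaim-priority=low

# manual unmap, still useful for VMFS5 datastores
esxcli storage vmfs unmap --volume-label=flash-ds01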
 


whitey

Moderator
Jun 30, 2014
Yes, it is indeed a nifty feature, long needed... and umm, a resounding YES to NFS on FreeNAS served up to vSphere!
 

BSDguy

Member
Sep 22, 2014
Any issues with fragmentation like with iSCSI, or is this not an issue anymore with the new version of FreeNAS (v11 now)?

I've never used NFS or seen it used anywhere; I've always used/seen iSCSI. So what are the pros/cons of each?

What about VAAI? And should I consider NAS4Free?
 

gea

Well-Known Member
Dec 31, 2010
Can napp-it be installed on a physical box rather than in a VM?
Yes, on top of Oracle Solaris (genuine ZFS, the fastest ZFS server, with the most features like encryption and much faster sequential resilvering than any Open-ZFS) or on the free Solaris forks OmniOS or OpenIndiana (Open-ZFS).

http://www.napp-it.org/doc/downloads/setup_napp-it_os.pdf
http://www.napp-it.org/doc/downloads/napp-it.pdf

About ZFS fragmentation:
Fragmentation, especially when a pool is nearly full, is a problem for any copy-on-write filesystem. Sun developed superior caches (ARC, L2ARC) to deliver most reads from RAM, plus a large RAM-based write cache for fast, large, serialized/sequential writes.

Another aspect is TRIM on SSDs to keep write performance high. But if you TRIM at the OS level you must keep a map of all writes, which costs RAM and CPU, so this works best with cheap SSDs and a low write load. For a high-performance system with a steady write load, using enterprise SSDs with decent overprovisioning is faster. While desktop SSDs with a nominal 80k IOPS drop to 5-10k IOPS under steady write load, a good enterprise SSD like an Intel S37xx will keep >40k IOPS.
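A quick way to keep an eye on this on an Open-ZFS pool (the fragmentation property exists on OmniOS/FreeBSD Open-ZFS; I'm not sure Oracle Solaris exposes it), with the pool name as a placeholder:

# show pool fill level and fragmentation; keep VM pools well below the ~80-90% full mark
zpool list -o name,size,allocated,free,capacity,fragmentation vmpool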
 

BSDguy

Member
Sep 22, 2014
Thanks for the helpful links, I am busy reading the PDF setup_napp-it_os.pdf.

It sounds like Oracle Solaris is the way to go to get the most up-to-date version of ZFS, but one of the disadvantages mentioned is that it is not free, though he does say that you can use it for testing/development. Does that mean I can use it unrestricted and without any time limits in my home lab for free?

I'll be using Samsung SM863 enterprise SATA drives for my VM storage, so hopefully this gives me some good IOPS!
 
Jan 4, 2014
Oracle will mean no compatible pool, no updates and Oracle restrictions.

Been running OmniOS (napp-it on top) now for a few years without major issues, both using NFS and iSCSI to Linux and VMware.

A second box runs FreeBSD, and it is very easy to move my pool from one to the other without data loss or rebuilds of the pool.

Your SSDs will do fine.

send from a mobile device, so typo's are to be expected :)
 

BSDguy

Member
Sep 22, 2014
Thanks! I installed Oracle Solaris 11.3 in a test VM and it installed fine; I haven't been prompted for any license key, so I assume it will run "forever" unrestricted without needing a license purchase for home lab use?

I also installed napp-it, but in the web GUI under the About section it says it will expire on 1 Dec 2017. Do I need to purchase a license to use napp-it in a home lab setting?

I don't mind running v37 of ZFS, as I'll never be moving the pool between different OSes.

Is VAAI supported with Oracle Solaris/napp-it/iSCSI?
 

whitey

Moderator
Jun 30, 2014
I know you are heading down the napp-it/Solaris path, but just for reference/completeness (VAAI was added in 'limited' fashion in 9.3):

FreeNAS 9.3 Features - Support for VMware VAAI
29. VAAI — FreeNAS®11.1-RC1 User Guide Table of Contents

I still think they are missing VAAI for NFS, so take that for what it's worth; it was available on the TrueNAS platform I researched since 9.10.2, known as 'VAAI for NAS'. I wonder why it never trickled down to the NFS protocol. All said, I live and die by NFS services in a LOT of my infra.
 
Jan 4, 2014
Not the complete suite; all except one, I believe, in OmniOS/OpenIndiana.
As for Oracle, my personal dislike of anything and everything that comes from Larry's band of crooks prevents me from finding out ;)

FreeBSD's iSCSI implementation says it is enabled, but I haven't really tested it, as I only use it when I need to do updates or tinkering on the OmniOS box, so uptime is more important than anything.

send from a mobile device, so typo's are to be expected :)
 

BSDguy

Member
Sep 22, 2014
Appreciate the comment!

To be honest I have never used NFS. I'm not ruling it out yet ;-) but I'm leaning towards iSCSI at this stage, as that is what all our clients use, so I'm trying to keep my lab similar to our clients' production environments.

Having said that, what setup can I use for iSCSI *and* with VAAI support (besides FreeNAS)? Maybe I'm having a bad Google search day, but does napp-it support VAAI? I've been using unmap lately and it's *awesome* for thin disks!

Edit: Does FreeNAS 11 support VAAI with iSCSI?
 

whitey

Moderator
Jun 30, 2014
NFS has been around since WELL before iSCSI, pretty certain (30+ yrs). I had a good breakdown of the pros/cons of NFS/iSCSI but am having a hard time tracking it down in the STH forum posts. Don't really want to re-type all that... bottom line, NFS is easier to deploy and manage (fewer layers of abstraction, à la zvol/iSCSI target/VMFS format/etc.) and DAMN near as performant, if not in lock-step, 'most' of the time. Both are great cluster-aware storage protocols heavily relied upon in the virtualization/cloud space, so you can't really go wrong if you deploy following best practices... it comes down to preference/familiarity/experience/what your 'shop' uses. No issues experimenting with both in parallel to see what flavor of 'stg proto koolaid' you like :-D

FreeNAS or any Open-ZFS/Solarish box (see Gea, you got me doing that now hah) will serve up/chew up NFS/iSCSI I/O with enterprise-class devices backing the pool.
 

Evan

Well-Known Member
Jan 6, 2016
I would say NFS is more flexible and way easier to work with than iSCSI.
iSCSI used to have an advantage in ESX deployments but NFS seems much better now.
Keep in mind we are talking NFS at 10G or more, and keep it on a separate VLAN if you can. (At 1G speeds iSCSI had clear advantages and the load balancing was useful; NFS at 1G, don't do it.)

For simplicity's sake NFS would be my choice.

One gotcha with NFS though: if you want to run Microsoft clusters on your VMware hosts, it's not generally supported when your filesystems are on NFS.
https://kb.vmware.com/s/article/2147661?language=en_US
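For completeness, mounting an NFS export as a datastore is a one-liner per host from the ESXi shell (or the same in the vSphere client); server IP, export path and datastore name below are placeholders, and 6.5 also supports NFS 4.1 via 'esxcli storage nfs41 add' if you want multipathing:

# mount the NFS export from the storage box as a datastore
esxcli storage nfs add --host=10.0.20.50 --share=/vmpool/nfs_vmstore --volume-name=nfs_vmstore

# list mounted NFS datastores
esxcli storage nfs list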
 

gea

Well-Known Member
Dec 31, 2010
Besides the free napp-it edition there is a Pro edition with support and some extra features. After first setup there is a 30-day trial of the additional Pro features (no OS/ZFS restrictions; they are mainly convenience features).

There is currently no VAAI support in the free Solaris forks apart from NexentaStor (there is a free community edition, forbidden for commercial use and with capacity restrictions). But for VM storage, NFS is as fast as iSCSI and much easier, and you can have concurrent SMB access for copy/clone/move/backup, with snaps exposed as Windows Previous Versions.
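As a rough sketch of that concurrent access on an illumos/OmniOS box (Oracle Solaris uses the share.nfs/share.smb property names instead), with placeholder pool/dataset/snapshot names:

# same dataset served to ESXi over NFS and browsable from Windows over SMB
zfs set sharenfs=on vmpool/nfs_vmstore
zfs set sharesmb=on vmpool/nfs_vmstore

# snapshots of the dataset show up in Windows as Previous Versions over SMB
zfs snapshot vmpool/nfs_vmstore@manual-1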