ZFS bottlenecks


groove

Member
Sep 21, 2011
If you like StarWind performance, you could think about replacing the non-RDMA Intel NICs with Mellanox CX3 (they are cheap on eBay) or CX4 cards to get iSER rather than iSCSI for both east-west traffic and the vSphere uplinks. RDMA is king :)
What backend (Solaris/FreeBSD/Linux-based ZFS) would you recommend for an Ethernet (RoCE?) based iSER target? I don't believe FreeBSD supports iSER targets, and Solaris seems to support only InfiniBand-based iSER targets, so the only choice seems to be a Linux-based system.

I'd like to hear from everyone whether my current assessment of iSER is correct or if I'm missing something.
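For the Linux route, the in-kernel LIO target can export a ZFS zvol over iSER through targetcli. A rough sketch of the steps, wrapped in Python only for repeatability (the zvol path, IQN and portal IP are made-up placeholders, and I have not verified this end to end):

    # Sketch: export a ZFS zvol over iSER with LIO/targetcli on Linux.
    # All names (zvol path, IQN, portal IP) are placeholders.
    import subprocess

    def targetcli(cmd):
        # targetcli accepts its command line as arguments when run non-interactively
        subprocess.run(["targetcli", *cmd.split()], check=True)

    ZVOL = "/dev/zvol/tank/iser-vol1"                  # hypothetical zvol
    IQN = "iqn.2003-01.org.linux-iscsi.storage:iser1"  # hypothetical IQN
    IP, PORT = "192.168.10.10", "3260"                 # RDMA-capable (RoCE) NIC

    targetcli(f"/backstores/block create name=iser-vol1 dev={ZVOL}")
    targetcli(f"/iscsi create {IQN}")
    targetcli(f"/iscsi/{IQN}/tpg1/luns create /backstores/block/iser-vol1")
    # Depending on targetcli defaults, the auto-created 0.0.0.0:3260 portal
    # may need to be deleted before adding a specific one.
    targetcli(f"/iscsi/{IQN}/tpg1/portals create {IP} {PORT}")
    # The key part: switch the portal from plain iSCSI to iSER.
    targetcli(f"/iscsi/{IQN}/tpg1/portals/{IP}:{PORT} enable_iser true")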
 

LaMerk

Member
Jun 13, 2017
@NISMO1968
Mellanox cards are great, but my VMware hosts are on Intel cards.
I do not like Open-E, but they provide a product with support. I am just afraid of building a cluster by myself without support.
It seems that I have missed the point here :)
Since you did performance tests of Open-E ZFS and StarWind, found that the latter is twice as fast, and you don't like Open-E, then just use StarWind! By the way, they also have support, which can help you with cluster building ;)
 

Stril

Member
Sep 26, 2017
Hi!

@LaMerk
StarWind is great; it lacks only one thing: replication and snapshots. They are only available with LSFS, which is very slow (in my tests)...

I did a lot of testing too. StarWind scales well with fast flash storage, but I hope to find a solution WITH snapshots and replication.

...I will do some performance tests with Open-E and other CPUs and hope to find a faster solution.
 

Stril

Member
Sep 26, 2017
Mostly for DR setups.
I need to replicate data to another building. My first thought was to use Veeam replication, but every replication run results in a short outage while the snapshot is taken. ZFS replication works without any problem, but performance is not as good as with StarWind.
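For what it's worth, the no-outage part of ZFS replication is just atomic snapshots plus incremental send/receive, which never touches the running VMs. A minimal sketch of the idea (dataset names and the DR host are made-up placeholders, not the real setup):

    # Minimal sketch: incremental ZFS replication to a box in the other building.
    # Dataset, snapshot prefix and remote host are placeholders.
    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/vmstore"        # hypothetical source dataset
    REMOTE = "root@dr-host"         # hypothetical DR server
    REMOTE_DS = "backup/vmstore"    # hypothetical target dataset

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    prev = "repl-prev"              # last snapshot already present on the DR side
    new = "repl-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")

    # Taking the snapshot is atomic and does not pause the VMs.
    sh(f"zfs snapshot {DATASET}@{new}")

    # Ship only the delta since the previous replicated snapshot.
    sh(f"zfs send -i @{prev} {DATASET}@{new} | ssh {REMOTE} zfs receive -F {REMOTE_DS}")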

My current feeling is that ZFS is GREAT with hybrid pools but does not benefit as much as it should from all-flash. I did a test with 24 SAS SSDs some months ago; performance was not much better than with a cheap spinning pool with a fast SLOG.

The third "alternative" would be something from DELL-EMC/HP/Netapp...
 

Rand__

Well-Known Member
Mar 6, 2014
Ah right, I'm so used to HCI nowadays that I forgot you are not virtualizing; otherwise you could have run ESXi FT on top of it.

I assume the cross-building links prevent extending the StarWind cluster to the other building?
 

m4r1k

Member
Nov 4, 2016
Mostly for DR setups.
I need to replicate data to another building. My first thought was to use Veeam replication, but every replication run results in a short outage while the snapshot is taken. ZFS replication works without any problem, but performance is not as good as with StarWind.
Have you at least tried Oracle Solaris 11.4? As Gea pointed out, Solaris is much faster than OmniOS.
Support from Oracle is about 900 euro per year per server. I would personally prefer ZFS to any other "solution" like StarWind, which, frankly speaking, has never been proven.

I mean, your concern is losing a snapshot, and yet you consider StarWind because it's faster? o_O

My current feeling is that ZFS is GREAT with hybrid pools but does not benefit as much as it should from all-flash. I did a test with 24 SAS SSDs some months ago; performance was not much better than with a cheap spinning pool with a fast SLOG.
Same story here. Take a good look at Gea's ZFS performance white paper using Intel Optane and you will discover that a pool with Optane is nearly as fast as an all-flash one.
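For reference, retrofitting an Optane (or any low-latency NVMe) device as a SLOG onto an existing pool is just a zpool add; a rough sketch with made-up pool and device names:

    # Sketch: add a low-latency NVMe/Optane device as a separate log (SLOG) vdev.
    # Pool name and device path are placeholders.
    import subprocess

    POOL = "tank"
    SLOG = "/dev/disk/by-id/nvme-optane-900p-example"   # hypothetical device

    subprocess.run(["zpool", "add", POOL, "log", SLOG], check=True)
    # Confirm the new "logs" section appears in the pool layout.
    subprocess.run(["zpool", "status", POOL], check=True)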

The third "alternative" would be something from DELL-EMC/HP/Netapp...
HPE with 3PAR has some nice engineered solutions for DR (but then most vendors do). There are at least two issues here:
- Between the two DCs/sites you will need redundant 10 Gbps fibre.
- Sync writes will kill performance no matter what (a quick illustration of the latency problem follows below).
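On the sync write point, the pain is easy to see even without ZFS: every synchronous write has to reach stable storage before it returns, so per-write latency, not bandwidth, sets the ceiling. A tiny illustration (the file path is arbitrary; point it at the pool you care about):

    # Tiny illustration of why sync writes hurt: every write must be on stable
    # storage before returning, so per-write latency dominates throughput.
    import os, time

    PATH = "/tank/synctest.bin"     # arbitrary test file on the pool in question
    BLOCK = os.urandom(4096)
    N = 1000

    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.monotonic()
    for _ in range(N):
        os.write(fd, BLOCK)
        os.fsync(fd)                # comparable to sync=always / O_DSYNC writes
    elapsed = time.monotonic() - start
    os.close(fd)
    os.remove(PATH)

    print(f"{N} x 4K sync writes: {elapsed:.2f}s, "
          f"{N / elapsed:.0f} IOPS, {elapsed / N * 1000:.2f} ms per write")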

NetApp is closer to ZFS, and the way it works and its stability in general are very nice. Operating it is a pleasure, and it is true storage: the NetApp people will configure it, and you will forget about it.

The real issue with all the traditional vendors (and you're not even quoting Hitachi) is the cost. For about 1800 euro per year you can cover two DIY servers running Oracle Solaris, whereas with Dell EMC it is more expensive to re-open a support contract for an old SAN than to buy a new one.

You're gonna end up in the >100K range for two devices.
 

Stril

Member
Sep 26, 2017
Hi!

The problem with Optane is that it does not exist with a SAS interface --> it cannot be used in a cluster.

@Oracle-Solaris:
1800 euro per year is not a problem, but does Oracle offer a "storage cluster solution"? I thought this would lead to:
--> Pay Oracle for OS support
--> Buy RSF-1
--> Configure iSCSI with napp-it
--> Be responsible for maintaining the whole solution.....
 

m4r1k

Member
Nov 4, 2016
Hi!

The problem with Optane is that it does not exist with a SAS interface --> it cannot be used in a cluster.

@Oracle-Solaris:
1800 euro per year is not a problem, but does Oracle offer a "storage cluster solution"? I thought this would lead to:
--> Pay Oracle for OS support
--> Buy RSF-1
--> Configure iSCSI with napp-it
--> Be responsible for maintaining the whole solution.....
That's called ZFSSA, which is in the same price range as EMC and NetApp, on SPARC machines.

If you want DIY, well, you're gonna need to get your hands a bit dirty, but once correctly set up, ZFS is the most secure thing you can get on x86.