My new OSNexus QuantaStor HA SAN


Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
So you don't have a dedicated SLOG in place for the QuantaStor? I mean, 800 MB/s is not bad...
 

gcs8

New Member
Sep 27, 2022
15
2
3
QS6 is now out, and the Community Edition got a bump in capacity.

“Community Edition (60TB raw capacity limit per server, time-limited 2-year license, 4x servers max per storage grid)”
 
  • Like
Reactions: Rand__

ano

Well-Known Member
Nov 7, 2022
634
259
63
What, 60 TB? That's new, right?! That will be great for lab systems and some mild prod/test stuff.

We have quite a few systems from them on enterprise licenses. I only have good things to say about them, their support, and the product; we've used them for roughly 7-10 years for most SDS stuff, and recently also their HA setup with dual SAS expanders and enclosure redundancy, plus ZFS replication to another site.
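Side note: that remote replication boils down to incremental ZFS send/receive (my assumption about what the feature amounts to under the hood; QuantaStor drives it for you). A minimal sketch of the flow, with made-up dataset and host names:

Code:
# Rough sketch of ZFS-based remote replication between two sites.
# Dataset names, snapshot names, and the remote host are hypothetical.
import subprocess

SRC = "pool0/vols"        # hypothetical local dataset
DST_HOST = "site-b"       # hypothetical remote host
DST = "pool0/vols"        # hypothetical remote dataset

def replicate(prev_snap, new_snap):
    # Take a recursive snapshot, then stream it to the remote pool over ssh.
    subprocess.run(["zfs", "snapshot", "-r", f"{SRC}@{new_snap}"], check=True)
    send = ["zfs", "send", "-R"]
    if prev_snap:                          # incremental after the first full send
        send += ["-I", f"@{prev_snap}"]
    send.append(f"{SRC}@{new_snap}")
    sender = subprocess.Popen(send, stdout=subprocess.PIPE)
    subprocess.run(["ssh", DST_HOST, "zfs", "receive", "-F", DST],
                   stdin=sender.stdout, check=True)
    sender.wait()

replicate(None, "rep-0001")        # initial full send
replicate("rep-0001", "rep-0002")  # later incremental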

I installed the first two QS6 systems a few hours ago, then saw this here. I'm actually using them for a power-testing baseline and discovered v6 by accident when grabbing the latest ISO.

ZFS performance is what it is regardless of the system and overlay, though; our solution is to throw hardware/CPU at the problem, and AMD does seem to be king now. Can't wait to benchmark stuff with Genoa. Lots of RAM and memory channels, plus say one or two 7443s or better, is key if you have all flash.

ZFS on spinning drives = pretty amazing; it really gets that last bit of juice out of them.
ZFS on all flash (NVMe or SAS/SATA) is pretty sad, but most of the time it's good enough. We can max out 2x 100 Gbps coming out of the larger arrays, and the limit is around there; I think we got about 24 GB/s on sustained 128 KiB random writes. But then we are talking three large JBODs full of flash across 3x 9500 HBAs and two hosts, and it eats all the CPU/RAM it can get.
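Back-of-the-envelope on those numbers (just a sanity check, assuming 128 KiB blocks and ignoring protocol overhead):

Code:
# How close 24 GB/s sustained gets to 2x 100 Gbps line rate, and what it
# works out to in IOPS at 128 KiB blocks (overhead ignored).
line_rate_gbps = 2 * 100
line_rate_gBps = line_rate_gbps / 8          # 25 GB/s theoretical
sustained_gBps = 24                          # observed sustained random writes
block_bytes = 128 * 1024
iops = sustained_gBps * 1e9 / block_bytes
print(f"line rate ~{line_rate_gBps:.0f} GB/s, "
      f"observed {sustained_gBps} GB/s ({sustained_gBps / line_rate_gBps:.0%} of wire), "
      f"~{iops:,.0f} IOPS at 128 KiB")

So at that block size the arrays are basically wire-limited, which matches the "limit is around there" observation.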

Regarding "RAM is king": a cheap 7402 going from 128 GB to 256 GB (16 GB vs 32 GB DIMMs, roughly a 30-40% difference) will see a massive bump for all-flash stuff, and then going to a 7443 with 256 GB of 3200 RAM you will get pretty much double the IOPS regardless of block size.
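If you want to reproduce that kind of comparison yourself, running the same fio random-write sweep on each config and comparing the reported IOPS is the simplest approach. A rough sketch; the target path and job sizing are placeholders, not anything QuantaStor-specific:

Code:
# Hypothetical fio sweep to compare IOPS across block sizes on two configs
# (e.g. 7402/128GB vs 7443/256GB); filename and sizing are placeholders.
import subprocess

def randwrite(bs, target="/mnt/pool0/fio.test"):
    # Sustained O_DIRECT random-write job; rerun on each box and compare IOPS.
    subprocess.run([
        "fio", "--name=randwrite", f"--filename={target}",
        "--rw=randwrite", f"--bs={bs}", "--size=32G",
        "--ioengine=libaio", "--iodepth=32", "--numjobs=8",
        "--direct=1", "--time_based", "--runtime=60",
        "--group_reporting",
    ], check=True)

for bs in ("4k", "128k", "1m"):
    randwrite(bs)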
 

gcs8

New Member
Sep 27, 2022
15
2
3
Yes, Steve upped it for QS6. I was hoping for 80-100 TB, but 60 TB for a lab is still a good bit for an all-flash array. There is talk that as long as you are doing cool stuff at home you can get your license bumped to what you need, just with no support outside of the forum.

I just built an HA QS6 array at work that I am pretty excited about at the moment.

I did a lot of testing and bug reporting for QS6, and there are some cool things in the pipeline for NVMe performance profiles coming in the next year or so, so keep an eye out.
 
  • Like
Reactions: ano

ano

Well-Known Member
Nov 7, 2022
634
259
63
40% performance increase on the same hardware between the latest v5 (on Bionic) and v6 (on Focal).
 
  • Like
Reactions: gcs8