Best software defined storage for NVMe - low latency


Firebat

New Member
Jan 12, 2022
12
2
3
Hi Community,
I'm in need of fast storage (low latency, high IOPS) to serve as backing storage for virtual machines. I've taken a closer look at Ceph, but I don't think it's the right choice for an all-flash setup: because of Ceph's internal architecture I can't really get the full performance out of the NVMe drives. I really like the idea of scaling up and scaling out, though, and I'm pretty sure there are other systems out there that can fulfill my needs.
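For reference, one quick way to put numbers on "low latency, high IOPS" for any candidate backend is a small 4k random-read fio run. The sketch below is a minimal example, assuming fio is installed and that the hypothetical path /mnt/testvol sits on the storage under test:

```python
# Minimal sketch: drive fio from Python to measure 4k random-read IOPS and latency.
# Assumes fio (3.x, for the clat_ns JSON field) is installed; TEST_FILE is a hypothetical path.
import json
import subprocess

TEST_FILE = "/mnt/testvol/fio.bin"   # hypothetical file on the candidate storage

cmd = [
    "fio",
    "--name=4k-randread",
    f"--filename={TEST_FILE}",
    "--rw=randread", "--bs=4k",
    "--ioengine=libaio", "--direct=1",
    "--iodepth=32", "--numjobs=4",
    "--size=10G", "--runtime=60", "--time_based",
    "--group_reporting",
    "--output-format=json",
]

out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
job = json.loads(out)["jobs"][0]["read"]
print(f"IOPS: {job['iops']:.0f}")
print(f"mean completion latency: {job['clat_ns']['mean'] / 1000:.1f} µs")
```

Running the same job against each candidate (Ceph RBD, Lightbits, StarWind, local NVMe as a baseline) makes the comparisons concrete.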

Does anyone here have some experience with Lightbits LightOS or StarWind SAN & NAS? Open source (or at least freely available for testing) would be nice but isn't a must.

Thanks in advance for your ideas. :)
 

Firebat

New Member
Jan 12, 2022
12
2
3
Feels a bit like my experience so far. But ZFS as a filer misses all the nifty features that newer SDS solutions deliver.
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
MooseFS?
Also, Nutanix CE (free, limited to 4 nodes). While Nutanix itself is fairly expensive, it does support NVMe and RDMA (which makes sense with NVMe drives).
 
  • Like
Reactions: Firebat

Firebat

New Member
Jan 12, 2022
12
2
3
MooseFS (Pro?) has been on my radar, but I haven't looked any further yet. Do you have any experience with it, in home or production use?

Nutanix is something I've heard of, but I thought you needed piles of money to use it, even worse than VMware. I didn't know they have a "Community Edition"; I'll take a look.
 

i386

Well-Known Member
Mar 18, 2016
4,220
1,540
113
34
Germany
How many petabytes do you need?
HDR or EDR?
How many concurrent clients?
"Streaming"-like workloads or random IO?
 

Firebat

New Member
Jan 12, 2022
12
2
3
Capacity isn't the big requirement; performance is key. We are talking about 150 VMs with ~80 TB of data, nothing fancy. I get the feeling that my initial idea is a bit over the top. The reason for evaluating Ceph was the possibility to add more nodes easily, which is something ZFS cannot (easily) achieve. My workload could be addressed with "traditional" storage solutions, but expandability is crucial. I don't want any more data silos; I want to be able to add a new 1U or 2U node and off we go.
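As a rough sanity check on sizing: with Ceph-style 3x replication and some free-space headroom, ~80 TB of data works out to roughly 320 TB raw. The back-of-the-envelope sketch below illustrates this; the drive size and drives-per-node figures are my own assumptions, not from the thread.

```python
# Rough capacity arithmetic for ~80 TB of VM data,
# assuming 3x replication (Ceph default) and ~25% free-space headroom.
DATA_TB = 80
REPLICAS = 3
HEADROOM = 0.25                      # keep the cluster below ~75% full

raw_needed_tb = DATA_TB * REPLICAS / (1 - HEADROOM)
print(f"raw capacity needed: {raw_needed_tb:.0f} TB")     # ~320 TB

DRIVE_TB = 7.68                      # assumed NVMe drive size
DRIVES_PER_NODE = 10                 # assumed drives in a 1U all-NVMe node
nodes = raw_needed_tb / (DRIVE_TB * DRIVES_PER_NODE)
print(f"nodes of {DRIVES_PER_NODE} x {DRIVE_TB} TB: {nodes:.1f}")   # ~4.2 nodes
```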

I really like the Ceph idea of adding new hardware --> rebalance --> shut down and remove the oldest node --> rebalance. There has to be a solution that can make good use of NVMe drives without the Ceph overhead. Or do you think a 3-node Ceph cluster with NVMe drives would perform "well enough"? If I invest around 70k €, I want to get the maximum out of it. But I don't want to involve five different sales departments of five different companies without some real-life experience from other people.
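For reference, that add/drain cycle maps to a handful of standard Ceph CLI steps. The sketch below drives them from Python; the OSD IDs are hypothetical, and it assumes the replacement node has already been added to the cluster (e.g. via `ceph orch host add` on a cephadm deployment) and rebalanced onto.

```python
# Minimal sketch of the "add new node, drain the oldest" cycle using the Ceph CLI.
# OLD_OSDS is a hypothetical list of OSD IDs on the node being retired.
import subprocess
import time

OLD_OSDS = [0, 1, 2]

def ceph(*args):
    return subprocess.run(["ceph", *args], capture_output=True, text=True, check=True).stdout

# 1. Mark the old OSDs out so their placement groups migrate to the remaining nodes.
for osd in OLD_OSDS:
    ceph("osd", "out", str(osd))

# 2. Wait until Ceph reports the OSDs can be destroyed without reducing durability.
while True:
    result = subprocess.run(
        ["ceph", "osd", "safe-to-destroy"] + [f"osd.{o}" for o in OLD_OSDS],
        capture_output=True, text=True)
    if result.returncode == 0:
        break
    time.sleep(60)

# 3. Remove the drained OSDs; the old host can then be shut down and pulled.
for osd in OLD_OSDS:
    ceph("osd", "purge", str(osd), "--yes-i-really-mean-it")
```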
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
My Ceph results, even with NVMe, were abysmal (as mentioned in the link above), but that might have been my own inability to tune it properly, of course.

But it might be an option to run a build similar to mine (with TrueNAS Core), just on the new SCALE, which should then fulfill the scale-out requirements. I don't think they are there yet, but depending on your timeframe you could at least evaluate it.
 

tjk

Active Member
Mar 3, 2013
481
199
43
But it might be an option to run a build similar to mine (with TrueNAS Core), just on the new SCALE, which should then fulfill the scale-out requirements. I don't think they are there yet, but depending on your timeframe you could at least evaluate it.
Scale-out in SCALE is done with GlusterFS; good luck getting high performance out of that.
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
I did an in-depth comparison of the architectures of vSAN and Nutanix, and the latter is much better optimized for scale and performance.
TBH I don't have much experience with MooseFS, but StarWind vSAN scares me because a) it involves a lot of DIY and b) it's a Windows-based solution (on Linux it runs via Wine o_O).
Josh has a whole series on this (see the links to the other parts at the bottom of each post).

Some more great info on Nutanix CE:
 
  • Like
Reactions: Firebat and tjk

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,050
437
83
One more reminder: any SDS will be heavily limited by network performance, especially with NVMe drives. Plan on using the fastest networking you can afford; RDMA/RoCE should help noticeably reduce write latency.
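To put rough numbers on that: the wire time for a 4k write is tiny even at 10 GbE, so per-I/O stack overhead dominates, and that is exactly what RDMA/RoCE cuts. The overhead values in the sketch below are assumed ballpark figures, not measurements.

```python
# Back-of-the-envelope sketch of why the network path dominates NVMe write latency.
# The stack overheads below are rough assumed figures for illustration only.
WRITE_SIZE_BYTES = 4096                                        # one 4k block

LINKS_GBPS = [10, 25, 100]
STACK_OVERHEAD_US = {"kernel TCP": 30.0, "RDMA/RoCE": 5.0}     # assumed software+NIC overhead

for gbps in LINKS_GBPS:
    wire_us = WRITE_SIZE_BYTES * 8 / (gbps * 1e9) * 1e6        # serialization delay only
    for stack, overhead in STACK_OVERHEAD_US.items():
        total = wire_us + overhead
        print(f"{gbps:>3} GbE, {stack:9}: wire {wire_us:5.2f} µs + stack {overhead:4.1f} µs ≈ {total:5.1f} µs")
```

Compared against an NVMe drive's own latency of roughly 10 to 20 µs (again a ballpark figure), it's clear most of the latency budget is spent in the network stack rather than on the wire or the drive.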
 
  • Like
Reactions: Firebat and tjk

NateS

Active Member
Apr 19, 2021
159
91
28
Sacramento, CA, US
Another option you might want to check out is Vast Data. Their business model is maybe a bit more turnkey than what you're thinking of, but since your main objective is performance, I think they're well worth a look. IMO they have one of the best storage architectures out there right now.
 
  • Like
Reactions: JoeSW

Firebat

New Member
Jan 12, 2022
12
2
3
Thank you all a lot for your input so far; it really helps me move the project toward completion.

And I'm not against "turnkey solutions" if they REALLY offer what's promised in the marketing brochures. StorPool has crossed my path (= Google search) a few times, but I haven't found any real-world experiences so far.

@NateS
I'll take a closer look
 

FancyFilingCabinet

New Member
Jan 13, 2022
1
0
1
KumoScale might be a very good fit. The performance figures (PDF warning) are definitely there, and they've been doing some nice work to support open-source communities, even if the software itself isn't open source.
If you're using OpenStack for virtualisation, there's native integration support in the more recent releases.
 

korikaze

New Member
Jan 15, 2022
2
0
1
We use StarWind vSAN at work, albeit on Windows, and we're able to saturate the 25 Gbps links between the nodes pretty easily using PM883 RAID arrays. I've been pretty impressed with their support if you decide to go the paid route.