4x NVMe shared over NFS/iSCSI options


K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
What would be the best option for an NFS/iSCSI server to provide shared storage for ESXi and Hyper-V hosts using P3600 drives?
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
Probably you can use any Linux/Unix, and "best" always comes down to personal preference, but I would use ZFS at the very least.

My preferred ZFS platform is Solarish: either genuine Oracle Solaris, as this is the fastest and most feature-rich (fastest sequential resilvering/RAID rebuild, ZFS encryption), or one of the free forks.

Sun (now Oracle) also invented ZFS and NFS, and the integration of ZFS with the OS and its services is simply the best. Services like NFS and SMB are fully integrated into ZFS there. Their iSCSI stack COMSTAR is also one of the best enterprise solutions, see Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems

Most of the commercial Solaris advantages are available in the free forks OpenIndiana and OmniOS, which are developed independently from Oracle. Even the most minimal Solarish distributions like OmniOS or OI minimal include FC/iSCSI, NFS, SMB and network virtualisation via the Crossbow framework, so no 3rd-party application is needed (all developed by Sun/Oracle and open source now).

My second preference would be one based on BSD, another Unix (even Solaris and Apple OSX evolved from BSD). My last option for a ZFS server would be Linux, as it is far away from the "it just works" experience on Solarish, and even an OS update can break anything related to ZFS.
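To give an idea of how integrated it is, here is a minimal sketch on OmniOS. It assumes an existing pool named "tank"; the dataset names and zvol size are placeholders, and the LU GUID must be copied from sbdadm's output:

```sh
# NFS is a dataset property on Solarish -- no exports file to maintain
zfs create tank/nfs
zfs set sharenfs=on tank/nfs

# COMSTAR iSCSI: enable the framework, back a LU with a zvol, publish it
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
zfs create -V 500G tank/esxilun
sbdadm create-lu /dev/zvol/rdsk/tank/esxilun
stmfadm add-view <GUID-from-sbdadm-output>   # visible to all initiators
itadm create-target                          # target with an auto-generated IQN
```

ESXi then mounts tank/nfs as an NFS datastore or discovers the target through its software iSCSI adapter.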
 
Last edited:
  • Like
Reactions: K D

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
Thanks @gea. The chassis supports 6 drives at a maximum, and probably 8 if I can get creative. The plan is to eventually add more drives when I can afford them.

How would you recommend the drives be set up? Mirrors/stripe/RAIDZ? I am not too worried about data loss. The data will only be various VMs I'm tinkering with and can afford to lose, and I will have daily backups to a different server.

All hosts and storage are connected via 40Gb networking.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,058
113
If you're not concerned with data loss, then IMO don't give up the performance to ZFS; use Ubuntu and Linux/mdadm... faster, better driver support, updated sooner, etc...

Could do a single-drive setup, RAID 0, or RAID 10, depending on how much time you want to spend recovering ;) and on the I/O requirements of the VMs. Could also do two mirrored setups and try to keep writes on one and mixed workloads on the other, etc... I'd base this more on your workload. If it's all test/dev/etc., then whatever is easiest and gives you the space and performance you need.
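A rough sketch of the mdadm route on Ubuntu, assuming the four drives show up as /dev/nvme0n1../dev/nvme3n1 and you want RAID 10 exported over NFS (device names, mount point, and subnet are all placeholders):

```sh
# RAID 10 across the four NVMe drives
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

mkfs.xfs /dev/md0
mkdir -p /srv/vmstore
mount /dev/md0 /srv/vmstore

# NFS export for the ESXi/Hyper-V subnet
echo '/srv/vmstore 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```

For iSCSI instead of NFS, you would skip the filesystem and hand /dev/md0 (or an LVM volume on it) to targetcli as a block backstore.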
 
  • Like
Reactions: Sergio and K D

K D

Well-Known Member
Dec 24, 2016
1,439
320
83
30041
Thanks @T_Minus. Are there any pre-built appliances that I can use other than the ZFS-based ones? I would prefer a GUI.
 

gea

Well-Known Member
Dec 31, 2010
3,157
1,195
113
DE
For ESXi you mainly want IOPS.
In the past this meant RAID-10-alike storage with as many RAID-1 mirrors as possible.

With NVMe like the P3600 you have a lot more IOPS, so this may not be as relevant. With ZFS you additionally get a large RAM-based write cache and, given enough RAM, a huge read cache. Basically it's a question of how large the pool should be, how much money you want to invest, and how many PCIe lanes you can offer.
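To illustrate with four drives (device names are placeholders), the RAID-10-alike layout and how it grows later:

```sh
# two striped mirrors: half the raw capacity, best iops, easy to grow
zpool create tank mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0
zpool add tank mirror c1t5d0 c1t6d0    # later expansion, no rebuild needed

# raidz alternative: more capacity, but roughly the random iops of a single drive per vdev
# zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0
```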

I am in the same situation, thinking about massive NVMe storage solutions. I am evaluating the new Socket 3647 boards due to their huge number of PCIe lanes, U.2 NVMe to use quite a lot of them, and the new Intel Optane NVMe drives, as they halve the latency of even a P3700 with around 8x the IOPS. I have just ordered a new 900P NVMe that is in a similar price range to the P3600 but in a whole different performance region, especially as TRIM or garbage collection is no longer needed since they address the NVMe more like RAM.

For the hardware, I look at SuperMicro and run the appliance/OS of choice on it.
My next systems will be something like the Supermicro | Products | SuperServers | 2U | 5029P-E1CTR12L, and I am waiting for a similar case with U.2 bays instead of SAS. Even such a single board is capable of up to 9 NVMe drives beside 10G and the SAS controller. In the meantime I will use U.2 drives with 2.5" slot adapters to mount them inside.

Btw:
Care about crash resistance, redundancy/RAID/snapshots/data security.
Even with a daily backup you have work to rebuild, and a backup is like old bread: always from yesterday. And without a modern filesystem with checksums you cannot be sure that a backup is valid.
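As a sketch of what that buys you (pool, dataset, snapshot name, and backup host are placeholders):

```sh
# snapshot before tinkering; rollback is instant if a VM breaks
zfs snapshot tank/nfs@before-upgrade

# a scrub verifies every block against its checksum
zpool scrub tank
zpool status tank      # reports scrub progress and any checksum errors

# replication to the backup server, validated by the stream checksums
zfs send tank/nfs@before-upgrade | ssh backuphost zfs recv -F backup/nfs
```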
 
Last edited: