Okay, so after about a month of running FreeNAS to serve iSCSI from my raidz1 array to my VM host, I came to the conclusion that while the setup is decent, it is by no means sufficient down the road (and it certainly doesn't teach me any new skills). So I started looking into improving file server performance.
So here's what I had in mind, to be carried out in steps:
a) Benchmark the existing ESXi datastore performance via iSCSI (it's served out as a large extent on a zpool)
b) Enable NFS sharing on the same zpool and add that share to ESXi as another datastore (a rough sketch of steps a and b follows this list)
c) Copy the VM data from the iSCSI extent to the NFS mount, benchmark the NFS-mounted datastore, and see whether NFS yields any improvement over iSCSI
d) Migrate the existing 10GbE infrastructure to 40GbE by swapping the Solarflare cards for Mellanox ConnectX-2s, and see whether the extra bandwidth helps either iSCSI or NFS (I strongly doubt it)
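Here's roughly what I have in mind for (a) and (b); the pool/dataset names and addresses are placeholders and the exact flags may need adjusting, so treat it as a sketch rather than a recipe:

    # (a) run from inside a Linux test VM sitting on the iSCSI-backed datastore
    fio --name=seqread --ioengine=libaio --direct=1 --rw=read --bs=1M --size=8g --runtime=60 --time_based
    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --size=4g --iodepth=32 --numjobs=4 --runtime=60 --time_based

    # (b) share the dataset over NFS on the FreeNAS box (or do it in the UI), then add it on the ESXi host
    zfs set sharenfs=on tank/vmstore
    esxcli storage nfs add -H 10.0.0.10 -s /mnt/tank/vmstore -v nfs-vmstore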
While this might sound like a viable exercise on its own, I would also like to look into a few more things, such as:
- Enabling iSER (iSCSI over RDMA) or NFS over RDMA (both of which are supported by the Mellanox cards)
The problem I have is limited capacity to test RDMA transfers properly: 4 RAIDed HDDs can only push about 200 MBytes/sec, and 2 SATA-III SSDs can only push about 1 GB/sec. My guess is that I'll have to create a small RAM drive and see how fast I can get it to ingest/serve data while I'm playing with it - and of course, that assumes the iSCSI/NFS target can support RDMA at all. Surprisingly, given how popular FreeNAS seems to be, it doesn't support iSER - so I might have to switch to Napp-it on OpenIndiana Hipster or OmniOS, since both support COMSTAR (which can do RDMA), while FreeBSD does not. Has anyone messed with RDMA in particular while playing with Napp-it?
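On the illumos side, a quick way to sanity-check the RAM-drive idea before putting any target in front of it would be something like this (name and size are made up, and a ramdiskadm disk doesn't survive a reboot):

    ramdiskadm -a rdma-test 8g                                      # creates /dev/ramdisk/rdma-test plus the raw /dev/rramdisk node
    dd if=/dev/zero of=/dev/rramdisk/rdma-test bs=1M count=4096     # rough local write ceiling before any network is involved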
I was thinking of doing the following to test RDMA:
- Set up a pair of t730 thin clients with Mellanox cards, both with M.2 SATA SSDs
- Set up one machine with Napp-it on OmniOS, configure a RAM disk, then configure RDMA via COMSTAR (rough commands after this list)
- Set up the other machine with SCST so the initiator will work with the Mellanox RDMA drivers to facilitate faster transfers.
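If I get that far, the COMSTAR side of the first box should look roughly like this (the GUID is whatever sbdadm prints back, and I haven't confirmed exactly how iSER gets enabled on top of it, so this is just a sketch of the plain iSCSI path):

    sbdadm create-lu /dev/ramdisk/rdma-test               # carve a logical unit out of the RAM disk; prints a GUID
    stmfadm add-view <GUID>                               # expose the LU to every initiator (fine on a throwaway test box)
    svcadm enable -r svc:/network/iscsi/target:default    # bring up the COMSTAR iSCSI target service
    itadm create-target                                   # plain iSCSI target; the hope is iSER rides on top once the RDMA stack is in place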