NFSv3 vs NFSv4 vs iSCSI for ESXi datastores


Rand__

Well-Known Member
Mar 6, 2014
Can you point to the exact steps needed or list them?
Is that iSER via IB or Ethernet that's working now?
Is there a switch involved? If so, which one, and any special config?
 

zxv

The more I C, the less I see.
Sep 10, 2017
Rand__ said:
Can you point to the exact steps needed or list them?
Is that iSER via IB or Ethernet that's working now?
Is there a switch involved? If so, which one, and any special config?
I'm not through testing, but yes, I'd like to write this up.
I'd still like to get it working on CX3 as well.

I'm using an Arista switch, and followed the steps shown in the Mellanox link above to enable PFC and ECN.
That link also shows steps to configure PFC and ECN on ESXi.
The specific tool used to set ECN is different for CX3 vs CX4/5 cards and varies with driver versions, and that's the crucial step that's poorly documented.
I'll share more info as this progresses.
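For anyone trying to reproduce the ESXi side, the Mellanox guides set PFC through driver module parameters along these lines; treat it as a rough sketch only, since the module name and parameter names depend on your card generation and driver version (and the ECN tooling, as noted above, varies even more):

# Hedged sketch: enable PFC on priority 3 in the Mellanox native ESXi driver
# (CX4/CX5 use nmlx5_core; CX3 uses nmlx4_core, and parameter names can differ by release)
esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08"
# Verify the values, then reboot the host for them to take effect
esxcli system module parameters list -m nmlx5_core | grep pfc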
 

zxv

The more I C, the less I see.
Sep 10, 2017
You could try CentOS instead of Ubuntu for ZFS and then retest your iSER target (but I would recommend SCST over LIO)

Btw, could you share some benchmark results on an iSER ramdisk target?
There are a couple of issues with the benchmarking currently.
I'm focusing for the moment on the CX3Pro.

I've tried a number of block devices, and strangely enough, ZFS is by far the easiest to use. For whatever reason, ESXi will mount it, whereas ESXi has issues with the same drives passed through as a LIO or SCST LUN, or with mdadm raid0 stripes of them as the LUN. ESXi wouldn't mount the LIO or SCST target because they reported the 4K physical block size of the ioDrive2 cards, and there were issues even when I formatted the ioDrive with a 512B block size.
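(If you want to see what a backing device is advertising before LIO/SCST passes it through, the kernel exposes it in sysfs; the device name below is just a placeholder:)

# Hedged example: logical vs. physical block size reported by a hypothetical device sdX
cat /sys/block/sdX/queue/logical_block_size
cat /sys/block/sdX/queue/physical_block_size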

I'd prefer to use zvols eventually regardless, for the sake of resilience, but ZFS is the bottleneck.
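For reference, creating a zvol to export is a one-liner; the pool name, size, and volblocksize below are placeholders to experiment with, not a recommendation:

# Hedged example: a sparse 100G zvol with a 16K volblocksize (names/sizes are placeholders)
zfs create -s -V 100G -o volblocksize=16K tank/iser-lun0
# LIO/SCST then point at the resulting device under /dev/zvol/tank/iser-lun0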

I'd love to switch to FreeBSD, but I'm also using the ioDrive2 for L2ARC, and I don't think there are any FreeBSD drivers for the ioDrive. It looks like there are ioDrive drivers for FreeBSD 8/9, but iSER is only in FreeBSD 11/12, so the two don't overlap.

Heck, I'd love to use illumos. Last time I tried OmniOS it didn't recognize the Mellanox CX3 card.

Anyway, yeah, a ramdisk target is what's needed for benchmark comparisons and testing. That said, it's better to know early if there are any showstoppers for using ZFS.

So, yeah, I'll get some ramdisk benchmarks, and I'd like to get a few volunteers to help validate.
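If anyone wants to stand up that kind of ramdisk target ahead of time, here's a rough targetcli sketch of the setup; the IQN, IP, and size are placeholders, it assumes the RDMA stack for your NIC is already up, and ACL/auth setup for the ESXi initiator is left out:

# Hedged sketch: LIO ramdisk backstore exported over iSER (IQN/IP/size are placeholders)
targetcli /backstores/ramdisk create name=rd0 size=8G
targetcli /iscsi create iqn.2019-01.org.example:ramdisk
targetcli /iscsi/iqn.2019-01.org.example:ramdisk/tpg1/luns create /backstores/ramdisk/rd0
# Replace the default wildcard portal with one bound to the RDMA interface's IP
targetcli /iscsi/iqn.2019-01.org.example:ramdisk/tpg1/portals delete 0.0.0.0 3260
targetcli /iscsi/iqn.2019-01.org.example:ramdisk/tpg1/portals create 192.168.0.10
targetcli /iscsi/iqn.2019-01.org.example:ramdisk/tpg1/portals/192.168.0.10:3260 enable_iser boolean=true
targetcli saveconfig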

For those willing to test: if you have ConnectX-3 cards, an ESXi 6.7U1 iSER client, an Ubuntu 18.04 iSER target (or another distro with LIO/SCST), and an Ethernet switch that supports priority flow control, please PM me to discuss helping test! I only need a couple of people to help validate.
 

efschu2

Member
Feb 14, 2019
ESXi 6.7 should be compatible with 4K block size; maybe your SCST/LIO is reporting your volblocksize/recordsize. I haven't used SCST with ESXi for a while, but you should be able to set
scstadmin -open_dev [~yours~] -handler vdisk_blockio -attributes filename=/dev/zvol/[~yours~],blocksize=512
or
scstadmin -open_dev [~yours~] -handler vdisk_blockio -attributes filename=/dev/zvol/[~yours~],blocksize=4096

Maybe have a look here:
Generic SCSI Target Subsystem For Linux / [Scst-devel] iser target, ubuntu 17.10
And btw this one:
VMware Knowledge Base
 

zxv

The more I C, the less I see.
Sep 10, 2017
efschu2 said:
ESXi 6.7 should be compatible with 4K block size; maybe your SCST/LIO is reporting your volblocksize/recordsize. I haven't used SCST with ESXi for a while, but you should be able to set
scstadmin -open_dev [~yours~] -handler vdisk_blockio -attributes filename=/dev/zvol/[~yours~],blocksize=512
or
scstadmin -open_dev [~yours~] -handler vdisk_blockio -attributes filename=/dev/zvol/[~yours~],blocksize=4096

Maybe have a look here:
Generic SCSI Target Subsystem For Linux / [Scst-devel] iser target, ubuntu 17.10
And btw this one:
VMware Knowledge Base
Yeah, thanks @efschu2, these options do set the logical block size.

When I set a 4K logical block size, ESXi will attach the iSCSI LUN, but the datastore fails to mount. Watching the logs while mounting the drive helps to catch this issue.
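(The relevant messages land in the standard ESXi kernel log, so something like this over SSH to the host is enough to catch the failure; the grep pattern is just a suggestion:)

# Hedged example: watch the vmkernel log while rescanning/mounting the LUN
tail -f /var/log/vmkernel.log | grep -i -E 'iser|iscsi|vmfs'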

This KB says ESXi is compatible with a 4K logical block size only for "Direct attached HDD drives":
VMware Knowledge Base


I've been able to use a 512-byte logical block size with 32K physical for zvols from LIO and SCST targets; ESXi logs spurious kernel messages about the physical block size, but it works just fine.

I'm making progress getting the CX3 to work, but there are some issues not yet resolved.
 

zxv

The more I C, the less I see.
Sep 10, 2017
I looked closer, and here's more info on the block size issue.

Zvols and ramdisks report a physical block size of 512, and LIO passes this through as hw_block_size.
But in addition they set a hw_max_sectors, which ESXi will complain about, but it works.

targetcli /backstores/block/zvol get attribute
======================
block_size=512
hw_block_size=512 [ro]
hw_max_sectors=32768 [ro]

This may be the reason that mdadm arrays and SSDs did not work as backing stores: LIO passed through their hw_block_size=4096.
Note that hw_block_size is read-only in LIO, and so is the underlying /sys/block/<device>/queue/physical_block_size.
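Given the [ro] flags above, the writable knob appears to be the logical block_size on the backstore. A hedged example, reusing the hypothetical backstore name from the listing; worth verifying against the logs, since I'm not certain LIO honors it for every block backstore:

# Hedged example: force a 512-byte logical block size on the block backstore "zvol"
targetcli /backstores/block/zvol set attribute block_size=512
targetcli /backstores/block/zvol get attribute block_size hw_block_size hw_max_sectors
targetcli saveconfig
# On the ESXi side, the logical/physical sizes the LUN presents can be checked with:
esxcli storage core device capacity list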