Hi there! I've been working on deploying 40G networking in our house for a dedicated storage network, and because all of our PCs are Windows-based, I opted for a Windows Server VM that serves shares over SMB Direct (RDMA) through InfiniBand ConnectX-3s. Here's the layout of how I have things set up:

Unfortunately, my performance over the network has been quite poor. I ran the built-in InfiniBand performance testing tools like ib_write_bw, which showed ~36 Gb/s with 7 µs latency from my PC to the Windows VM, so my suspicion has turned to SMB being the culprit. I'm seeing RDMA read/write activity in PerfMon, so I know SMB Direct is enabled and working. Are these numbers normal, or does SMB Direct just fall apart at anything over 10G? If so, any advice on what I should replace it with?
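For context, here's the rough back-of-the-envelope math I'm working from when comparing the raw RDMA numbers against what the share should be able to deliver (just a sketch; the 32 GiB test size is a hypothetical example, the 36 Gb/s figure is from ib_write_bw above):

```python
# Back-of-the-envelope: what ~36 Gb/s of raw RDMA bandwidth should
# translate to for a large sequential transfer over the SMB share.

IB_WRITE_BW_GBPS = 36      # measured with ib_write_bw (gigabits per second)
TEST_FILE_GIB = 32         # hypothetical test file size in GiB

line_rate_bytes_per_s = IB_WRITE_BW_GBPS * 1e9 / 8    # bytes per second
line_rate_gib_per_s = line_rate_bytes_per_s / 2**30   # GiB per second
ideal_copy_seconds = TEST_FILE_GIB * 2**30 / line_rate_bytes_per_s

print(f"Raw RDMA line rate: ~{line_rate_gib_per_s:.1f} GiB/s")
print(f"Ideal time for a {TEST_FILE_GIB} GiB copy: ~{ideal_copy_seconds:.1f} s")
# If an SMB copy of the same file takes several times longer than this,
# the bottleneck is above the RDMA layer (SMB, storage, or the VM),
# not the InfiniBand fabric itself.
```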
CrystalDiskMark scores for the storage pools, for reference (screenshots attached):
- SSD array
- HDD array
- Single passthrough NVMe drive
- 2x passthrough NVMe drives in a Windows software RAID 0
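For comparison against those local numbers, this is roughly the kind of timed copy test I've been running from my PC to the share (the server name and paths below are placeholders for my setup; a single-threaded copy won't necessarily saturate 40G, but it gives a quick read on how far the share falls short of the raw RDMA figures):

```python
# Timed copy from the local NVMe to the SMB share, to compare real-world
# throughput against the local CrystalDiskMark numbers above.
# SRC/DST are placeholders for my setup.
import os
import time
import shutil

SRC = r"C:\bench\testfile.bin"              # large file on the client's local NVMe
DST = r"\\storage-vm\share\testfile.bin"    # hypothetical UNC path to the SMB share

size_bytes = os.path.getsize(SRC)
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

print(f"Copied {size_bytes / 2**30:.2f} GiB in {elapsed:.1f} s "
      f"({size_bytes / elapsed / 2**30:.2f} GiB/s)")
```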
Any help is greatly appreciated!
