Recent content by dante4

  1. 7050QX-32S DCB (Lossless ethernet)

    The end of the thread: using CX3-Pro for NVMe-over-RDMA or iSER (ESXi) is possible only with a Direct Attach connection. If you use a switch, you will hit a PSOD like this. This happens with WRITE_SAME, i.e. when you clone a VM or copy a file between VMs over SMB (i.e. \\<IP>\C$). Based on...
  2. 7050QX-32S DCB (Lossless ethernet)

    You misunderstood me. I meant that 25GbE would require a new switch, and 25G switches are costly AF. And you can't downgrade 40G to 25G, since it's a completely different technology.
  3. 7050QX-32S DCB (Lossless ethernet)

    To be fair, what other purpose could there be behind using a cheap piece of hardware to get cool results? While you are correct regarding 50G, since those use the QSFP form factor and can downgrade their speed to 40G (MCX416A-GCAT), 25G will not help me at all. But that's still off-topic and goes to...
  4. 7050QX-32S DCB (Lossless ethernet)

    Indeed, you are correct. I missed that CX4 didn't have PCIe 4.0. Erm. And? Which point of mine does that counter? One PCIe 3.0 lane is 985 MB/s. 985 × 16 is 15,760 MB/s, which is 126,080 Mb/s.
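    The lane arithmetic in this post can be checked with a quick sketch (a minimal illustration; the ~985 MB/s-per-lane figure is the post's own approximation of usable PCIe 3.0 throughput after encoding overhead):

    ```python
    # PCIe 3.0 effective bandwidth, using the post's ~985 MB/s-per-lane figure.
    PCIE3_LANE_MB_S = 985  # approximate usable MB/s per PCIe 3.0 lane

    def pcie3_bandwidth(lanes: int) -> tuple[int, int]:
        """Return (MB/s, Mbit/s) for a PCIe 3.0 link of the given width."""
        mb_s = PCIE3_LANE_MB_S * lanes
        return mb_s, mb_s * 8  # 8 bits per byte

    print(pcie3_bandwidth(16))  # x16 slot -> (15760, 126080)
    ```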
  5. 7050QX-32S DCB (Lossless ethernet)

    So, results as of today. Corrections to the config would be helpful, since I'm no expert in DCBx. Switch (DCS-7050QX-32S, 4.28.10.1M) config: ESXi config: Ubuntu config: Results:
  6. 7050QX-32S DCB (Lossless ethernet)

    Not really a correct comparison. It would be correct if I were asking something like "how to reach 80G from a single CX3 Pro", which is technically impossible. The correct comparison is "an average car won't speed up to 200 km/h", and indeed, without tweaks it will not reach such a speed.
  7. 7050QX-32S DCB (Lossless ethernet)

    Like which? Outside of an RDMA vSAN lab (which is unsurprising, since they are not in the HCL) I've had no problems with them, so I'm genuinely interested in what kind of problems you've hit with the Pro version.
  8. 7050QX-32S DCB (Lossless ethernet)

    Erm, you clearly messed up your math, or you are talking about CX3. CX3 cards are PCIe 3.0 x8, so the max bandwidth for the NIC is 7,880 MB/s, i.e. 63,040 Mbit/s, or around 1.5 ports under full load (off-topic: that's why right now I'm also buying one more CX3 to get the storage server connected...
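    The port-saturation claim follows from the same numbers; a minimal sketch (assuming a 40,000 Mbit/s line rate per 40 GbE port):

    ```python
    # How many 40 GbE ports a PCIe 3.0 x8 NIC (e.g. ConnectX-3) can saturate.
    lane_mb_s = 985              # usable MB/s per PCIe 3.0 lane (approx.)
    nic_mb_s = lane_mb_s * 8     # x8 link -> 7880 MB/s
    nic_mbit_s = nic_mb_s * 8    # -> 63040 Mbit/s
    ports = nic_mbit_s / 40_000  # 40 GbE line rate per port
    print(round(ports, 2))       # -> 1.58 ports under full load
    ```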
  9. 7050QX-32S DCB (Lossless ethernet)

    If I had a platinum ESXi partner, that would indeed be a solution. But in that case there would be no need to use ConnectX-3 cards in the first place, right?
  10. 7050QX-32S DCB (Lossless ethernet)

    Welp, 7.0 plays nice with ConnectX-3 Pro, in terms of both drivers and firmware tools, since, well, they are in the HCL. Ubuntu not so much, since you can't use the non-inbox drivers on a newer kernel.
  11. 7050QX-32S DCB (Lossless ethernet)

    It's not really DCBx that costs this much. 100G and 40G ConnectX-4 cards are the same cost, since, well, they ARE the same card.
  12. 7050QX-32S DCB (Lossless ethernet)

    Welp, degrading the speed of the system is always a great idea. I thought it was basic logic that if you propose buying something, it should be at least no worse than the current solution.
  13. 7050QX-32S DCB (Lossless ethernet)

    Most likely because it was the ConnectX-3 non-Pro version, since non-Pro supports only RoCE v1, which is obsolete. Good for you; I wasn't able to find a 50 USD QSFP 50G. No, they are not. What I have read from fohdeesha is just "buy a newer NIC". Not a single piece of his advice was helpful...
  14. 7050QX-32S DCB (Lossless ethernet)

    MCX455A is single-port; yeah, great idea to create a bottleneck out of thin air. :) And CX416A for $90? Where? The minimum I see is $190. Yeah, $570 ($190 × 3) sounds like an awesome idea versus something that costs $45 ($15 × 3). Without any respect, but I wasn't really asking how to make ConnectX-4 work...
  15. 7050QX-32S DCB (Lossless ethernet)

    At least it seems like PAUSE frames are working correctly in case of overflow.