Recent content by chune

  1. omnios+nappit 10gb performance: iperf fast, zfs-send slow

     Buffer did not appear to help. The weird thing is that if I vMotion a VM from one AIO box to another AIO box, I get the full 10Gb speed. Any other suggestions?
  2. omnios+nappit 10gb performance: iperf fast, zfs-send slow

     I have 64GB of RAM allocated to the OmniOS VM; I read that more RAM is preferred over a SLOG, but maybe that is no longer the case. I do have sync disabled for my pools, but I still get the slow speed on ZFS send. My target pool is empty and the sending pool is 50% full. I understand that my...
  3. omnios+nappit 10gb performance: iperf fast, zfs-send slow

     begin tests .. Benchmark filesystem: /pool01-hdd/_Pool_Benchmark Read: filebench, Write: filebench_sequential, date: 06.18.2019 begin test 4 ..singlestreamwrite.f .. begin test 4sync ..singlestreamwrite.f .. set sync=disabled begin test 7 randomread.f .. begin test 8 randomrw.f .. begin test 9...
  4. omnios+nappit 10gb performance: iperf fast, zfs-send slow

     I finally made the jump to 10Gb on my napp-it AIO boxes and thought I had followed all of the recommendations, but I am still getting gigabit-speed ZFS sends using nc. The weird thing is that iperf gives me 8.9 Gbits/sec of throughput, so I'm not sure whether any of the tunables will help me here. The pool...
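     When a raw `zfs send | nc` stream stalls below line rate despite good iperf numbers, a common suggestion in threads like this is to put a large memory buffer between the send stream and the network so bursty disk reads don't stall the TCP pipe. A minimal sketch, assuming `mbuffer` is installed on both hosts; the pool, dataset, snapshot, and host names are placeholders, not taken from the posts above:

     ```shell
     # Receiver: listen on TCP port 9090 with a 1 GB in-memory buffer,
     # feeding the stream into zfs receive (dataset name is hypothetical).
     mbuffer -I 9090 -s 128k -m 1G | zfs receive -F pool02-hdd/backup

     # Sender: stream the snapshot through a matching buffer to the
     # receiver (snapshot and host names are hypothetical).
     zfs send pool01-hdd/data@snap1 | mbuffer -s 128k -m 1G -O receiver-host:9090
     ```

     mbuffer also prints live throughput on stderr, which helps show whether the disks or the network are the bottleneck.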
  5. Esxi 6.7 / OmniOS 151028

     You may want to run your CPU by the VMware HCL. I'm pretty sure they officially dropped LGA1366 support after 6.5; unofficially it may still work fine.
  6. Chinese backdoors on Supermicro

     I have an X9 board I got off eBay that has corrupt ME firmware that I could not get to flash for the life of me. Can yafukcs dump ME firmware?
  7. maximum PCI resources supermicro X9DRG-QF

     Four outputs per card, so a total of 20. The funny thing is that it POSTs fine with 5x RX570 8GB, and each of those has 5 outputs per card for a total of 25.
  8. maximum PCI resources supermicro X9DRG-QF

     My X9DRG-QF will boot fine with 4x Radeon Pro Duo (a dual-GPU card with 16GB per GPU), but as soon as I add the fifth it won't POST. I am not using any hypervisor, just booting straight into Lubuntu. Above 4G decoding is enabled and I have tried all of the different MMIO base settings. Is this a...
  9. Oracle Solaris 11.4

     Windows 10 will now remove SMBv1 if you are not on a domain. I believe gea outlined in another thread that most Solarish things still rely on SMBv1.
  10. VM with passthrough "freezes" entire ESXi box when shutdown/rebooting guest

     Sounds like you are just starting down the GPU-passthrough rabbit hole. There are quite a few other threads on this, but to sum it up: vDGA = aka GPU passthrough; you can load up a server with as many physical GPUs as it will POST with (with Above 4G decoding disabled) and pass each physical...
  11. VM with passthrough "freezes" entire ESXi box when shutdown/rebooting guest

     Same VM. Separate VMs have always worked fine.
  12. VM with passthrough "freezes" entire ESXi box when shutdown/rebooting guest

     I have 5x RX570 8GB passed through to a Xubuntu VM running on ESXi 6.0 (U3, I think?). It's totally stable once it boots, but I have to reset the VM/host a few times for everything to boot properly initially. Not sure if it's a passthrough issue or a Xubuntu issue. The key with the 8GB cards was...
  13. Intel 2U NVMe Backplane Kit

     According to this thread, the PLX-chip one works in non-Intel boards: Intel NVMe Backplane + AOC/Cables/Trays
  14. Intel 2U NVMe Backplane Kit

     Yes, that is the plan. Was there a thread about this that I missed? Thanks!
  15. Intel 2U NVMe Backplane Kit

     Unless you can confirm the x16 card has all four ports working in a Supermicro X9 motherboard, I will probably stick with the x8 one that includes the PLX switch. Let me know if you will sell just that kit, and the price. Thanks!