I'm running 4 Intel DC S3500s as the cache tier right now (1 S3500 to 1 8TB SAS HDD, 4 of each). Not the best, but it's what was in the recycle bin at work.
What modifications to the VM configs did you make?
I've tried pciPassthru.use64bitMMIO="TRUE" and manually defining pciHole.start = "1200" and pciHole.end = "4040" on both VMs.
Also the stuff in the attached screenshot.
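Consolidated, the .vmx entries I added look like this (the pciHole values are just what I tried on this board; they may need adjusting elsewhere):

pciPassthru.use64bitMMIO = "TRUE"
pciHole.start = "1200"
pciHole.end = "4040"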
Anyone had success with passing through two video cards to different VMs? I have two RX 470s that I'm passing through to Win10 guests on ESXi 6.5. Host is a Precision T5610 with the latest BIOS.
GPU1 -> VM1 (Seems to work, but BSOD when updating drivers)
GPU2 -> VM2 (Crashes the second anything...
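One workaround that gets floated for AMD cards on ESXi is forcing the d3d0 reset method in /etc/vmware/passthru.map (a sketch I haven't verified on the T5610; 67df is the Polaris 10 device ID, so check yours with lspci, and back up the file first):

# vendor-id device-id resetMethod fptShareable
1002 67df d3d0 default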
Also noticing that in Server Core, the HDMI drivers won't install and the GPUs can't be used on the host. I think the problem with the DDA assignment is related: something is missing that's preventing the passthrough. DirectX?
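For anyone comparing notes, my DDA prep follows the usual steps, roughly this (a sketch; $vmName and the friendly-name match are placeholders for my setup):

# Placeholders: substitute your own VM name and GPU match.
$vmName = "Win10-GPU1"
Set-VM -Name $vmName -AutomaticStopAction TurnOff
Set-VM -Name $vmName -GuestControlledCacheTypes $true
Set-VM -Name $vmName -LowMemoryMappedIoSpace 3Gb
Set-VM -Name $vmName -HighMemoryMappedIoSpace 33280Mb
# Dismount the GPU from the host, then hand it to the VM.
$gpu = Get-PnpDevice -FriendlyName "*RX 470*" | Select-Object -First 1
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $gpu.InstanceId).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName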
What was working in v1607 is no longer working in Server 2016 v1709. I'm getting "element not found" when starting VMs with the RX 470s assigned. I'll link my motherboard and the support thread I started on TechNet.
ASRock Rack EP2C602-4L/D16
Win Server 2016 v1709 - Element not found when starting...
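When I hit "element not found" I've been resetting the assignment state before retrying, something like this (same placeholder names as the sketch above; it rules out a stuck dismount but hasn't fixed the underlying error for me):

Get-VMAssignableDevice -VMName $vmName
Remove-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
Mount-VMHostAssignableDevice -LocationPath $locationPath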
Anyone mind floating some recommendations to replace my dying 3ware 9650SE? I only have an x4 PCIe slot free in my ASRock Rack EP2C602-4L/D16, and I'm torn between buying a new card or using the on-board Intel SCU ports to build a fake-RAID. I have 3 6TB drives in RAID 5 and I'm running...
Anyone had luck improving the speed of network shares or an iSCSI target with the server version of PrimoCache? I have a 3ware 9650SE-4LPML RAID 5 array with 3 6TB HGST drives that I'm trying to speed up so it can better utilize my 10Gb network.
Fairly new to non-copper Ethernet. How would I post the fiber port output? Haven't tested a direct connection yet; probably won't get a chance until this evening.
The 2016 server seems pissed off about a few things, Hyper-V being one of them.
Failed to allocate virtual function for NIC 7FC2D3D1-C075-41F3-9DAC-289F09CC112B--A17DF296-0015-426E-8D29-306B51B2078F (Friendly Name: Network Adapter), status = The device is not in a valid state to...
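If it's the SR-IOV virtual function that's failing, setting the vNIC's IOV weight to 0 should force it back onto the software switch path, at least to get the VM booting (a sketch with placeholder names):

Set-VMNetworkAdapter -VMName "SomeVM" -Name "Network Adapter" -IovWeight 0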
Increased window size (this is also with jumbo frames enabled)
Send
iperf3.exe -c 192.168.1.3 -w 1M
Connecting to host 192.168.1.3, port 5201
[ 4] local 192.168.1.26 port 58938 connected to 192.168.1.3 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 482...
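Worth also trying parallel streams to see whether it's a single-stream limit (-P is iperf3's parallel-connections flag, -t just runs the test longer):

iperf3.exe -c 192.168.1.3 -w 1M -P 4 -t 30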
What are you connecting the Mellanox card to the switch with? DAC (how long)? Fiber (what SFP+)?
10Gtek SFP+ modules (for the Mellanox and UBNT ends) and 10m OM3 runs to each machine.
Amazon.com: 10Gtek for Mellanox MFM1T02A-SR, 10Gb/s SFP+ Transceiver module, MMF, 850nm, 300-meter
I was wondering if anyone had experience tuning the Mellanox cards for 10Gb performance. I seem to be having an issue where sending is very slow but receiving is considerably better. I am using the 5.35.12978.0 drivers on the Win10 client and the 2016 server. Firmware has been updated to 2.9.1200...
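In case it helps anyone else, these are the knobs I've been poking at from PowerShell (a sketch; "Ethernet 2" is a placeholder for the ConnectX adapter name, and the available jumbo values depend on the driver):

# Review the adapter's advanced properties first.
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"
# Jumbo frames (the exact value offered varies by driver, e.g. 9014 or 9600).
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Jumbo Packet" -DisplayValue "9014"
# Make sure receive-side scaling is enabled so one core isn't the bottleneck.
Enable-NetAdapterRss -Name "Ethernet 2"
# Confirm TCP window autotuning isn't disabled.
Get-NetTCPSetting | Select-Object SettingName, AutoTuningLevelLocal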