> I have a W2600CB2 and did the same thing (flashed the BIOS to the latest). Wondering if flashing it using the AMI tools would be suitable and, if so, is there a volunteer?

Gave this a shot with the AMI tools (AFU under the EFI shell and Winx64). Neither will do it. I dug through a bunch of forums and found that it pretty much won't happen. The Intel BIOS is a modified AMI EFI BIOS, but they turn on a write-protect module early on. You can only get past this by disabling the security on the motherboard.
[Guide] Flashing modified AMI Aptio UEFI using AFU
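For reference, this is roughly what an AFU flash attempt looks like. File names here are placeholders (the exact AFU build and BIOS image come from whatever package you downloaded), so treat it as a sketch of standard AMI AFU usage rather than the exact commands used above:

    # From the EFI shell, on the USB stick holding the AFU EFI build and the BIOS image:
    fs0:
    AfuEfix64.efi BIOS.ROM /P /B /N /K

    rem From Windows, the equivalent with the Win x64 build:
    AFUWINx64.EXE BIOS.ROM /P /B /N /K

/P programs the main BIOS region, /B the boot block, /N the NVRAM and /K the non-critical blocks. On these Intel boards the flash comes up write-protected early in boot, which is why none of this helps until that protection is disabled.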
> Also found this while looking around. I'm sure this will save someone some anguish. S2600 boards have issues with ESXi, and here are the magic BIOS settings to fix it: "S2600-CP2 vmware ESXi 5.5 hangs on ACPI initialization -- Boot loop" (see page 2 for the needed settings).

I don't think it's an issue in ESXi 6.0 anymore. I just installed 6.0U2 and was toggling MMIO above 4G back and forth like crazy in a desperate attempt to get more than one GPU recognized in ESXi. The GPU didn't work, but the setting didn't prevent ESXi from starting up.
Good to know. There might be issues with that 2nd blue open-ended socket, because it uses the PCIe lanes from the second processor, unlike all the rest of the sockets. I had issues getting a GPU to work in that socket as well.
> Moved the NVMe card to the 16x slot (since it is practically unusable anyway). Retested the card. Now I'm getting about what I expect for IOPS. IOPS are now 295k/103k for 4k random R/W. I think it had something to do with the PCIe MUX on the slot I was using.

Is it possible you weren't seeing full R/W performance because you weren't using the 16x slot? My server has nothing in that slot and it's what I was going to use anyway - avoiding the firmware downgrade would be pretty brilliant!
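For anyone wanting to reproduce numbers like the 295k/103k above: those are the sort of figures a 4k random read/write test gives, and a minimal fio job along these lines would do it. The device path is just an example, this isn't necessarily the exact test run above, and random writes to a raw NVMe device destroy whatever is on it, so point it only at a scratch drive:

    ; 4k random read IOPS - scratch/test NVMe device only
    [global]
    ioengine=libaio
    direct=1
    bs=4k
    iodepth=32
    numjobs=4
    runtime=60
    time_based
    group_reporting
    filename=/dev/nvme0n1

    [randread]
    rw=randread

    ; run a second pass with rw=randwrite for the write-side IOPS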
I'm not sure if you can get it to boot without booting in EFI mode; at least I have not been able to. I think there is something wrong when the BIOS tries to load the legacy (BIOS-mode) code from the NVMe. You could boot from a USB stick in EFI mode with a bootloader like GRUB or the MS bootloader on it and then point the boot option at your regular SSD. In any case, the drive would have to have some sort of EFI boot partition, which would be problematic to add to an existing drive.
@J Hart
Thank you so much for taking the time to so thoroughly detail your experience and troubleshooting. It is people like you who make the Internet worthwhile!
I am in the process of upgrading a server (an S2600CP4 running Hyper-V Server 2012 R2) with a Samsung 950 Pro in an Angelbird PX1, and of course it wouldn't even POST. I'm running the latest firmware, of course, and my experience was exactly the same as yours. Your solution is a lifesaver!
In my case, I don't want to run the OS from the NVMe drive; I want to continue to run it from the existing SATA SSD - the NVMe drive is for VMs. Will changing the boot option to EFI optimised boot cause an issue with the existing Hyper-V Server OS? I really don't want to have to go through the effort of reinstalling Hyper-V Server as GPT in UEFI mode and reconfiguring from scratch if I don't have to... :-/
Thanks again!
Trevor
I tried putting my drive in that slot, and the card came up in PCIe 2.0 mode with any of the 02.XX.XXXX firmware. I'm not sure how far back you have to go to circumvent that; I ended up with the 01.06 one, which did work. How much do you lose with the fully updated one? About 30% of the sequential R/W and about 30% of the IOPS. The low IOPS (50k) only happened with that very bottom slot (not the x16 one - the one on the other end).
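If you want to check which link speed the card actually trained at (rather than inferring it from benchmarks), and the drive is visible from a Linux boot, something like this shows it. The 04:00.0 address is only an example; use whatever address the first command reports:

    # find the NVMe controller's PCI address, then dump its link capability and status
    lspci | grep -i "non-volatile"
    sudo lspci -s 04:00.0 -vv | grep -E "LnkCap|LnkSta"

LnkCap is what the card can do and LnkSta is what it actually negotiated: 8GT/s is PCIe 3.0, 5GT/s is the 2.0 fallback described above.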
Sorry, I must not have been clear. I don't want the NVMe drive involved in booting at all. I will put the board into EFI mode to get the drive to work, but I don't want it talking to that drive for booting; I want it to run from the legacy MBR SSD. I'm pretty sure that won't work - that I'll have to upgrade/convert to GPT or reinstall altogether - but I thought I'd map things out before I tackled it (because I will only have a small window to get it done - the server can't be offline for too long). Fortunately I have redundant SSDs in it, so if I mess something up it can be rolled back just by changing the boot order (or pulling a SATA cable). But, you know, the less time it takes to get it working the better, and a reinstall is a good four hours of reconfiguration :-/
Gotcha. That is what I was trying to get at as well. The MBR disk isn't going to boot. You could make another SSD with a new partition layout on it and then copy everything over, or you could make a USB boot stick which just loads the EFI bootloader and then loads the OS from the existing MBR partition.
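For what it's worth, a sketch of what the "another SSD" route looks like on the Windows side. Drive letters and the disk number are placeholders, and this assumes you clone the OS volume onto the new GPT disk with whatever imaging tool you prefer before running bcdboot:

    rem In diskpart: lay out the new SSD as GPT with an EFI system partition
    select disk 1
    clean
    convert gpt
    create partition efi size=100
    format quick fs=fat32 label=System
    assign letter=S
    create partition msr size=16
    create partition primary
    format quick fs=ntfs label=Windows
    assign letter=W

    rem Then, from an admin prompt after copying the OS volume to W:, write the UEFI boot files
    bcdboot W:\Windows /s S: /f UEFI

The EFI system partition plus bcdboot step is what makes the copy bootable in EFI mode, which is exactly the part the existing MBR disk is missing.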
In your step-by-step you point to the 02.05.0004 BIOS, but not the 01.06 one. Is that what you mean by step 2, "Make a second drive with the firmware you want"?
There is probably some way around this by building a custom BIOS. I'd actually be happier if Intel didn't make this stupid "fix".
Not that scary when you have another SSD clone of the original drive. Takes less than five minutes to clone these drives 'cause Server Core is small and SSD to SSD = 500+MB/s :-D
Maybe, if your existing partitions on the boot drive have sufficient space, you can use this procedure to switch the drive to UEFI boot. That should be fast, but man, switching the partition type and hoping it doesn't munch the tables completely looks scary to me.
Haha, I actually had a tab open on that very page already...
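For anyone reading this later: on Windows 10 1703 and newer (and the server builds based on it, so newer than anything in this thread), the in-place switch being discussed is roughly what mbr2gpt does, and its validate step addresses exactly the "munch the tables" worry by refusing to touch a disk whose layout won't convert cleanly. The procedure linked above may be a different, manual route; this is just the later equivalent:

    rem Dry run: checks the disk layout can be converted and leaves it untouched
    mbr2gpt /validate /disk:0 /allowFullOS

    rem If validation passes, convert, then switch the firmware from legacy to UEFI boot
    mbr2gpt /convert /disk:0 /allowFullOS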