NVMe on Intel S2600CP


J Hart

Active Member
Apr 23, 2015
I know this probably doesn't exclusively belong here, but Intel sells an NVMe hot-swap cage for the P4000 chassis if you really want to go all in: FUP8X25S3NVDK. It includes the PCIe adapter and the hard-to-find, expensive cables.
 

J Hart

Active Member
Apr 23, 2015
I have a W2600CB2 and did the same thing (flashed the BIOS to the latest version). Wondering if flashing it using the AMI tools would work, and if so, is there a volunteer? :D

[Guide] Flashing modified AMI Aptio UEFI using AFU
Gave this a shot with the AMI tools (AFU under the EFI shell and WinX64). Neither will do it. I dug through a bunch of forums and found that it pretty much won't happen. The Intel BIOS is a modified AMI EFI BIOS, but they turn on a write-protect module early on. You can only get past this by disabling the security on the motherboard.

I had another browse of the Intel forums and found this: Can't run BIOS recovery on s2600cp2. That procedure got things working. Here is a step-by-step on how to downgrade.

  1. Make a recovery stick from the 02.05.0004 BIOS. Inside that package there is a directory called BIOS recovery. Copy those files to the root directory of a freshly wiped 4GB flash drive formatted with FAT32. Why 4GB? I have no idea, but the 8GB sticks I normally use don't work.
  2. Make a second drive with the firmware you want. Just put all the EFI files on it like normal.
  3. Shutdown the machine and open it up.
  4. Take out all the PCIe cards.
  5. Clear the BIOS via jumper. With the computer plugged in, set jumper AY (J1D2). Leave it in place for 5 seconds. Pull the power cords. Wait until everything powers down. Reset the jumper to the normal position. Plug in the machine. Wait for the BMC to power on. Now pull the power again. See the manual.
  6. Set BIOS recovery mode by moving jumper BA (J1D3).
  7. Replace all the covers. Put the recovery USB drive into one of the rear USB ports.
  8. Now here is where things may go wrong. Plug in the machine and hit the power button after the blue LED stops flashing. If everything goes correctly, you should hear two beeps, see all the POST LEDs go amber, and then watch it flip through a bunch of codes and bring up the normal BIOS splash screen. If that happens, hit F6 to select the UEFI shell. Other things might happen here. If you get 2 beeps, all amber, and then a single green, you just experienced a catastrophic error; I have no idea what causes this. If you get 2 beeps and then 3 beeps, the system doesn't think you have a recovery USB stick installed, or it doesn't like something about the files on it.
  9. If everything worked, you should be able to select the UEFI console. Hit Escape to stop it from flashing 02.05.
  10. Plug in the second USB stick. Run "map -r".
  11. Go to the second USB stick (fs1:, probably).
  12. Run "iflash32.efi -u RXX.XX.XXXXrec.cap". It must be the rec capsule (see the sketch after this list).
  13. Everything should proceed at this point. If you get something about SMI failing, the security is not actually disabled.
  14. If you get a successful flash, take out the USB sticks. Shut down via the power button and start back up.
  15. LET THE MACHINE BOOT FULLY. DON'T TOUCH THE KEYBOARD. Ignore the version you see at this point. It will probably show an old version number. This is normal.
  16. You will probably be back at the EFI shell or maybe it will be trying to PXE boot. Doesn't matter. Power off again.
  17. Insert your target BIOS USB stick and power back up. Hit F6 to enter the UEFI shell.
  18. You can interrupt the startup.nsh script or just let it run; it doesn't matter, but you want to flash the rest of the firmware (BMC, ME, FRUSDR) at this point. Let it run. Tell FRUSDR to just update the SDR. Reboot when finished.
  19. Now you should see the correct version. Go ahead into the BIOS. Load defaults. Set everything you need to set (VT-x, VT-d, EFI optimized boot). Save and boot. Shutdown.
  20. Put the PCIe cards back in, including the NVMe adapter.
  21. Go back into the BIOS one final time. You need to add the Windows bootloader back as a boot option. Go to the boot options section of the BIOS and add a new boot option. See this for how to do it.
  22. Now you should be able to boot up just fine. Have a beer, do a dance. You have defeated the Intel BIOS monster.
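For reference, here is roughly what steps 10-12 look like at the shell (just a sketch; fs1: is an assumption, so check the output of "map -r" for where your stick actually mapped):

    Shell> map -r
    Shell> fs1:
    fs1:\> ls
    fs1:\> iflash32.efi -u RXX.XX.XXXXrec.cap

The ls is just to confirm the BIOS files are there; substitute the actual rec capsule name from your firmware package for RXX.XX.XXXXrec.cap.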
And the results? PCIe 3.0 on the drive. Maximum bandwidth is now 2572/1554 MB/s R/W, like it should be. IOPS are still low; must be something with Windows. I'll have a go at benching it in Linux to see if it runs as it should.
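For the Linux run, something like this fio job is what I have in mind (a sketch; it assumes fio is installed and the drive shows up as /dev/nvme0n1, so check lsblk for yours; randread only reads the raw device, so it is safe on a drive with data):

    # 4k random read, QD32 across 4 jobs, 60 second run
    sudo fio --name=randread --filename=/dev/nvme0n1 --direct=1 --rw=randread \
        --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

Swap --rw=randread for --rw=randwrite only on a drive you are willing to wipe.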

Edit: I had missed the step to flash the older BMC, ME and FRUSDR, which might cause problems if you don't downgrade them.
 

RyC

Active Member
Oct 17, 2013
Also found this while looking around; I'm sure it will save someone some anguish. S2600 boards have issues with ESXi, and there are magic BIOS settings to fix it: S2600-CP2 vmware ESXi 5.5 hangs on ACPI initialization -- Boot loop. See page 2 for the needed settings.
I don't think it's an issue in ESXi 6.0 anymore. I just installed 6.0U2 and was toggling MMIO above 4G back and forth like crazy in a desperate attempt to get more than one GPU recognized in ESXi. The GPU didn't work, but the setting didn't prevent ESXi from starting up.
 

J Hart

Active Member
Apr 23, 2015
I don't think it's an issue in ESXi 6.0 anymore. I just installed 6.0U2 and was toggling MMIO above 4G back and forth like crazy in a desperate attempt to get more than one GPU recognized in ESXi. The GPU didn't work, but the setting didn't prevent ESXi from starting up.
Good to know. There might be issues with that 2nd blue open-ended slot, because it uses PCIe lanes from the second processor, unlike all the other slots. I had issues getting a GPU to work in that slot as well.
 

J Hart

Active Member
Apr 23, 2015
Moved the NVMe card to the 16x slot (since it is practically unusable anyway) and retested it. Now I'm getting about what I expect for IOPS: 295k/103k for 4k random R/W. I think it had something to do with the PCIe MUX on the slot I was using.
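If anyone wants to check what a given slot actually negotiated, lspci shows it under Linux (a sketch; the 81:00.0 address is made up, take yours from the first command):

    # find the NVMe controller's bus address
    lspci | grep -i "non-volatile"
    # LnkCap is what the card supports, LnkSta is what this slot negotiated
    sudo lspci -vv -s 81:00.0 | grep -E "LnkCap:|LnkSta:"

A PCIe 3.0 x4 card that trained correctly shows 8GT/s x4 in LnkSta; 5GT/s there means it dropped to PCIe 2.0.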
 

vrod

Active Member
Jan 18, 2015
Thank you for the great guide. I will try this once I am home next week. Did you try other PCIe slots with the NVMe? It would be a shame if it only works to its full potential in the x16 slot.
 

J Hart

Active Member
Apr 23, 2015
I hadn't tried all of the slots systematically; the x16 was just the easiest to access for me.
 

vrod

Active Member
Jan 18, 2015
@J Hart - Thank you so much for the suggestion. I set the boot options to EFI Optimized and the 950 Pro now POSTs with the system! Awesome! Currently debugging some Emulex 10Gb cards, but I will test the bandwidth afterwards. Thanks again for this; it finally works! :)
 

marv

Active Member
Apr 2, 2015
Thanks, it works :)
After step 14, I had to put the recovery jumper back in the deasserted position and also invoke the BIOS reset jumper. Otherwise it wouldn't POST: no beeps, black screen.

One more thing. I have the SM951 AHCI version, and setting EFI optimized boot in the BIOS results in a black screen and no POST; a BIOS reset is the only way out. I managed to work around it by using a bootable USB flash drive with Clover EFI. From there I was able to choose the Windows Boot Manager. I suppose Clover EFI would also make this drive bootable in legacy systems such as X58 boards.
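In case it helps anyone else, building that Clover stick goes roughly like this under Linux (a sketch with assumptions: /dev/sdb1 is the stick's partition and the Clover ISO is loop-mounted at /mnt/clover; adjust both to your system):

    # FAT32 so the UEFI firmware can read the stick
    sudo mkfs.vfat -F 32 -n CLOVER /dev/sdb1
    sudo mount /dev/sdb1 /mnt/usb
    # EFI/BOOT/BOOTX64.efi inside the copied tree is what the board boots
    sudo cp -r /mnt/clover/EFI /mnt/usb/
    sync && sudo umount /mnt/usb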
 

TrevorX

New Member
Apr 25, 2016
@J Hart
Thank you so much for taking the time to so thoroughly detail your experience and troubleshooting. It is people like you who make the Internet worthwhile!

I am in the process of upgrading a server (an S2600CP4 running Hyper-V Server 2012 R2) with a Samsung 950 Pro in an Angelbird PX1, and of course it wouldn't even POST. Running the latest firmware, of course; my experience was exactly the same as yours. Your solution is a lifesaver!

In my case, I don't want to run the OS from the NVMe drive; I want to continue running it from the existing SATA SSD - the NVMe drive is for VMs. Will changing the boot option to EFI optimised boot cause an issue with the existing Hyper-V Server OS? I really don't want to have to go through the effort of reinstalling Hyper-V Server as GPT in UEFI mode and reconfiguring from scratch if I don't have to... :-/

Thanks again!

Trevor
 

TrevorX

New Member
Apr 25, 2016
Moved the NVMe card to the 16x slot (since it is practically unusable anyway) and retested it. Now I'm getting about what I expect for IOPS: 295k/103k for 4k random R/W. I think it had something to do with the PCIe MUX on the slot I was using.
Is it possible you weren't seeing full R/W performance because you weren't using the 16x slot? My server has nothing in that slot, and it is what I was going to use anyway - avoiding the firmware downgrade would be pretty brilliant!
 

J Hart

Active Member
Apr 23, 2015
@J Hart
Thank you so much for taking the time to so thoroughly detail your experience and troubleshooting. It is people like you who make the Internet worthwhile!

I am in the process of upgrading a server (an S2600CP4 running Hyper-V Server 2012 R2) with a Samsung 950 Pro in an Angelbird PX1, and of course it wouldn't even POST. Running the latest firmware, of course; my experience was exactly the same as yours. Your solution is a lifesaver!

In my case, I don't want to run the OS from the NVMe drive; I want to continue running it from the existing SATA SSD - the NVMe drive is for VMs. Will changing the boot option to EFI optimised boot cause an issue with the existing Hyper-V Server OS? I really don't want to have to go through the effort of reinstalling Hyper-V Server as GPT in UEFI mode and reconfiguring from scratch if I don't have to... :-/

Thanks again!

Trevor
I'm not sure you can get it to boot without booting in EFI mode; at least I have not been able to. I think something goes wrong when the BIOS tries to load the legacy (BIOS mode) code from the NVMe. You could boot from a USB stick in EFI mode with a bootloader like GRUB or the MS bootloader and then point the boot option at your regular SSD. In any case, the drive would have to have some sort of EFI boot partition, which would be problematic to add to an existing drive.
 

J Hart

Active Member
Apr 23, 2015
Is it possible you weren't seeing full R/W performance because you weren't using the 16x slot? My server has nothing in that slot, and it is what I was going to use anyway - avoiding the firmware downgrade would be pretty brilliant!
I tried putting my drive in that slot, and the card came up in PCIe 2.0 mode with any of the 02.XX.XXXX firmware versions. I'm not sure how far back you have to go to circumvent that; I ended up on 01.06, which did work. How much do you lose with the fully updated firmware? About 30% of the sequential R/W and about 30% of the IOPS. The really low IOPS (50k) only happened with that very bottom slot (not the x16 one; the one on the other end).

There is probably some way around this by building a custom BIOS. I'd actually be happier if Intel hadn't made this stupid "fix".
 

TrevorX

New Member
Apr 25, 2016
I'm not sure you can get it to boot without booting in EFI mode; at least I have not been able to. I think something goes wrong when the BIOS tries to load the legacy (BIOS mode) code from the NVMe. You could boot from a USB stick in EFI mode with a bootloader like GRUB or the MS bootloader and then point the boot option at your regular SSD. In any case, the drive would have to have some sort of EFI boot partition, which would be problematic to add to an existing drive.
Sorry, I must not have been clear. I don't want the NVMe drive involved in booting at all. I will put the board into EFI mode to get the drive to work, but I don't want it talking to that drive for booting; I want it to run from the legacy MBR SSD. I'm pretty sure that won't work - that I'll have to upgrade/convert to GPT or reinstall altogether - but I thought I'd map things out before I tackled it (because I will only have a small window to get it done; the server can't be offline for too long). Fortunately I have redundant SSDs in it, so if I mess something up it can be rolled back just by changing the boot order (or pulling a SATA cable). But, you know, the less time it takes to get it working the better, and a reinstall is a good four hours of reconfiguration :-/
 

J Hart

Active Member
Apr 23, 2015
Sorry, I must not have been clear. I don't want the NVMe drive involved in booting at all. I will put the board into EFI mode to get the drive to work, but I don't want it talking to that drive for booting; I want it to run from the legacy MBR SSD. I'm pretty sure that won't work - that I'll have to upgrade/convert to GPT or reinstall altogether - but I thought I'd map things out before I tackled it (because I will only have a small window to get it done; the server can't be offline for too long). Fortunately I have redundant SSDs in it, so if I mess something up it can be rolled back just by changing the boot order (or pulling a SATA cable). But, you know, the less time it takes to get it working the better, and a reinstall is a good four hours of reconfiguration :-/
Gotcha. That is what I was trying to get at as well. The MBR disk isn't going to boot. You could make another SSD with a new partition layout and copy everything over, or you could make a USB boot stick that just loads the EFI bootloader and then loads the OS from the existing MBR partition.

Maybe, if the existing partitions on your boot drive leave sufficient space, you can use this procedure to switch the drive to UEFI boot. That should be fast, but man, switching the partition table type and hoping it doesn't munch the tables completely looks scary to me.
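For the curious, that sort of in-place conversion usually looks something like this (a guess at the linked procedure, not a tested recipe; gptgen is the open-source tool people use for it, disk 0 and the S: letter are assumptions, and you want a verified backup first):

    :: rewrite the MBR partition table as GPT in place
    gptgen.exe -w \\.\physicaldrive0
    :: then boot Windows setup media, carve an EFI System Partition out of the
    :: free space with diskpart (select disk 0 / create partition efi size=100 /
    :: format quick fs=fat32 / assign letter=S), and put the boot files on it:
    bcdboot C:\Windows /s S: /f UEFI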
 

TrevorX

New Member
Apr 25, 2016
I tried putting my drive in that slot, and the card came up in PCIe 2.0 mode with any of the 02.XX.XXXX firmware versions. I'm not sure how far back you have to go to circumvent that; I ended up on 01.06, which did work. How much do you lose with the fully updated firmware? About 30% of the sequential R/W and about 30% of the IOPS. The really low IOPS (50k) only happened with that very bottom slot (not the x16 one; the one on the other end).

There is probably some way around this by building a custom BIOS. I'd actually be happier if Intel hadn't made this stupid "fix".
In your step-by-step you point to the 02.05.0004 BIOS, but not the 01.06 one. Is that what you mean in step 2, "Make a second drive with the firmware you want"?

Yes, completely agree with your sentiment about Intel. What benefit is there to the customer in hardware-enforced whitelisting, over and above an up-to-date vendor qualified hardware list? Most manufacturers can't keep the VQL updated beyond the initial release, so how are they going to provide a product that doesn't severely limit their customers? The answer, self-evidently, is that they can't; whitelisting is a terrible idea outside mission-critical infrastructure. Intel should sack whoever decided they should start implementing this. If they're concerned about support staff wasting time on 'rogue' non-compliant hardware, they should make their devices as good as they can and then point customers to the VQL (like many other vendors already do). Ideally a manufacturer would do whatever it can to help customers, but nothing is free in this world, so I appreciate they have costs to consider. Still, a VQL with the manufacturer leaving you on your own to get 'unintended' devices working is better than them actively trying to stop you from using any piece of hardware they haven't explicitly enabled... (HP, I'm looking at you, too.)
 

TrevorX

New Member
Apr 25, 2016
Gotcha. That is what I was trying to get at as well. The MBR disk isn't going to boot. You could make another SSD with a new partition on it and then copy everything, or you can make a USB boot stick which will just load the EFI bootloader and then load the OS from the existing MBR partition.

Maybe if your existing partitions on the boot drive have sufficient space, you can use this procedure to switch the drive to UEFI boot. That should be fast, but man that looks scary to me to switch the partition type and hope it doesn't munch the tables completely.
Not that scary when you have another SSD clone of the original drive. Takes less than five minutes to clone these drives 'cause Server Core is small and SSD to SSD = 500+ MB/s :-D

Looks like a conversion is going to be my most likely plan of attack, so 'bout time I got used to the idea!
 

zhoulander

Active Member
Feb 1, 2016
Has anybody documented issues running BIOS 1.06.0001 with an updated BMC/ME/SDR? I know for consumer boards the modding community likes updating the OROMs and ME to newer versions when possible.