NVMe on Intel S2600CP


J Hart

Active Member
Apr 23, 2015
I have been trying to get this to work without success, I'm guessing it is probably some BIOS setting I am missing. First, I am not trying to make the NVMe device boot the system. I'd just like to be able to use the drive as a storage device.

The setup: an S2600CP2J board with 2x E5-2670 v1 and 128GB of RAM. Nothing else attached. It boots fine to the EFI shell or from a USB device.

I insert a Samsung 950 Pro NVMe M.2 drive in a PCIe adapter (http://www.amazon.com/Lycom-DT-120-...&redirect=true&ref_=oh_aui_detailpage_o00_s00) and the machine freezes during POST. The code it freezes at should be a Memory User Error (PMIE, which detected a User Recovery Condition). The weird thing is that if I interrupt bootup before that step, I can still get into setup. From there it boots fine into the EFI shell and I am able to see the NVMe device as a block device (which I was super surprised by).

Normally I'd figure this is something wacky with the device or the adapter, but when I put it into another system it works flawlessly (an E5-2670 based SunFire 4170-x3). Even when I put it into an older system that shouldn't support it at all, it still worked as a block device, albeit at reduced speeds since it was in a PCIe 2.0 slot (an E5540 based SunFire 4170 system).

Anyway, I thought I'd ask to see if anyone else has had a similar issue on this board or anything else. I'm going to toss a post over to the Intel forums as well. Sounds like a BIOS problem to me.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I haven't used that board or that system, but I have played around with getting NVMe to work in older SuperMicro 2011 v1/v2 systems. In some it works just fine, and in others it's not detected at all no matter what I do. I'm sure it's something to do with the BIOS, but even same-generation boards from the same manufacturer don't always work with NVMe...yet :)
 
  • Like
Reactions: J Hart

ultradense

Member
Feb 2, 2015
Hmm, I was about to test the same board a few hours ago, but with 4x Intel 750 400GB, and didn't have time to install an OS. No problems during POST though.
 

J Hart

Active Member
Apr 23, 2015
Originally I was going to go that way as well. I'll keep trying. I suspect it is running out of space while constructing the memory map. I've seen other forum posts about this same issue, and I'll see if I can work it out.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Did you happen to clear the CMOS / reset the BIOS, and are you on the newest BIOS?

Normally if everything is up to date NVMe works fairly easily.
 

J Hart

Active Member
Apr 23, 2015
Alright, I got this working (and shockingly better than expected). I'll write down a bit of a how-to, so that if someone else runs into these issues they can figure it out.

The first thing I did when I got the board (an S2600CP2J from NATEX) was install the firmware updates from Intel. My main motivation was to update the SDR so that all the fans weren't running at full speed. Maybe the older BIOS wouldn't have had these issues, but I don't know.

I bought a 500GB Samsung 950 Pro M.2 card. Of course this doesn't fit in the motherboard at all, so I needed an adapter; I ended up using one of the M.2-to-PCIe boards floating around on Amazon. I assembled it all and popped it into the motherboard. This causes the system to hang during POST. It hangs exactly at 10100001, which is the entry into IDE device detection; it's so close that sometimes the system doesn't even fully change the LEDs, so you get 11110001 with the 2nd and 4th LEDs at half intensity.

At this point I banged my head against the wall for two days. Things I tried: pulled all the RAM and replaced it with known-good RAM (no change); tested the different PCIe slots to verify they were working (everything fine there); reset the BIOS settings, both by loading defaults and by clearing the CMOS via the jumper, in case some residual setting was screwing things up (no change). Suspiciously, if you entered the setup or boot selection screen, you could get it to boot into the EFI shell and the device could be seen. Depending on how fast you hit this, you see more or fewer boot devices. We'll come back to this.

Anyway, I got to thinking maybe the drive itself or the adapter card was bad. I took the card to work, where I have a few extra test servers, and the drive worked fine: I was able to boot my test servers, see the drive, and read and write to it. OK, so it's not the drive.

At home, I put the drive into my desktop and ran into a problem. My desktop also did not like the drive with my normal BIOS settings. Eventually I figured out that the computer did not like the drive when a certain combination of Legacy Boot and CSM PCIe OpROM settings was used.

This did the trick. I booted the S2600CP2J without the problem drive, changed several BIOS settings tangentially related to CSM/Legacy mode, and the machine started up with the drive installed. I installed Windows 10 to see if the OS would see the drive, and it was visible (hooray!). Surprisingly, the Windows bootloader was detected by the UEFI BIOS and the machine booted from the NVMe drive (which I really thought was going to be an impossibility).

I went back and changed the BIOS settings back one at a time to find the needed setting, and determined it was Boot options > EFI optimized boot > Enabled. I also reset back to defaults (which broke the system again), changed just that one setting, and the NVMe drive booted again.

About the weird thing where it was allowing me to boot if I was fast: when the CSM is enabled, it loads the modules for a variety of legacy boot devices one at a time (USB sticks in MBR mode, NIC boot ROMs, IDE drives, etc.). Most BIOSes load these before letting you interrupt to go to setup/boot selection. This board is screwy in that, if you are fast, you can enter setup before it loads all of the modules.

TL;DR: BIOS -> Boot options -> EFI optimized boot -> Enabled allows your system to POST and boot from the NVMe device.
 

J Hart

Active Member
Apr 23, 2015
Now for the bad... Everything was going well until I started to look at performance. Everything was coming in at about half of what it should have been for this device. Eventually I noticed in the Samsung software that the device was running at a PCIe link rate of 5.0 GT/s (Gen2) instead of 8.0 GT/s (Gen3).

What? How is it running at PCIe 2.0 instead of 3.0?

Well, after some digging, it turns out Intel made a terrible decision with these motherboards in the past. Here is the technical advisory for their change. It says that unless your device is on their validated list, the BIOS will run the slot at PCIe 2.0 to avoid logging errors. Essentially, these boards are all PCIe 2.0 unless you run a BIOS of 01.06.0001 or earlier.
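If the drive ends up in a Linux box, you can sanity-check the negotiated link speed straight out of sysfs instead of trusting vendor software. A minimal sketch; the `current_link_speed` and `max_link_speed` attributes are standard, but treat the exact speed strings as an assumption since their formatting varies slightly across kernel versions:

```python
import glob
import os

# PCIe generation by raw signaling rate in GT/s.
GEN_BY_GTS = {2.5: 1, 5.0: 2, 8.0: 3, 16.0: 4, 32.0: 5}

def pcie_gen(speed: str) -> int:
    """Parse a sysfs link-speed string like '8.0 GT/s PCIe' or '5 GT/s'."""
    return GEN_BY_GTS[float(speed.split()[0])]

def report_nvme_links() -> None:
    # Each NVMe controller's PCI device exposes its link state in sysfs.
    for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*/device")):
        dev = os.path.realpath(ctrl)
        with open(os.path.join(dev, "current_link_speed")) as f:
            cur = f.read().strip()
        with open(os.path.join(dev, "max_link_speed")) as f:
            cap = f.read().strip()
        print(f"{dev}: running Gen{pcie_gen(cur)}, "
              f"device supports Gen{pcie_gen(cap)}")

if __name__ == "__main__":
    report_nvme_links()
```

On a board affected by this advisory you'd expect `current_link_speed` to report 5.0 GT/s even though `max_link_speed` reports 8.0 GT/s.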

Oh, and you cannot downgrade below 02.05.0004 once you have upgraded, because of a security issue. I've tried to downgrade via the jumper recovery method, but it fails.
 

mixtecinc

Member
Feb 18, 2013
J Hart,

Have you tested this board with an add-in video card? If so, which one did you try?

Thanks

Justin
 

J Hart

Active Member
Apr 23, 2015
Two of the slots are open-ended x8 PCIe slots (they are blue in the pictures), so you can fit an x16 PCIe card in them. I have put an Nvidia GTX 970 in one of the slots and it works OK. There are two minor problems. First, you need to set an option in the BIOS to be able to see the BIOS through this card; it is called VGA legacy something something. Second, the card will operate at PCIe 2.0 with 8 lanes instead of 16, which means it will have 1/4 the bandwidth compared with its regular slot. Is it a problem? Not really. Most applications don't max out the PCIe bus. I tried a bunch of different benchmarks and applications and didn't have any problems.
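The 1/4 figure checks out from the raw link math. A back-of-the-envelope sketch using the published per-generation rates and line-coding overheads (not measurements from this board):

```python
def lane_mb_s(gt_s: float, payload_bits: int, total_bits: int) -> float:
    """Usable bandwidth per lane in MB/s after line-coding overhead."""
    return gt_s * 1000 * payload_bits / total_bits / 8

# PCIe 2.0 signals at 5 GT/s with 8b/10b coding;
# PCIe 3.0 signals at 8 GT/s with 128b/130b coding.
gen2_x8 = 8 * lane_mb_s(5.0, 8, 10)        # 4000 MB/s
gen3_x16 = 16 * lane_mb_s(8.0, 128, 130)   # ~15754 MB/s

print(f"Gen2 x8 is {gen2_x8 / gen3_x16:.0%} of Gen3 x16")
```

So a GTX 970 in one of these slots gets roughly a quarter of the link bandwidth it would have in a Gen3 x16 slot, before any protocol overhead.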

I tried to max out the system last night using Folding@home. I ran 32 CPU threads and max GPU threads (1600ish). The system was pulling 500W, but the fans were only running at about half speed. That beefy 120mm CPU fan is really overkill; I measured the flow at 110 CFM with the processors maxed out. In fact, the shroud and heatsinks present enough resistance that at full speed the pressure is sufficient to blow the shroud off if the case is open. The PCIe area cooling is very good too; the GTX 970 never got above 60C.
 
  • Like
Reactions: Ramos

J Hart

Active Member
Apr 23, 2015
Oh, I'll also add that there is an x16 PCIe slot (x8 electrical), but I couldn't use it because it bumps right up against the memory. It's designed for a riser. It does function without a riser, but your card can't really be any longer than the slot.
 

mixtecinc

Member
Feb 18, 2013
Oh, I'll also add that there is an x16 PCIe slot (x8 electrical), but I couldn't use it because it bumps right up against the memory. It's designed for a riser. It does function without a riser, but your card can't really be any longer than the slot.
Thanks J. Hart,

I am curious: if you put two of the NVMe cards in this computer and both show up in the EFI BIOS, would the drives show up as available drives in the Intel built-in software RAID?

Can this motherboard use the Intel Xeon E5-2600 v2 chips? This is certainly tempting...

Cheers

Justin
 

J Hart

Active Member
Apr 23, 2015
I don't know what would happen with two drives. I'm guessing both would show up fine in EFI. The NVMe card isn't showing up as a device for the software RAID.

e5-2600 v2 is a go.
 

ultradense

Member
Feb 2, 2015
Well, after some digging, it turns out Intel made a terrible decision with these motherboards in the past. Here is the technical advisory for their change. It says that unless your device is on their validated list, the BIOS will run the slot at PCIe 2.0 to avoid logging errors. Essentially, these boards are all PCIe 2.0 unless you run a BIOS of 01.06.0001 or earlier.
Thank you very much for this warning!

Did you manage to fix this? I'm still running a very old BIOS (with the slow screen buildup) and want to upgrade so I can boot from NVMe (Intel 750), but I don't want to lose PCIe 3.0 speeds!
Any tips?
 

J Hart

Active Member
Apr 23, 2015
Thank you very much for this warning!

Did you manage to fix this? I'm still running a very old BIOS (with the slow screen buildup) and want to upgrade so I can boot from NVMe (Intel 750), but I don't want to lose PCIe 3.0 speeds!
Any tips?

The truth is, I can't figure out how to downgrade. Using the UEFI shell, you can switch between any signed version, which really only includes the last few releases; getting to a really old one takes some work. According to Intel, the trick should be to load the old firmware via the BIOS recovery jumper. The best I have gotten so far is to get the machine to boot into the recovery EFI shell and hang there. Things I have found you need: a FAT32 USB stick that the PEIM likes. I haven't figured out what the heck happens, but 3/4 of my USB sticks cause a crash at this stage, with 2 beeps and a CATERR in the BMC log. Maybe it is sticks over 4GB, or maybe it is the partitioning. If it all goes well, the system boots to the UEFI shell and runs the startup command, but at that stage the system just locks up for me. I have no idea why.

What do you lose by being on the latest version? 1569/1435 MB/s (read/write) instead of 2500/1500. Worse, you get about 70k IOPS for both read and write, rather than 300k/110k.
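Those sequential numbers are consistent with the link math, assuming the drive sits on an x4 link as M.2 NVMe cards typically do. A rough sketch:

```python
# Theoretical ceiling of a PCIe 2.0 x4 link: 5 GT/s per lane with
# 8b/10b coding, before protocol overhead (TLP headers, flow control).
lanes = 4
gen2_lane_mb_s = 5.0 * 1000 * 8 / 10 / 8   # 500 MB/s per lane
ceiling_mb_s = lanes * gen2_lane_mb_s       # 2000 MB/s

measured = 1569
print(f"Gen2 x4 ceiling: {ceiling_mb_s:.0f} MB/s, "
      f"measured {measured} MB/s ({measured / ceiling_mb_s:.0%} of ceiling)")
```

Roughly 78% of the raw link rate is plausible once packet overhead is counted, so the drive's 2500 MB/s spec number simply isn't reachable on a Gen2 link.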

At the moment, I am at the point where I don't really care so much. The performance is still very good imo for a workstation. I'll keep trying to downgrade the BIOS if I have a good idea, but at the moment, I don't really have any good ideas.
 

ultradense

Member
Feb 2, 2015
What do you lose by being on the latest version? 1569/1435 MB/s (read/write) instead of 2500/1500. Worse, you get about 70k IOPS for both read and write, rather than 300k/110k.
Thanks!
Losing that many IOPS is reason enough for me not to touch this BIOS version. I'm using this for a 4-node hyperconverged Storage Spaces Direct cluster, so every IOP counts! If it were for a workstation, I would probably settle too.

It's still very strange that a major Intel motherboard loses interoperability with an Intel 750 NVMe card though.

Sent from my ONE A2003 using Tapatalk
 

vrod

Active Member
Jan 18, 2015
OP is a god; I've been fighting this exact same issue for weeks (950 Pro with an S2600CP4). I wanted to use it as L2ARC for ZFS, but the damn thing just wouldn't POST.

Intel wasn't too helpful; they always like to refer to their hardware compatibility list, which has ONLY Intel components on it (lol?).

I'm worried about the PCIe slowdown though. One of my boards was updated to the newest BIOS, and the second one I haven't looked at yet. Not sure how Intel can be allowed to intentionally downgrade the performance of the board.

Anyway, I've now invested in a few 750s, and even in RAID 0 (Windows software RAID) I cannot achieve more than 160/150k IOPS in Anvil. However, I can get over 4.3 GB/s. I suppose the PCIe slots wouldn't limit IOPS in any way?
 

J Hart

Active Member
Apr 23, 2015
OP is a god; I've been fighting this exact same issue for weeks (950 Pro with an S2600CP4). I wanted to use it as L2ARC for ZFS, but the damn thing just wouldn't POST.

Intel wasn't too helpful; they always like to refer to their hardware compatibility list, which has ONLY Intel components on it (lol?).

I'm worried about the PCIe slowdown though. One of my boards was updated to the newest BIOS, and the second one I haven't looked at yet. Not sure how Intel can be allowed to intentionally downgrade the performance of the board.

Anyway, I've now invested in a few 750s, and even in RAID 0 (Windows software RAID) I cannot achieve more than 160/150k IOPS in Anvil. However, I can get over 4.3 GB/s. I suppose the PCIe slots wouldn't limit IOPS in any way?
I've been trying to think about what might be limiting the IOPS, and I'm not really sure at the moment. One possibility is that I screwed up something configuration-wise in Windows 10 that is killing performance. The other is something more fundamental to PCIe 2 vs 3: either the transfer latency or the overhead differences between the two. I was able to get the card to run at full speed in Linux on another machine.

Intel was likewise not too helpful for me. I think the PCIe 2 vs 3 issues all boil down to a flaw in the v1 processors, as every S2600 board Intel makes has this limitation. I think it is especially telling that v2 processors in the same board default to PCIe 3. Intel walks a fine line in their technical specs by saying the board is capable of PCIe 3 (which is true if you use one of the ~20 validated devices), while in practice the boards are really PCIe 2.0, since 99% of the cards you might want to use are not validated.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I just got a workstation on the way from member @britinpdx which has a version of these motherboards in it. He said he already updated to the latest BIOS etc., so I'll see what it can do with NVMe when it arrives.

I also have an Intel-specific NVMe kit for 1U that I'll have to test to see if PCIe bifurcation works. I'll be surprised, but it's worth a shot.