NVMe on Intel S2600CP


zackiv31

Member
May 16, 2016
Just to be clear, the PCIe 2.0 limitation is *only* for v1 processors, correct? The advisory says that v2 CPUs will have a BIOS option to specify the PCIe link speed. I have some v2 processors in the mail, so I was hoping to upgrade the BIOS, install dual v2 chips, and run at PCIe 3.0 speeds.

Intel is implementing a new BIOS option called “Processor PCIe Link Speed” that allows users to select the appropriate PCIe* link speed of PCIe slots in platforms employing the Intel® Xeon® E5-26xx V2, E5-24xx V2 or E5-16xx V2 families. By default, the installed PCIe* 3.0 adapters will run at their maximum native speed, but may be limited to PCIe* 2.0 speeds if desired. Refer to the relevant platform Technical Product Specification (TPS) for more details on this new BIOS option. The BIOS option “Processor PCIe Link Speed” will not be visible if the platform employs the Intel® Xeon® Processor E5-26xx V1, E5-24xx V1, E5-16xx V1, or E5-46xx V1 families, and the BIOS will limit the PCIe slot speed to PCIe* 2.0 speeds unless the add-in adapters have been tested for robust operation at PCIe 3.0 speeds. Refer to the web page PCIe* Gen 3 Adapter Support for Intel® Server Board Product Families for the most up-to-date list of tested PCIe* 3.0 adapters that will run at their maximum native speed.
Still haven't gotten my machine to boot directly off NVMe with 01.08.0003 :(
 

J Hart

Active Member
Apr 23, 2015
Sorry I missed this question. I'll verify this when I get home and set up that machine again (just moved). The NVMe card I used shows up as a boot device after I installed Windows 10 on it in UEFI mode. It didn't boot straight away; I needed to go into the BIOS and switch the boot order to that device. I haven't tried to boot anything else from the NVMe, but I suppose there isn't any serious barrier to stop Linux from booting the same way. If it all failed, I was just going to put a bootloader on a USB stick and then have it point to the image on the NVMe device.
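That USB-stick fallback would basically be a GRUB stick whose grub.cfg chainloads the loader already installed on the NVMe drive. A sketch, assuming a UEFI Windows install; the UUID is a placeholder for the NVMe drive's EFI system partition:

```
# grub.cfg on the USB stick (sketch; get the real UUID from blkid)
insmod part_gpt
insmod fat
insmod chain
search --fs-uuid --set=root REPLACE-WITH-ESP-UUID
# Windows loader path; a Linux install would chainload
# /EFI/<distro>/grubx64.efi instead
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
boot
```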

If it still isn't showing up, it might be the PCIe slot. On the S2600CP boards, every slot is connected to processor 0 except for one of the x16 slots. For the S2600IP4 I think it is half and half between processor 0 and processor 1. Some systems have a restriction as to which processor a boot device can be attached to; otherwise it won't be initialized or seen by the UEFI BIOS and won't show up as a boot device. The map of which slots go where is in the S2600IP manual, and you might just have to move the card to a different slot to get it to be happy.
 

zackiv31

Member
May 16, 2016
Yah, I have the NVMe adapter installed in the bottom PCIe slot, which is attached to CPU 0. I'm 90% sure I also tested it installed in the top slots (CPU 1), but still couldn't get it to appear in the BIOS. I ended up installing / to a standard SATA SSD and putting /home on the NVMe drive.
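If anyone wants to replicate that split (/ on a SATA SSD, /home on the NVMe), it's just one line in /etc/fstab. A sketch; the UUID is a placeholder, get yours from blkid, and adjust the filesystem type if you didn't use ext4:

```
# /etc/fstab entry (sketch) -- replace the UUID with your own from blkid
UUID=REPLACE-WITH-YOUR-UUID  /home  ext4  defaults,noatime  0  2
```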

Did you do all this with the 01.06.x BIOS? I couldn't even get CPU 1 to work when I was on that older BIOS (with dual E5-2670 v1s). I flashed multiple times and could never get it to recognize both. I upgraded to the 01.08.x BIOS and everything's working great (except for the PCIe 2.0 speed limitation). *Really* hoping this v2 upgrade solves all these issues..
 

vrod

Active Member
Jan 18, 2015
It of course depends on the graphics card you will run, but even 2.0 x8 is sufficient in many cases.
 

mixtecinc

Member
Feb 18, 2013
I am curious whether anybody has been able to boot the S2600CP from an Intel 750 drive, either the PCIe card version or the standalone drive, using the V1 chips? If so, what BIOS version were you using? If I have been reading this thread correctly, there should not be an issue with the V2 chips?

Thanks

Justin
 

zackiv31

Member
May 16, 2016
So I got my dual E5-2650L v2s installed and am experiencing terrible PCIe speeds. PCIe link speeds are all set to max (8 GT/s) in the BIOS (which was detected automatically), but my two M.2-to-PCIe NVMe adapters with SM951s now only record ~300MB/s reads. It seems upgrading to v2 CPUs has managed to destroy my performance. Has anyone else experienced this?

I also upgraded my BIOS at this time, to the “Intel® Server Board S2600IP, Workstation Board W2600CR Firmware update package for IDA, OFU, EFI and WinPE*” (which is the last BIOS that can be downgraded).

But, I did that before with my v1 e5-2670's and my speeds were the same, so I don't think it's related to upgrading the BIOS alone.

I've tried two of the same M.2 adapters, bought from eBay, in the top and bottom slots. But no luck.

Anyone have any ideas?

EDIT: Actually, looking at it more closely with the gnome-disks utility on a Samsung MZVPV256HDGL... my write speeds are to spec at 1200MB/s. Read speeds are still ~300MB/s (where they should be 2150MB/s).

EDIT2: lspci -vv confirms it's running at the correct link speed:

06:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller (rev 01) (prog-if 02 [NVM Express])
LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
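For anyone else chasing this, the same LnkSta fields can be pulled apart with a tiny shell helper, or read straight from sysfs on a live system. A sketch; the sysfs paths assume the 06:00.0 address above:

```shell
# Extract negotiated PCIe speed and width from an lspci LnkSta line.
parse_lnksta() {
  # e.g. "LnkSta: Speed 8GT/s, Width x4, ..." -> "8GT/s x4"
  echo "$1" | sed -n 's/.*Speed \([0-9.]*GT\/s\), Width \(x[0-9]*\).*/\1 \2/p'
}

parse_lnksta "LnkSta: Speed 8GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-"
# prints: 8GT/s x4

# On a live system the kernel exposes the same information directly:
#   cat /sys/bus/pci/devices/0000:06:00.0/current_link_speed
#   cat /sys/bus/pci/devices/0000:06:00.0/current_link_width
```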
EDIT3: And an image of what gnome-disks is showing with 10 runs of 1000MB: http://i.imgur.com/JjqylPE.png
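For a quick cross-check of sequential read speed outside gnome-disks, a sketch; the device path is an assumption, substitute your own NVMe namespace:

```shell
# Read-only benchmark hitting the drive, not the page cache (run as root):
#
#   dd if=/dev/nvme0n1 of=/dev/null bs=1M count=1000 iflag=direct
#
# dd reports bytes copied and elapsed seconds; this helper converts them
# to the decimal MB/s figure that tools like gnome-disks display.
mb_per_sec() { awk -v b="$1" -v s="$2" 'BEGIN { printf "%.0f\n", b / s / 1000000 }'; }

mb_per_sec 1048576000 0.5   # 1000 MiB in half a second -> prints 2097
```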

EDIT4: SOLVED? Resetting the BIOS with the jumper method seems to have resolved the issue... I'm getting over 2GB/s reads now on both drives. Not sure which setting I had that was causing this. I'll slowly re-enable some of the BIOS settings I had and see if I can trigger it.


TL;DR: v2 chips enable PCIe 3.0 and I get max speed from my NVMe drives.
 

zanechua

Member
May 6, 2016
Why do the v2s enable PCIe 3.0 but the v1s don't? Weird. It seems like a firmware restriction by Intel; the v1s are definitely capable of doing PCIe 3.0 as well.

Other than the logging errors, is there any benefit to a higher-version BIOS, though?
 

zanechua

Member
May 6, 2016
So I tried the method J Hart came up with to flash the BIOS of the S2600 to get PCIe 3.0 speeds. Do not do this with a GTX 10xx series card. It simply does not work at PCIe 3.0 speeds; nothing I did made it work, and the display would always just hang. Even with an updated ME, the GTX 1060 that I have does not work. It seems there are more incompatibilities when running at PCIe 3.0. Not sure if there's any other way to get PCIe 3.0, with a patched BIOS or something, without actually having to purchase V2 E5s.
 

zackiv31

Member
May 16, 2016
FWIW I can confirm that my GTX 1070 runs at 3.0 speeds with V2 chips... but yah, I could never get everything working to my satisfaction with v1 chips.
 

zanechua

Member
May 6, 2016
FWIW I can confirm that my GTX 1070 runs at 3.0 speeds with V2 chips... but yah, I could never get everything working to my satisfaction with v1 chips.
Good to know, zackiv31. I just spent two hours or so racking my brain as to why it works even on 01.08.0003 but just fails on 01.06.0001. Probably something to do with the PCIe 3.0 handling. I flashed back to 02.04.0003, which is just under the security-locked downgrade version, in case anyone ever finds a solution to this...
 

TrevorX

New Member
Apr 25, 2016
Sorry I never came back to update this. Long story short, that board was unrecoverable and I had to replace it. I took it home and spent several months playing with it on and off; absolutely nothing worked.

FWIW I can confirm that my GTX 1070 runs at 3.0 speeds with V2 chips... but yah, I could never get everything working to my satisfaction with v1 chips.
Thanks for that very useful information zackiv31. Can I please ask where you sourced your V2 chips from? I have a couple of E5-2670 C2 V1 CPUs I picked up for under USD$50 each, but the equivalent V2 CPUs are more than twenty times the price... Quite a steep price to pay to get PCIe 3.0 working properly!

I also wanted to share this, as I haven't seen it linked anywhere - it's the Intel technical advisory regarding PCIe 3.0 add-on card compatibility with this whole series of boards. It appears PCIe 3.0 had substantial problems with the V1 CPUs (and even some with the V2s, too). The solution was to limit all but heavily qualified and tested products to PCIe 2.0, and with the V2 Ivy Bridge-EP CPUs a new BIOS option was enabled allowing the user to select the PCIe 3.0 or 2.0 link speed manually. This option isn't present with V1 CPUs - they're limited to PCIe 2.0. The earlier BIOS versions allowing PCIe 3.0 bandwidth for NVMe and graphics cards are almost certainly throwing up constant errors in the log files, but unless you're looking at them you won't even be aware that there's a problem.

So, very sadly, it looks like Intel did a pretty poor job with the PCIe 3.0 implementation on Sandy Bridge-EP, which would have had a lot to do with the fact that it was such a new spec and there were very few products available to test with at the time.

It would be nice, however, if Intel could enable the user-selectable PCIe link speed option for V1 CPUs in a 'BETA' BIOS that comes with plenty of warnings that it is unsupported and 'use at your own risk' - at least then we would have the option to test PCIe 3.0 operation with our own products.
 

zackiv31

Member
May 16, 2016
Thanks for that very useful information zackiv31. Can I please ask where you sourced your V2 chips from? I have a couple of E5-2670 C2 V1 CPUs I picked up for under USD$50 each, but the equivalent V2 CPUs are more than twenty times the price... Quite a steep price to pay to get PCIe 3.0 working properly!
I get bulk CPUs on occasion, but I'm afraid you're a little too late for my last sale, as I just sold all the e5-2628lv2's.

FS: Xeon e5/e7 2011 v2 CPUs, 64gb/16gb DDR3 ECC Reg

I'm currently sticking with an E5-2630L v2, which is a 2.4GHz base clock. If I were you I'd look out for these low-voltage chips, as they tend to be much cheaper.
 

wildpig1234

Well-Known Member
Aug 22, 2016
It's unfortunate, but there is no point wasting our time trying to get PCIe 3.0 to work with the S2600 unless you have v2 CPUs. I doubt Intel will release a BIOS that allows this; after all, they really have no economic incentive to do so, even if it wouldn't do most people any harm.

At current cost, v2 chips are just not worth it. You might save on not having to get another motherboard, but really, if I were going to put out $500, I would buy QS samples of the 18-core E5-2686 v3, and that is what I am saving up for right now...
 

zanechua

Member
May 6, 2016
Are you able to boot from your NVMe drive? A year in and I still use a secondary SSD to mount my NVMe as /home
Oh, I don't have an NVMe drive. I looked at benchmarks and it looked awesome, but I looked at real-world performance and found absolutely no difference, so I decided against getting one.

I was referring to having recently tried out the downgrade successfully. I did the downgrade to try to use PCIe 3.0 with my GTX 1060, but obviously that didn't work either.
 

thoff

New Member
Mar 29, 2017
@J Hart
HELP! Please help. Right now I have a dead server.
I followed your steps, right up to 15 - I successfully flashed firmware 1.06.0001, then pulled the USB drives and rebooted. I left the machine for a good 15 minutes, where it didn't get past the initial splash screen (showing Installed BIOS Se5C600.86B.02.06.0002 - I'm not concerned about the firmware number - I realise that will be reset when the full firmware gets flashed), with a little blinking cursor. I shut it down, inserted the target USB drive, then tried starting up again. Just does the same thing. I tried changing the BIOS recovery jumper back to the default position (because I wasn't sure if it should be returned at this point - I couldn't find that in your step-by-step) and then the server does precisely *nothing* - you turn it on, the console screen remains blank, the fans spin up, the diagnostic LEDs don't flicker, there's just an amber and a green. (I don't have the full manual here to check codes - didn't think I'd need it. Stupid me).

So WTH do I do now?? I am seriously $#!@@ing myself here - if I've bricked a perfectly good server my a$$ is on the line.
Did you leave the BIOS jumpers in the wrong spot? That can cause these symptoms.

After the BIOS flash, the mobo will revert back to using the onboard video card by default.

This mobo is also super finicky when it comes to GPUs.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I had that same problem with my Intel 4U... it sat for 3 months, but previously I'd installed CPU/RAM and ESXi on a USB drive. I plugged it in to finish updating/setting it up for a buddy, got busy, and came back 2 hours later to a blinking cursor. I had added the HDDs and SSDs prior to booting, so I thought maybe something was odd there, pulled them all out of the hot-swap bays, and tried to boot again... this time I could see the LSI/SAS during boot, and then the blinking cursor again! Ugh. Yanked the Intel LSI PCIe card as well as the onboard module, rebooted, same thing... Went into the BIOS to force boot from the USB drive (it saw it) and still a blinking cursor.

Downloaded the latest BIOS update for the motherboard (fall of 2016), thinking I broke something or something got corrupted while it sat.
Updated the BIOS, rebooted, forced USB as the boot drive in the BIOS, and... blinking cursor.
Hair-pulling-out mode: it's now about 2 hours into diagnosing this and everything is stripped except RAM and CPU, which is where I was heading next...
...until I got a better idea! Why not throw in one of the SSDs I have on my bench with Windows on it and see if that boots/works?

Windows 8.1 booted just fine; Windows works fine, and all the RAM, CPU, etc. work fine.
Put in the LSI module and the 8x SSDs in cages, rebooted, and all the SSDs are recognized and seen during boot-up.

That's where I ran out of time.

But in my case it seems the cause was a corrupt and/or broken USB drive that had previously been working fine with ESXi.
I should note this is/was my first failure of a high-end USB drive (a SanDisk Extreme or something along those lines; I forget the exact model).

Hope that story may help someone ;) and I hope I can wrap up testing and re-installing this thing today myself!!
 

frogtech

Well-Known Member
Jan 4, 2016
Is the PCIe 2.0/3.0 fiasco an issue only with this specific board, or with all of the S2600 products in that family?