PCIe SSDs on older server question


katit

Active Member
Mar 18, 2015
I have an older server with an Intel S2600CP2 motherboard and the following expansion options:

1712327786153.png

I am not sure if my idea is original, but will this work?


If I install this adapter card, I want to install 2x PCIe SSDs and set them up as a mirror on Windows Server.

Will this work? My main concern is whether the motherboard's expansion slots will be compatible.
 

nabsltd

Well-Known Member
Jan 26, 2022
If I install this adapter card, I want to install 2x PCIe SSDs and set them up as a mirror on Windows Server.
No, because that card does not support 2x NVMe SSDs. It supports one NVMe SSD and one SATA SSD. You would need something like this instead.

That said, you'd need to check whether your motherboard has bifurcation support in the BIOS. You would need to set one of the x8 slots to x4x4 to support a dual-NVMe PCIe card. The only manual I could find for the motherboard was all about hardware, with no info on the BIOS setup, so I don't know whether your board supports it.

If your motherboard does not, you'd have to get something like this, although there are cheaper options from sites like AliExpress.
 

katit

Active Member
Mar 18, 2015
If your motherboard does not, you'd have to get something like this, although there are cheaper options from sites like AliExpress.
So, with this specific card (which I don't mind getting), I should be able to run 2x M.2 drives?
 

Chriggel

Member
Mar 30, 2024
I think platforms of this era don't support PCIe bifurcation, so that rules out all passive adapters.

Even if it does support it, if you want to mirror your boot drive then bifurcation isn't the "best" option anyway, because you'd also need Intel VROC or AMD RAIDXpert to mirror the drives in hardware. Devices like the one nabsltd showed you are basically similar to the boot solutions offered by the big server vendors.
 

nexox

Well-Known Member
May 3, 2023
Some platforms from that era did eventually receive BIOS updates to enable bifurcation, but the only way to know is to go into the settings and check; it could be available for only one slot, or just not enabled at all.

Bifurcation vs. a PCIe switch doesn't change anything as far as how the OS sees the storage volumes, just how the signals are routed from the CPU to the drives; you need a software mirror either way.
 

katit

Active Member
Mar 18, 2015
Here are screenshots of the BIOS pages. @nexox, my main goal is to maximize storage performance.

I plan to run VMs on those drives.

Looks like I don't have the latest BIOS, but it also looks like I am not missing much:

=================================================================================
BIOS R02.06.0006
=================================================================================
- Fixed: "ID_NET_NAME_ONBOARD=eno1" is the same for two onboard NICs which should be different.


=================================================================================
BIOS R02.06.0005
=================================================================================
- Hotfix for security hole about RtcRead.

=================================================================================
BIOS R02.06.0004
=================================================================================
- Fixed: There are pointers in the SMM communication buffer but they are not checked before use in the IPMI driver.
=================================================================================

=================================================================================
BIOS R02.06.0003
=================================================================================
- Fixed: PWM Offset not working properly in BIOS 2.05.0004.
- Fixed: Issue with DDR3 SPD Manufacturer field on Ventura DDR3 DIMMs in S2600GZ.
=================================================================================



IMG_9972.jpg

IMG_9973.jpg
 

nexox

Well-Known Member
May 3, 2023
Have you updated the BIOS to the latest available? It looks like there was a release as late as 2018, but the version numbers don't match up with your screenshots, so I can't tell. Still, skimming the BIOS release notes I don't see bifurcation options added. Note that you generally want to update the BMC firmware before the BIOS.
 

Chriggel

Member
Mar 30, 2024
You can check if there's something under PCI Configuration -> Socket 1, but I doubt it, especially with this BIOS from 2015.
You can update it: Firmware Update Package Update for EFI Intel® Server Boards and Intel® Server Systems Based on Intel® 60X Chipset
But a quick Ctrl+F for "bifurcation" in the release notes got zero hits, so either the board never got it, or they didn't bother to mention it.

Not wanting to boot from it makes things a little easier, but without bifurcation you're still limited to either one SSD per PCIe slot or a controller card.
 

katit

Active Member
Mar 18, 2015
Have you updated the BIOS to the latest available? It looks like there was a release as late as 2018, but the version numbers don't match up with your screenshots, so I can't tell. Still, skimming the BIOS release notes I don't see bifurcation options added. Note that you generally want to update the BMC firmware before the BIOS.
See above; I posted the release notes I found for the BIOS versions newer than the one I have.

What is BMC firmware?
 

katit

Active Member
Mar 18, 2015
You can check if there's something under PCI Configuration -> Socket 1, but I doubt it, especially with this BIOS from 2015.
You can update it: Firmware Update Package Update for EFI Intel® Server Boards and Intel® Server Systems Based on Intel® 60X Chipset
But a quick Ctrl+F for "bifurcation" in the release notes got zero hits, so either the board never got it, or they didn't bother to mention it.

Not wanting to boot from it makes things a little easier, but without bifurcation you're still limited to either one SSD per PCIe slot or a controller card.
Controller card is OK. What are the cons of installing a controller card?
 

Tech Junky

Active Member
Oct 26, 2023
2 points here.... It's Intel and it's OLD.

Gen3 slots / lanes will be slow in comparison to current stuff on the market, and PLX switching to get it working will be expensive.

For those reasons it's time to consider a rebuild to make it worthwhile. Even a consumer board on AMD AM5 would produce better options / results / performance.

I'm running a 7900X AMD setup w/ a Kioxia U.3 drive that hits 6.5GB/s and is 15.36TB in size. The market has shifted a bit in terms of pricing per TB, swinging higher than it should be again. I picked up the drive in the last 6-9 months, but it was considerably cheaper then. Prices have been jumping around recently; I'm seeing other 16TB drives at double or more now for some reason. It used to be a good argument for 8TB drives: M.2 @ $800 vs U.x @ $400, or 16TB options for ~$1000.

But that's beside the point. If you put together an AMD AM5 system, you could use those cheap ~$70 passive adapters for quad M.2, set the slot to x4/x4/x4/x4, and run 4 drives off the slot for a whole lot cheaper than dealing with Intel / PLX. A PLX card's price tag alone would net you 4 drives; with bifurcation you just add the cheap adapter to make it work.
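A quick back-of-envelope sketch of the Gen3 vs. Gen4 gap being discussed here (Python; the per-lane rates are the theoretical figures after 128b/130b encoding overhead, so real-world throughput will be somewhat lower):

```python
# Approximate usable bandwidth per PCIe lane in GB/s, after
# 128b/130b encoding overhead (Gen3 runs 8 GT/s, Gen4 16 GT/s).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def slot_bandwidth(gen: int, lanes: int) -> float:
    """Theoretical ceiling for a slot of the given generation and width."""
    return PER_LANE_GBPS[gen] * lanes

print(f"Gen3 x4: {slot_bandwidth(3, 4):.2f} GB/s")  # ~3.94
print(f"Gen4 x4: {slot_bandwidth(4, 4):.2f} GB/s")  # ~7.88
print(f"Gen3 x8: {slot_bandwidth(3, 8):.2f} GB/s")  # ~7.88
```

This is why a Gen4 U.3 drive like the Kioxia mentioned above can hit 6.5 GB/s, while any single drive in a Gen3 x4 link on this board tops out near 3.9 GB/s no matter how fast the drive itself is.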
 

nexox

Well-Known Member
May 3, 2023
Controller card is OK. What are the cons of installing a controller card?
Cards with a PCIe switch cost more and use more power; aside from that, not much. You don't have to use M.2 drives either; you can just grab a P4608, which is two P4600 SSDs and a switch on a single x8 card.
 

katit

Active Member
Mar 18, 2015
2 points here.... It's Intel and it's OLD.

Gen3 slots / lanes will be slow in comparison to current stuff on the market, and PLX switching to get it working will be expensive.

For those reasons it's time to consider a rebuild to make it worthwhile. Even a consumer board on AMD AM5 would produce better options / results / performance.

I'm running a 7900X AMD setup w/ a Kioxia U.3 drive that hits 6.5GB/s and is 15.36TB in size. The market has shifted a bit in terms of pricing per TB, swinging higher than it should be again. I picked up the drive in the last 6-9 months, but it was considerably cheaper then. Prices have been jumping around recently; I'm seeing other 16TB drives at double or more now for some reason. It used to be a good argument for 8TB drives: M.2 @ $800 vs U.x @ $400, or 16TB options for ~$1000.

But that's beside the point. If you put together an AMD AM5 system, you could use those cheap ~$70 passive adapters for quad M.2, set the slot to x4/x4/x4/x4, and run 4 drives off the slot for a whole lot cheaper than dealing with Intel / PLX. A PLX card's price tag alone would net you 4 drives; with bifurcation you just add the cheap adapter to make it work.
I am sorry. I read it 2-3 times, but to me it's all over the place :)
What is PLX switching, and what about it is expensive?

I was suggested this card:

Is it PLX and is it expensive?
 

Chriggel

Member
Mar 30, 2024
What is BMC firmware?
It's the firmware for the Baseboard Management Controller, the management port that offers remote access to your hardware over the network.

Controller card is OK. What are the cons of installing a controller card?
Not easy to answer, as there are many different kinds. In general, a controller requires a driver and adds complexity and latency to the storage stack. Whether this is really a disadvantage depends on your requirements; in some cases it is not, but not all controllers are suited for all workloads. The cheap ones are usually just meant as boot solutions. Since you want to run VMs on it, it depends on how many VMs and what they do. With a cheap ASMedia chip, for example, I wouldn't expect the best performance. The card that was recommended for 2x M.2 uses an ASMedia chip; these two-port controllers are generally the simplest and cheapest, but also not the fastest. As mentioned, they are intended more as boot solutions.

If you're going to add a controller anyway, you're not limited to M.2 nor 2 ports either, but this all depends on your needs and your budget.

Also, do you only have one CPU installed? And are you limited by the number of free PCIe slots? You could still add two separate PCIe SSDs in two separate slots and run a mirror on those, without the need for a separate controller. Just because your board doesn't support bifurcation doesn't mean you can't use PCIe SSDs at all.

I am sorry. I read it 2-3 times, but to me it's all over the place :)
What is PLX switching, and what about it is expensive?
PLX switches are a type of switching chip that switches PCIe lanes, much like you would switch network ports. PLX switches and solutions using them are not as common today; back in the day they were pretty common, because platforms didn't have as many PCIe lanes as they can have now. A solution with a PLX chip would be another way to add several SSDs to one slot without bifurcation support. It can't hurt to know that this was/is a thing, but it's probably not a solution for you. It's a very specialized product; if there even is one you could still buy today that matches your use case, it will be very expensive.

None of the solutions mentioned in this thread so far use a PLX chip.
 

katit

Active Member
Mar 18, 2015
It's the firmware for the Baseboard Management Controller, the management port that offers remote access to your hardware over the network.

Not easy to answer, as there are many different kinds. In general, a controller requires a driver and adds complexity and latency to the storage stack. Whether this is really a disadvantage depends on your requirements; in some cases it is not, but not all controllers are suited for all workloads. The cheap ones are usually just meant as boot solutions. Since you want to run VMs on it, it depends on how many VMs and what they do. With a cheap ASMedia chip, for example, I wouldn't expect the best performance. The card that was recommended for 2x M.2 uses an ASMedia chip; these two-port controllers are generally the simplest and cheapest, but also not the fastest. As mentioned, they are intended more as boot solutions.

If you're going to add a controller anyway, you're not limited to M.2 nor 2 ports either, but this all depends on your needs and your budget.

Also, do you only have one CPU installed? And are you limited by the number of free PCIe slots? You could still add two separate PCIe SSDs in two separate slots and run a mirror on those, without the need for a separate controller. Just because your board doesn't support bifurcation doesn't mean you can't use PCIe SSDs at all.

PLX switches are a type of switching chip that switches PCIe lanes, much like you would switch network ports. PLX switches and solutions using them are not as common today; back in the day they were pretty common, because platforms didn't have as many PCIe lanes as they can have now. A solution with a PLX chip would be another way to add several SSDs to one slot without bifurcation support. It can't hurt to know that this was/is a thing, but it's probably not a solution for you. It's a very specialized product; if there even is one you could still buy today that matches your use case, it will be very expensive.

None of the solutions mentioned in this thread so far use a PLX chip.
Thank you for detailed answer.

1. BMC. Yes, I think I just need to go ahead and update it. I posted a separate topic here; I don't think this will help, but who knows?

2. I got this server populated with 128GB and 2 CPUs, in a nice case with 2.5" storage slots. I feel like it's enough of everything to run what I need, and it feels like a waste to throw it all out. I don't know the market well enough to know what motherboard I could just swap in without other issues.

3. Storage is the biggest question right now. Currently we run the exact same server with regular 2.5" SSDs and don't have any issues; it's just a small company server. But since I am moving to a new OS (Windows Server with Hyper-V), I want to give it a boost and not worry about it for another X years.

So. Storage. My requirement is 4TB of mirrored storage, and I want to keep it as simple as I can.
If I buy PCIe SSDs, what should they be and what would they cost?
M.2 was my "idea" because the storage itself is pretty reasonably priced new, and the speeds are great (I have one in my desktop).

When we talk about additional cards, etc., remember I have to buy +1 of everything as a spare.
So if I get a card + 2 drives, I will actually have to buy 2 cards and 3 drives (so I have extras).

If I get 2x PCIe SSDs, I will need 3 so I have one spare lying around just in case.
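The spares math above can be sketched as a trivial helper (the mirror width and spare count are just the numbers from the post; "cards" assumes one carrier card per build plus one spare card when spares are kept):

```python
def units_to_buy(mirror_width: int = 2, spares: int = 1) -> dict:
    """How many drives (and carrier cards, if a card is needed at all)
    to purchase when keeping cold spares on the shelf."""
    return {
        "drives": mirror_width + spares,          # drives in the mirror + spares
        "cards": 1 + (1 if spares > 0 else 0),    # one in use + one spare card
    }

print(units_to_buy())  # {'drives': 3, 'cards': 2}
```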


I can probably easily get by with 2TB, and then have additional 2.5" SSDs for regular storage.


See below for pics of the open slots. I will remove the graphics card as I don't really need it there.


IMG_9971.jpgIMG_9970.jpg
 

nexox

Well-Known Member
May 3, 2023
You almost certainly don't want to use desktop SSDs in a server: they don't perform well under server workloads, and their rated lifetime gets used up really quickly. There are enterprise-grade M.2 SSDs, but they're hard to find above 2TB. You seem to have quite a few full-height slots available; you can put U.2 drives each on their own cheap adapter board and just leave some PCIe lanes underutilized.
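To put rough numbers on the endurance point (a sketch; the TBW and DWPD figures below are illustrative placeholders rather than any specific drive's rating, and the daily write volume is an assumed workload; check the actual datasheets):

```python
def years_of_life(tbw_rating: float, tb_written_per_year: float) -> float:
    """Years until the rated write endurance (TBW) is exhausted."""
    return tbw_rating / tb_written_per_year

# Illustrative ratings: a typical 2TB consumer M.2 is rated around
# 1200 TBW, while a 2TB enterprise drive at 1 DWPD over a 5-year
# warranty is rated 2 TB * 1 * 365 * 5 = 3650 TBW.
host_writes_tb_per_day = 2.0  # assumed VM write load
tb_per_year = host_writes_tb_per_day * 365

print(f"consumer:   {years_of_life(1200, tb_per_year):.1f} years")  # ~1.6
print(f"enterprise: {years_of_life(3650, tb_per_year):.1f} years")  # ~5.0
```

Under the same write load, the consumer drive's rated endurance runs out several times faster, which is the "lifetime gets used up really quickly" point above.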
 

Chriggel

Member
Mar 30, 2024
You almost certainly don't want to use desktop SSDs in a server: they don't perform well under server workloads, and their rated lifetime gets used up really quickly. There are enterprise-grade M.2 SSDs, but they're hard to find above 2TB. You seem to have quite a few full-height slots available; you can put U.2 drives each on their own cheap adapter board and just leave some PCIe lanes underutilized.
+1, exactly my thinking. It would be a way to still add fairly modern storage to the system.

2. I got this server populated with 128GB and 2 CPUs, in a nice case with 2.5" storage slots. I feel like it's enough of everything to run what I need, and it feels like a waste to throw it all out. I don't know the market well enough to know what motherboard I could just swap in without other issues.

3. Storage is the biggest question right now. Currently we run the exact same server with regular 2.5" SSDs and don't have any issues; it's just a small company server. But since I am moving to a new OS (Windows Server with Hyper-V), I want to give it a boost and not worry about it for another X years.
Yes, this system was obviously intended for 2.5" SAS/SATA drives. That's also an option you could go for, as it could still be good enough, and it gives you the benefit of being able to share parts between the systems.

I don't know which case this is specifically, but given the format of the S2600 board, it can probably fit most other E-ATX/SSI CEB/EEB boards if you want to move to more modern hardware at some point without changing the case. Just look out for any proprietary solutions that may be used in the case, like front panel connectors or certain fan connectors.

Not worrying about it for another X years, well...
I feel it's necessary to do the responsible thing at this point and warn you about running things that are potentially crucial for your company on an ancient server, especially if you're not 100% sure what you're doing. Not immediately throwing such machines away is a good thing, but their usefulness in the day-to-day operations of the company might be somewhat limited, so you want to watch out for that. The fact that you're already considering buying spare parts tells me that you've at least started thinking about this yourself, which is good.
 

katit

Active Member
Mar 18, 2015
Are these the U.2 drives you're referring to? They look just like regular 2.5" drives,
or even better?

And cards like that?

Or even this?


Pardon again, I am not in the know :) But can you explain in layman's terms why this will work, while a simple adapter + M.2 will not be as good?
Is it because of consumer vs. server drives? Or something else I don't understand?

Never mind, I read above: M.2 doesn't have many server offerings.


So... for better storage speed, it seems like U.2 is the most viable solution for my specific case?
 

nexox

Well-Known Member
May 3, 2023
The Ableconn would be fine; more than one drive won't work without bifurcation, though. I used to see somewhat cheaper single U.2 options from 10GTek, which I can't find today. The other option is the add-in card format, but those are often more expensive than a U.2 drive plus a $20 adapter card. I would usually buy enterprise SSDs used; they tend to last forever, and new ones are mostly quite expensive.
 

Tech Junky

Active Member
Oct 26, 2023
U drives are enterprise options: they have higher endurance ratings, offer the same speeds as the consumer M.2 options, and are somewhat cheaper when talking about capacities beyond 4TB. However, right now they seem to have spiked in price. For instance, last August a 16TB U drive would have cost ~$1000, but right now they're much higher. They will come down again; the question is when.

You have a few options for connecting them, such as MCIO, PCB adapters, M.2 + cables, etc.

I would avoid the Micron ones due to personal experience and others' complaints about them running hot or not working. I went through 2 of them and neither lasted more than a week for some reason; I ended up moving to a Kioxia CD8 instead, which runs cool @ 40C.

U.2 drives - work with U.2 adapters
U.3 drives - work with both U.2 & U.3 adapters
U.2 adapters - work with both drive types
U.3 adapters - work only with U.3 drives

If you want the full potential w/o guessing, either get a PCB adapter or cable it using OCuLink cables, as they support the full speeds of Gen4.