X10SRH-CF @ $200 + shipping, E5-2697A v4 @ $89


Glock24

Active Member
May 13, 2019
159
93
28
Aww, crap, yeah, I made that mistake too, lol! Why would anyone even list 10-base in this day and age? I even know the i350 doesn't do 10gig; I figured they just didn't list the separate chipset. I saw the beefy heatsink by the ports and figured it was something better than gig-e.

So other than the actual SAS controller, this is really no different than my X10SRL-F boards... Either way, it'll still replace one of my X9SRL-F boards; I just need to dig through my pile of spare Xeons and see what decent 2011-3 chips I have lying around for it.
The SRH is an SRi with a SAS controller; the SRL has better PCI-E connectivity (quick lane tally after the lists below).


X10SRL
  • 2 PCI-E 3.0 x4 (in x8 slot), auto-switch to 1 x8 and 1 x0,
  • 2 PCI-E 3.0 x8,
  • 2 PCI-E 3.0 x8 (in x16 slot)
  • 1 PCI-E 2.0 x4 (in x8 slot)

X10SRi / X10SRH
  • 1 PCI-E 3.0 x16,
  • 1 PCI-E 3.0 x4 (in x8 slot),
  • 2 PCI-E 3.0 x8
  • 1 PCI-E 2.0 x4 (in x8 slot)
  • 1 PCI-E 2.0 x2 (in x8 slot)
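Quick tally of the CPU-attached Gen3 lanes in those lists (a rough sketch that takes the lists at face value; the PCH-fed Gen2 slots aren't counted):

```python
# Gen3 lanes wired to expansion slots, summed straight from the lists above.
# The Gen2 slots hang off the PCH and aren't counted here.
slots = {
    "X10SRL": [4, 4, 8, 8, 8, 8],   # two x4-in-x8, two x8, two x8-in-x16
    "X10SRH": [16, 4, 8, 8],        # as listed; if the x16 slot is electrically x8, the total is 28
}
for board, widths in slots.items():
    print(f"{board}: {sum(widths)} Gen3 lanes to slots")
# X10SRL: 40 Gen3 lanes to slots
# X10SRH: 36 Gen3 lanes to slots
```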
 

Cruzader

Well-Known Member
Jan 1, 2021
554
565
93
I figured they just didn't list the separate chipset. I saw the beefy heatsink by the ports and figured it was something better than gig-e.
You can already see by it being RJ-45 and not SFP+ that it's not gonna have any interesting ports for more than management, tbh.
 
  • Like
Reactions: Samir

acquacow

Well-Known Member
Feb 15, 2017
787
439
63
42
You can already see by it being RJ-45 and not SFP+ that it's not gonna have any interesting ports for more than management, tbh.
All of my 10GbE at home is RJ45, even my other Supermicro X10 boards, though those are Xeon-D with onboard copper 10GbE.
 
  • Like
Reactions: Samir

Fritz

Well-Known Member
Apr 6, 2015
3,386
1,387
113
70
The problem with 10g rj45 is the switches are still way too damned expensive.
 
  • Like
Reactions: Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,314
1,484
113
49
HSV and SFO
PC4-2400T is starting to get close to $1 USD/GB. The 64GB ones though... $100... paying a premium for that dense a stick.

Honestly I just did a search and sorted by lowest price, like maybe 4 days ago? Here's one - a single at $45.

I think, as everyone has said though, as more and more Intel E5 v3/v4 gets decommed the memory and CPU prices will keep falling.
Thank you for the link and great info. :) Since I would just be moving into v3/v4, I think I should wait for prices to bottom out since I haven't even fully utilized my DDR2 and DDR3 based servers yet. o_O
 
  • Wow
  • Like
Reactions: Markess and itronin

Jaket

Active Member
Jan 4, 2017
239
126
43
Seattle, New York
purevoltage.com
The problem with 10g rj45 is the switches are still way too damned expensive.
I agree, we had to pick up a 10G RJ45 switch recently just because of the new Ryzen systems wanting to be able to do 10G without adding a card to all the systems. Typically we use Arista 40G and break those out into 4x10G, which works so much better. Sadly those prices went up 2x over the last two years. :(
 
  • Like
Reactions: Samir

Glock24

Active Member
May 13, 2019
159
93
28
PC4-2400T is starting to get close to $1 USD/GB. The 64GB ones though... $100... paying a premium for that dense a stick.

Honestly I just did a search and sorted by lowest price, like maybe 4 days ago? Here's one - a single at $45.

I think, as everyone has said though, as more and more Intel E5 v3/v4 gets decommed the memory and CPU prices will keep falling. Motherboards though - they do seem to be in short supply. I lucked into a pair of X10DRH's in new-series CSE-216s for $200.00 each - pays to watch the errant post to homelabbity (ROFL). Based on motherboard prices I'm going to swap some SRL's out for those and hang onto the SRL's for the time being - just too hard to get. And just 3 years ago the SRL's were $130.00!!!!
I could not find 32GB 2400 RDIMMs for much less than $45; the best I could find was $44.99. I got them last night, and today the bay offered me 5% bucks.
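Quick $/GB check on the numbers in this thread (example prices from the posts above, not live listings; assuming the dense sticks quoted above are 64GB at ~$100):

```python
# $/GB for the sticks mentioned above (example prices from this thread, not live listings).
for gb, price in [(32, 44.99), (64, 100.00)]:
    print(f"{gb}GB @ ${price:.2f} -> ${price / gb:.2f}/GB")
# 32GB @ $44.99 -> $1.41/GB
# 64GB @ $100.00 -> $1.56/GB
```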
 
  • Like
Reactions: itronin and Samir

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
3,314
1,484
113
49
HSV and SFO
I could not find 32GB 2400 RDIMMs for much less than $45; the best I could find was $44.99. I got them last night, and today the bay offered me 5% bucks.
I couldn't find them either for less than that. But it comes and goes in spurts; someone in August got a 16GB module for $4 shipped, so the deals are on there if you're persistent.
 
  • Like
Reactions: itronin

Cruzader

Well-Known Member
Jan 1, 2021
554
565
93
All of my 10GbE at home is RJ45, even my other Supermicro X10 boards, though those are Xeon-D with onboard copper 10GbE.
If you got the low-end Xeon-D boards, they will probably be RJ-45, yeah.

Low-end/small compute edge units in locations where you generally don't want fibre tend to be its use case.
 
  • Like
Reactions: Samir

bambinone

New Member
Dec 26, 2020
18
21
3
Chicago, Illinois
X10SRH
  • 1 PCI-E 3.0 x16
It's actually x8-in-x16 on the SRH, x16-in-x16 on the SRi.

The SRL has better PCI-E connectivity.
It's bonkers. The SRH consumes eight lanes for the SAS3008 HBA and four lanes (at Gen2 speed) for the i350 NIC, so you only get 28 usable vs 40 on the SRL. I guess it makes a bit more sense on the four-port CLN4F variant. If you don't need those devices—and/or don't need the gigabit NIC attached directly to the CPU—you're really shooting yourself in the foot with the SRH.

It makes you realize how quickly PCIe lanes get used up, and how earth-shattering it was for EPYC to hit the market with 128 lanes from a single socket.
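Napkin math, using the numbers above (a single E5 v3/v4 gives 40 Gen3 lanes):

```python
# Lane budget on the X10SRH-CF, per the numbers above (single E5 v3/v4 = 40 Gen3 lanes).
cpu_lanes = 40
sas3008 = 8    # onboard SAS3008 HBA, CPU-attached
i350 = 4       # onboard i350 NIC, CPU-attached but running at Gen2 speed
print("lanes left for slots:", cpu_lanes - sas3008 - i350)   # 28, vs 40 on the SRL
```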
 

Markess

Well-Known Member
May 19, 2018
1,162
780
113
Northern California
It's bonkers. The SRH consumes eight lanes for the SAS3008 HBA and four lanes (at Gen2 speed) for the i350 NIC, so you only get 28 usable vs 40 on the SRL. I guess it makes a bit more sense on the four-port CLN4F variant. If you don't need those devices—and/or don't need the gigabit NIC attached directly to the CPU—you're really shooting yourself in the foot with the SRH.

It makes you realize how quickly PCIe lanes get used up, and how earth-shattering it was for EPYC to hit the market with 128 lanes from a single socket.
I think it depends on what you plan to do. There's a slightly different focus between the two boards. For the small set of folks for which the X10SRH-CF ticks all the boxes, it will probably cost less in the end and may be the better choice.

For maximum flexibility in component selection & upgrades, plus available CPU lanes assigned to slots, X10SRL is the clear choice.

But if you are going to need a RAID/HBA card anyway, and the SAS3008 is acceptable, then you really aren't "losing" 8 lanes with the X10SRH-CF. The i350 NICs in the X10SRH-CF support SR-IOV and some other server/virtualization-oriented features that would require an add-in card on the X10SRL, because that board's onboard i210s don't.

For the original enterprise customers, the lower cost of a motherboard that includes an appropriate RAID/HBA & NICs (vs. a motherboard + add-in cards) was probably a selling point. I'd bet a lot of customers using the X10SRH-CF for pretty basic server/virtualization roles never even used the PCIe slots, so the lower lane count wasn't an issue. I don't use the PCIe slots in mine.
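If anyone wants to poke at SR-IOV on the i350, on Linux it's basically one sysfs write. A minimal sketch; the interface name is a placeholder, and it assumes the igb driver is loaded with SR-IOV/VT-d enabled in the BIOS:

```python
# Minimal sketch: carve SR-IOV virtual functions out of an i350 port on Linux.
# "eno1" is a placeholder interface name; check `ip link` for yours. Needs root.
from pathlib import Path

dev = Path("/sys/class/net/eno1/device")
print("VFs supported:", (dev / "sriov_totalvfs").read_text().strip())
(dev / "sriov_numvfs").write_text("4")   # create 4 virtual functions to hand to VMs
```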
 
  • Like
Reactions: Samir and Fritz

bambinone

New Member
Dec 26, 2020
18
21
3
Chicago, Illinois
For the small set of folks for which the X10SRH-CF ticks all the boxes, it will probably cost less in the end and may be the better choice.
Oh, don't get me wrong, I love the heck out of my SRH. Until very recently I had it decked out with a ton of NVMe drives, two SAS expanders, a 10GbE NIC, a Quadro, etc. and I was doing SR-IOV on both the GbE and 10GbE NICs. Now it's just a fileserver, no more virt, but I was able to move the 10GbE NIC to a CPU-attached slot and side-grade to a cheaper processor with fewer cores and higher clocks. I don't plan to replace it any time soon!
 

zack$

Well-Known Member
Aug 16, 2018
708
338
63
This is exactly what we are doing with the SRH.

If you are on SAS/SATA drives then there is a ton of life still left in the SRH boards for file server duties.

If, however, you are moving to an all-NVMe workload/storage, I don't think the E5 v3/v4 CPUs will cut it (and consequently the SRH). There just aren't enough PCIe lanes!
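Rough math on why (assuming x4 per NVMe drive and 40 lanes from a single E5 v3/v4):

```python
# Why 40 CPU lanes run out fast with all-NVMe storage (assumes x4 per drive).
cpu_lanes = 40
nic = 8          # keep an x8 slot for a fast NIC
per_drive = 4
print("NVMe drives that fit alongside the NIC:", (cpu_lanes - nic) // per_drive)   # 8
```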
 

bambinone

New Member
Dec 26, 2020
18
21
3
Chicago, Illinois
If, however, you are moving to an all-NVMe workload/storage, I don't think the E5 v3/v4 CPUs will cut it (and consequently the SRH). There just aren't enough PCIe lanes!
Agreed. All my data vdevs for ZFS are SAS/SATA, but I've got five NVMe drives in the CPU-attached expansion slots: two for SLOG, two for the special vdev, and one for L2ARC. If you're willing to forgo a 10+GbE NIC you can squeeze in two more, but an NVMe-augmented zpool sort of necessitates high-speed networking and vice-versa.

There are two PCH-attached expansion slots left, but they're fairly anemic (Gen2 x4-in-x8 and Gen2 x2-in-x4) and should be avoided for anything important.
 
  • Like
Reactions: Samir and Markess

Glock24

Active Member
May 13, 2019
159
93
28
Agreed. All my data vdevs for ZFS are SAS/SATA, but I've got five NVMe drives in the CPU-attached expansion slots: two for SLOG, two for the special vdev, and one for L2ARC. If you're willing to forgo a 10+GbE NIC you can squeeze in two more, but an NVMe-augmented zpool sort of necessitates high-speed networking and vice-versa.

There are two PCH-attached expansion slots left, but they're fairly anemic (Gen2 x4-in-x8 and Gen2 x2-in-x4) and should be avoided for anything important.
An x4 PCI-E 2.0 slot should be enough for a single-port 10GbE card, no?
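Napkin math, assuming ~500 MB/s raw per Gen2 lane:

```python
# PCIe 2.0 x4 vs. a single 10GbE port (assumes ~500 MB/s raw per Gen2 lane, before overhead).
gen2_lane_mb_s = 500              # 5 GT/s with 8b/10b encoding ≈ 500 MB/s per lane
slot_mb_s = 4 * gen2_lane_mb_s    # ≈ 2000 MB/s each direction
ten_gbe_mb_s = 10_000 / 8         # ≈ 1250 MB/s line rate
print(slot_mb_s, "MB/s slot vs", ten_gbe_mb_s, "MB/s for 10GbE")   # looks like enough on paper
```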
 
  • Like
Reactions: Samir