Upgrade mobo and CPU for CSE-502


nachoguru

New Member
Oct 18, 2022
I have four 1U, short-depth, 200W Supermicro SYS-5015A-PHF systems, the ones that combine the CSE-502 case with the X7SPA-HF motherboard, rocking an Atom D510 system-on-a-chip.

They are not fast, but have served well over the years as quiet, little-used-but-constantly-on mail/web/backup servers in a colo rack. But now one of them has finally died.

I'm considering new hosting and/or new machines, so on my last trip to the colo I brought these home, changed their little calculator batteries, and updated their BIOS and BMC firmware.

Except for the dead one, that is, because it just won't turn on, period. Swapping power supplies, drives, memory between the systems seems to indicate that it's the motherboard itself that has died... but I am open to suggestions.

I really like these little cases and want to treat this server's demise as my opportunity to cram something more modern in there -- not necessarily new, but new to me -- but the back of the case is one piece of metal that Supermicro seems to have intentionally designed to match only these old Atom mobos.

So I have questions:

1) I'm not a handyman or electrical engineer, but I wonder: could I get a friend with a dremel tool to just cut a rectangle in the back where I could fit in the panel piece from a newer mobo? Has anyone out there done this successfully?

2) What would you put into such a case, if you were a cheap sonofa-- with almost no budget?

I know you'll need more context to answer that -- the ol' "What are you going to use it for?" -- so here are two alternative answers:

a) My goal with these is to combine some of the spare 2.5" enterprise SATA HDDs I have lying around to get some family photo storage into a few sites -- perhaps 1 back in the rack, 1 at my parents' place, 1 here, for a little private cloud rsync-ing or Syncthing-ing or NextCloud-ing pics around (rough rsync sketch below, after b). Those aren't huge drives of course, but they were mostly spares that I'll likely never use otherwise, so I would like to see them do something before consigning them to the recycler. I would also put a large (for me) external USB drive on each one, 4TB or 8TB, again working from what's lying around here.

b) Given that low, low load use case, I suppose I should probably just put in another Atom and forget about it. But before I do, I just want to explore the alternatives I don't know about -- if these cases and power supplies could manage something I could run virtualization on (QEMU/KVM/libvirt, unless you slap me and tell me different), like an E3-1265L V2 or even an E3-1220L V2, then that might be worth investing in, and they could go back into the rack and host sites (or backups, or mail, etc.) for the various rando side projects I will do... someday.
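
For a), a rough sketch of what the pic-shuffling could look like with plain rsync over SSH -- the hostnames and paths here are made up, adjust to taste:

Code:
# one-way push of new/changed photos to the remote box over SSH
rsync -a --partial -z /srv/photos/ backupuser@parents-site:/srv/photos/

# cron entry (crontab -e) to run the same push nightly at 03:15
15 3 * * * rsync -a --partial -z /srv/photos/ backupuser@parents-site:/srv/photos/

Syncthing or NextCloud would handle multi-site, multi-direction sync more gracefully; rsync is just the lowest-friction way to get these drives doing something.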

------------------

Further context: these are my "slow" servers. My "fast" servers are:

* one 1U D-1541 with SSDs, 128GB RAM
* two 1U Supermicros with E3-1265L V2s and spinning-rust HDDs, 32GB RAM
* eight nodes (two Dell 2U C6100s, each with 4 nodes) of dual Xeon L5520s @ 2.27GHz, 3.5" HDDs, 48GB RAM

Just level-setting so you know that what's slow to you has been working fine for me for way longer than I expected.

Other notes, opinions:

All Linux all the time. Also open to the BSDs, and use pfSense for the firewalls. But as a software geek I am more comfortable at the Linux command line than I am with hardware or running yet more firmware and software from third parties like a specialized NAS or SAN or whatever.

Related: I also don't need IPMI or a BMC in any new machines, in fact they creep me TF out. I'm never far enough from my machines to need to leave those backdoors open.

Tangentially related: going forward I'd like to support, learn, play with Coreboot or open hardware of any kind, especially if I'm spending new money or learning something new. But Coreboot is not a hard requirement on this question. Just looking for links, suggestions, if you're also into the open hardware, open firmware world.

I find my big Dell 2U 4-node servers heavy, loud, intimidating to work on or move... they've been reliable but 2 of the 8 nodes are dead now, and I just turned them off for now. They're definitely never coming home, they're rack-only. Hoping that if I bring them new CR2032's on my next visit they'll come back to life.

I much prefer these half-depth-or-shorter, easy to carry units, even though they're not the most efficient these days. I have 13U to work with and this hodgepodge of gear has let me put off futzing with VMs for a long time. Now that I have started with VMs, I see the pain points of using such old hardware (but VMs were never an option on these Atoms anyway, I know that).

Thanks for reading all this and for any insights, warnings, wisdom, ideas you care to throw my way. Peace!
 

nachoguru

New Member
Oct 18, 2022
...and as soon as I pressed Post I thought of supplementary questions.

4) I have several dual 2.5" drive mount adapters, so I rearranged things in one of these cases just to see what I could cram in there... with 4 x 2.5" 300GB enterprise HDDs and a second high-speed fan -- cabling and fan placement not final -- it draws about 79W most of the time, 85W on boot when the fans are not throttled. That's not too harsh on the 200W PSU, is it?

I know with SSDs and a more modern mobo/CPU/SoC, and with more easygoing fans than the monsters I've got on the shelf, this could all be lower. But any other red flags or suggestions?

Oh also, for anyone who is interested: the front dual-drive adapter cage is screwed in on both sides as usual. But my extra, hacked-in one (towards the top of the image) shares 2 screws with the original one. On its other side, it is secured to the case with black 3M double-sided tape. That's along the top-of-image edge, because this case doesn't have holes to mount 2 such drive cages side by side.

5) Finally, there is a USB port in the middle of the mobo. Could I install the OS on a small USB stick inside the case, boot from that, and then have all 4 drives be RAID or ZFS or BTRFS or whatever you cool kids are doing these days?

6) As you can tell, I am still not used to SSDs, nor to ZFS / BTRFS. But I'm open to learning if you can point me at your favourite guides, rather than relying only on my random StackOverflow search results.


Thanks again all!
 

Attachments: (photo of the drive-cage arrangement described above)


Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Yes, since the rear I/O panel on the 502 is integral to the case, you'd need to take a dremel or angle grinder to it before you can fit any other board in there. It doesn't need to be precise; an I/O shield isn't necessary for the new board. Then you can use any board mATX or smaller. Plenty to choose from; depends on your budget. X9SCL should be super cheap; X11SSL is not much more and would have improved idle efficiency.

My concern about mounting 4x2.5" drives in the 502 (especially spinners) is heat.

If you don't want to spend much time on the software side, Unraid is very easy to use (and runs Slackware underneath). It also runs off a USB stick.

Quarter-cab colo is usually not cheap; if you're on a tight budget, consider consolidating to a few U?
 

nachoguru

New Member
Oct 18, 2022
Thanks Sean... between you taking the time to help and me seeing Bayesian stats on your home page, you just bumped Shawn Wang (swyx.io) off the top of my list of favourite Seans. No small achievement!

I made some strikethrough edits above to save any future visitors here some time. And now to waste it anew:

Drive Cage Mounting Correction

A quick correction to my struck-through text above: the way I was able to cram 2 of the dual drive mounts into the case in the pic above is that they share screws -- each cage is attached to the case by the same 2 screws, which pass through the case and then through both drive cages. On the outer edges, where in a larger case a normal person would use 4 more screws, I am only using heavy-duty black 3M double-sided tape.

Sean is right that 4 of these old Enterprise drives are gonna get warm, even in my very low-load use case. I did put in the 2nd fan shown and a homemade baffle and will monitor temperatures. But I doubt this assembly will ever head back into the colo.
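
For the monitoring, the quick spot-check I plan to use, assuming smartmontools is installed (the /dev/sd* names are just examples):

Code:
# print the SMART temperature attribute for each spinner
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    sudo smartctl -A "$d" | grep -i temperature
done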

Internal USB Boot: Just No

A quick update: I tried installing an OS to a USB stick in that on-board USB port and wow... that was painfully slow to install and run. So I scratched out that dumb question above. I have now added a fast-but-consumer-grade SSD to use as a boot drive.

RAID or ZFS? TBD

I was going to take out one of the spinners and go with software RAID1 (mdadm) using 3 HDDs.

But then I'd have no real redundancy on the faster boot drive. And then of course I thought again about ZFS and BTRFS... so this has now turned into my "learn ZFS" machine. I will experiment some more with it here in the lab so that I don't mess up on my higher-end systems with Real Data. I'll post questions in a separate thread soon, I imagine.
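
For my own notes (and anyone following along), the sort of pool I plan to start the experiments with -- the by-id names below are placeholders for whatever the three spinners actually show up as:

Code:
# single-parity raidz1 across the three HDDs; survives one drive failure
zpool create tank raidz1 \
    /dev/disk/by-id/ata-DRIVE1 \
    /dev/disk/by-id/ata-DRIVE2 \
    /dev/disk/by-id/ata-DRIVE3

# cheap compression and a dataset for the photo archive
zfs set compression=lz4 tank
zfs create tank/photos
zpool status tank

A 3-way mirror (zpool create tank mirror ...) would be the other obvious layout to try -- less space, simpler resilvers.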

Motherboards / CPUs

...wasn't that what this post was supposed to be about? Sorry folks.

I looked for newer Mini-ITX (6.7" x 6.7") boards to put in my empty unit, and from what I can see there are none that take ECC RAM, so I would have to move up to mATX (9.6" x 9.6"). I mean, the old Atom boards weren't ECC either, but if I'm going to upgrade, I want to fix that.

So the boards Sean mentioned helped (thank you), and I'll probably get an X11SSH-F on eBay (for the M.2 slots) and an E3-1265L V2, as I have some RAM I can use with that. But first I'll go re-test that RAM. The E3-1220L V2 was also in the running since I'm just replacing an Atom here, but I might as well get more cores/threads for not much more.

The much more capable E3-1260L V5 (roughly 2x the CPU benchmark score) would unfortunately mean I'd feel compelled to buy 128GB of DDR4 RAM, so I am trying to fight the urge to move all the way up to the X11 generation. But with Cyber Monday just around the corner, maybe I'll come across the right deals.

Colo Costs

I write software and host data for researchers, mostly Canadians, who chose us largely because they expect their data to stay in Canada. Back when we started, cloud wasn't a thing, and then (like shared hosting or VPSes) it was a no-no. These days everyone seems to use it, but my clients would rather I didn't.

I'm overprovisioned, but that works for me because I'm also not that good at hardware, as you can see here. Servers and/or drives live and die, and I eventually get around to fixing or replacing them, but because I am paranoid about security and backups and redundancy, we just move the affected things to a spare server quickly, and turn off the old one for months. I putter at hardware like my dad would with cars, I guess.

I've looked at per-U colo, and will eventually switch to being in two colos, each with a 1U Netgate firewall and (say) 2U of servers, each crammed full of VMs.

But I'm still at the learning stages with VMs and want to experiment more with them running on top of different filesystems and hardware and drive types because I don't trust myself to get all of that right just yet.
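
The experiments so far look roughly like this -- a hedged sketch assuming KVM/libvirt and virt-install are already set up, with made-up VM names and ISO paths:

Code:
# confirm the CPU exposes hardware virtualization (vmx on Intel, svm on AMD)
grep -Ec '(vmx|svm)' /proc/cpuinfo

# throwaway test VM: 2 vCPUs, 2GB RAM, 20GB disk in the default storage pool
virt-install \
    --name lab-vm1 \
    --memory 2048 --vcpus 2 \
    --disk size=20 \
    --cdrom /var/lib/libvirt/images/debian-11.iso \
    --os-variant debian11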
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Oh that's quite an honour to be in the same category as swyx! But we all have our ways to contribute, and are all learning from each other. No need to rank! I usually hang out on ServerBuilds' Discord (as well as here!).

Drive temps: yep, just keep an eye on them. 50C and under should generally be ok, but heat does kill drives, so the cooler the better.

Boot drives: another option is SATA-DOM; many SM boards have two DOM ports, so you can put boot/root on RAID1 to minimise downtime.
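
A rough sketch of the mdadm side of that, with placeholder device names (most installers can also set this up for you during install):

Code:
# mirror the two DOMs into one md device, then put / on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
mkfs.ext4 /dev/md0

# watch the initial sync
cat /proc/mdstat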

MB/CPU: bear in mind X11SS* is socket 1151-1 Skylake (the second 'S'), so E3 v5/v6 only. E3 v2 is Ivy Bridge and needs an X9 board, e.g., X9SCL-F ('C' for the Cougar Point chipset). X11SSH-F uses DDR4 ECC UDIMMs. A pair of 16GB DIMMs should be plenty to get started, with room to add another pair later.
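
Once it's running, a quick way to confirm ECC is actually enabled (needs root; assumes dmidecode is installed):

Code:
# "Error Correction Type" should report ECC rather than "None"
sudo dmidecode -t memory | grep -i 'error correction'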

Colo: I can certainly empathise; I'm also in Canada and deal with clients that need the data kept domestic. Usually at home, rack space is abundant but power and network (particularly a diversity of transits) are the limitations; in colo the opposite is true.