Ebay - Supermicro 4U CSE-417 72 Bay 599.00 OBO YMMV


itronin

Well-Known Member
Nov 24, 2018
1,234
793
113
Denver, Colorado

Supermicro 4U CSE-417 72 Bay SFF Barebone Server 2x 1400W PWS Rails 599.00 + 70.00 s/h


OBO of 465.00 + 70 s/h was accepted (started at 425). I had the impression after they accepted my offer that the seller (The Server Store) might have accepted slightly less, but YMMV.

It was a reasonably good deal for me: I needed 48 or so 2.5" bays, and 2x CSE-216 chassis would have cost 500 or so without rails, and that route means either an extra system or 1-2 more PSUs I'd have to run.
 

GladLock96

Member
Nov 8, 2016
81
20
8
Do you think there would be any compatibility issues if I swapped out those stock fans for Noctua fans in my homelab?
 

itronin

Well-Known Member
Nov 24, 2018
1,234
793
113
Denver, Colorado
Edit: updated with some relevant forum links to redirect.

Do you think there would be any compatibility issues if I swapped out those stock fans for Noctua fans in my homelab?
I don't have any first-hand experience, as I am about to make my first (and large, at that) foray into the SM ecosystem.

Perhaps someone else with more experience can speak up, but this probably isn't the correct thread/forum.

A quick search of supermicro && noctua turned up a lot of useful threads:

esp.
https://forums.servethehome.com/index.php?threads/supermicro-sc847-fan-choices.16724/#post-160280

https://forums.servethehome.com/index.php?threads/supermicro-4u-24-bay-chassis-gotchas.11625/

The 8xx and 216 chassis threads are relevant too.
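One recurring gotcha in those threads: Supermicro BMCs treat a slow-spinning fan as failed and ramp everything to full speed, so quiet replacements like Noctuas usually need the IPMI lower fan thresholds dropped first. A minimal sketch of assembling those ipmitool invocations; the sensor names (FAN1, FANA) and RPM values here are assumptions, so check `ipmitool sensor` output on the actual board before running anything.

```python
# Hedged sketch: build the `ipmitool sensor thresh` commands commonly
# suggested in the threads above for lowering Supermicro IPMI fan
# thresholds so slow Noctua fans don't trip the BMC's full-speed ramp.
# Sensor names and RPM floors below are assumptions, not board facts.

def threshold_commands(sensors, lnr=100, lcr=200, lnc=300):
    """Return one `ipmitool sensor thresh ... lower ...` command per fan.

    lnr/lcr/lnc = lower non-recoverable / critical / non-critical RPM.
    The lower thresholds must sit under the replacement fan's minimum
    speed, or the BMC will still flag it as failed.
    """
    return [
        f"ipmitool sensor thresh {s} lower {lnr} {lcr} {lnc}"
        for s in sensors
    ]

if __name__ == "__main__":
    for cmd in threshold_commands(["FAN1", "FAN2", "FANA"]):
        print(cmd)
```

The commands are only printed, not executed; that keeps the sketch safe to run anywhere and easy to review before pasting into a shell.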
 

redeamon

Active Member
Jun 10, 2018
291
207
43
If you have low-power 2.5" drives (e.g., HGST SSDs) you can turn the fans down to super low levels over IPMI using ipmitool. I run my Dell R630s/R730s at barely audible levels with v4 L-series procs and low-wattage 2.5" drives to get away with it. Most L procs can sustain higher temperatures, which helps offset the lack of static pressure when running this low. Source your parts right if noise is a major concern and you'll be good. There are solid ways around the server "noise" problem; PM me if you need help.
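For the Dell route described above, the usual approach is the community-documented iDRAC raw IPMI codes rather than any official Dell API. A hedged sketch that just assembles the command strings; the host and user values are placeholders, and the 0x30 0x30 raw codes should be verified against your own iDRAC generation and firmware before use.

```python
# Hedged sketch of the widely-circulated iDRAC raw IPMI commands for
# pinning fan speed on 13th-gen Dells (R630/R730). The 0x30 0x30 raw
# codes are community-documented, not an official Dell interface, and
# the host/user below are placeholders, not values from the post.

def dell_fan_commands(percent, host="idrac.example", user="root"):
    """Return the ipmitool invocations to force fans to `percent` duty."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be 0-100")
    base = f"ipmitool -I lanplus -H {host} -U {user}"
    return [
        f"{base} raw 0x30 0x30 0x01 0x00",                  # disable automatic fan control
        f"{base} raw 0x30 0x30 0x02 0xff 0x{percent:02x}",  # set manual duty cycle
    ]
```

Re-enabling automatic control (`raw 0x30 0x30 0x01 0x01` on the same firmware) is worth keeping handy in case temps climb.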
 

SRussell

Active Member
Oct 7, 2019
327
152
43
US
Supermicro 4U CSE-417 72 Bay SFF Barebone Server 2x 1400W PWS Rails 599.00 + 70.00 s/h

OBO of 465.00 + 70 s/h was accepted (started at 425). I had the impression after they accepted my offer that the seller (The Server Store) might have accepted slightly less, but YMMV.

It was a reasonably good deal for me: I needed 48 or so 2.5" bays, and 2x CSE-216 chassis would have cost 500 or so without rails, and that route means either an extra system or 1-2 more PSUs I'd have to run.

Did you ever purchase this unit?
 

itronin

Well-Known Member
Nov 24, 2018
1,234
793
113
Denver, Colorado
Did you ever purchase this unit?
I did. At the price I put in the original post.

Some observations and thoughts:

There are a lot of fans inside; more fans than my motherboard (X9SRL-F) has headers for.
That number of fans is LOUD, even with fan speed tuning.
The chassis I purchased came with (IIRC) 2x 1400W -R PSUs. These are VERY LOUD.
The chassis came with all 72 bays (expanders) chained together, so just 2x SFF-8087 to a single HBA.
The chassis is HEAVY (should have been obvious to me, but it was heavier than I expected).
The chassis is also deeper than, say, an 836, and is rather awkward to handle.
Power draw with 60 or so SSDs was about 285-295W. Idle crept up to about 260W as I added more and more drives.
I don't like the generic rails it came with. I don't like how they install, and I don't like how they fit.

This chassis is a beast. I'll put it in the same category as a 48-60 bay 3.5" drive chassis, just a little lighter perhaps.

At the time I purchased this I had planned to install about 46 SSDs and 10 or so spinning disks. This eventually grew to about 66 SSDs.

Changes and things I did:

Pulled three fans. This changed the noise level a bit and did not really adversely affect temps; they went up by about 1-2 degrees.
Replaced the PSUs with 1200W SQ units.
Changed to 1 HBA per shelf (LSI 9205).
Installed over 60 SSDs and removed all the spinning disks.
Got tennis elbow from adding/removing drives to/from sleds in a short period of time.
Had to get assistance to initially rack it. Don't put this much weight above waist level in a rack unless you have help.
With my puny lab/office environment I was only able to hit about 28Gbps with synthetic workloads (probably operator error). Typical active workload was more like 4-6Gbps.
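As a rough sanity check on that 28Gbps figure (my arithmetic, not from the post): each SFF-8087 wide port carries 4 lanes of 6Gbps SAS2, and 8b/10b encoding leaves roughly 80% of the raw rate usable.

```python
# Back-of-envelope check (assumed numbers, not from the post): what one
# SAS2 wide port can realistically move. Each SFF-8087 port is 4 lanes
# of 6 Gbps; 8b/10b encoding cuts usable throughput to ~80% of raw.

SAS2_LANE_GBPS = 6
LANES_PER_PORT = 4
ENCODING_EFFICIENCY = 0.8  # 8b/10b line coding overhead

def wide_port_usable_gbps(ports=1):
    """Approximate usable bandwidth of `ports` SFF-8087 wide ports."""
    return SAS2_LANE_GBPS * LANES_PER_PORT * ports * ENCODING_EFFICIENCY

# One wide port into an expander-backed shelf: ~19 Gbps usable. So a
# benchmark spanning two or three shelves clearing 28 Gbps is plausible
# with the HBA-per-shelf layout, not necessarily operator error.
```

This ignores PCIe limits, expander oversubscription, and protocol overhead above the link layer, all of which push the real ceiling lower.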

Should I have purchased this beast?
Probably not, but it's been fun to play with and I don't regret the purchase.

Am I using it right now?
No, it's powered off, and I'm probably going to pull it from the rack for the time being as I have a better use for the 4U and 24 of the SSDs. My rack is small at 29U. I also like to change things around a bit, so my environment is not really all that static. There's only one server that I don't mess with, or I'll have hell to pay (house infrastructure).

Am I going to sell it?
No, probably not. A small voice inside my head keeps saying, "But wouldn't it be fun to put a bunch of those 5TB Costco Seagates in there?" A bunch = 70 or so. A more rational voice says no way; you are not shucking 70 5TB drives. But I can foresee some use cases in the future that will cause me to use it again, and it's just something I would not want to source again.

This is definitely a right-tool-for-the-right-job kind of item, and if you don't have the right job it's more than a bit of a waste. Fun, but wasted potential if you can't use most of it.

FWIW, and slightly off topic: my transition from desktop/mini-tower servers to Supermicro rack-mount is almost complete, and the best deal and most flexible chassis I've found through this transition are Compellent SC-030s, especially at < $100 each. Even using 2.5" drives; just pick up some of those $3.00 Dell 2.5"-to-3.5" adapters and off you go.