So I bought a winterfell node


fake-name

Active Member
Feb 28, 2017
One of the servers I use to run my home projects started failing recently, and being a complete broke-ass, I spent a while trying to find a super-cheap replacement that might also be a slight upgrade.

The overall goal is to be competitive with my current setup, which is a dual E5-2660 v1 system. I'm going to reuse the RAM from that machine once I get this running. The machine will eventually run Proxmox.

Anyways, I have no real requirement for rack compatibility, so I wound up deciding to buy a Quanta "Winterfell" Open Compute node (there's a long thread on them here).

Anyways, one of the issues I had when considering this purchase was the lack of decent pictures of these things, so here we are (plus bring-up).

So, if you haven't heard of them, "Winterfell" is the name of the second-generation Intel-based Open Compute Project server nodes. It's a highly unusual chassis design done by Facefuck for their internal use that they wound up releasing, presumably in the hope that more people buying them would make their own servers cheaper. The Open Compute Project has substantial documentation online.

I went with Winterfell because I wanted E5 v2 CPUs. In this case, I'm using two E5-2650 v2s (mostly because they're also super cheap).

Here's the server itself:

IMG_3248.JPG
It's a 4" x 7" x 35" (!!!) box. Yes, they're enormously long.

They're designed to slot into a custom rack, and the end of the server shoves onto a set of bus-bars which provide power. The server itself therefore requires 12V at lots of amps, and nothing else.

Considering the mobos are specced to support two CPUs at either 95W or 135W TDP (depending on the manual), the overall dissipation is in the range of 250-400W, which translates to roughly 20-33 amps at 12V. The power cabling is unsurprisingly ridiculous.
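A quick sanity check on those current figures, since it's just I = P / V at the 12V input (the wattage range is my rough estimate above, not a spec number):

```python
# Sanity check on the 12V input current range quoted above (I = P / V).
# The 250-400W dissipation range is a rough estimate, not a spec value.
BUS_VOLTAGE_V = 12.0

for watts in (250, 400):
    print(f"{watts} W / {BUS_VOLTAGE_V:.0f} V = {watts / BUS_VOLTAGE_V:.1f} A")
```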

The overall chassis:
s_DSC00928.jpg s_DSC00929.jpg
s_DSC00947.jpg

Power input. The whole server is cooled by the two 60mm fans powered by the power input board. They run surprisingly quietly. The loudest part of the whole thing is actually the power supply I'm using for the 12V.
s_DSC00930.jpg

These servers have a mezzanine 10Gb SFP+ module.
s_DSC00931.jpg
s_DSC00932.jpg

They also have two PCIe slots that can be set up as either one x8 and one x1, or two x4 slots. Physically, they're one x16 slot and one open-ended x8.
There's also a location for tool-lessly mounting one 3.5" HDD. I'm going to stick two SSDs in there.

s_DSC00934.jpg
s_DSC00937.jpg

The mobo itself has 2 SATA ports and 2 headers for a custom power cable to run storage devices. Fortunately, the cable is pretty simple (it's basically a floppy power connector -> SATA power connector), so it should be trivial to make one.
s_DSC00938.jpg
s_DSC00940.jpg
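For anyone else making that power cable: the drive end is just the standard 15-pin SATA power layout, and the board header looks like the usual floppy/Berg arrangement, but that part is an assumption on my end, so check the Quanta header with a meter before plugging a drive in. Rough reference:

```python
# Reference pinouts for building the drive power adapter cable.
# The board-side header is Quanta-custom; the Berg/floppy assignments below are
# the *standard* ones -- verify polarity on the actual header with a multimeter.
BERG_FLOPPY_PINOUT = {1: "+5V", 2: "GND", 3: "GND", 4: "+12V"}

# Standard 15-pin SATA power connector, grouped by rail (the 3.3V pins can be
# left unconnected for most drives).
SATA_POWER_PINS = {
    "+3.3V": (1, 2, 3),
    "GND":   (4, 5, 6, 10, 11, 12),
    "+5V":   (7, 8, 9),
    "+12V":  (13, 14, 15),
}

for pin, rail in BERG_FLOPPY_PINOUT.items():
    print(f"Berg pin {pin}: {rail}")
```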

The bus-bar connector for the chassis is frankly kind of ridiculous and massively overkill here (they're rated to 105 amps!!!!!). One nice thing about the whole "Open" bit of the Open Compute Project is that you can actually find documentation (here is the connector documentation).
It's designed for blind-mating.
s_DSC00942.jpg
s_DSC00941.jpg
s_DSC00945.jpg

The server should have an airflow duct, but in my case I had to specifically ask the seller to include it. I have no idea what they think you'd do without it.
s_DSC00958.jpg
s_DSC00959.jpg
s_DSC00960.jpg


My solution to getting 12V at all the amps is pretty simple: just use Bitcoin miner crap! It turns out you can buy breakout boards for common power supplies for ~$10. In this case, I'm using a Supermicro PWS-1K21P-1R, mostly because I have it and it's 80 Plus Gold rated.

I powered the thing up in several steps. First, I checked the polarity of the cables, as I'm using leads from broken power supplies (whenever I have a power supply fail, I open the chassis, cut the leads off, and save them). I then disconnected the power input board from the main mobo and powered that alone (see the next picture). Finally, I just booted the whole thing. Fortunately, no smoke was emitted.
s_DSC00961.jpg

The next challenge was the fact that it booted, but didn't *do* anything. There's a 1GbE NIC on the mobo, and when connected it appeared to come up (the link lights illuminate, and my switch saw some traffic from it), but plugging it directly into a test machine and trying to tshark the interface or arp-scan it yielded nothing. Either the onboard NIC isn't fully configured, or it's configured to not talk to anything.
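For reference, the arp-scan step amounts to something like this scapy sketch (the interface name and subnet are placeholders for whatever your test link uses; it needs root):

```python
# Rough scapy equivalent of arp-scanning the direct link to the node.
# "eth0" and the subnet are placeholders -- substitute your test interface/network.
from scapy.all import ARP, Ether, srp

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.0.0/24"),
    iface="eth0",
    timeout=2,
    verbose=False,
)
for _, reply in answered:
    print(reply.psrc, reply.hwsrc)   # any IP/MAC pairs that respond
```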

I stuck a video card in the thing, but that didn't do anything either, so I suspect that either the PCIe connector is configured wrong (there are a *lot* of jumpers everywhere for it), or the crappy old x1 graphics card I have is dead (not unlikely; it's been floating about on my workbench for a while).

Now, again, there's a nice thing about the whole "open" bit, as the spec dictates a diagnostic header for the mobo. It has an 8-bit POST status code output and a TTL serial TX/RX pair.

Reading the POST code manually yielded 0xAA, which the manual claims means the boot sequence has jumped to the BMC, but I'd expect the BMC to be the thing that talks on the 1GbE NIC, and it's not talking.

Anyways, let's see if the serial port does anything. Some really horrible dangly wiring (it's a 2mm header, and I only have 0.1" header sockets on hand):
s_DSC00963.jpg
s_DSC00965.jpg
s_DSC00966.jpg
Hmm, it looks like it's talking 57600 baud. Let's hook up a serial interface:
It boots!!!.png

It boots! And is generating terminal control codes that confuse PuTTY.

I only have RX connected because of the nightmare headers at the moment. I'll grab some 2mm female headers tomorrow at work (fortunately, we use them extensively for hardware there. Convenient!).
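If anyone wants to capture the console without fighting the terminal, a minimal pyserial logger does the job (the device path is a placeholder for whatever your USB-serial adapter enumerates as):

```python
# Minimal logger for the Winterfell TTL debug console at 57600 baud.
# /dev/ttyUSB0 is a placeholder for whatever your USB-serial adapter shows up as.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=1) as port, \
        open("winterfell-console.log", "ab") as log:
    while True:
        data = port.read(4096)          # returns b"" on timeout
        if not data:
            continue
        log.write(data)                 # keep the raw bytes, control codes and all
        log.flush()
        text = data.decode("ascii", errors="replace")
        # Crudely strip control characters so the local terminal stays readable.
        print("".join(c for c in text if c.isprintable() or c in "\r\n"),
              end="", flush=True)
```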

My long-term goal here is to fit two of these plus a power supply into a traditional 3U rack-mount chassis. I'm actually working up a 3D-printable bus-bar support so I can use the nodes completely stock, without having to adapt any of the wiring.

More to come!
 

fake-name

Active Member
Feb 28, 2017
Ok, I got angry and did some horrible things with heatshrink. We have BIOS:

Overall.png

Serial BIOSes are always amusing.

Apparently Quanta calls this product "Freedom". Huh.
FRU.png

Some interesting oddments:

have-sas.png
There are a *bunch* of footprints for something that looks like a SAS SFF-8087 header on the motherboard. There was some discussion that they were for PCI-e <-> PCI-e communications between nodes, but apparently the BIOS, at least, thinks there are 4 SAS ports ~~somewhere~~.

Also of note: The BIOS lists 6 SATA ports:
drives.png

These are apparently configured to boot from the network by default.
Boot fail cause.png

There are a bunch of mentions of the BMC in the BIOS, but I wasn't able to find anything that actually controls it. Supermicro mobos have me trained to expect basic config (IP address, etc.) from the BIOS, but either I can't find it here or it's missing.
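If the BMC is actually alive, the usual fallback on boards like this is to configure it in-band from a running OS over the host interface with ipmitool, rather than from the BIOS. A sketch of what that would look like (the LAN channel number is an assumption; 1 is common but not universal):

```python
# Sketch: poke the BMC in-band with ipmitool from a running OS (needs ipmitool
# plus the ipmi_si / ipmi_devintf kernel modules). The LAN channel number is an
# assumption -- channel 1 is common, but some boards use a different one.
import subprocess

LAN_CHANNEL = "1"

def ipmi(*args: str) -> str:
    return subprocess.run(["ipmitool", *args], check=True,
                          capture_output=True, text=True).stdout

print(ipmi("mc", "info"))                     # is the BMC responding at all?
print(ipmi("lan", "print", LAN_CHANNEL))      # current BMC network settings
# To actually change things, e.g. switch the BMC NIC to DHCP:
# ipmi("lan", "set", LAN_CHANNEL, "ipsrc", "dhcp")
```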
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
This is awesome, as a fellow "broke as shit" hobbyist I'm really looking forward to seeing where this project goes. I'm also more than a little jealous of your skill and ability to reverse engineer this thing from scratch.

What was your cost for the node?
 

fake-name

Active Member
Feb 28, 2017
What was your cost for the node?
Chassis + mobo/heatsinks/etc was $85 shipped (they're cheap).

You need to add CPUs, RAM, and storage on top of that, but I'm reusing older stuff in my case (except for the CPUs, which were $127 for 2x E5-2650 v2).

Power supply breakout board was $10.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
Chassis + mobo/heatsinks/etc was $85 shipped (they're cheap).

You need to add CPUs, RAM, and storage on top of that, but I'm reusing older stuff in my case (except for the CPUs, which were $127 for 2x E5-2650 v2).

Power supply breakout board was $10.
Yowza, that is *cheap* and I love it.

With a 3d printed power bus bar enclosure and a couple of dirt cheap server PSUs you could have an entire cluster up and running for peanuts, that's awesome.

Without going too off-topic here how would someone take the output from one of those 12V supply breakout boards and power HDDs? I've been thinking about cobbling together a drive shelf that way but I was never sure how to go about supplying the power.
 

fake-name

Active Member
Feb 28, 2017
Without going too off-topic here how would someone take the output from one of those 12V supply breakout boards and power HDDs? I've been thinking about cobbling together a drive shelf that way but I was never sure how to go about supplying the power.
It's pretty easy. Just get some inexpensive DC-DC converters, and use that.

Here's a homemade DAS I've been running for the last year or two:

30262012_10210667175027295_3738323733253718016_n.jpg
Sorry for the crappy image, it's an old-ass cellphone shot.

Anyways, it's two 4-bay hotswap cages from junked HP servers. At the time I made this, I was parasitically powering it off 12V from the server it connected to, via a cable plugged into a PCIe power connector.

I've subsequently mounted a high-efficiency Mean Well power supply in the box and added a power switch, LED, etc...

The DC-DC modules are generic XL4005-based units from amazon. They're dead cheap, and work fine.
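If you're sizing the converters, a rough budget with assumed per-drive numbers looks like this (the per-drive currents and efficiency are illustrative guesses, not measurements, and 12V spin-up surge is several times the running figure, so split the 5V load across modules if it gets near one module's rating):

```python
# Rough power-budget sketch for running 8 x 3.5" drives off 12V in + DC-DC 5V.
# Per-drive currents are assumptions for illustration; check your drives' labels,
# and remember the 12V spin-up surge is several times the running figure.
DRIVES = 8
AMPS_12V_PER_DRIVE = 0.8     # assumed running (not spin-up) draw
AMPS_5V_PER_DRIVE = 0.7      # assumed
CONVERTER_EFFICIENCY = 0.90  # assumed for an XL4005-class buck module

load_12v = DRIVES * AMPS_12V_PER_DRIVE
load_5v = DRIVES * AMPS_5V_PER_DRIVE
# The 5V rail is generated from 12V, so reflect it back to the 12V input:
input_12v = load_12v + (load_5v * 5.0) / (12.0 * CONVERTER_EFFICIENCY)

print(f"12V rail: {load_12v:.1f} A to drives, ~{input_12v:.1f} A total from the supply")
print(f"5V rail:  {load_5v:.1f} A from the DC-DC converters")
```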

It connects through to an LSI SAS 9201-16e in the server with some 6-foot SAS cables. I suspect I'm technically running the SATA drives in it out of spec, but it hasn't had any issues so far.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
@fake-name That's *awesome*, that's exactly what I was wondering how to do. I've also looked at buying a front drive cage from an HP server (like the whole front cage section of a DL380e gen8) and (ab)using that, those chassis and replacement parts are extremely cheap at times. The power connector pinout is posted a couple of places around the forum, so there's no need to figure it out yourself.

684886-001 647407-001 HP 12 LFF HDD CAGE BACKPLANE AND CABLE PROLIANT DL380E G8 | eBay

^^ This, for example, would make a pretty rockin' DAS (it even has the port multiplier built into the backplane already) with fans and power mounted behind it. You could even go as far as reversing some of the front panel connections and rigging the momentary power switch to a relay to switch on/off your PSU. Or hook everything to a $10 Pi so you can do remote bring-up/shutdown.

I don't want to derail your project thread any further so I'll pipe down now, thanks very much for the explanation and example!
 

fake-name

Active Member
Feb 28, 2017
@fake-name That's *awesome*, that's exactly what I was wondering how to do. I've also looked at buying a front drive cage from an HP server (like the whole front cage section of a DL380e gen8) and (ab)using that, those chassis and replacement parts are extremely cheap at times. The power connector pinout is posted a couple of places around the forum, so there's no need to figure it out yourself.

684886-001 647407-001 HP 12 LFF HDD CAGE BACKPLANE AND CABLE PROLIANT DL380E G8 | eBay

^^ This, for example, would make a pretty rockin' DAS (it even has the port multiplier built into the backplane already) with fans and power mounted behind it. You could even go as far as reversing some of the front panel connections and rigging the momentary power switch to a relay to switch on/off your PSU. Or hook everything to a $10 Pi so you can do remote bring-up/shutdown.

I don't want to derail your project thread any further so I'll pipe down now, thanks very much for the explanation and example!
Lol, small world, eh?

My major concern with something like that would be figuring out housing. I wouldn't exactly want the port multiplier board just dangling out in the breeze, so you'd need some sort of chassis to stick it in.

I'm actually trying to find something that's 3U tall and ideally >30" deep to gut to use as the chassis for the winterfell nodes.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
Honestly I was thinking of having someone fab a partial-box frame out of sheet metal (sides and bottom, open top/front/back) with maybe a 3d printed enclosure for the breakout board, some anchor points to mount the PSU, and another 3d printed mounting bracket/anchor somewhere along the back edge to mount useful things to (pi, SFF-8087 to SFF-8088 connector, etc, etc.) Maybe stick a plastic top on it if needed, or duct just far enough that the fans provide good drive airflow and leave the rest open ¯\_(ツ)_/¯

I've got more ideas than ability to implement them though, I need to take a CAD or modelling class so I put them into a form I can actually do something with :p

I'm actually trying to find something that's 3U tall and ideally >30" deep to gut to use as the chassis for the winterfell nodes.
The winterfell nodes are basically as enclosed as you need already with the shroud on right? You could do an open frame with guides to center them for their bus bars and just box up the power distribution. I'd suggest building the frame out of something like 80/20 for ease of use, but A: you'd be hard pressed to make 3u with the thickness of 80/20 unless you just made a tray (2 OU + 1" (26mm) is *very* close to 3U) and B: $$$.

Maybe keep an eye out for an actual Winterfell chassis? If you're only using two nodes you could probably fit two PSUs side by side into the third slot. That could actually solve the power distribution problem really, really nicely. If you can get your hands on the butt end of a Winterfell node you could gut it, mount two PSUs inside and feed the bus from the handy-dandy purpose-built connector on the back, then patch the three bus bars together and you're in business.




I did some back of the envelope math and a standard server PSU is like 4-5mm too tall to just place directly on top of a 2 OU enclosure, which is a shame because that'd make the whole thing super easy. Alternately you could do that and rack the Winterfell box underneath something short-depth like a switch or patch panel.
 
Last edited:

fake-name

Active Member
Feb 28, 2017
Honestly I was thinking of having someone fab a partial-box frame out of sheet metal (sides and bottom, open top/front/back) with maybe a 3d printed enclosure for the breakout board, some anchor points to mount the PSU, and another 3d printed mounting bracket/anchor somewhere along the back edge to mount useful things to (pi, SFF-8087 to SFF-8088 connector, etc, etc.) Maybe stick a plastic top on it if needed, or duct just far enough that the fans provide good drive airflow and leave the rest open ¯\_(ツ)_/¯
Having something custom fabbed is kind of the opposite of cheap. And I don't have any sheet-metal facilities myself (or the space to fit them, really. A decent brake is quite large).

I've got more ideas than ability to implement them though, I need to take a CAD or modelling class so I put them into a form I can actually do something with :p

The winterfell nodes are basically as enclosed as you need already with the shroud on right? You could do an open frame with guides to center them for their bus bars and just box up the power distribution. I'd suggest building the frame out of something like 80/20 for ease of use, but A: you'd be hard pressed to make 3u with the thickness of 80/20 unless you just made a tray (2 OU + 1" (26mm) is *very* close to 3U) and B: $$$.

Maybe keep an eye out for an actual Winterfell chassis? If you're only using two nodes you could probably fit two PSUs side by side into the third slot. That could actually solve the power distribution problem really, really nicely. If you can get your hands on the butt end of a Winterfell node you could gut it, mount two PSUs inside and feed the bus from the handy-dandy purpose-built connector on the back, then patch the three bus bars together and you're in business.

I did some back of the envelope math and a standard server PSU is like 4-5mm too tall to just place directly on top of a 2 OU enclosure, which is a shame because that'd make the whole thing super easy. Alternately you could do that and rack the Winterfell box underneath something short-depth like a switch or patch panel.
The 3x Winterfell shelf is for the Open Rack spec. It's wider than standard racks (~19" between rack verticals, rather than the ~17" of standard racks), so that won't work.

Additionally, the shelves themselves don't have the power distribution. Take a look at the Open Compute rack documentation.

2019-04-16 22_36_49-Open_Compute_Project_Intel_Ser.png

Basically, the shelves are literally just bits of metal (if you use them at all; I'm not /completely/ sure they're a thing). There are massive power supplies spaced throughout the rack, and they attach directly to multiple vertical bus-bars that connect to each node.

My idea is basically a |___| shaped chassis with some rails in it that align the nodes, with the 12V power supply in the middle.

Did I mention that one nice thing is there are complete SolidWorks models for the Winterfell node freely available? It's *amazing*.


There's actually a company that basically does this already, though who knows how much their stuff costs (also, it's the newer 48V system design).

FWIW, the Winterfell stuff is for Open Rack V1, which is the 12V/triple bus-bar system. Open Rack V2 is 48V, with a single central bus-bar that runs a "cubby" which holds 3 servers and splits the central bus to each server.

Really, I'm hoping I can get an old-ass server off someone for a song (maybe $50 or less) and build off that.
 
Last edited:

fake-name

Active Member
Feb 28, 2017
MOAR. IT CONTINUES

So I ordered some short SATA cables, and made up a pair of the custom SATA power cables.

I also made a proper debug probe. It's based on a UM232H breakout board (which uses an FT232H internally).
I was a bit silly and wired up the POST code debug output to the BCBUS pins of the UM232H, so I can theoretically use the module to read the POST code in addition to accessing the serial device.
s_DSC00969.jpg

3D printed crap: I made up a somewhat robust caddy for the power supply breakout. It at least prevents manipulating the power supply or cables from putting stress on the edge connector, which is basically all I wanted.
s_DSC00972.jpg

And another 3D printed thing to let me mount two 2.5" SSDs in the one 3.5" drive slot. It's a bit floppy, but it's more than strong enough for what it needs to do.
s_DSC00974.jpg
This mobo does have two USB ports, but the second one is in a massively obnoxious location. I have no idea what they were thinking. You cannot access the second connector without removing the entire drive/PCIe riser bracket. There's not very much vertical clearance either; it won't even fit many thumbdrives.
s_DSC00981.jpg
In other news, I got the thing to properly boot with a GPU (Proxmox, annoyingly, doesn't have a serial installer). That in and of itself turned out to be a horrible nightmare, mostly because I'm fairly sure the eBay reseller shipped this with an incorrect jumper configuration.

s_DSC00975.jpg
Basically, there are two sets of jumpers that ostensibly configure the PCIe interface: a set on the riser card, and a set on the motherboard itself. As the system was delivered, the jumpers were placed on the riser, and the ones on the motherboard were left open. This sort of worked.
With an Nvidia GPU, I could get into the BIOS, but when Proxmox tried to change graphics modes, it failed. The Debian installer started a graphical environment OK. I suspect there are some modes that work and some that don't.
With an AMD GPU, I didn't get any graphics output at all.

I pulled the riser, and stuck a card directly in the mobo, and that worked OK, but was inconvenient as heck.

Moving the jumpers to the motherboard (leaving the jumpers on the riser completely open) seems to have fixed the issue, and I can now properly use a GPU in either riser slot.

s_DSC00985 correct jumpers.jpgs_DSC00986 Wrong Jumpers 1.jpg

I suspect (and this is an ongoing annoyance of mine) that these systems aren't actually being sold as they came from the original datacentre. A bunch of the surplus server retailers on eBay now fancy themselves "system integrators", and by that they mean "it posts, so it must be fine".

I suspect they're mix-and-matching the parts (I mean, they mechanically fit), and not doing proper testing (after all, the documentation for these specific motherboards just isn't available anywhere I've been able to find). Without a PCI-e card, the system did post fine, so unless they specifically tested the PCI-e slots, it's not really fully functional.

I wish they'd just resell the servers as they received them from the original user. I don't trust ebay wackos to do due diligence on reconfiguring specialty custom servers.

-----

Anyways, rants aside, I've now got the node joined to my Proxmox cluster, and have been happily migrating VMs to it.

The system power draw is ~80-120 watts at idle, so they're nice and efficient. They're quite quiet as well. The loudest part of the server is the Supermicro power supply.
 


voxadam

Member
Apr 21, 2016
Portland, Oregon
This is excellent work; I especially appreciate your superb documentation.

Being in a similar situation myself, I've been thinking about a similar low-budget/high-WAF build. I'm currently trying to decide whether to base my build on Winterfell boards or go with Dell C8220 or C6220 boards (similar to what's been talked about in this thread). I'd love to hear whether you'd given any thought to using any non-Windmill/Winterfell boards such as the C8220 or C6220.

Again, stellar write-up.
 
Last edited:

fake-name

Active Member
Feb 28, 2017
This is happening:

Overall Chassis with servers.png

It turns out that the power connectors will fit 1/8" bus-bar. They're a little tiny bit out-of-spec (3.175 mm vs the spec of 3 mm ± 0.1 mm). I have some 1/8" bus bar to use, which is handy.

Also: fake-name/MiscellaneousProjects

:)

------

This is excellent work; I especially appreciate your superb documentation.
I generally try to write documentation I'd like to read, but thanks!

Being in a similar situation myself, I've been thinking about a similar low-budget/high-WAF build. I'm currently trying to decide whether to base my build on Winterfell boards or go with Dell C8220 or C6220 boards (similar to what's been talked about in this thread). I'd love to hear whether you'd given any thought to using any non-Windmill/Winterfell boards such as the C8220 or C6220.
Uh, I didn't do that much research, really. That's an interesting alternative, though it doesn't have the nice cooling system/PCIe riser this does.

I'm planning on ordering some SFF-8087 headers to see if I can get more SATA ports for these things. The board has footprints for 3 SAS interfaces and 2 PCIe-over-SAS-physical-layer connectors.

@hmartin got one of the SAS ports to work, and I have a hot-air soldering system so I think I can probably place the right-angle connectors as well.

Worst case, I lift some pads that aren't being used anyways, so YOLO? Time to do a digikey order.
 
Last edited:

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
Having something custom fabbed is kind of the opposite of cheap. And I don't have any sheet-metal facilities myself (or the space to fit them, really. A decent brake is quite large).
Ah, my last full-time gig had a sheet metal shop so getting simple stuff done just costs me the materials as long as it's a slow day and I can bribe someone with beer. I forget that it's a lot more expensive when you have to pay for labor as well as materials. 3d printing is the cost that gets me, I never bought a printer so I end up having to pay to have things printed and mailed to me when I need them.

Which breakout board is your caddy designed for? I think I could use the same to solve a really annoying PSU clearance problem in one of my machines.

+1 on the awesome documentation, thanks for making all of your findings available!
 

fake-name

Active Member
Feb 28, 2017
It's for a X20 Breakout Board ASIC GPU mining 10 Port Chain Sync for Server PSU

If you buy one, order on eBay. It's the same seller, but from reading around, people complain about shipping times from their normal storefront. On eBay they have feedback to worry about, so they ship faster, apparently.

Let me know if you want a STL or w/e. I don't put those in version control because they're hueg.


It's a mildly janky breakout. I've actually been thinking about designing a custom breakout, because I want to break out the SMBus interface on the power supply too, but haven't gotten around to it.
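For what it's worth, the SMBus pins on these server supplies normally carry PMBus, so once they're broken out, polling the supply should look roughly like this (the I2C bus number and device address are assumptions; scan with i2cdetect first, and most readings come back in PMBus LINEAR11 format):

```python
# Hedged sketch: polling a server PSU over PMBus once its SMBus pins are broken out.
# Bus number (1) and device address (0x58) are assumptions -- scan with i2cdetect
# first. Most PMBus readings use the LINEAR11 format decoded below.
from smbus2 import SMBus

PSU_ADDR = 0x58          # common PMBus address for server PSUs, but verify
READ_VIN = 0x88          # standard PMBus command codes
READ_IOUT = 0x8C
READ_TEMPERATURE_1 = 0x8D

def linear11(raw):
    """Decode a PMBus LINEAR11 word: 5-bit signed exponent, 11-bit signed mantissa."""
    exponent = raw >> 11
    mantissa = raw & 0x7FF
    if exponent > 0x0F:
        exponent -= 0x20
    if mantissa > 0x3FF:
        mantissa -= 0x800
    return mantissa * (2 ** exponent)

with SMBus(1) as bus:
    for name, cmd in [("Vin", READ_VIN), ("Iout", READ_IOUT), ("Temp1", READ_TEMPERATURE_1)]:
        print(name, round(linear11(bus.read_word_data(PSU_ADDR, cmd)), 2))
```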

Actually, dammit, I should have ordered connectors on the digikey order I just placed. Fuuuuuuu


------

Ah, my last full-time gig had a sheet metal shop so getting simple stuff done just costs me the materials as long as it's a slow day and I can bribe someone with beer. I forget that it's a lot more expensive when you have to pay for labor as well as materials. 3d printing is the cost that gets me, I never bought a printer so I end up having to pay to have things printed and mailed to me when I need them.
Heh, I'm actually printing this stuff on my work's printer.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
It's for a X20 Breakout Board ASIC GPU mining 10 Port Chain Sync for Server PSU

If you buy one, order on eBay. It's the same seller, but from reading around, people complain about shipping times from their normal storefront. On eBay they have feedback to worry about, so they ship faster, apparently.

Let me know if you want a STL or w/e. I don't put those in version control because they're hueg.


It's a mildly janky breakout. I've actually been thinking about designing a custom breakout, because I want to break out the SMBus interface on the power supply too, but haven't gotten around to it.
Thanks! If you wouldn't mind popping the STL up online somewhere I could probably make use of it. I'm going to attempt to cram one of these 1u HP PSUs into my little Node 304 mITX build to solve my issues with PSU/motherboard clearance without spending a bunch more money. (High quality SFF PSUs are bonkers expensive!)

Make a post if you decide to build a better breakout board, I could jump in for a couple if you want help bringing an order up to MOQ or a price breakpoint and I'm sure you'd get some interest from others.
 

fake-name

Active Member
Feb 28, 2017
Thanks! If you wouldn't mind popping the STL up online somewhere I could probably make use of it. I'm going to attempt to cram one of these 1u HP PSUs into my little Node 304 mITX build to solve my issues with PSU/motherboard clearance without spending a bunch more money. (High quality SFF PSUs are bonkers expensive!)
This is specific to one particular Supermicro power supply breakout board. It also only provides 12V, nothing else, and it's independently switched: you push the button (or stick 12V into one of the ports), and it turns the power supply on until the 12V goes away or you push the button again.

You could probably rig a set of DC-DC converters to give you 5V/3.3V/etc..., but it'd wind up being a big hairy wiring mess, likely.

Anyways, I stuck the STL here: fake-name/MiscellaneousProjects

If you have a different breakout, I can *probably* tweak the design to work, if you can get me the relevant mechanical dimensions.
 

arglebargle

H̸̖̅ȩ̸̐l̷̦͋l̴̰̈ỏ̶̱ ̸̢͋W̵͖̌ò̴͚r̴͇̀l̵̼͗d̷͕̈
Jul 15, 2018
This is specific to one particular Supermicro power supply breakout board. It also only provides 12V, nothing else, and it's independently switched: you push the button (or stick 12V into one of the ports), and it turns the power supply on until the 12V goes away or you push the button again.

You could probably rig a set of DC-DC converters to give you 5V/3.3V/etc..., but it'd wind up being a big hairy wiring mess, likely.

Anyways, I stuck the STL here: fake-name/MiscellaneousProjects

If you have a different breakout, I can *probably* tweak the design to work, if you can get me the relevant mechanical dimensions.
Ah, I'm working with the HP supplies I have on hand. I think I can take the design and tweak it to fit these and the appropriate breakout board, I've needed an excuse to learn some basic modelling anyway. I was just going to use a Pico PSU to handle conversion TBH, I can probably find/wire up something to convert extra 12v to 5V for drives and such too. I'm pretty sure the Pico's will handle switching a breakout board on/off with a motherboard signal too.
 
Last edited:

fake-name

Active Member
Feb 28, 2017
Ah, I'm working with the HP supplies I have on hand. I think I can take the design and tweak it to fit these and the appropriate breakout board, I've needed an excuse to learn some basic modelling anyway. I was just going to use a Pico PSU to handle conversion TBH, I can probably find/wire up something to convert extra 12v to 5V for drives and such too. I'm pretty sure the Pico's will handle switching a breakout board on/off with a motherboard signal too.
Oh, derp, why didn't I think of that.

The issue with switching the breakout on is that you need the PSU on to power the pico on to switch the PSU on, so you've got a chicken/egg situation.

I think most of the HP supplies (well, most supplies in general) have a 12V standby output, but it's almost certainly not wired to anything in a miner-focused breakout, so you'd need to find it and solder some cables onto the relevant pin.
 

onsit

Member
Jan 5, 2018
Ordered one of these that supports the E5-2678 v3, and picked up a couple of DIY breakout boards: Breakout Board Adapter compatible with HP 1200 watt DPS-1200FB | Parallel Miner

With 14AWG wire, you would only need to run two positive and two negative leads from this breakout board (20A max on 14AWG for runs of less than 20" or so).
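(Back-of-the-envelope version of that, using the ~40A worst case mentioned below and a rule-of-thumb ~20A per 14AWG conductor for short chassis runs rather than any spec value:)

```python
# Sanity check on splitting the 12V feed across parallel 14AWG runs.
# The 40A figure is the node's stated maximum; ~20A per 14AWG conductor is a
# common chassis-wiring rule of thumb for short runs, not a spec value.
MAX_NODE_AMPS = 40
AMPS_PER_14AWG_RUN = 20

runs_needed = -(-MAX_NODE_AMPS // AMPS_PER_14AWG_RUN)   # ceiling division
print(f"{runs_needed} positive + {runs_needed} negative 14AWG runs "
      f"({MAX_NODE_AMPS / runs_needed:.0f} A per conductor at full load)")
```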

Debating if I want to go your route and get some round terminals, or if I should solder directly to those massive 6AWG cables.

For anyone curious: the E5-2678 v3 Winterfell node has a nifty midplane card that allows running the node from a regular PSU, as long as it can supply up to 40A on a single 12V rail.



Open Datacenter Hardware - Leopard Server
 
Last edited: