EXPIRED HYVE Edge Metal G10 - Epyc 7642 - £250/offer


epi1337

New Member
Oct 6, 2025
3
1
3
@luckylinux
I am interested in the TYAN M8036-L16-1F riser. Do you still have one available, and would you be willing to ship it to Germany? Have you already tested it with the HYVE Edge Metal G10, and does it work?
 

luckylinux

Well-Known Member
Mar 18, 2012
1,428
436
83
@luckylinux
I am interested in the TYAN M8036-L16-1F riser. Do you still have one available, and would you be willing to ship it to Germany? Have you already tested it with the HYVE Edge Metal G10, and does it work?
Not tested yet. I should still have it, but to be honest I don't want to deal with new users; way too many spammers, fakes, bots, etc.
 

cybertinus

New Member
Aug 22, 2025
5
4
3
In the front of the server you can place 3 M.2 NVMe's, via the converter boards Hyve developed to go from U.2 to M.2. But I'd rather just plug in U.2 devices directly: they are faster, and then I can use 4 devices instead of 3.

The issue is that you have to attach those devices somehow, which is hard because of all the M.2 hardware. But you can easily remove it: just 4 screws and you can take off the top plate, then pull up a pin and you can remove the bottom plate. And then I had a 3D holder designed, for 4 U.2 drives! I'm sharing the file now with all of you, so you can print this holder yourself if you want to. I have permission from the actual designer to do so.

This holder can hold 4 NVMe's, both 7mm and 15mm thick. It slides over the pins the bottom plate was attached to, which holds it in place.
The 15mm-thick NVMe's should go in the lowest screw holes and the 3rd one from the bottom. The 7mm-thick NVMe's should go in the second and fourth screw holes from the bottom. Otherwise you will hit a bump in the chassis and the connector won't fit.
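Once everything is cabled up, it's worth checking that all four drives actually enumerate. A minimal Python sketch reading the kernel's sysfs entries (assumes Linux; adapt to your setup):

```python
# List every NVMe controller the kernel sees, with model and serial number.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    serial = (ctrl / "serial").read_text().strip()
    print(f"{ctrl.name}: {model} (SN {serial})")
```

If only three show up, recheck the cabling before blaming the holder.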

Don't print this in PLA, because the inside of the server can get too hot for it. Use PETG.
I have tested this print on a Bambu Lab P1S and it printed fine.

I hope this helps somebody!
 

Attachments


cybertinus

New Member
Aug 22, 2025
5
4
3
For those who are curious how exactly the holder looks, I made some pictures.

IMG_8200_small.jpeg
The holder itself. As you can see, it can easily hold 4 storage devices. In the top left of the holder you can see a small hole between the 3rd and 4th screw hole, counted from the bottom. This was a small error in the 3D drawing; it has been fixed in the .step file I shared in my previous post.


IMG_8199_small.jpeg

IMG_8198_small.jpeg

And this is the holder nicely placed in the server itself, with the original plates with the U.2-to-M.2 converters removed. It fits nicely.

I held the server at a 90-degree angle to try to make the holder fall off the pegs. It stayed in place. If you really want to, you can add double-sided tape to the bottom of the holder (very thin stuff, a millimetre thick at most!).


I like this holder! :)
 

jd456

New Member
Sep 16, 2020
13
13
3

cybertinus

New Member
Aug 22, 2025
5
4
3
I ordered a few from there as well and am also waiting for delivery. They should be here between the 21st and the 27th... Then I can test them too.
 

cybertinus

New Member
Aug 22, 2025
5
4
3
If you hate the built-in IPMI, then this might be a good solution too: Introduction - Sipeed Wiki
Since a riser card for one of the slots is very hard to come by, I'm going to use that slot for this KVM solution. I will report on the results :).
I bought the full version (with both PoE and WiFi). I don't need WiFi, but the difference between the PoE-only version and the PoE+WiFi version was $2 ($69 vs. $71), so I saw no reason not to have WiFi. I will use PoE to power the KVM, because of course I can't power it via PCIe, since I don't have a riser there. This also lets me power on the server when it is off: via PoE the KVM has power even then, so it can simulate a press on the power button :).
It also solves the issue of machines hanging during a reboot: via the KVM you can power-cycle the machine without having to go to the server, which is useful when accessing them remotely (which is my plan).
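Something like this (a rough Python sketch; the hostname is a placeholder and I haven't tested it on my setup yet) would flag a hung node before you even open the KVM:

```python
# Ping a node once; if it doesn't answer, it's time for the NanoKVM.
import subprocess

HOST = "hyve-node-1.lan"  # hypothetical hostname

def is_up(host: str, timeout_s: int = 2) -> bool:
    """Return True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        capture_output=True,
    )
    return result.returncode == 0

if not is_up(HOST):
    print(f"{HOST} is not answering - time to log in to the KVM")
```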
 

Terexxor

New Member
Aug 13, 2025
7
8
3
I have tested an Nvidia GPU and an LSI HBA with a Glotrends PCIe 3.0 riser cable from Amazon; both work fine. Currently running the LSI 24/7 in my lab.

Overall, I have had two nodes running in an open-air config since the first batch shipped from the listing in the first post of this thread. I swapped the coolers to Arctic 4U rev 2 and replaced the stock fans with 2 x 120mm on the front using the rubber mounts, plus one strapped at the back to cool the rear NVMe and the 25GbE NIC.

CPU sits around 27-30°C idle and 40-50°C at full load. Overall, temps are fine.
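If you want to log those temps yourself, here's a minimal Python sketch (assumes Linux with the k10temp driver, and psutil installed):

```python
# Print the CPU temperature once a minute, as reported by k10temp.
import time
import psutil

while True:
    for sensor in psutil.sensors_temperatures().get("k10temp", []):
        print(f"{sensor.label or 'cpu'}: {sensor.current:.1f} C")
    time.sleep(60)
```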

Great little boxes for the price. If you dive deep into the BIOS, there are options for CPU power management and TDP limiting; I have not tested these, but they might be useful for those running the stock 1U config.
 

flobert

Member
Sep 3, 2019
49
18
8
I have tested an Nvidia GPU and an LSI HBA with a Glotrends PCIe 3.0 riser cable from Amazon; both work fine. Currently running the LSI 24/7 in my lab.

Overall, I have had two nodes running in an open-air config since the first batch shipped from the listing in the first post of this thread. I swapped the coolers to Arctic 4U rev 2 and replaced the stock fans with 2 x 120mm on the front using the rubber mounts, plus one strapped at the back to cool the rear NVMe and the 25GbE NIC.

CPU sits around 27-30°C idle and 40-50°C at full load. Overall, temps are fine.

Great little boxes for the price. If you dive deep into the BIOS, there are options for CPU power management and TDP limiting; I have not tested these, but they might be useful for those running the stock 1U config.
Can you share some pics of your setup?
 

Terexxor

New Member
Aug 13, 2025
7
8
3
Can you share some pics of your setup?
Got them running in a “Lack Rack” setup to keep things cheap; I could not justify spending more on a rack than on a single node. I might swap the PSUs to desktop units soon, but this is my “production” lab setup, so it won't move or change for a few years.

I will print one of the mounting units for the U.2 NVMe's that was kindly posted above; for now they are held with the metal wires that clamped the SFF cables.

IMG_1099.jpg
IMG_1100.jpg
 

flobert

Member
Sep 3, 2019
49
18
8
Got them running in a “Lack Rack” setup to keep things cheap; I could not justify spending more on a rack than on a single node. I might swap the PSUs to desktop units soon, but this is my “production” lab setup, so it won't move or change for a few years.

I will print one of the mounting units for the U.2 NVMe's that was kindly posted above; for now they are held with the metal wires that clamped the SFF cables.

View attachment 45903 View attachment 45904
And are the Glotrends risers working in the x24 slot?
 

Terexxor

New Member
Aug 13, 2025
7
8
3
And are the Glotrends risers working in the x24 slot?
Yeah, running fine for a while now in one of the x24 slots; I would assume any standard PCIe 3.0 or 4.0 adapter will work fine. There is a newer 5.0 cable, slightly longer, that I might test if I grab an RTX 5000-series GPU.
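If you want to verify what link a riser actually negotiated, a quick Python sketch reading sysfs (Linux only; the device address is a placeholder, find yours with lspci):

```python
# Read the negotiated PCIe link speed and width for one device.
from pathlib import Path

DEV = Path("/sys/bus/pci/devices/0000:01:00.0")  # hypothetical address

speed = (DEV / "current_link_speed").read_text().strip()  # e.g. "8.0 GT/s PCIe"
width = (DEV / "current_link_width").read_text().strip()  # e.g. "8"
print(f"negotiated: {speed} x{width}")
```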

Ran a GPU for just over a month, but I hardly transcode in Jellyfin, so I removed it and replaced it with an LSI 9300-16i for my ZFS array, which has been running fine for the last few weeks.

There is enough space on top of the PSU to sit a medium-sized card, or to make room for a PCI mount. A Fractal R5 disk cage slots perfectly on top of the PSU with zero play or movement.

Also, 8 memory channels is the minimum I recommend on this system, after having run one node with 4 x 2933Y and one with 8 x 2666V memory.
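For a rough sense of why: theoretical peak bandwidth scales with populated channels. A back-of-the-envelope Python sketch (ignores real-world efficiency):

```python
# Peak DDR4 bandwidth: channels x transfer rate (MT/s) x 8 bytes per transfer.
def peak_gb_s(channels: int, mt_s: int) -> float:
    return channels * mt_s * 8 / 1000  # MB/s -> GB/s

print(f"4 x 2933: {peak_gb_s(4, 2933):.1f} GB/s")  # ~93.9 GB/s
print(f"8 x 2666: {peak_gb_s(8, 2666):.1f} GB/s")  # ~170.6 GB/s
```

Even at the slower DIMM speed, eight channels come out well ahead.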
 