Supermicro 6028U-TRTP X10DRU-i Barebone $349

vpgt

New Member
May 17, 2024
1
0
1
I used an official DOS utility to update the old signed BIOS on an X10DRU-i+ to 3.5. Now everything is OK.
Disclaimer: at your own risk. If you use the wrong BIOS, YOU WILL BRICK YOUR MACHINE.

I tried several times and it finally worked. The ME switch should be in the 2-3 position during the update: otherwise it looks like it is working, but it isn't. I manually renamed fdt & afudos to .exe, then used the default afudos switches from the stock bat file:
afudos %1 /P /B /N /K /R /ME
and added /X /GAN.

Step-by-step:

1. Power OFF the server (remove the plug from the wall socket). Move the ME jumper to 2-3.
2. Prepare a DOS boot stick for the official unsigned BIOS v3.5 (X10DRU2.427): download it from the official site, create a FreeDOS USB stick with Rufus, and put the unpacked files on the stick.
3. Manually rename fdt & afudos to *.exe.
4. Create a new bat file, fl35.bat, with these 3 lines:

fdt -w 50 A5
afudos X10DRU2.427 /P /B /N /K /R /ME /X /GAN
fdt -w 50 00

5. Boot from the USB stick, run fl35.bat, and wait! (A backup step worth running first is sketched below.)
6. Power OFF. Move the ME jumper back to 1-2. Reset CMOS (with a screwdriver).
7. Power ON, enter the BIOS, F3 for defaults, F4 to save & exit.
8. After the reboot, set up the BIOS properly.
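If you want a fallback image before step 5 touches anything, a read-back from the same FreeDOS prompt should work (a sketch: /O is the AFU "save current ROM image to file" switch on the afudos builds I have seen, so confirm it with afudos /? first, and the file name is just a placeholder):

afudos BACKUP.ROM /O

Keep that file somewhere off the stick; it is what you would hand to an SPI programmer if a flash ever goes sideways.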
 

kreqqy

New Member
Feb 15, 2024
18
11
3
Is anyone running these in a low-power configuration? Can you use 1x CPU and still get all 12 SAS bays + 10GbE?
 

dragonme

Active Member
Apr 12, 2016
347
27
28
I am late to this party.. and the thread has been weak but alive, so I will keep it so...

I generally run my home network on e-trash, as that is all I can afford.. worse now after the chip shortage...

my current setup has some limitations and requirements, and I might start a new post to cover the whole build / decision-making process.

Currently my 'rack' is a 24U short-depth 'networking' B-Line cabinet.. so not the traditional rack rails, but screws for telco/switch stuff.. and moreover NOT deep enough for typical servers..

so the limiting factors ('limfacs'):

1> need a short-depth server, less than 25" deep
2> fairly low power, but hyperconverged, where storage and VM workloads live in the same box
3> used to like staying on ESXi, but Broadcom has shit the bed... if I can run 8 I might do it long enough to learn Proxmox, but the end is near
4> I have a backup shelf with an SFF-8088 connection and would like to keep a connection to it
5> remote management BMC

for the last 9 years..

I found a very inexpensive Rackable Systems / SGI 3U hybrid that was custom built for a big datacenter like Google or Amazon.
Intel board, 2x L5640 low-power 60W TDP processors, 64GB RAM..
it's a strange case in that the board connections are up front, and it's set up as 1U with a single 90-degree riser.. and the 2U drive cage sits on top of the board. So it essentially is a 1U, which makes utilizing the other PCIe slots impossible.
ESXi 6.7, where napp-it boots first off VMFS that lives on an Intel SAS/SATA daughter board in a RAID mirror, and is passed both the onboard SATA for one pool, a 5x 8TB RAIDZ (media), and the onboard LSI 4I4E, which hosts 2 Intel 240GB SSDs in a ZFS stripe for VMs and also connects to the external shelf for backup when needed.. a 15-drive pool striped across 3x 5-drive RAIDZ vdevs.
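For reference, that backup shelf layout (one pool striped across three 5-disk RAIDZ vdevs) comes from a single zpool create, along these lines (a sketch; the pool name and c#t#d# device names are placeholders for whatever format/iostat reports on your OmniOS box):

zpool create backup raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 raidz c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0

Each raidz keyword starts a new vdev, and ZFS stripes writes across the three of them automatically.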

napp-it sucks for eye candy.. the interface looks circa 1980.. but it boots quick, and the NFS storage it feeds back to ESXi for the VMs, and the SMB shares from the media pool to the Plex VM etc., work well. napp-it reboots so quick, in fact, that I can update OmniOS and reboot it without stunning the VMs, and they don't care.. no warnings..

So all this, 2x L5640, 2 RAID cards, 6 Intel SSDs, and 7 iron drives, at about 25% typical CPU load, runs just over 200W at the wall.... but it's getting old..

SO after nearly 10 years of running this board.. it's time for a change

I have a mix of Supermicro trash en route to both replace this primary server and also replace some even older boards.. I still have some machines around here rocking socket 775 Core 2 Duo CPUs from 15+ years ago..

The primary server..
6028U-TR4T+ with an X10DRU-i+ and a 12-drive SAS expander backplane
2x E5-2630L low-power CPUs and 128GB RAM
might need a SAS3008 12Gb/s SAS card if the onboard won't run the SAS expander (so that is a question for the community, I guess.. can the onboard C612 SATA connectors drive a SAS expander?)


I also have 2x X10DRL-i ATX boards coming: one to replace the board out of a server I built back in, what, like 2008, with an E8400 Core 2 Duo, and another for a workstation/experimental machine to test on.

Here is the real HACK of the project, however.. as you guys are probably already thinking.. the Supermicro 2U chassis is not a short or half depth.. and you are correct.. it's not.. but I AM GOING TO TRY AND MAKE IT ONE.

I plan to cut the case between the fan wall and the backplane.. turn the fans around to suck instead of blow, put the case in the rack backward leaving the connection headers/PCIe up front like the Rackable server, and set the drive backplane on TOP, essentially making it a 4U server.. but the compute will be living in a 2U space.

I just could not find a short-depth server that had what I needed at a price I could pay...

I would love to hear thoughts on:
Firmware.. some say the latest 3.5 is a mess... should I leave everything alone.. or update?
I am really torn between ESXi and Proxmox.. I have been playing with Proxmox, and things like multiple vSwitches for internal NFS networks etc. seem more tacked on than baked in, and learning Proxmox after just figuring out ESXi enough to be dangerous.. seems like a lot of work.. but I really detest Broadcom at the moment and would almost do it out of spite..
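For what it's worth, the Proxmox equivalent of an internal-only vSwitch is just a Linux bridge with no ports, declared in /etc/network/interfaces (a sketch; vmbr1 and the 10.99.99.0/24 addressing are placeholders):

auto vmbr1
iface vmbr1 inet static
    address 10.99.99.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

VMs attached to vmbr1 then talk NFS to each other (and to the host on 10.99.99.1) without ever touching a physical NIC, which is roughly the internal-vSwitch pattern from ESXi.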

Hyperconverged..
I can run it the way I have been doing it: a napp-it VM native on ESXi/Proxmox serving NFS/SMB back to the hypervisor and the VMs..
or
I have seen some try to make Proxmox and its native ZFS do everything.. I messed with napp-it cs.. not a fan.. so I don't know how easy it would be, or how it would mess with how Proxmox handles the VMs from a pool / PBS backup perspective etc.
or
Instead of napp-it, a TrueNAS SCALE VM for the storage.. it takes considerably longer to boot, but I love the interface way more than napp-it's.. and I am a Mac-based house, not Windows, and while napp-it is built with great Windows sharing capabilities, its Mac support is weak.. whereas TrueNAS seems to cater to it.. and without a license, napp-it cuts a lot out of an already weak front end.

this is a bit longer than I intended, and I hope it's not seen as a 'hijack' of the thread..

if you all want to keep replies to the technical side of the Supermicro boards, that's cool.. thinking this needs its own thread.. especially when I take the chop saw to the case... haha.. hardware modding.. for the win..
 

frogtech

Well-Known Member
Jan 4, 2016
1,500
279
83
36
my favorite short-depth chassis (if I had any) would be the Supermicro 825M and the Supermicro 514/515 chassis

if you get rails, you might need adapters to convert your threaded holes for cage nuts
the 825M is a little cramped, and only supports 3 drives, but you can fit an entire E-ATX or ATX dual-proc mobo in it. I would probably go for ATX, but E-ATX works. if opting for the 825M, you probably need a 1U heatsink for the first proc and a 2U for the other.

the X10SRW-F and X11SSW-F are great boards for the 514, and I think the 514/515 can even fit up to E-ATX if it's the correct -W form factor
 

dragonme

Active Member
Apr 12, 2016
347
27
28
yep.. this 2U comes with rails.. but I likely won't use them.. if I need to work on it.. I find it easier to just pull it out.. and it usually needs a good cleaning periodically anyway.. haha

those 2 short-depth chassis don't have enough drive space for my needs.. 8x 3.5" drives minimum.. 12 is better...
 

dragonme

Active Member
Apr 12, 2016
347
27
28
Ok.. a bit of an update, and a question for those running the X10 series on ESXi 8

seems like the standard 8.0U2 installer worked fine, with the usual warnings about the CPU not being supported in an upcoming version, and about installing to USB.. yada yada..

BUT

at least the networking stack is fine.. and the Intel X540 4-port 10G NICs worked out of the box

Updated to U3 via SSH, and again.. all seems fine..

HOWEVER..

in the host hardware tab.. none of the device tree is passthrough-capable.. I mean nothing..

are there NO C612 platform drivers in ESXi 8..?

anyone running this version of ESXi on the X10 platform.. please weigh in.... thanks
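One thing worth trying from SSH before blaming drivers: newer ESXi builds expose passthrough toggling through esxcli, which sometimes works even when the host client UI shows nothing (a sketch; I believe the pcipassthru namespace appeared around 7.0, so check it exists on your build, and 0000:03:00.0 is a placeholder for whatever esxcli hardware pci list shows for your device):

esxcli hardware pci pcipassthru list
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true -a

If the namespace is missing, or the device never shows up in the list at all, that points at platform support rather than a UI quirk.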
 

daisuke1983

New Member
Feb 9, 2025
2
0
1
Hi all,

I got an SGI 2112-GP2 a few years ago, which appears to be based on the Supermicro X10DRU-i+.
I'm looking to flash the latest Supermicro BIOS (v3.5) onto it to address vulnerabilities, but I'm concerned about potential issues:

- **DMI mismatch:** Will the update be rejected, or will I need to modify the DMI tables?
- **Signature check:** Does the BIOS require a forced flash (e.g., AFUDOS /X /GAN)?
- **IPMI compatibility:** I don't mind losing IPMI, but I want to avoid bricking the board.
- **Recovery options:** If something goes wrong, is an SPI programmer (CH341A) necessary? (see the flashrom sketch below)
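On the CH341A question, the usual tool is flashrom with a SOIC clip (a sketch; ch341a_spi is flashrom's programmer name for that dongle, the file names are placeholders, and you should only ever write back an image you know is a full flash dump for this board):

flashrom -p ch341a_spi -r dump1.rom
flashrom -p ch341a_spi -r dump2.rom
flashrom -p ch341a_spi -w known_good.rom

Reading twice and diffing the dumps is the standard sanity check that the clip is seated before any write.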

I've been using SGI machines for a while and love how well they hold up over time.
I noticed a few people mentioned SGI systems earlier in this thread, so I figured this would be the best place to ask.

Has anyone successfully done this on an SGI variant of the X10DRU-i+?
Would appreciate any advice or experiences!

Thanks!
 

dragonme

Active Member
Apr 12, 2016
347
27
28
googlefu is your friend.. and as it happens, I believe all the info you require is here at servethehome..

there is a post that has a special version of the IPMI/BMC update... along with the flash steps and the special overrides you will need, since your board is likely 'locked down' with signed versions of both

he also has the flash operands that will flash the main firmware to the latest as well....

I did both an X10DRL-i as well as an X10DRU-i following those instructions, with the attached files as well as some from the Supermicro site

as far as the flashing, I think I did the BMC on FreeDOS... and the BIOS in the EFI shell...

no issues...