Supermicro X9DRD-7LN4F-JBOD Rev 1.02 @ $142


techtoys

Active Member
Feb 25, 2016
Not the smoking $130 deal ... but don't get me started on that.
$142 Direct Link + shipping
$200 eBay link, shipping included (dropped to $159)
Includes 2 heatsinks

I have purchased from this seller before; they are prompt, and all items arrived in working order. The direct site is a bit cheaper but does not include shipping. I am in the Bay Area close to them, so USPS is pretty quick and cheap for me. I received this yesterday and haven't tested it yet.

The serverstore, where a bunch of us got the Dell C6100s, is also selling a build with this motherboard that may be a better deal. They include an 825 case and a couple of procs in the base configuration for $300 + shipping.
 

copcopcopcop

Member
Feb 2, 2017
Sweet! Just grabbed one!

Just as an FYI, I'm pretty sure the X9DRD-7LN4F-JBOD unofficially supports PCIe bifurcation with BIOS v3.3+.

This will make a nice spare/replacement for my current X9DR7-LN4F-JBOD; it's essentially the same board, but with two x16 PCIe slots and no bifurcation support. :(


Edit: my order had free shipping. $141 shipped.
 

Philmatic

Active Member
Sep 15, 2011
Just as an FYI, I'm pretty sure the X9DRD-7LN4F-JBOD unofficially supports PCIe bifurcation with BIOS v3.3+.
It does, I have it. I'm running two NVMe drives on an AOC-SLG3-2M2 in the lowest PCIe x8 slot in x4/x4 bifurcation mode. Love this board!
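
For anyone wanting to sanity-check a similar setup, here is a minimal sketch (assuming a Linux host; standard sysfs paths, nothing board-specific) that lists each NVMe controller and its PCIe address. With x4/x4 bifurcation working, the two drives on the AOC-SLG3-2M2 should appear as separate controllers at separate addresses:

# Minimal sketch: list NVMe controllers and their PCIe addresses via sysfs.
# With x4/x4 bifurcation active, two controllers should appear, each with
# its own PCIe address.
import os

NVME_SYSFS = "/sys/class/nvme"  # standard Linux sysfs location

for ctrl in sorted(os.listdir(NVME_SYSFS)):
    # The "device" link points at the underlying PCI device directory.
    dev_link = os.path.realpath(os.path.join(NVME_SYSFS, ctrl, "device"))
    pci_addr = os.path.basename(dev_link)  # e.g. 0000:82:00.0
    model_path = os.path.join(NVME_SYSFS, ctrl, "model")
    model = open(model_path).read().strip() if os.path.exists(model_path) else "?"
    print(f"{ctrl}: {model} at PCIe {pci_addr}")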

Honestly, now that I see how much these run normally, I can't believe the deal I snagged on mine. Someone listed it as X9DRD-7LN4F-JB0D, with a zero instead of an "O", and had it set as a BIN at $125 with free shipping.

 

BeTeP

Well-Known Member
Mar 23, 2019
how much these run normally
Who pays those "normal" prices when the same amount gets you a compatible barebones Dell or HP server with a decent rack mountable chassis, both heatsinks, redundant 80+ gold or platinum power supplies, hot-swappable drive bays and backplane already included?
 

Philmatic

Active Member
Sep 15, 2011
Who pays those "normal" prices when the same amount gets you a compatible barebones Dell or HP server with a decent rack mountable chassis, both heatsinks, redundant 80+ gold or platinum power supplies, hot-swappable drive bays and backplane already included?
Link? Also, customization and noise are extremely important to me.
 

BeTeP

Well-Known Member
Mar 23, 2019
LOL. You have just cost me $150. I went to look up a link and bought myself another HP DL380p Gen8 (the whole server minus the drives) that I did not really need. But for the $150 shipped total I just could not resist. It comes with 2x E5-2650 CPUs and 48GB of RAM.

Back to the requested link: HP DL380p barebones for $175 shipped is a config similar to what I was referring to originally. No RAM, no CPUs, 2x heatsinks, 2x 750W 80+ Gold PSUs, P420i RAID, 4x 1GbE, 8x SFF drive bays.

I am not sure what kind of customization you are looking for. As for the noise - all my gear goes into a rack cabinet in the basement. I am more concerned with the current that all the fans draw than the noise they produce. At 100%, each of the 6 fans per chassis draws up to 3.3 A at 12 V - that's about 40 W per fan. Fortunately they usually sit at 25-33%.
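
Spelled out, the worst-case arithmetic from those figures (illustrative only; real draw at 25-33% duty is much lower):

# Illustrative worst-case arithmetic from the figures above
# (3.3 A at 12 V per fan, 6 fans per chassis, all at 100%).
FAN_CURRENT_A = 3.3
FAN_VOLTAGE_V = 12
FANS_PER_CHASSIS = 6

per_fan_w = FAN_CURRENT_A * FAN_VOLTAGE_V       # ~39.6 W per fan at full speed
per_chassis_w = per_fan_w * FANS_PER_CHASSIS    # ~237.6 W per chassis at full speed
print(f"{per_fan_w:.1f} W per fan, {per_chassis_w:.1f} W per chassis at 100%")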
 

ecosse

Active Member
Jul 2, 2013
LOL. You have just cost me $150. I went to look up a link and bought myself another HP DL380p Gen8 I did not need. But for $150 shipped I could not resist. For the price it even includes 2x E5-2650 CPUs and 48GB of RAM.

Here is the config I was referring to originally: HP DL380p barebones


I am not sure what kind of customization you are looking for. As for the noise - all my gear goes into a rack cabinet in the basement. I am more concerned with the current that all the fans draw than the noise they produce.
The customisation of HP servers is cr&p. They tend not to play well with other vendors' cards, for example; they try to make as much proprietary as they can get away with. You also have to consider the paywall for BIOS updates, etc. Having said that, I find tier 1 vendor kit generally rock solid. It really comes down to what you want from your server.
 

BeTeP

Well-Known Member
Mar 23, 2019
The customisation of HP servers is cr&p. They tend not to play well with other vendors' cards, for example; they try to make as much proprietary as they can get away with. You also have to consider the paywall for BIOS updates, etc.
In my experience, their "tendency not to play well with other vendors" is way overblown. Basically it is limited to third-party adapters not supporting HP's "Sea of Sensors" and causing higher fan speeds (and therefore higher noise and power consumption). Fortunately I only need SAS HBAs and 10GbE adapters, and the HP-branded ones are among the cheapest anyway. So I am fine with that.

The paywall sucks, though, and I am not going to buy any new HPE hardware because of that. But for the older Gen8 systems I have already collected all the updates I need.

PS: Sorry for hijacking the thread. I was just curious about the benefits of paying more for the SM-based system. PCIe bifurcation support is nice. What else?
 

manfri

Member
Nov 19, 2015
Who pays those "normal" prices when the same amount gets you a compatible barebones Dell or HP server with a decent rack mountable chassis, both heatsinks, redundant 80+ gold or platinum power supplies, hot-swappable drive bays and backplane already included?
I do not know about Dell, but with HP:

CON
You must use HP hard drives or the box starts screaming.
Even RAM can be picky.
CPU support I really do not know (never tried).
Hard disks cannot be used in non-RAID mode without an additional HBA.
Cannot install more than 16 drives, and adding bays requires a proprietary drive cage and an additional controller.
Adding more PCI Express expansion requires a proprietary riser kit.
Access to firmware upgrades only with a support contract.


PRO
FLR network kits are a real bargain because they are proprietary.

Good hardware for production, less so for a homelab or home usage.
But the prices are plummeting...
 

BeTeP

Well-Known Member
Mar 23, 2019
You must use HP hard drives or the box starts screaming.
Even RAM can be picky.
CPU support I really do not know (never tried).
Hard disks cannot be used in non-RAID mode without an additional HBA.
Cannot install more than 16 drives, and adding bays requires a proprietary drive cage and an additional controller.
Adding more PCI Express expansion requires a proprietary riser kit.
That's exactly what I have called "overblown"

I have 0 (zero) HP-branded hard drives and my servers sit at or below 33% fan speed. All an HP server needs from an HDD is a temperature sensor supported by iLO - and those are plentiful. I mean, obviously HP does not support it and your average eBay seller would not have a clue, but they are easy enough to come by.
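
For anyone screening third-party drives, here is a minimal sketch (assuming smartmontools is installed; /dev/sda is just an example path) that checks whether a drive reports a temperature attribute over SMART at all - whether iLO then picks it up is a separate question:

# Minimal sketch: print any SMART temperature attribute a drive reports.
# Assumptions: smartmontools installed; adjust DEVICE for your system.
import subprocess

DEVICE = "/dev/sda"  # example path, change to the drive under test

out = subprocess.run(["smartctl", "-A", DEVICE], capture_output=True, text=True)
temp_lines = [line for line in out.stdout.splitlines() if "temperature" in line.lower()]
print("\n".join(temp_lines) if temp_lines else "No temperature attribute reported")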

Without HP branded memory the "HP Smart Memory Mode" will be disabled - but who cares. I use standard Samsung PC3L-12800R stuff and it works great. It does not affect the fan speed.

HP RAID controllers use PMC chips, which do not support HBA mode. So if you need an HBA, just get the HP H220 - it's one of the best LSI SAS 2308-based cards and very inexpensive at ~$30 shipped. But if you need a 6Gbps SAS RAID controller, the HP P420 has the best performance in its price range. So it's a wash.

If you need more than 12 LFF drives connected to your system, just get an external enclosure. Personally, I am planning to move away from having anything but a pair of mirrored boot SSDs in my servers.

The HP Gen8 2U servers (the platform comparable to the SM board in the title) come with the primary PCIe riser included - that's good for 2x x16 and 3x x8 slots. If you need more slots, the secondary risers can be found for around $15.
 

manfri

Member
Nov 19, 2015
That's exactly what I have called "overblown"

I have 0 (zero) HP-branded hard drives and my servers sit at or below 33% fan speed. All an HP server needs from an HDD is a temperature sensor supported by iLO - and those are plentiful. I mean, obviously HP does not support it and your average eBay seller would not have a clue, but they are easy enough to come by.

Without HP branded memory the "HP Smart Memory Mode" will be disabled - but who cares. I use standard Samsung PC3L-12800R stuff and it works great. It does not affect the fan speed.

HP RAID controllers use PMC chips, which do not support HBA mode. So if you need an HBA, just get the HP H220 - it's one of the best LSI SAS 2308-based cards and very inexpensive at ~$30 shipped. But if you need a 6Gbps SAS RAID controller, the HP P420 has the best performance in its price range. So it's a wash.

If you need more than 12 LFF drives connected to your system, just get an external enclosure. Personally, I am planning to move away from having anything but a pair of mirrored boot SSDs in my servers.

The HP Gen8 2U servers (the platform comparable to the SM board in the title) come with the primary PCIe riser included - that's good for 2x x16 and 3x x8 slots. If you need more slots, the secondary risers can be found for around $15.
That's exactly what I have called "overblown" too. :cool:

At the time when I made the same comparison (a year and a half ago), I chose another machine over HP because, given the prices and the considerations already discussed, HP was not a bargain (especially in the EU).

And I install HP servers, switches and SANs for a living, and I installed my test machine in an HP shop.

Believe me, I paid for this with my own money... so I considered everything. :cool:

As I said, the prices of HP G8 servers are now plummeting, so if I had to make my choice today, I think I would go with HP.

PS: the prices for these SM chassis are "outrageous", and that's why my machines run "naked" on a test bench :cool: and, for my purposes, without SAS and RAID (they really are test machines...).

Good to know that "compatible" hard drives can be found without going crazy over the noise...


PS: the primary riser on the G8 has only 2 PCIe 3.0 slots, 1x x16 and 1x x8 on CPU 1, plus one PCIe 2.0 (electrical x4) slot attached to the chipset.

To use the second riser you MUST also install a second CPU.

And another thing I forgot... there is no Molex or SATA power connector, so installing a USB 3.0 card that needs to be powered separately is a pain in the .ss.

And servers of this age do not have USB 3.0 on board (I think all of them, not only HP).

I never tried to install more esoteric hardware like a GPU or an NVMe drive, but I fear some limitations apply.

That's why a lot of people choose different hardware, and why the price of this hardware is so high...
 

Samir

Post Liker and Deal Hunter Extraordinaire!
Jul 21, 2017
HSV and SFO
Good to know about the increased limitations in later generations of HP servers. The G5s I have will pretty much take anything, even Dell parts. :eek:
 

techtoys

Active Member
Feb 25, 2016
Booted the board into the BIOS. Looks like BIOS v2 with a 2013 build date. Good thing I used a v1 processor.
Next step is to update the BIOS, then install it in a CSE-836.
 

Philmatic

Active Member
Sep 15, 2011
Booted the board into the BIOS. Looks like BIOS v2 with a 2013 build date. Good thing I used a v1 processor.
Next step is to update the BIOS, then install it in a CSE-836.
Good info! Good thing I have a v1 Xeon. I will use it to upgrade the BIOS so I can drop in my 2650 v2s.
 

techtoys

Active Member
Feb 25, 2016
Good info! Good thing I have a v1 Xeon. I will use it to upgrade the BIOS so I can drop in my 2650 v2s.
You and me both! The 2650 v2 seems to be in the same sweet spot the old 2670 v1 used to occupy.
I picked up a pair of 2650 v2s last week for $120, or $60 each. The Natex deal in 2016 was $65 for the 2670 v1.
 

RedX1

Active Member
Aug 11, 2017
Hi,

I have a lot of Supermicro equipment, and some of my recent experience with this motherboard might be useful.

I obtained one of the same X9DRD-7LN4F-JBOD Rev 1.02 boards and installed it into a CSE-836TQ. It came with 2 E5-2603 v1 processors and passive heatsinks.

It had IPMI firmware 3.39 and BIOS version 3.0; I have not yet installed anything more recent.



It has some interesting behaviour.

I installed the motherboard as delivered (no AOC) with 64GB of ECC RDIMM from the SM approved list, and the board came to life with no problems. I could install Win 10, and every flavour of Linux I tried installed with no problems.

This is one of the SM X9 motherboards that will not soft-start when Win 10 is installed. It needs to be completely powered down in order to boot into Win 10. This is a pain, more so with the frequent Win 10 update cycle, as it needs to be completely powered down in the middle of the update, which causes some consternation. I have an X9DRI-F and some other X9 boards with the same behaviour.

After some initial proving, I wanted to upgrade the board with an LSI 9211-8i in IT mode, so I could access all 16 drives in this chassis, and also a Mellanox CX-312A 10GbE NIC.

I installed an Nvidia GT 710 to get a better display and to run LSI MegaRAID Storage Manager in Win 10. All of this worked flawlessly with both the original E5-2603 v1s and also a pair of E5-2640 v1 CPUs that I had.

Then I tried to upgrade to some E5-2637 v2 processors. These are low core count, high frequency processors that are ideal for my work with this machine (testing of embedded control system algorithms).

The machine will power up into the BIOS and everything looks normal: no unwanted beeps or any other alarms.

When I try to install an OS, Win 10 or Linux, the install fails. In Win 10 the circle of dots stops rotating, and if I try to install Linux the machine stops at the GRUB screen. I tried installing from a USB 3 memory stick, which works on other SM X9 motherboards, and also from the CD drive. The behaviour is the same.

Interestingly, if I just try to install DOS from a Windows DOS USB 2 memory stick, the machine will boot into MS-DOS with no problems.

After spending too much time researching this problem and attempting remedies, I gave up on the v2 processors and reinstalled the E5-2640 v1 CPUs, and the machine now runs with no problems.

I wonder if any of the other purchasers of this SM board will experience the same issues. Any advice will be greatly appreciated.


Good Luck.


RedX1
 

KC8FLB

Member
Aug 12, 2018
I am currently running an Asus Z9PE-D16, which is a workstation board with many x16 slots that I don't use at all. I am using it for a storage/VM server (Unraid). This board goes for $300+ on eBay. Running dual 2670 v1s, 256GB of RAM, and a couple of LSI HBAs. That's it.

I am thinking of buying this Supermicro board instead and selling the Asus, on the thinking that the Supermicro board is probably better suited for server stability in this application.

Opinions would be appreciated. Thank you.
 

techtoys

Active Member
Feb 25, 2016
@RedX1: I have a similar setup with these boards. Two out of four are currently running E5-2650 v2 and E5-2650L v2 with a Windows Server OS. A third one booted into the BIOS yesterday on an E5-2630L v2; I plan on running another installation of Windows Server on it. I have 2 Mellanox cards in each board running 40G InfiniBand and 10G Ethernet. A few days ago I removed a Dell H310, which is equivalent to the LSI 9211-8i, from the CSE-836TQ. I upgraded the backplane to a cheap SAS836-EL1 expander so the onboard SAS 2308 can access all 16 drives over one 4-lane port.
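
For anyone doing a similar expander swap, a minimal sketch (assuming a Linux environment such as a live USB during burn-in; standard sysfs paths, nothing controller-specific) to confirm all 16 bays are visible to the onboard SAS 2308 through the expander:

# Minimal sketch: list SCSI disks with model and size so you can confirm
# every bay behind the expander backplane is visible to the HBA.
import os

disks = sorted(d for d in os.listdir("/sys/block") if d.startswith("sd"))
for d in disks:
    model = open(f"/sys/block/{d}/device/model").read().strip()
    sectors = int(open(f"/sys/block/{d}/size").read())  # reported in 512-byte units
    print(f"{d}: {model}, {sectors * 512 / 1e12:.2f} TB")
print(f"{len(disks)} disks visible")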

The only difference is that I am not running a graphics card and I am using different E5-26xx v2 procs.
Also, I installed the OS on a prior build using an E5-2670 v1 and an E3-1275L; I just used the same SSD and moved it to this board. I think one OS install was done via IPMI, but I can't remember if that was on the older v1 or the newer v2.
 