Wondering if this SuperMicro is a good deal?


themaxx25

New Member
Mar 18, 2018
8
0
1
51
Hello All, I'm new to the forum. I've been navigating around the site for a little over a week now.

So I currently have a Qnap (TVS-EC1080+) which I'm about to outgrow by the end of summer...it was nice, but I need more capacity. I'm interested in a SuperMicro 24-bay from eBay seller UnixSurpluscom. Wondering if the price of the below specs is reasonable and if anyone has dealt with them. Would you buy from them again?

Thank you!

They are asking $3488 (they offered it to me at $3399) + shipping:
Supermicro 4U 24x 3.5" Drive Bays 1 Node
Server Chassis/ Case: CSE-846BE1C-R1K28B
Motherboard: X9DRi-LN4F+
Backplane: BPN-SAS3-846EL1 24-port 4U SAS3 12Gbps single-expander backplane, support up to 24x 3.5-inch SAS3/SATA3 HDD/SSD
PCI-Expansions slots: Full Height 4 x16 PCI-E 3.0, 1 x8 PCI-E 3.0, 1 x4 PCI-E 3.0 (in x8)
* Integrated Quad Intel 1000BASE-T Ports
* Integrated IPMI 2.0 Management
2x Intel Xeon E5-2690 V2 Deca (10) Core 3.0GHz
256GB DDR3
16 x 16GB - DDR3 - REG PC3-10600R (1333MHZ)
1x AOC-S3008L-L8e HBA 12Gb/s FREENAS JBOD Controller
24x 3.5" Supermicro caddy
2x 1280Watt Power Supply PWS-1K28P-SQ
Rail Kit 4U
Standard 30 Day Warranty
 

StammesOpfer

Active Member
Mar 15, 2016
383
136
43
That seller has been around a long time and many people here have used them in the past. Their prices may not be the best all the time but they are not usually outrageous.

That is a lot of RAM and CPU (and $$$) if all you are doing is Storage.
 

Joshin

Member
May 11, 2017
30
12
8
54
Ditto to what @StammesOpfer said. That's a goodly amount of compute power. More than makes sense for storage. RAM would be lovely for ZFS, but that's probably excessive too.
 

WeekendWarrior

Active Member
Apr 2, 2015
357
147
43
56
The answer to the question of whether "the price of the below specs is reasonable" boils down to whether the price compares favorably to market value, such as the total cost of the components bought separately. In that regard, the price is probably good but not great, as others noted.

The market price for the SAS3 backplane and your particular version of the 846 box is quite high - possibly because few are available on eBay etc. Those two components would cost over $2k if bought on eBay right now. The processors, motherboard, RAM, and HBA add around another $1500 on eBay, so the offered price is apparently better than component cost on eBay. Getting a system that is already put together is worth something - especially if someone is not very familiar with putting servers together or if your time is very valuable. Thus, this package may reflect a good value to you for these components. UnixSurplus is typically on the higher end of the reseller market and they don't seem to haggle much but they are reliable.

That said, we don't know your goals or reasons for selecting your above-described system, so we can't know if you're over-specifying your system. You'll pay a premium for the E5-2690v2 that would be hard for most people to justify relative to E5-2680v2 or E5-2670v1. You'll pay a huge premium for the particular 846 box you selected relative to a more conventional (older) one that can be had for $200-300. You'll pay a premium for the SAS3 backplane relative to a SAS2 backplane that can be had for $200-300. If you're focused on SAS3 performance, you can get the TQ backplane (which passes through any speed) for around $100 and use your selected SAS3 HBA with an expander (notwithstanding a large number of cables that would result from the HBA/TQ combination). You can achieve most of the performance you seek for far less $.

Additionally, many storage systems recommend 1GB of RAM for each TB of storage, and your 24 bays would need to be filled with 10TB disks to justify 256GB of RAM (unless you are using a lot of RAM for other things).
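To put that rule of thumb into numbers, here's a quick sketch in Python (the 10TB drive size is purely illustrative, chosen to show what it would take to line up with 256GB):

Code:
# Rough RAM sizing using the common "1GB of RAM per TB of storage" rule of thumb.
# This is a guideline, not a hard requirement; the 10TB drive size is only illustrative.
bays = 24
tb_per_drive = 10
gb_ram_per_tb = 1

raw_tb = bays * tb_per_drive
suggested_ram_gb = raw_tb * gb_ram_per_tb
print(f"{bays} bays x {tb_per_drive}TB = {raw_tb}TB raw")
print(f"rule of thumb suggests ~{suggested_ram_gb}GB of RAM")  # ~240GB, close to the 256GB offered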

Nonetheless, I hate it when someone asks a question and people respond not by answering their question but by telling the poster that they've asked the wrong question ;) So, I'm not doing that and several people have answered your question as posted.

Because your question implies that you don't know market value for these components and thus may not know whether they are overkill for your needs, this post supplements others' answers to explain why people are suggesting that either (1) you have special needs you're not telling us about or (2) you're over-purchasing (which is your prerogative of course).

Hope these thoughts help in some way.
 

StammesOpfer

Active Member
Mar 15, 2016
383
136
43
Not sure what the workload is, but if you are just using it for (bulk) storage, this is extreme overkill.

I'm sure others will be able to find better listings on eBay for full servers, but a quick scan shows this for around a third of the cost.

SUPERMICRO 4U 846BA-R920B X9DRI-LN4F+ 2x E5-2630L 16GB 24x TRAYS 3x LSI 9210-8I | eBay
The link didn't work, but I threw the text into the search bar and that worked.

This looks like a fairly good start. If you are going to run ZFS, then add some more RAM (32-64GB is what I would shoot for) and you should have a sweet storage server that still has some horsepower left over to run a couple of other services or VMs.
 

themaxx25

New Member
Mar 18, 2018
8
0
1
51
I appreciate the replies! I like that the forum tries to look out for others...that's the beauty of a good forum, to challenge one another and share different, yet helpful perspectives...even when it may not be asked for...lol. Thank you WeekendWarrior for providing a very detailed post specific to my question...It's what I was after. :)

I definitely plan to use it as more than just a storage array. My plan is to install ESXi and run multiple VMs...Domain Controller, VDI for the family, Sophos, some light web hosting, etc. It's going to be a general purpose box. I haven't made up my mind on whether to use FreeNAS; I need to do more research and would love to hear any additional thoughts from everyone.

Is there any reason that I couldn't use the 920Watt SQ PSU instead? Or should I stick with the larger PSU?

Again, thanks for the replies everyone!
 

StammesOpfer

Active Member
Mar 15, 2016
383
136
43
I appreciate the replies! I like that the forum tries to look out for others...that's the beauty of a good forum, to challenge one another and share different, yet helpful perspectives...even when it may not be asked for...lol. Thank you WeekendWarrior for providing a very detailed post specific to my question...It's what I was after. :)

I definitely plan to use it as more than just a storage array. My plan is to install ESXi and run multiple VMs...Domain Controller, VDI for the family, Sophos, some light web hosting, etc. It's going to be a general purpose box. I haven't made up my mind on whether to use FreeNAS; I need to do more research and would love to hear any additional thoughts from everyone.

Is there any reason that I couldn't use the 920Watt SQ PSU instead? Or should I stick with the larger PSU?

Again, thanks for the replies everyone!
Ok, it makes more sense now why you might want a higher-end system. You could still probably do it for less money by piecing together a few things along the lines of what @WeekendWarrior laid out. Use a SAS2 or TQ backplane, run E5-2660v1 or E5-2680v2 CPUs, and do 128-192GB of RAM (8GB sticks instead of 16GB). You won't lose much performance but you could cut the price by a lot. There is something to be said for a complete system too, though.

As for the 920SQ power supplies, you should be fine. Just doing rough back-of-the-napkin power calcs:
2x CPU (130W max/ea) - 260W
3x add-in cards @ 30W/ea - 90W
24x drives @ 15W/ea (for hot SAS drives) - 360W
16x sticks of RAM @ 1.5W/ea - 24W

That still leaves you with almost 200 watts for the motherboard and fans. Run lower-power drives and/or lower-power CPUs and you are totally fine.
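If you want to redo that napkin math with your own parts, here is the same calc as a quick Python sketch (all the wattages are the same rough assumptions as above, not measured numbers):

Code:
# Same back-of-the-napkin power budget as above; all wattages are rough assumptions.
PSU_WATTS = 920

loads = {
    "2x CPU @ 130W ea":         2 * 130,        # 260W
    "3x add-in cards @ 30W ea": 3 * 30,         # 90W
    "24x SAS drives @ 15W ea":  24 * 15,        # 360W
    "16x RAM sticks @ 1.5W ea": round(16 * 1.5) # 24W
}

total = sum(loads.values())
for name, watts in loads.items():
    print(f"{name:26} {watts:4}W")
print(f"{'total':26} {total:4}W")
print(f"headroom on a {PSU_WATTS}W PSU: {PSU_WATTS - total}W")  # ~186W left for board and fans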
 
Last edited:

themaxx25

New Member
Mar 18, 2018
8
0
1
51
Thanks for getting back to me! Is the 920SQ quieter than the 1280SQ? Or the exact same?

I like the idea of having it all built out, but if I can save by building it out myself, that could be a fun project! It's been a while since I've done even a PC build.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,672
1,081
113
artofserver.com
I'll echo similar opinions to others. But in particular, I think the 12Gbps backplane and HBA are pointless unless you plan to use 12Gbps SAS SSDs. There are no spinning hard drives that will even saturate 3Gbps; the only reason to go with 6Gbps is to avoid the hard drive size limitations of the older 3Gbps parts. You can save yourself quite a bit in cost by going with a system with an 846A backplane instead and using some 6Gbps HBAs, which are considerably cheaper than 12Gbps parts.

If you do plan to use 12Gbps, the 846 with 24x 3.5" bays is the wrong form factor, as most 12Gbps SAS SSDs I've seen are 2.5" format. You could, of course, use some sort of 2.5"->3.5" adapter, but that just adds cost that you could have avoided to begin with. Perhaps a Supermicro 216A is a better fit for something like 12Gbps SSDs?

For the dual CPUs and 24x HDD (assuming you're going with 3.5" spinning HDD), and as long as you're not adding some power-guzzling GPU cards, the PWS-920P-SQ should be plenty.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
2660v2 or 2680v2 are at a sweet price spot; consider checking them out.

High-end CPUs are a better fit for workloads that can't be split across multiple nodes, and they sometimes come with an extra premium just for that.
(For example, the MSRP for the 2690v4 is $2090 vs. $939 for the 2640v4, while it is only about 40% faster. You might want to compare the whole platform cost and make your own decision.)
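To make that concrete, here is a tiny sketch using just the list prices and the rough 40% figure above (real-world performance will depend on your workload):

Code:
# Quick price-per-performance comparison using the MSRPs quoted above;
# the 1.4x speedup is the rough "40% faster" figure, not a benchmark.
msrp_2690v4 = 2090
msrp_2640v4 = 939
relative_perf = 1.4   # 2690v4 vs 2640v4, rough assumption

price_ratio = msrp_2690v4 / msrp_2640v4        # ~2.2x the cost
perf_per_dollar = relative_perf / price_ratio  # ~0.63x the performance per dollar
print(f"2690v4 costs {price_ratio:.1f}x as much for ~{relative_perf:.1f}x the speed")
print(f"that is roughly {perf_per_dollar:.2f}x the performance per dollar")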

Dual 920W sounds like more than enough for such a build. We have a storage server with an old CPU and 12 drives running at ~400W.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,672
1,081
113
artofserver.com
@themaxx25 there is one more thing that might be worth mentioning if you are planning to do ZFS and are looking for very high throughput on reads: at least on ZFS on Linux, at higher throughput (3~5 GBytes/sec), CPU becomes a bottleneck. I have a non-paid (meaning, progress will be slow while I work on paid projects first) project to tune ZFS on Linux on an array of 22x 12Gbps SAS SSDs and my initial findings are that CPU bottlenecks on reads at those speeds. I've seen similar bottlenecks even with 24x HDD, but not as bad. Increasing block size seems to reduce the CPU load, so I'm suspecting it has to do with the checksums; but that's speculation and I need to confirm it.

anyway, i know most people say you don't need that much CPU for a storage server, but if you're doing very fast or very wide vdevs, that might not be the case and you may benefit from faster CPU.
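if anyone wants to poke at this on their own pool, here is roughly how you could watch CPU during a big sequential read. just a sketch in python (assumes psutil is installed; /tank/bigfile is a made-up path, point it at any large file on your pool):

Code:
# Sketch: sample system-wide CPU while a large sequential read runs,
# to see whether the read is CPU-bound. Assumes psutil is installed.
# /tank/bigfile is a hypothetical path; use any large file on your pool.
import subprocess, threading, time
import psutil

samples = []
done = threading.Event()

def sample_cpu():
    # system-wide CPU %, sampled once per second until the read finishes
    while not done.is_set():
        samples.append(psutil.cpu_percent(interval=1))

t = threading.Thread(target=sample_cpu)
t.start()

start = time.time()
subprocess.run(["dd", "if=/tank/bigfile", "of=/dev/null", "bs=1M"], check=True)
elapsed = time.time() - start

done.set()
t.join()

avg = sum(samples) / max(len(samples), 1)
print(f"read took {elapsed:.1f}s, avg CPU {avg:.0f}%, peak CPU {max(samples, default=0):.0f}%")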
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,095
656
113
Stavanger, Norway
olavgg.com
That is not my experience; I've seen read speeds that were limited by memory bandwidth, on both Linux and FreeBSD. I guess the opposite is true: the tool you benchmarked with was most likely what was capped by CPU.

Tested this recently on my old E5620
Code:
[olav@tank10 /tank]$ dd if=big_encrypted_file of=/dev/null bs=1M && w
19921+1 records in
19921+1 records out
20889466735 bytes transferred in 2.909313 secs (7180205402 bytes/sec)
12:11AM  up 9 days, 11:23, 2 users, load averages: 0.44, 0.34, 0.22
USER       TTY      FROM                                      LOGIN@  IDLE WHAT
olav       pts/0    192.168.100.48                           Mon11PM     - w
olav       pts/1    192.168.100.48                           12:00AM     4 top
 

themaxx25

New Member
Mar 18, 2018
8
0
1
51
@themaxx25 there is one more thing that might be worth mentioning if you are planning to do ZFS and are looking for very high throughput on reads: at least on ZFS on Linux, at higher throughput (3~5 GBytes/sec), CPU becomes a bottleneck. I have a non-paid (meaning, progress will be slow while I work on paid projects first) project to tune ZFS on Linux on an array of 22x 12Gbps SAS SSDs and my initial findings are that CPU bottlenecks on reads at those speeds. I've seen similar bottlenecks even with 24x HDD, but not as bad. Increasing block size seems to reduce the CPU load, so I'm suspecting it has to do with the checksums; but that's speculation and I need to confirm it.

anyway, i know most people say you don't need that much CPU for a storage server, but if you're doing very fast or very wide vdevs, that might not be the case and you may benefit from faster CPU.
Hey BLinux, that is very interesting! If you have any links to read up on this, I would appreciate it.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,672
1,081
113
artofserver.com
That is not my experience; I've seen read speeds that were limited by memory bandwidth, on both Linux and FreeBSD. I guess the opposite is true: the tool you benchmarked with was most likely what was capped by CPU.

Tested this recently on my old E5620
Code:
[olav@tank10 /tank]$ dd if=big_encrypted_file of=/dev/null bs=1M && w
19921+1 records in
19921+1 records out
20889466735 bytes transferred in 2.909313 secs (7180205402 bytes/sec)
12:11AM  up 9 days, 11:23, 2 users, load averages: 0.44, 0.34, 0.22
USER       TTY      FROM                                      LOGIN@  IDLE WHAT
olav       pts/0    192.168.100.48                           Mon11PM     - w
olav       pts/1    192.168.100.48                           12:00AM     4 top
that's interesting. i'll have to dig up my notes, but i benchmarked with dd, fio, iozone, and sysbench and all pretty much showed the same issue: on really fast reads, CPU maxed out. with the iozone test (and I think sysbench too; don't recall for sure right now), I even used multiple threads = number of cores available, and all cores maxed out. iozone reported CPU utilization like 1000% on reads, while on writes it was something like an order of magnitude less. this was the same for HDD and SSDs, just to differing degrees.

what kind of vdev did you get those results with?
 

BLinux

cat lover server enthusiast
Jul 7, 2016
2,672
1,081
113
artofserver.com
Hey BLinux, that is very interesting! If you have any links to read up on this, I would appreciate it.
no, nothing i can link you to. this was just something i discovered in my own setup; and I could be very wrong about it or perhaps there's a special case in my setup i'm not identifying that has these boundary conditions. take my information with a grain of salt as it's still very preliminary and i haven't had the time to investigate it further.
 

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
anyway, i know most people say you don't need that much CPU for a storage server, but if you're doing very fast or very wide vdevs, that might not be the case and you may benefit from faster CPU.
Who's saying this? Everything I've read says exactly the opposite... Samsung has an excellent white paper around...

From what I've gathered... more cores = more performance once you start to build big NVMe arrays...
 

BackupProphet

Well-Known Member
Jul 2, 2014
1,095
656
113
Stavanger, Norway
olavgg.com
that's interesting. i'll have to dig up my notes, but i benchmarked with dd, fio, iozone, and sysbench and all pretty much showed the same issue: on really fast reads, CPU maxed out. with the iozone test (and I think sysbench too; don't recall for sure right now), I even used multiple threads = number of cores available, and all cores maxed out. iozone reported CPU utilization like 1000% on reads, while on writes it was something like an order of magnitude less. this was the same for HDD and SSDs, just to differing degrees.

what kind of vdev did you get those results with?
This is 15 drives in a 3x raidz setup. I have 96GB of RAM on this machine, so I read the file from memory. Anyway, when I benchmarked this, it was dd that bottlenecked, not ZFS.