A not-so-high-density storage server


Deci

Active Member
Feb 15, 2015
I had a thread in the HDD section, but this is probably a better fit at this point.

Operating System / Storage Platform: will likely end up on Solaris 11.2 + napp-it or OmniOS + napp-it
CPU: Xeon E5-1620 v3
Motherboard: Supermicro X10SRH-CF
Chassis: 3x Supermicro 24-drive (846 series) 4U + 1x Supermicro (216 series) 24x 2.5" 2U
Disks: 66x 1TB WD SE + 1 SSD for L2ARC + 2x 32GB SATA DOM for the OS mirror
RAM: 128GB DDR4 - 8x 16GB 2133MHz
Add-in Cards: FusionIO ioDrive2 365GB for SLOG (ZIL), 8Gb FC card, LSI 9207-8e
Power Supply: redundant as per chassis

The 2U case is going to be the brains; the 24-drive cases will just act as JBOD storage boxes attached to it. I had initially intended to use 2.5" drives, but sourcing higher-performance drives at a reasonable price (in AU, or even getting them shipped to AU) was much harder than expected. Going back to 3.5" at this density takes up an extra 10RU but cuts costs considerably; compact would have been nice but wasn't essential.

I had planned to use more SSD for L2ARC, but after looking into it further and checking the actual hit rate of the L2ARC in the current system, I feel the money is better spent later on expanding some pure-SSD storage into the 2U case if required. The additional 6 slots over the 24-drive cases can then be used for higher-capacity, slower drives as a bulk storage pool if required.

The parts are slowly coming in; I have the motherboard, CPU, ioDrive and SATA DOMs. The RAM was purchased over the weekend and is on its way from the US, and the cases will also have to make their way over from the US when they become available. Drives and SSDs will be purchased locally for ease of any warranty claims.

The system will be acting as the new primary VM storage. The system that is currently running will be modified a bit and rebuilt as secondary storage to go to a second site to run redundant virtual machines. Both boxes will probably be set up to replicate their main VM storage to each other as a backup, with the added bonus that there is then a system capable of firing up the machines on either end with some minor config changes should something go wrong (this isn't/won't be the only backup system; it never hurts to have a couple of different types of backups).
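To illustrate the cross-replication side, here is a minimal sketch in Python of the snapshot-and-send loop the two boxes would effectively be running; the dataset, pool and host names are placeholders for illustration, not this build's actual config, and the same job can of course be scheduled by napp-it or cron rather than a script like this.

# A minimal sketch of the cross-site replication idea: snapshot the VM dataset
# and push an (incremental) zfs send stream to the peer box over ssh.
# Dataset, pool and host names here are placeholders, not this build's config.
import subprocess
import time

DATASET = "tank/vmstore"           # local dataset holding the VMs (placeholder)
PEER = "root@dr-storage"           # the second-site box (placeholder)
PEER_DATASET = "tank/vmstore-dr"   # receiving dataset on the peer (placeholder)

def replicate(prev_snap=None):
    """Snapshot DATASET and send it to PEER; returns the new snapshot name."""
    snap = "%s@repl-%d" % (DATASET, int(time.time()))
    subprocess.check_call(["zfs", "snapshot", snap])

    if prev_snap:
        send_cmd = ["zfs", "send", "-i", prev_snap, snap]   # incremental since last run
    else:
        send_cmd = ["zfs", "send", snap]                    # first run: full stream

    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.check_call(["ssh", PEER, "zfs", "receive", "-F", PEER_DATASET],
                          stdin=send.stdout)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")
    return snap

# First run: last = replicate(); later calls pass the previous snapshot name
# back in so only the changes since then are shipped across.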
 
  • Like
Reactions: MiniKnight

abstractalgebra

Active Member
Dec 3, 2013
MA, USA
Looks cool; I'm also interested to see pictures and hear more about the build.

Would the much faster 3-4TB drives end up being cheaper for you? Or perhaps two pools: one fast (SSD) and one slow (spinners)?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
66x 1TB drives or 11x 6TB drives... do you have cheap electricity? :) :)
Running those 3 chassis just to power 66 1TB drives seems VERY excessive with today's drive sizes. I bet shipping 10-15 drives to AU would also be astronomically cheaper than shipping 66 1TB drives.
 

Deci

Active Member
Feb 15, 2015
Looks cool; I'm also interested to see pictures and hear more about the build.

Would the much faster 3-4TB drives end up being cheaper for you? Or perhaps two pools: one fast (SSD) and one slow (spinners)?
The 2U chassis is there for future pure-SSD options.

A smaller number of larger drives would be cheaper, but it defeats the purpose of having lots of spinning disks for all the IO that misses the cache; the thinking is that spreading load over more disks means the spinners can soak up what the cache misses more easily. Current ARC stats show a hit ratio in the high 80s, and the 1TB of L2ARC shows 30%, meaning quite a lot of the reads aren't very consistent and are going straight to the spinning disks.

@T_Minus The power consumption isn't a deciding factor; having superior random IO over a large storage pool is more beneficial than the power savings from making it smaller with fewer disks.
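To put rough numbers on that, a back-of-envelope sketch; the per-disk IOPS figure is an assumption for 7200rpm SATA drives, and the hit ratios are the ones quoted above:

# Back-of-envelope: how much read IO actually reaches the spinners, and what
# 66 spindles can absorb. Per-disk random IOPS is an assumed figure for a
# 7200rpm SATA drive, not a measurement from this build.
arc_hit = 0.87          # "high 80s" ARC hit ratio from the current system
l2_hit = 0.30           # L2ARC catches ~30% of what misses ARC
per_disk_iops = 80      # assumed random read IOPS per 7200rpm spindle
spindles = 66

to_spinners = (1 - arc_hit) * (1 - l2_hit)
backend_iops = spindles * per_disk_iops

print(f"fraction of reads hitting spinning disks: {to_spinners:.1%}")   # ~9.1%
print(f"aggregate random read IOPS of the spindles: ~{backend_iops}")   # ~5280
# Note: with raidz vdevs, random IOPS scale more with the number of vdevs than
# with the raw spindle count, so the layout choice matters here too.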
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
What kind of IOPS are you hoping for?

Having them all on backplanes with one controller per backplane, you're going to hit the limit quickly, are you not?

It would seem that if you wanted the best IOPS from spinners, 900GB 15,000rpm SAS drives would be more along the lines of what you're after.

None of those RE or SE spinners will have the latency of a 10K or 15K rpm drive either, so I'm curious about the math, calculations, your estimations, etc. :)

Exciting project, and I look forward to PICTURES :D
 

Deci

Active Member
Feb 15, 2015
I don't have a set IOPS figure in mind. I will have a play around with the configs (11x 6-disk Z2 vs 6x 11-disk Z3, etc.) and see what it does just with raw disks, then throw the ioDrive into the mix. I fully expect it to limit pure write speeds quite a bit, but given its great small-write IO capability I don't see that as a particularly bad trade-off. The last part I will be adding is the L2ARC, as it may just end up not being that beneficial given the RAM thrown at the box for the ARC.
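On paper the two layouts compare roughly like this; a sketch using raw 1TB capacities and the usual rule of thumb that random IOPS scale with vdev count rather than disk count:

# Compare the two candidate layouts for 66x 1TB drives purely on paper:
# usable space after parity, and relative random-IO potential (vdev count).
def layout(vdevs, disks_per_vdev, parity, disk_tb=1.0):
    usable = vdevs * (disks_per_vdev - parity) * disk_tb
    return {"vdevs": vdevs, "disks": vdevs * disks_per_vdev, "usable_tb": usable}

a = layout(vdevs=11, disks_per_vdev=6, parity=2)   # 11x 6-disk raidz2
b = layout(vdevs=6, disks_per_vdev=11, parity=3)   # 6x 11-disk raidz3

for name, l in (("11x 6-disk z2", a), ("6x 11-disk z3", b)):
    print(f"{name}: {l['usable_tb']:.0f}TB usable across {l['vdevs']} vdevs "
          f"(random IOPS roughly proportional to vdev count)")
# 11x 6-disk z2 -> 44TB usable, 11 vdevs; 6x 11-disk z3 -> 48TB usable, 6 vdevs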

Yes, there are limits in the SAS setup, but in theory there is 24Gbit/s (~3GB/s raw) to each 4U chassis. I also don't expect them to ever perform at that level, partially due to bandwidth constraints in/out of the two LSI chips and because the load on them is far more random than sequential.
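For reference, the per-chassis figure breaks down roughly like this; a sketch, with 8b/10b encoding being the reason usable payload sits below the raw 3GB/s number:

# Rough SAS2 bandwidth per JBOD chassis: one x4 wide port at 6Gbit/s per lane.
lanes = 4
lane_gbit = 6.0                      # SAS2 line rate per lane
raw_gbit = lanes * lane_gbit         # 24 Gbit/s raw, as above
raw_gbyte = raw_gbit / 8             # ~3 GB/s if you ignore encoding
usable_gbyte = raw_gbit * 0.8 / 8    # 8b/10b encoding: ~2.4 GB/s of payload

drives = 24
print(f"raw: {raw_gbyte:.1f} GB/s, usable: ~{usable_gbyte:.1f} GB/s per chassis")
print(f"~{usable_gbyte * 1000 / drives:.0f} MB/s per drive if all 24 stream at once")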

The BEST IOPS and an acceptable level within budget are two very different kettles of fish. 70x 1TB SSDs (even just consumer level) is ~$37,000 AUD, 70x 900GB 10K SAS2 drives is about ~$14,000 AUD delivered (out of the US), and 70x 1TB WD SE drives sourced locally are ~$8,000 AUD. The trade-off is slower seeks, but not massively slower reads/writes compared to the SAS option; obviously neither will compare to pure SSD, but that just isn't affordable at the moment at the capacity that's required. Once larger drives built on the new 3D NAND start to come onto the market, prices should start to fall a bit on the (relatively speaking) smaller drives. I can accept the trade-off given it's a bit over half the price for 80-85% of the performance.
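Put as cost per raw TB, those three options work out roughly to the following (a small sketch over the rounded AUD figures above):

# Cost per raw TB for the three options quoted above (rounded AUD prices).
options = {
    "70x 1TB consumer SSD": (37000, 70 * 1.0),
    "70x 900GB 10K SAS2":   (14000, 70 * 0.9),
    "70x 1TB WD SE":        (8000,  70 * 1.0),
}
for name, (aud, raw_tb) in options.items():
    print(f"{name}: ~${aud / raw_tb:,.0f} AUD per raw TB")
# ~$529, ~$222 and ~$114 per raw TB respectively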

But it's all relative. A friend works with far larger and far faster storage/computing systems; they have a shelf of high-end PCIe solid-state drives that were replaced for being too slow for what they wanted, but their budget and intended purpose make that an option for them. My budget doesn't allow that, so I'm trying to get as much as I can where I can with what my budget allows (keeping in mind this system has to last a couple of years as a base, even if some pure SSD is added later, if nothing else then for applications that will benefit from the lower latency, e.g. SQL/email storage).
 
  • Like
Reactions: T_Minus

Deci

Active Member
Feb 15, 2015
RAM has arrived, so the motherboard is in a usable state now.



I have ghetto-rigged the board on a box and updated the firmware on the LSI 3008 chip to the latest IT-mode firmware for the 9300-8i, and it's now running memtest.



I have had confirmation that the 4U Supermicro cases have shipped, but there are still a few more bits to order/get shipped.

It's getting there, just slowly.
 

Deci

Active Member
Feb 15, 2015
The SM951 and the 4U Supermicro cases have arrived; still waiting on the 2U, the LSI 9207, some SAS cabling and the FC card.

Threw the SM951 into a PCIe-to-M.2 adapter card and it shows up as a regular drive with no issues under Solaris 11.2. Standard sequential testing showed 1.5GB/s writes; reads are hard to judge as they require a 260-odd GB test file due to the RAM size. It seems like a good drive for L2ARC purposes given its price/performance compared to the NVMe drives (though you don't get the same level of 4K performance).
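On the read-testing problem, the file just has to sit well past the 128GB of ARC so reads can't be served from RAM; a rough timed sketch of that kind of test, where the path is a placeholder and writing zeros is only meaningful with compression disabled on the dataset:

# Rough sketch of the sequential test: write a file well past ARC size, then
# read it back, so the read figure reflects the SM951 rather than RAM.
# PATH is a placeholder; zeros only make sense with compression off.
import os, time

PATH = "/l2pool/testfile"      # placeholder mountpoint for the device under test
RAM_GB = 128
SIZE_GB = RAM_GB * 2           # ~2x RAM, hence the "260-odd GB" file
BLOCK = 1 << 20                # 1 MiB blocks

def timed(label, fn, total_bytes):
    start = time.time()
    fn()
    print("%s: %.0f MB/s" % (label, total_bytes / (time.time() - start) / (1 << 20)))

def write():
    buf = b"\0" * BLOCK
    with open(PATH, "wb") as f:
        for _ in range(SIZE_GB * 1024):
            f.write(buf)
        f.flush(); os.fsync(f.fileno())

def read():
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass

total = SIZE_GB * 1024 * BLOCK
timed("sequential write", write, total)
timed("sequential read", read, total)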
 

Deci

Active Member
Feb 15, 2015
So, to add to the pictures: the 2U case has arrived and the guts are installed. Still waiting on some cables, the JBOD cards, and the LSI and QLogic cards.

But first, the 3x 24-bay 4U cases:



General overview of the guts, with the little SATA DOM disks tucked away at the side.



FusionIO drive; the current fibre card in there is just temporary and is likely going to be removed.



Samsung SM951 hiding behind the FusionIO card.



4 of the 8x 16GB DDR4 DIMMs.

 
  • Like
Reactions: Hank C

Deci

Active Member
Feb 15, 2015
So, on to some basic testing while I wait for the drives to arrive.

8x 1TB Raptors in a basic (RAID0-style) vdev off a single expander on the onboard SAS3 chip, no L2ARC, ioDrive2 as SLOG. Interface via dual 4Gb FC.



Can saturate both 4Gb links at once with sync on; sync off makes no appreciable real-world difference to speeds.
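For context, saturating both links works out to roughly the following; a sketch, with usable payload on 4Gb FC being about 400MB/s per link after encoding:

# Rough payload ceiling of the dual 4Gb FC front end (8b/10b encoding).
links = 2
gbit_per_link = 4.0
usable_mb_per_link = gbit_per_link * 1000 * 0.8 / 8     # ~400 MB/s
print("~%.0f MB/s per link, ~%.0f MB/s across both" %
      (usable_mb_per_link, links * usable_mb_per_link))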
 
  • Like
Reactions: Kristian

Deci

Active Member
Feb 15, 2015
Just to add, the 9207-8e and QLogic 2532 cards are in; still waiting on a low-profile bracket to show up for the QLogic card.

 
  • Like
Reactions: Kristian

Deci

Active Member
Feb 15, 2015
Some further testing. There are 4x FC ports in the system off of 2 cards, and each client is on a different virtual host via 2x Brocade 4Gb switches. Clients 1 and 2 are on Emulex FC cards, clients 3 and 4 are on QLogic cards, and the server is using QLogic cards. The Emulex cards seem to give more even reads/writes, while the QLogic cards give better reads but slower writes, at least in this setup.





It will be interesting to compare the speeds when there is a proper drive array behind it all, but even with this small array it's looking very promising; an upgrade to 8Gb FC may be on the cards a bit later.
 

Deci

Active Member
Feb 15, 2015
With the actual disks installed, here are some results. There appears to be a bottleneck of sorts: the speed scales fairly linearly up to 22 drives and then only increases by small amounts as more disks are added. The disks also show this clearly when writing/reading, as their activity lights aren't on solid compared to the 2-3 vdev tests. These are just numbers from the DD bench in napp-it; the OS is Solaris 11.2 and all tests were done with a 200+GB file size.

66-drive basic array - 3165MB/s write, 1900MB/s read

Unbalanced in chassis (random numbers of disks for each vdev in each chassis):
11-disk raidz3, 1 vdev - 1400MB/s write, forgot to note
11-disk raidz3, 2 vdevs - 2359MB/s write, 1644MB/s read
11-disk raidz3, 3 vdevs - 2412MB/s write, 1800MB/s read
11-disk raidz3, 6 vdevs - 2367MB/s write, 2233MB/s read
6-disk raidz2, 11 vdevs - 2068MB/s write, 2206MB/s read

Balanced in chassis (equal numbers of disks for each vdev in each chassis):
11-disk raidz2, 2 vdevs - 2199MB/s write, 1701MB/s read
11-disk raidz2, 3 vdevs - 2763MB/s write, 1840MB/s read
11-disk raidz2, 4 vdevs - 2202MB/s write, 1976MB/s read
11-disk raidz2, 5 vdevs - 2684MB/s write, 2094MB/s read
11-disk raidz2, 6 vdevs - 2345MB/s write, 2204MB/s read
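A quick way to see the bottleneck in the balanced numbers is to normalise them per vdev (a small sketch over the figures quoted above):

# Normalise the balanced 11-disk raidz2 results per vdev to show where the
# scaling flattens out (numbers are the MB/s figures quoted above).
balanced = {2: (2199, 1701), 3: (2763, 1840), 4: (2202, 1976),
            5: (2684, 2094), 6: (2345, 2204)}

for vdevs, (write, read) in balanced.items():
    print(f"{vdevs} vdevs ({vdevs * 11} disks): "
          f"{write / vdevs:.0f} MB/s write per vdev, {read / vdevs:.0f} MB/s read per vdev")
# Per-vdev write throughput drops steadily past 2-3 vdevs, which matches the
# "activity lights not solid" observation: something upstream of the disks is the limit.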
 

neo

Well-Known Member
Mar 18, 2015
I'm curious why you chose not to install your SATA DOM modules into the yellow ports? Those are self-powered and don't require the 2-pin power cable.
 

Deci

Active Member
Feb 15, 2015
Boot issues with other drives attached to the onboard SATA; it didn't like booting to the SATA DOMs, which were on ports 4 and 5, when other drives were attached to ports 0-3.
 
  • Like
Reactions: T_Minus

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Boot issues with other drives attached to the onboard SATA; it didn't like booting to the SATA DOMs, which were on ports 4 and 5, when other drives were attached to ports 0-3.
Only this model of motherboard, or have you run into this on other SM boards as well?

Curious, as I have ~10 to install in various boards and am wondering if only some have this issue or all of them.