How to build a Dot Hill FC 4824 equivalent?


dude05

New Member
Nov 19, 2016
I want to build it from parts off eBay and fresh new SAS drives.
FC at 16Gb/s out to my Linux box with an ATTO Celerity FC card.
Is it possible to use an old chassis, a new backplane, and FC cards?

Also, I would love to keep it whisper quiet.

Is it doable? Or is it best to buy a Supermicro chassis and work from there? I appreciate your inputs.

My previous plan to put 16x 2.5" SAS drives inside a chassis went well with a Fractal Design XL and an Areca ARC-1883ix-16 card. Now I'm looking at building a new external box.
 

dude05

New Member
Nov 19, 2016
Pretty much trying to replicate the Dot Hill. I already have SAS3 internally, and I understand there are limitations to cable length if I go external. I can't keep my machine far away, because UHD over long-distance DisplayPort is trouble, I hear; but storage can be elsewhere. I am hoping to keep this outside the main room in an air-conditioned rack, and if it can work out cheaper than branded boxes, I ideally want to switch all my storage to external over the coming years.
 

i386

Well-Known Member
Mar 18, 2016
Germany
>but storage can be elsewhere
If you want to build a DIY SAN/NAS, you could use normal Ethernet (with cheap 10/40GbE NICs from eBay, like Mellanox ConnectX-3/ConnectX-2) and fiber for longer distances.
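As a rough sketch (my numbers; the ~90% usable-after-overhead factor is an assumption), here's what those links give you in practice:

```python
# Back-of-the-envelope usable bandwidth for the links in question.
# Raw line rates are standard; the 0.9 protocol-overhead factor
# (TCP/iSCSI or FC framing) is an assumption.
EFFICIENCY = 0.9

links_raw_mbps = {
    "10GbE":  1250,   # 10 Gbit/s / 8
    "16G FC": 1600,   # 16GFC is specified at ~1600 MB/s per direction
    "40GbE":  5000,   # 40 Gbit/s / 8
}

for name, raw in links_raw_mbps.items():
    print(f"{name}: ~{raw * EFFICIENCY:,.0f} MB/s usable")
```

And OM3/OM4 fiber runs can go well past 100m, so 20 meters between workstation and storage is a non-issue.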

When you say "SAS drive", do you mean actual 10/15k RPM SAS drives or nearline SAS drives (7.2k RPM)?
 

dude05

New Member
Nov 19, 2016
>If you want to build a DIY SAN/NAS, you could use normal Ethernet (with cheap 10/40GbE NICs from eBay, like Mellanox ConnectX-3/ConnectX-2) and fiber for longer distances.
>
>When you say "SAS drive", do you mean actual 10/15k RPM SAS drives or nearline SAS drives (7.2k RPM)?

Targeting 10k drives, the 1.2TB ones. I already have the 600GB ones, but I am hoping for much better performance. How's the real-world difference between 24x 1TB SATA nearline drives and 24x SAS drives? I don't have any 2.5" enterprise SATA storage yet, so I'm unable to compare. Is it worth the extra $$$ for SAS drives?

Looking at DAS, but the storage stays 20 meters away from the workstation.
 

i386

Well-Known Member
Mar 18, 2016
Germany
>How's the real-world difference between 24x 1TB SATA nearline drives and 24x SAS drives?
Nearline: up to 220 MB/s, ~75-120 IOPS
SAS: up to 250 MB/s, ~150-220 IOPS

If you want performance, use SSDs. Even an "old", used enterprise SATA SSD like the Intel S3500 will outperform SAS drives. (You would need ~50 15k RPM SAS drives in RAID 0 to get the random IOPS of that SSD.)
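A quick sanity check on that drive-count claim (the S3500 figure, roughly 11k random write IOPS per Intel's spec sheet, is my addition):

```python
# RAID 0 random IOPS scale roughly linearly with drive count.
sas_15k_iops = 220          # upper end of the per-drive estimate above
s3500_write_iops = 11_000   # Intel DC S3500, 4k random write (spec sheet)

print(f"~{s3500_write_iops / sas_15k_iops:.0f} x 15k SAS drives in RAID 0 "
      "to match one S3500")  # -> ~50
# Its random *read* IOPS (~75k) would take several hundred SAS drives.
```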

In my opinion, SAS HDDs are useless for performance.
 

cheezehead

Active Member
Sep 23, 2012
Midwest, US
Dot Hill setups generally aren't known for being quiet, but they do offer redundant controllers and are an easy turn-key setup. Supermicro, on the other hand, offers a few SSG chassis models with SAS3; they take a bit more work to set up but give you more flexibility down the road. Also remember, if you're looking at the Supermicro solution, to make sure your OS can support the shared-backplane scenario.
 

i386

Well-Known Member
Mar 18, 2016
Germany
>Also, I would love to keep it whisper quiet.
Forgot this point until @cheezehead mentioned it.
You can get the server quiet, but not whisper quiet without fan mods. And with fan mods you will need active coolers on the CPU(s) and maybe fans for the add-in cards.
 

dude05

New Member
Nov 19, 2016
I would ideally love to have an SSD RAID. The trouble is that I tried my first one using 4x 850 EVO 1TB in RAID 5, and I had constant troubles while running Autodesk Flame; this was last Jan-Feb. Since then I never looked at SSD RAID again. My target is 2,000 MB/s throughput for uncompressed 4K files, r/w, with the least latency possible. It has to be RAID 5, 6, or 10, and considering the cost factor, I figured SAS works out cheaper and, with 16-24 drives, gives me better performance. I could be totally wrong, and I may simply have used the wrong SSD drives for RAID 5.
 

i386

Well-Known Member
Mar 18, 2016
Germany
850 EVOs (and other consumer SSDs) are not made for such a workload; their performance degrades under sustained reads/writes.

For 2+ GB/s throughput I would look at solutions with SSD caching. I don't know if the Areca controllers support SSD caching, but if they do, you could build a cheap storage box with high throughput that way.
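To see how far caching can take you, here's a toy model (all numbers are assumptions for illustration, not measured figures) of effective throughput versus cache hit rate:

```python
# Toy model: effective throughput of an SSD-cached platter array.
ssd_mbps = 4000   # assumed aggregate SSD cache tier speed
hdd_mbps = 1500   # assumed aggregate platter array speed

def effective_mbps(hit_rate: float) -> float:
    """Weighted harmonic mean: each byte is served by cache or by disk."""
    return 1.0 / (hit_rate / ssd_mbps + (1.0 - hit_rate) / hdd_mbps)

for h in (0.0, 0.5, 0.9, 0.99):
    print(f"hit rate {h:.0%}: ~{effective_mbps(h):,.0f} MB/s")
# With these numbers you clear 2 GB/s at around a 40-50% hit rate, i.e.
# the cache only helps if the working set actually fits in it.
```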
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Assuming you want 2GB/s throughput r/w (meaning both directions), i.e. maxing out a 16Gb/s FC connection, your bare minimum, throughput-wise, will likely be at least 20 platter drives in a RAID 10 config, and that's only assuming a sequential workload to/from a single client. SSD caching will likely be necessary to mitigate any random I/O in the mix. More likely you're going to need 30 or 40 drives to sustain 2GB/s read/write, since reading and writing to the same array at the same time implies it's not going to be a sequential workload.
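The arithmetic behind those estimates, spelled out (per-drive sequential speed taken from i386's figures above; the mixed-workload derating factor is my assumption):

```python
target_mbps = 2000
drive_seq_mbps = 220     # 10k SAS sequential, per the figures above

# RAID 10: writes go to both halves of each mirror, so only half the
# spindles contribute write throughput; writes are the bottleneck.
print(f"sequential: ~{2 * target_mbps / drive_seq_mbps:.0f} drives")  # ~18

mixed_derate = 0.6       # assumed penalty for concurrent read+write
print(f"mixed r/w:  ~{2 * target_mbps / (drive_seq_mbps * mixed_derate):.0f} "
      "drives")          # ~30
```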

As far as SSD caching goes, IIRC there have been problems with Samsung consumer drives hanging off LSI HBAs. You shouldn't really use "consumer" drives for this purpose anyway, since their performance (both sequential and random) under sustained load goes to crap; you'll want either enterprise SSDs or Optane as the cache for your platter-based RAID.
 

dude05

New Member
Nov 19, 2016
I wouldn't need 2GB/s sustained r/w; it's more like random r/w. I read/write uncompressed 10-16 bit HD-4K movies (as DPX image sequences) off the drive in multiple streams. If it can do 4x the speed necessary for uncompressed 4K, I am a happy guy. My current rig (16x 2.5" 10k SAS drives) gives me about 1,500 MB/s with write cache on, over SAS and an Areca ARC-1883ix-16. I can add another 4 drives via the external SAS port, but that's the most I can hold inside the cage. My worry now is that the heat from the drives being dumped onto the mobo/PCIe cards is a bit too much, and perhaps it's best to take the entire thing outside.
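For context, the arithmetic on one uncompressed 4K DPX stream (frame geometry and rate are assumptions: DCI 4K at 24 fps; 10-bit RGB DPX packs three 10-bit samples into a 32-bit word, i.e. 4 bytes per pixel):

```python
width, height = 4096, 2160   # assumed DCI 4K frame
bytes_per_pixel = 4          # 10-bit RGB packed into 32 bits (DPX)
fps = 24                     # assumed frame rate

stream_mbps = width * height * bytes_per_pixel * fps / 1e6
print(f"one 4K stream: ~{stream_mbps:,.0f} MB/s")      # ~850 MB/s
print(f"4x streams:    ~{4 * stream_mbps:,.0f} MB/s")  # ~3,400 MB/s
```

So "4x uncompressed 4K" lands around 3.4 GB/s, noticeably beyond what the current 16-drive rig delivers.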

What chassis should I get, and what controller card can do the job for me? FC or SAS is fine with me for the next 1-2 years, but FC would be ideal in the long run because of the distance I can run the cables.
 

dude05

New Member
Nov 19, 2016
Update:
In the end, I managed to fit 16x 2.5" SAS drives using Icy Dock cages in the front of a Fractal Design XL. Everything was nice and good, and then I hit the major problem: HEAT. Now I'm realizing I did something completely moronic, and I'm going to find ways to switch to an SSD RAID. Aircon at 18°C in my room is not helping much; it keeps the SAS drives at 41-48°C, and they slow down once they go above 45°C.
 


Patrick

Administrator
Staff member
Dec 21, 2010
Hard drives and NVMe drives generate a ton of heat.

Also, stay away from Samsung consumer drives for performance. Those are known to have higher-than-normal failure rates in servers.

Looking for read or write IOPS? How much capacity do you really need?
 

dude05

New Member
Nov 19, 2016
>Hard drives and NVMe drives generate a ton of heat.
>
>Also, stay away from Samsung consumer drives for performance. Those are known to have higher-than-normal failure rates in servers.
>
>Looking for read or write IOPS? How much capacity do you really need?
@Patrick: Sorry, I forgot to reply. I am looking at 3,000 MB/s (with a possible upgrade to 8,000 MB/s) and 7-10TB of storage for now, which I can later expand to maybe 20TB max.

Currently thinking between 10x 960GB Samsung PM853T SSDs, 10x 960GB Intel S4500s, and 5x 1.9TB Samsung SM863a.
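Rough usable capacity of those options, assuming RAID 5 with one drive's worth of parity (RAID 6 or 10 would give less), and assuming the Intel option is also 10 drives:

```python
# RAID 5 usable capacity = (n - 1) * drive size.
options = {
    "10x 960GB (PM853T or S4500)": (10, 0.96),
    "5x 1.9TB (SM863a)":           (5, 1.92),
}
for name, (n, tb) in options.items():
    print(f"{name}: ~{(n - 1) * tb:.1f} TB usable")
# -> ~8.6 TB and ~7.7 TB, both inside the stated 7-10 TB target
```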