Help with building JBOD for R815


joey238

New Member
Apr 14, 2018
Hi all,

I picked up 4 x PowerEdge R815 servers at auction. They came fully loaded: 4 x Opteron 6276, 256 GB DDR3, PERC H700, QLogic QLE8242, 2 x 1100 W PSUs, etc.

I am very new to working with servers and have some basic questions for a build I'm considering.

Firstly, this is mostly just a project to have some fun tinkering with. I don't expect to actually make a profit (at least not within 2 years), but I want to get at least one of these servers set up to rent out storage space, probably with Storj, as I've already been renting with Storj on a much smaller scale. Farther down the road, and depending on power consumption, it might make sense to use any spare CPU power to mine a CryptoNight coin or something. We'll see.

Each Storj node requires 1 core and at least 2 GB of RAM, and can serve up to 8 TB. Theoretically, then, since the Opteron 6276s each have 16 cores, I figure I could put up to 64 x 8 TB drives on one of these things. I don't want to drop a ton of money on enclosures of any kind, even cheap 24-bay Norco rackmounts, if it can be helped; I'd rather just connect a ton of 8 TB Barracuda SATA drives (the cheapest option I've found) on a shelving rack or something, using some extra fans I have and either an ATX PSU or, ideally, one of the many 1100 W PSUs I now have.
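For what it's worth, a quick back-of-the-envelope check of that sizing, using the per-node figures quoted above (1 core, 2 GB RAM, up to 8 TB) and assuming one 4 x 6276 / 256 GB box:

```shell
#!/bin/sh
# Rough Storj sizing for one R815: per-node needs are 1 core and 2 GB RAM,
# each node serving up to 8 TB. Whichever resource runs out first is the cap.
cores=$((4 * 16))          # 4 CPUs x 16 cores = 64
ram_nodes=$((256 / 2))     # 256 GB / 2 GB per node = 128
nodes=$cores               # start from the core count...
[ "$ram_nodes" -lt "$nodes" ] && nodes=$ram_nodes   # ...RAM caps it if lower
echo "max nodes: $nodes, max capacity: $((nodes * 8)) TB"
```

So on these numbers the core count, not the RAM, is the binding limit, which matches the 64-drive figure above.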

The questions begin. How do I actually connect 48+ drives? I'm thinking I'll need a few SAS expansion cards? I've never used one. Do I need to connect them to each other and also to the H700 RAID controller? From what I can tell, the H700 can handle 8 TB drives, but it only has 2 ports, and I don't think I have any use for hardware RAID anyhow. Storj and other blockchain-based storage services already handle data redundancy via sharding and such, so the only use I can see for RAID is creating a ton of single-drive RAID 0 arrays. Ideally, I think, I'd just mount each drive separately using LVM in Ubuntu. Should I get some other HBA, then? I've seen the LSI 9211-16i mentioned on here a few times. It seems I would just connect the HBA to two or three 6-port SAS expansion cards, and then connect 4 SATA drives per port using SAS-to-SATA breakout cables. Is this correct?
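In case it helps anyone, here's a minimal sketch of the "mount each drive separately" part. The helper name and the /mnt/storj mount point are my own invention, and LVM is actually optional when every drive stands alone, so this just does one plain ext4 filesystem per disk:

```shell
#!/bin/sh
# Hypothetical helper: print the setup commands for one drive
# (one ext4 filesystem per disk, mounted under /mnt/storj -- no RAID).
# Printing instead of executing makes it safe to review first.
plan_drive() {
    dev="$1"
    name=$(basename "$dev")
    echo "mkfs.ext4 -F $dev"
    echo "mkdir -p /mnt/storj/$name"
    echo "mount $dev /mnt/storj/$name"
}

# Example: emit the plan for every SATA disk the kernel currently sees.
for dev in /dev/sd?; do
    [ -b "$dev" ] && plan_drive "$dev"
done
```

Once the device list looks right, the printed commands can be run as root (and the mounts added to /etc/fstab so they survive a reboot).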

To power the drives, there are breakout boards that will work with the 1100 W PSUs - something like this. These are awesome because they have 16 6-pin ports (I'd split each one into 2-4 connections), and I can link up 2 of the 1100 W PSUs if need be to account for the extra wattage drawn at spin-up. How would I connect them to the motherboard, though? Or even a standard ATX PSU? I don't see any jumper pins on the R815 like the kind I'm used to on ATX motherboards. Can I just turn the drive PSU(s) on before booting up the server?

Also, is the Qlogic fibre channel card of any potential use? Seems like a waste.

Sorry for so many questions and for being a total newb. I would greatly appreciate suggestions or recommendations of any kind.

Thanks!

Also, in case anyone is interested, I'm going to have a couple extra R815s. Will likely post in the For Sale Forum.
 

sinisterDei

Member
Mar 25, 2018
Houston, TX
Alright, a few things here.

Firstly, beware that the Opteron 6276 CPUs are 8/16 core. They have 16 integer execution units but only 8 floating-point units. It's more like 16 cores than an 8-core Hyper-Threaded CPU is, but it's not exactly a pure 16 cores either. Plus, you know, by today's standards they're slow as balls, if that matters.

Secondly, if you pick up the Barracuda drives that are from their consumer line, beware they have some power management features that LSI controllers (like the H700) disagree with and can cause drives to drop out. Google "disable apm seagate barracuda" and you should be able to figure out how to disable the feature if you still end up with those drives.
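To sketch what that fix ends up looking like (the helper name is mine, and this just prints the commands to review before running as root): hdparm's `-B 255` setting disables APM outright, and Seagate's openSeaChest tools are an alternative route to the same thing.

```shell
#!/bin/sh
# Hypothetical sketch: print the hdparm command that disables APM
# (-B 255 = APM off entirely) for every SATA disk the kernel sees,
# so consumer Barracudas stop aggressively parking behind the H700.
apm_off_cmd() {
    echo "hdparm -B 255 $1"
}

for dev in /dev/sd?; do
    [ -b "$dev" ] && apm_off_cmd "$dev"
done
```

Note the setting doesn't survive a power cycle on all drives, so it may need to go in a boot script.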

Depending on your specific H700, you should be able to flash them to their 9260-8i IT mode equivalent. This would turn them into nice HBAs and disable all the RAID functionality.

The H700 has two ports, but they're quad-lane 6 Gb/s SAS connectors. You should be able to run them into a SAS expander and drive a bunch of disks from them. How many is a 'bunch' I'm not exactly sure, but I would guess 128. You would achieve this by running the dual ports into the first SAS expander and then daisy-chaining that expander to the next. Obviously, depending on your throughput requirements, this may or may not be a good idea, since all the drives will be sharing 8 x 6 Gb/s of total bandwidth to the HBA.
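To put a rough number on that shared bandwidth (drive count of 48 is just the figure from the original post; real throughput will be lower than line rate):

```shell
#!/bin/sh
# Back-of-the-envelope: 2 ports x 4 lanes x 6 Gb/s, shared by every
# drive hanging off the expander chain.
lanes=$((2 * 4))
total_gbps=$((lanes * 6))                     # 48 Gb/s to the HBA
drives=48
per_drive_mbps=$((total_gbps * 1000 / drives))
echo "total: ${total_gbps} Gb/s, per drive: ${per_drive_mbps} Mb/s"
```

That works out to roughly 1 Gb/s (~125 MB/s) per drive with all 48 busy at once, which is in the same ballpark as a single spinning disk's sequential rate, so for a Storj workload the shared uplink is unlikely to be the bottleneck.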

As for powering the drives, most chassis you'd want that can hold 24+ drives will have some kind of backplane, and that backplane will distribute power from 2 to 6 power connectors on a standard ATX power supply. Some backplanes have the SAS expander built in; others are just a bunch of drive connections set up for easy hot-swap.
 

joey238

New Member
Apr 14, 2018
Wow, very informative. Thank you kindly. I didn't realize that about the processor cores, and had no idea about the Seagate power issues or that I could flash the H700s like that. If that works it will be perfect... and save me a bundle. Will attempt to do so tomorrow.

Much appreciated.
 

shanehm2

New Member
Jan 12, 2018
If you're going the Seagate route, get IronWolf Pros - they have a 5-year warranty and are commonly used in NAS builds.