Hi all,
I picked up 4 x PowerEdge R815 servers at auction. They came fully loaded - 4 x Opteron 6276, 256 GB DDR3, PERC H700, QLogic QLE8242, 2 x 1100 W PSUs, etc.
I am very new to working with servers and have some basic questions for a build I'm considering.
Firstly, this is mostly just a project to have some fun tinkering with. I don't expect to actually make a profit (at least not within 2 years), but I want to get at least one of these servers set up to rent storage space - probably with Storj, as I've already been renting with Storj on a much smaller scale. Farther down the road, depending on power consumption, it might also make sense to use any extra CPU power to mine CryptoNight or something. We'll see.
Each Storj node requires 1 core and at least 2 GB of RAM, and can share as much as 8 TB. Theoretically, then, since the four Opteron 6276s have 16 cores each (64 total), I figure I could put up to 64 x 8 TB drives on one of these things. I don't want to drop a ton of money on enclosures of any kind - even cheap 24-bay Norco rack mounts - if it can be helped; I'd rather just connect a ton of 8 TB Barracuda SATA drives (the cheapest solution I've found) on a shelving rack or something, using some extra fans I have and either an ATX PSU or, ideally, one of the many 1100 W PSUs I now have.
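To show my math (the per-node numbers are my assumptions from what I've read, not official Storj specs):

```python
# Rough sanity check on how many Storj nodes one R815 could host.
# Per-node requirements below are my assumptions, not official figures.
CORES = 4 * 16           # four Opteron 6276s, 16 cores each
RAM_GB = 256
CORES_PER_NODE = 1
RAM_PER_NODE_GB = 2
MAX_TB_PER_NODE = 8

nodes_by_cpu = CORES // CORES_PER_NODE       # 64
nodes_by_ram = RAM_GB // RAM_PER_NODE_GB     # 128
nodes = min(nodes_by_cpu, nodes_by_ram)      # CPU is the limit: 64
print(nodes, nodes * MAX_TB_PER_NODE)        # 64 nodes, 512 TB ceiling
```

So RAM isn't the bottleneck here - the 64 cores are.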
The questions begin. How do I actually connect 48+ drives? I'm thinking I'll need a few SAS expander cards? I have never used one. Do I need to connect them to each other and also to the H700 RAID controller? From what I can tell, the H700 can handle 8 TB drives, but it only has 2 ports, and I don't think I have any use for hardware RAID anyhow. Storj and other blockchain-based storage services already handle data redundancy via sharding and such, so the only use I can see for RAID here is creating a ton of single-drive RAID 0 arrays. Ideally, I think, I'd just mount each drive separately in Ubuntu (maybe with LVM). Should I get some other HBA, then? I've seen the LSI 9211-16i mentioned on here a few times. It seems I would just connect the HBA to two or three 6-port SAS expanders, then connect 4 SATA drives per expander port using SAS-to-SATA breakout cables. Is this correct?
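Here's the fan-out math as I understand it (port counts are assumed from datasheets I've skimmed, so please correct me if they're wrong - I'm assuming the 9211-16i exposes 4 internal SFF-8087 ports, and that a typical 6-port expander spends one port as the uplink to the HBA):

```python
# Back-of-the-envelope drive fan-out for the topology I'm imagining.
# Assumptions (unverified): 9211-16i has 4 internal SFF-8087 ports;
# each expander has 6 ports, 1 of which is the uplink to the HBA;
# each remaining port splits to 4 SATA drives via a breakout cable.
HBA_PORTS = 4
EXPANDER_PORTS = 6
UPLINKS_PER_EXPANDER = 1
DRIVES_PER_PORT = 4

expanders = 3  # one HBA port would go unused
drives = expanders * (EXPANDER_PORTS - UPLINKS_PER_EXPANDER) * DRIVES_PER_PORT
print(drives)  # 60 drives - comfortably over the 48 I'm targeting
```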
To power the drives, there are breakout boards that will work with the 1100 W PSUs - something like this. These are awesome because there are sixteen 6-pin ports (I'd split each one into 2-4 connections), and I can link up 2 of the 1100 W PSUs if need be to account for the extra wattage the drives pull on startup. How would I connect them to the motherboard, though? Or even a standard ATX PSU? I don't see any jumper pins on the R815 like the kind I'm used to on ATX motherboards. Can I just turn the drive PSU(s) on before booting up the server?
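For the startup wattage I mentioned, here's my rough budget (the spin-up current is an assumed typical datasheet figure for desktop SATA drives, not something I've measured - check the spec sheet for the exact model):

```python
# Why spin-up matters: drives draw far more at startup than at idle.
# Assumed (not measured): ~2.0 A on the 12 V rail and ~0.6 A on 5 V
# per drive at spin-up, roughly what 8 TB desktop drive datasheets list.
DRIVES = 48
SPINUP_W = 2.0 * 12 + 0.6 * 5    # ~27 W per drive at spin-up
total_spinup = DRIVES * SPINUP_W
print(round(total_spinup))        # ~1296 W if all 48 spin up at once
```

If those figures are in the right ballpark, a single 1100 W PSU can't cover all 48 spinning up simultaneously - hence the second PSU, or staggered spin-up if the controller supports it.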
Also, is the QLogic fibre channel card of any potential use? Seems like a waste otherwise.
Sorry for so many questions and for being a total newb. I would greatly appreciate suggestions or recommendations of any kind.
Thanks!
Also, in case anyone is interested, I'm going to have a couple extra R815s. Will likely post in the For Sale Forum.