Hey folks,
I've been a data hoarder for a while, but I recently took the plunge and moved my Unraid server into an old Supermicro setup I bought off the 'bay. Previously I ran a desktop motherboard with a couple of HBAs and pass-through backplanes, so bandwidth was never an issue. The machine has:
X9DRi-F motherboard
Supermicro BPN-SAS2-846EL1 backplane
LSI 9211-8i HBA (PCIe 2.0 x8)
17× 3.5" 5400 rpm HDDs
Unraid needs to read from all the drives at once, so overall bandwidth can be a bottleneck. When reading from all drives simultaneously, throughput is capped at almost exactly 2 GB/s. I want to go up to 24 drives, which would limit them to less than 100 MB/s each, a pretty significant impairment. So I need to figure out where the bottleneck is, but it isn't turning out to be as simple as I'd hoped.
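To put numbers on that, here's my back-of-the-envelope math (my figures, assuming the 2 GB/s cap holds and typical 5400 rpm sequential speeds):

```python
# Rough sketch: per-drive bandwidth if the array stays capped at 2 GB/s aggregate.
cap_mb_s = 2000                 # observed aggregate ceiling, ~2 GB/s
drives_now, drives_goal = 17, 24

per_drive_now = cap_mb_s / drives_now    # ~118 MB/s each today
per_drive_goal = cap_mb_s / drives_goal  # ~83 MB/s each at 24 drives

# A 3.5" 5400 rpm drive can typically sustain roughly 150-180 MB/s on outer
# tracks, so 17 drives could in principle push ~2.5-3 GB/s -- above the cap,
# which is why I don't think the drives themselves are the limit.
print(round(per_drive_now), round(per_drive_goal))
```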
The backplane: the EL1 has 3 SFF-8087 ports on the back. The server came with an LSI RAID card connected to two of the ports, along with two SFF-8087 cables, which I'm now using.
As I understand it, the backplane lets you connect two SFF-8087 cables from the HBA to double the bandwidth, which should give me about 4 GB/s. Explained here: https://forums.servethehome.com/index.php?threads/home-server-build.18782/#post-182595
I only get 2 GB/s, which suggests only one SFF-8087 link is actually being used. But performance seemed to fall off with just one SFF-8087 connected (maybe I connected the wrong one?).
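The raw link math seems to back that up (my arithmetic, assuming SAS2 6 Gb/s lanes with 8b/10b encoding overhead):

```python
# SAS2 link budget: 6 Gb/s per lane with 8b/10b encoding => ~600 MB/s usable
# per lane. Each SFF-8087 connector carries 4 lanes.
lane_mb_s = 600
lanes_per_8087 = 4

one_port = lane_mb_s * lanes_per_8087   # 2400 MB/s -- suspiciously close to the 2 GB/s cap
wide_port = one_port * 2                # 4800 MB/s if both connectors aggregate as a wide port

# The HBA itself sits on PCIe 2.0 x8: ~500 MB/s effective per lane * 8 lanes,
# so even with a working wide port, ~4 GB/s is about the realistic ceiling.
pcie2_x8 = 500 * 8
print(one_port, wide_port, pcie2_x8)
```

So the observed 2 GB/s lines up almost exactly with a single connector's worth of SAS2 lanes, which is what makes me suspect the wide port isn't negotiating.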
I'm on the latest firmware for the motherboard and HBA; I'm not sure how (or whether) the backplane's expander firmware can be upgraded. It could be a compatibility issue with the HBA, but I tried a 9201 I had lying around and hit the same bottleneck.
Any suggestions as to where I could go from here?
As an aside, I'm also getting poor write performance from SATA SSDs connected to the motherboard's SATA III ports. The board isn't that old; I'd expect them to run at full speed (~500 MB/s), but writes top out around 175 MB/s.
Thanks in advance!