Build Thoughts


carlosmp

New Member
Mar 12, 2013
18
1
3
Hi,

I'm putting together a pair of servers for storage, using some older SuperMicro X7DBE+ boards with 32GB RAM, dual L5420s, and two M1015s (IT mode). I have a few options that I'm considering:

2 x 60GB Kingston v300 for ZIL
2 x Intel 520 180GB for L2ARC
8 WD 3TB Red Drives in RAIDZ2
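A layout like that would be created along these lines. This is only a sketch: the pool name "tank" and all device names are hypothetical placeholders for the real disk IDs.

```shell
# Sketch of the proposed layout; device names (c1t0d0, etc.) are
# hypothetical placeholders for the actual disk identifiers.
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                         c1t4d0 c1t5d0 c1t6d0 c1t7d0   # 8x WD Red in RAIDZ2
zpool add tank log mirror c2t0d0 c2t1d0                # mirrored Kingston V300s as ZIL/SLOG
zpool add tank cache c2t2d0 c2t3d0                     # Intel 520s as striped L2ARC
```

Note the log devices are mirrored (losing an unmirrored SLOG with unflushed sync writes can cost data), while cache devices stripe because L2ARC contents are expendable.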

We're going to be using some InfiniBand cards on OmniOS and will see how network reads/writes go from our Proxmox nodes (on a C6100).

I'm thinking of buying more Reds and putting up two systems with 6 drives each to get replication/HA. This also lets us add more drives later and expand if needed.

I've run some standard dd/iozone tests and did not really see any "major" improvements using the ZIL/L2ARC. I'm sure the benefit won't be visible until the system is under actual load, so I should probably just monitor it while running and see what happens. I also tested some Constellation ES.2 SAS drives in the same configurations (RAID10/RAIDZ2, with/without ZIL/L2ARC): the difference was about 5-10% in iozone, and the Reds were faster on dd by about 6% on writes. Nothing earth-shattering. For backup storage, it's negligible; for NFS/iSCSI storage, it's probably worth the expense. These tests were run on FreeNAS, and my final system will be OmniOS + napp-it, which should be a bit faster...
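For reference, the sequential dd test I run looks roughly like this. POOL_MNT is a placeholder for the pool's mountpoint (it defaults to /tmp here so the sketch runs anywhere), and conv=fdatasync is the GNU dd spelling for flushing to disk so the page cache doesn't inflate the number.

```shell
# Rough sequential write test. POOL_MNT is a hypothetical mountpoint
# (e.g. /tank); defaults to /tmp so this sketch is runnable anywhere.
POOL_MNT=${POOL_MNT:-/tmp}
# Write 64 MiB in 2 MiB blocks; conv=fdatasync (GNU dd) forces the data
# to stable storage before dd reports the elapsed time and rate.
dd if=/dev/zero of="$POOL_MNT/ddtest.bin" bs=2M count=32 conv=fdatasync
rm -f "$POOL_MNT/ddtest.bin"
```

Scale count up (the thread's runs use count=50k, i.e. 100 GiB) so the working set comfortably exceeds RAM, otherwise you are partly benchmarking the ARC.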

For the cost of 60GB SSDs, it's not a bad thing to just throw at the server. The Kingston V300s use the same SandForce controller as the 520s, so they should be OK, especially if mirrored. It may not be 100% optimized, but it should be close.

Anything else I should consider?

TIA,

Carlos.
 

gea

Well-Known Member
Dec 31, 2010
3,163
1,195
113
DE
Only some thoughts.

I have had X7DBEs too. I hope you can reuse them for heating your house;
especially with 32 GB of FB-DIMM RAM, they will produce enormous heat without much computing power.
I threw mine out for this reason and moved to newer single-socket 2011/1155 boards. (CPU is not critical for ZFS.)

Regarding ARC:
With 32 GB RAM you will not hit the second-level ARC on SSD very often (check the ARC statistics).
In benchmarks it is never used. But it is OK to add any fast SSD; not really critical.
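Checking whether the L2ARC is actually being hit can be done from the kernel statistics on illumos/OmniOS; a sketch (which counters matter most depends on the workload):

```shell
# On illumos/OmniOS, ARC and L2ARC counters are exposed as kstats.
# Primary ARC hit/miss counts:
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
# L2ARC hit/miss counts:
kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses
# If l2_hits stays low relative to l2_misses over time, the 32 GB
# primary ARC is absorbing most of the working set and the L2ARC
# devices are adding little.
```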

Regarding ZIL:
The ZIL is only used on sync writes; in benchmarks it is only used when you set sync to always.
If you need sync writes (depends on the application, e.g. NFS on ESXi is always sync), you should
use, best in this order: DRAM-based devices like the ZeusRAM, SLC-based SSDs with a supercap, then MLC-based SSDs with a supercap.
You should not use one without a supercap, or the outcome is uncertain on power failure: compared with sync
disabled, you suffer the performance hit without gaining the security.
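Whether the log device is exercised at all is controlled per dataset by the sync property; for example (the dataset name tank/nfs is hypothetical):

```shell
# Force every write on this dataset through the ZIL/SLOG; this is
# also how you make benchmarks exercise the log device at all.
zfs set sync=always tank/nfs
# Verify the current setting.
zfs get sync tank/nfs
# sync=disabled skips the ZIL entirely: fast, but acknowledged
# writes can be lost on power failure.
```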

Good ZILs are SSDs like the Intel 320, or the SLC-based SSDs from Winkom (Winkom-shop) if you are in Germany.
They offer quite cheap MLC and SLC SSDs, the latter optionally in 120 GB with a supercap.

Regarding pool layout:
A RAIDZ2 built from 8x3 TB drives has good sequential performance but is quite slow when used by
multiple VMs, where you should look at I/O (IOPS) performance instead. For that, multiple mirrors with as many vdevs as
possible (or SSD-only pools in RAIDZ2) are best.
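For the VM case, the mirrored layout gea describes would look something like this (device names hypothetical; random-read IOPS scale roughly with the number of vdevs):

```shell
# Four mirror vdevs from the same 8 disks: roughly 4x the random
# IOPS of a single RAIDZ2 vdev, at the cost of 50% usable capacity
# versus RAIDZ2's 75%.
zpool create tank mirror c1t0d0 c1t1d0 \
                  mirror c1t2d0 c1t3d0 \
                  mirror c1t4d0 c1t5d0 \
                  mirror c1t6d0 c1t7d0
```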
 

carlosmp

New Member
Mar 12, 2013
18
1
3
gea,

Thanks for the comments. This server is mostly going to be used for backup storage; it's not going to host any real virtual OS disks. It may if there's a problem somewhere, but that would mostly be very small OpenVZ containers that don't do much disk activity, and only for a short period of time. I actually removed half the RAM, since these boxes had 64GB running several virtual machines on local storage. The only issue with these boxes is the 3 PCIe slots, but for the current purpose they're perfect: 2 slots for the M1015s and 1 slot for a dual-port QDR InfiniBand card. Yes, a bit warmer than some of our other systems, but it's a controlled environment, so I'm not worried about the heat. In the colo, heat is someone else's problem.

In preliminary tests, I didn't observe any huge gains in dd/iozone between RAID10 and RAIDZ1; it was quite different from what I expected. I was hoping to see big write/read gains on RAID10 vs RAIDZ1, but RAIDZ1 writes were actually much faster (about 60%), while reads were about 12% slower on RAIDZ1. Since this unit will primarily be used for backup storage, 500MB/s is more than sufficient. Gigabit connectivity will only pass 105-115 MB/s anyway, and NFS is not MPIO-capable; that's why we're switching the storage network to QDR InfiniBand. It'll take a while to saturate 32Gbps, but the backups, etc. will not be impeded by the network connection.

From dd bs=2M count=50k (Write | Read, MB/s) - Constellation ES.2 SAS 3TB:
RAID10 + ZIL + L2ARC - 323 | 682
RAIDZ1 + ZIL + L2ARC - 504 | 608

With respect to the ZIL SSDs and the supercap issue: we are in a colo, and if we have a power failure and the data isn't written, the data isn't written. We've been at this facility over 4 years and never lost power, and we're in South Florida with lots of hurricanes coming through, so I'm not going to lose sleep over it. Again, these boxes are for image-based backups; if there was a problem, the nightly consistency check will fail and the next snapshot takes care of it. The SSDs are in the box because the Kingstons were cheap and, with SATA 3 support, faster than the Intel 320. They may never be used, but if for some reason we need them, we don't have to bring the box down to install and set up the ZIL.

Thanks for the feedback.

Carlos.