32TB ZFS with 5 more 2TB drives coming...


Patrick

Administrator
Staff member
Ha! That is pretty darn cool. Doesn't seem super fast speed-wise... but great to see that kind of volume.
 

sboesch

Active Member
That is some space you got there! I am also curious about your 65MB/s speeds. I had the same behavior, and it turned out to be a failing drive.
 

gigatexal

I'm here to learn
lzjb compression. How do I check for failing drives?

Perhaps I set up the ZFS partitions wrong. I used gnop to align them.
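For reference, a minimal sketch of that gnop step, assuming FreeBSD and the first five da* devices from the pool listing below:

Code:
# create 4K-sector gnop overlays so ZFS picks ashift=12 for the vdev
gnop create -S 4096 /dev/da1
gnop create -S 4096 /dev/da2
gnop create -S 4096 /dev/da3
gnop create -S 4096 /dev/da4
gnop create -S 4096 /dev/da5
# build the raidz vdev on the .nop devices
zpool create zshare raidz da1.nop da2.nop da3.nop da4.nop da5.nop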

Code:
  pool: zshare
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        zshare        ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            da1.nop   ONLINE       0     0     0
            da2.nop   ONLINE       0     0     0
            da3.nop   ONLINE       0     0     0
            da4.nop   ONLINE       0     0     0
            da5.nop   ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            da6.nop   ONLINE       0     0     0
            da7.nop   ONLINE       0     0     0
            da8.nop   ONLINE       0     0     0
            da9.nop   ONLINE       0     0     0
            da10.nop  ONLINE       0     0     0
          raidz1-2    ONLINE       0     0     0
            da11.nop  ONLINE       0     0     0
            da12.nop  ONLINE       0     0     0
            da13.nop  ONLINE       0     0     0
            da14.nop  ONLINE       0     0     0
            da16.nop  ONLINE       0     0     0


My LSI controller is in IR mode; would that affect the speed that much? I'm only getting 65MB/s writes.
 

sboesch

Active Member
The SMART tools were of no help for me; the way I found my bad drives was to create 3-disk raidz arrays and benchmark them, weeding out the slower drives. I then ran Seagate's SeaTools on the slower drives. Sure enough, every slow drive I found failed in SeaTools.
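Something like this rough sketch, assuming a FreeBSD box with spare da* devices and a scratch pool you can destroy afterwards:

Code:
# throwaway 3-disk raidz pool for testing one batch of drives (leave compression off)
zpool create scratch raidz da1 da2 da3
# crude sequential write test; one slow or failing drive drags the whole vdev down
dd if=/dev/zero of=/scratch/testfile bs=1m count=8192
zpool destroy scratch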
 

rubylaser

Active Member
Before I deploy hard drives, I always, at a minimum, write zeros to the whole drive and then check it with smartmontools afterwards. I typically use badblocks to identify bad sectors and write zeros to the disk at the same time. I would expect much better performance out of this pool once you weed out the bad disks.
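For what it's worth, a minimal sketch of that burn-in, assuming a Linux box and a placeholder /dev/sdX device name (this destroys all data on the drive):

Code:
# destructive write test: writes zeros across the whole drive, then reads them back
badblocks -wsv -t 0x00 /dev/sdX
# follow up with a long SMART self-test and review the attributes
smartctl -t long /dev/sdX
smartctl -a /dev/sdX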
 

sboesch

Active Member
Some lessons learned on my part, and I am learning lessons daily, about ZFS write performance: you will only write as fast as the slowest disk in the vdev.
I found that lzjb compression
Code:
zfs set compression=lzjb yourfilesystem
worked best for my general workloads. I disabled ZIL with the
Code:
zfs set sync=disabled yourfilesystem
on the datasets that are shared over NFS and the datasets that consistently write large files, because I do not have SSDs configured for logging; I have SPEED GREED. NFS and ZFS used together thrash the ZIL, and writing large files thrashes the ZIL quite a bit as well. ZFS with the ZIL enabled will always perform slowly on high-latency disks, like commodity disks: you add the latency of the disk to the latency of the controller and you get a double penalty, and if you add a failing disk or two you get a triple penalty. Do not disable the ZIL just because you can! The ZIL is part of why ZFS is so cool! Again, I have SPEED GREED and am poor because of my SPEED GREED (my race car is consuming all my funds right now).
If you are concerned about the ZIL, and you really should be, then as a best practice I would mirror a couple of SSDs and assign the ZIL to them.
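For example, a minimal sketch of that mirrored SLOG setup, assuming the two SSDs show up as da20 and da21 (hypothetical device names, not from this build):

Code:
# add a mirrored pair of SSDs as a dedicated log (SLOG) vdev
zpool add zshare log mirror da20 da21
# the log vdev should now appear in the pool layout
zpool status zshare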

Have you posted your Bonnie benchmark results yet?
 

rubylaser

Active Member
If you really had SPEED GREED, you'd be using mirrors :) If speed is your ultimate goal with ZFS, mirrors really are the way to go. Though, with this appearing to be a home storage server, the space lost wouldn't make a lot of sense.

While you are still in testing mode, if you are up to it, I'd love to see the results with 7 mirrored vdevs in your storage pool. Also, what are your Bonnie results on the pool as it's currently configured?
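For what it's worth, a rough sketch of that seven-mirror layout using the da1 through da14 names from the pool above; this is only a test layout, since it destroys the existing pool and its data:

Code:
# WARNING: wipes the current pool, testing only
zpool destroy zshare
# seven striped two-way mirrors: the fastest layout, but only ~50% usable space
zpool create zshare \
  mirror da1 da2  mirror da3 da4  mirror da5 da6  mirror da7 da8 \
  mirror da9 da10  mirror da11 da12  mirror da13 da14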
 

gigatexal

I'm here to learn
I'll build Bonnie tonight and run it, and post the results soon. What about buying piglover's Areca 1261ML? Would that be better than the HP expander?
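If it helps, a typical bonnie++ invocation against the pool looks something like this; the mount point and the size (roughly twice the machine's RAM) are assumptions:

Code:
# run bonnie++ against the pool; -s should be about 2x RAM so caching doesn't skew results
bonnie++ -d /zshare -s 32g -u root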
 

Patrick

Administrator
Staff member
I digress, but I may have taken a 2U server through a 1g turn or two.
 

gigatexal

I'm here to learn
OK, so a 24-port SAS card is out of my budget. I have a 32GB SSD I could use for a ZIL; would that help? Maybe I'll rewire the HBA so that it's balanced, i.e. map the drives evenly to each channel, take note of those drives and their serial numbers, and then add them as a 4x 3TB raidz vdev. I wonder if that might help.
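For the mapping step, a small sketch assuming FreeBSD (the pool above uses da* devices):

Code:
# list every disk and the controller bus/target it sits on
camcontrol devlist
# pull the serial number for a given drive so it can be labeled
smartctl -i /dev/da1 | grep -i serial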
 

gigatexal

I'm here to learn
Haha, thanks. The controllers with 24 ports, onboard cache, and an external port are too spendy, approaching 1200 USD. That's almost as much as I spent on drives, for a controller, which is probably something I should have really thought through when I started putting this beast together.