Intel 750 raidZ in ZFS


vrod

Active Member
Jan 18, 2015
241
43
28
31
Does anyone have stats from raid'ing 4-6 of these? I got a nice opportunity to supplement my 2 current 750's with 3 more for cheap. Planning to use these with ZFS, anything I should be aware of?
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
Yeah, I do know raidz is not the preferred way for many. This is just for my own ESXi stuff, nothing production-critical. I plan to do daily dataset replications to another ZFS pool of mine. Striped mirrors do work better, but then I also lose half the space vs. ~16.5% on raidz. Isn't the main concern with raidz the rebuild times? With these SSDs that should happen pretty quickly. I'm going for the 400GB models.
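
For reference, the two layouts I'm weighing would look roughly like this (device names are just placeholders for however the OS enumerates the NVMe drives):

# raidz1 across 6 devices: ~1/6 of raw space lost to parity, survives one drive failure
zpool create tank raidz1 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5

# striped mirrors (RAID10): half the raw space, but better random-write IOPS
zpool create tank mirror nvd0 nvd1 mirror nvd2 nvd3 mirror nvd4 nvd5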
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
Yeah, I know what you mean. Then the performance will be limited by the QPI link. :) I run the system with a single E5-2660 v2 on an S2600CP board. These SSDs don't come near that cap, and I'm also looking more for IOPS than MB/s.

So, right now I have 2x 800GB (PCIe) and 2x 400GB coming in next week (U.2). Everything will run through CPU1, and I'm considering whether to go with RAID0, raidz or RAID10 for the ZFS pool. I'm planning to partition the 800GB SSDs down to half their capacity so they match the 400GB SSDs.
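
Rough sketch of that partitioning step, assuming FreeBSD-style device names (nvd0 standing in for one of the 800GB drives):

gpart create -s gpt nvd0
# ~400GB (372GiB) ZFS partition so the 800GB drive matches the 400GB ones
gpart add -t freebsd-zfs -s 372G -l ssd800-half nvd0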

I have a dual 10GbE adapter for the network, but I can't use the last PCIe slot in the board unless I get a second 2660 v2.

I will be doing hourly snapshots and replicating them to 2 separate HDD-based pools, so I have a "double backup", so to speak... The pool is just going to run some game-server stuff for myself, random VMs, VMs I host for friends and such. Nothing near business-critical. I am going to test all 3 layouts, but people's input would definitely be appreciated. :)
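
Roughly what I have in mind for the replication side, whether I script it or let FreeNAS's replication tasks handle it (pool/dataset/snapshot names are just placeholders):

# hourly snapshot of the VM dataset
zfs snapshot tank/vms@hourly-1200

# incremental send to both HDD-based backup pools
zfs send -i tank/vms@hourly-1100 tank/vms@hourly-1200 | zfs recv backup1/vms
zfs send -i tank/vms@hourly-1100 tank/vms@hourly-1200 | zfs recv backup2/vms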
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
Go ahead and test, that's the best way to be sure. Your use case might be fine with raidz. However, if it's IOPS you want, you need striped mirrors (RAID10). For a single raidz vdev, every write has to complete on every disk before it's "done", so it's only about as fast as a single device. Reads can be a bit quicker, assuming no errors are encountered. A set of mirrors can spread writes across multiple vdevs.

You can get a similar effect with several raidz vdevs striped together (RAID50/60-style), but you need a lot more devices to do it.
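
Something like this, which stripes two raidz1 vdevs in one pool (device names are only placeholders):

zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7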

Since you want speed here, and assuming you have solid backups, you might consider straight stripes with no redundancy (RAID0). Make sure you have solid replication to slower storage. If you do, and you don't mind some occasional downtime to restore if something dies, it can work. With quality SSDs in good shape, you probably won't see much downtime in this configuration anyway.
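
For the testing itself, a 4k random-write fio run against a dataset on each candidate layout gives a reasonable picture; the parameters below are just a starting point, and the path is a placeholder:

fio --name=randwrite --directory=/mnt/tank/fiotest \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=posixaio --size=4g --time_based --runtime=60 \
    --group_reporting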
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
You want high IOPS for a number of VM workloads... sorry, but you've got the wrong drives. They aren't going to do what you think: with multiple OSes constantly reading and writing to them, they'll slow down considerably...

Also, if you're backing up hourly and it's for non-critical home VMs, then why not run RAID0, or run each drive separately and spread the VMs around? Don't bother with ZFS and networking the storage, just use the drives in your hypervisor directly.
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
I'll go with a RAID0 then... these drives have OK endurance anyway. I'm not expecting much load at all; I already had the 2x 800GB and got a couple of the 400GB drives for cheap. I do know the P3xxx series is better suited, but these should do the job just fine for now. I'm counting on them holding up for the next two or three years anyway... :) Whenever one dies I can look into replacing it.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I won't spend much time on this because the data is out there... but in a mixed workload, consumer drives, even NVMe, are slow compared to their enterprise SATA relatives.

With that said, here's a graph that illustrates the write performance, or lack thereof:

http://tssdr1.thessdreview1.netdna-...Intel-750-400GB-Iometer-QD128-Full-Random.png

Look at what happens to writes before even 100 seconds: they drop to around 22,000 IOPS. For comparison, S3700 400GB SATA drives do 33,000 and the newer S3710 around 40,000. That's nearly 2x the performance from SATA over Intel consumer NVMe. Mixed read/write (which is what a VM load is) is not something consumer drives handle well: if you have 10 VMs going, there's always some disk access happening, and if some are logging while others are reading/writing (say a small DB), you're going to hit that consumer 'wall' quickly and drop performance.
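
You can reproduce that steady-state drop yourself with a long mixed-workload run instead of a short burst; a rough fio sketch (the path and the 70/30 split are just assumptions about a VM-like load):

fio --name=mixed7030 --directory=/mnt/tank/fiotest \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=posixaio --size=8g --time_based --runtime=600 \
    --group_reporting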

Now, if you could use the Intel 750s as 'read-only' drives for databases or whatever, that would likely be a better use case.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I'm inexperienced with vSAN, but I'd guess @whitey could provide some feedback on that.
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
I know what you mean and I agree, but the workload on my VMs is far more read-based than write-based. I would say close to 95% read and just 5% write. My Plex server only reads, the game servers read most of the time, and the VMs mostly read too. Yes, there's logging, but it's so minimal that it has no effect. At least not to my knowledge. :)

As mentioned before, I got 2 of these drives in 400GB capacity for cheap (actually 200 euros per drive), much cheaper than a single P3520 would have cost. It's kind of a no-brainer to me to take the opportunity and see how it goes. :D They might do super well, or they might do super badly. I can always use them for something else: cache, my work PC, a lab, anything really.
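
Easy enough to sanity-check the read/write split once it's running, e.g. by watching the pool for a while (pool name is a placeholder):

# per-vdev read/write operations every 10 seconds
zpool iostat -v tank 10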
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
No experience with these drives directly, but yes, I would say vSAN capacity tier with a good write-intensive cache tier should work fine.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Sounds like a plan.
I look forward to your testing, and sharing results with us :)
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
@vrod, I think characterizing your workload as 95% reads/5% writes may be a lil' over-optimistic :-D

I'd go w/ 70/30 if I had to venture a guess. You should be fine w/ the 750's for the capacity tier, and I'd recommend Intel s3700 or HGST hussl/husmm series for the write tier, depending on your SATA/SAS backplane, unless you're gonna bone up for Intel P3700's for the cache tier :-D

Like you said, it is just for lab/home use, although one word of caution... I seem to remember seeing a rather alarming rate of (possibly early) wear on Intel 750's, if memory serves me correctly, so there's something to think about.
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
I did hear that, yeah. There's however a warranty on the drives until Oct. 20th, 2020, so I have some time. :) The 800GB drives I have had for a year and a half now only have like 5TB of writes the last time I looked, most of that being zeroing on my part and benchmarks. But let's see... I'll share my results when I can. :)
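
(For anyone wondering, I just pull the write totals from SMART, something like this with smartmontools; device name is a placeholder:)

# total host writes and wear level for an NVMe drive
smartctl -a /dev/nvme0
# look at "Data Units Written" (1 unit = 512,000 bytes) and "Percentage Used"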
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
So I finally got started with things. Built all the SSDs into the box yesterday and have been testing performance ever since. I'm using a QLE8152 adapter with my FreeNAS box and this seems to cause some bandwidth issues; I'm not getting close to 10G with iperf, tops 3Gbps per port. The ZFS pool locally is blazing fast. I don't have graphs right now; I'd rather do that once I have solved the bandwidth issue. :)
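
For reference, this is roughly how I've been testing the link (IPs are placeholders); a single stream vs parallel streams makes a noticeable difference on these cards:

# on the FreeNAS box
iperf -s

# from a client on the other side: single stream, then 4 parallel streams
iperf -c 10.0.0.10 -t 30
iperf -c 10.0.0.10 -t 30 -P 4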
 

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
Care to share some numbers?
Given that I can find high-capacity 750's cheaper than same-size SSDs nowadays, it makes me question the use of regular SSDs, at least at home, where I'll probably never hit the endurance rating, even one as low as 70GB/day, since I won't be writing that much every day for 5 years. I'll let them recuperate at Christmas. ;)
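
Back-of-the-envelope for that 70GB/day figure (roughly what Intel quotes for the 400GB 750):

70 GB/day x 365 days x 5 years = 127,750 GB, i.e. roughly 128 TB written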
 

vrod

Active Member
Jan 18, 2015
241
43
28
31
Yes, I will surely post some numbers once I have solved the 10GbE bandwidth issue. Currently waiting for another CPU to pair with the current one so I can use the last PCIe slot. :) Once that arrives, I can install a couple of Mellanox ConnectX-2 adapters and test again.