Intel 905P 480GB VS DC P4510 2TB Speed Difference


WhosTheBosch

Member
Dec 20, 2016
Use case: Proxmox 6.1, 30 VMs and 30 containers on an Intel Silver 4510 with 128GB of RAM. Backups etc. would go to a different ZFS array. The VM filesystems would be mainly NTFS and XFS. No database VMs, though I will have Redis and Kafka.

I currently have a 2TB 970 Evo Plus, but I want to learn ZFS and understand that this drive would be quite slow for my use case. Would I notice a speed difference between these two drives, or is the extra space of the P4510 the better buy? The VMs and containers aren't expected to be very large, so I could make 480GB work as my hot drive and then move written data to a warm 960 Pro.

I would prefer the extra space if the 905P only saves me a minimal amount of time. I know databases would make the 905P the obvious choice, as would using it for a ZIL; however, I wouldn't set up a separate ZIL, and databases aren't a factor for me. I'm also not sure how to determine whether I'd have a lot of QD1 activity, which would make the 905P the much more obvious choice.
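One rough way to gauge QD1 pressure is to watch queue depth on the current 970 Evo Plus while the real workload runs, and then benchmark QD1 random I/O directly. A minimal sketch, assuming a Linux host with sysstat and fio installed and nvme0n1 as the drive under test (names and paths here are just examples):

# Watch the in-flight queue size on the existing drive under real load.
# If the queue-size column (aqu-sz, or avgqu-sz on older sysstat) hovers near 1,
# the workload is mostly QD1.
iostat -x nvme0n1 1

# Benchmark QD1 4K random reads against a test file (non-destructive).
# direct=1 needs O_DIRECT support from the filesystem (fine on ext4/XFS;
# on ZFS it may be unsupported or ignored depending on version).
fio --name=qd1-randread --filename=/mnt/test/fio-test.bin --size=4G \
    --rw=randread --bs=4k --iodepth=1 --numjobs=1 --direct=1 \
    --time_based --runtime=60 --group_reporting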
 

azev

Well-Known Member
Jan 18, 2013
In general I think Optane will improve the performance of your VMs compared with other SSD/NVMe drives on the market today.
With 30 VMs, nearly all of your I/O will effectively be random, and my experience upgrading my ZFS setup with Optane has been very positive.
 

Rand__

Well-Known Member
Mar 6, 2014
The 905P will be significantly faster, as you already know, *but*
ZFS performs best with plenty of free space, so if you fill the 480GB up, even the Optane will slow down significantly.
There was a post on the FreeNAS forum showing the effect of pool fill level on performance, but I couldn't find it on short notice.
Basically, performance was best up to about 30% full, OK-ish up to 50%, and deteriorated quickly after that.

So if your working set is up to ~250GB, go with the Optane; otherwise it might be better to get the P4510.
For both you need to evaluate redundancy, of course, so at least a pair would be recommended unless the data is not important.

Edit:
The path to success for block storage
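If you want to keep an eye on this once a pool is live, ZFS reports both fill level and fragmentation directly. A quick sketch (tank is just an example pool name):

# Show size, allocated/free space, fill percentage and fragmentation for the pool.
# Note that frag here measures free-space fragmentation, not file fragmentation.
zpool list -o name,size,alloc,free,cap,frag tank

# The same values as individual properties.
zpool get capacity,fragmentation tank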
 

WhosTheBosch

Member
Dec 20, 2016
My working set will definitely be more than 250GB.

For both you need to evaluate redundancy, of course, so at least a pair would be recommended unless the data is not important.
It's a small server and this is for learning, so redundancy is not important. I will be making regular backups to other media.

Thanks, that's quite interesting. I wish I were rich enough to keep my pool 75% free. It's unfortunate that ZFS doesn't have the ability to defragment.

After doing more research and calculating the extra headroom ZFS needs, I'll need 1-2TB of usable space plus the ZFS overhead. I'm wondering which of these is better, since they're roughly the same price:

4TB P4510
or
2 x 2TB P4510 striped (Is the extra performance worth the hassle? Does it perform close to Optane?)
or
480GB 900P (use <350GB) for hot data and a 1TB P4510 for warm data.
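For reference, the three layouts only differ in how the pool(s) are created. A minimal sketch, using hypothetical device names nvme0n1/nvme1n1 and pool names of my own choosing (on real hardware you would normally use /dev/disk/by-id paths):

# Option 1: single 4TB P4510, one pool, no redundancy.
zpool create vmpool /dev/nvme0n1

# Option 2: two 2TB P4510s striped - capacity and throughput of both drives,
# but losing either drive loses the whole pool.
zpool create vmpool /dev/nvme0n1 /dev/nvme1n1

# Option 3: separate hot (900P) and warm (P4510) pools.
zpool create hot  /dev/nvme0n1
zpool create warm /dev/nvme1n1

With no redundancy anywhere, the regular backups you already plan are doing all the heavy lifting.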
 

Rand__

Well-Known Member
Mar 6, 2014
Actually you can defragment by moving the data off the pool and back on again ;)

So maybe option 3 would be best for now.
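One hedged way to do that rewrite without leaving ZFS, assuming a dataset named tank/vms and enough free space on the pool (names are illustrative only), is to snapshot it and send/receive it into a fresh dataset, which lays the data back down contiguously:

# Stop the VMs first; this briefly needs space for both copies,
# and the original is destroyed at the end - make sure backups exist.
zfs snapshot tank/vms@defrag
zfs send tank/vms@defrag | zfs receive tank/vms-new
zfs destroy -r tank/vms
zfs rename tank/vms-new tank/vms
zfs destroy tank/vms@defrag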
 

WhosTheBosch

Member
Dec 20, 2016
Actually you can defragment by moving the data off the pool and back on again ;)

So maybe option 3 would be best for now.
Yeah, I've just been reading about that. I see another option is to add bigger disks to the pool to increase its size. I think option 3 should also let the VMs hit multiple drives over separate channels instead of a single one, which should technically speed things up.
 

WhosTheBosch

Member
Dec 20, 2016
480GB 900P (use <350GB) for hot data and a 1TB P4510 for warm data.
I'm wondering if 2 x 1TB P4510 striped would be better than a single 2TB P4510. Looking at Intel's specs:


                1TB P4510    2TB P4510
Seq. Read       2850 MB/s    3200 MB/s
Seq. Write      1100 MB/s    2000 MB/s
Random Read     465k IOPS    637k IOPS
Random Write    70k IOPS     81k IOPS
Active Power    10 W         12 W

It seems that striping a pair would lift my random writes noticeably, but I'd also be doubling my power cost.
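Back-of-envelope numbers for the striped pair, assuming near-linear scaling (real workloads usually land below this, and host PCIe lanes/CPU can become the limit):

                2 x 1TB P4510 (striped)    1 x 2TB P4510
Seq. Read       ~5700 MB/s                 3200 MB/s
Seq. Write      ~2200 MB/s                 2000 MB/s
Random Read     ~930k IOPS                 637k IOPS
Random Write    ~140k IOPS                 81k IOPS
Active Power    ~20 W                      12 W

So the stripe mainly buys sequential-read and random-IOPS headroom; sequential writes barely move, and the cost is higher power plus losing the whole pool if either drive dies.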
 

Rand__

Well-Known Member
Mar 6, 2014
I have no clue how Proxmox works with ZFS regarding sync/async writes, but that idea would only work if it's using async writes, since sync write performance is mostly latency bound and that won't change with a stripe.

However, the P4510 does not seem to be too bad a drive compared to the 900P;
see
SLOG benchmarking and finding the best SLOG
vs
SLOG benchmarking and finding the best SLOG

(when used as SLOG, i.e. sync writes). Note that the real-life performance of a 900P is between 600 and 900 MB/s; the 480GB version and/or the 905P should be slightly faster.

You might want to find out how Proxmox uses the datastore/drives so you can evaluate the actual performance drivers...
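If you'd rather measure that yourself than rely on those threads, here is a hedged sketch of a low-queue-depth sync-write test with fio (my own quick-and-dirty variant, not necessarily the exact method used in the linked thread; the target path is just an example file on the pool you want to test):

# 4K synchronous writes at queue depth 1 - roughly what a SLOG or sync-heavy VM sees.
# sync=1 opens the file with O_SYNC, so every write must reach stable storage.
fio --name=syncwrite-qd1 --filename=/tank/fio-sync.bin --size=4G \
    --rw=write --bs=4k --ioengine=psync --iodepth=1 --numjobs=1 \
    --sync=1 --time_based --runtime=30 --group_reporting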
 

bards1888

New Member
May 2, 2020
Proxmox works the same as any other ZFS implementation (Illumos, FreeBSD, Solaris etc).

ZFS always has a ZIL. It either lives on the pool itself, which is the default, or on a dedicated device that you choose and add to the pool. The beauty of ZFS is that you can change attributes of individual datasets or volumes rather than just accepting the pool-wide values. This flexibility lets you pick and choose which VM 'disks' use which features, such as compression, sync, recordsize, logbias, etc.
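A minimal sketch of what that per-dataset tuning looks like, assuming a Proxmox-style layout with zvols named like rpool/data/vm-100-disk-0 (all names here are illustrative):

# Pool-wide default...
zfs set compression=lz4 rpool/data

# ...overridden per VM disk as needed.
zfs set logbias=throughput rpool/data/vm-100-disk-0
zfs set sync=always        rpool/data/vm-101-disk-0

# recordsize only applies to filesystems (e.g. container subvols);
# zvols use volblocksize, which is fixed at creation time.
zfs set recordsize=16K rpool/data/subvol-102-disk-0

# Check what each dataset/zvol has inherited vs. been set locally.
zfs get -r compression,sync,logbias rpool/data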

By default Proxmox creates datasets and volumes with sync=standard. This means that when your VM sends data to its virtual disk, ZFS will acknowledge the write even though the data might not yet have made it to the disk. The classic example: if you lose power immediately after an asynchronous write, you will probably lose that data.

Some applications, often databases and NFS, demand synchronous I/O, for example by using something like the Linux O_SYNC flag:

O_SYNC
       Write operations on the file will complete according to the
       requirements of synchronized I/O file integrity completion (by
       contrast with the synchronized I/O data integrity completion
       provided by O_DSYNC.)

       By the time write(2) (or similar) returns, the output data and
       associated file metadata have been transferred to the underlying
       hardware (i.e., as though each write(2) was followed by a call
       to fsync(2)). See NOTES below.

Synchronous I/O ensures that the data has been committed to stable storage (i.e. not just sitting in RAM) before responding to the VM/application. You can force synchronous I/O for a volume or dataset in ZFS (and Proxmox) by setting sync=always.

You can also disable synchronous I/O entirely; this overrides what the VM/application thinks is happening, and performance goes up accordingly. NOTE - THIS IS DANGEROUS AND CAN LEAD TO DATA LOSS.
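For completeness, the three settings look like this (again with an illustrative Proxmox-style zvol name):

# Default: honour whatever the guest/application asks for.
zfs set sync=standard rpool/data/vm-100-disk-0

# Force every write to be synchronous (safest, slowest).
zfs set sync=always rpool/data/vm-100-disk-0

# Treat all writes as async - fast, but roughly the last few seconds of
# acknowledged writes can be lost on power failure. Disposable data only.
zfs set sync=disabled rpool/data/vm-100-disk-0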

For your case I would pretty much forget the dedicated ZIL/SLOG device and just make pools from the NVMe devices you have, perhaps mirrored for redundancy.