Just received a PM9A3 960GB M.2 as they are really cheap now; got mine for €120 / $136 from a reseller.
It came with the newest firmware, GDC7302Q.
And so far it outperforms my Hynix P31 500GB and Seagate FireCuda 520 500GB by 10-30% on MS-SQL (running on a PCIe Gen3 mobo).
You might consider a drive with PLP (Power-Loss Protection), e.g. the Samsung PM9A3 or 983 DCT. Don't look at the performance numbers; what matters is that the SQL server can write synchronous I/O to the power-loss-protected memory on the drive instead of writing to the "slow" NAND.
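If you want to see that effect yourself, a quick sync-write test with fio tells you a lot more than the spec sheet (the file path and size here are just examples):

fio --name=sync-write --filename=/mnt/test/fio.bin --size=1G --rw=randwrite --bs=4k --iodepth=1 --fsync=1 --runtime=60 --time_based

A drive with PLP will typically show far higher IOPS here than a consumer drive, because it can acknowledge each fsync from its protected cache.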
Actually, a SATA SSD with...
I know it's a little off-topic, but how big is the gain from NIC SR-IOV on KVM? Is it really worth it for "normal" network traffic? I bridge my onboard i210/i219s to my VMs and I see very good performance; couldn't really ask for more.
Talking iSCSI or similar, that might be another story...
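For anyone who wants to try it, creating the virtual functions is just a sysfs write, assuming the NIC and BIOS actually support SR-IOV (the interface name is an example):

echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

That creates 4 VFs, which show up as separate PCI devices you can pass through to VMs.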
Why does placing root and the VM disk on the same drive increase wear-out? Say, compared to placing root and the VM disk each on their own 256GB M.2?
I would claim it would be even worse.
In general I don't see a lot of I/O on root volumes, regardless of whether it's the root volume of a VM or of the hypervisor.
Any reasons why it wouldn't work in KVM/QEMU based hypervisors?
Is it really a hardware thing, if all the "magic" happens in the VM OS? At the very least, if you pass through two HBAs to your Win VM it should work, right?
The bad performance must be a matter of enabling/disabling synchronous I/O. If you disable it on ZFS (which is the same as the XFS default) I assume the speed would be fine.
It's not fair to compare ZFS with synchronous I/O ON and XFS with synchronous I/O OFF... it has to be the same ;-)
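For an apples-to-apples test, sync can be toggled per ZFS dataset (the dataset name here is just an example):

zfs set sync=disabled tank/vmdata
zfs set sync=standard tank/vmdata

The first stops ZFS from honoring sync writes (with the data-loss risk on power failure that implies); the second is the default and turns it back on.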
But their support sucks, their firmware and tools suck, and their 5300 Pro (SATA with PLP, a so-called RAID- and enterprise-targeted drive) doesn't support SCTERC/TLER, which is a big blinking neon sign saying STAY AWAY.
And if you open up the drive it just looks cheaply built, even the solder joints...
A good example that running "stable" (an older kernel) doesn't always mean you get a stable system!
RHEL comes to mind, where an old kernel is chosen to give the highest stability, but is that true? I feel there is an exaggerated trust in old kernels.
(sorry going a little off topic)
It was more for general info regarding making use of the iGPU in a VM (passthrough).
And if a board doesn't have additional display outputs (besides the Aspeed BMC VGA output), then it is particularly relevant to check for VHD support on the board.
I'm pretty sure the chance of ERC support is a lot higher if the drive says it supports it. But yes, you can never know for sure; I'm unfortunately not in a position to make my own drive, so I have to go with the second best ;-)
But it is my assumption that most drives DO spend too much time trying to solve any "internal" problems, which actually is a good thing, unless the drive is running in a RAID configuration. So I do find it to be a very important feature. But to be fair, I could imagine that platter disks...
I would like to know if this disk supports ERC (Error Recovery Control), which is rather a must-have for RAID use.
It can be determined with the command:
smartctl -l scterc /dev/sda
I did try Seagate support but they didn't really know what I was talking about and gave me a nonsense answer.
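If the drive does support it, the same smartctl option can also set the timeouts; the values are in tenths of a second, so this sets 7 seconds for both read and write recovery:

smartctl -l scterc,70,70 /dev/sda

Note that on most drives the setting is volatile, so it has to be reapplied after every power cycle (e.g. from a boot script).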
As it turns out it's not an LVM feature but has been in the kernel since 4.12, apparently introduced for LUKS, but it works with mdadm too. I don't know if RHEL added any additional stability/performance tweaking when you configure it under LVM instead of doing it natively in Linux; I'm guessing not.
But you must...
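For reference, a minimal sketch of the native (non-LVM) route, assuming two spare disks /dev/sdb and /dev/sdc:

integritysetup format /dev/sdb          # writes dm-integrity metadata (default checksum is crc32c)
integritysetup format /dev/sdc
integritysetup open /dev/sdb int-sdb    # exposes /dev/mapper/int-sdb
integritysetup open /dev/sdc int-sdc
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mapper/int-sdb /dev/mapper/int-sdc

A checksum mismatch then surfaces to mdadm as a read error, and mdadm repairs the block from the other mirror.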
I'm guessing so. But since btrfs apparently is particularly slow on RAID, this is where there potentially could be the highest performance gain from using LVM+RAID+dm-integrity.
I'm also curious about how much control you have over the location of those checksums, and whether the array keeps running if those...
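The on-disk layout can at least be inspected; by default dm-integrity interleaves the checksum blocks with the data on the same device (the device name is an example):

integritysetup dump /dev/sdb

That prints the superblock, including tag size, sector size, and the interleave layout.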