My calculations are based on Norwegian energy prices, which are on average significantly lower than in the EU. SAS drives consume more than 5 W, more like 7-9 W.
6TB for 31 USD each is not a deal. The problem with smaller hard drives is the energy cost per TB. You're way better off getting a few 14TB hard drives for 100 USD each.
My spreadsheet says that even at 10 USD each, you get less value for your money compared to 14TB hard drives at 100 USD.
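To make the math concrete, here's a rough sketch of the comparison for ~84TB raw. The 7 W average draw, 0.15 USD/kWh and 5 years of 24/7 runtime are my assumptions, not numbers from this thread:

```sh
# Rough 5-year total cost (purchase + power) for ~84 TB raw.
awk 'BEGIN {
  kwh_price = 0.15                           # assumed electricity price, USD/kWh
  hours     = 24 * 365 * 5                   # 5 years of 24/7 runtime
  power_usd = 7 * hours / 1000 * kwh_price   # ~46 USD of power per drive
  printf "14x 6TB:  %.0f USD\n", 14 * 31  + 14 * power_usd
  printf " 6x 14TB: %.0f USD\n",  6 * 100 +  6 * power_usd
}'
# 14x 6TB:  1078 USD
#  6x 14TB:  876 USD
```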
He probably runs a 3-way mirror for his special metadata vdev because he doesn't know that ZFS stores an additional copy of the metadata by default. Or he does know about it and has set redundant_metadata=none.
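If you want to check what your own pool is actually doing, something like this works; the pool name tank is just a placeholder:

```sh
zfs get redundant_metadata tank   # default is "all" (extra metadata copy kept)
zpool status tank                 # look for the "special" vdev and its mirror layout
# If you trust the special vdev's own redundancy, you can reduce the
# extra copies, at your own risk:
zfs set redundant_metadata=most tank
```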
The log device doesn't need a mirror.
Cache devices are supposed to be individual drives; if an L2ARC device dies, ZFS just falls back to reading from the pool.
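For example (pool and device names are hypothetical):

```sh
zpool add tank log /dev/nvme0n1                 # single SLOG, no mirror needed
zpool add tank cache /dev/nvme1n1 /dev/nvme2n1  # L2ARC devices are striped, never mirrored
```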
Another thing that can slow down performance is very large block sizes. In my benchmarks, even for sequential reads, recordsize=128K is optimal compared to 1M+; going to 4M or even 16M can significantly slow down sequential read performance. For video I think...
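A quick way to benchmark this yourself, assuming a dataset mounted at /tank/media (a placeholder). Note recordsize only applies to newly written files, so rewrite your test data after changing it, and make --size bigger than your ARC so you're not just measuring cache:

```sh
zfs set recordsize=128K tank/media
fio --name=seqread --directory=/tank/media --rw=read \
    --bs=1M --size=64G --numjobs=1
# Repeat with recordsize=1M / 4M / 16M datasets and compare the bandwidth.
```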
L2ARC can slow down performance. It's a very tricky feature and should always be benchmarked. Anyway, I think the main reason you have slow read performance is that one or more of your hard drives is not working right. Does zpool events tell you anything?
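These are the things I'd run to look for a sick drive (pool/device names are examples):

```sh
zpool events -v | less   # recent error events from the pool
zpool status -x          # "all pools are healthy", or the problem vdev
zpool iostat -v tank 5   # a failing drive often shows much higher latency
                         # or lower throughput than its mirror/raidz peers
smartctl -a /dev/sda     # check reallocated and pending sector counts
```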
When "patching" the iSCSI host system, you have some options. Most likely you have no need to update the system. You probably run your SAN on its own VLAN and the only services exposed is TGT(or whatever you use) and SSH. Check for CVE's for your iscsi host system. Most likely there have not...
When you live migrate with KVM and the VM's disk is on shared storage, there is no need to copy the disk contents. You only copy memory pages, i.e. what's in RAM. Also, I would not worry about filling an enterprise SSD over 80%; the performance difference is negligible.
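With shared iSCSI storage the migration is a one-liner; the domain and host names below are made up:

```sh
# Only RAM pages move; the LUN stays where it is.
virsh migrate --live --verbose vm1 qemu+ssh://host2/system
# Only if storage were NOT shared would you need --copy-storage-all.
```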
I do not understand where you get these numbers from. My ZFS-based zvols, even on a pool that is 80% full, exported as iSCSI LUNs, fully support live migration of VMs without any downtime. I've done this myself with KVM. If you need a redundant SAN, I would look at another solution, like Ceph.
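For reference, a minimal sketch of how I'd export a zvol with tgt; the IQN, names and size are examples:

```sh
zfs create -V 100G tank/vm-disk1                       # create the zvol
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2024-01.com.example:vm-disk1             # define the target
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/zvol/tank/vm-disk1                      # back LUN 1 with the zvol
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL   # fine on an isolated SAN VLAN
```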
I am not sure you will get faster performance by tossing out ZFS. Its biggest performance benefit compared to other filesystems is compression, and that makes a seriously BIG difference when you have data that compresses well.
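lz4 is cheap enough to just leave on everywhere and then measure what it actually saves you (dataset name is an example):

```sh
zfs set compression=lz4 tank/data
# After writing some real data:
zfs get compressratio tank/data
```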
If we're talking about using spinning rust and RAID for iSCSI, I can...
ZFS as a virtual block storage layer performs excellently; I have no idea why you would think otherwise. I use it with TGT, NVMe-oF, virtual machines, and vectorized databases, where I separate the ZFS storage device from the compute nodes, which connect to the ZFS storage over the network and read...
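If anyone wants to try the NVMe-oF side, here's a minimal NVMe/TCP target sketch via configfs; the NQN, IP address and zvol path are all placeholders, and allow_any_host is only sane on an isolated SAN VLAN:

```sh
modprobe nvmet-tcp                                    # pulls in nvmet as a dependency
SUB=/sys/kernel/config/nvmet/subsystems/nqn.2024-01.com.example:zvol1
mkdir $SUB
echo 1 > $SUB/attr_allow_any_host                     # skip per-host ACLs
mkdir $SUB/namespaces/1
echo /dev/zvol/tank/zvol1 > $SUB/namespaces/1/device_path
echo 1 > $SUB/namespaces/1/enable
PORT=/sys/kernel/config/nvmet/ports/1
mkdir $PORT
echo tcp        > $PORT/addr_trtype
echo ipv4       > $PORT/addr_adrfam
echo 192.0.2.10 > $PORT/addr_traddr                   # SAN VLAN address
echo 4420       > $PORT/addr_trsvcid                  # standard NVMe/TCP port
ln -s $SUB $PORT/subsystems/                          # expose the subsystem on the port
```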