I'm in the process of setting up EnhanceIO (GitHub: stec-inc/EnhanceIO) on Debian/OpenMediaVault to use an SSD cache in front of my download/temp/work pool of drives. Just wondering if anyone else has used it. There really aren't many user reviews of it online, but it seems like a great way to get most of the benefit of SSDs without having to buy tons of them.
Sadly I'll have to compile the support against the Linux kernel myself. However, I ran bleeding edge for a few years back around 2008, so I have plenty of experience doing that. I looked at bcache, which is baked into the kernel, but you have to change the partition structure of the drives to use it, so it's kind of a no-go for mostly full existing drives.
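For anyone curious what the compile step looks like, here's a rough sketch of an out-of-tree module build. The directory layout and module names are assumptions based on a typical kernel-module repo, so check the project's README before copying anything:

```shell
# Sketch only -- paths and module names are assumptions, verify against the repo.
sudo apt-get install build-essential linux-headers-$(uname -r)
git clone https://github.com/stec-inc/EnhanceIO.git
cd EnhanceIO/Driver/enhanceio
make && sudo make install        # builds enhanceio.ko against the running kernel
sudo modprobe enhanceio          # load the core caching module
sudo modprobe enhanceio_lru      # replacement-policy module, if built separately
```

The upside over bcache here is that nothing touches the partition table of the backing drives; the module sits on top of the existing block devices.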
Hardware on hand for testing EnhanceIO: a 500GB Samsung 850 as the cache drive in writeback mode. Max speed is a bit over 5Gbit. I have a very large battery backup, and this is just a work pool for downloads, so I'm not worried about power loss. I can always recover the data in the cache that hasn't been written back yet (the dirty blocks).
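Attaching the 850 as a writeback cache should be a one-liner with the project's CLI. Device names and the cache name below are hypothetical placeholders for my setup; the flags are my reading of the tool's usage, so double-check against `eio_cli --help`:

```shell
# /dev/sdb = backing work-pool drive, /dev/sda1 = SSD partition (hypothetical names).
# -m wb selects writeback mode; -c names the cache.
sudo eio_cli create -d /dev/sdb -s /dev/sda1 -m wb -c work_cache
sudo eio_cli info    # sanity check: mode, size, and dirty-block stats
```

The `info` output is also where you'd watch the dirty-block count after a power event to see what still needs flushing to the spinning drive.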
If this testing works well, I want to pick up a Sun Oracle F80, an 800GB PCIe flash card. It presents 4x 200GB drives to the system. I'd then use drives 1 and 2 as a software RAID 1 writeback cache for my main storage pool (46TB mergerfs), and drives 3 and 4 as a software RAID 0 writeback cache for the work pool (4TB mergerfs). The F80 writes at over 10Gbit speeds, so I could finally make full use of the 10Gbit SFP+ fiber I have internally. I rarely send more than a few hundred GB of files at a time, so this would fit my usage profile pretty well. Anyone see any faults in my thinking here?
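The F80 plan would sketch out something like the below. All device names are hypothetical, and note one wrinkle: a mergerfs pool is a FUSE filesystem, not a block device, so the cache would have to attach to the individual member drives underneath the pool rather than to "the pool" itself:

```shell
# Assume the F80's four 200GB devices enumerate as /dev/sdc..sdf (hypothetical).
# RAID 1 of devices 1+2 for the main-pool cache (redundancy for the big pool):
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# RAID 0 of devices 3+4 for the work-pool cache (speed over safety):
sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sde /dev/sdf
# Then attach each md array as a writeback cache to a member drive of each pool
# (drive names hypothetical; repeat per member drive as needed):
sudo eio_cli create -d /dev/sdg -s /dev/md0 -m wb -c main_cache
sudo eio_cli create -d /dev/sdh -s /dev/md1 -m wb -c work_cache
```

RAID 1 under a writeback cache for the main pool makes sense, since dirty blocks there are the only copy of the data until they're flushed; RAID 0 on the disposable work pool trades that safety for the full write speed of both flash devices.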