The system holds about 15TB of data, growing at roughly 0.5TB/year. At any given time the "working set" is around 500GB, though the actively used data shifts as users do different things, and about 20-50GB is essentially "wired" data (metadata, indexes, etc.) that is always part of the working set. Given that, what would be a good size for the "fast" tier (likely NVMe) when the slow tier is spinning rust?
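For context, here's the back-of-the-envelope I've been doing so far. The wired and working-set figures are the estimates above; the churn and headroom factors are pure guesses on my part, which is really what I'm asking about:

```python
# Rough sizing sketch: the wired data must always fit, plus the hot working
# set, plus headroom so the cache isn't running full and evicting constantly.
# churn_factor and headroom are guesses, not measured values.

wired_gb = 50          # always-resident metadata/indexes (upper estimate)
working_set_gb = 500   # hot data at any given moment
churn_factor = 1.5     # guess: extra room because the working set shifts over time
headroom = 1.25        # guess: keep the cache well below 100% full

fast_tier_gb = (wired_gb + working_set_gb * churn_factor) * headroom
print(f"suggested fast tier: ~{fast_tier_gb:.0f} GB")   # ~1000 GB with these guesses
```

With those guesses I land around 1TB, but I don't know how realistic the churn and headroom multipliers are in practice.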
I'm looking at enterprise NVMe, but the chassis is a 2U without U.2 support, so I probably have to go with M.2. Enterprise M.2 is much more expensive per GB than U.2, so I'd like to keep the size as small as possible while still getting a decent cache hit ratio. If I have to, I could find space in the chassis for U.2, but I'd probably only do that if I needed 4TB or more of fast tier.
Although I am very likely going to use Linux for this, any sizing pointers based on experience with other environments would be appreciated.
Thanks.