Hi,
I decided to split my large FreeNAS pool into several smaller ones. While doing that, I want to optimize the recordsize of the new pools for the different kinds of content I store (video, photos, documents, software, VMs).
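The per-content tuning itself would just be one property per dataset. A rough sketch with made-up pool/dataset names; the recordsize values are my starting guesses, not tested recommendations:

```shell
# Hypothetical pool "tank"; recordsize values are assumptions to benchmark later.
zfs create -o recordsize=1M   tank/video   # large files, sequential reads
zfs create -o recordsize=1M   tank/photo   # multi-MB photos, mostly read whole
zfs create -o recordsize=128K tank/docs    # ZFS default, mixed small files
zfs create -o recordsize=16K  tank/vms     # match the guest's block size
```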
While that is fairly simple for videos, I struggled with photos. Historically I classified those as random I/O with small block sizes, but nowadays each photo is at least a couple of megabytes, and while I might access many different ones in a row (browsing a gallery), the access within each photo is not random.
So the next thing I thought of was to read up on the topic, but I did not find much. Of course, there is plenty of application-specific information (primarily for databases), but most applications I use (e.g. Lightroom) work at the file-system level.
Then I wondered: can I measure that? But on that, too, I did not find much.
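Short of tracing the application itself (e.g. with truss on FreeBSD or strace on Linux), one crude way to get a feel for the two access patterns is to replay them against a sample file. A rough sketch; the file size and chunk sizes here are my assumptions, not what Lightroom actually does:

```python
# Sketch: replay "sequential whole-file" vs. "small random" reads over a
# sample photo-sized file. Sizes below are assumptions for illustration.
import os
import random
import tempfile

FILE_SIZE = 8 * 1024 * 1024   # assume an ~8 MiB photo
SEQ_CHUNK = 1024 * 1024       # 1 MiB sequential reads (favours large recordsize)
RND_CHUNK = 4096              # 4 KiB random reads (favours small recordsize)

# Create a throwaway sample file filled with random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

def sequential_read(path):
    """Read the whole file front to back in large chunks."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(SEQ_CHUNK):
            total += len(chunk)
    return total

def random_read(path, n_reads=256):
    """Seek to random offsets and read small chunks."""
    total = 0
    with open(path, "rb") as f:
        for _ in range(n_reads):
            f.seek(random.randrange(0, FILE_SIZE - RND_CHUNK))
            total += len(f.read(RND_CHUNK))
    return total

seq_bytes = sequential_read(path)
rnd_bytes = random_read(path)
print(f"sequential: {seq_bytes} bytes read, random sample: {rnd_bytes} bytes read")
os.remove(path)
```

Timing the two loops (or watching `zpool iostat -r` while they run) should show whether the per-file pattern really behaves like sequential I/O once photos are multi-megabyte.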
Also, if all these applications do is read files through the OS layer, is there such a thing as a data-specific block size at all? Or does everything depend only on the storage block size, with the source OS being agnostic? And then there is something like TCP window scaling, which changes the transport layer (potentially all the time); how does that change things?
It looks like I need to do some basic reading here first, but maybe somebody is further along this road, or has a totally different point of view.
Cheers