I picked up 2 Samsung / Dell PM1725a (latest firmware) 1.6TB PCIe NVMe cards... eBay of course.. 70 bucks a pop...
well they came today and each had almost identical ~1.6 PB of data read/written and 48,000+ hours uptime.. not exactly spring chickens but health still looked good from what I can make of nvme-cli's version of SMART output.
The use case will be an ESXi server with 1 VM (napp-it) having an LSI 3008 HBA passed through to it to manage the large 5x8TB data store pool.
On my previous servers I had this VM feeding internal AIO ZFS-backed NFS volumes back to ESXi for VM storage, but in this case I think I will keep the VMs native on a namespace.
so ..
first.. ZFS / ESXi, what is the best LBA format for the namespaces? They're currently at 512 bytes.. I am thinking 4K would make more sense since the pools get set up with ashift=12?
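For anyone following along, this is roughly what checking and switching the LBA format looks like with nvme-cli from a Linux box. The device name and the `--lbaf` index are placeholders; the index for the 4K format varies per drive, so read it from the `id-ns` output first. The format command destroys all data on the namespace.

```shell
# List the LBA formats the drive supports; the one marked
# "(in use)" is the current sector size.
nvme id-ns /dev/nvme0n1 --human-readable

# Low-level format to the 4K LBA format (DESTROYS ALL DATA).
# The --lbaf index here (1) is illustrative -- take the real
# index for the 4K format from the id-ns output above.
nvme format /dev/nvme0n1 --lbaf=1

# Then build the pool with 4K alignment made explicit:
zpool create -o ashift=12 tank /dev/nvme0n1
```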
Second... I want to partition a couple of namespaces for SLOG (ZIL) and L2ARC.. the machine will have 128GB of RAM.. and honestly I think the regular in-RAM ARC might be good enough. My current server is running just fine with no SLOG or L2ARC on the data pool.. but the VMs are on a pool of 2 striped S3500 SSDs.
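One nice thing about SLOG and L2ARC is that adding and removing them is non-destructive, so it's cheap to experiment. A sketch with placeholder device names (assuming a pool named `tank` and dedicated namespaces for each role):

```shell
# Add a dedicated log (SLOG) device to absorb sync writes
zpool add tank log /dev/nvme1n2

# Add an L2ARC (read cache) device
zpool add tank cache /dev/nvme1n3

# Either can be removed later without harming the pool
zpool remove tank /dev/nvme1n2
zpool remove tank /dev/nvme1n3
```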
The machine will have a 10GbE NIC.. but honestly the network is gigabit..
I don't think I will be running any active VM workloads on the pool requiring sync writes like NFS.. but there will likely be times where NFS serves out data to other machines.. and of course there will be SMB shares.. like to a VM for the Plex back-end media.. again.. right now that pool on a 5-wide stripe of spinning iron seems to keep up..
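Worth noting for the sync-write question: whether a SLOG ever gets exercised comes down to the dataset's `sync` property and the protocol (NFS requests sync writes by default, SMB mostly doesn't). It can be checked and overridden per dataset; the dataset names below are made up for illustration:

```shell
# Check current sync behavior (standard = honor client sync requests)
zfs get sync tank/vms

# Force every write to be synchronous (safest; slow without a SLOG)
zfs set sync=always tank/vms

# Or disable sync entirely (fast, but risks losing in-flight
# writes on power failure -- fine for rebuildable media, risky for VMs)
zfs set sync=disabled tank/media
```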
I don't think ESXi can manage NVMe namespaces natively.. so what.. should I pass the card through to an Ubuntu VM to set up the namespaces, then detach it?
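If the PM1725a's firmware actually supports namespace management (not a given; check `id-ctrl` first), carving it up from a Linux VM with nvme-cli would go something like this. The sizes are in blocks and the numbers here are purely illustrative, not from the PM1725a spec sheet:

```shell
# Check how many namespaces the controller supports (nn) and
# its total/unallocated capacity
nvme id-ctrl /dev/nvme0 | grep -E "nn|tnvmcap|unvmcap"

# Delete the existing namespace (DESTROYS DATA)
nvme delete-ns /dev/nvme0 --namespace-id=1

# Create two namespaces; --nsze/--ncap are in blocks, --flbas
# selects the LBA format index (e.g. the 4K one)
nvme create-ns /dev/nvme0 --nsze=195000000 --ncap=195000000 --flbas=0
nvme create-ns /dev/nvme0 --nsze=195000000 --ncap=195000000 --flbas=0

# Attach each new namespace to controller 0 so it becomes visible
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0

# Rescan so /dev/nvme0n1, /dev/nvme0n2 show up
nvme ns-rescan /dev/nvme0
```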
Each namespace should be treated like a LUN in ESXi.. so I was figuring I would pass through a namespace to napp-it to attach to the pool.. rather than pass the PCIe device in total.. as I want to keep a namespace or two for native VMFS storage..
Anywho.. school me on this.. it's all new to me... I go back far enough that storage was on punch cards, so be gentle.
haha
thanks in advance ..