Your thoughts on a storage solution for a hypervisor (U.2 vs SAS)

Which option would you recommend?



UnknownPommes

Active Member
So I am building a new system but am not sure what to use for storage.

What I am looking for:
  • 6-8 TB usable space
  • min. 250k IOPS read & write, preferably ~400k
  • under 40 W for all drives (and HBA if required)
  • 1-drive parity / redundancy
  • 1 PB+ endurance per drive
  • one PCIe x16 3.0 slot for connection (either for U.2 or an HBA; quick bandwidth check below)
  • cheapest option that fits those requirements
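As a quick sanity check that a single x16 Gen3 slot is enough for the IOPS target, here is a minimal sketch; the per-lane throughput and the 4K I/O size are my own rough assumptions, not measurements:

```python
# Rough check: does one PCIe 3.0 x16 slot (bifurcated into 4x x4) cover ~250-400k IOPS?
# The per-lane throughput and I/O size below are ballpark assumptions.

PCIE3_GBPS_PER_LANE = 0.985    # ~985 MB/s usable per Gen3 lane after encoding overhead (assumed)
LANES_PER_DRIVE = 4            # x16 bifurcated into four x4 links
IO_SIZE_BYTES = 4 * 1024       # assume 4 KiB random I/O

per_drive_gbs = PCIE3_GBPS_PER_LANE * LANES_PER_DRIVE        # ~3.9 GB/s per drive
iops_ceiling_per_drive = per_drive_gbs * 1e9 / IO_SIZE_BYTES  # link-level ceiling, not a drive spec

print(f"per-drive link: ~{per_drive_gbs:.1f} GB/s (~{iops_ceiling_per_drive/1e6:.1f}M 4K IOPS ceiling)")
print(f"whole x16 slot: ~{4 * per_drive_gbs:.0f} GB/s, so 250-400k IOPS is nowhere near the bus limit")
```

So the slot itself should not be the bottleneck; it comes down to the drives and the pool layout.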

What would you recommend?
Idea 1:

4x U.2 drives like the Intel P4600, either in raidz1 if the IOPS stay over 250k, or in raid10 with larger drives if not (rough comparison below).
I would be using a PCIe-to-SFF-8643 bifurcation adapter board like the "U.2 NVMe SSD SFF8643 to SFF-8639 NVMe U.2 with cable PCIe x16 Quad Port Adapter" on eBay.
I am not planning to use a tri-mode HBA because of the cost, and I am only going to use a maximum of 4 NVMe SSDs due to the power draw.
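To make the raidz1 vs mirrors decision concrete, here is a minimal back-of-the-envelope sketch; the drive size and per-drive write IOPS are placeholder assumptions, not P4600 datasheet values:

```python
# Rough usable-capacity / write-IOPS comparison for 4 drives: raidz1 vs striped mirrors ("raid10").
# Drive size and per-drive IOPS are placeholder assumptions, not datasheet values.

N_DRIVES = 4
DRIVE_TB = 3.2              # assumed per-drive capacity
DRIVE_WRITE_IOPS = 150_000  # assumed per-drive 4K random write IOPS

# raidz1: one drive's worth of parity; small random writes behave roughly
# like a single drive per raidz vdev (common ZFS rule of thumb)
raidz1_usable_tb = (N_DRIVES - 1) * DRIVE_TB
raidz1_write_iops = DRIVE_WRITE_IOPS

# striped mirrors: half the raw capacity, write IOPS scale with the number of mirror pairs
mirrors_usable_tb = N_DRIVES / 2 * DRIVE_TB
mirrors_write_iops = (N_DRIVES // 2) * DRIVE_WRITE_IOPS

print(f"raidz1 : ~{raidz1_usable_tb:.1f} TB usable, ~{raidz1_write_iops:,} write IOPS")
print(f"mirrors: ~{mirrors_usable_tb:.1f} TB usable, ~{mirrors_write_iops:,} write IOPS")
# -> mirrors clear the IOPS target more easily, but need larger drives to reach 6-8 TB usable
```

That is why it would be raidz1 only if a single vdev still clears 250k, and raid10 with bigger drives otherwise.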

Idea 2:
A PCIe to 4x M.2 adapter board with semi-consumer SSDs, again in either raidz1 or raid10.
Endurance might be a problem with some SSDs, but there are M.2 SSDs like the MTE220S rated for 4.4 PB (for the 2 TB model), which I am actually using in many other systems (quick endurance math below).
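As a quick check that such ratings actually clear the 1 PB+ requirement, a minimal sketch; the 5-year window and the generic consumer TBW figure are assumptions, while the 4.4 PBW number is the MTE220S 2 TB rating mentioned above:

```python
# Endurance sanity check: does a given TBW rating clear the "1 PB+ per drive" bar,
# and what daily write load does it sustain? The 5-year window and the generic
# consumer figure are assumptions.

def endurance_report(name, capacity_tb, rated_pbw, years=5):
    tbw = rated_pbw * 1000                 # PB written -> TB written
    daily_tb = tbw / (365 * years)         # sustained TB/day over the window
    dwpd = daily_tb / capacity_tb          # drive writes per day
    meets = "yes" if rated_pbw >= 1.0 else "no"
    print(f"{name}: clears 1 PB+? {meets}; ~{daily_tb:.2f} TB/day for {years} years (~{dwpd:.2f} DWPD)")

endurance_report("MTE220S 2TB (4.4 PBW)", capacity_tb=2, rated_pbw=4.4)
endurance_report("generic consumer 2TB (~1.2 PBW, assumed)", capacity_tb=2, rated_pbw=1.2)
```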

Idea 3:
6 or 8 SAS SSDs in either raid50 or raid10 depending on the performance, connected through an LSI 9310-8i; the performance might be close, though. (Rough power-budget check below.)
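Whether 6-8 SAS SSDs plus an HBA still fit under the 40 W line is worth a quick check; the per-drive and HBA wattages here are rough assumptions, not measured values:

```python
# Power budget check for Idea 3: 6 or 8 SAS SSDs plus an HBA against the 40 W ceiling.
# Per-device wattages are rough assumptions; check the actual datasheets.

HBA_W = 12               # assumed; SAS3 HBAs are often quoted somewhere around 10-15 W
SAS_SSD_ACTIVE_W = 4.5   # assumed average active power per SAS SSD

for n_drives in (6, 8):
    total_w = HBA_W + n_drives * SAS_SSD_ACTIVE_W
    verdict = "within" if total_w <= 40 else "over"
    print(f"{n_drives} drives + HBA: ~{total_w:.0f} W ({verdict} the 40 W budget)")
```

Under those assumptions 6 drives squeak by and 8 do not, which is another argument for fewer, larger drives.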


(With all the "conventional RAID names" like 10 or 50, I am referring to the ZFS-based counterparts.)
What would you recommend? Any thoughts?
 

SnJ9MX

Active Member
I have a feeling this will be an increasingly common dilemma for the folks here and over on r/homelab. Realistically, grab 2x 7.68 TB U.2 NVMe drives, stick them on a 2x PCIe card, and you're set.

As far as $/[metric] goes, >2 TB drives are still more expensive, but that premium seems to be dropping quickly.

Performant SAS drives cost more than NVMe these days, likely because of how easy it is to hook up four or so NVMe drives per system.
 

ano

Well-Known Member
VMFS is so slow, you will not be seeing those numbers easily.

NVMe is cheaper, but beware: you need good drives. Slow NVMe drives are slower than (good) SAS ones.

The P4600 and P4610 are proven good.
 

UnknownPommes

Active Member
For what exactly?
I thought most hypervisors (ESXi) use large block sizes to reduce IO.
Well, this is going to be one of my main servers, running a lot of VMs & containers, databases for a lot of services, logging/monitoring stuff like Zabbix for several hundred VMs/devices, and a bunch of other things, and it is running Proxmox as the hypervisor.
Idk, I based the numbers on what I measured on the current server it is going to replace and doubled that for future proofing.
 

ano

Well-Known Member
You plan to run VMFS directly on them with zero redundancy for hundreds of VMs? Or vSAN?
 

UnknownPommes

Active Member
You plan to run VMFS directly on them with zero redundancy for hundreds of VMs? Or vSAN?
No, I am running Proxmox, so no VMFS but ZFS instead. This is a homelab, so most stuff is not extremely critical regarding downtime. This system is not running hundreds of VMs, just the logging for them.
The setup is basically 3 servers running everything that needs to stay on 24/7, and around 25 servers that are only on when needed.
This post concerns the two identical 24/7 ones that are going to be replaced by two new identical ones. Those run stuff like DNS and monitoring for all of the close to 30 servers, with everything critical in a high-availability setup.
And also, yes, nearly all of the servers are using local storage, but that's fine for my setup.
Like I said, this is not a super-critical prod environment but rather my home cluster.

Edit:
Having local storage on all of the non-24/7 servers has the benefit that it saves power. If all the storage were on 2 or 3 central servers, those would also need to stay on 24/7, using a lot of power, not even speaking of the increased network load / cost. With local disks, when the server that requires them is offline, they are also offline and don't draw power. (Rough numbers below.)
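To put a rough number on that, a minimal sketch; the extra wattage of an always-on storage box and the electricity price are assumptions, so plug in your own:

```python
# Rough yearly cost of keeping a central storage server on 24/7 vs. letting local disks
# power off together with their host. Both inputs are assumptions; adjust to your setup.

EXTRA_CENTRAL_W = 80     # assumed extra draw of an always-on storage box incl. its disks
EUR_PER_KWH = 0.35       # assumed electricity price
HOURS_PER_YEAR = 24 * 365

kwh_per_year = EXTRA_CENTRAL_W / 1000 * HOURS_PER_YEAR
print(f"always-on central storage: ~{kwh_per_year:.0f} kWh/year "
      f"(~{kwh_per_year * EUR_PER_KWH:.0f} EUR/year at {EUR_PER_KWH} EUR/kWh)")
```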