Dell VRTX for HPC - Is it old tech now?

chrgrose

New Member
Jul 18, 2018
I came across the Dell VRTX, and as an enclosure that accepts four blades, is apparently relatively quiet, and is perhaps (?) less power-hungry than multiple R930/R830 systems, I'm really intrigued by it. A VRTX with four M830s looks to run about $1,200-1,300, not counting CPU/memory/storage, so the price isn't bad either.

But I noticed that this machine is now about 10 years old and there appears to be no 'updated' version. So, I am really curious about its limitations. For example, the enclosure system board apparently has only a PCIe 2.0 interface (I found this info on a random review website, not the manual). My main interest is in HPC, basically running my own physics codes, which could use fast shared storage, but the VRTX might also be a good candidate for developing and learning MPI, which requires low-latency, mid- to high-bandwidth networking.
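Since MPI learning came up: the number that matters most for MPI on any of this hardware is round-trip latency between ranks. Below is a rough stand-in sketch using only the Python standard library (a pipe between two local processes). It only illustrates the ping-pong measurement pattern; a real benchmark would use mpi4py or the OSU micro-benchmarks over the actual interconnect, and all names here are made up for the example:

```python
# Toy ping-pong latency measurement between two local processes.
# This is NOT a fabric benchmark -- it just shows the measurement pattern
# that MPI latency tests (e.g. osu_latency) use.
import multiprocessing as mp
import time


def echo(conn, n):
    # Reply to n pings with the same payload.
    for _ in range(n):
        conn.send(conn.recv())


def ping_pong(n=1000, payload=b"x" * 64):
    parent, child = mp.Pipe()
    p = mp.Process(target=echo, args=(child, n))
    p.start()
    t0 = time.perf_counter()
    for _ in range(n):
        parent.send(payload)
        parent.recv()  # wait for the echo before the next ping
    elapsed = time.perf_counter() - t0
    p.join()
    return elapsed / n  # average round-trip time in seconds


if __name__ == "__main__":
    print(f"avg round trip: {ping_pong() * 1e6:.1f} us")
```

Local pipes will report a few microseconds; anything MPI does across the VRTX's internal switch or an external NIC will sit well above that, which is exactly what you'd be measuring.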

Any opinions on this? Am I getting my hopes up? Is the VRTX old hardware?

Thanks!
 

mmk

Member
Oct 15, 2016
Czech Republic
Are you sure about being able to fit four M830s in a VRTX? As far as I know the M830 is a double-wide blade and the VRTX can take only two of those.

If you're really set on blades, then you may want to have a look at the FX2. It's a somewhat more modern alternative, although probably more expensive.

Personally I would say don't do anything with blades except throw them in the skip. Blades were a fashion item in IT for some time, and fortunately they have more or less gone the way of the dodo. In practice they just create unnecessary SPOFs for little or no advantage, no matter what the vendors' marketing material claims.
 

audiophonicz

Member
Jan 11, 2021
X2 on throwing them in the bin. Unless you are completely out of rack space and need ultra-dense compute, skip the blades.

And yes, the x30 series is G13; the x50 series is the new G15, only available in 600/700-series models so far.

We're cycling out our FX2s for R640s and R740s because the blades are a pain to manage. With one non-hot-swap storage sled for each pair of blades, you have to power off two hosts to replace a single drive. And if Dell wants you to update the CMC firmware, you have to power off all four.

Since you mentioned power: 4x FC430 blades + 2x FC332 sleds + 2x 10GbE modules runs about 1,353 W.
One FC430 averages about 230 W (2x 2.5" SSD); one R630 averages about 290 W (10x 2.5" SSD).
So technically 4x R630s would draw 1,160 W, or 193 W less than an equivalent FX2 chassis.
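The comparison above works out as follows (a trivial sanity check; the wattages are the averages quoted in the post, not anything measured here):

```python
# Back-of-the-envelope check of the quoted power figures (watts).
fx2_loaded_w = 1353   # 4x FC430 + 2x FC332 sleds + 2x 10GbE modules
r630_avg_w = 290      # one R630 with 10x 2.5" SSD

four_r630_w = 4 * r630_avg_w           # 1160
savings_w = fx2_loaded_w - four_r630_w  # 193

print(four_r630_w, savings_w)  # 1160 193
```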
 

oneplane

Active Member
Jul 23, 2021
It used to be the fancy way to get dense compute and I/O, but the balance tipped back to 4-socket and later 2-socket systems relatively quickly. There may have been a small Opteron-era advantage, but right now it's as previously described: mostly an expensive risk that only matters if you are somehow out of physical space.

If you do enjoy multi-node systems, the twin/quad nodes make a bit more sense, considering they are pretty much self-contained nodes that just happen to make more efficient use of the available rack depth.