Dell VRTX chassis - A VERY innovative machine


dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
Reading storagereview.com, I stumbled onto the new Dell VRTX server. They do a fairly good job of describing the server in their review, but there are some additional features that will interest the STH audience:

On the surface, the Dell VRTX looks like a four-node Dell c6220 welded to a 25-slot disk chassis. All told, it's a 5U box that holds four dual-Xeon E5 compute nodes with 24 DIMMs each. Those are great specifications, but the really exciting part is the level to which they have virtualized the internal wiring:

The VRTX has eight PCIe 3.0 slots, but you get to decide how to allocate them. All four nodes are wired to a pair of programmable PCIe switches, and you can use the web UI to allocate up to four PCIe slots to any given node.
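To make that rule concrete, here's a minimal sketch of the constraint in Python (slot and node names are made up; the real mapping is done through the chassis web UI):

Code:
MAX_SLOTS = 8
MAX_SLOTS_PER_NODE = 4
NODES = {"node1", "node2", "node3", "node4"}

def validate_pcie_map(slot_to_node):
    """Check a proposed slot -> node mapping against the chassis limits."""
    if any(slot not in range(1, MAX_SLOTS + 1) for slot in slot_to_node):
        raise ValueError("the chassis only has PCIe slots 1-8")
    for node in NODES:
        owned = [s for s, n in slot_to_node.items() if n == node]
        if len(owned) > MAX_SLOTS_PER_NODE:
            raise ValueError(f"{node} owns {len(owned)} slots; the limit is {MAX_SLOTS_PER_NODE}")

# Example: node1 gets a GPU and a 10GbE NIC, node2 gets an extra HBA.
validate_pcie_map({1: "node1", 2: "node1", 3: "node2"})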

The VRTX has up to 25 SAS-only disk slots, but all of them run through a SAS expander to a unique shared RAID card based on an LSI 2208 chip and SR-IOV. You can use the web UI to allocate any or all of the disks to any node, and you can even share disks across nodes (shared SAS) for clustering purposes. Having just one RAID card shared across four nodes will definitely limit throughput, but it makes clustering a breeze.
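In other words, a virtual disk carved from the shared pool can be presented to a single node or to several at once. A rough model of that idea (names and sizes invented purely for illustration):

Code:
from dataclasses import dataclass, field

@dataclass
class VirtualDisk:
    """A RAID volume carved from the shared 25-bay pool."""
    name: str
    size_gb: int
    assigned_nodes: set = field(default_factory=set)

    def assign(self, *nodes):
        # Mapped to one node, the volume is dedicated; mapped to several,
        # it behaves like shared SAS for clustering.
        self.assigned_nodes.update(nodes)

# A dedicated boot volume for one blade...
boot1 = VirtualDisk("boot-node1", 100)
boot1.assign("node1")

# ...and a cluster volume visible to all four blades at once.
csv = VirtualDisk("cluster-vol", 4000)
csv.assign("node1", "node2", "node3", "node4")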

Even the Gigabit ports can be configured for either highly available pass-through (as in a blade chassis) or internal switching, with up to four ports per node and eight total ports out the back.

Lastly, each node has a pair of SD cards designed for redundant copies of a VM host operating system. That means a total of 25 shared SAS disks, 8 internal SAS/SATA disks, and 8 SD cards per chassis.
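The tally works out like this, assuming the two internal 2.5" bays and two SD slots on each of the four blades:

Code:
NODES = 4
shared_sas_bays = 25          # front bays behind the shared RAID card
internal_disks = NODES * 2    # two SAS/SATA bays per blade
sd_cards = NODES * 2          # two redundant SD modules per blade

print(shared_sas_bays, internal_disks, sd_cards)   # -> 25 8 8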

All in all, this is a great server for a small (but very advanced) VM cluster. That said, you probably won't go rushing out to buy one. While we pay $750 for a working Dell c6100 and then dream of turning it into the perfect VM cluster and storage server, fulfilling that dream with a Dell VRTX will cost you at least $15K for a stripped down server and around $45K for a fully populated one.
 

Biren78

Active Member
Jan 16, 2013
I have had the VRTX bookmarked for quite some time now. Looks very interesting, especially for 2015 when they start coming off lease.
 

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
Shoot for 2016 - they're rare as hen's teeth right now. Even Dell is having a hard time getting them internally for trade shows and the like. Also, this first batch doesn't have the secondary-path RAID controller or 10GbE off the back from the integrated ports (you could add them via PCIe).

Absolutely amazing DR in a box for the price.
 

mrkrad

Well-Known Member
Oct 13, 2012
I'd rather have a common-parts design. This whole MR-IOV approach is complicated compared to SR-IOV.

With VSAN being software RAID only (IT mode only, with SMART access), RAID controllers will be greatly devalued.


All RAID controllers support clustering (two controllers, one host), but with MR-IOV? Maybe not. Hell, they can't even deliver SR-IOV drivers for LSI or PMC cards.

Blades are a PITA, honestly. When things go wrong - like a bad firmware flash to the chassis administrator - you can turn something super-redundant into a brick. A very expensive, hard-to-replace brick.

I could call up a number of buddies and have a DL380 G7 in an hour on a holiday, but with this you'd have to keep two VRTX chassis on site.
 

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
Well of course YOU don't like it ;). It's still a pretty impressive beast. Remember that VSAN requires a 300% hit on disks, so to share, say, 6x1TB you need at least 18 of them across 3 hosts - that adds up. Also, this thing does shared SAS across all 4 nodes - I've seen plenty of 2-node active/active setups, but not 4.
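Taking that 300% figure at face value (the real overhead depends on the VSAN policy you pick), the math behind the example works out like this:

Code:
usable_tb = 6            # the 6x1TB you want to share
overhead = 3             # the quoted 300% hit
hosts = 3
drive_size_tb = 1

raw_tb = usable_tb * overhead        # 18 TB of raw disk
drives = raw_tb // drive_size_tb     # 18 x 1 TB drives
per_host = drives // hosts           # 6 drives in each of the 3 hosts
print(raw_tb, drives, per_host)      # -> 18 18 6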

Doesn't change the fact that for the cost of the 4x R620s I bought last fall, I could have had this with more memory and the 12x3TB shared disks. Instead of using VSAN, use the 2 onboard 2.5" carriers per M620 and use vFlash. When they build the units with 2x10GbE out the back, you'll be able to string those out to, say, a stack of something like PC5548s with 10GbE uplinks and provide 192 ports to the clients - that's a decent little SMB in a box. Considering the blades can do up to 768GB and dual 8-core CPUs, that's a LOT of compute power.
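Rough numbers behind that "SMB in a box" idea, assuming 48-port PC5548 switches and the blade maximums quoted above:

Code:
ports_per_switch = 48
client_ports = 192
switches = client_ports // ports_per_switch        # a 4-switch stack

blades = 4
ram_gb = 768                                       # max per blade
cores_per_blade = 2 * 8                            # dual 8-core CPUs
print(switches, blades * ram_gb, blades * cores_per_blade)   # -> 4 3072 64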

Sure, you could brick the chassis - but you could also push out a bad flash to 4 rack servers at the same time; I've fixed situations where someone has.

Short of going used, it's a decent deal. Doesn't hurt that the M620/M520 blades it uses are the same as those in the larger chassis and are regularly found used on eBay.