Dell VRTX questions


OnSeATer

New Member
Jul 31, 2016
Hi,

I'm thinking about condensing my lab / homeprod setup down into a Dell VRTX and have a bunch of questions. Sorry to just spam a bunch of questions ... would be very grateful for any help. Not 100% sure this is the right forum; apologies if there is somewhere better to post this.

- Is there any benefit to having 2x (vs 1x) CMC cards other than redundancy?

- If I'm asking an ebay seller about CMC licensing, what questions should I be asking? Is CMC licensing different from iDRAC licensing (presumably yes)?

- The 10gbe switch module is pretty expensive ... if I don't want to shell out for it yet, can I just put one NIC per blade into the VRTX's PCIe slots and connect them to an external switch?

- What's the deal with mezzanine cards? I recognize this is a very basic question, but I've never worked with blades before ... am I right that I need one of those 3N9XX "PCIe bridge pass-through mezzanine cards" for each m620 in order to be able to use it with the VRTX PCIe slots? And do I need one mezz card per PCIe slot? Are there other useful or necessary mezz cards? Anything else I need to make the blade work?

- The whole shared PERC setup seems restrictive ... no ability to expand pools, and no ability to pass-through bare drives (so no playing nice with storage spaces direct or zfs). So what's the best way to use it? My plan is to use a windows server 2016 instance running under hyper-v as my primary file server. I am more comfortable with software raid. Can I define single drive "raid" groups in the PERC, allocate those to the hyper-v blade and then pass those through to the filer guest instance and pool them in volume manager there? Is that dumb?

- I have read that the VRTX actually supports up to 3x single-slot 150W GPUs or 1x double-slot 250W GPU ... has anyone actually tried that (for GPU compute, not to drive a monitor)? Given how the PCIe slots are arranged, is there clearance for the power connectors mounted along the top edge of consumer GPUs?

- Any other "gotchas" I should watch out for when shopping for a VRTX?

My planned use case is:
1x blade running windows server 2016 with guest instances for domain controller, PBX, file server, plex, home assistant, 1-2x virtual desktop and veeam
1x blade running database server (probably sql server either on linux or server 2012, may switch to postgres)
1x blade running linux docker host

Thanks very much if you've read this whole long post. Very grateful for any advice or thoughts.
 

Fzdog2

Member
Sep 21, 2012
* Starred my feedback below; we manage about 20 VRTX systems.

- Is there any benefit to having 2x (vs 1x) CMC cards other than redundancy?
*No

- If I'm asking an ebay seller about CMC licensing, what questions should I be asking? Is CMC licensing different from iDRAC licensing (presumably yes)?
*CMC licensing is different from iDRAC licensing

- The 10gbe switch module is pretty expensive ... if I don't want to shell out for it yet, can I just put one NIC per blade into the VRTX's PCIe slots and connect them to an external switch?
*Yes, you can put 10GbE PCIe cards in the PCIe slots; it's a 1:1 card-to-blade mapping

- What's the deal with mezzanine cards? I recognize this is a very basic question, but I've never worked with blades before ... am I right that I need one of those 3N9XX "PCIe bridge pass-through mezzanine cards" for each m620 in order to be able to use it with the VRTX PCIe slots? And do I need one mezz card per PCIe slot? Are there other useful or necessary mezz cards? Anything else I need to make the blade work?

- The whole shared PERC setup seems restrictive ... no ability to expand pools, and no ability to pass-through bare drives (so no playing nice with storage spaces direct or zfs). So what's the best way to use it? My plan is to use a windows server 2016 instance running under hyper-v as my primary file server. I am more comfortable with software raid. Can I define single drive "raid" groups in the PERC, allocate those to the hyper-v blade and then pass those through to the filer guest instance and pool them in volume manager there? Is that dumb?
*In the VMware installs we have, the storage behind the Shared PERCs counts as shared storage, so vMotion/HA all work even though it is local RAID on the chassis disks

- I have read that the VRTX actually supports up to 3x single-slot 150W GPUs or 1x double-slot 250W GPU ... has anyone actually tried that (for GPU compute, not to drive a monitor)? Given how the PCIe slots are arranged, is there clearance for the power connectors mounted along the top edge of consumer GPUs?
*I've only dealt with GPUs on the FX2 platform, where you are limited to the 75W of the PCIe slot. Not sure if the same applies to the VRTX.

- Any other "gotchas" I should watch out for when shopping for a VRTX?
 

frogtech

Well-Known Member
Jan 4, 2016
I wouldn't bother with the 10 GbE switch; it's kind of weird, actually. There's a hidden fabric/network inherent to the chassis infrastructure that makes doing static routes really awkward. Just get a regular 10 gig switch and a pass-through module for management connectivity to the blades.

The shared storage is actually pretty cool, but yes, it is a shame you can't do pass-through. I don't see why you couldn't do individual RAID 0s for each drive, assign them to blades and then pass them through to VMs, but that might not be optimal. You would probably get better performance by just creating a RAID volume in the CMC, assigning it to all nodes as a cluster shared volume, and creating your VHDX on that volume dedicated to your filing stuff. You would need it to be a shared volume anyway if you ever use any kind of high availability / clustering services.
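
If you do go the one-RAID-0-per-drive route and hand the disks to a filer guest, the per-disk plumbing on the Hyper-V blade would look roughly like the sketch below. This is just an illustration, not something from this thread: the VM name "filer" and the disk numbers are placeholders, and the real work is done by the standard Hyper-V / storage cmdlets (Set-Disk, Add-VMHardDiskDrive), which you could just as easily run by hand.

```python
# Rough sketch: attach host disks (each a single-drive RAID 0 carved out on
# the Shared PERC and mapped to this blade) to a Hyper-V guest as
# pass-through disks, so the guest can pool them itself.
# Run on the Hyper-V blade as administrator. "filer" and the disk numbers
# below are made-up placeholders.
import subprocess

VM_NAME = "filer"          # hypothetical filer guest
DISK_NUMBERS = [2, 3, 4]   # host disk numbers of the PERC virtual disks

def ps(command: str) -> None:
    """Run one PowerShell command and raise if it fails."""
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        check=True,
    )

for n in DISK_NUMBERS:
    # A disk must be offline on the host before Hyper-V will pass it through.
    ps(f"Set-Disk -Number {n} -IsOffline $true")
    # Attach the raw disk to the guest's SCSI controller.
    ps(f"Add-VMHardDiskDrive -VMName '{VM_NAME}' -ControllerType SCSI -DiskNumber {n}")
```

Inside the guest you would then pool the disks with whatever you prefer (Storage Spaces, LVM, etc.), but as above, a single shared RAID volume is the simpler path if you ever want clustering.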

The deal with the Mezz cards is yes, you need 3N9XX DP/N, 2 per blade, for full functionality. One mezzanine card is for Fabric B and one is for Fabric C. Fab B is for the storage functionality from the enclosure and Fab C is for the PCIe AIC functionality.
 

OnSeATer

New Member
Jul 31, 2016
Thank you very much! That is an extremely helpful reply and addresses a bunch of my questions. Sounds like if I go down this road the right approach is using the storage in a shared hardware raid volume as you describe ... a shame to give up the features of software raid (zfs scrubbing, storage spaces tiering, etc.) but it seems like a necessary trade-off with this system.

Hope you don't mind a couple of quick follow-ups?

- I've read conflicting information about whether I need VRTX-specific blades. Can I just buy any old m620 from ebay and expect it to work? Or do I need to get one that is specifically designed for VRTX? Dell documents seem to suggest there is no difference, but a few blog posts say otherwise.

- When you say to get a "pass through module for management connectivity" do you mean the VRTX R1-PT? Those seem hard to find on ebay ... if it is just for the blades' iDRAC, can I use the R1-2401 1Gb switch module instead? Those seem to be easily available...

- Fully loaded with m620 blades, is it really quiet enough to keep under a desk? The tech spec document says 40-45 dB with standard m620 blades, but the youtube videos are confusing. In one video it seems basically as quiet as a high-spec workstation, but in another it seems more like a normal server (not a screamer, but maybe like an HP DL380 with fans at 40-50%). I recognize this is very subjective, but is it something you could sit next to all day?

Thank you again for your help! My current setup isn't working for me anymore, and this is really helpful as I try to figure out what to replace it with.
 

frogtech

Well-Known Member
Jan 4, 2016
- You don't need VRTX-specific blades, any M620 will do. It's just that if you buy the VRTX from Dell or a vendor like CDW, SHI, etc., and they sell you a package with blades included, those blades come with the mezz cards installed and have a sticker near the front that says PCIe, which just means they have the 3N9XX PCIe pass-through adapters installed. The enclosure basically uses a PCIe switch for all of its features aside from the networking.

- Yeah, that's what I mean. It's available here: Dell Force10 I/O Gigabit Pass-Through Module PowerEdge VRTX Enclosure FT79X | eBay. The blades' iDRAC connectivity is actually carried over the enclosure's hidden network fabric that is part of the CMC, and it's super simple and super dumb (literally): there's no configuration for it; basically, if you have a network cable plugged into one of the CMC network ports, you can access the blades' BMC/iDRAC. Like I said, it's weird. I don't prefer it, and it made doing some advanced networking more of a pain than it needed to be with the switch modules, which is why I suggest just using a pass-through to an actual switch of your choice.
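
To put that in concrete terms: once a cable is in one of the CMC ports, the blades' iDRACs are just ordinary hosts on that management network, so a throwaway check like the one below is enough to confirm you can reach each BMC's web interface through the chassis. The addresses are made-up placeholders, not anything from this thread.

```python
# Minimal reachability check for the CMC and blade iDRACs over the chassis
# management uplink. The IPs below are made-up placeholders.
import socket

MGMT_ENDPOINTS = {
    "CMC":          "192.168.1.120",
    "blade1 iDRAC": "192.168.1.121",
    "blade2 iDRAC": "192.168.1.122",
    "blade3 iDRAC": "192.168.1.123",
}

for name, ip in MGMT_ENDPOINTS.items():
    try:
        # iDRAC/CMC web UIs listen on HTTPS (port 443).
        with socket.create_connection((ip, 443), timeout=3):
            print(f"{name:13} {ip:15} reachable")
    except OSError:
        print(f"{name:13} {ip:15} NOT reachable")
```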

- Fully loaded it's relatively quiet, but since you plan on adding 10 GbE adapters, it might not be super quiet. I noticed that adding add-in cards made the fans speed up just a bit, I think 30% across the 80 mm fans in the chassis? It's a kind of low droning noise. Not awful, but TBH not sure I'd want to sit next to it all day. YMMV. Like you said, this is subjective, but I think you'll find your experience matching mine.

What is your budget? I'm not really sure I can recommend a VRTX just because of the pricing and current availability on ebay. It seems like there are quite a few chassis that have randomly popped up on ebay in the last month or so that have been -completely- gutted: no motherboard, no backplane, no PSUs, nothing. And I don't really get who the target market is for those... If you can get an enclosure for $1,000 or less, with 4 power supplies, backplane, motherboard, 1 CMC, 1 Shared PERC8, and at least 1 type of switch, then it shouldn't be too bad getting the rest of it loaded out.

In my case I spent quite a bit of time without a lab just waiting for a "good enough" VRTX to appear on ebay, and then I wasted even more time piecing everything together for it, when I could've just done something simpler.

YMMV.
 

OnSeATer

New Member
Jul 31, 2016
Thank you so much for your reply! It's very kind of you and the benefit of your experience with the VRTX is super relevant and helpful.

Here's what I'm thinking for a parts list / budget ... I didn't include RAM because I can harvest some from my existing servers:

Chassis, 1x CMC, 1x PERC 8 (DELL POWEREDGE VRTX ENCLOSURE NO BLADES INSTALLED 1x CMC CONT. 1x Perc 8 | eBay): $1,900 (need to confirm CMC Enterprise license and backplane installed)
FT79X pass-through: $120 (per your link)
Chassis subtotal: $2,020

M620 barebones: $150
2x 3N9XX Mezz: $160 (total for both)
iDRAC 7 enterprise license $70
Barebones subtotal per blade: $380

E5-2660 (or comparable) matched pair: $150
Subtotal per blade pre RAM: $530

Chassis + 3x Blades: $3,610 (not including memory, pcie cards or storage)

... definitely more expensive than 3 rack servers would be, but maybe an acceptable premium to pay for the density.
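
For anyone sanity-checking the numbers, here's the same budget as a quick throwaway Python sketch (all figures are the eBay estimates quoted above, not confirmed prices):

```python
# Rough cost model for the VRTX build-out above (eBay estimates, pre-RAM).
chassis = {
    "VRTX enclosure, 1x CMC, 1x Shared PERC8": 1900,
    "FT79X 1GbE pass-through module": 120,
}
per_blade = {
    "M620 barebones": 150,
    "2x 3N9XX PCIe bridge mezzanine cards": 160,
    "iDRAC7 Enterprise license": 70,
    "E5-2660 (or comparable) matched pair": 150,
}
num_blades = 3

chassis_total = sum(chassis.values())    # 2,020
blade_total = sum(per_blade.values())    # 530 per blade, pre-RAM
grand_total = chassis_total + num_blades * blade_total

print(f"Chassis subtotal:   ${chassis_total:,}")
print(f"Per-blade subtotal: ${blade_total:,}")
print(f"Chassis + {num_blades} blades: ${grand_total:,}")  # $3,610 before RAM, NICs, storage
```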

Your comment on the benefit of simplicity does resonate strongly though. My current rack setup is unnecessarily complicated, sprawling and (despite my best efforts) loud. For noise, space and clutter reasons, I want to ditch the full-depth rack entirely (for now) and move to a small switch-depth cabinet for networking gear + 1x freestanding tower for my 2.5" drives + 1x DAS for my 3.5" drives. But maybe this would just be trading the rackmount devil I know for the VRTX devil I don't.

The noise question is super important because this lives in my home office.

Honestly, I'm thinking about just trying to jam everything into a single high-spec T620. Those are basically silent (specs claim 30 dB in a typical config) and have a ton of bays. The problem is just that if I run out of CPU headroom on the one server there aren't any good fallback options ... budget-wise it would be fine to just buy another one (3x T620s are about the same cost as the 3-blade VRTX configuration above), but they take up a lot of floor space. Based on current average utilization I'd be fine most of the time, but I'm not sure about peak utilization.

Thank you again (especially if you took the time to read this long reply).
 

frogtech

Well-Known Member
Jan 4, 2016
No problem!

One thing to note is that the chassis you linked is in rack mode, so, it's missing a bottom side panel (would be the left panel if standing upright), a top panel, and a bottom panel. The top panel is mostly cosmetic but it's nice to have. The bottom panel is definitely required if you want to put it in tower mode. Without the bottom plastics there are no feet so the server would be resting on a rack ear. Not ideal. And that also means you couldn't use a caster kit if one ever popped up on ebay.

Re: the pricing and costs, you can find M620s for 117/ea from this seller:

LOT OF 2 Dell Poweredge M620 BARE BONE H710P 1GB BLADES | eBay

I bought 4 from that seller and they all included a 10 GbE 2-port network daughter card, and iDRAC7 enterprise. So I didn't have to buy the license. That would cut back on costs. If you're lucky you can get the cost for the 3N9XX down to 40/50 each, via best offer on ebay if sellers have enough in stock and want to move them. I bought 8 for 50/ea. More savings there. CPU and RAM is YMMV. I've gotten lucky and found sellers who were willing to move 16GB DIMMs for 30/ea. They're usually individual sellers and not resellers so YMMV on that also.

One last thing I didn't see you consider was the cost of the hard drive trays. I could not find a single seller who would budge on this. So, if you get the SFF enclosure, expect to spend $16-17 each x 33 on drive trays (call it another $530-560).

Funny you mention getting multiple T620s. I looked at that route before, but they're not cheap enough for me to consider buying more than one. When I build out labs I build for clustering/high availability, as that's what is most common in an enterprise, and I go for short-depth chassis. That means I can use a rack that doesn't take up as much floor space. But that's just me. Handling shorter servers is way easier too, especially if you want to sell, ship, or move.

Btw, the Dell VRTX is seriously heavy. If you're the kind of guy who likes to re-sell stuff to try different things, you might find you'll be sitting on the VRTX for a while if you try to sell it, unless you know how to get it re-boxed and put on a pallet. It pretty much can only be shipped freight.
 