Hi,
I'm thinking about condensing my lab / homeprod setup down into a Dell VRTX and have a bunch of questions. Sorry to spam so many questions at once ... I'd be very grateful for any help. Not 100% sure this is the right forum; apologies if there's somewhere better to post this.
- Is there any benefit to having 2x (vs 1x) CMC cards other than redundancy?
- If I'm asking an eBay seller about CMC licensing, what questions should I be asking? Is CMC licensing different from iDRAC licensing (presumably yes)?
- The 10GbE switch module is pretty expensive ... if I don't want to shell out for it yet, can I just put one NIC per blade into the VRTX's PCIe slots and connect them to an external switch?
- What's the deal with mezzanine cards? I recognize this is a very basic question, but I've never worked with blades before ... am I right that I need one of those 3N9XX "PCIe bridge pass-through" mezzanine cards in each M620 in order to use it with the VRTX PCIe slots? And do I need one mezz card per PCIe slot? Are there other useful or necessary mezz cards? Anything else I need to make the blade work?
- The whole shared PERC setup seems restrictive ... no ability to expand pools, and no pass-through of bare drives (so it won't play nice with Storage Spaces Direct or ZFS). So what's the best way to use it? My plan is to use a Windows Server 2016 instance running under Hyper-V as my primary file server, and I'm more comfortable with software RAID. Can I define single-drive "RAID 0" groups on the PERC, allocate those to the Hyper-V blade, pass them through to the filer guest, and pool them in a volume manager there (rough sketch below, after the questions)? Is that dumb?
- I have read that the VRTX actually supports up to 3x single-slot 150 W GPUs or 1x double-slot 250 W GPU ... has anyone actually tried that (for GPU compute, not to drive a monitor)? Given how the PCIe slots are arranged, is there clearance for the power connectors mounted on the top edge of consumer GPUs?
- Any other "gotchas" I should watch out for when shopping for a VRTX?
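For the shared PERC question above, here's roughly the flow I'm imagining, as PowerShell. I haven't tested any of this on a VRTX, and the names ("filer", "tank", "data") and disk numbers are just placeholders:

    # On the Hyper-V host: each single-drive PERC VD shows up as a normal disk.
    # Take it offline so it can be attached to the guest as a pass-through disk.
    Set-Disk -Number 3 -IsOffline $true
    Add-VMHardDiskDrive -VMName "filer" -ControllerType SCSI -DiskNumber 3

    # Inside the filer guest: pool the pass-through disks with Storage Spaces,
    # so redundancy is handled in software rather than by the shared PERC.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "tank" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
    New-Volume -StoragePoolFriendlyName "tank" -FriendlyName "data" -FileSystem ReFS -ResiliencySettingName Mirror -Size 2TB

If that's a dumb way to do it, happy to hear alternatives.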
My planned use case is:
1x blade running Windows Server 2016 with guest instances for a domain controller, PBX, file server, Plex, Home Assistant, 1-2x virtual desktops, and Veeam
1x blade running a database server (probably SQL Server, on either Linux or Server 2012; may switch to Postgres)
1x blade running a Linux Docker host
Thanks very much if you've read this whole long post. Very grateful for any advice or thoughts.