Hardware Behind the Dell EMC PowerEdge MX Innovation


rune-san

Member
Feb 7, 2014
Wanted to note a couple more things that I thought were pretty cool:

The compute sleds are persistent memory compatible. You lose 2 drive slots where the battery is installed in the front.

The Storage sled is dual-port SAS, and because of the midplane-less design you do NOT have to provision storage to the adjacent blade. You can provision storage in pieces to any blade with a PERC RAID controller installed. There is also a SAS expander built into the Storage sled, allowing you to coalesce multiple Storage sleds into a unified storage module. You can also do cool things like building an HA active/passive design with the compute sleds, since the sleds are dual-port and not tied to the adjacent slot.
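For anyone who wants to poke at the chassis from the management side, OME-Modular on the MX exposes a Redfish API, so a sketch like the one below can enumerate what the enclosure reports. It only touches standard DMTF endpoints; the address, credentials, and exact resource layout here are placeholders rather than Dell specifics, so check the MX Redfish guide before leaning on any particular path.

```python
# Minimal sketch: walk the standard Redfish /Chassis collection that the
# MX management module is expected to expose. Address and credentials are
# placeholders; resource names beyond the DMTF-standard ones may differ.
import requests

BASE = "https://192.0.2.10"      # hypothetical OME-Modular address
AUTH = ("admin", "password")     # placeholder credentials
VERIFY = False                   # lab-only; use proper certs in production

def get(path):
    resp = requests.get(BASE + path, auth=AUTH, verify=VERIFY)
    resp.raise_for_status()
    return resp.json()

# /redfish/v1/Chassis is a standard collection; sleds and the enclosure
# itself typically appear as members here.
for member in get("/redfish/v1/Chassis").get("Members", []):
    detail = get(member["@odata.id"])
    print(detail.get("Id"), detail.get("ChassisType"), detail.get("Model"))
```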

I'm trying to read more into HPE's counter that the system isn't "truly composable" vs. their own Synergy solution. I really think this new chassis is forward-thinking and a potentially dynamite solution.
 

WANg

Well-Known Member
Jun 10, 2018
Hm. If there's no midplane, what's the actual interconnect method to "stitch" the machine together? Something crazy like 100GbE on RoCE/iWARP?
 

rune-san

Member
Feb 7, 2014
It's worth noting that there actually *are* still mid-planes in the system. There's a Power Plane to grid the PSUs together, and there's a small mid-plane in the Storage Fabric (SAS and FC).

The actual stitching is on the Ethernet side of the house. The fabric is composed of two A switch modules and two B switch modules, and there is no midplane between those and the blades. Today, that's four 25GbE ports from each blade with RoCE support. Later on, that could be 100GbE, 400GbE, and possibly Gen-Z protocols. Without a mid-plane, replacing the fabric switches lets you take immediate advantage of new technologies for every supporting compute sled you install.
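If you want to verify the RoCE piece from the OS side on a compute sled running Linux, a quick sysfs walk is usually enough. This is a generic sketch of the kernel's standard RDMA paths rather than anything MX-specific, and the device names it prints depend entirely on the NIC and driver.

```python
# Sketch: list RDMA (RoCE-capable) devices the kernel has registered and
# map each back to its Ethernet interface via the PCI device's net/ dir.
# Paths are the standard Linux RDMA sysfs layout; output varies by NIC.
import os

IB_CLASS = "/sys/class/infiniband"

if not os.path.isdir(IB_CLASS):
    print("No RDMA-capable devices registered with the kernel.")
else:
    for rdma_dev in sorted(os.listdir(IB_CLASS)):
        net_dir = os.path.join(IB_CLASS, rdma_dev, "device", "net")
        netdevs = sorted(os.listdir(net_dir)) if os.path.isdir(net_dir) else []
        print(f"{rdma_dev}: netdev(s) {', '.join(netdevs) or 'unknown'}")
```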
 

WANg

Well-Known Member
Jun 10, 2018
Huh. So there is a power distribution midplane and one for the storage. That means there are two smaller things to worry about instead of one - any guesses as to whether they're easier to get at than the current "unrack-teardown-and-swap" midplane setup on the M1000e enclosures? Because if it requires another derack-and-swap, that renders the whole exercise rather moot.

The interconnect is now theoretically upgradeable by swapping out the interconnect switch, huh. Wonder if that makes the enclosure more expensive on the initial spend, and how much of a limiting factor the electrical/data connectivity in the chassis wiring that switch to the blades will turn out to be...

Anyone have access to the service manual for the PowerEdge MX chassis? I kinda want to take a look at the service procedures and see what the rank-and-file will think of them...
 

rune-san

Member
Feb 7, 2014
The difference is that Dell's core focus (and the industry's in general) doesn't rely on a mid-plane. Composable environments are leveraging file-based and object-based storage technologies. Fibre Channel and SAS are still needed in a great number of today's infrastructures, but FC has remained relatively flat or is declining in most markets, while file-based NAS services continue to climb steadily in offerings. Because NAS and object storage ride the wave of aggressive growth in the Ethernet market, they get to take advantage of converged infrastructure and speeds that completely overshadow even the most expensive FC standards. Combine that with the port-based licensing model still common in the block realm, and it gets more and more difficult to recommend block-based storage methodologies.

Power planes are almost always going to be a SPOF; I'm not aware of a design out there where one isn't. But a power plane failing is also extremely rare (I've read about it, but I've never seen it happen).

The storage mid-plane could fail, and it will also limit upgradeability, but the core of this design isn't based around that - it's about being able to scale the Ethernet, RoCE, and Gen-Z standards.