Hardware Behind the Dell EMC PowerEdge MX Innovation

Discussion in 'STH Main Site Posts' started by Patrick Kennedy, Aug 30, 2018.

  1. #1
  2. rune-san

    rune-san Member

    Joined:
    Feb 7, 2014
    Messages:
    78
    Likes Received:
    15
    Wanted to note a couple more things that I thought were pretty cool:

    The compute sleds are persistent memory compatible. You lose 2 drive slots where the battery is installed in the front.

    The Storage sled is dual-port SAS, and because of the midplane-less design you are NOT limited to provisioning storage to the adjacent blade. You can provision storage in pieces to any blade with a PERC RAID controller installed. There is also a SAS expander built into the Storage sled, allowing you to coalesce multiple Storage sleds into a unified storage module. You can also do cool things like building an HA active/passive design with the compute sleds, since the sleds are dual-port and not tied to the adjacent slot.
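    To make that drive-level mapping concrete, here is a minimal sketch of the idea in Python. This is a hypothetical model for illustration only - the class and function names (ComputeSled, StorageSled, assign_drives) are invented here and are not Dell's actual management API; the drive counts are assumptions.

    ```python
    # Hypothetical model of drive-level provisioning in an MX-style chassis.
    # All names and counts here are illustrative, not Dell's API.

    class ComputeSled:
        def __init__(self, slot, has_perc):
            self.slot = slot
            self.has_perc = has_perc      # PERC RAID controller installed?
            self.drives = []              # (storage_slot, bay) pairs mapped in

    class StorageSled:
        def __init__(self, slot, n_drives=16):
            self.slot = slot
            self.free = list(range(n_drives))   # unassigned drive bays

    def assign_drives(storage, compute, count):
        """Map `count` drives from a storage sled to a compute sled with a PERC.

        Because the sled is dual-port SAS, the target need not be the
        adjacent slot -- any PERC-equipped sled in the chassis qualifies.
        """
        if not compute.has_perc:
            raise ValueError(f"sled {compute.slot} has no PERC controller")
        if count > len(storage.free):
            raise ValueError("not enough free drives")
        for _ in range(count):
            compute.drives.append((storage.slot, storage.free.pop(0)))

    sled_a = ComputeSled(slot=1, has_perc=True)
    sled_b = ComputeSled(slot=6, has_perc=True)   # deliberately non-adjacent
    storage = StorageSled(slot=7)

    assign_drives(storage, sled_a, 4)
    assign_drives(storage, sled_b, 8)
    print(len(storage.free))   # 4 bays of 16 still unassigned
    ```

    The point of the sketch is the lack of an adjacency constraint: slot 6 draws drives from the slot-7 storage sled just as easily as slot 1 does.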

    I'm trying to read more into HPE's counter that the system is not "truly composable" vs. their own Synergy solution. I really think this new chassis is forward-thinking and a potentially dynamite solution.
     
    #2
  3. WANg

    WANg Active Member

    Joined:
    Jun 10, 2018
    Messages:
    492
    Likes Received:
    184
    Hm. If there's no midplane, what's the actual interconnect method to "stitch" the machine together? Something crazy like 100GbE with RoCE/iWARP?
     
    #3
    Last edited: Aug 30, 2018
  4. rune-san

    rune-san Member

    Joined:
    Feb 7, 2014
    Messages:
    78
    Likes Received:
    15
    It's worth noting that there actually *are* still mid-planes in the system. There's a power plane to tie the PSUs together, and there's a small mid-plane for the storage fabric (SAS and FC).

    The actual stitching is on the Ethernet side of the house. The fabric is composed of two A switch modules and two B switch modules, with no midplane between them and the blades. Today, that's four 25GbE ports from each blade with RoCE support. Later on, that could be 100GbE, 400GbE, and possibly Gen-Z protocols. Without a mid-plane, replacing the fabric switches lets you take immediate advantage of new technologies on every supporting compute sled you install.
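    The per-blade numbers above work out to simple arithmetic. A quick sketch (port counts from the post; the even A/B split is an assumption based on the two-A/two-B module layout):

    ```python
    # Back-of-envelope fabric math for the blade connectivity described above.

    PORTS_PER_BLADE = 4     # 25GbE ports per blade (assumed 2 to fabric A, 2 to B)
    PORT_SPEED_GBPS = 25

    aggregate = PORTS_PER_BLADE * PORT_SPEED_GBPS
    per_fabric = aggregate // 2    # bandwidth left if one whole fabric (A or B) fails

    print(aggregate)    # 100 Gbps aggregate per blade
    print(per_fabric)   # 50 Gbps surviving a full A- or B-side failure
    ```

    So even today's 25GbE generation gives each blade 100 Gbps aggregate, with half of that surviving the loss of an entire switch fabric.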
     
    #4
  5. WANg

    WANg Active Member

    Joined:
    Jun 10, 2018
    Messages:
    492
    Likes Received:
    184
    Huh. So there is a power distribution midplane and one for the storage. That's two smaller things to worry about instead of one - any guesses as to whether they are easier to get to than the current "unrack-teardown-and-swap" midplane setup on the M1000e enclosures? Because if it still requires a derack-and-swap, that renders the whole exercise rather moot.

    The interconnect is now theoretically upgradeable by swapping out the interconnect switch, huh. Wonder if that makes the enclosure more expensive on the initial spend, and how much of a limiting factor is the actual electrical/data connectivity in the chassis wiring that switch to the blades...

    Anyone have access to the service manual for the PowerEdge MX chassis? I kinda want to take a look at the service procedures and see how the rank-and-file will think about it...
     
    #5
  6. rune-san

    rune-san Member

    Joined:
    Feb 7, 2014
    Messages:
    78
    Likes Received:
    15
    The difference is that Dell's core focus (and the industry's in general) no longer revolves around a mid-plane. Composable environments leverage file-based and object-based storage technologies. Fibre Channel and SAS are still needed to support a great number of today's infrastructures, but FC has remained relatively flat or has decreased in most markets, while file-based NAS services continue to climb steadily in offerings. Because NAS and object-based storage ride the wave of aggressive growth in the Ethernet market, they can take advantage of converged infrastructure and speeds that completely overshadow even the most expensive FC standards. Combined with the port-based licensing model still common in the block realm, it gets more and more difficult to recommend block-based storage methodologies.

    Power Planes are almost always going to be a SPOF. I'm not aware of a design out there where it isn't. But a Power plane failing is also extremely rare (I've read about it, but I've never seen one happen).

    The storage mid-plane could fail and will also limit upgradeability, but the core of this design is not built around it; it is built around scaling the Ethernet, RoCE, and Gen-Z standards.
     
    #6