Need advice on complete homelab refresh


joltman

New Member
Nov 29, 2023
I've been running a home lab environment for over 15 years. I've had several devices in use during those years. However, it's been a while since I was in "hardware" mode, so I'm _very_ behind the times. I was hoping that my ADHD superpower (believing I can master a skill in 24 hours, but really can't) would help me, but there's a _ton_ I've missed out on in the past 8 years. Here's my basic breakdown of the current devices:

NAS01:
  • Chenbro NR40700 48 bay enclosure with backplane
  • SuperMicro Motherboard (don't remember model)
  • Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
  • 32GB RAM
  • 150+ TB of disks in various ZFS RAIDZ2 or MIRRORs

ESXi01 (primary):
  • Dell R430
  • Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
  • 96GB RAM
  • 1.25TB storage
  • 2 x 1Gb NICs
  • 1 x LSI 2308 Passthrough to VM for Tape Library access in Bareos Storage Daemon
ESXi02 (lowly secondary):
  • Lenovo Tiny
  • Intel(R) Core(TM) i5-4570T CPU @ 2.90GHz
  • 16GB RAM
  • 256GB storage
  • 1Gb NIC
Custom Desktop w/ Plex:
  • Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
  • 32GB RAM
  • 256GB ROOT
  • 512GB M2 NVME for /home/user
  • 2 x 256GB ZFS MIRROR for docker volume storage
  • 1 x 1Gb NIC
Custom Desktop w/ BlueIris:
  • Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
  • Supermicro X11SCZ-F
  • 8GB RAM
  • 256GB ROOT
  • 10TB camera recording storage
  • 1 x 1Gb NIC

Dell R210 II (pfSense CE 2.7.2 bare metal):
  • Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz
  • 8 GB RAM
  • 2 x 256GB SSD in ZFS MIRROR
  • 2 x 1Gb NIC (one WAN, one LAN)

Here are the docker workloads that I'm currently running, and a couple that I'd like to add.

  • Plex: running on an old 4000-series Intel. Will need 4K transcode options, host GPU??
  • Jellyfin: FUTURE. Will need 4K transcode options, host GPU??
  • Frigate: FUTURE. Can it share the GPU with the host and Plex/JF? Coral TPU?
  • Immich: FUTURE. Will need 4K transcode options, host GPU??
  • Jackett
  • 2 x Radarr
  • 2 x Sonarr
  • Lidarr
  • Readarr
  • Roon
  • Roon Extension Manager
  • LubeLogger
  • Bareos DB (MariaDB)
  • Bareos Director
  • Bareos WebUI
  • Bareos Storage Daemon: move this to TrueNAS Scale and use Incus. Install the LSI 2308 directly into NAS01 for tape library access.
  • Beets
  • Calibre-Web
  • Pingvin Share
  • smtpd
  • tftp-server
  • Transmission
  • UniFi??
  • vlmcsd
  • ZNC
  • Avahi for mDNS between VLANs: FUTURE
  • Minecraft Server: FUTURE
  • 2 x Syncthing: currently running on TrueNAS Core
  • Gitea: FUTURE
  • FreeRADIUS: FUTURE
  • Certificates: FUTURE


Here are the VMs I'm currently running:

  • pfSense: passthrough AES device?
  • Linux workstation to replace desktop w/ 3 monitors: another dedicated GPU?
  • Linux backup server: CrashPlan runs here
  • Home Assistant: will need PoE Z-Wave, PoE Matter
  • Windows 2016
  • Windows 2019: Active Directory
  • Home desktop replacement: FUTURE, dedicated GPU? This isn't a necessity.

It looks like a lot, but in reality I don't believe I have much workload to consider. All of these servers are connected via 1Gb Ethernet to a Cisco 3750 PoE+ switch stack with 1kW PSUs. That switch stack is running as an L3 switch with a transit network to pfSense, using OSPF to route. The switch is also an mDNS reflector for Roon, Roon endpoints, and other IoT devices on the various VLANs.

What are the issues?

I've moved my main home entertainment system to 4K. In the process of obtaining 4K Linux ISOs, I've had to limit access to my 4K library to internal users only due to horrible ISP upload. The current desktop runs Plex and isn't beefy enough to transcode my 4K Linux ISOs. This has caused some lower WAF when she tries to view a show/movie on a non-4K TV. I don't want to keep several copies of each Linux distro just to accommodate "ease of viewing".
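For context on why remote 4K hurts, here is a quick back-of-envelope comparison of typical stream bitrates against a ~20 Mbps upload (the bitrates below are rough assumptions, not measurements of any particular library):

```python
# Back-of-envelope: can a remote client direct-play 4K over a ~20 Mbps upload?
# The bitrates are rough, typical figures (assumptions), not measurements.
UPLOAD_MBPS = 20
STREAMS = {
    "4K HEVC remux":      60,  # Mbps, typical UHD Blu-ray remux
    "4K HEVC web encode": 25,  # Mbps
    "1080p transcode":     8,  # Mbps, roughly what Plex sends after transcoding
}

for name, mbps in STREAMS.items():
    # Keep ~20% headroom so one stream doesn't saturate the uplink.
    verdict = "fits" if mbps <= UPLOAD_MBPS * 0.8 else "does NOT fit"
    print(f"{name:20s} ~{mbps:2d} Mbps -> {verdict} in a {UPLOAD_MBPS} Mbps upload")
```

In other words, remote 4K only works if something transcodes it down, which is exactly what the current i7-4790K can't do in software.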

The other issue I have is power usage. The NAS and all the other systems are pulling a lot of power. I don't have an estimate at this time, as I've never really kept track of power usage. I'm sure the NAS is the most power-hungry due to all the drives in there, and those drives won't change. I can tell you we have a high power bill.
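As a rough way to turn wattage guesses into a monthly dollar figure, something like the sketch below works; every wattage and the rate are placeholder assumptions until something actually measures the load:

```python
# Rough monthly power-cost estimate. Every wattage and the rate are placeholder
# assumptions to be replaced with real measurements (Kill A Watt, PDU, UPS, etc.).
RATE_PER_KWH = 0.15  # $/kWh, substitute the local utility rate

devices_watts = {
    "NAS01 (48-bay chassis, spinning disks)": 350,
    "ESXi01 (Dell R430)":                     150,
    "ESXi02 (Lenovo Tiny)":                    20,
    "Plex desktop (i7-4790K)":                 60,
    "BlueIris desktop (i7-8700)":              50,
    "pfSense (Dell R210 II)":                  60,
    "Cisco 3750 PoE+ stack":                  150,
}

total_w = sum(devices_watts.values())
kwh_per_month = total_w * 24 * 30 / 1000
print(f"~{total_w} W continuous -> ~{kwh_per_month:.0f} kWh/month "
      f"-> ~${kwh_per_month * RATE_PER_KWH:.0f}/month at ${RATE_PER_KWH}/kWh")
```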

What is the solution?

If I can consolidate all these "hosting" servers into a smaller cluster (for HA purposes), I am hoping I can save a lot on power, heat, and noise. I am also hoping to upgrade the CPU/mobo/RAM in the NAS to accommodate the massive amount of storage I've grown into. Here's a diagram of a proposed solution:

[Diagram: proposed solution]

The first big change here is adding a new Layer 3 switch. This would be a 40Gb switch that can also do DHCP, including static reservations. However, it also needs to do some directed-broadcast magic as well; Roon requires this. I'm still in the early stages of homework on the switch, but I think it should be possible.

The next big change is the use of 3 new Proxmox hosts in a cluster with Ceph as the shared storage. This is also brand new to me. I'm hoping that this configuration will allow me to have high availability for LXCs and VMs (specifically pfSense and Home Assistant). I work from home full time, and the wife WFHs a few days a week, so uptime is critical.

I need help, please

Here's where I need a ton of help. I'm trying to decide if I've got the right idea behind a Proxmox cluster and what hardware I should get to accomplish my goals. Here are my thoughts right now:

NAS01:

This needs a new mobo/CPU/RAM. I'm thinking about:

Supermicro H12SSL-i (MBD-H12SSL-I-O)
AMD Epyc 7F52
Noctua CPU Cooler NH-D9 TR5-SP6
128GB ECC 2666 RAM
Mellanox MCX455A-ECAT ConnectX-4 EDR

PVE Hosts:

This is the area where I'm really not sure what to do. I think my hang-up is disk storage. In the diagram above, I've got HDDs in a Ceph pool. The thought was that if I have some large files to store (camera recordings, Pingvin Share files, temp downloads, etc.), I could do that easily on the HDD Ceph pool. This frees up "expensive" drive slots in the NAS. To accommodate these larger HDDs, I'll need a larger case. The larger case means I could also install the same Mellanox card as above and an Intel Arc A380 in each host, and hopefully the dual Coral TPU as well.

I was honestly just thinking about using the same Supermicro build above, but that comes to around $2500, and I'd need 3 for the cluster. I could really use some opinions on this host config, or on others that can accommodate 2 boot drives (SATA DOM), 2 x 4TB NVMe drives, and 2 or 4 16TB HDDs. I also need a case! The 45HomeLab HL15 case is $1100!! I was thinking about the MS-01, but I think the Intel hybrid (P/E-core) architecture, lack of space for HDDs, and no room for the Mellanox NIC make it a non-starter.

Network Switch:

Again, another area where I need to do a lot of homework. I think an L3 40Gb switch is totally doable; the open questions are the directed broadcast support and possibly any license changes needed to accommodate it.


I'm sure I'm overthinking this whole ordeal. I just don't know what to do about my Ceph pools for the VMs. I'd really appreciate anyone's input and experience on these topics. Thanks for reading!
 

Tech Junky

Active Member
Oct 26, 2023
That's a whole lot of clutter to deal with.

A380 GPU will take care of the 4K needs for $100.

I would collapse Plex/BI/NAS functions into a single box.

For the cooler just use a PA120 for $35

The NAS CPU doesn't need to be an Epyc even with combined functions, as they're not demanding. With video offload to the A380 it will sit even lower. When converting files using QSV, my 7900X uses less than 2 cores while processing the files.

I roll my box as an AIO setup including the router function and most of the time it's idle unless converting video files.

However, having the extra power when it's needed or wanted makes things more efficient when it comes to power use. As for the A380, I went with the Sparkle card because it's capped at PCIe slot power only, to make sure it wouldn't be burning a hole in my wallet each month.

https://www.aliexpress.com/item/3256808115656852.html -- 12 drives / $130 and fits ATX boards
 

joltman

New Member
Nov 29, 2023
Thank you for responding! For the NAS system, I thought that the Epyc would provide more PCIe lanes that would be useful for the HBAs for the SAS expander and the tape library. I thought about putting Plex on the NAS. If I did, I'd definitely need the extra PCIe lanes, wouldn't I (GPU, 2 x HBAs)?

Do you have any thoughts/recommendations on the Proxmox cluster? Thanks again!
 

jode

Member
Jul 27, 2021
I am interested in your quest as I am currently underway building a similar infrastructure.

Questions:
1. Why upgrade to 40G networking, when it has become a dead-end for upgradability and 100G is more modern and affordable in 2025? Look in this forum for relevant threads.
2. Why invest in Ceph and a NAS at the same time? Or is the goal to eliminate the NAS in favor of Ceph in the future (not with the specs listed for the PVE machines)? Proxmox is very capable of replicating VMs and LXCs between nodes which can provide very functional HA for a home lab.
3. Why do you think you need a Proxmox cluster when your current setup has been working fine without the HA provided by an (at least) three-node cluster? You'll need to be very careful not to exceed your current level of power usage with your plans. I'd start by investing in ways to measure and track your current power usage (e.g., rackmount UPSes are pretty good at that).

For reference (not as a recommendation), I am currently running a three node proxmox cluster on hacked video conferencing hw upgraded with 16GB RAM, nvme drives and Coral TPUs. The lot consumes ~35W total and runs a dozen LXCs in perfectly working HA configuration including a small nas (for personal files), frigate, gitlab, email server and a bunch of small nice-to-have home lab services I deem critical to my home lab. Ceph seems to be overkill for these poor dual core CPUs. My challenge is to downgrade or optimize the currently oversized hw running jellyfin - the existing cluster obviously isn't built to support jellyfin storage requirements.

I am currently working on extending the cluster (or building a second cluster) using more powerful hw (AMD 5900X, 128GB RAM, 30+TB NVMe storage, 40-56Gb networking) that I can keep turned off and only start when needed, due to power consumption that runs about 100W per node.
 

joltman

New Member
Nov 29, 2023
@jode Thank you for your response! Hopefully I can answer your questions.

1. Why upgrade to 40G networking, when it has become a dead-end for upgradability and 100G is more modern and affordable in 2025? Look in this forum for relevant threads.
I did some searching on 100Gb switches and found the Celestica Seastone DX010, but from what I could tell, setting them up for L3 switching and OSPF, let alone directed broadcast, seemed to be hit or miss. I couldn't find great documentation to follow. If you have a good source of documentation and/or documented configs, could you post them?

2. Why invest in Ceph and a NAS at the same time? Or is the goal to eliminate the NAS in favor of Ceph in the future (not with the specs listed for the PVE machines)? Proxmox is very capable of replicating VMs and LXCs between nodes which can provide very functional HA for a home lab.
The reason I am keeping the NAS around is that I've already got lots of storage on the NAS as it is now. If I were to move all that data to Ceph (which would be the next logical step), I'd need to buy 2 more of every hard drive that's in the NAS now to put in the Ceph/PVE nodes. And each node would need to hold those disks, which means they'd need big honkin' cases and PSUs. The disks would come to around $4500 for each node. At least, that's how I understand Ceph; I've never set it up, so I could very well be wrong. The idea for Ceph on the PVE nodes is simply for LXC/VM datastores and HA between the nodes.
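For what it's worth, the rough math behind the "2 more of every hard drive" estimate looks like this (the 150 TB figure comes from the NAS spec above; the erasure-coding profile is just an assumed comparison point):

```python
# Rough math behind "two more of every drive": raw disk needed to hold the
# NAS's ~150 TB on Ceph, comparing the default 3x replication against an
# assumed 4+2 erasure-coded pool.
USABLE_TB = 150

profiles = {
    "replica 3 (Proxmox/Ceph default)": 3.0,   # three full copies of every object
    "erasure coding k=4, m=2":          6 / 4, # 1.5x overhead, but wants 6+ hosts
}

for name, overhead in profiles.items():
    raw_tb = USABLE_TB * overhead
    print(f"{name:34s} -> ~{raw_tb:.0f} TB raw for {USABLE_TB} TB usable")
```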

3. Why do you think you need a proxmox cluster when your current setup has been working fine without the HA provided by a (at least) three-node cluster? You'll need to be very careful not to exceed your current level of power usage with your plans. I'd start investing in ways to measure and track your current power usage (e.g. rackmount UPS are pretty good at that)
I should probably figure out how to get power data from the PDUs that I have now. I thought a cluster was a better idea simply for pfSense/Plex/Home Assistant/Frigate high availability, but maybe I should slow down. I could just keep the bare-metal pfSense host I have and build a single PVE node. I guess I was enticed by the clustering options and keeping services up and running during PVE maintenance...
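If the PDUs are metered and on the network, pulling a load reading can be as simple as an SNMP query. A minimal sketch, where the IP, community string, and OID are all placeholders that depend on the PDU model:

```python
# Sketch: read the load from a metered PDU over SNMP using net-snmp's snmpget.
# The IP, community string, and OID are placeholders; the correct OID depends
# entirely on the PDU vendor/model (check its MIB).
import subprocess

PDU_HOST = "192.168.10.50"                    # placeholder management IP
COMMUNITY = "public"                          # placeholder read-only community
LOAD_OID = "1.3.6.1.4.1.318.1.1.12.1.16.0"    # example-only OID, not universal

result = subprocess.run(
    ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", PDU_HOST, LOAD_OID],
    capture_output=True, text=True, check=True,
)
print(f"PDU reported load: {result.stdout.strip()}")
```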
 

Tech Junky

Active Member
Oct 26, 2023
I thought that the Epyc would provide more PCIe lanes that would be useful for the HBAs for the SAS expander and the tape library. I thought about putting Plex on the NAS. If I did, I'd definitely need the extra PCIe lanes, wouldn't I (GPU, 2 x HBAs)?

Do you have any thoughts/recommendations on the Proxmox cluster? Thanks again!
Epyc would unlock more lanes and options, but for a file server it's not really needed when you pick the right board, one that doesn't toggle slot allotments when you use additional slots. I'm using a PG Lightning for this purpose; it maintains slot bandwidth. I'm about to swap it out for an MSI Carbon, though, to collapse some of the guts into the board itself, which should reduce the power draw by moving the TB ports to USB4 on the mobo, for instance. I'm also adding a dual 10GbE NIC to the mix due to WiFi 7, to prevent any bottlenecks. It's all about planning and research when it comes to this stuff.

I can't comment on Proxmox, though, as I haven't used it; I just run everything off Ubuntu instead as apps vs. bottles/containers. I've played around with the virtual stuff in the past but just don't feel like it's a benefit for what I'm doing. I can always spin up a VM within Linux if needed for testing in a matter of a couple of minutes.

As to the HBA issue, there's a dual-port OCuLink card that can split to x4/x4/x4/x4; just bump the GPU to slot 2 for a minor reduction in performance.
 

Greg_E

Active Member
Oct 10, 2024
You are going to set up this hypervisor system and still want DHCP and DNS handled from the switch? Just set up a VM to run those services. If you don't want to run the hypervisors all the time, then buy a mini PC to serve this role.

My only other suggestion would be to look at XCP-ng and Xen Orchestra (built from sources) before you commit to Proxmox; might as well test before you commit.
 

joltman

New Member
Nov 29, 2023
You are going to set up this hypervisor system and still want DHCP and DNS handled from the switch? Just set up a VM to run those services. If you don't want to run the hypervisors all the time, then buy a mini PC to serve this role.
@Greg_E Thanks for your response! I do want DHCP handled by the switch, the reason being that if the virtualization host goes down, I will still be able to get on my network. If the switch goes down, I'm having a _very_ bad day. DNS will continue to be handled by pfSense. I tried pfBlockerNG, and it did block ads on the devices on my network. However, my wife is one of those people who actually clicks on ad links in Google search, so I was always adding those URLs/sites to the whitelist. Eventually I just gave up because I didn't care to keep doing it.

My only other suggestion would be to look at XCP-ng and Xen Orchestra (built from sources) before you commit to Proxmox; might as well test before you commit.
I hadn't considered Xen at all. I'll have to look up the differences.

Do you have any advice on the hardware choices I've made above?

Epyc would unlock more lanes and options, but for a file server it's not really needed when you pick the right board, one that doesn't toggle slot allotments when you use additional slots. I'm using a PG Lightning for this purpose; it maintains slot bandwidth. I'm about to swap it out for an MSI Carbon, though, to collapse some of the guts into the board itself, which should reduce the power draw by moving the TB ports to USB4 on the mobo, for instance. I'm also adding a dual 10GbE NIC to the mix due to WiFi 7, to prevent any bottlenecks. It's all about planning and research when it comes to this stuff.
I just looked up this motherboard. Unless I'm mistaken, there is no IPMI on the board. That is a pretty nice feature to have and I've used it on my other servers here. Also, I really don't need to have WiFi onboard, as these systems will all be hardwired. I am interested in your "slot allotments" comment. I'm not sure I understand what adding a card would do in this case? Are you referring to PCIe lanes assigned to a slot? Or IOMMU groups changing? Thanks!
 

Greg_E

Active Member
Oct 10, 2024
Then I would handle dhcp on the router, but I'd still recommend doing both on a mini PC.

Tip to the wise, do not make the wife run her devices through your lab! Set up something simple for the rest of the people in your life to use. One of the reasons is that you are one drunk driver away from not being able to help them stay online. Yes it is an extra route to get your lab onto the web, but something that needs to be considered. And yes those drunks come out of nowhere at the most random of times.

The big difference between XCP-ng and Proxmox is that the enterprise features in Xen Orchestra have been there a long time; Proxmox just released their first version a short time ago. Proxmox does some things better, and some things worse. But with the storage the way you are going to run it, the natural fit would be XCP-ng with the VMs stored on NFS or iSCSI.

One big thing to note: XCP-ng does not support running Docker images directly, so you'd need to build a host VM for Docker. I think Proxmox allows Docker images to be run directly on the stack. Something you might want to verify.

And yes, IPMI can be very useful and something to look for when possible.
 

joltman

New Member
Nov 29, 2023
Then I would handle dhcp on the router, but I'd still recommend doing both on a mini PC.
I'd love to handle DHCP on pfSense, however they don't support dhcp-relay yet. It's something that has been requested _for years_ and they simply don't do it. I could move to OPNsense, but they lag behind pfSense in security features; pfSense seems to have a better track record. Keeping DHCP on the L3 switch is more ideal in this environment. If I had a PVE/Xen clustered environment, I'd definitely move DHCP to an LXC/VM.

Tip to the wise, do not make the wife run her devices through your lab! Set up something simple for the rest of the people in your life to use. One of the reasons is that you are one drunk driver away from not being able to help them stay online. Yes it is an extra route to get your lab onto the web, but something that needs to be considered. And yes those drunks come out of nowhere at the most random of times.
This is always something in the back of my mind. I just assume that she'd have to get Comcast on the phone and have them reset the modem to be a modem/firewall/AP and she'd just go from there. It's probably a good idea to write something up for her, though. Maybe have a device waiting in the wings for her to just plug in.
 

mattventura

Well-Known Member
Nov 9, 2022
I agree that PVE is the way to go given where VMware/Broadcom is headed. Ceph is also a good option; PVE has great support for Ceph, so it's very easy to set up and avoids a lot of the manual work you'd otherwise have to do.

For the PVE hosts, I'd start smaller and grow from there. The big thing to avoid is mixing Intel and AMD CPUs within the cluster, because VMs will likely run into issues with live migration. Specific recommendations depend heavily on whether or not you have a rack.

I would also advocate for a separate box for the router, though it can be virtualized. I run virtualized OpenWRT and it works great.
 

kapone

Well-Known Member
May 23, 2015
I've moved my main home entertainment system to 4K. In the process of obtaining 4K Linux ISOs, I've had to limit access to my 4K library to internal users only due to horrible ISP upload. The current desktop runs Plex and isn't beefy enough to transcode my 4K Linux ISOs. This has caused some lower WAF when she tries to view a show/movie on a non-4K TV. I don't want to keep several copies of each Linux distro just to accommodate "ease of viewing".
I don't understand this at all. Linux distros? 4K transcoding? What?

Edit: And please heed this advice. Don't do DHCP on the switch... If this is the type of infrastructure you're going to have, it's trivial to do a VM (or even better, a tiny bare metal machine) with Windows on it, that does DHCP/DNS (including DHCP relay and VLANs).
 

Sean Ho

seanho.com
Nov 19, 2019
Plex load can be significantly mitigated by keeping separate libraries for 4K vs HD, and limiting 4K only to clients that can direct-play it. Transcoding from an HD source can be done very cheaply and at low power/heat by a $70 TMM/uSFF box with a 7th-gen or later CPU, e.g., the venerable M720q Tiny. Mount the media over NFS or whatnot and use QSV to transcode. Gigabit would be fine unless you have a ton of simultaneous users.
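To sanity-check hardware transcoding on whichever box ends up doing this, a minimal QSV test driven from Python is sketched below (assumptions: an ffmpeg build with QSV enabled, a working /dev/dri render node, and placeholder file names):

```python
# Quick QSV sanity check: decode a 4K HEVC sample and encode a 1080p H.264
# version entirely on the GPU. Assumes an ffmpeg build with QSV enabled and a
# working /dev/dri render node; file names are placeholders.
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-hwaccel", "qsv", "-hwaccel_output_format", "qsv",
    "-c:v", "hevc_qsv",                 # hardware decode of the 4K HEVC source
    "-i", "sample-4k.mkv",              # placeholder input file
    "-vf", "scale_qsv=w=1920:h=1080",   # downscale on the GPU
    "-c:v", "h264_qsv", "-global_quality", "23",  # hardware 1080p encode
    "-c:a", "copy",
    "out-1080p.mkv",
]
subprocess.run(cmd, check=True)
```

If ffmpeg reports a speed well above 1x, the box should comfortably handle a 4K-to-1080p Plex/Jellyfin transcode.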
 

bwahaha

Active Member
Jun 9, 2023
>Chenbro NR40700
>Noctua CPU Cooler NH-D9 TR5-SP6

The Chenbro chassis is 4U, but the space between the CPU and the chassis lid looks like 3U, with the power supplies taking up the bottom 1U. Also, with that many hard drives I'd expect some solid positive pressure and airflow; maybe you won't need fans on the heatsink?
 

louie1961

Active Member
May 15, 2023
Seriously, why do you need an HA cluster? Unless you are doing this for learning purposes, I would probably ditch the cluster and the Ceph setup, especially since saving energy is a stated goal for your next setup. I would probably ditch the separate NAS box as well and just virtualize TrueNAS inside a Proxmox VM. Networking inside of Proxmox via bridges will be almost as fast as 40GbE networking in my experience. You could shrink this down to one well-designed server box and probably idle at 50-60 watts. I think you need to decide if HA is more important or if energy savings are more important, to be honest.
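On the bridge-throughput point, it's easy to verify on an existing Proxmox box by running iperf3 between two guests on the same bridge. A minimal sketch, assuming iperf3 is installed in both guests and the server IP is a placeholder:

```python
# Measure guest-to-guest throughput across a Proxmox bridge (e.g. vmbr0):
# run "iperf3 -s" in one VM/LXC, then run this from another. The IP is a placeholder.
import json
import subprocess

SERVER_IP = "10.0.0.11"  # placeholder: guest running "iperf3 -s"

out = subprocess.run(
    ["iperf3", "-c", SERVER_IP, "-t", "10", "-J"],  # 10-second test, JSON output
    capture_output=True, text=True, check=True,
)
report = json.loads(out.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
print(f"Bridge throughput: {gbps:.1f} Gbit/s")
```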
 

joltman

New Member
Nov 29, 2023
The Chenbro chassis is 4U, but the space between the CPU and the chassis lid looks like 3U, with the power supplies taking up the bottom 1U. Also, with that many hard drives I'd expect some solid positive pressure and airflow; maybe you won't need fans on the heatsink?
This is a great observation that I didn't even consider! If I remember correctly, the current CPU does have a heatsink with fan. I'll have to go take a look. It is a SuperMicro board though.

Seriously, why do you need an HA cluster? Unless you are doing this for learning purposes, I would probably ditch the cluster and the Ceph setup, especially since saving energy is a stated goal for your next setup. I would probably ditch the separate NAS box as well and just virtualize TrueNAS inside a Proxmox VM. Networking inside of Proxmox via bridges will be almost as fast as 40GbE networking in my experience. You could shrink this down to one well-designed server box and probably idle at 50-60 watts. I think you need to decide if HA is more important or if energy savings are more important, to be honest.
Ya know, I was thinking the same thing as I was writing out my first post; I just didn't want to admit that it was way overkill. I am going to scale back my plans. I'll need to come up with a new diagram, but here are my initial thoughts:

New L3 Core Switch:

This one is still up in the air for me. I know that 100Gb switches are "cheap" ($300 on eBay), but they are all very loud. I'd been looking at the Celestica Seastone DX010, but everything I've read so far leads me to believe that they're finicky, and I just need something that works. I also need something that can do directed IP broadcast to other subnets. My Roon audio application and its endpoints use directed IP broadcasts to announce themselves on the network; endpoints include the app on mobile devices (Android, iPhone, laptop). So I need to be able to take that traffic on certain ports and dump it over to the server VLAN. I don't think I can do that with an Avahi container, and from the reading I've done, I don't think even the Mellanox SX6036 (40Gb switch) does it. I think I'll have to post in the network section to find something that does. I would bet that the Cisco Nexus 9K does, but that is going to sound like a jet engine. Maybe udp-proxy-2020 can accomplish this?? This new switch would connect to my current 3750X stack, which will be downgraded to access-switch duty only.
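For reference, the core idea behind udp-proxy-2020 (or a directed-broadcast setup) can be sketched in a few lines: pick up the discovery broadcasts on one VLAN and re-emit them on the other. This is only an illustration; the Roon discovery port and all addresses below are assumptions, and real tools work at the packet level so the original source IP is preserved, which this plain-socket sketch does not do:

```python
# Minimal sketch of the udp-proxy-2020 idea: listen for Roon-style discovery
# broadcasts on one VLAN and re-emit them on another, from a host with a leg
# in both VLANs. Treat it as illustration only.
import socket

DISCOVERY_PORT = 9003                     # assumed Roon discovery port
CLIENT_VLAN_BCAST = "192.168.20.255"      # placeholder: VLAN with phones/endpoints
SERVER_VLAN_BCAST = "192.168.30.255"      # placeholder: VLAN with the Roon core
LOCAL_IPS = {"192.168.20.2", "192.168.30.2"}  # placeholder: this relay's own addresses

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", DISCOVERY_PORT))             # receive broadcasts arriving on this host

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    data, (src_ip, _src_port) = rx.recvfrom(65535)
    if src_ip in LOCAL_IPS:
        continue                          # ignore our own re-broadcasts to avoid a loop
    # Forward toward whichever VLAN the packet did not come from.
    target = SERVER_VLAN_BCAST if src_ip.startswith("192.168.20.") else CLIENT_VLAN_BCAST
    tx.sendto(data, (target, DISCOVERY_PORT))
```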

NAS01:

This will get a new mobo/CPU/RAM. For the most part, I'm of the opinion that critical services should have their own hardware. However, I think moving bareos-sd and the LSI 2308 SAS card to the NAS is the best idea, just due to how tape backups work in Bareos. If the tape library doesn't have direct access to the data, then it has to first copy the data to a local drive (in large chunks of several hundred gigs at a time) and then stream that directly to tape. It does this so that it doesn't "shoe-shine" the tape by writing a bit, stopping, rewinding a tad to find the EOF, then writing again. I also think enough people are saying it that I should do it: run a DHCP (and maybe DNS) container on the NAS. This would allow me to do all the DHCPing I want, with static reservations. I will say, I've had zero issues with DHCP on the Cisco switch; I've been doing that for years in a large environment with many remote locations and had no problems. Very stable.

Virtualization Platform (single host):

I think I will build this as well. There's no need for a cluster other than to look cool. The question becomes: what will run on the host? Most of my Docker containers, for sure. I think I'll run another DHCP container on this host as well so I don't lose DHCP during a NAS maintenance event, and maybe DNS too. What this hardware should be, I'm not sure. I also agree that the CPU vendor should be the same between NAS01 and this host. I do need to check out the differences between XCP-ng and PVE; I will be honest here and say that YouTube influencers have put PVE in my head for years.

Applications:

There are a few containers/apps that could benefit from living directly on the NAS: Plex, Frigate, Pingvin Share, and the *arrs. All of these either require access to the data on the NAS or could benefit from its storage expandability (Frigate). The rest will live on the virtualization host as containers or full-blown VMs. Maybe I need to increase from 2 NVMe drives to 4? I still want dedicated boot SATA DOMs in a ZFS mirror.

pfSense (single host):

Again, everyone has more common sense than I do! It's probably time to decom the R210 II; it's long in the tooth and power-hungry. However, I'd like a replacement that has IPMI. My current Comcast internet plan is greater than 1Gb down, but only 20Mb up. I really wish something else was available in my area, but it's just not. I'd want a 10Gb NIC with a copper RJ45 SFP+ module for the WAN (being able to negotiate down to 2.5Gb would be nice too), and ideally a 40Gb NIC for the LAN, though I'd settle for another 10Gb SFP+. If I did 10Gb, I'd need a QSFP breakout cable to 4x10Gb.


OK, I'm going to ponder this for a while. Then come up with another diagram. If anyone has any additional thoughts, I'd love to hear them! Thank you!