Hi all
Before I start: thanks to all for the great info I've gotten from here, but now my first post is in order.
My Current homelab Setup:
- 2 x Stacked SG500-28MPP
- 26 x 1000Base-T PoE+ powered ports for the house cabling
- 4 x 1000Base-T -> Server 1 (Windows Server Management, VMs etc.)
- 4 x 1000Base-T -> Server 2 (Windows Server Management, VMs etc.)
- 4 x 1000Base-T -> Server 3 (Windows Server Management, VMs etc.)
Last month I upgraded all 3 servers with dual-port ConnectX-3 Pro VPI cards, direct-attached to each other with 40Gbit DAC cables in ETH mode, mainly for SMB Direct.
Because they are only direct-attached, I'm currently limited to using those babies for migration and cluster communication.
My thoughts and ideas
Now I want to "simplify" the setup with a 40Gbit switch that I can "downlink" to the rest of the network, and to have the option of upgrading some or all house cabling runs to 10Gbit copper.
I would like to use the 40Gbit as a converged network, to also share the storage with the rest of the network at more than 1Gbit, transparently.
So I need a switch to interconnect the 40Gbit links with my network while keeping the RDMA feature for SMB Direct.
The new setup should therefore contain a core switch with at least 6 QSFP+ ports and DCB/PFC capability.
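For context, keeping RDMA (RoCE) working through a switch usually means enabling PFC for the lossless traffic class end to end, on every port that carries the SMB Direct traffic. A rough sketch of what that might look like per interface on an Arista EOS switch, assuming the SMB Direct traffic is tagged with priority 3 (this is only an illustration from memory; verify the exact commands against the EOS documentation for your release):

```
interface Ethernet49/1
   priority-flow-control mode on
   priority-flow-control priority 3 no-drop
```

The same priority would also need to be configured on the ConnectX-3 side so both ends agree on which class is lossless.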
House cabling in general stays at 1Gbit copper, but I want the option to upgrade some runs to 10Gbit copper (e.g. to the office).
I'd also like the option of PoE on all house cabling connections, because I love that I can just hook up those cheap Netgear GS105PE switches whenever I need a few more ports somewhere in the house for TV or other boxes.
Currently I think I'll have to go for at least two new switches anyway.
One core switch for the servers, with 6 QSFP+ ports plus some SFP+ for uplinking the "house" switch.
For the "house" switch it gets a bit more complicated: 1/10Gbit copper PoE+ with at least 32 ports (or even 48 to have some spares), uplinked via SFP+.
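To sanity-check the uplink sizing for a switch like that, a quick back-of-the-envelope oversubscription calculation helps; the port counts and the 2x 10Gbit SFP+ uplinks below are just assumed numbers to plug your own values into:

```python
def oversubscription(edge_gbit: float, uplink_gbit: float) -> float:
    """Ratio of total edge bandwidth to uplink bandwidth."""
    return edge_gbit / uplink_gbit

# Assumed layout: 46 access ports at 1Gbit, 2 upgraded to 10Gbit,
# and 2x 10Gbit SFP+ uplinks to the core switch.
edge = 46 * 1 + 2 * 10   # 66 Gbit of edge bandwidth
uplink = 2 * 10          # 20 Gbit of uplink bandwidth

print(f"{edge} Gbit edge over {uplink} Gbit uplink -> "
      f"{oversubscription(edge, uplink):.1f}:1 oversubscription")
```

Around 3:1 is comfortable for a home access layer, so a single pair of SFP+ uplinks would not be the bottleneck here.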
The switches would sit in a rack in the basement, so they don't have to be silent, but I'd still like to keep the noise as low as possible. This wouldn't be a show stopper as long as I can't hear them on the next floor with one door in between.
Power consumption should be as low as possible; currently my whole rack, with 3 low-power servers and the switches, idles at around 300W (at least it did before the ConnectX cards; I should probably measure it again some time).
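Idle draw adds up over a year, which is why it matters to me. A quick sketch of the math (the 0.30 EUR/kWh tariff is just an assumption to swap for your own):

```python
def yearly_kwh(watts: float) -> float:
    """kWh consumed per year at a constant power draw."""
    return watts * 24 * 365 / 1000

def yearly_cost_eur(watts: float, eur_per_kwh: float = 0.30) -> float:
    """Yearly electricity cost at the given tariff (assumed default)."""
    return yearly_kwh(watts) * eur_per_kwh

idle_w = 300  # measured rack idle, before the ConnectX cards
print(f"{idle_w} W idle -> {yearly_kwh(idle_w):.0f} kWh/year, "
      f"~{yearly_cost_eur(idle_w):.0f} EUR/year at 0.30 EUR/kWh")
```

So every extra 50W of switch idle draw is roughly 130 EUR/year at that tariff.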
Since it's a homelab, I'm fine with used hardware.
Currently I'm a bit stuck because of the "house" switch.
I found the Juniper EX4300M, which is probably the best match for my use case as the "house" switch.
But the price is astronomical for a home network.
I haven't found anything affordable that covers everything I need in one switch...
For the core switch I'm currently thinking of a used Arista DCS-7050QX-32S; they are available at a reasonable price, and I could integrate it directly into the network.
My budget is flexible if I find the perfect match, but I think there must be something below 2-3k for my use case.
I'm interested in any suggestions about the whole network setup, but I specifically have these questions too:
- Does it make sense to get two DCS-7050QX-32S, for availability and for testing in a homelab? I'm thinking of this switch in particular because, as I read on STH, you can do almost anything with them.
- Should I stop looking for one "house" switch that does everything, and instead keep the SG500 for 1Gbit and PoE and buy a cheap 10Gbit switch for my planned partial upgrade?
Sorry for the long text, but I tried to give some context for hopefully better recommendations.