So to further qualify my original question on connectivity: what should I be using between the rack devices to maximize throughput in a cost-effective manner? If it is reasonably affordable, I would like max speed (40Gbps?) between the two 6610s in particular. Then I have my ESXi host and my pfSense box. I'm a "buy once, cry once" kind of guy, so I am just trying to future-proof this as much as possible. To frame this: if a 12TB HDD is 40% more expensive than the 8TB, I am going with the 8TB. Hopefully that paints a clearer picture of what I am going after here. Thanks in advance for any constructive input.

There are a number of ways to go about this:
- Stack using two of the 40Gbps ports on the back and you end up with 80Gbps of bandwidth between the switches and 32x 10Gbps ports for devices.
- Stack using the two 4x10G ports on the back and you end up with 80Gbps of bandwidth between the switches, plus 2x 40Gbps ports and 16x 10Gbps ports for devices.
- Stack using all four 40Gbps ports on the back and you end up with 160Gbps of bandwidth between the switches and 16x 10Gbps ports for devices.
- Stack using a single 40Gbps port... you get the idea.

(I'm not counting the 1Gbps ports; they are what they are.)
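To make the trade-off easier to eyeball, here is a quick sketch that just tallies the options above. The numbers are copied straight from the list (stack bandwidth = number of stacking links x link speed; leftover port counts are for the two-switch stack as a whole), so treat it as back-of-the-napkin arithmetic, not a statement about what the 6610 hardware supports:

```python
# Rough tally of the stacking options listed above for a two-ICX-6610 stack.
# Numbers mirror the bullet list; they are illustrative, not a hardware spec.
OPTIONS = {
    "2x 40G stack":   {"stack_gbps": 2 * 40, "free_40g": 0, "free_10g": 32},
    "2x 4x10G stack": {"stack_gbps": 8 * 10, "free_40g": 2, "free_10g": 16},
    "4x 40G stack":   {"stack_gbps": 4 * 40, "free_40g": 0, "free_10g": 16},
}

for name, o in OPTIONS.items():
    print(f"{name:14s} stack={o['stack_gbps']:>3} Gbps, "
          f"free ports: {o['free_40g']}x 40G + {o['free_10g']}x 10G")
```

The pattern is the usual one: every rear port you burn on stacking is a port you can't hand to the ESXi host or pfSense, so more inter-switch bandwidth always costs device-facing ports.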