That's a great question. I have been wondering that for a while; at that point, why not just use the 4x 10Gb module instead? Maybe the single 40Gb module is cheaper.
I was asking because a lot of used ones come with 2x 40G modules in the back. Breakout cables are cheaper than switching out the modules if they're already there and capable.
As far as I remember, QSFP+ is compatible with QSFP28 - switches will just not communicate at the advertised speed. The page below recommends a QSFP28 cable for the MCX4131A:

Compatible Optical Transceivers | Fully Tested | TXO - www.txo-optics.com

I was referring to Direct Attach Cables - the link above is for optics. I'm guessing DAC cables have EEPROMs, and maybe the card doesn't like the particular EEPROM?
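If you want to see exactly what the card thinks is plugged in (DAC or optic), most Linux drivers will dump the module EEPROM via ethtool. Here's a minimal Python sketch of that check - the interface name is a placeholder, and it assumes ethtool is installed and the NIC driver supports `-m`:

```python
#!/usr/bin/env python3
"""Print the EEPROM fields a NIC reports for its QSFP module/DAC.

Useful when a card rejects a cable: the identifier byte tells you
QSFP+ (0x0D) vs QSFP28 (0x11) per SFF-8024, and the vendor strings
are what compatibility whitelists typically match against.
Run as root; 'enp1s0' is just an example interface name.
"""
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "enp1s0"
fields = ("Identifier", "Vendor name", "Vendor PN", "Transceiver")

try:
    out = subprocess.run(
        ["ethtool", "-m", iface],
        capture_output=True, text=True, check=True,
    ).stdout
except subprocess.CalledProcessError as err:
    sys.exit(f"ethtool -m failed for {iface}: {err.stderr.strip()}")

for line in out.splitlines():
    if line.strip().startswith(fields):
        print(line.strip())
```

If the vendor name/PN on a DAC isn't one the card's firmware likes, that would explain the behavior above.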
Separate switches for core/access are mostly related to scale. If you need 1000 access switch ports, you don't want to scale your core switch(es) to that level, as it'll be more expensive. In a situation like that, access switches make sense.

Ok.. So using the Brocade ICX-6610 in a stack configuration as a core/distro/access switch is fine in a home and small business setup? That is the setup I'm currently running today.
For the kind of stuff we do, collapsing everything into the core switch(es) is completely practical. We ain't runnin' AWS from our garage, are we?
I run a small-ish business from home as well, and I don't even use a stack, just a single ICX-6610. I have a cold standby in case the switch dies, but I don't need the extra ports, so why burn electrons?
Just wanted to conform to best practice based on the article.
Ok.. So it looks like I don't need to have separate switches for Layer 2 and Layer 3. The ICX-6610 would be fine in a collapsed core/distro/access setup.
My business can certainly withstand the downtime it'd take to replace the switch with the cold standby (the configuration is backed up automatically, so it is a simple matter of de-racking/racking and connecting cables). Switches rarely die.
If your business absolutely must have the uptime, then yes, a stack is certainly a must.
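For reference, the collapsed L2/L3 part is just VLANs with ve (virtual routing) interfaces on the one switch. Here's a minimal sketch of pushing that with Netmiko's FastIron driver - the IP, credentials, VLAN ID, ports and addresses are made-up examples, not anyone's actual config:

```python
#!/usr/bin/env python3
"""Push a minimal collapsed core/distro/access config to an ICX-6610.

Sketch only: assumes the switch runs the FastIron router image so it
can route between VLANs itself. Host, credentials, VLAN IDs, ports
and addresses are placeholder examples - substitute your own.
Requires: pip install netmiko
"""
from netmiko import ConnectHandler

switch = {
    "device_type": "ruckus_fastiron",  # Netmiko's FastIron/ICX driver
    "host": "192.0.2.10",              # example management IP
    "username": "admin",
    "password": "changeme",
}

# One VLAN per segment, each with a ve (virtual interface), so the
# same box does L2 switching and L3 routing - no separate core.
config = [
    "vlan 10 name servers by port",
    " tagged ethernet 1/1/1 to 1/1/4",
    " router-interface ve 10",
    "exit",
    "interface ve 10",
    " ip address 10.0.10.1 255.255.255.0",
    "exit",
]

with ConnectHandler(**switch) as conn:
    print(conn.send_config_set(config))
    conn.save_config()  # write memory
```

Running the router image is what lets the one box route between its own VLANs, which is the whole "collapsed core" trick.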
What are you using in your compute and storage stack? Are you using a firewall like pfSense as well?

pfSense = yup.
Nope. That's idle power.

Wow, some engineers must have stock in power companies - not very efficient.
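Idle watts do turn into real money over a year. Quick arithmetic - the draw and tariff below are placeholder numbers, not measurements from this thread:

```python
# Back-of-the-envelope cost of an always-on switch.
# Both inputs are made-up examples - plug in your own numbers.
idle_watts = 120          # hypothetical idle draw
price_per_kwh = 0.15      # hypothetical electricity tariff, $/kWh

kwh_per_year = idle_watts / 1000 * 24 * 365
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * price_per_kwh:.0f}/yr")
```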
Are you using the free version of Starwind? I used Starwind 7 or 8 years ago.
Funny you bring it up... I'm actually in the middle of reconfiguring my network/stack as we speak, mostly related to consolidation and segregation. And I wasn't running a physical DC up until now, which creates some interesting DNS issues in a virtualized environment... Anyway, my new "stack" will be as follows:
- A physical domain controller
- ICX-6610 as the core/everything switch (with specific DNS things tied to the DC, in terms of using hostnames in ACLs and what not)
- A pair of bare metal Windows servers with Starwind in an HA config as the SAN (everything below this runs off of the SAN, no local storage at all)
- A lightweight virtual server to host the core things (pfSense, nginx, second DC, WSUS, vCenter, etc.)
- As many compute servers as I need. Currently at 7, but will be consolidating them into 3-ish in the next few weeks.
I'm still vacillating between keeping pfSense virtual and moving it to bare metal. I have no performance issues with it being virtual per se.
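Since everything below the SAN layer depends on the SAN (and on the DC for DNS), a startup dependency check can save some head-scratching after a power event. A hypothetical sketch - the hostnames, addresses, and the port probes are my own illustration, not details from this setup:

```python
#!/usr/bin/env python3
"""Check the 'everything runs off the SAN' dependency chain on boot.

Hypothetical illustration for a stack like the one above: verify the
physical DC answers on the DNS port, then that the Starwind SAN nodes
accept iSCSI connections, before powering on compute. All names and
addresses are placeholders.
"""
import socket

DC_DNS = "192.0.2.1"                                    # physical DC (example)
SAN_NODES = ["san-a.example.lan", "san-b.example.lan"]  # Starwind HA pair
ISCSI_PORT = 3260                                       # standard iSCSI target port

def port_up(host: str, port: int) -> bool:
    # Cheap reachability probe: can we open a TCP connection?
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

if not port_up(DC_DNS, 53):
    raise SystemExit("DC/DNS not reachable - fix this before anything else")

for node in SAN_NODES:
    state = "up" if port_up(node, ISCSI_PORT) else "DOWN"
    print(f"{node}: iSCSI {state}")
```

The ordering mirrors the stack: DNS first (the SAN hostnames resolve through the DC), then the SAN, then everything that boots from it.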