491 pages. And the answer is on page 1! Thanks. I'm fairly sure there is no way to "stack" the ICX 6610 with the SX6036 (that would be REALLY cool, by the way!), so I was trying to find a way to use the other two QSFP ports to link the two switches together at 40GbE.
I might need to revert...
Sorry for replying to such an old post. I currently have two ICX 6610s set up following your post, so 1/2/1 and 1/2/6 (garage) stack with 2/2/1 and 2/2/6 (server rack), and I have been using the remaining rear ports broken out into eight 10GbE ports for my servers in the rack. And it has worked...
Yeah. Out of space to store it and I wouldn't be using it again. After checking the actual sell prices on eBay plus cost of packing materials plus cost of shipping, it just wasn't worth keeping.
I thought of posting a "come and get it!" note on here, but unfortunately, I hit the "this crap...
I finally made the call last weekend to e-waste a bunch of old server "stuff" that was cluttering up my SMALL rack and hobby room. Three old Dell servers (R610, R720 and R630) along with my old Aruba switches (two 48-port S2500s, a 24-port S2500 and a 48-port S3500), plus some other misc...
Let's make sure we are talking about the same ports. The two management ports and the console port make a triangle on the right side of the switch. mgmt0 is on top, directly above mgmt1, both RJ45s. The console (serial) port is to the RIGHT of the two management ports and it too uses an RJ45...
I've been working on getting a Mellanox SX6036 (blue front) up and running and ran into similar problems. When you say the switch has no IP address, I'm assuming (that word...) that you really mean no IP address THAT YOU KNOW, i.e., the original owners had assigned it a static IP address. I...
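If that's the case, the usual approach is to get on the serial console and set a known management address directly, something along these lines (the address/netmask are just placeholders, and the MLNX-OS syntax is from memory, so double-check it against your release):

    enable
    configure terminal
    no interface mgmt0 dhcp
    interface mgmt0 ip address 192.168.1.50 255.255.255.0
    show interfaces mgmt0
    configuration write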
The nodes I have are technically XC6420s, so they originally came with Nutanix. I plan to test them on the current Nutanix as well. My brother used a Nutanix cluster at a hospital data center and was blown away by just how fast that thing could sling bits from one node to another.
At this...
Crude... ;) I guess I am going to reconfigure how I have ports distributed to the nodes, as it does appear I can bond 1/2/x ports together or 1/3/x ports together; I just can't bond a 1/2/x port with a 1/3/x port.
Thanks for the definitive answer!
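For what it's worth, the reconfigured LAGs will look roughly like this, with both members on the same module (the name and ID are arbitrary, and the syntax is from memory of FastIron 08.0.x, so double-check against your release):

    lag "node1-lag" dynamic id 1
     ports ethernet 1/2/2 ethernet 1/2/3
     primary-port 1/2/2
     deploy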
On the ICX 6610 with the two rear QSFP ports broken out to 10Gb ports, can you create a LAG that uses one of the rear ports (say 1/2/4) and one of the front 10Gb ports (say 1/3/2) that is also configured for 10Gb?
I'm getting "Error - Trunk port 2/2/4 and 2/3/2 do not have same default port speeds"...
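For reference, what I tried was roughly this (the LAG name and ID are arbitrary); the error shows up as soon as the second port is added:

    lag "mixed" static id 2
     ports ethernet 2/2/4 ethernet 2/3/2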
Well. That was an easy fix. I found myself alone at home this afternoon, so I rebooted/reloaded the switches. Sure enough, when they came back up, all the ports connected and appear to be happy.
I hope it stays that way!
As verification, I swapped the Dell SFP+ transceivers that would have shipped with this server (new in box) into the X710-DA2 cards. Exact same results: all the ports that were up before are still up, and all the ports that were down are still down. I then swapped in a new MPO breakout cable. Same results...
Oh, this brings up one other question: the output above states "Not member of any configured trunks". I thought a "trunk" was created simply by assigning multiple tagged VLANs to an interface? I wasn't aware you could create a "trunk" object and use that on the ICX 6610...
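To be clear, all I have been doing so far is tagging the VLANs onto the interfaces, roughly like this (the VLAN IDs and names here are just examples):

    vlan 10 name mgmt by port
     tagged ethernet 1/3/2
    vlan 20 name vm-traffic by port
     tagged ethernet 1/3/2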
Like usual, this one has me scratching my head a bit. I have a four-node C6400 server. Each node has an Intel X710-DA2 dual 10Gb SFP+ card (as installed from the factory per Dell). My switch is an ICX 6610, with another ICX 6610 in the garage, and they are stacked with the two stacking ports at...
My C6400 has the 2000-watt supplies as well. I only plan on bringing in one 240V circuit to a dual receptacle. While I know that having both PSUs plugged into a single circuit does reduce reliability, well, I'm not running a hospital here. More than likely, anything that takes down the...
I had eight of the Intel F8N24's delivered Monday (I had already ordered them). Needless to say, everyone is happy again.
I'm in a bit of an "interesting" position right now: I don't have sufficient power to run this thing along with my existing home lab "stuff" (R740 server, NetApp disk shelf...
Anyone else have this issue? When I try to use mlxburn to create the .bin (with "-wrimage"), it gives me this error: "Error: Image generation tool is missing. Exiting...".
I'm obviously missing something, but I'm not sure what! Any suggestions? I'm running the command on Ubuntu Server 24.04...
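For reference, the command looks roughly like this (the .mlx and .ini names are placeholders for whatever firmware and board config file you are feeding it):

    # generate a flashable image file instead of burning directly to the switch
    mlxburn -fw fw-sx.mlx -conf MSX6036.ini -wrimage sx6036_custom.bin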
I've got a ConnectX-3 dual 40Gb card in each node as well. The OM3 "cables" for them literally came in this afternoon. The plan was to play with hyperconverged stuff through Proxmox and Nutanix, using the 40Gb link on a direct private network for the HC "stuff" and the 10Gb ports for VM...
So. I found a forum reply on Dell that basically said the issue with the fan speeds was that the X710-DA2 networking cards (factory installed) were not reporting temperatures correctly, so iDRAC was taking steps. They recommended using a slightly older (last 6.x versus the newer 7.0.x) iDRAC...
One of the first things I did (to try to solve the fan speed issue) was update all the firmware to the latest and greatest, so the nodes are all on the latest release for iDRAC 9. Node 1 is Dell service tag 91HSCS2.
I'll dig into the ipmitool info when I get home.
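In the meantime, the plan is just to pull the sensor readings over IPMI and see what the cards are (or aren't) reporting, something like this (the iDRAC IP and credentials are placeholders):

    # fan and temperature sensor readings from the node's iDRAC
    ipmitool -I lanplus -H 192.168.1.120 -U root -P <password> sdr type fan
    ipmitool -I lanplus -H 192.168.1.120 -U root -P <password> sdr type temperature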
Thanks!