Yep: [Imgur link]
As they say in the business: wrong hole.

Why on earth should a link light come on if a serial converter is connected to an eth port? Hmmm... that doesn't sound right to me at all (but I don't want to try with my adapter, sorry)...
Because a link light will come on with a loopback, which a serial adapter will certainly be able to provide. The problem is that serial isn't really pulling/using power at the same level Ethernet is, so yeah, that could've fused the thing.
Err... nope... or at least, I don't think so... The Ethernet link will come up if the board detects a signal on the cable, but looking at the pinout for the serial-to-RJ45 adapter, TX and RX are on pins 3 and 6, where the green/green-stripe pair would be in T568B... and they are isolated from each other (the TX switch side is terminated on the base of a mosfet, either in the serial bridge chip or in a buffer; in both cases it is seen as a capacitance to ground, so it seems strange to me that an Ethernet PHY could detect it as a remote peer...).
It works both ways too; I've personally seen a console port on a 6610-48 that very likely had an Ethernet run plugged into it, because the port was dead. The switch worked fine, and the guy I sold it to was able to get a serial connection rigged up (I didn't have the knowledge for it) and confirm it was working and usable.

Very possible; as stated, both signal pairs of a 100Base-T link will hit both output pins of the switch's serial port, or one input pin and one output pin. But with Ethernet signalling the voltage is applied only between the wires of each individual pair; there is no voltage between a pair and "ground" (in fact, unless shielded cable is used, there's no common ground at all between two devices connected by an Ethernet cable!).
PoE: Stack unit 1 PS 1, Internal Power supply with 370000 mwatts capacity is up
PoE Info: Adding new 54V capacity of 370000 mW, total capacity is 370000, total free capacity is 370000
PoE Info: PoE module 1 of Unit 1 on ports 1/1/1 to 1/1/24 detected. Initializing....
PoE Event Trace Log Buffer for 2000 log entries allocated
PoE Event Trace Logging enabled...
PoE Error: Device 0 failed to start on PoE module.
PoE Error: Device 1 failed to start on PoE module.
Resetting module in slot 1 again to recover from dev fault
PoE Info: Hard Resetting in slot 1....
PoE Info: Resetting module in slot 1....completed.
PoE Error: Device 0 failed to start on PoE module.
PoE Error: Device 1 failed to start on PoE module.
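If anyone hits the same dev-fault loop: assuming this is a Brocade/Ruckus FastIron box, which is what the log format suggests, the PoE controller state can be checked from the CLI before and after the module reset. These are standard FastIron commands, though the exact output varies by release:

show inline power
show inline power detail

If both devices keep failing to start even after the hard reset, that generally points at the PoE hardware itself rather than configuration.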
Is there any way I can find out why my speeds are not 40GB or so? For some unholy reason my rsync copies are hovering around 45MB/s, which is abysmal when I should be getting a helluva lot more. Disks are Western Digital Reds, and nothing else is going on between the arrays the disks are reading/writing from. CPU is low, memory is low, networking is barely a blip in speed. I cannot push the 40GB speed that I was hoping for.

So, what's the array? What speed are you getting writing a large file to the destination? Speed reading a large file from the source? Have you tried running iperf between the two endpoints and seen what sort of speeds you're getting? Do you have any other connections that might be getting used (like an old gigabit link that you're keeping for redundancy)? If you're not seeing errors on the output of "show interface ethe 1/2/X" and the log isn't showing anything either, it's unlikely to be a basic switching issue, but we'd need to know a lot more about source and destination before we could hazard a guess on anything else.
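To put numbers on the disk side independently of the network, a direct-I/O dd run on each box is a quick sanity check. A minimal sketch, assuming Linux; /tank/testfile is a placeholder path, and oflag=direct/iflag=direct bypass the page cache so the result reflects the array rather than RAM:

# write test on the destination (~10GB; path is a placeholder)
dd if=/dev/zero of=/tank/testfile bs=1M count=10000 oflag=direct
# read test on the source, using the same file
dd if=/tank/testfile of=/dev/null bs=1M iflag=direct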
Array is a pair of Supermicro 2U servers with dual 6-core/12-thread Xeon 2600s and 128GB of RAM in each. Both use SAS controllers, and the disks are SATA.
Both are running 40GB HP cards reflashed to stock Mellanox firmware in Ethernet mode. Jumbo frames are turned on for both.
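Jumbo frames are worth verifying end-to-end rather than assuming, since one hop still at MTU 1500 quietly breaks them. A minimal check on Linux: a don't-fragment ping sized for a 9000-byte frame (8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header; the address is a placeholder):

# DF bit set; this fails if any hop on the path is below 9000 MTU
ping -M do -s 8972 192.168.100.2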
Alright, so even if there was something wrong with the disks, there's still a problem with the network throughput.
I ran iperf between the two, and only when I was pushing 10+ threads was I getting anywhere near 30Gb/s between them.
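For reference, that single-stream vs. multi-stream gap is easy to reproduce with stock iperf3 (the address is a placeholder):

# receiver
iperf3 -s
# sender: one stream, then 10 parallel streams
iperf3 -c 192.168.100.2 -t 30
iperf3 -c 192.168.100.2 -t 30 -P 10

A big jump from one stream to ten usually means a per-connection limit (TCP window size, single-core interrupt handling) rather than the link itself.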
Are both devices in the same VLAN? If not, where's the routing being done, on the switch or elsewhere? If you've got spare cards/10G ports, can you pull the 40G links and test over the 10G to see whether you get better throughput, the same, or worse? (Either iperf or file transfer, preferably both.)
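One quick way to confirm the traffic is actually leaving over the 40G link and not a leftover gigabit path, assuming Linux iproute2 (the destination address is a placeholder):

# shows the interface and source address the kernel picks for that destination
ip route get 192.168.100.2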
I'm shocked I'm not getting at least 100MB/s from these transfers, as the disks can easily sustain 150-200MB/s.
30Gbps iperf is good; you're not going to do much better than that without newer CPUs and some kernel tweaking. 45MB/s file shares sounds like NFS with sync=enabled or something.
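If those pools happen to be ZFS, which the thread never actually states, the sync behaviour being guessed at here is visible per dataset; tank/data is a placeholder name:

# check whether synchronous writes are being forced
zfs get sync tank/data
# for testing only - trades write safety for speed
zfs set sync=disabled tank/data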
I am doing an rsync between two systems to replicate the data.
These are both Xeon 2620 V2s, so relatively new. I'd love to know what kind of kernel tweaking I can do.
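The usual first round of kernel tweaking for a 40G link is raising the TCP socket buffer limits, since stock defaults are sized for much slower links. These sysctls are standard on Linux, but the values below are just commonly quoted starting points, not tuned for this particular setup:

# raise the hard caps on socket buffers (bytes)
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
# min / default / max used by TCP autotuning
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

None of that will fix a 45MB/s rsync on its own, though; that number is far below even untuned TCP on this hardware.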
The 40GB ports are all in their own VLAN, separated out from any other network and traffic. Fully isolated. I have 10GB ports I can hook up as well.
You could test whether the bottleneck is in the switch by making a direct connection between the two systems... if both NICs are at 40GbE...
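For that direct-connect test, a /30 on each end with no gateway keeps it isolated from everything else. A sketch assuming Linux iproute2; interface names and addresses are placeholders:

# box A (ens1 is a placeholder interface name)
ip addr add 192.168.200.1/30 dev ens1
ip link set ens1 up
# box B
ip addr add 192.168.200.2/30 dev ens1
ip link set ens1 up

If the iperf numbers improve back-to-back, the switch (or its config) is implicated; if they don't, look at the hosts.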