I have a bunch of servers, mostly HPE, Dell and Sun machines, and they all have a regular 1 Gbit Ethernet connection to my local home network. So far so good.
Now I have installed SFP+ cards in some of them (a few machines had it built in), added 10 Gbit SFP+ modules, and used FC-FC fibre cables to connect them locally.
I also tried a few with direct-attach copper cables, rated for 10 Gbit of course.
Edit: the connections are reported as 10000 Mbit in the OS.
What's happening
When I run a test with iperf I get 8-9 Gbit/s - good enough for me.
But if I send a big file (5, 10, or 200 GB) it bogs down to 100-110 MB/s.
It does not matter if I copy from and to a ramdisk, two 1 TB NVMe drives in RAID 0, or a SAS RAID setup; it never goes above 110, maybe 115 MB/s.
Local copies (ramdisk to NVMe, ramdisk to SAS RAID, or NVMe to SAS RAID) run at 5-10x that speed, so as far as I can tell it's not the drives either.
Googling around, some people mention server power, CPU/RAM, or internal I/O limits causing this, but I'm not sure I have that issue here.
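For what it's worth, the speed I'm stuck at is suspiciously close to gigabit wire speed. A quick back-of-the-envelope check (the ~94% efficiency figure is a rough rule of thumb for TCP over Ethernet with a 1500-byte MTU, not something I measured):

```python
def max_tcp_payload_mb_per_s(link_gbit, efficiency_pct=94):
    """Rough best-case TCP payload rate in MB/s for a given link speed.

    efficiency_pct is a ballpark allowance for Ethernet/IP/TCP header
    overhead with a standard 1500-byte MTU.
    """
    return link_gbit * 1000 * efficiency_pct / 8 / 100

print(max_tcp_payload_mb_per_s(1))   # 1 GbE  -> 117.5 MB/s
print(max_tcp_payload_mb_per_s(10))  # 10 GbE -> 1175.0 MB/s
```

So 100-110 MB/s is pretty much exactly what a single gigabit link delivers, which makes me wonder if the file transfers are somehow taking the 1 GbE path even though iperf doesn't.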
Here are the specs for two of the machines; they are identical:
2x Xeon E5-2699 v3
4x 64 GB DDR4 RAM modules, 2666 MHz
Intel X520-DA2 dual-port 10GbE SFP+
LSI SAS 9305-16i
Ubuntu 20.04 Desktop version installed.
Other machines have older E5-2650L, E5-5650x and similar CPUs, all of them with 100-400 GB RAM; same problem with them.
Where can I start looking for the problem?
Or are there OS-specific tweaks that need to be done?
I can install other OSes if any of you have a test to suggest (in case it might be Ubuntu-specific).
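In case it helps anyone suggest something, this is roughly what I can run on both ends. The interface name and addresses below are placeholders, not my actual setup:

```shell
# Placeholders - substitute your own interface name and peer address.
IFACE=enp3s0f0          # the X520 10GbE port
PEER=10.10.10.2         # the other machine's 10GbE address

# Confirm the link really negotiated 10G
ethtool "$IFACE" | grep -E 'Speed|Duplex'

# Check that traffic to the peer actually leaves via the 10G port
# and not the onboard 1GbE (which would explain ~110 MB/s exactly)
ip route get "$PEER"

# Raw TCP, no disks involved (run "iperf3 -s" on the peer first)
iperf3 -c "$PEER" -P 4

# File-sized stream with no filesystem on the receiving end:
# on the peer first:  nc -l 5001 > /dev/null
dd if=/dev/zero bs=1M count=10000 | nc "$PEER" 5001
```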
Sorry for the text; I'm from Sweden so English isn't my native language.