Thanks for your input, I appreciate it.
I actually changed some BIOS settings (enhanced I/O or some HP-Speak like that) and iperf is now at 20-25Gbps.
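For reference, this is roughly how I'm taking that measurement; treat it as a sketch rather than gospel, since the hostname and stream count are just placeholders and it assumes iperf3 is already running as a server on the other box:

```python
#!/usr/bin/env python3
"""Quick iperf3 wrapper I use to log multi-stream throughput.
Assumes iperf3 is installed and already running in server mode
(`iperf3 -s`) on the far end; hostname and stream count are placeholders."""
import json
import subprocess

TARGET = "debian-server-a"   # placeholder hostname for the far end
STREAMS = 8                  # number of parallel TCP streams (-P)

# -J asks iperf3 for JSON output, -t 10 runs the test for 10 seconds
result = subprocess.run(
    ["iperf3", "-c", TARGET, "-P", str(STREAMS), "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"{STREAMS} streams: {bps / 1e9:.1f} Gbit/s")
```

Cranking the -P stream count up is the easiest way I know to tell whether a single TCP stream or the link itself is the bottleneck.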
I just used mscp and got 2.5GB/s to an NVMe drive without even trying, so this is absolutely a software/optimization issue.
edit: to be clear...
I think my problem is that I expected a 40Gbps connection to transfer at something substantially north of 10Gbps in single-file transfers, using NVMe hardware with Epyc CPUs, lots of RAM, and good cables, with relatively little configuration and in real-world use cases (i.e. simple file...
Sorry, I forgot to mention:
Debian Server A is on dual Epyc 7763 with a terabyte of RAM
Ubuntu Server B is on dual Xeon 2697 v4 with 512GB RAM
Ubuntu Server C is on a single Epyc 7D12 with 384GB RAM
All are on PCIe 3.0 x8. Loads of I/O in those servers.
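To spell out why I don't think the x8 slots are the problem, here's the back-of-envelope math I'm going by (rough decimal numbers, purely an estimate, not a benchmark):

```python
#!/usr/bin/env python3
"""Back-of-envelope numbers only (decimal units):
how PCIe 3.0 x8 compares with 40GbE and with what I'm actually seeing."""

def gbps_to_gb_per_s(gbps):
    """Convert Gbit/s to GB/s (decimal)."""
    return gbps / 8

# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding
pcie3_x8_gbps = 8 * 8 * (128 / 130)  # ~63 Gbit/s for an x8 slot

print(f"PCIe 3.0 x8 slot    : ~{pcie3_x8_gbps:.0f} Gbit/s (~{gbps_to_gb_per_s(pcie3_x8_gbps):.1f} GB/s)")
print(f"40GbE line rate     : 40 Gbit/s (~{gbps_to_gb_per_s(40):.1f} GB/s)")
print(f"iperf after BIOS fix: 20-25 Gbit/s (~{gbps_to_gb_per_s(20):.1f}-{gbps_to_gb_per_s(25):.1f} GB/s)")
print(f"single-file transfer: ~10 Gbit/s (~{gbps_to_gb_per_s(10):.2f} GB/s)")
```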
I did a search on this and couldn't find what I felt were good guides/threads on performance or setup.
I've got a bunch of ConnectX-3 Pro cards in various servers, all flashed to the latest Mellanox firmware on the site (2.42.5000). All are equipped with Samsung F320 cards or Samsung PM9A3 NVMe...
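In case it helps anyone double-checking their own cards, this is roughly how I confirm what firmware the driver actually sees (just a sketch; the interface name is a placeholder):

```python
#!/usr/bin/env python3
"""Print the driver and firmware version reported for a NIC.
The interface name is a placeholder; swap in whatever your ConnectX-3 shows up as."""
import subprocess

IFACE = "enp65s0"  # placeholder interface name

# `ethtool -i <iface>` prints driver info, including a firmware-version line
info = subprocess.run(
    ["ethtool", "-i", IFACE], capture_output=True, text=True, check=True
).stdout
for line in info.splitlines():
    if line.startswith(("driver:", "firmware-version:")):
        print(line)
```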
Hey, I saw that you had one of those 7V13 ES CPUs on a Gigabyte MZ32-AR0 motherboard. Have you tested it with LRDIMMs, and if so, did it work? More specifically, 64GB LRDIMMs.
Ahh, so uplinking is called stacking in Brocade-land. Got it. Is that enabled on a per-port basis? If so, can I plug a standard NIC into it afterward, or would stacking need to be disabled?
I picked up a 7250 and can't use the 10GbE SFP+ ports as uplinks to other switches, only as standard ports for servers. Is there some mode I need to set on the switch to allow me to uplink to other switches?
If the DDR4 is cheap enough, I will load the boat with 16GB sticks for all of my servers. If I can get them for $5 per stick or something like that, I might as well. I did the same with $20 64GB DDR3 sticks.