Remember, it's not just the work the CPUs might be doing with whatever is attached to the PCIe controller, but also the PCIe controller itself. When that controller is resident on the CPU, simply shunting data back and forth over the PCIe bus - even if you're not doing anything computationally expensive with it - will still generate a fair amount of heat. Moving lots of bits is expensive.
Of course it'd be interesting to see if the temperature difference holds true for a system sitting completely idle, i.e. little to no IO at all.
Well, those are some of the reasons why server geeks in prop trading shops disable IRQ balancing, pin interrupts, over-provision cores per CPU (running only 4 trading strategies on an 8-core CPU so Turbo Boost kicks clocks a little higher), take advantage of NUMA RAM locality, and place specific models of network cards in specific PCIe slots. Not everything in the machine is created equal, even when components sit side by side, and every little bit counts towards latency. And yes, the "uncore" components on CPUs also matter for heat generation. Sometimes these things are not immediately intuitive: transient CPU load does not always equal heat generation, and sometimes the peripheral components on the die matter more.
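If you want to poke at the interrupt-pinning and NUMA-locality side of this yourself, here's a rough sketch of the usual Linux knobs. The interface name and IRQ number below are purely illustrative; yours will differ:

    # Stop the irqbalance daemon so manually set affinities stick
    systemctl stop irqbalance

    # See which NUMA node a given NIC hangs off (eth0 is just an example name)
    cat /sys/class/net/eth0/device/numa_node

    # Pin one of the NIC's interrupts (IRQ 123, illustrative) to CPU 2,
    # picked to be on the same node as the card; the value is a CPU bitmask
    echo 4 > /proc/irq/123/smp_affinity

    # Confirm where that NIC's interrupts are actually landing
    grep eth0 /proc/interrupts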
And yes, you CAN test it out: use numactl and stress to pin artificial workloads onto each socket in turn, and keep tabs on the temperatures via IPMI sensor queries.
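Something like the following sketch would do it (sensor names and readings vary by BMC vendor, so check what ipmitool actually reports on your box):

    # Load 4 cores on socket 0 only, with memory allocated from node 0
    numactl --cpunodebind=0 --membind=0 stress --cpu 4 --timeout 600

    # In another terminal, watch the temperature sensors via the BMC
    watch -n 5 'ipmitool sdr type Temperature'

    # Then repeat with --cpunodebind=1 --membind=1 and compare the two runs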
Frankly, unless it shows at least a 15-20 degree Celsius variance between the two sockets, I wouldn't even pay it any heed.