No, contrary to your claim, it's not "complete BS".
1. Dow Corning isn't even close to the best TIM you can get on the market (especially since it's the same TIM used on their other TIM-based processors such as the 4770K, 6700K, 7700K, etc.)
2. TIM should only be used on small dies; this is a large die, so your point about thermal cycling doesn't apply.
3. Xeon E3's are soldered and work fine across various thermal environments and cycling conditions.
4. Previous HEDT parts were always soldered.
5. The other issue is the gap between the die and the TIM/IHS.
6. Refer to processors that use TIM and compare the temperatures they run at, stock and overclocked, versus soldered parts.
They didn't switch for reliability reasons; they switched to save costs.
1) DC TIMs are the best you can get on the market for what matters: reliability. Their TIMs have virtually zero pump out, have free flowing fill capability, and are basically unaffected by either thermal cycling or time degradation.
2) TIM can be used on dies of any size. Solder still has issues even with large dies and still suffers from thermal cycling.
3) Xeon E3s use the same TIM as their various Core iX cousins.
4) Previous gasoline was leaded.
5) The gap is always an issue, irrespective of solder or TIM.
6) Intel cares about reliability above all else in thermal solutions. Temperatures are within operational range and overclocking is buyer beware as always.
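The disagreement in points 5 and 6 ultimately comes down to conduction through the bond line between die and IHS. A rough back-of-envelope sketch of that temperature drop, where every number (die area, power, bond-line thickness, conductivities) is an illustrative assumption rather than a measured Intel value:

```python
# Temperature drop across the die-to-IHS interface: dT = q * t / (k * A).
# All values below are illustrative assumptions, not measured figures.

die_area_m2 = 3.0e-4   # ~300 mm^2 die (assumed)
power_w = 150.0        # heat flowing through the interface (assumed)
thickness_m = 100e-6   # ~100 um bond line (assumed)

conductivity = {       # W/(m*K), typical ballpark figures (assumed)
    "polymer TIM": 5.0,
    "indium solder": 80.0,
}

for name, k in conductivity.items():
    delta_t = power_w * thickness_m / (k * die_area_m2)
    print(f"{name}: ~{delta_t:.2f} K drop across the interface")
```

Under these assumptions the TIM layer drops roughly 10 K versus well under 1 K for solder, which is why the gap and bond-line thickness matter so much more for TIM.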
The cost differential between TIM and solder is noise. They aren't going to switch to save literally pennies per part.
If you think soldered dies are inferior and prone to degradation, then refer to mainstream Sandy Bridge processors, which are still working to this day. The fact that you are defending the practice of putting TIM on processors costing up to $2000 on an HEDT platform is why I decided to post this rebuttal. You can't call someone "insane" just because they refuse to accept Intel's practice of putting TIM on HEDT to save pennies.
Cost is a non-issue for solder vs TIM. If they are going with TIM, it is for a viable engineering reason.
The reality is that X299 is lackluster and a rushed response to Threadripper, offering fewer features than Threadripper while likely being priced higher. That's without even mentioning that you need the 10-core processor just to get the full 44 lanes; the lower SKUs all have 28.
X299 is not a response to Threadripper. It was always planned and is shipping roughly on the schedule roadmaps have shown for quite a while. Threadripper (and quite honestly the 12-18c i9s) are solutions in search of a problem. There is little point in low clock speeds and high core counts outside of server workloads. We can talk more about Threadripper when it is more than some nebulous concept with zero concrete facts.
As far as PCIe lanes, meh. They pretty much go unused in 99.9% of cases. The number of machines running multiple GPUs outside of servers is minimal. SLI and XF are terrible solutions that don't provide any benefit the majority of the time. Most multi-GPU use cases outside of servers have bandwidth to spare at one lane per card.
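To put the lane-count argument in numbers: per the PCIe 3.0 spec, each lane runs at 8 GT/s with 128b/130b encoding, roughly 0.985 GB/s of usable throughput each way. A quick sketch of what different link widths provide (the workload comparisons are my own assumptions, not benchmarks):

```python
# Approximate usable PCIe 3.0 throughput per link width.
# 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane, each way.
GBPS_PER_LANE = 8.0 * (128 / 130) / 8  # ~0.985 GB/s

for lanes in (1, 4, 8, 16):
    print(f"x{lanes}: ~{lanes * GBPS_PER_LANE:.2f} GB/s each way")
```

So 44 lanes versus 28 only matters if a build actually needs, say, two full x16 GPU links plus NVMe storage at the same time; for a single GPU plus a drive or two, 28 lanes already has headroom.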