> This might be the right place to ask, but will CAT wiring fade away? It cannot support the faster link speeds, and we're already starting to see 100GbE adopted/needed.

A bit of a threadjack, but years from now, IMO, it eventually will. Short-distance links will be wireless, while long-distance and high-bandwidth scenarios will likely be glass. Corporate environments will shift fully from traditional desktops/laptops to thin/cloud-based solutions. Current architectures rely on machine imaging, in which network booting needs a wired connection. This could shift to glass, but given the general costs of pulling fiber to the desktop, everything outside of some high-bandwidth/research scenarios will favor copper because it is cheaper.
> This might be the right place to ask, but will CAT wiring fade away? It cannot support the faster link speeds, and we're already starting to see 100GbE adopted/needed.

We will only see 40G on copper (CAT8 cables), and it's limited to 30 m or so. For real high bandwidth it will be fiber! I guess QSFP28 will be the next standard; expect to see much greater usage of that form factor.
If you need that number of ports, the Netgear M4300 is a good option: 10G SFP+ and 10GBase-T, in 8+8, 12+12, or 24+24 port configurations.
> The reduced latency with SFP+ / fiber dramatically reduced the resource usage / threads needed, as "blocking" time was significantly reduced.

Blocking time is idle time, and having many threads isn't a problem either: creating them might be, but that has nothing to do with IO waiting or SFP+. The application should probably be redesigned anyway.
> For me, choosing 10G-BaseT is like throwing money out of your window.

Wrong assumptions, wrong conclusions.
> Oh, it has everything to do with that. The reduced wait time from 10 µs to 1-2 µs is very noticeable in the benchmarks we have done. Having thousands of threads that consume valuable memory that could be used better for other things is not always the right choice.

A thread itself consumes a negligible amount of memory, in both kernel and user space. I've never claimed that SFP+ doesn't provide lower latency, nor that the difference isn't measurable or beneficial, but that's all it does. It doesn't change software design, and it doesn't solve world hunger and the like.
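For what it's worth, the per-thread cost is easy to check yourself. A minimal Python sketch (the thread count and 256 KiB stack size are arbitrary choices for illustration, not numbers from this thread): a thread blocked on a wait, like one blocked on IO, burns no CPU, and its dominant memory cost is the stack reservation, which is tunable.

```python
import threading

# Shrink the per-thread stack reservation (default is often 1-8 MiB).
# 256 KiB is an arbitrary but valid value: a multiple of 4 KiB, >= 32 KiB.
threading.stack_size(256 * 1024)

stop = threading.Event()

def worker():
    # A blocked thread is idle: it consumes no CPU while waiting,
    # only its (reduced) stack reservation.
    stop.wait()

threads = [threading.Thread(target=worker) for _ in range(1000)]
for t in threads:
    t.start()

print(f"{threading.active_count() - 1} blocked worker threads alive")

# Wake everyone up and clean up.
stop.set()
for t in threads:
    t.join()
```

Whether a thousand mostly-idle threads is "negligible" or "valuable memory wasted" depends on how much stack you reserve per thread and how much RAM the box has; the point is that the cost is a design parameter, not a fixed tax.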
> Refactoring working production code is a very risky business move.

It depends; often it's part of the software business, but it's definitely not an excuse for "fixing" problems elsewhere.
> I'm talking about keeping most stuff inside the L2 and L3 cache. Many threads will just cause slower execution because you will have significant memory management overhead.

No, you're constantly talking about random things, unsurprisingly without elaborating (and I challenge you to do so), that you barely understand and that are not at all related to SFP. Contention has nothing to do with cache size; cache thrashing is not caused by threading and depends on the workload type and/or how the code is written. Not a single thing you mentioned is related to the network stack or to SFP+ in particular. I'm not going to spend any more time on this; Google can offer you (and the other four people who liked your post) a good starting point for a better understanding.