I really like the HGX SXM4 A100 setups instead of PCIe AICs with a bridge.
If you're purchasing a whole server with everything, I don't think there's much of a cost difference.
Right, we were investigating NUMA domains and such.
Also correct that our 3090 test rig is a single-processor i9-9900K, so the topology is much simpler.
And we're talking about ~10 ms/req, which seems much higher than a NUMA issue could account for, especially with the machine not under significant load.
No, we are trying to figure out whether it's just too early for PyTorch, a driver issue, or something else.
And by faster I meant lower latency per frame processed at small batch sizes, not more frames processed in big batches...
The use case is storing a buffer of all frames from a bunch of cameras feeding into a vision system in a factory, so we can scrub through the data if we want; it uses about 1-2 TB per 24 h of run time.
* "all" means frames that actually matter, not, for example, frames where no motion happened...
I would like to stand up 2-4 TB of memory for a rotating image cache instead of writing to disk.
These images are not critical to store for long periods, right now we just store a few minutes worth of content.
Suggestions?
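For what it's worth, a rotating in-memory cache like that can be sketched as a byte-budgeted ring buffer that evicts the oldest frames first. This is a minimal sketch, not your system; the class name, the (camera, timestamp, payload) tuple layout, and the evict-until-it-fits policy are all assumptions for illustration:

```python
from collections import deque

class RotatingFrameCache:
    """In-memory rotating frame cache with a fixed byte budget.

    Oldest frames are evicted first once the budget is exceeded,
    so the cache always holds the most recent window of frames.
    """

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.frames = deque()  # (camera_id, timestamp, payload) tuples

    def add(self, camera_id, timestamp, payload: bytes):
        # Evict oldest frames until the new payload fits the budget.
        while self.frames and self.used + len(payload) > self.budget:
            _, _, old = self.frames.popleft()
            self.used -= len(old)
        self.frames.append((camera_id, timestamp, payload))
        self.used += len(payload)

    def scrub(self, t0, t1):
        """Return all cached frames with timestamps in [t0, t1]."""
        return [f for f in self.frames if t0 <= f[1] <= t1]
```

At your stated ~1-2 TB per 24 h, a 2-4 TB budget would hold very roughly one to two days of frames before rotation kicks in.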
I am interested in the 2x 48-500W and US-8-150W switches and the 2x UAP-AC-HD.
I'm not local, so shipping to CA would be required, but I'll take both switches at once.
Let me know.
Is anyone else doing inference on an A100 with PyTorch?
We run a number of vision machines with the following cards:
RTX Titan, RTX 3090, SXM A100
The important metric for us is frame-to-class latency under bursts of frames; we batch results up to N frames per batch and measure the round-trip time...
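That kind of measurement can be sketched roughly as below. This is a hedged illustration, not your benchmark harness: `classify_batch` is a stand-in for the real model forward pass, and charging every frame in a batch the batch's round-trip time is one assumed way to define per-frame latency.

```python
import time

def classify_batch(frames):
    # Stand-in for the real model call (e.g. a PyTorch forward pass).
    return [0 for _ in frames]

def frame_to_class_latency(frames, max_batch=8):
    """Measure per-frame round-trip latency for a burst of frames,
    batching up to max_batch frames per model call."""
    latencies = []
    for i in range(0, len(frames), max_batch):
        batch = frames[i:i + max_batch]
        start = time.perf_counter()
        classify_batch(batch)
        elapsed = time.perf_counter() - start
        # Every frame in the batch shares the batch's round-trip time.
        latencies.extend([elapsed] * len(batch))
    return latencies
```

Sweeping `max_batch` then shows the small-batch latency vs. big-batch throughput trade-off directly.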