I can't imagine they would be too slow. Maybe limited by the 4GB address space and compute capability 5.0, sure, but not by raw speed. Still, they're a waste for AI; they're too slow for any real purpose unless you can get them for $200. Just buy a GTX 1070.
The 1070 offers up to 8GB and compute capability 6.1, which means you'll be just behind the next TensorFlow release, since they've standardized their builds on the NVIDIA TX platform (aka compute capability 6.3). Yeah, they've got me tongue-in-cheek about this too.
This reminds me of another question I forgot to ask here.
Can you virtually SLI these cards and run all 4 GPUs as one single configuration?
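For compute workloads you generally don't need (and can't use) SLI: SLI is a graphics feature, and CUDA frameworks like TensorFlow see each card as a separate device, so you pool them yourself by sharding the work (data parallelism). A minimal sketch of that idea in plain Python, with stand-in functions instead of real GPUs (no GPU libraries assumed):

```python
# Sketch: how a framework uses 4 GPUs without SLI.
# Each "device" gets a shard of the batch; partial results are combined.
# per_device_work is a stand-in for a kernel/forward pass on one GPU.

def shard_batch(batch, num_devices):
    """Split a batch into num_devices roughly equal shards."""
    base, remainder = divmod(len(batch), num_devices)
    shards, start = [], 0
    for i in range(num_devices):
        end = start + base + (1 if i < remainder else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def per_device_work(shard):
    """Stand-in for the computation one GPU would run on its shard."""
    return sum(x * x for x in shard)

def data_parallel(batch, num_devices=4):
    shards = shard_batch(batch, num_devices)
    partials = [per_device_work(s) for s in shards]  # would run concurrently
    return sum(partials)  # the "all-reduce" / combine step

batch = list(range(10))
print(data_parallel(batch, 4))  # same answer as one device would give
```

The point is that the 4 cards stay 4 independent CUDA devices; the "single config" is built in software by splitting and recombining the work, not by the driver fusing them.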