Why TensorFlow's Docker CPU image is terrible, and how to fix it


Patrick

Administrator
Staff member
For $240, if you are serious about learning TensorFlow, just get an NVIDIA GTX 1060 6GB. We recommend larger cards, but the GTX 1060 6GB is a good place to start. With that said, what if you just want to try TensorFlow on your CPU? Say you have been bitten by the bug and want to experiment while your GPU is slowly making its way to you: you can use the CPU image.

The CPU image is very easy to use, and per the official instructions you can run:
Code:
docker run -it gcr.io/tensorflow/tensorflow bash
The issue with that is you will get warnings at startup such as:
Code:
2017-05-19 15:45:27.683835: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-19 15:45:27.683898: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-19 15:45:27.683935: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-05-19 15:45:27.683958: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-05-19 15:45:27.683981: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
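Before rebuilding anything, it is worth confirming the host CPU actually supports the instruction sets those warnings mention. A quick sketch (Linux only, reading /proc/cpuinfo; the helper name is mine):
Code:
```shell
# check_simd: report which of the SIMD feature flags from the TensorFlow
# warnings the host CPU supports, based on the flags line in /proc/cpuinfo.
check_simd() {
  flags=""
  [ -r /proc/cpuinfo ] && flags=$(grep -m1 '^flags' /proc/cpuinfo)
  for f in sse4_1 sse4_2 avx avx2 fma; do
    case " $flags " in
      *" $f "*) echo "$f: supported" ;;
      *)        echo "$f: not found"  ;;
    esac
  done
}

check_simd
```
If avx2 and fma show up as supported, an optimized build will use them.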
How to fix this:
  1. Hopefully, TF will again ship binaries compiled with at least Haswell-era AVX2/FMA instructions in the future. Until then, compile your own.
  2. You can follow the standard build-from-source instructions. If you want to use my base image servethehome/tensorflowcpu:dependencies, you can follow those instructions starting with the git clone steps. Keras is already installed in that image, and the Dockerfile is on GitHub.
  3. I am currently working on running the ./configure step in non-interactive mode so we can have a fully automated build. Any suggestions are appreciated.
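The build-from-source steps above can be sketched roughly as follows. The configure environment variables and the bazel target are what worked against the TF 1.x tree and may change between releases, so treat this as a starting point rather than a canonical recipe:
Code:
```shell
# Sketch of a from-source CPU build with SIMD optimizations enabled.
# Guarded so it is a no-op on machines without git/bazel installed.
TF_BUILD_OPTS="-c opt --copt=-march=native"  # -march=native picks up AVX2/FMA on the build host

if command -v git >/dev/null && command -v bazel >/dev/null; then
  git clone https://github.com/tensorflow/tensorflow
  cd tensorflow

  # Answer every ./configure prompt with its default, CPU-only build
  export TF_NEED_CUDA=0
  export PYTHON_BIN_PATH="$(command -v python)"
  yes "" | ./configure

  # To target a specific CPU rather than the build host, swap
  # -march=native for e.g. --copt=-mavx2 --copt=-mfma
  bazel build $TF_BUILD_OPTS //tensorflow/tools/pip_package:build_pip_package
  bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
  pip install /tmp/tensorflow_pkg/tensorflow-*.whl
fi
```
The yes "" pipe is the non-interactive trick from item 3: it feeds an empty line (i.e. the default answer) to each configure prompt.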
How much of an impact does compiling with AVX2/FMA and similar instructions have? On a dual Xeon E5-2658 V3 system:

Training our MNIST image GAN with the base TensorFlow image: ~2286 s/epoch * 100 epochs
The exact same GAN training on a build compiled with AVX2/FMA: ~1440 s/epoch * 100 epochs

The net impact of that simple 5-minute up-front change is just shy of 24 hours of training time (~1.7 days vs. ~2.6 days).
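For reference, the arithmetic behind that "just shy of 24 hours" figure:
Code:
```shell
# Check the savings: per-epoch times from the runs above, 100 epochs each
base=$((2286 * 100))    # stock image: 228600 s total
opt=$((1440 * 100))     # AVX2/FMA build: 144000 s total
saved=$((base - opt))   # 84600 s
echo "hours saved: $((saved / 3600))"   # prints "hours saved: 23" (23.5 h, just shy of a full day)
```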

On why you want a GPU: one of the four GPUs onboard a GRID M40 will do the same in about a day, and a faster card like a GTX 1070 will train the same model in about 8 hours.
 

Jeggs101

Well-Known Member
OK I tried this today. You run that image in -it mode, then you just start with git clone https://github.com/tensorflow/tensorflow and go from there.

@Patrick it's a nice image since I didn't need to set up Bazel or install any dependencies during the process.
 

OBasel

Active Member
Why don't they just do this by default? It's like 40% faster on the deep learning work I'm doing.