Installing TensorFlow from source to improve computational performance
When installing TensorFlow using pip or conda, the "default" binary is installed, which usually is not optimized to fully exploit the instruction set of the host CPU. This can be checked by running this small piece of code:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()  # the cpu_feature_guard warning, if any, is printed when the session is created
print(sess.run(hello))
It may print a warning like this:
tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
In this case, the warning indicates that compiling TensorFlow with AVX2 and FMA support could improve performance.
To do so, you have to install TensorFlow from source. Here is a short guide on how to enable AVX2 and FMA when installing TensorFlow:
Basically, you need to follow the official TensorFlow source installation guide (https://www.tensorflow.org/install/source) up to the "Bazel build" step, where you run the commands specified in the guide. In the configuration prompts (shown when executing "./configure") I kept all the default options, with two exceptions: in the first prompt I specified the path to the python3 binary (not python2), and in the last prompt (about the "optimization flags to use during compilation") I provided the flags I wanted to use in the Bazel build step (--copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.1 --copt=-msse4.2).
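For reference, a minimal sketch of that Bazel build step, assuming the pip-package target from the official guide and passing the same flags explicitly on the command line:

bazel build --config=opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.1 --copt=-msse4.2 //tensorflow/tools/pip_package:build_pip_package

(The flags entered in "./configure" end up in the --config=opt configuration, so repeating them on the command line should be redundant but harmless; this just makes them explicit.)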
The Bazel build takes a long time (in my case >2h) and it generates a script that builds a .whl file, which can then be installed using pip (all of this is explained in the guide).
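Concretely, following the official guide, those last two steps look roughly like this (the output directory and the exact wheel name will differ on your machine):

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl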
In the end, I managed to run the "Hello, TensorFlow" example without the compilation warnings. However, running AdaNet requires some dependencies (tensorflow_estimator, etc.) that are not installed by the TensorFlow source build. I tried to install them separately (using Bazel: https://github.com/tensorflow/estimator) but I did not manage to run TensorFlow without errors...
If the installation is OK (as in a regular TensorFlow installation from pip), you should be able to do this import without getting any error:
import tensorflow.contrib
In my case I had the following error:
No module named 'tensorflow_estimator'
And when I tried to upgrade TensorFlow (pip install -U tensorflow) the error changed to:
ImportError: No module named 'tensorflow_estimator.python.estimator.tpu'
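A quick way to see which estimator package (if any) actually ended up in the environment is to ask pip and Python directly (assuming pip and python3 point to the same environment used above):

pip show tensorflow-estimator
python3 -c "import tensorflow_estimator; print(tensorflow_estimator.__file__)"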
I stopped there, but I think it could be interesting to find out how to get those .whl files already tuned with all the dependencies and compilation options we need. That way we could incorporate the .whl files into the Singularity definition files (just running pip install), allowing a full exploitation of TensorFlow.
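As a rough sketch of that idea (the base image, paths and wheel name here are placeholders, and the wheel must match the Python version inside the container), a Singularity definition file could copy the custom-built wheel into the image and pip-install it in %post:

Bootstrap: docker
From: ubuntu:18.04

%files
    tensorflow-1.13.1-cp36-cp36m-linux_x86_64.whl /opt/tensorflow.whl

%post
    apt-get update && apt-get install -y python3 python3-pip
    # install the locally built, CPU-optimized wheel instead of the generic PyPI package
    pip3 install /opt/tensorflow.whl

Keep in mind that a wheel built with AVX2/FMA will only run on hosts whose CPUs support those instructions, so such containers are less portable than ones using the generic pip package.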