Most of the tutorials out there on setting up deep learning tools such as TensorFlow and Keras seem to be focused on Ubuntu. This is great and all, but what if you prefer a different distribution? I personally am a big Arch Linux fan, and more so, a Manjaro fan. So here is an overview of how I set up the latest Nvidia driver, CUDA, CUDNN, Python, TensorFlow (GPU version) and Keras on a fresh install of Manjaro Linux.
As a preamble: I installed from scratch using Manjaro Architect and opted for the Budgie DE. I auto-installed the Nvidia driver during the installation, but you can always perform the
mhwd installation of the Nvidia drivers by issuing the following command and following the prompts:
sudo mhwd -a pci nonfree 0300
The first step is to check that the drivers have been installed correctly by running
nvidia-smi in the terminal. It should report your card and driver version. For me, the card was a
TITAN V.
The next step was to install the latest version of CUDA and CUDNN. Thankfully, this is much less painful on Manjaro than on Ubuntu, and this can be done with one line:
sudo pacman -S cuda cudnn
This step can take a while, especially over the slow ADSL network in Australia, so grab a coffee and relax!
The next step isn’t technically necessary, but it is well worth doing to verify that CUDA and its compilation tools are correctly installed. Issue the following commands:
cp -r /opt/cuda/samples ~
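The build step itself is missing above; assuming the samples were copied into your home directory as in the previous command, compiling them looks something like this:

```shell
# Move into the copied samples directory and build everything.
# -j"$(nproc)" parallelizes the build across all CPU cores.
cd ~/samples
make -j"$(nproc)"
```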
Again, this process can take quite a long time but will verify that your CUDA installation is correct before proceeding to the next step. Once the process completes, issue the following commands:
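The verification commands were omitted above; a typical check, assuming the default samples layout, is to run the deviceQuery binary:

```shell
# Run the deviceQuery sample; the final lines should report your GPU
# and end with "Result = PASS".
cd ~/samples/bin/x86_64/linux/release
./deviceQuery
```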
If the lower line says
Result = PASS, you are good to go. First, let’s do some cleaning up:
rm -rf ~/samples
Now we need to install
bazel so that we can compile TensorFlow from scratch later on. Why do we need to compile TensorFlow from scratch? Basically, because the version of CUDA installed by Manjaro is 9.1 (and we want the latest, right?) while the prebuilt binaries of TensorFlow are compiled against CUDA 9.0. We could of course roll back CUDA to 9.0, but in my case I really wanted to use the default 9.1 to support my Titan V with its Volta architecture. Installation is quite easy:
sudo pacman -S bazel
Once the installation completes, run
bazel version to make sure that it is installed correctly.
Now one final step before we get onto compiling TensorFlow and installing Keras. We need to install and configure our Python environment correctly. I personally prefer using the Anaconda distribution of Python, because it comes packaged with pretty much all of the libraries that I use for deep learning, including Jupyter, NumPy, Matplotlib, PIL and all of the usual suspects. To get started, issue the following commands:
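The download command itself is missing above; at the time of writing, the Python 3 installer could be fetched with something like the following (the URL reflects the 5.1.0 release, so check the Anaconda site for the current link):

```shell
# Download the Anaconda3 5.1.0 installer for 64-bit Linux.
wget https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
```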
This will download the latest Python 3 version of Anaconda (which is 5.1.0). Please note this version may be different, so check the Anaconda website and make sure you are grabbing the latest version (Python 3, Linux).
Now to install it:
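The install command is not shown above; since the installer is a shell script, running it under your shell of choice looks like:

```shell
# Run the Anaconda installer (replace zsh with bash, fish, etc. as needed).
zsh Anaconda3-5.1.0-Linux-x86_64.sh
```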
Of course, the command above works for me, as I am using the
zsh shell. Feel free to replace it with
fish or whatever you are using. Then accept the license agreement and the default install directory
/home/username/anaconda3. Also accept the offer to prepend the installation directory to your PATH in your .bashrc.
Now if you are using
bash you are good to carry on, but if you are using an alternative shell such as
zsh you will need to make a quick modification. First, issue
nano ~/.bashrc and scroll to the bottom. You should see the following section:
# added by Anaconda3 installer
export PATH="/home/username/anaconda3/bin:$PATH"
Simply copy this section, and add it to the bottom of your
.zshrc file instead, or whichever shell config file you are using. For
zsh, issue the following command after you have done it:
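The command referred to here is presumably reloading your shell configuration:

```shell
# Reload the zsh configuration so the new PATH takes effect.
source ~/.zshrc
```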
And then issue:
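A simple verification, assuming Anaconda’s bin directory is now first in your PATH, is:

```shell
# Should report a Python 3.x build tagged "Anaconda, Inc."
python --version
# Should point into ~/anaconda3/bin
which python
```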
To make sure the Anaconda installation is successful.
Now let’s get on and start compiling TensorFlow. At the time of writing this, the latest version was 1.6.0 with 1.7.0 going into beta. So let’s grab 1.6.0 from the official TensorFlow sources and save it to our home folder:
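The download command is missing above; the 1.6.0 source tarball referenced below can be grabbed from the GitHub release archive:

```shell
# Fetch the TensorFlow 1.6.0 source archive into the home folder.
cd ~
wget https://github.com/tensorflow/tensorflow/archive/v1.6.0.tar.gz
```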
Then perform the following:
tar xvzf v1.6.0.tar.gz
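Before the prompts below appear, you need to enter the extracted directory and start the configuration script:

```shell
# Enter the source tree and run TensorFlow's interactive configure script.
cd tensorflow-1.6.0
./configure
```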
Now you will be asked a number of prompts, accept the default choices (by pressing enter) to all of them, except for:
Do you wish to build TensorFlow with CUDA support? Y
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 9.1
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1.1
Make sure you select the correct compute capability of your GPU. Check on the Nvidia website if you are unsure. The Titan V for example is 7.0.
Once configure has completed, issue the following command to ensure a potential compilation bug is avoided:
sudo ln -s /opt/cuda/include/crt/math_functions.hpp /opt/cuda/include/math_functions.hpp
Now, compile time! This process can take a REALLY LONG time, so issue the following command and come back in a few hours.
bazel build --config=opt --config=cuda --incompatible_load_argument_is_label=false //tensorflow/tools/pip_package:build_pip_package
Now we need to create a wheel from the compiled code as follows:
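The wheel-building command is not shown; with the standard TensorFlow build this is:

```shell
# Package the compiled TensorFlow into a pip wheel in /tmp/tensorflow_pkg,
# then move into that directory so the pip install below picks it up.
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
cd /tmp/tensorflow_pkg
```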
And then install it with
pip install *.whl
Now you should have a working version of TensorFlow 1.6.0 compiled with CUDA 9.1 and CUDNN 7.1.1 available in your
python3 shell. Try it out!
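A quick smoke test from the terminal confirms both the version and that the GPU is visible:

```shell
# Print the TensorFlow version, then list visible devices; the second
# command should include a GPU entry if CUDA is working.
python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
```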
We can also install Keras now with:
conda install -c conda-forge keras --no-deps
Some Finishing Touches
I tend to use my system as a remote access machine, serving Jupyter notebooks over the web and allowing
ssh access into the machine. The first thing I tend to set up is a static IP address so that I can run port forwarding over my network. There are tons of tutorials on this, but the IPv4 configuration can be done directly in the Budgie Desktop in Manjaro.
I then get Jupyter Notebook set up and configured, and set up a
systemd service to start it in the background on each boot. Here is how I get started with this:
jupyter notebook password
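To get at the lines mentioned below, you first need to generate the notebook configuration file and open it for editing:

```shell
# Generate ~/.jupyter/jupyter_notebook_config.py, then open it in an editor.
jupyter notebook --generate-config
nano ~/.jupyter/jupyter_notebook_config.py
```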
I then need to uncomment a few lines and change them:
#c.NotebookApp.allow_origin = ''
c.NotebookApp.allow_origin = '*'

### AND ###

#c.NotebookApp.ip = 'localhost'
c.NotebookApp.ip = '0.0.0.0'
Then I test it with:
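The test command itself was omitted; simply launching the server works:

```shell
# Serve notebooks from the current directory; with the 0.0.0.0 bind
# address set above, it is reachable from other machines on port 8888.
jupyter notebook
```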
Now, to avoid re-running the above command every time my system restarts, I set up a
systemd service to autostart it each boot.
sudo nano jupyter.service
And then populating it with the following boilerplate entry, making sure you enter your usual Notebook serving directory and put your correct home folder path in the script:
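The boilerplate itself is missing above; a minimal unit of the kind described, with a hypothetical username and serving directory (adjust both, along with the Anaconda path), might look like:

```
[Unit]
Description=Jupyter Notebook server
After=network.target

[Service]
Type=simple
User=username
# Directory the notebooks are served from (change to your own).
WorkingDirectory=/home/username/notebooks
# Full path to the Anaconda-installed jupyter binary.
ExecStart=/home/username/anaconda3/bin/jupyter notebook
Restart=on-failure

[Install]
WantedBy=multi-user.target
```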
And then copy it to
/etc/systemd/system/ and enable it on each boot with:
sudo systemctl enable jupyter
Done! Any issues let me know in the comments.