Serving and Deploying Keras Models using Flask, UWSGI, NGINX and Docker

Dr. Joe Logan
Mar 28, 2018


Anyone who has tried (and failed) to deploy their Keras models will understand the frustration of getting your trained models out there on the cloud. It is all well and good having all of your code sitting there in a Jupyter Notebook, but eventually you will want to deploy and run these models at scale. Hopefully this guide will help!

The Problem

There really isn’t much documentation out there on how to serve a Keras model publicly, and setting up tools such as Flask, uWSGI and NGINX adds further complexity, especially if you come from the Go or Node world. DigitalOcean provides a reasonable guide for setting up this stack on their virtual machines, but unfortunately it breaks when you try to predict using a Keras model due to thread safety, and it takes quite a lot of research and tweaking to get it running on this stack. All of this leads to a lot of wasted time fiddling around configuring a VM.
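For reference, the thread-safety issue typically shows up as a prediction call failing inside a uWSGI worker thread, because the model was loaded in a different thread. A minimal sketch of the common workaround (assuming Keras 2 on a TensorFlow 1.x backend, which was current at the time of writing; the model file name is hypothetical) is to capture TensorFlow's default graph at load time and re-enter it on every prediction:

import tensorflow as tf
from keras.models import load_model

# Load the model once, in the main thread, and keep a handle to the
# graph it was loaded into.
model = load_model('model.h5')  # 'model.h5' is a hypothetical saved model
graph = tf.get_default_graph()

def predict(x):
    # Re-entering the original graph makes the call safe from worker threads.
    with graph.as_default():
        return model.predict(x)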

Enter Docker

Docker operates at the container level, rather than the VM level. Setting aside the technicalities, this means that you can focus on writing your code and leave the configuration to someone else out there on the internet. I have created a Docker image that installs all of the serving stack, so NGINX, uWSGI and Flask, with all of the necessary configuration bundled in. Once you have written and packaged your code into this container, it can be deployed to Google Cloud, Azure, AWS or DigitalOcean with no need to concern yourself with config.

So let's assume the following application tree:

.
├── app
│   └── main.py   # this contains your Keras model and prediction code
└── Dockerfile
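Here is roughly what app/main.py might look like: a minimal sketch of a Flask app exposing a /predict endpoint, not the exact code from this post. The model file model.h5 and the JSON payload shape are assumptions, and the base image conventionally looks for a Flask object named app in main.py.

# app/main.py -- a minimal sketch, assuming Keras 2 on TensorFlow 1.x
# and a trained model saved as model.h5 alongside this file.
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from keras.models import load_model

app = Flask(__name__)

# Load once at startup and keep the graph for thread-safe predictions,
# as discussed above.
model = load_model('model.h5')
graph = tf.get_default_graph()

@app.route('/predict', methods=['POST'])
def predict():
    # Expects JSON like {"inputs": [[0.1, 0.2, 0.3]]}
    data = request.get_json(force=True)
    x = np.array(data['inputs'])
    with graph.as_default():
        preds = model.predict(x)
    return jsonify({'predictions': preds.tolist()})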

Now we populate the Dockerfile.

FROM joelogan/keras-tensorflow-flask-uwsgi-nginx-docker
COPY ./app /app

Note that the joelogan/keras-tensorflow-flask-uwsgi-nginx-docker image installs all of the serving frameworks, Python and a number of dependencies, such as Keras, TensorFlow, Pillow, Matplotlib and h5py, so that you can get up and running with serving your models easily. If you need any additional libraries (for example, Pandas), you can add them in the Dockerfile as follows:

FROM joelogan/keras-tensorflow-flask-uwsgi-nginx-docker
RUN pip install pandas
COPY ./app /app

Then we run docker build, which builds an image that inherits all of the serving code (and Python, TensorFlow, Matplotlib and a few other dependencies) and copies your app code into the image.

docker build -t some_random_name .

Docker will proceed to download all of the dependencies and build the image. Then you can run it and test it on your local machine:

docker run -d --name random_container -p 80:80 some_random_name
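With the container up, a quick smoke test against the /predict endpoint sketched above (using the same hypothetical payload shape) might look like:

import requests

# Port 80 on localhost is mapped to the container by the docker run command above.
resp = requests.post('http://localhost/predict',
                     json={'inputs': [[0.1, 0.2, 0.3]]})
print(resp.json())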

Or deploy it somewhere else, such as a DigitalOcean VM or Google Cloud.

And a big shout out to tiangolo, from whom I inherited the base for the keras-tensorflow-flask-uwsgi-nginx-docker image.


Written by Dr. Joe Logan

Developer and AI enthusiast from Sydney. Founder of Alixir. Check me out @ https://jlgn.io
