Deep Learning on a Mac? Turi Create Review.

Dr. Joe Logan
6 min read · Oct 15, 2018


I am a self-confessed Apple fanboy, and own both an iMac Pro and a 2018 MacBook Pro. I also own a couple of custom-built Linux machines, solely (and reluctantly) for deep learning and artificial intelligence development. This is essentially because the graphics platform of choice on the Mac is AMD, whereas the deep learning community relies heavily on Nvidia’s CUDA library. This is a huge shame, especially for owners of the iMac Pro, where a pretty damn quick Vega 56 or Vega 64 sits unused.

Sure, there are solutions out there that enable OpenCL / AMD to work with TensorFlow, but getting it all set up, let alone working, is simply a pain in the ass.

So imagine my excitement when I stumbled upon Apple’s Turi Create library, which is purported to just work on the Mac. It incorporates support for most modern AMD GPUs, and supposedly makes model creation and use with CoreML and iOS devices painless.

Having come from Keras, TensorFlow and PyTorch running CUDA on Linux, I was excited to try out Turi Create, and see whether it could genuinely be a substitute for the tools that I know and begrudgingly love on Linux.

Step 1 — The Install

I use conda to manage my Python environments, so I went ahead and set up a new environment to install the base anaconda packages into:

conda create -n Turi python=3.6

I made sure to select Python 3.6 as the interpreter version, since Turi Create does not work with newer Python versions. I then activated the environment with:

source activate Turi

Next, I installed the full set of anaconda distribution packages into this environment before setting up Turi Create:

conda install anaconda

Finally, installing Turi Create was a breeze:

pip install -U turicreate

To be honest, I was pretty impressed by this point. I was used to having to build TensorFlow from source, or hack at various CUDA libraries in order to get it to run. In true Apple fashion, this just worked.

Step 2 — The Turi Create Platform

Turi Create offers a number of deep learning implementations, but as I am specifically interested in computer vision (both classification and object detection), I found the following in the documentation:

  1. Image classification using ResNet or SqueezeNet.
  2. Object detection using YOLO v2.

I decided to move ahead and work on building an object detection system using the YOLO implementation provided by Apple.

Importing Turi Create is, again, very straightforward:

import turicreate as tc

Turi Create exposes a couple of new data types, which are needed to collate and supply training and test data to your model. These are the SArray and SFrame types. The SFrame is analogous to a Pandas DataFrame, and the SArray is something of a hybrid between a Pandas Series and a NumPy array.
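As a minimal sketch of the two types (the paths and labels below are made up purely for illustration), an SArray behaves like a single typed column, and an SFrame is a table of named columns:

import turicreate as tc

# An SArray is a typed column; an SFrame is a table of named SArrays
paths = tc.SArray(['images/img_001.jpg', 'images/img_002.jpg'])
labels = tc.SArray(['dog', 'cat'])

sf = tc.SFrame({'path': paths, 'label': labels})
print(sf)           # a two-column table
print(sf['label'])  # pulling out a single column gives you an SArray back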

My approach was to create two SArray objects, one to store the image paths, and one to store the annotations. There is a pretty good guide here from Apple on how to format the SArray objects and create an SFrame for training.

Essentially, each entry in the annotations array needs to be a list of dictionaries, one per bounding box, each with a 'label' and a 'coordinates' dictionary. The 'coordinates' dictionary holds the width and height of the box along with x and y, which refer to the centre of the bounding box rather than one of its corners.
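Going by Apple’s documentation, a single row’s annotation ends up looking roughly like this (the label and numbers are purely illustrative):

# One dictionary per bounding box; 'x' and 'y' are the centre of the box
annotation = [{
    'label': 'dog',
    'coordinates': {'x': 120, 'y': 85, 'width': 60, 'height': 40}
}]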

Data preparation, by creating the two SArray objects and compiling them into an SFrame, was surprisingly the most time-consuming part of the process. I had to take mask coordinates, convert them into a bounding box, and then convert those coordinates into the format that Apple expects.
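My masks are specific to my dataset, but as a rough sketch, with a synthetic mask and a hypothetical 'dog' label standing in for the real data, the conversion looked something like this:

import numpy as np

def mask_to_bounding_box(mask, label):
    # Pixels belonging to the object
    ys, xs = np.where(mask > 0)
    width = int(xs.max() - xs.min())
    height = int(ys.max() - ys.min())
    # Turi Create wants the centre of the box, not a corner
    return {'label': label,
            'coordinates': {'x': int(xs.min() + width / 2),
                            'y': int(ys.min() + height / 2),
                            'width': width,
                            'height': height}}

# A synthetic binary mask, purely for illustration
mask = np.zeros((200, 300), dtype=np.uint8)
mask[50:120, 80:220] = 1
print(mask_to_bounding_box(mask, 'dog'))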

Creating and training the model from there was actually very straightforward:

train_data, test_data = data.random_split(0.8)
model = tc.object_detector.create(train_data, max_iterations=1000, feature='image', annotations='annotations')

This runs 1000 training iterations with a pre-defined batch size of 32. It took around 10 minutes on my MacBook Pro.

Exploring and testing the data was a fantastic experience, using Turi Create’s built-in visualisation tool:

test_data['predictions'] = model.predict(test_data)
test_data['image_with_predictions'] = tc.object_detector.util.draw_bounding_boxes(test_data['image'], test_data['predictions'])
test_data.explore()

This explorer made it incredibly easy to browse through the images in the SFrame and look at the predictions made by the model. All in all, a very seamless and straightforward experience.

Saving the model for later use is also easy:

model.save('my.model')

As is converting it to CoreML for use in XCode natively:

model.export_coreml('my.mlmodel')
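And, for completeness, a saved model can be pulled back into a later Python session with tc.load_model:

model = tc.load_model('my.model')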

Step 3 — My Thoughts

I will start with the positives. Turi Create is undeniably an Apple product, and perhaps a look into the future of simplifying the process of model training and exporting. It is an absolute joy to work with on the Mac, and with no configuration whatsoever, it automatically used my Radeon 560 discrete GPU in the MacBook Pro.

I was able to create a reasonably complex model using YOLO in a couple of hours, which was able to obtain reasonably decent results. Furthermore, I now know that I can easily use this model in CoreML and create mobile implementations with no tinkering. A big win for Apple.

So does this all sound too good to be true? Perhaps.

Apple proudly claim that you don’t need to be a machine learning expert to work with Turi Create, and that became evident the more I used it. The library makes significant trade-offs in favour of simplicity and ease of use, and unfortunately that comes at a big cost.

Most glaring to me was that there is no way to pass validation data into the training process for an object detection model, which makes it impossible to gauge whether your model is overfitting. My understanding is that this functionality is provided for the simple image classification module, but it is missing from the object detection component at this point in time. The lack of checkpointing also makes it impossible to add this in your own code, which renders Turi Create pretty much redundant for anybody trying to build a real-world system of any practical benefit.
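The closest workaround I can see is to hold out a slice of data yourself and score the finished model with evaluate(), which reports mean average precision. That is a post-hoc sanity check rather than true validation during training, but it is better than nothing:

import turicreate as tc

# 'data' is the SFrame built earlier; hold out a validation slice manually
train_data, val_data = data.random_split(0.8)
model = tc.object_detector.create(train_data, feature='image', annotations='annotations')

# Post-hoc only: this scores the finished model on the held-out slice
metrics = model.evaluate(val_data)
print(metrics)  # includes mean average precision, but only after training has ended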

The focus on ease of use over configuration also prevents control of any hyperparameters aside from the batch size and the number of iterations. Forget about any on-the-fly image augmentation, RPN tweaking or customising layers in the model.
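To make that concrete, the create call is pretty much the whole configuration surface; the values below are just examples, and there is little beyond them to tune:

model = tc.object_detector.create(
    train_data,
    feature='image',
    annotations='annotations',
    batch_size=32,        # one of the few exposed knobs
    max_iterations=1000,  # the other one
)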

So what are my final thoughts on Turi Create? The analogy that I like to use is that Turi Create is the iPad of machine learning. Much like the iPad, Turi Create abstracts away a lot of functionality in favour of ease-of-use and broad appeal. And it will have broad appeal, particularly to Mac users and mobile developers seeking a quick way of building a reasonable model to incorporate into their apps. Perhaps it is a window into the future, as abstracting away a lot of the complexity may be beneficial, as models become smarter and require less tweaking during training.

The great news here, though, is that Turi Create is open-sourced on GitHub, so I have no doubt that the community will extend and improve upon it. I may look into this myself, as I would love to at least have some kind of per-epoch validation metric built into YOLO, so that I can run some half-decent benchmarks against TensorFlow and determine whether the trade-offs have a major impact on accuracy.

I will also be starting work on building a React Native integration for CoreML if anybody is interested in collaborating!


Written by Dr. Joe Logan

Developer and AI enthusiast from Sydney. Founder of Alixir. Check me out @ https://jlgn.io
