So, the other day I was trying to run predictions with a model I’d just built.

But the tensorflow.keras preprocessing library insisted on a local file path and wouldn’t accept image URLs directly.

I really didn’t want to run a curl or wget for every file I wanted a prediction on, and I wasn’t keen on a manual “download-then-upload” step every time either, so I looked for an easier, better way.

Turns out, Keras has a built-in utility, get_file, that downloads a file and returns its local path.

Plug that function in as the input to image.load_img and you’re good to go!

Here’s what the snippet looks like if you’re trying to run a prediction from a URL against a trained model.

import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.utils import get_file

img_url = 'paste-url-here'
file_name = 'test_image'  # get_file caches the download under this name

# Download the image and load it at the size the model was trained on
img = image.load_img(get_file(file_name, img_url), target_size=(150, 150))

# Convert to a (1, 150, 150, 3) batch for prediction
img_arr = image.img_to_array(img)
img_arr = np.expand_dims(img_arr, axis=0)
images = np.vstack([img_arr])

# model is your already-trained Keras model
classes = model.predict(images, batch_size=10)
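One caveat worth knowing: get_file caches downloads under ~/.keras/datasets, keyed by the filename you pass, so reusing the same name for different URLs can hand you back a stale cached file. A minimal sketch of one way around that (cache_name is a hypothetical helper, not part of Keras) is to derive the filename from the URL itself:

```python
import hashlib

def cache_name(img_url):
    # Hash the URL so each distinct URL maps to its own cache
    # filename, which get_file uses as its cache key.
    return hashlib.md5(img_url.encode('utf-8')).hexdigest() + '.jpg'
```

Then call get_file(cache_name(img_url), img_url) instead of passing a fixed name.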

And that’s about it!

Peace ✌