Sunday, April 7, 2019
Using Pre-Built Keras Models
Here we'll use a pre-built model, resnet50, to classify a poorly framed photo of a dog with too many other objects in the frame.
To install tensorflow and keras on Ubuntu, I followed the Anaconda instructions to create a tensorflow environment, then used conda to install whatever was missing in that environment. On Windows I could not make that work and instead installed into the regular environment.
On Ubuntu, to run from the right environment:
conda info --envs
conda environments:
base * /home/johnny/anaconda3
tensorflow_env /home/johnny/anaconda3/envs/tensorflow_env
conda activate tensorflow_env
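The exact install commands depend on what turns out to be missing; it looked roughly like this (the package list here is my guess at what this notebook needs, not a record of the original commands):
conda install -n tensorflow_env keras pillow matplotlib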
For the example, I followed Learn OpenCV.
import numpy as np
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.imagenet_utils import decode_predictions
from keras.applications import resnet50
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
SIZE = (224, 224) # used to scale image before predictions
Instantiating a model¶
Documentation on several models is provided as part of the Keras Docs. I selected resnet50 because it's the first example they give and it's fairly small (98 MB), so not the worst to download. Instantiating the model can take a minute the first time because that 98 MB of weights has to be downloaded.
model = resnet50.ResNet50(weights='imagenet')
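As a quick sanity check that the weights downloaded and to see what the network expects (not required for anything below, just a peek at the model):
print(model.input_shape) # (None, 224, 224, 3) -- matches SIZE above
print(model.output_shape) # (None, 1000) -- one score per ImageNet class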
Loading a picture¶
Here we get a prediction for a picture of Cookiedough. A little bit of her same-colored larger friend Dollybear's tail sneaks into the image, which might confuse the model.
# load cookie
img = Image.open("Cookiedough.jpg")
plt.imshow(img)
<matplotlib.image.AxesImage at 0x7fd89127de80>
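As an aside, load_img was imported above but never used; it can load and resize in one call. A minimal alternative sketch, assuming the same file (the variable name is mine):
# alternative to Image.open + resize: keras loads and resizes in one step
img_small = load_img("Cookiedough.jpg", target_size=SIZE)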
Make predictions¶
Some preprocessing is required to convert the image to the expected size and format. Then we can just run the model and look at the predictions.
# preprocess image
img_thumbnail = img.resize(SIZE) # scale image down to SIZE expected by models
img_array = img_to_array(img_thumbnail) # convert the PIL image to a numpy array
img_batch = np.expand_dims(img_array, axis=0) # Convert into batch format
img_processed = resnet50.preprocess_input(img_batch.copy()) # apply the preprocessing resnet50 expects
# get predictions
predictions = model.predict(img_processed)
decode_predictions(predictions) # decode into human-readable ImageNet labels
[[('n02100583', 'vizsla', 0.3337977), ('n02085620', 'Chihuahua', 0.16568162), ('n04162706', 'seat_belt', 0.14960282), ('n02107312', 'miniature_pinscher', 0.06319394), ('n02099712', 'Labrador_retriever', 0.058791324)]]
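decode_predictions returns a list of (class_id, label, score) tuples for each image in the batch; a small sketch to print them a little more readably:
# print the top-5 labels and scores for the first (and only) image in the batch
for class_id, label, score in decode_predictions(predictions, top=5)[0]:
    print(f"{label:25s} {score:.3f}")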
Evaluation¶
These are not horrible but not great predictions. Wisdom says she's 75% Chihuahua, and neither vizsla nor seat_belt is a good guess for the other 25%. But the tail of the larger dog on the left could be confusing things. Let's zoom in on Cookiedough using the crop method. First let's check the size of the uncropped image.
img.size
(4032, 3024)
Cropping¶
Let's crop 1200 pixels off the left (about 30%), 500 off the top (about 17%), and just a little off the right and bottom.
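Just to check those fractions against the size printed above, plain arithmetic on the crop box used below:
w, h = img.size # (4032, 3024)
print(1200 / w, 500 / h) # ~0.30 off the left, ~0.17 off the top
print((w - 3700) / w, (h - 2900) / h) # ~0.08 off the right, ~0.04 off the bottom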
img_crop = img.crop((1200, 500, 3700, 2900))
plt.imshow(img_crop)
<matplotlib.image.AxesImage at 0x7fd880420710>
# repeat the same preprocessing on the cropped image
img_thumbnail = img_crop.resize(SIZE)
img_array = img_to_array(img_thumbnail)
img_batch = np.expand_dims(img_array, axis=0)
img_processed = resnet50.preprocess_input(img_batch.copy())
# get predictions
predictions = model.predict(img_processed)
decode_predictions(predictions)
[[('n02085620', 'Chihuahua', 0.6342344), ('n02107312', 'miniature_pinscher', 0.11142456), ('n02091032', 'Italian_greyhound', 0.077309206), ('n04162706', 'seat_belt', 0.035910755), ('n02093428', 'American_Staffordshire_terrier', 0.032716673)]]
Evaluation of cropped image¶
The cropped image yields more realistic assessments. Good for a first try.
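Since the resize/convert/predict steps were repeated verbatim for the cropped image, a possible cleanup for next time is to wrap them in a small helper. A sketch of that refactoring (the classify name is mine, not something from Keras):
def classify(pil_img, top=5):
    """Resize a PIL image to SIZE, run it through the resnet50 model, and return decoded predictions."""
    img_array = img_to_array(pil_img.resize(SIZE)) # PIL image -> numpy array at the model's input size
    img_batch = np.expand_dims(img_array, axis=0) # add the batch dimension
    predictions = model.predict(resnet50.preprocess_input(img_batch))
    return decode_predictions(predictions, top=top)[0]

classify(img_crop) # same result as the manual steps above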