Now we can load the image and work with it:
- import matplotlib.pyplot as plt
- from keras.preprocessing.image import load_img
- # load an image from file
- image = load_img('car.jpeg', target_size=(224, 224))
- # display the original image
- plt.imshow(image)
- plt.title('ORIGINAL IMAGE')
This involves three steps:
- Preprocess the image
- Compute the class probabilities for the different occluded regions
- Plot the heatmap
- import numpy as np
- from keras.preprocessing.image import img_to_array
- from keras.applications.vgg16 import preprocess_input
- # convert the image pixels to a numpy array
- image = img_to_array(image)
- # reshape the data into a single-sample batch for the model
- image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
- # prepare the image for the VGG model
- image = preprocess_input(image)
- # predict the probability across all output classes
- # (model is the pre-trained VGG16 model loaded earlier)
- yhat = model.predict(image)
- temp = image[0]
- print(temp.shape)
- heatmap = np.zeros((224, 224))
- correct_class = np.argmax(yhat)
- # iter_occlusion (defined earlier in the article) yields the position of each
- # occluded patch together with the occluded copy of the image;
- # a sketch of such an iterator is shown after this block
- for n, (x, y, image) in enumerate(iter_occlusion(temp, 14)):
-     heatmap[x:x+14, y:y+14] = model.predict(image.reshape((1, image.shape[0], image.shape[1], image.shape[2])))[0][correct_class]
-     print(x, y, n, ' - ', image.shape)
- heatmap1 = heatmap / heatmap.max()
- plt.imshow(heatmap)
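For reference, the loop above relies on the iter_occlusion helper defined earlier in the article, which yields the top-left corner of each occluded patch together with an occluded copy of the image. A minimal sketch of what such an iterator might look like is given below; the patch value (zero, which is roughly the mean colour after preprocess_input) and the non-overlapping step size are assumptions, not the article's exact implementation:
- import numpy as np
- 
- def iter_occlusion(image, size=14):
-     # slide a size x size patch over the image in non-overlapping steps
-     # and yield (x, y, occluded copy) for each position
-     for x in range(0, image.shape[0], size):
-         for y in range(0, image.shape[1], size):
-             occluded = image.copy()
-             # zero out the patch (assumption: zero ~ mean colour after preprocess_input)
-             occluded[x:x+size, y:y+size, :] = 0.0
-             yield x, y, occluded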
Interesting, isn't it? Next, we use the normalised heatmap probabilities to create a mask and plot it:
- import skimage.io as io
- # create a mask from the normalised heatmap probabilities
- mask = heatmap1 < 0.85
- mask1 = mask * 256
- mask = mask.astype(int)
- io.imshow(mask, cmap='gray')
Finally, the following code applies the mask to the input image:
- import cv2
- # read the image and convert it from OpenCV's BGR order to RGB
- image = cv2.imread('car.jpeg')
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- # resize the image to the model's input dimensions
- image = cv2.resize(image, (224, 224))
- # bitwise_and expects an 8-bit mask
- mask = mask.astype('uint8')
- # apply the mask to the image, keeping only the unmasked regions
- final = cv2.bitwise_and(image, image, mask=mask)
- # plot the final image (already in RGB, so no further colour conversion is needed)
- plt.imshow(final)
Can you guess why only some parts are visible? Exactly: only the regions that contribute significantly to the probability of the predicted class remain visible. In a nutshell, that is what an occlusion map is about.
Saliency Maps: Visualising the Contribution of Input Features
Saliency maps are another gradient-based visualisation technique. They were introduced in the paper Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.
A saliency map measures how much each pixel of the input affects the model's output, which involves computing the gradient of the output with respect to every pixel of the input image.
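To make this concrete, the sketch below computes such a gradient-based saliency map, assuming a TensorFlow 2 backend (so the Keras model can run eagerly inside tf.GradientTape) and reloading the VGG16 model and the car image used earlier; it illustrates the general technique rather than the article's own implementation:
- import matplotlib.pyplot as plt
- import tensorflow as tf
- from keras.applications.vgg16 import VGG16, preprocess_input
- from keras.preprocessing.image import load_img, img_to_array
- 
- # prepare the input batch the same way as in the occlusion example
- model = VGG16()
- img = img_to_array(load_img('car.jpeg', target_size=(224, 224)))
- batch = tf.convert_to_tensor(preprocess_input(img.reshape((1, 224, 224, 3))), dtype=tf.float32)
- 
- with tf.GradientTape() as tape:
-     tape.watch(batch)
-     predictions = model(batch)
-     # score of the winning class for this single-image batch
-     top_score = tf.reduce_max(predictions[0])
- 
- # gradient of the winning class score with respect to every input pixel
- grads = tape.gradient(top_score, batch)
- # collapse the colour channels and rescale to [0, 1] for display
- saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0].numpy()
- saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
- plt.imshow(saliency, cmap='hot')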