CNN Explainer - Interpreting Convolutional Neural Networks (2/N)

Visualizing Gradient Weighted Class Activations with GradCAM

Greg Surma
6 min read · Jan 23, 2021

In today’s article, we are going to visualize gradient-weighted class activations. It may sound confusing at first, but by the end of this article, you will be able to ‘ask’ Convolutional Neural Networks (CNNs) for visual explanations of their predictions. In other words, you will be able to highlight the image regions responsible for predicting a given class.

This is the second part of the CNN Explainer series. If you haven’t checked the first part yet, feel free to do it now.

Why Does Interpretability Matter?

Let’s consider the following input image:

A simple CNN classifier would probably predict that this is a submarine, and our ResNet18 pretrained on ImageNet was no different.

It predicts a submarine with ~95% confidence and an aircraft carrier with ~5%.

# submarine ~95%, aircraft carrier ~5%…
