CNN Explainer - Interpreting Convolutional Neural Networks (2/N)

Visualizing Gradient Weighted Class Activations with GradCAM

In today’s article, we are going to visualize gradient weighted class activations. It may sound confusing at first, but at the end of this article, you will be able to ‘ask’ Convolutional Neural Networks (CNNs) for visual explanations of their predictions. In other words, you will be able to highlight image regions responsible for predicting a given class.

This is the second part of the CNN Explainer series. If you haven't read the first part yet, feel free to do so now.

Why Does Interpretability Matter?

Let’s consider the following input image:

A simple CNN classifier would probably predict that this is a submarine, and our ResNet18 pretrained on ImageNet was no different.

It’s a submarine with 95% confidence and an aircraft carrier with 5%.

# submarine ~95%, aircraft carrier ~5%
torch.return_types.topk(
values=tensor([[0.9504, 0.0472]], grad_fn=<TopkBackward>),
indices=tensor([[833, 403]]))
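
For reference, here is a minimal sketch of how such a prediction could be obtained with an ImageNet-pretrained ResNet18 from torchvision; the image path and the exact preprocessing pipeline below are assumptions for illustration, not taken from the article.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load an ImageNet-pretrained ResNet18 and switch to evaluation mode
model = models.resnet18(pretrained=True)
model.eval()

# Standard ImageNet preprocessing (224x224 center crop, ImageNet mean/std)
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "submarine.jpg" is a placeholder path for the input image
img = Image.open("submarine.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # shape: [1, 3, 224, 224]

# Softmax over the logits gives class probabilities; topk returns the
# two most confident classes and their ImageNet indices (833, 403)
probs = torch.softmax(model(batch), dim=1)
print(probs.topk(2))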

It’s hard to argue with that result, because we can see that the image above contains both a submarine and an aircraft carrier, but that’s it: CNNs don’t give us more information out of the box.

But what if we are more inquisitive and need more information?

Trying to localize the submarine and the aircraft carrier sounds like a good start; that should bring us closer to seeing the ‘full picture’.

GradCAM

One approach to highlighting class activations (and thus showing us class localizations) is GradCAM, which was introduced in 2016 by Selvaraju et al.

The idea behind GradCAM is simple yet powerful, but to fully understand it, we first need to review how feedforward neural networks work.
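
Before we get there, here is a minimal sketch of what the gradient-weighted combination could look like in PyTorch, using hooks on ResNet18’s last convolutional block; the hook setup and the gradcam helper below are illustrative assumptions, not the article’s implementation.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Minimal GradCAM sketch for ResNet18's last conv block ("layer4");
# the hook wiring and helper are illustrative, not the article's code.
model = models.resnet18(pretrained=True)
model.eval()

activations, gradients = {}, {}

def forward_hook(module, inp, out):
    activations["value"] = out          # feature maps: [1, 512, 7, 7]

def backward_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]    # gradients w.r.t. the feature maps

model.layer4.register_forward_hook(forward_hook)
model.layer4.register_full_backward_hook(backward_hook)

def gradcam(batch, class_idx):
    logits = model(batch)
    model.zero_grad()
    logits[0, class_idx].backward()     # gradient of the target class score

    # Global-average-pool the gradients to get one weight per channel,
    # combine the feature maps with those weights, and keep positive evidence
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))

    # Upsample to the input resolution and normalize to [0, 1] for overlaying
    cam = F.interpolate(cam, size=batch.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# e.g. heatmap = gradcam(batch, 833)  # 833 = submarine in ImageNet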
