CNN Explainer - Interpreting Convolutional Neural Networks (3/N)
Visualizing Boosted Convolutional Features

In today’s article, we are going to investigate what Convolutional Neural Networks (CNNs) learn during an object classification task. Visualizing a CNN’s features lets us see what, from the network’s point of view, makes a thing a thing. By the end of this article, you will be able to visualize the hierarchical features that reflect how CNNs ‘understand’ images.
In other words - if you are curious about what’s in the image below, keep reading!

This is the third part of the CNN Explainer series. If you haven’t checked out the previous parts yet, feel free to do so now.
The Essence of Deep Learning in Computer Vision
Deep neural networks in computer vision are usually trained by exposing them to a vast number of annotated visual examples, with the idea that the network will implicitly learn whatever is essential for making an accurate prediction, without being explicitly told what to look for.
For example, when training a mug detector, we would gather a dataset of various mugs and hope that, during training, the CNN implicitly learns what makes a mug a mug, without ever being told to look for a specific shape.
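A minimal sketch of this idea in PyTorch is shown below. The dataset path, folder layout, and the two class names are hypothetical; the point is only that the supervision signal is nothing more than (image, label) pairs - nowhere do we describe a mug’s shape or appearance.

```python
# Minimal sketch: training a CNN classifier from labeled images only.
# The data directory and class folders ("mug", "not_mug") are hypothetical.
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: data/train/mug/*.jpg and data/train/not_mug/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None, num_classes=2)  # mug vs. not mug
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        # The only supervision is the class label - no hand-crafted features.
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Whatever notion of “mug-ness” the network ends up with lives in its learned convolutional filters - and those are exactly what we will try to visualize.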
The notion of identity is a fascinating philosophical problem, investigated over the ages by various thinkers who, at the end of the day, were all trying to answer the following question:
What makes a thing…