Jetson - Self-Driving Toy Car (Part 2)

Computer Vision Sensing & CNN Perception Improvements 🤖🚗

Greg Surma
7 min read · Dec 21, 2020


In today’s article, we are going to improve Jetson’s sensing and perception abilities with Computer Vision and Machine Learning techniques. This will allow our toy car to learn how to handle new situations, going far beyond simple path following. By the end of this article, you’ll see how a self-driving toy car can learn to take correct turns at crossroads, follow the drivable path, and stop when the road ends.

(source: author)

If you haven’t already checked the first part of the series, please take the time to do it now:

Also, feel free to check the corresponding codebase and follow along:

Sensing Improvements

The robustness of a robotic system often depends heavily on its sensing capabilities. In other words, the better a robot senses its surrounding environment, the better it can potentially act upon it.

It’s no different in our self-driving toy car project, and we are going to focus on our most important sensor - the camera.


Currently, the front-facing camera is Jetson’s only source of information about its environment. In Part 1 of the series, it was a 160° wide-angle camera. It got the job done, but we could do better if our robot could see more. One possible approach would be to add more cameras.
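For context, this is roughly how frames from a Jetson CSI camera are typically read in code. The sketch below assumes an OpenCV build with GStreamer support and the standard `nvarguscamerasrc` source; the resolution, framerate, and flip values are illustrative assumptions, not the project’s exact settings.

```python
def gstreamer_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for a Jetson CSI camera.

    Parameter values here are assumptions for illustration.
    """
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink"
    )


if __name__ == "__main__":
    # Requires actual Jetson hardware and OpenCV compiled with GStreamer.
    import cv2

    cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
    ok, frame = cap.read()  # one BGR frame from the wide-angle camera
    cap.release()
```

Adding more cameras would mean instantiating one such capture per sensor and fusing the frames downstream.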

This is, for example, what the sensor suite looks like on Tesla vehicles.