Jetson - Self-Driving Toy Car (Part: 2)
In today’s article, we are going to improve Jetson’s sensing and perception abilities with Computer Vision and Machine Learning techniques. This will allow our toy car to learn how to handle new situations, going far beyond simple path following. By the end of this article, you’ll see how a self-driving toy car can learn to take correct turns at crossroads, follow the drivable path, and stop when the road ends.
If you haven’t already checked the first part of the series, please take the time to do it now:
Jetson - Self-Driving Toy Car (Part: 1)
Car Assembly, System Design and Basic AI Autopilot Motion 🤖🚗
Also, feel free to check the corresponding codebase and follow along:
Jetson is a self-driving toy car project. It contains an end-to-end CNN vision system built in PyTorch. Check the…
The robustness of a robotic system often depends heavily on its sensing capabilities. In other words, the better a robot senses its surrounding environment, the better it can potentially act upon it.
It’s no different in our self-driving toy car project, and we are going to focus on our most important sensor - the camera.
Currently, the front-facing camera is Jetson’s only source of information about its environment. In Part 1 of the series, it was a 160° wide-angle camera. It got the job done, but we could do better if our robot could see more. One possible approach to this problem is to add more cameras.
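As a reminder of how Jetson reads that front-facing camera: on a Jetson board, CSI cameras are typically accessed through a GStreamer pipeline. Below is a minimal sketch of building such a pipeline string; the `gstreamer_pipeline` helper name, the resolutions, and the frame rate are illustrative assumptions, not the exact values used in this project.

```python
# Sketch: build a GStreamer pipeline string for NVIDIA's CSI camera source.
# All numeric parameters here are assumed defaults, not the project's values.

def gstreamer_pipeline(capture_width=816, capture_height=616,
                       display_width=224, display_height=224, fps=21):
    """Return a pipeline string that captures from the CSI camera and
    converts frames to BGR so OpenCV can consume them."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, "
        f"height={capture_height}, framerate={fps}/1 ! "
        "nvvidconv ! "
        f"video/x-raw, width={display_width}, height={display_height}, "
        "format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# On the Jetson itself, you would open the camera with OpenCV, e.g.:
# import cv2
# cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()  # frame is an HxWx3 BGR numpy array
```

The actual capture call is left commented out because it only works on Jetson hardware with the camera attached.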
This is, for example, how the sensor suite looks in Tesla vehicles.