Atari - Solving Games with AI 🤖 (Part 1: Reinforcement Learning)
Demystifying Double Deep Q-Learning

In today’s article, I am going to show you how to implement one of the most groundbreaking Reinforcement Learning algorithms: DDQN (Double Deep Q-Network). By the end of this post, you will be able to create an agent that successfully plays ‘any’ game using only pixel inputs.


Table of Contents
- Purpose
- Introduction to Reinforcement Learning
- Data preparation
- Improvements to DQN
- Performance
- What’s next?
Purpose
In the pursuit of AGI (Artificial General Intelligence), we need to widen the domains in which our agents excel. Creating a program that solves a single game is no longer a challenge, and this holds true even for relatively complex games with enormous search spaces, like chess (Deep Blue) or Go (AlphaGo). The real challenge is to create a single agent that can solve multiple tasks.
We have a prototype of this: the human brain. We can tie our shoelaces, ride bicycles, and do physics with the same architecture. So we know this is possible.
— Demis Hassabis, DeepMind’s CEO
Let’s create an agent that learns by mimicking the human brain and generalizes enough to play multiple distinct games.
Introduction to Reinforcement Learning
Before we proceed with solving Atari games, I recommend checking out my previous intro-level article on Reinforcement Learning, where I covered the basics of gym and DQN.
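As a quick preview of the core idea this series builds toward, here is a minimal NumPy sketch (function and variable names are my own, purely illustrative) of how the Double DQN target differs from the vanilla DQN target: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of plain DQN.

```python
import numpy as np

def dqn_target(rewards, next_q_target, dones, gamma=0.99):
    # Vanilla DQN: the target network both selects and evaluates
    # the next action, which tends to overestimate Q-values.
    return rewards + gamma * (1.0 - dones) * next_q_target.max(axis=1)

def ddqn_target(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    # Double DQN: the online network picks the greedy next action,
    # but the target network supplies its value estimate.
    best_actions = next_q_online.argmax(axis=1)
    evaluated = next_q_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated

# Toy batch of one transition with two possible actions:
rewards = np.array([1.0])
dones = np.array([0.0])
next_q_online = np.array([[2.0, 0.0]])   # online net prefers action 0
next_q_target = np.array([[0.5, 3.0]])   # target net values differ

print(dqn_target(rewards, next_q_target, dones))                  # uses max of target net
print(ddqn_target(rewards, next_q_online, next_q_target, dones))  # evaluates action 0 instead
```

Notice that the two targets disagree whenever the online and target networks rank actions differently; that decoupling is the whole trick behind "Double" Q-Learning, and we will wire it into a full training loop later in this series.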