This is an agent trained to play Snake using my implementation of NEAT (NeuroEvolution of Augmenting Topologies), the algorithm described in Kenneth O. Stanley and Risto Miikkulainen's paper "Evolving Neural Networks through Augmenting Topologies". NEAT is a neuroevolution method that evolves both the weights and the structure of the player's neural network over the course of many generations, so the player gradually develops its own strategies to increase its fitness.
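To illustrate what "evolving both the weights and structure" means, here is a minimal sketch of the two core NEAT mutations, weight perturbation and the add-node structural mutation. All names here are hypothetical and simplified for illustration; this is not the code used in this repository.

```python
import random

class ConnectionGene:
    """A weighted edge; `innovation` is NEAT's global historical marker."""
    def __init__(self, in_node, out_node, weight, innovation, enabled=True):
        self.in_node = in_node
        self.out_node = out_node
        self.weight = weight
        self.innovation = innovation
        self.enabled = enabled

class Genome:
    """Minimal NEAT-style genome: node ids plus a list of connection genes."""
    def __init__(self, num_inputs, num_outputs):
        self.nodes = list(range(num_inputs + num_outputs))
        self.connections = []

def mutate_weights(genome, rate=0.8, step=0.5):
    """Perturb each connection weight with probability `rate` (weight mutation)."""
    for conn in genome.connections:
        if random.random() < rate:
            conn.weight += random.uniform(-step, step)

def mutate_add_node(genome, conn, next_node_id, innovation_counter):
    """Split an existing connection with a new hidden node (structural mutation).

    The old connection is disabled and two new genes replace it. Setting the
    incoming weight to 1.0 and the outgoing weight to the old weight keeps
    the network's behavior unchanged at the moment of mutation.
    """
    conn.enabled = False
    genome.nodes.append(next_node_id)
    genome.connections.append(
        ConnectionGene(conn.in_node, next_node_id, 1.0, innovation_counter))
    genome.connections.append(
        ConnectionGene(next_node_id, conn.out_node, conn.weight,
                       innovation_counter + 1))
    return innovation_counter + 2
```

In full NEAT, innovation numbers are tracked globally so that genes created by the same structural mutation can be aligned during crossover, which is what lets networks with different topologies be recombined.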
Here is an example progression of the player's strategy:
Generation 0:
Generation 100:
Generation 200:
(With more input information and a larger population size, the player can develop more complex strategies, but the training time increases significantly.)
To run the evolution:

```
python evolve_snake.py
```


