One thing I’d wanted to try ever since coming across the idea 5 or 6 years ago was to run some of the evolution! drawings through a GAN (generative adversarial network) and see what ‘new’ images would result. Well, I came across some Google Colab notebooks that let you run the training there instead of on your own GPU, periodically dumping sample images into a Google Drive so you can see how it’s progressing (the one I used is here: https://colab.research.google.com/github/dvschultz/ml-art-colabs/blob/master/Stylegan2_ada_Custom_Training.ipynb)
I wanted to do some general tests to see how it would go, so I gathered 500 or so of the original crayon images, resized them all to 256×256 pixels, then fed them in and waited, adjusting things here and there by starting over if I felt it was going too far off course. Here are some examples…
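For anyone curious what the prep step looks like, here’s a minimal sketch of resizing a folder of images down to 256×256 with Pillow. The folder names and the center-crop choice are my assumptions, not details from my actual process:

```python
# Sketch of the dataset prep step: square-crop and resize every image
# to 256x256 for StyleGAN2-ADA training. Folder names are placeholders.
from pathlib import Path
from PIL import Image

def prepare(src: Path, dst: Path, size: int = 256) -> int:
    """Center-crop each image in src to a square, resize, save as PNG in dst."""
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(src.glob("*")):
        try:
            img = Image.open(path).convert("RGB")
        except OSError:
            continue  # skip anything that isn't a readable image
        # Center-crop to a square first so the drawings aren't stretched.
        w, h = img.size
        side = min(w, h)
        left, top = (w - side) // 2, (h - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img = img.resize((size, size), Image.LANCZOS)
        img.save(dst / f"{path.stem}.png")
        count += 1
    return count
```

Something like `prepare(Path("crayon_originals"), Path("crayon_256"))` would then produce the training-ready set.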

And some close-ups of a select few that caught my eye.

All in all, not bad! I’ll have to try again later with more ‘ideal’, cleaned-up images (and perhaps at a higher resolution), but for this test the results were very interesting! I’ll try some of the plant images next, which should be fun, or maybe even mix them all together into a larger dataset and see how it responds. I’ll keep you updated!