

Thanks to the spriters for providing the training data: Aegius (Zach), who gave ~30 portraits from his project (Necrosis Among the Living).

So yes, there needs to be some post-processing involved, but the generated images are in the same style as FE GBA portraits. There is no guarantee that the colors will fall within the GBA range either, but many of the images can be made suitable for ROM hacks with a simple resize and a custom indexing of the colors down to 16.
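As an illustration only (not the exact pipeline used here), that kind of resize plus 16-color indexing can be sketched with Pillow. The filenames, the 96x80 target size, and the median-cut quantizer below are all assumptions:

```python
from PIL import Image

# Minimal post-processing sketch (assumes Pillow; filenames and the 96x80 target
# are placeholders -- you may prefer to crop the 128x128 output before resizing).
img = Image.open("generated_portrait.png").convert("RGB")

# Scale the generated image toward the GBA portrait frame.
img = img.resize((96, 80), Image.LANCZOS)

# Index the colors down to a 16-color palette (median cut is one option).
img = img.quantize(colors=16, method=Image.MEDIANCUT)

# Optionally snap palette entries to GBA-representable levels (5 bits per channel).
palette = img.getpalette()[: 16 * 3]
img.putpalette([(v // 8) * 8 for v in palette])

img.save("portrait_16colors.png")
```

The palette snapping at the end is just a convenience: the GBA stores colors at 5 bits per channel, so rounding each channel to a multiple of 8 keeps the indexed palette representable on hardware.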

Just because of how StyleGAN/StyleGAN2 works, the input and output images have to be squares with height and width a power of 2 (think 32x32, 64x64). Since the portraits were 96x80, I resized them to 124x124. Hence, the output images will be 128x128, so you may have to crop and resize them down.

How many different images can it generate? Well, theoretically there are 5 models and each model can generate 2^32 images, which is 5*2^32 = 21,474,836,480 (a short sketch of this arithmetic follows below). Practically, there is no way of knowing, because a lot of the images are similar or just too bad (helmet heads or double-facing heads).

Does anything get downloaded on my computer? All operations happen on the virtual machine offered by Google Colab.
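To make the 5*2^32 figure above concrete: each model is driven by a 32-bit seed, and a given seed deterministically maps to one latent vector, hence one image. This is only an illustrative sketch; the 512-dimensional latent is the usual StyleGAN2 default (assumed here), and `latent_from_seed` is a made-up helper name, not something from the notebook:

```python
import numpy as np

NUM_MODELS = 5
SEEDS_PER_MODEL = 2 ** 32            # one image per 32-bit seed
print(NUM_MODELS * SEEDS_PER_MODEL)  # 21474836480

def latent_from_seed(seed: int, z_dim: int = 512) -> np.ndarray:
    """Map a seed to a latent vector; the same seed always yields the same portrait."""
    rng = np.random.RandomState(seed)  # accepts seeds in [0, 2**32 - 1]
    return rng.randn(1, z_dim)
```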

Click on this link and follow the instructions. Alternatively, you could do it the long way: click on the file Demo_FE_GBA_Portraits.ipynb here on the GitHub repo and then press the Open in Colab button when it shows up. If the steps are slightly confusing, check out this tutorial video.

StyleGAN2 requires a CUDA-enabled GPU and I don't have one. Plus, CUDA GPU hosting is costly ($0.9/hr on AWS AFAIK), so I'd rather run it on a free service like Colab, which works 24/7 for free unless you overuse/abuse it.
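If you want to confirm that your Colab session actually got a CUDA GPU before running the notebook (Runtime -> Change runtime type -> GPU), a quick check along these lines works; it assumes the TensorFlow runtime that the official StyleGAN2 code targets:

```python
# Run this in a Colab cell; it only verifies that a CUDA device is visible.
import tensorflow as tf

print("CUDA GPU available:", tf.test.is_gpu_available(cuda_only=True))
```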
