This project is a continuation of my previous article, in which I explained how to use CycleGAN for image style transfer and applied it to make Fortnite graphics look like PUBG.

CycleGAN is a type of generative adversarial network capable of mimicking the visual style of one image and transferring it onto another. We can use it to make a game’s graphics look like those of another game, or like the real world.
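To make that a little more concrete, here is a minimal PyTorch-style sketch of the cycle-consistency idea at the heart of CycleGAN. The tiny `G_AB` and `G_BA` networks below are stand-in placeholders (not the ResNet generators used in the actual paper), so treat this as an illustration of the loss term rather than a working implementation.

```python
import torch
import torch.nn as nn

# Placeholder generators; a real CycleGAN uses ResNet-based generators.
G_AB = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # game -> real world
G_BA = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))  # real world -> game

l1 = nn.L1Loss()

def cycle_consistency_loss(real_a, real_b, lambda_cyc=10.0):
    """Translate each image to the other domain and back again,
    then penalise the difference from the original image."""
    fake_b = G_AB(real_a)   # game screenshot translated to the real-world domain
    rec_a = G_BA(fake_b)    # ... and translated back to the game domain
    fake_a = G_BA(real_b)   # real-world frame translated to the game domain
    rec_b = G_AB(fake_a)    # ... and translated back to the real-world domain
    return lambda_cyc * (l1(rec_a, real_a) + l1(rec_b, real_b))
```

The full CycleGAN objective adds adversarial losses from two discriminators on top of this cycle term; it is the cycle term that lets the model learn from unpaired images.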

In this article, I wanted to share some more results from the same CycleGAN algorithm that I covered in my previous work. First, I’ll try to improve the GTA 5 graphics by making them look like the real world. Next, I’ll cover how we can achieve similar photo-realistic results without having to render highly detailed GTA graphics in the first place.

For the first task, the source domain consists of screenshots of the game that we want to convert into something photo-realistic. The target domain comes from the Cityscapes dataset of real-world street scenes, which represents the look we want our game to match.
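For reference, here is roughly how the two unpaired image collections could be fed to the model. This is a minimal sketch that assumes the GTA screenshots and the real-world frames simply sit in two folders of PNG files; the folder layout and file format are my assumptions, not part of the original setup.

```python
import random
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class UnpairedDataset(Dataset):
    """Return one GTA screenshot and one random real-world frame per item;
    CycleGAN never requires the two images to correspond to each other."""

    def __init__(self, gta_dir, city_dir, size=256):
        self.gta = sorted(Path(gta_dir).glob("*.png"))
        self.city = sorted(Path(city_dir).glob("*.png"))
        self.tf = transforms.Compose([
            transforms.Resize(size),
            transforms.CenterCrop(size),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.gta)

    def __getitem__(self, idx):
        game = Image.open(self.gta[idx]).convert("RGB")
        real = Image.open(random.choice(self.city)).convert("RGB")
        return self.tf(game), self.tf(real)
```

A real CycleGAN data pipeline would also normalize the tensors to [-1, 1] and add random flips and crops for augmentation, but the key point is simply that the two domains never need paired images.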

CycleGAN Results

Based on approximately three days of training spanning about 100 epochs, the CycleGAN model does a great job of adapting GTA to the real-world domain. I really like that small details are not lost in the translation and that the image retains its sharpness even at such a low resolution.

The main downside is that this neural network turned out to be a bit brand-obsessed: it hallucinated Mercedes logos all over the place, spoiling an otherwise near-perfect conversion from GTA to the real world. (This happens because the Cityscapes footage was recorded from a Mercedes, so its hood emblem shows up in the target-domain frames.)

How to get similar photorealistic graphics with less effort

While this approach may sound very promising for improving game graphics, I don’t think its real potential lies in following this pipeline. By that I mean it seems impractical to render such a detailed image first and then convert it into something else.

Wouldn’t it be better to synthesize an image with the same quality, but with far less time and effort in designing the game in the first place? I think the real potential lies in rendering objects with less detail and having the neural net synthesize the final image from this rendering.

So, based on the semantic labels available in the Cityscapes dataset, I segmented the objects in the GTA screenshots, giving us a much lower-detail representation of the graphics. Think of it as a game render in which objects such as roads, cars, houses, and the sky are not modeled in detail. This semantic map, rather than a highly detailed screenshot from the game, will serve as the input to our image style transfer model.
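As an illustration of what such a low-detail input looks like in code, here is a hedged sketch that turns an array of per-pixel class ids into a flat, colour-coded semantic map. The class ids and colours shown are just a few illustrative Cityscapes-style entries, not the full label set, and the label array would in practice come from a segmentation model rather than hand annotation, as described below.

```python
import numpy as np
from PIL import Image

# A few illustrative Cityscapes-style classes and display colours;
# the full dataset defines many more (road, sidewalk, building, sky, ...).
PALETTE = {
    0: (128, 64, 128),   # road
    1: (70, 70, 70),     # building
    2: (0, 0, 142),      # car
    3: (70, 130, 180),   # sky
}

def labels_to_semantic_map(label_ids: np.ndarray) -> Image.Image:
    """Turn an (H, W) array of per-pixel class ids into the flat,
    colour-coded image that replaces the detailed game render."""
    h, w = label_ids.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for class_id, colour in PALETTE.items():
        rgb[label_ids == class_id] = colour
    return Image.fromarray(rgb)
```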

Let’s see what quality of final images can be generated from such low detail semantic maps using CycleGAN.

Image synthesis results from semantic maps

Here are some examples of GTA graphics recreated from semantic maps. Note that I did not make these maps by hand; that looked really tedious, so I let another CycleGAN model do it (one trained to perform image segmentation using the Cityscapes dataset).

It looks like a good conversion from a distance, but on closer inspection it’s clear that the image is fake and lacks fine detail.

Now, these results are at 256p resolution and were generated on a GPU with 8 GB of memory. However, the authors of the original paper showed that it is possible to produce far more detailed 2048 x 1024 images using a GPU with more than 24 GB of memory.

That work uses pix2pixHD, a supervised counterpart of CycleGAN trained to perform a similar task. And boy, do those fake images look good!

Conclusion

GANs have great potential to change how the entertainment industry produces content going forward. They can produce results comparable to, or even better than, human-made content in a fraction of the time.

The same applies to the gaming industry. I’m sure that within a few years this technology will revolutionize how game graphics are generated; it is far easier to imitate the real world than to recreate everything from scratch.

Once we achieve that, it will be much faster to roll out new games as well. Exciting times with these advances in Deep Learning!
