Sunday, March 25, 2018

Advanced platform game level generation using Neural Nets

Basically what I want is to generate a platform game level from a very limited black-and-white tile set, then do a pix2pix or style-transfer pass and have something similar to this as a result,
from an input like this:

Generating the Black and White image

I first made an algorithm to generate the maze; there are many out there. This is a normal tile-based platform game type of map. Then I made an algorithm that generates the lines that break up the boring blocks while basically keeping the same shape. It makes the level look a bit techno and at least a bit more interesting, and it hides the fact that it is a tile-based level. This was a bit harder, but I think it's still pretty basic, so I'll let you figure that one out yourself. The new thing is how I got to use deep learning, using all the great libraries out there like deeplearn.js and p5.
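The post doesn't say which of the many generation algorithms was used, so here is one minimal sketch: a left-to-right column walk that nudges the ground height one tile at a time, which keeps every ledge jumpable. The function name and the seeded PRNG are my own assumptions, not the author's code.

```javascript
// Sketch of a tile-based platform level generator (assumed approach).
// 0 = empty (white), 1 = solid (black).
function generateLevel(width, height, seed = 1) {
  // Tiny deterministic PRNG (mulberry32) so runs are reproducible.
  let s = seed;
  const rand = () => {
    s |= 0; s = (s + 0x6D2B79F5) | 0;
    let t = Math.imul(s ^ (s >>> 15), 1 | s);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };

  const map = Array.from({ length: height }, () => new Array(width).fill(0));

  // Walk left to right, moving the ground up or down at most one tile
  // per column, then fill everything below the ground line with solid.
  let ground = Math.floor(height / 2);
  for (let x = 0; x < width; x++) {
    if (rand() < 0.3) ground += rand() < 0.5 ? -1 : 1;
    ground = Math.max(2, Math.min(height - 2, ground));
    for (let y = ground; y < height; y++) map[y][x] = 1;
  }
  return map;
}
```

Rendering each 1 as a black square and each 0 as white gives exactly the kind of plain black-and-white input image the style transfer starts from.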

Creating the environment

I drew the first image myself, using neural nets to generate machine parts for this sci-fi type game from the original black-and-white image. The technique used is often called style transfer and is based on Google's original Deep Dream algorithm.
I used pictures of the insides of watches and a basic machine texture that I found after some experimenting. It basically turns ANYTHING into a machine.
Having a tile as a base gives a certain predictability to both the level topology and the thinnest possible lines (or the smallest details in the final image).
I got a lot of different outputs:


Then, by incepting these images and using them (or mixed versions of them) as inputs for yet another style transfer, I got some even weirder, less mechanical-looking ones.

Then I used Photoshop to mix all the nice results together.

So that image is part Photoshop and part neural net.

Rendering in sections of 256x256

I kind of see this as an environment, or an atmosphere. You could draw the same input in many ways to get the different environments needed in a typical game map.
For now, I limit myself to one environment. Once I have this to my liking in Photoshop, I export the styled level and do a style transfer on the next section. Pix2pix can only do small pictures well, so I limit myself to 256x256 sections.
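Splitting the big level into 256x256 sections is straightforward grid math; a minimal sketch of it could look like this (the function name is mine, and clamping the last row/column to the image edge is an assumption about how partial sections are handled):

```javascript
// Compute the crop rectangles for feeding a large level image to pix2pix
// in 256x256 sections, clamping the final row/column to the image bounds.
const SECTION = 256;
function sectionOrigins(width, height, size = SECTION) {
  const origins = [];
  for (let y = 0; y < height; y += size) {
    for (let x = 0; x < width; x += size) {
      origins.push({
        x, y,
        w: Math.min(size, width - x),   // partial section at right edge
        h: Math.min(size, height - y),  // partial section at bottom edge
      });
    }
  }
  return origins;
}
```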

Even though this is better than I ever expected (given that I lowered the size of the test file even further for speed), the sections don't exactly line up and you get a little line in between. The answer to this is to render a section in between.
This in-between section doesn't really line up with the other two sections either, but it allows me to fade out the edges and create a seamless huge map.
This will also make it possible in the future to blend styles together seamlessly. (I hope)
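Fading out the edges of the in-between section is a simple linear cross-fade over the overlap. Here is one way to sketch it (the `feather` width of 32 pixels and the grayscale-row framing are my assumptions, not values from the post):

```javascript
// Weight for a pixel in the in-between section: ramps 0 -> 1 over
// `feather` pixels at each side, so its edges fade out over the seam
// instead of producing a hard line.
function edgeWeight(x, width, feather = 32) {
  const fromLeft = (x + 0.5) / feather;
  const fromRight = (width - x - 0.5) / feather;
  return Math.max(0, Math.min(1, fromLeft, fromRight));
}

// Composite one grayscale row: base pixels cross-faded with the
// in-between section's pixels using the edge weight.
function blendRow(baseRow, midRow, feather = 32) {
  return baseRow.map((b, x) => {
    const w = edgeWeight(x, baseRow.length, feather);
    return Math.round(b * (1 - w) + midRow[x] * w);
  });
}
```

In a canvas-based implementation the same effect can be had by drawing the in-between section with a horizontal alpha gradient mask, which is why the two styled renders blend into one seamless map.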
Thought I'd share how I'm trying to make even better game art with neural nets.

Memory conservation

Now all of this can be done automatically in the player's browser, so the map that needs to be transferred to the player is just an ordinary tilemap, which could even be compressed as a GIF.
I see MANY advantages in this technique.