Using machine learning to generate procedural cities.
May 16, 2019 7:00 PM   Subscribe

I have an idea for generating an infinite city for a game using machine learning, and would like some technical guidance.

So signal-II and I have an idea: a side-scrolling game with an endless, procedurally generated, 2.5D pixelated city. We want the city to randomly morph as you go through it, creating different 'zones'.
I've been thinking about how to accomplish this and thought we could hand draw some examples of buildings, then train a neural network and have it generate interpolations of the different drawings, and by varying which of the building types we're mixing we could achieve the different zones.
I don't have much experience with machine learning, though I understand the basics. I've dabbled in text analysis and feature extraction, evolutionary algorithms, computer vision, etc. I have 12+ years' experience in Python, though mostly for web dev and some spatial analysis. I don't have a degree in comp sci or math beyond first year calculus.
Is there work done that's similar to what I'm proposing? A Python library I can just use? A Jupyter notebook I could study? If not, what keywords should I be researching? Does my idea make sense in the first place, i.e. is there a simpler and well known way to accomplish this?
posted by signal to Computers & Internet (10 answers total) 5 users marked this as a favorite
 
I think the standard answer here would be a generative adversarial network (GAN). Essentially, the "generator network" maps points from a lower-dimensional latent space to synthetic images, while a second "discriminator" network tries to tell the generated images apart from real training examples. Training plays the two against each other until the generator's output is hard to distinguish from the real data, and those convincing images become the output of the network.

Once you find a "good" image (a city in this case), you can slowly walk around the "location" of the city in the latent space of the network to generate "nearby" cities that will slowly morph into other generated cities as you walk "further" away from the generated city. If you use the GAN to generate two cities, you could do a linear walk between them to generate the "in-between" cities.
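As a toy sketch of that latent-space walk: assume you already have a trained generator. The `generator` function below is just a stand-in (a real one would be a trained network), and the 128-dimensional latent size is an arbitrary choice.

```python
import numpy as np

# Stand-in for a trained GAN generator: a real one would map a latent
# vector to an image. The 128-dim latent size is an arbitrary choice.
def generator(z):
    return np.tanh(z)  # pretend "image" derived from the latent code

def interpolate(z_a, z_b, steps=8):
    """Walk linearly from latent point z_a to z_b, yielding one
    latent vector per step (both endpoints included)."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * z_a + t * z_b

rng = np.random.default_rng(0)
z_city_a = rng.standard_normal(128)  # latent code for "city A"
z_city_b = rng.standard_normal(128)  # latent code for "city B"

# The frames morph smoothly from city A's image to city B's.
frames = [generator(z) for z in interpolate(z_city_a, z_city_b)]
```

The same linear walk works on any trained generator; in a game you'd tie `t` to the player's scroll position so the city morphs as you move.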

I think an easier way is to simply look into image-morphing software.
posted by saeculorum at 7:34 PM on May 16, 2019


Here is an example of what I was referring to with the paper referenced.
posted by saeculorum at 7:39 PM on May 16, 2019


A (somewhat) oldie but a goodie: WaveFunctionCollapse: Bitmap & tilemap generation from a single example with the help of ideas from quantum mechanics.

There are Python ports linked at the bottom of the readme that you could probably easily take advantage of. Hopefully this helps!
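Real WFC propagates constraints over a 2-D grid and always collapses the lowest-entropy cell first, but the core idea (learn which tiles may sit next to each other from one example, then only generate output obeying those adjacencies) can be shown in a much-simplified 1-D sketch. The sample strip and tile alphabet below are made up for illustration.

```python
import random

# Hypothetical 1-D sample strip: g = ground, r = road, b = building.
sample = "ggrrbbgg"

# Learn which ordered tile pairs are allowed to be adjacent.
allowed = set(zip(sample, sample[1:]))
tiles = sorted(set(sample))

def collapse(length, seed=0):
    """Generate a strip where every adjacent pair appears in the sample."""
    rng = random.Random(seed)
    out = [rng.choice(tiles)]
    while len(out) < length:
        options = [t for t in tiles if (out[-1], t) in allowed]
        if not options:  # contradiction; real WFC would backtrack or restart
            return None
        out.append(rng.choice(options))
    return "".join(out)

print(collapse(12))
```

The 2-D algorithm adds constraint propagation and entropy-driven cell ordering on top of exactly this adjacency-learning step.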
posted by un petit cadeau at 7:40 PM on May 16, 2019 [1 favorite]


Neural networks are generally used for classification. I'm not sure you would use one for generation directly. You might instead use one to filter cities that you generate on the fly, keeping those with attributes that you like.

Starting from the top: you provide a machine learning toolkit with some training input that says input X most likely matches some category Y.

In the test case of the German post office scanning envelopes, the classification exercise is that some scanned image of a number in an address on an envelope is most likely to be digit N.

If you have Python chops, this post and this thread might be useful for generating (dungeon) maps.

These scripts could be modified from making dungeons to making cities. Hallways become roads, rooms become neighborhoods, etc.

Once you have generated a bunch of maps and decided which ones look like the "best" cities, you might train a neural network on those "best" cities.

Why would you do this?

Scripts can automate the process of making cities, but maybe you've decided that some of the cities that come out of a script look like a "bad" city. A bad city might have a lot of dead-ends or one-way streets. (Or in the case of dungeons, a lot of hallways that go nowhere, etc.)

So the idea is that you make a bunch of cities with your script. Hundreds — thousands, maybe. You classify them (by hand, personally) as good or bad.

You then train a neural network on the examples you've classified as bad cities and good cities. Its error function lets it say that a city with low error is likely to be "good" (and, by extension, a city with high error is "bad").

The next time your game makes a city with your script, you run it through your classifier — your neural network. It then uses its training to tell you if this city is a "good" city, or a "bad" city.

Your game would allow this procedurally-generated map to be used as a "good" city map — one that will have play value. Or your game will remake a new map and run it through the classifier to decide if it is a useful map.
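To make that loop concrete, here's a toy version, assuming each generated city has already been reduced to two numeric features (dead-end count and average block size, both invented for illustration) and hand-labeled. A one-layer logistic regression stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

def fake_city_features(n, good):
    # Fabricated data: "good" cities have fewer dead ends, bigger blocks.
    dead_ends = rng.normal(3 if good else 12, 2, n)
    block_size = rng.normal(8 if good else 3, 1.5, n)
    return np.column_stack([dead_ends, block_size])

# 200 hand-labeled "good" cities (label 1) and 200 "bad" ones (label 0).
X = np.vstack([fake_city_features(200, True), fake_city_features(200, False)])
y = np.array([1] * 200 + [0] * 200)

# Standardize features so plain gradient descent behaves.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

# One-layer logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xs @ w + b)))  # predicted P(good)
    w -= 0.1 * (Xs.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def is_good_city(feats):
    """Classify a new generated city from its (dead_ends, block_size)."""
    z = ((np.asarray(feats) - mu) / sigma) @ w + b
    return 1 / (1 + np.exp(-z)) > 0.5
```

A real Tensorflow model would replace the hand-rolled gradient loop, but the game-side logic is the same: generate a map, call `is_good_city`, and regenerate on a "bad" verdict.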

Tensorflow is a common machine learning toolkit. Maybe start with the MNIST example that I linked and see if that helps clarify what neural networks do.
posted by They sucked his brains out! at 7:48 PM on May 16, 2019 [2 favorites]


I was about to post the WaveFunctionCollapse link too.

The thing with neural ML is that it takes hours (or days) to train a network, and if you don't know exactly what you want to accomplish, it's not going to be easy to experiment. You also have to have lots and lots of example data (side views of city blocks, I suppose).

To make a full game, you'd still want to generate enemies, animations, interactions with the buildings, etc., which, unless you have vast ML chops, is easier in a traditional hand-built procedural framework.
posted by RobotVoodooPower at 7:52 PM on May 16, 2019


You might want to look at some SIGGRAPH papers - they may (or may not) incorporate the kind of machine learning you describe, but procedural generation of cities and maps has been a Thing for at least 20 years (probably longer).

Here’s a link that might help you get started.
posted by doctor tough love at 4:01 AM on May 17, 2019


Your idea sounds neat! I hope you figure out a way to implement it.

I'm not sure neural networks are the best approach. Those work best when you have a whole lot of data to train from. Is your friend prepared to draw 1,000,000 different buildings? Probably not. Maybe you could find an image library of that size you could train from?

No matter what you do you'll need to come up with some internal data representation of a building. Simple image recognition (say, OCR) works with the actual pixels of the source image. But flat images probably won't help you here; you want something more like rectangles or blocks.

The good news is that once you settle on a representation, there are all sorts of generative algorithms you could put to work. I'm thinking something rule-based might work, like an L-system. Or maybe a Markov model. There are lots of techniques that are far simpler than a complex neural network.
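For instance, a tiny L-system can already produce skyline-like strings by repeated rewriting. The alphabet and rules here are completely made up for illustration (B = building seed, t = tower section, g = gap).

```python
# Minimal L-system: apply rewrite rules to every character, repeatedly.
# Alphabet is invented: B = building seed, t = tower section, g = gap.
rules = {
    "B": "tBg",  # a building grows a tower section and leaves a gap
    "t": "tt",   # existing tower sections double each generation
}

def expand(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)  # unknown chars pass through
    return s

print(expand("B", 3))  # → tttttttBggg
```

Mapping each character to a drawn sprite column would turn a string like that into a strip of skyline; swapping rule sets per zone gives you the morphing effect without any training step.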

Finally, if you haven't seen it before, Shamus Young's old series on procedural cities is still worth a look. And you may have seen it before, but did you read the source code? Medieval Fantasy City Generator.
posted by Nelson at 7:19 AM on May 17, 2019


Response by poster: Thanks so much for all the answers, insight and links!
I think we'll go straight procedural after all: the overhead for a GAN seems like a bit much, and since this is in part a learning project for my 11 yo, it shouldn't be so black-boxy.
I am intrigued by the Wave Function Collapse link, though.
posted by signal at 9:49 AM on May 17, 2019


Response by poster: Nelson: I've done some stuff with L-systems before, that might be the way we go.
posted by signal at 9:50 AM on May 17, 2019


You might want to check out the work of Riley Wong - they did something similar (albeit with static images) in this project.
posted by taltalim at 6:02 AM on May 18, 2019


This thread is closed to new comments.