I don't mean to entirely disparage Perlin noise. It's an incredibly useful tool for procedural generation, and it has provided an easy entry into the field for many people -- myself included. But at the same time it's very misleading, because it suggests that procedural generation is much easier than it really is. Most of the weekly “procedural generation system” postings on /r/proceduralgeneration disappear without a trace when their authors discover that the next step in procedural generation is much harder. The truth is that Perlin noise is a kind of happy accident. It does work very well for generating interesting landscapes, but not for any systematic or repeatable reasons.
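Part of what makes noise-based terrain such an easy entry point is how little code it takes. Here's a minimal sketch of the idea -- this uses value noise, a simpler cousin of Perlin noise, with a hand-rolled lattice hash rather than any particular library, but the octave-summing structure is the same:

```python
import math

def value_noise_2d(x, y, seed=0):
    """Smoothly interpolated noise in [0, 1] from pseudo-random lattice values."""
    def lattice(ix, iy):
        # deterministic hash of an integer lattice point -> value in [0, 1)
        h = (ix * 374761393 + iy * 668265263 + seed * 1013904223) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return ((h ^ (h >> 16)) % 10_000) / 10_000
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # smoothstep fade so the interpolation has no visible grid creases
    sx = fx * fx * (3 - 2 * fx)
    sy = fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) + sx * (lattice(ix + 1, iy) - lattice(ix, iy))
    bot = lattice(ix, iy + 1) + sx * (lattice(ix + 1, iy + 1) - lattice(ix, iy + 1))
    return top + sy * (bot - top)

def heightmap(width, height, octaves=4, base_freq=0.05):
    """Sum octaves of noise: each octave doubles frequency, halves amplitude."""
    grid = [[0.0] * width for _ in range(height)]
    for row in range(height):
        for col in range(width):
            h, amp, freq = 0.0, 1.0, base_freq
            for _ in range(octaves):
                h += amp * value_noise_2d(col * freq, row * freq)
                amp *= 0.5
                freq *= 2.0
            grid[row][col] = h
    return grid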

Mike Cook recently posted a tutorial contrasting Possibility Space versus Generative Space. If you're creating a generative system to create X, then the Possibility Space is all the possible Xs while the Generative Space is all the Xs your system can actually generate. Mike illustrates the idea with this picture, using the example of Minecraft:

In **Dragons Abound**, the Possibility Space includes every possible map -- including many degenerate cases, such as a map that is all ocean except for one small island. The Generative Space is the smaller space of all the maps that **Dragons Abound** can *actually* generate. Because of various rules it incorporates, **Dragons Abound** cannot generate a map that is all ocean except for one small island, so that map is not in the Generative Space for **Dragons Abound**.

Possibility Space versus Generative Space is an interesting and useful way to think about procedural generation, and I encourage you to go read Mike's posting, which explains the idea in more detail and -- bonus -- has some neat interactive toys to play with.

In addition to the Possibility Space and the Generative Space, another useful space to think about is “Interesting Space.” If we're creating X, then this is the space of all *interesting* Xs. So in the case of **Dragons Abound**, this is the space of all *interesting* fantasy maps. Interesting Space is a subset of Possibility Space and hopefully intersects with Generative Space -- for example, I hope that **Dragons Abound** generates at least some maps that are interesting.

Before I go any further, I'm happy to concede that “interestingness” is vague and subjective. What I find interesting and what you find interesting are likely not identical. I don't even always know what *I* find interesting -- I often see something new and find it interesting, even though I wouldn't have thought so before I saw it. Even worse, exposure to something interesting tends to make it not interesting. Once you see an interesting map feature on twenty consecutive maps, it's not as interesting any more. So “interestingness” is not easy to pin down. I'll have more to say about what makes something interesting in the future (and in the past I've had something to say about the related idea of Creativity Space), but for now that's not really important. All we need to agree upon is that some things are more interesting than other things, and that the more interesting things are in the “Interesting Space” and the less interesting things are not.

So how would we illustrate Interesting Space the way Mike illustrated Possibility Space and Generative Space? As I suggested above, Interesting Space is a subset of Possibility Space that (hopefully) intersects with our Generative Space:

A good generator has a lot of overlap with Interesting Space; a bad generator has little overlap.

These diagrams are only a metaphor, but within the limits of the metaphor, I want to ask this question: Do I have the *shapes* of the different spaces right? The shape of Possibility Space doesn't really matter; it just needs to contain the other two. The shapes of the Generative Space and the Interesting Space are more interesting (sic) questions. I've drawn them both as single, compact shapes within the Possibility Space. Is that reasonable?

I think drawing the Generative Space as a compact ellipse is a good choice for many procedural generators, and particularly for something like Perlin noise. This shape implies that the generator creates all the solutions within a single well-defined and compact space. If the generator can create X then it can usually create all of X's neighbors. There aren't usually any big “holes” or weird distortions in the Generative Space.

There are a couple of reasons that Generative Spaces tend to look like this. First, on a practical level, the generator is a small amount of code relative to the Possibility Space. At this moment, **Dragons Abound** is about 28K source lines of code. That's a lot by comparison to many other procedural map generators -- maybe it's the biggest ever -- but it's still very, very tiny in comparison to the Possibility Space of all fantasy maps. So no matter how clever the **Dragons Abound** code is, its Generative Space is only going to take up a small part of the Possibility Space, and because computer code is by definition algorithmic and deterministic, that space is going to be relatively compact -- that is, if **Dragons Abound** can produce one sort of map, it can almost certainly produce a lot of other similar maps.

The second reason Generative Spaces are compact and consistent is strategic. My purpose in creating **Dragons Abound** is to generate interesting maps -- to overlap as much as possible with the Interesting Space. And the Interesting Space is a tough target -- it's very small in comparison to the Possibility Space. In fact, it's really, really small. All of these spaces have a huge number of dimensions, and the size of a space essentially grows exponentially with the number of dimensions. The real relationship between Interesting Space and Possibility Space is something like:

Except that the Interesting Space is much, much smaller. This forces procedural generation into certain strategies. For example, why doesn't **Dragons Abound** create entirely random maps? Well, in that case the Generative Space would look like this:

Except with the green dots much, much smaller. The odds of actually hitting the tiny Interesting Space with one of the random maps are effectively zero. To improve the chances of hitting the Interesting Space, the creator of the procedural generator uses knowledge about the Possibility Space and the Interesting Space to guide generation -- steering the Generative Space to lie mostly in the neighborhood of the Interesting Space, not just spread out randomly.
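This “tiny target in a huge space” intuition is easy to check numerically. The sketch below is a toy model (the box sizes and dimension counts are made-up numbers, just to illustrate the geometry): treat the Possibility Space as a unit hypercube and the Interesting Space as a sub-box covering half of each dimension, and measure how often uniformly random points land inside it.

```python
import random

def hit_rate(dims, side=0.5, trials=100_000, seed=42):
    """Fraction of uniform random points in the unit hypercube that land
    in an 'interesting' sub-box occupying `side` of each dimension."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if all(rng.random() < side for _ in range(dims)):
            hits += 1
    return hits / trials

# The theoretical rate is side ** dims, so every added dimension halves
# the chance of a random hit; by a few dozen dimensions, random
# generation essentially never lands in the interesting region.
```

And real content spaces have far more than a few dozen dimensions, with an interesting region occupying far less than half of each one.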

So for those reasons, I think it makes sense to think about the Generative Space as a connected, compact region.

But I think drawing the Interesting Space the same way is probably exactly wrong. It should be drawn as a scattered, disjoint set of weirdly-shaped areas. Why? Because (1) things can be interesting for different, orthogonal reasons, and (2) interestingness is a discontinuous function.

Consider, for example, these two map excerpts:

The first excerpt is interesting because the coastline is baroquely detailed and has many small islands. The second excerpt has an uninteresting, smooth shoreline, but the shore has interesting features -- it is uninhabited and has the ominous “Bleak Shore" name. To the extent that these excerpts are interesting, it's for entirely different reasons. So we wouldn't expect these two parts of the Interesting Space to be neighbors.

What's more, the two parts of Interesting Space probably aren't connected, either. There's no way to slowly change the first interesting map into the second interesting map without creating some uninteresting maps along the way. So these two parts of Interesting Space are disjoint.

So Interesting Space isn't a single region, but a bunch of disjoint regions. Does each of these separate Interesting Spaces have a smooth, compact shape?

Again, I don't claim to be able to define “interestingness” completely. But we can agree on some aspects. One of those is that just because X is interesting, it doesn't mean that 2X is twice as interesting. To take the Bleak Shore example above, one stretch of mysterious shore on a map is interesting, but ten would (counterintuitively) be very uninteresting. You can probably think of other examples where “interestingness” behaves in odd and counterintuitive ways. So interestingness has some elements of discontinuity -- it's not a smooth function that varies in a rational and predictable manner as we move through the Possibility Space.

Now let me return to the topic of Perlin noise. What would this graph look like for the Generative Space of Perlin noise in the Possibility Space of landscapes? I think it would look something like this:

The Perlin noise generator has a large amount of overlap with a particular Interesting Space within the universe of all possible landscapes. That's great -- almost everything you can do with Perlin noise falls within that region and looks pretty interesting.

But that easy success is deceptive. If you're new to procedural generation, you'll see how well Perlin noise works and you're likely to think that Perlin noise is a magic tool for generating all kinds of interesting landscapes, not to mention other sorts of procedural generation. But as the diagram illustrates, the Generative Space of Perlin noise is actually far away from many other parts of the landscape Interesting Space. A typical second step in procedural generation of landscapes is to add rivers. But when you try to implement rivers using Perlin noise you find that it is impossible! There's no way to push out the borders of the Perlin noise Generative Space to reach the distant Interesting Space that contains realistic rivers.

Perlin noise is deceptive in another way as well. I said above that Interesting Space is so small that if you want to hit it, you have to use some knowledge about the Possibility Space and the Interesting Space to guide the procedural generation. But the Perlin noise algorithm doesn't know anything about landscapes or geologic processes or what people find interesting, and yet somehow works very well without any of that knowledge! Perlin noise happens to be the exception that proves the rule (*). But if you're new to procedural generation, you probably don't know the rule. In fact, you'll deduce the wrong rule -- that procedural generation in general should work *like* Perlin noise.

*(\*) It doesn't just “happen” to be the exception. Perlin noise is a popular starting point precisely because it is the exception. It's easy to get started with Perlin noise without having to understand a lot about landscapes.*

Consider another common starting point in procedural content generation: dungeon layout. Unlike landscape generation, dungeon layout doesn't have an algorithm similar to Perlin noise that magically creates solutions in Interesting Space. Given what we know about Generative Space and Interesting Space, we might guess that dungeon generators will have to either (1) incorporate specific knowledge about dungeons and interestingness, or (2) produce mostly uninteresting dungeons. And in fact, if you were to go survey the last year of dungeon generators posted to /r/proceduralgeneration, you would find that most of them don't incorporate any knowledge about dungeons beyond making sure all the rooms are connected, and as a result, the output of these projects looks more random than interesting.
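To make that concrete, here is a hypothetical minimal version of the “random rooms plus guaranteed connectivity” recipe -- not any particular posted generator, just the common pattern. Note how the only domain knowledge it encodes is that rooms should be reachable:

```python
import random

def generate_dungeon(width=40, height=20, rooms=6, seed=None):
    """Scatter random rectangular rooms, then join consecutive room
    centers with L-shaped corridors. Connectivity is guaranteed, but
    nothing here encodes what makes a dungeon *interesting*."""
    rng = random.Random(seed)
    grid = [['#'] * width for _ in range(height)]
    centers = []
    for _ in range(rooms):
        w, h = rng.randint(4, 8), rng.randint(3, 6)
        x, y = rng.randint(1, width - w - 1), rng.randint(1, height - h - 1)
        for r in range(y, y + h):
            for c in range(x, x + w):
                grid[r][c] = '.'
        centers.append((x + w // 2, y + h // 2))
    for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
        for c in range(min(x1, x2), max(x1, x2) + 1):
            grid[y1][c] = '.'   # horizontal corridor leg
        for r in range(min(y1, y2), max(y1, y2) + 1):
            grid[r][x2] = '.'   # vertical corridor leg
    return [''.join(row) for row in grid]
```

The output is a valid dungeon every time, but it lands in the “mostly uninteresting” part of the Generative Space: random rectangles connected by right-angle hallways, with no storerooms, no jail cells, no monster ecology.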

So -- presuming you more-or-less accept what I've written -- what lessons should we take away from this notion of Interesting Space?

- Procedural generation is fundamentally hard because Interesting Space is a really small and difficult target.
- Despite the occasional exception like Perlin noise, we should not expect simple, naive algorithms to do a good job generating interesting content. Instead, good procedural generation needs to incorporate and leverage extensive, deep knowledge of the domain and what makes content interesting.
- But even with the best generation algorithms, Generative Space is compact and Interesting Space is not, so we should expect our procedural generation to have a lot of misses, and engineer our systems accordingly.
- For complex domains, Interesting Space is disjoint and weirdly shaped. We shouldn't expect an algorithm or approach that works well in one part of Interesting Space to necessarily work well in another part. Therefore we can expect procedural generation to incorporate many different algorithms to exploit different parts of Interesting Space.

So if I were to create a dungeon generator, I might think about dungeons for a while and decide that I think dungeons are interesting if they:

- Look like they were created to be the storerooms, jail cells, and other subterranean infrastructure of a large castle, and
- Were later invaded by monsters and modified by the monsters to suit their needs for shelter, food, water, defense against attack, etc.

Very interesting! I'm a philosopher by profession so I'm naturally drawn to reflections of this kind. I wonder though whether you're being too pessimistic about the possibility of disconnected Generative Spaces. One might, in theory, make a procedural generation program that uses a number of different methods to generate its terrain (or whatever content), and for any given output uses only some of them. So it might use Perlin noise some of the time and a tectonic plate simulator some of the time, and so on. That way you might have a chance of hitting targets within different discrete areas of Interesting Space. Of course, the more you do this the more unwieldy the thing would get.

I mentioned in a previous comment that I'm working on a project which overlaps somewhat with yours. I've started a blog about it if you're interested: https://undiscoveredworlds.blogspot.com/ . No Perlin noise is involved, and I've hit something like 38k lines so far...

I totally agree with your point, and in fact Dragons Abound is much like that. A collection of disparate approaches aimed at different chunks of Interesting Space.

And thanks for the pointer to your blog -- it's now in my RSS feed, and I look forward to reading it!