Welcome back to my blog series on implementing procedurally-generated map compasses! So far I've implemented a good bit of the Compass Design Language (CDL) -- a parser and an interpreter that can draw compasses like these examples:
I'm sure I'll have more work to do on CDL, but for now I want to start work on the procedural generation part of this project. I'll start that implementation in the next posting, but first I want to explore some topics that will help inform the process of building the procedural generation.

To start with, I want to suggest some ways to think about procedural generation. In my opinion, "procedural generation" is a bit of a misnomer. It implies rote, mechanistic construction, like a passphrase generator (see, that word again!) that churns out NERVOUS RED MUSKOX and so on. But in fact, when we work in procedural generation we're trying for just the opposite -- to build something that is creative and can author works that are novel and interesting.

Of course, at some level computer algorithms really *are* rote and mechanical. But then again, so is an individual brain cell! Mind and creativity are phenomena that emerge from the complex interactions of many rote and mechanical processes. (Or at least so I believe.) So without going off the deep end about consciousness and creativity and so on, I want to talk about some of the mechanics that make up the "individual brain cells" of procedural generation. And since these get generally more complex, I think of them as different levels of procedural generation.
Level 0 -- Unconstrained Random Generation / Completely Constrained Generation
I'll start at a level that definitely isn't procedural generation -- and that's a system that simply makes random choices throughout the creation process -- or a system that makes no choices. Random choice is of course a fundamental mechanism in computer gaming, and it's used constantly to pick the next monster to battle or the color of the gem you found in the chest. But it's hard to consider that any real sort of procedural generation. The converse of completely random choices is no choices at all. In this case you have a process that can only produce one object. And that object might be very good, but again it's hard to consider that procedural generation.
Level 1 -- Constrained Random Generation
This level of procedural generation places simple constraints on each individual choice in the creation process. Each choice is constrained to a range that "usually" produces an acceptable result. You can imagine a space of all possible trees like this:
The goal is to generate trees within the "good trees" space. At Level 0, you simply choose randomly anywhere in the tree space:

So you generate random trees within the entire space, and some are good trees and many are not. To try to get a better ratio of good trees to bad trees, you can constrain your choices to an area of the space where there are more good trees than bad trees:

In this case, we pick X and Y in the green ranges, and then we generate trees in this area:

And now we're generating many fewer bad trees than we were before. But there are limitations to using only this level of constraints.
First of all, you're almost always going to continue to generate at least some bad trees, and depending on the complexity of your problem, you may be hard-pressed to generate more good than bad. If you have a situation where you need only good trees -- for example, you're generating this on the fly in a computer game -- then you're usually forced to constrain your choices so much that you can only generate a small portion of the good tree space:
And this diagram is misleadingly simple, because in reality procedural generation of something complex (like a fantasy map) involves thousands and thousands of choices/dimensions. That makes the "good trees" space vanishingly small. (In machine learning, this is called the Curse of Dimensionality.) Trying to stay within it by only constraining individual random choices is impossible.

Even worse, if you do constrain your choices sufficiently to stay in the good tree space, it's likely that you will throw away any chance you had to generate anything novel or interesting. The novel and interesting trees are all in the edges and unknown parts of the good tree space. You've purposely limited yourself to a well-known and understood area of the good tree space, so you're not going to find any surprises there.
Second, there are some kinds of problems that can't be fixed by independent constraints. Suppose, for example, that your tree generator can choose a round or a conical tree shape, and can also choose leaves or needles. You can never generate both deciduous trees and fir trees, because allowing both choices will sometimes create round trees with needles or conical trees with leaves. There's a dependency between the two parameters, and this means that individual constraints cannot keep you within the good tree space.
Despite these limitations, individual constraints are an important first element of procedural generation. Carefully designing your parameters so that you can constrain them to maximize your "good trees" makes everything else more efficient. And understanding which constraints are individual and which aren't is key to building an effective procedural generator. One of the first questions I ask myself as I build a choice in procedural generation is "What are the basic limits of this choice?" and its companion question "Is there some way that a choice outside those limits could be good?" Answering these questions at least informally will help you build an understanding of the good tree space.
The goal at this level is for each individual part to be generated reasonably as often as possible without greatly restricting the solution space or excluding interesting solutions.
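As a concrete sketch of the difference between Level 0 and Level 1, here's what unconstrained versus individually-constrained random choice might look like. The tree parameters and ranges here are invented purely for illustration:

```javascript
// Level 0: choose randomly anywhere in the whole tree space.
function randomTreeLevel0() {
  return {
    height: Math.random() * 100, // anything at all
    width: Math.random() * 100,
  };
}

// Level 1: constrain each individual choice to a "usually good" range.
function randomInRange(lo, hi) {
  return lo + Math.random() * (hi - lo);
}

function randomTreeLevel1() {
  return {
    height: randomInRange(15, 60), // most good trees fall in here
    width: randomInRange(8, 30),
  };
}
```

Note that constraining the ranges doesn't guarantee good trees; it only improves the odds, which is exactly the limitation discussed above.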
Level 2 -- Multi-part Constraints
Above I pointed out an example where there was a dependency between two different choices in procedural generation. Capturing this requires a multi-part constraint. These constraints can consider and control multiple parts or choices simultaneously in order to stay within the good tree space. An example of a multi-parameter constraint is one that forces needles and conical tree shapes to be chosen together.
Part of the reason rule-based systems and generative grammars are popular for procedural generation is that they provide an expressive system for implementing multi-parameter constraints. For example, in a typical grammar we could say something like this:
TREE => ROUND-BODY LEAVES | CONICAL-BODY NEEDLES
A tree is a round body with leaves OR a conical body with needles.
One kind of multi-part constraint that grammars are very good at capturing is structure -- the relationships between the parts of an object. In a grammar you can easily describe the parts of an object and how they relate to one another. The grammar for CDL describes the parts of CDL (e.g., a command can be RPOINT, CIRCLE, etc.) and the sequence and order of the parts (e.g., an RPOINT command is followed by a parenthesis, followed by a decimal number, followed by a comma, and so on). So grammars are often a good choice to generate something you can describe as a structure.
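The TREE rule above can be sketched as a tiny recursive grammar expander. The rule itself follows the text; the JavaScript representation is my own:

```javascript
// Each symbol maps to a list of alternatives; each alternative is a
// sequence of symbols. Symbols with no entry are terminals.
const grammar = {
  TREE: [["ROUND-BODY", "LEAVES"], ["CONICAL-BODY", "NEEDLES"]],
};

// Recursively expand a symbol into a list of terminal symbols.
function expand(symbol) {
  const alternatives = grammar[symbol];
  if (!alternatives) return [symbol]; // terminal: emit as-is
  const pick = alternatives[Math.floor(Math.random() * alternatives.length)];
  return pick.flatMap(expand);
}
```

Because each alternative bundles a body shape with its matching foliage, the expander can never produce a round tree with needles -- the multi-part constraint is built into the rule.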
When I'm developing a procedural generation system, I try to understand the structure of what I'm creating either top-down (e.g., "What are the parts of a fantasy map and how do they relate to each other?") or bottom-up (e.g., "How do mountains relate to other parts of a fantasy map?"). I switch fluidly between the two as development dictates. I find that it's usually good to at least start with an initial top-down structure (e.g., "A map compass consists of an outer ring, some compass points, and then an inner ring"), but each sub-part you identify becomes a new thing to generate, so as you dive down into a part you often start this over (e.g., "Compass points can include the cardinal directions only, the cardinal directions and the ordinal directions, or down to the interordinal directions"), which leads to a natural trade-off between top-down and bottom-up.
As I break down the structure of something to generate, I'm also trying to think about the other kinds of dependencies the parts may have. For example, I might note that if the cardinal compass points are shaded to the right then all the other compass points should be shaded to the right as well. Depending upon your development process you may be able to implement this immediately or add it to a list of dependencies to be added later when all the parts are complete. But noting the dependency will help you plan ahead for that and avoid rework.
The goal at this level is to achieve consistency across all the parts of the procedural generation most of the time.
Level 3 -- External Constraints
The previous two levels captured constraints that are intrinsic to the object being generated, so if they are complete the object should be consistent and correct, i.e., a round body with leaves or a conical body with needles, but never a round body with needles. Another kind of constraint is one that comes from outside of the object being generated, i.e., an external constraint. For example, if I'm generating trees for a mountainous area of a map, then the trees need to be conical bodies with needles, because deciduous trees (even internally consistent ones) shouldn't be in the mountains. These external factors place additional constraints on the procedural generation.
You might argue that external constraints could be added to the procedural generator to become internal constraints. For example, if I were generating the mountains and the trees on the mountains within the same procedural generation process, then this external constraint would be an internal constraint. That's not wrong, but it isn't practical to pull every external constraint into a single process. It's hard enough to generate mountains on their own without having to entwine that with the generation of trees. However, it is useful to consider whether the procedural generation would be easier if combined. You might find a situation where two different things are so intertwined with dependencies that generating them together makes everything easier.
The key question I ask myself when thinking about external constraints is "How will this be used once it is generated?" Most of the external constraints arise from making something "fit for purpose" -- suitable for how you are going to use it.
Level 4 -- Independent Subtypes
In many kinds of procedural generation, the object being generated has multiple subtypes. For example, if we're generating trees, they might be deciduous trees with leaves (like a maple tree) or they might be non-deciduous trees with needles (like a fir tree). Another example of this can be seen in games like NetHack where some dungeon levels are special and generated in a different way. Typically the subtypes might share some basic parameters or parts (e.g., size, having a trunk and branches, or number of rooms and stairways) but at some point diverge into independent parameters or parts (e.g., leaves versus needles, or irregular cavern-shaped rooms versus rectangular rooms). Enumerating these subtypes and adding them to the procedural generation is what I think of as Level 4.
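This shared-parameters-then-diverge structure can be sketched directly. All the names and ranges below are invented for illustration:

```javascript
// Level 4 subtype generation: choose the parameters every subtype
// shares first, then pick a subtype and fill in its independent parts.
function generateTree() {
  const tree = {
    // Parameters shared by every subtype.
    heightFeet: 10 + Math.random() * 50,
    trunkWidthFeet: 0.5 + Math.random() * 2,
  };
  // Then diverge into subtype-specific parameters.
  if (Math.random() < 0.5) {
    tree.subtype = "deciduous";
    tree.foliage = "leaves";
    tree.body = "round";
  } else {
    tree.subtype = "fir";
    tree.foliage = "needles";
    tree.body = "conical";
  }
  return tree;
}
```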
This is a mechanical (but perfectly acceptable) way to capture human creativity. In the case of map compasses, I noted that some (human-authored) compasses are made to look like the sun:
I could add "sun compasses" to my compass generator to capture that variety without understanding how a person came up with that idea, or without creating a program that could come up with that idea on its own. Adding subtypes like this is a good way to "fake" the kind of inventiveness people have.

You'll sometimes find as you're enumerating subtypes that you figure out a "universal theory" of generation that covers multiple subtypes. For example, you might realize that needles can be treated as very skinny leaves, and deciduous and non-deciduous trees can actually be generated in the same way. When this happens, it usually creates an opportunity to explore some new areas in the space of what you're creating. If needles are skinny leaves, what are really fat leaves? You should be open to gaining a better understanding of what you're generating through the process of creating the generator.
Level 5 -- Simulated Processes
The things we generate procedurally are often items of the natural world that have arisen through complex processes, e.g., a map is a representation of the landform, biomes, and other aspects of the physical world, a tree is the result of biological processes, etc. For topics like this, there is a level of procedural generation that creates digital objects by simulating the same processes that create their natural world counterparts. For example, there are computer programs that create landforms by simulating the types of tectonic processes that lead to real-world landforms.
This level of procedural generation has the promise of generating very realistic objects, but in reality there are several challenges to this approach. First, many of the processes in our natural world are not completely or even well understood. The biology that results in a tree is still an area of active research. Second, even where processes are well-understood they are often complex enough that simulating them at complete fidelity in a computer program is impossible within realistic limits. So this level becomes a balancing act between abstraction and simplification of the natural processes, and creating a useful and realistic result.
Those caveats aside, simulated processes are one of the most powerful tools for procedural generation because they provide an understandable framework for creating complex, interlocking, multi-part dependencies that would be difficult or impossible to capture in a traditional rule-based grammar. And in many cases we do have the understanding and tools to create a simplified simulation of a natural process that produces realistic results.
In general, we don't actually care about the simulated process itself. If I want to generate a plausible landform for use in a computer game, it doesn't matter to the game whether I generated that landform through plate tectonics or with Perlin noise. So the key question to ask when you contemplate implementing a simulated process is "Is this the easiest way to get an acceptable result?" Answering this question might require you to think carefully about what constitutes an "acceptable" result. My experience is that the people who build procedural generation systems care much more about realism and fidelity than the users do, but you'll have to make your own judgement about that.
Implementing any sort of realistic simulated natural process is usually complex and difficult, so the key decisions are about how to simplify the simulation. Your aim is to build a simulation where every part is absolutely necessary to achieve your acceptable result and as simple as possible. For example, if having realistic rainwater channels in my landform was very important to me (and hence part of what I consider an “acceptable result") then I might find it worthwhile to implement a droplet-based erosion simulation. On the other hand, if I only cared that my landform look generally plausible, I would avoid simulating erosion and find an easier method.
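To illustrate the kind of simplification involved, here's a drastically reduced sketch of droplet erosion on a one-dimensional heightmap. A real implementation (two-dimensional, with sediment capacity, inertia, and evaporation) is considerably more involved; this version keeps only the core idea of material moving downhill:

```javascript
// Each droplet starts at a random cell, walks downhill while eroding a
// fraction of each descent, and deposits its sediment where it stops.
function erode(heights, droplets) {
  const h = heights.slice();
  for (let d = 0; d < droplets; d++) {
    let i = Math.floor(Math.random() * h.length);
    let sediment = 0;
    while (i > 0 && i < h.length - 1) {
      // Move toward the lower neighbor; stop at a local minimum.
      const next = h[i - 1] < h[i + 1] ? i - 1 : i + 1;
      if (h[next] >= h[i]) break;
      const carried = (h[i] - h[next]) * 0.1; // erode 10% of the drop
      h[i] -= carried;
      sediment += carried;
      i = next;
    }
    h[i] += sediment; // deposit everything at the resting point
  }
  return h;
}
```

Even this toy version conserves mass and carves material toward the valleys, which may be all the realism an "acceptable result" requires.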
Level 6 -- Artistic Processes
A less obvious problem with simulated processes is that they may not produce very interesting or creative results. Most landforms and trees and maps are, after all, pretty uninteresting. So if we're using procedural generation for an artistic endeavor like a computer game, we may find that perfectly realistic results are not actually very good for our needs. It turns out our goal was not to generate perfectly lifelike and representational objects, but to generate acceptably realistic but interesting and creative objects.
It's not hard to find real-world examples where creators have overruled realism in their own internal procedural generation in favor of some artistic purpose. Tolkien is often criticized for his unrealistic geography but many critics miss the point that Tolkien created the geography of Middle Earth to serve the narrative, not to be realistic. Mordor is surrounded by mountains to create a hero quest for Frodo, not because Tolkien ran a plate tectonics simulation. Of course we cannot model human-level creativity, but we can capture artistic and creative decisions at a simpler level. For example, we might build a map compass generator that favors symmetrical designs because we find those more interesting and beautiful.
Artistic goals are a kind of external constraint, so when I think about the artistic goals of a procedural generator, I ask myself a variant of the external constraints question: "What artistic purpose will this item serve, and where should that overrule the rules of consistency and structure?"
One particular kind of artistic goal that is worth thinking about is artistic style, which for the purposes of this discussion I will define as "a consistent treatment of the non-functional characteristics of similar elements." In other words, creating a coherent flavor across all the parts of the procedural item. To take an example from Dragons Abound, consider this map in the Knurden style:

You'll see that it has a consistent color scheme across land, forest and mountains, that the trees and the mountains share the same sort of softly rounded shapes, and that the trees and mountains have similar highlights and shadows. As a result there's a consistency of presentation across the map that is pleasing to the eye.
One hallmark of simpler procedural generation is that you can look at the result and easily discern which parts were generated independently, because there is no consistency of choice across the parts. Style creates that connection between elements by making consistent choices on the non-functional characteristics of the parts.
Dragons Abound selects or generates its own style. As a simple example, Dragons Abound might have a rule that decides on a style for thin circles in a map compass by picking a width that will be used consistently for all thin circles in the compass (rather than making a random choice for each). Or there might be a rule to enforce that any two elements grouped together should differ by a factor of (say) three in their size or spacing. Of course, you can also create a procedural system that implements a single style. Although Dragons Abound can generate maps in many styles, it would be fine to have a system that could only generate the Knurden style.
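That thin-circle rule can be sketched like this. This is not the actual Dragons Abound code, just an illustration of making a style-level choice once and reusing it:

```javascript
// Commit to one thin-circle width for the whole compass, rather than
// choosing a width independently for each circle.
function designCompassCircles() {
  const thinCircleWidth = 0.5 + Math.random(); // one style choice...
  const circles = [];
  for (const radius of [40, 55, 70]) {
    // ...reused for every thin circle in the design.
    circles.push({ radius, width: thinCircleWidth });
  }
  return circles;
}
```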
When I'm thinking about style in procedural generation, the two related questions I ask myself are "What non-functional characteristics of this design are defined by the style?" and "What are the values of those characteristics for this style?" In the Knurden style I might note that the land, forest and mountain colors are defined by the style, and that those colors fall into a range of green values. And indeed, if you were to look into the Dragons Abound settings for the Knurden style you'd see that those colors are forced into the values for this style:
colorsLandForce: [143, 163, 94], colorsForestForce: [112, 127, 73],
As far as building an artistic procedural generator goes, don't put too much pressure on yourself. You'll find that even small efforts to think about and intentionally address the style and artistic value of your procedural generation will pay off handsomely.
================
I've written more than I intended on the topic of the different levels of procedural generation, but I hope I've provided some useful ways to think about a growing scale of procedural generation complexity and corresponding questions to ask at each level.
In the coming postings I'll be walking through my procedural generation development process in detail, but I want to share a few specific thoughts first.
I like to think about the process of procedural generation as having two phases: Design and Execution. In the design phase, the procedural generation system makes choices about what it is creating -- all of those types of choices I talked about above. A procedural generation system to create a tree might start by deciding how many branches the tree will have; a procedural generation map system might start by deciding the landform. In the execution phase, the procedural generation system implements the choices by making a representation of the object, i.e., by drawing the branches or the landform.
Usually both phases are incorporated into a single system, but not always. Sometimes the two parts aren't even both computers, such as when a human author writes a story based on the events from Dwarf Fortress. In that case the computer has played the design role by creating a plot that a person then executes as a story. Or consider a tool like Autodraw, where a person provides a rough design and the computer turns it into a drawing.
I often reflect this split explicitly in my implementation of procedural generation, as I have done in map compasses, where I have separate systems for drawing compasses and for designing compasses. There are a couple of advantages to building a procedural generation system this way.
First of all, it creates a separation of concerns which helps avoid confusion between what is design and what is execution. If I have a problem with compass points in a generated compass, I can look separately at the design phase and the execution phase to help isolate where the problem lies. And often the kind of problem it is indicates whether it is in the design or the execution phase, helping me find it more quickly. This is more difficult when design and execution are intertwined.
Second, one system can help with the development of the other. I usually build the execution part first (as I have done here) because I find having a tool to visualize designs very useful when I'm developing the design system. Of course the reality is incremental, and I'll be going back to the drawing system to add new capabilities as I discover I need them while developing the design system. But even this switching back and forth is possible because of the separation of the two parts.
Third, having the two parts separate lets me create a hybrid system if I want. Most typically, I use this to take on the design role myself. I can design a map border and write it out in the Map Border Description Language, or I can create a map compass in CDL, and then have the execution element in Dragons Abound do the actual drawing. (I'm a better designer than artist, I guess.) In theory I could also combine one half of the Dragons Abound map compass generation system with the other half from (say) Watabou's compass rose generator, if both were implemented with a split between design and execution.
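The design/execution split described above can be sketched in miniature. The "design" here is a plain data object, and the command format is invented for illustration (it is not real CDL):

```javascript
// Design phase: make the creative choices and output only a description.
function designSimpleCompass() {
  const points = Math.random() < 0.5 ? 4 : 8;
  return { points, ring: true };
}

// Execution phase: render the description into drawing commands,
// making no creative choices of its own.
function executeCompass(design) {
  const commands = [`POINTS(${design.points})`];
  if (design.ring) commands.push("CIRCLE(1.0)");
  return commands;
}
```

Because the interface between the two halves is just the design object, either half can be replaced -- including by a human.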
Building the design portion of a procedural generation system is generally the most daunting part. You want to generate medieval cities, but how do you even get started? I get started by doing what I might call "example-driven development."
As the name suggests, this depends upon having some examples of the things you want to generate. Often you can collect these from other sources -- as I've done with compass roses -- but if not, you can act as your own artist and generate some examples. And since these examples are only being used to guide development, they don't have to be fancy or polished. Just good enough that you can imagine what they should look like.
My very first step is to look through my examples and find a couple of examples that look somewhat like each other. They don't have to be completely similar but each should remind you of the other. So for map compasses, I might pick out these two compasses:
They're not exactly the same, but they have a lot of similarities. And now I'm going to use these two examples as a starting point to explore the "good map compass space."

Remember earlier I talked about the bad tree and good tree spaces:
Well, now I'm working in the compass space, but I don't know what's good and what's bad:

It's all a big mystery. But I do have two examples of good compasses, so I can put them down in my compass space and at least I'll know that they're within the good compass space.
Now my goal is to draw around those two examples an outline of a part of the good compass space. You can think of our two examples as neighbors in the compass space -- neighbors because they share a lot of the same features and characteristics, but with some differences. I want to explore that outline to discover other neighbors that, like our examples, are in the good compass space.
I start this process by identifying where the two examples are the same and where they differ:
- Both compasses are drawn in black and white.
- Both compasses have compass points for both the cardinal and ordinal points.
- Both compasses have straight compass points.
- Both compasses have longer cardinal points than ordinal points.
- Both compasses have compass points drawn with light and dark halves.
- Both compasses have N/E/S/W labels on the outside of the cardinal points.
- Both compasses have a single ring decoration.
- Both compasses have the ring decoration in the outer half of the cardinal points.
- Both compasses have the ring decoration under the compass points.
- One compass has a circular scale for a ring decoration.
- One compass has a combination of a thick and thin circle for a ring decoration.
- One compass has fat compass points.
- One compass has skinny compass points.
- One compass has plain labels.
- One compass has italic labels.
Keeping in mind that this is a metaphor, I can view the features where the two compasses differ as representing the area in the compass space "between" the two compasses. I can create new compasses by starting with one of my example compasses and changing one or more of these differing values to match the other compass, or perhaps to somewhere partway between the two. For example, I could create the first compass but use the italic labels from the second compass:
In this way, I have "procedurally generated" a new compass. Conceptually, this new compass is somewhere "between" the two example compasses. And because this new compass shares all its features with known good compasses, there's a good chance that it's a good compass too. On the other hand, changes to the features the two compasses share move into the space outside of the two compasses. If I change the straight compass points to wavy compass points, I've created a compass that has a feature that neither of the example compasses shares:
That looks okay, but in general if my example compasses share a feature, that might mean that changing it pushes me off into Bad Compass Space. What if I make the big compass points wavy?

Hmm. Not so good.
So by analyzing examples, you can derive likely (and less likely) rules for generating new examples. You can also use examples to validate rules. For example, if I had created a rule that said "Straight compass points can be replaced with wavy compass points," I could then look through my corpus of examples to check that rule, and I might notice that cardinal compass points are never wavy.
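The compare-and-mix process above can be sketched like this. The feature names are shorthand for the comparison list; this is an illustration, not the actual implementation:

```javascript
// Two example compasses reduced to named features.
const compassA = { colors: "blackAndWhite", points: "straight",
                   ring: "circularScale", pointWidth: "fat", labels: "plain" };
const compassB = { colors: "blackAndWhite", points: "straight",
                   ring: "thickAndThinCircle", pointWidth: "skinny", labels: "italic" };

// Generate a "neighbor": shared features carry over unchanged, and
// each differing feature takes one parent's value at random.
function betweenCompasses(a, b) {
  const child = {};
  for (const feature of Object.keys(a)) {
    child[feature] = Math.random() < 0.5 ? a[feature] : b[feature];
  }
  return child;
}
```

Every compass this produces lies "between" the two examples, so it inherits the conservative character discussed below: plausible, but rarely surprising.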
If you generate rules primarily by comparing similar examples and focusing on their differences, you'll end up with rules that are conservative: almost always producing an acceptable result, but without too many creative surprises. Conversely, if you compare very different examples, or focus on changing the shared features of similar examples, you're likely to end up with liberal rules that produce more creative surprises but also more unacceptable results. Personally, I like to build a core of conservative rules and then carefully extend those rules to create more variety. You may find it better to start with an expansive set and prune it back to a better balance.
Whatever your approach, a useful practice when you implement a new rule is to generate a bunch of random examples at both extremes of the rule. For example, if I had a rule that said I could have between 1 and 32 compass points, I would generate some examples with the number fixed at 1 and some with the number fixed at 32, and narrow down that range until the examples looked "mostly" good, where the value of mostly depends on your tolerance for bad examples versus your desire for creative surprises. (And that's an example of a Level 1 Constraint, so I have now come full circle in this discussion.)
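That practice of exercising a rule at its extremes can be sketched as follows. The compass objects here are stand-ins for full designs:

```javascript
// A stand-in for generating a complete compass with a pinned parameter.
function compassWithPoints(numPoints) {
  return { points: numPoints };
}

// Generate a batch of examples pinned at each end of the rule's range,
// for visual inspection before narrowing the range.
function extremeExamples(lo, hi, countPerEnd) {
  const examples = [];
  for (let i = 0; i < countPerEnd; i++) {
    examples.push(compassWithPoints(lo)); // pinned at the low extreme
    examples.push(compassWithPoints(hi)); // pinned at the high extreme
  }
  return examples;
}
```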
That's enough (probably too much) philosophy for one blog post. Next time I'll start on implementation.
This seems like as good a time as any to say: Thank you! I've been following this project for ages, and learning a lot from it. (I'm a sucker for this kind of content in general; and I just love how you break down complex designs into their constituent elements until they seem simple.)
...But now you're just straight up writing a textbook. Very cool!!
Well, I'm not sure it's a textbook that would sell a lot of copies, but thank you very much for the kind words! I'm glad you enjoy the blog too.
I'd buy a copy, at any rate.
Speaking as a philosopher myself, one can never have too much philosophy! One thing I've always been impressed with in your posts is how you carefully analyse different examples of the thing you're trying to create in considerable detail, so it's nice to see a more explicit "theory of procedural generation" laid out here. It's also somewhat humbling, since (despite being an analytic philosopher) my own method involves no such analysis or careful division of computational labour. Instead I just imagine the sort of thing I want the computer to produce, try to think about what steps I would take if I were to produce it myself, and then try to write code that emulates that process. Then I spend much longer tinkering with that code in an attempt to get it to do what I thought it should do from the start. Sometimes my ineptness as a programmer results in happy accidents - unintended results that don't resemble anything I would have planned, but which look cool anyway, and I keep them in; so I suppose I myself am, in a way, part of a larger procedural generation system. Or something.
ReplyDeletePutting it down in writing undoubtedly makes it sound cleaner than the reality. I muddle about as much as anyone I'm sure, but looking back on it I can see the patterns -- "Oh yeah, I was figuring out the limits for that parameter, that's like Level 1". I hope that this might give you some insight into your own processes. I'm definitely interested in the unexpected insights aspect that you mention; that definitely happens to me and I need to think about that some more.