Finally, tokens!
I’m noticing that my laser cutter develops a tremor sometimes. Not sure exactly what that’s about, but you can see it in some of the writing if you look closely.
The blips are a nice, chunky 5mm acrylic, which I thought would both feel nice to shuffle in a bag and seem more substantial on the board. I don’t recall if this is true of the original game, but I figured flamers ought to cause fire that sticks around for a little while, so flame tokens will be put down “blazing” side up, flipped to the blank side after a round, and only then removed (a little lifecycle I’ve sketched below). The flame tokens themselves will also serve as the ammunition tokens.

The goal tokens are intended to fit nicely on the board, and are pointy so that you can place two facing each other to designate a whole row of goal tiles. I wanted to leave myself the option of making search scenarios, or scenarios in which you have to do things in order, so they have numbers on the back: for a search, they can be face up (either shuffled or placed by the opponent); for a sequence, face down with the numbers showing.
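To pin down that flame-token lifecycle, here’s a minimal sketch in Python. The stage names are my own labels, not anything official, and the round structure is assumed from the description above:

```python
from enum import Enum, auto

class FlameToken(Enum):
    """Hypothetical stage names for a flame token's time on the board."""
    BLAZING = auto()    # just placed, "blazing" side up
    BURNT_OUT = auto()  # flipped to the blank side after a round
    REMOVED = auto()    # taken off the board after that

def end_of_round(stage: FlameToken) -> FlameToken:
    """Advance a flame token one stage at the end of each round."""
    if stage is FlameToken.BLAZING:
        return FlameToken.BURNT_OUT
    if stage is FlameToken.BURNT_OUT:
        return FlameToken.REMOVED
    return stage
```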
I kind of want to use actual 3D terrain for the goals as well. I have some nice bits, like a control panel and a sci-fi medical station, which would do that job nicely, so I’m thinking I’ll probably just put those on top of the goal tokens. That should make the goals visually more obvious while still getting me the atmosphere benefits of miniatures.
Changing topics entirely: I noticed something I wanted to write down so I don’t have to keep thinking about it, because I suspect it’s an insight I’ll want to return to, even though it isn’t helpful for the game itself. It may matter that my approach in this game is to present rules heavy on scenario-specificity and light on universality. To some extent, that puts a burden on the players to do some analysis when they first read a scenario, but what you get for it is fewer universal rules to remember, and fewer, simpler rules to keep in mind in any particular scenario. I think that fits well with large collections, and with games you expect to play with many other games in between. But it’s also remarkably similar to how I think of ethics.
I am a consequentialist, which is to say that I think the morality of an act is determined by its consequences. The most famous consequentialist moral theory is Utilitarianism, which says, roughly, that we ought to optimize the balance of pleasure over pain. But one of the major critiques of Utilitarianism, and of all simple consequentialisms, is that it places a massive computational burden on people which we simply can’t meet. That’s not much of a criticism, really, because it’s basically just a way of saying that we won’t reliably know what the best thing to do is, which seems entirely consistent with people’s actual experience. But it does point to a genuine challenge: finding heuristics which are more tractable, and which still do a pretty decent job of guiding us toward the acts with the best consequences.
If you imagine a perfect simulation of what would actually happen in the situation a game intends to model, you can think of the rules as a tractable heuristic for calculating it. As with morality, it’s best if this heuristic is transparent and easy to use, but also has high fidelity to the perfect simulation. But (and this is the part I might want to return to) what doesn’t especially matter is whether the heuristic you use in one situation is consistent with the heuristic from another. Generally, we expect that the rules which apply in one scenario will be extensions of the rules from others: were you to simply concatenate all the scenario rules and eliminate repetition, you’d have a single common rulebook which would be essentially workable. But it doesn’t have to be that way! Maybe Terminators can hit genestealers more easily than little T’au drones, so against genestealers, the action for shooting is just a blanket “roll a die, hit on 4+”, but against T’au it’s 5+, or uses a more complicated process altogether.
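To make that concrete, here’s a minimal sketch of what scenario-specific heuristics look like in code. The scenario names and thresholds are hypothetical, taken only from the example above; the point is that each scenario ships its own self-contained rule, with nothing forcing the two rules to be special cases of one universal procedure:

```python
import random

def shoot_genestealer() -> bool:
    """One scenario's blanket rule: roll a d6, hit on 4+."""
    return random.randint(1, 6) >= 4

def shoot_tau_drone() -> bool:
    """Another scenario's rule: drones are harder to hit, 5+."""
    return random.randint(1, 6) >= 5

# Each scenario carries its own shooting heuristic; consistency
# across scenarios is a design choice, not a structural requirement.
SCENARIO_SHOOTING = {
    "genestealer_boarding": shoot_genestealer,
    "tau_drone_patrol": shoot_tau_drone,
}
```

A single common rulebook would instead be one `shoot` function parameterized by target; the scenario-specific version trades that uniformity for less to hold in your head at any one table.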
I don’t know how I feel about that. Certainly, there’s a strong bias toward avoiding blatant incompatibilities between scenario rules. But I wonder whether I should import the relaxed attitude I take about this in the moral realm. I know that morality is hard, and that I’m not well-informed or smart enough to do it perfectly, so I’m okay with heuristics that conflict; it actually comforts me a little to know that I try both to respect autonomy and to reduce harm. There’s essentially a set of meta-heuristics for playing the different heuristics off one another, and I feel like that set does pretty well, so incompatible heuristics can be valuable in dialogue with one another.