Thursday, February 12, 2015

[IndieDev/EonAltar] The Physics of RPGs

Physics isn't often the first thing to come to mind when one thinks Role-Playing Game, but even turn-based RPGs can successfully use a physics engine to overcome a number of issues. A physics engine gives you things like gravity, velocity, acceleration, forces, and collisions. And while a game like Skyrim might have something like a skill shot, where you fire your bow and the arrow actually has a trajectory and therefore physics, many games do not. In most MMOs you press a button and the ability either connects with the target or it doesn't; you don't generally aim that ability in a way where physics matters.

In Eon Altar, the biggest boon a physics engine gives us is collisions. By using the physics engine to track geometric colliders and when they overlap, we can determine things such as when someone enters the hearing range of an NPC; when your movement marker is hovering near a target; when you're close enough to an item to pick it up off the ground; or when your party is near enough to an encounter that we should spawn the NPCs and populate their statistics.
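
To make that concrete, here's a minimal sketch of what one of those checks can look like in Unity. The component and method names are placeholders for illustration, not our actual code, and note that Unity only delivers trigger messages if at least one of the two objects involved carries a Rigidbody or CharacterController.

    using UnityEngine;

    // Illustrative sketch only: a trigger collider that spawns an encounter when
    // a player actor wanders close enough. EncounterSpawner and SpawnEnemies are
    // placeholder names, not Eon Altar's actual code.
    [RequireComponent(typeof(SphereCollider))]
    public class EncounterSpawner : MonoBehaviour
    {
        bool spawned;

        void OnTriggerEnter(Collider other)
        {
            // The physics engine tells us something entered our sphere; we only
            // care about players, and only the first time it happens.
            if (spawned || !other.CompareTag("Player"))
                return;

            spawned = true;
            SpawnEnemies();
        }

        void SpawnEnemies()
        {
            // Instantiate the NPCs and populate their statistics here.
        }
    }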

Note that there's little actual classic "physics" going on in the game. All of our actors (players and NPCs) are controlled by logic directly, as are our projectiles. Actor navigation uses something Unity calls a NavMesh: basically, a pre-calculated pathfinding mesh that Unity can run algorithms on to determine the fastest route to a location. We don't have bouncing balls or rag-doll physics, and projectiles literally just take the quickest straight path to the target.
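
For those curious, querying the NavMesh looks roughly like the sketch below (simplified; in older Unity versions NavMesh lives directly in UnityEngine rather than UnityEngine.AI).

    using UnityEngine;
    using UnityEngine.AI; // older Unity versions expose NavMesh directly in UnityEngine

    // Simplified sketch: ask the baked NavMesh for the quickest walkable route
    // between two points and hand back the waypoints.
    public static class PathQuery
    {
        public static bool TryGetPath(Vector3 from, Vector3 to, out Vector3[] corners)
        {
            var path = new NavMeshPath();
            if (NavMesh.CalculatePath(from, to, NavMesh.AllAreas, path) &&
                path.status == NavMeshPathStatus.PathComplete)
            {
                corners = path.corners; // waypoints along the pre-calculated mesh
                return true;
            }
            corners = null;
            return false;
        }
    }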



To give folks an idea of the examples I'm going to use, I present to you a screenshot from the current in-progress iteration of Eon Altar. Disclaimer: this isn't the final game; it's still under development, so on and so forth.

The screenshot above takes place in our test level--an infinite plane where designers and developers can throw in whatever they like to test all sorts of combat and exploration scenarios. On the left we have our mage, Muran, and on the right we have an enemy--a bandit--whom Muran is targeting (the green reticule). Between the pair we have a couple of walls: the closer one tall enough to block line of sight, the smaller one tall enough to block movement but short enough to be seen over.

The screenshot below has a bunch of debugging information showing. Specifically, the green wireframe meshes are all of the different colliders we have in the scene at the moment. These colliders are known as trigger colliders: they can overlap, they have no physical presence, and they don't cause other things to bounce off them.



Each actor is dense with colliders. There's a small collider that encapsulates the actor itself, which we use for aiming projectiles and determining if they've walked into something important (like a usable switch, an enemy's line of sight, etc.).

You'll also see a number of colliders around each actor: these are combat slots. Combat slots are used to align actors in melee. When an actor's collider enters another actor's combat slot, that slot is occupied. When they leave (or die), it becomes unoccupied and other actors may now occupy it. However, there's no requirement for actors to take up only a single slot: a large golem may end up taking two or even three slots (and said golem may have far more than six slots of its own). If all the slots are occupied, no further actors can get into melee.
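
As a rough sketch of how a slot could track its occupants with trigger events (placeholder names and tags; this illustrates the idea, not our actual implementation):

    using UnityEngine;
    using System.Collections.Generic;

    // Illustrative sketch: a combat slot is a trigger collider that remembers
    // which actor colliders currently overlap it. The "Actor" tag is a placeholder.
    public class CombatSlot : MonoBehaviour
    {
        readonly HashSet<Collider> occupants = new HashSet<Collider>();

        public bool IsOccupied { get { return occupants.Count > 0; } }

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Actor"))
                occupants.Add(other); // a large actor may occupy several slots at once
        }

        void OnTriggerExit(Collider other)
        {
            occupants.Remove(other); // leaving (or being removed on death) frees the slot
        }
    }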

Just to the north of Muran you'll see the green targeting reticule, which has its own collider. When it overlaps with something that can be targeted, like other players, enemies, things on the ground, or so on, we can snap the reticule to that target.
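
A sketch of the snapping idea: the reticule keeps track of the targetable colliders it currently overlaps and picks the nearest one. (Again, placeholder names, and only an illustration of the approach.)

    using UnityEngine;
    using System.Collections.Generic;

    // Illustrative sketch: the reticule's trigger collider maintains a list of
    // targetable things it overlaps, and exposes the nearest one to snap to.
    // The "Targetable" tag is a placeholder.
    public class TargetingReticule : MonoBehaviour
    {
        readonly List<Collider> candidates = new List<Collider>();

        public Collider SnappedTarget { get; private set; }

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Targetable"))
                candidates.Add(other);
        }

        void OnTriggerExit(Collider other)
        {
            candidates.Remove(other);
        }

        void Update()
        {
            // Pick whichever candidate is closest to the reticule's centre.
            SnappedTarget = null;
            float bestDistance = float.MaxValue;
            foreach (var candidate in candidates)
            {
                float distance = Vector3.Distance(transform.position, candidate.transform.position);
                if (distance < bestDistance)
                {
                    bestDistance = distance;
                    SnappedTarget = candidate;
                }
            }
        }
    }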

You'll also notice two really large spherical colliders surrounding the NPC in the upper-right of the screenshot. These represent the sight and hearing of our bandit. When a player's collider enters one of those spheres, combat begins with the enemy having the first turn.

Finally, the walls also have colliders, which are used for line of sight for ranged attacks. If we draw a line from one actor to another, and it hits the target's collider rather than something else, we have line of sight and can shoot them.
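
In Unity terms, that line is a ray cast (or line cast). A stripped-down sketch of the check, assuming walls and actors all have colliders the physics engine knows about:

    using UnityEngine;

    // Simplified sketch of a line-of-sight test: cast from the shooter toward the
    // target and see whether the first thing we hit is the target's own collider.
    public static class LineOfSight
    {
        public static bool CanSee(Transform shooter, Collider target)
        {
            Vector3 from = shooter.position + Vector3.up;   // roughly eye height
            Vector3 to = target.bounds.center;

            RaycastHit hit;
            if (Physics.Linecast(from, to, out hit))
            {
                // If a wall (or anything else) is in the way, it gets hit first.
                return hit.collider == target;
            }
            return false; // hit nothing at all; normally the target's collider would be hit
        }
    }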



The above screenshot shows our calculations for a ranged attack, and how the walls come into play. This time, for debug information we're showing Unity's NavMesh--basically the walkable areas--in light blue. You'll see that the walls carve chunks out of the NavMesh; actors must walk around them.

The red/blue lines show the approximate calculations over the past 120 frames for whether we can shoot the target at the end of the blue line. Red shows where we'd have to move to (noting we have limited movement), and blue shows the actual line-of-sight calculation from there. We can see that there are valid calculations for Muran to move up and shoot over the smaller wall by the bandit, despite not being able to walk over it, but we can't shoot through the taller wall.

While the NavMesh is strictly speaking not part of the physics engine, for movement and line of sight we need to combine both pathfinding and physics calculations (ray casting) to create a path to move along and a spot to shoot from. The player doesn't need to worry about the projectile physically missing (though perhaps their accuracy roll will fail), so when we actually shoot, we're not really using the physics engine at that point.
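
Put together, a "move and shoot" check ends up looking something like the sketch below: walk the NavMesh path toward the target as far as the movement budget allows, and ray cast from each reachable waypoint. (Simplified, with placeholder names; purely illustrative.)

    using UnityEngine;
    using UnityEngine.AI;

    // Rough sketch of "find a spot within my movement budget that I can shoot
    // the target from", combining NavMesh pathfinding with physics ray casts.
    public static class MoveAndShoot
    {
        public static bool TryFindFiringPosition(
            Vector3 start, Collider target, float movementBudget, out Vector3 firingPosition)
        {
            var path = new NavMeshPath();
            if (NavMesh.CalculatePath(start, target.bounds.center, NavMesh.AllAreas, path))
            {
                float travelled = 0f;
                Vector3 previous = start;
                foreach (var corner in path.corners)
                {
                    travelled += Vector3.Distance(previous, corner);
                    previous = corner;
                    if (travelled > movementBudget)
                        break; // out of movement; can't reach any further waypoints

                    // Line-of-sight check from this reachable waypoint.
                    RaycastHit hit;
                    if (Physics.Linecast(corner + Vector3.up, target.bounds.center, out hit) &&
                        hit.collider == target)
                    {
                        firingPosition = corner;
                        return true;
                    }
                }
            }
            firingPosition = start;
            return false;
        }
    }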

Despite our combat being turn-based, we lean heavily on the physics engine to perform calculations for colliders, rather than doing those calculations ourselves manually. We're not working on any sort of tile-based grid despite the floor being marked as such--that's just so we can quickly gauge distances in testing--so everything has to be calculated based on the geometry, and the physics engine comes with a number of optimizations that save us both run time and development time.

But that's not the whole story. Leaning on the physics engine comes with its share of problems. For instance, colliders generally only detect when they overlap. If one collider teleports from outside another collider to entirely inside it in the span of a single physics frame, we'll never get notified that the colliders entered each other. If colliders are moving too quickly, it's just as if they were teleported. This is because the physics engine uses discrete calculations (sampling each frame) rather than continuous ones (interpolating the path between frames). The latter is extremely expensive, however, and as long as we don't whip colliders around the level at high speeds or teleport them, it shouldn't be a problem.


Middle is Frame 1, left and right are Frame 2. If the colliders overlap, we're golden. If they never physically overlap, we've got problems.
Sidebar: technically we can detect when a collider is contained completely inside another, but that ends up being quite expensive, and we prefer not to do it if possible. Turns out, it generally isn't required.
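
If you do need to move something a long way in a single frame, one common workaround (not necessarily what we do) is to sweep between the old and new positions yourself:

    using UnityEngine;

    // Sketch of a manual sweep for the tunnelling problem described above: before
    // teleporting something, line cast from the old position to the new one so
    // anything it would have skipped over can still be detected.
    public static class SweptMove
    {
        public static void MoveWithSweep(Transform mover, Vector3 newPosition, LayerMask importantLayers)
        {
            RaycastHit hit;
            if (Physics.Linecast(mover.position, newPosition, out hit, importantLayers))
            {
                // We'd have passed straight through this collider in a discrete step;
                // react here (fire the overlap logic manually, stop short, etc.).
                Debug.Log("Swept move passed through " + hit.collider.name);
            }
            mover.position = newPosition;
        }
    }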

Another issue is that eventually, between terrain, all of our interactable objects, and actors, we run the risk of having thousands of colliders in a level. While most graphics engines have optimizations to avoid rendering things that aren't visible on screen, physics engines generally perform their calculations across the entire simulation, which means we need to implement our own optimizations. Funnily enough, this includes leaning on the physics engine itself: a large collider (far bigger than the size of the screen) turns on chunks of the level as we get closer, so we're not running every physics object in the level all the time.
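
The sketch below shows the shape of that idea: each chunk keeps a small, always-active sentinel with a trigger collider, and when the party's oversized activation collider overlaps it, the chunk's contents get switched on. (The tag and field names are placeholders.)

    using UnityEngine;

    // Illustrative sketch: a level chunk that enables its contents while the
    // party's large "activation bubble" collider overlaps its sentinel trigger.
    // "ActivationBubble" and contentRoot are placeholder names.
    public class LevelChunk : MonoBehaviour
    {
        public GameObject contentRoot; // the chunk's geometry, NPCs, physics objects

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("ActivationBubble"))
                contentRoot.SetActive(true); // wake this chunk up as the party approaches
        }

        void OnTriggerExit(Collider other)
        {
            if (other.CompareTag("ActivationBubble"))
                contentRoot.SetActive(false); // put distant chunks back to sleep
        }
    }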

Overall, the physics engine has been extremely helpful in implementing a large number of systems. From exploration, to combat positioning, to ranged attacks, to NPC detection areas, and more. Even if you aren't using standard gravity or classical physics, the benefits a physics engine nets you can be immense. #IndieDev, #EonAltar, #GameDevelopment

2 comments:

  1. Fascinating stuff. I appreciate you sharing this with us.

    1. You're welcome! It's fun to write about, and formalize my thoughts/knowledge on the subject.
