Point A to Point B
We all know that AI is an important aspect of gameplay, and that much of its focus is on simulating humans as closely as possible by creating the illusion of characters being intelligent. This week's lecture took us on a trip to Insomniac Games, where we learned how NPC navigation is done in the Insomniac Engine and how it creates a more immersive experience for the player. The talk covered a lot of ground, centering on how level navigation has become an increasingly necessary ingredient of fun in today's games.
Reddy Sambavaram spoke about the evolution of AI navigation across several of Insomniac's games, including Resistance and Resistance 2, Ratchet and Clank Future: Tools of Destruction, A Crack in Time, and others. This blog will mainly go over the talk and discuss in some detail how the engine designers approach navigation within their games. The information here comes from Reddy Sambavaram's talk on Navigation in the Insomniac Engine at GDC 2011, the slides of which can be found here.
Overview of Navigation
The gaming industry is constantly striving to give players an immersive experience, and well-designed enemy AI is crucial to this goal. For AI to be great, NPCs need to come across as real actors walking around a living, breathing environment, and navigation plays a critical role in emulating that human behavior. With that said, enemy AI has changed drastically over its history, from heavily scripted behavior to much more dynamic, innate sensibilities and characteristics. One of Insomniac's primary objectives in improving their navigation systems was to relieve game and level designers from having to be consciously mindful of the movement and navigation capabilities of the NPCs, and from having to build the world around them.
Typically, navigation comes in the form of commands telling an NPC to go from one specific point to another or to make its way toward a particular goal. Restrictions are placed on the navigation while animations are played in order to enhance the realism of NPC movement. Spawn locations are also calculated based on where the player is and how they are playing. Insomniac quite often re-designs their single-player levels to support multiplayer modes, and navigation lends a hand by restricting the gameplay sections available to the NPCs. Navigation has its own world representation to optimize processing, and many approaches have been used in the past, including 2D grids, points and connections placed by level designers, sphere- or box-based volumes, and most recently a navigation mesh laid over the level geometry. And let's not forget A*, which is very popular in the industry for finding paths from point A to point B when there are obstacles in the way. AI navigation is most certainly not a solved problem yet, but it is evolving aggressively.
Typically, the navigation system is given a query; it finds a path, smooths it, and hands the result to the steering system, which works around dynamic obstacles and feeds the final velocities to the animation system, and from there to the rendering system, which figures out what needs to be displayed on the screen and when.
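To make that hand-off concrete, here's a tiny Python sketch of the data flow (not Insomniac's code - the stage implementations are deliberately trivial stand-ins I made up; only the order of query, pathfind, smooth, and steer matters):

```python
# Hypothetical sketch of the query -> pathfind -> smooth -> steer hand-off.
# The stand-in implementations are deliberately trivial; only the data flow matters.

def find_path(start, goal):
    """Pathfinding stage (in the real engine, A* over the nav mesh)."""
    return [start, goal]                      # pretend the straight line is walkable

def smooth_path(path):
    """Smoothing stage (in the real engine, pulling the raw polygon path tight)."""
    return path                               # nothing to smooth in this toy path

def steer(position, path, speed, obstacles):
    """Steering stage: head toward the next waypoint, nudging around obstacles."""
    tx, ty = path[1]
    dx, dy = tx - position[0], ty - position[1]
    length = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    return (speed * dx / length, speed * dy / length)   # velocity for the animation system

# One navigation "query": where am I, where do I want to go, how fast can I move?
position, goal, speed = (0.0, 0.0), (10.0, 5.0), 4.0
path = smooth_path(find_path(position, goal))
velocity = steer(position, path, speed, obstacles=[])
print(velocity)   # fed onward to animation, and ultimately rendering
```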
Stepping Stones
With Ratchet and Clank: Deadlocked, Insomniac made use of a way-volume representation, hand-made by the designers, using volumes as nodes for an A* graph. Connections between these way-volumes helped guide the NPC from one box to the next. When they moved to the PS3, Insomniac employed navigation meshes for Resistance: Fall of Man. Their workflow at this time began with the designer, who would lay the nav mesh out in Maya, after which tools converted it into convex poly meshes for runtime. The polygons were treated as nodes in A*, connected whenever they shared an edge. This, however, was a huge bottleneck on the PPU, which struggled to support even 8 NPCs using navigation at any given time. Furthermore, AI LODs were added so that NPCs beyond a certain distance would resolve to a much cheaper version of the behavior.
Going into Resistance 2, Insomniac knew they had to fix their PPU bottleneck and remove the AI LOD restriction. The designers wanted to lay down the navigation mesh at a much finer level, up to 3 times as fine, meaning 9 times the nav-mesh poly load would be their target. They also needed navigation to support 8-player co-op, which was a challenge in itself. Firstly, to fix the bottleneck, they moved the processing of the nav meshes to the SPU and separated the nav mesh into differently colored clusters. They also decided that a simple triangle representation would work much faster than convex polygons. Since the meshes were hand-laid, it was common to see giant, long, thin triangles alongside tiny ones; with such non-uniform triangles the shortest path wasn't always being found, so triangle edges rather than the triangles themselves were used as nodes in the A* algorithm.
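To get a feel for what A* over such a graph looks like, here's a minimal Python sketch with a made-up set of edge-midpoint nodes and straight-line costs; Insomniac's actual node layout, cost function, and SPU data structures are of course different:

```python
# A* over edge midpoints of a nav mesh (illustrative graph, not Insomniac's data).
import heapq, math

# Each node is an edge midpoint; edges of the search graph connect midpoints that
# belong to the same triangle. Positions are made up for the example.
positions = {
    "a": (0.0, 0.0), "b": (2.0, 1.0), "c": (4.0, 0.5),
    "d": (2.0, 3.0), "e": (5.0, 3.0),
}
neighbors = {
    "a": ["b", "d"], "b": ["a", "c", "d"], "c": ["b", "e"],
    "d": ["a", "b", "e"], "e": ["c", "d"],
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def a_star(start, goal):
    open_set = [(dist(positions[start], positions[goal]), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while open_set:
        _, cost, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in neighbors[node]:
            new_cost = cost + dist(positions[node], positions[nxt])
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                estimate = new_cost + dist(positions[nxt], positions[goal])
                heapq.heappush(open_set, (estimate, new_cost, nxt, path + [nxt]))
    return None

print(a_star("a", "e"))   # ['a', 'd', 'e'] with these made-up positions
```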
Insomniac then put in hierarchical pathfinding, which came with its own baggage: a high-level path didn't always imply a lower-level path, as there was always the possibility of two disjoint meshes. Around this time path caching was introduced, so whenever a query came in, a check was made to see whether the start and end points lay on a previously successful path. Most of the time there was a hit, which meant they were spending less than 10% of their time finding a path; pathfinding was therefore the least of their worries as far as the entire navigation system was concerned. Since most queries were local in range, hierarchical pathfinding didn't really buy them much in their games, so they got rid of it. As a result, more computation could be spent on finding the path with A*. To get a better approximation of the cost, the best point on each edge was also calculated for the path to pass through, which meant smoothing didn't have to do much. Furthermore, they parameterized path queries, allowing NPCs to specify their abilities and letting certain NPCs use selective nav meshes within the system.
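The caching idea itself is simple enough to sketch: if both the new start and the new goal already lie on a recently successful path, reuse the slice between them and skip A* entirely. The class below is a hypothetical minimal version of that check:

```python
# Hypothetical path cache: reuse the stretch of a previously found path when both
# the new start and the new goal already lie on it.

class PathCache:
    def __init__(self):
        self.last_path = []          # most recent successful path (list of node ids)

    def try_reuse(self, start, goal):
        if start in self.last_path and goal in self.last_path:
            i, j = self.last_path.index(start), self.last_path.index(goal)
            step = 1 if i <= j else -1
            return self.last_path[i:j + step:step]     # cache hit: slice out the sub-path
        return None                                     # cache miss: fall back to A*

    def store(self, path):
        self.last_path = path

cache = PathCache()
cache.store(["a", "b", "d", "e"])        # path found by A* on a previous frame
print(cache.try_reuse("b", "e"))         # ['b', 'd', 'e'] without running A* again
print(cache.try_reuse("b", "z"))         # None -> run the full query
```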
In their endeavor to push all the processing onto the SPU, Insomniac batched all the nav queries on the PPU and ran them full-frame deferred. All data access was then isolated so it could be pushed to the SPU. They created a nav SPU job which included finding a point on the mesh subject to some restrictions, finding a path between the start and the destination, and computing the obstacle-processing data for the steering behavior. Hand-optimized assembly routines were then used to make it all much faster. Apart from this move to the SPU, they changed their string-pull algorithm, which had posed issues with huge enemies: with a 2-meter tolerance radius around them, they would cut across edges where collision would stop them in place. The smoothing was modified to pick up the slack, maintaining a tangent tolerance instead of using the standard string-pull algorithm.
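The batching can be pictured as nothing more than queuing query descriptors during one frame and resolving the whole batch in a single job the next. The sketch below is hypothetical and only illustrates the frame-deferred pattern, not the actual PPU/SPU job setup:

```python
# Hypothetical frame-deferred batching: queries issued this frame are resolved
# together next frame, mimicking the "collect on PPU, process in one SPU job" split.

class DeferredNavQueries:
    def __init__(self, solver):
        self.solver = solver       # callable that answers a single query
        self.pending = []          # queries collected during the current frame
        self.results = {}          # answers from the previous frame's batch

    def request(self, npc_id, start, goal):
        self.pending.append((npc_id, start, goal))      # cheap: just record the query
        return self.results.get(npc_id)                 # result is one frame old (or None)

    def run_batch(self):
        """Called once per frame; in the engine this would be the SPU job."""
        self.results = {npc: self.solver(s, g) for npc, s, g in self.pending}
        self.pending = []

nav = DeferredNavQueries(solver=lambda s, g: [s, g])    # toy solver: straight line
nav.request("hybrid_01", (0, 0), (8, 2))                # frame N: nothing back yet
nav.run_batch()                                         # end of frame N
print(nav.request("hybrid_01", (0, 0), (8, 2)))         # frame N+1: [(0, 0), (8, 2)]
```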
By design, NPCs had to be running most of the time, with Grims (zombie enemies) moving somewhere around 8 meters per second. This posed an issue: they not only had to reach the local bend point in the path but also had to orient themselves toward the next bend point, and they weren't slowing down enough, which gave the AI designers quite a bit of trouble when trying to communicate this to the animation system. The NPCs were not looking good in the scene, so a bezier curve was introduced at the bend point, letting them not only "arrive" at the bend point but also giving them time to face the next bend point appropriately. The NPC would then target the midpoint of the curve rather than the bend point itself, causing it to run in a naturally smooth manner. Insomniac maintained a very lightweight steering approach, partly in order to have lots of NPCs on the screen at a time, especially for 8-player co-op. For each obstacle, escape tangents were calculated and given to the steering system, which output the direction closest to facing the bend point that didn't fall between any two escape tangents. The only issue this created was that if there was no tolerance between the obstacle and the boundary edge, the NPC could potentially get stuck. So for every escape tangent, a sweep check was calculated; if the character would run into a boundary, the engine flagged the least significant bit of the tangent angle so the steering simply added 90 degrees to it. This worked well for Resistance 2.
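The bezier trick mentioned above can be sketched as a quadratic curve whose control point is the bend itself: instead of steering straight at the bend, the NPC steers at the curve's midpoint, which already leans toward the next leg of the path. The points below, and the choice of the midpoint as the steering target, are just an illustration of the idea:

```python
# Quadratic Bezier through (previous point, bend point, next point); the NPC
# targets the curve's midpoint so it starts turning toward the next bend early.

def bezier(p0, p1, p2, t):
    """Quadratic Bezier: B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2."""
    u = 1.0 - t
    return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])

prev_point = (0.0, 0.0)     # where the NPC is coming from
bend_point = (10.0, 0.0)    # the sharp corner in the smoothed path
next_point = (10.0, 8.0)    # where the path goes after the corner

target = bezier(prev_point, bend_point, next_point, 0.5)
print(target)               # (7.5, 2.0): inside the corner, already facing the next leg
```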
At the end of Resistance 2, while profiling and fixing the navigation system, the team noticed that quite a lot of queries for finding a valid point on a mesh were producing similar data from frame to frame. They were spending hardly any time on A* or pathfinding in their navigation system; 80% of the time went into obstacle processing. Statistically, one obstacle was found in at least 3 other NPCs' paths. With Grims there were spikes when hundreds of them would run through a narrow corridor and every enemy saw every other enemy as an obstacle, hammering performance. Since all the Grims shared the same threshold tolerance they had to maintain from the boundaries, the team introduced a special "Grim cache" in which every Grim obstacle would look around the boundary edges and cache its closest distance to the boundary.
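A rough sketch of what such a shared cache might look like (names and data made up): each obstacle's closest distance to a nav mesh boundary is computed once, and every Grim that sees the same obstacle reads the cached value instead of recomputing it.

```python
# Hypothetical "Grim cache": nearest-boundary distance per obstacle, computed once
# and shared by every NPC that sees the same obstacle, instead of per-NPC recomputation.
import math

boundary_edges = [((0, 0), (20, 0)), ((0, 10), (20, 10))]     # toy corridor walls

def point_segment_distance(p, a, b):
    ax, ay, bx, by, px, py = *a, *b, *p
    abx, aby = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

grim_cache = {}      # obstacle id -> closest distance to any boundary edge

def nearest_boundary(obstacle_id, position):
    if obstacle_id not in grim_cache:
        grim_cache[obstacle_id] = min(point_segment_distance(position, a, b)
                                      for a, b in boundary_edges)
    return grim_cache[obstacle_id]

print(nearest_boundary("grim_42", (5.0, 3.0)))   # computed once: 3.0
print(nearest_boundary("grim_42", (5.0, 3.0)))   # every later lookup is a cache hit
```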
Resistance 3
All the caches introduced at the end of Resistance 2, including the Grim cache, begged for a formal roll-out into the system. So the SPU processing was split into 3 passes: the first accumulated all obstacles on all paths, the second gathered the nearest boundaries for those obstacles, and the last ran through all the paths once more, looking at each obstacle and the boundary-edge threshold tolerance computed in the previous pass and flagging the tangents appropriately. They also explored a rendering experiment and came up with target scanning, which was used to maintain NPC tolerance. Essentially, they thought of rendering the nav mesh and clearing the rendered texels covered by obstacles, with a texel density of one-fourth of the NPC's tolerance. This approach had to be discarded for two very important reasons - there was no budget for it and, more importantly, gameplay would dynamically shrink the tolerance. A different approach was taken: whenever they wanted to maintain tolerance away from a poly, they took the nav mesh, shrank the boundary edges, and re-computed the polys for every NPC.
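The three passes can be pictured as plain loops over shared data, where each pass only reads what the previous one wrote. Everything in the sketch below - the data layout, the tolerance value, the flagging rule - is a hypothetical simplification just to show the pass structure:

```python
# Hypothetical three-pass structure: (1) gather obstacles per path, (2) nearest
# boundary distance per obstacle, (3) flag escape tangents that would hit a boundary.

paths = {
    "hybrid_01": {"waypoints": [(0, 0), (6, 0)], "obstacles": []},
    "hybrid_02": {"waypoints": [(1, 1), (6, 1)], "obstacles": []},
}
all_obstacles = {"crate": {"pos": (3.0, 0.5), "radius": 1.0}}
TOLERANCE = 0.75     # illustrative clearance an NPC keeps from boundary edges

# Pass 1: accumulate which obstacles sit on which paths (toy test: x-range overlap).
for path in paths.values():
    for name, ob in all_obstacles.items():
        if path["waypoints"][0][0] <= ob["pos"][0] <= path["waypoints"][-1][0]:
            path["obstacles"].append(name)

# Pass 2: nearest boundary distance for every obstacle seen in pass 1 (toy: walls at y=0, y=4).
for ob in all_obstacles.values():
    ob["boundary_dist"] = min(ob["pos"][1], 4.0 - ob["pos"][1])

# Pass 3: revisit each path and flag obstacles whose escape tangents would leave no room.
for name, path in paths.items():
    path["flagged"] = [ob for ob in path["obstacles"]
                       if all_obstacles[ob]["boundary_dist"] < TOLERANCE]
    print(name, path["flagged"])    # both paths flag 'crate': too close to the wall
```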
Any navigation system supports custom edges, which don't fit neatly into a navigation mesh solution. These custom links could be jumps, ladders, teleports, etc. that connect different areas of the nav mesh. Traditionally a designer would place points across the mesh and link them up, but this was unfriendly to change: a small adjustment would force the nav mesh to be re-laid and re-connected, and it isn't in tune with the way a designer thinks. So designers were instead allowed to place boxes in various parts of the level, and as long as the front-facing side of the box intersected the nav mesh, they were fine. This was much more tolerant to level changes. Every triangle then had, apart from its own edges, custom clue edges. This blended really well with the A* approach in which the NPC walked the triangle edges rather than the triangles themselves, and no tweaking was required to support these custom clues. Essentially, Hybrids could then scale buildings and jump over walls without designers needing to script any of it.
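Hooking clues into the search can be as simple as adding extra edges, tagged with a traversal requirement, to the same graph the pathfinder already walks, and filtering them against an NPC's abilities. The graph, costs, and ability flags below are all made up for illustration:

```python
# Custom clue edges (jump, ladder, ...) added as extra graph edges that only NPCs
# with the matching ability may traverse. Graph, costs, and abilities are made up.

walk_edges = {                  # ordinary nav mesh connectivity, cost = distance-ish
    "street": [("alley", 5.0)],
    "alley": [("street", 5.0), ("rooftop", 30.0)],   # the long way round, via stairs
    "rooftop": [("alley", 30.0)],
}
clue_edges = {                  # designer-placed clues: (destination, cost, required ability)
    "street": [("rooftop", 8.0, "can_climb")],
}

def neighbors(node, abilities):
    for dest, cost in walk_edges.get(node, []):
        yield dest, cost
    for dest, cost, needs in clue_edges.get(node, []):
        if needs in abilities:
            yield dest, cost

def cheapest_path(start, goal, abilities):
    import heapq
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for dest, step in neighbors(node, abilities):
            heapq.heappush(frontier, (cost + step, dest, path + [dest]))
    return None

print(cheapest_path("street", "rooftop", {"can_climb"}))   # (8.0, ['street', 'rooftop'])
print(cheapest_path("street", "rooftop", set()))           # (35.0, via the alley stairs)
```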
The manner in which NPCs used these custom links became a big part of their personalities. An NPC using a custom link could modify the link's cost, and other NPCs would see the modified cost until the link was freed. Some NPCs used the custom links while others did not. On top of this, special metadata provided the animations an NPC would play to traverse a custom link. This led to NPCs taking alternate paths in gameplay and finding various ways to get to the player when multiple enemies were in action. Later, the team at Insomniac made the decision to shaderize their code. The idea was that they didn't want unused code sitting in memory, which is at a premium on the SPU; they would rather spend it on data processing. So they split their processing into 5 shaders with a core SPU nav driver layer that orchestrated the shaders and acted as the data communicator between them. These shaders were: point on mesh, A* and smoothing, find obstacles on each path, obstacle-mesh interaction, and flag path obstacle tangents. The result was an abundance of memory returned for data processing.
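The way a link's cost changes while it is in use can be pictured as a temporary cost bump: while one NPC occupies the link, the cost everyone else reads is inflated, and it drops back when the link is freed. The sketch below is a made-up minimal version of that bookkeeping:

```python
# Hypothetical custom-link occupancy: an NPC using a link raises its cost for
# everyone else until it releases the link again.

class CustomLink:
    def __init__(self, base_cost, occupied_penalty=100.0):
        self.base_cost = base_cost
        self.penalty = occupied_penalty
        self.occupant = None

    def cost_for(self, npc_id):
        if self.occupant is not None and self.occupant != npc_id:
            return self.base_cost + self.penalty     # discourage queuing on the same ladder
        return self.base_cost

    def acquire(self, npc_id):
        self.occupant = npc_id

    def release(self):
        self.occupant = None

ladder = CustomLink(base_cost=6.0)
ladder.acquire("hybrid_01")
print(ladder.cost_for("hybrid_01"))   # 6.0   -> the occupant keeps its cheap cost
print(ladder.cost_for("hybrid_02"))   # 106.0 -> others are pushed toward alternate paths
ladder.release()
print(ladder.cost_for("hybrid_02"))   # 6.0 again once the link is freed
```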
Nav mesh generation
In Resistance and Resistance 2, the level designer would make the custom nav mesh in Maya, but this relied on a fragile tools pipeline that was a time sink as far as designers and QA were concerned. The in-house level editor was separate from Maya, which didn't have the level context in which to build the nav mesh, so a reverse pipeline was created where parts of the level could be exported into Maya to provide context for laying out the nav mesh polys. After Resistance 2, the ability to lay down triangles directly in the level editor was added, making a huge difference for the designers. Now Insomniac has moved to a controlled auto-generation scheme, because traditionally creating a nav mesh was such a great time sink for designers that the same nav mesh was often reused for various enemy types and sizes. They integrated Mikko Mononen's "recast" into their system after cleaning it up, making some parameters dependent on others, and exposing only a few to the designers.
In essence, the process worked by taking the render geometry and voxelizing it into a voxel grid. Navigable voxels were then built by filtering the rasterized voxels down to smooth, walkable ones and eroding them. Watershed partitioning took place after this, and the catchment basins' contours were traced with some filtering; all of this was then triangulated/tessellated to generate the new nav mesh. However, the filtering step had some issues due to noise in the render geometry when rasterizing it into the voxel grid. The designers were expecting a smooth navigation mesh, but the automatic generation scheme wouldn't allow for this and instead created undesired artifacts over certain uneven terrain. The designers had to be given enough power to control the generation scheme to create what they wanted while still auto-generating the nav mesh. Since tools to lay down triangles in the level editor were readily available, they took advantage of this and re-purposed the tool, allowing designers to make custom override polygons. After the rendered voxels had been filtered against the scene geometry, the custom override polys were injected: the voxels within the NPC's height above and below were keyed out and then re-rendered. This smoothed out the noise in the rendered mesh, giving designers the nav mesh they expected. The same ability could be re-purposed to give designers the freedom to leave certain areas without a nav mesh by not re-rendering after the voxels had been cleared. This afforded designers more control while maintaining the auto-generation of the nav mesh, which was key as Insomniac aimed to push mesh generation to run-time.
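The first step of that pipeline - rasterizing render geometry into a voxel grid - can be sketched very crudely by marking every cell a triangle's bounding box touches. Real Recast does proper triangle-cell clipping, slope filtering, erosion, watershed partitioning and contouring on top of this, so treat the snippet (and its made-up sizes) purely as an illustration of the voxel grid:

```python
# Very crude voxelization sketch: mark every grid cell touched by a triangle's
# bounding box. Recast's real rasterization clips triangles against cells and
# tracks height spans; this only illustrates the "geometry -> voxel grid" step.

CELL = 0.5                                  # illustrative cell size in meters

def voxelize(triangles):
    solid = set()
    for tri in triangles:
        xs = [p[0] for p in tri]
        zs = [p[1] for p in tri]
        x0, x1 = int(min(xs) // CELL), int(max(xs) // CELL)
        z0, z1 = int(min(zs) // CELL), int(max(zs) // CELL)
        for gx in range(x0, x1 + 1):
            for gz in range(z0, z1 + 1):
                solid.add((gx, gz))
        # A later filtering pass would drop steep/noisy cells and erode by NPC radius,
        # and custom override polys would clear and re-key cells here.
    return solid

floor = [((0.0, 0.0), (2.0, 0.0), (0.0, 2.0)),
         ((2.0, 0.0), (2.0, 2.0), (0.0, 2.0))]
print(len(voxelize(floor)))                 # 25 cells covering the 2m x 2m floor patch
```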
In order to do this, the team is now pushing for a solution that punches holes around obstacles in the nav mesh, as well as automatically generating a nav mesh for any dynamic areas of the level that have settled. They want NPCs to pick up on newly added segments of the level and navigate into them. To cut down the cost, they will use render geometry to create the nav mesh itself. Also, in order to keep this processing on the SPUs, they will break the world into 8x8 meter tiles and chunk the NPCs into 3 or 4 categories. There will then be a few different nav mesh dimensions, getting rid of all the tolerance computation currently being done. The plan is to generate one tile at a time on the SPU.
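The bookkeeping for that plan - mapping a world position to its 8x8 meter tile and an NPC to one of a few size categories - is simple to sketch; the category thresholds below are invented just to show the lookup:

```python
# Hypothetical tile/category lookup for run-time nav mesh generation:
# positions map to 8x8 meter tiles, NPC radii map to one of a few mesh dimensions.

TILE_SIZE = 8.0                              # meters, as described in the talk

def tile_of(x, z):
    return (int(x // TILE_SIZE), int(z // TILE_SIZE))

SIZE_CATEGORIES = [0.5, 1.0, 2.0]            # illustrative NPC radii per nav mesh dimension

def category_of(npc_radius):
    for i, limit in enumerate(SIZE_CATEGORIES):
        if npc_radius <= limit:
            return i
    return len(SIZE_CATEGORIES) - 1           # clamp the very largest NPCs to the last mesh

print(tile_of(23.0, 5.0))                    # (2, 0): the tile to (re)generate on the SPU
print(category_of(0.4), category_of(1.8))    # 0 and 2: which nav mesh dimension to query
```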
In Conclusion
With navigation evolving so quickly in the gaming industry, there is a lot left to figure out and even more to aspire to for next-gen consoles. Insomniac worked to make sure their game and level designers were given plenty of creative freedom to do as they wish with the tools and techniques provided, which seems to be a leading trend among most game companies nowadays. Where it used to be about how the available processing power limited the intricacies of game design, it has now become about how much an engine or system can pull off in a title without hindering the game design.
Navigation is growing to simulate human behavior better and better as the years go by. Nav meshes are a topic of great interest and discussion in the industry and seem to be the future of AI that makes games far more immersive for players, drawing them into unique encounters with NPCs. One day we'll get there; we're just a few steps away.