Friday, February 22, 2013

Interior Mapping...





Room with a view

While browsing the internet for something interesting to talk about during the week the terrific snow storm cancelled class, I came across a really interesting shader effect known as Interior Mapping. The concept behind Interior Mapping is simple, and the algorithm isn't all that hard to understand because the physics behind it makes sense. In this blog, I'll be going into more detail about the technique and introducing games that make use of this effect for simulating entire cities and breathing life into monumental structures. My information on Interior Mapping comes from a host of sources, but the most useful of these is a paper that was selected for the CGI conference in 2008 and can be found here


What is Interior Mapping?

Interior Mapping is a real-time shader technique that renders the interior of a building from the perspective of the viewer (the camera's position). What makes it new, interesting and unique is that it does so without actually modelling or storing the interior of the building. The positions of floors and walls behind the windows are calculated using raycasting in the pixel shader, and buildings are modelled normally, without any extra geometry to support Interior Mapping. Additionally, the number of rooms rendered does not influence the framerate or memory usage (surprisingly).

Every room is lit and textured and may have furniture and/or animated characters, based on your preference. There's no extra memory being used and very little additional asset creation, as most features are simply duplicated or made to look similar (containing similar assets: clock, table, sink, etc.). Overall, this technique is useful for adding more depth and detail to buildings in games such as MMORPGs with large virtual cityscapes.

Most of the time, expansive city landscapes are far too large to model every building in detail, let alone to also model and decorate every interior to make the environment seem more realistic. Take, for example, an utterly massive virtual city such as the one in Second Life. Tell me, do you feel it would be practical to account for every window of every building in the game world without a drop in framerate? Interior Mapping intends to solve this problem by introducing a simple new technique that builds off of traditional methods and uses an algorithm to significantly increase the graphical quality of buildings in real-time applications.







Before Interior Mapping

Admittedly, Interior Mapping is not used industry-wide, considering its recent development and lack of adoption. Most commonly, building geometry has been kept simple while flat textures are applied to windows, simply changing the colours of surface elements. The details never appear to really be there, and the colouring/lighting is often off despite changes in time of day, weather, or external light sources.

A good example of this approach is Spider-Man: Web of Shadows, where textures are simply glued onto building surfaces, causing the structures to lack definition and detail. At most, some textures will be lit while others won't as the game world goes from day to night.







Bump mapping sought to solve this problem by lighting details correctly. Horizon mapping goes further and shows the shadows of these details, while normal mapping stores the normals of texture elements instead of their heights.







Unfortunately, none of these techniques correctly visualizes the parallax effect when the viewer's perspective changes, which led to displacement mapping, a technique that renders a height field of surface details with correct perspective. Of course, mipmaps and block maps can be used to reduce the geometric detail of buildings the further they are from the viewer. Distant objects may be replaced by impostors, or complete groups of buildings may be merged into single blocks with height-field shapes. Again, none of these really accounts for detail inside the buildings.

Nowadays, games use techniques like reflection maps, which make it impossible for the viewer to peer into building windows, or diffuse textures for objects relatively close to the window. Displacement maps, calculated separately from the exterior texture, are also commonly used to render geometrically correct rooms. The downside is that a displacement map doesn't support surfaces perpendicular to the original polygonal surface, leaving all walls as stretched lines of colour. Block maps can fix this but, again, they cannot render furniture or characters inside the room because they are limited to height-field geometry. The other drawback of this method is that the viewer will see two different rooms when looking through the same window of the same building from two different angles.

I could continue to list techniques, such as generalized displacement mapping, that may solve this problem, but the point remains that for every fix there is another bug in the procedure. Interior Mapping boasts a confident fix to all these issues, or is at least considered the most accurate way to portray detailed room environments without sacrificing framerate.








How does it work?

Interior Mapping attempts to solve all of the aforementioned problems by directly rendering virtual walls through raycasting. The algorithm figures out which wall or ceiling is visible at each pixel and uses this information to vary the lighting and textures per room. Furniture planes are then used to add furniture and animated characters to the room scene.

There are several benefits to using Interior Mapping, including the ability to generate Interior Maps after a city's buildings have already been modelled, or to generate them for buildings with curved surfaces. Furthermore, the technique removes the need to model or store the interiors as geometry, as the walls only exist as virtual geometry in the pixel shader. The ray used to determine the colour of each pixel can be intersected in constant time, regardless of the number of rooms or buildings. Since the camera direction is taken into account, perspectively correct rooms are rendered within the building when looking at it from the outside. Blending the interior's colours with a reflection map allows for a degree of reflection, enhancing the realism of the building.

Let's take a look at how each component is rendered using raytracing within the GPU.


Ceilings 

Since ceilings occur at regular distances, each ceiling is an infinite plane parallel to the XZ-plane. Interior Mapping efficiently chooses which plane to intersect the ray with for each particular pixel. Every pixel has a position in the 3D world, which is calculated in object space, and the position of the camera is transformed into object space as well. Once the height of the ceiling is known, the ray from the camera to the pixel can be intersected with it to find the position where the ray hits the ceiling.

The position of this intersection is used to determine a colour for the pixel in question: the x- and z-coordinates of the intersection are used as uv-coordinates for a texture read for the ceiling or floor. The coordinates can even be multiplied by a constant to scale the texture to the desired size, or the walls can be rotated by rotating the space of the object itself.
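To make the ceiling/floor step concrete, here is a minimal Python sketch of the idea (the real technique runs per pixel in a shader; the function name, the 3.0-unit ceiling spacing, and the plain-tuple points are my own illustrative assumptions, not from the paper):

```python
import math

CEILING_SPACING = 3.0  # assumed object-space distance between floors


def ceiling_intersection(camera, pixel):
    """Intersect the camera->pixel ray with the nearest ceiling/floor plane.

    Planes are parallel to the XZ-plane at multiples of CEILING_SPACING.
    Returns the hit position and the (u, v) coordinates for the texture read.
    """
    dy = pixel[1] - camera[1]
    if dy > 0:
        # Looking upward: the ray hits the ceiling above the pixel.
        plane_y = math.ceil(pixel[1] / CEILING_SPACING) * CEILING_SPACING
    else:
        # Looking downward: the ray hits the floor below the pixel.
        plane_y = math.floor(pixel[1] / CEILING_SPACING) * CEILING_SPACING
    t = (plane_y - camera[1]) / dy  # ray parameter at the chosen plane
    hit_x = camera[0] + t * (pixel[0] - camera[0])
    hit_z = camera[2] + t * (pixel[2] - camera[2])
    # The x- and z-coordinates of the hit double as uv-coordinates.
    uv = (hit_x % 1.0, hit_z % 1.0)
    return (hit_x, plane_y, hit_z), uv
```

For example, a camera at height 4 looking slightly downward through a window pixel at height 3.5 sees the floor plane at height 3, a little way into the room.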

The following diagram goes hand-in-hand with the explanation of the algorithm.





Walls

Now that we know how ceilings and floors are calculated, we also know how walls are calculated. The only difference is that the XY- and YZ-planes are used instead. In summary, the intersection of the ray is calculated with three different plane families, and of the three resulting intersections, the one closest to the camera is used. So, when using various textures for the ceilings, floors and walls, the texture corresponding to the closest intersecting plane is used. This even enables interiors to work with curved geometry.
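The "closest of three intersections" idea can be sketched as follows, extending the ceiling case to all three plane families (again an illustrative Python sketch with assumed names and a shared 3.0-unit spacing, starting the ray at the surface pixel and marching into the building):

```python
import math

SPACING = 3.0  # assumed distance between walls and between floors


def _nearest_plane_t(origin, direction, axis):
    """Ray parameter t at the first grid plane crossed along one axis."""
    d = direction[axis]
    if d == 0:
        return math.inf  # ray parallel to this plane family
    if d > 0:
        plane = (math.floor(origin[axis] / SPACING) + 1) * SPACING
    else:
        plane = (math.ceil(origin[axis] / SPACING) - 1) * SPACING
    return (plane - origin[axis]) / d


def interior_hit(camera, pixel):
    """Return which surface the ray hits first, and how far along the ray."""
    direction = tuple(p - c for p, c in zip(pixel, camera))
    ts = {name: _nearest_plane_t(pixel, direction, axis)
          for axis, name in enumerate(('wall_x', 'ceiling', 'wall_z'))}
    # The closest of the three intersections decides which texture is sampled.
    name = min(ts, key=ts.get)
    return name, ts[name]
```

Because each pixel only ever tests one plane per family, the cost is constant no matter how many rooms the building has, which is where the constant-time claim comes from.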

To combine our Interior Mapping with exterior textures, in essence creating our fully modelled building, a diffuse texture is created that uses its alpha channel to store where the windows are. The alpha value, either 1 or 0, determines whether the exterior texture or the colour calculated by the Interior Mapping is used. Additionally, the colour of the reflection map is combined with the colour from the Interior Map.
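A rough sketch of this compositing step, in Python (the function name, the 0.5 alpha cutoff, and the 0.3 reflectivity blend factor are my own illustrative assumptions):

```python
def shade_pixel(exterior_rgba, interior_rgb, reflection_rgb, reflectivity=0.3):
    """Composite exterior texture, interior colour, and reflection map.

    The exterior texture's alpha marks windows: 1 = opaque wall, 0 = window.
    """
    r, g, b, a = exterior_rgba
    if a >= 0.5:
        # Opaque wall: just use the exterior texture colour.
        return (r, g, b)
    # Window: blend the raycast interior colour with the reflection map.
    return tuple((1 - reflectivity) * i + reflectivity * rf
                 for i, rf in zip(interior_rgb, reflection_rgb))
```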








Furniture & Animated Characters

An extra plane parallel to the surface of the building is used to display furniture a fixed distance inside. Again, this plane does not actually exist in geometry but is defined in the pixel shader. It is intersected with the ray from the camera to the pixel and, if this intersection is closer than any of the intersections with the interior walls, the furniture is shown. An animated texture allows the objects on the furniture plane to actually move, but only short animations will be effective. Render-to-texture allows more complex animations to appear on the furniture plane, but this requires rendering an animated 3D character separately to a texture each frame and using this texture for the furniture plane.
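The furniture-plane test fits the same pattern as the earlier plane intersections. A small sketch, assuming a facade at z = 0, an interior extending toward +z, and a made-up 1.0-unit furniture depth:

```python
FURNITURE_DEPTH = 1.0  # assumed distance from the facade into the room


def furniture_t(pixel_z, dir_z, facade_z=0.0):
    """Ray parameter at the furniture plane (parallel to the facade)."""
    if dir_z <= 0:
        return float('inf')  # ray is not entering the building
    return (facade_z + FURNITURE_DEPTH - pixel_z) / dir_z


def choose_surface(t_walls, pixel_z, dir_z):
    """Show the furniture texture only if it is hit before any interior wall."""
    if furniture_t(pixel_z, dir_z) < t_walls:
        return 'furniture'
    return 'interior'
```

Here `t_walls` would be the closest-intersection distance from the three wall/ceiling plane tests; whichever surface the ray reaches first wins the pixel.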

Since the furniture plane is not in fact geometrically correct, there is a possibility that the plane will display distortions or seams. The stronger the curvature and the further into the interior the plane is, the stronger the distortion will be. 






In Conclusion

Interior Mapping is a new and exciting technique that can be used effectively and efficiently to add graphical detail to a large, thriving metropolis. Games such as Grand Theft Auto, Saints Row, or even MMORPGs like Second Life could use this technique to make their game worlds more vibrant and interesting. The algorithm remains simple enough to implement and proves that a little shader work can go a long way in real-time rendering.







