Wednesday, February 27, 2013

Graphical Advancements...





As we proceed...


So as I sat at home during Reading Week spending way too much time figuring out what to write about for this blog, I logged on to Facebook as most university students looking for inspiration do. Guess what? I found it! A post on my news feed drew my attention to Sony's Official PlayStation 4 announcement. 

This blog post is based on the specifications outlined for the next-gen console, specifically what's brewing in the graphics department and what I think about all of it. We will also talk about what the PS4 can and cannot do and compare it to a couple of other gaming consoles. After all, the Xbox 720 is coming out soon, isn't it? We'll cover gamers' expectations as well as where the PS4 will deliver the most value for Sony's ever-so-loyal customers.


A Brief History

PSOne, PlayStation 2, PS3...PS4. 


I'll do you one better...

With the long-awaited PS4 just about ready to hit the shelves, we slowly begin to see the evolution of gaming technology through Sony's eyes. Graphically advanced in so many ways, the PS4 now steps into the limelight and attempts to show consumers how dedicated the company is to providing fans with the most value for their big money. But it's important to see the history that has brought the PS4 to its place, already ahead of the competition, amongst today's next-gen consoles. The journey was long but well worth it, as Sony has grown far beyond the walls that technologically limit consoles nowadays and has become a gaming giant in what seems like a fortnight. But no matter where you are in life, it is always important to remember where you came from, and so I present the evolution of PlayStation...





As you can clearly tell, PlayStation has come a long way. So now that we have honored the past, let's move on to the future of gaming, and where better to begin than with the graphical aspects of the PS4?






The Core is Key

If it isn't apparent already, there is no design out for the PS4. Sony's PlayStation announcement was released before any blueprints for the PS4's design came into the hands of the public. Of course there are mock-ups and fake designs out there, but as far as we know the PS4 could be shaped like a bagel. However, Sony did reveal the DualShock 4, the controller for the new console. The controller features motion sensing through a 'light bar' that interacts with the camera, improved Sixaxis motion control, a touchpad and a "share" button for PlayStation Network interaction. The light bar, interestingly enough, changes colours to identify players and help the camera keep track of individual player motion.

Enough talk about the only piece of meat Sony has revealed thus far; let's move on to the juicy graphical specifications of this beast. Before we do, let me clarify that the specifications I mention on this blog are based on information collected from multiple websites, including TechRadar, VGLeaks, Kotaku and Digital Trends. This is just to ensure that my sources are plausible, detailed and realistic projections. More information about the PS4, including its launch date and hard drive specs, should be out when E3 rolls around.

I think it's pretty awesome that Sony has already codenamed the PS4 'Orbis'. The Xbox 720 is codenamed 'Durango', but more on that later. Keep these names in mind; I like them a lot more than the repetitive console names because they give the platforms a personality and are a lot cooler. Without further ado...






Orbis will be clocked at 1.6 GHz and will be packing EIGHT...that's right, I said EIGHT x86 processor cores. WOO! Sporting 18 Radeon GCN compute units, Orbis is ahead of the game with super-fast GDDR5, even with only 4GB of RAM assigned to its graphics core. Furthermore, Orbis will pack 1.84 teraflops of raw processing power and is expected to have a Blu-ray drive. Its 64-bit processor is based on AMD's "Jaguar" line, which is generally built for mobile products. So game developers, having been constrained by a market dominated by dual-core systems, have something to look forward to with the PS4. This also means that developers moving from PC to console game development will find it easier to adapt, since ports between the two will take less time to optimize. Sony surprises us yet again with a low-power core bundled in to help process tasks in the background.
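
If you're curious where that 1.84 teraflops number comes from, it falls straight out of the GPU arithmetic. Here's a quick back-of-the-envelope sketch; note that the ~800 MHz GPU clock and the 64 ALUs per compute unit are assumptions drawn from the usual GCN leaks, not official Sony figures:

```python
# Back-of-the-envelope peak-FLOPS estimate for Orbis' GPU. The ~800 MHz
# clock and 64 ALUs per GCN compute unit are assumptions from the usual
# leaks, not official Sony figures.
compute_units = 18          # Radeon GCN compute units
alus_per_unit = 64          # shader ALUs per compute unit
flops_per_cycle = 2         # a fused multiply-add counts as 2 ops
gpu_clock_hz = 800e6        # assumed GPU clock (not the 1.6 GHz CPU clock)

peak = compute_units * alus_per_unit * flops_per_cycle * gpu_clock_hz
print(f"{peak / 1e12:.2f} TFLOPS")   # -> 1.84 TFLOPS
```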

Okay so that's all well and good, but what does this mean? What's all this information you're throwing at me and how does it make the PS4 better? Well I sure am glad you asked...






Bark and Bite

Does the Orbis live up to the hype and what has really changed since the PS3? It's certain that the PS4 can talk the talk...but can it really walk the walk?

Let's begin by focusing on Orbis' additional processor. Anyone with a PS3 can tell you that download times for games are painfully long and that not all downloads can be completed in the background. Orbis fixes this by using the additional processor to handle the download while the player begins playing the portion of the game that has already arrived. This works a lot like live streaming, similar to the way YouTube plays a video once it is partially downloaded. This neat feature is exciting for gamers who don't enjoy the long wait and would rather begin their adventure while the rest of the game continues to download in the background.

Furthermore, the eight processor cores and high-speed GDDR5 RAM mentioned earlier let Sony maximize the system's data pipe: faster RAM and more of it. This in turn allows developers to create bigger textures, larger levels, more complex A.I., and so on. Unifying the main system and video memory gives the console flexibility over time; if the GPU runs out of memory, the main system's share can simply pick up the slack.

The new Blu-ray drive will significantly shorten loading and installing times, and USB 3.0 ports make it an altogether well-rounded system with fast connectivity options for networking and peripherals.

The PS4 blows the Wii U out of the water and can be considered better than the Xbox 720 in many respects. The PS3 did the same thing against the Xbox 360 and succeeded without a Red Ring of Death. Sony's impressive innovation is taking over the console market as Orbis continues to out-do the known specifications of Durango (which admittedly has the cooler name of the two - the D is not silent). At this point in its development, the argument can be made that the PS4 doesn't stack up fantastically against the graphical prowess and performance of a PC, which would be cheaper to simply upgrade piece by piece, from graphics card to memory and hard drive. Of course the PS4 isn't as impressive when compared to the PC, but it is important to note how far the PS4 has come from its predecessors. So when it is stated that the PS4 is a "next-gen" console, it has the means to back up the claim based on the progression from where it was to where it is now.






Food for thought

As a proud owner of the PSOne, PlayStation 2 and PlayStation 3, I have been rooting for Sony the whole way; I guess my parents just did a good job of raising me.

Let's discuss the graphics and games. 

With highly anticipated titles like Killzone: Shadow Fall and Watch Dogs on the horizon, I take a look at the graphical advancements and I don't see all that much of a difference. Take a look at the following videos, which feature gameplay from Killzone and Watch Dogs as they attempt to show off the graphical prowess of Orbis.









Now that you've seen the PS4's capabilities, what do you think? Not all that different from the PS3, I'd say. But we must remember that it's not always about the graphics; the PS4 has so much more to offer. Orbis' hardware is meant to handle much more than a robust graphics system running tons of shaders. When we measure consoles by their graphics, or by how good games look on their respective platforms, we get sidetracked by superficial impressions and end up judging the book by its cover. What am I trying to get at?

Orbis is particularly great because of its untamed raw processing power. It can do so much in comparison to its predecessors, and that is the mark of a true next-gen console. So expect impressive world scales, smoother and more seamless animations, as well as brilliantly crafted artificial intelligence. There's going to be far more to do in the scope of gameplay, and replay value will shoot through the roof. With all this content, it's no wonder that games coming out for the PS4 will be available through digital download. Look forward to the games that push Orbis' boundaries and really manage to surprise us with groundbreaking gameplay features. Personally, I can't wait for God of War 4 to come out. It's going to be EPIC.






In Conclusion

The PS4 is a beast of a console with impressive graphical specifications and a lot more to offer the gaming market than many next-gen consoles. Of course, most of our expectations are speculation and hope for what Sony promises the PS4 will be, but I'm certainly looking forward to this wonderful machine exceeding my expectations and delivering more than just impressive graphics.

An interesting remark about next generation consoles came up on a forum amidst my research:

"The graphical gameplay of next generation consoles usually evolves to what the last generation's cut scenes looked like."

By that theory, the PS4's cutscenes might just look like motion pictures...woah.







Saturday, February 23, 2013

Depth of Field...





That's deep!

So let's talk Depth of Field. What is Depth of Field? How does it work? Where is it used in games? As Depth of Field grows in prominence in the world of gaming, we begin to see how effective it is, how often it is used and in what capacity. Adapted from the science of photography, depth of field is a post-processing effect used to illustrate, emphasize and create dramatic effect in photography, movies, shows and games. Turn on the television and you're bound to see this technique used in some capacity. It is popular and arguably one of the most notable and intuitive gifts from the world of photography. But as with all effects, there are those who disagree with its effectiveness in video games and believe the gaming industry could do without it.

This blog aims to explore the science of the effect, what's behind it all, how it's used and why some people just can't get along with it.


Depth of Field Explained

Depth of Field refers to the range of distance that appears acceptably sharp and varies based on three factors:

1. Aperture Size

2. Focal Length

3. Subject-to-camera distance

Everything immediately in front of or behind the focusing distance gradually loses sharpness and blurs out. Depth of field is relative; it can be either shallow or deep.

The aperture is the adjustable hole in the lens through which light passes. To understand aperture size, the term Circle of Confusion must be explained. When light rays bounce off a point on an object, they scatter in several directions. This happens in the shape of a cone which travels to the camera lens. Once the rays reach the lens, they are bent to converge at a single point on the focusing plane. This is the part of the object that is "in focus". Based on where the various rays of light hit the focusing plane, a circle is formed whose size is relative to the distance of the object from the image plane. This circle is known as the Circle of Confusion.




There is a permissible circle of confusion which dictates the area of true focus on the focusing plane. Any circle of confusion larger than this permissible circle is automatically out of focus, whereas any circle smaller than the permissible one is considered in focus. These are used to determine the depth of focus, which is directly correlated to the depth of field. The aperture size then determines how much light is allowed onto the focusing plane. In short, increasing the aperture size decreases the depth of focus and therefore the depth of field, while decreasing the aperture size increases the depth of focus and, with it, the depth of field.
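
To make the circle of confusion concrete, here's a minimal sketch of the standard thin-lens approximation in Python. The function and sample values are my own illustration, but the formula is the usual photographic one:

```python
def circle_of_confusion(f, n, focus_dist, subject_dist):
    """Blur-circle diameter for a point at subject_dist when the lens is
    focused at focus_dist (thin-lens approximation; all lengths in mm).
    f = focal length, n = f-number (bigger aperture = smaller n)."""
    return (f * f) / (n * (focus_dist - f)) * \
           abs(subject_dist - focus_dist) / subject_dist

PERMISSIBLE = 0.03  # a typical permissible circle for 35 mm film, in mm

for d in (2000, 5000, 10000):  # subject distances in mm
    c = circle_of_confusion(f=50, n=2.8, focus_dist=5000, subject_dist=d)
    print(d, round(c, 3), "in focus" if c <= PERMISSIBLE else "out of focus")
```

Notice how the point at the focus distance produces zero blur, while points in front of and behind it blow past the permissible circle.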

At this point, let's talk about focal length. Focal length is a measurement of how strongly a lens focuses light. It is measured from the center of an optical lens to the focusing plane while the lens is focused at infinity. The focal length effectively changes the subject's reproduced image size within the area covered by the lens, known as the Field of View. In summary: the shorter the focal length (a wider lens), the deeper the depth of field; the longer the focal length (a narrower lens), the shallower the depth of field.




Finally, subject-to-camera distance is the distance from the camera's lens to the in-focus subject. Simply put, the larger the distance between the subject and the camera, the deeper the depth of field will be. Inversely, the smaller the distance between the subject and the camera, the shallower the depth of field will be.
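
To see all three factors interact, here's a hedged sketch using the standard hyperfocal-distance formulas from photography (sample values are illustrative):

```python
def dof_limits(f, n, s, c=0.03):
    """Near and far limits of acceptable sharpness (all lengths in mm).
    f = focal length, n = f-number, s = subject distance,
    c = permissible circle of confusion."""
    h = f * f / (n * c) + f                     # hyperfocal distance
    near = h * s / (h + (s - f))
    far = h * s / (h - (s - f)) if h > (s - f) else float("inf")
    return round(near), round(far)

print(dof_limits(f=50, n=1.8, s=3000))   # wide aperture: shallow DOF
print(dof_limits(f=50, n=16,  s=3000))   # small aperture: deep DOF
print(dof_limits(f=50, n=4,   s=1000))   # close subject: shallow DOF
print(dof_limits(f=50, n=4,   s=10000))  # distant subject: deep DOF
```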

Notice the focus on aperture size and the detailed explanation of the circle of confusion and focusing plane. This is because the best way to adjust the depth of field is by adjusting the aperture size. Now that the science is out of the way, let's move on to how Depth of Field is used in games.







Gaming Applications

Depth of field is often used as a clever way of pointing the player to where they need to go. In cutscenes, it is used to draw the player's attention to important characters while less important ones are blurred in the background. A common technique is to use depth of field to focus the player's attention on specific details and crucial areas in an environment, or even to make clear the path to follow in order to achieve the objective.

Games like Call of Duty and Far Cry effectively use depth of field to focus the player's view on their specified target when zooming in and aiming down the sights. This ensures that the player can home in on the kill by blurring the unnecessary clutter around the field of focus and limiting the player's range of sight to the area around the gun's barrel.






Sometimes depth of field is used to drive players' focus away from certain areas in the game and keep them focused on their current path. This is true of games like The Witcher 2, where depth of field is effective in discouraging unnecessary exploration.






In games like Second Life, known for beautiful landscapes, depth of field can be used to focus the player's attention on, let's just say, the finer things the game has to offer.






Another important application of depth of field can be seen in Assassin's Creed, where most executions involve spying on the assassin's target from amidst crowds of citizens while waiting for the opportune moment to strike. In this situation, depth of field is used to blur out the throng of people and focus in on the Templar target who must be assassinated.






Beyond these applications lie numerous ways to weave depth of field effectively into game mechanics to enhance the player's experience. Most games utilize depth of field in simpler ways, such as blurring distant and hard-to-reach areas to discourage players from attempting to reach places not meant to be visited. Other games use depth of field to point players in the right direction, to highlight the path meant to be taken or to suggest the manner in which an objective is meant to be completed. Most gamers play through tons of these scenarios without realizing how well the developers have managed to incorporate amazing post-processing effects into their solid core of gameplay.



Issues on the table & Conclusion

A lot of gamers believe that depth of field is ugly and unnecessary in gaming. These are the same people who don't particularly appreciate the beauty of post-processing and turn all effects off to enjoy the game in its raw form. While I am not trying to criticize these people, it is sometimes important to note the trouble that goes into implementing these effects and to appreciate the developers for taking the time to make the game look cleaner, crisper and simply better.





Time and time again, the fact that some companies do it right while others may not is brought to the table. Let's look at a clear example of how it can be used badly. In S.T.A.L.K.E.R.: Clear Sky, depth of field kicks in whenever the player reloads their gun; everything but the gun is blurred. This sucks for two very valid reasons: you want to know who you're going to be aiming for once your gun is cocked and ready for another barrage of bullets, and you need to know who's shooting you and what you're going to do about it. Games where effects such as depth of field aren't seamlessly woven into gameplay give post-processing a bad name.

Besides this, the argument is made on several forums that our eyes naturally focus on what we're looking at, inherently blurring everything around our point of focus. Some gamers believe that depth of field should only be used in non-interactive cutscenes, while others argue that such effects hurt their eyes and cause headaches. I can sympathize with this, but having seen both sides of the fence I have to say that I would take a game that does depth of field right in a heartbeat rather than shutting the effect down altogether.

In conclusion, depth of field is a positively wonderful post-processing effect when used in moderation to enhance gameplay. It is easy to overstep the boundary and overdo it, in essence creating a headache for gamers and potential fans. The true gems are the games that manage to integrate depth of field in a versatile manner without bringing it to the player's attention. This can be done, and done well, as demonstrated by several companies out there.

At the end of the day, give game developers a chance to wow you. If they fail, well then you can say you told them so. 








Friday, February 22, 2013

Interior Mapping...





Room with a view

While browsing the internet searching for something interesting to talk about for the week when the terrific snow storm cancelled class, I came across a really interesting post-processing technique known as Interior Mapping. The concept behind Interior Mapping is simple and the algorithm isn't all too hard to understand because the physics behind it all makes sense. In this blog, I'll go into more detail about the technique and introduce games that could make use of this effect for simulating entire cities and breathing life into monumental structures. My information on Interior Mapping comes from a host of sources, but the most useful of these is a paper that was selected for the CGI conference in 2008 and can be found here


What is Interior Mapping?

Interior Mapping is a real-time shader technique which renders the interior of a building from the perspective of the viewer (the camera's position). The reason it is new, interesting and unique is that it does so without actually modelling or storing the interior of the building. The positions of floors and walls behind the windows are calculated using raycasting in the pixel shader, and buildings are modelled as usual, with no extra geometry for Interior Mapping. Additionally, the number of rooms rendered does not influence the framerate or memory usage (surprisingly).

Every room is lit and textured and may have furniture and/or animated characters based on your preference. There's no extra memory being used and very little additional asset creation, as most features are simply duplicated or made to look similar (containing similar assets - clock, table, sink, etc.). Overall, this technique is useful for adding more depth and detail to buildings in games such as MMORPGs with large virtual cityscapes.

Most of the time, expansive city landscapes are far too large to model with detailed buildings, let alone with modelled and decorated interiors to make the environment seem more realistic. Take, for example, an utterly massive virtual city such as that in Second Life. Tell me, do you feel it would be practical to account for every window of every building in the game world without a drop in framerate? Interior Mapping intends to solve this problem by introducing a simple new technique that builds off of traditional methods and uses an algorithm to significantly increase the graphical quality of buildings in real-time applications.







Before Interior Mapping

Admittedly, Interior Mapping is not used industry-wide, considering its recent development and lack of adoption. Most commonly, building geometry has been kept simple while window textures are applied to surfaces, making them appear flat by simply changing the colours of surface elements. The details never appear to really be there, and often the colouring/lighting is off despite changes in time of day, weather or external light sources.

A good example of this technique is its application in Spider-Man: Web of Shadows, where textures are simply glued onto building surfaces, causing the structures to lack definition and detail. At most, some textures will be lit while others won't as the game world goes from day to night.







Bump mapping sought to solve this problem by lighting such details correctly. Horizon mapping goes further and shows the shadows of these details, while normal mapping stores the normals of texture elements instead of their heights.







Unfortunately, none of these techniques correctly visualizes the parallax effect when the viewer's perspective changes, which is where displacement mapping comes in: it accounts for surface details by rendering their height field in correct perspective. Of course, mipmaps and block maps can be used to reduce the level of detail of building geometry the further away it is from the viewer. Distant objects may be replaced by impostors, or complete groups of buildings merged into single blocks with height-field detail. Again, none of these really accounts for detail inside the buildings.

Nowadays, games use techniques like reflection maps, which make it impossible for the viewer to peer into building windows, or diffuse textures for objects relatively close to the window. Displacement maps, calculated separately from the exterior texture, can handle geometrically correct rooms. The downside is that a displacement map doesn't support surfaces perpendicular to the original polygonal surface, leaving all walls as stretched lines of colour. Block maps can fix this but, again, they cannot render furniture or characters inside the room because they are limited to height-field geometry. The other drawback to this method is that the viewer will see two different rooms when looking through the same window of the same building from two different angles.

I could continue to list techniques, such as generalized displacement mapping, that may solve this problem, but the point remains that for every fix there is another bug in the procedure. Interior Mapping boasts a confident fix to all of these issues, or is at least the most accurate way to portray detailed room interiors without sacrificing framerate.








How does it work?

Interior Mapping attempts to solve all of the aforementioned problems by directly rendering virtual walls through raycasting. The algorithm works out which room the rendered wall belongs to and uses this information to vary the lighting and textures per room. Furniture planes are then used to add furniture and animated characters to the room scene.

There are several benefits to using Interior Mapping, including the ability to add interiors after a city's buildings have already been modelled, even for buildings with curved surfaces. Furthermore, the technique removes the need to model or store the interiors as geometry, as the walls exist only as virtual geometry in the pixel shader. The ray used to determine the colour of each pixel can be intersected in constant time regardless of the number of rooms or buildings. Since the camera direction is taken into account, perspectively correct rooms are rendered within the building when looking at it from the outside. Blending the interior's colours with a reflection map allows for a degree of reflection, enhancing the realism of the building.

Let's take a look at how each component is rendered using raycasting in the GPU.


Ceilings 

Since ceilings occur at regular distances, each ceiling is an infinite plane parallel to the XZ-plane. Interior Mapping efficiently chooses which plane to intersect the ray with for each particular pixel. Every pixel has a position in the 3D world, which is calculated in object space, and the position of the camera is transformed into object space as well. Once the height of the ceiling is known, the ray from the camera to the pixel can be extended to find the position where it hits the ceiling.

The position of this intersection is used to determine a colour for the pixel in question: the x- and z-coordinates of the intersection become the uv-coordinates for a texture read for the ceiling or floor. The coordinates can even be multiplied by a constant to scale the texture to the desired size, or the walls rotated by rotating the object space itself.
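
To make this concrete, here's roughly what that per-pixel calculation looks like, written in plain Python rather than shader code. The names and the three-unit ceiling spacing are my own illustration of the idea, not code from the paper:

```python
import math

CEILING_SPACING = 3.0  # assumed distance between floors (illustrative)

def ceiling_intersection(camera_pos, pixel_pos):
    """Intersect the camera->pixel ray with the nearest ceiling/floor
    plane (a plane of constant y). Both positions are in object space;
    the pixel is assumed to lie strictly between two such planes and
    the ray is assumed not to be horizontal."""
    dx = pixel_pos[0] - camera_pos[0]
    dy = pixel_pos[1] - camera_pos[1]
    dz = pixel_pos[2] - camera_pos[2]
    # Choose the plane above or below depending on the ray's direction.
    if dy > 0:
        plane_y = math.ceil(pixel_pos[1] / CEILING_SPACING) * CEILING_SPACING
    else:
        plane_y = math.floor(pixel_pos[1] / CEILING_SPACING) * CEILING_SPACING
    t = (plane_y - camera_pos[1]) / dy       # ray parameter at the plane
    hit = (camera_pos[0] + t * dx, plane_y, camera_pos[2] + t * dz)
    uv = (hit[0], hit[2])                    # x and z double as uv coords
    return hit, uv

hit, uv = ceiling_intersection((0.0, 5.0, -10.0), (1.0, 4.0, 0.0))
print(hit, uv)   # the texture is sampled at uv (optionally scaled)
```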

The following diagram goes hand-in-hand with the explanation of the algorithm.





Walls

Now that we know how ceilings and floors are calculated, we also know how walls are calculated. The only difference is that XY- and YZ-planes are used instead. In sum, the intersection of the ray is calculated with three different planes, and of the three resulting intersections, the one closest to the camera is used. So when using various textures for the ceilings, floors and walls, the texture corresponding to the closest intersecting plane is applied. This even enables the interiors to work with curved geometry.
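
Extending the same sketch to walls just means doing the plane intersection along all three axes and keeping the nearest hit. Again, plain Python for illustration, with an assumed uniform room size:

```python
import math

ROOM_SIZE = 3.0  # assumed spacing of walls/ceilings along each axis

def next_plane_t(cam, pix, axis):
    """Ray parameter where the camera->pixel ray crosses the next plane
    of constant coordinate along `axis`, strictly beyond the pixel."""
    d = pix[axis] - cam[axis]
    if abs(d) < 1e-9:
        return math.inf                      # ray parallel to these planes
    if d > 0:
        plane = math.floor(pix[axis] / ROOM_SIZE) * ROOM_SIZE + ROOM_SIZE
    else:
        plane = math.ceil(pix[axis] / ROOM_SIZE) * ROOM_SIZE - ROOM_SIZE
    return (plane - cam[axis]) / d

def interior_surface(cam, pix):
    """Of the three candidate intersections, the closest to the camera
    is the surface actually seen; its texture gets sampled."""
    labels = {0: "wall (YZ plane)", 1: "ceiling/floor (XZ plane)",
              2: "wall (XY plane)"}
    ts = {axis: next_plane_t(cam, pix, axis) for axis in (0, 1, 2)}
    axis = min(ts, key=ts.get)
    return labels[axis], ts[axis]

print(interior_surface((0.0, 5.0, -10.0), (1.0, 4.0, 0.0)))
```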

To combine Interior Mapping with exterior textures, in essence creating our fully modelled building, a diffuse texture is created whose alpha channel stores where the windows are. The alpha value determines, as a 1 or a 0, whether the exterior texture or the colour calculated by the Interior Mapping is used. Additionally, the colour of the reflection map can be blended with the colour from the interior map.
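
As a sketch, the compositing step might look like this; the hard alpha threshold and the 0.2 reflectivity are illustrative choices of mine, not values from the paper:

```python
def facade_colour(exterior_rgb, exterior_alpha, interior_rgb,
                  reflection_rgb, reflectivity=0.2):
    """Composite one facade pixel. exterior_alpha comes from the diffuse
    texture's alpha channel: 1 = opaque wall, 0 = window glass.
    (The 0.2 reflectivity is an illustrative constant of mine.)"""
    if exterior_alpha >= 0.5:            # wall: the exterior texture wins
        return exterior_rgb
    # Window: the raycast interior, tinted by the reflection map.
    return tuple((1 - reflectivity) * i + reflectivity * r
                 for i, r in zip(interior_rgb, reflection_rgb))

print(facade_colour((0.6, 0.6, 0.6), 1.0, (0.3, 0.2, 0.1), (0.5, 0.6, 0.9)))
print(facade_colour((0.6, 0.6, 0.6), 0.0, (0.3, 0.2, 0.1), (0.5, 0.6, 0.9)))
```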








Furniture & Animated Characters

An extra plane parallel to the surface of the building, a fixed distance to the inside, is used to display furniture. Again, this plane does not actually exist in geometry but is defined in the pixel shader. It is intersected with the ray from the camera to the pixel, and if the intersection is closer than any of the intersections with the interior walls, the furniture is shown. An animated texture allows the objects on the furniture plane to move, though only short animations will be effective. Render-to-texture allows more complex animations to appear on the furniture plane, but this requires rendering an animated 3D character separately to a texture each frame and using that texture for the furniture plane.
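
The furniture test reuses the same ray: if the ray reaches the furniture plane before it reaches any interior wall, the furniture texture wins. A tiny hedged sketch, assuming for illustration a facade that faces the -z direction:

```python
def furniture_hit_t(cam, pix, facade_z=0.0, depth=1.0):
    """Ray parameter where the camera->pixel ray crosses the furniture
    plane. Assumes (for illustration) a facade facing -z, so the plane
    sits at z = facade_z + depth inside the building."""
    dz = pix[2] - cam[2]
    return (facade_z + depth - cam[2]) / dz

t_furniture = furniture_hit_t(cam=(0, 5, -10), pix=(1, 4, 0))
t_walls = 1.3   # closest wall/ceiling hit, e.g. from the earlier sketch
print("furniture" if t_furniture < t_walls else "interior walls")
```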

Since the furniture plane is not in fact geometrically correct, there is a possibility that the plane will display distortions or seams. The stronger the curvature and the further into the interior the plane is, the stronger the distortion will be. 






In Conclusion

Interior Mapping is a new and exciting technique that can be effectively and efficiently used to add graphical detail to a large, thriving metropolis. Games such as Grand Theft Auto, Saints Row or even MMORPGs like Second Life could use this technique to make their game worlds more vibrant and interesting. The algorithm remains simple enough to implement and proves that a little work can go a long way in the world of post-processing.








Thursday, February 7, 2013

Post-processing in games





Post-Processed is good for you


Nowadays, games use all types and forms of post-processing. From bloom to blur and lens flare, there's a lot that can be done to improve the look of a game with some nifty light-manipulation tools and a strong graphics card used efficiently. It is quite fascinating how many of these ideas come about and how game studios choose to implement them effectively within their own games. There are, of course, those who don't enjoy all the added effects and filters a frame goes through before you see it, which is why games nowadays usually come with the option of turning these graphical settings down or even off. Some people prefer to play games the way they were originally made, with no overdrawn effects clouding their experience, while others like all the shininess and polish game makers strive to deliver. Either way, there is a right and a wrong way of using post-processing, and many arguments on either side of the table.

This blog intends to demonstrate the pros and cons of post-processing while mentioning some big-name titles and how they got it right. We will discuss the various things one can do with a graphics card and how efficient post-processing is depending on the requirements your PC/graphics card meets. A quick Google search turns up hundreds of links to forums where post-processing (especially HDR and bloom) is discussed extensively, along with varying opinions on whether or not it needs to be present in video games.







Effect and Affect

Today, video games apply all sorts of visual effects to their scenes after they are rendered by the game engine. Sometimes developers intend to create a particular look or draw the player's attention to certain finer details in an attempt to enhance their gameplay mechanics. However, improvements in graphics rendering technology often result in particular effects being overused. But more on this later; for now, let's talk about a few of the more popular post-processing effects and give some practical examples.




Lens Flare

Lens flare occurs when a bright object (usually the sun or a light source) is in the shot. The bright light glares off every piece of glass it passes through on the way to the film or optical receiver, causing the little ghostly chain of circles on an imaginary line from the object through the center of the frame. While some people feel that lens flare is overused in games, others feel that getting rid of it would cause the Coconut Effect: when an unrealistic effect resonates with an audience so much that its absence would be detrimental to the experience.
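
Real flares come from reflections between lens elements, but games typically fake them in screen space. A hedged sketch of that common trick, with illustrative constants: ghost sprites are simply stepped along the line from the light through the centre of the screen.

```python
def ghost_positions(light_uv, num_ghosts=5, spacing=0.45):
    """Screen-space flare 'ghost' positions: step along the vector from
    the light through the screen centre (uv space, centre = (0.5, 0.5)).
    num_ghosts and spacing are illustrative knobs."""
    vx = 0.5 - light_uv[0]
    vy = 0.5 - light_uv[1]
    return [(light_uv[0] + vx * spacing * i, light_uv[1] + vy * spacing * i)
            for i in range(1, num_ghosts + 1)]

# A sun in the top-right corner scatters ghosts toward the bottom-left.
print(ghost_positions((0.9, 0.8)))
```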








Film Grain


Film grain adds a certain amount of graininess to the displayed image in order to evoke the look of older film. This effect usually sits subtly over the whole frame and isn't always noticeable. The image above does contain film grain, but it is not all too noticeable unless seen up close.
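
The core of the effect is tiny: nudge each pixel by a small random amount (real implementations overlay an animated noise texture on the GPU). A minimal sketch, with an illustrative intensity:

```python
import random

def add_film_grain(pixel, intensity=0.08):
    """Perturb an RGB pixel (components in 0..1) by a small random
    amount. Real implementations overlay an animated noise texture on
    the GPU; this is the idea in miniature, with an illustrative
    intensity."""
    noise = (random.random() - 0.5) * 2 * intensity
    return tuple(min(1.0, max(0.0, c + noise)) for c in pixel)

print(add_film_grain((0.4, 0.4, 0.45)))
```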








Bloom

The bloom effect produces fringes of light around very bright objects in the scene. This effect is one of the most popular in games nowadays and is possibly the most blatant. Some people enjoy the bloom effect, while others prefer to turn it down a great deal because they believe it doesn't do visual justice to detailed game assets.
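
Conceptually, bloom is a three-step pipeline: keep only the brightest parts of the frame, blur them, then add the blur back on top. A minimal sketch of the first and last steps on a single pixel, with illustrative thresholds:

```python
def bright_pass(pixel, threshold=0.8):
    """Keep only the portion of the colour above the threshold; darker
    pixels contribute nothing to the bloom layer."""
    return tuple(max(0.0, c - threshold) for c in pixel)

def add_bloom(pixel, blurred_bright, strength=1.5):
    """Add the (blurred) bright-pass back on top of the original frame.
    In a real renderer blurred_bright comes from Gaussian-blurring the
    bright-pass image, usually at reduced resolution."""
    return tuple(min(1.0, c + strength * b)
                 for c, b in zip(pixel, blurred_bright))

lamp = (1.0, 0.95, 0.7)
print(add_bloom(lamp, bright_pass(lamp)))   # -> (1.0, 1.0, 0.7)
```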






Tone Mapping

Tone mapping is the remapping of on-screen colours into a specific palette to create a unique effect. Games such as MadWorld use a monochromatic filter which turns the display black and white. There is also the sepia filter, which adds a brown tint to the display in order to simulate the look of old film; this is often used in flashbacks or dreams within games. Some games choose to use more vivid colours or something esoteric to draw attention to areas of the game.
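
Both of those filters are simple per-pixel colour remaps. Here's a sketch using the standard luma weights and a commonly circulated sepia matrix; the exact numbers vary from game to game:

```python
def luminance(r, g, b):
    # Standard Rec. 601 luma weights.
    return 0.299 * r + 0.587 * g + 0.114 * b

def monochrome(pixel):
    """Black-and-white remap, MadWorld-style."""
    y = luminance(*pixel)
    return (y, y, y)

def sepia(pixel):
    """Brown-tinted 'old film' remap using a commonly circulated matrix."""
    r, g, b = pixel
    return (min(1.0, 0.393 * r + 0.769 * g + 0.189 * b),
            min(1.0, 0.349 * r + 0.686 * g + 0.168 * b),
            min(1.0, 0.272 * r + 0.534 * g + 0.131 * b))

print(monochrome((0.8, 0.3, 0.2)), sepia((0.8, 0.3, 0.2)))
```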






Cel Shading

Cel shading is a style of computer rendering that imitates the look of hand-drawn artwork and animation. It effectively replaces the shading gradients of conventional rendering with flat colours and shadows based on an artist-specified palette. Cel shading applies to the way the lighting is rendered, and a cel-shaded scene can be as realistically proportioned and textured as any hand drawing.
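
The heart of the technique is quantizing the lighting term into a handful of flat bands instead of a smooth gradient. A minimal sketch:

```python
import math

def cel_shade(intensity, bands=3):
    """Snap a continuous 0..1 lighting term to one of a few flat bands,
    which is the core of the cel-shaded look (band count is a style
    choice)."""
    band = min(bands - 1, math.floor(intensity * bands))
    return band / (bands - 1)

for i in (0.1, 0.45, 0.8, 1.0):
    print(i, "->", cel_shade(i))    # smooth gradient becomes 3 flat steps
```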









Motion Blur

One of my least favorite post-processing effects, motion blur gives the illusion of an object moving faster than it really is. It also helps disguise screen tearing and low frame rates. In a way, it makes the game look like it's being filmed with a camera.
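
A common way to implement it is to average several colour samples along each pixel's screen-space velocity. A hedged sketch, with the texture fetch and velocity buffer stubbed out:

```python
def motion_blur(sample_colour, uv, velocity, num_samples=8):
    """Average colours along the pixel's screen-space velocity vector.
    sample_colour(u, v) stands in for a texture fetch; velocity would
    come from a per-pixel velocity buffer in a real engine."""
    r = g = b = 0.0
    for i in range(num_samples):
        t = i / (num_samples - 1)            # 0..1 along the streak
        cr, cg, cb = sample_colour(uv[0] - velocity[0] * t,
                                   uv[1] - velocity[1] * t)
        r, g, b = r + cr, g + cg, b + cb
    return (r / num_samples, g / num_samples, b / num_samples)

# Toy "frame" whose brightness varies horizontally:
print(motion_blur(lambda u, v: (u, u, u), uv=(0.5, 0.5), velocity=(0.2, 0.0)))
```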








Depth of Field

Depth of field is a far more visually apparent effect where near or far objects blur away depending on the camera's focus. It uses a depth map and is highly effective in concentrating the player's attention on one or more important areas in the scene.
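
In practice, the depth map drives a per-pixel blur radius: zero inside the in-focus range, growing as a fragment's depth drifts away from the focus depth. A minimal sketch with illustrative constants:

```python
def blur_radius(depth, focus_depth, focus_range=2.0, max_radius=6.0):
    """Blur radius (in pixels) for one fragment, driven by the depth
    map: zero inside the in-focus range, growing with distance from it.
    All constants are illustrative, not from any particular engine."""
    spread = max(0.0, abs(depth - focus_depth) - focus_range)
    return min(max_radius, spread)

for d in (5.0, 8.0, 20.0):                   # depths sampled from the map
    print(d, "->", blur_radius(d, focus_depth=6.0))
```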








Gaussian Blur

Often most apparent in video game cutscenes, the Gaussian blur filter can be applied to distant objects to disguise object pop-in. It can also be applied to edges as a form of anti-aliasing, and it can even be overlaid on top of the original frame to simulate bloom.
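
The filter itself is just a weighted average whose weights come from the Gaussian bell curve, usually run separably (one horizontal pass, one vertical). A sketch of the kernel computation:

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights. A separable blur runs this kernel
    once horizontally and once vertically over the image."""
    weights = [math.exp(-(x * x) / (2 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

print([round(w, 3) for w in gaussian_kernel(radius=2, sigma=1.0)])
# -> [0.054, 0.244, 0.403, 0.244, 0.054]
```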







Vignetting

Vignetting is the process of reducing brightness or saturation towards the corners of the screen, emphasizing and drawing interest to the center of the image. Interestingly enough, vignetting is beginning to replace bloom as the most common game effect.
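
The effect is cheap: darken each pixel based on its distance from the screen centre. A minimal sketch with an illustrative strength:

```python
def vignette(pixel, uv, strength=0.6):
    """Darken a pixel based on its squared distance from the screen
    centre (uv in 0..1); corners end up darkest. The strength value is
    illustrative."""
    dx, dy = uv[0] - 0.5, uv[1] - 0.5
    d2 = (dx * dx + dy * dy) / 0.5      # 0 at the centre, 1 at a corner
    return tuple(c * (1.0 - strength * d2) for c in pixel)

print(vignette((0.8, 0.8, 0.8), (0.5, 0.5)))  # centre: unchanged
print(vignette((0.8, 0.8, 0.8), (0.0, 0.0)))  # corner: darkened
```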








Summary

Let's now take a look at a really neat video that summarizes some of the more popular post-processing effects and how they look when practically implemented into video games. 






All of the preceding post-processing effects commonly appear alongside high dynamic range (HDR) rendering in most best-selling games out now. Post-processing is an important part of fine-tuning visual effects to enhance the player's experience. In addition, many games use these effects to cover up blotches or mistakes in their rendering. Some game companies manage to do this effectively, while others don't approach it in the right light and end up creating more of a visual headache for players than an improved visual experience. Let's take a look at who's got it right and how.







Up the Wazoo!

A lot of gamers believe that post-processing is being taken to an unnecessary extent and is overused in many games nowadays. As mentioned earlier, this is why gaming companies now allow their customers to adjust post-processing effects, whether to maximize performance or just to make their games easier on the eyes. Even when graphic designers don't lean on these methods throughout a game, a single poorly executed cutscene with overused post-processing effects can ruin a player's experience. These effects are unique and fun in small doses but can get quite annoying and even take away from the gameplay at times.







A little while ago, some spare time found me immersed in F.E.A.R. 2: Project Origin, which has several post-processing effects applied, including bloom and motion blur to name a few. I enjoyed the story and was impressed with the gameplay, but the motion blur simply took a lot away from my experience. I would have to give my eyes frequent breaks just to readjust my focus and prevent headaches. Luckily, there are external programs and ways to turn off the motion blur, which helped a lot. The other annoying artifact was the film grain, added in an attempt to make the game feel like a movie. I understand this effect, but with movies now in 3D and on Blu-ray it is hardly relevant. I thought it would have been a lot more impressive to show me the bare bones of the graphics and lighting, which at their core are executed very well. I thought the bloom was used well, though at times it could be overkill.

My point behind this bit of a rant is that there has to be moderation with post-processing. There are games that use it efficiently and effectively (Far Cry 3 and Battlefield 3) and other games that butcher it. While there is a wide range of opinion about the true effect of post-processing on gameplay and graphics, it can safely be said that all good things must be used in moderation. At times, overuse can come at the cost of visual clarity or even a framerate drop. Other times, graphical tricks can leave the game looking less realistic in an attempt to be more life-like. As an example, say there was a game where you could see the player's reflection in a car on a sunny day. In some cases, the car would be a perfect mirror of the world, including the player, with every minute detail visible (unlike in real life).

Then there are the games that do it right.







Let's take Left 4 Dead, for example. The game world in Left 4 Dead is colour corrected through post-processing, and this effect is used deliberately to brighten and saturate things like signs on buildings and lights in areas that you are supposed to visit in order to complete the level. There is a film grain effect present, but it is not uniform; it is increased or decreased depending on how much light there is where you are. This creates texture where there is none, so that you do not have to stare at blackness. It also adds to the game's scare factor, which is always a plus. Other effects used well within the game include vignetting and local contrast, employed in instances such as when the player is low on health; when this happens, the level of vignetting is turned up to simulate tunnel vision. To learn more about the visual effects used in Left 4 Dead, go here.

Now, of course, Left 4 Dead isn't the only game to get it right, and there exist several developers who spend insane amounts of time fine-tuning their visual effects to enhance the player's experience. The truth of the matter remains that some get it right and some get it wrong.

A funny emphasis on games that get it wrong...





In Conclusion

HDR rendering is a fascinating piece of visual design and will continue to improve as graphics processors grow more powerful and efficient. Post-processing effects can do wonders for games that use them right and can even help hide some of the minor (or major) flaws in visual design. There remain questions worth thinking about: As games get more and more realistic, is this necessarily a good thing? Do you want to be interactively playing in the real world? Or do you prefer the cartoony and unrealistic look and feel of games with effects such as cel and toon shading?

Of course there are many answers, and each is based on personal opinion, but it is worth considering, or even trying to predict, what the future of gaming graphics is going to be.