![]()
Our depth of field effect in the game.
A lens camera can only focus perfectly at one depth. Everything in front of that depth, in the near field, is out of focus: each point is blurred to a circle of confusion (a.k.a. point spread function) whose radius increases toward the camera. Everything past that depth, in the far field, is also out of focus, with radius increasing toward infinity. When the circle of confusion radius is less than about half a pixel, it is hard to notice that points are out of focus, so the depth region around the plane of focus is called the focus (a.k.a. mid) field. Technically, "depth of field" is a distance specifying the extent of the focus field. In the industry, however, a "depth of field" effect is one that replaces the infinite depth of field of the common computer graphics pinhole camera with the small, finite depth of field of a real lens camera.
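For readers who want to connect this to optics, the circle of confusion follows from the thin-lens model. Here is a minimal sketch; the function name, units, and sign convention are illustrative assumptions for this post, not necessarily what the sample code uses:

```cpp
#include <cmath>

// Signed circle-of-confusion RADIUS for a thin lens.
// Sign convention (an assumption for this sketch): positive in the near
// field, negative in the far field, zero at the plane of focus.
//   z           distance from the lens to the shaded point
//   focusDist   distance to the plane of focus
//   focalLength lens focal length
//   aperture    lens aperture diameter   (all in the same units, e.g. meters)
float signedCoCRadius(float z, float focusDist, float focalLength, float aperture) {
    // Thin-lens CoC diameter: aperture * focalLength * (focusDist - z)
    //                         / (z * (focusDist - focalLength));
    // halved here to give a radius.
    return 0.5f * aperture * focalLength * (focusDist - z)
         / (z * (focusDist - focalLength));
}
```

For example, a 50 mm f/2 lens focused at 4 m gives a positive radius for points nearer than 4 m, growing toward the camera, and a negative radius for points beyond it, growing in magnitude toward infinity, matching the near/far field behavior described above.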

The diagram below shows how the algorithm works. The implementation in our sample code computes the signed CoC radius from depth. One can also write those values directly during shading. See the book chapter and our sample code below for full details.
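To make the final compositing step concrete, here is a hedged sketch of how a premultiplied-alpha near-field buffer (like the one shown in the screenshots below) composites over the resolved mid/far color; the struct and function names are assumptions for illustration, not the sample code's API:

```cpp
struct Color4 { float r, g, b, a; };

// Composite the blurry near-field buffer over the already-resolved mid/far
// color. Because the near field stores PREMULTIPLIED alpha (rgb are already
// scaled by coverage a), the "over" operator is a single multiply-add per
// channel and correctly handles partial coverage at the near field's soft edges.
Color4 nearFieldOver(Color4 nearPremult, Color4 background) {
    const float k = 1.0f - nearPremult.a;
    return { nearPremult.r + k * background.r,
             nearPremult.g + k * background.g,
             nearPremult.b + k * background.b,
             nearPremult.a + k * background.a };
}
```

Storing the near field premultiplied is what lets a half-covered soft edge blend smoothly instead of producing the hard silhouettes visible in Gilham's method below.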
The code linked below contains some changes from the original. It performs 4x downsampling in each direction during blurring to speed up the gather operations. We disabled one of the occlusion tests within the far field to reduce artifacts from this downsampling, at the expense of introducing some glowy halos in the far field; use 2x downsampling and uncomment that test to avoid this. We added support for guard bands to reduce artifacts at image borders and to work with other techniques (such as our Alchemy AO '11 and SAO '12 algorithms). Small general performance improvements appear throughout. The code has also been updated to compile under version 10 of the open source G3D Innovation Engine, although you need not compile it, since we include a binary.
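Guard bands work by allocating the intermediate render targets larger than the visible frame, so that blur gathers near the border read real shaded pixels instead of clamped ones. A minimal sketch of the tap-coordinate handling, assuming a guard band of `guard` pixels on every side (the helper name and clamping policy here are illustrative, not taken from the sample code):

```cpp
#include <algorithm>

// With a guard band, the intermediate buffers span
// [-guard, visibleSize - 1 + guard] along each axis. A gather tap inside that
// extended range reads a real shaded pixel; only taps beyond it fall back to
// edge clamping, so border artifacts shrink as the guard band grows.
int clampTapToGuardBand(int tap, int visibleSize, int guard) {
    return std::min(std::max(tap, -guard), visibleSize - 1 + guard);
}
```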
Here's a video result showing how the effect looks in motion:
Here are some high-resolution shots that help show what the effect is doing. These were taken in Crytek's Sponza model with the far field blur disabled and relatively strong near field blur to highlight the algorithm's strengths. There is FXAA antialiasing but no bloom in these images.
![]()
Pinhole camera input buffer.
![]()
The output of our implementation of Gilham's method. Note the sharp edges on the foreground pillar.
![]()
The output of our method. Note the soft edges on the foreground pillar and smooth near-to-focus transition on the side walls.
![]()
The "blurry near field" buffer with premultiplied alpha.
![]()
The "blurry far field" buffer described in the diagram above. This has 1/16 the number of pixels of the input and output.
[VVDoFDemo.zip](http://graphics.cs.williams.edu/papers/DepthOfFieldGPUPro2013/VVDoFDemo.zip) (130 MB)
Anyone interested in our technique may also be interested in some alternatives, including:
- Scheuermann and Tatarchuk's Improved Depth-of-Field Rendering
- David Gilham's Real-Time Depth-of-Field Implemented with a Post-Processing only Technique
- Valient's depth of field with bokeh effect in Killzone Shadow Fall
- McIntosh et al.'s Efficiently Simulating the Bokeh of Polygonal Apertures in a Post-Process Depth of Field Shader
- John White and Colin Barré-Brisebois' More Performance!
- Gotanda's Star Ocean 4: Flexible Shader Management and Post-processing
- Sousa's accurate iris shapes in CryENGINE 3 Graphics Gems
- Kefei Lei and John F. Hughes' A Physically Plausible Algorithm for Rendering Depth of Field Using Few Samples per Pixel from I3D 2013
- Schedl and Wimmer's Simulating Partial Occlusion in Post-Processing Depth of Field Methods (also in GPU Pro4)
I patched the code on Sept. 22, 2013 to fix normalization of the far field radius - Morgan
