Deferred Rendering Pipeline for Three.js
Back in the day... :) This was the first deferred rendering implementation for the popular Three.js framework.
It supported point lights, spot lights, and area lights, as well as basic deferred shadow maps. The pipeline works roughly as follows:
- In a first pass, store depth as post-projection z/w to a floating-point render target.
- The second pass stores view-space normals into another render target.
- In a third pass, render a shadow map for a simple directional light source. This is just plain old shadow mapping.
- Next, render a proxy sphere geometry for each point light in the scene. Inside the fragment shader for this sphere, sample the depth buffer and reconstruct the pixel's view-space position by unprojecting: multiply the NDC position (including the stored z/w) by the inverse projection matrix and divide by w.
- Figure out the light's position in view space and calculate its attenuation with respect to the pixel's view-space position.
- Write the result into the framebuffer. Repeat this step for each light source and accumulate every light's contribution.
- In the last pass, the light buffer is sampled using the UVs of a fullscreen quad.
- The pixel's view-space position is reconstructed again, as described above.
- The occlusion from the shadow map is determined by projecting the reconstructed view-space position into light space. First, multiply the view-space position by the inverse view matrix; the result is the world-space position.
- Multiplying by the light's viewProjectionMatrix then yields the position in the light's clip space.
- The depth of the projected position is compared with the corresponding texel from the shadow map to determine the pixel's occlusion. Standard deferred shadow mapping, so to speak.
- In a last step, compute the directional light's contribution from its direction and the view-space normal stored in the second render target.
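The view-space reconstruction used in the point-light pass (and again in the final pass) can be sketched in plain JavaScript. Note this is an illustrative standalone version, not the actual shader code: matrices here are row-major nested arrays (Three.js itself stores column-major `Matrix4`s), and the function names are made up for the sketch.

```javascript
// Standard OpenGL-style perspective projection matrix (row-major).
function perspective(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  const A = (far + near) / (near - far);
  const B = (2 * far * near) / (near - far);
  return [
    [f / aspect, 0, 0, 0],
    [0, f, 0, 0],
    [0, 0, A, B],
    [0, 0, -1, 0],
  ];
}

// Analytic inverse of the matrix above.
function inversePerspective(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  const A = (far + near) / (near - far);
  const B = (2 * far * near) / (near - far);
  return [
    [aspect / f, 0, 0, 0],
    [0, 1 / f, 0, 0],
    [0, 0, 0, -1],
    [0, 0, 1 / B, A / B],
  ];
}

function mulMatVec4(m, v) {
  return m.map(r => r[0] * v[0] + r[1] * v[1] + r[2] * v[2] + r[3] * v[3]);
}

// What the light-pass fragment shader does: take the NDC x/y of the pixel
// plus the stored depth (z/w), multiply by the inverse projection matrix,
// and divide by w to recover the view-space position.
function reconstructViewPos(ndcX, ndcY, storedDepth, invProj) {
  const v = mulMatVec4(invProj, [ndcX, ndcY, storedDepth, 1]);
  return [v[0] / v[3], v[1] / v[3], v[2] / v[3]];
}

// Round trip: project a known view-space point, then reconstruct it.
const proj = perspective(Math.PI / 3, 16 / 9, 0.1, 100);
const invProj = inversePerspective(Math.PI / 3, 16 / 9, 0.1, 100);

const viewPos = [1.5, -0.75, -10]; // a point in front of the camera
const clip = mulMatVec4(proj, [...viewPos, 1]);
const ndc = [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];

console.log(reconstructViewPos(ndc[0], ndc[1], ndc[2], invProj)); // ≈ [1.5, -0.75, -10]
```

The round trip demonstrates why storing only z/w is enough: together with the pixel's screen position and the inverse projection matrix, it fully determines the view-space point.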
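The attenuation step is a design choice rather than something the pipeline dictates; one reasonable sketch (an assumption, not the original implementation) is a smooth quadratic falloff that reaches exactly zero at the proxy sphere's radius, so the light never shows a hard edge at the sphere boundary:

```javascript
// Hypothetical attenuation for the point-light pass. Both positions are
// in view space; `radius` is the radius of the light's proxy sphere.
function pointLightAttenuation(lightPosView, pixelPosView, radius) {
  const dx = pixelPosView[0] - lightPosView[0];
  const dy = pixelPosView[1] - lightPosView[1];
  const dz = pixelPosView[2] - lightPosView[2];
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  // Quadratic falloff, clamped so it is 1 at the light and 0 at the radius.
  const t = Math.min(Math.max(1 - dist / radius, 0), 1);
  return t * t;
}

console.log(pointLightAttenuation([0, 0, -5], [0, 0, -5], 4)); // 1 (at the light)
console.log(pointLightAttenuation([0, 0, -5], [0, 0, -9], 4)); // 0 (at the radius)
```

Matching the falloff to the proxy geometry matters in deferred shading: any pixel the sphere doesn't cover never runs the light shader at all, so the analytic falloff must hit zero by then.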
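The shadow test in the final pass reduces to a depth comparison once the position has been carried into the light's clip space. A minimal sketch, assuming a [-1, 1] NDC depth range and a small constant bias against shadow acne (both assumptions of this sketch):

```javascript
// lightClipPos: world position multiplied by the light's viewProjectionMatrix.
// shadowDepth: the depth value fetched from the shadow map at the projected
// UVs (the UVs themselves would be ndc.xy * 0.5 + 0.5).
function shadowOcclusion(lightClipPos, shadowDepth, bias = 0.005) {
  // Perspective divide into NDC, then remap z from [-1, 1] to [0, 1]
  // to match the range stored in the shadow map.
  const depth = (lightClipPos[2] / lightClipPos[3]) * 0.5 + 0.5;
  return depth - bias > shadowDepth ? 0 : 1; // 0 = in shadow, 1 = lit
}

console.log(shadowOcclusion([0, 0, 0.2, 1], 0.9)); // 1 (lit: 0.6 < 0.9)
console.log(shadowOcclusion([0, 0, 0.8, 1], 0.5)); // 0 (occluded: 0.895 > 0.5)
```

This is the "standard deferred shadow mapping" mentioned above: the only deferred-specific part is that the tested position comes from the depth-buffer reconstruction instead of a vertex shader output.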
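The directional contribution in the last step is a standard Lambert N·L term using the view-space normal from the second render target. A sketch, assuming both vectors are normalized and expressed in the same (view) space:

```javascript
// Lambert diffuse term: cosine of the angle between the surface normal
// and the direction toward the light, clamped at zero for back faces.
function directionalDiffuse(normal, lightDir) {
  const ndotl =
    normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
  return Math.max(ndotl, 0);
}

console.log(directionalDiffuse([0, 0, 1], [0, 0, 1]));  // 1 (facing the light)
console.log(directionalDiffuse([0, 0, 1], [0, 0, -1])); // 0 (facing away)
```

Multiplying this term by the shadow occlusion from the previous step gives the final shadowed directional lighting.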