The whole process is tracked in another blog of mine here: http://xingdugpu.blogspot.com/
And the final result is the first post.
My first step is to build a GPU ray tracer. With such a basic framework in place, I could later build a path tracer or a photon-mapping tracer on top of it. All the preparation work is done in this part, such as how to upload the scene to the GPU using a texture, and how to display a texture in the viewport.
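The scene-in-a-texture idea can be sketched like this: each object's parameters are flattened into RGBA float texels so a shader can fetch them by index. This is a minimal illustration in Python with made-up sphere data, not the post's actual layout.

```python
import numpy as np

# Hypothetical scene: each sphere is packed into two RGBA texels so a
# shader could fetch it by index -- texel 0 holds center.xyz + radius,
# texel 1 holds the material color. The layout is an assumption.
spheres = [
    {"center": (0.0, -1.0, 3.0), "radius": 1.0, "color": (1.0, 0.0, 0.0)},
    {"center": (2.0,  0.0, 4.0), "radius": 1.0, "color": (0.0, 0.0, 1.0)},
]

def pack_scene(spheres):
    """Pack sphere data into a float32 RGBA 'texture' (2 texels per sphere)."""
    texels = []
    for s in spheres:
        cx, cy, cz = s["center"]
        texels.append((cx, cy, cz, s["radius"]))  # texel 0: geometry
        r, g, b = s["color"]
        texels.append((r, g, b, 0.0))             # texel 1: material
    return np.asarray(texels, dtype=np.float32)

tex = pack_scene(spheres)
print(tex.shape)  # (4, 4): two RGBA texels per sphere
```

The resulting array would then be uploaded as a floating-point texture and indexed with integer texel fetches in the shader.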
Here's a demo showing the result of the simple GPU ray tracer.
The next step is to turn it into a path tracer. The algorithm is actually simpler for path tracing: for each hit point, whether it is diffuse, reflective, or refractive, exactly one secondary ray is generated. The only difference lies in the BRDF, so the code is more uniform and better suited to a GPU implementation. A rough result can be seen in the following images.
Yet the refraction is not correct, not only because I used a low maximum depth, but also because no Fresnel reflection is included. To make it right, Fresnel reflection is added. Depth of field is also included by changing the camera model and doing distributed ray tracing, which comes essentially for free in path tracing. The images below show the result after adding depth of field and Fresnel reflection.
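Both additions are small. A common way to get the Fresnel reflectance is Schlick's approximation, and depth of field follows from a thin-lens camera that jitters the ray origin over an aperture while aiming at the focal plane. This sketch assumes those standard formulations; the parameter names (`ior`, `aperture`, `focal_dist`) are illustrative and not from the post.

```python
import math
import random

def fresnel_schlick(cos_theta, ior=1.5):
    """Fraction of light reflected at a dielectric boundary (Schlick's
    approximation); ior=1.5 is a typical value for glass."""
    r0 = ((1.0 - ior) / (1.0 + ior)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def thin_lens_ray(pixel_dir, aperture=0.1, focal_dist=5.0):
    """Thin-lens camera: jitter the origin on the lens disk, then aim the
    ray at the point where the original ray crosses the focal plane."""
    focal_point = tuple(focal_dist * d for d in pixel_dir)
    while True:  # uniform point on the unit disk by rejection sampling
        lx, ly = random.uniform(-1, 1), random.uniform(-1, 1)
        if lx * lx + ly * ly <= 1.0:
            break
    origin = (aperture * lx, aperture * ly, 0.0)
    d = tuple(focal_point[i] - origin[i] for i in range(3))
    ln = math.sqrt(sum(x * x for x in d))
    return origin, tuple(x / ln for x in d)

print(round(fresnel_schlick(1.0), 3))  # 0.04: head-on, glass reflects ~4%
print(round(fresnel_schlick(0.0), 3))  # 1.0: grazing angle, fully reflective
```

In a path tracer the Fresnel term decides (or weights) the choice between the reflected and refracted secondary ray, and the lens jitter falls out of the same per-sample randomness that drives everything else, which is why depth of field is essentially free here.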
The last step is tuning the color. The gathering process accumulates radiance rather than raw RGB values, so a gamma correction is applied here to produce softer images, as shown below:
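The final display step can be sketched as a clamp plus gamma encode per channel. A gamma of 2.2 is assumed here, since the post does not state the exponent used.

```python
def to_display_byte(radiance, gamma=2.2):
    """Clamp linear radiance to [0, 1], gamma-encode, map to 0..255.
    gamma=2.2 is an assumed, typical display gamma."""
    c = min(max(radiance, 0.0), 1.0)
    return int(255.0 * c ** (1.0 / gamma) + 0.5)

print(to_display_byte(0.5))  # mid-grey radiance maps to ~186, not 128
```

Without this step, mid-range radiance values display far too dark, which is why the raw images look harsher than the gamma-corrected ones.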
Other available resources about the project are listed here.
The paper's my favorite.