Friday, September 28, 2012

Level set vs Color field

For the mesh extraction part of particle-based fluid simulation, you have to create a surface field before actually executing the mesh extraction function (e.g. marching cubes). Generally there are two ways of doing this: a level set or a color field.

For the level set, the idea is to splat each particle into the region around it; when several particles assign different values to the same sample point, a min operator can be employed to keep the correct (closest) distance.

However, this can be slow. The level set field of a single particle is defined everywhere: no matter how far away the sample point is, there is always a value for it. So you have to decide on a radius out to which each particle splats. Theoretically, the larger the splatting radius, the closer the final result is to the "ideal" field.
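As a minimal sketch of the splat-with-min idea for spherical particles (the grid layout and all names here are my own, not from any particular codebase):

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

// Hypothetical uniform grid storing one signed-distance sample per cell,
// initialized to a large positive value before splatting.
struct Grid {
    int nx, ny, nz;
    float h;                 // cell size; cell (i,j,k) sits at (i*h, j*h, k*h)
    std::vector<float> phi;
    float& at(int i, int j, int k) { return phi[(k * ny + j) * nx + i]; }
};

// Splat one spherical particle: visit every cell within splatRadius of the
// center and keep the minimum signed distance (negative inside the sphere).
void splatParticle(Grid& g, const Vec3& c, float particleRadius, float splatRadius) {
    int i0 = std::max(0, (int)std::floor((c.x - splatRadius) / g.h));
    int i1 = std::min(g.nx - 1, (int)std::ceil((c.x + splatRadius) / g.h));
    int j0 = std::max(0, (int)std::floor((c.y - splatRadius) / g.h));
    int j1 = std::min(g.ny - 1, (int)std::ceil((c.y + splatRadius) / g.h));
    int k0 = std::max(0, (int)std::floor((c.z - splatRadius) / g.h));
    int k1 = std::min(g.nz - 1, (int)std::ceil((c.z + splatRadius) / g.h));
    for (int k = k0; k <= k1; ++k)
        for (int j = j0; j <= j1; ++j)
            for (int i = i0; i <= i1; ++i) {
                float dx = i * g.h - c.x, dy = j * g.h - c.y, dz = k * g.h - c.z;
                float d = std::sqrt(dx * dx + dy * dy + dz * dz) - particleRadius;
                float& cell = g.at(i, j, k);
                cell = std::min(cell, d);  // min operator resolves overlapping splats
            }
}
```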

Another huge problem is that this is only cheap for spherical particles. I'm using anisotropic (ellipsoidal) particles, and calculating the distance from an arbitrary point to an ellipsoid is not a trivial task. With my current configuration, a single frame costs more than 10 minutes just to create the level set field. The problem lies in the solving part: with ellipsoids, the distance computation relies on an iterative method, and that is the performance bottleneck.
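For reference, this is the standard closest-point formulation (not my code): finding the distance from a point $\mathbf{p}$ to an ellipsoid with semi-axes $a_1, a_2, a_3$ means solving

$$\min_{\mathbf{q}}\ \|\mathbf{p}-\mathbf{q}\| \quad \text{s.t.} \quad \sum_{i=1}^{3}\frac{q_i^2}{a_i^2}=1 .$$

The Lagrange conditions give $q_i = a_i^2 p_i / (a_i^2 + t)$, so the constraint collapses to a single nonlinear equation in $t$,

$$\sum_{i=1}^{3}\frac{a_i^2\,p_i^2}{(a_i^2+t)^2}=1 ,$$

which has no closed form and must be solved by Newton or bisection iterations at every sample point. That per-sample iteration is exactly the bottleneck.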

So in order to compute a field faster, I turned to the "color field". A level set is a signed distance field, while the color field is defined as 1 at a particle center and 0 outside the particle; for positions within the particle's radius, the value is decided by a smoothing kernel (e.g. the cubic B-spline kernel).
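A minimal sketch of evaluating the color field at one sample point (the normalization constant is omitted since only the iso-level matters for surfacing; the names are mine):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Un-normalized cubic B-spline falloff: 1 at q = 0, 0 at q >= 1,
// where q = |x - center| / r.
float cubicKernel(float q) {
    if (q >= 1.0f) return 0.0f;
    if (q < 0.5f)  return 1.0f + 6.0f * (q * q * q - q * q);
    float t = 1.0f - q;
    return 2.0f * t * t * t;
}

// Color field at sample point x: sum of kernel contributions from every
// particle whose support radius r contains x.
float colorField(const Vec3& x, const std::vector<Vec3>& centers, float r) {
    float c = 0.0f;
    for (const Vec3& p : centers) {
        float dx = x.x - p.x, dy = x.y - p.y, dz = x.z - p.z;
        c += cubicKernel(std::sqrt(dx * dx + dy * dy + dz * dz) / r);
    }
    return c;  // extract the surface at some iso-value, e.g. c = 0.5
}
```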

The only problem with the color field is that it does not satisfy the eikonal equation, which leads to improper values for the normals. For the past few days, I've been thinking about methods that could eliminate or mitigate this.
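For reference, a true signed distance field $\phi$ satisfies the eikonal equation

$$|\nabla \phi| = 1 ,$$

so $\nabla\phi$ is directly a unit surface normal. The color field's gradient magnitude instead varies with the local particle distribution, so normals taken from its (renormalized) gradient can be biased near the surface.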

One of my ideas is to extend the kernel. Right now the kernel is limited to the (-r, r) range, and outside of that all the values are 0. Imagine the level set method working in a similar, kernel-like way: the "kernel" for the level set has no boundary. That's what I mentioned above: no matter how far away the sample point is, there's always a value.

If we could design a kernel without a boundary that still returns values reflecting the definition of the color field, the problem would be improved (not totally solved, since the eikonal equation still remains a problem).
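One candidate, purely my speculation and not something I've validated: a Gaussian is 1 at the particle center, decays smoothly, and is strictly positive at any distance, so it matches the color field definition while having unbounded support:

```cpp
#include <cmath>

// A boundary-free alternative to the compact cubic kernel. The bandwidth
// sigma (an assumed parameter) plays the role of the old radius r.
float gaussianKernel(float dist, float sigma) {
    return std::exp(-0.5f * (dist * dist) / (sigma * sigma));
}
```

In practice the tail is so small beyond a few sigma that you'd still truncate it for performance, which reintroduces a (much softer) boundary.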

If anyone has good ideas about this, don't hesitate to contact me. This could be huge.

Fluid simulation project

During the past few days, I rewrote my fluid simulation project.

I modularized everything in a way similar to the fluid simulation pipeline at DreamWorks. I also modified the mesh extraction part to make it much faster.

The performance right now is 15~20 seconds per frame for 150K particles on a single CPU core. I didn't use the single-core-optimized algorithm because I'm planning to do everything in parallel on the CPU, so in terms of performance this is the worst case. Still, to my satisfaction, it's pretty quick.


Here is my new demo reel, including a clip of a fluid simulation using 125K particles.

Yet this is not a good demo, because:
1. I'm using too small a smoothing radius for the particles, which ends up producing a really bumpy surface; that should not be the case.
2. The SPH method suffers from a severe compressibility problem, which leads to an extra layer in the surface extraction part.
3. The initialization part was kind of wacky. I've done another simulation using the particle sampling method from the Ghost SPH paper I've been working on, and the result is better.


For the 1st and 3rd problems, I've already improved my project to solve them. The OpenGL version images are already there, and I'm planning to render out a Maya version for a better demo.

However, the second problem, compressibility, cannot be solved so easily. I've tried WCSPH, but I still can't find a proper configuration for that implementation, and I also don't think it's a good way to solve the problem: it's a numerical method dedicated to this particular issue, not a physically based one. My original plan was to implement Ghost SPH as a complementary part of my simulator, but that paper lacks elaboration, and even after contacting the author I didn't get a satisfying answer. Now I've started to wonder how they implemented that paper.
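For context, WCSPH gets its weak compressibility from a stiff equation of state; the usual Tait form is

$$p_i = B\left(\left(\frac{\rho_i}{\rho_0}\right)^{\gamma}-1\right), \qquad \gamma = 7 ,$$

where the stiffness $B$ must be large enough to keep density fluctuations around 1%, which in turn forces very small time steps. That stiffness/time-step trade-off is exactly the configuration trouble I ran into.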

The good news is that I'm planning to switch to FLIP for the solver part. FLIP combines the best parts of the Eulerian and Lagrangian methods, and I believe it would be the best solution for me. Hopefully it won't be too hard.
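The core idea, as a sketch (this is my own minimal version, not a full solver): particles carry velocity, the grid handles the pressure projection, and each particle receives only the grid's velocity *change*, which avoids the numerical dissipation of pure PIC:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Collocated velocity grid, kept deliberately simple for the sketch; a real
// FLIP solver would use a staggered MAC grid. All names here are my own.
struct VelGrid {
    int n;                                // n x n x n samples
    float h;                              // cell size
    std::vector<std::array<float, 3>> u;  // one velocity per sample

    const std::array<float, 3>& at(int i, int j, int k) const {
        auto c = [&](int v) { return std::clamp(v, 0, n - 1); };
        return u[(c(k) * n + c(j)) * n + c(i)];
    }

    // Trilinear interpolation of velocity component a at position p.
    float sample(const float p[3], int a) const {
        float x = p[0] / h, y = p[1] / h, z = p[2] / h;
        int i = (int)std::floor(x), j = (int)std::floor(y), k = (int)std::floor(z);
        float fx = x - i, fy = y - j, fz = z - k;
        auto lerp = [](float s, float t, float f) { return s + f * (t - s); };
        float c00 = lerp(at(i, j, k)[a],         at(i + 1, j, k)[a],         fx);
        float c10 = lerp(at(i, j + 1, k)[a],     at(i + 1, j + 1, k)[a],     fx);
        float c01 = lerp(at(i, j, k + 1)[a],     at(i + 1, j, k + 1)[a],     fx);
        float c11 = lerp(at(i, j + 1, k + 1)[a], at(i + 1, j + 1, k + 1)[a], fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }
};

struct Particle { float pos[3]; float vel[3]; };

// The FLIP particle update: each particle keeps its own velocity and only
// adds the grid's change across the pressure solve.
void flipUpdate(std::vector<Particle>& ps,
                const VelGrid& gridOld,   // after particle-to-grid transfer
                const VelGrid& gridNew) { // after pressure projection
    for (Particle& p : ps)
        for (int a = 0; a < 3; ++a)
            p.vel[a] += gridNew.sample(p.pos, a) - gridOld.sample(p.pos, a);
            // pure PIC would instead be: p.vel[a] = gridNew.sample(p.pos, a);
}
```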

Another item on the to-do list is to substitute marching cubes with dual contouring.

BTW, I might implement another version combining PCISPH with WCSPH for comparison. My personal expectation is that FLIP will be better than PCISPH + WCSPH.

In the end, I'm still obsessed with Ghost SPH. If anyone wants to discuss it with me, I'd love to talk.

I've been working on this project for a relatively long time and have tried lots of things, like creating level set fields for ellipsoids (which is extremely time consuming) and converting OBJ files to level sets. Those will be finished after I solve the compressibility problem. In addition, my tracer project has to be postponed, since this project has the highest priority for me.

Wednesday, September 19, 2012

Demo Reel v0.22

Updated the demo reel.

Too much homework recently; I don't have enough time for my personal projects.

I hope I can get more time to update the fluid sim. I made lots of improvements during the summer but haven't integrated them into the reel yet.


Friday, September 14, 2012

New demo for the GPU tracer

The image rendered with tone mapping satisfied me a lot, so I made a new demo for the tracer. I turned off the AntTweakBar and the FPS viewer for less distraction.

Here's the new demo:

Monday, September 10, 2012

Tone mapping

I made a slight change to how my GPU path tracer transfers colors to the display.

Previously I was using gamma correction with gamma equal to 2.2; now I'm using the tone mapping operator proposed by Paul Debevec.
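I won't reproduce the exact operator here, but for reference, the gamma correction side of the comparison is just the standard mapping (a sketch, not my exact shader code):

```cpp
#include <cmath>

// Standard gamma correction from linear radiance to display space.
// The input is assumed to be exposure-scaled into roughly [0, 1];
// values above 1 simply clip, which is why bright lights look "shiny".
float gammaCorrect(float linear, float gamma = 2.2f) {
    float v = std::pow(linear, 1.0f / gamma);
    return v < 1.0f ? v : 1.0f;
}
```

A tone mapping operator instead compresses the high end of the radiance range smoothly, which is why the tone-mapped scene below can use a light almost five times brighter and still avoid clipped highlights.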

The difference is shown below:

In the 1st comparison group I turned off depth of field in order to focus on the color difference. The render time was only 150s, not sufficient for full convergence but good enough for a color comparison.
Image rendered with the tone mapping operator
Image rendered with gamma correction

In the 2nd comparison group I kept all the features on and took 600s for the image to fully converge.

Image rendered with the tone mapping operator

Image rendered with gamma correction

For the gamma correction group I used a light radiance of 16, while for the tone mapping group I used 75.

Personally, I prefer the tone mapping. It's not so glaring and looks way better!

New tracer

Since I got stuck implementing the GPU kd-tree paper, I decided to turn to something else.

I've been reading about HDR and find it pretty interesting, and I'm not satisfied with my previous two tracers. So I want to rewrite my tracer.

The plan looks like this:

Step 1: set up all the OpenGL stuff. Render everything into a frame buffer and display that frame buffer each frame. Setting up the basic scene is necessary for this step, including a camera and the simplest possible ray trace function (it could just return a color based on the pixel coordinates).
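The placeholder ray function for this step could be as trivial as this (a sketch; the signature is my own):

```cpp
struct Color { float r, g, b; };

// Step-1 placeholder "tracer": ignore the scene entirely and return a
// color derived from the pixel coordinates, just to verify that the
// framebuffer display path works end to end.
Color tracePixel(int x, int y, int width, int height) {
    return { (float)x / width, (float)y / height, 0.5f };
}
```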

Step 2: enrich the scene. Set up an object class and a material class, and implement the basic ray trace algorithm, which is easy. The material class should be compatible with HDR, since that's what I'm trying to focus on in this project. The result of this step is a ray-traced image of a simple scene (it could be a Cornell box with a single sphere).
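The core of the basic tracer in this step is ray-primitive intersection; for the single sphere, the standard quadratic solve looks like this (names are mine):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Ray-sphere intersection: solve |o + t*d - c|^2 = R^2 for the smallest
// positive t. Returns a negative value on a miss. d is assumed normalized.
float intersectSphere(const Vec3& o, const Vec3& d, const Vec3& c, float R) {
    Vec3 oc = o - c;
    float b = oc.dot(d);                    // half the linear coefficient
    float disc = b * b - (oc.dot(oc) - R * R);
    if (disc < 0.0f) return -1.0f;          // ray misses the sphere
    float s = std::sqrt(disc);
    float t = -b - s;                       // near hit
    if (t < 0.0f) t = -b + s;               // origin inside the sphere: far hit
    return t;
}
```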

Step 3: build a CPU kd-tree for complicated objects like the Stanford bunny or the armadillo. Replace ray tracing with Monte Carlo path tracing. My material class implementation will have to be better modularized around BRDFs. The result of this step should be a nicely rendered image that can serve as a reference, though the process will be really slow.
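The modularization I have in mind is roughly a BRDF base class the path tracer calls into (my own sketch of the interface, nothing settled yet):

```cpp
struct Vec3 { float x, y, z; };

// A minimal BRDF interface a path tracer could be written against.
// Each material answers two questions: how much light scatters between a
// pair of directions, and how to importance-sample a new direction.
class Brdf {
public:
    virtual ~Brdf() = default;
    // BRDF value for incoming wi, outgoing wo, and surface normal n.
    virtual Vec3 eval(const Vec3& wi, const Vec3& wo, const Vec3& n) const = 0;
    // Draw an outgoing direction from random numbers u1, u2 and report its
    // probability density, so the estimator can divide by it:
    //   L += eval * cos(theta) / pdf * Li.
    virtual Vec3 sample(const Vec3& wi, const Vec3& n,
                        float u1, float u2, float* pdf) const = 0;
};
```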

Step 4: try implementing the GPU kd-tree paper again and move everything onto the GPU, or try implementing photon mapping in the tracer. I haven't thought that far ahead yet.

Hopefully this project will keep me busy for a while. I'll keep the progress updated here.