Friday, August 3, 2012

Cube sample

The most important thing today is that I finished the cube sampling part.

A cube consists of 6 surfaces that meet at sharp edges and corners, which makes it hard to construct a level set field. I've actually done several experiments creating level sets from discrete meshes like triangle meshes, but I failed every time because I didn't know how to deal with the discontinuity.

This time I'm using a trick to avoid the discontinuity. By using spherical surfaces on the corners and cylindrical surfaces on the edges, I can create correct level set data for a "soft" cube.

This is important because in most cases the scene is just a box, and I need this cube sample to construct all the solid particles.

Now everything new has pretty much been finished. The only thing left is to assemble all the parts together. Yet SIGGRAPH is approaching, so I might not have enough time to finish this project. Anyway, I'll try my best.


Here's an image of the cube sample. The test scene for me would be a large spherical water drop falling into a cube. Or maybe a water cube falling into a sphere. I'm so excited it's almost done!

Solid particle sample finished

The good news is that I finally understood the whole sampling process today.

I've read this paper around 10 times to finally figure out the main sampling process. I really wish the authors had explained the basic process more instead of focusing only on their contribution.

Solid particles are sampled only once, at the beginning of the simulation. Afterwards the solid particles do not move, or move only with the solid object. The paper mentions a velocity for solid particles: that is the ghost velocity, which is only used in calculating the viscosity force for fluid particles near the solid surface. In the integration step there is no real velocity for solid particles (as long as the solid object does not move).

Because the relaxation process is really, really slow for fluid volume samples, and because all the fluid and solid sampling only has to be done once per simulation, I decided to write a file exporter/importer. I can run the sampler once with a relatively high relaxation iteration count, getting a better distribution of all the particles, and export the particles to a file. Later, when I'm testing other parts, I can just import the initial particle state from the generated file instead of sampling again.

Now the biggest problem is related to projection. In the sampling process, a random sample is supposed to be projected onto the surface, which is described by a level set. Yet when I tried to use the level set grid to do the projection, I ended up with a crappy result. I think that's a fault in my implementation. Hopefully I'll fix it tomorrow, because I don't want to support analytic implicit surfaces only.

I'll post a picture of all the kinds of particles once I've fixed the existing problems.

Thursday, August 2, 2012

Ghost SPH Sample Finished

Finished the sampling part of ghost SPH.

The sampling process takes a few steps, as follows. The input is a level set field which describes the shape of the fluid.

1. Sample the surface, namely those voxels where the level set changes sign.

2. Relax the surface particles. This yields a better (blue-noise) distribution of surface particles.

3. Sample the interior of the fluid, using the surface particles as initial seeds.

4. Apply volume relaxation. This is an extremely slow process: multiple iterations, and within each iteration a neighbor search has to be applied to every particle.

5. Sample air particles. Similar to the volume sample: using the surface particles as seeds, sample air particles outside the surface within a single smoothing-kernel layer, with the help of the level set.

The result so far looks like this:

The green particles are ghost particles, and the blue ones are fluid particles.

The next step would be porting all the calculation parts from my previous SPH simulator into this new one.

My understanding of the paper might not be 100 percent correct; I still have a few points of confusion. I've contacted the authors recently, and hopefully they can clear up my questions.

I'll keep this updated.

Tuesday, July 31, 2012

Preliminary result about Ghost SPH

I started re-writing my SPH simulator a few days ago, and the desired new feature is ghost SPH, based on Schechter and Bridson's new paper: Ghost SPH for Animating Water
http://www.cs.ubc.ca/~rbridson/docs/schechter-siggraph2012-ghostsph.pdf

A main problem with SPH lies in the density gathering.

In the real world, water is almost incompressible, which means the density should be almost the same at every sample point in the water. In simulation, however, the density of each particle may vary within a wide range. This leads to conspicuous artifacts like y-stacking.

Though generally speaking the result still looks like water, the details can't satisfy me when I look closely. After all, it's the details that make a result stand out.

To eliminate the problem, there are two important things to do: correct the density gathering and re-model the pressure calculation.

1. Correct the density gathering. This is the core idea of the ghost SPH paper. By using an extra layer of ghost particles, we can eliminate the density deficiency for particles near the surface. The paper also discusses how to initialize the particles, which is rarely covered in other SPH papers and is exactly what I need. The technique used for initializing the SPH particles is Poisson disk sampling, which arranges the particles with a blue-noise distribution. The sampling technique is based on another of Bridson's papers: Fast Poisson Disk Sampling in Arbitrary Dimensions
http://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph07-poissondisk.pdf

2. Re-model the pressure calculation, using the Tait equation from the WCSPH paper (whose performance can be improved by implementing PCISPH). The Tait equation generates a pressure proportional to (rho/rho0)^7, so correct density gathering is a prerequisite; otherwise the pressure force would be too powerful and drive the simulator into an unstable state.

Now I've pretty much finished the sampling part. The particle sampling consists of 4 parts: surface sample, surface relaxation, volume sample, volume relaxation.

Here are two images of the result of applying the particle sampling to a simple spherical level set grid.




Friday, July 27, 2012

The new SPH simulator

Recently I've been working on re-writing my SPH simulator.

There are lots of problems in the previous version that drastically slow down the simulator. For 125K particles, it takes around 3 minutes to produce a single frame when drawing triangle meshes.

The surface reconstruction part was not written by me, so I didn't know the details. Recently, however, I found out why the previous version was so slow and rewrote the surface reconstruction part, including the marching cubes and OpenGL utilities. The result is a 6x improvement in performance.

Because the GLUT library is pretty old, I chose the GLFW library for this new version. I also modified the neighbor search to support multi-threaded execution. For the surface reconstruction, an anisotropic kernel is calculated for each particle to better describe the density distribution.

Here are some comparison images of a water-crown scene.
1. Drawing elliptic particles only.
2. Drawing triangle meshes with an isotropic kernel, i.e. drawing spheres to represent particles.

3. Drawing triangle meshes with an anisotropic kernel, i.e. drawing ellipsoids to represent particles.

I tried to implement WCSPH but found out my initialization was problematic. Given that there are few papers or blogs discussing the initialization of SPH, I decided to implement Bridson's Ghost SPH paper first. After finishing that paper, WCSPH can be finished with slight changes, and so can PCISPH.

These papers have been on my "to-do list" for a few weeks, but I was too tired after work every day, so progress has been slow. I'll keep this updated.

Also, considering that I'm getting more and more familiar with OpenGL, I'm thinking about re-writing my path tracer and trying to add something new.

Tutorial: installing boost on Visual Studio 2010

Well, because of work I started using the Boost library. It's so handy that I wanted to install it on my own laptop. I followed a video to install the library, and here's the experience I'd like to share.

1. Download the latest boost library: http://www.boost.org/users/download/

2. Unzip the file to a folder, which will be used later as a path in the user configuration in Visual Studio 2010. Here I'll use "D:\boost_1_50_0\" as an example.

3. A large part of the library is header-only and need not be compiled. To get a full build (due to the "geek's nature"), we'll compile the library anyway.

Click "Start"->"All programs"->"Microsoft Visual Studio 2010"->"Visual Studio Tools"->"Visual Studio Command Prompt (2010)"

This command window has the correct environment configuration for VS2010. Now go to the place where you unzipped the Boost library using the "cd" command. Here the location is "D:\boost_1_50_0\", so the command is "cd D:\boost_1_50_0\".

4. Run the command "bootstrap". This gives you an executable called "bjam", which is the build system for Boost.

5. Run command:
bjam toolset=msvc-10.0 variant=debug,release threading=multi link=static 
The parameter settings may vary depending on your own needs.

Note that the compiling process may take more than 20 minutes, so take the time to do something else.

6. Now that we've finished compiling the library, it's time to set up Visual Studio. Go to "C:\Users\%USERNAME%\AppData\Local\Microsoft\MSBuild\v4.0"
Open the Microsoft.Cpp.Win32.user.props file and modify it.

Under the <IncludePath> tag, add the root folder where you extracted the Boost library. Here that is "D:\boost_1_50_0;"

Under the <LibraryPath> tag do a similar thing, but with a different path, pointing at the "\stage\lib" subfolder. Here that is "D:\boost_1_50_0\stage\lib;"

This way you don't have to configure every project to support Boost. As long as you're logged in as this user, the Boost library will always be found without you spending time setting project properties.


Try some example code from the Boost documentation!
Create an empty Win32 console project, add these lines of code to main.cpp, and see the result.
#include <boost/regex.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string line;
    // Matches "Subject:" lines, capturing the subject after any "Re: "/"Aw: " prefixes.
    boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );

    while (std::getline(std::cin, line))   // stops cleanly at end of input
    {
        boost::smatch matches;
        if (boost::regex_match(line, matches, pat))
            std::cout << matches[2] << std::endl;
    }
}

I followed this video tutorial: 
Big thanks to the author, and I hope everyone can benefit from it.

Sunday, May 20, 2012

GPU Path Tracer

Path tracing algorithm is extremely suitable for GPU implementation. So I made a GPU path tracer using C++ and CUDA.

The whole process is tracked in another blog of mine here: http://xingdugpu.blogspot.com/
And the final result is the first post.

My first step was to build a GPU ray tracer. With such a basic framework I can build a path tracer or a photon mapping tracer. All the preparation work is done in this part, like how to upload the scene to the GPU using a texture, and how to display a texture in the viewport.
Here's a demo showing the result of the simple GPU ray tracer.

The next step was to turn it into a path tracer. The algorithm is actually simpler for path tracing: for each hit point, no matter whether it is diffuse, reflective, or refractive, one secondary ray has to be generated. The only difference lies in the BRDF, so it's more uniform and more suitable for GPU implementation. A rough result can be seen in the following images.
Yet the refraction was not correct, not only because I used a low maximum depth, but also because no Fresnel reflection was included. To make it right, Fresnel reflection was added. Depth of field was also included by changing the camera model and doing distributed ray tracing, which is essentially free in path tracing. The images below show the result after adding depth of field and Fresnel reflection.

The last step was tuning the color. The radiance gathering process uses radiance instead of RGB values, so a gamma correction is applied to generate milder images, as below:

Other available resources about the project are listed here. 
The paper's my favorite.