For the past few days I've been dedicated to solving the self-collision problem in cloth simulation, and here are some insights I've found about self-collision detection:
1. Collisions are generated because the positions of vertices change. So ideally, assuming the cloth starts without any self-collision, a naive collision detection pass needs to be performed whenever any vertex moves.
2. There are two types of collision: continuous collision and static collision.
Continuous collisions are detected by testing a trajectory against a surface, which in the common case is a ray-triangle intersection test: the ray starts at a vertex's position in the last frame and ends at its current position.
Static collisions are detected by testing whether a vertex lies beneath a certain surface, which in the general case means computing the signed distance between a vertex and a triangle.
3. The problem for cloth simulation: cloth is a single sheet of mesh, so there is no well-defined negative distance from a vertex to a triangle. Static collision detection therefore fails: in a case where a vertex is below a triangle, you cannot tell whether it penetrated from above or simply came from below without penetrating.
4. The problem for PBD: as I mentioned before, ideally a naive collision detection pass (using either type of detection) has to be performed once vertices are moved. With PBD a problem arises, because vertex positions are moved in the constraint-resolving pass without performing collision detection per iteration.
So, as is often the case, some vertices get pushed through a surface while constraints are being resolved, and no collision is detected. In the following frame this collision will still be missed: continuous collision detection will not treat it as a collision, and static detection fails because the cloth is too thin to yield a negative distance.
5. Possible solutions: a. GIA (global intersection analysis); b. potential collision constraints.
a. GIA is proposed in this paper. Yet, as mentioned in the paper, the method has limitations in boundary-penetrating cases. I have an idea for improving it: run the flood fill on both edges and surfaces.
b. Potential collision constraints are the idea I came up with over these days. When doing the intersection test, we set a proper threshold for the particle-triangle proximity test and add potential collision constraints for the resolving pass. Such a pair has not collided yet, but since its distance is smaller than the threshold, it may collide during the resolving pass; if it does, the collision will be corrected there. Make sure all self-collisions are resolved before entering the next frame, so that even if collision detection only supports continuous collisions, there won't be any problem.
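To make the potential-collision idea concrete, here is a minimal sketch of the detection pass: a brute-force vertex-triangle proximity loop that collects pairs closer than the threshold. All function names are mine, the interior-projection test is simplified (real code also handles edge and vertex regions), and a real implementation would use a spatial hash or BVH instead of the double loop:

```python
import numpy as np

def barycentric_project(p, a, b, c):
    """Project p onto the plane of triangle (a, b, c); return the barycentric
    coordinates of the projection and the signed distance to the plane."""
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    dist = np.dot(p - a, n)          # signed distance to the plane
    q = p - dist * n                 # projection of p onto the plane
    v0, v1, v2 = b - a, c - a, q - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w), dist

def gather_potential_collisions(verts, tris, threshold):
    """Return (vertex, triangle) index pairs whose distance is below
    `threshold`; each pair becomes a potential collision constraint."""
    pairs = []
    for vi, p in enumerate(verts):
        for ti, (ia, ib, ic) in enumerate(tris):
            if vi in (ia, ib, ic):   # skip the vertex's own triangles
                continue
            bary, dist = barycentric_project(p, verts[ia], verts[ib], verts[ic])
            if all(bc >= 0.0 for bc in bary) and abs(dist) < threshold:
                pairs.append((vi, ti))
    return pairs
```

Each returned pair would then be kept as an inequality constraint (distance at least the threshold) through the solver iterations, so a vertex pushed toward the triangle during the resolving pass gets corrected.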
These are my two ideas, and I'll start an independent project on this. For the first idea I'm not sure how to implement it; there are too many topological issues involved. And for the second idea, I don't yet know how to set a proper threshold.
Good luck to me!
Friday, January 25, 2013
Wednesday, January 23, 2013
Cloth Simulation using PBD
Recently I've been working on a cloth simulation project as a new homework assignment for CIS 563.
It turns out that cloth simulation is harder and thus more interesting than I imagined.
I'm following Matthias Müller's Position Based Dynamics paper for the implementation. I've also done a solid simulation using another of his papers with a similar idea. These ideas are really innovative: they don't make that much physical sense, but they follow physical laws, and most importantly, they are a lot faster than physically based methods like the mass-spring-damper system.
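As a minimal illustration of the position-based approach, here is the standard PBD distance (stretch) constraint projection from the paper, sketched in Python with my own naming:

```python
import numpy as np

def project_distance_constraint(p1, p2, w1, w2, rest_len, stiffness=1.0):
    """Move two particles along their connecting axis until the constraint
    C = |p2 - p1| - rest_len is (partially) satisfied; w1, w2 are inverse
    masses, so corrections are distributed in proportion to them."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12 or w1 + w2 == 0.0:
        return p1, p2                      # degenerate, or both particles pinned
    n = d / dist                           # unit direction from p1 to p2
    corr = stiffness * (dist - rest_len) / (w1 + w2)
    return p1 + w1 * corr * n, p2 - w2 * corr * n
```

Particles with inverse mass w = 0 never move, which is exactly how pinned-point constraints fall out for free in PBD.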
Here's a demo of the cloth simulation. Right now it has stretch, bend, pinned-point, and collision constraints, but self-intersection has not been taken into consideration.
In fact, self-intersection is the most interesting part to me. Because the cloth is just a thin layer of non-closed mesh, it's impossible to define an inside for it: a position would make sense on either side of the cloth.
I'm looking into this problem right now, following these papers:
http://www.cs.ubc.ca/~rbridson/docs/cloth2002.pdf
http://www.cs.ubc.ca/~rbridson/docs/cloth2003.pdf
http://graphics.pixar.com/library/UntanglingCloth/paper.pdf
Hopefully I can find some insights in these papers and improve this project in the following days.
Saturday, November 3, 2012
Fluid mechanics: continuity equation
In order to overcome the problem in FLIP solver, I looked into fluid mechanics.
It turns out the condition for incompressibility is not correct, or rather, it is based on an incorrect assumption.
For a continuous fluid, the governing equation is called the continuity equation.
I'll skip the proof of this equation; plenty of reading material discussing it can be found online. In symbolic form, it can be expressed as:
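For a fluid with density rho and velocity field u, the continuity equation reads:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\;\;\Longleftrightarrow\;\;
\frac{D\rho}{Dt} + \rho\,\nabla \cdot \mathbf{u} = 0,
\qquad
\frac{D}{Dt} \equiv \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla .
```

The familiar incompressibility constraint div u = 0 follows only under the extra assumption that the material derivative of density vanishes.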
Comparing to the incompressibility constraint, we can see that it amounts to assuming the material derivative of density is zero. Yet this assumption does not hold if fluid cells are marked by the particles: temporal incoherence leads to a non-zero material derivative of density.
One big problem with adding this term to the constraint is that we would have to deal with both the time derivative and the spatial gradient on the Eulerian grid.
So one possible solution would be to combine SPH and FLIP in this step: the density carried by each particle provides sufficient information for the material derivative. I'll keep reading and thinking about other possible solutions.
Solution for compressibility problem in Eulerian method
As I mentioned before, the FLIP solver suffers from a compressibility problem. In fact, all grid-based methods may suffer from it.
You might consider me naive, but I'll try to convince you:
Grid-based solvers calculate the velocity field based on two formulas:
the first is the Navier-Stokes equation, the second is the incompressibility constraint.
For a grid solver, all the calculations rest on one simple assumption:
all the fluid cells have the same density as the rest density.
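For reference, the two formulas, written here in their standard incompressible form:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g},
\qquad
\nabla \cdot \mathbf{u} = 0 .
```

Note that the second formula involves only the velocity field; density never appears in it, which is exactly where the constant-rest-density assumption hides.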
And this is the key to why these solvers are compressible. The difference is small and hard to tell visually, but it becomes apparent in a FLIP solver: with the same particle input, different grid resolutions lead to different results; some shrink the volume (high resolution), some increase it (low resolution).
I tried associating a density coefficient with each cell for the pressure solve, but it does not help. If you treat the rho and p terms in the first equation as a whole, you'll find that although the density coefficient affects the value of the pressure, the term is eliminated when mapping back to the velocity field.
So I realized the problem lies in the second formula: there should be something more than just the velocity field to conserve the volume.
It turns out I was right. Well, it's a pity to find out someone has already done research on this before; otherwise maybe I could have been the first.
With a simple search I found 2 papers dealing with this problem:
This one by Nuttapong Chentanez and Matthias Muller,
and this one by Michael Lentine, Mridul Aanjaneya and Ronald Fedkiw.
What is interesting is that what I've been doing keeps matching these researchers' work; the papers I've been referring to are always by these same names.
So, next step: read these papers and integrate the ideas into the FLIP solver. This might be useful for SPH as well, but I need to take a further look.
Tuesday, October 30, 2012
FLIP solver update
For the past few days I've been working on this solver.
I imported the Ghost SPH sampler and the anisotropic-kernel surface reconstruction. This time everything was done with TBB (Threading Building Blocks), parallelizing over all particles. The performance is really satisfying: for 150K particles, mesh reconstruction took less than 2 seconds per frame, including neighbor search, iterative matrix decomposition, and color-field gathering.
Compared to my previous implementation of this part, since everything is done on the fly, the memory deallocation cost dropped drastically, and that is one of the major reasons for the performance boost.
However, I discovered a huge problem for this solver.
It is COMPRESSIBLE!
The error is introduced by particle advection. Within each single frame the projection is guaranteed to be incompressible; however, after the particles are advected, the number of fluid cells changes, and that is why the result is compressible.
With the 150K-particle configuration, a 50*50*50 grid increases the volume, while a 100*100*100 grid shrinks it into a thin sheet.
In fact, enforcing a divergence-free field is not sufficient for a FLIP solver, because the density (or mass) is carried by the particles, while in the grid-solving part there is no way to ensure the density of each cell stays constant.
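A quick way to observe this drift is to bin the particles into cells and compare each occupied cell's particle count against the rest count. A hypothetical diagnostic sketch (function name and parameters are mine):

```python
import numpy as np

def cell_density_ratios(positions, grid_res, cell_size, rest_count):
    """Count particles per cell and return the density of each occupied cell
    relative to the rest density; ratios far from 1.0 indicate compression
    or expansion that per-frame divergence-free projection did not prevent."""
    counts = np.zeros((grid_res, grid_res, grid_res))
    # map each particle position to its cell index, clamped to the grid
    idx = np.clip(np.floor(positions / cell_size).astype(int), 0, grid_res - 1)
    for i, j, k in idx:
        counts[i, j, k] += 1
    return counts[counts > 0] / rest_count
```

Ratios drifting away from 1.0 over successive frames quantify exactly the volume gain or loss described above, even though every single projection was divergence-free.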
This problem can be alleviated by choosing a suitable grid size for a given particle distribution. Yet only alleviation is possible, because the particles and the grid are coupled in terms of density.
Right now I have an idea that might help decouple the two, but I need to take a further look into the physics and math.
I'll discuss with the author of FLIP paper, and hopefully I can find a better way for it soon.
Wednesday, October 24, 2012
FLIP solver done!
Finally finished the FLIP solver.
Right now it's a first version, so there are still lots of details to improve.
A quick demo for the new solver is here.
Well, I'll keep working on this project and come up with a much better polished demo.
Sunday, October 21, 2012
Fast sweeping
Still working on the FLIP solver.
The fast sweeping algorithm is important for the FLIP solver, both for generating level-set data and for extending the velocity field.
Based on Hongkai Zhao's paper, I implemented the fast sweeping algorithm for the FLIP solver.
However, the proof in the paper uses a 2D example, so I extended it to 3D; here is the scanned image of the proof (the pseudocode for the core part is included as well).
Compared to the implementation I did during the summer, the current one is more abstract because I use the result of my own proof directly. As a result, the operation is faster, because the number of comparisons and numerical calculations is minimized.
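For reference, here is a minimal 2D version of the sweep, solving the Eikonal equation |grad T| = 1. This is a sketch with my own naming, following Zhao's update rule; the 3D case in the proof adds a third axis term and four more sweep orderings:

```python
import numpy as np

def fast_sweep_2d(T, h, n_passes=3):
    """Gauss-Seidel fast sweeping for |grad T| = 1 on a uniform grid with
    spacing h. T holds np.inf everywhere except at seed cells (distance 0)."""
    n, m = T.shape
    orders = [(range(n), range(m)),
              (range(n - 1, -1, -1), range(m)),
              (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(n_passes):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    # smallest upwind neighbor value along each axis
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < m - 1 else np.inf)
                    lo, hi = min(a, b), max(a, b)
                    if hi == np.inf or hi - lo >= h:
                        t = lo + h        # one-sided update
                    else:                 # two-sided quadratic update
                        t = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)
    return T
```

Each pass runs the four axis orderings (eight in 3D); a few passes suffice because the characteristics of the Eikonal equation are straight lines, so every cell is reached by some sweep direction.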
As I always say, math and physics are real science compared to computer science. In my biased personal opinion, a huge difference between science and engineering is that science is continuous and engineering is discretized.
Thanks math and physics for always providing guidance for everything.
I really enjoy this kind of life, surrounded by science. Maybe I should go for a physics PhD later. Seriously, I'm not kidding.