XSolver's Blog, by Xing Du<br />
<br />
<b>AWS S3 Transferring Data Across Accounts</b> (2017-10-18)<br />
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> Today I successfully transferred some data on AWS S3 from one account to another. </span><br />
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> Along the way I resolved an encryption-related permission issue, for which there is little information on Google given the misleading error message. </span><br />
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">So I decided to write this down for anyone who runs into the same problem.</span><br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>Goal</b>:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">Copy data from one S3 bucket to another S3 bucket.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>Resources</b>:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - source account: "src_account"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - source bucket: "src_bucket"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - destination account: "dst_account"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - destination bucket: "dst_bucket"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - an instance with AWS CLI installed (can be your laptop too)</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>Steps(high level)</b>:</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - create a user on the destination account.</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - grant this user permissions to:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - read from source bucket, using "resource" field & ARN.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - write to destination bucket.</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - grant this user read access to the source bucket, via the bucket policy, using the "Principal" field and the user's ARN</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - (if encryption required) grant the destination account access to the encryption key on the source account</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - (if encryption required) grant the user permission to use the key for:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - decryption. required for reading.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - encryption. required for writing.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>Steps(detailed)</b>:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - create a user on the destination account: <sync_user></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> Keep the access key ID and secret access key to set up the CLI.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - under IAM on the destination account, attach a policy to <sync_user> with these statements:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Sid": "AllowReadSource",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Effect": "Allow",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Action": [</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "s3:ListBucket",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "s3:GetObject"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ],</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Resource": [</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "arn:aws:s3:::<src_bucket>/*",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "arn:aws:s3:::<src_bucket>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> },</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Sid": "AllowWriteDestination",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Effect": "Allow",</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><span style="font-family: "courier new", courier, monospace;">"Action": [</span><br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "s3:ListBucket",</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> "s3:PutObject"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ],</span></div>
</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Resource": [</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "arn:aws:s3:::<dst_bucket>/*",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "arn:aws:s3:::<dst_bucket>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - under the <src_bucket> Permissions tab, add these statements to the bucket policy:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Sid": "AllowReadOnlyOnFileForUser",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Effect": "Allow",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Principal": {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "AWS": "arn:aws:iam::<dst_account>:user/<sync_user>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> },</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Action": "s3:GetObject",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Resource": "arn:aws:s3:::<src_bucket>/*"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> },</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Sid": "AllowReadOnlyOnDirectoryForUser",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Effect": "Allow",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Principal": {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "AWS": "arn:aws:iam::<dst_account>:user/<sync_user>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> },</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Action": "s3:ListBucket",</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> "Resource": "arn:aws:s3:::<src_bucket>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"><b>Additional</b> steps if encryption is required for the bucket:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - On source account, add external account to each encryption key used.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> IAM -> Encryption Keys -> Choose the right region -> Add External Account -> <dst_account></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - On the destination account, attach another policy to <sync_user> with the following statements:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Sid": "AllowUseOfTheKey",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Effect": "Allow",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Action": [</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "kms:Decrypt",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "kms:GenerateDataKey*"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ],</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Resource": [</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "arn:aws:kms:<region>:<src_account>:key/<key_id>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> ]</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
</div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> - add another statement to the destination bucket's policy:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> {</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> "Sid": "EnsureEncryptedOnUpload",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Effect": "Deny",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Principal": "*",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Action": "s3:PutObject",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Resource": "arn:aws:s3:::<dst_bucket>/*",</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "Condition": {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "StringNotLike": {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:<region>:<src_account>:key/<key_id>"</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> }</span></div>
<div>
</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>CLI</b>:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> - add the created <sync_user> to CLI as a profile:</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Note: the configured region needs to match the region of the KMS key used (if any).</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> aws configure --profile <sync_user></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>Command</b>:</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">If files inside the bucket require server-side encryption:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> aws s3 cp s3://<src_bucket> s3://<dst_bucket> --recursive --sse aws:kms --sse-kms-key-id arn:aws:kms:<region>:<src_account>:key/<key_id> --profile=<aws-cli-sync-user-profile></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">otherwise:</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> aws s3 cp s3://<src_bucket> s3://<dst_bucket> --recursive --profile=<aws-cli-sync-user-profile></span></div>
</div>
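The copy can also be scripted instead of using the one-shot CLI command. Below is a minimal sketch with boto3's S3 client API; I used the CLI for the actual transfer, so the function name and the injected-client design are my own, not what I ran. The client is passed in as a parameter so the logic can be tested with a stub.

```python
def copy_bucket(s3, src_bucket, dst_bucket, kms_key_arn=None):
    """Copy every object from src_bucket to dst_bucket.

    If kms_key_arn is given, re-encrypt each object on write, the
    equivalent of --sse aws:kms --sse-kms-key-id on the CLI.
    The s3 client is injected so the function is easy to test with a stub.
    """
    copied = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            extra = {}
            if kms_key_arn:
                extra = {"ServerSideEncryption": "aws:kms",
                         "SSEKMSKeyId": kms_key_arn}
            # copy_object handles objects up to 5 GB; larger ones
            # need a multipart copy instead.
            s3.copy_object(Bucket=dst_bucket, Key=obj["Key"],
                           CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
                           **extra)
            copied.append(obj["Key"])
    return copied
```

With boto3 installed you would call it as `copy_bucket(boto3.session.Session(profile_name="sync_user").client("s3"), "src_bucket", "dst_bucket", key_arn)`.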
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><b>Summary</b>:</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> This is quite a simple process, and some online documents do a better job explaining the steps than I just did. </span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> I've added some of my own understanding to explain why each step is needed. The permissions I used are the absolute minimum set for the job. You can find templates that work with broader permissions, but I don't feel it's necessary to grant this user more permissions than needed.</span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;"> I spent a lot of time on the encryption permission issue, since it was documented nowhere and the error surfaces as a generic permission denial. It took me a long time to figure out the cause, so I really hope this helps if you run into similar issues.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: &quot;courier new&quot; , &quot;courier&quot; , monospace;">Note: encryption is done per file, and can be heterogeneous within a single bucket. If you run into a permission error on the "GetObject" action, double-check whether the offending file has encryption enabled.</span></div>
<b>Performance between BoneCP and HikariCP</b> (2017-05-30)<br />
<br />
I've been assessing HikariCP as a replacement for BoneCP for my server in the past week, and the result is somewhat surprising to me.<br />
Sharing it here in case other people are doing the same thing.<br />
<br />
The short conclusion is: BoneCP is slightly faster than HikariCP.<br />
<br />
Test environment:<br />
- BoneCP version: 0.8.0-RELEASE<br />
- HikariCP version: 2.6.1<br />
- Tested on 2 groups of servers located on Amazon AWS, with DB servers in the same availability zone.<br />
- The only difference between the deployed jars is the connection pooling library (and its configuration)<br />
<br />
Configuration variations do not seem to make a big difference.<br />
<br />
I tested using the recommended configuration from each library:<br />
1st test: mapping the configuration options by meaning<br />
2nd test: matching the number of connections per host<br />
<br />
Both tests give me the same result: the measured wall time for HikariCP is consistently 1~2ms slower than with BoneCP.<br />
<br />
This is tested on a live product which has >100K concurrent users all the time and the range of tested queries has covered a few benchmark tests.<br />
<br />
For faster database queries this can be pretty significant: an indexed select query costs 1ms on average for the BoneCP group but 2ms for HikariCP.<br />
<br />
Similarly this affects other queries including inserts & updates & deletes. Due to the range of the queries I have, the difference is 1~2ms.<br />
<br />
After reading through this:<br />
https://github.com/brettwooldridge/HikariCP/wiki/Pool-Analysis<br />
<br />
I started to wonder if it's the validation overhead that's causing the performance difference.<br />
<br />
And the awesome developer for HikariCP told me there are ways to configure that:<br />
https://github.com/brettwooldridge/HikariCP/issues/900<br />
<br />
So I did a 3rd & 4th test by:<br />
- increasing the validation-check window from 500ms to 5s<br />
- overriding the connection test from the JDBC isValid method to a simple "SELECT 1"<br />
<br />
Unfortunately the result is the same, and the difference is roughly the same too.<br />
So the connection test is not the culprit for the performance difference, at least in my environment.<br />
<br />
Although I think the validation check is good and should be there, I've stopped at this point because I know BoneCP would probably be my go-to option given the performance result.<br />
<br />
For now I'm unable to explain the performance difference; I'll dig into the source code a bit further when I have more time to spend on this.<br />
<br />
<b>Progressive Photon Mapper</b> (2013-05-07)<br />
<br />
During the past few weeks I've been trying to write a new tracer. After careful consideration I chose to implement a progressive photon mapping integrator within my own architecture, which I simplified and customized based on the one in the PBRT book.<br />
<br />
Right now I have only a simple scene to show the result of the integrator; later I'll focus more on the other parts of the tracer (BSDFs, weighting system, sampling, performance, etc.).<br />
<br />
Here's a comparison image of the PPM integrator<br />
<br />
The first has only one photon gathering pass, the second 10 photon gathering passes. Each pass has 200K photons.<br />
<br />
Direct and indirect lighting are not decoupled, which makes the most mathematical sense to me.<br />
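For reference, the heart of progressive photon mapping between gathering passes is the per-hit-point statistics update from Hachisuka et al.: shrink the gather radius and rescale the accumulated flux as new photons arrive. This sketch is the textbook formula, not necessarily line-for-line what my integrator does; alpha is the usual fraction of new photons kept, in (0, 1).

```python
def ppm_update(radius, n_accum, flux, m_new, flux_new, alpha=0.7):
    """One progressive photon mapping pass update for a single hit point.

    radius:   current gather radius
    n_accum:  photons accumulated over previous passes
    flux:     accumulated (unnormalized) flux
    m_new:    photons found inside `radius` this pass
    flux_new: their flux contribution
    """
    if m_new == 0:
        return radius, n_accum, flux          # nothing gathered, nothing changes
    n_next = n_accum + alpha * m_new          # keep only a fraction of new photons
    ratio = n_next / (n_accum + m_new)        # always <= 1, so the radius shrinks
    radius_next = radius * ratio ** 0.5       # r' = r * sqrt(ratio): area scales by ratio
    flux_next = (flux + flux_new) * ratio     # rescale flux to the smaller disk
    return radius_next, n_next, flux_next
```

The final radiance estimate divides the accumulated flux by the disk area pi * r^2 and the total number of emitted photons.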
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQzSZXi1plNfGJhBKFWZCsYl03Xu2g9p-snR8T6ThgrtRi8ouQyivZaxZBYj0kKJBqYIpjYp2kOnVYa39gmuCcSGXtor3cQBMqFj7EsR0ruj8dRXKbqXqp1UCuOaBE3HNNq7sI7jlEd7s/s1600/range_0.1_pass1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNQzSZXi1plNfGJhBKFWZCsYl03Xu2g9p-snR8T6ThgrtRi8ouQyivZaxZBYj0kKJBqYIpjYp2kOnVYa39gmuCcSGXtor3cQBMqFj7EsR0ruj8dRXKbqXqp1UCuOaBE3HNNq7sI7jlEd7s/s320/range_0.1_pass1.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
pass 1</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8ErXVs8T7nVIs6Gn8zuwXxH1BqiGc2ANUa4eRd1cD4YEWHd7uOjlIT-nVA2pHZ__KkVuLaviliV26SDK11YkA1E1m4XmsITOhM8N_wsx860OYh4CFXFkCKsp4OWi3f1QgyhQd6_0iefxR/s1600/range_0.1_pass10.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8ErXVs8T7nVIs6Gn8zuwXxH1BqiGc2ANUa4eRd1cD4YEWHd7uOjlIT-nVA2pHZ__KkVuLaviliV26SDK11YkA1E1m4XmsITOhM8N_wsx860OYh4CFXFkCKsp4OWi3f1QgyhQd6_0iefxR/s320/range_0.1_pass10.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
pass 10</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Well, I just realized this is not convincing enough. I should compare 200K photons per pass over 10 passes against 2M photons in a single pass; that will be my next post, with other features added.</div>
<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-72255545265306645312013-02-18T10:31:00.000-08:002013-02-26T08:30:04.748-08:00New demo reelI made a new demo reel yesterday, adding the projects I've been working on recently into it.<br />
<br />
Here's my new reel:<br />
<div class="separator" style="clear: both; text-align: center;">
<br /><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/CjdRUwG9o5Y?feature=player_embedded' frameborder='0'></iframe></div>
<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-43756609801518099402013-02-14T20:00:00.000-08:002013-02-26T23:41:51.155-08:00the game: Penguin PlanetLast semester (Fall 2012) I took the CIS 568: Game Design Practicum, and I'm working with <a href="http://www.iamnop.com/">Nop</a> for a game in Unity3D.<br />
<br />
We both love arcade games, so we made a game called "Penguin Planet", which is similar to the arcade game "Fill It".<br />
<br />
We collaborated on every aspect of the design, and implementing all of its features involved a wide range of techniques. We like it a lot and we're proud of it.<br />
<div>
<br />
<br />
Here's a demo for our game:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/Nid42Q2O6dI?feature=player_embedded' frameborder='0'></iframe></div>
<br /></div>
<div>
<br />
And here's the link for downloading our game:<br />
http://dl.dropbox.com/u/122536698/Penguin%20Planet%20Game.rar<br />
<br />
Hope you enjoy it!</div>
<b>Implemented accurate solid-fluid interaction for my FLIP solver</b> (2013-02-14)<br />
<br />
For the past few days I've been working on my fluid simulation project.<div>
I've incorporated Christopher Batty's SIGGRAPH 2007 paper into this project.</div>
<div>
<br /></div>
<div>
Here's a demo about the result:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/nu_hn7l-_WY' frameborder='0'></iframe></div>
<div>
<span style="background-color: white; color: #333333; font-family: arial, sans-serif; font-size: 13px; line-height: 17px;">Generated a level set for the Stanford bunny, then applied the fast Poisson disk sampling method to get 128K particles.<br />Grid size: 100 cubed.<br />Used an anisotropic kernel for surface reconstruction.<br />This includes pretty much everything I've done for fluid simulation, except marching cubes.</span></div>
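The fast Poisson disk sampling mentioned above is Bridson's dart-throwing-with-a-grid algorithm (the project uses it in 3D inside the bunny's level set; this is a minimal 2D sketch of my own, not the project's code). Each grid cell is small enough to hold at most one sample, so the minimum-distance check only needs a 5x5 cell neighborhood.

```python
import math
import random

def poisson_disk_2d(width, height, r, k=30, rng=None):
    """Bridson's fast Poisson disk sampling: points at least r apart, in O(n)."""
    rng = rng or random.Random(0)
    cell = r / math.sqrt(2)                       # cell diagonal = r: one sample per cell
    gw, gh = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * gh for _ in range(gw)]

    def grid_idx(p):
        return int(p[0] / cell), int(p[1] / cell)

    def fits(p):
        gx, gy = grid_idx(p)
        for i in range(max(gx - 2, 0), min(gx + 3, gw)):
            for j in range(max(gy - 2, 0), min(gy + 3, gh)):
                q = grid[i][j]
                if q is not None and (q[0]-p[0])**2 + (q[1]-p[1])**2 < r*r:
                    return False
        return True

    first = (rng.uniform(0, width), rng.uniform(0, height))
    samples, active = [first], [first]
    gx, gy = grid_idx(first)
    grid[gx][gy] = first
    while active:
        base = active[rng.randrange(len(active))]
        for _ in range(k):                        # k candidates in the annulus [r, 2r]
            ang = rng.uniform(0, 2 * math.pi)
            d = rng.uniform(r, 2 * r)
            p = (base[0] + d * math.cos(ang), base[1] + d * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                samples.append(p)
                active.append(p)
                gx, gy = grid_idx(p)
                grid[gx][gy] = p
                break
        else:
            active.remove(base)                   # no candidate fit: retire this point
    return samples
```

The 3D version is the same idea with a 3D grid and an extra rejection test against the level set (keep only candidates with negative signed distance).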
<b>Triangle mesh to level set</b> (2013-02-04)<br />
<br />
I've had the idea for this project since last summer, but I did not put it into practice until today.<br />
<br />
Level set field data is extremely useful in all kinds of simulation, especially fluid simulation. A level set is also a good interface for blue noise sampling techniques.<br />
<br />
Generating a level set for an implicit surface is easy: all you have to do is evaluate the function value, which is always somehow related to the (signed) minimum distance.<br />
<br />
However, things are not that easy in the general case. You'll usually be given a triangle mesh (OBJ or PLY file) as input. The problem with converting a triangle mesh to a level set is that the mesh's normal field is not continuous.<br />
<br />
For a single point-triangle minimum distance, it's sometimes ambiguous how to determine the sign. For points within the prism of the triangle, judging the sign is easy, but for those whose nearest point lies on an edge or vertex, it's hard to determine.<br />
<br />
So my idea is to calculate a normal for every vertex (if not given in the input) and every edge, using a weighted sum of the normals of all faces incident to that vertex/edge, with the incident angle as the weight.<br />
<br />
With this method, the sign of any point against an arbitrary triangle is obvious and easy to compute.<br />
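The vertex case can be sketched as follows. This is a minimal standalone version of the angle-weighted normal idea (the function names and the exact convention for passing a vertex's incident triangles are mine, not my project's code): sum incident-angle-weighted face normals, then classify the query point by which side of that pseudonormal it falls on.

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def norm(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def angle_weighted_normal(v, triangles):
    """Pseudonormal at vertex v: face normals weighted by the incident angle at v.

    triangles: list of (a, b) pairs, the two other vertices of each incident face,
    ordered so that cross(a - v, b - v) points outward.
    """
    n = (0.0, 0.0, 0.0)
    for a, b in triangles:
        ea, eb = norm(sub(a, v)), norm(sub(b, v))
        angle = math.acos(max(-1.0, min(1.0, dot(ea, eb))))  # incident angle at v
        fn = norm(cross(sub(a, v), sub(b, v)))               # outward face normal
        n = (n[0] + angle*fn[0], n[1] + angle*fn[1], n[2] + angle*fn[2])
    return norm(n)

def sign_at_vertex(p, v, triangles):
    """+1 if p lies on the outside of the pseudonormal plane at v, else -1."""
    return 1.0 if dot(sub(p, v), angle_weighted_normal(v, triangles)) >= 0 else -1.0
```

The edge case is analogous with only the two faces sharing the edge (equal weights work there, since both incident angles are measured across the same edge).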
<br />
Of course, no one wants to compute the signed distance of every sample point against every triangle. Two possible solutions:<br />
1. use a spatial subdivision structure like a KD-tree, and calculate the signed distance for each sample point in a local region.<br />
2. go the other way around: splat each triangle onto a small neighborhood, forming a narrow-band level set, then propagate the data to the whole field.<br />
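The propagation step in option 2 is exactly what fast sweeping does. A minimal 2D sketch of my own, solving |grad d| = 1 on a unit-spaced grid outward from frozen narrow-band cells (this handles only unsigned distance; the signed version runs on |d| and restores signs afterwards):

```python
INF = 1e18

def fast_sweep_2d(dist, frozen, passes=4):
    """Propagate unsigned distance over a 2D grid by fast sweeping.

    dist: list of lists; frozen cells hold their known distance, others INF.
    frozen: same shape, True where dist is fixed (the narrow band).
    Grid spacing is 1; four alternating sweep directions per pass.
    """
    h, w = len(dist), len(dist[0])
    orders = [(range(h), range(w)),
              (range(h), range(w - 1, -1, -1)),
              (range(h - 1, -1, -1), range(w)),
              (range(h - 1, -1, -1), range(w - 1, -1, -1))]
    for _ in range(passes):
        for ys, xs in orders:
            for y in ys:
                for x in xs:
                    if frozen[y][x]:
                        continue
                    a = min(dist[y][x - 1] if x > 0 else INF,
                            dist[y][x + 1] if x < w - 1 else INF)
                    b = min(dist[y - 1][x] if y > 0 else INF,
                            dist[y + 1][x] if y < h - 1 else INF)
                    if abs(a - b) >= 1.0:            # only one axis is upwind
                        t = min(a, b) + 1.0
                    else:                             # two-sided quadratic update
                        t = (a + b + (2.0 - (a - b) ** 2) ** 0.5) / 2.0
                    dist[y][x] = min(dist[y][x], t)   # updates are monotone
    return dist
```

Values along grid axes come out exact; away from the axes the coarse-grid solution slightly overestimates the Euclidean distance, which refines away as the grid resolution grows.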
<br />
The second one is obviously faster but also technically harder. Since I've already spent a lot of time implementing fast sweeping, this approach suits me better; in fact it took me only a few hours to finish.<br />
<br />
It took 3.6s to calculate the level set data for a Stanford bunny on a 105*104*82 grid, running on a single laptop core without any optimization. I'm pretty satisfied with the performance, since this conversion has to be done only once, offline.<br />
<br />
Here's a demo showing the result of the level set. To show the correctness of the data, I shrink the whole field by a fixed rate.<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/CbzcU_o_i4k' frameborder='0'></iframe></div>
<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-12326833369389743112013-01-25T13:52:00.001-08:002013-01-25T13:52:21.074-08:00Global Intersection Analysis: a great idea for collision detectionFor the past few days I've been dedicated to solve the self-collision problem in cloth simulation, and here's some insights I've found for self-collision detection:<br />
<br />
1. collisions happen because vertex positions change. So ideally, assuming the cloth starts without any self-collision, naive collision detection needs to be performed whenever vertices move.<br />
<br />
2. there are two types of collision: continuous and static.<br />
continuous collisions are detected by testing the trajectory against a surface. In the common case that is a ray-triangle intersection test, where the ray starts from a vertex's position in the last frame and ends at its current position.<br />
static collisions are detected by testing whether a vertex is under a certain surface, which in the general case means testing the signed distance between a vertex and a triangle.<br />
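The continuous test above reduces, for a triangle that is static over the frame, to a segment-triangle intersection (moving triangles need the cubic-root formulation, which this sketch omits). A minimal Moller-Trumbore-style sketch, with names of my own:

```python
def segment_hits_triangle(p0, p1, a, b, c, eps=1e-9):
    """Does the vertex trajectory p0 -> p1 cross triangle (a, b, c)?

    Moller-Trumbore ray-triangle test, restricted to t in [0, 1].
    """
    def sub(u, v): return (u[0]-v[0], u[1]-v[1], u[2]-v[2])
    def cross(u, v): return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])
    def dot(u, v): return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

    d = sub(p1, p0)                      # vertex trajectory over the frame
    e1, e2 = sub(b, a), sub(c, a)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:                   # trajectory parallel to the triangle plane
        return False
    inv = 1.0 / det
    tvec = sub(p0, a)
    u = dot(tvec, pvec) * inv            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv               # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    t = dot(e2, qvec) * inv              # fraction of the frame where the hit occurs
    return 0.0 <= t <= 1.0
```

In practice the triangle is usually inflated by the cloth thickness before this test, so near-misses are treated as contacts.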
<br />
3. the problem for cloth simulation: cloth is a single sheet of mesh, so there is no meaningful negative vertex-triangle distance. Static collision detection fails because, when a vertex is below a triangle, you cannot distinguish whether it penetrated from above or simply approached from below without penetrating.<br />
<br />
4. the problem for PBD: as mentioned, ideally collision detection (of either type) has to be performed whenever vertices move. With PBD, the problem arises because vertex positions are changed in the constraint-resolution pass without per-iteration collision detection.<br />
So, as often happens, some vertices are moved inside a surface by constraint resolution while no collision is detected. In the following frame this collision will not be caught: continuous detection does not see it as a collision, and static detection fails because the cloth is too thin to give a negative value.<br />
<br />
5. Possible solution: a. GIA(global intersection analysis). b. potential collision constraints.<br />
<br />
a. GIA is proposed in <a href="http://graphics.pixar.com/library/UntanglingCloth/paper.pdf">this paper</a>. Yet as mentioned there, the method has limitations in the boundary-penetrating case. I have an idea for improving it: run the flood-fill on both edges and surfaces.<br />
<br />
b. Potential collision constraints is the idea I came up with over the past few days. During the intersection test, we set a proper threshold for particle-triangle proximity and add potential collision constraints to the resolving pass. These pairs have not collided yet, but since their distance is smaller than the threshold, they may collide during the resolving pass; if they do, the collision gets corrected. Make sure all self-collisions are resolved before entering the next frame, so that even if the collision detection only supports the continuous case, there won't be any problem.<br />
<br />
These are my two ideas, and I'll start an independent project on this. For the first idea I'm not sure how to implement it; there are too many topological issues involved. And for the second, I don't yet know how to choose a proper threshold.<br />
<br />
Good luck to me!<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-22609788517921436882013-01-23T10:28:00.000-08:002013-01-23T10:28:29.701-08:00Cloth Simulation using PBDRecently I've been working on a cloth simulation project as a new homework assignment for CIS 563.<br />
<br />
It turns out that cloth simulation is harder and thus more interesting than I imagined.<br />
<br />
I'm following Matthias Muller's Position Based Dynamics <a href="http://www.matthiasmueller.info/publications/posBasedDyn.pdf">paper</a> for the implementation. I've also done a solid simulation using another of his papers built on a similar idea. These ideas are really innovative: they do not make that much physical sense, but they follow physical laws, and most importantly, they are a lot faster than physically based methods like the mass-spring-damper system.<br />
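To give a flavor of how PBD works, the projection of a single distance (stretch) constraint can be sketched like this (a sketch under my own naming; w1 and w2 are inverse masses, and a real solver iterates this over all constraints every step):

```python
import numpy as np

def project_stretch(p1, p2, w1, w2, rest_len, stiffness=1.0):
    """One Gauss-Seidel projection of a PBD distance constraint
    C(p1, p2) = |p1 - p2| - rest_len, weighted by inverse masses."""
    d = p1 - p2
    dist = np.linalg.norm(d)
    if dist < 1e-12 or w1 + w2 == 0.0:
        return p1, p2                       # degenerate or both pinned
    corr = stiffness * (dist - rest_len) / ((w1 + w2) * dist) * d
    return p1 - w1 * corr, p2 + w2 * corr
```

Two unit-mass particles 2 units apart with rest length 1 are pulled symmetrically to 1.5 and 0.5 along the axis, restoring the rest distance in a single projection; a pinned point is simply a particle with inverse mass 0.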
<br />
Here's a demo of the cloth simulation. Right now it has stretch / bend / pinned-point / collision constraints, but self-intersection has not been taken into consideration.<br />
<div class="separator" style="clear: both; text-align: center;">
<object width="320" height="266" class="BLOGGER-youtube-video" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0" data-thumbnail-src="http://i.ytimg.com/vi/H-gcutR_TQM/0.jpg"><param name="movie" value="http://www.youtube.com/v/H-gcutR_TQM?version=3&f=user_uploads&c=google-webdrive-0&app=youtube_gdata" /><param name="bgcolor" value="#FFFFFF" /><param name="allowFullScreen" value="true" /><embed width="320" height="266" src="http://www.youtube.com/v/H-gcutR_TQM?version=3&f=user_uploads&c=google-webdrive-0&app=youtube_gdata" type="application/x-shockwave-flash" allowfullscreen="true"></embed></object></div>
<br />
In fact, the self-intersection is the most interesting part to me. Because the cloth is just a thin layer of unclosed mesh, it's impossible to define an inside for it: a position could legitimately lie on either side of the cloth.<br />
<br />
I'm looking into this problem right now, following these papers:<br />
http://www.cs.ubc.ca/~rbridson/docs/cloth2002.pdf<br />
http://www.cs.ubc.ca/~rbridson/docs/cloth2003.pdf<br />
http://graphics.pixar.com/library/UntanglingCloth/paper.pdf<br />
<br />
Hopefully I can find some insights from these papers and improve this project in the following days.<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-68190267743640099052012-11-03T15:47:00.002-07:002012-11-03T15:47:46.820-07:00Fluid mechanics: continuity equationIn order to overcome the problem in the FLIP solver, I looked into fluid mechanics.<div>
<br /></div>
<div>
It turns out the condition for incompressibility is not correct, or rather, it is based on an assumption that does not always hold.</div>
<div>
For a continuous fluid, the governing equation is the <a href="http://en.wikipedia.org/wiki/Continuity_equation">Continuity Equation</a>.</div>
<div>
I'll skip the proof of this equation; plenty of reading material discussing it can be found online. In symbolic form, the equation can be expressed as:</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYCiFuski1rSSagx1SauCRKbsBMK_FMevcfqUnT2yGeJ-XIABm3AsVbtHBNSHIbAxQyI6dd5siYXkpcJx6DhzT75BccH2fOZ6TSfehtTbJq5yUzVuFZUbgje8cp4zMXi5l5LTwixhIDfJn/s1600/continuity+equation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYCiFuski1rSSagx1SauCRKbsBMK_FMevcfqUnT2yGeJ-XIABm3AsVbtHBNSHIbAxQyI6dd5siYXkpcJx6DhzT75BccH2fOZ6TSfehtTbJq5yUzVuFZUbgje8cp4zMXi5l5LTwixhIDfJn/s1600/continuity+equation.jpg" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
Compared to the incompressibility constraint, we can see that it simply assumes the material derivative of density is 0. Yet this assumption is not correct if fluid cells are marked by the particles: temporal incoherence leads to a non-zero material derivative of density.</div>
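For reference, the continuity equation in the image reads, in the usual notation with density and velocity:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\quad\Longleftrightarrow\quad
\underbrace{\frac{\partial \rho}{\partial t} + \mathbf{u} \cdot \nabla \rho}_{D\rho/Dt}
+ \rho \, \nabla \cdot \mathbf{u} = 0 ,
```

so the familiar incompressibility constraint $\nabla \cdot \mathbf{u} = 0$ only follows under the extra assumption $D\rho/Dt = 0$.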
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
One big problem with adding this term to the constraint is that we'll have to deal with both the time derivative and the spatial gradient on the Eulerian grid.</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
So one possible solution would be: combine SPH and FLIP in this step. The density carried by each particle provides sufficient information for the material derivative. I'll keep reading and thinking about other possible solutions.</div>
<div>
<br /></div>
<div>
<div>
<br /></div>
</div>
Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-17072587733918286322012-11-03T11:45:00.001-07:002012-11-03T12:11:21.607-07:00Solution for compressibility problem in Eulerian methodAs I mentioned before, the FLIP solver suffers from a compressibility problem. In fact, all grid-based methods might suffer from it.<br />
<br />
You might consider me naive, but I'll try to convince you:<br />
<br />
The grid-based solver calculates the velocity field based on two formulas:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjas53h5EwXPMCC7gB-7l6WxgbDT_1rH4-AbgNmewdDl09SVYIpHJNdulazSftMyl3Tw6bdnNfOQTSwpNX0nqlLQYVHoNETJZo3aieOBvV54objGiTN9Qk3bVWflIQEizjcE2ppOxsEVzoq/s1600/formula.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="86" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjas53h5EwXPMCC7gB-7l6WxgbDT_1rH4-AbgNmewdDl09SVYIpHJNdulazSftMyl3Tw6bdnNfOQTSwpNX0nqlLQYVHoNETJZo3aieOBvV54objGiTN9Qk3bVWflIQEizjcE2ppOxsEVzoq/s320/formula.jpg" width="320" /></a></div>
<br />
The 1st is the Navier-Stokes equation, the 2nd is the incompressibility constraint.<br />
<br />
For a grid solver, all the calculations are based on one simple assumption:<br />
<span style="color: red;">all the fluid cells have the same density as the rest density.</span><br />
<span style="color: red;"><br /></span>
And this is the key to why these solvers are compressible. Granted, it's a small difference that is hard to tell visually, but the problem becomes apparent in a FLIP solver: with the same particle input, different grid resolutions lead to different results; some shrink the volume (high resolution), some increase it (low resolution).<br />
<br />
I tried to associate a density coefficient with each cell for the pressure solve, but it does not help. If you treat the rho and p terms in the 1st equation as a whole, you'll find that although the density coefficient affects the value of the pressure, the term is eliminated when the result is mapped back to the velocity field.<br />
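To make the cancellation explicit (my notation; consider a uniform rescaling of the density), the pressure projection solves

```latex
\nabla \cdot \Big( \frac{\Delta t}{\rho}\, \nabla p \Big) = \nabla \cdot \mathbf{u}^{*},
\qquad
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\, \nabla p .
```

Rescaling the density as $\rho \to k\rho$ leaves the right-hand side of the first equation unchanged, so its solution simply scales as $p \to kp$, and the correction $(\Delta t/\rho)\nabla p$, and with it $\mathbf{u}^{n+1}$, comes out exactly the same as before.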
<br />
So I realized the problem lies in the second formula. There should be something more than just the velocity field to conserve the volume.<br />
<br />
It turns out I was right. It's a pity that someone had already done research into this before me, or maybe I could have been the first.<br />
With a simple search I found two papers dealing with this problem:<br />
<a href="http://www.matthiasmueller.info/publications/masscon_sca.pdf">This one</a> by Nuttapong Chentanez and Matthias Muller,<br />
and <a href="http://physbam.stanford.edu/~mlentine/images/conservative_fluids.pdf">this one</a> by Michael Lentine, Mridul Aanjaneya and Ronald Fedkiw.<br />
<br />
What is interesting is that what I've been doing always matches these researchers' work; the papers I've been referring to are always by these same names.<br />
<br />
So the next step: read these papers and integrate the ideas into the FLIP solver. This might be useful for SPH as well, but I need to take a further look.<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-29320778153490215432012-10-30T11:42:00.002-07:002012-10-31T21:22:13.931-07:00Flip solver updateFor the past few days I've been working on this solver.<br />
<br />
I imported the Ghost SPH sampler and the anisotropic kernel part for surface reconstruction. This time everything was done with TBB (<a href="http://threadingbuildingblocks.org/">Thread Building Blocks</a>), parallelizing over all the particles. The performance is really satisfying: for 150K particles, the mesh reconstruction took less than 2 seconds per frame, including neighbor search, iterative matrix decomposition and color field gathering.<br />
<br />
Compared to my previous implementation of this part, since everything is done on the fly, the memory deallocation cost dropped drastically, and that is one of the major reasons for the performance boost.<br />
<br />
<br />
However, I discovered a huge problem with this solver.<br />
It is COMPRESSIBLE!<br />
The error is introduced by particle advection. Within each single frame, the projection is guaranteed to be incompressible; however, after the particles are advected, the number of fluid cells changes, and that is why the result is compressible.<br />
<br />
With the 150K-particle configuration, using a 50*50*50 grid leads to a volume increase, while 100*100*100 shrinks the volume into a thin sheet.<br />
<br />
<br />
In fact, making the velocity field divergence-free is not sufficient for a FLIP solver. The density (or mass) is carried by the particles, while in the grid-solving part there is no way to ensure the density of each cell stays constant.<br />
<br />
This problem can be alleviated by choosing a suitable grid size for a given particle distribution. Yet only alleviation is possible, because the particles and the grid are coupled in terms of density.<br />
<br />
Right now I have an idea which might be useful for decoupling the two, but I need to take a further look into the physics and math.<br />
<br />
I'll discuss this with the author of the FLIP paper, and hopefully I can find a better way soon.Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-73220000999243494992012-10-24T16:51:00.002-07:002012-10-25T23:44:30.600-07:00FLIP solver done!Finally finished the FLIP solver.<br />
<br />
Right now it's a first version, so lots of details still need to be improved.<br />
<br />
A quick demo of the new solver is here.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/VdDUWiCELaQ?feature=player_embedded' frameborder='0'></iframe></div>
<br />
Well I'll keep on working on this project and come up with way better polished demo.Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-76158929594974335392012-10-21T15:15:00.002-07:002012-10-21T15:17:57.556-07:00Fast sweepingStill working on the FLIP solver.<br />
The fast sweeping algorithm is significant for the FLIP solver, both for generating level set data and for extending the velocity field.<br />
<br />
Based on <a href="http://graphics.stanford.edu/courses/cs468-03-fall/Papers/zhao_fastsweep1.pdf">Hongkai Zhao's paper</a> I implemented the fast sweeping algorithm for the FLIP solver.<br />
However, the proof in the paper used a 2D example, so I extended it to 3D. Here's a scanned image of the proof (the pseudo code for the core part is also included).<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyejb-l1j2VNuC-ys-KaXnp2nDRJoPk3u8zHkEGJ8bfmRLmgQ1ZlIkXCLa3eklGKrOMP1_hwwyNXViMSRXPgzVSyd8kKzMpR6Ox5UDE1BX4kS9JWBXdH8cph6X2AuFjix0E1oyltOi5BEy/s1600/fastsweeping.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyejb-l1j2VNuC-ys-KaXnp2nDRJoPk3u8zHkEGJ8bfmRLmgQ1ZlIkXCLa3eklGKrOMP1_hwwyNXViMSRXPgzVSyd8kKzMpR6Ox5UDE1BX4kS9JWBXdH8cph6X2AuFjix0E1oyltOi5BEy/s320/fastsweeping.jpg" width="232" /></a></div>
Compared to the implementation I did during the summer, the current one is more abstract because I'm using the result of my own proof directly. As a result, the operation is faster, because the number of comparisons and numerical calculations is minimized.<br />
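For reference, the local update at a single grid node (solving the discretized Eikonal equation |grad(phi)| = 1 with the Godunov upwind scheme, for unit speed) can be sketched as follows; the case analysis is the standard one, though the variable names are mine:

```python
import math

def sweep_update(a, b, c, h=1.0):
    """Godunov upwind solution of |grad(phi)| = 1 at one 3D grid node.
    a, b, c are the smaller of the two neighbor values along x, y, z."""
    a, b, c = sorted((a, b, c))
    x = a + h                       # try the one-sided solution first
    if x <= b:
        return x
    # two-sided solution of (x - a)^2 + (x - b)^2 = h^2
    x = 0.5 * (a + b + math.sqrt(2.0 * h * h - (a - b) ** 2))
    if x <= c:
        return x
    # three-sided solution of (x - a)^2 + (x - b)^2 + (x - c)^2 = h^2
    s = a + b + c
    return (s + math.sqrt(s * s - 3.0 * (a * a + b * b + c * c - h * h))) / 3.0
```

Sweeping this update over the grid in all 8 axis orderings propagates the distance information in every direction, which is the whole trick of the method.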
<br />
As I always say, math and physics are real science compared to computer science. In my biased personal opinion, a huge difference between science and engineering is this: science is continuous and engineering is discretized.<br />
<br />
Thanks to math and physics for always providing guidance for everything.<br />
I really enjoy this kind of life, surrounded by science. Maybe I should go for a physics PhD later. Seriously, I'm not kidding.<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-91732518313115787702012-10-04T12:43:00.000-07:002012-10-21T15:17:57.560-07:00New solver for the fluid simulation projectSPH suffers from a severe compressibility problem. A combination of WCSPH(
<a href="http://cg.informatik.uni-freiburg.de/publications/2007_SCA_SPH.pdf">http://cg.informatik.uni-freiburg.de/publications/2007_SCA_SPH.pdf</a> ) and PCISPH(
<a href="http://dl.acm.org/citation.cfm?id=1531346">http://dl.acm.org/citation.cfm?id=1531346</a> ) could be a good improvement of SPH. <div>
<br /></div>
<div>
However, it's far from enough. It can be a good solution for the inner particles, but for the boundary particles (in contact with either air or solid) the pressure gathering is incorrect, which leads to artifacts.</div>
<div>
<br /></div>
<div>
Ghost SPH (<a href="http://www.cs.ubc.ca/~rbridson/docs/schechter-siggraph2012-ghostsph.pdf">http://www.cs.ubc.ca/~rbridson/docs/schechter-siggraph2012-ghostsph.pdf</a>) is, to me, the best solution for improving the quality of SPH. The problem is that Ghost SPH is rather expensive: the number of sample particles for the solids is usually much larger than the number of fluid particles, and re-sampling the ghost particles is not a trivial task.</div>
<div>
<br /></div>
<div>
In order to solve the compressibility problem while maintaining the details that particles bring, I decided to turn to FLIP (<a href="http://www.cs.ubc.ca/~rbridson/docs/zhu-siggraph05-sandfluid.pdf">http://www.cs.ubc.ca/~rbridson/docs/zhu-siggraph05-sandfluid.pdf</a>) for help.</div>
<div>
<br /></div>
<div>
FLIP is a combination of a Lagrangian method and an Eulerian method. Particles are used for advection, which eliminates the numerical dissipation caused by Eulerian advection, and a MAC grid is used for solving the Poisson equation, which solves the pressure distribution problem perfectly.</div>
<div>
<br /></div>
<div>
For the past few days I've been reading lots of papers related to Eulerian fluid simulation, including Robert Bridson's book, Fluid Simulation for Computer Graphics, and I finally have a good understanding of each step necessary for a FLIP solver.</div>
<div>
<br /></div>
<div>
I'll list my understanding of the core steps of a FLIP solver here:</div>
<div>
<br /></div>
<div>
1. Transfer velocity from particles to the grid. The grid is only used for solving the pressure distribution, so only the velocity field is needed. The transfer splats each particle's velocity onto the grid using tri-linear weights.</div>
<div>
<br /></div>
<div>
2. Generate a level set from the particles. The level set is needed for the later steps. </div>
<div>
<br /></div>
<div>
3. Extend the velocity field to the whole grid. Before this step, only the grid cells that intersect the particles have a non-zero velocity. In order to get a correct velocity distribution, we have to extrapolate the velocity to the whole grid based on one simple principle: the dot product of the gradient of velocity and the gradient of the level set should be zero. That means the extrapolated velocity should not change along the normal direction of the fluid surface.</div>
<div>
<br /></div>
<div>
4. Solve the Poisson equation. I haven't started coding this part yet, but judging from the previous smoke simulation project it should be similar. The keys are setting the boundary conditions, using ghost pressure, and applying a preconditioner.</div>
<div>
<br /></div>
<div>
5. Extend the velocity again. In the previous step, only the velocity field of the fluid cells was updated, so in order to get correct values when interpolating back to the particles, we need to update the rest of the grid.</div>
<div>
<br /></div>
<div>
6. Transfer back to the particles. Update the particles' velocities based on the new velocity field.</div>
<div>
<br /></div>
<div>
That's basically my understanding of the FLIP solver from the past few days of reading. I'm now writing this solver and integrating it into my fluid simulation project.</div>
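Step 1 above can be sketched as follows. This is a minimal sketch with made-up names, and for brevity it splats full velocity vectors onto a collocated grid; a real FLIP solver splats each velocity component onto the corresponding faces of the MAC grid:

```python
import numpy as np

def splat_to_grid(positions, velocities, res, h):
    """Accumulate particle velocities on grid nodes with tri-linear
    weights, then normalize by the accumulated weight per node."""
    vel = np.zeros((res, res, res, 3))
    wsum = np.zeros((res, res, res))
    for p, v in zip(positions, velocities):
        g = p / h                          # position in grid units
        i0 = np.floor(g).astype(int)
        f = g - i0                         # fractional offset in the cell
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    i, j, k = i0 + (dx, dy, dz)
                    if not (0 <= i < res and 0 <= j < res and 0 <= k < res):
                        continue
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    vel[i, j, k] += w * v
                    wsum[i, j, k] += w
    mask = wsum > 0
    vel[mask] /= wsum[mask][:, None]       # weighted average per node
    return vel, wsum
```

The same weights, evaluated the other way around, give the grid-to-particle interpolation of step 6, which is why the pair of transfers introduces so little smoothing.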
Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-79817790309300809642012-09-28T12:55:00.000-07:002012-10-21T15:17:57.558-07:00Level set vs Color fieldFor the mesh extraction part of particle-based fluid simulation, you have to create a surface field before actually executing the mesh extraction function (e.g. marching cubes). Generally there are two ways of doing this: a level set or a color field.<div>
<br /></div>
<div>
For a level set, the idea is to splat each particle into the region it covers, and where different values land on the same sample point, a min operator can be employed to keep the correct one. </div>
<div>
<br /></div>
<div>
However, this can be slow. The level set field of a single particle is continuous: no matter how far away the sample point is, there is always a value. So you have to decide on a radius out to which each particle splats. Theoretically, the larger the splatting radius, the closer the final result is to the "ideal" field. </div>
<div>
<br /></div>
<div>
Another huge problem is that this is only practical for spherical particles. I'm using anisotropic (elliptic) particles, and calculating the distance from an arbitrary point to an ellipsoid is not a trivial task. With my current configuration, a single frame costs more than 10 minutes just to create the level set field. The bottleneck is the solving part: with ellipsoids, the distance computation relies on an iterative method.</div>
<div>
<br /></div>
<div>
So in order to calculate a field faster, I turned to the "color field". A level set is defined as a signed distance field, while the color field is defined as 1 at a particle center and 0 in the outside region; for positions within the radius of a particle, the value is given by the smoothing kernel (e.g. a B-cubic kernel).</div>
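As a concrete illustration of that definition, a color field sample might be computed like this (a sketch; I use a poly6-like compact kernel as a stand-in for the B-cubic kernel, normalized so it is 1 at the center and 0 at the support radius, matching the definition above):

```python
import numpy as np

def kernel(q):
    """Compactly supported kernel: 1 at the center, 0 for q >= 1."""
    return (1.0 - q * q) ** 3 if q < 1.0 else 0.0

def color_field(x, centers, r):
    """Color field sample: 1 near particle centers, falling to 0 at
    distance r; the max over particles keeps the value in [0, 1]."""
    x = np.asarray(x, float)
    return max((kernel(np.linalg.norm(x - c) / r) for c in centers),
               default=0.0)
```

Because the kernel is cut off at r, the field is identically 0 outside the support, which is exactly the boundedness that the level set does not have and, as argued below, also why the color field cannot satisfy the Eikonal equation.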
<div>
<br /></div>
<div>
The only problem with the color field is that it does not satisfy the Eikonal equation, which leads to improper values for the normals. For the past few days, I've been thinking about methods that could eliminate or improve this. </div>
<div>
<br /></div>
<div>
One of my ideas is to extend the kernel. Right now the kernel is limited to the (-r, r) range, and outside of that all values are 0. Imagine the level set method working as a kernel without a boundary; that's what I mentioned above: no matter how far away the sample point is, there is always a value. </div>
<div>
<br /></div>
<div>
If we could design a kernel without a boundary that still returns values reflecting the definition of the color field, the situation would improve. (Not totally solved, because the Eikonal equation still remains a problem.) </div>
<div>
<br /></div>
<div>
If anyone has good ideas about this, don't hesitate to contact me. This could be huge.</div>
Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-59341972881923497022012-09-28T12:16:00.000-07:002012-10-21T15:17:57.555-07:00Fluid simulation projectDuring the past few days, I re-wrote my fluid simulation project.<br />
<br />
I modularized everything in a way similar to the fluid simulation pipeline at DreamWorks. I also modified the mesh extraction part to make it much faster.<br />
<br />
The performance right now is 15~20 seconds per frame for 150K particles on a single CPU core. I didn't use the algorithm optimized for a single core because I'm planning to do everything in parallel on the CPU, so in terms of performance this is the worst case. Even so, to my satisfaction, it is still pretty quick.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/ouQiy0CywHQ?feature=player_embedded' frameborder='0'></iframe></div>
<br />
Here is my new demo reel, including a clip of fluid simulation using 125K particles.<br />
<br />
Yet this is not a good demo, because:<br />
1. I'm using too small a smoothing radius for the particles, which leads to a really bumpy surface; that should not be the case.<br />
2. The SPH method suffers from a severe compressibility problem, which leads to an additional layer in the surface extraction part.<br />
3. The initialization part was kind of wacky. I've done another simulation initializing the particles with the sampling method from the Ghost SPH paper I've been working on, and the result is better.<br />
<br />
<br />
For the 1st and 3rd problems, I've already improved my project. The OpenGL version images are already there, and I'm planning to render a Maya version for a better demo.<br />
<br />
However, because of the compressibility, the second problem cannot be easily solved. I've tried WCSPH, yet I still cannot find a proper configuration for that implementation, and I don't think it is a good way to solve the problem: it's just a numerical method dedicated to this issue, not physically based. My original plan was to implement Ghost SPH as a complementary part of my simulator, but that paper lacks elaboration, and even after contacting the author I did not get a satisfying answer. Now I've started to wonder how they implemented that paper at all.<br />
<br />
The good news is, I'm planning to turn to FLIP for the solver part. FLIP combines the best parts of the Eulerian and Lagrangian methods, and I believe it will be the best solution for me. Hopefully it won't be too hard.<br />
<br />
Another item on the to-do list is to replace marching cubes with the dual contouring method.<br />
<br />
BTW, I might implement another version using PCISPH combined with WCSPH for comparison. My personal expectation is that FLIP will do better than PCISPH + WCSPH.<br />
<br />
In the end, I'm still obsessed with Ghost SPH. If anyone wants to discuss it with me, I'd love to talk about it.<br />
<br />
I've been working on this project for a relatively long time and have tried lots of things, like creating level set fields for ellipsoids (which is extremely time-consuming) and converting OBJ files to level sets. These will be revisited after I fix the compressibility problem. In addition, my tracer project has to be postponed, because this project has the highest priority for me.<br />
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-21925108472300502402012-09-19T15:50:00.001-07:002012-09-19T15:50:42.764-07:00Demo Reel v0.22Updated the demo reel.<br />
<br />
Too much homework recently; I don't have enough time for my personal projects.<br />
<br />
I hope I can get more time to update the fluid sim. I made lots of improvements during the summer but haven't integrated them into the reel yet.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/ErQbRobnDhM?feature=player_embedded' frameborder='0'></iframe></div>
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-23525909807355144362012-09-14T19:07:00.002-07:002012-09-14T19:08:11.696-07:00New demo for the GPU tracerThe image rendered with tone mapping satisfied me a lot, so I made a new demo for the tracer. I turned off the AntTweakBar and the FPS counter for less distraction.<br />
<br />
Here's the new demo:
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/fvgzE9KJV5w?feature=player_embedded' frameborder='0'></iframe></div>
Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-22365509159144923022012-09-10T11:57:00.000-07:002012-09-10T11:57:43.737-07:00Tone mappingI made a slight change to the color transfer in my GPU path tracer.<div>
<br /></div>
<div>
Previously I was using gamma correction with gamma equal to 2.2; now I'm using the tone mapping operator proposed by Paul Debevec. </div>
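In code, the difference between the two transfer curves is small. This is a sketch with assumed constants; I use the simple global operator c/(1+c) as a stand-in for the exact operator, since the point is only that bright values saturate smoothly instead of clipping:

```python
def gamma_correct(c, gamma=2.2):
    """Map linear radiance to display space with a pure power curve;
    values above 1 still blow out to white."""
    return c ** (1.0 / gamma)

def tone_map(c, gamma=2.2):
    """Global tone mapping: compress radiance into [0, 1) first, then
    gamma-encode, so arbitrarily bright radiance never clips."""
    return (c / (1.0 + c)) ** (1.0 / gamma)
```

This compression is also what allows the much hotter light in the second comparison below (radiance 75 instead of 16) without washing out the image.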
<div>
<br /></div>
<div>
The difference is shown below:</div>
<div>
<br /></div>
<div>
In the 1st comparison group I turned off depth of field, in order to focus on the color difference. The render time is only 150s, not sufficient for full convergence but good enough for a color comparison.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMyOrR-5poNFbQfmqCAqZ1fk2cfdjRCtMzg8ziwGoOd8de-gWrbVyTRCd38UdPiQXmuet0RHcQIoeY_EZ2KTiLY3UNQouL3VXp9KVzIRK7J1dMbhOMf6goEdof4ttXlb8l9pUM9lNqwJ3j/s1600/NoDOF150s_tone.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMyOrR-5poNFbQfmqCAqZ1fk2cfdjRCtMzg8ziwGoOd8de-gWrbVyTRCd38UdPiQXmuet0RHcQIoeY_EZ2KTiLY3UNQouL3VXp9KVzIRK7J1dMbhOMf6goEdof4ttXlb8l9pUM9lNqwJ3j/s320/NoDOF150s_tone.jpg" width="320" /></a></div>
<div style="text-align: center;">
Image rendered with tone mapping operator</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNmRqWcF06P4LOV0YJdooPP0V1anNIxCAbgWzHC0esMErW9J1qWDw5IJIGPeZbAceUzQMfP1ofwqeovX2gqtaCqq4TbFnuqkazJcT58X25b7D8jjzqmeFw_PIF_wFEnrSwDIBVek4-CJN8/s1600/NoDof150s.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNmRqWcF06P4LOV0YJdooPP0V1anNIxCAbgWzHC0esMErW9J1qWDw5IJIGPeZbAceUzQMfP1ofwqeovX2gqtaCqq4TbFnuqkazJcT58X25b7D8jjzqmeFw_PIF_wFEnrSwDIBVek4-CJN8/s320/NoDof150s.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Image rendered with Gamma correction</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
In the 2nd comparison group I kept all the features on and let the image converge fully over 600s.</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXZQDUabAdc7XOB0L2ZxdH28g7xxBwuyv3t1itSRjiTFpbMHmm5fu-7ZXoM9gWryAplWcK9iEnkQMTVQqDKNxfTnHNnbKoSRLOQ9L2sDUgGJ8vhBaW8lXfzt9VzhX9ESMlZ-UuMF4sALiC/s1600/600s_tone.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="249" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXZQDUabAdc7XOB0L2ZxdH28g7xxBwuyv3t1itSRjiTFpbMHmm5fu-7ZXoM9gWryAplWcK9iEnkQMTVQqDKNxfTnHNnbKoSRLOQ9L2sDUgGJ8vhBaW8lXfzt9VzhX9ESMlZ-UuMF4sALiC/s320/600s_tone.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
Image rendered with tone mapping operator
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8Likvu5Qo8ZrnGGqemr9E2iZAff2wc_mZLqKe6cEQulhiXJZAy6yK4CryY51-L4YrgTquyyol6vrNN1FLt4UDf-bFX6XOxPAgFoQmu6VM1tLCMDiqqc6M4KyoKxaeX0Gv6a9pzydKsGpE/s1600/600s.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8Likvu5Qo8ZrnGGqemr9E2iZAff2wc_mZLqKe6cEQulhiXJZAy6yK4CryY51-L4YrgTquyyol6vrNN1FLt4UDf-bFX6XOxPAgFoQmu6VM1tLCMDiqqc6M4KyoKxaeX0Gv6a9pzydKsGpE/s320/600s.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
Image rendered with Gamma correction
</div>
<div>
<br /></div>
<div>
For the gamma correction group I'm using a radiance of 16 for the light, but for the tone mapping group I'm using 75.</div>
<div>
<br /></div>
<div>
Personally, I prefer tone mapping. It's not as shiny and looks way better!</div>
Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-45873973664881616272012-09-10T11:20:00.000-07:002012-09-10T11:22:50.580-07:00New tracerSince I got stuck implementing the GPU KD-tree paper, I decided to turn to something else.<br />
<br />
I've been reading about HDR and found it pretty interesting, and I'm not satisfied with my previous two tracers, so I want to rewrite my tracer.<br />
<br />
The plan goes like this:<br />
<br />
Step 1: set up all the OpenGL scaffolding. Render everything to a frame buffer and display the frame buffer each frame. Setting up the basic scene is necessary for this step, including the camera and the simplest possible ray trace function (it could just return a color based on the pixel coordinates).<br />
<br />
Step 2: enrich the scene. Set up the object and material classes and implement the basic ray tracing algorithm, which is easy. The material class should be compatible with HDR, because that's what I'm trying to focus on in this project. The result of this step is a ray-traced image of a simple scene (perhaps a Cornell box with a single sphere).<br />
<br />
Step 3: build a CPU kd-tree for complicated objects like the Stanford bunny or armadillo, and replace ray tracing with Monte Carlo path tracing. My material class will have to be restructured into better modules that use BRDFs. The result of this step should be a nicely rendered image that can serve as a reference image, though the process will be really slow.<br />
<br />
Step 4: try implementing the GPU kd-tree paper again and move everything onto the GPU, or try implementing photon mapping in the tracer. I haven't thought that far ahead yet.<br />
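The "most simple ray trace function" of step 1 grows into step 2's real tracer through intersection tests. A sketch of the classic ray-sphere test in Python (the actual tracer is presumably C++/OpenGL; all names here are illustrative):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None if the ray
    (assumed to have a normalized direction) misses the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic's a == 1 for a unit direction
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# A ray down the z-axis hits a unit sphere centered 5 units away at t = 4.
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Returning `t` rather than a hit point keeps the function reusable for both the step 2 tracer and the step 3 path tracer.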
<br />
Hopefully this project will keep me busy for a while. I'll keep posting progress updates.Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-23003716021970177412012-08-03T23:59:00.000-07:002012-10-21T15:17:57.551-07:00Cube sampleThe big news today is that I finished the cube sampling part.<br />
<br />
Usually a cube consists of six surfaces that meet at sharp discontinuities, which makes constructing a level set field hard. I've actually run several experiments to create a level set from discrete meshes such as triangle meshes, but I failed every time because I didn't know how to deal with the discontinuity.<br />
<br />
This time I'm using a trick to avoid the discontinuity: by using a spherical surface at each corner and a cylindrical surface along each edge, I can create correct level set data for a "soft" cube.<br />
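The corner-sphere/edge-cylinder trick is equivalent to the signed distance of a rounded box: shrink the cube by the rounding radius, take the distance to the sharp box, and subtract the radius back. A small Python sketch of that idea (function and parameter names are mine, not from the post):

```python
import math

def soft_cube_sdf(p, half_extent, r):
    """Signed distance to a cube of the given half-extent whose edges and
    corners are rounded with radius r (spheres at corners, cylinders on
    edges), so the field stays smooth everywhere."""
    # Per-axis distance from p to the shrunken, sharp box.
    q = [abs(x) - (half_extent - r) for x in p]
    outside = math.sqrt(sum(max(v, 0.0) ** 2 for v in q))
    inside = min(max(q), 0.0)
    return outside + inside - r

# Zero on a face center, negative inside, positive outside.
print(round(soft_cube_sdf((1.0, 0.0, 0.0), 1.0, 0.1), 6))  # 0.0
```

Because this is a single closed-form function, it can be evaluated directly at every grid point to fill the level set, with no mesh discontinuities to patch over.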
<br />
This is important because in most cases the scene is just a box, and I need this cube sample to construct all the solid particles.<br />
<br />
Now everything new is pretty much finished; the only thing left is to assemble all the parts together. SIGGRAPH is approaching, though, so I might not have enough time to finish this project. Anyway, I'll try my best.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvVtQU3v_UwqRO4ye4mhDjwqjgfqHBsrNEVTefqm7d299mMjWxstKiSBkejz5atCKxcEp2IlMOgxis6_JLVjuECav6J2fHAa5fVyd5eu-nyDhgURSOfDtsZ19hvqugcd9DgHL6enPKGCt2/s1600/Cube.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvVtQU3v_UwqRO4ye4mhDjwqjgfqHBsrNEVTefqm7d299mMjWxstKiSBkejz5atCKxcEp2IlMOgxis6_JLVjuECav6J2fHAa5fVyd5eu-nyDhgURSOfDtsZ19hvqugcd9DgHL6enPKGCt2/s400/Cube.jpg" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: left;">
Here's an image of the cube sample. My test scene will be a large spherical water drop falling in a cube, or maybe a water cube falling in a sphere. I'm so excited it's almost done!</div>
<br />Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-90598679710531010452012-08-03T00:44:00.000-07:002012-10-21T15:17:57.559-07:00Solid particle sample finishedA good news is I totally understood the process of sampling today. <div>
<br /></div>
<div>
I've read this paper around ten times to finally figure out the main sampling process. I really wish the authors had explained the basic process more, instead of focusing only on their contribution.</div>
<div>
<br /></div>
<div>
Solid particles are sampled only once, at the beginning of the simulation. After that the solid particles do not move, or move only with the solid object. The paper mentions a velocity for solid particles: this is their ghost velocity, used only when calculating the viscosity force for fluid particles near the solid surface. During integration there is no real velocity for solid particles (as long as the solid object does not move).</div>
<div>
<br /></div>
<div>
Because the relaxation process is really slow for fluid volume samples, and because all the fluid and solid sampling only has to be done once per simulation, I decided to write a file exporter/importer. I can run the simulator once with a relatively high relaxation iteration count, which gives a better distribution of all the particles, and export the particles to a file. This has to be done only once; later, when I'm testing other parts, I can import the initial particle state from the generated file instead of sampling again.</div>
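Such a cache can be as simple as a packed binary file. The format below (a particle count followed by xyz doubles) is purely hypothetical, not the one actually used, but it shows the shape of the exporter/importer:

```python
import struct

def export_particles(path, particles):
    """Write particle positions as a count followed by packed xyz doubles
    (hypothetical format: little-endian uint32 count, then 3 doubles each)."""
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(particles)))
        for x, y, z in particles:
            f.write(struct.pack("<3d", x, y, z))

def import_particles(path):
    """Read the initial particle state back, skipping the slow relaxation."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<I", f.read(4))
        return [struct.unpack("<3d", f.read(24)) for _ in range(n)]
```

A real cache would also store per-particle type (fluid/solid/air) and the sampling parameters, so a stale file isn't silently reused with mismatched settings.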
<div>
<br /></div>
<div>
Now the biggest problem for me is projection. During sampling, a random sample is supposed to be projected onto the surface, which is described by a level set. Yet when I tried to use the level set grid to do the projection, I ended up with a poor result. I think that's a fault in my implementation. Hopefully I'll fix it tomorrow, because I don't want to be limited to implicit surfaces only.</div>
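For reference, the projection itself is just a step along the normalized level set gradient by the signed distance, p' = p - phi(p) * grad(phi)/|grad(phi)|, iterated a few times. A sketch with an analytic sphere standing in for the level set grid (the grid interpolation that misbehaved isn't shown in the post, so it is omitted here):

```python
import math

def sphere_phi(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Analytic signed distance to a sphere, standing in for the grid."""
    return math.dist(p, center) - radius

def sphere_grad(p):
    """Unit gradient of the sphere's distance field (radial direction)."""
    n = math.sqrt(sum(c * c for c in p)) or 1.0
    return tuple(c / n for c in p)

def project_to_surface(p, phi, grad, iterations=3):
    """Newton-style projection: step along the gradient by -phi(p)."""
    for _ in range(iterations):
        g = grad(p)
        norm = math.sqrt(sum(c * c for c in g)) or 1.0
        d = phi(p)
        p = tuple(x - d * c / norm for x, c in zip(p, g))
    return p

q = project_to_surface((0.3, 0.4, 0.0), sphere_phi, sphere_grad)
print(round(sphere_phi(q), 6))  # 0.0 -- the projected point lies on the surface
```

With a trilinearly interpolated grid the gradient comes from finite differences, and a non-unit gradient magnitude near coarse cells is a common source of exactly the kind of poor projection described above.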
<div>
<br /></div>
<div>
I'll post a picture of all the kinds of particles once I've fixed the existing problems.</div>Anonymoushttp://www.blogger.com/profile/03534347156739163614noreply@blogger.com0tag:blogger.com,1999:blog-6395475448336495150.post-14602451502824798662012-08-02T01:39:00.000-07:002012-10-21T15:17:57.549-07:00Ghost SPH Sample FinishedThe sampling part of ghost SPH is finished.<br />
<br />
The sampling process takes a few steps, as follows. The input is a level set field that describes the shape of the fluid.<br />
<br />
1. Sample the surface, namely those voxels where the level set changes sign.<br />
<br />
2. Relax the surface particles. This yields a better distribution of surface particles (blue noise).<br />
<br />
3. Sample the interior of the fluid, using the surface particles as initial seeds.<br />
<br />
4. Apply volume relaxation. This is an extremely slow process: it takes multiple iterations, and within each iteration a neighbor search has to be applied to every particle.<br />
<br />
5. Sample air particles. This is similar to the volume sampling: using the surface particles as seeds, sample air particles outside the surface within a single smoothing-kernel layer, with the help of the level set.<br />
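Step 1 above, finding the cells where the level set changes sign, can be sketched in a couple of lines. A 1D slice stands in for the 3D voxel grid here (in 3D the same sign test is applied along each axis of every cell):

```python
def surface_cells(phi):
    """Indices i where the level set changes sign between cell i and i+1,
    i.e. where the phi = 0 isosurface passes through."""
    return [i for i in range(len(phi) - 1) if phi[i] * phi[i + 1] < 0.0]

# A "sphere" of radius 1.5 sampled along a line through its center:
phi = [abs(x) - 1.5 for x in range(-4, 5)]
print(surface_cells(phi))  # [2, 5] -- the two crossings of the surface
```

The seed particle for each such cell is then projected onto the exact phi = 0 surface before relaxation begins.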
<br />
The result so far looks like this:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijJev9TB7rzCX1J5WAxEERnV6KbbOKZGqocUAkq02o7JaOkDN4eDAMwsddsxXquXZ3SMjVrPijFs0J5Dv_fX8R4WaS1Tj6kjIKW3NWDDn6iDN9LI0SXQLfdu30DncAgjq5RmzyshoHdlc/s1600/ghost.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijJev9TB7rzCX1J5WAxEERnV6KbbOKZGqocUAkq02o7JaOkDN4eDAMwsddsxXquXZ3SMjVrPijFs0J5Dv_fX8R4WaS1Tj6kjIKW3NWDDn6iDN9LI0SXQLfdu30DncAgjq5RmzyshoHdlc/s400/ghost.jpg" width="400" /></a></div>
<br />
The green particles are ghost particles, and the blue ones are fluid particles.<br />
<br />
The next step would be transplanting all the calculation parts in my previous SPH simulator into this new one.<br />
<br />
My understanding of the paper might not be 100 percent correct; I still have a few points of confusion about it. I've recently contacted the authors, and hopefully they can resolve my questions.<br />
<br />
I'll keep this updated.Xing Duhttp://www.blogger.com/profile/04578285011641083021noreply@blogger.com3tag:blogger.com,1999:blog-6395475448336495150.post-56168454317317455232012-07-31T16:09:00.004-07:002012-10-21T15:17:57.554-07:00Preliminary result about Ghost SPHI started rewriting my SPH simulator a few days ago; the desired new feature is ghost SPH, based on Schechter and Bridson's new paper, Ghost SPH for Animating Water:
<a href="http://www.cs.ubc.ca/~rbridson/docs/schechter-siggraph2012-ghostsph.pdf">http://www.cs.ubc.ca/~rbridson/docs/schechter-siggraph2012-ghostsph.pdf</a>
<br />
<br />
A main problem in SPH lies in the density estimation (gathering).<br />
<br />
In the real world, water is almost incompressible, which means that at each sample point in the water the density should be nearly the same. In simulation, however, the density of each particle may vary over a wide range. This leads to conspicuous artifacts like y-stacking.<br />
<br />
Though generally speaking the result still looks like water, the details don't satisfy me when I pay close attention. After all, it's the details that matter in standing out.<br />
<br />
To eliminate the problem, there are two important things to do: correct the density estimation and re-model the pressure calculation.<br />
<br />
1. Correct the density estimation. This is the core idea of the Ghost SPH paper: by adding another layer of ghost particles, we can eliminate the density deficiency for particles near the surface. The paper also discusses how to initialize the particles, which is rarely covered in other SPH papers and is exactly what I need. The initialization technique is Poisson disk sampling, which arranges all the particles with a blue-noise distribution. It is based on another of Bridson's papers: Fast Poisson Disk Sampling in Arbitrary Dimensions<br />
<a href="http://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph07-poissondisk.pdf">http://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph07-poissondisk.pdf</a><br />
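A compact sketch of Bridson's dart-throwing sampler in 2D (the paper generalizes this to arbitrary dimensions; parameter names and the fixed seed here are mine):

```python
import math, random

def poisson_disk(width, height, r, k=30, seed=0):
    """Bridson's Poisson disk sampler: accepted points are pairwise at
    least r apart, giving a blue-noise distribution. Uses a background
    grid with cell size r/sqrt(2) so each cell holds at most one point."""
    rng = random.Random(seed)
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]

    def fits(p):
        ci, cj = int(p[0] / cell), int(p[1] / cell)
        for j in range(max(cj - 2, 0), min(cj + 3, rows)):
            for i in range(max(ci - 2, 0), min(ci + 3, cols)):
                q = grid[j][i]
                if q is not None and math.dist(p, q) < r:
                    return False
        return True

    first = (rng.uniform(0, width), rng.uniform(0, height))
    samples, active = [first], [first]
    grid[int(first[1] / cell)][int(first[0] / cell)] = first
    while active:
        base = active[rng.randrange(len(active))]
        for _ in range(k):  # try k candidates in the annulus [r, 2r]
            a = rng.uniform(0, 2 * math.pi)
            d = rng.uniform(r, 2 * r)
            p = (base[0] + d * math.cos(a), base[1] + d * math.sin(a))
            if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                samples.append(p)
                active.append(p)
                grid[int(p[1] / cell)][int(p[0] / cell)] = p
                break
        else:  # no candidate fit: retire this point
            active.remove(base)
    return samples
```

In Ghost SPH the same dart throwing runs over the level set's surface and interior rather than a rectangle, which is why the surface particles can serve as the seeds for the volume pass.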
<br />
2. Re-model the pressure calculation, using the Tait equation from the WCSPH paper (whose performance could be improved by implementing PCISPH). The Tait equation generates a pressure proportional to (rho/rho0)^7, so correct density estimation is a prerequisite; otherwise the pressure force becomes too strong and drives the simulator into an unstable state.<br />
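A tiny sketch of the Tait equation showing why the density estimate has to be right; the rest density rho0 and artificial speed of sound c0 below are typical assumed values, not ones from the post:

```python
def tait_pressure(rho, rho0=1000.0, c0=20.0, gamma=7.0):
    """Tait equation of state as used in WCSPH:
    p = B * ((rho/rho0)^gamma - 1), with stiffness B = rho0 * c0^2 / gamma."""
    B = rho0 * c0 * c0 / gamma
    return B * ((rho / rho0) ** gamma - 1.0)

print(tait_pressure(1000.0))  # 0.0 at rest density
print(tait_pressure(1010.0))  # a mere 1% density overestimate already gives ~4100
```

Because pressure rises with the 7th power of the density ratio, a small systematic overestimate near the surface (exactly what the ghost particles fix) gets amplified into a large spurious force.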
<br />
Now I've pretty much finished the sampling part. The particle sampling consists of four parts: surface sampling, surface relaxation, volume sampling, and volume relaxation.<br />
<br />
Here are two images of the result of applying the particle sampling to a simple spherical level set grid.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgY6gVvG0VAYsqhONuEYlEVF7n96hf3IavoLLjtYHRLnhhKdO10FAYeJYnmhHlzl7bY2Q8ZHWoG3ia-DjdaKxzkMFbVCkhSGcBcsvDeHzyo7zP-LigNL7eN5bmPvGZBL5n1mLumDsrzhtI/s1600/sample.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgY6gVvG0VAYsqhONuEYlEVF7n96hf3IavoLLjtYHRLnhhKdO10FAYeJYnmhHlzl7bY2Q8ZHWoG3ia-DjdaKxzkMFbVCkhSGcBcsvDeHzyo7zP-LigNL7eN5bmPvGZBL5n1mLumDsrzhtI/s400/sample.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1VT5JaMBHXvouybuygokSd0E7DBShX8ngW-Mt6d5fysJODyAzN-ovOLitZnR9IiBsTUtvauXfdIx-nfk3mmxcfYbOfh8vMmqWNqTc5M-II9iksuVXLfgW9etou5mIHJlb6UT_x8lG04k/s1600/sample1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1VT5JaMBHXvouybuygokSd0E7DBShX8ngW-Mt6d5fysJODyAzN-ovOLitZnR9IiBsTUtvauXfdIx-nfk3mmxcfYbOfh8vMmqWNqTc5M-II9iksuVXLfgW9etou5mIHJlb6UT_x8lG04k/s400/sample1.jpg" width="400" /></a></div>
<br />
<br />
<br />Xing Duhttp://www.blogger.com/profile/04578285011641083021noreply@blogger.com2