Real-time fluid rendering and liquid surface reconstruction is a problem that has yet to be fully solved across the range of fluid dynamics applications. Even offline renderers that reconstruct surfaces from particle-based fluids, for example by using marching cubes to extract a mesh from a grid-sampled density function, do not produce perfectly accurate results. In addition, the more accurate surface extraction methods require substantial memory and execution time, which makes them poorly suited to real-time computation.
For my CIS565 final project I plan on implementing an NVIDIA-developed approach for rendering the surface of a particle-based fluid (in this case, SPH). According to NVIDIA, this method is simple to implement and can achieve real-time performance with over 10,000 particles. It does not use polygonization or voxelization (so it avoids the grid artifacts inherent in marching-cubes renders), and it exposes parameters that let the user control both surface detail and smoothing. The core idea is to render the SPH particles themselves, transformed into perspective space, instead of extracting a polygonal mesh. At the highest level: starting from the fluid particle positions, we extract surface depth, thickness, and position buffers; we then smooth the surface depth with a Gaussian filter, generate a dynamic (adjustable) noise texture on the fluid surface, and finally run a compositing pass that combines the smoothed depth, the noise texture, and the background scene into the final rendering of the fluid. In addition to this method I want a basis for comparison, so I may integrate a real-time marching-cubes library (or write one myself in CUDA) along with a point-splatting shader for each particle, to show some alternative fluid rendering methods and perhaps demonstrate why NVIDIA's approach is superior for large-scale simulations with many particles.
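To make the depth-smoothing step concrete, here is a minimal CPU sketch of a Gaussian blur over a screen-space depth buffer. Background pixels are marked with a sentinel depth and skipped so the fluid silhouette does not bleed into the scene behind it. The function name, the sentinel convention, and all parameters are illustrative assumptions, not taken from the NVIDIA paper; a real implementation would run this as a GPU shader or CUDA kernel, likely as a separable (or curvature-flow) filter.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Sentinel marking "no fluid rendered at this pixel" (assumed convention).
constexpr float kBackground = 1e9f;

// Smooth a width x height depth buffer with a small 2-D Gaussian kernel,
// ignoring background pixels both as centers and as neighbors.
std::vector<float> gaussianSmoothDepth(const std::vector<float>& depth,
                                       int width, int height,
                                       int radius, float sigma) {
    std::vector<float> out(depth);
    // Precompute 1-D Gaussian weights; the 2-D weight is the product.
    std::vector<float> w(radius + 1);
    for (int i = 0; i <= radius; ++i)
        w[i] = std::exp(-(i * i) / (2.0f * sigma * sigma));

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (depth[y * width + x] >= kBackground) continue; // keep background
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height)
                        continue;                  // clamp at screen edges
                    float d = depth[ny * width + nx];
                    if (d >= kBackground) continue; // ignore non-fluid neighbors
                    float weight = w[std::abs(dx)] * w[std::abs(dy)];
                    sum += weight * d;
                    wsum += weight;
                }
            }
            out[y * width + x] = sum / wsum;
        }
    }
    return out;
}
```

Skipping background neighbors (rather than blurring across them) is what keeps the smoothed surface from pulling toward the far plane at the fluid's edge; the paper's curvature-flow variant addresses the same silhouette problem more rigorously.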
The paper I plan on implementing is this one:
With references to: