The increasing number of cores integrated into a single processor demands algorithms that can run in parallel. A parallelized version of the sparse field LS method for shared memory machines has already been presented in [11]. There, a full grid is used, which is distributed over multiple threads by partitioning the grid into slabs. If the H-RLE data structure is used, parallelization is more complicated. Run-length encoding is not well suited for parallelization, since the data structure must be set up serially. The following section describes a technique which distributes the information over multiple H-RLE data structures to enable parallel processing.
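The core idea, giving each thread its own serially built run-length encoded structure for one slab of the grid and combining the results afterwards, can be illustrated with a minimal sketch. This is not the thesis's H-RLE implementation; it uses a simplified one-dimensional run-length encoding, and all names (`Run`, `encodeSlab`, `parallelEncode`) are illustrative assumptions:

```cpp
// Minimal sketch (not the actual H-RLE code): each thread run-length
// encodes a private slab of the input serially, so the serial bottleneck
// of a single encoder is avoided; the per-thread structures are merged
// afterwards, fusing runs that continue across slab boundaries.
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <utility>
#include <vector>

// A run: (value, length). Illustrative, not the thesis's data layout.
using Run = std::pair<int, std::size_t>;

// Serial run-length encoding of the half-open index range [begin, end).
static std::vector<Run> encodeSlab(const std::vector<int>& data,
                                   std::size_t begin, std::size_t end) {
    std::vector<Run> runs;
    for (std::size_t i = begin; i < end; ++i) {
        if (!runs.empty() && runs.back().first == data[i])
            ++runs.back().second;       // extend the current run
        else
            runs.push_back({data[i], 1});  // start a new run
    }
    return runs;
}

// Build one encoded structure per thread, then concatenate them,
// merging runs that span adjacent slab boundaries.
std::vector<Run> parallelEncode(const std::vector<int>& data,
                                unsigned numThreads) {
    std::vector<std::vector<Run>> partial(numThreads);
    std::vector<std::thread> threads;
    const std::size_t slab = (data.size() + numThreads - 1) / numThreads;
    for (unsigned t = 0; t < numThreads; ++t) {
        const std::size_t b = std::min(data.size(), t * slab);
        const std::size_t e = std::min(data.size(), b + slab);
        threads.emplace_back(
            [&partial, &data, t, b, e] { partial[t] = encodeSlab(data, b, e); });
    }
    for (auto& th : threads) th.join();

    std::vector<Run> merged;
    for (const auto& p : partial)
        for (const auto& r : p)
            if (!merged.empty() && merged.back().first == r.first)
                merged.back().second += r.second;  // fuse boundary runs
            else
                merged.push_back(r);
    return merged;
}
```

The merge step mirrors the requirement that the combined structure must be indistinguishable from one built serially; only the construction is distributed.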
Otmar Ertl: Numerical Methods for Topography Simulation